
[ENHANCEMENT] Proposal to allow detection from different perspectives #60

Open
TimWalter opened this issue Jul 23, 2022 · 18 comments

@TimWalter

Hey,

I've been tinkering with the problem of detecting the colour checker when the shot was taken at an angle, so that the approximately rectangular checker appears distorted by perspective.

I am unsure whether the detection of such a contour is currently possible with this package, but I would like to contribute to enable such functionality.

As the masking of the checker should stay fixed, the idea is to warp the detected contour to fit the mask.
When the contour is warped, even the size of the mask can stay fixed.
This can be done as follows:

import cv2
import numpy as np

# The dimensions are somewhat arbitrary but should fit the aspect ratio
# of the checker.
def correct_distortion(distorted_cnt, x_dim, y_dim, img):
    desired_cnt = [[0, 0], [x_dim, 0], [x_dim, y_dim], [0, y_dim]]
    M = cv2.getPerspectiveTransform(np.float32(distorted_cnt), np.float32(desired_cnt))
    return cv2.warpPerspective(img, M, (x_dim, y_dim))

As the checker orientation matters, it is necessary to ensure that the points of the detected contour are ordered in the same way as the new coordinates, e.g.:

(image: corner ordering diagram)

My attempt to ensure such an ordering consists of the following functions:

import math

import numpy as np

# contour_centroid is assumed to come from the detection package.
def order_clockwise(cnt):
    centroid = contour_centroid(cnt)
    angles = [
        math.degrees(math.atan2(c[0][1] - centroid[0][1], c[0][0] - centroid[0][0]))
        for c in cnt
    ]
    clockwise_contours = [c for c, a in sorted(zip(cnt, angles), key=lambda pair: pair[1])]

    return clockwise_contours

def order_cnt(cnt):
    clockwise_contour = order_clockwise(cnt)

    lowest_point_index = np.argmax([pt[0][1] for pt in clockwise_contour])
    next_index = (lowest_point_index + 1) % len(clockwise_contour)
    previous_index = lowest_point_index - 1

    lowest_point = clockwise_contour[lowest_point_index]
    next_point = clockwise_contour[next_index]
    previous_point = clockwise_contour[previous_index]

    next_diff_x = np.abs(lowest_point[0][0] - next_point[0][0])
    next_diff_y = np.abs(lowest_point[0][1] - next_point[0][1])
    previous_diff_x = np.abs(lowest_point[0][0] - previous_point[0][0])
    previous_diff_y = np.abs(lowest_point[0][1] - previous_point[0][1])

    anti_clockwise_angle = math.degrees(math.atan2(next_diff_y, next_diff_x))
    clockwise_angle = math.degrees(math.atan2(previous_diff_y, previous_diff_x))

    # "Falling" to the right: less rotation is needed to align when
    # rotating clockwise.
    if anti_clockwise_angle > clockwise_angle:
        first_index = next_index
    # "Falling" to the left (or the lower line is already aligned): less
    # rotation is needed to align when rotating anti-clockwise.
    else:
        first_index = (next_index + 1) % len(clockwise_contour)
        if previous_diff_y == 0:
            first_index -= 1

    return clockwise_contour[first_index:] + clockwise_contour[:first_index]
    

This works under the assumption that the checker is not flipped but was recorded with the orientation the aspect ratio suggests. The ordering must be performed before the warping.
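For reference, the same corner ordering can also be obtained with the common sum/difference trick (a sketch of a well-known alternative, not part of the proposal's code): the top-left corner has the smallest x + y, the bottom-right the largest, and the difference y - x separates the top-right from the bottom-left corner.

```python
import numpy as np

def order_corners(pts):
    """Order four corner points as top-left, top-right, bottom-right, bottom-left."""
    pts = np.asarray(pts, dtype=float).reshape(4, 2)
    s = pts.sum(axis=1)             # x + y
    d = np.diff(pts, axis=1).ravel()  # y - x
    ordered = np.zeros((4, 2))
    ordered[0] = pts[np.argmin(s)]  # top-left: smallest x + y
    ordered[2] = pts[np.argmax(s)]  # bottom-right: largest x + y
    ordered[1] = pts[np.argmin(d)]  # top-right: smallest y - x
    ordered[3] = pts[np.argmax(d)]  # bottom-left: largest y - x
    return ordered
```

This variant needs no centroid or angle computation, but like the functions above it assumes a roughly axis-aligned, unflipped checker.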

I've attached a notebook with a demonstration of the described pipeline:
Demo.zip

@KelSolaar
Member

Hi @TimWalter,

I am unsure if the detection of such a contour is currently possible with this package, but I would like to contribute in order to enable such a functionality.

Not right now. I mean, you could have a bit of perspective, but the expectation is to shoot the chart normal to the optical axis as much as possible, and possibly run distortion and vignette correction before detection.

I don't have too many cycles to look into that right now, but any improvement is appreciated! I would like to roll in the proposal from #53 too!

Cheers,

Thomas

@TimWalter
Author

TimWalter commented Apr 5, 2023

Hey,
it's been a while, but lately I've been back at the problem. Over the last month, I have been working on revising the checker detection strategy to improve detection for multiple checkers and different perspectives. I changed the strategy quite a bit from the initial idea described earlier in this discussion, which in hindsight was based on too many heuristics. The new approach is theoretically more sound and, so far in my testing, works much more reliably. It essentially encodes the swatch centroids and tries to map the resulting graph to that of a reference checker (which is similar to the mask generated in the current approach).

I believe that my revised strategy is far superior to the current implementation in terms of detection robustness, although it is slower (approximately 5x). Nonetheless, being able to detect multiple checkers in one image, and detecting them even if they are not strictly facing the camera or if not all swatches are detected, is vital for certain applications.

I have done significant integration work on my local branch, including integrating it into segmentation.py, documenting it, and adapting the unit tests and the demo. The demo is a Jupyter notebook that I have attached to this comment for discussion. However, you may need to adapt the image and template paths to use it.

The benefits of my revised strategy are significant, as it improves the robustness of checker detection. I would love to hear your feedback on the code and any further testing that you might be able to do. My hope is that we can integrate this into the library.

Demo.zip

Cheers, Tim

@KelSolaar
Member

Hi @TimWalter,

Thanks a lot! It is certainly on my radar to implement your updates; I will try to look at that in the coming weeks. Something I would like to do, maybe, is use it as a fail-over:

  1. We try to detect with the opinionated current approach, which is faster.
  2. Checkers are detected: good, we move on.
  3. Nothing is detected: we try again with your approach.
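The fail-over could be sketched like this (the function and parameter names are placeholders, not the library's actual API):

```python
def detect_colour_checkers(image, fast_detector, robust_detector):
    """Try the fast segmentation first; fall back to the slower,
    more robust approach only when nothing is found."""
    checkers = fast_detector(image)
    if checkers:
        return checkers
    return robust_detector(image)
```

The common case stays fast, and the ~5x slower path is only paid for images the current approach cannot handle.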

Cheers,

Thomas

@TimWalter
Author

Sounds reasonable for robustness, for sure!

But how would you handle the multiple-checkers-in-one-image approach?

In any case, let me know if I can help.

@KelSolaar
Member

But how would you handle the multiple checkers in one image approach?

The current code should be able to detect many already.

@KelSolaar
Member

Or do you mean if one is detected but not the other?

@TimWalter
Author

For multiple checkers in one image, do you have to call the function multiple times with the same image? Or how would you detect multiple checkers with a similar aspect ratio and discriminate between them to apply the correct mask?

As an example, we specifically had the problem that some checkerboards (e.g. the calibrite passport2) have 2 boards with basically the same aspect ratio, so discriminating between them with the current code is hard if not all swatches are detected (as swatch count is the only other metric left).

@KelSolaar
Member

The code should already be doing swatch analysis to isolate them; we thus reject the white balance section of the passport.

@KelSolaar
Member

Hey @TimWalter,

I haven't forgotten about that. I've been busy, very busy, on many different projects, so this got pushed to the bottom of the stack. I looked at your notebook though; it is great but will take some effort to get added. I also saw that there is a dependency on sklearn.DBSCAN; nothing dramatic, but I thought I would point it out!

Cheers,

Thomas

@TimWalter
Author

Yes, that is true. I just needed some clustering algorithm to remove outliers, and DBSCAN seemed to work most robustly.
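For context, the outlier removal can be sketched roughly like this (a minimal sketch, not the notebook's code; the eps and min_samples values are illustrative):

```python
import numpy as np
from sklearn.cluster import DBSCAN

def remove_outlier_centroids(centroids, eps=30.0, min_samples=4):
    """Drop centroids that DBSCAN labels as noise (label -1)."""
    centroids = np.asarray(centroids, dtype=float)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(centroids)
    return centroids[labels != -1]
```

Unlike k-means, DBSCAN needs no fixed cluster count and marks isolated points as noise directly, which is exactly the behaviour wanted for stray contour centroids.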

@mexicantexan

Not sure if this is helpful, but here is a script I wrote a while back for a project. It is not production-ready by any means, but it might be helpful as a reference. It is a method for finding Macbeth cards built on top of @KelSolaar's awesome library. We found it to work really well for our use case, allowing us to find rotated colour cards (even upside down/flipped). Hopefully it helps!

macBeth_color_finder.zip

@KelSolaar
Member

Hey @TimWalter,

I'm working on this and I have a few notes / questions:

  • The is_convex_quadrilateral definition is missing (that said, it is not critical).
  • What value do you use for settings.greedy_heuristic?
  • I'm not quite sure how you generated the templates; however, I'm noting that the correspondence tables are massive, with many tens of thousands of elements. This is only for a ColorChecker Classic; I cannot imagine how many there would be for a ColorChecker SG or DC, and the performance hit.

@KelSolaar
Member

Just deleted a bunch of comments, I was not thinking clearly, long week... :)

@KelSolaar
Member

Looking at the are_three_collinear definition, should not the comparison in the generate_template definition be as follows: if np.all(angle_difference > 0) and not are_three_collinear_2(points), i.e. we add a correspondence only if it has no collinear points?

Note that this is a bit faster:

from itertools import combinations

def are_three_collinear_2(points):
    non_collinear = True
    for p in combinations(points, 3):
        # Twice the signed area of the triangle (p0, p1, p2);
        # zero means the three points are collinear.
        non_collinear *= (
            p[0][0] * (p[1][1] - p[2][1])
            + p[1][0] * (p[2][1] - p[0][1])
            + p[2][0] * (p[0][1] - p[1][1])
            != 0
        )
    return not bool(non_collinear)

@KelSolaar
Member

@TimWalter: I noted an issue in the definition that filters the stacked contours: it relies on the contours being ordered in a particular way by OpenCV. The following works irrespective of the ordering in my rapid tests:

def remove_stacked_contours(
    contours: ArrayLike, keep_smallest: bool = True
) -> Tuple[NDArrayInt]:
    """
    Remove and filter out the stacked contours from given contours, keeping
    either the smallest or the largest ones.

    Parameters
    ----------
    contours
        Stacked contours to filter.
    keep_smallest
        Whether to keep the smallest contours.

    Returns
    -------
    :class:`tuple`
        Filtered contours.

    References
    ----------
    :cite:`Walter2022`

    Examples
    --------
    >>> contours = np.array([
    ...     [[0, 0], [7, 0], [7, 7], [0, 7]],
    ...     [[0, 0], [8, 0], [8, 8], [0, 8]],
    ...     [[0, 0], [10, 0], [10, 10], [0, 10]],
    ... ])
    >>> remove_stacked_contours(contours)  # doctest: +ELLIPSIS
    (array([[0, 0],
           [7, 0],
           [7, 7],
           [0, 7]]...,)
    >>> remove_stacked_contours(contours, False)  # doctest: +ELLIPSIS
    (array([[ 0,  0],
           [10,  0],
           [10, 10],
           [ 0, 10]]...,)
    """

    filtered_contours = []

    for contour in contours:
        centroid = contour_centroid(contour)

        stacked_contours = [
            filtered_contour
            for filtered_contour in filtered_contours
            if cv2.pointPolygonTest(filtered_contour, centroid, False) > 0
        ]

        if not stacked_contours:
            filtered_contours.append(contour)
        else:
            areas = as_float32_array(
                [
                    cv2.contourArea(stacked_contour)
                    for stacked_contour in stacked_contours
                ]
            )

            if keep_smallest:
                result = np.all(cv2.contourArea(contour) < areas)
                index = 0
            else:
                result = np.all(cv2.contourArea(contour) > areas)
                index = -1

            if result:
                stacked_contour = as_int32_array(stacked_contours)[
                    np.argsort(areas)
                ][0]

                index = np.argwhere(
                    np.all(
                        as_int32_array(filtered_contours) == stacked_contour,
                        axis=(1, 2),
                    )
                )[index][0]

                filtered_contours[index] = contour

    return tuple(
        as_int32_array(filtered_contour)
        for filtered_contour in filtered_contours
    )

@KelSolaar
Member

Hey @TimWalter,

I released a new version. I haven't incorporated your changes, as I focused on a machine learning inference approach. That being said, I overhauled the segmentation method, and it should be easier to slot your approach in there now if you feel like it!

Cheers,

Thomas

@TimWalter
Author

Hey @TimWalter,

I'm working on this and I have a few notes / questions:

  • The is_convex_quadrilateral definition is missing (that said, it is not critical).
  • What value do you use for settings.greedy_heuristic?
  • I'm not quite sure how you generated the templates; however, I'm noting that the correspondence tables are massive, with many tens of thousands of elements. This is only for a ColorChecker Classic; I cannot imagine how many there would be for a ColorChecker SG or DC, and the performance hit.

Hey,
Sorry, I am currently very occupied with other things. I might try another shot at integrating the changes into the newly released version. In the meantime, let me answer some of the questions that came up.

Here is the implementation of is_convex_quadrilateral (and its helper):

def largest_convex_quadrilateral(contour: ArrayLike) -> Tuple[NDArray, bool]:
    """
    Returns the largest convex quadrilateral contained in the given contour.

    Parameters
    ----------
    contour
        Contour to process.

    Returns
    -------
    :class:`tuple`
        (contour of the largest convex quadrilateral, convexity)

    Examples
    --------
    >>> contour = np.float32([[0, 0], [0, 1], [1, 1], [1, 0], [0.5, 0.5]])
    >>> largest_convex_quadrilateral(contour)  # doctest: +ELLIPSIS
    (array([[0., 0.],
           [0., 1.],
           [1., 1.],
           [1., 0.]]..., True)
    """
    while len(contour) > 4:
        areas = {i: cv2.contourArea(np.delete(contour, i, axis=0)) for i in range(len(contour))}
        areas = dict(sorted(areas.items(), key=lambda item: item[1]))

        # delete pt, which, if excluded leaves the largest area
        contour = np.delete(contour, list(areas.keys())[-1], axis=0)

    return contour, cv2.isContourConvex(contour)


def is_convex_quadrilateral(contour: ArrayLike, tolerance: float = 0.1) -> bool:
    """
    Returns True if the given contour is a convex quadrilateral.

    Parameters
    ----------
    contour
        Contour to process.
    tolerance
        Tolerance for the ratio of the areas between the trimmed contour and the original contour.

    Returns
    -------
    :class:`bool`
        True if the given contour is a convex quadrilateral.

    Examples
    --------
    >>> contour = np.float32([[0, 0], [0, 1], [1, 1], [1, 0], [0.5, 0.5]])
    >>> is_convex_quadrilateral(contour, tolerance=0.5)
    True
    """
    if len(contour) >= 4:
        original_area = cv2.contourArea(contour)
        convex_contour, convexity = largest_convex_quadrilateral(contour)
        if convexity:
            convex_area = cv2.contourArea(convex_contour)
            ratio = convex_area / original_area
            return np.abs(ratio - 1) < tolerance

    return False

The greedy_heuristic value was set to 10.

The templates were generated quite manually; as you've seen, the colours and corresponding centroids were hard-coded, and the tables do get quite massive, which is the main issue. One can filter and sort them somewhat so that the most promising candidates are evaluated first, which is done, but this only partially relieves the issue.

Regarding the collinear function, you are definitely right that a valid correspondence is not allowed to include 3 collinear points. My function does check that as well, since full-rank matrices confirm it, but the documentation and naming of the function are wrong. In any case, since yours is faster, all the better!

Regarding the filtering, could you specify the issue with the ordering? Moreover, I am unsure how the filtering could be done by size alone, since it is not consistently true that the larger or smaller contour represents our swatches; there are usually invalid contours on both ends of the spectrum, which is why I went for a clustering method.

@KelSolaar
Member

Thanks for the replies! Would be neat to also have your method :)
