Chapter 12: Image Contours

Although the edge detection described earlier can locate the edges in an image, the detected edges are discontinuous and do not form a whole. Finding image contours means connecting these edges into a whole.

  • Edges are discontinuous, not a whole
  • Image contours connect edges to form a whole.

The image contour is an important piece of feature information in an image. By operating on contours, we can obtain the size, position, orientation, and other information about the target object.

OpenCV provides functions to find contours and draw contours:

  • Find image contours: cv2.findContours(), which can find information about all contours in an image.
  • Draw image contours: cv2.drawContours()

1. Find and draw the contour:

An image contour corresponds to a series of pixel points in the image, and these points represent a curve in the image in some way. In OpenCV, contours are found with cv2.findContours(), which, depending on its parameters, returns the contours it finds in a specific representation.

​ Use cv2.drawContours() to draw the found contours onto an image. Depending on its parameters, this function can draw contours in different styles (solid or hollow, different line thicknesses, colors, etc.), and it can draw either all contours or only specified ones.

1. Find image contours:

The function to find image contours in OpenCV is cv2.findContours(), with the following syntax:

image, contours, hierarchy = cv2.findContours(image,mode,method)

The return value is:

  • image: Consistent with the input image in the parameters.
    (The original input image; in OpenCV 4.x this return value has been removed, and the function returns only contours and hierarchy.)

  • contours: The returned contours.
    The return value is a set of contour information, and each contour is composed of several points. For example, contours[i] is the i-th contour (subscripts start from 0), and contours[i][j] is the j-th point within the i-th contour.

  • hierarchy: Topological information (contour hierarchy) of the image.

    ​ Contours within the image may be in different positions. For example, one contour is inside another contour. In this case, we refer to the outer contour as the parent contour and the inner contour as the child contour. According to the above relationship classification, a parent-child relationship is established between all contours in an image.

    According to the relationship between contours, it is possible to determine how a contour is connected to other contours. For example, determine whether a contour is a child contour of a certain contour, or a parent contour of a certain contour. The above relationship is called hierarchy (organizational structure), and the return value hierarchy contains the above hierarchical relationship.

    Each contour contours[i] corresponds to 4 elements to illustrate the hierarchical relationship of the current contour. It has the form:
    [Next,Previous,First_Child,Parent]

    The meaning of each element in the formula is:

    • Next: The index number of the next contour.
    • Previous: Index number of the previous contour.
    • First_Child: The index number of the first child contour.
    • Parent: The index number of the parent contour.

    ​ If the relationship corresponding to each of the above parameters is empty, that is, when there is no corresponding relationship, set the value corresponding to the parameter to "-1". Use the print statement to view the value of the hierarchy: print(hierarchy)

    It should be noted that the hierarchical structure of the contour is determined by the parameter mode. That is to say, using different modes, the number of contours obtained is different, and the obtained hierarchy is also different.

The parameters are:

  • image: the source image. It must be an 8-bit single-channel image; all non-zero pixels are treated as 1 and zero pixels stay 0. In other words, a grayscale input is automatically treated as a binary image. In practice, apply thresholding or similar processing beforehand so that the image whose contours are to be found is a suitable binary image.

  • mode: Contour retrieval mode

    The parameter mode determines the extraction method of the contour, specifically as follows:

    • cv2.RETR_EXTERNAL: Only detect outer contours.


    • cv2.RETR_LIST: No hierarchical relationship is established for the detected contours.


    • cv2.RETR_CCOMP: Retrieves all contours and organizes them into a two-level hierarchy. The upper layer is the outer boundary, and the lower layer is the inner hole boundary. If there is another connected object inside the inner hole, then the boundary of this object is still at the top level.


    • cv2.RETR_TREE: Retrieves all contours and builds the full contour hierarchy as a tree.

    If an image has only two levels of contours, as in the examples here, the hierarchies obtained with cv2.RETR_CCOMP and cv2.RETR_TREE coincide.

  • method: the approximation method of the contour

    The parameter method determines how to express the contour, which can be the following values:

    • cv2.CHAIN_APPROX_NONE: Store all contour points, and the pixel position difference between two adjacent points does not exceed 1, that is, max(abs(x1-x2),abs(y2-y1))=1.
    • cv2.CHAIN_APPROX_SIMPLE: Compress elements in the horizontal direction, vertical direction, and diagonal direction, and only retain the end point coordinates of this direction. For example, in extreme cases, a rectangle only needs 4 points to hold the outline information.
    • cv2.CHAIN_APPROX_TC89_L1: Use one flavor of the Teh-Chin chain approximation algorithm.
    • cv2.CHAIN_APPROX_TC89_KCOS: Use another flavor of the Teh-Chin chain approximation algorithm.

    For example, a contour stored with cv2.CHAIN_APPROX_NONE keeps every point along the boundary, while the same contour stored with cv2.CHAIN_APPROX_SIMPLE keeps only the four corner points.


Notice:

When using the function cv2.findContours() to find image contours, you need to pay attention to the following issues:

  • The source image to be processed must be a grayscale binary image. Therefore, under normal circumstances, it is necessary to perform threshold segmentation or edge detection processing on the image in advance, and then use it as a parameter after obtaining a satisfactory binary image.
  • In OpenCV, it is all about finding white objects from black backgrounds. Therefore, the object must be white and the background must be black.
  • In OpenCV 4.x, the function cv2.findContours() has only two return values.

2. Draw image contours:

In OpenCV, the function cv2.drawContours() is used to draw the image contour. The specific function syntax is:

image=cv2.drawContours(image, contours, contourIdx, color[, thickness[, lineType[, hierarchy[, maxLevel[, offset]]]]])

The return value of the function is image, the target image, i.e. the original image with the contours drawn on it.

The function has the following parameters:

  • image: the image on which to draw the contours. Note that cv2.drawContours() draws the contours directly on this image; after the call, image is no longer the original image but the image with contours on it. If the original is needed for other purposes, make a copy in advance and pass the copy to cv2.drawContours().

  • contours: the contours to be drawn. The type of this parameter is the same as the output contours of the function cv2.findContours(), both of which are list types.

  • contourIdx: the index of the contour to draw, telling cv2.drawContours() whether to draw one contour or all of them. If the parameter is zero or a positive integer, the contour with that index is drawn; if it is negative (usually -1), all contours are drawn.

  • color: the color to draw, expressed in BGR format.

  • thickness: an optional parameter, indicating the thickness of the brush used when drawing the outline. If the value is set to "-1", it means to draw a solid outline.

  • lineType: optional parameter, indicating the line type used when drawing the outline.

  • hierarchy: corresponds to the hierarchical information output by the function cv2.findContours().

  • maxLevel: controls the depth of the contour hierarchy that is drawn. If the value is 0, only the specified contour itself is drawn; if it is a positive number n, the specified contour and nested contours down to n levels are drawn as well. This parameter takes effect only when hierarchy is provided.

  • offset: Offset parameter. This parameter offsets the outline to a different position for display.

3. Draw a contour example:

Example 1: Draw all contours in an image

import cv2
import numpy as np

img = cv2.imread('../contour.bmp')
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
contours, hierarchy = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
# print(contours)
print(hierarchy)

temp = np.zeros(img.shape, np.uint8)
cv2.drawContours(temp, contours, -1, (255, 255, 255), 5)
cv2.imshow('img', img)
cv2.imshow('rst', temp)
cv2.waitKey()
cv2.destroyAllWindows()

# Output hierarchy
[[[ 1 -1 -1 -1]
  [ 2  0 -1 -1]
  [-1  1 -1 -1]]]


Example 2: Extracting foreground objects using the contour drawing function

import cv2
import numpy as np


img = cv2.imread('../flower.jpeg')
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
t, new_img = cv2.threshold(gray_img, 50, 255, cv2.THRESH_BINARY)
contours, hierarchy = cv2.findContours(new_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
mask = np.zeros(new_img.shape, np.uint8)
mask = cv2.drawContours(mask, contours, -1, (255, 255, 255), -1)
rst = cv2.bitwise_and(img, img, mask=mask)

cv2.imshow('img', img)
cv2.imshow('gray_img', gray_img)
cv2.imshow('new_img', new_img)
cv2.imshow('mask', mask)
cv2.imshow('rst', rst)
cv2.waitKey()
cv2.destroyAllWindows()


2. Moment features

The easiest way to compare two contours is to compare their moments. Moments describe the global features of a contour, an image, or a point set, and carry geometric information about the corresponding object, such as its size, position, orientation, and shape. Moment features are widely used in pattern recognition and image recognition.

1. Calculation of moments: moments function

In OpenCV, the contour features of the image can be obtained through the cv2.moments() function. Usually, we refer to the obtained contour features as contour moments.

  • Contour moments describe the important features of a contour and make it easy to compare two contours.

Specific syntax:
retval = cv2.moments(array[, binaryImage])

parameter:

  • array: It can be a point set, a grayscale image or a binary image. When array is a point set, the function will treat these point sets as vertices in the contour, and treat the entire point set as a contour instead of treating them as independent points.
  • binaryImage: When this parameter is True, all non-zero values in array are treated as 1. This parameter only takes effect when array is an image.

The return value retval is a moment feature, mainly including:

  1. Spatial moments
    • Zero-order moment: m00
    • First-order moments: m10, m01
    • Second-order moments: m20, m11, m02
    • Third-order moments: m30, m21, m12, m03
  2. Central moments
    • Second-order central moments: mu20, mu11, mu02
    • Third-order central moments: mu30, mu21, mu12, mu03
  3. Normalized central moments
    • Second-order normalized central moments: nu20, nu11, nu02
    • Third-order normalized central moments: nu30, nu21, nu12, nu03

The above moment features may look rather abstract, but clearly, if the moment features of two contours are exactly the same, then the two contours are the same. The meaning of the zero-order moment m00 is quite intuitive: it is the area of the contour.

Moment feature information can be used to compare whether two contours are similar. For example, there are two contours, no matter where they appear in the image, we can use the m00 moment of the function cv2.moments() to judge whether their areas are consistent.

Central moments:

​ Central moments are translation invariant: they ignore where an object sits in the image, which makes it possible to compare two objects at different positions for consistency. In many cases we wish to compare objects at different locations; central moments solve this by subtracting the mean position (the centroid) before computing the moment, thereby gaining translation invariance.

Normalized central moments:

​ Normalized central moments are invariant to both translation and scaling. Besides translation invariance, we often also want a feature that stays stable when an object is scaled, i.e. the same value before and after resizing. Central moments do not have this property: two objects with the same shape but different sizes have different central moments. Normalized central moments gain scale invariance by dividing by a power of the object's total size (m00).

In OpenCV, the function cv2.moments() computes the spatial moments, central moments, and normalized central moments described above in one call.

Example:

import cv2
import numpy as np

img = cv2.imread('../contour.bmp')
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
contour, hierarchy = cv2.findContours(img_gray, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)
cv2.imshow('img', img)

for i in range(len(contour)):
    temp = np.zeros(img_gray.shape, np.uint8)
    rst = cv2.drawContours(temp, contour, i, 255, 3)
    cv2.imshow(f'rst_{i}', rst)

for n, item in enumerate(contour):
    print(f'Moments of contour {n}:\n {cv2.moments(item)}')
    print(f'Area of contour {n}: {cv2.moments(item)["m00"]}')

cv2.waitKey()
cv2.destroyAllWindows()

# Output
Moments of contour 0:
 {'m00': 9209.5, 'm10': 1017721.8333333333, 'm01': 2389982.5, 'm20': 119678938.58333333, 'm11': 264115139.7083333, 'm02': 627149283.9166666, 'm30': 14819578435.95, 'm21': 31059127282.616665, 'm12': 69306755254.31667, 'm03': 166344041770.35, 'mu20': 7212710.227465913, 'mu11': 3366.9156102240086, 'mu02': 6918397.298907757, 'mu30': -2880.996063232422, 'mu21': 174896.51574015617, 'mu12': 103129.86562001705, 'mu03': -6266.268524169922, 'nu20': 0.08504061263542004, 'nu11': 3.9697222979357786e-05, 'nu02': 0.08157055062519235, 'nu30': -3.539586534588115e-07, 'nu21': 2.1487754181991495e-05, 'nu12': 1.2670516573109425e-05, 'nu03': -7.698726136189045e-07}
Area of contour 0: 9209.5
Moments of contour 1:
 {'m00': 13572.0, 'm10': 4764940.166666666, 'm01': 3239293.833333333, 'm20': 1682579697.5, 'm11': 1137215771.8333333, 'm02': 799816300.5, 'm30': 597523663576.15, 'm21': 401501587677.93335, 'm12': 280779729659.4667, 'm03': 203776751049.05002, 'mu20': 9675571.953775883, 'mu11': -55175.564665317535, 'mu02': 26678624.500047207, 'mu30': -550033.6081542969, 'mu21': -48973895.52901983, 'mu12': 1704572.9323253632, 'mu03': 145759473.83877563, 'nu20': 0.05252776773308551, 'nu11': -0.00029954293752635475, 'nu02': 0.14483573662328061, 'nu30': -2.5631829122147227e-05, 'nu21': -0.002282206947059113, 'nu12': 7.943391363704586e-05, 'nu03': 0.006792461171429966}
Area of contour 1: 13572.0
Moments of contour 2:
 {'m00': 8331.0, 'm10': 1055010.8333333333, 'm01': 757410.8333333333, 'm20': 138592698.0, 'm11': 95918474.75, 'm02': 74976013.33333333, 'm30': 18814393628.350002, 'm21': 12600844964.533333, 'm12': 9495480797.366667, 'm03': 7928377343.35, 'mu20': 4989546.103385642, 'mu11': 2422.1211806088686, 'mu02': 6116192.129312888, 'mu30': -256257.12244033813, 'mu21': 110165.07764232159, 'mu12': 321106.038520813, 'mu03': -152856.36076641083, 'nu20': 0.07188971649383602, 'nu11': 3.489808519246561e-05, 'nu02': 0.08812250835798138, 'nu30': -4.045135827151561e-05, 'nu21': 1.7390061131886812e-05, 'nu12': 5.0688056135402214e-05, 'nu03': -2.4129075338702876e-05}
Area of contour 2: 8331.0


2. Calculate the contour area: contourArea function

In OpenCV, the area of a contour can be calculated with the function cv2.contourArea(). Its syntax is:

retval = cv2.contourArea(contour[, oriented])

  • retval: The returned area value.
  • contour: contour.
  • oriented is a boolean value. When True, the returned value contains a plus/minus sign to indicate whether the contour is clockwise or counterclockwise. The default value of this parameter is False, indicating that the returned retval is an absolute value.

Example:

import cv2
import numpy as np

img = cv2.imread('../contour.bmp')
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
contours, hierarchy = cv2.findContours(img_gray, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)
cv2.imshow('img', img)

for i in range(len(contours)):
    temp = np.zeros(img_gray.shape, np.uint8)
    rst = cv2.drawContours(temp, contours, i, 255, 3)
    cv2.imshow(f'rst_{i}', rst)

for n, item in enumerate(contours):
    print(f'Area of contour {n}: {cv2.contourArea(item)}')

cv2.waitKey()
cv2.destroyAllWindows()

# Output
Area of contour 0: 9209.5
Area of contour 1: 13572.0
Area of contour 2: 8331.0


3. Calculate the contour length: arcLength function

In OpenCV, the length of the contour can be calculated by the function cv2.arcLength().

retval = cv2.arcLength(curve, closed)

The return value retval is the length (perimeter) of the contour.

parameter:

  • curve is the contour.
  • closed is a Boolean value indicating whether the contour is closed. When True, the contour is treated as closed.

Example:
import cv2
import numpy as np

img = cv2.imread('../contour.bmp')
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
contours, hierarchy = cv2.findContours(img_gray, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)
cv2.imshow('img', img)

for i in range(len(contours)):
    temp = np.zeros(img_gray.shape, np.uint8)
    rst = cv2.drawContours(temp, contours, i, 255, 3)
    cv2.imshow(f'rst_{i}', rst)

for n, item in enumerate(contours):
    print(f'Perimeter of contour {n}: {cv2.arcLength(item, closed=True)}')

cv2.waitKey()
cv2.destroyAllWindows()

# Output
Perimeter of contour 0: 381.0710676908493
Perimeter of contour 1: 595.4213538169861
Perimeter of contour 2: 356.9360725879669


3. Hu moments

Hu moments are linear combinations of the normalized central moments: each Hu moment is obtained from a specific combination of them. Because Hu moments are invariant to rotation, scaling, and translation, they are often used as image features for recognition.

The normalized central moments returned by the function cv2.moments() contain:

  • Second-order normalized central moments: nu20, nu11, nu02
  • Third-order normalized central moments: nu30, nu21, nu12, nu03

For convenience, writing the letter "nu" as the letter "v", the normalized central moments are:

  • Second-order: v20, v11, v02

  • Third-order: v30, v21, v12, v03

The seven Hu moments are computed from these as:

h0 = v20 + v02
h1 = (v20 - v02)² + 4v11²
h2 = (v30 - 3v12)² + (3v21 - v03)²
h3 = (v30 + v12)² + (v21 + v03)²
h4 = (v30 - 3v12)(v30 + v12)[(v30 + v12)² - 3(v21 + v03)²] + (3v21 - v03)(v21 + v03)[3(v30 + v12)² - (v21 + v03)²]
h5 = (v20 - v02)[(v30 + v12)² - (v21 + v03)²] + 4v11(v30 + v12)(v21 + v03)
h6 = (3v21 - v03)(v30 + v12)[(v30 + v12)² - 3(v21 + v03)²] - (v30 - 3v12)(v21 + v03)[3(v30 + v12)² - (v21 + v03)²]

1. Hu moment function:

In OpenCV, the Hu moments can be obtained using the function cv2.HuMoments(). The parameter of this function is the return value of the cv2.moments() function, and it returns the 7 Hu moment values.

Concrete syntax:

  • hu=cv2.HuMoments(m)
    • Return value hu: Indicates the returned Hu moment value;
    • Parameter m: The moment eigenvalue is calculated by the function cv2.moments().

Example 1: Verify the 0th Hu moment h0 = v20 + v02

import cv2

img = cv2.imread('../contour.bmp')
img_grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img_moment = cv2.moments(img_grey)
nu02 = img_moment['nu02']
nu20 = img_moment['nu20']
hum1 = cv2.HuMoments(cv2.moments(img_grey)).flatten()

print(f'cv2.moments(img_grey)=\n{img_moment}')
print(f'hum1=\n{hum1}')
print(f'nu20+nu02: {nu20}+{nu02} = {nu20+nu02}')
print(f'hum1[0]={hum1[0]}')
print(f'hum1[0]-(nu02+nu20) = {hum1[0]-(nu20+nu02)}')

# Output
cv2.moments(img_grey)=
{'m00': 8093955.0, 'm10': 1779032490.0, 'm01': 1661859225.0, 'm20': 505157033370.0, 'm11': 389611273740.0, 'm02': 391007192445.0, 'm30': 164339255189040.0, 'm21': 115872895560000.0, 'm12': 93612106111380.0, 'm03': 98481842760465.0, 'mu20': 114129828440.44452, 'mu11': 24338480021.574287, 'mu02': 49792534874.306694, 'mu30': 3135971915955.592, 'mu21': 1454447544071.9954, 'mu12': -2324769529267.9106, 'mu03': -2247068709894.701, 'nu20': 0.0017421181018673839, 'nu11': 0.00037151117457122763, 'nu02': 0.0007600508782649917, 'nu30': 1.682558608581603e-05, 'nu21': 7.80361974403396e-06, 'nu12': -1.2473201575996879e-05, 'nu03': -1.20563095054236e-05}

hum1=
[ 2.50216898e-03  1.51653824e-06  4.20046078e-09  3.70286211e-11
 -1.41810711e-20 -2.66632116e-14  3.48673128e-21]

nu20+nu02: 0.0017421181018673839+0.0007600508782649917 = 0.0025021689801323754
hum1[0]=0.0025021689801323754
hum1[0]-(nu02+nu20) = 0.0

2. Shape matching:

​ We can judge the consistency of two objects through their Hu moments. We could, for example, compute the difference of the Hu moments of two objects directly, but the result is rather abstract. To compare Hu moments more intuitively and conveniently, OpenCV provides the function cv2.matchShapes() to compare the Hu moments of two objects.

The function cv2.matchShapes() allows us to provide two objects and compare their Hu moments. These two objects can be contours or grayscale images. No matter what it is, cv2.matchShapes() will calculate the Hu moment value of the object in advance.

The syntax format of the function cv2.matchShapes() is:

retval = cv2.matchShapes(contour1, contour2, method, parameter)

Where retval is the return value.

This function has the following 4 parameters:

  • contour1: the first contour or grayscale image.

  • contour2: The second contour or grayscale image.

  • method: The method for comparing the Hu moments of the two objects, as shown in Table 12-1. With A denoting object 1 and B denoting object 2, define m_i^A = sign(h_i^A)·log|h_i^A| and m_i^B = sign(h_i^B)·log|h_i^B|, where h_i^A and h_i^B are the i-th Hu moments of A and B. The available methods are:

    • cv2.CONTOURS_MATCH_I1 (method = 1): sum over i of |1/m_i^A - 1/m_i^B|
    • cv2.CONTOURS_MATCH_I2 (method = 2): sum over i of |m_i^A - m_i^B|
    • cv2.CONTOURS_MATCH_I3 (method = 3): max over i of |m_i^A - m_i^B| / |m_i^A|

  • parameter: A specific parameter applied to the method. This parameter is an extended parameter. Currently (as of OpenCV 4.1.0), this parameter is not supported, so set the value to 0.

Example: Use the function cv2.matchShapes() to calculate the matching of three different images.

import cv2

img1 = cv2.imread('../contour2.bmp')
img1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
img3 = cv2.imread('../lena.bmp')
img3 = cv2.cvtColor(img3, cv2.COLOR_BGR2GRAY)

# Rotate and scale img1
h, w = img1.shape
m = cv2.getRotationMatrix2D((w/2, h/2), 90, 0.5)
img_rotate = cv2.warpAffine(img1, m, (w, h))
cv2.imshow('img1', img1)
cv2.imshow('img2', img_rotate)
cv2.imshow('img3', img3)

contours1, hierarchy1 = cv2.findContours(img1, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)
contours2, hierarchy2 = cv2.findContours(img_rotate, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)
contours3, hierarchy3 = cv2.findContours(img3, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)

res1 = cv2.matchShapes(contours1[0], contours1[0], 1, 0)
res2 = cv2.matchShapes(contours1[0], contours2[0], 1, 0)
res3 = cv2.matchShapes(contours1[0], contours3[0], 1, 0)
print('matchShapes for identical images =', res1)
print('matchShapes for similar images =', res2)
print('matchShapes for dissimilar images =', res3)

cv2.waitKey()
cv2.destroyAllWindows()

# Output
matchShapes for identical images = 0.0
matchShapes for similar images = 0.0252828927661784
matchShapes for dissimilar images = 0.6988263089300291


From the above results it can be seen that:

  • The Hu moments of the same image are invariant, and the difference between them is 0.
  • Even after similar images have been translated, rotated and scaled, the return value of the function cv2.matchShapes() is still relatively close.
  • The difference in the return value of the cv2.matchShapes() function for dissimilar images is large.

4. Contour fitting

When working with contours, sometimes the exact contour is not needed and an approximating polygon close to the contour suffices. OpenCV provides a variety of methods to compute such approximating polygons for a contour.

1. Rectangular bounding box:

In OpenCV, the bounding rectangle of a contour can be obtained with the function cv2.boundingRect(). The syntax is:

retval = cv2.boundingRect(array)

return value:

  • retval: the coordinates of the top-left corner of the bounding rectangle together with its width and height, returned as the tuple (x, y, w, h).

parameter:

  • array: grayscale image or contour.

Example:

import cv2
import numpy as np

img = cv2.imread('../contour2.bmp')
grey_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
contours, hierarchy = cv2.findContours(grey_img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

x, y, w, h = cv2.boundingRect(contours[0])
print(f'Top-left corner and width/height: {x}, {y}, {w}, {h}')
new_img = img.copy()
# brcnt = np.array([[[x, y]], [[x+w, y]], [[x+w, y+h]], [[x, y+h]]])
# rst = cv2.drawContours(new_img, [brcnt], -1, (255, 255, 255), 2)
rst = cv2.rectangle(new_img, (x, y), (x+w, y+h), (255, 255, 255), 2)
cv2.imshow('img', img)
cv2.imshow('rst', rst)
cv2.waitKey()
cv2.destroyAllWindows()

# Output
Top-left corner and width/height: 165, 70, 241, 121


2. Minimum enclosing rectangle:

In OpenCV, the minimum-area (rotated) enclosing rectangle of a contour can be obtained with the function cv2.minAreaRect().

retval = cv2.minAreaRect( points )

return value:

  • retval: Indicates the returned rectangle feature information. The structure of the value is (center of the smallest bounding rectangle (x, y), (width, height), rotation angle).

parameter:

  • points: outline.

    Note that the return value retval does not meet the parameter structure requirements of the function cv2.drawContours(). Therefore, it must be converted to a conforming structure before it can be used. The function cv2.boxPoints() can convert the above return value retval into a structure that meets the requirements.

    The syntax format of the function cv2.boxPoints() is:

    • points = cv2.boxPoints(box)

      • points: the return value, the four corner points of the rectangle, in a form that can be passed to cv2.drawContours().

      • box: is the value of the type returned by the function cv2.minAreaRect().

Example:

import cv2
import numpy as np

img = cv2.imread('../contour2.bmp')
grey_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
contours, hierarchy = cv2.findContours(grey_img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
rect = cv2.minAreaRect(contours[0])
print(f'Return value rect: {rect}')
points = cv2.boxPoints(rect)
points = np.intp(points)  # integer corner coordinates (np.int0 is removed in NumPy 2.x)
new_img = img.copy()
rst = cv2.drawContours(new_img, [points], 0, (255, 255, 255), 2)
cv2.imshow('img', img)
cv2.imshow('rst', rst)
cv2.waitKey()
cv2.destroyAllWindows()

# Output
Return value rect: ((284.41119384765625, 132.4847412109375), (80.4281997680664, 238.48388671875), 72.19464874267578)


3. The smallest enclosing circle:

In OpenCV, the function cv2.minEnclosingCircle() is used to construct the smallest enclosing circle of an object through an iterative algorithm. Concrete syntax:

  • center,radius=cv2.minEnclosingCircle(points)
  • center: return value, the center of the smallest enclosing circle.
  • radius: return value, the radius of the smallest enclosing circle.
  • points: outline.

Example:

import cv2
import numpy as np

img = cv2.imread('../contour2.bmp')
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
contours, hierarchy = cv2.findContours(gray_img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

(x, y), radius = cv2.minEnclosingCircle(contours[0])
center = (int(x), int(y))
radius = int(radius)
new_img = img.copy()
rst = cv2.circle(new_img, center, radius, (255, 255, 255), 2)
cv2.imshow('img', img)
cv2.imshow('rst', rst)
cv2.waitKey()
cv2.destroyAllWindows()


4. Best fit ellipse:

In OpenCV, the function cv2.fitEllipse() can be used to construct the best fitting ellipse. The syntax of this function is:

  • retval=cv2.fitEllipse(points)
    • retval: the return value, of RotatedRect type. The function returns the rotated rectangle in which the fitted ellipse is inscribed; retval contains the rectangle's center, width and height, and rotation angle, which coincide with the ellipse's center, axis lengths, and rotation angle.
    • points: outline.

Example:

import cv2
import numpy as np

img = cv2.imread('../contour2.bmp')
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
contours, hierarchy = cv2.findContours(gray_img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

ellipse = cv2.fitEllipse(contours[0])
print(f'ellipse={ellipse}')
new_img = img.copy()
rst = cv2.ellipse(new_img, ellipse, (0, 0, 255), 2)
cv2.imshow('img', img)
cv2.imshow('rst', rst)
cv2.waitKey()
cv2.destroyAllWindows()


5. Best fit straight line:

In OpenCV, the function cv2.fitLine() is used to construct the best fitting line. The syntax of this function is:

  • line=cv2.fitLine(points,distType,param,reps,aeps)

    • line: The return value, which returns the parameters of the best fitting line. (point slope)

      Slope k = line[1] / line[0]

      intercept b = line[3] - k * line[2]

      Determining a straight line by point-slope method

    • points: outline.

    • distType: distance type. When fitting a straight line, the sum of the distances from the input points to the fitted line is minimized. The available types include cv2.DIST_L2 (least squares), cv2.DIST_L1, cv2.DIST_L12, cv2.DIST_FAIR, cv2.DIST_WELSCH, and cv2.DIST_HUBER.

    • param: distance parameter, related to the selected distance type. When this parameter is set to 0, the function will automatically choose the optimal value.

    • reps: It is used to indicate the radial precision required for fitting a straight line, usually the value is set to 0.01.

    • aeps: It is used to indicate the angular precision required for fitting a straight line, usually the value is set to 0.01.

Example:

import cv2
import numpy as np

img = cv2.imread('../contour2.bmp')
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
contours, hierarchy = cv2.findContours(gray_img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

rows, cols = img.shape[:2]
vx, vy, x, y = cv2.fitLine(contours[0], cv2.DIST_L2, 0, 0.01, 0.01)
# intercept b
lefty = int((-x*vy/vx)+y)
righty = int(((cols-x)*vy/vx)+y)
new_img = img.copy()
rst = cv2.line(new_img, (cols-1, righty), (0, lefty), (0, 255, 0), 2)
cv2.imshow('img', img)
cv2.imshow('rst', rst)
cv2.waitKey()
cv2.destroyAllWindows()


6. Smallest enclosing triangle:

In OpenCV, the function cv2.minEnclosingTriangle() is used to construct the minimum enclosing triangle. The syntax of this function is:

  • retval,triangle=cv2.minEnclosingTriangle(points)
    • retval: The area of the smallest enclosing triangle.
    • triangle: The set of three vertices of the smallest enclosing triangle.
    • points: outline.
Example:

import cv2
import numpy as np

img = cv2.imread('../contour2.bmp', -1)
contours, hierarchy = cv2.findContours(img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

area, trgl = cv2.minEnclosingTriangle(contours[0])
print(f'area={area}')
print(f'trgl:{trgl}')
new_img = img.copy()
for i in range(3):
    # cv2.line needs integer coordinates; minEnclosingTriangle returns floats
    cv2.line(new_img, tuple(map(int, trgl[i][0])), tuple(map(int, trgl[(i + 1) % 3][0])), (255, 255, 255), 2)

cv2.imshow('img', img)
cv2.imshow('rst', new_img)
cv2.waitKey()
cv2.destroyAllWindows()

# Output
area=26509.7265625
trgl:
[[[ 52.674423 207.67442 ]]
 [[443.27908  156.27907 ]]
 [[323.3256    36.325584]]]


7. Approximate polygons:

In OpenCV, the approximation polygon curve of the specified precision is constructed by the function cv2.approxPolyDP(). The specific syntax format is:

  • approxCurve = cv2.approxPolyDP(curve,epsilon,closed)
    • approxCurve: The return value, the point set of the approximating polygon.
    • curve: the contour.
    • epsilon: the precision, i.e. the maximum distance allowed between the original contour and the boundary of the approximating polygon.
    • closed: a Boolean value. When True, the approximating polygon is closed; otherwise it is left open.

The function cv2.approxPolyDP() uses the Douglas-Peucker algorithm (DP algorithm). The algorithm first finds the two farthest points on the contour and connects them (figure (b)). It then finds the contour point farthest from this line segment and joins it to the segment, forming a closed polygon; at this point a triangle is obtained, as shown in figure (c).

This process is iterated, each round adding the contour point farthest from the current polygon to the result. The iteration stops once every point on the contour is within epsilon (the parameter of cv2.approxPolyDP()) of the current polygon.

From this process we can see that epsilon sets the precision of the polygon approximation. Typically it is specified as a percentage of the contour's total arc length.

Example:

import cv2

img = cv2.imread('../contour3.bmp')
grey_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
contours, hierarchy = cv2.findContours(grey_img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
cv2.imshow('img', img)

for ratio in (0.1, 0.08, 0.05, 0.04, 0.01):
    adp = img.copy()
    epsilon = ratio * cv2.arcLength(contours[0], True)
    approx = cv2.approxPolyDP(contours[0], epsilon, True)
    adp = cv2.drawContours(adp, [approx], 0, (0, 255, 0), 2)
    cv2.imshow(f'rst{ratio}', adp)

cv2.waitKey()
cv2.destroyAllWindows()


5. Convex hull

1. Convex hull:

  • An approximating polygon fits the contour closely, following it from within.
  • A convex hull is a convex polygon that encloses the contour from outside; it is the outermost "convex" polygon of the object.

Approximating a polygon follows the contour at a specified accuracy, so it is a tight approximation of the contour. The convex hull, by contrast, is a convex polygon that encloses the original contour from outside and is composed only of points on the contour. The convex hull is convex everywhere: a straight line connecting any two points inside the hull stays inside the hull, and the interior angle formed by any three consecutive vertices is less than 180°.

For example, in the figure below, the outermost polygon is the convex hull of the hand, and each region between the edge of the hand and the convex hull is called a convex defect. Convex defects can be used for problems such as gesture recognition.

(figure: hand contour and its convex hull; the gaps between them are convex defects)

In OpenCV, the convex hull of the contour is obtained through the cv2.convexHull() function. The syntax format is:

  • hull=cv2.convexHull(points[,clockwise[,returnPoints]])
    • hull: the return value, the vertices (corner points) of the convex hull.
    • points: the contour.
    • clockwise: a Boolean value. When True, the hull vertices are arranged clockwise; when False, counterclockwise.
    • returnPoints: a Boolean value. With the default value True, the function returns the x/y coordinates of the hull vertices; when False, it returns the indices of those vertices within the contour.
import cv2

img = cv2.imread('../grey_hand.jpg')
grey_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
t, grey_img = cv2.threshold(grey_img, 20, 255, cv2.THRESH_BINARY)
contours, hierarchy = cv2.findContours(grey_img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

new_img = img.copy()
cv2.drawContours(new_img, contours, 0, (0, 0, 255), 2)
hull = cv2.convexHull(contours[0])
rst = cv2.polylines(new_img, [hull], True, (0, 255, 255), 2)
cv2.imshow('img', img)
cv2.imshow('rst', rst)
cv2.waitKey()
cv2.destroyAllWindows()


2. Convex defects:

​ The part between the convex hull and the contour is called a convex defect. Use the function cv2.convexityDefects() in OpenCV to get convex defects. Its syntax format is as follows:

  • convexityDefects=cv2.convexityDefects(contour,convexhull)

    • convexityDefects: The return value, the set of convexity defects. It is an array in which each row holds the values [start point, end point, farthest point on the contour from the hull, approximate distance from that farthest point to the hull].

      Note that the first three values are indices into the contour's point array, so the actual coordinates must be looked up in the contour.

    • contour: contour.

    • convexhull: convex hull.

      Note that when computing convex defects with cv2.convexityDefects(), the convex hull passed in must consist of indices: when constructing it, the parameter returnPoints of cv2.convexHull() must be set to False.

import cv2

img = cv2.imread('../grey_hand.jpg')
grey_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
t, grey_img = cv2.threshold(grey_img, 20, 255, cv2.THRESH_BINARY)
contours, hierarchy = cv2.findContours(grey_img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
cnt = contours[0]

hull = cv2.convexHull(cnt, returnPoints=False)
convexity_defects = cv2.convexityDefects(cnt, hull)
print(convexity_defects)
new_img = img.copy()
for i in range(convexity_defects.shape[0]):
    s, e, f, d = convexity_defects[i, 0]
    start = tuple(cnt[s][0])
    end = tuple(cnt[e][0])
    far = tuple(cnt[f][0])
    cv2.line(new_img, start, end, (0, 255, 0), 2)
    cv2.circle(new_img, start, 5, (0, 0, 255), -1)
    cv2.circle(new_img, far, 5, (255, 0, 0), -1)

cv2.imshow('img', img)
cv2.imshow('result', new_img)
cv2.waitKey()
cv2.destroyAllWindows()

# Output
[[[    0     2     1   162]]
 [[    2     4     3   114]]
 [[    4   258   157 39819]]
 [[  258   260   259   142]]
 [[  261   265   262   114]]
 [[  266   268   267   162]]
 [[  268   454   331  3007]]
 [[  455   578   513  5631]]
 [[  579   585   580   114]]
 [[  586   722   693 37006]]
 [[  722   724   723   162]]
 [[  724   726   725   142]]
 [[  726   910   781 47500]]
 [[  911   915   912   114]]
 [[  915   917   916   162]]
 [[  917  1105  1028 46699]]]


3. Geometry test:

The following introduces several geometric tests related to the convex hull

(1): Test whether a contour is convex

In OpenCV, the function cv2.isContourConvex() can be used to determine whether a contour is convex. The syntax format is:

  • retval=cv2.isContourConvex(contour)

  • retval: return value, Boolean type. When the value is True, the contour is convex; otherwise, it is not convex.
  • contour: The contour to be judged.
import cv2

img = cv2.imread('../grey_hand.jpg')
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
t, gray_img = cv2.threshold(gray_img, 20, 255, cv2.THRESH_BINARY)
contours, hierarchy = cv2.findContours(gray_img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

new_img = img.copy()
hull = cv2.convexHull(contours[0])
cv2.polylines(new_img, [hull], True, (0, 255, 0), 2)
print(f'Is the constructed polygon convex: {cv2.isContourConvex(hull)}')
cv2.imshow('rst1', new_img)

new_img = img.copy()
epsilon = 0.005 * cv2.arcLength(contours[0], True)
approx = cv2.approxPolyDP(contours[0], epsilon, True)
rst = cv2.drawContours(new_img, [approx], 0, [0, 0, 255], 2)
print(f'Is the constructed polygon convex: {cv2.isContourConvex(approx)}')
cv2.imshow('rst2', rst)

cv2.waitKey()
cv2.destroyAllWindows()

# Output
Is the constructed polygon convex: True
Is the constructed polygon convex: False


(2): Distance from point to contour

In OpenCV, the cv2.pointPolygonTest() function computes the shortest (perpendicular) distance from a point to a polygon (contour). This computation is also called the point-polygon relationship test. The syntax of the function is:

  • retval=cv2.pointPolygonTest(contour,pt,measureDist)
    • retval: the return value, whose meaning depends on the parameter measureDist.
    • contour: the contour.
    • pt: the point to be tested.
    • measureDist: a Boolean value indicating how the distance is reported.
      • When the value is True, it means to calculate the distance from the point to the contour. If the point is outside the contour, the return value is negative; if the point is on the contour, the return value is 0; if the point is inside the contour, the return value is positive.
      • When the value is False, the distance is not calculated, and only a value among "-1", "0" and "1" is returned, indicating the positional relationship of the point relative to the contour. If the point is outside the contour, the return value is "-1"; if the point is on the contour, the return value is "0"; if the point is inside the contour, the return value is "1".
import cv2

img = cv2.imread('../contour2.bmp')
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# t, gray_img = cv2.threshold(gray_img, 20, 255, cv2.THRESH_BINARY)
contours, hierarchy = cv2.findContours(gray_img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

new_img = img.copy()
hull = cv2.convexHull(contours[0])
rst = cv2.polylines(new_img, [hull], True, (0, 255, 0), 2)

dist_a = cv2.pointPolygonTest(hull, (300, 150), True)
font = cv2.FONT_HERSHEY_SIMPLEX
cv2.circle(rst, (300, 150), 5, [0, 0, 255], -1)
cv2.putText(rst, 'A', (305, 155), font, 1, (0, 255, 0), 3)
print(f'dist_a={dist_a}')

dist_b = cv2.pointPolygonTest(hull, (300, 250), True)
font = cv2.FONT_HERSHEY_SIMPLEX
cv2.circle(rst, (300, 250), 5, [0, 0, 255], -1)
cv2.putText(rst, 'B', (305, 255), font, 1, (0, 255, 0), 3)
print(f'dist_b={dist_b}')

dist_c = cv2.pointPolygonTest(hull, (405, 120), True)
font = cv2.FONT_HERSHEY_SIMPLEX
cv2.circle(rst, (405, 120), 5, [0, 0, 255], -1)
cv2.putText(rst, 'C', (410, 125), font, 1, (0, 255, 0), 3)
print(f'dist_c={dist_c}')

cv2.imshow('img', img)
cv2.imshow('rst', rst)
cv2.waitKey()
cv2.destroyAllWindows()

# Output
dist_a=18.454874482756573
dist_b=-77.26881150094928
dist_c=-0.0


6. Comparing contours with the shape context algorithm

Using moment features to compare shapes in OpenCV is effective. Starting with OpenCV 3, however, there is a better option: OpenCV provides a dedicated module, shape, whose shape context algorithm can compare shapes more accurately.

Two ways to compare contours:

  1. Moment features
  2. Shape context algorithm

Note: To use the shape module, you need to install the opencv-contrib-python package in advance, and its version should match that of opencv-python.

1. Calculate the shape context distance:

The shape context algorithm in OpenCV uses distance as the metric for shape comparison. This works because shape difference behaves like a distance: both can only be zero or positive, and both are zero exactly when the two shapes are identical.

OpenCV computes this distance with the function cv2.createShapeContextDistanceExtractor(). When computing the distance, it attaches a "shape context" descriptor to each point, so that every point captures the distribution of the remaining points relative to itself, providing a globally discriminative feature.

The syntax format of the cv2.createShapeContextDistanceExtractor() function is:

  • retval=cv2.createShapeContextDistanceExtractor([nAngularBins[, nRadialBins[, innerRadius[, outerRadius[, iterations[, comparer[, transformer]]]]]]])
    • retval: return value
    • nAngularBins: Number of angular bins established for shape context descriptors used in shape matching.
    • nRadialBins: The number of radial bins established for shape context descriptors used in shape matching.
    • innerRadius: The inner radius of the shape context descriptor.
    • outerRadius: The outer radius of the shape context descriptor.
    • iterations: the number of iterations.
    • comparer: histogram cost extraction operator. This function uses the histogram cost extraction functor, and can directly use the operator of the histogram cost extraction functor as a parameter.
    • transformer: shape transformation parameters.

The parameters of the function cv2.createShapeContextDistanceExtractor() are all optional parameters, and the result is retval.

This object can then be used to compute the distance between two different shapes through the method computeDistance(). Its syntax is:

  • retval=cv2.ShapeDistanceExtractor.computeDistance(contour1,contour2)
    • contour1 and contour2 are the two contours to compare.

Example:

  1. Construct a shape context distance extractor object.
  2. Call the computeDistance() method to compare contours and compute the distance between different contours.
import cv2

img1 = cv2.imread('../contour2.bmp')
cv2.imshow('img1', img1)
gray1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
contours1, hierarchy1 = cv2.findContours(gray1, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
cnt1 = contours1[0]


h, w = img1.shape[:2]
m = cv2.getRotationMatrix2D((w/2, h/2), 90, 0.5)
img_rotate = cv2.warpAffine(gray1, m, (w, h))
cv2.imshow('img2', img_rotate)
contours2, hierarchy2 = cv2.findContours(img_rotate, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
cnt2 = contours2[0]

img3 = cv2.imread('../grey_hand.jpg')
gray3 = cv2.cvtColor(img3, cv2.COLOR_BGR2GRAY)
t, gray3 = cv2.threshold(gray3, 20, 255, cv2.THRESH_BINARY)
cv2.imshow('img3', gray3)
contours3, hierarchy3 = cv2.findContours(gray3, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
cnt3 = contours3[0]

# construct the shape-distance extractor
sd = cv2.createShapeContextDistanceExtractor()
# compute the distances
d1 = sd.computeDistance(cnt1, cnt1)
print(f'distance to itself d1={d1}')
d2 = sd.computeDistance(cnt1, cnt2)
print(f'distance to a rotated and scaled copy d2={d2}')
d3 = sd.computeDistance(cnt1, cnt3)
print(f'distance to a dissimilar object d3={d3}')

cv2.waitKey()
cv2.destroyAllWindows()

# Output
distance to itself d1=0.00035154714714735746
distance to a rotated and scaled copy d2=1.3983277082443237
distance to a dissimilar object d3=293.24066162109375


Judging from the above results:

  • The shape context distance of an image to itself is almost zero.
  • The shape context distance between similar images is small.
  • The shape context distance between dissimilar images is large.

2. Calculate the Hausdorff distance:

The calculation method of Hausdorff distance is:

(1) For each point in image A, find the shortest distance from it to image B; the largest of these shortest distances is the directed Hausdorff distance D1 = h(A, B).

(2) For each point in image B, find the shortest distance from it to image A; the largest of these shortest distances is the directed Hausdorff distance D2 = h(B, A).

(3) The larger of D1 and D2 is the Hausdorff distance.

Formally, the Hausdorff distance H(·) is defined in terms of the directed Hausdorff distance h(·) between objects A and B: H(A,B) = max(h(A,B), h(B,A))

in:

h(A,B) = max_{a∈A} min_{b∈B} ‖a − b‖

In the formula, ‖·‖ denotes some norm of points a and b, usually the Euclidean distance.
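The definition translates directly into NumPy. The following is a plain reference implementation for illustration only (the OpenCV function introduced below is the library route):

```python
import numpy as np

def hausdorff(A, B):
    """H(A, B) = max(h(A, B), h(B, A)), with h the directed Hausdorff distance."""
    diff = A[:, None, :] - B[None, :, :]      # pairwise differences
    dists = np.sqrt((diff ** 2).sum(axis=2))  # pairwise Euclidean distances
    h_ab = dists.min(axis=1).max()            # h(A, B): worst A-point vs. B
    h_ba = dists.min(axis=0).max()            # h(B, A): worst B-point vs. A
    return max(h_ab, h_ba)

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.0], [4.0, 0.0]])
print(hausdorff(A, B))  # 3.0: the point (4, 0) is 3 away from its nearest A-point
```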

Scholars Normand and Mikael of McGill University give a detailed description of Hausdorff distance at: http://cgm.cs.mcgill.ca/~godfried/teaching/cg-projects/98/normand/main.html

OpenCV provides the function cv2.createHausdorffDistanceExtractor() to calculate the Hausdorff distance.

Its syntax format is:

  • retval=cv2.createHausdorffDistanceExtractor([distanceFlag[,rankProp]])
    • retval: the return value.
    • distanceFlag: the distance flag, an optional parameter.
    • rankProp: a proportion in the range 0 to 1, also an optional parameter.

Example:

  1. Construct the Hausdorff object by cv2.createHausdorffDistanceExtractor().
  2. Call the computeDistance() method to compare the contours and calculate the Hausdorff distance of different images.
import cv2

img1 = cv2.imread('../contour2.bmp')
cv2.imshow('img1', img1)
gray1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
contours1, hierarchy1 = cv2.findContours(gray1, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
cnt1 = contours1[0]


h, w = img1.shape[:2]
m = cv2.getRotationMatrix2D((w/2, h/2), 90, 0.5)
img_rotate = cv2.warpAffine(gray1, m, (w, h))
cv2.imshow('img2', img_rotate)
contours2, hierarchy2 = cv2.findContours(img_rotate, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
cnt2 = contours2[0]

img3 = cv2.imread('../grey_hand.jpg')
gray3 = cv2.cvtColor(img3, cv2.COLOR_BGR2GRAY)
t, gray3 = cv2.threshold(gray3, 20, 255, cv2.THRESH_BINARY)
cv2.imshow('img3', gray3)
contours3, hierarchy3 = cv2.findContours(gray3, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
cnt3 = contours3[0]


# construct the Hausdorff-distance extractor
hd = cv2.createHausdorffDistanceExtractor()
# compute the distances
d1 = hd.computeDistance(cnt1, cnt1)
print(f'hausdorff distance to itself d1={d1}')
d2 = hd.computeDistance(cnt1, cnt2)
print(f'hausdorff distance to a rotated and scaled copy d2={d2}')
d3 = hd.computeDistance(cnt1, cnt3)
print(f'hausdorff distance to a dissimilar object d3={d3}')

cv2.waitKey()
cv2.destroyAllWindows()

# Output
hausdorff distance to itself d1=0.0
hausdorff distance to a rotated and scaled copy d2=42.42640686035156
hausdorff distance to a dissimilar object d3=146.81961059570312


7. Contour feature values:

The attributes of a contour itself, and of the object it encloses, are of great significance for describing an image. The following introduces several such features.

1. Aspect Ratio:

The aspect ratio (AspectRatio) can be used to describe a contour; for example, the aspect ratio of its bounding rectangle is:

  • Aspect ratio = width (Width) / height (Height)
import cv2

img = cv2.imread('../contour3.bmp')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
contours, hierarchy = cv2.findContours(gray, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

x, y, w, h = cv2.boundingRect(contours[0])
new_img = img.copy()
cv2.rectangle(new_img, (x, y), (x+w, y+h), (255, 0, 0), 3)
aspect_ratio = float(w)/h
print(aspect_ratio)
cv2.imshow('rst', new_img)
cv2.waitKey()
cv2.destroyAllWindows()

# Output
2.125


2. Extent:

Extent, the ratio of the contour's area to the area of its rectangular bounding box, describes the image and its contour features. It is computed as:

Extent = contour area / bounding rectangle area

import cv2

img = cv2.imread('../contour3.bmp')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
contours, hierarchy = cv2.findContours(gray, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

new_img = img.copy()
x, y, w, h = cv2.boundingRect(contours[0])
cv2.drawContours(new_img, contours[0], -1, (0, 255, 255), 3)
cv2.rectangle(new_img, (x, y), (x+w, y+h), (255, 0, 0), 3)
rectArea = w * h
cntArea = cv2.contourArea(contours[0])
extend = float(cntArea) / rectArea
print(extend)
cv2.imshow('rst', new_img)
cv2.waitKey()
cv2.destroyAllWindows()

# Output
0.6720588235294118


3. Solidity:

Solidity, the ratio of the contour's area to the area of its convex hull, measures features of the image, the contour and the convex hull. It is computed as:

Solidity = contour area / convex hull area

import cv2

img = cv2.imread('../contour3.bmp')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
contours, hierarchy = cv2.findContours(gray, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

new_img = img.copy()
cv2.drawContours(new_img, contours[0], -1, (0, 255, 255), 2)
cntArea = cv2.contourArea(contours[0])

hull = cv2.convexHull(contours[0])
hullArea = cv2.contourArea(hull)
cv2.polylines(new_img, [hull], True, (255, 0, 0), 2)
solidity = float(cntArea)/hullArea
print(solidity)

cv2.imshow('rst', new_img)
cv2.waitKey()
cv2.destroyAllWindows()

# Output
0.8977066247605952


4. Equivalent diameter:

The equivalent diameter is the diameter of the circle whose area equals the contour's area. It is computed as:

EquiDiameter = sqrt(4 × contour area / π)

import cv2
import numpy as np

img = cv2.imread('../contour3.bmp')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
contours, hierarchy = cv2.findContours(gray, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)

new_img = img.copy()
cv2.drawContours(new_img, contours[0], -1, (0, 255, 0), 2)
cntArea = cv2.contourArea(contours[0])

equiDiameter = np.sqrt(4*cntArea/np.pi)
print(equiDiameter)
cv2.circle(new_img, (100, 100), int(equiDiameter/2), (0, 255, 255), 3)

cv2.imshow('rst', new_img)
cv2.waitKey()
cv2.destroyAllWindows()

# Output
107.87682530960664


5. Direction:

In OpenCV, the function cv2.fitEllipse() can be used to construct the best fitting ellipse, and can also return information such as the center point, axis length, and rotation angle of the ellipse in the return value. Using this form, information such as the direction of the ellipse can be obtained more intuitively.

The syntax format of the function cv2.fitEllipse() returning each attribute value is:

(x,y),(MA,ma),angle=cv2.fitEllipse(cnt)

  • (x,y): the center point of the ellipse.
  • (MA,ma): the lengths of the two axes of the ellipse.
  • angle: the rotation angle of the ellipse.
import cv2
import numpy as np

img = cv2.imread('../contour3.bmp')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
contours, hierarchy = cv2.findContours(gray, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)

new_img = img.copy()
cv2.drawContours(new_img, contours[0], -1, (0, 255, 0), 2)
cntArea = cv2.contourArea(contours[0])

ellipse = cv2.fitEllipse(contours[0])
cv2.ellipse(new_img, ellipse, (255, 0, 0), 2)
print(f'(x, y): {ellipse[0]}')
print(f'(MA, ma): {ellipse[1]}')
print(f'angle: {ellipse[2]}')

cv2.imshow('rst', new_img)
cv2.waitKey()
cv2.destroyAllWindows()

# Output
(x, y): (256.41845703125, 137.38284301757812)
(MA, ma): (69.56119537353516, 179.77560424804688)
angle: 81.93647766113281


6. Mask and pixels:

Sometimes we want the mask of an object and its corresponding pixels. The solid (filled) contour of a specific object, i.e. its mask, can be obtained by setting the line-width parameter thickness of cv2.drawContours() to -1.

We may also want the exact positions of the contour's pixels. In general, the contour pixels are the non-zero pixels in an image, and their positions can be obtained in two ways: with NumPy functions or with OpenCV functions.

  • Use the Numpy function to get the contour pixels:

    • The numpy.nonzero() function finds the positions of the non-zero elements in an array, but its return value lists the rows and the columns separately.
      For example, apply the function numpy.nonzero() to the following array a:

      a = [[0 0 0 1 0]
           [0 0 1 0 1]
           [0 0 1 1 1]
           [1 0 0 0 0]
           [1 0 0 0 1]]

      The positions of the non-zero elements of a are returned as:

      (array([0,1,1,2,2,2,3,4,4],dtype=int64),array([3,2,4,2,3,4,0,0,4],dtype=int64))

      Processing this return value with numpy.transpose() yields the coordinates as (row, column) pairs:

      [[0 3]
       [1 2]
       [1 4]
       [2 2]
       [2 3]
       [2 4]
       [3 0]
       [4 0]
       [4 4]]

  • Get contour points using OpenCV functions

    • OpenCV provides the function cv2.findNonZero() to find the index of non-zero elements. The syntax of this function is:

      idx=cv2.findNonZero(src)

      • idx: the return value, the positions of the non-zero elements. Note that in the returned index, each element has the format (column number, row number), i.e. (x, y).
      • src: the parameter, the image in which to find the non-zero elements.
import cv2
import numpy as np

img = np.zeros((5, 5), dtype=np.uint8)
for _ in range(10):
    i = np.random.randint(0, 5)
    j = np.random.randint(0, 5)
    img[i, j] = 1

print(f'img:\n{img}')
loc = cv2.findNonZero(img)
print(f'non-zero positions in img:\n{loc}')

# Output
img:
[[1 1 0 0 0]
 [0 1 0 0 1]
 [0 0 1 0 1]
 [0 1 0 0 0]
 [0 0 1 0 0]]
non-zero positions in img:
[[[0 0]]
 [[1 0]]
 [[1 1]]
 [[4 1]]
 [[2 2]]
 [[4 2]]
 [[1 3]]
 [[2 4]]]

7. Maximum and minimum values and their positions:

OpenCV provides the function cv2.minMaxLoc(), which is used to find the maximum value, minimum value and its position within the specified object. The syntax of this function is:

  • min_val,max_val,min_loc,max_loc=cv2.minMaxLoc(imgray,mask=mask)
    • min_val: minimum value.
    • max_val: the maximum value.
    • min_loc: The location of the minimum value.
    • max_loc: The location of the maximum value.
    • imgray: Single channel image.
    • mask: the mask. By using a mask image, the minimum/maximum information within the region specified by the mask can be obtained.
import cv2
import numpy as np

img = cv2.imread('../lena.bmp')
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# contours, hierarchy = cv2.findContours(gray, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)

min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(gray_img)
print(f'min_val: {min_val}')
print(f'max_val: {max_val}')
print(f'min_loc: {min_loc}')
print(f'max_loc: {max_loc}')
# minMaxLoc returns (x, y) positions, while NumPy indexes as [row, col] = [y, x]
new_img1 = img[min_loc[1]: min_loc[1]+100, min_loc[0]: min_loc[0]+100]
new_img2 = img[max_loc[1]: max_loc[1]+100, max_loc[0]: max_loc[0]+100]

cv2.imshow('img', img)
cv2.imshow('new_img1', new_img1)
cv2.imshow('new_img2', new_img2)
cv2.waitKey()
cv2.destroyAllWindows()

8. Average color and average grayscale:

OpenCV provides the function cv2.mean(), which is used to calculate the average color or average grayscale of an object. The syntax of this function is:

  • mean_val=cv2.mean(im,mask=mask)
    • mean_val: Indicates the average value returned.
    • im: original image.
    • mask: mask.
import cv2
import numpy as np

img = cv2.imread('../lena.bmp')
mean_val = cv2.mean(img)
print(mean_val)

cv2.imshow('img', img)
cv2.waitKey()
cv2.destroyAllWindows()

# Output
(124.05046081542969, 124.05046081542969, 124.05046081542969, 0.0)

Note: The function cv2.mean() computes the mean of each channel. The four values above are the means of the B, G, R channels and the A (alpha) channel respectively. This image has no A channel, so that value is 0; and because the three BGR channels here hold identical values, their means are identical too.


9. Extreme points:

Sometimes we want the extreme points of an object: the leftmost, rightmost, topmost and bottommost points.

OpenCV provides corresponding functions to find these points, the usual syntax format is:

  • leftmost=tuple(cnt[cnt[:,:,0].argmin()][0])
  • rightmost=tuple(cnt[cnt[:,:,0].argmax()][0])
  • topmost=tuple(cnt[cnt[:,:,1].argmin()][0])
  • bottommost=tuple(cnt[cnt[:,:,1].argmax()][0])
import cv2
import numpy as np

img = cv2.imread('../contour2.bmp')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
contours, hierarchy = cv2.findContours(gray, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)

new_img = img.copy()
cv2.drawContours(new_img, contours[0], -1, (250, 150, 150), 2)

cnt = contours[0]
leftmost = tuple(cnt[cnt[:, :, 0].argmin()][0])
rightmost = tuple(cnt[cnt[:, :, 0].argmax()][0])
topmost = tuple(cnt[cnt[:, :, 1].argmin()][0])
bottommost = tuple(cnt[cnt[:, :, 1].argmax()][0])
print(f'leftmost: {leftmost}')
print(f'rightmost: {rightmost}')
print(f'topmost: {topmost}')
print(f'bottommost: {bottommost}')

cv2.putText(new_img, 'A', leftmost, cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
cv2.putText(new_img, 'B', rightmost, cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
cv2.putText(new_img, 'C', topmost, cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
cv2.putText(new_img, 'D', bottommost, cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
cv2.imshow('rst', new_img)

cv2.waitKey()
cv2.destroyAllWindows()

# Output
leftmost: (165, 141)
rightmost: (405, 127)
topmost: (348, 70)
bottommost: (180, 190)




Origin blog.csdn.net/weixin_57440207/article/details/122647019