Corner points with sub-pixel accuracy

        Sometimes we need to detect corners with maximum accuracy. OpenCV provides the function cv2.cornerSubPix(), which refines corner locations to sub-pixel accuracy. Below is an example: first we find the Harris corners, then pass the centroids of those corner regions to this function for refinement. The Harris corners are marked with red pixels and the refined corners with green pixels. When using this function we need to define a termination condition for the iteration; it stops when the maximum number of iterations is reached or the required accuracy is achieved. We also need to define the size of the neighborhood in which to search for the corner.

ret, labels, stats, centroids = cv2.connectedComponentsWithStats(image, connectivity, ltype)

        image: 8-bit single-channel image

        labels: output label map, the same size as the input image; each pixel holds the label of its component

        stats: Nx5 statistics matrix (CV_32S): [x0, y0, width0, height0, area0; ... ; x(N-1), y(N-1), width(N-1), height(N-1), area(N-1)]

        centroids: Nx2 centroid matrix (CV_64F): [ cx0, cy0; ... ; cx(N-1), cy(N-1)]

        connectivity: 4- or 8-connectivity; the default is 8

        ltype: output label type, either CV_32S or CV_16U; the default is CV_32S
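
To make the stats and centroids outputs concrete, here is a minimal sketch (the binary_img array below is a made-up example, not from the original post) that labels a single white square and prints its statistics:

import cv2
import numpy as np

# Made-up binary input: one white square on a black background
binary_img = np.zeros((100, 100), dtype=np.uint8)
binary_img[30:60, 40:70] = 255

num, labels, stats, centroids = cv2.connectedComponentsWithStats(binary_img, connectivity=8, ltype=cv2.CV_32S)

# Label 0 is the background; real components start at index 1
for i in range(1, num):
    x, y, w, h, area = stats[i]    # bounding box (left, top, width, height) and pixel area
    cx, cy = centroids[i]          # sub-pixel centroid of component i
    print(f"component {i}: bbox=({x}, {y}, {w}, {h}), area={area}, centroid=({cx:.1f}, {cy:.1f})")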

cv2.cornerSubPix(image, corners, winSize, zeroZone, criteria)

        image: input single-channel, 8-bit or float image (here, the grayscale image)

        corners: the initial coordinates of the corner points (the refined coordinates are returned)

        winSize: half of the side length of the search window; e.g. (5, 5) gives an 11x11 search window

        zeroZone: half of the size of the dead region in the middle of the search window, over which no summation is done; (-1, -1) means there is no such region

        criteria: termination criteria for the iterative refinement process
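
The criteria argument is the standard OpenCV termination-criteria tuple: a flag combining cv2.TERM_CRITERIA_EPS and/or cv2.TERM_CRITERIA_MAX_ITER, the maximum iteration count, and the required accuracy (epsilon). A minimal sketch of how it is typically built (the 100 / 0.001 values are just illustrative):

import cv2
# Stop after at most 100 iterations, or once the corner moves by less than 0.001 pixel per step
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.001)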

For example:

import cv2
import numpy as np
 
filename = 'test30_3.jpg'
img = cv2.imread(filename)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
 
gray = np.float32(gray)
# Harris corner detection on the float32 grayscale image
dst = cv2.cornerHarris(gray, 2, 3, 0.04)
# Dilate the response, then threshold it to obtain a binary corner mask
dst = cv2.dilate(dst, None)
ret, dst = cv2.threshold(dst, 0.01*dst.max(), 255, 0)
dst = np.uint8(dst)
 
# Use the centroid of each corner region as the initial corner estimate
ret, labels, stats, centroids = cv2.connectedComponentsWithStats(dst)
 
# Define the criteria to stop the iteration: at most 100 iterations or an accuracy of 0.001
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.001)
 
# cornerSubPix returns an array of refined corner coordinates (not an image)
corners = cv2.cornerSubPix(gray, np.float32(centroids), (5, 5), (-1, -1), criteria)
 
res = np.hstack((centroids, corners))
# Truncate to integer pixel indices (truncation, not rounding); np.int0 was an
# alias for np.intp and has been removed in NumPy 2.0, so np.intp is used here
res = np.intp(res)
# Mark the original centroids in red and the refined sub-pixel corners in green
img[res[:, 1], res[:, 0]] = [0, 0, 255]
img[res[:, 3], res[:, 2]] = [0, 255, 0]
 
cv2.imwrite('subpixe.png', img)

The result is as follows:

  It can be seen that the green (refined) corners are more accurate than the red (Harris) corners.

Origin blog.csdn.net/weixin_34910922/article/details/128193740