Corner detection in digital image processing

In the context of image processing, "features" can be intuitively understood as salient or distinctive parts of an image that can be easily identified and used to represent it. Think of features as "landmarks" or "focal points" that make an image unique. To make this easier to understand, think about how familiar places or objects are recognized in real life.

Imagine you are looking at a photo of a busy city street. What do you notice first? It could be a uniquely shaped building, a brightly colored billboard, or a distinctive road sign. These elements stand out because they differ from their surroundings in some way, perhaps in shape, color or texture. In image processing, these are features. Features are the building blocks of many advanced image processing tasks; they are like clues or key points that algorithms use to “understand” and process an image in a meaningful way.

Types of features

  • Edges are places in an image where intensity or color changes significantly. Think of the silhouette of a mountain against the sky; the boundary where the mountain meets the sky forms an edge.

[Figure: edge]

  • Corners are points where two or more edges intersect. They are like the corners of a picture frame, where two sides meet at a point.

[Figure: corner point]

  • Blobs are regions of an image that differ from their surroundings in properties such as brightness or color. Think of them as spots or blemishes on a surface. They are used where objects need to be recognized or counted, such as counting apples in an image.

[Figure: blob]

  • Ridges are elongated lines along which the image is brighter (or darker) than on either side, like the crest of a mountain range.

[Figure: ridge]

Feature detection

Feature detection algorithms in image processing are like a detective's tools for finding important clues in a complex scene. These algorithms automatically detect and identify key features of an image, such as edges, corners, and specific patterns, which are critical for understanding and analyzing it. A corner can be viewed as the intersection of two edges, or as a point where the gradient changes significantly in multiple directions, i.e. a point of strong local intensity variation no matter which way you move.

  • Harris corner detection

  • Shi-Tomasi corner detection

Harris corner detection

Harris corner detection is a foundational algorithm for corner detection in image processing. Its core idea is to find points where the intensity changes significantly when a small window is shifted in any direction. The basic observation: in flat areas (like a clear sky) the intensity stays roughly constant in every direction; along an edge it changes sharply across the edge but hardly at all along it; around a corner it changes strongly in every direction.

The algorithm uses a mathematical method that involves calculating a matrix (often called the Harris matrix, or structure tensor) for each pixel in the image. This matrix captures the gradient changes (i.e. intensity changes) in all directions around the pixel.

It first computes the image gradients in the X and Y directions and looks at their joint distribution inside a small window around each pixel.

It then fits an ellipse to this gradient distribution; the lengths of the ellipse's axes are determined by the two eigenvalues of the Harris matrix, λ1 and λ2.

The algorithm then computes a response value from these eigenvalues, R = λ1·λ2 - k·(λ1 + λ2)², which is equivalent to det(M) - k·trace(M)². A large positive R indicates a corner, a large negative R indicates an edge, and a small |R| indicates a flat region.

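For intuition, here is a rough sketch of how such a response map could be computed from Sobel gradients. This is an illustrative approximation, not the exact cv2.cornerHarris implementation, which handles windowing and border pixels differently.

import cv2
import numpy as np


def harris_response(gray, ksize=3, k=0.04, window=2):
    # Gradients in the X and Y directions (Sobel)
    gray = np.float32(gray)
    Ix = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=ksize)
    Iy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=ksize)

    # Accumulate the gradient products over a local window (entries of the Harris matrix M)
    Ixx = cv2.boxFilter(Ix * Ix, -1, (window, window))
    Iyy = cv2.boxFilter(Iy * Iy, -1, (window, window))
    Ixy = cv2.boxFilter(Ix * Iy, -1, (window, window))

    # Response R = det(M) - k * trace(M)^2  (= lambda1*lambda2 - k*(lambda1 + lambda2)^2)
    det = Ixx * Iyy - Ixy * Ixy
    trace = Ixx + Iyy
    return det - k * trace * trace

Large positive values in the returned map indicate corners; thresholding it (for example at 1% of its maximum, as in the script below) yields a corner mask.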

Below is a basic script demonstrating Harris corner detection.

import cv2
import numpy as np


# Load the input image
image = cv2.imread('image.jpeg')
if image is None:
    print("Error loading image")
    exit()


# Convert to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)


# Convert to float32 for more precision
gray = np.float32(gray)


# Apply Harris Corner Detector
corners = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)


# Dilate corner image to enhance corner points
corners = cv2.dilate(corners, None)


# Threshold to mark the corners on the image
threshold = 0.01 * corners.max()
image[corners > threshold] = [0, 0, 255]


# Display the result
cv2.imshow('Harris Corners', image)
cv2.waitKey(0)
cv2.destroyAllWindows()

cv2.cornerHarris is OpenCV's function for corner detection based on the Harris algorithm. It analyzes local changes in intensity in the image to identify corner points. Its parameters:

  • src is the input image. It must be a single-channel (grayscale) image; in the script above it is converted to float32 for extra precision.

  • blockSize is the neighborhood size considered for corner detection. It specifies the size of the window (i.e. local area) over which the local intensity variation is measured.

  • ksize is the aperture parameter of the Sobel operator, which computes the image gradients (x and y derivatives) used by the Harris algorithm; ksize is the kernel size of that operator. Common values are 3, 5 or 7. Larger kernels smooth the gradients but may reduce the accuracy of detecting sharp corners.

  • k is the free parameter of the Harris detector used in the response equation; it determines whether an area is scored as a corner. The value of k is typically in the range 0.04 to 0.06 and can be tuned to the application. Higher values result in fewer detected corners, while lower values make the detector more sensitive to noise; the short experiment after this list illustrates the effect.
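
As a rough illustration of how k affects sensitivity, the snippet below (reusing the same image.jpeg and the same 1% threshold as the script above; the exact counts depend entirely on the image) prints how many pixels pass the corner threshold for a few values of k:

import cv2
import numpy as np


gray = cv2.imread('image.jpeg', cv2.IMREAD_GRAYSCALE)
if gray is None:
    print("Error loading image")
    exit()
gray = np.float32(gray)


# Count pixels passing the corner threshold for several values of k
for k in (0.04, 0.05, 0.06):
    response = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=k)
    count = int(np.sum(response > 0.01 * response.max()))
    print(f"k={k}: {count} pixels above the threshold")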

[Figure: Harris corner detection]

The Harris corner detector distinguishes well between edges and corners and will not mistake edges for corners. It also keeps working when the image is rotated, because the corner structure does not change under rotation. Although it is not fully scale-invariant, it performs reasonably well under slight scale changes and different lighting conditions.

The detector can have difficulty with major scale changes (e.g., the same corner appearing at different sizes) and is not specifically designed to handle situations where lighting or perspective changes significantly. It is used in various applications such as feature extraction, image matching, motion tracking and 3D modeling.

Shi-Tomasi corner detection

The Shi-Tomasi method, also known as the Good Features to Track detector, is based on the same basic principles as the Harris corner detector. However, it introduces a different way of evaluating corner response.

The key difference is the corner response function. Harris combines the two eigenvalues of the gradient covariance matrix into a composite score, R = λ1·λ2 - k·(λ1 + λ2)², while the Shi-Tomasi method simplifies this by taking only the smaller eigenvalue, R = min(λ1, λ2), and accepting a point as a corner when this value exceeds a threshold.

The Shi-Tomasi method is particularly good at detecting more obvious and clearly defined corner points. Compared to the Harris method, it has less tendency to detect false corner points in flat areas or along edges.
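
Before the full goodFeaturesToTrack example below, here is a minimal sketch of the Shi-Tomasi score itself; OpenCV exposes the per-pixel minimum eigenvalue directly via cv2.cornerMinEigenVal (the image path is only a placeholder):

import cv2
import numpy as np


gray = cv2.imread('07.jpg', cv2.IMREAD_GRAYSCALE)
if gray is None:
    print("Error loading image")
    exit()


# Shi-Tomasi score: the smaller eigenvalue of the gradient covariance matrix at each pixel
min_eig = cv2.cornerMinEigenVal(np.float32(gray), blockSize=3, ksize=3)


# Keep pixels whose score is at least 1% of the best score (mirrors qualityLevel=0.01)
strong = min_eig > 0.01 * min_eig.max()
print('Pixels passing the quality threshold:', int(strong.sum()))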

import cv2
import numpy as np


# Load the input image
image = cv2.imread('07.jpg')
if image is None:
    print("Error loading image")
    exit()


# Convert to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)


# Shi-Tomasi corner detection parameters
maxCorners = 100
qualityLevel = 0.01
minDistance = 10


corners = cv2.goodFeaturesToTrack(gray, maxCorners, qualityLevel, minDistance)


# Draw corners on the image
if corners is not None:
    corners = corners.astype(int)  # integer pixel coordinates (np.int0 is deprecated in newer NumPy)
    for i in corners:
        x, y = i.ravel()
        cv2.circle(image, (x, y), 3, (0, 255, 0), -1)  


# Display the result
cv2.imshow('Shi-Tomasi Corners', image)
cv2.waitKey(0)
cv2.destroyAllWindows()

Shi-Tomasi corner detection requires a grayscale image, so cv2.cvtColor is used to convert the image from BGR to grayscale. cv2.goodFeaturesToTrack is the function that implements the Shi-Tomasi corner detection algorithm; its underlying OpenCV signature is:

void cv::goodFeaturesToTrack(InputArray image,
                             OutputArray corners,
                             int maxCorners,
                             double qualityLevel,
                             double minDistance,
                             InputArray mask = noArray(),
                             int blockSize = 3,
                             bool useHarrisDetector = false,
                             double k = 0.04)

  • image is the source image to be used for corner detection.

  • maxCorners specifies the maximum number of corners to return. If it is set to zero or a negative value, all detected corners are returned.

  • qualityLevel represents the minimum accepted quality of corners, expressed as a fraction of the best corner score found in the image. The score is the minimum eigenvalue in the Shi-Tomasi method or the Harris response function if useHarrisDetector is enabled.

  • minDistance specifies the minimum Euclidean distance between returned corner points.

  • mask is an optional binary mask specifying where to look for corners (used in the sketch below).

  • blockSize is the size of the neighborhood considered for corner detection.
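
The remaining parameters (mask, blockSize, useHarrisDetector, k) usually keep their defaults, but they can be set explicitly. Below is a small sketch, with the file name and the top-half mask chosen purely for illustration, that restricts detection to part of the image and switches the score to the Harris response:

import cv2
import numpy as np


image = cv2.imread('07.jpg')
if image is None:
    print("Error loading image")
    exit()
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)


# Binary mask restricting detection to the top half of the image
mask = np.zeros_like(gray)
mask[: gray.shape[0] // 2, :] = 255


# useHarrisDetector=True scores corners with the Harris response instead of the minimum eigenvalue
corners = cv2.goodFeaturesToTrack(gray, maxCorners=50, qualityLevel=0.01,
                                  minDistance=10, mask=mask, blockSize=3,
                                  useHarrisDetector=True, k=0.04)
print('Corners found in the masked region:', 0 if corners is None else len(corners))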
