OpenCV defect detection

 

  • With the spread of automated production equipment, industrial robots are used in more and more industries, and manual operations on production lines are increasingly replaced by automated equipment. During robotic sorting, a robot can not only place products of different specifications and grades accurately into designated pallets, but can also identify and classify surface defects through its vision system. With the arrival of the Industry 4.0 era, traditional manual inspection can no longer meet the requirements of modern industrial production, and machine vision inspection has become an indispensable part of it. Within a machine vision system, defect detection is a key link: the collected images are processed by algorithms that automatically extract, segment, identify and track the target objects, and defects in those objects are then automatically identified and marked. In industrial production, defect detection mainly involves the following tasks:

    • 1. Image segmentation

      Image segmentation separates an image into meaningful regions with clear boundaries; depending on the segmentation target, the image can be divided into several regions. It is a basic task in machine vision systems, and the main approaches include morphology, edge detection, region growing and threshold segmentation. Morphological operators can generate boundaries directly and are simple to apply; edge detection provides edge information and is well suited to localization; region growing and threshold segmentation exploit the difference between target and background in the same image and can yield more accurate target boundaries. Although there are many segmentation methods, they share a common weakness: they are sensitive to noise. To suppress noise, a series of supporting operations is usually applied, for example the opening and closing operations of mathematical morphology, edge-detection-based methods such as the Canny operator, threshold selection methods, and region-growing methods. These methods achieve good results, but they often require manually chosen thresholds and can be computationally expensive. Currently popular segmentation techniques include combining pixel grayscale information with edge detection, region growing, mathematical morphology combined with region growing, and morphological operators combined with threshold selection (a minimal code sketch combining thresholding, morphology and region growing appears after this list).

      • 1. Based on the combination of pixel grayscale information and edge detection algorithm

        This method combines edge detection with grayscale morphology to recognize targets without prior knowledge of the background. The basic idea is to process the original image with edge detection operators to obtain local feature information related to the edges, and then use this information to recognize the target. The Canny operator is currently the most widely used edge detection operator: working pixel by pixel on gray-level gradient information, it extracts edges accurately and is relatively robust to noise, but it is computationally intensive and time-consuming. To overcome these shortcomings, researchers have proposed various improvements, for example: (1) the Hough transform, which does not extract edges itself but detects parametric shapes such as straight lines from edge points, helping to separate target from background; (2) the bimodal (two-peak) threshold method, which removes noise but may lose information between target and background; (3) the gradient method, which analyzes gradient magnitude and direction to extract edges more reliably; (4) multi-scale methods, which combine information from several scales to detect target edges more accurately. (A basic Canny call appears in code example 1 below.)

      • 2. Based on the combination of mathematical morphology and region growing algorithm

        Mathematical morphology analyzes images by transforming them with structuring elements and can effectively suppress noise and highlight objects. The region growing algorithm is a region-based segmentation method, and combining it with morphology is an effective way to apply region growing to image segmentation. There is considerable research on region growing algorithms at home and abroad. For example, Zhang Junhong et al. combine mathematical morphology with region growing: a morphological erosion operation first enhances the target, region growing is then performed on the enhanced image, and the segmentation result is obtained. Results obtained in this way have clear edges, little noise interference and accurate positioning. The approach also has shortcomings, however: the region growing algorithm requires seed points and thresholds to be chosen manually, and it can only segment simple structures, not complex ones. (The sketch after this list uses flood fill as a simple stand-in for seeded region growing.)

    • 2. Feature extraction

      Image feature extraction extracts feature data from the image as input for subsequent processing. Commonly used image features include the following:

      (1) Grayscale features: grayscale information is one of the most important features in machine vision and is used very widely in image processing. It describes how pixel values change with position, for example brightness and contrast. Because grayscale information can be described independently of object shape, size, lighting conditions and background, it is a particularly valuable feature.

      (2) Edge features: edge detection is an important research topic in machine vision. When an object is photographed, its surface information is recorded on the image sensor, and edge detection algorithms then extract that surface information from the image. Commonly used methods include the Canny edge detection algorithm and the Hough transform.

      (3) Texture features: texture describes the surface appearance of objects in an image, including attributes such as roughness, gray level and spatial distribution. A given type of object or surface has a characteristic texture. Common texture descriptors include the gray-level co-occurrence matrix and the gray-level histogram.

      (4) Geometric features: features related to or described by shape. Based on them, objects can be classified into types such as lines, cylinders and cuboids. Geometric features are usually used to describe the surface shape of objects, for example circularity, straightness, roundness and aspect ratio.

      (5) Color features: objects of different colors differ in properties such as reflectance and transmittance, so color can also be used for defect detection. Common color spaces include RGB (red, green, blue), CMYK (cyan, magenta, yellow, black) and YCbCr (one luma and two chroma components).

      (6) Texture-type features: here texture refers to a specific pattern or characteristic type of the object surface. For example, an object that should be circular can be classified as circular or non-circular, and using such type features makes it easier to judge whether the target object is defective. (A short sketch of grayscale-histogram and geometric feature extraction appears after this list.)

    • 3. Defect identification

      Defect identification is the automatic recognition of defects. The basic idea is that a defective region of a target object produces a signal that differs from the surrounding background; when the ratio between this signal and the background signal exceeds a certain level, the target object is judged to contain a defect. During production, industrial robots can automatically identify and mark defects in target objects through the visual inspection system. As industrial production levels improve, this approach has gradually spread to other industries; in the semiconductor industry, for example, when a defect is found on a chip, handheld devices or robots are used to separate the chip from the substrate. (The follow-up to code example 2 below shows a simple contour-area criterion for flagging candidate defects.)
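
As a minimal sketch of the threshold segmentation, morphological filtering and region growing ideas described above: Otsu thresholding stands in for manual threshold selection, opening and closing suppress small noise, and cv2.floodFill stands in for seeded region growing. The file name 'image.jpg' and the seed point (50, 50) are placeholders.

```python
import cv2
import numpy as np

# Placeholder input image, loaded directly as grayscale
img = cv2.imread('image.jpg', cv2.IMREAD_GRAYSCALE)

# Threshold segmentation: Otsu's method chooses the threshold automatically
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Morphological opening removes small bright specks, closing fills small holes
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
cleaned = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
cleaned = cv2.morphologyEx(cleaned, cv2.MORPH_CLOSE, kernel)

# Flood fill from a manually chosen seed point as a simple stand-in for
# seeded region growing; the seed must lie inside the target region
mask = np.zeros((img.shape[0] + 2, img.shape[1] + 2), np.uint8)
cv2.floodFill(cleaned, mask, (50, 50), 128, loDiff=10, upDiff=10)

cv2.imshow('Segmentation', cleaned)
cv2.waitKey(0)
cv2.destroyAllWindows()
```

Cleaning the binary image before the flood fill keeps the grown region from leaking through isolated noise pixels, which is the same motivation given above for combining morphology with region growing.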

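For the grayscale and geometric features discussed under "Feature extraction", the sketch below computes a 256-bin intensity histogram and per-contour area, perimeter and circularity. 'image.jpg' is again a placeholder, and Otsu thresholding is only one possible way to obtain the contours.

```python
import cv2
import numpy as np

# Placeholder input image, loaded as grayscale
gray = cv2.imread('image.jpg', cv2.IMREAD_GRAYSCALE)

# Grayscale feature: 256-bin intensity histogram
hist = cv2.calcHist([gray], [0], None, [256], [0, 256])

# Geometric features: area, perimeter and circularity of each contour
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, True)
    if perimeter > 0:
        # Circularity is 1.0 for a perfect circle and smaller otherwise
        circularity = 4 * np.pi * area / (perimeter ** 2)
        print(area, perimeter, circularity)
```
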
The following are several commonly used OpenCV defect detection code examples:

1. Edge detection:
```python
import cv2
# Read image
img = cv2.imread('image.jpg')
# Grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Edge detection
edges = cv2.Canny(gray, 50, 150)
# Display the results
cv2.imshow('Edges', edges)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
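
As a rough illustration of the "gradient method" improvement mentioned in the first section, edges can also be located from the Sobel gradient magnitude instead of the Canny operator. This is only a sketch; the 3x3 kernel size and the placeholder file name are illustrative.
```python
import cv2

# Placeholder input image, loaded as grayscale
gray = cv2.imread('image.jpg', cv2.IMREAD_GRAYSCALE)
# Horizontal and vertical gradients with 3x3 Sobel kernels
gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
# Gradient magnitude, converted back to 8-bit for display
magnitude = cv2.convertScaleAbs(cv2.magnitude(gx, gy))
cv2.imshow('Gradient magnitude', magnitude)
cv2.waitKey(0)
cv2.destroyAllWindows()
```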
2. Contour detection:
```python
import cv2
# Read the image
img = cv2.imread('image.jpg')
# Grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Edge detection
edges = cv2.Canny(gray, 50, 150)
# Contour detection
contours, hierarchy = cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# Draw contours
cv2.drawContours(img, contours, -1, (0, 0, 255), 2)
# Display the results
cv2.imshow('Contours', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
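
Building on the contour example, a simple way to turn contours into defect candidates, in the spirit of the "Defect identification" section above, is to filter them by a geometric criterion such as area. This is only a sketch; the area threshold of 100 pixels is arbitrary.
```python
import cv2

img = cv2.imread('image.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# Flag contours whose area exceeds an illustrative threshold as candidate defects
for c in contours:
    if cv2.contourArea(c) > 100:
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)
cv2.imshow('Defect candidates', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
```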
3. Line detection:
```python
import cv2
import numpy as np
# Read image
img = cv2.imread('image.jpg')
# Grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Edge detection
edges = cv2.Canny(gray, 50, 150)
# Line detection
lines = cv2.HoughLines(edges, 1, np.pi/180, 200)
# Draw straight lines (HoughLines returns None when no lines are found)
if lines is not None:
    for line in lines:
        rho, theta = line[0]
        a = np.cos(theta)
        b = np.sin(theta)
        x0 = a * rho
        y0 = b * rho
        x1 = int(x0 + 1000 * (-b))
        y1 = int(y0 + 1000 * (a))
        x2 = int(x0 - 1000 * (-b))
        y2 = int(y0 - 1000 * (a))
        cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)
# Display the results
cv2.imshow('Lines', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
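
When the standard Hough transform returns too many infinite lines, the probabilistic variant cv2.HoughLinesP produces finite line segments that are often easier to filter. The threshold, minimum segment length and maximum gap below are illustrative values.
```python
import cv2
import numpy as np

img = cv2.imread('image.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)
# Probabilistic Hough transform returns segments as (x1, y1, x2, y2)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=100,
                        minLineLength=50, maxLineGap=10)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
cv2.imshow('Line segments', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
```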
These code examples cover common defect detection tasks, but the parameters and processing steps need to be adjusted and optimized for the specific application.

Source: blog.csdn.net/qq_42751978/article/details/130809255