Summary of industrial machine vision defect detection work

Summary of industrial machine vision inspection work. (Because there are no very systematic handouts or documents on the Internet, only scattered material, I have tried to summarize it myself; for reference only.)

Everything you want to know can be found here.

An industrial machine vision system consists of a lighting system, lens, camera system and image processing system. Functionally, a typical machine vision system can be divided into an image acquisition part, an image processing part and a motion control part.

1. Machine vision is a branch of artificial intelligence that is developing rapidly.
2. Simply put, machine vision is the use of machines instead of human eyes for measurement and judgment.
3. Machine vision is a comprehensive technology that includes image processing, mechanical engineering, control, light-source illumination, optical imaging, sensors, analog and digital video technology, and computer hardware and software (image enhancement and analysis algorithms, image capture cards, I/O cards, etc.). A typical machine vision application system includes an image capture module, a light source system, an image digitization module, a digital image processing module, an intelligent judgment and decision-making module, and a mechanical control execution module.
4. The most basic characteristic of a machine vision system is that it improves the flexibility and automation of production. In dangerous working environments unsuitable for manual work, or where human vision cannot meet the requirements, machine vision is often used to replace human vision. At the same time, in large-scale repetitive industrial production, machine vision inspection can greatly improve the efficiency and degree of automation of production.

A machine vision system mainly consists of three parts: image acquisition, image processing and analysis, and output or display.
The system can be subdivided into:

  • Host Computer
  • Frame Grabber and Image Processor
  • Video camera
    CCTV lens
    Microscope lens
  • Lighting equipment
    Halogen light source
    LED light source
    High frequency fluorescent light source
    Flash light source
    Other special light source
  • Image display
    LCD
  • Mechanism and control system
    PLC, PC-based controller
    Precision stage
    Servo motion mechanism

Applications:

  1. Automated Optical Inspection
    (Automated Optical Inspection, abbreviated AOI) is a high-speed, high-precision optical image inspection system that uses machine vision as the standard inspection technology to overcome the shortcomings of traditional manual inspection with optical instruments. Its application areas include high-tech R&D, manufacturing quality control, national defense, public services, medical care, environmental protection, electric power, and so on.
  2. Face detection
  3. Self-driving cars

Understanding the camera

<1> Composition of the camera:
1. Sensor chip;
2. Dustproof sheet/filter;
3. Control and signal conversion circuit board;
4. Optical interface, data interface, and housing.
<2> The function of the camera:
The camera converts the optical signal received by the chip into an electrical signal, then into a digital signal, and transmits it to the host. When a pixel on the chip receives light, it generates a charge proportional to the light intensity; this charge is converted into an electronic signal to obtain the light intensity received by that pixel. In other words, each pixel is a sensor that detects light intensity.

<3> Classification of cameras:
1. Classification by output mode: analog camera and digital camera;
2. Classification by target surface type: area scan camera and line scan camera;
3. Classification by image color: color camera and black and white camera;
4. Classified by chip technology: CCD (Charge-Coupled Device) cameras and CMOS (Complementary Metal-Oxide-Semiconductor) cameras.
Both CCD and CMOS are called photosensitive elements, which are semiconductor elements that convert optical images into electronic signals. They all use photodiodes to detect light, but differ in how the signal is read and produced. The relative difference between the above two is shown below.
Note: With the development of CMOS chip technology, CMOS sensors no longer lag behind CCDs in performance, and CMOS cameras are now widely available on the market.

<4> Camera pixels:
What is a pixel?
The so-called pixel is the smallest constituent unit of an image. Images in a computer are represented by a collection of regularly arranged points called pixels. Each point carries color information such as hue and tone, so that a colorful image can be drawn.
For example, "Resolution: 1280 × 1024" may be shown for an LCD monitor. This means 1280 pixels in the horizontal direction and 1024 pixels in the vertical direction, for a total of 1280 × 1024 = 1,310,720 pixels. Since more pixels can express more image detail, this can also be described as "higher definition".
The concept of gray value:
1. Each cell represents a pixel, and each pixel corresponds to a gray value.
2. The brighter the image, the closer the gray value is to 255, and vice versa.
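As a rough illustration of the gray-value idea, here is a minimal Python sketch (assuming OpenCV is installed and using a hypothetical image file name) that loads an image as a grayscale array and inspects pixel values in the 0–255 range:

```python
import cv2  # OpenCV, assumed installed (pip install opencv-python)

# Load the image as a single-channel grayscale array; values range from 0 (black) to 255 (white).
img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)  # "part.png" is a hypothetical file name

print("image size (rows, cols):", img.shape)
print("gray value at row 100, col 200:", img[100, 200])
print("darkest / brightest pixel:", img.min(), img.max())

# The brighter a region is, the closer its gray values are to 255.
bright_ratio = (img > 200).mean()
print(f"fraction of pixels brighter than 200: {bright_ratio:.2%}")
```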
Pixel structure:
Not all pixels of a CCD element contribute to the output image signal. The pixels can be divided into total pixels (all pixels over the entire area of the CCD element), effective pixels, and actually used (recorded) pixels.
Effective pixels:
the pixels among the total pixels that output an image signal. The number of effective pixels is used in the performance specifications of digital cameras and is stated in the product manual.
Recorded (actually used) pixels:
the pixels among the effective pixels that are used to guarantee product performance.
What is pixel diameter?
The so-called pixel diameter is the size of each cell of the CCD element, usually expressed in μm. Strictly speaking, this size includes both the light-receiving element and the signal-transmission channel, so it is equal to the pixel pitch; that is, pixel diameter and pixel pitch have the same value. The smaller the pixel diameter, the smaller the pixels with which the image is drawn, so a more detailed image can be obtained. The size of the light-receiving area of the CCD element can be obtained from the pixel diameter and the number of effective pixels.
Example:
Suppose the condition of a CCD element is as follows:

  • Number of effective pixels: 768 × 484
  • Pixel diameter: 8.4 μm × 9.8 μm
    The size of the light-receiving area is:
    Horizontal: 768 × 8.4 μm = 6.4512 mm
    Vertical: 484 × 9.8 μm = 4.7432 mm
    CCD chip size:
    The size of the CCD photosensitive element is generally expressed either in inches or with specifications such as APS-C. When expressed in inches, the figure is not the actual size of the sensor but corresponds to the diagonal of a camera (vidicon) tube; for example, a 1/2-inch CCD means "having a field of view equivalent to that of a 1/2-inch camera tube". Why this convention? Because CCDs were originally made to replace the camera tubes of TV video recorders, and at the time there was a strong demand to keep using existing optical products such as lenses, this odd specification was born. The sizes of the main inch specifications are shown in the table below:
    Note: The actual chip size of a given model is not necessarily exactly one of the above sizes; it can be obtained as "number of pixels × single-pixel size" as described in <4> Camera pixels.
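The same arithmetic as in the worked example can be written as a short Python sketch; the numbers are simply the effective pixel count and pixel pitch quoted above:

```python
# Sensor (light-receiving area) size from effective pixel count and pixel pitch.
effective_pixels_h, effective_pixels_v = 768, 484   # effective pixels (H x V)
pixel_pitch_h_um, pixel_pitch_v_um = 8.4, 9.8       # pixel diameter / pitch in micrometres

width_mm = effective_pixels_h * pixel_pitch_h_um / 1000.0
height_mm = effective_pixels_v * pixel_pitch_v_um / 1000.0

print(f"light-receiving area: {width_mm:.4f} mm x {height_mm:.4f} mm")
# -> 6.4512 mm x 4.7432 mm, matching the example above
```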

<5> Camera resolution:
The resolution is determined by the pixels of the chip used in the camera, i.e., the number of pixels in the chip array. For an area scan camera, the resolution is the product of the horizontal and vertical pixel counts. For example, a camera with a resolution of 1280 H × 1024 V has 1280 pixels per row and 1024 rows of pixels, i.e., about 1.3 million pixels. When imaging a field of view of the same size, the higher the resolution, the more clearly details are displayed. At present, the resolutions of commonly used cameras are 300,000, 1.3 million, 2 million, 5 million, 29 million, 71 million, 120 million pixels and so on.
Accuracy: the actual physical size represented by each pixel in the image. Accuracy = field-of-view size in one direction ÷ camera resolution in that direction. For example, if a 5-megapixel camera (resolution 2592 × 1944) images an 80 mm field of view, then accuracy = 80 ÷ 2592 ≈ 0.031 mm/pixel.
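The accuracy calculation can likewise be sketched in a few lines of Python, using the 5-megapixel example values above:

```python
# Accuracy (spatial resolution) = field-of-view size in one direction / pixel count in that direction.
fov_mm = 80.0     # horizontal field of view in mm
pixels = 2592     # horizontal resolution of a 5-megapixel (2592 x 1944) camera

accuracy_mm_per_px = fov_mm / pixels
print(f"accuracy: {accuracy_mm_per_px:.3f} mm/pixel")   # ~0.031 mm/pixel
```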
Line frequency/frame frequency:
The acquisition rate of the camera. An area scan camera is specified by its frame rate, and a line scan camera by its line frequency. The frame rate of an area scan camera is given in FPS (frames per second), i.e., how many images the camera can acquire per second; one image is one frame. For example, 15 FPS means the camera can capture at most 15 images per second. In general, the higher the resolution of the camera, the lower the frame rate. The line frequency of a line scan camera is given in Hz, where 1 Hz means one line per second; for example, a line frequency of 50 kHz means the camera scans 50,000 lines per second. Generally, the higher the resolution of the camera, the lower the line frequency.
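To make the line-frequency figure concrete, the small sketch below relates line rate to the spacing between scanned lines on a moving object; the object speed is an assumed example value, not a number from the text:

```python
# Relation between line-scan rate and line spacing on a moving object.
line_rate_hz = 50_000        # 50 kHz line frequency: 50,000 lines scanned per second
object_speed_mm_s = 500.0    # assumed example: object moves 500 mm per second past the camera

line_pitch_mm = object_speed_mm_s / line_rate_hz
print(f"distance covered by one scanned line: {line_pitch_mm:.4f} mm")  # 0.01 mm per line
```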
Exposure time/exposure method:
Exposure time is the time during which the shutter is open and light falls onto the camera's sensor chip. The longer the exposure time, the brighter the image. In externally triggered synchronous acquisition, the exposure time can follow the line period or be set to a fixed value. Exposure methods in industrial cameras are divided into line exposure and frame exposure: line exposure means progressive (line-by-line) exposure, while frame exposure exposes a whole image at once. Line scan cameras use progressive exposure, and a fixed line frequency can be selected.

<6> Camera data interface type:
Data interface type: the interface connecting the camera and the data cable.
Common data interfaces include GigE, USB, IEEE1394, CameraLink, etc.

Understanding the lens

<1> The role of the lens:
Image processing is the process of converting the light that reaches the imaging element (CCD) into an electronic signal and using it as data. The most important part of this is the lens, which gathers light onto the imaging element. By the principle of refraction, a lens can gather the light from the subject to a point and form an image. The point where the rays converge is called the focal point, and the distance from the center of the lens to the focal point is called the focal length. For a convex lens, the focal length varies with the thickness (curvature) of the lens: the greater the curvature, the shorter the focal length.
Looking at this in terms of the CCD structure: if the subject is beyond the focal point of the convex lens, the light from the subject is refracted by the lens and forms an image that is inverted up-down and left-right. This is called the real image, and if the imaging element is placed at this location, the real image can be captured.

<2> Classification of lenses:

  • CCTV lens (also known as wide-angle lens):
    This type of lens is used in closed-circuit TV (CCTV) and is mainly used for inspection in the FA (factory automation) field and for surveillance in anti-theft and disaster-prevention applications. Since the number of lens elements is small and the structure is relatively simple, it is compact and low in cost. In general, it is characterized by well-balanced aberration correction regardless of the object distance.
    Note: This kind of lens has a wide range of applications due to its small size, low cost, and improved performance.
  • Telecentric lens:
    This is a lens arranged so that the chief rays pass through the focal point and travel at an angle of view close to 0°, i.e., parallel to the optical axis of the lens. Because the chief rays are parallel to the optical axis, distortion hardly occurs, and the size and position of the subject can be captured with high precision. When image processing requires high magnification, low distortion and large depth of field, telecentric lenses show their true value.
    Note: In addition to object-space telecentric lenses, there are also image-space telecentric and bilateral (double) telecentric lenses.

<3> The structure of CCTV lens:
The so-called floating structure refers to a mechanism that moves the front group and the rear group of a multi-element lens independently. As a result, high resolution and high contrast can be obtained from close range to infinity.
[optimized optical design through multiple lenses and floating structure]
To maximize the performance of the multi-element lens, a floating structure is added so that each internal lens group (divided into a front group and a rear group) can move independently. When the focus is adjusted, if the front group moves, the rear group also moves to the most suitable position so that aberration correction is optimized. High performance is achieved by always maintaining the optimum positional relationship between the lens groups from close range to infinity.
<4> Lens interface:
The lens mount connects the camera (CCD) and the lens, and there are many types. If the two do not satisfy the same conditions, compatibility between camera and lens cannot be guaranteed: mechanically there are constraints on the structure and size of the joint, and optically there are constraints such as the flange distance on the camera side, so when selecting a lens it is necessary to confirm the mount used by the camera. The mount most commonly used in the FA industry is the C-mount, whose thread typically has an inner diameter of 25.4 mm (1 in) and a pitch of 0.794 mm (32 threads per inch).
Commonly used industrial camera mounts include the C-mount, CS-mount, F-mount, V-mount, T2-mount, Leica mount, M42 mount, M50 mount, etc. The mount type is not directly related to the performance or quality of the lens; it is only a difference in the connection method. Adapters between the various commonly used mounts are generally also available.
<5> Working distance of the lens:
Working distance (WD): When the lens is in focus, the distance from the measured object to the front end of the lens is called the working distance.
In practical applications, the lens cannot focus on objects at any object distance at the same time, so the working distance of the lens has a certain range.
<6> The focal length of the lens:
One of the lens specifications is the "focal length". Typical FA lenses have focal lengths of 8 mm, 16 mm, 25 mm, 50 mm and other specifications. From the required field of view and the distance to the subject, the appropriate focal length, and hence the working distance (WD), can be determined. For a CCD, the proportion working distance : field of view = focal length : CCD size holds. The achievable WD and field of view are therefore determined by the focal length of the lens and the size of the CCD, and this proportional formula can be applied at distances beyond the lens's minimum working distance without an extension ring.
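As a rough lens-selection sketch of this proportion (working distance : field of view = focal length : sensor size); the working distance and focal length below are assumed example values, not figures from the text:

```python
# Proportional relation: WD / FOV = focal_length / sensor_size
# => FOV = WD * sensor_size / focal_length
working_distance_mm = 300.0   # assumed example working distance
sensor_width_mm = 6.4512      # sensor width from the earlier 768 x 8.4 um example
focal_length_mm = 25.0        # a typical FA lens focal length

fov_width_mm = working_distance_mm * sensor_width_mm / focal_length_mm
print(f"horizontal field of view: {fov_width_mm:.1f} mm")   # ~77.4 mm
```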
<7> The field of view of the lens:
The field of view is the shooting range at the working distance. In general, the longer the working distance between the lens and the subject, the wider the field of view. The width of the field of view is also determined by the focal length of the lens. The angle covering the range that the lens can capture is called the angle of view: the shorter the focal length, the larger the angle of view and the wider the field of view; conversely, a longer focal length lets distant subjects be magnified.

<8> Aperture of the lens:
Inside the lens there is a polygonal or circular opening of variable area, called the aperture (iris). Its function is to control the amount of light passing through the lens, and its size is usually described by the aperture number (f-number). The f-number is the ratio of the focal length f' of the lens to the effective aperture diameter D, written f/#, and calculated as f/# = f'/D. The smaller the f/# value, the larger the aperture. The f/# scale changes in steps of √2, so within the same unit of time each step passes twice as much light as the next smaller aperture. For example, opening the aperture from f/8 to f/5.6 doubles the light-transmitting area. Effect of the aperture on image brightness: for the same lens, the larger the aperture, the brighter the image. (A short numerical sketch of these relations is given after the depth-of-field notes below.)

<9> Depth of field of the lens:
The depth of field is the depth of object space within which a sharp image can be obtained on the image plane. That is, in front of and behind the subject (the focus point) there is still a range over which the image remains sharp; this range is the depth of field, and a subject within it appears in focus. Factors affecting depth of field:
1. The larger the aperture, the smaller the depth of field; the smaller the aperture, the larger the depth of field;
2. The longer the focal length of the lens, the smaller the depth of field; the shorter the focal length, the larger the depth of field;
3. The farther the shooting distance, the greater the depth of field; the closer the shooting distance, the smaller the depth of field.
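Here is the numerical sketch of the aperture relations described in <8> (f/# = f'/D and the factor-of-two change in light per √2 step); the focal length and aperture diameter are assumed example values:

```python
# f-number = focal length / effective aperture diameter
focal_length_mm = 16.0        # assumed example focal length
aperture_diameter_mm = 2.0    # assumed example effective aperture diameter
f_number = focal_length_mm / aperture_diameter_mm
print(f"f/# = {f_number:.1f}")   # f/8 in this example

# Light-gathering area is proportional to 1 / f_number^2, so a sqrt(2) step in f/#
# changes the transmitted light by a factor of about two (one "stop").
area_ratio = (8.0 / 5.6) ** 2
print(f"relative light area, f/5.6 vs f/8: {area_ratio:.2f}x")   # about 2x
```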
<10> Adjustments on the lens:
Aperture:
By adjusting the aperture ring of the lens, the amount of light entering the lens can be controlled, and the brightness of the image changes accordingly.
Focus:
If the focus is not set correctly, the image is not sharp and a good imaging result cannot be obtained; this reduces system accuracy, directly affects the overall performance of the machine vision system, and increases the difficulty of image processing.

Understanding the light source

<1> Type of light source:

  1. LED
    LED stands for Light Emitting Diode, a light-emitting semiconductor element. Fluorescent lamps use a discharge phenomenon to convert electrical energy into light indirectly, whereas an LED converts electrons into light directly, so its energy conversion efficiency is higher, making it a power-saving light source. In addition, LEDs have a long service life and a rich choice of wavelengths (colors), so they have been widely used in image processing in recent years.
  2. Fluorescent lamp
    A light source that emits visible light by using the ultraviolet rays generated by an arc discharge to excite a phosphor coating. Typically, the inside of the glass tube is coated with phosphor, mercury vapor is sealed inside, and discharge electrodes are installed at both ends of the tube.
    In the past it was widely used because its service life is longer than that of incandescent lamps. Its light color is mainly white; there are also daylight-color lamps and three-wavelength fluorescent lamps that are closer to natural light. By shape, fluorescent lamps are classified into straight-tube, circular, bulb-type, and other forms.
  3. Halogen lamp
    A bulb made by sealing inert gas such as nitrogen together with a halogen gas such as iodine inside a glass envelope. The principle of light emission is the same as that of an incandescent lamp, but the halogen lamp is brighter and has a longer service life. It is used in automobile headlights, spotlights in shopping malls, and studio lighting. Its light color is warm white only.
  4. Xenon lamp
    A discharge lamp that emits light close to natural light, made by sealing xenon gas in a quartz tube. Compared with an incandescent lamp it is brighter, consumes less power, and has a longer service life. It is mainly used as the light source of projectors and floodlights.
    Xenon lamps can be divided into short-arc lamps, long-arc lamps, flash lamps, and so on.
  5. Metal halide lamp
    A type of high-intensity discharge (HID) lamp in which a mixed vapor of metal halides and mercury is sealed inside the lamp and light is produced by an arc discharge. It not only has high brightness but also has the advantages of relatively low power consumption and long service life.
    It has been used in the lighting of roads and tunnels in the past. Now it is also used in indoor lighting of large buildings, fish tank lighting for ornamental fish, and night game lighting in sports venues.
Relative characteristics of various lighting:
<2> LED light source structure:
An LED emits light when electrons (-) and holes (+) meet and recombine at the junction of N-type and P-type semiconductors under an applied current. The wavelength (color) of the light depends on the band gap of the semiconductor (the energy region in which electrons cannot exist), so semiconductor materials corresponding to various wavelengths have been developed. In recent years, with the invention of blue and white light-emitting diodes based on gallium nitride, their use has expanded to applications such as displays and lighting.
Supplement:
Common light sources (red, blue, green and white):
Common light sources (types):
<3> Light source controller
Purpose of the controller:
The main purpose of the light source controller is to supply power to the light source and to control its brightness and lighting state (on/off). Strobing of the light source can also be realized by giving the controller an external trigger signal (a switching or level signal), which effectively prolongs the service life of the light source. There are two types of controllers: analog and digital. Digital controllers can communicate with a PC through RS232 or Ethernet. Staff can choose different types of controllers according to actual needs.
Controller technical features:
Most controllers adopt a current-control mode. Typical features include automatic load adaptation, a programmable trigger function, an efficient and flexible communication protocol, short response time, and a high trigger frequency.
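As an illustration of how a digital light source controller might be driven from a PC over RS232, here is a minimal Python sketch using pyserial; the port name and the ASCII command syntax are purely hypothetical and must be taken from the specific controller's manual:

```python
import serial  # pyserial, assumed installed (pip install pyserial)

# Hypothetical example: port name, baud rate and command syntax depend entirely
# on the specific controller model and must be taken from its manual.
ctrl = serial.Serial("COM3", baudrate=19200, timeout=1)

ctrl.write(b"CH1:ON\r\n")        # hypothetical command: turn channel 1 on
ctrl.write(b"CH1:BRT,128\r\n")   # hypothetical command: set channel 1 brightness to 128/255

reply = ctrl.readline()          # many controllers echo an acknowledgement
print("controller reply:", reply)

ctrl.close()
```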

<4> The role of the light source
A good image should satisfy the following conditions:
Contrast: the contrast is obvious and the boundary between the target and the background is clearly distinguished; the gray value difference between target and background should generally be at least 30.
Uniformity: the overall brightness of the image should be uniform, or any unevenness should not produce gray-level differences that interfere with image processing.
Authenticity: relates to color; the colors should be true and the brightness moderate, without overexposure, and the gray-level transitions should meet the precision requirements of the inspection.
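The contrast condition (a gray-value difference of at least 30 between target and background) can be checked directly on an acquired image. Below is a minimal sketch assuming OpenCV and hypothetical file names for the image and a binary mask of the target region:

```python
import cv2

# Hypothetical inputs: the inspection image and a binary mask (255 = target, 0 = background).
img = cv2.imread("inspection.png", cv2.IMREAD_GRAYSCALE)
mask = cv2.imread("target_mask.png", cv2.IMREAD_GRAYSCALE)

target_mean = img[mask > 0].mean()       # mean gray value of the target region
background_mean = img[mask == 0].mean()  # mean gray value of the background

contrast = abs(target_mean - background_mean)
print(f"target/background gray difference: {contrast:.1f}")
print("contrast OK" if contrast >= 30 else "contrast too low, adjust the lighting")
```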

Traditional algorithm

The defect detection algorithm is developed mainly with Halcon (the Halcon deep learning detection algorithm and the Hikvision AI defect detection platform .sol program will be supplemented later).
References:
Halcon quick start (to be supplemented);
Cracking the Halcon License.
It took a little time to record and organize the documents:
link: My CSDN .
Source code: Not open source yet.
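The author's Halcon program is not open source, but for orientation, a classical blob-style defect check of the kind such pipelines typically perform can be sketched in Python with OpenCV; everything below (file name, threshold, minimum defect area) is an illustrative assumption, not the author's algorithm:

```python
import cv2

# Illustrative classical pipeline: threshold -> morphology -> connected components -> area filter.
img = cv2.imread("surface.png", cv2.IMREAD_GRAYSCALE)   # hypothetical surface image

# Dark defects on a bright background; threshold value chosen for illustration only.
_, binary = cv2.threshold(img, 80, 255, cv2.THRESH_BINARY_INV)

# Remove isolated noise pixels with a small morphological opening.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)

# Label connected regions and keep those larger than a minimum area (in pixels).
num, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
min_area = 50
defects = [i for i in range(1, num) if stats[i, cv2.CC_STAT_AREA] >= min_area]

print(f"defect candidates found: {len(defects)}")
for i in defects:
    x, y, w, h, area = stats[i]
    print(f"  at ({x}, {y}), size {w}x{h}, area {area} px")
```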

Deep learning (DL) method

The approach mainly builds on open-source frameworks and pretrained models, with improvements made on top of them.
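As one common way of building on open-source frameworks and models, a fine-tuning skeleton with PyTorch/torchvision (a recent version is assumed) might look like the sketch below; the class count and the dummy data are assumptions for illustration, not the author's actual setup:

```python
import torch
from torchvision import models

# Start from an ImageNet-pretrained backbone and replace the classifier head
# with a 2-class output (e.g. "OK" vs "defect") for transfer learning.
model = models.resnet18(weights="DEFAULT")
model.fc = torch.nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch (a real pipeline would use a
# DataLoader over labelled defect images).
images = torch.randn(8, 3, 224, 224)   # batch of 8 RGB images, 224x224
labels = torch.randint(0, 2, (8,))     # dummy OK/defect labels

optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.4f}")
```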
This blog post is also available in PPT format:
link: My CSDN_PPT .

Origin: blog.csdn.net/weixin_44348719/article/details/130704909