Camera optics, imaging and 3A algorithm (vision), camera development

 Imaging and optics; computer vision, image processing, and digital imaging; autonomous driving and vision.
 Lens design; photographic (camera) imaging; machine vision and computer vision.

- Optics and camera books
"Applied Optics", "Geometric Optics"

 Camera algorithm books, with implementations on FPGA or DSP: ISP functions such as 3A, 3D noise reduction, edge enhancement, color restoration, image enhancement, image stabilization, defogging, privacy masking, etc.
 Zhang Furong, "Research on H.264 Encoder Based on DM642"
 Li Fanghui, Wang Fei, He Peikun, "Principle and Application of TMS320C6000 Series DSPs"

- camera development

Android camera development - https://blog.csdn.net/zhangbijun1230/article/category/6500605

Camera protocols - https://blog.csdn.net/zhangbijun1230/article/category/8792290

Camera - https://blog.csdn.net/zhangbijun1230/article/category/7508987
Depth camera - https://blog.csdn.net/zhangbijun1230/article/category/7531550

> Camera optical principle

Analysis of camera image processing principles (denoising, zoom, strobe, etc.) - https://blog.csdn.net/colorant//article/list/6?
Analysis of camera image processing principles - http://blog.chinaunix.net/uid-24486720-id-370942.html

Camera theoretical basis and working principle - https://blog.csdn.net/ysum6846/article/details/54380169
  Working principle: light enters the camera through the lens, is filtered by the IR filter to remove infrared, and finally reaches the sensor. Sensors divide by material into two types, CMOS and CCD. The sensor converts the optical signal into an electrical signal, then into a digital signal through its internal ADC circuit, and passes it to a DSP if one is present; if not, the data is sent over DVP to the baseband chip (at this point the format is raw data; processing is discussed later). It is then converted into RGB, YUV, or another format for output.
  There are two kinds of sensor in common use: the CCD (charge-coupled device) and the CMOS (complementary metal-oxide semiconductor) device.
   1. CCD (Charge-Coupled Device) sensor: made from a highly light-sensitive semiconductor material, it converts light into charge, which is turned into an electrical signal by an analog-to-digital converter chip. A CCD is composed of many independent photosensitive units, usually counted in megapixels. When light hits the CCD surface, each photosensitive unit accumulates charge, and the signals from all the units together form a complete image. The CCD market is dominated by Japanese manufacturers, who hold about 90% of the global market, with Sony, Panasonic, and Sharp in the lead.
  2. CMOS (Complementary Metal-Oxide Semiconductor) sensor: a semiconductor made mainly of silicon and germanium, so that N(−) and P(+) semiconductors coexist on the CMOS die. The currents produced by these two complementary regions can be recorded by the processing chip and interpreted as an image. CMOS sensors are dominated by the United States, South Korea, and Taiwan; the main manufacturers are OmniVision, Agilent, and Micron in the United States, Sharp Image, Original Phase (PixArt), and Taishi in Taiwan, and Samsung and Hyundai in South Korea.

Camera principle - https://blog.csdn.net/g_salamander/article/details/8086835
  With the spread of digital cameras and mobile phones, CCD/CMOS image sensors have received wide attention and application in recent years. Image sensors capture image data in particular patterns; the common ones are the BGR pattern and the CFA pattern.
  The BGR pattern yields image data that can be displayed and compressed directly: each pixel carries full R (red), G (green), and B (blue) primary values. Fuji's SUPER CCD image sensor, for example, uses this pattern. Its advantage is that the sensor output can be displayed and processed without interpolation, giving the best image quality, but the cost is high, so it is mostly used in professional cameras. In general, a digital camera's sensor (CCD or CMOS) accounts for roughly 10% to 25% of the total cost of the device.
  To cut cost and size, most digital cameras on the market use the CFA pattern instead: a color filter array (CFA) is laid over the pixel array. Many filter layouts exist, and the Bayer filter array is now the most widely used. It follows a GRBG pattern in which green pixels are twice as numerous as red or blue pixels, because the peak of the human eye's sensitivity to the visible spectrum lies in the middle band, which corresponds to the green component.
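
As a toy illustration of the CFA idea, the sketch below reconstructs three channels from a GRBG Bayer mosaic with plain bilinear (normalized-convolution) interpolation, assuming a float raw image in [0, 1]. Real ISPs use edge-aware demosaicing; this is only a minimal version of the interpolation step mentioned above.

```python
import numpy as np
from scipy.signal import convolve2d

def demosaic_bilinear(raw):
    """Bilinear demosaic of a GRBG Bayer mosaic (toy sketch, not an
    edge-aware production algorithm). raw: 2-D float array in [0, 1]."""
    h, w = raw.shape
    y, x = np.mgrid[0:h, 0:w]
    # GRBG layout: row 0 = G R G R ..., row 1 = B G B G ...
    r_mask = (y % 2 == 0) & (x % 2 == 1)
    b_mask = (y % 2 == 1) & (x % 2 == 0)
    g_mask = ~(r_mask | b_mask)

    rgb = np.zeros((h, w, 3), raw.dtype)
    kernel = np.ones((3, 3), raw.dtype)
    for c, mask in enumerate((r_mask, g_mask, b_mask)):
        samples = np.where(mask, raw, 0.0)
        # Normalized convolution: average the channel samples that
        # actually exist inside each 3x3 neighbourhood.
        num = convolve2d(samples, kernel, mode="same")
        den = convolve2d(mask.astype(raw.dtype), kernel, mode="same")
        est = num / np.maximum(den, 1e-6)
        est[mask] = raw[mask]          # keep the measured values as-is
        rgb[..., c] = est
    return rgb
```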

Generally speaking, a camera consists of two main parts: the lens and the sensor IC. Some sensor ICs integrate a DSP, and some do not and need external DSP processing. In more detail, a camera device is composed of the following parts:
 1) Lens: a camera lens is generally built from several lens elements, which are either plastic (P) or glass (G). Common lens structures are 1P, 2P, 1G1P, 1G3P, 2G2P, 4G, etc.
 2) Sensor (image sensor): a semiconductor chip of one of two types, CCD or CMOS. The sensor converts the light guided in from the lens into an electrical signal and then into a digital signal through its internal ADC. Since each sensor pixel can sense only R, G, or B light, each pixel stores a single color value at this stage, which we call raw data. Restoring each pixel's raw data to the three primary colors requires ISP processing.
 3) ISP (image signal processor): mainly performs digital image processing, converting the raw data collected by the sensor into a format supported by the display.
 4) CAMIF (camera interface): the camera interface circuit on the chip controls the device, receives the data collected by the sensor, hands it to the CPU, and sends it to the LCD for display.

  Working principle: external light passes through the lens, is filtered by the color filter, and strikes the sensor surface. The sensor converts the incoming light into an electrical signal and then into a digital signal through its internal ADC. If the sensor has no integrated DSP, the data is sent to the baseband over DVP, and at this point it is raw data. If a DSP is integrated, the raw data is processed through AWB, color matrix, lens shading, gamma, sharpness, AE, and denoising, and then output as YUV or RGB data.
Finally the CPU sends it to the framebuffer for display, and we see the scene the camera captured.
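
A minimal sketch of that DSP-side processing order, assuming a demosaiced float RGB image in [0, 1]; the gain, matrix, and gamma values here are illustrative placeholders, not tuned parameters:

```python
import numpy as np

def isp_pipeline(rgb, wb_gains=(2.0, 1.0, 1.5), ccm=None, gamma=2.2):
    """Toy ISP stage ordering: AWB gains -> color matrix -> gamma.
    Lens shading, sharpening, and denoising are omitted for brevity."""
    if ccm is None:
        ccm = np.eye(3)                    # stands in for a calibrated matrix
    out = rgb * np.asarray(wb_gains)       # per-channel white-balance gains
    out = np.clip(out @ ccm.T, 0.0, 1.0)   # color correction matrix
    return out ** (1.0 / gamma)            # gamma encoding for display
```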
  Like RGB, YUV is one of the commonly used color models, and the two can be converted into each other. Y in YUV is luminance; U and V are chrominance. Compared with RGB, its advantage is that it takes less space. YCbCr is part of the ITU-R BT.601 recommendation produced during the standardization of digital video; it is in effect a scaled and offset version of YUV. Y means the same as in YUV, and Cb and Cr likewise refer to color, just represented differently. Within the YUV family, YCbCr is the most widely used member in computer systems; JPEG and MPEG both use this format. In practice, "YUV" usually refers to YCbCr. YCbCr has many sampling formats, such as 4:4:4, 4:2:2, 4:1:1, and 4:2:0.
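
For reference, a full-range BT.601 RGB-to-YCbCr conversion for 8-bit values looks like this (a sketch; video-range variants scale Y to 16..235 and chroma to 16..240):

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Full-range BT.601 conversion for 8-bit RGB arrays."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.564 * (b - y) + 128.0   # (B - Y) / 1.772, offset to unsigned
    cr = 0.713 * (r - y) + 128.0   # (R - Y) / 1.402, offset to unsigned
    return np.stack([y, cb, cr], axis=-1)
```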

Digital signal processing chip (DSP) function: it optimizes the digital image signal through a series of mathematical algorithms and transmits the processed signal to a PC or other device over USB or another interface. DSP structural blocks:
 1. ISP (image signal processor)
 2. JPEG encoder
 3. USB device controller

  Optical zoom: the lens is adjusted to magnify or shrink the subject optically; pixel count and image quality stay essentially unchanged, so you can still capture the image you want.
  Digital zoom: no real zoom takes place. A region is simply cropped from the original picture and enlarged, so what looks magnified on the LCD has gained nothing in quality, and its resolution is lower than the camera's maximum. The result is of little real value in image quality, but it does offer some convenience.
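
Digital zoom therefore amounts to crop-and-upscale, as in this sketch (nearest-neighbour resize for brevity; cameras use better resampling, but the information loss is the same):

```python
import numpy as np

def digital_zoom(img, factor):
    """Crop the centre 1/factor of the frame and upscale it back.
    No new detail is created, which is why quality drops."""
    h, w = img.shape[:2]
    ch, cw = int(h / factor), int(w / factor)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    crop = img[y0:y0 + ch, x0:x0 + cw]
    yi = (np.arange(h) * ch / h).astype(int)   # nearest-neighbour rows
    xi = (np.arange(w) * cw / w).astype(int)   # nearest-neighbour cols
    return crop[yi][:, xi]
```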

  As one of the core modules of a camera phone, the camera sensor involves many parameters in its image-quality tuning, which demands a solid grasp of the basic optics of image processing and of the sensor's software and hardware principles.
  The human eye recognizes color because it has three different kinds of spectral sensing units, each with a different response curve over wavelength; the brain synthesizes their outputs into a color percept. In general, we can use the RGB three-primary model to understand the decomposition and synthesis of color.

> Camera 3A algorithms. 3A: auto focus, auto exposure, auto white balance.
  The 3A controls in imaging are automatic exposure (AE), automatic focus (AF), and automatic white balance (AWB). AE automatically adjusts the brightness of the image, AF automatically adjusts its focus, and AWB renders colors as they would appear under a canonical light source.
  The essence of white balance is to make white objects appear white under any light source.
A typical algorithm adjusts the white-balance gains to bring the captured colors close to the object's true colors, basing the gain adjustment on the color temperature of the ambient light source.
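
The classic gray-world algorithm is one such gain-based scheme, sketched below under the assumption that the scene averages to neutral grey (float RGB in [0, 1]):

```python
import numpy as np

def gray_world_awb(rgb):
    """Gray-world AWB sketch: scale R and B so their means match G."""
    means = rgb.reshape(-1, 3).mean(axis=0)
    gains = means[1] / np.maximum(means, 1e-6)   # G gain stays 1.0
    return np.clip(rgb * gains, 0.0, 1.0)
```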
  Automatic exposure aims to give the photosensitive device the proper exposure.
A typical algorithm measures the brightness of the image and adjusts the corresponding exposure parameters to reach the proper exposure; these parameters include aperture size, shutter speed, and the sensor's brightness gain.
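
One iteration of such a brightness-driven loop might look like the sketch below; the mid-grey target and damping factor are illustrative choices, and a real AE additionally splits the change across aperture, shutter, and gain:

```python
def auto_exposure_step(mean_luma, exposure, target=0.18, k=0.5):
    """Nudge exposure so the measured mean luminance approaches the target.
    k < 1 damps the update to avoid oscillation between frames."""
    error = target / max(mean_luma, 1e-6)
    return exposure * error ** k
```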
  Auto focus is the process of adjusting the camera's focus so that a clear image is obtained automatically.
The basic steps of an AF algorithm are: first judge how blurred the image is, computing an evaluation value for each captured frame with a suitable blur-degree evaluation function; then locate the peak of the evaluation values with a search algorithm; finally, drive the motor to move the image-capture assembly to the peak position, yielding the sharpest image. The key is striking a balance between accuracy and speed, and the accuracy is limited by both the software algorithm and the hardware precision.
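
A contrast-based sketch of those steps, using variance of a Laplacian as the blur-degree evaluation function and a plain sweep instead of a coarse-to-fine hill climb; `capture_at` is a hypothetical driver hook returning a grayscale frame at a given lens position:

```python
import numpy as np

def focus_measure(gray):
    """Sharpness score: variance of a 4-neighbour Laplacian response."""
    lap = (-4 * gray
           + np.roll(gray, 1, 0) + np.roll(gray, -1, 0)
           + np.roll(gray, 1, 1) + np.roll(gray, -1, 1))
    return lap.var()

def autofocus(capture_at, positions):
    """Evaluate every lens position and return the sharpest one."""
    scores = [focus_measure(capture_at(p)) for p in positions]
    return positions[int(np.argmax(scores))]
```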

3A algorithm understanding - https://blog.csdn.net/u012900947/article/details/80897364
   3A technology comprises auto focus (AF), auto exposure (AE), and auto white balance (AWB). 3A digital imaging uses these three algorithms to maximize image contrast, correct over- or under-exposure of the subject, and compensate for color casts under different illuminants, presenting higher-quality image information.
   The 3A control algorithms are crucial to a camera's image quality. Whether at dawn, at dusk, or at night under complex light, they keep the result stable against framing, light, and shadow, providing accurate color reproduction and a consistent day-and-night monitoring effect.

Research on image-based automatic exposure algorithm: https://wenku.baidu.com/view/c854fa93fd0a79563c1e72ba.html
  At present there are basically two automatic exposure control approaches. One divides the image evenly into many sub-images and uses the brightness of each sub-image to set a reference brightness value, which is then reached by adjusting the aperture size [2] or, likewise, the shutter speed [3]. Some camera manufacturers adopt another approach: performing exposure control by studying the relationship between brightness and exposure value under different lighting conditions [4-6].

Camera parameter introduction and 3A programming algorithm - https://blog.csdn.net/qccz123456/article/details/52371614
Camera parameters usually include resolution, sharpness, brightness, contrast, saturation, focal length, field of view, aperture, gain, exposure time, white balance, etc.

3A Modes and State
Google source link: https://source.android.com/devices/camera
Image 3A algorithms and gamma correction, principle and partial implementation - https://blog.csdn.net/piaoxuezhong/article/details/78313542

Origin: blog.csdn.net/ShareUs/article/details/94295628