Camera optics, imaging algorithms, and 3A

Topics: optical imaging, computer vision, image processing, digital imaging, autonomous driving and vision;
lens design, consumer camera imaging, computer vision, machine vision.

- optics and camera books
"Applied Optics", "Geometrical Optics"

 - camera algorithm books, FPGA or DSP implementations. ISP functions implemented: 3A, 3D noise reduction, edge enhancement, color reproduction, image enhancement, image stabilization, defogging, privacy masking, and the like.
 Zhang Furong, "Research on an H.264 Encoder Based on the DM642"
 Li Fanghui et al., "TMS320C6000 Series DSPs: Principles and Applications"

- camera development

android camera development - https://blog.csdn.net/zhangbijun1230/article/category/6500605

Camera protocols - https://blog.csdn.net/zhangbijun1230/article/category/8792290

camera- https://blog.csdn.net/zhangbijun1230/article/category/7508987
depth camera - https://blog.csdn.net/zhangbijun1230/article/category/7531550

> Camera optics

Camera image processing principles - denoising, zoom, flicker - https://blog.csdn.net/colorant//article/list/6?
Camera image processing principle analysis - http://blog.chinaunix.net/uid-24486720-id-370942.html

camera theory and principles - https://blog.csdn.net/ysum6846/article/details/54380169
  Working principle: light enters the camera through the lens, is filtered by the IR filter, and finally reaches the sensor. Sensors can be divided by material into two kinds, CCD and CMOS. The sensor converts the optical signal into an electrical signal, its internal ADC circuit converts that into a digital signal, and the data is then passed to a DSP (if the module has no DSP, the data is transferred over a DVP interface to the baseband chip as raw data and processed there) and converted into an output format such as RGB or YUV.
  Two kinds of sensor are in common use: CCD (Charge-Coupled Device) and CMOS (Complementary Metal-Oxide-Semiconductor).
   1. CCD, charge-coupled device sensor: made of a highly light-sensitive semiconductor material, it converts light into electric charge, which an analog chip then turns into an electrical signal. A CCD consists of many independent photosensitive units, usually counted in megapixels. When the CCD surface is exposed to light, each photosensitive unit accumulates charge in proportion to the light falling on it, and the signals from all the units together constitute a complete image. Japanese manufacturers lead in CCD sensors, holding roughly 90% of the global market, with Sony, Panasonic, and Sharp at the front.
   2. CMOS, complementary metal-oxide semiconductor: made mainly of silicon and germanium, a CMOS chip contains coexisting N (negative) and P (positive) semiconductors, and the currents produced by these two complementary effects can be processed and interpreted as an image by the recording chip. CMOS sensor manufacturing is led by the United States, South Korea, and Taiwan; major vendors include OmniVision, Agilent, and Micron in the United States, PixArt and other vendors in Taiwan, and Samsung and Hyundai in South Korea.

camera principle - https://blog.csdn.net/g_salamander/article/details/8086835
  With the popularity of digital cameras and mobile phones in recent years, CCD/CMOS image sensors have received widespread attention and application. An image sensor captures image data in a particular pattern, commonly either a direct RGB pattern or a CFA pattern. In the direct RGB mode each pixel carries all three stimulus values R (red), G (green), and B (blue), so the sensor output can be displayed or processed without interpolation; Fuji's SUPER CCD image sensor, for example, uses this model. Image quality is best, but cost is high, so it is mostly used in professional cameras. In a typical digital camera the sensor (CCD or CMOS) accounts for roughly 10% to 25% of the total cost of the machine, so to reduce cost and size most cameras on the market adopt the CFA approach: the pixel array is covered with a color filter array (CFA). The most widely used CFA format is the Bayer filter array, following the GRBG pattern, in which green pixels are twice as numerous as red or blue pixels, because the sensitivity peak of the human eye lies in the green part of the visible spectrum.
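As a sketch of how Bayer raw data becomes RGB, here is a deliberately simple demosaic that assumes an RGGB 2x2 layout and collapses each cell into one RGB pixel. Real ISPs interpolate at full resolution; the layout and function name here are illustrative only.

```python
import numpy as np

def demosaic_rggb_2x2(raw):
    """Reconstruct RGB from an RGGB Bayer mosaic by treating each
    2x2 cell (R, G / G, B) as one output pixel: R and B are taken
    directly, and the two G samples are averaged (green appears
    twice per cell in the Bayer pattern)."""
    h, w = raw.shape
    r  = raw[0:h:2, 0:w:2].astype(np.float64)
    g1 = raw[0:h:2, 1:w:2].astype(np.float64)
    g2 = raw[1:h:2, 0:w:2].astype(np.float64)
    b  = raw[1:h:2, 1:w:2].astype(np.float64)
    return np.stack([r, (g1 + g2) / 2.0, b], axis=-1)
```

The output is half-resolution in each dimension, which makes the cost of the CFA approach visible: full-resolution RGB requires interpolation.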

Generally, a camera consists mainly of two parts, the lens and the sensor IC; some sensor ICs integrate a DSP, while others do not and require external DSP processing. Broken down further, a camera device consists of the following parts:
 1) Lens: the lens assembly of a camera usually consists of several lens elements, divided into plastic (P) and glass (G); common structures are 1P, 2P, 1G1P, 1G3P, 2G2P, 4G, and so on.
 2) Sensor (image sensor): a semiconductor chip of one of two types, CCD or CMOS. The sensor converts the light delivered by the lens into an electrical signal, which its internal ADC then converts into a digital signal. Because each sensor pixel is sensitive only to R, G, or B light, each pixel at this point stores a single color value; this is called raw data. Turning the raw data of each pixel back into the three primary colors requires ISP processing.
 3) ISP (image signal processor): performs the digital image processing that converts the acquired raw sensor data into a format the display supports.
 4) CAMIF (camera interface): the camera interface circuit on the chip; it controls the device, receives the data collected by the sensor for the CPU, and sends it to the LCD for display.

  Working principle: external light passes through the lens, is filtered by the color filter, and strikes the sensor, which converts the light delivered by the lens into an electrical signal and then, through its internal ADC, into a digital signal. If the sensor has no integrated DSP, the data is transmitted to the baseband over a DVP interface, and its format at that point is raw data. If a DSP is integrated, the raw data is processed through AWB, color matrix, lens shading, gamma, sharpness, AE, and denoising, and output as YUV or RGB data.
Finally the CPU sends the result to the framebuffer for display, and we see the scene the camera captured.
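The DSP stage order just described can be sketched as a toy pipeline. The gains, matrix, and gamma value below are placeholders, not tuned camera parameters, and real ISPs add lens shading, sharpening, and denoising stages omitted here.

```python
import numpy as np

def mini_isp(raw_rgb, wb_gains=(1.8, 1.0, 1.5),
             ccm=np.eye(3), gamma=2.2):
    """Toy ISP sketch: linear raw RGB in [0, 1] -> white-balance
    gains -> 3x3 color-correction matrix -> gamma encoding ->
    8-bit output. All numeric values are illustrative."""
    x = raw_rgb * np.asarray(wb_gains)          # per-channel AWB gains
    x = x @ np.asarray(ccm).T                   # color-correction matrix
    x = np.clip(x, 0.0, 1.0) ** (1.0 / gamma)   # gamma encode
    return (x * 255.0 + 0.5).astype(np.uint8)   # quantize to 8 bits
```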
  YUV, like RGB, is one of the common color models (color spaces), and the two can be converted into each other. In YUV, Y represents luminance and U and V represent chrominance; its advantage over RGB is that it takes up less space. YCbCr was defined as part of the ITU-R BT.601 recommendation during the development of international digital-video standards, and is in effect a scaled and offset version of YUV. Its Y keeps the meaning of Y in YUV, and Cb and Cr likewise refer to color, just represented differently. Within the YUV family, YCbCr is the most widely used member in computer systems: JPEG and MPEG both adopt this format, and what people casually call YUV is usually YCbCr. YCbCr has many chroma-subsampling formats, such as 4:4:4, 4:2:2, and 4:2:0.
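A minimal full-range BT.601 conversion, the "YUV" that most software actually means, can be written directly from the standard coefficients:

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range BT.601 RGB -> YCbCr. Y carries luminance;
    Cb and Cr carry chrominance offset around 128."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128.0
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128.0
    return y, cb, cr
```

Note that a neutral gray (R = G = B) maps to Cb = Cr = 128, which is why chroma channels compress so well in JPEG and MPEG.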

DSP (digital signal processing) chip functions: it optimizes the digital image signal through a series of complex mathematical algorithms and passes the processed signal to a PC or other device over USB or another interface. A typical DSP framework contains:
 1. ISP (image signal processor)
 2. JPEG encoder
 3. USB device controller

  Optical zoom: the lens is adjusted to magnify the subject; the pixel count and image quality stay essentially unchanged, so you can frame the subject as you like.
  Digital zoom: there is no real zooming; the camera simply crops a region of the original picture and enlarges it. The image looks bigger on the LCD, but quality is not essentially improved, and the effective resolution is lower than the camera's maximum. For image quality it is basically of little value, though it can provide some convenience.

  As one of the core modules of a camera phone, the camera sensor is tuned through a large number of parameters; doing this well requires a deep understanding and grasp of basic optics, the sensor hardware, and the software principles of image processing.
  The human eye identifies color because the brain synthesizes the responses of three different sensing units in the eye, each with a different response curve over a different band of the spectrum. In general, we can use the familiar RGB three-primary-color model to understand how colors are decomposed and synthesized.

> Camera 3A algorithms: AF, AE, and AWB.
  3A refers to automatic exposure control (AE), automatic focus control (AF), and automatic white balance control (AWB). Automatic exposure control adjusts the brightness of the image, automatic focus control adjusts the focus of the image, and automatic white balance makes the image render true colors under varying light sources.
  The essence of white balance is to make an object that is white appear white under any light source.
Typically the algorithm adjusts the white-balance gains so that the colors of the captured object approach its real colors, with the gains adjusted according to the color temperature of the ambient light source.
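One classic gain-adjustment heuristic is the gray-world assumption: the scene is assumed to average out to neutral gray, so R and B are scaled until their means match the green mean. This is a minimal sketch of one common AWB approach, not necessarily what any particular ISP implements.

```python
import numpy as np

def gray_world_awb(img):
    """Gray-world white balance: compute per-channel means and
    apply G-referenced gains so all channel means become equal."""
    x = img.astype(np.float64)
    means = x.reshape(-1, 3).mean(axis=0)        # mean of R, G, B
    gains = means[1] / np.maximum(means, 1e-6)   # gain = G_mean / C_mean
    return np.clip(x * gains, 0, 255).astype(np.uint8)
```

A warm color cast (large R mean) yields an R gain below 1, pulling the image back toward neutral.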
  AE aims to give the sensing device an appropriate exposure.
The general algorithm adjusts the exposure parameters according to the brightness of the captured image until an appropriate exposure is obtained. The exposure parameters include aperture size, shutter speed, and the sensor's brightness gain.
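A single AE feedback iteration along these lines might look like the following sketch; the target level and damping factor are illustrative values, not from any camera specification, and `exposure` stands in for whichever parameter (gain, shutter time) is being driven.

```python
def auto_exposure_step(mean_luma, exposure, target=118.0, strength=0.5):
    """One AE iteration: scale the exposure parameter toward the
    value that would bring the frame's mean luminance to `target`
    (a mid-gray level). `strength` < 1 damps oscillation between
    frames."""
    if mean_luma <= 0:
        return exposure * 2.0            # black frame: open up quickly
    ratio = target / mean_luma           # how far off we are
    return exposure * (1.0 + strength * (ratio - 1.0))
```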
  Autofocus (AF) is the process of automatically adjusting the camera's focus to obtain a sharp image.
The basic AF algorithm first judges how blurred the image is: a suitable sharpness evaluation function produces an evaluation value for each captured sub-image, a peak-search algorithm finds the peak of the resulting series of evaluation values, and finally the motor drives the focusing element to the position corresponding to that peak, yielding the sharpest image. The key is for the algorithm to balance accuracy against speed, and its accuracy is affected by both the software algorithm and the hardware precision.
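The evaluation-function-plus-peak-search idea can be sketched with a variance-of-Laplacian sharpness score, one common choice of focus measure. Scoring one frame per lens position and taking the argmax stands in for the real motor-driven search; function names are illustrative.

```python
import numpy as np

def focus_measure(gray):
    """Sharpness score: variance of a 4-neighbor Laplacian over the
    image interior. Sharp images have strong edges, so the Laplacian
    response (and its variance) is large; blurred images score low."""
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

def best_focus_position(frames):
    """Exhaustive stand-in for peak search: score one frame per lens
    position and return the index of the highest score."""
    scores = [focus_measure(f.astype(np.float64)) for f in frames]
    return int(np.argmax(scores))
```

A real AF loop would hill-climb toward the peak rather than sweep every position, trading a little accuracy for speed, as the text notes.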

Understanding the 3A algorithms - https://blog.csdn.net/u012900947/article/details/80897364
   The 3A techniques are autofocus (AF), automatic exposure (AE), and auto white balance (AWB). Digital imaging uses the AF, AE, and AWB algorithms to maximize image contrast, correct under- or overexposure of the main subject, and compensate the colors of the picture under different illuminations, thereby presenting high-quality image information.
   The 3A control algorithms have a crucial influence on a camera's imaging. Whether in the morning, in the evening, or in a complex nighttime lighting environment, they help provide accurate color reproduction in framing and lighting, delivering good day-and-night monitoring results.

Image-based automatic exposure algorithms: https://wenku.baidu.com/view/c854fa93fd0a79563c1e72ba.html
  Currently there are essentially two automatic exposure control methods. One uses a reference luminance value: the image is divided evenly into a number of sub-images, and the brightness of each sub-image is used together with the reference luminance; the reference value can be reached by adjusting the aperture size [2], or by setting the shutter speed [3]. Some camera manufacturers use another method: exposure control based on the studied relationship between brightness and exposure value under different lighting conditions [4-6].
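The first method - sub-images metered against a reference luminance - can be sketched with a center-weighted metering function. The grid size and weights below are illustrative, not taken from the cited paper or any camera spec.

```python
import numpy as np

def center_weighted_luma(gray, grid=(3, 3)):
    """Split the frame into grid sub-images, take each sub-image's
    mean luminance, and combine them with a heavier weight on the
    center cell. The result is the metered value an AE loop would
    compare against its reference luminance."""
    gh, gw = grid
    h, w = gray.shape
    weights = np.ones(grid)
    weights[gh // 2, gw // 2] = 4.0      # emphasize the center (illustrative)
    means = np.array([[gray[i*h//gh:(i+1)*h//gh, j*w//gw:(j+1)*w//gw].mean()
                       for j in range(gw)] for i in range(gh)])
    return float((means * weights).sum() / weights.sum())
```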

3A algorithms and camera parameter programming - https://blog.csdn.net/qccz123456/article/details/52371614
Common camera parameters include resolution, sharpness, brightness, contrast, saturation, focal length, field of view, aperture, gain, exposure time, white balance, and so on.

3A modes and state transitions (3A Modes and State)
Google source link: https://source.android.com/devices/camera
Image 3A and gamma correction: algorithm principles and partial implementation - https://blog.csdn.net/piaoxuezhong/article/details/7831354
