Introduction to Phone Camera Structure and Working Principle

 

A cell phone camera module consists of a PCB, a lens, a filter holder, a DSP (used with CCD sensors), the image sensor, and other components.

Working principle: light from the subject passes through the lens and is projected onto the sensor, which converts the optical image into an electrical signal. Analog-to-digital conversion turns this into a digital signal, which is processed by the DSP and then handed to the phone's processor, finally becoming the image visible on the phone screen.
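The capture chain above can be sketched as a sequence of stages. This is a toy model for illustration only; the stage functions and values are invented for this example, not a real camera API.

```python
# Minimal sketch of the capture pipeline: lens -> sensor -> ADC -> DSP.

def lens_project(scene):
    # The lens projects the subject onto the sensor (here: pass-through).
    return scene

def sensor_capture(optical):
    # Photodiodes convert light intensity to analog voltages (toy gain of 0.5).
    return [0.5 * v for v in optical]

def adc(analog, levels=256):
    # Analog-to-digital conversion: quantize each voltage to an integer code.
    return [min(levels - 1, int(v * levels)) for v in analog]

def dsp_process(digital):
    # The DSP would optimize the digital signal before it reaches the display.
    return digital

scene = [0.0, 0.4, 1.0]          # light intensities from the subject
image = dsp_process(adc(sensor_capture(lens_project(scene))))
print(image)
```

Each stage only hands its output to the next, which mirrors how the optical image becomes screen pixels.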

 

1 PCB board

The printed circuit boards used in cameras come in three kinds: rigid boards, flexible boards, and rigid-flex boards.

 

2 Lens

The lens images the subject onto the sensor; it usually consists of several lens elements. By material, camera lenses can be divided into glass lenses and plastic lenses.

There are two more important lens parameters: aperture and focal length.

1) The aperture is a device mounted inside the lens that controls how much light reaches the sensor. Besides controlling the amount of light, the aperture also controls depth of field: the larger the aperture, the shallower the depth of field. The hazy-background effect commonly seen in portraits is a reflection of shallow depth of field.

 

(The smaller the f-number, the larger the aperture, the more light enters, the brighter the picture, the narrower the focal plane, and the stronger the background bokeh;

the larger the f-number, the smaller the aperture, the less light enters, the darker the picture, the wider the focal plane, and the sharper both subject and background.)

2) The focal length is the distance from the optical center of the lens to the plane on the sensor where a clear image forms.
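The f-number relation behind the parenthetical above can be made concrete. The formula N = focal length / aperture diameter and the fact that gathered light scales with aperture area (so with 1/N²) are standard optics; the example numbers are mine.

```python
# f-number N = focal length / aperture diameter.
def f_number(focal_length_mm, aperture_diameter_mm):
    return focal_length_mm / aperture_diameter_mm

# Light gathered scales with aperture area, i.e. with 1 / N**2, so this
# returns how many times more light the smaller f-number admits.
def relative_light(n_small, n_large):
    return (n_large / n_small) ** 2

n1 = f_number(26, 26 / 1.8)   # an f/1.8 lens
n2 = f_number(26, 26 / 2.8)   # an f/2.8 lens
print(round(n1, 2), round(n2, 2), round(relative_light(n1, n2), 2))
```

This is why a lower f-number means a brighter picture: f/1.8 admits roughly 2.4x the light of f/2.8.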

 

3 Holder and Color Filter

The role of the holder is simply to fix the lens in place; a filter is also mounted on the holder.

Color filters come in two varieties: the RGB primary-color separation method and the CMYK complementary-color separation method.

The role of the color filter is to filter the light so that each photodiode receives only monochromatic light.

Why filter the light into monochromatic components? Because a photodiode can only output different voltage levels, i.e., it represents only light intensity and cannot represent color (yellow light and red light of the same brightness produce the same output level from the diode).
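One common way the RGB separation method is realized is a Bayer-style color filter array, which assigns one color per photodiode; the missing colors are interpolated later. The RGGB layout below is the common Bayer pattern, used here as my illustration (the text does not specify a layout).

```python
# Sketch of an RGGB Bayer mosaic: each photodiode sees one color only.
def bayer_color(row, col):
    # Even rows alternate R G R G ..., odd rows alternate G B G B ...
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

mosaic = [[bayer_color(r, c) for c in range(4)] for r in range(2)]
print(mosaic)
```

Green appears twice per 2x2 cell because the eye is most sensitive to green, a standard design choice in Bayer sensors.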

 

4 DSP

DSP stands for digital signal processing chip. Its function is to run the digital image signal through a series of complex mathematical optimization algorithms and finally deliver the processed signal to the display.

DSP framework: (1) ISP (image signal processor); (2) JPEG encoder.

The DSP's control role is to transfer the data acquired by the sensor to the baseband quickly and to refresh the sensor in time. The quality of this control chip directly determines picture quality (such as color saturation and sharpness) and fluency.

(*) The ISP chip is the camera's "brain".

(ISP chips come in two forms, integrated and discrete; a discrete ISP chip has more processing power than an integrated one, but costs more.)

The ISP chip performs arithmetic processing on the sensor's output signal: linear correction, noise removal, dead-pixel repair, color interpolation, white balance correction, exposure correction, and so on. The ISP chip determines the final image quality of a phone camera to a large extent; it typically leaves room to improve image quality by 10%-15%.

The software algorithms inside the ISP are important: the ISP chip first processes the sensor's input signal into a raw image, and then the software algorithms, much like "Photoshopping" the raw image internally, optimize its color, hue, contrast, noise, and so on, finally producing the JPG picture we see.
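Two of the ISP stages named above, white balance and a display mapping, can be sketched on a tiny "raw" pixel list. The gains and gamma value are made-up numbers for illustration, not taken from any real ISP.

```python
# Toy ISP stages applied to raw (R, G, B) pixels in the range [0, 1].

def white_balance(raw, r_gain, b_gain):
    # Scale the R and B channels so a gray object comes out gray.
    return [(r * r_gain, g, b * b_gain) for r, g, b in raw]

def gamma_correct(img, gamma=2.2):
    # Map linear sensor values to perceptually spaced display values.
    return [tuple(round(c ** (1 / gamma), 3) for c in px) for px in img]

raw = [(0.2, 0.4, 0.1)]   # one greenish raw pixel
out = gamma_correct(white_balance(raw, r_gain=2.0, b_gain=4.0))
print(out)
```

After correction the pixel is neutral gray, which is what these stages aim for on achromatic subjects.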

 

5 Sensor

The sensor is the core component of a camera and its most critical technology. (It receives the light coming through the lens and converts these optical signals into electrical signals, via photodiodes.) The larger the area of the photosensitive device, the more photons it captures, the better its light sensitivity, and the higher its signal-to-noise ratio. There are two main types: CCD (Charge Coupled Device) and CMOS (Complementary Metal Oxide Semiconductor).

In a camera with a CMOS sensor, the DSP is integrated into the CMOS chip, so from the outside they appear as a single part. A camera using a CCD sensor has two separate parts: the CCD and the DSP.

Image sensors are developing toward higher sensitivity, higher performance, higher resolution, lower power, and lower-voltage operation.

 

6 Flash

Flash is one way to increase exposure: in low-light conditions, the camera lights up the surrounding scene to compensate for poor lighting and raise picture brightness. In addition, in environments with complex lighting, the flash can suppress stray light so that colors in the picture are reproduced more faithfully.

 

7 Image Quality

Image quality refers to the overall performance of the imaging. Evaluating it requires testing many aspects, such as exposure, sharpness, color, texture, noise, image stabilization, flash, focus, and artifacts.

The root reason phone cameras cannot match standalone cameras in imaging quality: the size of the photosensitive element.

Pixel count is not the key factor determining picture quality; what is, then? The answer is the sensor.

Relatively speaking, the larger the sensor, the better its light sensitivity: it captures more photons (image signal), the signal-to-noise ratio is higher, and imaging results are naturally better. But a larger sensor also increases the phone's volume, weight, and cost.
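The photons/SNR relation above can be quantified under the photon shot-noise assumption (my simplification, not stated in the text): the signal is N collected photons, the shot noise is sqrt(N), so SNR = sqrt(N).

```python
import math

# SNR under pure photon shot noise: signal N, noise sqrt(N) => SNR = sqrt(N).
def shot_noise_snr(photons):
    return photons / math.sqrt(photons)

small = shot_noise_snr(10_000)    # smaller sensor: fewer photons collected
large = shot_noise_snr(40_000)    # 4x the photosensitive area, ~4x photons
print(small, large, large / small)
```

Quadrupling the photosensitive area doubles the SNR in this model, which is why sensor size matters more than pixel count.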

 

8 dual camera phone

In fact, the theoretical basis of dual cameras is to take the longitudinal space an optical system originally requires and spread it out across the lateral plane. This achieves the required imaging level without hurting the phone's overall slim appearance.

Dual-camera capabilities:

1) Use the two cameras to form stereo vision and obtain depth information, enabling depth-based background blurring.

2) Fuse information from the two different images to obtain higher resolution and better image quality, or to implement optical zoom and enhanced night shooting.

Dual-camera phones work in the following combinations:

1) Color + color (RGB + RGB): the advantage is that depth of field can be computed, enabling background blur and refocusing (i.e., focusing after the shot is taken);

2) Color + monochrome (RGB + Mono): the advantage is improved image quality in low light or at night;

There are generally three ways to improve low-light photo quality: extend the exposure time, increase the ISO sensitivity, or enlarge the aperture. (Extending exposure introduces hand-shake blur, and phone apertures are usually fixed and cannot be adjusted.)

A color camera has a filter that admits only RGB components, so part of the light is filtered out; a monochrome camera has no filter, all the light comes in, and it gathers more light. The monochrome image is therefore brighter and preserves detail better, and the fused result has a clearly higher SNR (signal-to-noise ratio: useful signal versus noise).

3) Wide-angle + telephoto (Wide + Tele): the biggest advantage of this combination is that it enables optical zoom (currently the dual-camera principle adopted by most major phone manufacturers);
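The stereo-vision depth idea behind combination 1) can be sketched with the textbook rectified-stereo relation depth = focal length x baseline / disparity. The formula is the standard pinhole stereo model and the numbers are hypothetical; neither is given in the text.

```python
# Depth from stereo: closer objects shift more between the two views.
def stereo_depth(focal_px, baseline_mm, disparity_px):
    return focal_px * baseline_mm / disparity_px

# Hypothetical setup: 1000 px focal length, 10 mm spacing between cameras.
near = stereo_depth(1000, 10, 50)   # large disparity -> near object (mm)
far = stereo_depth(1000, 10, 5)     # small disparity -> far object (mm)
print(near, far)
```

A depth map built this way per pixel is what lets the phone blur only the background.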

 

9 focal length and angle of view

Focal length: one of the main properties of a lens, the distance from the optical center of the lens to the focal plane (the film, or a CCD/CMOS image sensor), represented by f, in mm.

Angle of view: the range of the scene that the lens can image and record.

Focal length and angle of view are inversely related: the longer the focal length, the smaller the angle of view.
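The inverse relation can be computed with the standard pinhole formula AOV = 2·atan(sensor width / (2·f)). The formula is standard optics and the sensor width is an example value; neither comes from the text.

```python
import math

# Angle of view from sensor width and focal length (pinhole model).
def angle_of_view_deg(sensor_width_mm, focal_length_mm):
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

wide = angle_of_view_deg(36, 24)    # short focal length, 36 mm sensor width
tele = angle_of_view_deg(36, 200)   # long focal length, same sensor
print(round(wide, 1), round(tele, 1))
```

Doubling the focal length roughly halves the angle of view at long focal lengths, matching the inverse relation stated above.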

 

10 Zoom

Zoom is an important lens capability, and comes in two kinds: optical zoom and digital zoom.

Although both can magnify distant objects and help with telephoto shots, only optical zoom adds real pixels to the subject's image, so the subject is not only larger but also comparatively clearer. Larger zoom factors generally suit telephoto shooting.

1) Optical zoom is produced by changing the relative positions of the lens, the subject, and the focal point. By changing the structure of the lens-element assembly, the lens focal length changes, achieving zoom. As the image plane moves, the field of view and focal length change, and more distant scenes become clearer.

2) Digital zoom uses the processor inside the camera to enlarge the area occupied by each pixel in the picture, achieving magnification. It uses "interpolation" to enlarge the captured scene, but sharpness drops to some degree.

 

11 Professional Organizations for Evaluating Camera and Phone-Camera Quality

DXOMARK:https://www.dxomark.com/cn/

DXOMark test items include exposure and contrast, color, autofocus, texture, noise, artifacts, flash, image stabilization (video), and many more.

1) Exposure and contrast: the AE tests measure how well the camera meters and adjusts to the brightness of the subject and background. Dynamic range (contrast) is the camera's ability to capture scene detail from the brightest highlights to the darkest shadows.

2) The color score measures how accurately the camera reproduces colors under a variety of lighting conditions.

3) Autofocus measures how accurately and how quickly the camera can focus on the subject under different lighting conditions.

4) Texture measures how well the camera preserves the fine details observable on object surfaces. Since vendors introduced noise-reduction techniques (such as longer shutter times and post-processing), whose side effects include motion blur and softened detail, the texture score has become especially important.

5) Noise measures how much image noise is present. Noise may come from the light of the scene itself, or may be caused by the image sensor and camera electronics.

6) Artifacts measures the degree of distortion or other imperfections introduced into the image by the camera lens and digital processing.

Noise comes mainly from the camera's image sensor, whereas artifacts are produced by lens distortion, including straight lines that look curved or regions with abnormal color.

7) Flash measures whether the phone's built-in flash (if any) can illuminate the subject efficiently and accurately.

 

12 Auto White Balance

The human visual system has color constancy, so our perception of an object's color is largely unaffected by the color of the light source. An image sensor, however, lacks this color constancy, so images it captures under different illuminants show color shifts caused by the light source. For example, an image captured under a clear sky may look bluish, while objects captured by candlelight take on a reddish cast. To eliminate the influence of the illuminant's color on the sensor's imaging, the auto white balance function simulates the color constancy of the human visual system.
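One classic auto-white-balance method is the gray-world algorithm; the text only describes AWB's goal, so the specific algorithm here is my example. It assumes the scene averages to gray and scales the R and B channels so their means match the green mean.

```python
# Gray-world auto white balance on a list of (R, G, B) pixels.
def gray_world_awb(pixels):
    n = len(pixels)
    avg_r = sum(p[0] for p in pixels) / n
    avg_g = sum(p[1] for p in pixels) / n
    avg_b = sum(p[2] for p in pixels) / n
    # Gains that pull the R and B averages onto the G average.
    r_gain, b_gain = avg_g / avg_r, avg_g / avg_b
    return [(r * r_gain, g, b * b_gain) for r, g, b in pixels]

# A reddish cast (e.g. candlelight): the R channel is uniformly too high.
warm = [(0.8, 0.4, 0.2), (0.4, 0.2, 0.1)]
balanced = gray_world_awb(warm)
print([tuple(round(c, 2) for c in p) for p in balanced])
```

After balancing, both pixels become neutral grays, removing the candlelight cast exactly as the section describes.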

 

13 color saturation

Saturation refers to the purity of a color: the higher the purity, the more vivid the color appears; the lower the purity, the duller it appears.

 

14 3A Technology

3A technology refers to autofocus (AF), auto exposure (AE), and auto white balance (AWB).

The contrast-detect AF algorithm moves the lens to maximize the contrast of the acquired image.

The AE algorithm automatically sets the exposure value according to the available lighting conditions.

The AWB algorithm adjusts the color fidelity of images according to the lighting conditions.
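The contrast-detect AF loop described above can be sketched as a search over lens positions for peak contrast. The contrast curve here is a made-up stand-in for measuring contrast on real images.

```python
# Contrast-detect AF sketch: sweep lens positions, keep the contrast peak.

def image_contrast(lens_pos, in_focus_at=37):
    # Toy model: contrast peaks when the lens sits at the in-focus position.
    return 1.0 / (1 + abs(lens_pos - in_focus_at))

def contrast_autofocus(positions):
    return max(positions, key=image_contrast)

best = contrast_autofocus(range(0, 100))
print(best)
```

Real implementations sweep coarsely first and refine near the peak, since measuring contrast at every position would be slow.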

 

15 Image Edge Detection

Edge information lies mainly in the high-frequency band, so edge detection is also called image sharpening; in essence it is high-frequency filtering. Differentiation computes a signal's rate of change, which strengthens high-frequency components. In spatial-domain terms, sharpening an image means computing derivatives. Since a digital image is a discrete signal, differentiation becomes computing differences or gradients. Image processing offers several edge-detection (gradient) operators, including the ordinary first-order difference, the Roberts operator (cross difference), the Sobel operator, and others, all based on finding gradient magnitude. The Laplacian (second-order derivative) is instead based on zero-crossing detection. The edge image is obtained by computing the gradient and applying a threshold.

First-order differential edge operators include the classic Roberts, Prewitt, Sobel, and Canny operators.

Second-order differential edge operators include the LoG (Laplacian of Gaussian) edge detector.

 

Sobel operator: mainly used for edge detection. Technically it is a discrete difference operator that computes an approximation of the gradient of the image intensity function. Its drawback is that it does not strictly separate the image's subject from the background; in other words, the Sobel operator does not process the image based on region grayscale, and since it does not closely model the physiological characteristics of human vision, the extracted contours are sometimes unsatisfactory.

Canny operator: this operator performs better than the ones above, but is more cumbersome to implement. Canny is a multi-stage operator with filtering, enhancement, and detection stages. Before detection, it smooths the image with a Gaussian filter to remove noise; it then uses finite differences of first-order partial derivatives to compute the gradient magnitude and direction.
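The discrete-difference idea behind the Sobel operator can be shown at a single pixel using the standard 3x3 Sobel kernels; the |gx| + |gy| magnitude is a common cheap approximation of the true Euclidean gradient magnitude.

```python
# Standard 3x3 Sobel kernels for the horizontal and vertical gradients.
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_at(img, r, c):
    # Convolve both kernels with the 3x3 neighbourhood of (r, c).
    gx = sum(GX[i][j] * img[r - 1 + i][c - 1 + j] for i in range(3) for j in range(3))
    gy = sum(GY[i][j] * img[r - 1 + i][c - 1 + j] for i in range(3) for j in range(3))
    return abs(gx) + abs(gy)   # threshold this value to decide "edge or not"

# A vertical step edge: dark left half, bright right half.
img = [[0, 0, 9, 9]] * 3
print(sobel_at(img, 1, 1), sobel_at(img, 1, 2))
```

Both pixels adjacent to the step respond strongly, while flat regions would give zero, which is exactly the gradient-thresholding scheme the section describes.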


Origin www.cnblogs.com/libai123456/p/12099520.html