Notes: Introduction to camera working principle and color space

The basic working principle of a camera:
The lens (LENS) projects an optical image onto the surface of the image sensor, which converts it into an analog electrical signal. After A/D (analog-to-digital) conversion this becomes a digital image signal, which is then sent to a digital signal processor (DSP) for processing; the result can be displayed on a monitor.
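The signal chain above can be sketched as a sequence of stages. This is only an illustrative model (each stage is a placeholder for what real optics, sensor silicon, and DSP firmware do), not a real camera API:

```python
def lens_project(scene):
    """LENS: focus the optical image onto the sensor surface (modeled as pass-through)."""
    return scene

def sensor_capture(optical_image):
    """Image sensor (CCD/CMOS): convert light into analog electrical levels."""
    return [float(p) for p in optical_image]

def adc(analog_levels, bits=8):
    """A/D conversion: quantize analog levels to digital codes, clamped to the bit depth."""
    max_code = (1 << bits) - 1
    return [min(max_code, max(0, round(v))) for v in analog_levels]

def dsp_process(digital_image):
    """DSP: post-process the digital image (identity here; real DSPs do much more)."""
    return digital_image

def camera_pipeline(scene):
    """Full chain: lens -> sensor -> A/D -> DSP, as described above."""
    return dsp_process(adc(sensor_capture(lens_project(scene))))
```

An over-range analog level (here 300 with 8-bit quantization) simply clips at the maximum code, 255.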
The basic structure of the camera has 3 main components: lens, image sensor, DSP.
Image sensors can be divided into two types: CCD (charge-coupled device) and CMOS (complementary metal-oxide-semiconductor).
CCD: high sensitivity, low noise, high signal-to-noise ratio; but the fabrication process is complex, cost and power consumption are high, so it is used less often in camera products.
CMOS: high integration, low power consumption, low cost; but it is more demanding of lighting conditions.
Basic concepts:
1. Resolution
UXGA denotes an output format with a resolution of 1600×1200. Similar formats include SXGA (1280×1024), XVGA (1280×960), WXGA (1280×800), XGA (1024×768), SVGA (800×600), VGA (640×480), CIF (352×288) and QQVGA (160×120), etc.
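The formats listed above are just named pixel dimensions, so they can be captured in a small lookup table. A sketch for comparing them:

```python
# Named output formats and their pixel dimensions, from the list above.
RESOLUTIONS = {
    "UXGA":  (1600, 1200),
    "SXGA":  (1280, 1024),
    "XVGA":  (1280, 960),
    "WXGA":  (1280, 800),
    "XGA":   (1024, 768),
    "SVGA":  (800, 600),
    "VGA":   (640, 480),
    "CIF":   (352, 288),
    "QQVGA": (160, 120),
}

def pixel_count(name):
    """Total pixels per frame for a named format."""
    w, h = RESOLUTIONS[name]
    return w * h
```

For example, UXGA carries 1600 × 1200 = 1,920,000 pixels per frame, about 25 times the data of QQVGA.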
2. SCCB protocol (Serial Camera Control Bus)
SCCB is an I2C-like protocol. It comes in two-wire and three-wire variants: the two-wire version uses SIO_C and SIO_D, while the three-wire version uses SIO_E, SIO_C and SIO_D. A two-wire SCCB bus allows one master to control only one slave device, whereas the three-wire interface can control multiple slave devices; so use two wires when there is a single slave device, and three wires when there are several.
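A typical SCCB register write on the two-wire variant is a three-phase cycle: slave address, register sub-address, data byte. This is an illustrative sketch only; the `bus` object and its `start`/`stop`/`write_byte` methods are hypothetical abstractions over whatever GPIO or I2C peripheral the platform provides, and 0x42 is just an example slave address:

```python
def sccb_write(dev_addr, reg_addr, value, bus):
    """3-phase SCCB write over SIO_C/SIO_D.

    bus is any object with start(), stop(), and write_byte(b) -> ack.
    """
    bus.start()
    ok = bus.write_byte(dev_addr & 0xFE)   # phase 1: slave ID, write bit = 0
    ok &= bus.write_byte(reg_addr & 0xFF)  # phase 2: register sub-address
    ok &= bus.write_byte(value & 0xFF)     # phase 3: data byte
    bus.stop()
    return bool(ok)

class RecordingBus:
    """Toy bus that records the transmission, for demonstration only."""
    def __init__(self):
        self.log = []
    def start(self):
        self.log.append("S")
    def stop(self):
        self.log.append("P")
    def write_byte(self, b):
        self.log.append(b)
        return True  # pretend the slave always acknowledges
```

Running `sccb_write(0x42, 0x12, 0x80, RecordingBus())` would record a start condition, the three bytes, and a stop condition.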
3. Camera data output type:
RAW RGB, RGB, YUV, YCbCr. (RGB, YUV and YCbCr are color models, or color spaces, defined by convention; YCbCr 4:2:0 denotes a chroma sampling format.)
RAW RGB: the image format output directly by the sensor without ISP (image signal processing); each pixel carries only one color sample.
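In RAW RGB output the single color per pixel typically follows a Bayer mosaic, and one job of the ISP is demosaicing: reconstructing full RGB pixels. A deliberately minimal sketch, assuming an RGGB pattern and collapsing each 2×2 block into one RGB triple (real ISPs interpolate per pixel):

```python
def demosaic_rggb(raw, width, height):
    """Collapse a flat RGGB Bayer mosaic into RGB triples, one per 2x2 block.

    raw is a row-major flat list of single-color samples.
    The two green samples in each block are averaged.
    """
    rgb = []
    for y in range(0, height, 2):
        row = []
        for x in range(0, width, 2):
            r  = raw[y * width + x]            # top-left: red
            g1 = raw[y * width + x + 1]        # top-right: green
            g2 = raw[(y + 1) * width + x]      # bottom-left: green
            b  = raw[(y + 1) * width + x + 1]  # bottom-right: blue
            row.append((r, (g1 + g2) // 2, b))
        rgb.append(row)
    return rgb
```

Note this halves the resolution in each dimension; it is only meant to show why one color sample per pixel is not yet a usable RGB image.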
RGB: RGB (red, green, blue) is a color space defined by the colors the human eye perceives, and it can represent most colors. However, the RGB color space is generally not used in scientific analysis because its components are difficult to adjust independently: hue, brightness, and saturation are entangled across the three channels and hard to separate. It is the most common hardware-oriented color model, used by color monitors and a large class of color video cameras.
YUV: In the YUV space, each color has a luminance signal Y and two chrominance signals U and V. The luminance signal conveys perceived intensity and is decoupled from the chrominance signals, so intensity can be changed without affecting the color.
YUV is derived from RGB: it first forms a black-and-white image from the full-color image, then encodes the color in two additional difference signals; combining the three signals reproduces the full-color image. The Y channel carries the luma signal, which differs slightly from true luminance and ranges from dark to light; luma is the signal a black-and-white TV displays. The U (Cb) and V (Cr) channels are color-difference signals, formed by subtracting the luma from the blue (U) and red (V) components, which reduces the amount of color information. These values can be recombined to recover the red, green, and blue mix.
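The derivation above can be written out directly. A sketch using the common BT.601 luma weights, with U and V as scaled blue- and red-difference signals:

```python
def rgb_to_yuv(r, g, b):
    """Convert an RGB sample to YUV (BT.601 luma weights)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luma: weighted sum of R, G, B
    u = 0.492 * (b - y)                    # U: scaled (blue - luma) difference
    v = 0.877 * (r - y)                    # V: scaled (red - luma) difference
    return y, u, v
```

A neutral gray has equal R, G and B, so both color-difference signals vanish: the image carries only luma, exactly the black-and-white picture described above.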
YCbCr: YCbCr is defined in the ITU-R BT.601 recommendation, developed during the standardization of digital video. It is essentially a scaled and offset version of YUV: Y has the same meaning as in YUV, and Cb and Cr likewise encode color, just represented differently. Within the YUV family, YCbCr is the member most widely used in computer systems, and its applications are very broad: both JPEG and MPEG use this format. In practice, "YUV" usually refers to YCbCr.
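The "scaled and offset" relationship can be made concrete. A sketch of the full-range BT.601 variant used by JPEG, where the chroma differences are scaled to fit 0..255 and offset by +128:

```python
def rgb_to_ycbcr(r, g, b):
    """Convert 8-bit RGB to full-range YCbCr (the JPEG/BT.601 form)."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b            # same luma as YUV
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b  # scaled (B - Y), offset +128
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b  # scaled (R - Y), offset +128
    return round(y), round(cb), round(cr)
```

With the +128 offset, neutral colors (black, gray, white) all map to Cb = Cr = 128, the center of the chroma range, which is what makes the unsigned 8-bit representation convenient for JPEG and MPEG.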

Origin: blog.csdn.net/abc101619/article/details/109369291