Image Processing for Embedded Devices <5>

2.2 Sensor System

                                          

Figure 2.2 Exemplary pixel structure

 

Figure 2.2 shows the schematic of a classical pixel sensor structure. All the constituent units can be seen: the microlens, the electronic parts, and the photosensitive area. The next sections analyze these cells, describing their advantages and disadvantages and their effect on the final image.

2.2.1 Microlens

 

Figure 2.3 Compensation strategies for different pixel fill factors: (a) a high fill factor pixel captures almost all of the incident light; (b) a low fill factor pixel uses a microlens to converge the incident light onto the effective region

 

Light sensitivity is one of the most important characteristics of an image sensor. It depends primarily on the design of the photosensitive area (Figure 2.3) and, for the same design and process technology, on the size of the sensitive region (Figure 2.2). The fill factor is the ratio between the photosensitive area and the total pixel area (typically in the 30%-100% range). When the fill factor is low, sensitivity can be improved by using a microlens. An individual microlens is placed on the lens surface above each pixel to converge the light onto the photosensitive area. Microlenses can raise the effective fill factor (the ratio between the effective refractive region and the whole region occupied by the lens array) to about 70%, significantly improving sensitivity (but not the charge capacity). Figure 2.3 shows the difference between pixels with and without a microlens.
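To make these fill-factor numbers concrete, here is a minimal Python sketch (not from the original text) that computes the geometric fill factor of a hypothetical 2.2 um pixel and the effective fill factor after an assumed microlens gain; the dimensions and the gain factor are illustrative values only.

```python
def fill_factor(photosensitive_area_um2, pixel_area_um2):
    """Geometric fill factor: photosensitive area / total pixel area."""
    return photosensitive_area_um2 / pixel_area_um2

# Hypothetical 2.2 um pixel whose photodiode covers only part of the cell.
pixel_pitch = 2.2                     # um (assumed)
pixel_area = pixel_pitch ** 2         # 4.84 um^2
photodiode_area = 1.6                 # um^2 (assumed)

ff = fill_factor(photodiode_area, pixel_area)      # ~0.33: a low fill factor

# A microlens funnels light from most of the cell onto the photodiode,
# raising the *effective* fill factor toward the ~70% quoted above.
microlens_gain = 2.1                  # assumed optical gain of the microlens
effective_ff = min(ff * microlens_gain, 1.0)

print(f"geometric fill factor: {ff:.2f}")
print(f"effective fill factor: {effective_ff:.2f}")
```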

2.2.2 Microlens and Aberrations

Crosstalk

 

Figure 2.4 Different types of crosstalk: (a) the incident light is captured correctly by the detector, with no crosstalk; (b) optical crosstalk: part of the light is captured by an adjacent detector instead of the one below its filter; in the illustrated case, green light is collected by the red-light collector; (c) electronic crosstalk: part of the photons scatter in the silicon layer and are collected by a different type of detector; in the illustrated case, red photons are collected through the silicon layer by the adjacent green detector.

Figure 2.4(a) shows the charge collected by a single pixel: photons arriving at any angle are filtered by a filter unit of the CFA and accumulate in the photodetector below that color filter unit. Optical crosstalk occurs when photons pass at an angle through a CFA filter unit and reach the photodetector (photodiode) of an adjacent pixel rather than the one below that filter element. This contaminates the charge packet of the adjacent pixel, as in Figure 2.4(b). There is also electronic crosstalk, which occurs when, for example, red photons pass through the filter and penetrate the silicon layer before electrons are generated. This leads to inconsistent responses between the different colors; Figure 2.4(c) illustrates the loss of charge to the substrate and the drift of electrons to the wrong pixel.
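Crosstalk of both kinds is often approximated as a linear mixing of the true per-channel signals. The sketch below is an illustration under that assumption, not something taken from the text: a hypothetical mixing matrix is applied to one R/G/B triple, and the contamination is partially undone when the matrix is known from calibration.

```python
import numpy as np

true_rgb = np.array([0.80, 0.40, 0.10])      # ideal detector responses

# Rows = measured channel, columns = origin of the charge.
# Off-diagonal entries stand for optical/electronic leakage; values are illustrative.
crosstalk = np.array([
    [0.92, 0.05, 0.03],   # measured R: mostly R, plus a little G and B leakage
    [0.06, 0.90, 0.04],   # measured G picks up some red charge (cf. Fig. 2.4(c))
    [0.02, 0.05, 0.93],   # measured B
])

measured_rgb = crosstalk @ true_rgb
print("true:     ", true_rgb)
print("measured: ", measured_rgb)

# If the mixing matrix is known (e.g. from calibration), the contamination
# can be partially undone by inverting it.
recovered = np.linalg.solve(crosstalk, measured_rgb)
print("recovered:", recovered)
```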

 

Chromatic Aberration

 

Figure 2.5 Detail of an image affected by chromatic aberration

 

Figure 2.6 Longitudinal and lateral chromatic aberration

 

Chromatic aberration is the term used when an imaging system places incorrect colors in an image, or at least in some regions of it. Chromatic aberration is caused by the lens having a different refractive index for different wavelengths of light. This means that the focal length differs for each color channel, so chromatic aberration produces visible color fringes and color blur. Figure 2.5 is an example of chromatic aberration. The aberration is typically spread over the whole image, but it is most noticeable at high-contrast edges. Chromatic aberration is generally divided into two types (see Figure 2.6):

- longitudinal chromatic aberration

- lateral chromatic aberration (also called magnification chromatic aberration)

 

Figure 2.7 Distortion of the R and B channels

 

Longitudinal chromatic aberration arises because different wavelengths are defocused on the image plane; lateral chromatic aberration arises from a shift of the focal point. The former causes blur distortion, the latter a shift distortion of the color channels. Image processing setups usually focus the image plane with respect to the green channel, as in Figure 2.7, because the green component contributes most to luminance. The green channel therefore shows less chromatic aberration than the other two color channels.

There are two main approaches to correcting chromatic aberration:

- optically improved lenses;

- signal processing of the captured image.

The first approach is usually expensive and difficult to realize. Signal processing algorithms, on the other hand, can correct this defect in an inexpensive and effective way.

 

Figure 2.8 Checkerboard pattern used for calibration (left) and for testing (right)

 

Measuring magnification chromatic aberration on a standard pattern, such as a checkerboard, is the simplest technique. Figure 2.8 (left) shows an example pattern used for calibration. A second, lower-density checkerboard pattern (the test chart), Figure 2.8 (right), is used for independent verification of the results. In particular, as discussed in (J. Mallon and P. F. Whelan, "Calibration and removal of lateral chromatic aberration in images," Pattern Recognition Letters, vol. 28, no. 1, pp. 125-135, 2007), reference data for the aberration-free standard pattern are prepared in advance; the standard pattern is then actually photographed, and the color shift is detected by comparing the captured data with the reference data.
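As a rough illustration of this calibration idea, the following sketch synthesizes a checkerboard, simulates a lateral shift of the red channel, and recovers the shift with a brute-force integer search against the green reference. It is only a toy version under stated assumptions; the method of Mallon and Whelan works with sub-pixel accuracy and a spatially varying aberration model.

```python
import numpy as np

def make_checkerboard(h, w, square=16):
    """Synthetic black/white checkerboard used as the calibration target."""
    y, x = np.mgrid[0:h, 0:w]
    return (((y // square) + (x // square)) % 2).astype(float)

def estimate_shift(ref, moving, max_shift=4):
    """Brute-force integer (dy, dx) that best aligns `moving` to `ref`."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            err = np.mean((ref - shifted) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

green = make_checkerboard(128, 128)            # reference channel
red = np.roll(green, (1, 2), axis=(0, 1))      # simulated lateral shift of R

# The estimated correction undoes the simulated displacement: (-1, -2).
print("R-channel alignment shift:", estimate_shift(green, red))
```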

Conventional image processing algorithms for correcting chromatic aberration usually use additional information about the optical system to compute the color shift caused by magnification chromatic aberration. The method proposed in (E. C. Haseltine and W. G. Redmann, "Electronic and computational correction of chromatic aberration associated with an optical system used to view a color video display," U.S. Patent 5369450, November 1994) corrects longitudinal chromatic aberration by introducing different geometric distortions into the color components of the target image. These distortions largely cancel the differences in image height between the red, green, and blue components of the actual image formed by the lens unit. This is done by determining one or more color-specific distortion functions (taking into account the different longitudinal magnification parameters of the lens for the red, green, and blue planes), which act on the geometric characteristics of one or more color components. Different distortion functions may be applied to the red and blue components to bring them into agreement with the green component.
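The per-channel warping idea can be illustrated with a simple radial rescaling about the optical center, applied to the red and blue planes so that they match the green reference. This is only a sketch of the general principle, not the patented method; the scale factors below are hypothetical calibration outputs.

```python
import numpy as np

def radial_rescale(channel, scale):
    """Rescale one channel about the image centre by `scale`,
    using nearest-neighbour sampling (sufficient for a sketch)."""
    h, w = channel.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    y, x = np.mgrid[0:h, 0:w]
    # Sample from source positions scaled about the centre.
    src_y = np.clip(np.round(cy + (y - cy) / scale), 0, h - 1).astype(int)
    src_x = np.clip(np.round(cx + (x - cx) / scale), 0, w - 1).astype(int)
    return channel[src_y, src_x]

def correct_lateral_ca(rgb, scale_r=1.003, scale_b=0.997):
    """Warp R and B toward the G reference; scale factors are assumed."""
    out = rgb.copy()
    out[..., 0] = radial_rescale(rgb[..., 0], scale_r)
    out[..., 2] = radial_rescale(rgb[..., 2], scale_b)
    return out

rgb = np.random.rand(256, 256, 3)          # stand-in for a captured image
print(correct_lateral_ca(rgb).shape)
```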

 

Purple Fringing

 

Figure 2.9 Detail of an image affected by purple fringing

 

Although under certain conditions chromatic aberration can also appear purple, "purple fringing" usually refers to a typical imaging device phenomenon caused by the microlenses. Purple fringing can be regarded as "chromatic aberration at the microlens level." It is therefore visible to the naked eye and present throughout every frame. Figure 2.9 shows an image affected by purple fringing. It is most noticeable at high-contrast edges, especially in backlit conditions. Blooming (highlight overflow) makes purple fringing even more visible. In fact, imaging devices usually meter the scene's exposure time so that the brightest regions accumulate as much charge as possible without overflowing the potential wells, obtaining the highest dynamic range with a response directly proportional to the number of photons striking the photodiode. This exposure control normally works well, but the scene's exposure metering is based on the average brightness of certain regions. If the scene contains small areas that are far brighter than the scene average, these bright areas deliver an extremely large number of photons, which the sensor cannot handle gracefully. The photons generate charge in the sensor and, once enough has accumulated, the charge begins to leak out of the pixel's potential well into the surrounding pixels (the blooming effect).

When charge leaks from one photodiode's potential well into the surrounding photodiode wells, the result is a spurious, larger signal in the surrounding area. This false signal is particularly noticeable if the surrounding pixels should have produced no signal at all because those parts of the scene are dark. The blooming effect is most visible where there is a sharp transition from bright to dark. We already know that lens aberrations cause the red and blue components of bright white light to land at incorrect sensor positions. Charge leakage amplifies this phenomenon by spreading the sensor's response charge farther from the correct position, and demosaicing further reinforces the positional error.
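A minimal numerical sketch of the blooming mechanism follows; all numbers are illustrative assumptions. Photon counts are accumulated into potential wells of fixed capacity, and any excess charge spills into the neighboring wells, so pixels next to a very bright pixel end up reading falsely high.

```python
import numpy as np

def accumulate_with_blooming(photons, well_capacity=1000, spill_fraction=0.5):
    """1-D row of pixels; excess charge is split between the two neighbours."""
    charge = photons.astype(float)
    for _ in range(10):                          # iterate until the spills settle
        excess = np.maximum(charge - well_capacity, 0.0)
        if not excess.any():
            break
        charge -= excess
        spill = spill_fraction * excess
        charge[:-1] += spill[1:]                 # spill to the left neighbour
        charge[1:]  += spill[:-1]                # spill to the right neighbour
    return np.minimum(charge, well_capacity)     # clip any residual excess

row = np.array([50, 60, 3000, 70, 40])           # one very bright pixel
print(accumulate_with_blooming(row))             # neighbours now read falsely high
```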

 

Figure 2.10 Location of purple in the CMY color space

 

Most algorithms for correcting purple fringing build a binary map that distinguishes overexposed regions from non-overexposed regions. Purple regions close to the overexposed regions are detected and their saturation is reduced. Some techniques take the saturation of the surrounding pixels into account to avoid reducing saturation too much and to preserve image quality. Other techniques determine whether purple fringing is present by detecting the magenta and cyan color ranges in the CrCb domain, as in Figure 2.10.
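This generic strategy can be sketched as follows, with assumed thresholds, an assumed neighborhood radius, and a simplified Cr/Cb test for purple-like chroma: build the overexposure map, flag nearby pixels whose chroma falls in the suspect range, and pull those pixels toward their own gray level.

```python
import numpy as np

def rgb_to_crcb(rgb):
    """BT.601 chroma components for rgb in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return (r - y) * 0.713, (b - y) * 0.564           # Cr, Cb

def suppress_purple_fringe(rgb, overexp_thr=0.98, radius=3, strength=0.7):
    cr, cb = rgb_to_crcb(rgb)

    # (1) binary map of overexposed pixels, dilated by a cheap max filter.
    overexposed = rgb.max(axis=-1) >= overexp_thr
    near_over = np.zeros_like(overexposed)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            near_over |= np.roll(np.roll(overexposed, dy, 0), dx, 1)

    # (2) purple/magenta-like chroma: positive Cr and positive Cb (assumed test).
    fringe = near_over & ~overexposed & (cr > 0.05) & (cb > 0.05)

    # (3) reduce saturation of the flagged pixels toward their grey level.
    grey = rgb.mean(axis=-1, keepdims=True)
    out = rgb.copy()
    out[fringe] = (1 - strength) * rgb[fringe] + strength * grey[fringe]
    return out

print(suppress_purple_fringe(np.random.rand(64, 64, 3)).shape)
```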

An interesting technique for removing purple fringing is described in (T. Masuno, M. Ohki, and R. Yamada, "Image processing apparatus and image processing method as well as computer program," U.S. Patent 7529405, May 2009). It assumes that purple fringing appears mainly around overexposed, very bright regions, computes a false-color degree for each pixel, and applies an appropriate correction according to that degree. Pixels that satisfy the following conditions:

- there are overexposed pixels in the pixel's neighborhood;

- the pixel appears purple;

- the pixel has high saturation;

are identified as purple-fringing pixels. The correction is performed by averaging the surrounding pixels that do not show false color.
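A minimal sketch of this three-condition scheme is given below; the thresholds, window sizes, and the purple test are assumptions for illustration, not the actual parameters of the patent.

```python
import numpy as np

def correct_false_color(rgb, overexp_thr=0.98, sat_thr=0.3, search=2, window=2):
    h, w, _ = rgb.shape
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

    overexposed = rgb.max(axis=-1) >= overexp_thr
    saturation = rgb.max(axis=-1) - rgb.min(axis=-1)     # crude saturation measure
    purple = (r > g) & (b > g)                           # red and blue above green

    # Condition 1: an overexposed pixel within `search` pixels.
    near_over = np.zeros_like(overexposed)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            near_over |= np.roll(np.roll(overexposed, dy, 0), dx, 1)

    # Conditions 1-3 together flag the purple-fringe pixels.
    fringe = near_over & purple & (saturation > sat_thr) & ~overexposed

    out = rgb.copy()
    ys, xs = np.nonzero(fringe)
    for y, x in zip(ys, xs):
        y0, y1 = max(0, y - window), min(h, y + window + 1)
        x0, x1 = max(0, x - window), min(w, x + window + 1)
        clean = ~fringe[y0:y1, x0:x1]
        if clean.any():                                  # average unflagged neighbours
            out[y, x] = rgb[y0:y1, x0:x1][clean].mean(axis=0)
    return out

print(correct_false_color(np.random.rand(64, 64, 3)).shape)
```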
