Digital Image Processing (Extra): Image Enhancement

image enhancement

Image enhancement adds information to, or transforms the data of, the original image by certain means, selectively highlighting the features of interest or suppressing (masking) unwanted features, so that the image better matches the response characteristics of human vision.

image contrast

The image contrast is calculated as follows:
$C=\displaystyle\sum_{\delta}\delta(i,j)^2\,P_\delta(i,j)$

where $\delta(i,j)=\lvert i-j\rvert$ is the gray-level difference between adjacent pixels, and $P_\delta(i,j)$ is the distribution probability of adjacent-pixel pairs whose gray-level difference is $\delta$. The adjacency can be a four-neighborhood or an eight-neighborhood. The specific process is as follows:
the original image is:
$L=\begin{bmatrix} 1 & 3 & 5\\ 2 & 1 & 3\\ 3 & 6 & 0 \end{bmatrix}$
Calculated over the four-neighborhood, scanning the pixels in row-major order, the contrast is

$C_L=\lbrack(1^2+2^2)+(2^2+2^2+2^2)+(2^2+2^2)+(1^2+1^2+1^2)+(2^2+2^2+1^2+5^2)+(2^2+2^2+3^2)+(1^2+3^2)+(5^2+3^2+6^2)+(3^2+6^2)\rbrack/24=8.5$

where 24 is the number of squared differences: each adjacent pair is counted once in each direction.
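The computation above can be sketched without OpenCV as follows; `fourNeighborContrast` is a hypothetical helper name, and the image is taken as a plain integer matrix. Each adjacent pair contributes one squared difference per direction, and the sum is divided by the total number of differences:

```cpp
#include <cstdlib>
#include <vector>

// Sum of squared gray-level differences over every 4-neighbor pair
// (each pair counted once per direction), divided by the number of
// differences. For the 3x3 example above this yields 204/24 = 8.5.
double fourNeighborContrast(const std::vector<std::vector<int>>& img)
{
    int rows = static_cast<int>(img.size());
    int cols = static_cast<int>(img[0].size());
    double sum = 0.0;
    int count = 0;
    const int dr[4] = {-1, 1, 0, 0};  // up, down, left, right
    const int dc[4] = {0, 0, -1, 1};
    for (int r = 0; r < rows; r++)
        for (int c = 0; c < cols; c++)
            for (int k = 0; k < 4; k++)
            {
                int nr = r + dr[k], nc = c + dc[k];
                if (nr < 0 || nr >= rows || nc < 0 || nc >= cols)
                    continue;  // neighbor falls outside the image
                int d = std::abs(img[r][c] - img[nr][nc]);
                sum += static_cast<double>(d) * d;
                count++;
            }
    return sum / count;
}
```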

1. Contrast widening

The purpose of contrast widening is to improve image quality by enhancing the contrast between light and dark regions, making the displayed image clearer.

Linear Contrast Stretching

Linear stretching expands the contrast of the important information by compressing the contrast of the non-important information, making room for the expansion.
Let the gray level of the original image be $f(i,j)$ and that of the processed image be $g(i,j)$. Suppose the gray levels of the important scene in the original image lie in $\lbrack f_a,f_b\rbrack$; the goal of linear contrast stretching is to map them to $\lbrack g_a,g_b\rbrack$ in the processed image. When $\Delta f=(f_b-f_a)<\Delta g=(g_b-g_a)$, the contrast of the important scene is widened.
It is calculated as follows:
$g(i,j)=\begin{cases} \alpha f(i,j) & 0\le f(i,j)<f_a \\ \beta(f(i,j)-f_a)+g_a & f_a\le f(i,j)<f_b \\ \gamma(f(i,j)-f_b)+g_b & f_b\le f(i,j)\le 255 \end{cases}$
where $\alpha=\frac{g_a}{f_a},\ \beta=\frac{g_b-g_a}{f_b-f_a},\ \gamma=\frac{255-g_b}{255-f_b}$.
The C++ code is as follows:

    cv::Mat image = cv::imread("Lena.bmp");
    cv::Mat grayImage(image.size(), CV_8UC1);
    cv::Mat dstImage(grayImage.size(), CV_8UC1);
    cv::cvtColor(image, grayImage, cv::COLOR_BGR2GRAY);
    // Gray range [fa, fb] of the important scene, stretched to [ga, gb]
    int fa = 50, fb = 100;
    float ga = 30, gb = 120;
    for (int row = 0; row < grayImage.rows; row++)
    {
        uchar *currentData = grayImage.ptr<uchar>(row);
        for (int col = 0; col < grayImage.cols; col++)
        {
            uchar f = *(currentData + col);
            if (f < fa)
                dstImage.at<uchar>(row, col) = uchar(ga / fa * f);
            else if (f < fb)
                dstImage.at<uchar>(row, col) = uchar((gb - ga) / (fb - fa) * (f - fa) + ga);
            else  // fb <= f <= 255
                dstImage.at<uchar>(row, col) = uchar((255 - gb) / (255 - fb) * (f - fb) + gb);
        }
    }

The result is as follows:

Nonlinear Contrast Stretching

Through a smooth mapping curve, the grayscale change of the processed image is relatively smooth. The calculation formula is as follows:
$g(i,j)=c\cdot\lg(1+f(i,j))$

This in effect compresses the high-brightness regions and expands the low-brightness regions.
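Since the mapping depends only on the gray value, it can be precomputed as a lookup table. A minimal sketch, assuming the common choice $c=255/\lg 256$ so that the output stays in $[0,255]$ (this scale factor is an assumption, not part of the formula above); `logTransformLUT` is a hypothetical helper name:

```cpp
#include <array>
#include <cmath>

// Builds a 256-entry lookup table for g = c * lg(1 + f).
// With c = 255 / lg(256), input 0 maps to 0 and input 255 maps to 255;
// low gray levels are stretched, high gray levels are compressed.
std::array<unsigned char, 256> logTransformLUT()
{
    const double c = 255.0 / std::log10(256.0);
    std::array<unsigned char, 256> lut{};
    for (int f = 0; f < 256; f++)
        lut[f] = static_cast<unsigned char>(c * std::log10(1.0 + f) + 0.5);  // round
    return lut;
}
```

Applying the table to every pixel of an 8-bit grayscale image then gives the log-transformed result; note how a dark input such as gray level 10 already lands above the middle of the output range.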

2. Histogram equalization

There is such a conclusion in information theory: when the distribution of data is close to uniform distribution, the amount of information (entropy) carried by the data is the largest.
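This conclusion is easy to check numerically with the Shannon entropy; `entropyBits` below is an illustrative helper, not part of the equalization algorithm itself:

```cpp
#include <cmath>
#include <vector>

// Shannon entropy (in bits) of a discrete probability distribution.
// A uniform distribution over n outcomes attains the maximum, log2(n);
// histogram equalization pushes the gray distribution toward uniform.
double entropyBits(const std::vector<double>& p)
{
    double h = 0.0;
    for (double pi : p)
        if (pi > 0.0)            // 0 * log 0 is taken as 0
            h -= pi * std::log2(pi);
    return h;
}
```

For four gray levels, the uniform distribution {0.25, 0.25, 0.25, 0.25} gives 2 bits, while a skewed distribution such as {0.7, 0.1, 0.1, 0.1} gives strictly less.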
The basic principle of histogram equalization is to stretch the gray levels that contain many pixels (that is, the gray values that dominate the picture) and to merge the gray levels that contain few pixels (that is, the gray values that play a minor role), so that the histogram becomes closer to uniform.
The specific steps of the histogram equalization method are as follows:

  1. Compute the gray histogram of the original image $f(i,j)_{M\times N}$, expressed as a 256-dimensional vector $h_f$;
  2. From $h_f$, compute the gray distribution probability of the original image, denoted $p_f$: $p_f(i)=\frac{1}{N_f}\cdot h_f(i),\ i=0,1,\dots,255$,
    where $N_f=M\times N$ ($M,N$ being the height and width of the image) is the total number of pixels;
  3. Compute the cumulative distribution probability of each gray value, denoted $p_a$: $p_a(i)=\displaystyle\sum_{k=0}^{i}p_f(k),\ i=1,2,\dots,255$,
    where $p_a(0)$ is set to $0$;
  4. Perform the histogram equalization mapping to obtain the pixel values of the processed image: $g(i,j)=255\cdot p_a(f(i,j))$.

The C++ code looks like this:

    cv::Mat image = cv::imread("Lena.bmp");
    cv::Mat src(image.size(), CV_8UC1);
    // Convert to a grayscale image
    cv::cvtColor(image, src, cv::COLOR_BGR2GRAY);

    cv::Mat dst(image.size(), CV_8UC1);
    // Step 1: gray histogram h_f
    float hf[256] = { 0 };
    for (int row = 0; row < src.rows; row++)
    {
        uchar *currentData = src.ptr<uchar>(row);
        for (int col = 0; col < src.cols; col++)
        {
            hf[*(currentData + col)] += 1;
        }
    }
    // Step 2: gray distribution probability p_f
    float pf[256] = { 0 };
    for (int i = 0; i < 256; i++)
    {
        pf[i] = hf[i] / (src.rows * src.cols);
    }
    // Step 3: cumulative distribution probability p_a
    // (running sum; pa[0] is left at 0, as in step 3 above)
    float pa[256] = { 0 };
    float sumNumber = pf[0];
    for (int i = 1; i < 256; i++)
    {
        sumNumber += pf[i];
        pa[i] = sumNumber;
    }
    // Step 4: map each pixel to 255 * p_a(f(i,j))
    for (int row = 0; row < dst.rows; row++)
    {
        uchar *currentData = dst.ptr<uchar>(row);
        for (int col = 0; col < dst.cols; col++)
        {
            *(currentData + col) = uchar(255 * pa[src.at<uchar>(row, col)]);
        }
    }

The result display:


Origin blog.csdn.net/qq_41596730/article/details/126908505