Digital Image Processing (13) Image Enlargement and Bilinear Interpolation Algorithm

preface

Image scaling (reduction and enlargement) involves two algorithms that come up often in everyday work. Here we discuss the enlargement process and how the bilinear interpolation algorithm is applied when zooming in.
The international standard test image Lena is used. For convenience, we convert the imported color image into a grayscale image before scaling.

image enlargement

Different from image reduction, image enlargement is a process from a small data volume to a large data volume, so many unknown values must be estimated.
If an image of size $W \times H$ needs to be enlarged by $k_1 \times k_2$ (i.e., rows magnified $k_1$ times, columns magnified $k_2$ times), the enlarged image size is $\mathrm{int}(H \cdot k_1) \times \mathrm{int}(W \cdot k_2)$. The reason for taking int here is that the product may be a decimal.
The specific process is shown in the figure below:
Suppose we have a 4×4 image in which every pixel value is 1.
[Figure: a 4×4 image, every pixel value equal to 1]

Its coordinate matrix looks like this:
[Figure: the 4×4 coordinate matrix, (0,0) through (3,3)]

Now we enlarge the rows and columns of the original image (called f below) by 1.5 times ($k_1 = 1.5$, $k_2 = 1.5$). The enlarged image (called g below) has width $= 4 \times 1.5 = 6$ and height $= 4 \times 1.5 = 6$. Next we need to work out which coordinate in f each coordinate in g maps to, as shown in the figure below:
[Figure: mapping coordinates in g back to coordinates in f]
We calculate that the point (0,1) in g should correspond to the point (0,0.67) in f, but no such coordinate exists in f, so we use linear interpolation to estimate the pixel value at (0,0.67).
If you are not familiar with the linear interpolation method, it is recommended to review it first.
The general formula for linear interpolation is as follows:
$\frac{f(x_2)-f(x_1)}{x_2-x_1}=\frac{f(x)-f(x_1)}{x-x_1}$
$f(x)=\frac{x_2-x}{x_2-x_1}f(x_1)+\frac{x-x_1}{x_2-x_1}f(x_2)$
The second formula above is the weight formula.
So $f(0,0.67)=\frac{1-0.67}{1-0}f(0,0)+\frac{0.67-0}{1-0}f(0,1)=1$.
Let's give another example:
[Figure: a second mapping example from g to f]
Notice that the calculated x is 1.3, between 1 and 2, and y is 2.67, between 2 and 3. So we need to compute the pixel value at coordinate (1.3, 2.67) by bilinear interpolation. The bilinear interpolation algorithm is shown in the figure below.
[Figure: bilinear interpolation diagram with corner points Q11, Q12, Q21, Q22, intermediate points R1, R2, and target point P]
Specifically, perform linear interpolation on $Q_{11}$ and $Q_{21}$ to obtain $R_1$, then on $Q_{12}$ and $Q_{22}$ to obtain $R_2$, and finally interpolate between $R_1$ and $R_2$ to obtain the value at the point $P$ we want.
$f(1.3,2)=(2-1.3)f(1,2)+(1.3-1)f(2,2)=1$
$f(1.3,3)=(2-1.3)f(1,3)+(1.3-1)f(2,3)=1$
Then we linearly interpolate between (1.3,2) and (1.3,3):
$f(1.3,2.67)=(3-2.67)f(1.3,2)+(2.67-2)f(1.3,3)=1$
So the pixel value at (1.3, 2.67) computed by bilinear interpolation is 1.

In summary, if the mapped point in f lies exactly on a grid line (so that $x_0 = x_1$ or $y_0 = y_1$), use single linear interpolation; otherwise use bilinear interpolation.
The C++ code is as follows:

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat image = cv::imread("LenaRGB.bmp");
    // Use a grayscale image for easier calculation
    cv::Mat grayImage;
    cv::cvtColor(image, grayImage, cv::COLOR_BGR2GRAY);

    int width = grayImage.cols;
    int height = grayImage.rows;

    // Scale factors: k1 for rows, k2 for columns
    double k1 = 1.4;
    double k2 = 1.7;
    // Create a zero Mat to hold the output
    cv::Mat outImage = cv::Mat::zeros(cvRound(height * k1), cvRound(width * k2), CV_64FC1);

    for (int row = 0; row < outImage.rows; row++)
    {
        for (int col = 0; col < outImage.cols; col++)
        {
            // Map the enlarged-image coordinate back to the source image
            double srcX = row / k1;
            double srcY = col / k2;
            if (srcX > grayImage.rows - 1)
                srcX = grayImage.rows - 1;
            if (srcY > grayImage.cols - 1)
                srcY = grayImage.cols - 1;

            // The four neighbouring grid coordinates
            int x0 = (int)floor(srcX);
            int x1 = (int)ceil(srcX);
            int y0 = (int)floor(srcY);
            int y1 = (int)ceil(srcY);

            // Exactly on a grid point: copy the pixel directly
            if (x0 == x1 && y0 == y1)
            {
                outImage.at<double>(row, col) = (double)grayImage.at<uchar>(x0, y0);
                continue;
            }
            // On a grid line: single linear interpolation
            if (x0 == x1)
            {
                outImage.at<double>(row, col) =
                    (y1 - srcY) * grayImage.at<uchar>(x0, y0) +
                    (srcY - y0) * grayImage.at<uchar>(x0, y1);
                continue;
            }
            if (y0 == y1)
            {
                outImage.at<double>(row, col) =
                    (x1 - srcX) * grayImage.at<uchar>(x0, y0) +
                    (srcX - x0) * grayImage.at<uchar>(x1, y0);
                continue;
            }
            // Interior point: bilinear interpolation
            double temp1 = (y1 - srcY) * grayImage.at<uchar>(x0, y0) +
                           (srcY - y0) * grayImage.at<uchar>(x0, y1);
            double temp2 = (y1 - srcY) * grayImage.at<uchar>(x1, y0) +
                           (srcY - y0) * grayImage.at<uchar>(x1, y1);
            outImage.at<double>(row, col) = (x1 - srcX) * temp1 + (srcX - x0) * temp2;
        }
    }
    // Convert CV_64FC1 to CV_8UC1
    outImage.convertTo(outImage, CV_8UC1);
    return 0;
}

Origin blog.csdn.net/qq_41596730/article/details/127498792