Mean filtering (with simple code)

1. Concept introduction
Mean filtering is a typical linear filtering algorithm. It replaces the value of the current pixel with the mean of the nxn pixel values around it. Traversing the image and applying this operation to every pixel completes the mean filtering of the entire image.

2. The basic principle
The basic principle is shown in Figure 2-1. When we mean-filter the pixel in row 5, column 5, we first need to decide how many surrounding pixels to average. Usually we take the current pixel as the center and average all pixels in a square area with the same number of rows and columns.
For example, we can average all the pixels in the 3x3 area around the current pixel, or average the pixel values of all the pixels in the surrounding 5x5 area.


Figure 2-1 Example of pixel values of an image

When the current pixel is located at row 5, column 5, we average the pixel values in the surrounding 5x5 area. The calculation is as follows:

new pixel value = [(197+25+106+156+159) + (149+40+107+5+71) + (163+198+226+223+156) + (222+37+68+193+157) + (42+72+250+41+75)] / 25 = 3138 / 25 ≈ 126


After the new value is calculated, we use it as the mean-filtered value of the current pixel. For each pixel in Figure 2-1, we compute the mean of the pixel values in its surrounding 5x5 area and use that mean as the pixel's new value, which gives the mean-filtered result for the whole image.
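
As a quick check of the arithmetic above, the following small sketch loads the 25 values of that 5x5 area from Figure 2-1 into a cv::Mat and averages them (this standalone snippet is only for illustration and is not part of the original program):

#include <opencv2/core.hpp>
#include <iostream>

int main()
{
    // The 25 pixel values of the 5x5 area around row 5, column 5 in Figure 2-1.
    uchar values[5][5] = {
        {197,  25, 106, 156, 159},
        {149,  40, 107,   5,  71},
        {163, 198, 226, 223, 156},
        {222,  37,  68, 193, 157},
        { 42,  72, 250,  41,  75}
    };
    cv::Mat area(5, 5, CV_8UC1, values);

    // Sum all 25 values and divide by 25: 3138 / 25 = 125.52, which rounds to 126.
    double mean = cv::sum(area)[0] / 25.0;
    std::cout << cvRound(mean) << std::endl;   // prints 126
    return 0;
}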


However, a full 5x5 neighborhood does not exist at the borders of the image. As shown in Figure 2-1, the pixel in the first row and first column of the upper-left corner has a value of 23. If we take it as the center of a 5x5 neighborhood, part of that neighborhood falls outside the image. Since there are no pixels (and thus no pixel values) outside the image, the 5x5 neighborhood mean of this point obviously cannot be computed directly.


Therefore, for border pixels, we can only average the pixel values of those neighborhood points that actually exist in the image. As shown in Figure 2-2, when computing the mean-filter result for the upper-left corner, only the pixel values in the 3x3 area with the gray background in the figure are averaged. The calculation is as follows:

new pixel value = [(23+158+140) + (238+0+67) + (199+197+25)] / 9 = 1047 / 9 ≈ 116


Figure 2-2 Processing of boundary points
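
A minimal sketch of this border handling for a single-channel 8-bit image is shown below: for every pixel, only the neighbors that fall inside the image are summed, so corner and edge pixels are averaged over fewer values. The function name and the radius parameter are illustrative, not from the original post; radius = 2 corresponds to a 5x5 window, and at the corner only its 3x3 in-image part survives, reproducing the 116 above.

#include <opencv2/core.hpp>

// Mean filter that skips neighbors lying outside the image, as in Figure 2-2.
cv::Mat meanFilterSkipOutside(const cv::Mat &src, int radius = 2)
{
    CV_Assert(src.type() == CV_8UC1);
    cv::Mat dst = src.clone();

    for (int i = 0; i < src.rows; ++i)
        for (int j = 0; j < src.cols; ++j) {
            int sum = 0, count = 0;
            for (int di = -radius; di <= radius; ++di)
                for (int dj = -radius; dj <= radius; ++dj) {
                    int r = i + di, c = j + dj;
                    if (r < 0 || r >= src.rows || c < 0 || c >= src.cols)
                        continue;                    // neighbor lies outside the image
                    sum += src.at<uchar>(r, c);
                    ++count;
                }
            dst.at<uchar>(i, j) = cv::saturate_cast<uchar>(sum / count);
        }
    return dst;
}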

In addition, we can also pad the current image with extra pixels around its border. For example, the current 9x7 image can be expanded into a 13x11 image, as shown in Figure 2-3.


Figure 2-3 Extended Edge


After expanding the edges of the image, we can fill the newly added rows and columns with different pixel values. On this basis, the 5x5 neighborhood mean can be computed for every pixel of the original 9x7 image. OpenCV provides a variety of border handling modes, and we can choose one according to actual needs.
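
For example, cv::copyMakeBorder can pad the image by two rows/columns on every side before filtering. The border mode used below (BORDER_REPLICATE) is just one of the modes OpenCV offers; BORDER_CONSTANT, BORDER_REFLECT and others work the same way (a small sketch, not code from the original post):

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Pad the image by 2 pixels on each side so every original pixel has a full
// 5x5 neighborhood, mean-filter the padded image, then crop back to the original size.
cv::Mat meanFilterWithPadding(const cv::Mat &src)
{
    cv::Mat padded, filtered;
    cv::copyMakeBorder(src, padded, 2, 2, 2, 2, cv::BORDER_REPLICATE);
    cv::blur(padded, filtered, cv::Size(5, 5));
    return filtered(cv::Rect(2, 2, src.cols, src.rows)).clone();
}

Note that cv::blur already applies its own border handling internally; the explicit padding here only makes the process of Figure 2-3 visible.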

For the pixel in row 5, column 5, this operation is equivalent to multiplying its 5x5 neighborhood element-wise by a 5x5 matrix whose entries are all 1/25 and summing the products, which gives the mean-filter result of 126, as shown in Figure 2-4.


Figure 2-4 Schematic diagram of the operation of mean filtering for pixels in row 5 and column 5

According to the above calculation, each pixel's 5x5 neighborhood is multiplied element-wise by a 5x5 matrix whose entries are all 1/25 to obtain its mean-filter result. The schematic diagram is shown in Figure 2-4.


Figure 2-4 Schematic diagram of the operation of mean filtering for each pixel

By generalizing the 5x5 matrix used, the results shown in Figure 2-5 below can be obtained.


Figure 2-5 Generalizing a matrix

In OpenCV, the matrix on the right side of Figure 2-5 is called a convolution kernel, and its general form is shown in Figure 2-6 below, where M and N correspond to the kernel's height and width respectively. Usually M and N are equal, and common sizes are 3x3, 5x5, and 7x7. The larger M and N are, the more pixels participate in the operation, and the more severe the image distortion becomes.


Figure 2-6 Convolution kernel
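
To make the connection explicit: applying cv::filter2D with a 5x5 kernel whose entries are all 1/25 gives the same result as OpenCV's built-in mean filter cv::blur with a 5x5 window (a small sketch, not code from the original post):

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Mean filtering expressed as a convolution with a normalized 5x5 kernel.
cv::Mat meanFilterByKernel(const cv::Mat &src)
{
    // 5x5 kernel with every entry equal to 1/25, as in Figure 2-6 with M = N = 5.
    cv::Mat kernel = cv::Mat::ones(5, 5, CV_32F) / 25.0f;

    cv::Mat dst;
    cv::filter2D(src, dst, -1, kernel);   // -1 keeps the source depth
    return dst;                           // equivalent to cv::blur(src, dst, cv::Size(5, 5))
}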

Applying mean filtering with convolution kernels of different sizes shows that the larger the kernel, the more obvious the distortion of the image.

The larger the convolution kernel, the more pixels participate in the mean operation, that is, the new value is the average of more pixel values. The denoising effect is better, but the computation takes longer and the image distortion becomes more severe. Therefore, in actual processing we need to strike a balance between denoising and distortion and choose a kernel of an appropriate size.
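
A quick way to see this trade-off is to run OpenCV's built-in mean filter cv::blur with several kernel sizes and compare the results side by side (the file name and window titles below are placeholders):

#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
#include <string>

int main()
{
    cv::Mat src = cv::imread("noisy.png");   // placeholder file name
    if (src.empty())
        return -1;

    // Larger kernels remove more noise but also blur the image more.
    int sizes[] = {3, 5, 15};
    for (int k : sizes) {
        cv::Mat dst;
        cv::blur(src, dst, cv::Size(k, k));
        cv::imshow("mean filter " + std::to_string(k) + "x" + std::to_string(k), dst);
    }
    cv::waitKey(0);
    return 0;
}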
 

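The code below is the post's own hand-written implementation: a Qt slot that applies a 3x3 mean filter to the noisy image m_noiseimg channel by channel and shows the result in a QLabel. The loops start at 1 and stop one row/column before the border, so border pixels simply keep the values copied by clone().
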
void MainWindow::on_averagefilter_clicked()
{
    cv::Mat FilterImg;
    QImage Qtemp2;

    // Start from a copy so that border pixels (which the loops skip) keep their original values.
    FilterImg = m_noiseimg.clone();

    // 3x3 mean filter: read from m_noiseimg, write to FilterImg, channel by channel.
    for(int i = 1 ; i < m_noiseimg.rows - 1 ; i++)
        for(int j = 1 ; j < m_noiseimg.cols - 1 ; j++){
            for(int k = 0 ; k < 3 ; k++){
                // Average the 9 values of the 3x3 neighborhood around (i, j).
                FilterImg.at<cv::Vec3b>(i,j)[k] = cv::saturate_cast<uchar>((m_noiseimg.at<cv::Vec3b>(i - 1,j - 1)[k] + m_noiseimg.at<cv::Vec3b>(i - 1,j)[k] + m_noiseimg.at<cv::Vec3b>(i - 1,j + 1)[k]
                                                                        + m_noiseimg.at<cv::Vec3b>(i,j - 1)[k] + m_noiseimg.at<cv::Vec3b>(i,j)[k] + m_noiseimg.at<cv::Vec3b>(i,j + 1)[k]
                                                                        + m_noiseimg.at<cv::Vec3b>(i + 1,j - 1)[k] + m_noiseimg.at<cv::Vec3b>(i + 1,j)[k] + m_noiseimg.at<cv::Vec3b>(i + 1,j + 1)[k]) / 9);
            }
        }

    // Wrap the result in a QImage and show it in the label.
    // Note: if m_noiseimg is stored in OpenCV's default BGR order, QImage::Format_BGR888
    // (Qt >= 5.14) or QImage::rgbSwapped() may be needed to display the colors correctly.
    Qtemp2 = QImage((const unsigned char*)(FilterImg.data), FilterImg.cols, FilterImg.rows, FilterImg.step, QImage::Format_RGB888);
    ui->Label3->setPixmap(QPixmap::fromImage(Qtemp2));
    Qtemp2 = Qtemp2.scaled(250, 250, Qt::KeepAspectRatio, Qt::SmoothTransformation);
    ui->Label3->setScaledContents(true);
    ui->Label3->resize(Qtemp2.size());
    ui->Label3->show();
}

Origin blog.csdn.net/cyy1104/article/details/130357891