OpenCV 31: Image Smoothing - Box Filtering with cv2.boxFilter()

Box Filtering is a simple image smoothing method. It is mainly used to remove noise and reduce details in images while maintaining the overall brightness distribution of the image.

The principle of box filtering is simple: for each pixel in the image, average the pixel values in a fixed-size neighborhood around it, and then assign this average to the current pixel. The neighborhood is usually a square, called a box or window. When normalized, box filtering is equivalent to filtering the image with a mean filter.

The difference from mean filtering is that box filtering does not necessarily compute the pixel mean.
In mean filtering, each output pixel is the average of its neighborhood, that is, the sum of the neighborhood pixel values divided by the neighborhood area.

In box filtering, you can freely choose whether to normalize the result: the output can be either the mean of the neighborhood pixel values or their sum.

Let's take a 5×5 neighborhood as an example. When performing box filtering, if the mean of the neighborhood pixel values is calculated, the filtering relationship is as shown in Figure 7-15.

[Figure 7-15: box filtering a 5×5 neighborhood with normalization (neighborhood mean)]

Still taking the 5×5 neighborhood as an example, if the sum of the neighborhood pixel values is calculated, the filtering relationship is as shown in Figure 7-16.
[Figure 7-16: box filtering a 5×5 neighborhood without normalization (neighborhood sum)]

According to the above relationship, if the mean of the neighborhood pixel values is calculated, the convolution kernel used is:

K = (1/25) ×
[ 1 1 1 1 1 ]
[ 1 1 1 1 1 ]
[ 1 1 1 1 1 ]
[ 1 1 1 1 1 ]
[ 1 1 1 1 1 ]
If the sum of the neighborhood pixel values is calculated, the convolution kernel used is:

K =
[ 1 1 1 1 1 ]
[ 1 1 1 1 1 ]
[ 1 1 1 1 1 ]
[ 1 1 1 1 1 ]
[ 1 1 1 1 1 ]
In OpenCV, the function that implements box filtering is cv2.boxFilter(), and its syntax format is:

dst = cv2.boxFilter( src, ddepth, ksize, anchor, normalize, borderType )

In this syntax:
 dst is the return value, which represents the processing result obtained after box filtering.

 src is the image to be processed, that is, the original image. It can have any number of channels, and each channel is processed independently. The image depth should be one of CV_8U, CV_16U, CV_16S, CV_32F, or CV_64F.

 ddepth is the image depth of the result image. Generally, -1 is used to indicate the same depth as the original image.

 ksize is the size of the filter kernel. The filter kernel size refers to the height and width of the neighborhood image selected during the filtering process.

For example, a kernel size of (3, 3) means that the mean of a 3×3 neighborhood is used as the filtering result, as shown in the following formula.

dst(x, y) = (1/9) × Σ src(x + i, y + j),  where i, j ∈ {-1, 0, 1}
 anchor is the anchor point, and its default value is (-1, -1), which means that the point whose value is being computed sits at the center of the kernel. The default is normally used; in special cases, a different point can be specified as the anchor.

 normalize indicates whether normalization is performed during filtering (that is, whether the result is scaled so that it stays within the current pixel value range). This parameter is a logical value: true (1) or false (0).

 When normalize=1, normalization is performed: the sum of the neighborhood pixel values is divided by the neighborhood area (i.e., the mean is computed).
 When normalize=0, no normalization is performed, and the sum of the neighborhood pixel values is used directly.

Normally, for box filtering, the convolution kernel can be expressed as:

K = α ×
[ 1 1 ... 1 ]
[ 1 1 ... 1 ]
[ ...       ]
[ 1 1 ... 1 ]

where:

α = 1 / (ksize.width × ksize.height), when normalize = 1
α = 1, when normalize = 0
For example, for a 5×5 neighborhood, when normalize=1, normalization is performed and the mean is computed.
In this case, the functions cv2.boxFilter() and cv2.blur() have exactly the same effect.

At this time, the corresponding convolution kernel is:

K = (1/25) ×
[ 1 1 1 1 1 ]
[ 1 1 1 1 1 ]
[ 1 1 1 1 1 ]
[ 1 1 1 1 1 ]
[ 1 1 1 1 1 ]
Also for the 5×5 neighborhood, when normalize=0, no normalization is performed. In this case the filter computes the sum of the neighborhood pixel values, and the convolution kernel used is:

K =
[ 1 1 1 1 1 ]
[ 1 1 1 1 1 ]
[ 1 1 1 1 1 ]
[ 1 1 1 1 1 ]
[ 1 1 1 1 1 ]
When normalize=0, no normalization is performed, so the filtered values will usually exceed the maximum of the current pixel value range (255 for 8-bit images) and are truncated (saturated) to that maximum. The result is therefore an almost pure white image.

 borderType is the border style; this value determines how the image borders are handled.

Normally, when using the box filter function, the default values are used for the parameters anchor, normalize, and borderType. Therefore, the common form of cv2.boxFilter() is:

dst = cv2.boxFilter( src, ddepth, ksize )

Experiment 1: Perform box filtering on the noisy image and display the filtering results

The code is as follows:

import cv2

o = cv2.imread("lenaNoise.png")   # noisy input image
r = cv2.boxFilter(o, -1, (5, 5))  # 5x5 box filter; normalize defaults to True
cv2.imshow("original", o)
cv2.imshow("result", r)
cv2.waitKey()
cv2.destroyAllWindows()

Run results:

In this example, cv2.boxFilter() is called with the normalize parameter left at its default. By default this value is 1 (True), meaning normalization is performed. In this case the filtering result is exactly the same as that of cv2.blur(). As shown in the figure, the left image is the original image and the right image is the box filtering result.
[Figure: original image (left) and normalized box filtering result (right)]

Experiment 2: For noisy images, set the parameter normalize to 0 in the box filter function cv2.boxFilter() to display the filtering results.

The code is as follows:

import cv2

o = cv2.imread("lenaNoise.png")                # noisy input image
r = cv2.boxFilter(o, -1, (5, 5), normalize=0)  # 5x5 neighborhood sum, no normalization
cv2.imshow("original", o)
cv2.imshow("result", r)
cv2.waitKey()
cv2.destroyAllWindows()

In this example, the result is not normalized: the sum of the pixel values in each 5×5 neighborhood is computed during filtering. Most of these sums exceed the maximum pixel value of 255, so the resulting image is close to pure white, with only scattered colored points. Those points keep their color because the pixel values in their neighborhoods are small enough that the sum stays below 255.

The image filtering results at this time are as shown in the figure. The left image is the original image, and the right image is the processing result after box filtering.
[Figure: original image (left) and unnormalized box filtering result (right)]

Experiment 3: For noisy images, use the box filter function cv2.boxFilter() to denoise, set the value of parameter normalize to 0, set the size of the convolution kernel to 2×2, and display the filtering results

The code is as follows:

import cv2

o = cv2.imread("lenaNoise.png")                # noisy input image
r = cv2.boxFilter(o, -1, (2, 2), normalize=0)  # 2x2 neighborhood sum, no normalization
cv2.imshow("original", o)
cv2.imshow("result", r)
cv2.waitKey()
cv2.destroyAllWindows()

In this example, the convolution kernel size is 2×2 and normalize=0, so the box filter computes the sum of the pixel values in each 2×2 neighborhood. The sum of four pixel values is not necessarily greater than 255, so some pixels in the result are not white. As shown in the figure, the left image is the original image and the right image is the box filtering result.
[Figure: original image (left) and 2×2 unnormalized box filtering result (right)]

Source: blog.csdn.net/hai411741962/article/details/132061787