OpenCV-Based Edge-Preserving Filtering Algorithms

Edge-preserving filtering algorithm

Gaussian Bilateral Filter

Gaussian filtering weights pixels only by their spatial position: the closer a pixel is to the center point, the greater its weight. It does not consider how the pixel values themselves are distributed, so edges are blurred along with noise. Gaussian bilateral filtering takes this into account: pixels whose values differ strongly from the center are down-weighted, so the edge information of the image is preserved.
The pixel values within a local region of an image follow certain patterns. To retain edge information, only pixels with similar values should participate in the computation, while pixels with large value differences should be excluded from the convolution. "Bilateral" filtering means that both the spatial position and the pixel value distribution are considered at the same time.


OpenCV API:

void bilateralFilter( InputArray src, 
					  OutputArray dst, 
					  int d,
					  double sigmaColor, double sigmaSpace,
					  int borderType = BORDER_DEFAULT );

InputArray src: The input image, which can be of Mat type. The image must be 8-bit or floating-point, single-channel or three-channel.
OutputArray dst: The output image, with the same size and type as the source image.
int d: The diameter of the pixel neighborhood used during filtering. If this value is non-positive, it is computed from sigmaSpace. It is often set to 0 so that sigmaSpace drives the neighborhood size.
double sigmaColor: The filter sigma in the color space. The larger this value, the more colors that differ within the neighborhood are mixed together, producing larger areas of semi-equal color. In practice this is usually chosen relatively large.
double sigmaSpace: The filter sigma in the coordinate space. The larger this value, the more distant pixels influence each other, as long as their colors are close enough, so sufficiently similar colors over a larger area converge to the same color. When d > 0, d specifies the neighborhood size regardless of sigmaSpace; otherwise the neighborhood size is proportional to sigmaSpace. In practice this is usually chosen relatively small.
int borderType = BORDER_DEFAULT: The border mode used to extrapolate pixels outside the image; defaults to BORDER_DEFAULT.

	Mat image = imread("E:\\picture\\dot.png");
	imshow("original", image);

	Mat dstimg;
	// d = 0: the neighborhood diameter is derived from sigmaSpace
	bilateralFilter(image, dstimg, 0, 100, 10);
	imshow("bilateral", dstimg);

	imwrite("E:\\picture\\dotout.png", dstimg);
	waitKey(0);


Mean-Shift Filter

Mean-shift filtering is an edge-preserving image filtering algorithm, often used to denoise an image before watershed segmentation. It is also applied in image processing tasks such as object tracking, image comparison, and video analysis, which makes it a very widely used filter.

The mean-shift filter considers both the spatial and the range (color) distribution of pixel values: only pixels that fall within both windows participate in the computation. For the current window, the algorithm computes the mean color (the three RGB values) and the mean spatial position (the pixel coordinates x, y), then moves the window center to that mean position and repeats the computation under the same spatial and color constraints. The center keeps shifting until the position no longer changes (dx = dy = 0). In practice the pixel distribution is rarely that ideal, so a stopping condition (such as a maximum number of iterations) is set explicitly; when iteration stops, the final RGB mean is assigned to the center pixel.

In some cases, mean-shift filtering is more effective than Gaussian bilateral filtering.

OpenCV API:

void pyrMeanShiftFiltering( InputArray src, OutputArray dst,
                            double sp, double sr,
                            int maxLevel = 1,
                            TermCriteria termcrit
                            = TermCriteria(TermCriteria::MAX_ITER + TermCriteria::EPS, 5, 1) );

InputArray src: The input image, which can be of Mat type. The image must be an 8-bit three-channel image.
OutputArray dst: The output image, with the same size and type as the source image.
double sp: The radius of the spatial window.
double sr: The radius of the color window.
int maxLevel: The maximum level of the Gaussian pyramid used for the filtering; when maxLevel > 0, filtering runs first on the smallest pyramid layer and the result is propagated to the larger layers.
TermCriteria termcrit: The stopping condition for the iterative shift. By default, iteration stops after 5 iterations or when the shift between two consecutive iterations is no greater than 1.

	Mat image = imread("E:\\picture\\dot.png");
	imshow("original", image);

	Mat dstimg;
	// spatial window radius sp = 15, color window radius sr = 50
	pyrMeanShiftFiltering(image, dstimg, 15, 50, 1);
	imshow("meanshift", dstimg);

	imwrite("E:\\picture\\dotout.png", dstimg);
	waitKey(0);


Origin blog.csdn.net/qq_36587495/article/details/108555063