Image filtering in OpenCV: 1. Image convolution (cv2.filter2D)

        I am writing these posts mainly to record my process of learning OpenCV, and I hope they can help others.

        

        OpenCV lets you define a custom convolution kernel and apply it to an image. The function for this is cv2.filter2D(), and its syntax is:

        dst=cv2.filter2D(src,ddepth,kernel,anchor,delta,borderType)

         In the formula:

         ● dst is the return value, i.e. the image obtained after the filtering operation.

         ● src is the image to be processed, that is, the original image. It can have any number of channels and can process each channel independently. Image depth should be one of CV_8U, CV_16U, CV_16S, CV_32F or CV_64F.

        ● ddepth is the depth of the output image; -1 is commonly used to indicate the same depth as the original image.

        ● kernel is the convolution kernel, a single-channel array. If you want to use a different kernel for each channel of a color image, you must split the image into its channels and filter each one separately.

        ● anchor is the anchor point; its default value (-1,-1) means the anchor lies at the center of the kernel. The default is usually appropriate; in special cases you can specify a different point.

        ● delta is an optional offset. If given, it is added to the filtered value of every pixel to form the final result.

        ● borderType specifies how pixels at the image border are handled; the default value is usually sufficient.

        In general, the parameters anchor, delta, and borderType of cv2.filter2D() can be left at their default values.

        Therefore, cv2.filter2D() is commonly called as:

        dst=cv2.filter2D(src,ddepth,kernel)

        Example:

        

import cv2 as cv
import numpy as np

def cv_show(name, img):
    """Display an image and wait for a key press."""
    cv.imshow(name, img)
    cv.waitKey(0)
    cv.destroyAllWindows()


# Convolution on a small matrix
src = np.array([[ 1,  2,  3,  4,  5],
                [ 6,  7,  8,  9, 10],
                [11, 12, 13, 14, 15],
                [16, 17, 18, 19, 20],
                [21, 22, 23, 24, 25]], dtype='float32')
kernel1 = np.ones((3, 3), dtype='float32') / 9   # 3x3 mean kernel
result = cv.filter2D(src, -1, kernel=kernel1)

print('Matrix before convolution:\n {}'.format(src))
print('Matrix after convolution:\n {}'.format(result))


# Convolution on an image
img = cv.imread('D:\\dlam.jpg')
if img is None:
    print('Failed to read the image')
else:
    kernel2 = np.ones((7, 7), dtype='float32') / 49   # 7x7 mean kernel
    result2 = cv.filter2D(img, -1, kernel=kernel2)

    cv_show('Doraemon', img)
    cv_show('result', result2)

The result is as follows:

The matrix before convolution is:
 [[ 1.  2.  3.  4.  5.]
 [ 6.  7.  8.  9. 10.]
 [11. 12. 13. 14. 15.]
 [16. 17. 18. 19. 20.]
 [21. 22. 23. 24. 25.]]
The matrix after convolution is:
 [[ 5.         5.3333335  6.3333335  7.333333   7.666667 ]
 [ 6.666667   7.         8.         9.         9.333333 ]
 [11.666668  12.        13.        13.999999  14.333334 ]
 [16.666666  17.        17.999998  19.        19.333332 ]
 [18.333334  18.666666  19.666668  20.666668  21.       ]]

It can clearly be seen that the image has become blurred.


Origin blog.csdn.net/qq_49478668/article/details/123169357