Digital Image Processing – Convolution Operation

1. Main content

(1) Write one-dimensional and two-dimensional convolution programs from scratch;
(2) Use Python library functions for one-dimensional and two-dimensional convolution to perform the same operations;
(3) Compare the results of (1) and (2), then revise and optimize the hand-written programs.

2. Source code

import cv2
import matplotlib.pyplot as plt
import numpy as np
import scipy.signal


print('========================== 1D convolution =========================')
# 1D convolution: data source G and convolution kernel H
G = [1,2,3,4,5,6,7,8,9]
H = [-1,0,1]
# Custom 1D convolution (no kernel flip, no padding beyond the input length)
def conv1d(vector, kernel):
    result = []
    for i in range(len(vector)):
        if i == 0:
            # only the last kernel tap overlaps the first element
            x = kernel[-1]*vector[i]
        elif i == 1:
            x = kernel[-1]*vector[i] + kernel[-2]*vector[i-1]
        else:
            # full overlap of the 3-tap kernel
            x = kernel[-1]*vector[i] + kernel[-2]*vector[i-1] + kernel[-3]*vector[i-2]
        result.append(x)
    return result
# Modified custom 1D convolution: also computes the trailing edge outputs,
# so the output length is len(vector) + len(kernel) - 1, as with the library
def optim_conv1d(vector, kernel):
    result = []
    n = len(vector)
    for i in range(n + 2):
        if i == 0:
            x = kernel[-1]*vector[i]
        elif i == 1:
            x = kernel[-1]*vector[i] + kernel[-2]*vector[i-1]
        elif 1 < i < n:
            x = kernel[-1]*vector[i] + kernel[-2]*vector[i-1] + kernel[-3]*vector[i-2]
        elif i == n:
            # the kernel slides past the right edge: only two taps overlap
            x = kernel[-2]*vector[i-1] + kernel[-3]*vector[i-2]
        else:
            # i == n + 1: only one tap overlaps
            x = kernel[-3]*vector[i-2]
        result.append(x)
    return result

F = conv1d(G, H)
print('       conv1d - custom')
print(F)
F_optim = optim_conv1d(G, H)
print('    optim_conv1d - custom')
print(F_optim)

# Library 1D convolution
print('       conv1d - scipy')
print(scipy.signal.convolve(G, H))


print('========================== 2D convolution =========================')
# 2D convolution
# Convolution theorem: convolution in the spatial domain = multiplication in the frequency domain
# Read the source image as grayscale
rose = cv2.imread('2.png',0)
print('             rose')
print(rose)
# Define the convolution kernel (3x3 smoothing kernel, sum = 10)
H_2d = [[1,1,1],[1,2,1],[1,1,1]]

# Column-wise max-min normalisation to [0, 1]
def maxmin_norm(array):
    maxcols = array.max(axis = 0)
    mincols = array.min(axis = 0)
    data_rows, data_cols = array.shape
    t = np.empty((data_rows, data_cols))
    for i in range(data_cols):
        t[:, i] = (array[:, i] - mincols[i]) / (maxcols[i] - mincols[i])
    return t

# Custom 2D convolution: zero padding, 3x3 sliding window, then max-min normalisation
def conv2d(image, kernel):
    rows, cols = image.shape
    # Zero padding: one-pixel border around the image
    big_rose = np.zeros((rows + 2, cols + 2))
    big_rose[1:rows + 1, 1:cols + 1] = image

    # Normalise the kernel by its sum (10) so smoothing does not change the overall brightness
    kernel = np.array(kernel, dtype=float) / np.sum(kernel)
    print('  kernel')
    print(kernel)

    # Write the results to a separate array so that already-convolved pixels are not reused
    out = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            acc = big_rose[i + 1][j + 1]*kernel[1,1] + big_rose[i + 2][j + 1]*kernel[2,1] + \
                  big_rose[i][j + 1]*kernel[0,1] + big_rose[i + 1][j + 2]*kernel[1,2] + \
                  big_rose[i + 1][j]*kernel[1,0] + big_rose[i][j]*kernel[0,0] + \
                  big_rose[i + 2][j + 2]*kernel[2,2] + big_rose[i][j + 2]*kernel[0,2] + \
                  big_rose[i + 2][j]*kernel[2,0]
            out[i][j] = acc

    # Max-min normalisation
    After_rose = maxmin_norm(out)
    return After_rose

# Modified custom 2D convolution: general kernel size, zero padding, clipping to [0, 255]
def Conv2d_Optim(image, mfilter):
    mI, nI = np.shape(image)
    mF, nF = np.shape(mfilter)

    height = int(mF / 2)
    width = int(nF / 2)
    # Output image, same size as the source
    convImage = np.zeros((mI, nI))
    # Zero padding (handles even and odd kernel sizes)
    if mF % 2 == 0:
        imData = np.pad(image,(width, width - 1),'constant')
    else:
        imData = np.pad(image,(width, height),'constant')

    padmI, padnI = imData.shape
    convHeight = padmI - mF + 1
    convWidth = padnI - nF + 1
    # Slide the kernel over the padded image
    for i in range(convHeight):
        for j in range(convWidth):
            localImage = imData[i:i + mF, j:j + nF]
            convImage[i][j] = np.sum(localImage * mfilter)
    # Clip gray values: above 255 -> 255, below 0 -> 0
    convImage1 = convImage.clip(0, 255)

    return convImage1

# Library 2D convolution
scharr = np.array([[-3-3j,0-10j,+3-3j],[-10+0j,0+0j,+10+0j],[-3+3j,0+10j,+3+3j]]) # complex Scharr kernel: real part = x gradient, imaginary part = y gradient
#scharr = np.array([[0,0,0],[-1,1,0],[0,0,0]])
rose_conv2d = scipy.signal.convolve2d(rose,scharr,boundary='symm',mode='same')
# The result is complex-valued: take the gradient magnitude before normalising
rose_conv2d = maxmin_norm(np.absolute(rose_conv2d))
print('                       rose_conv2d')
print(rose_conv2d)

After_rose = conv2d(rose,H_2d)
print('                       After_conv2d_rose')
print(After_rose)

filter1 = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
After_conv2d_Optim = Conv2d_Optim(rose,filter1)
print('   After_conv2d_Optim_rose')
print(After_conv2d_Optim)


plt.subplot(221),plt.imshow(rose,cmap = 'gray'),plt.title('Original')
plt.subplot(222),plt.imshow(After_rose,cmap = 'gray'),plt.title('Conv2d-Normalize')
plt.subplot(223),plt.imshow(rose_conv2d,cmap = 'gray'),plt.title('Conv2d-Function')
plt.subplot(224),plt.imshow(After_conv2d_Optim,cmap = 'gray'),plt.title('Conv2d-Optim')

plt.show()

3. Results

3.1 One-dimensional convolution
In this experiment, the custom one-dimensional convolution function takes a simple vector ([1,2,3,4,5,6,7,8,9]) as the data source and ([-1,0,1]) as the convolution kernel. The result of convolving the data source is shown below.
[Figure: output of the custom conv1d function]

After obtaining this result, the library function scipy.signal.convolve() is called to convolve the same data source.

[Figure: output of scipy.signal.convolve]
Analysis of the library's output shows that scipy.signal.convolve() effectively applies the flipped kernel ([1,0,-1]), pads the data with zeros, and therefore convolves the edge elements as well.
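
This flip-and-pad behaviour can be checked directly. The sketch below is not part of the original program; it redefines the same G and H used above and reproduces the library result by sliding the flipped kernel over a zero-padded copy of the input.

import scipy.signal

G = [1, 2, 3, 4, 5, 6, 7, 8, 9]
H = [-1, 0, 1]

full = scipy.signal.convolve(G, H)   # 'full' mode: output length len(G) + len(H) - 1
# Slide the flipped kernel over a zero-padded copy of G
padded = [0] * (len(H) - 1) + G + [0] * (len(H) - 1)
flipped = H[::-1]                    # [1, 0, -1], the kernel as the library applies it
manual = [sum(padded[n + k] * flipped[k] for k in range(len(H)))
          for n in range(len(G) + len(H) - 1)]
print(full)    # [-1 -2 -2 -2 -2 -2 -2 -2 -2  8  9]
print(manual)  # the same values as a plain Python list

The signs differ from the custom conv1d() output precisely because the library flips the kernel before sliding it.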
Edge handling was therefore added to the custom 1D function. The modified one-dimensional convolution function (optim_conv1d) is applied to the data source, and the results are shown below.

[Figure: output of the modified optim_conv1d function]

3.2 Two-dimensional convolution
In this experiment, for two-dimensional convolution, a grayscale image of size 1200×675 was read. The custom two-dimensional convolution function pads the image border with zeros, performs the 2D convolution, and finally applies max-min normalization. The result is shown in the figure below.
[Figure: result of the custom conv2d smoothing]
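
For reference, the zero padding and 3×3 smoothing of the custom routine can also be reproduced with the library. The sketch below is only a cross-check, and assumes rose, H_2d and maxmin_norm from the source code above are in scope.

import numpy as np
import scipy.signal

kernel = np.array(H_2d, dtype=float) / np.sum(H_2d)   # same normalisation by the kernel sum (10)
# boundary='fill' with fillvalue=0 reproduces the zero padding; mode='same' keeps the image size.
# convolve2d flips the kernel, but this kernel is symmetric, so it matches the sliding-window loop.
reference = scipy.signal.convolve2d(rose, kernel, mode='same', boundary='fill', fillvalue=0)
reference = maxmin_norm(reference)                     # normalise the same way before comparing with conv2d()

Comparing this reference against the output of the custom conv2d() is a quick way to catch indexing or padding mistakes.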

The custom two-dimensional convolution enhances the subject of the image and has a smoothing effect. Next, the scipy.signal.convolve2d() function is called to perform the two-dimensional convolution of the source image, and the result is shown in the figure below.
[Figure: result of scipy.signal.convolve2d with the complex Scharr kernel]

In the convolution result above, the gray value of each pixel is relatively small, but the contours are still quite clear, which gives an edge-detection effect. The custom 2D convolution function was therefore modified. First, the original convolution kernel ([[1,1,1],[1,2,1],[1,1,1]]) was changed to ([[-1,-2,-1],[0,0,0],[1,2,1]]); second, after convolution the gray values are clipped to the range (0,255). The modified result is shown in the figure below.
[Figure: result of the modified Conv2d_Optim edge detection]
As the figure above shows, the modified two-dimensional convolution function also achieves edge detection, in contrast to the earlier smoothing effect. The choice of convolution kernel determines the effect of the convolution.
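
Because the library call above uses a complex Scharr kernel, one possible follow-up (not in the original program) is to split its result into gradient magnitude and direction, following the gradient example in the scipy.signal.convolve2d documentation. The image path '2.png' is the same one assumed in the source code.

import cv2
import numpy as np
import scipy.signal

rose = cv2.imread('2.png', 0)                      # same grayscale image as above
scharr = np.array([[-3-3j, 0-10j, +3-3j],
                   [-10+0j, 0+0j, +10+0j],
                   [-3+3j, 0+10j, +3+3j]])         # real part: x gradient, imaginary part: y gradient
grad = scipy.signal.convolve2d(rose, scharr, boundary='symm', mode='same')
magnitude = np.absolute(grad)                      # edge strength
direction = np.angle(grad)                         # edge orientation in radians

Displaying magnitude with plt.imshow(..., cmap='gray') gives a cleaner edge map than plotting the raw complex result.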

A beginner's notes!

Origin blog.csdn.net/MZYYZT/article/details/128196930