(Digital Image Processing MATLAB+Python) Chapter 7 Image Sharpening - Sections 1 and 2: Overview of Image Sharpening and Differential Operators

Image sharpening: a technique for improving image quality by enhancing the high-frequency detail in an image, which makes the image appear clearer and more visually striking. In image processing and computer vision, sharpening is widely used in applications such as feature extraction, image enhancement, and object recognition.


One: Image edge analysis

Image edge analysis: a technique for locating obvious edges or contours in an image, which helps identify features such as object boundaries, internal structures, and textures. In image processing and computer vision, edge analysis is commonly used in applications such as object detection, object tracking, and image segmentation.

An image mainly contains the following types of edges:

  • Thin-line edge: a narrow bright or dark line against the surrounding background
  • Abrupt (step) edge: a sudden jump in gray level between two regions
  • Gradient (ramp) edge: a gradual transition in gray level spread over several pixels


The different edge types respond to differentiation as follows (a small 1-D example follows this list):

  • Thin-line edge: the first-order derivative crosses zero at the line center, and the second-order derivative has an extremum there
  • Abrupt (step) edge: the first-order derivative has an extremum at the step, and the second-order derivative crosses zero
  • Gradient (ramp) edge: difficult to detect; the second-order derivative carries only slightly more information than the first-order derivative

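A small 1-D sketch (not from the original text) that illustrates these responses with NumPy finite differences:

import numpy as np

# Synthetic 1-D gray-level profile: a step edge (10 -> 50) followed by a thin bright line (90)
profile = np.array([10, 10, 10, 50, 50, 50, 50, 90, 50, 50], dtype=float)

d1 = np.diff(profile)        # first-order difference
d2 = np.diff(profile, n=2)   # second-order difference

print(d1)  # step edge: a single extremum (40); thin line: a +40/-40 pair with a zero crossing between them
print(d2)  # step edge: +40 then -40 (zero crossing); thin line: a strong extremum (-80)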

Two: First-order differential operators

(1) Gradient operator

A: Definition

Gradient operator: a class of algorithms used for image edge detection and feature extraction. They compute the gradient at each position from the changes in image gray level and use it to find obvious edges or features in the image. For an image $f(x, y)$, the gradient at $(x, y)$ is

$$G[f(x, y)]=\left[\begin{array}{ll}\frac{\partial f}{\partial x} & \frac{\partial f}{\partial y}\end{array}\right]^{T}$$

In practice the gradient vector is replaced by its magnitude:

$$G[f(x, y)]=\left[\left(\frac{\partial f}{\partial x}\right)^{2}+\left(\frac{\partial f}{\partial y}\right)^{2}\right]^{\frac{1}{2}} \quad \text{or} \quad G[f(x, y)]=\left|\frac{\partial f}{\partial x}\right|+\left|\frac{\partial f}{\partial y}\right|$$

For a discrete digital image, differences replace derivatives; the resulting image $g(x, y)$ is called the gradient image:

$$\begin{aligned}\frac{\partial f}{\partial x} &= \frac{\Delta f}{\Delta x}=\frac{f(x+1, y)-f(x, y)}{(x+1)-x}=f(x+1, y)-f(x, y) \\ \frac{\partial f}{\partial y} &= \frac{\Delta f}{\Delta y}=\frac{f(x, y+1)-f(x, y)}{(y+1)-y}=f(x, y+1)-f(x, y) \\ g(x, y) &= |f(x+1, y)-f(x, y)|+|f(x, y+1)-f(x, y)|\end{aligned}$$

B: Edge detection

  • Threshold the gradient image to detect the extrema of local gray-level variation; three common output forms follow

Edges set to a fixed gray level $L_G$, background kept:

$$g(x, y)=\begin{cases} L_{G} & G[f(x, y)] \geq T \\ f(x, y) & \text{otherwise}\end{cases}$$

Edges highlighted with their gradient values:

$$g(x, y)=\begin{cases} G[f(x, y)] & G[f(x, y)] \geq T \\ f(x, y) & \text{otherwise}\end{cases}$$

Edges and background binarized:

$$g(x, y)=\begin{cases} L_{G} & G[f(x, y)] \geq T \\ L_{B} & \text{otherwise}\end{cases}$$
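A minimal NumPy sketch of these three output forms (assuming f is a gray image in [0, 1], g is its gradient image, and T, L_G, L_B are illustrative values, not prescribed by the text):

import numpy as np

def threshold_gradient_outputs(f, g, T=0.2, L_G=1.0, L_B=0.0):
    # 1) Edges set to a fixed gray level L_G, background keeps its original gray values
    fixed_edge = np.where(g >= T, L_G, f)
    # 2) Edges keep their gradient values, background keeps its original gray values
    highlight = np.where(g >= T, g, f)
    # 3) Binary image: edges L_G, background L_B
    binary = np.where(g >= T, L_G, L_B)
    return fixed_edge, highlight, binary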

C: Example

An example calculation is as follows

[Figure: worked example of the gradient computation]

D: Program


MATLAB implementation:

Image=im2double(rgb2gray(imread('lotus.jpg')));
subplot(131),imshow(Image),title('Original image');
[h,w]=size(Image);
edgeImage=zeros(h,w);
% Forward differences in x and y, summed as absolute values (the gradient image)
for x=1:w-1
    for y=1:h-1
        edgeImage(y,x)=abs(Image(y,x+1)-Image(y,x))+abs(Image(y+1,x)-Image(y,x));
    end
end
subplot(132),imshow(edgeImage),title('Gradient image');
sharpImage=Image+edgeImage;   % sharpen by adding the gradient image to the original
subplot(133),imshow(sharpImage),title('Sharpened image');

Python implementation:

import cv2
import numpy as np
from matplotlib import pyplot as plt

# Read the image
img = cv2.imread('lotus.jpg')
# Convert to grayscale and scale the pixel values to [0, 1]
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = gray.astype(np.float64) / 255

# Show the original image
plt.subplot(131)
plt.imshow(gray, cmap='gray')
plt.title('Original image')

# Compute the gradient image from forward differences
h, w = gray.shape
edge_img = np.zeros((h, w))
for x in range(w-1):
    for y in range(h-1):
        edge_img[y,x] = abs(gray[y,x+1]-gray[y,x]) + abs(gray[y+1,x]-gray[y,x])

# Show the gradient image
plt.subplot(132)
plt.imshow(edge_img, cmap='gray')
plt.title('Gradient image')

# Sharpen by adding the gradient image to the original, then clip to [0, 1]
sharp_img = gray + edge_img
sharp_img = np.clip(sharp_img, 0, 1)

# Show the sharpened image
plt.subplot(133)
plt.imshow(sharp_img, cmap='gray')
plt.title('Sharpened image')

# Show all subplots
plt.show()
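The double loop above computes forward differences pixel by pixel; an equivalent vectorized form using NumPy slicing (a sketch operating on the same `gray` array) is:

# Vectorized equivalent of the double loop above: forward differences via slicing
edge_img_vec = np.zeros_like(gray)
edge_img_vec[:-1, :-1] = (np.abs(gray[:-1, 1:] - gray[:-1, :-1]) +
                          np.abs(gray[1:, :-1] - gray[:-1, :-1]))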

(2) Roberts operator

A: Definition

Roberts operator: an edge detection operator based on differences of neighboring pixel values. It uses two $2 \times 2$ convolution kernels $G_x$ and $G_y$, which compute the gray-level differences along the two diagonals of the $2 \times 2$ neighborhood spanned by the pixels $(x, y)$ and $(x+1, y+1)$. Specifically, $G_x$ and $G_y$ are

$$G_x=\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \quad \text{and} \quad G_y=\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}$$

Then, for an input image $I$, the edge strength $E$ is (where $*$ denotes the convolution operation)

$$E(x, y)=\sqrt{\left(I(x, y) * G_{x}\right)^{2}+\left(I(x, y) * G_{y}\right)^{2}}$$

The resulting edge strength $E$ can be used to detect edges in the image; edges usually appear where $E$ takes large values. Because the two kernels are small and fixed, the operator is cheap to compute; a minimal sketch of the formula follows.
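A minimal sketch of this edge-strength computation (not part of the original program below), assuming `gray` is a grayscale image scaled to [0, 1]:

import cv2
import numpy as np

# Roberts edge strength E = sqrt((I*Gx)^2 + (I*Gy)^2)
Gx = np.array([[1, 0], [0, -1]], dtype=np.float64)
Gy = np.array([[0, 1], [-1, 0]], dtype=np.float64)
Ix = cv2.filter2D(gray, cv2.CV_64F, Gx)   # filter2D applies correlation; the sign flip vs. true
Iy = cv2.filter2D(gray, cv2.CV_64F, Gy)   # convolution does not matter once the result is squared
E = np.sqrt(Ix**2 + Iy**2)                # threshold E to obtain an edge map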

B: Example

The following figure is a calculation example

[Figure: worked example of the Roberts operator]

C: Program

The edge function (MATLAB): detects edges in an image and returns a binarized edge image; a rough Python counterpart is sketched after the parameter list. The function syntax is as follows

BW = edge(I, method, threshold, direction)

The meaning of the parameters is as follows

  • I: the input image, which can be a grayscale or color image. A color image is usually converted to grayscale before edge detection
  • method: the edge-detection method; the default is 'sobel'
    • 'sobel': Sobel operator
    • 'prewitt': Prewitt operator
    • 'roberts': Roberts operator
    • 'log': Laplacian of Gaussian (LoG) operator
    • 'zerocross': edge detection based on the zero crossings of a filtered image
    • 'canny': Canny operator
  • threshold: the threshold used to binarize the detected edges. For the Canny operator this can be a two-element vector containing the low and high thresholds. If omitted, a threshold is chosen automatically
  • direction: the direction of edges to detect; the default is 'both'
    • 'both': detect edges in both the horizontal and vertical directions
    • 'horizontal': detect only horizontal edges
    • 'vertical': detect only vertical edges
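MATLAB's edge has no single OpenCV equivalent; as a rough counterpart (an assumption, requiring scikit-image to be installed), similar per-operator functions are available there, with `gray` a float grayscale image:

from skimage import filters, feature

sobel_mag   = filters.sobel(gray)      # Sobel gradient magnitude
prewitt_mag = filters.prewitt(gray)    # Prewitt gradient magnitude
roberts_mag = filters.roberts(gray)    # Roberts gradient magnitude
canny_bw    = feature.canny(gray, sigma=1.0, low_threshold=0.1, high_threshold=0.2)  # binary Canny edges
bw = sobel_mag > 0.1                   # threshold a magnitude image for a binary edge map (0.1 is illustrative)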

MATLAB implementation:

Image=im2double(rgb2gray(imread('lotus.jpg')));
% Read lotus.jpg, convert it to a grayscale image of type double, and store it in Image.

figure,imshow(Image),title('Original image');
% Show the original image

BW= edge(Image,'roberts');
% Roberts edge detection, producing the binary edge image BW.

figure,imshow(BW),title('Edge detection');
% Show the Roberts edge-detection result

H1=[1 0; 0 -1];
H2=[0 1;-1 0];
% Define the two 2x2 kernels H1 and H2, the two components of the Roberts operator.

R1=imfilter(Image,H1);
R2=imfilter(Image,H2);
% Filter the original image with H1 and H2 to obtain the two gradient images R1 and R2.

edgeImage=abs(R1)+abs(R2);
% Add the absolute values of R1 and R2 to obtain the final gradient image edgeImage.

figure,imshow(edgeImage),title('Roberts gradient image');
% Show the gradient image produced by the Roberts operator

sharpImage=Image+edgeImage;
% Add the gradient image to the original image to obtain the sharpened image sharpImage.

figure,imshow(sharpImage),title('Roberts sharpened image');
% Show the sharpened image

Python implementation:

import cv2
import numpy as np
from matplotlib import pyplot as plt

# Read the image and convert it to grayscale
img = cv2.imread('lotus.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = gray.astype(np.float64) / 255.0

# Show the original image
plt.imshow(gray, cmap='gray')
plt.title('Original image')
plt.show()

# Edge detection (OpenCV has no built-in Roberts detector, so Canny is used here;
# Canny needs an 8-bit image, hence the conversion back to uint8)
edge = cv2.Canny((gray * 255).astype(np.uint8), 100, 200)

# Show the edge-detection result
plt.imshow(edge, cmap='gray')
plt.title('Edge detection')
plt.show()

# Define the two 2x2 kernels H1 and H2, the two components of the Roberts operator
H1 = np.array([[1, 0], [0, -1]], dtype=np.float64)
H2 = np.array([[0, 1], [-1, 0]], dtype=np.float64)

# Filter the original image with H1 and H2 to obtain the two gradient images R1 and R2
R1 = cv2.filter2D(gray, -1, H1)
R2 = cv2.filter2D(gray, -1, H2)

# Add the absolute values of R1 and R2 to obtain the final gradient image
edgeImage = np.abs(R1) + np.abs(R2)

# Show the gradient image produced by the Roberts operator
plt.imshow(edgeImage, cmap='gray')
plt.title('Roberts gradient image')
plt.show()

# Add the gradient image to the original image and clip to obtain the sharpened image
sharpImage = np.clip(gray + edgeImage, 0, 1)

# Show the sharpened image
plt.imshow(sharpImage, cmap='gray')
plt.title('Roberts sharpened image')
plt.show()

(3) Sobel operator

A: Definition

Sobel operator: a commonly used edge detection operator for detecting the edge parts of a digital image. It uses two $3 \times 3$ convolution kernels, one for the $x$ direction and one for the $y$ direction of the image, to compute the magnitude and direction of the gradient at every pixel. The convolution kernels for the $x$ and $y$ directions can be written as

$$H_{x}=\begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix} \quad \text{and} \quad H_{y}=\begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}$$

For a grayscale image $I$, applying the Sobel operator in the $x$ direction produces a new image $G_x$, in which the value of each pixel represents the magnitude of the gradient in the $x$ direction at that position, i.e.

$$G_{x}(i, j)=\sum_{m=-1}^{1} \sum_{n=-1}^{1} H_{x}(m+2, n+2)\, I(i+m, j+n)$$

Similarly, applying the Sobel operator to image $I$ in the $y$ direction produces a new image $G_y$, in which the value of each pixel represents the magnitude of the gradient in the $y$ direction, i.e.

$$G_{y}(i, j)=\sum_{m=-1}^{1} \sum_{n=-1}^{1} H_{y}(m+2, n+2)\, I(i+m, j+n)$$

The final gradient image $G$ is obtained from $G_x$ and $G_y$ as the root of the sum of their squares:

$$G(i, j)=\sqrt{G_{x}(i, j)^{2}+G_{y}(i, j)^{2}}$$

The purpose of the Sobel operator is to find the positions where the gray level changes sharply in the image, that is, the edges. By comparing the gradient magnitude and direction at each pixel, the edge parts of the image can be extracted for subsequent image analysis and processing; a minimal sketch of the magnitude computation follows.
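A minimal sketch of this gradient-magnitude computation using OpenCV's built-in Sobel (assuming `gray` is a float image in [0, 1]; the full program in part C uses the explicit kernels instead):

import cv2
import numpy as np

Gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)   # gradient in the x direction
Gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)   # gradient in the y direction
G = cv2.magnitude(Gx, Gy)                          # sqrt(Gx**2 + Gy**2)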

B: Example

The following figure is a calculation example

[Figure: worked example of the Sobel operator]

C: Program


MATLAB implementation:

Image=im2double(rgb2gray(imread('lotus.jpg')));
figure,imshow(Image),title('Original image');
BW= edge(Image,'sobel');            % binary Sobel edge map
figure,imshow(BW),title('Edge detection');
H1=[-1 -2 -1;0 0 0;1 2 1];          % Sobel kernels
H2=[-1 0 1;-2 0 2;-1 0 1];
R1=imfilter(Image,H1);
R2=imfilter(Image,H2);
edgeImage=abs(R1)+abs(R2);          % gradient image
figure,imshow(edgeImage),title('Sobel gradient image');
sharpImage=Image+edgeImage;         % sharpened image
figure,imshow(sharpImage),title('Sobel sharpened image');

Python implementation:

import cv2
import numpy as np
import matplotlib.pyplot as plt

# Read the image and convert it to grayscale
image = cv2.imread('lotus.jpg')
image_gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
image_gray = cv2.normalize(image_gray.astype('float'), None, 0.0, 1.0, cv2.NORM_MINMAX)

# Show the original image
plt.imshow(image_gray, cmap='gray')
plt.title('Original image')
plt.show()

# Edge detection with OpenCV's built-in Sobel
edge_image = cv2.Sobel(image_gray, cv2.CV_64F, 1, 0) + cv2.Sobel(image_gray, cv2.CV_64F, 0, 1)
edge_image = cv2.normalize(np.abs(edge_image), None, 0.0, 1.0, cv2.NORM_MINMAX)

# Show the edge-detection result
plt.imshow(edge_image, cmap='gray')
plt.title('Edge detection')
plt.show()

# Sobel gradient image using the explicit kernels
H1 = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]])
H2 = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
R1 = cv2.filter2D(image_gray, -1, H1)
R2 = cv2.filter2D(image_gray, -1, H2)
sobel_image = np.abs(R1) + np.abs(R2)
sobel_image = cv2.normalize(sobel_image, None, 0.0, 1.0, cv2.NORM_MINMAX)

# Show the Sobel gradient image
plt.imshow(sobel_image, cmap='gray')
plt.title('Sobel gradient image')
plt.show()

# Sobel sharpened image
sharp_image = image_gray + sobel_image
sharp_image = cv2.normalize(sharp_image, None, 0.0, 1.0, cv2.NORM_MINMAX)

# Show the Sobel sharpened image
plt.imshow(sharp_image, cmap='gray')
plt.title('Sobel sharpened image')
plt.show()

(4) Prewitt operator

A: Definition

Prewitt operator: a classic image edge detection operator used to detect horizontal and vertical edges in an image. It is a discrete differential operator that extracts edge information by computing the gradient of the pixel values. For a grayscale image $I$, the Prewitt operator computes gradients in the horizontal and vertical directions separately, producing two gradient images $G_x$ and $G_y$ whose element values describe the gradient magnitude and direction at each pixel. The horizontal and vertical templates of the Prewitt operator are

$$H_{x}=\begin{bmatrix} -1 & 0 & 1 \\ -1 & 0 & 1 \\ -1 & 0 & 1 \end{bmatrix} \quad \text{and} \quad H_{y}=\begin{bmatrix} -1 & -1 & -1 \\ 0 & 0 & 0 \\ 1 & 1 & 1 \end{bmatrix}$$

The horizontal and vertical gradient images are computed as (where $*$ denotes the convolution operation)

$$G_{x}=I*H_{x}, \qquad G_{y}=I*H_{y}$$

Finally, the horizontal and vertical gradient images are combined to compute the edge gradient image $G$:

$$G=\sqrt{G_{x}^{2}+G_{y}^{2}}$$

Based on the gradient magnitude and direction, a threshold can be set to decide whether a pixel is an edge point, thereby extracting the edge information from the image; a minimal sketch of this follows.
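A minimal sketch of the two-template Prewitt gradient plus thresholding described above (assuming `gray` is a float image in [0, 1]; the threshold 0.3 is illustrative). The program in part C below instead combines eight directional templates:

import cv2
import numpy as np

Hx = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=np.float64)
Hy = np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]], dtype=np.float64)
Gx = cv2.filter2D(gray, cv2.CV_64F, Hx)
Gy = cv2.filter2D(gray, cv2.CV_64F, Hy)
G = np.sqrt(Gx**2 + Gy**2)             # edge gradient image
edges = (G >= 0.3).astype(np.float64)  # binary edge map after thresholding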

B: Example

[Figure: worked example of the Prewitt operator]

C: Program


MATLAB implementation:

clear,clc,close all;
Image=im2double(rgb2gray(imread('lotus.jpg')));
% H1..H8 are Prewitt-style templates rotated through eight directions
H1=[-1 -1 -1;0 0 0;1 1 1];
H2=[0 -1 -1;1 0 -1; 1 1 0];
H3=[1 0 -1;1 0 -1;1 0 -1];
H4=[1 1 0;1 0 -1;0 -1 -1];
H5=[1 1 1;0 0 0;-1 -1 -1];
H6=[0 1 1;-1 0 1;-1 -1 0];
H7=[-1 0 1;-1 0 1;-1 0 1];
H8=[-1 -1 0;-1 0 1;0 1 1];
R1=imfilter(Image,H1);
R2=imfilter(Image,H2);
R3=imfilter(Image,H3);
R4=imfilter(Image,H4);
R5=imfilter(Image,H5);
R6=imfilter(Image,H6);
R7=imfilter(Image,H7);
R8=imfilter(Image,H8);
edgeImage1=abs(R1)+abs(R7);    % two-template gradient (horizontal + vertical)
sharpImage1=edgeImage1+Image;
f1=max(max(R1,R2),max(R3,R4));
f2=max(max(R5,R6),max(R7,R8));
edgeImage2=max(f1,f2);         % eight-template gradient (maximum response)
sharpImage2=edgeImage2+Image;
subplot(221),imshow(edgeImage1),title('Edge detection, two templates');
subplot(222),imshow(edgeImage2),title('Edge detection, eight templates');
subplot(223),imshow(sharpImage1),title('Sharpening, two templates');
subplot(224),imshow(sharpImage2),title('Sharpening, eight templates');


Python implementation:

import cv2
import matplotlib.pyplot as plt
import numpy as np

# Read the image and convert it to grayscale
image = cv2.imread('lotus.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
image = np.float32(image) / 255

# Define the convolution kernels for edge detection and sharpening
# (Prewitt-style templates rotated through eight directions)
H1 = np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]])
H2 = np.array([[0, -1, -1], [1, 0, -1], [1, 1, 0]])
H3 = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]])
H4 = np.array([[1, 1, 0], [1, 0, -1], [0, -1, -1]])
H5 = np.array([[1, 1, 1], [0, 0, 0], [-1, -1, -1]])
H6 = np.array([[0, 1, 1], [-1, 0, 1], [-1, -1, 0]])
H7 = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]])
H8 = np.array([[-1, -1, 0], [-1, 0, 1], [0, 1, 1]])

# Convolve the image with each template to obtain the eight responses
R1 = cv2.filter2D(image, -1, H1)
R2 = cv2.filter2D(image, -1, H2)
R3 = cv2.filter2D(image, -1, H3)
R4 = cv2.filter2D(image, -1, H4)
R5 = cv2.filter2D(image, -1, H5)
R6 = cv2.filter2D(image, -1, H6)
R7 = cv2.filter2D(image, -1, H7)
R8 = cv2.filter2D(image, -1, H8)

# Edge detection and sharpening results for two templates and for eight templates
edgeImage1 = np.abs(R1) + np.abs(R7)
sharpImage1 = edgeImage1 + image
f1 = np.maximum(np.maximum(R1, R2), np.maximum(R3, R4))
f2 = np.maximum(np.maximum(R5, R6), np.maximum(R7, R8))
edgeImage2 = np.maximum(f1, f2)
sharpImage2 = edgeImage2 + image

# Show the images
plt.subplot(221), plt.imshow(edgeImage1, cmap='gray'), plt.title('Edge detection, two templates')
plt.subplot(222), plt.imshow(edgeImage2, cmap='gray'), plt.title('Edge detection, eight templates')
plt.subplot(223), plt.imshow(sharpImage1, cmap='gray'), plt.title('Sharpening, two templates')
plt.subplot(224), plt.imshow(sharpImage2, cmap='gray'), plt.title('Sharpening, eight templates')
plt.show()

Three: Second-order differential operator

(1) Definition

Second-order differential operator (Laplacian operator): in Euclidean space the Laplacian is usually written $\Delta$ (or $\nabla^{2}$) and is defined as the divergence of the gradient operator $\nabla$. For a two-dimensional image $f(x, y)$ it is

$$\nabla^{2} f=\frac{\partial^{2} f}{\partial x^{2}}+\frac{\partial^{2} f}{\partial y^{2}}$$

where, replacing the derivatives with differences,

$$\begin{aligned}\frac{\partial^{2} f}{\partial x^{2}} & =\Delta_{x} f(x+1, y)-\Delta_{x} f(x, y) \\ & =[f(x+1, y)-f(x, y)]-[f(x, y)-f(x-1, y)] \\ & =f(x+1, y)+f(x-1, y)-2 f(x, y) \\ \frac{\partial^{2} f}{\partial y^{2}} & =\Delta_{y} f(x, y+1)-\Delta_{y} f(x, y) \\ & =[f(x, y+1)-f(x, y)]-[f(x, y)-f(x, y-1)] \\ & =f(x, y+1)+f(x, y-1)-2 f(x, y)\end{aligned}$$
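Adding the two second differences gives the discrete Laplacian that the programs below implement:

$$\nabla^{2} f(x, y)=f(x+1, y)+f(x-1, y)+f(x, y+1)+f(x, y-1)-4 f(x, y)$$

which corresponds to the $3 \times 3$ template $\begin{bmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{bmatrix}$. Subtracting the Laplacian from the original image, i.e. filtering with $\begin{bmatrix} 0 & -1 & 0 \\ -1 & 5 & -1 \\ 0 & -1 & 0 \end{bmatrix}$, yields the sharpened image used in the code.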

(2) Example

The following figure is an example

[Figure: worked example of the Laplacian operator]

(3) Program



MATLAB implementation:

Image=im2double(rgb2gray(imread('lotus.jpg')));
figure,imshow(Image),title('Original image');
H=fspecial('laplacian',0);          % 3x3 Laplacian template [0 1 0;1 -4 1;0 1 0]
R=imfilter(Image,H);
edgeImage=abs(R);
figure,imshow(edgeImage),title('Laplacian gradient image');
H1=[0 -1 0;-1 5 -1;0 -1 0];         % sharpening template: original minus Laplacian
sharpImage=imfilter(Image,H1);
figure,imshow(sharpImage),title('Laplacian sharpened image');

Python implementation:

import cv2
import numpy as np
import matplotlib.pyplot as plt

# Read the image, convert it to grayscale, and scale it to [0, 1]
Image = cv2.imread('lotus.jpg')
Image = cv2.cvtColor(Image, cv2.COLOR_BGR2GRAY)
Image = cv2.normalize(Image.astype('float'), None, 0.0, 1.0, cv2.NORM_MINMAX)

plt.imshow(Image, cmap='gray')
plt.title('Original image')
plt.show()

# Laplacian response; its absolute value is the gradient (edge) image
R = cv2.Laplacian(Image, ddepth=cv2.CV_64F)
edgeImage = np.abs(R)

plt.imshow(edgeImage, cmap='gray')
plt.title('Laplacian gradient image')
plt.show()

# Sharpen with the template [0 -1 0; -1 5 -1; 0 -1 0] (original minus Laplacian)
H1 = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]])
sharpImage = cv2.filter2D(Image, -1, H1)

plt.imshow(sharpImage, cmap='gray')
plt.title('Laplacian sharpened image')
plt.show()

Origin: blog.csdn.net/qq_39183034/article/details/130374803