[Image Restoration] MATLAB Simulation of Image Restoration Algorithm Based on Deep Learning

1. Software version

MATLAB R2021a

2. Theoretical knowledge of this algorithm

In many fields, such as medical imaging and satellite remote sensing, the requirements on image quality are high. With the rapid development of the information age, low-resolution images can no longer meet the needs of these scenarios, so the restoration and reconstruction of low-resolution images has become a research hotspot with broad application prospects and practical value. Existing restoration and reconstruction algorithms can alleviate low resolution to a certain extent, but for images rich in detail their reconstruction ability is weak and the visual results are poor. To address this problem, this article proposes an image restoration and reconstruction algorithm based on a convolutional neural network (CNN), building on the basic principles of deep learning. The main contents of this article are as follows:

Firstly, based on a survey of a large number of domestic and foreign publications, the research status of image restoration and reconstruction algorithms is summarized, together with the basic principles, advantages, and disadvantages of convolutional neural networks in deep learning.

Secondly, the basic principle and structure of the convolutional neural network are introduced in detail, the forward-propagation and back-propagation formulas are derived, and the suitability of convolutional neural networks for image super-resolution reconstruction is demonstrated theoretically.
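For reference, the forward pass of a single convolutional layer and the gradient-descent update used in back propagation can be written compactly as follows; this is the standard textbook formulation, not a reproduction of the full derivation referred to above:

a^{(l)} = f\left(W^{(l)} * a^{(l-1)} + b^{(l)}\right), \qquad f(x) = \max(0, x)

W^{(l)} \leftarrow W^{(l)} - \eta \frac{\partial L}{\partial W^{(l)}}, \qquad b^{(l)} \leftarrow b^{(l)} - \eta \frac{\partial L}{\partial b^{(l)}}

where * denotes 2-D convolution, f is the ReLU activation, L is the training loss, and \eta is the learning rate.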

Thirdly, the convolutional neural network model is designed with the MATLAB Deep Learning Toolbox. During the design, MATLAB is used to tune the parameters of the network, including the number of convolutional layers, the learning rate, the convolution kernel size, and the batch size. According to the MATLAB simulation results, the optimal CNN parameters are: learning rate 0.05, 18 convolutional layers, 3x3 convolution kernels, and batch size 32. Finally, the performance of the network is evaluated on the IAPR TC-12 benchmark image database against the bicubic and SRCNN methods. The simulation results show that the proposed method achieves better performance and finer detail than both traditional approaches.
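To illustrate how such a comparison is typically scored, the snippet below computes PSNR and SSIM for the bicubic baseline against a reference image. This is a minimal sketch, not the evaluation code used in this article: the psnr and ssim functions require the Image Processing Toolbox, and the file name is a placeholder.

% Score a bicubic-upscaled image against the ground truth (sketch).
ref    = im2double(imread('reference.png'));   % placeholder file name
lowres = imresize(ref, 0.5, 'bicubic');        % simulate a low-resolution input
bicub  = imresize(lowres, [size(ref,1) size(ref,2)], 'bicubic');  % bicubic baseline
fprintf('Bicubic: PSNR = %.2f dB, SSIM = %.4f\n', psnr(bicub, ref), ssim(bicub, ref));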

Based on this theoretical introduction to convolutional neural networks, we establish the following deep-learning network model for image reconstruction.

Fig 1. The structure of the image reconstruction CNN model

Figure 1 shows that the CNN model consists of an image input layer, 2-D convolution layers, ReLU layers, and a regression layer.

According to the structure of this CNN model, we use the Deep Learning Toolbox to build the CNN image reconstruction model. The main functions used are imageInputLayer, convolution2dLayer, reluLayer, regressionLayer, and trainNetwork.

The imageInputLayer function defines the input of the network. An image input layer feeds 2-D images into the network and can apply data normalization; it lets us set the size and type of the input image. In our model, we configure it as follows:

Layers = imageInputLayer([64 64 1], ...
    'Name', 'InputLayer', ...
    'Normalization', 'none');

The convolution2dLayer function performs the image convolution and extracts image features. A 2-D convolutional layer applies sliding convolutional filters to its input: it moves the filters along the input vertically and horizontally, computes the dot product of the weights and the input, and adds a bias term. In our model, we configure it as follows:

convLayer = convolution2dLayer(3, 64, ...
    'Padding', 1, ...
    'WeightsInitializer', 'he', ...
    'BiasInitializer', 'zeros', ...
    'Name', 'Conv1');

The reluLayer function implements the activation layers of the convolutional neural network. A ReLU layer performs a threshold operation on each element of the input, setting any value less than zero to zero.

relLayer     = reluLayer('Name', 'ReLU1');

The regressionLayer function computes the half-mean-squared-error loss for regression problems. In our model, we set it up as follows:

regressionLayer('Name','FinalRegressionLayer')
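Concretely, for R responses the half-mean-squared-error computed by the regression layer is

\mathrm{loss} = \frac{1}{2} \sum_{i=1}^{R} \frac{(t_i - y_i)^2}{R}

where t_i is a target value, y_i the corresponding network prediction, and R the number of responses (for image-to-image regression, the number of output pixels).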

The trainNetwork function trains the convolutional neural network on image data, on either the CPU or a GPU. In our model, we configure the training as follows:

options = trainingOptions('sgdm', ...
    'Momentum', 0.9, ...
    'InitialLearnRate', initLearningRate, ...
    'LearnRateSchedule', 'piecewise', ...
    'LearnRateDropPeriod', 10, ...
    'LearnRateDropFactor', learningRateFactor, ...
    'L2Regularization', l2reg, ...
    'MaxEpochs', maxEpochs, ...
    'MiniBatchSize', miniBatchSize, ...
    'GradientThresholdMethod', 'l2norm', ...
    'GradientThreshold', 0.01, ...
    'Plots', 'training-progress', ...
    'Verbose', false);

net = trainNetwork(dsTrain, layers, options);
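By default, trainNetwork trains on a supported GPU when one is available and otherwise falls back to the CPU. To select the hardware explicitly, trainingOptions accepts an 'ExecutionEnvironment' argument; the fragment below shows this as an optional addition to the options above:

% Optional: force training onto the GPU ('cpu' and 'auto' are also valid).
options = trainingOptions('sgdm', ...
    'ExecutionEnvironment', 'gpu', ...
    'MaxEpochs', maxEpochs, ...
    'MiniBatchSize', miniBatchSize);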

3. Core code

clc;
close all;
clear all;
warning off;
addpath 'func\'

%% Prepare the training data via the author's helper function
%  (train_up2: upsampled inputs, train_res2: residual targets, per the naming)
Testdir = 'images\';
[train_up2, train_res2, augmenter] = func_train_data(Testdir);

%% Construct the CNN: 64x64 input patches, 18 convolutional layers, 3x3 kernels
SCS = 64;
[layers, lgraph, dsTrain] = func_myCNN(SCS, train_up2, train_res2, augmenter, 18, 3);

% Visualize the layer graph
figure
plot(lgraph)

%% Training hyper-parameters
maxEpochs          = 1;
epochIntervals     = 1;
initLearningRate   = 0.05;
learningRateFactor = 0.1;
l2reg              = 0.0001;
miniBatchSize      = 32;

options = trainingOptions('sgdm', ...
    'Momentum',0.9, ...
    'InitialLearnRate',initLearningRate, ...
    'LearnRateSchedule','piecewise', ...
    'LearnRateDropPeriod',10, ...
    'LearnRateDropFactor',learningRateFactor, ...
    'L2Regularization',l2reg, ...
    'MaxEpochs',maxEpochs, ...
    'MiniBatchSize',miniBatchSize, ...
    'GradientThresholdMethod','l2norm', ...
    'GradientThreshold',0.01, ...
    'Plots','training-progress', ...
    'Verbose',false);

% Train the network and save the result for later inference
net = trainNetwork(dsTrain,layers,options);

save trained_cnn.mat
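Once training finishes, the network can be applied in the residual-learning fashion suggested by the train_up2/train_res2 variable names: upscale the low-resolution input with bicubic interpolation, predict the residual detail with the CNN, and add the two. The sketch below makes several assumptions not fixed by the script above: the test file name and the x2 scale factor are placeholders, and the input is cropped to the network's 64x64 input size for simplicity.

% Minimal inference sketch (assumptions noted above).
load trained_cnn.mat net                         % network saved by the training script
Ilow = im2double(imread('test_low.png'));        % placeholder test image
if size(Ilow,3) > 1, Ilow = rgb2gray(Ilow); end  % network expects one channel
Iup    = imresize(Ilow, 2, 'bicubic');           % bicubic upscaling, scale factor assumed
Ipatch = Iup(1:64, 1:64);                        % crop to the 64x64 network input size
res    = predict(net, Ipatch);                   % predicted residual
Irec   = Ipatch + double(res);                   % reconstructed patch
figure, imshow(Irec, []);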

4. Operation steps and simulation conclusion


The structure and parameters of each unit of the CNN model are summarized below:

No.   Unit                   Parameter
1     InputLayer             Input image size 64x64x1
2     Conv1                  3x3 kernel, 64 filters, stride [1 1]
3     ReLU1                  ReLU activation
4     Conv2                  3x3 kernel, 64 filters, stride [1 1]
5     ReLU2                  ReLU activation
6     Conv3                  3x3 kernel, 64 filters, stride [1 1]
7     ReLU3                  ReLU activation
...   ...                    ...
8     Conv16                 3x3 kernel, 64 filters, stride [1 1]
9     ReLU16                 ReLU activation
10    Conv17                 3x3 kernel, 64 filters, stride [1 1]
11    ReLU17                 ReLU activation
12    Conv18                 3x3x64 kernel, 64 filters, stride [1 1]
13    FinalRegressionLayer   Half-mean-squared-error regression output
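The same stack can also be assembled programmatically. The loop below is a sketch of such an 18-layer network; the internals of func_myCNN are not shown in this article, so this is an assumption rather than the author's exact code. Note that the sketch gives Conv18 a single 3x3x64 filter so that the output has one channel and can be matched against a one-channel residual target, whereas the table above lists 64 filters for Conv18.

% Sketch: assemble the 18-layer CNN of the table above.
layers = imageInputLayer([64 64 1], 'Name', 'InputLayer', 'Normalization', 'none');
for k = 1:17
    layers = [layers
        convolution2dLayer(3, 64, 'Padding', 1, 'Name', sprintf('Conv%d', k))
        reluLayer('Name', sprintf('ReLU%d', k))];
end
layers = [layers
    convolution2dLayer(3, 1, 'Padding', 1, 'Name', 'Conv18')   % one filter -> one output channel
    regressionLayer('Name', 'FinalRegressionLayer')];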


5. How to obtain the complete source code

Option 1: Contact the blogger via WeChat or QQ.

Option 2: Subscribe to the MATLAB/FPGA tutorials to receive the tutorial cases and any 2 complete source packages for free.
