15.2 Image Compression Based on a BP Neural Network (MATLAB program)

1. Brief description

Principle of image compression with a BP neural network

Digital image compression is an image-processing technique that represents the original pixel matrix, either lossily or losslessly, with a smaller number of bits. Its purpose is to reduce the temporal, spatial, and spectral redundancy in image data: by removing one or more of these kinds of redundancy, the data can be stored and transmitted more efficiently. Whatever architecture or technique a particular image compression system uses, the basic process is the same and can be summarized in three stages: encoding, quantization, and decoding.

Theoretically, encoding and decoding can be reduced to a problem of mapping and optimization. From the point of view of a neural network, compression is nothing more than a nonlinear mapping from input to output, and its quality can be judged by whether the parallel processing is efficient, whether the error tolerance is acceptable, and whether the mapping is robust. The principle of image compression analyzed here is the same as the BP neural network principle described above.

In a BP network, the mapping from the input layer to the hidden layer plays the role of the encoder, applying a linear or nonlinear transform to the image signal, while the mapping from the hidden layer to the output layer plays the role of the decoder, inverse-transforming the compressed data to reconstruct the image. The compression ratio is S = (number of input-layer neurons) / (number of hidden-layer neurons). The input and output layers of a BP network have, in theory, the same number of neurons, and the hidden layer has far fewer neurons than either of them; different compression ratios can therefore be achieved simply by adjusting the number of hidden-layer neurons.
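As a worked example with the settings used in the code below: each image block is 4×4 pixels (K = 4), so the input and output layers have K² = 16 neurons, and the hidden layer has N = 10 neurons, giving a compression ratio of S = 16/10 = 1.6. The quantization step at the end of the code, which scales the stored values to the integer range 0–63, further reduces each stored value from 8 bits to 6 bits.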

2. Code


%% clean up
clc
clear all
rng(0)

%% Compression ratio control
K=4;          % block size: each block is K*K pixels (K^2 input neurons)
N=10;         % number of hidden-layer neurons
row=256;      % image height
col=256;      % image width

%% data input

I=imread('lena.bmp');
%% uniformly convert the shape to row*col
I=imresize(I,[row,col]);

%% Image block division: each K*K block becomes one column of P
P=block_divide(I,K);    % P is K^2 x (number of blocks)

%% Normalize gray levels to [0,1]
P=double(P)/255;

%% Establish BP neural network
net=feedforwardnet(N,'trainlm');   % N hidden neurons, Levenberg-Marquardt training
T=P;                               % autoassociative target: output should equal input
net.trainParam.goal=0.001;         % target mean squared error
net.trainParam.epochs=500;         % maximum number of training epochs
tic
net=train(net,P,T);
toc

%% Save the result
com.lw=net.lw{2};                  % decoder weights (hidden -> output layer)
com.b=net.b{2};                    % decoder bias
[~,len]=size(P);                   % number of training samples (blocks)
com.d=zeros(N,len);
for i=1:len
    % hidden-layer activations form the compressed representation
    com.d(:,i)=tansig(net.iw{1}*P(:,i)+net.b{1});
end
% min-max normalize the weights, bias, and code to [0,1]
minlw=min(com.lw(:));
maxlw=max(com.lw(:));
com.lw=(com.lw-minlw)/(maxlw-minlw);
minb=min(com.b(:));
maxb=max(com.b(:));
com.b=(com.b-minb)/(maxb-minb);
maxd=max(com.d(:));
mind=min(com.d(:));
com.d=(com.d-mind)/(maxd-mind);

% quantize to 6 bits (integer values 0..63) stored in uint8
com.lw=uint8(com.lw*63);
com.b=uint8(com.b*63);
com.d=uint8(com.d*63);

% keep the scaling constants needed for decoding
save comp com minlw maxlw minb maxb maxd mind
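
The helper function block_divide is not given in the original post. A minimal sketch of what it presumably does, cutting the image into non-overlapping K×K tiles and stacking each tile as one column, could look like this (the implementation details are an assumption):

function P = block_divide(I, K)
% Split image I into non-overlapping K*K blocks; each block becomes
% one K^2-by-1 column of P. Assumes K divides both image dimensions,
% which the imresize call above guarantees. (Sketch, not the original.)
[row, col] = size(I);
P = zeros(K*K, (row/K)*(col/K), class(I));
idx = 1;
for r = 1:K:row
    for c = 1:K:col
        blk = I(r:r+K-1, c:c+K-1);
        P(:, idx) = blk(:);
        idx = idx + 1;
    end
end
end

With row = col = 256 and K = 4 this yields a P of size 16×4096, matching the network's 16 input neurons.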
 

3. Running results
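
The saved comp.mat can be decoded back into an image by inverting each step of the encoder: undo the quantization and the min-max scaling, apply the linear hidden-to-output mapping, and reassemble the blocks. The following decoding script is a sketch, assuming the same K, row, and col as the encoder and the block order of the block_divide sketch above; it is not part of the original post:

%% Decompression sketch
K=4; row=256; col=256;                  % must match the encoder settings
load comp                               % com, minlw, maxlw, minb, maxb, maxd, mind
% undo the 6-bit quantization and the min-max normalization
lw=double(com.lw)/63*(maxlw-minlw)+minlw;
b =double(com.b)/63*(maxb-minb)+minb;
d =double(com.d)/63*(maxd-mind)+mind;
% hidden-to-output mapping is the linear decoder: Y = lw*d + b
Y=lw*d+repmat(b,1,size(d,2));
Y=min(max(Y,0),1);                      % clip reconstructed values to [0,1]
% reassemble the K^2 x (number of blocks) matrix into the image
I2=zeros(row,col);
idx=1;
for r=1:K:row
    for c=1:K:col
        I2(r:r+K-1,c:c+K-1)=reshape(Y(:,idx),K,K);
        idx=idx+1;
    end
end
imshow(uint8(I2*255))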

 

Origin blog.csdn.net/m0_57943157/article/details/131564966