IRCNN-FPOCS Code Interpretation (1): Overall Framework

0 Preface

Following the idea of reimplementing the paper's code yourself, study the author's code, find your own blind spots and weaknesses, and improve your coding skills.

This post mainly introduces the overall idea of the code implementation; detailed analysis will follow in later posts.

1. Synthetic seismic data

The synthetic data, i.e., the labels (ground truth), are generated, apparently with the wave equation? (I am not certain of the exact forward model.)

The code is implemented in generateHyperbolic.m.

The function used is:

D = hyperbolic_events(dt, f0, tmax, offset, tau, v, amp, snr, L);
% dt: sampling interval (seconds)
% f0: center frequency (Hz)
% tmax: maximum simulated time (seconds)
% offset: offset vector (meters)
% tau, v, amp: intercept (seconds), rms velocity (m/s), and amplitude for each hyperbolic event
% snr: signal-to-noise ratio (maximum amplitude of clean signal / maximum amplitude of noise)
% L: the random noise is the average of L samples

The parameter settings in the paper are as follows:

dt = 2./1000;
tmax = 2.;
n = 100;
offset = (-n:n)*10;
tau = [.5, .8, 1., 1.4];
v = [1700, 1800, 2000, 2300];   
amp = [.4, .4, .6, .5];
f0 = 30;
snr = Inf; 
L = 20;
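To make the geometry concrete, here is a minimal Python sketch of what a hyperbolic-event generator does, assuming the standard NMO traveltime t(x) = sqrt(tau^2 + (x/v)^2) and a Ricker wavelet of center frequency f0 (the noise-related arguments snr and L are omitted; this is not the actual hyperbolic_events.m implementation):

```python
import numpy as np

def hyperbolic_events(dt, f0, tmax, offset, tau, v, amp):
    """Sketch of a hyperbolic-event generator: each event follows the
    NMO traveltime t(x) = sqrt(tau^2 + (x/v)^2) and carries a Ricker
    wavelet of center frequency f0 (Hz)."""
    nt = int(round(tmax / dt)) + 1          # number of time samples
    t = np.arange(nt) * dt                  # time axis (s)
    D = np.zeros((nt, len(offset)))
    for k in range(len(tau)):
        for j, x in enumerate(offset):
            t_k = np.sqrt(tau[k] ** 2 + (x / v[k]) ** 2)  # hyperbolic traveltime
            arg = (np.pi * f0 * (t - t_k)) ** 2
            D[:, j] += amp[k] * (1 - 2 * arg) * np.exp(-arg)  # Ricker wavelet
    return D

# Parameter settings from the paper
dt, tmax, f0 = 2 / 1000, 2.0, 30
n = 100
offset = np.arange(-n, n + 1) * 10
tau = [0.5, 0.8, 1.0, 1.4]
v = [1700, 1800, 2000, 2300]
amp = [0.4, 0.4, 0.6, 0.5]
D = hyperbolic_events(dt, f0, tmax, offset, tau, v, amp)
```

With these parameters the result is a 1001 x 201 panel (time samples x traces); at zero offset each event peaks at its intercept time tau with its own amplitude.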

The generated seismic data is as follows:

However, it is not yet clear to me how the parameter settings affect the shape of the curves in the image, or how they correspond to actual acquisition quantities such as the trace spacing and the sampling interval used during real seismic data acquisition.

2. Generate noisy downsampled seismic data

Input: Synthetic Seismic Data

Output: Noisy downsampled seismic data

Data preprocessing includes four steps: normalization, adding noise, downsampling, and pre-interpolation.

2.1 Normalization

1) Method:

Min-Max Normalization
 x' = (x - X_min) / (X_max - X_min)
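The formula above can be sketched in a few lines of Python (illustrative only):

```python
import numpy as np

def minmax_normalize(x):
    """Min-max normalization: rescale the data to the [0, 1] range."""
    return (x - x.min()) / (x.max() - x.min())

d = np.array([-2.0, 0.0, 2.0])
print(minmax_normalize(d).tolist())  # [0.0, 0.5, 1.0]
```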

2) Function:

Accelerate model convergence and save training time

2.2 Add noise

The noise added here is random noise, with the noise level specified on the [0, 255] gray-level scale.
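A sketch of what this likely looks like, assuming (as is common in IRCNN-style denoisers) additive Gaussian noise whose level sigma is given on the [0, 255] scale and divided by 255 when the data are normalized to [0, 1]:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(label, sigma):
    """Add Gaussian noise at level sigma (on the [0, 255] scale) to data
    normalized to [0, 1]. The Gaussian assumption is mine, not stated
    explicitly in the post."""
    return label + rng.normal(0.0, sigma / 255.0, size=label.shape)

clean = np.zeros((100, 100))
noisy = add_noise(clean, 25)   # noise level 25 on the [0, 255] scale
```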

2.3 Downsampling

1) Generate a downsampling template

There are two options here: downsampling with a specified template, or generating a custom downsampling template. The custom case is covered here.

The custom downsampling supports randomly removing rows, randomly removing columns, regularly removing columns, and irregularly removing columns.

Irregular column removal is what is used here.

e.g.: mask = projMask(D, Ratio, sampleType)

The parameter D is the input image, Ratio is the fraction of the original image to retain, and sampleType is the sampling type, here 'iregc' (irregular removal of column data).

The implementation code is as follows:

mask = zeros(size(D));       % initialize the mask to all zeros
index = randperm(n);         % n is the number of columns in the original image; randomly permute the n integers
sample = index(1:fix(n*r));  % fix rounds toward zero; r is the fraction to retain
mask(:, sample) = 1;         % mark the retained columns

Finally, a mask containing only 0s and 1s is obtained: 0 means the value at the corresponding position of the original image is removed, and 1 means it is retained.
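The same mask construction can be sketched in Python (my re-expression of the MATLAB snippet above, not the actual projMask.m):

```python
import numpy as np

rng = np.random.default_rng(0)

def proj_mask(D, ratio):
    """Irregular column sampling ('iregc'): keep a random fraction of
    the columns and zero out the rest."""
    nt, n = D.shape
    mask = np.zeros((nt, n))
    index = rng.permutation(n)                  # random permutation of column indices
    sample = index[:int(np.fix(n * ratio))]     # columns to keep
    mask[:, sample] = 1
    return mask

D = np.ones((100, 201))
mask = proj_mask(D, 0.5)   # keep fix(201 * 0.5) = 100 of the 201 columns
```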

2) Get downsampled data

The downsampled data is obtained by element-wise multiplication of the original data with the downsampling template. Note one small extra step here: the missing values are filled with the mean of the original image, so that at least the missing part does not look so abrupt.

% Down-sampling input.
input = nlabel.*mask;              % nlabel is the noisy seismic data
input(mask==0) = mean(nlabel(:));  % fill missing values with the mean

2.4 Pre-interpolation

Function: pre-interpolation improves both efficiency and accuracy, much like initializing a neural network's weights with a well-chosen scheme instead of purely at random.

The Shepard interpolation method is used here, also known as inverse distance weighting. Its basic idea is to define the interpolated value as a weighted average of the function values at the data points, with weights inversely proportional to the distance.

The exact type of interpolation algorithm is not crucial; what matters is having this step. Good initialization is very important!
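A minimal sketch of Shepard (inverse-distance-weighted) pre-interpolation along the trace axis, filling each missing column from the known columns; this illustrates the idea only and is not the repository's actual implementation:

```python
import numpy as np

def shepard_fill(data, mask, p=2):
    """Shepard / inverse-distance-weighted pre-interpolation: each
    missing column is a weighted average of the known columns, with
    weights proportional to 1 / distance**p."""
    nt, n = data.shape
    known = np.where(mask[0] == 1)[0]        # indices of retained columns
    out = data.copy()
    for j in range(n):
        if mask[0, j] == 0:                  # missing column
            dist = np.abs(known - j).astype(float)
            w = 1.0 / dist ** p              # inverse-distance weights
            out[:, j] = (data[:, known] * w).sum(axis=1) / w.sum()
    return out

data = np.array([[1.0, 0.0, 3.0],
                 [1.0, 0.0, 3.0]])
mask = np.array([[1, 0, 1],
                 [1, 0, 1]])
filled = shepard_fill(data, mask)  # middle column becomes the IDW average of its neighbors
```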

3. Implement the iterative process of IRCNN-FPOCS

3.1 The overall description of the paper algorithm is as follows:

%  Step 5 of the paper's algorithm
maxv = 30;
epsilon = 10;
LambdaS = maxv * exp(((0:totalIter-1) * (log(epsilon) - log(maxv))) / (totalIter-1));  

for itern = 1 : totalIter
    %%%%%%%%%%%%%%%
    
    d_old = output;
    
    % Corresponds to steps 3, 4, and 7 of the paper's algorithm; here alpha = 1, and tt is the s_t of step 3.
    % Step 7 acts a bit like optimizer.zero_grad() in network training; it can go before or after.
    tt = next_t(t);  
    beta = (t-1)/tt;
    output = mask.*input + (1 - mask).*(output + beta * (output - d_old));
   
    d_old = output;
    %%%%%%%%%%%%%%%%%%%%%
    
    if ns(itern+1) ~= ns(itern)
        [net] = loadmodel(LambdaS(itern), CNNdenoiser);    
        net = vl_simplenn_tidy(net); % repair an incomplete or outdated network
        if useGPU
            net = vl_simplenn_move(net, 'gpu');
        end
    end
    
    for k = 1 : inIter
        res    = vl_simplenn(net,output,[],[],'conserveMemory',true,'mode','test');
        output = output - res(end).x;
    end
end

The goal of step 5 of the paper is to set the noise level \sigma_t. The maximum is set to 30 and the minimum to 10. LambdaS in the code generates totalIter (here 30) values that decay exponentially from 30 down to 10, which serve as the noise-level parameters of the IRCNN denoising models.
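The schedule is a straight line in log space between maxv and epsilon; the MATLAB expression translates directly to Python:

```python
import numpy as np

totalIter, maxv, epsilon = 30, 30.0, 10.0

# Log-linearly (i.e., exponentially) decaying noise-level schedule
# from maxv down to epsilon over totalIter iterations.
LambdaS = maxv * np.exp(np.arange(totalIter)
                        * (np.log(epsilon) - np.log(maxv)) / (totalIter - 1))
```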

 output = mask.*input + (1 - mask).*(output + beta * (output - d_old));  

input is the original data after adding noise, downsampling, and filling the missing values with the mean,

output is initialized at the start of the iteration as the pre-interpolation result of the noisy, mean-filled data; in subsequent iterations it is updated by IRCNN. Here the \alpha in step 7 of the paper is taken as 1.

d_old and output represent the estimates d from the previous and current iterations, respectively.
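The data-consistency plus momentum step can be sketched in Python. The form of next_t is my assumption, based on the standard FISTA update t_{k+1} = (1 + sqrt(1 + 4 t_k^2)) / 2; fpocs_step re-expresses the MATLAB update line above:

```python
import numpy as np

def next_t(t):
    """FISTA-style momentum parameter update (assumed form of next_t.m):
    t_{k+1} = (1 + sqrt(1 + 4 * t_k^2)) / 2."""
    return (1 + np.sqrt(1 + 4 * t * t)) / 2

def fpocs_step(output, d_old, input_data, mask, t):
    """One data-consistency + momentum step of FPOCS (sketch):
    observed samples are reinserted from the input, while unobserved
    samples take the momentum-extrapolated estimate."""
    tt = next_t(t)
    beta = (t - 1) / tt
    new = mask * input_data + (1 - mask) * (output + beta * (output - d_old))
    return new, tt
```

With mask equal to 1 everywhere, the step simply returns the observed input, which shows that the observed samples are always kept consistent with the data.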

res = vl_simplenn(net,output,[],[],'conserveMemory',true,'mode','test'); This step runs the IRCNN deep model in test mode: the input output is the noisy data, and the result res is the residual (i.e., the separated noise).

I do not yet know exactly what ns in the code is for; judging from the guard ns(itern+1) ~= ns(itern), it appears to map each iteration to a noise-level index so that the denoiser model is reloaded only when the level changes, but this remains to be confirmed.

3.2 How to embed the IRCNN deep model into MATLAB?

Using a deep learning network in MATLAB requires the MatConvNet toolbox.

1) Load the trained parameters into the model

[net] = loadmodel(LambdaS(itern), CNNdenoiser);    
net = vl_simplenn_tidy(net); % repair incomplete or outdated network

2) Test output

res = vl_simplenn(net,output,[],[],'conserveMemory',true,'mode','test');

3) The model training process is implemented in Python.

4. Other instructions

For a detailed description of the code, see the comments, which are not publicly available yet.

Origin blog.csdn.net/u014655960/article/details/128566529