Literature reading: An implementation of the seismic resolution enhancing network based on GAN

topic

An implementation of the seismic resolution enhancing network based on GAN

Summary

For seismic data, this paper leverages deep learning to learn features at different levels and merge them to recover the missing resolution.

  1. Introduces a GAN to seismic data processing;
  2. For 3D field datasets, adaptive bandwidth extension in the continuous wavelet transform domain is used to increase resolution;
  3. Experiments show that results comparable to traditional methods can be obtained, while recovering more fine-scale reflections than those methods do.

introduction

Improving seismic resolution has always been a research hotspot in the field of seismic data processing.

  • Traditional approach: the problem is treated as deconvolution: one estimates the propagating wavelet in order to build a deconvolution operator. However, estimating an accurate wavelet model is difficult.
  • Advanced resolution-enhancement methods: transform-domain attenuation compensation, inverse Q filtering, time-varying deconvolution, etc. These usually have very high computational complexity, so fast algorithms and accurate estimates of the seismic attenuation components are required for large-scale application.
  • Deep-learning methods: neural networks attack this problem from the perspective of global optimization; they have strong feature-capture ability and can learn complex mapping functions. They therefore remove the need to estimate wavelets and attenuation coefficients, and produce super-resolution results without human intervention.
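The "deconvolution" route in the first bullet can be sketched with a minimal frequency-domain (Wiener-style) deconvolution in NumPy. The wavelet, trace length, and `eps` water-level regularization below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def wiener_deconv(trace, wavelet, eps=1e-3):
    """Recover reflectivity from a trace given an estimated wavelet.

    eps is a water-level term stabilizing the division where the
    wavelet spectrum is weak (illustrative choice, not from the paper).
    """
    n = len(trace)
    W = np.fft.rfft(wavelet, n)   # wavelet spectrum
    T = np.fft.rfft(trace, n)     # trace spectrum
    # deconvolution operator conj(W) / (|W|^2 + eps) applied to the trace
    R = T * np.conj(W) / (np.abs(W) ** 2 + eps)
    return np.fft.irfft(R, n)

# Synthetic example: a single reflector convolved with a short wavelet.
reflectivity = np.zeros(64)
reflectivity[10] = 1.0
wavelet = np.array([1.0, 0.5, 0.25])
trace = np.convolve(reflectivity, wavelet)[:64]
recovered = wiener_deconv(trace, wavelet)
```

The bullet's point is precisely that this operator is only as good as the wavelet estimate: with the true wavelet the reflector is recovered almost exactly, but an inaccurate wavelet degrades the result.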

method

The overall architecture of the network consists of a generator and a discriminator. The generator aims to produce high-resolution output, while the discriminator aims to judge whether the input data is a real high-resolution sample or a fake high-resolution output produced by the generator.
[Figure: overall generator and discriminator architecture]
The basic blocks in the generator consist of residual structures and batch normalization layers; upsampling uses subpixel convolutions.
[Figure: generator building blocks and sub-pixel upsampling]
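The sub-pixel convolution mentioned above upsamples by rearranging channels into space. A NumPy sketch of that rearrangement (the `C*r*r -> C` layout convention is the common pixel-shuffle definition and an assumption here, not code from the paper):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Sub-pixel upsampling: rearrange (C*r*r, H, W) into (C, H*r, W*r).

    Output[c, h*r + i, w*r + j] = x[c*r*r + i*r + j, h, w].
    """
    c_rr, h, w = x.shape
    c = c_rr // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)   # -> (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)

x = np.arange(16, dtype=float).reshape(4, 2, 2)  # 4 channels, 2x2
y = pixel_shuffle(x, 2)                          # 1 channel, 4x4
```

In the network, a convolution first produces the r² feature channels that this rearrangement then spreads into space, so the upsampling filter itself is learned.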

Since the training of GAN is unstable, pre-training is also introduced.
In the pre-training phase, only the generator is involved, and the loss function of the generator can be written as:
$$\min_f \sum \left(G_f(\mathbf{I}^{lr}) - \mathbf{I}^{hr}\right)^2$$
where $\mathbf{I}^{lr}$ is the original low-resolution data sample, $\mathbf{I}^{hr}$ is the corresponding high-resolution sample, and $G_f$ is the generator with parameters $f$.

During formal training, the generator loss consists of MSE loss and adversarial loss as follows:
$$l_{mse} = \sum_{n=1}^{N}\left(G_f(\mathbf{I}^{lr}) - \mathbf{I}^{hr}\right)^2$$
$$l_{adv} = \sum_{n=1}^{N} -\log D_\theta\left(G_f(\mathbf{I}^{lr})\right)$$
where $D_\theta$ is the discriminator with parameters $\theta$.
The loss function of the discriminator is as follows:
$$l_{cross\_entropy} = -\sum_{n=1}^{N}\left[\log D_\theta(\mathbf{I}^{hr}) + \log\left(1 - D_\theta(G_f(\mathbf{I}^{lr}))\right)\right]$$
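Assuming $D_\theta$ outputs probabilities in $(0, 1)$, the three losses can be written down directly (a NumPy sketch, not the paper's code):

```python
import numpy as np

def l_mse(g_out, hr):
    """Content loss: sum of squared differences over the batch."""
    return np.sum((g_out - hr) ** 2)

def l_adv(d_fake):
    """Generator adversarial loss; d_fake = D_theta(G_f(I_lr))."""
    return np.sum(-np.log(d_fake))

def l_cross_entropy(d_real, d_fake):
    """Discriminator loss: push real samples toward 1, generated toward 0."""
    return -np.sum(np.log(d_real) + np.log(1.0 - d_fake))
```

A quick sanity check on the formulas: when the discriminator is maximally uncertain (outputs 0.5 everywhere), both adversarial terms reduce to multiples of $\log 2$.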

Meanwhile, the receptive field is related to the patch size and the network depth. In seismic data, structures always extend in space and depth, so exploiting the spatial correlation of the data helps recover the lost resolution.
Therefore, a 22-layer network with a patch size of $96 \times 96$ is constructed.
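Extracting the $96 \times 96$ training patches from a 2-D section can be sketched as follows; the overlap stride and section size are illustrative assumptions, not values from the paper:

```python
import numpy as np

def extract_patches(section, size=96, stride=48):
    """Slide a size x size window over a 2-D section (time x trace) with overlap."""
    h, w = section.shape
    return np.array([
        section[i:i + size, j:j + size]
        for i in range(0, h - size + 1, stride)
        for j in range(0, w - size + 1, stride)
    ])

# A 192 x 192 section yields a 3 x 3 grid of overlapping patches.
patches = extract_patches(np.zeros((192, 192)))
```

Overlapping patches let each reflector be seen in several spatial contexts, which is one way the spatial correlation noted above enters the training set.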

experiment

![experimental results](https://img-blog.csdnimg.cn/cb851595ef074081b873a5ffa8dcb8b0.png)

Origin blog.csdn.net/weixin_48320163/article/details/129203674