Hot Papers | A Two-Component Deep Learning Network for SAR Image Denoising

1. Introduction

    In fields such as object recognition, object tracking, and image classification, the demand for high-quality synthetic aperture radar (SAR) images is urgent. However, SAR image quality is degraded by multiplicative speckle noise, which greatly hinders the application of these images.

    Since 2017, deep-learning-based methods have been able to learn the underlying mapping between noisy and noise-free images well. Unlike the optical imaging case, however, noise-free SAR images cannot be obtained directly in reality. To generate training data, speckle noise is typically simulated on optical images with a noise model, which causes a problem: a fixed amount of noise must be added during simulation. Once the added noise level is fixed, the deep learning model can only learn that fixed noise distribution, which leads to over-smoothing or fake details and a loss of generalization. Therefore, a deep learning model with self-correction capability is the key to achieving superior SAR image denoising.
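    As a rough illustration of the training-data simulation step described above (the post does not give the paper's exact noise model), the sketch below adds multiplicative gamma-distributed speckle to a clean optical image; the number of looks and the stand-in image are assumptions for demonstration only.

```python
import numpy as np

def add_speckle(clean, looks=4, seed=0):
    """Simulate an L-look speckled image from a clean optical image.

    Multiplicative gamma speckle with unit mean is a common SAR noise
    model; the fixed `looks` value is exactly the limitation pointed out
    above (the trained network only ever sees one noise level).
    """
    rng = np.random.default_rng(seed)
    speckle = rng.gamma(shape=looks, scale=1.0 / looks, size=clean.shape)
    return clean.astype(np.float64) * speckle

# Example: simulate a 1-look (strongest speckle) training pair.
clean = np.random.rand(256, 256)   # stand-in for an optical image
noisy = add_speckle(clean, looks=1)
```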

    This paper introduces the concept of the texture level map (TLM) and designs a two-component deep learning network to solve the above problems. The TLM is a heat map that indicates how random the texture patterns are and how uniform each region of the image is. The network consists of two subnets: a texture estimation subnet and a noise removal subnet. The former generates the TLM; the latter removes noise using the original SAR image together with its corresponding TLM.

2. Method

A. Texture level map

    Unlike optical image denoising, where results can be measured against a clean reference, noise-free real SAR images cannot be acquired. Therefore, compared with the structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR), the equivalent number of looks (ENL) is the most common evaluation index for SAR image denoising. However, an over-smoothing filter may achieve a relatively high ENL, so ENL alone is not a sufficient evaluation index. The paper therefore uses a second-order statistic of the gray-level co-occurrence matrix (GLCM), namely homogeneity. The more random a texture pattern is, the lower its homogeneity. Local homogeneity is computed with a sliding window and a fixed step size, and the resulting map is then upsampled by bicubic interpolation back to the original image size. The final output is the texture level map (TLM).
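    A minimal sketch of this TLM construction, assuming scikit-image's GLCM utilities; the window size, step, and gray-level quantization below are illustrative choices, not the paper's settings.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from skimage.transform import resize

def texture_level_map(img, window=32, step=16, levels=64):
    """Local GLCM homogeneity on a sliding window, then bicubic
    upsampling back to the input size (a TLM-style heat map)."""
    # Quantize to `levels` gray levels so the GLCM stays small.
    q = np.floor(img.astype(np.float64) / (img.max() + 1e-12) * (levels - 1)).astype(np.uint8)
    h, w = q.shape
    rows = (h - window) // step + 1
    cols = (w - window) // step + 1
    local = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            patch = q[i * step:i * step + window, j * step:j * step + window]
            glcm = graycomatrix(patch, distances=[1], angles=[0],
                                levels=levels, symmetric=True, normed=True)
            local[i, j] = graycoprops(glcm, 'homogeneity')[0, 0]
    # Bicubic interpolation (order=3) restores the original image size.
    return resize(local, (h, w), order=3, mode='reflect', anti_aliasing=False)
```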

B. Two-component deep learning network

    To embed the TLM into the network, the paper designs a two-component deep learning network: FCNe (a fully convolutional network) is the texture estimation subnet, and FCNd is the denoising subnet. FCNe takes the noisy image as input and outputs a TLM of the same size. The noisy image and the TLM are then concatenated into a two-channel input to FCNd, which outputs the final filtered result. The network architecture is shown in Figure 1.

**Figure 1. Network architecture**

    FCNe consists of five convolution layers with ReLU activations, following the typical FCN paradigm; its convolution kernel size is set to 3×3. FCNd is a U-shaped structure that uses skip connections and deconvolution to enlarge the receptive field and extract multi-scale features. It comprises a dimension-reduction path (blue in Fig. 1) and a size-restoration path (yellow in Fig. 1). The basic component of FCNd is the dense block, built from a BN layer, a ReLU activation layer, a convolution layer, and a dropout layer (to prevent over-fitting). All FCNd convolution kernels are also 3×3. At each dimension-reduction step, a max-pooling layer halves the input size while increasing the receptive field. The dimension-reduction path extracts multi-scale content features at the cost of spatial resolution, which can cause a vanishing-gradient problem. The size-restoration path therefore recovers the output size by deconvolution and uses skip connections between features of the two paths to preserve details and make the network easier to train.
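    A compact PyTorch sketch of how the two subnets could fit together, based only on the description above (five 3×3 conv+ReLU layers for FCNe, and a two-channel image+TLM input to a U-shaped FCNd); the channel widths, depth, and dropout rate are assumptions, and the real FCNd's dense blocks are only hinted at by a single BN-ReLU-Conv-Dropout block here.

```python
import torch
import torch.nn as nn

class FCNe(nn.Module):
    """Texture estimation subnet: five 3x3 conv layers with ReLU,
    outputting a single-channel TLM the same size as the input."""
    def __init__(self, width=64):
        super().__init__()
        layers = [nn.Conv2d(1, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(3):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(width, 1, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

class MiniFCNd(nn.Module):
    """Heavily simplified stand-in for the U-shaped denoising subnet:
    one max-pooling (dimension-reduction) step, one deconvolution
    (size-restoration) step, and a skip connection between them."""
    def __init__(self, width=64, p_drop=0.1):
        super().__init__()
        def block(c_in, c_out):
            return nn.Sequential(
                nn.BatchNorm2d(c_in), nn.ReLU(inplace=True),
                nn.Conv2d(c_in, c_out, 3, padding=1), nn.Dropout2d(p_drop))
        self.enc = block(2, width)                  # two-channel input: image + TLM
        self.pool = nn.MaxPool2d(2)                 # halve spatial size
        self.mid = block(width, width)
        self.up = nn.ConvTranspose2d(width, width, 2, stride=2)  # deconvolution
        self.dec = block(2 * width, width)          # skip connection concatenated here
        self.out = nn.Conv2d(width, 1, 3, padding=1)

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.pool(e))
        d = self.dec(torch.cat([self.up(m), e], dim=1))
        return self.out(d)

# Two-component pipeline: estimate the TLM, concatenate, then denoise.
noisy = torch.rand(1, 1, 128, 128)
tlm = FCNe()(noisy)
clean = MiniFCNd()(torch.cat([noisy, tlm], dim=1))
```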

3. Experiments & Conclusion

    The experiments in the paper cover two settings: simulated SAR images and real SAR images. In the simulated SAR image experiments, evaluation is performed with the peak signal-to-noise ratio (PSNR), the structural similarity index (SSIM), and the edge preservation index (EPI). The model is compared against the DnCNN, WNNM, and GFCNN algorithms; the comparison results are as follows (best values in bold).
[Table 1: comparison results on simulated SAR images]
    In this network, the input pattern is compared with standard patterns obtained previously from the study. The comparison is not performed directly over one large matching window, but over small segment-wise matching windows. Only when the difference between the two patterns stays within a certain limit in every small window does the network judge the patterns to be consistent.

    In the real SAR image experiments, since no noise-free image is available, the indices used are the equivalent number of looks (ENL) and UQM, which is divided into two parts, UQME and UQMH, for evaluation: one measures the uniformity of the ratio image and the other the structure remaining in the ratio image. The smaller the UQME and UQMH values, the better the filter's performance.
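    For reference, ENL is conventionally computed as the squared mean over the variance of a visually homogeneous region; the sketch below uses that standard definition with a hand-picked patch. UQME and UQMH are not reproduced because the post does not give their formulas.

```python
import numpy as np

def enl(region):
    """Equivalent number of looks over a homogeneous region:
    ENL = mean^2 / variance. A higher ENL means smoother output,
    which is why ENL alone can reward over-smoothing filters."""
    region = region.astype(np.float64)
    return region.mean() ** 2 / region.var()

# Example: evaluate a denoised image on a chosen homogeneous patch.
denoised = np.random.gamma(4.0, 0.25, size=(512, 512))  # stand-in image
print(enl(denoised[100:150, 200:250]))
```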

    As can be seen from Table 2, because DnCNN and WNNM smooth too aggressively, they obtain high ENL values. The proposed model achieves a better ENL than GFCNN and FCNd alone. For UQME and UQMH, the proposed method's results are favorable: more structure is retained and less residual structure appears in the ratio image. In general, the method removes noise while retaining much of the structure and detail, and achieves good results.
[Table 2: comparison results on real SAR images, with visual result figures]


Source: blog.csdn.net/ShenggengLin/article/details/105301153