Super-Resolution CARN: Fast, Accurate, and Lightweight Super-Resolution with Cascading Residual Network

This paper presents a lightweight cascading residual network that is fast while still delivering good performance.

Problem

Although deep learning methods improve the quality of super-resolution (SR) images, they are slow, which makes them hard to apply in real-world scenarios. From this perspective, lightweight design is of great practical importance for deep learning models.
There are many ways to reduce the number of parameters, but the simplest and most effective is to use a recursive network. For example, DRCN uses recursion to reduce redundant parameters, and DRRN improves on DRCN by adding residual connections to its architecture. Compared with a standard CNN, these models cut the parameter count effectively while showing good performance.
However, these models have two drawbacks:
(1) the input image is upsampled before being fed into the CNN;
(2) the depth or width of the network is increased to compensate for the loss caused by the recursive structure.
These choices let the models preserve detail when reconstructing the image, but they increase the number of operations and the runtime.
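To make the recursive idea concrete, here is a minimal PyTorch sketch (my own illustration, not DRCN's or DRRN's actual code): a single convolution whose weights are unrolled several times, so the effective depth grows while the parameter count stays that of one layer.

```python
import torch.nn as nn

class RecursiveResidualBlock(nn.Module):
    """Unrolls ONE shared conv several times: depth grows,
    parameter count does not."""
    def __init__(self, channels=64, steps=3):
        super().__init__()
        self.shared_conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU()
        self.steps = steps

    def forward(self, x):
        out = x
        for _ in range(self.steps):
            # The same weights are applied at every step (the recursion),
            # with a residual connection back to the block input.
            out = x + self.shared_conv(self.relu(out))
        return out
```

Whatever value `steps` takes, the block owns exactly one convolution's worth of weights, which is the redundancy reduction these papers exploit.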

Solution

The authors propose a cascading residual network (CARN) and a mobile variant of it, CARN-M.
The middle part of the model is a ResNet-style design; on top of it, a cascading mechanism integrates features from multiple layers at both the local and the global level, letting the network receive extra information from input representations at different levels. Besides the CARN model, the authors also provide the CARN-M model, whose performance drops slightly but which runs faster.
Related work on lightweight networks:
(1) Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding.
(2) SqueezeNet: AlexNet-Level Accuracy with 50x Fewer Parameters and <0.5MB Model Size. This paper builds an AlexNet-based architecture that reaches a similar level of performance with 50× fewer parameters.
(3) MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. It builds an efficient network out of depthwise separable convolutions, an idea that traces back to the paper "Rigid-motion scattering for image classification"; a sketch of this factorization follows below.
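As an illustration of the depthwise separable convolution that MobileNets are built from, here is a minimal PyTorch sketch (channel counts are placeholders, not MobileNet's actual configuration):

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """MobileNet-style factorization: a per-channel (depthwise) 3x3 conv
    followed by a 1x1 (pointwise) conv that mixes channels."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # groups=in_ch gives every input channel its own 3x3 filter.
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))
```

For a 3×3 layer this costs 9·in_ch + in_ch·out_ch weights instead of the standard 9·in_ch·out_ch, which is where the large savings come from.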

Network Architecture

As the figure shows, CARN turns the residual blocks of ResNet into a cascading residual network.
[Figure: CARN network architecture]
On top of the ResNet backbone, local and global cascading connections are added: the outputs of intermediate layers are concatenated into higher layers, with same-padding convolutions keeping the feature maps at one size so that cascading between layers is easy, and the cascaded features finally converge on a single 1×1 convolution layer.
It has the following three characteristics:

  • Global and local cascading connections
  • Intermediate features are cascaded and then combined in 1×1 convolution blocks
  • The multi-level representation and shortcut connections let information propagate more efficiently

However, the advantage of the multi-level representation is confined to the inside of each local cascading block; in addition, the multiply operations of the 1×1 convolutions sitting on the shortcut connections can hinder this flow of information, so it is reasonable that performance degrades somewhat.
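To make the cascading mechanism concrete, here is a minimal PyTorch sketch of a local cascading block under my reading of the architecture (block count and channel width are illustrative, not the paper's exact configuration): each residual block's output is concatenated with all earlier features and fused back down by a 1×1 convolution.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1))
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(x + self.body(x))

class CascadingBlock(nn.Module):
    """Local cascading: every residual block's output is concatenated
    with all earlier features, then fused back to `ch` channels by a
    1x1 convolution."""
    def __init__(self, ch=64, n_blocks=3):
        super().__init__()
        self.blocks = nn.ModuleList(ResidualBlock(ch) for _ in range(n_blocks))
        # The i-th 1x1 fusion conv sees the input plus i+1 block outputs.
        self.fuse = nn.ModuleList(
            nn.Conv2d(ch * (i + 2), ch, 1) for i in range(n_blocks))

    def forward(self, x):
        feats, out = [x], x
        for block, fuse in zip(self.blocks, self.fuse):
            feats.append(block(out))
            out = fuse(torch.cat(feats, dim=1))
        return out
```

Global cascading repeats the same pattern one level up, treating whole cascading blocks the way this block treats its residual blocks.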

Efficient CARN

To improve the efficiency of CARN, the authors propose a residual-E block.
[Figure: residual-E block]
The approach is similar to MobileNet's, but the depthwise convolution is replaced with a group convolution. Since the grouped intermediate convolution comes with an efficiency/performance trade-off, the user can choose a suitable group size.
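A minimal PyTorch sketch of this idea, under the assumption that the block uses two grouped 3×3 convolutions followed by a 1×1 convolution to re-mix the groups (the exact layer ordering in the paper may differ):

```python
import torch.nn as nn

class ResidualEBlock(nn.Module):
    """Residual-E sketch: grouped 3x3 convs cut the weight count by
    roughly the group factor; a final 1x1 conv re-mixes the groups."""
    def __init__(self, ch=64, groups=4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1, groups=groups), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1, groups=groups), nn.ReLU(),
            nn.Conv2d(ch, ch, 1))  # pointwise conv mixes the groups
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(x + self.body(x))
```

With ch=64 and groups=4, the two 3×3 layers cost 2·9·64·64/4 ≈ 18K weights instead of 2·9·64·64 ≈ 74K, at the price of the 1×1 layer's 64·64 ≈ 4K.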
To further reduce the number of parameters, the paper uses a trick similar to recursive networks: the parameters of the cascading blocks are shared, which turns the blocks into an efficient recursion.
[Figure: cascading blocks with shared (recursive) parameters]
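Here is a minimal PyTorch sketch of that sharing trick as I understand it (my own illustration, not the official code): one block's weights are reused at every cascading stage, while the 1×1 fusion convolutions remain stage-specific.

```python
import torch
import torch.nn as nn

class SharedCascadingBackbone(nn.Module):
    """One block's weights are reused at every cascading stage (the
    recursion); only the 1x1 fusion convs are stage-specific."""
    def __init__(self, ch=64, n_stages=3):
        super().__init__()
        # The parameters of this block are counted once...
        self.block = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        # ...but the block itself is applied at every stage below.
        self.fuse = nn.ModuleList(
            nn.Conv2d(ch * (i + 2), ch, 1) for i in range(n_stages))

    def forward(self, x):
        feats, out = [x], x
        for fuse in self.fuse:
            feats.append(self.block(out))  # same weights at every stage
            out = fuse(torch.cat(feats, dim=1))
        return out
```

However many stages reuse the block, it contributes a single set of weights, which is the parameter saving the recursion buys.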

Open questions

How exactly should the parameter sharing and the recursion be understood?

Origin: blog.csdn.net/qq_41332469/article/details/91997007