Super-resolution reconstruction network - DASR

video image enhancement


1. Introduction

To push himself to keep studying, Bibo writes up what he has learned recently as a blog post /(ㄒoㄒ)/~~


2. DASR

2.1 Papers and code

If you want to understand the ideas behind this network, I recommend reading the blogger linked below. He explains it better than I can, so I won't go into too much detail here.
Interpretation of the paper: https://blog.csdn.net/weixin_43972154/article/details/119327182
As for the code, it is fine to follow the author's steps one by one. When learning a new network, the most important thing is to reproduce the code.
Code link: https://github.com/The-Learning-And-Vision-Atelier-LAVA/DASR

3. Code reproduction

3.1 Prepare the environment

Since the PyTorch environment the author used is fairly old, you will keep running into unexpected problems on newer PyTorch versions. To save time, I suggest using a virtual environment. It only takes three steps:

  1. Create a virtual environment: conda create -n pytorch1.1 python=3.6, entering y when prompted.
  2. Activate it: activate pytorch1.1 (on newer conda: conda activate pytorch1.1).
  3. Install the required packages inside the activated environment: pip install -i https://pypi.douban.com/simple -r requirement.txt (remember to list the required packages in a txt file first).

3.2 Data preparation

Download these two datasets and put them in one folder. To keep errors to a minimum, name the folder DF2K.

  1. Prepare DIV2K

DIV2K dataset download link: https://data.vision.ee.ethz.ch/cvl/DIV2K/
There are many datasets at this link. I did not know which ones to download at the time and almost downloaded them all; note that you only need the two mentioned above. As for why, see the paper interpretation linked earlier.

  2. Prepare Flickr2K

Flickr2K dataset download link: https://cv.snu.ac.kr/research/EDSR/Flickr2K.tar

  3. Merge data

After the data is downloaded, put it into the DF2K folder.
A careful reader may ask why I only have 900 images: my GPU memory is insufficient, so I trained with only the DIV2K dataset.
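The merge step can be sketched in Python. This is a minimal sketch under assumed folder names (DIV2K_train_HR and Flickr2K_HR holding the HR PNGs); adjust the paths, and the renaming scheme, to whatever layout the repo's dataloader actually expects.

```python
import shutil
from pathlib import Path

def merge_datasets(sources, target):
    """Copy every PNG from each source folder into the target folder.

    Filenames are prefixed with the source folder name so that the two
    datasets cannot collide on file names.
    """
    target = Path(target)
    target.mkdir(parents=True, exist_ok=True)
    copied = 0
    for src in sources:
        src = Path(src)
        for img in sorted(src.glob("*.png")):
            shutil.copy(img, target / f"{src.name}_{img.name}")
            copied += 1
    return copied

# Example (assumed download locations):
# merge_datasets(["./DIV2K_train_HR", "./Flickr2K_HR"], "./DF2K")
```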

3.3 Start training

The next step is to start training. If you are working on a server, you can run main.sh directly; otherwise, use the following command:

python main.py --dir_data=./dataset/ --model=blindsr --scale=4 --blur_type=aniso_gaussian --noise=25.0 --lambda_min=0.2 --lambda_max=4.0
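The blur-related flags (blur_type, lambda_min/lambda_max at training time, lambda_1/lambda_2/theta at test time) describe an anisotropic Gaussian degradation kernel. The sketch below is my own illustration of such a kernel, not the repository's code: theta rotates the kernel and the two lambdas are the variances along its principal axes.

```python
import numpy as np

def aniso_gaussian_kernel(size, theta, lambda_1, lambda_2):
    """Anisotropic Gaussian blur kernel of shape (size, size).

    theta is the rotation angle in radians; lambda_1 and lambda_2 are
    the variances along the kernel's two principal axes.
    """
    # Covariance matrix: rotate a diagonal matrix of the two variances.
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    sigma = rot @ np.diag([lambda_1, lambda_2]) @ rot.T
    inv = np.linalg.inv(sigma)
    # Evaluate exp(-0.5 * x^T Sigma^-1 x) on a grid centered at 0.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    pts = np.stack([xx, yy], axis=-1)
    kernel = np.exp(-0.5 * np.einsum("...i,ij,...j->...", pts, inv, pts))
    # Normalize so convolving with the kernel preserves brightness.
    return kernel / kernel.sum()
```

With theta=0, lambda_1=0.2 and lambda_2=4.0 (the extremes of the training range above), the kernel is sharp along one axis and strongly elongated along the other.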

You can watch the training progress in the log output.


Note that you may run into some errors here; as mentioned in the environment section, most of them come from PyTorch version incompatibilities.


3.4 Test data


  1. The original paper evaluates on four benchmark test sets; they can be downloaded from: https://github.com/XPixelGroup/BasicSR/blob/a19aac61b277f64be050cef7fe578a121d944a0e/docs/Datasets.md
  2. For testing I only used the Set14 dataset, specifically its HR and LR folders.


  3. Put the Set14 HR images and LR images into the benchmark folder.

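The expected layout can be created with a few lines of Python. I am assuming the EDSR-style convention that this codebase inherits (benchmark/Set14/HR and benchmark/Set14/LR_bicubic/X4); check the repository's dataloader if your version uses different subfolder names.

```python
from pathlib import Path

def make_benchmark_tree(root, dataset="Set14", scale=4):
    """Create the HR / LR folder layout the test script expects.

    Assumed EDSR-style layout: benchmark/<dataset>/HR and
    benchmark/<dataset>/LR_bicubic/X<scale>.
    """
    base = Path(root) / "benchmark" / dataset
    hr = base / "HR"
    lr = base / "LR_bicubic" / f"X{scale}"
    hr.mkdir(parents=True, exist_ok=True)
    lr.mkdir(parents=True, exist_ok=True)
    return hr, lr

# Example: make_benchmark_tree("./dataset"), then copy the Set14
# HR images into the HR folder and the x4 LR images into X4.
```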


3.5 Start the test

Now start the test. The test command is as follows:

 python test.py --test_only  --dir_data=./dataset/ --data_test=Set14 --model=blindsr  --scale=4 --resume=114 --blur_type=aniso_gaussian --noise=10.0 --theta=0.0  --lambda_1=0.2 --lambda_2=4.0 --save_results=True
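The test script reports PSNR itself; if you want to sanity-check a saved result against its ground truth by hand, a minimal PSNR function (my own sketch, not the repo's implementation) looks like this:

```python
import numpy as np

def psnr(hr, sr, max_val=255.0):
    """Peak signal-to-noise ratio between two images of the same shape.

    hr and sr are uint8 or float arrays; max_val is the maximum
    possible pixel value (255 for 8-bit images).
    """
    mse = np.mean((hr.astype(np.float64) - sr.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Load the HR image and the network output (e.g. with imageio or PIL), crop the borders the same way the repo does, and compare.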

If you hit errors here, they are most likely the same version-compatibility issues as during training.
In addition, you can also use the official pretrained weights for a quick test.

3.6 Test results

Note that training results are saved under the experiment folder, in a subfolder named after the model you used.
Because I only trained for 114 epochs, the results are not very good, but I will share them anyway.


4. My Code

Below is the link to my code. If you would rather not set everything up yourself, you can download it directly; I have already included all the data, so there is no need to download the datasets one by one.
Link: https://pan.baidu.com/s/1f_Gq-pvMyfth-FebiYpx7A
Extraction code: fasf


Origin blog.csdn.net/CharmsLUO/article/details/125332095