Video Image Enhancement: Super-Resolution Reconstruction Networks
1. Introduction
To push myself to keep studying, I (Bibo) write up what I have learned recently as blog posts /(ㄒoㄒ)/~~
2. DASR
2.1 Paper and code
If you want to learn more about the idea behind this network, I recommend reading the blog post linked below. Its author explains it better than I could, so I won't go into much detail here.
Interpretation of the paper: https://blog.csdn.net/weixin_43972154/article/details/119327182
As for the code, it is fine to just follow the author's steps one by one. When learning a new network, the most important thing is to reproduce the code yourself.
Code link: https://github.com/The-Learning-And-Vision-Atelier-LAVA/DASR
3. Code reproduction
3.1 Prepare the environment
Since the PyTorch version the author used is relatively old, newer versions of PyTorch tend to cause unexpected problems. To save time, I suggest using a dedicated virtual environment. It only takes three steps:
- Create a virtual environment
conda create -n pytorch1.1 python=3.6
(confirm the prompts that appear along the way)
- Activate the created virtual environment
conda activate pytorch1.1
- Install the required packages in the activated virtual environment
pip install -i https://pypi.douban.com/simple -r requirement.txt
(remember to list the required packages in requirement.txt first)
3.2 Data preparation
Download these two datasets and put them in one folder. To avoid errors, name the folder DF2K.
- Prepare DIV2K
Dataset download link: https://data.vision.ee.ethz.ch/cvl/DIV2K/
This link contains many datasets; I didn't know which ones to download at first and nearly downloaded them all. Note that you only need to download the two mentioned here. As for why, see the paper interpretation linked above.
- Prepare Flickr2K
Dataset download link: https://cv.snu.ac.kr/research/EDSR/Flickr2K.tar
- Merge data
After the data is downloaded, put it all into the DF2K folder; the resulting layout: [folder screenshot]
A careful reader may ask why I only have 900 pictures: my GPU memory is limited, so I train on the DIV2K dataset only.
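The screenshot above showed the contents of my DF2K folder; roughly, the layout I use can be sketched as follows (the folder names here are my own assumption based on common conventions, so check the paths DASR's data loader actually expects):

```python
from pathlib import Path

# Sketch of the dataset layout I ended up with. "HR" is an assumed
# subfolder name; verify it against DASR's data options.
root = Path("dataset/DF2K")
(root / "HR").mkdir(parents=True, exist_ok=True)  # 0001.png ... 0900.png go here
print(sorted(p.name for p in root.iterdir()))  # ['HR']
```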
3.3 Start training
The next step is to start training. If you have a server available, you can run the training script main.sh directly; otherwise, you can use the following command:
python main.py --dir_data=./dataset/ --model=blindsr --scale=4 --blur_type=aniso_gaussian --noise=25.0 --lambda_min=0.2 --lambda_max=4.0
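To make the degradation flags concrete: `--blur_type=aniso_gaussian` with `--lambda_min`/`--lambda_max` means training covers anisotropic Gaussian blur kernels whose two axis widths (the eigenvalues of the kernel's covariance) are sampled in [0.2, 4.0], and `--noise=25.0` bounds the additive noise level. Below is a minimal numpy sketch of such a kernel; it is my own illustration of the idea, not the repo's exact code:

```python
import numpy as np

def aniso_gaussian_kernel(size=21, theta=0.0, lambda_1=0.2, lambda_2=4.0):
    """Anisotropic Gaussian blur kernel: covariance with eigenvalues
    lambda_1, lambda_2, rotated by angle theta (radians)."""
    # Covariance Sigma = R @ diag(lambda_1, lambda_2) @ R.T
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    sigma = R @ np.diag([lambda_1, lambda_2]) @ R.T
    inv = np.linalg.inv(sigma)
    # Grid of (x, y) offsets centred on the kernel
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    coords = np.stack([xx, yy], axis=-1)  # (size, size, 2)
    # Unnormalised Gaussian density, then normalise to sum to 1
    expo = np.einsum('...i,ij,...j->...', coords, inv, coords)
    kernel = np.exp(-0.5 * expo)
    return kernel / kernel.sum()

k = aniso_gaussian_kernel(size=21, theta=np.pi / 4, lambda_1=0.2, lambda_2=4.0)
```

Varying `theta` rotates the blur direction, which is exactly what the `--theta` flag in the test command below fixes at evaluation time.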
The training process looks roughly like this: [training log screenshot]
Note the following issues: [screenshots]
3.4 Test data
- The original paper uses four test datasets; download links for them are collected here: https://github.com/XPixelGroup/BasicSR/blob/a19aac61b277f64be050cef7fe578a121d944a0e/docs/Datasets.md
- When testing I only used the Set14 dataset, and within it these two folders: [folder screenshot]
- Put the Set14 HR images and LR images in the benchmark folder; the data storage layout looks like this: [folder screenshot]
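For reference, the benchmark layout I use can be sketched like this (the `HR` and `LR_bicubic/X4` names are my assumption based on the common EDSR-style structure, so verify them against DASR's loader):

```python
from pathlib import Path

# Sketch of the Set14 benchmark layout (subfolder names assumed;
# check DASR's data loader for the exact paths).
root = Path("dataset/benchmark/Set14")
for sub in ("HR", "LR_bicubic/X4"):
    (root / sub).mkdir(parents=True, exist_ok=True)
print(sorted(p.name for p in root.iterdir()))  # ['HR', 'LR_bicubic']
```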
3.5 Start the test
Next comes the test. The test command is as follows:
python test.py --test_only --dir_data=./dataset/ --data_test=Set14 --model=blindsr --scale=4 --resume=114 --blur_type=aniso_gaussian --noise=10.0 --theta=0.0 --lambda_1=0.2 --lambda_2=4.0 --save_results=True
Note the following issues: [screenshots]
In addition, you can also use the official pretrained weights for a quick test.
3.6 Test results
Note that the results produced during training are saved under the experiment folder, and the model you use will be in that folder as well.
Below are my training results. Since I only trained for 114 epochs, the quality is not great, but I'll show them anyway: [result images]
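If you want to sanity-check the saved results yourself, PSNR between the SR output and the HR ground truth is the usual metric. A minimal sketch (my own helper, not the repo's evaluation code, which typically also crops borders and may measure on the Y channel):

```python
import numpy as np

def psnr(sr, hr, max_val=255.0):
    """Peak signal-to-noise ratio between two images of the same shape."""
    sr = sr.astype(np.float64)
    hr = hr.astype(np.float64)
    mse = np.mean((sr - hr) ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10(max_val ** 2 / mse)

# Identical images give infinite PSNR; a distorted copy gives a finite score.
img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
print(psnr(img, img))  # prints inf
```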
4. My Code
The following is the link to my code. If you prefer, you can download my code directly; I have already included all the data, so there is no need to download the datasets one by one.
Link: https://pan.baidu.com/s/1f_Gq-pvMyfth-FebiYpx7A
Extraction code: fasf