Super-resolution reconstruction: training SAN on your own dataset and running inference tests (detailed illustrated tutorial)

1. Download the source code package

You can get the source code either from the official repository or as the package I modified myself. I recommend downloading my package directly; it will save you many detours.

Official source code package download link: SAN official website

My source code package: network disk download, extraction code: 0g99

Paper link: Paper

After downloading and extracting, my source code package looks like this:

Insert image description here

2. Data set preparation

My source code package includes some training and test sets, located in the data_data folder in the root directory. The official DIV2K training set contains 900 images; my package includes 100 of them. You can download the full DIV2K dataset from the official website. The subset I provide is mainly to show the directory structure the training set must follow:

Insert image description here
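For reference, an EDSR-style DIV2K layout typically looks roughly like the sketch below. The exact folder names are my assumption based on the standard DIV2K release; check them against the screenshot above and the folders in the package.

```text
data_data/
└── DIV2K/
    ├── DIV2K_train_HR/              # high-resolution ground-truth images (0001.png, ...)
    └── DIV2K_train_LR_bicubic/
        ├── X2/                      # bicubic-downsampled LR images for 2x
        ├── X3/                      # ... for 3x
        └── X4/                      # ... for 4x
```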

3. Pre-training weight file

The pre-trained weight files are included in the source code package at the locations shown below. There are pre-trained models for 2x, 3x, and 4x super-resolution.

Insert image description here

4. Training environment

This code framework must run on an older version of PyTorch. If you run into problems installing one, see my other blog post: the _update_worker_pids problem.

I trained and tested on a Windows environment, as follows:

Insert image description here

5. Training

5.1 Hyperparameter modification

All paths in this code framework must be absolute paths for data to be read correctly. If you don't believe it, try it!

Insert image description here

All training-related hyperparameters are in the option.py file under the TrainCode folder. You can modify the other hyperparameters to suit your own situation.
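As a rough sketch of what option.py typically contains, here is an illustrative subset of the flags used by the training command later in this post. The default values below are my assumptions for illustration, not the package's actual defaults.

```python
import argparse

# Illustrative subset of SAN training options; defaults are assumptions.
parser = argparse.ArgumentParser(description='SAN training options (sketch)')
parser.add_argument('--model', default='san', help='model name')
parser.add_argument('--scale', type=int, default=2, help='super-resolution factor')
parser.add_argument('--n_resgroups', type=int, default=20, help='number of residual groups')
parser.add_argument('--n_resblocks', type=int, default=10, help='residual blocks per group')
parser.add_argument('--n_feats', type=int, default=64, help='number of feature channels')
parser.add_argument('--patch_size', type=int, default=20, help='training patch size')
parser.add_argument('--batch_size', type=int, default=8, help='training batch size')
parser.add_argument('--cpu', action='store_true', help='run on CPU instead of GPU')
# Remember: this framework needs absolute paths.
parser.add_argument('--dir_data', default='F:/Code/Python/SAN/SAN/data_data',
                    help='absolute path to the dataset root')

# Simulate the command-line flags used in section 5.2.1.
args = parser.parse_args(['--scale', '2', '--cpu'])
print(args.scale, args.cpu)
```

Any flag not set on the command line falls back to the default in option.py, which is why editing this file is equivalent to the command-line method below.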

5.2 Training model

5.2.1 Command mode training

First, change into the training script directory in the terminal:

cd TrainCode

Then enter the following command to train:

python main.py --model san --save save_name --scale 2 --n_resgroups 20 --n_resblocks 10 --n_feats 64 --reset --chop --save_results --patch_size 20 --cpu --batch_size 8

5.2.2 Training via run-configuration parameters

Insert image description here

Insert image description here

Insert image description here

Both of the above ways of supplying the configuration parameters work for training; choose whichever you prefer.

After filling in the configuration parameters, you can start training directly. I trained on the CPU myself: the required PyTorch version is too old for my CUDA/cuDNN setup, and I didn't want to reconfigure the whole environment, so I simply trained and tested on the CPU. Use GPU or CPU according to your own situation. If you want to use the GPU, comment out the following code in the main.py script of my package:

Insert image description here
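The CPU/GPU switch boils down to logic like the following. This is a simplified sketch: `cuda_available` stands in for `torch.cuda.is_available()`, and the real main.py would construct a `torch.device` rather than return a string.

```python
# Simplified device-selection logic in the style of EDSR-derived codebases.
# 'cuda_available' stands in for torch.cuda.is_available().
def select_device(cpu_flag: bool, cuda_available: bool) -> str:
    if cpu_flag or not cuda_available:
        return 'cpu'
    return 'cuda'

print(select_device(cpu_flag=True, cuda_available=True))   # --cpu forces CPU even with a GPU
print(select_device(cpu_flag=False, cuda_available=True))  # GPU used when available
```

Forcing CPU in the code (as my package does) has the same effect as passing --cpu; removing that override restores GPU training.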

5.3 Model saving

After running the above command, training starts after a short wait, as follows:

Insert image description here

During training, model weights are automatically saved to the experiment folder in the root directory, as follows:

Insert image description here

6. Inference testing

6.1 Hyperparameter modification

The test script has its own configuration file, also named option.py, with many parameters that you can modify as needed, as follows:

Insert image description here

Below is where to modify the save path for the test results. I added this code myself; the official source code package has no code for saving test results:

Insert image description here
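The addition amounts to deriving an output path for each super-resolved image and writing it there. Here is a minimal sketch of the path logic only; the `_x{scale}_SR.png` naming scheme is my assumption for illustration, and the real code would also create the folder and save the image tensor.

```python
import os

def result_path(save_dir: str, img_name: str, scale: int) -> str:
    """Build the output path for a super-resolved image, e.g.
    Result_Images/baby_x4_SR.png. Naming scheme is illustrative."""
    # Real code would also call os.makedirs(save_dir, exist_ok=True)
    # and then write the SR image to this path.
    stem, _ = os.path.splitext(os.path.basename(img_name))
    return os.path.join(save_dir, f'{stem}_x{scale}_SR.png')

print(result_path('Result_Images', 'Set5/baby.png', 4))
```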

6.2 Testing

6.2.1 Command-line test

First, change into the test script directory in the terminal:

cd TestCode/code

Then enter the following command and press Enter to test:

python main.py --model san --data_test MyImage --save save_name --scale 4 --n_resgroups 20 --n_resblocks 10 --n_feats 64 --reset --chop --save_results --test_only --testpath F:/Code/Python/SAN/SAN/Test_Images/INF --testset Set5 --pre_train F:/Code/Python/SAN/SAN/experiment/save_name/model/model_best.pt --cpu

In the above command you can modify the super-resolution factor (--scale), the test set path (--testpath), and the trained model weight path (--pre_train); adjust the other parameters yourself as needed.
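Since the command is long and only a few flags change between runs, a small helper can assemble it for a given scale and set of paths. This helper is purely illustrative, not part of the package:

```python
def build_test_command(scale: int, testpath: str, pre_train: str,
                       use_cpu: bool = True) -> str:
    """Assemble the SAN test command shown above (illustrative helper)."""
    parts = [
        'python main.py', '--model san', '--data_test MyImage',
        '--save save_name', f'--scale {scale}',
        '--n_resgroups 20', '--n_resblocks 10', '--n_feats 64',
        '--reset', '--chop', '--save_results', '--test_only',
        f'--testpath {testpath}', '--testset Set5',
        f'--pre_train {pre_train}',
    ]
    if use_cpu:
        parts.append('--cpu')
    return ' '.join(parts)

cmd = build_test_command(
    2,
    'F:/Code/Python/SAN/SAN/Test_Images/INF',
    'F:/Code/Python/SAN/SAN/experiment/save_name/model/model_best.pt',
)
print(cmd)
```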

6.2.2 Test via run-configuration parameters

Insert image description here

Insert image description here

Insert image description here

6.3 Test results

The running process is as follows:

Insert image description here

The result images from the test are saved in the Result_Images folder in the root directory, as follows:

Insert image description here

6.4 Inference speed

I only tested inference speed on the CPU. For a 120×90 image at 4× super-resolution, inference took about 12 s per image; for a 512×512 image at 2×, about 39 s per image.

7. Summary

The above is a detailed illustrated tutorial on super-resolution reconstruction: training the SAN network on your own dataset and running inference tests. Feel free to leave a comment for discussion.

Writing this up was not easy; thank you for your support!

Origin blog.csdn.net/qq_40280673/article/details/135033198