Table of contents
1. Source code package preparation
The original version of PMRID was based on the MegEngine framework; a PyTorch version was developed later. This tutorial is mainly based on the PyTorch version.
Official MegEngine version source code: PMRID (MegEngine)
Official PyTorch version source code: PMRID (PyTorch)
Paper address: Paper
The source code package I provide is modified from the official version and contains some code I added. The link to my source code package is: Network disk source code package, extraction code: he6m
The content of the downloaded and unzipped file of the source code package I provided is as follows:
2. Data set preparation
There are many datasets available; which one you need depends on your task. The Kaggle dataset download link mentioned on the official site is: Kaggle
Open the URL and download directly, as follows:
If you want to use raw data in .RAW format for training, you must modify the data-reading code. The code for reading .RAW-format data is shown below. For methods of reading .RAW data and .dng data, see another blog post: Reading .RAW data and .dng data
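As a reference, a minimal way to read headerless .RAW data with NumPy is sketched below (an assumption on my part: the file stores 16-bit Bayer pixels of known width and height with no header; adjust the dtype and shape to match your sensor's output):

```python
import numpy as np

def read_raw(path, height, width, dtype=np.uint16):
    """Read a headerless .RAW file as a single-channel Bayer image.

    Assumes the file stores exactly height * width pixels of the
    given dtype with no header bytes.
    """
    data = np.fromfile(path, dtype=dtype)
    assert data.size == height * width, "file size does not match the given shape"
    return data.reshape(height, width)
```

If your camera writes a header before the pixel data, skip it first with the `offset` argument of `np.fromfile`.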
If the dataset is 8-bit data in .jpg, .png, .bmp or similar formats, just keep the defaults, as follows:
2.1 Extract data set name
Using the provided script generate_list_sidd.py, extract the noisy image names and the corresponding ground-truth noise-free image names into a .txt file, as follows:
Randomly move a portion of the entries in the generated train_list_sidd.txt into the other two files, Sony_val_list.txt and Sony_test_list.txt, to serve as the validation set and test set, as follows:
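The random split described above could be sketched as follows (the file names come from this tutorial, but the split logic, counts, and function name are my own illustration, not the repository's actual script):

```python
import random

def split_list(train_list="train_list_sidd.txt",
               val_list="Sony_val_list.txt",
               test_list="Sony_test_list.txt",
               val_num=10, test_num=10, seed=0):
    """Randomly move val_num + test_num entries out of the training
    list into the validation and test lists (counts are examples)."""
    with open(train_list) as f:
        lines = [ln.strip() for ln in f if ln.strip()]  # skip blank lines
    random.Random(seed).shuffle(lines)
    val = lines[:val_num]
    test = lines[val_num:val_num + test_num]
    train = lines[val_num + test_num:]
    for path, subset in [(val_list, val), (test_list, test), (train_list, train)]:
        with open(path, "w") as f:
            f.write("\n".join(subset))  # no trailing blank line
```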
2.2 .txt error reporting problem
When extracting the image names into the .txt files in the previous step, note that the final blank line in each .txt file must be deleted, otherwise an error will be reported, as follows:
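Alternatively, this error can be avoided at the source by skipping blank lines when the list file is read; a minimal sketch (the function name is my own, not the repository's):

```python
def load_list(path):
    """Read an image-list file, skipping empty lines so a trailing
    blank line in the .txt file cannot cause a parsing error."""
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]
```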
2.2.1 Correct format
2.2.2 Wrong format
3. Modify configuration parameters
The hyperparameters that can be modified are as follows:
Number of training epochs: epoch
Training batch size: batch_size
Learning rate: learning_rate
Use GPU or CPU: device
Log file path: logs_path
Model save path: params_path
Training set list path: train_list_path
Validation set list path: value_list_path
Whether to load a pretrained weight file: is_load_pretrained
Pretrained weight path: pretrained_path
The specific modifications in the source code package are as follows, in the main.py script:
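For reference, a hyperparameter block matching the list above might look like this (the values shown are illustrative defaults, not the repository's actual settings):

```python
# Example hyperparameter settings for main.py; variable names follow
# the list above, values are only illustrative.
epoch = 100                                   # number of training epochs
batch_size = 16                               # training batch size
learning_rate = 1e-4                          # learning rate
device = "cuda"                               # "cuda" for GPU, "cpu" for CPU
logs_path = "./logs"                          # log file path
params_path = "./params"                      # model weight save path
train_list_path = "./train_list_sidd.txt"     # training set list path
value_list_path = "./Sony_val_list.txt"       # validation set list path
is_load_pretrained = False                    # whether to load pretrained weights
pretrained_path = "./params/pretrained.pth"   # pretrained weight path
```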
4. Training and saving model weights
4.1 Training
After modifying the above parameters, you can start training. Before training, enable the training command, as follows:
Training starts after running the script main.py, as follows:
4.2 Save model weight file
During the training process, the model weight file of each epoch will be saved in the params path, as follows:
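The per-epoch saving described above might be implemented like this (a sketch; the file-naming scheme and function name are my assumptions, not the repository's actual code):

```python
import os

import torch
import torch.nn as nn

def save_epoch_weights(net: nn.Module, params_path: str, epoch: int) -> str:
    """Save one weight file per epoch under params_path."""
    os.makedirs(params_path, exist_ok=True)      # create the save directory if needed
    path = os.path.join(params_path, f"model_epoch_{epoch}.pth")
    torch.save(net.state_dict(), path)           # save only the weights, not the module
    return path
```

Saving `state_dict()` rather than the whole module keeps the file small and loadable even if the class definition moves.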
The model size is 3.98 MB.
5. Model inference test
After training, enable the test command before running inference, as follows:
5.1 Import test set
Due to the code architecture, the validation set and the test set share the same variable name, so simply replace the validation set path with the test set path, as follows:
5.2 Testing
After modifying the above parameters, run the script main.py and the test results will be generated in the output path:
5.3 Test results
5.3.1 Test scenario 1
The left side is the original noisy image, and the right side is the model test result.
5.3.2 Test scenario 2
5.4 Inference speed
5.4.1 CPU inference
My test machine has an i7-12700H CPU at 2.3 GHz. With a test image size of 256×256, the average inference time on the CPU is 40 ms, as follows:
5.4.2 GPU inference
The average inference time on the GPU is 6 ms; part of the output is as follows:
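For reference, average inference time can be measured with a sketch like the one below (the model, input channels, and run counts are placeholders; note that `torch.cuda.synchronize()` is required for accurate GPU timing, since CUDA kernels run asynchronously):

```python
import time

import torch
import torch.nn as nn

def average_inference_ms(net, device="cpu", size=256, runs=20, warmup=3):
    """Average forward-pass time in milliseconds for a size x size input."""
    net = net.to(device).eval()
    x = torch.randn(1, 3, size, size, device=device)  # 3-channel input is an assumption
    with torch.no_grad():
        for _ in range(warmup):              # warm-up runs are excluded from timing
            net(x)
        if device == "cuda":
            torch.cuda.synchronize()         # make sure warm-up kernels have finished
        start = time.perf_counter()
        for _ in range(runs):
            net(x)
        if device == "cuda":
            torch.cuda.synchronize()         # wait for all queued GPU kernels
    return (time.perf_counter() - start) / runs * 1000.0
```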
6. Summary
The above is a detailed illustrated tutorial on using PMRID to train your own dataset and run inference tests. Since the inference speed did not meet my requirements, I did not pursue this research further. Moreover, this method needs to generate its noise training set from the camera's ISO parameters to achieve the best results, so it is normal for different researchers' training and test results to differ.
Researchers who have studied this more deeply are welcome to discuss and exchange ideas.
Writing this summary was not easy; thank you for your support!