MS-Train [1]: nnUNet


Foreword

nnU-Net is a framework written by engineers at the German Cancer Research Center (DKFZ) and is still actively maintained and updated. This article records the nnU-Net workflow from installation through training to inference.

For an explanation of the nnUNet network structure, you can read another article of mine: MS-Model [1]: nnU-Net


1. Installation

  • Official hardware requirements
    • nnU-Net requires a GPU! For inference the GPU should have at least 4 GB of VRAM; for training an nnU-Net model, at least 10 GB.
    • For training, we recommend pairing the GPU with a powerful CPU: at least 6 cores (12 threads). CPU requirements are driven mainly by data augmentation and scale with the number of input channels.
  • Installation steps
  1. Install hiddenlayer, which lets nnU-Net plot the topology of the networks it generates
pip install --upgrade git+https://github.com/FabianIsensee/hiddenlayer.git@more_plotted_details#egg=hiddenlayer
  2. Create a directory nnUNetFrame on the server
  3. Enter the created directory and clone the project to the server
    • Note that within China you may need an accelerated mirror site for the clone to reach your server smoothly
cd /root/nnUNetFrame

git clone https://github.com/MIC-DKFZ/nnUNet.git
  4. Enter the cloned directory
cd nnUNet
  5. Execute the installation command
pip install -e .

After the installation is complete, every nnU-Net operation is issued on the command line with a command starting with the prefix nnUNet_.
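A quick sanity check can confirm the install worked; this is a sketch that assumes `python` is on PATH and that the nnU-Net v1 entry-point names (e.g. nnUNet_train) were registered by `pip install -e .`:

```shell
# Sanity check after `pip install -e .`: the package should import and
# the nnUNet_* commands should be on PATH. A FAILED / not-found line
# means the installation did not complete.
python -c "import nnunet" 2>/dev/null \
    && echo "nnunet import: OK" \
    || echo "nnunet import: FAILED"
command -v nnUNet_train >/dev/null \
    && echo "nnUNet_train: on PATH" \
    || echo "nnUNet_train: not found"
```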


2. Training and Testing

2.1. Data processing

2.1.1. Cleaning up data paths

  • Enter the created nnUNetFrame directory
cd /root/nnUNetFrame
  • Create a folder called DATASET
  • Go to the created DATASET folder
    • Create, in order:
      • nnUNet_preprocessed - stores the preprocessed version of the raw data
      • nnUNet_raw - stores the raw training data
      • nnUNet_trained_models - stores the training results
  • Enter nnUNet_raw
    • Create, in order:
      • nnUNet_cropped_data - data after cropping
      • nnUNet_raw_data - the raw data
  • The final file structure:

    DATASET/
    ├── nnUNet_preprocessed/
    ├── nnUNet_raw/
    │   ├── nnUNet_cropped_data/
    │   └── nnUNet_raw_data/
    └── nnUNet_trained_models/
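The directory layout above can also be created in one go; this is a sketch assuming the nnUNetFrame location used in this article (with $HOME being /root when running as root):

```shell
# Create the full DATASET tree in one command. $HOME equals /root when
# running as root, matching the paths used in this article; adjust BASE
# if your nnUNetFrame directory lives elsewhere.
BASE="$HOME/nnUNetFrame/DATASET"
mkdir -p "$BASE/nnUNet_preprocessed" \
         "$BASE/nnUNet_raw/nnUNet_cropped_data" \
         "$BASE/nnUNet_raw/nnUNet_raw_data" \
         "$BASE/nnUNet_trained_models"
```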

2.1.2. Set the path for nnUNet to read files

  • Open the .bashrc file and add the following lines at the end:
export nnUNet_raw_data_base="/root/nnUNetFrame/DATASET/nnUNet_raw"
export nnUNet_preprocessed="/root/nnUNetFrame/DATASET/nnUNet_preprocessed"
export RESULTS_FOLDER="/root/nnUNetFrame/DATASET/nnUNet_trained_models"
  • Reload the configuration:
source .bashrc
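A self-contained check (repeating the exports from above) catches a path typo before preprocessing starts:

```shell
# The same three exports as in .bashrc, followed by an echo of each so a
# mistyped path is spotted immediately; empty output means the variable
# is not set.
export nnUNet_raw_data_base="/root/nnUNetFrame/DATASET/nnUNet_raw"
export nnUNet_preprocessed="/root/nnUNetFrame/DATASET/nnUNet_preprocessed"
export RESULTS_FOLDER="/root/nnUNetFrame/DATASET/nnUNet_trained_models"
echo "raw base:     $nnUNet_raw_data_base"
echo "preprocessed: $nnUNet_preprocessed"
echo "results:      $RESULTS_FOLDER"
```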

2.1.3. Dataset preprocessing

This experiment uses Task05_Prostate from the Medical Segmentation Decathlon.

  • Convert the dataset into a format nnU-Net can recognize:
nnUNet_convert_decathlon_task -i /root/nnUNetFrame/DATASET/nnUNet_raw/nnUNet_raw_data/Task05_Prostate

  • Perform interpolation and the other preprocessing operations:
nnUNet_plan_and_preprocess -t 5

Note that if this step reports that some packages were not installed successfully, it almost certainly means the environment was not configured properly earlier. If you hit an error here, try configuring the environment again.

2.2. Training

2.2.1. Training code

  • Find nnUNetTrainerV2.py under /root/nnUNetFrame/nnUNet/nnunet/training/network_training/ and modify the number of epochs
    • If you just want a training result, you don't need that many epochs (training converges after roughly 200 epochs)
    • Change it to:
self.max_num_epochs = 400
  • Train
    • 5 is your task ID; 4 selects fold 4 of the 5-fold cross-validation (folds are numbered 0-4, so 0 is the first fold)
nnUNet_train 3d_fullres nnUNetTrainerV2 5 4
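To train all five folds rather than just fold 4, the command can be wrapped in a loop. This is a dry-run sketch: the `echo` only prints each command, and removing it launches the real runs, which take many hours per fold:

```shell
# Dry-run loop over the 5 cross-validation folds of task 5.
# Remove `echo` to actually launch the training runs sequentially.
for FOLD in 0 1 2 3 4; do
    echo nnUNet_train 3d_fullres nnUNetTrainerV2 5 "$FOLD"
done
```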

Training results

  • Training curve

    • for reference only (figure not reproduced)

  • Training log

2022-12-27 10:31:42.129562: 
epoch:  399 
2022-12-27 10:34:45.765983: train loss : -0.9746 
2022-12-27 10:35:06.985032: validation loss: -0.9035 
2022-12-27 10:35:06.985926: Average global foreground Dice: [0.9274] 
2022-12-27 10:35:06.986063: (interpret this as an estimate for the Dice of the different classes. This is not exact.) 
2022-12-27 10:35:13.525987: lr: 0.002349 
2022-12-27 10:35:13.526254: saving scheduled checkpoint file... 
2022-12-27 10:35:13.607831: saving checkpoint... 
2022-12-27 10:35:14.533169: done, saving took 1.01 seconds 
2022-12-27 10:35:14.534369: done 
2022-12-27 10:35:14.567537: saving checkpoint... 
2022-12-27 10:35:15.225455: done, saving took 0.69 seconds 
2022-12-27 10:35:15.226438: This epoch took 213.096802 s

2.3. Prediction

nnUNet_predict -i /root/autodl-tmp/model/nnUNetFrame/DATASET/nnUNet_raw/nnUNet_raw_data/Task002_Heart/imagesTs/ -o /root/autodl-tmp/model/nnUNetFrame/DATASET/nnUNet_raw/nnUNet_raw_data/Task002_Heart/inferTs -t 2 -m 3d_fullres -f 4
nnUNet_predict -i /root/autodl-tmp/model/nnUNetFrame/DATASET/nnUNet_raw/nnUNet_raw_data/Task005_Prostate/imagesTs/ -o /root/autodl-tmp/model/nnUNetFrame/DATASET/nnUNet_raw/nnUNet_raw_data/Task005_Prostate/inferTs -t 5 -m 3d_fullres -f 4

The meaning of each parameter:

  • nnUNet_predict: the prediction command;
  • -i: input (the test set to run inference on);
  • -o: output (where the inference results are written);
  • -t: the numeric ID of your task;
  • -m: the network configuration used during training;
  • -f: the fold to use; 4 selects the fold-4 model from the 5-fold cross-validation.
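As a variation (hedged: this reflects nnU-Net v1 behavior), if all five folds have been trained, `-f` can be omitted and nnUNet_predict ensembles the available folds. Shown here as a dry run with the prostate paths from above:

```shell
# Dry run of an ensembled prediction (all trained folds, no -f flag).
# Remove `echo` to actually run inference.
IN=/root/autodl-tmp/model/nnUNetFrame/DATASET/nnUNet_raw/nnUNet_raw_data/Task005_Prostate/imagesTs
OUT=/root/autodl-tmp/model/nnUNetFrame/DATASET/nnUNet_raw/nnUNet_raw_data/Task005_Prostate/inferTs
echo nnUNet_predict -i "$IN/" -o "$OUT" -t 5 -m 3d_fullres
```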

Summary

After following the procedure above, you will have your own trained nnU-Net!


Origin: blog.csdn.net/HoraceYan/article/details/127932447