nnUnet: Use your own (custom) network for training

1. Introduction

nnUnet is unavoidable in supervised medical image segmentation (although I have seen papers that leave it out of their comparison experiments; perhaps because it is hard to beat?). Its excellent performance and straightforward pipeline give researchers a powerful tool. However, because the code is highly encapsulated, it is not very convenient (at least for me) to embed a custom network into the original code for training. This article shares some experience gained while training a custom network with nnUnet. There may be mistakes, so feel free to discuss in the comments!

2. Preparation

2.1 Hardware requirements

The recommended environment for nnUnet is Linux. If you use Windows, you have to modify the path-related code (replacing slashes with backslashes), which is tedious (not recommended). I use PyCharm on Ubuntu to work with nnUnet.

2.2 Debugging environment

The officially recommended way to run nnUnet is from the command line, but that is not convenient for beginners who want to step through the code. Since I have only ever debugged code in PyCharm (noob, I know), I modified a few files so that PyCharm's one-click Run/Debug buttons work: nnunet/paths.py and nnunet/run/run_training.py.

2.2.1 Path

Located in the ***nnunet/paths.py*** file; modify the three path variables to your own paths. custom_ is a module I defined myself, and you can implement it however you like (a minimal sketch follows the code below).

from custom_ import custom_config
base = custom_config['base']
preprocessing_output_dir = custom_config['preprocessing_output_dir']
network_training_output_dir_base = custom_config['network_training_output_dir_base']
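
For reference, here is a minimal sketch of what such a custom_ module could look like. The file name custom_.py, the example paths, and all values below are my own convention for this post, not part of nnUnet; the keys simply match the places where custom_config is read later on.

# custom_.py -- hypothetical config module; everything here is this post's own convention
custom_config = {
    # paths consumed in nnunet/paths.py
    'base': '/data/nnUNet_raw_data_base',
    'preprocessing_output_dir': '/data/nnUNet_preprocessed',
    'network_training_output_dir_base': '/data/nnUNet_trained_models',
    # hyperparameters consumed in nnUNetTrainerV2 (see section 3.2)
    'patch_size': [64, 64],
    'batch_size': 12,
    'epoch': 200,
    'lr': 1e-2,
}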

2.2.2 parser

Located in the ***nnunet/run/run_training.py*** file; this is the entry point of nnUnet training (!!!). Since we are not calling it from the command line, the parser needs to be modified: turn the positional arguments into optional ones by adding a leading "-" and give each a default value.

    parser = argparse.ArgumentParser()
    parser.add_argument("-network", default='2d')
    parser.add_argument("-network_trainer", default='nnUNetTrainerV2')
    parser.add_argument("-task", default='666', help="can be task name or task id")
    parser.add_argument("-fold", default='0', help='0, 1, ..., 5 or \'all\'')

3. Training

3.1 Building a network

nnUnet requires the network to inherit from the SegmentationNetwork class. Below is one workable template; when using it, set self.model to your custom network.

import torch.nn as nn

from nnunet.network_architecture.neural_network import SegmentationNetwork


class custom_net(SegmentationNetwork):

    def __init__(self, num_classes):
        super(custom_net, self).__init__()
        self.params = {'content': None}
        self.conv_op = nn.Conv2d  # tells nnUnet this wrapper is a 2d network
        self.do_ds = True  # deep supervision flag; the trainer toggles it
        self.num_classes = num_classes

        ######## set self.model to your custom network, by Sleeep ########
        self.model = None  # replace None with your own nn.Module
        ######## set self.model to your custom network, by Sleeep ########

        self.name = self.model.name  # assumes your model exposes a .name attribute

    def forward(self, x):
        # the deep supervision loss wrapper expects a list of outputs,
        # so wrap the single output in a list while do_ds is True
        if self.do_ds:
            return [self.model(x), ]
        else:
            return self.model(x)


def create_model():
    return custom_net(num_classes=2)
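
To make the template concrete, here is a minimal sketch of plugging in a stand-in model. TinyNet is a made-up toy network used purely for illustration (the single input channel is also an assumption); substitute your real architecture.

import torch.nn as nn

class TinyNet(nn.Module):
    # toy 2d network, for illustration only
    def __init__(self, num_classes):
        super().__init__()
        self.name = 'TinyNet'
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1),  # assumes a single input channel
            nn.LeakyReLU(inplace=True),
            nn.Conv2d(32, num_classes, 1),
        )

    def forward(self, x):
        return self.body(x)

# inside custom_net.__init__:
# self.model = TinyNet(num_classes)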

3.2 Modify configuration

After the network is built, some hyperparameters need to be modified to complete the training. The changes are located in the /nnunet/training/network_training/nnUNetTrainerV2.py file: modify the two functions initialize and initialize_network in the nnUNetTrainerV2 class. To reduce the number of training epochs, you can also modify ***__init__***.

3.2.1 initialize

    def initialize(self, training=True, force_load_plans=False):
        """
        - replaced get_default_augmentation with get_moreDA_augmentation
        - enforce to only run this code once
        - loss function wrapper for deep supervision
        :param training:
        :param force_load_plans:
        :return:
        """
        if not self.was_initialized:
            maybe_mkdir_p(self.output_folder)

            if force_load_plans or (self.plans is None):
                self.load_plans_file()
            # load plan information!!! modify batch_size or patch_size after this call
            self.process_plans(self.plans)
            ############## modify para by Sleeep ##############
            self.patch_size = np.array(custom_config['patch_size']).astype(int)
            self.batch_size = custom_config['batch_size']
            self.net_num_pool_op_kernel_sizes = [[2, 2]]
            ############## modify para by Sleeep ##############
            # ... (the rest of the original initialize stays unchanged)

self.process_plans(self.plans): the official function; it loads the various parameters generated during preprocessing.
self.patch_size: the patch size produced by the official preprocessing may not suit a custom network, so it can be overridden here. For example, the patch_size automatically determined by nnUnet was [53, 64] (for the 2d network), but my network requires the input size to be an integer multiple of 32, so I change it to [64, 64] (see the sketch after this list). self.patch_size is used by later functions to build the data augmentation pipeline.
self.batch_size: adjust according to your own hardware configuration.
self.net_num_pool_op_kernel_sizes: this parameter is very important! It determines the number of deep supervision levels and the downsampling scale of each level. The custom network here does not use deep supervision, so a single list element is enough and its value is more or less arbitrary (probably? at least for the no-deep-supervision case): with one element, the trainer's deep supervision loss reduces to a single output weight, which matches the one-element list returned by forward.
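
As a quick sanity check for the patch_size override, here is how one could round an automatically chosen patch size up to the nearest multiple of 32 (a small standalone sketch, not nnUnet code):

import numpy as np

auto_patch = np.array([53, 64])  # patch_size chosen by nnUnet preprocessing
multiple = 32                    # constraint imposed by the custom network
patch = (np.ceil(auto_patch / multiple) * multiple).astype(int)
print(patch)                     # -> [64 64]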

3.2.2 initialize_network

    def initialize_network(self):
        """
        - momentum 0.99
        - SGD instead of Adam
        - self.lr_scheduler = None because we do poly_lr
        - deep supervision = True
        - i am sure I forgot something here
        Known issue: forgot to set neg_slope=0 in InitWeights_He; should not make a difference though
        :return:
        """
        # if self.threeD:
        #     conv_op = nn.Conv3d
        #     dropout_op = nn.Dropout3d
        #     norm_op = nn.InstanceNorm3d
        #
        # else:
        #     conv_op = nn.Conv2d
        #     dropout_op = nn.Dropout2d
        #     norm_op = nn.InstanceNorm2d
        #
        # norm_op_kwargs = {'eps': 1e-5, 'affine': True}
        # dropout_op_kwargs = {'p': 0, 'inplace': True}
        # net_nonlin = nn.LeakyReLU
        # net_nonlin_kwargs = {'negative_slope': 1e-2, 'inplace': True}
        # self.network = Generic_UNet(self.num_input_channels, self.base_num_features, self.num_classes,
        #                             len(self.net_num_pool_op_kernel_sizes),
        #                             self.conv_per_stage, 2, conv_op, norm_op, norm_op_kwargs, dropout_op,
        #                             dropout_op_kwargs,
        #                             net_nonlin, net_nonlin_kwargs, True, False, lambda x: x, InitWeights_He(1e-2),
        #                             self.net_num_pool_op_kernel_sizes, self.net_conv_kernel_sizes, False, True, True)
        ############## add custom model by Sleeep ##############
        self.network = create_model()
        ############## add custom model by Sleeep ##############
        if torch.cuda.is_available():
            self.network.cuda()
        self.network.inference_apply_nonlin = softmax_helper  # softmax over the channel dimension at inference time

Comment out the original nnUnet network construction and plug in your own network instead.

3.2.3 init

    def __init__(self, plans_file, fold, output_folder=None, dataset_directory=None, batch_dice=True, stage=None,
                 unpack_data=True, deterministic=True, fp16=False):
        super().__init__(plans_file, fold, output_folder, dataset_directory, batch_dice, stage, unpack_data,
                         deterministic, fp16)
        ##### by Sleeep ####
        self.max_num_epochs = custom_config['epoch']  # default was 1000
        self.initial_lr = custom_config['lr']  # default was 1e-2
        ##### by Sleeep ####
        
        self.deep_supervision_scales = None
        self.ds_loss_weights = None

        self.pin_memory = True

The default number of epochs is 1000, which is a bit long; change it to a smaller value. The default learning rate is 0.01.

4. Things to note

  1. If this post helped, a like, favorite, or comment is appreciated!
  2. Read the instructions in the nnUnet paper and on the official GitHub first. The official repository provides an example with a relatively small dataset; debug successfully on that dataset first, and only then switch to your custom network.

References

  1. nnUNet: https://github.com/MIC-DKFZ/nnUNet

Origin: blog.csdn.net/qq_42811827/article/details/127632891