mmdetection v2.x model training and testing

Because of work, I came into contact with mmdetection. It is a PyTorch-based object detection toolbox open-sourced by SenseTime and the Chinese University of Hong Kong, and it is part of the OpenMMLab project.

In the mmdetection 2.7 file structure, the network model config files live under the configs folder, and the model weight files (.pth) go under the checkpoints folder.

1. Model training

Before training, the parameters must be configured. Configuration is done by editing files and consists of three main parts: configuring the data, configuring the model, and configuring the training strategy. The following takes the model faster_rcnn_r50_fpn_1x_coco as an example (faster_rcnn is the algorithm name, r50 means the backbone is ResNet-50, fpn means the neck is an FPN, 1x means training for 12 epochs, and coco means the data is in COCO format).

For detailed explanations of the individual parameters (such as workers_per_gpu), refer to the CSDN blog post "Specific explanation of the parameters in the configs of mmdetection" by Zangyun Pavilion Master.

1) Configure the data

Find faster_rcnn_r50_fpn_1x_coco.py in configs
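Opening it, the file itself is little more than an _base_ list that pulls in the model, data, schedule and runtime configs; in 2.x it looks roughly like this:

_base_ = [
    '../_base_/models/faster_rcnn_r50_fpn.py',
    '../_base_/datasets/coco_detection.py',
    '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
]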

It turns out to be matryoshka-style: the config just inherits from several base configs. Start with ../_base_/datasets/coco_detection.py, whose main job is to define a data dict with train, val and test entries.

data_root : the root directory of the dataset (image folders such as train2017/ are expected under it).

img_scale : the size all images are resized to, set in the Resize step of the pipelines.

samples_per_gpu : the batch size per GPU.

evaluation : evaluate once per epoch (interval=1), with bbox as the metric.
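Putting those pieces together, here is a trimmed sketch of what coco_detection.py defines in 2.x (the train/test pipelines are omitted, and the paths are the COCO defaults that you would point at your own data):

# ../_base_/datasets/coco_detection.py (trimmed sketch; pipelines omitted)
dataset_type = 'CocoDataset'
data_root = 'data/coco/'
data = dict(
    samples_per_gpu=2,   # batch size per GPU
    workers_per_gpu=2,   # dataloader workers per GPU
    train=dict(
        type=dataset_type,
        ann_file=data_root + 'annotations/instances_train2017.json',
        img_prefix=data_root + 'train2017/',
        pipeline=train_pipeline),   # defined earlier in the real file
    val=dict(
        type=dataset_type,
        ann_file=data_root + 'annotations/instances_val2017.json',
        img_prefix=data_root + 'val2017/',
        pipeline=test_pipeline),
    test=dict(
        type=dataset_type,
        ann_file=data_root + 'annotations/instances_val2017.json',
        img_prefix=data_root + 'val2017/',
        pipeline=test_pipeline))
evaluation = dict(interval=1, metric='bbox')   # evaluate bbox mAP every epoch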

2) Configure the model

Open ../_base_/models/faster_rcnn_r50_fpn.py

model[pretrained] : the pretrained backbone weights, e.g. 'torchvision://resnet50' for an ImageNet-pretrained ResNet-50.

model[roi_head][bbox_head][num_classes] : the number of object classes. In v2.x this is just the number of your own classes; there is no need to add 1 for the background.
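Rather than editing the base file in place, you can also override these fields in a config of your own that inherits from the example config. A minimal sketch, where the file name and the three classes are made-up placeholders:

# my_faster_rcnn.py (hypothetical custom config)
_base_ = './faster_rcnn_r50_fpn_1x_coco.py'
model = dict(
    pretrained='torchvision://resnet50',    # ImageNet-pretrained backbone
    roi_head=dict(
        bbox_head=dict(num_classes=3)))     # 3 custom classes, no +1 for background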

3) Configure the training strategy

Open '../_base_/schedules/schedule_1x.py' and '../_base_/default_runtime.py'.

optimizer : the optimizer settings (SGD with lr=0.02, momentum 0.9 and weight decay 1e-4 by default).

total_epochs : the number of training epochs.

load_from : path of a pre-trained weight file (.pth) to initialize training from.
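For reference, the two files contain roughly the following in 2.x:

# ../_base_/schedules/schedule_1x.py
optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=None)
lr_config = dict(
    policy='step', warmup='linear', warmup_iters=500,
    warmup_ratio=0.001, step=[8, 11])   # lr decays at epochs 8 and 11
total_epochs = 12

# ../_base_/default_runtime.py
checkpoint_config = dict(interval=1)    # save a checkpoint every epoch
log_config = dict(interval=50, hooks=[dict(type='TextLoggerHook')])
dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = None                        # set to a .pth path to start from pre-trained weights
resume_from = None
workflow = [('train', 1)]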

4) Modify the class names

Open the file /mmdetection/mmdet/datasets/coco.py

Change CLASSES here to your own category names.
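For example (the class names below are made-up placeholders):

# mmdet/datasets/coco.py
class CocoDataset(CustomDataset):
    CLASSES = ('cat', 'dog', 'rabbit')   # replace the 80 COCO names with your own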

Find the evaluate function in the same coco.py file and change the default classwise=False to classwise=True to see the AP of each category during training.

def evaluate(self,
                 results,
                 metric='bbox',
                 logger=None,
                 jsonfile_prefix=None,
                 classwise=False,
                 proposal_nums=(100, 300, 1000),
                 iou_thrs=None,
                 metric_items=None):

Then open the file /mmdetection/mmdet/core/evaluation/class_names.py

Replace the class names returned here with your own, keeping them consistent with CLASSES in coco.py.
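A minimal sketch, using the same made-up class names as above:

# mmdet/core/evaluation/class_names.py
def coco_classes():
    # keep the names and their order consistent with CLASSES in coco.py
    return ['cat', 'dog', 'rabbit']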

Then you can train the model. Training uses the command

python tools/train.py configs/xxx.py --work-dir xxx --gpus 1

configs/xxx.py is the config file of the model to be trained, and --work-dir specifies the directory where checkpoints and logs are saved.

2. Model testing

After the model is trained, the mAP on the validation set has already been reported during training. mmdetection also provides a way to test on a labeled test set; keep in mind the test dataset defined in configs/xxx.py.

First, modify the data configuration so that the program can find the test images and annotations.
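For example, by overriding the test split in configs/xxx.py (the paths below are placeholders):

# in configs/xxx.py: point data.test at the labeled test set
data = dict(
    test=dict(
        ann_file='data/my_dataset/annotations/instances_test.json',
        img_prefix='data/my_dataset/test/'))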

Then modify the class names in coco.py and class_names.py as in training.

Run the command

python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [--out ${RESULT_FILE.pkl}]
  • config_file: path to the model configuration file
  • checkpoint_file: path to the model weight (.pth) file
  • gpu_num: the number of GPUs
  • --out: path of the output .pkl test result file
  • --eval: evaluation metrics (VOC: mAP, recall | COCO: bbox, segm, proposal)
  • --show: display the test set images with predicted boxes
  • --show-dir: directory where the test set images with predicted boxes are saved
  • --show-score-thr: score threshold for displaying predicted boxes, default 0.3

The command above produces the .pkl result file, which can then be evaluated to obtain the AP of each category.

python tools/eval_metric.py ${CONFIG_FILE} ${PKL_RESULTS} [--eval ${EVAL_METRICS}]
  • config_file: Path to the model configuration file
  • pkl_results: path to the .pkl test result file
  • --eval: evaluation metrics (VOC: mAP, recall | COCO: bbox, segm, proposal)

Origin: blog.csdn.net/Eyesleft_being/article/details/119672016