Playing with MMDetection: MMDetection v2 Object Detection Model Training and Testing (4)

1. Training the model

1.1 Single GPU training

python tools/train.py {CONFIG} [--work-dir {WORK_DIR}] [optional arguments]

1.2 Multi-GPU Training

bash tools/dist_train.sh {CONFIG} {GPUS} [optional arguments]

1.3 Parameters:

config : path to the training configuration file
--work-dir : path where the files generated by training (log files and weight files) are stored
--no-validate : do not evaluate the model during training
--gpus : number of GPUs to use, only applicable to non-distributed training
--gpu-ids : IDs of the GPUs to use, only applicable to non-distributed training
--seed : random seed, set to make results easier to reproduce
--deterministic : whether to enable the deterministic option for the CUDNN backend

In general, ① the config path of the training configuration file and ② --work-dir, the path where the files generated by training (log files and weight files) are stored, are required. ③ --no-validate disables periodic validation of the model during training; to use it, simply append --no-validate to the training command, though it is usually not needed. ④ --gpus, the number of GPUs to use, and ⑤ --gpu-ids, the IDs of the GPUs to use, only apply to non-distributed training; for multi-GPU training, choose the number of GPUs and the corresponding GPU IDs.
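
For example, a single-GPU run that combines these options might look like the sketch below (the seed value 42 is arbitrary and only for illustration):

python tools/train.py configs/obb/oriented_rcnn/faster_rcnn_orpn_our_imp_swin_fpn_1x_dota10.py --work-dir work_dirs/DOTA2_1K --no-validate --seed 42 --deterministic  # skip validation, fix the random seed and make CUDNN deterministic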

1.4 Training examples

1.4.1 Single GPU training

python tools/train.py configs/obb/oriented_rcnn/faster_rcnn_orpn_our_imp_swin_fpn_1x_dota10.py --work-dir work_dirs/DOTA2_1K

Use Swin Transformer to train on the DOTA dataset, saving the log files and weight files generated by training under the work_dirs/DOTA2_1K path in the root directory.
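
If training is interrupted, tools/train.py in MMDetection v2 also accepts --resume-from to continue from a saved checkpoint; a minimal sketch, assuming a checkpoint named epoch_12.pth was saved in the work directory:

python tools/train.py configs/obb/oriented_rcnn/faster_rcnn_orpn_our_imp_swin_fpn_1x_dota10.py --work-dir work_dirs/DOTA2_1K --resume-from work_dirs/DOTA2_1K/epoch_12.pth  # epoch_12.pth is an assumed checkpoint name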

1.4.2 Multi-GPU training

bash tools/dist_train.sh configs/obb/oriented_rcnn/faster_rcnn_orpn_our_imp_swin_fpn_1x_dota10.py 4

Use Swin Transformer to train on the DOTA dataset with 4 GPUs; since no GPU IDs are specified, the defaults 0, 1, 2, 3 are used.
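
If specific GPU IDs are needed for distributed training, they are normally selected through the CUDA_VISIBLE_DEVICES environment variable rather than a script argument; a sketch assuming GPUs 0-3 should be used:

CUDA_VISIBLE_DEVICES=0,1,2,3 bash tools/dist_train.sh configs/obb/oriented_rcnn/faster_rcnn_orpn_our_imp_swin_fpn_1x_dota10.py 4  # restrict the visible devices, then launch 4 processes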

2. Testing the model

2.1 Single GPU Test

python tools/test.py {CONFIG_FILE} {CHECKPOINT_FILE} [--out ${RESULT_FILE}] [--eval ${EVAL_METRICS}] [optional arguments]  # EVAL_METRICS can be proposal_fast, proposal, bbox, segm, mAP, or recall

2.2 Multi-GPU test

bash tools/dist_test.sh {CONFIG_FILE} {CHECKPOINT_FILE} {GPU_NUM} [--out ${RESULT_FILE}] [--eval ${EVAL_METRICS}] [optional arguments]  # EVAL_METRICS can be proposal_fast, proposal, bbox, segm, mAP, or recall

2.3 Parameters

CONFIG_FILE : path to the test configuration file
CHECKPOINT_FILE : path to the weight file
--out : write the raw results to a file in pickle format (see the sketch after this list)
--fuse-conv-bn : whether to fuse conv and bn layers, which slightly speeds up inference
--format-only : format the output results without performing evaluation; useful when you want results in a specific format to submit to a test server, and commonly used to generate files in other formats
--eval : metrics to evaluate on the results; allowed values depend on the dataset, e.g. proposal_fast, proposal, bbox, segm for COCO, and mAP, recall for PASCAL VOC. Cityscapes can be evaluated with the cityscapes metric as well as all COCO metrics
--show : visualize the results with prediction boxes
--show-dir : path where the drawn images with prediction boxes are saved
--show-score-thr : score threshold for displaying prediction boxes
--gpu-collect : whether to use the GPU to collect results from the worker processes
--tmpdir : temporary directory used to collect results from multiple worker processes, used when --gpu-collect is not specified
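
As an illustration, a test run that both stores the raw results in a pickle file and saves visualized images might look like the sketch below (the results/ output paths and the 0.3 threshold are placeholders chosen here):

python tools/test.py configs/obb/oriented_rcnn/faster_rcnn_orpn_our_imp_swin_fpn_1x_dota10.py weights/DOTA/xxx.pth --eval mAP --out results/dota_results.pkl --show-dir results/vis --show-score-thr 0.3  # results/ paths are placeholders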

2.4 Test examples

2.4.1 Single GPU Test

python tools/test.py configs/obb/oriented_rcnn/faster_rcnn_orpn_our_imp_swin_fpn_1x_dota10.py weights/DOTA/xxx.pth --eval mAP

Use Swin Transformer to test on the DOTA dataset with the xxx.pth weights and evaluate the mAP accuracy.
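
If inference speed matters, the --fuse-conv-bn option described above can simply be appended to the same command; a sketch:

python tools/test.py configs/obb/oriented_rcnn/faster_rcnn_orpn_our_imp_swin_fpn_1x_dota10.py weights/DOTA/xxx.pth --eval mAP --fuse-conv-bn  # fuse conv and bn layers before inference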

2.4.2 Multi-GPU test

bash tools/dist_test.sh configs/obb/oriented_rcnn/faster_rcnn_orpn_our_imp_swin_fpn_1x_dota10.py weights/DOTA/xxx.pth 4 --eval mAP

Use Swin Transformer to test on the DOTA dataset with the xxx.pth weights, evaluate the mAP accuracy, and run the test on 4 GPUs; since no GPU IDs are specified, the defaults 0, 1, 2, 3 are used.
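
When the raw detections also need to be written to a file in the multi-GPU case, the same flags apply; a sketch with a placeholder pickle path:

bash tools/dist_test.sh configs/obb/oriented_rcnn/faster_rcnn_orpn_our_imp_swin_fpn_1x_dota10.py weights/DOTA/xxx.pth 4 --eval mAP --out results/dota_results.pkl  # results/dota_results.pkl is a placeholder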

Figure 1 Test results on the DOTA dataset

Figure 1 shows the test results on the DOTA dataset. It is clear that the detection boxes are all rotated bounding boxes with angle information, and the detection accuracy is high.

Figure 2 Test results on the HRSC2016 dataset

Figure 2 shows the test results on the HRSC2016 dataset. It is clear that the detection boxes are all rotated bounding boxes with angle information, and the detection accuracy is high.

Origin blog.csdn.net/weixin_42715977/article/details/131707853