[YOLO] Fixing AP = 0 during YOLOX training

For training on the COCO dataset, see: YOLOX training on the COCO dataset

For AP = 0 caused by missing `difficult` attributes in the XML files, see: About AP = 0 caused by missing `difficult` attributes in the XML files

A while ago my feeds were flooded with YOLOX: all kinds of public accounts were pushing it, claiming its performance exceeds YOLOv5 and that it beats every other YOLO.

So, full of expectations, I downloaded the source code and prepared to give it a try.

1. Problem description

Sure enough, I ran into a pile of bugs, but fortunately they were all resolved. Then, during the final training run, the AP was always 0. I searched the Issues of YOLOX on GitHub, and it seems I was not the only one who hit this problem.
I tried the fixes suggested in some of the answers, found they didn't work either, and fell into deep thought.

Then yesterday, someone in a group chat happened to mention that he had used YOLOX successfully and even deployed it, so I told him that my AP was 0 during training and asked how he did it. Of course, the answer I got didn't really solve my problem.

2. Finding the problem

So this morning I started debugging and checking again, and it suddenly occurred to me that something might be wrong with the data loading.

It turned out that this was really the problem.

3. Solving the problem

  • First, make sure your custom VOC-format data is organized correctly:

    ├─datasets
    │  └─VOCdevkit
    │      └─VOC2007
    │          ├─Annotations
    │          ├─ImageSets
    │          │  └─Main
    │          └─JPEGImages
    

    `Main` must contain the corresponding `train.txt` and `val.txt`.
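If you still need to generate these two split files, a minimal sketch (the `split_ids` helper is hypothetical, not part of YOLOX; paths assume the VOC2007 layout above):

```python
import random
from pathlib import Path

def split_ids(ids, val_ratio=0.2, seed=0):
    """Split sample ids into (train, val) lists, reproducibly."""
    ids = sorted(ids)
    rng = random.Random(seed)            # fixed seed -> same split every run
    rng.shuffle(ids)
    n_val = int(len(ids) * val_ratio)    # e.g. 20% of samples for validation
    return ids[n_val:], ids[:n_val]

# Example usage against the layout above (commented out; adjust paths as needed):
# root = Path("datasets/VOCdevkit/VOC2007")
# stems = [p.stem for p in (root / "Annotations").glob("*.xml")]
# train, val = split_ids(stems)
# (root / "ImageSets/Main/train.txt").write_text("\n".join(train) + "\n")
# (root / "ImageSets/Main/val.txt").write_text("\n".join(val) + "\n")
```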

  • Then, modify yolox/data/datasets/voc_classes.py to the classes of your own training data (to be safe, it is best to put a comma , after every class):

    VOC_CLASSES = (
    "panda",
    "tiger",
    )
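The trailing comma matters most for a one-class dataset: without it, Python does not even build a tuple, which quietly breaks class lookups later. A quick illustration:

```python
# Parentheses around a single string are just grouping, not a tuple:
single_wrong = ("panda")    # this is the str "panda"
single_right = ("panda",)   # this is a 1-element tuple, as expected

print(type(single_wrong).__name__)           # str
print(type(single_right).__name__)           # tuple
print(len(single_wrong), len(single_right))  # 5 1  (characters vs. classes)
```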
    
  • Then modify yolox/exp/yolox_base.py

    (This step can probably be skipped, because exps/example/yolox_voc/yolox_voc_s.py later overrides self.num_classes.)
    Change self.num_classes to your own number of classes:
    self.num_classes = 2 (my dataset has 2 classes)

    You can also change self.input_size and self.random_size to adjust the training resolution.

    You can also change self.test_size to adjust the test resolution.
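These settings can be sketched as follows (attribute names follow YOLOX's Exp base class; the values shown are examples for my 2-class dataset, not required values):

```python
# Inside the Exp __init__ in yolox/exp/yolox_base.py (example values):
self.num_classes = 2            # number of classes in your dataset
self.input_size = (640, 640)    # training resolution (height, width)
self.random_size = (14, 26)     # multiscale training range, in units of 32 px
self.test_size = (640, 640)     # evaluation resolution
```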
    

    The next two are the key points:

  • Modify the _do_python_eval method in yolox/data/datasets/voc.py

    annopath = os.path.join(rootpath, "Annotations", "{:s}.xml")
    
    change it to
    
    annopath = os.path.join(rootpath, "Annotations", "{}.xml")
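The reason this matters: the `{:s}` spec insists on a `str` argument and raises for anything else, while plain `{}` converts any object via `str()`, so non-string image ids still work. A quick illustration:

```python
template_strict = "Annotations/{:s}.xml"
template_loose = "Annotations/{}.xml"

# "{}" converts any object, so a non-str image id is fine:
print(template_loose.format(7))   # Annotations/7.xml

# "{:s}" raises when given a non-str argument:
try:
    template_strict.format(7)
except (TypeError, ValueError) as exc:
    print("strict template failed:", exc)
```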
    
  • My problem was mainly in exps/example/yolox_voc/yolox_voc_s.py, and I believe most people who get AP = 0 during training hit it for the same reason

    Change self.num_classes = 2
    
    In get_data_loader, under VOCDetection, change image_sets=[('2007', 'trainval'), ('2012', 'trainval')],
    to image_sets=[('2007', 'train')]
    
    In get_eval_loader, under VOCDetection, change image_sets=[('2007', 'test')],
    to image_sets=[('2007', 'val')]
    

    To repeat the important part:

    In get_data_loader, under VOCDetection, change

    image_sets=[('2007', 'trainval'), ('2012', 'trainval')],

    to image_sets=[('2007', 'train')]

    Even more important: in get_eval_loader, under VOCDetection, change

    image_sets=[('2007', 'test')],

    Be sure to change it to image_sets=[('2007', 'val')], because the data we validate on is listed in val.txt, not test.txt; this should be the reason why AP is always 0
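Putting the two edits together, the relevant VOCDetection calls in exps/example/yolox_voc/yolox_voc_s.py end up looking roughly like this (a sketch; the other arguments stay exactly as in the repo and are elided here):

```python
# In get_data_loader (training): read sample ids from train.txt
dataset = VOCDetection(
    data_dir=...,                       # unchanged repo arguments elided
    image_sets=[("2007", "train")],     # was [('2007', 'trainval'), ('2012', 'trainval')]
)

# In get_eval_loader (evaluation): read sample ids from val.txt, not test.txt
valdataset = VOCDetection(
    data_dir=...,
    image_sets=[("2007", "val")],       # was [('2007', 'test')]
)
```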

  • Finally, modify the parameter configuration in tools/train.py

    # Set default="Animals"; the results will then be saved under tools/YOLOX_outputs/Animals
    parser.add_argument("-expn", "--experiment-name", type=str, default=None)
    
    # Set model_name; I am not sure whether this is required (I think it is not)
    parser.add_argument("-n", "--name", type=str, default="yolox-s", help="model name")
    
    # Set batch_size
    parser.add_argument("-b", "--batch-size", type=int, default=64, help="batch size")
    
    # Set the GPU; since I only have one card, I set default=0
    parser.add_argument(
        "-d", "--devices", default=0, type=int, help="device for training"
    )
    
    # Set the path to your data config, default="../exps/example/yolox_voc/yolox_voc_s.py"
    parser.add_argument(
        "-f",
        "--exp_file",
        default="../exps/example/yolox_voc/yolox_voc_s.py",
        type=str,
        help="plz input your expriment description file",
    )
    
    # Set the weight path, default="../weights/yolox_s.pth"
    parser.add_argument("-c", "--ckpt", default="../weights/yolox_s.pth", type=str, help="checkpoint file")
    

    After completing all of the configuration above, you can start training.

4. Training and test results

You can see that the AP is no longer 0, which means it worked.
Finally, look at the test results.
The overall effect is quite good; the accuracy is above 90%.

When testing, pay attention to:

  • Since demo.py uses COCO_CLASSES by default, if you want the results displayed correctly you must change COCO_CLASSES in yolox/data/datasets/coco_classes.py to your own data categories

  • or change from .voc import VOCDetection in yolox/data/datasets/__init__.py to from .voc import VOCDetection, VOC_CLASSES

  • Then change from yolox.data.datasets import COCO_CLASSES in tools/demo.py to from yolox.data.datasets import COCO_CLASSES, VOC_CLASSES

  • Then change all the places in tools/demo.py that use COCO_CLASSES to VOC_CLASSES
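The last three bullets amount to edits like the following in tools/demo.py (a sketch; identifiers follow the YOLOX repo, and the commented line only indicates where the class list is passed through):

```python
# Extend the import so the VOC class names are available:
from yolox.data.datasets import COCO_CLASSES, VOC_CLASSES

# Then pass VOC_CLASSES wherever COCO_CLASSES was used, e.g. when building
# the predictor:
# predictor = Predictor(model, exp, VOC_CLASSES, ...)
```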

Origin blog.csdn.net/weixin_42166222/article/details/119637797