Getting YOLO to output AP values for large, medium and small targets

While running these experiments I have been using the COCO dataset, whose evaluation metrics include AP and AR values broken down by large, medium and small targets. I chose yolov5 and yolov7 for the experiment: yolov5 worked, but yolov7 ran into problems.

(screenshot: COCO evaluation metrics, with AP and AR broken down by small, medium and large targets)

The YOLO models I used in the comparison experiments do not output AP values for large, medium and small targets by default. To obtain these metrics, we need to modify the val.py file.

Install dependencies

First, install the one extra package we need, pycocotools:

pip install pycocotools

Dataset test

In yolov5 the evaluation script is val.py; in yolov7 it is test.py. Either way, we need to load the trained model weights, and the arguments are roughly the same as those used during training.
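For reference, the two repos use different script names but near-identical arguments. Below is a hypothetical helper that builds the command line; the weight and dataset paths are placeholders for your own run, not paths from this article:

```python
# Sketch: building the evaluation command for yolov5 (val.py) vs yolov7 (test.py).
# The weights/data paths are placeholders; substitute your own trained model.
def build_eval_cmd(repo, weights="runs/train/exp/weights/best.pt",
                   data="data/coco.yaml"):
    script = "val.py" if repo == "yolov5" else "test.py"  # yolov7 uses test.py
    return ["python", script,
            "--weights", weights,
            "--data", data,
            "--img", "640",
            "--batch-size", "4",
            "--save-json"]  # needed later for the pycocotools evaluation
```

The only structural difference is the script name; everything else mirrors the training arguments.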

Modify the file

First, add default=True to the --save-json argument:

parser.add_argument('--save-json', action='store_true', default=True, help='save a cocoapi-compatible JSON results file')
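To see why this works: action='store_true' defaults to False, so --save-json is off unless passed on the command line; overriding the default forces it on. A minimal standalone illustration:

```python
import argparse

parser = argparse.ArgumentParser()
# With default=True the flag is on even when --save-json is not passed.
parser.add_argument('--save-json', action='store_true', default=True,
                    help='save a cocoapi-compatible JSON results file')
opt = parser.parse_args([])  # simulate running with no command-line flags
print(opt.save_json)  # True
```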

Next, comment out the following line of code:

opt.save_json |= opt.data.endswith('coco.yaml')

Error handling

Error 1:

File "/home/ubuntu/anaconda3/envs/python/lib/python3.8/site-packages/torch/serialization.py", line 1033, in _legacy_load
    magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: STACK_GLOBAL requires str

Just delete the dataset's cached label files (the *.cache files generated next to the labels), then run again. This time the run completes normally, showing the program itself has no problems.
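The cache cleanup can be scripted. A minimal sketch, assuming the *.cache files live somewhere under your dataset root (the path is yours to fill in):

```python
from pathlib import Path

# Remove YOLO's cached label files so they are rebuilt on the next run.
def clear_label_caches(dataset_root):
    removed = []
    for cache in Path(dataset_root).rglob("*.cache"):
        cache.unlink()  # delete the stale cache file
        removed.append(cache.name)
    return removed
```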

Error 2:

You may see the following error. COCO evaluation needs the ground-truth annotation file in JSON format under a ./coco/annotations/ folder, so copy instances_val2017.json into ./coco/annotations/ under the yolov7 root directory:

Speed: 9.4/1.1/10.4 ms inference/NMS/total per 640x640 image at batch-size 4
Evaluating pycocotools mAP... saving runs/test/exp7/best_predictions.json...
loading annotations into memory...
pycocotools unable to run: [Errno 2] No such file or directory: './coco/annotations/instances_val2017.json'
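The copy step can be done by hand or scripted; a sketch with pathlib and shutil, where the source path is a placeholder for wherever your COCO download lives:

```python
import shutil
from pathlib import Path

# Place the COCO annotation file where yolov7 expects it:
# <repo_root>/coco/annotations/instances_val2017.json
def place_annotations(src_json, repo_root):
    dest_dir = Path(repo_root) / "coco" / "annotations"
    dest_dir.mkdir(parents=True, exist_ok=True)  # create the folder if missing
    return shutil.copy(src_json, dest_dir / "instances_val2017.json")
```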

Note that yolov5's error message here differs slightly from yolov7's: yolov5 expects the annotation JSON at ../datasets/coco/annotations/instances_val2017.json.

loading annotations into memory...
pycocotools unable to run: [Errno 2] No such file or directory: '../datasets/coco/annotations/instances_val2017.json'
Results saved to runs/val/exp3

Error 3:

Even after this, however, yolov7 still failed at the pycocotools evaluation step.

At first I suspected the dataset, but since the same data worked fine in yolov5, the cause had to lie in how yolov7's code differs from yolov5's.
Some people say test.py should manually set is_coco to True; I am not sure why it comes out False here. When it is True, the coco80-to-coco91 conversion below runs: the official COCO annotation file uses category ids up to 91, and without the conversion the evaluation results all come out as 0. But when I tried setting it to True, it did not work.
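For context, here is an illustrative sketch of what the remapping controls when the prediction JSON is written. The real mapping table comes from coco80_to_coco91_class() in the repo's utils; the three-entry mapping used in the comment below is a toy stand-in, not the real table:

```python
# Illustrative sketch of the category-id remapping when writing a prediction
# record to the COCO-format JSON. The real table is coco80_to_coco91_class();
# any small list like [1, 2, 3] here is a toy stand-in.
def to_json_record(image_id, box, score, cls, coco91class=None):
    # COCO's official annotation file numbers categories 1..90 with gaps,
    # while the model predicts contiguous indices 0..79. When evaluating
    # against instances_val2017.json the index must be remapped, otherwise
    # category ids never match and every AP comes out as 0.
    category_id = coco91class[cls] if coco91class is not None else cls
    return {'image_id': image_id,
            'category_id': category_id,
            'bbox': box,
            'score': score}
```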

Later, while digging through other reports, I stumbled on the following modification. Honestly, I do not know why this change works, but it turned out to be a success.

Original code:

'category_id': coco91class[int(p[5])] if is_coco else int(p[5]),

Modified code:

'category_id': int(p[5]),

Running again now succeeds.

yolov7 running results

(screenshot: yolov7 evaluation output)

yolov5 running results

The yolov5 results are shown below. The metric values reported by yolo's own evaluation are slightly higher than the pycocotools COCO values, but the difference is small and harmless.

(screenshot: yolov5 evaluation output)


Origin blog.csdn.net/pengxiang1998/article/details/131146262