Object Detection Competition --- Google AI Open Images - Object Detection Track

https://www.kaggle.com/c/google-ai-open-images-object-detection-track#Evaluation

Submissions are evaluated by computing mean Average Precision (mAP), modified to take into account the annotation process of the Open Images dataset (the mean is taken over per-class APs). The metric is described on the Open Images Challenge website.

The final mAP is computed as the average AP over the 500 classes. The participants will be ranked on this final metric.
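For intuition, here is a minimal sketch of that "per-class AP, then unweighted mean" structure, using a simplified all-point-interpolation AP. The class MIDs, confidences, TP flags, and ground-truth counts are made up for illustration; the official metric additionally handles Open-Images-specific details (group-of boxes, image-level negative labels), so this is not the production implementation.

import numpy as np

def average_precision(scores, is_tp, num_gt):
    """Simplified all-point-interpolation AP for one class."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    tp = np.asarray(is_tp, dtype=float)[order]
    tp_cum = np.cumsum(tp)
    fp_cum = np.cumsum(1.0 - tp)
    recall = tp_cum / max(num_gt, 1)
    precision = tp_cum / (tp_cum + fp_cum)
    # Make precision non-increasing from the right, then
    # integrate precision over recall.
    precision = np.maximum.accumulate(precision[::-1])[::-1]
    ap, prev_r = 0.0, 0.0
    for p, r in zip(precision, recall):
        ap += p * (r - prev_r)
        prev_r = r
    return ap

# Made-up detections per class: (confidences, TP flags, #ground-truth boxes).
per_class = {
    "/m/01g317": ([0.9, 0.8, 0.3], [1, 0, 1], 2),  # e.g. Person
    "/m/0k4j":   ([0.7, 0.6], [1, 1], 3),          # e.g. Car
}

aps = [average_precision(s, t, n) for s, t, n in per_class.values()]
print("mAP:", np.mean(aps))  # unweighted mean of per-class APs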

Kaggle's production code in C# can be viewed via the link on the evaluation page above. The metric is also implemented as part of the TensorFlow Object Detection API; see the challenge evaluation tutorial (linked below) on running the evaluation in Python.

Kernel Submissions

You can make submissions directly from Kaggle Kernels. By adding your teammates as collaborators on a kernel, you can share and edit code privately with them.

Submission File

For each image in the test set, you must predict a list of boxes describing the objects in the image. The file has a header row, and each box is described as:

ImageID,PredictionString
ImageID,{Label Confidence XMin YMin XMax YMax} {...}

All boxes for an image go into the single PredictionString field, space-delimited.
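As a concrete illustration, below is a minimal sketch of assembling such a submission CSV. The image IDs, class MIDs, and box values are hypothetical placeholders for a real detector's output; Label is an Open Images class MID, Confidence is in [0, 1], and the box coordinates are normalized to [0, 1] relative to the image width and height, per the Open Images convention.

import csv

# Hypothetical detector output: image id -> list of
# (label_mid, confidence, xmin, ymin, xmax, ymax), coordinates normalized.
detections = {
    "0000048549557964": [
        ("/m/01g317", 0.92, 0.10, 0.20, 0.55, 0.90),  # e.g. Person
        ("/m/0k4j",   0.41, 0.60, 0.40, 0.95, 0.80),  # e.g. Car
    ],
    "000004f4400f6ec5": [],  # an image may also have no predictions
}

with open("submission.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["ImageID", "PredictionString"])
    for image_id, boxes in detections.items():
        # Space-delimit all boxes inside the single PredictionString field.
        pred = " ".join(
            f"{label} {conf:.4f} {x1:.4f} {y1:.4f} {x2:.4f} {y2:.4f}"
            for label, conf, x1, y1, x2, y2 in boxes
        )
        writer.writerow([image_id, pred])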

TensorFlow's built-in evaluation code: https://github.com/tensorflow/models/tree/master/research/object_detection
Evaluation metric documentation: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/challenge_evaluation.md
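The challenge_evaluation.md tutorial above drives the evaluation through a script shipped with the Object Detection API; a sketch of invoking it from Python follows. The script path, flag names, and label map file follow that tutorial at the time of writing and may differ across versions of the repository, and all input/output file names here are placeholders, so treat this as an assumption and check the linked document.

import subprocess

# All paths are hypothetical placeholders; flag names follow the
# challenge_evaluation.md tutorial and may change between repo versions.
subprocess.run(
    [
        "python",
        "object_detection/metrics/oid_challenge_evaluation.py",
        "--input_annotations_boxes=expanded_annotations_bbox.csv",
        "--input_annotations_labels=expanded_annotations_labels.csv",
        "--input_class_labelmap=object_detection/data/oid_object_detection_challenge_500_label_map.pbtxt",
        "--input_predictions=predictions.csv",
        "--output_metrics=metrics.csv",
    ],
    check=True,
)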


Reprinted from www.cnblogs.com/Allen-rg/p/10645224.html