How to quickly write your first SCI paper: commonly used evaluation metrics for deep-learning object detection algorithms - understand them in one article!

YOLOv8 latest improvement series

For detailed improvement tutorials and source code, see the Bilibili (Station B) channel of the "AI academic beeping beast": the links are in the album and in the channel posts. Thank you for your support, and may your research stay far ahead!

At the time of writing, source-code packages covering 22 improvement methods in the latest YOLOv8 series have been released on Station B; combining 2 to 4 of them yields roughly 6,000 to 7,000 possible variants.


1. Updates on workshop work

1.1 YOLOv8 series improvement source-code package (22 improvement methods released)



1.2 Small gifts for academic writing

In addition, small tools, new ideas, and recommendations for innovative directions and writing entry points will be updated from time to time. Once the SCI writing course is launched, writing-related material will be moved into the course.


2. Thinking inertia? Recommended reading.

Topics covered in the video:

  - Is a new algorithm in itself an innovation?
  - Peking University core journals
  - SCI journals

3. Commonly used evaluation metrics

3.1 Positive and negative samples

The notion of a sample is central to the evaluation of computer vision models. Positive samples are straightforward: they are the objects to be detected. Negative samples are everything that should not be detected. Negative samples raise two issues: first, their definition is somewhat subjective; second, they are not on the same footing as positive samples, since the background is effectively unlimited. In practice, detectors such as YOLO, Faster R-CNN, and SSD take some of the candidate regions they generate as positive samples and others as negative samples.

For example, in mask detection, masks are the positive samples, while everything else (nearby faces, mobile phones, and so on) counts as negative.

3.2 True Positive (TP), False Positive (FP), True Negative (TN), False Negative (FN)

The four cases form the familiar confusion matrix:

|                   | Predicted positive  | Predicted negative  |
| ----------------- | ------------------- | ------------------- |
| Actually positive | TP (true positive)  | FN (false negative) |
| Actually negative | FP (false positive) | TN (true negative)  |

3.2.1 True Positive (TP)

A detection is counted as a TP only when three conditions are all met:

  1. Its confidence score is greater than the threshold (all predicted boxes must satisfy this condition to be considered at all);
  2. Its predicted class matches the label class;
  3. The Intersection over Union (IoU) between the predicted bounding box and the ground truth is greater than the threshold (e.g. 0.5). When several candidate boxes satisfy these conditions for the same ground-truth box, the one with the highest confidence is counted as the TP and the rest are counted as FP; a minimal IoU sketch follows below.
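
To make condition 3 concrete, here is a minimal IoU sketch in Python. The (x1, y1, x2, y2) corner format is an illustrative assumption, not something fixed by this post.

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two partially overlapping boxes: intersection 25, union 175
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # ~0.143
```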

3.2.2 False Positive (FP)

The number of negative samples detected as positive, also called false alarms: either the IoU between the predicted bounding box and every ground truth is below the threshold (a localization error), or the predicted class does not match the label class (a classification error).

3.2.3 False Negative (FN)

The number of positive samples missed by the detector, also called missed detections: ground-truth boxes that no prediction matches.

3.2.4 True Negative (TN)

The number of negative samples correctly left undetected. In object detection this cannot be meaningfully counted (the background contains arbitrarily many candidate boxes), so TN is usually ignored.
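
Putting 3.2.1 to 3.2.4 together, below is a minimal greedy-matching sketch that counts TP, FP, and FN for one class. It reuses the `iou()` helper from the sketch above; the dictionary format for detections is an assumption, and real evaluators (such as the COCO toolkit) are considerably more involved.

```python
def count_tp_fp_fn(detections, gt_boxes, iou_thr=0.5, score_thr=0.25):
    """Count TP/FP/FN for one class. detections: [{"box": ..., "score": ...}];
    gt_boxes: list of ground-truth boxes. Each GT box may be matched once."""
    dets = sorted((d for d in detections if d["score"] >= score_thr),
                  key=lambda d: d["score"], reverse=True)
    matched = set()
    tp = fp = 0
    for d in dets:
        # Find the best still-unmatched ground-truth box for this detection
        best_iou, best_gt = 0.0, None
        for i, gt in enumerate(gt_boxes):
            if i in matched:
                continue
            overlap = iou(d["box"], gt)
            if overlap > best_iou:
                best_iou, best_gt = overlap, i
        if best_iou >= iou_thr:
            tp += 1                    # matched a GT box: true positive
            matched.add(best_gt)
        else:
            fp += 1                    # no match: false positive
    fn = len(gt_boxes) - len(matched)  # GT boxes nobody detected: misses
    return tp, fp, fn
```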

3.3 Precision

Precision, also called the precision rate, is the proportion of correct positive predictions (True Positives, TP) among all detections. For example, if 7 out of 10 predicted boxes are correct, precision is 70%.

$$\text{Precision} = \frac{TP}{TP + FP}$$

3.4 Recall

Recall, also called the recall rate, is the proportion of ground-truth objects that are correctly detected. For example, if 8 out of 10 objects are framed, recall is 80%.

$$\text{Recall} = \frac{TP}{TP + FN}$$

3.5 F1 score

The F1 score is the harmonic mean of precision and recall, balancing the two in a single number:

$$F1 = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}$$

3.6 PR curve

The PR curve is drawn by sweeping the confidence threshold: at each threshold value, compute the resulting precision and recall, then plot the (recall, precision) pairs with recall on the x-axis and precision on the y-axis.
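As a sketch of how those points are obtained: assuming every detection has already been flagged TP or FP against the ground truth (for instance with the matching sketch in 3.2), sweeping the threshold reduces to sorting by confidence and taking cumulative sums. This is an illustration, not the exact evaluator of any particular framework.

```python
import numpy as np

def pr_curve(scores, is_tp, num_gt):
    """Precision/recall pairs over all confidence thresholds, for one class.
    scores: detection confidences; is_tp: 1 if the detection matched a GT box;
    num_gt: total number of ground-truth objects."""
    order = np.argsort(-np.asarray(scores))     # sort by confidence, descending
    tp_flags = np.asarray(is_tp, dtype=float)[order]
    tp_cum = np.cumsum(tp_flags)
    fp_cum = np.cumsum(1.0 - tp_flags)
    precision = tp_cum / (tp_cum + fp_cum)
    recall = tp_cum / num_gt
    return precision, recall
```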

3.7 [email protected]

3.7.1 AP

AP (Average Precision) for a single class is the area under that class's PR curve, i.e., precision averaged over all recall levels.

3.7.2 [email protected]

mAP is the mean of the AP values across all categories; it is commonly used as the final metric of overall model performance.

$$\text{mAP@0.5} = \frac{1}{N} \sum_{i=1}^{N} AP_i \qquad (\text{IoU} = 0.5)$$
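
Below is a sketch of AP as the area under the PR curve, using the "all-points" interpolation popularized by PASCAL VOC 2010, and of [email protected] as a plain mean over per-class APs. The AP values in the usage lines are hypothetical, and this is not the exact YOLOv8 implementation.

```python
import numpy as np

def average_precision(precision, recall):
    """Area under the PR curve (all-points interpolation)."""
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([0.0], precision, [0.0]))
    # Make precision monotonically non-increasing from right to left
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    # Sum rectangle areas wherever recall increases
    idx = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))

ap_per_class = [0.91, 0.84, 0.77]              # hypothetical per-class APs at IoU 0.5
map50 = sum(ap_per_class) / len(ap_per_class)  # [email protected] = 0.84
```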

3.8 FPS (Frames Per Second)

Frames per second measures the model's detection speed: the higher the FPS, the better the real-time performance.
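
The following is a minimal timing sketch; `infer` and `test_images` are placeholders for whatever inference callable and data you actually use. A warm-up pass is included because the first runs are often slower (model loading, caching, GPU initialization).

```python
import time

def measure_fps(infer, images, warmup=10):
    """Average frames per second of an inference callable over a set of images."""
    for img in images[:warmup]:        # warm-up runs, not timed
        infer(img)
    start = time.perf_counter()
    for img in images:
        infer(img)
    elapsed = time.perf_counter() - start
    return len(images) / elapsed

# fps = measure_fps(model.predict, test_images)   # hypothetical usage
```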

That's a wrap!

May your research stay far ahead! For detailed improvement tutorials and source code, see the Bilibili (Station B) channel of the "AI academic beeping beast": the links are in the album and in the channel posts. Thank you for your support!


Origin blog.csdn.net/weixin_51692073/article/details/132765690