YOLOv3: An Incremental Improvement (YOLOv3 Paper Translation)


Original English paper: https://pjreddie.com/media/files/papers/YOLOv3.pdf


YOLOv3: An Incremental Improvement

Joseph Redmon & Jinsong Zhao

  • University of Washington

Abstract

We present some updates to YOLO! We made a bunch of little design changes to make it better. We also trained this new network that’s pretty swell. It’s a little bigger than last time but more accurate. It’s still fast though, don’t worry. At 320 × 320 YOLOv3 runs in 22 ms at 28.2 mAP, as accurate as SSD but three times faster. When we look at the old .5 IOU mAP detection metric YOLOv3 is quite good. It achieves 57.9 AP50 in 51 ms on a Titan X, compared to 57.5 AP50 in 198 ms by RetinaNet, similar performance but 3.8× faster. As always, all the code is online at https://pjreddie.com/yolo/.

1. Introduction

Sometimes you just kinda phone it in for a year, you know? I didn’t do a whole lot of research this year. Spent a lot of time on Twitter. Played around with GANs a little. I had a little momentum left over from last year [12] [1]; I managed to make some improvements to YOLO. But, honestly, nothing like super interesting, just a bunch of small changes that make it better. I also helped out with other people’s research a little.

Actually, that’s what brings us here today. We have a camera-ready deadline [4] and we need to cite some of the random updates I made to YOLO but we don’t have a source. So get ready for a TECH REPORT!

The great thing about tech reports is that they don’t need intros, y’all know why we’re here. So the end of this introduction will signpost the rest of the paper. First we’ll tell you what the deal is with YOLOv3. Then we’ll tell you how we do. We’ll also tell you about some things we tried that didn’t work. Finally we’ll contemplate what this all means.

2. The Deal

So here’s the deal with YOLOv3: We mostly took good ideas from other people. We also trained a new classifier network that’s better than the other ones. We’ll just take you through the whole system from scratch so you can understand it all.

Figure 1. We adapt this figure from the Focal Loss paper [9]. YOLOv3 runs significantly faster than other detection methods with comparable performance. Times are from either an M40 or Titan X, which are basically the same GPU.

2.1. Bounding Box Prediction

Following YOLO9000, our system predicts bounding boxes using dimension clusters as anchor boxes [15]. The network predicts 4 coordinates for each bounding box: t_x, t_y, t_w, t_h. If the cell is offset from the top left corner of the image by (c_x, c_y) and the bounding box prior has width and height p_w, p_h, then the predictions correspond to:

b_x = σ(t_x) + c_x
b_y = σ(t_y) + c_y
b_w = p_w e^{t_w}
b_h = p_h e^{t_h}

During training we use sum of squared error loss. If the ground truth for some coordinate prediction is t̂_*, our gradient is the ground truth value (computed from the ground truth box) minus our prediction: t̂_* − t_*. This ground truth value can be easily computed by inverting the equations above.
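To make the decoding and its inverse concrete, here is a minimal NumPy sketch of the equations above; the function and variable names (`decode_box`, `encode_box`, `cell_xy`, `prior_wh`) are illustrative and not taken from the released code.

```python
import numpy as np

def decode_box(t, cell_xy, prior_wh):
    """Map raw predictions (tx, ty, tw, th) to a box, per the equations above."""
    tx, ty, tw, th = t
    cx, cy = cell_xy              # offset of the grid cell from the top-left corner
    pw, ph = prior_wh             # width/height of the matched prior (anchor)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    bx = sigmoid(tx) + cx
    by = sigmoid(ty) + cy
    bw = pw * np.exp(tw)
    bh = ph * np.exp(th)
    return bx, by, bw, bh

def encode_box(b, cell_xy, prior_wh):
    """Invert the equations above to get regression targets t̂ for a ground-truth box."""
    bx, by, bw, bh = b
    cx, cy = cell_xy
    pw, ph = prior_wh
    # The center is assumed to lie inside this cell, so bx - cx and by - cy are in (0, 1).
    logit = lambda p: np.log(p / (1.0 - p))
    return logit(bx - cx), logit(by - cy), np.log(bw / pw), np.log(bh / ph)
```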

Figure 2. Bounding boxes with dimension priors and location prediction. We predict the width and height of the box as offsets from cluster centroids. We predict the center coordinates of the box relative to the location of filter application using a sigmoid function. This figure is blatantly self-plagiarized from [15].

YOLOv3 predicts an objectness score for each bounding box using logistic regression. This should be 1 if the bounding box prior overlaps a ground truth object by more than any other bounding box prior. If the bounding box prior is not the best but does overlap a ground truth object by more than some threshold we ignore the prediction, following [17]. We use a threshold of .5. Unlike [17], our system only assigns one bounding box prior to each ground truth object. If a bounding box prior is not assigned to a ground truth object it incurs no loss for coordinate or class predictions, only objectness.
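As a rough sketch of this assignment rule (it assumes an `iou(box_a, box_b)` helper and per-image lists of priors and ground-truth boxes; none of these names come from the paper):

```python
IGNORE_THRESH = 0.5  # the ignore threshold of .5 mentioned above

def assign_objectness(priors, gt_boxes, iou):
    """Per-prior objectness target: 1 = positive, 0 = negative, None = ignored."""
    # Index of the best-overlapping prior for each ground-truth box.
    best = {max(range(len(priors)), key=lambda i: iou(priors[i], gt)) for gt in gt_boxes}
    targets = []
    for i, prior in enumerate(priors):
        if i in best:
            targets.append(1)        # responsible prior: coordinate, class, and objectness loss
        elif any(iou(prior, gt) > IGNORE_THRESH for gt in gt_boxes):
            targets.append(None)     # good-but-not-best overlap: prediction ignored
        else:
            targets.append(0)        # unassigned prior: objectness loss only
    return targets
```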

2.2. Class Prediction

Each box predicts the classes the bounding box may contain using multilabel classification. We do not use a softmax as we have found it is unnecessary for good performance; instead we simply use independent logistic classifiers. During training we use binary cross-entropy loss for the class predictions.
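A minimal NumPy sketch of what "independent logistic classifiers trained with binary cross-entropy" means for a single box; the 80-class COCO setting is only an example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def multilabel_class_loss(logits, labels):
    """Binary cross-entropy over independent logistic classifiers for one box.

    logits: raw class scores, shape (num_classes,)  (80 for COCO)
    labels: 0/1 targets, shape (num_classes,); more than one entry may be 1,
            e.g. both "Woman" and "Person" in Open Images.
    """
    p = sigmoid(logits)
    eps = 1e-7                       # avoid log(0)
    bce = -(labels * np.log(p + eps) + (1 - labels) * np.log(1 - p + eps))
    return bce.sum()
```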

This formulation helps when we move to more complex domains like the Open Images Dataset [7]. In this dataset there are many overlapping labels (i.e. Woman and Person). Using a softmax imposes the assumption that each box has exactly one class, which is often not the case. A multilabel approach better models the data.

2.3. Predictions Across Scales

YOLOv3 predicts boxes at 3 different scales. Our system extracts features from those scales using a similar concept to feature pyramid networks [8]. From our base feature extractor we add several convolutional layers. The last of these predicts a 3-d tensor encoding bounding box, objectness, and class predictions. In our experiments with COCO [10] we predict 3 boxes at each scale, so the tensor is N × N × [3 × (4 + 1 + 80)] for the 4 bounding box offsets, 1 objectness prediction, and 80 class predictions.
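For example, with COCO's 80 classes the per-scale output depth works out as follows (a small arithmetic check, not code from the paper):

```python
num_anchors_per_scale = 3
num_box_offsets = 4        # tx, ty, tw, th
num_objectness = 1
num_classes = 80           # COCO

channels = num_anchors_per_scale * (num_box_offsets + num_objectness + num_classes)
print(channels)            # 255, so each scale's output tensor is N x N x 255
```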

Next we take the feature map from 2 layers previous and upsample it by 2×. We also take a feature map from earlier in the network and merge it with our upsampled features using concatenation. This method allows us to get more meaningful semantic information from the upsampled features and finer-grained information from the earlier feature map. We then add a few more convolutional layers to process this combined feature map, and eventually predict a similar tensor, although now twice the size.
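A shape-level sketch of the merge step, assuming nearest-neighbour upsampling and channel-wise concatenation; the specific channel counts and the 13×13/26×26 grid sizes are illustrative (they would correspond to a 416×416 input), not prescribed by the paper.

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

# Illustrative shapes only: a coarse 13x13 map and an earlier 26x26 map.
coarse = np.zeros((256, 13, 13))
earlier = np.zeros((512, 26, 26))

merged = np.concatenate([upsample2x(coarse), earlier], axis=0)  # (768, 26, 26)
# A few more convolutional layers would then process `merged` before predicting
# the next N x N x 255 tensor at this finer scale.
```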

We perform the same design one more time to predict boxes for the final scale. Thus our predictions for the 3rd scale benefit from all the prior computation as well as fine-grained features from early on in the network.

We still use k-means clustering to determine our bounding box priors. We just sort of chose 9 clusters and 3 scales arbitrarily and then divide up the clusters evenly across scales. On the COCO dataset the 9 clusters were: (10×13), (16×30), (33×23), (30×61), (62×45), (59×119), (116×90), (156×198), (373×326).
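A sketch of how such priors could be computed, assuming the IoU-based distance described in YOLO9000 [15] (this paper does not restate the distance metric); `box_wh` is assumed to be an array of ground-truth (width, height) pairs from the training set.

```python
import numpy as np

def iou_wh(wh, centroids):
    """IoU between one (w, h) pair and each centroid, with boxes sharing a corner."""
    inter = np.minimum(wh[0], centroids[:, 0]) * np.minimum(wh[1], centroids[:, 1])
    union = wh[0] * wh[1] + centroids[:, 0] * centroids[:, 1] - inter
    return inter / union

def kmeans_priors(box_wh, k=9, iters=100, seed=0):
    """k-means over ground-truth box sizes using a 1 - IoU distance, as in [15]."""
    rng = np.random.default_rng(seed)
    centroids = box_wh[rng.choice(len(box_wh), size=k, replace=False)]
    for _ in range(iters):
        assign = np.array([np.argmin(1.0 - iou_wh(wh, centroids)) for wh in box_wh])
        centroids = np.array([box_wh[assign == i].mean(axis=0) if np.any(assign == i)
                              else centroids[i] for i in range(k)])
    # Sort by area so the 9 priors can be split 3/3/3 across the three scales.
    return centroids[np.argsort(centroids[:, 0] * centroids[:, 1])]
```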

2.4. Feature Extractor

We use a new network for performing feature extraction. Our new network is a hybrid approach between the network used in YOLOv2, Darknet-19, and that newfangled residual network stuff. Our network uses successive 3 × 3 and 1 × 1 convolutional layers but now has some shortcut connections as well and is significantly larger. It has 53 convolutional layers, so we call it… wait for it… Darknet-53!
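A minimal PyTorch-style sketch of the kind of block this describes: a 1×1 reduction, a 3×3 expansion, and a shortcut (residual) connection. The batch-norm/leaky-ReLU pairing and the 0.1 slope follow common Darknet practice and are assumptions here, not details stated in this paper.

```python
import torch.nn as nn

def conv_bn_leaky(in_ch, out_ch, kernel):
    """Conv + batch norm + leaky ReLU, the basic unit assumed throughout the backbone."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel, padding=kernel // 2, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.1),
    )

class DarknetResidual(nn.Module):
    """One shortcut block: 1x1 reduce, 3x3 expand, then add the input back."""
    def __init__(self, channels):
        super().__init__()
        self.reduce = conv_bn_leaky(channels, channels // 2, 1)
        self.expand = conv_bn_leaky(channels // 2, channels, 3)

    def forward(self, x):
        return x + self.expand(self.reduce(x))
```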

Table 1. Darknet-53.

This new network is much more powerful than Darknet-19 but still more efficient than ResNet-101 or ResNet-152. Here are some ImageNet results:


Table 2. Comparison of backbones. Accuracy, billions of operations, billions of floating point operations per second, and FPS for various networks.

Each network is trained with identical settings and tested at 256 × 256, single crop accuracy. Run times are measured on a Titan X at 256 × 256. Thus Darknet-53 performs on par with state-of-the-art classifiers but with fewer floating point operations and more speed. Darknet-53 is better than ResNet-101 and 1.5× faster. Darknet-53 has similar performance to ResNet-152 and is 2× faster.

Darknet-53 also achieves the highest measured floating point operations per second. This means the network structure better utilizes the GPU, making it more efficient to evaluate and thus faster. That’s mostly because ResNets have just way too many layers and aren’t very efficient.

2.5. Training

We still train on full images with no hard negative mining or any of that stuff. We use multi-scale training, lots of data augmentation, batch normalization, all the standard stuff. We use the Darknet neural network framework for training and testing [14].
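The paper does not spell out the multi-scale schedule; a plausible sketch, following the scheme described for YOLO9000 [15] (resize to a random input size that is a multiple of 32 every few batches), might look like this. The specific size range, the every-10-batches cadence, and the `resize_batch` helper are assumptions.

```python
import random

scales = list(range(320, 609, 32))          # 320, 352, ..., 608: multiples of the stride
current = 416

for batch_index in range(100):              # stand-in for the real training loop
    if batch_index % 10 == 0:
        current = random.choice(scales)     # pick a new square input resolution
    # images = resize_batch(images, (current, current))  # hypothetical helper
    # ... forward / backward pass at this resolution ...
```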

3. How We Do

YOLOv3 is pretty good! See table 3. In terms of COCO’s weird average mean AP metric it is on par with the SSD variants but is 3× faster. It is still quite a bit behind other models like RetinaNet in this metric though.

However, when we look at the “old” detection metric of mAP at IOU = .5 (or AP50 in the chart) YOLOv3 is very strong. It is almost on par with RetinaNet and far above the SSD variants. This indicates that YOLOv3 is a very strong detector that excels at producing decent boxes for objects. However, performance drops significantly as the IOU threshold increases, indicating YOLOv3 struggles to get the boxes perfectly aligned with the object.

In the past YOLO struggled with small objects. However, now we see a reversal in that trend. With the new multi-scale predictions we see YOLOv3 has relatively high AP_S performance. However, it has comparatively worse performance on medium and larger size objects. More investigation is needed to get to the bottom of this.

When we plot accuracy vs speed on the AP50 metric (see figure 3) we see YOLOv3 has significant benefits over other detection systems. Namely, it’s faster and better.

4. Things We Tried That Didn’t Work

We tried lots of stuff while we were working on YOLOv3. A lot of it didn’t work. Here’s the stuff we can remember.

Anchor box x, y offset predictions. We tried using the normal anchor box prediction mechanism where you predict the x, y offset as a multiple of the box width or height using a linear activation. We found this formulation decreased model stability and didn’t work very well.

Linear x, y predictions instead of logistic. We tried using a linear activation to directly predict the x, y offset instead of the logistic activation. This led to a couple point drop in mAP.
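For clarity, here is a sketch of the three center parameterizations being contrasted; the anchor-style formula follows the usual Faster R-CNN / RPN convention, which is an assumption here about what "the normal anchor box prediction mechanism" refers to.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# cx, cy: cell offset; pw, ph: prior width/height; tx, ty: raw network outputs.

def center_logistic(tx, ty, cx, cy):
    """What YOLOv3 keeps: logistic offsets, so the center stays inside its cell."""
    return sigmoid(tx) + cx, sigmoid(ty) + cy

def center_linear(tx, ty, cx, cy):
    """Tried: linear x, y prediction instead of logistic (cost a couple points of mAP)."""
    return tx + cx, ty + cy

def center_anchor_style(tx, ty, bx_anchor, by_anchor, pw, ph):
    """Tried: offsets as multiples of the prior's width/height (hurt stability)."""
    return bx_anchor + tx * pw, by_anchor + ty * ph
```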

Focal loss. We tried using focal loss. It dropped our mAP about 2 points. YOLOv3 may already be robust to the problem focal loss is trying to solve because it has separate objectness predictions and conditional class predictions. Thus for most examples there is no loss from the class predictions? Or something? We aren’t totally sure.
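For reference, the focal loss in question has the form FL(p_t) = −α_t (1 − p_t)^γ log(p_t) [9]; a one-prediction sketch (γ = 2 and α = 0.25 are the defaults reported in [9], not values tuned here):

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Focal loss [9] on a sigmoid probability p with binary label y (a sketch)."""
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t + 1e-7)
```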

Dual IOU thresholds and truth assignment. Faster R-CNN uses two IOU thresholds during training. If a prediction overlaps the ground truth by .7 it is a positive example, if it overlaps by [.3−.7] it is ignored, and if it overlaps less than .3 with all ground truth objects it is a negative example. We tried a similar strategy but couldn’t get good results.
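A sketch of that dual-threshold rule, again assuming an `iou` helper and illustrative names:

```python
POS_THRESH = 0.7
NEG_THRESH = 0.3

def dual_threshold_assign(priors, gt_boxes, iou):
    """The Faster R-CNN-style assignment we tried (and dropped): .7 / .3 thresholds."""
    targets = []
    for prior in priors:
        best = max((iou(prior, gt) for gt in gt_boxes), default=0.0)
        if best >= POS_THRESH:
            targets.append(1)        # positive example
        elif best < NEG_THRESH:
            targets.append(0)        # negative example
        else:
            targets.append(None)     # in [.3, .7): ignored
    return targets
```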

We quite like our current formulation; it seems to be at a local optimum at least. It is possible that some of these techniques could eventually produce good results; perhaps they just need some tuning to stabilize the training.

Table 3. I’m seriously just stealing all these tables from [9], they take soooo long to make from scratch. Ok, YOLOv3 is doing alright. Keep in mind that RetinaNet takes like 3.8× longer to process an image. YOLOv3 is much better than the SSD variants and comparable to state-of-the-art models on the AP50 metric.


Figure 3. Again adapted from [9], this time displaying the speed/accuracy tradeoff on the mAP at .5 IOU metric. You can tell YOLOv3 is good because it’s very high and far to the left. Can you cite your own paper? Guess who’s going to try, this guy! [16]. Oh, I forgot, we also fixed a data loading bug in YOLOv2, which helped by like 2 mAP. Just sneaking this in here to not throw off the layout.

5. What This All Means

YOLOv3 is a good detector. It’s fast, it’s accurate. It’s not as great on the COCO average AP between .5 and .95 IOU metric. But it’s very good on the old detection metric of .5 IOU.

Why did we switch metrics anyway? The original COCO paper just has this cryptic sentence: “A full discussion of evaluation metrics will be added once the evaluation server is complete”. Russakovsky et al. report that humans have a hard time distinguishing an IOU of .3 from .5! “Training humans to visually inspect a bounding box with IOU of 0.3 and distinguish it from one with IOU 0.5 is surprisingly difficult.” [18] If humans have a hard time telling the difference, how much does it matter?

But maybe a better question is: “What are we going to do with these detectors now that we have them?” A lot of the people doing this research are at Google and Facebook. I guess at least we know the technology is in good hands and definitely won’t be used to harvest your personal information and sell it to… wait, you’re saying that’s exactly what it will be used for?? Oh.

Well, the other people heavily funding vision research are the military and they’ve never done anything horrible like killing lots of people with new technology oh wait…1

I have a lot of hope that most of the people using computer vision are just doing happy, good stuff with it, like counting the number of zebras in a national park [13], or tracking their cat as it wanders around their house [19]. But computer vision is already being put to questionable use and as researchers we have a responsibility to at least consider the harm our work might be doing and think of ways to mitigate it. We owe the world that much.

In closing, do not @ me. (Because I finally quit Twitter).

References

[1] Analogy. Wikipedia, Mar 2018. 1

[2] M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman. The pascal visual object classes (voc) challenge. International journal of computer vision, 88(2):303–338, 2010. 6

[3] C.-Y. Fu, W. Liu, A. Ranga, A. Tyagi, and A. C. Berg. Dssd: Deconvolutional single shot detector. arXiv preprint arXiv:1701.06659, 2017. 3

[4] D. Gordon, A. Kembhavi, M. Rastegari, J. Redmon, D. Fox, and A. Farhadi. Iqa: Visual question answering in interactive environments. arXiv preprint arXiv:1712.03316, 2017. 1

[5] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016. 3

[6] J. Huang, V. Rathod, C. Sun, M. Zhu, A. Korattikara, A. Fathi, I. Fischer, Z. Wojna, Y. Song, S. Guadarrama, et al. Speed/accuracy trade-offs for modern convolutional object detectors. 3

[7] I. Krasin, T. Duerig, N. Alldrin, V. Ferrari, S. Abu-El-Haija, A. Kuznetsova, H. Rom, J. Uijlings, S. Popov, A. Veit, S. Belongie, V. Gomes, A. Gupta, C. Sun, G. Chechik, D. Cai, Z. Feng, D. Narayanan, and K. Murphy. Openimages: A public dataset for large-scale multi-label and multi-class image classification. Dataset available from https://github.com/openimages, 2017. 2

[8] T.-Y. Lin, P. Dollar, R. Girshick, K. He, B. Hariharan, and S. Belongie. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2117–2125, 2017. 2, 3

[9] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár. Focal loss for dense object detection. arXiv preprint arXiv:1708.02002, 2017. 1, 3, 4

[10] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common objects in context. In European conference on computer vision, pages 740–755. Springer, 2014. 2

[11] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.- Y. Fu, and A. C. Berg. Ssd: Single shot multibox detector. In European conference on computer vision, pages 21–37. Springer, 2016. 3

[12] I. Newton. Philosophiae naturalis principia mathematica. William Dawson & Sons Ltd., London, 1687. 1

[13] J. Parham, J. Crall, C. Stewart, T. Berger-Wolf, and D. Rubenstein. Animal population censusing at scale with citizen science and photographic identification. 2017. 4

[14] J. Redmon. Darknet: Open source neural networks in c. http://pjreddie.com/darknet/, 2013–2016. 3

[15] J. Redmon and A. Farhadi. Yolo9000: Better, faster, stronger. In Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on, pages 6517–6525. IEEE, 2017. 1, 2, 3

[16] J. Redmon and A. Farhadi. Yolov3: An incremental improvement. arXiv, 2018. 4

[17] S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. arXiv preprint arXiv:1506.01497, 2015. 2

[18] O. Russakovsky, L.-J. Li, and L. Fei-Fei. Best of both worlds: human-machine collaboration for object annotation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2121–2131, 2015. 4

[19] M. Scott. Smart camera gimbal bot scanlime:027, Dec 2017. 4

[20] A. Shrivastava, R. Sukthankar, J. Malik, and A. Gupta. Beyond skip connections: Top-down modulation for object detection. arXiv preprint arXiv:1612.06851, 2016. 3

[21] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi. Inception-v4, inception-resnet and the impact of residual connections on learning. 2017. 3

Rebuttal


Figure 4. Zero-axis charts are probably more intellectually honest… and we can still screw with the variables to make ourselves look good!

We would like to thank the Reddit commenters, labmates, emailers, and passing shouts in the hallway for their lovely, heartfelt words. If you, like me, are reviewing for ICCV then we know you probably have 37 other papers you could be reading that you’ll invariably put off until the last week and then have some legend in the field email you about how you really should finish those reviews except it won’t entirely be clear what they’re saying and maybe they’re from the future? Anyway, this paper won’t have become what it will in time be without all the work your past selves will have done also in the past but only a little bit further forward, not like all the way until now forward. And if you tweeted about it I wouldn’t know. Just sayin.

Reviewer #2 AKA Dan Grossman (lol blinding who does that) insists that I point out here that our graphs have not one but two non-zero origins. You’re absolutely right Dan, that’s because it looks way better than admitting to ourselves that we’re all just here battling over 2-3% mAP. But here are the requested graphs. I threw in one with FPS too because we look just like super good when we plot on FPS.

Reviewer #4 AKA JudasAdventus on Reddit writes “Entertaining read but the arguments against the MSCOCO metrics seem a bit weak”. Well, I always knew you would be the one to turn on me, Judas. You know how when you work on a project and it only comes out alright, so you have to figure out some way to justify how what you did actually was pretty cool? I was basically trying to do that and I lashed out at the COCO metrics a little bit. But now that I’ve staked out this hill I may as well die on it.

See, here’s the thing: mAP is already sort of broken, so an update to it should maybe address some of the issues with it or at least justify why the updated version is better in some way. And that’s the big thing I took issue with, the lack of justification. For PASCAL VOC, the IOU threshold was “set deliberately low to account for inaccuracies in bounding boxes in the ground truth data” [2]. Does COCO have better labelling than VOC? This is definitely possible since COCO has segmentation masks; maybe the labels are more trustworthy and thus we aren’t as worried about inaccuracy. But again, my problem was the lack of justification.

The COCO metric emphasizes better bounding boxes, but that emphasis must mean it de-emphasizes something else, in this case classification accuracy. Is there a good reason to think that more precise bounding boxes are more important than better classification? A misclassified example is much more obvious than a bounding box that is slightly shifted.

mAP is already screwed up because all that matters is per-class rank ordering. For example, if your test set only has these two images, then according to mAP two detectors that produce these results are JUST AS GOOD:

Figure 5. These two hypothetical detectors are perfect according to mAP over these two images. They are both perfect. Totally equal.
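To see why ranking is all that matters, here is a small non-interpolated AP computation (VOC and COCO actually use interpolated variants, so treat this as a sketch): two detectors with very different confidence scores but the same ordering of correct and incorrect detections get exactly the same AP.

```python
def average_precision(ranked_hits, num_gt):
    """Non-interpolated AP from detections sorted by confidence; True = correct match.

    Only the order of hits matters; the confidence values themselves (and how
    the boxes look, within the IOU threshold) never enter the computation.
    """
    tp = fp = 0
    ap = 0.0
    for hit in ranked_hits:
        if hit:
            tp += 1
            ap += tp / (tp + fp) * (1.0 / num_gt)   # precision at each new recall step
        else:
            fp += 1
    return ap

detector_a = [True, True, False, True]   # e.g. confidences 0.99, 0.98, 0.97, 0.96
detector_b = [True, True, False, True]   # e.g. confidences 0.51, 0.40, 0.30, 0.20
print(average_precision(detector_a, num_gt=3),
      average_precision(detector_b, num_gt=3))   # identical
```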

Now this is OBVIOUSLY an over-exaggeration of the problems with mAP, but I guess my newly retconned point is that there are such obvious discrepancies between what people in the “real world” would care about and our current metrics that I think if we’re going to come up with new metrics we should focus on these discrepancies. Also, like, it’s already mean average precision, what do we even call the COCO metric, average mean average precision?

Here’s a proposal: what people actually care about is, given an image and a detector, how well the detector will find and classify objects in the image. What about getting rid of the per-class AP and just doing a global average precision? Or doing an AP calculation per-image and averaging over that?

Boxes are stupid anyway though, I’m probably a true believer in masks except I can’t get YOLO to learn them.



Reposted from blog.csdn.net/weixin_43590290/article/details/101446314