1. Loss weights in YOLOv5 object detection
YOLOv5 has three loss components: box (bounding-box regression), obj (objectness), and cls (classification).
Base values are set in the hyperparameter configuration file hyp.*.yaml, for example:
box: 0.05
cls: 0.5
obj: 1
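These hyperparameter files are plain YAML, so loading them is a one-liner. A minimal sketch, assuming PyYAML is installed (the inline string stands in for a real hyp.*.yaml file):

```python
import yaml

# Stand-in for the contents of a hyp.*.yaml file, as shown above
hyp_text = """
box: 0.05
cls: 0.5
obj: 1
"""

hyp = yaml.safe_load(hyp_text)  # dict mapping hyperparameter names to base values
print(hyp)
```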
During training, train.py rescales these base values:
hyp['box'] *= 3 / nl # scale to layers
hyp['cls'] *= nc / 80 * 3 / nl # scale to classes and layers
hyp['obj'] *= (imgsz / 640) ** 2 * 3 / nl # scale to image size and layers
As the code shows, the loss weights depend on nl (the number of detection layers, 3 here) and on the image size. The dependence on layers is easy to understand, since the total loss is summed over multiple layers; the dependence on image size needs further exploration.
nl = model.model[-1].nl # number of detection layers (used for scaling hyp['obj'])
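The three scaling lines above can be run in isolation. A small sketch with assumed values (nl = 3 detection layers, nc = 80 classes as in COCO, imgsz = 640):

```python
# Assumed settings: 3 detection layers, 80 classes (COCO), 640-px training images
nl, nc, imgsz = 3, 80, 640

# Base values from the hyp.*.yaml example above
hyp = {'box': 0.05, 'cls': 0.5, 'obj': 1.0}

hyp['box'] *= 3 / nl                       # scale to layers
hyp['cls'] *= nc / 80 * 3 / nl             # scale to classes and layers
hyp['obj'] *= (imgsz / 640) ** 2 * 3 / nl  # scale to image size and layers

print(hyp)  # at the 640/80-class/3-layer baseline the values are unchanged
```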
Then, during the loss calculation in loss.py, each component is multiplied by its weight:
lbox *= self.hyp['box']
lobj *= self.hyp['obj']
lcls *= self.hyp['cls']
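Putting the pieces together, here is a toy sketch of how the weighted components combine into the total loss. The per-component values below are made up; in YOLOv5's ComputeLoss they come from CIoU, BCE-objectness, and BCE-classification terms, and the weighted sum is also multiplied by the batch size:

```python
hyp = {'box': 0.05, 'obj': 1.0, 'cls': 0.5}
bs = 16  # batch size (assumed)

# Placeholder per-component losses (made-up numbers for illustration)
lbox, lobj, lcls = 2.0, 1.5, 0.8

# Apply the weights, as in loss.py
lbox *= hyp['box']
lobj *= hyp['obj']
lcls *= hyp['cls']

loss = (lbox + lobj + lcls) * bs  # weighted sum, scaled by batch size
print(loss)
```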
In addition, labels whose boxes are smaller than 2 pixels on both sides are filtered out during training (with a warning for boxes under 3 pixels):
# Filter
i = (wh0 < 3.0).any(1).sum()
if i:
    print(f'{prefix}WARNING: Extremely small objects found. {i} of {len(wh0)} labels are < 3 pixels in size.')
wh = wh0[(wh0 >= 2.0).any(1)]  # filter > 2 pixels
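A small numpy sketch of the same filter on made-up width/height labels (wh0 holds one (w, h) pair in pixels per label):

```python
import numpy as np

# Made-up label sizes in pixels: (width, height) per label
wh0 = np.array([[1.0, 1.5],     # both sides < 2 px -> dropped
                [2.5, 1.0],     # one side >= 2 px  -> kept
                [10.0, 20.0]])  # normal box        -> kept

i = (wh0 < 3.0).any(1).sum()  # count labels with any side under 3 px
if i:
    print(f'WARNING: Extremely small objects found. {i} of {len(wh0)} labels are < 3 pixels in size.')

# Keep labels where at least one side is >= 2 px
wh = wh0[(wh0 >= 2.0).any(1)]
print(wh)
```

Note that because of .any(1), a label is only dropped when both sides are under 2 pixels.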
2. The relationship between obj loss and image size
The obj loss weight is scaled by the square of the image size relative to the 640-pixel baseline:
hyp['obj'] *= (imgsz / 640) ** 2 * 3. / nl
Take a look at the resulting weights for image sizes 1280, 640, 320, and 224:
Image size    nl    hyp['box']    hyp['obj']    hyp['cls']
1280          3     0.05          4.0           0.5
640           3     0.05          1.0           0.5
320           3     0.05          0.25          0.5
224           3     0.05          0.1225        0.5
(For 224, Python prints 0.12249999999999998 due to floating-point rounding.)
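These obj weights follow directly from the scaling formula. A quick check, assuming base obj = 1 and nl = 3:

```python
nl = 3
base_obj = 1.0

# obj weight as a function of training image size, per the train.py formula
weights = {imgsz: base_obj * (imgsz / 640) ** 2 * 3 / nl
           for imgsz in (1280, 640, 320, 224)}
print(weights)  # 224 prints as 0.12249999999999998 due to floating point
```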
Reference:
https://blog.csdn.net/flyfish1986/article/details/116832354