Note: a summary of training techniques for convolutional neural networks; I will keep updating and improving it as my understanding grows. The outline below lists the methods and frameworks to be covered. For now it is just a placeholder; as I get a bit stronger I will gradually fill in the details of each item.
Framework diagram of common training techniques for convolutional neural networks (from the screenshot in the original YOLOv4 paper).
- Activation function
1.1 What is ReLU:
Features:
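Until I fill in the definition, here is a minimal NumPy sketch of ReLU (my own illustration, not from the YOLOv4 paper):

```python
import numpy as np

def relu(x):
    # ReLU: pass positive inputs through unchanged, zero out negatives
    return np.maximum(0.0, x)

relu(np.array([-2.0, 0.0, 3.0]))  # -> [0., 0., 3.]
```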
1.2 What is Leaky ReLU:
Features:
1.3 What is Parametric ReLU:
Features:
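The two variants above differ only in whether the negative-side slope is a fixed hyperparameter (Leaky ReLU) or a learned parameter (PReLU). A quick sketch of both (my own illustration):

```python
import numpy as np

def leaky_relu(x, slope=0.01):
    # Leaky ReLU: a small fixed slope keeps gradients alive for negative inputs
    return np.where(x > 0, x, slope * x)

def prelu(x, alpha):
    # Parametric ReLU: same shape, but `alpha` is learned during training
    return np.where(x > 0, x, alpha * x)
```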
1.4 What is ReLU6:
Features:
1.5 What is SELU:
Features:
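Sketches of the two items above (my own illustration): ReLU6 simply caps the output at 6, and SELU scales an ELU-shaped curve with fixed constants chosen to make activations self-normalizing.

```python
import numpy as np

def relu6(x):
    # ReLU capped at 6; the bound helps with low-precision inference
    return np.minimum(np.maximum(x, 0.0), 6.0)

def selu(x, alpha=1.6732632423543772, scale=1.0507009873554805):
    # SELU: scaled ELU with the constants from the SELU paper
    return scale * np.where(x > 0, x, alpha * (np.exp(x) - 1.0))
```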
1.6 What is Swish:
Features:
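A one-line sketch of Swish, x · sigmoid(βx) (my own illustration; β = 1 is the common default):

```python
import numpy as np

def swish(x, beta=1.0):
    # Swish: x * sigmoid(beta * x); smooth and slightly non-monotonic near zero
    return x / (1.0 + np.exp(-beta * x))
```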
1.7 What is Mish:
Features:
- Bounding box regression loss
2.1 What is MSE:
Features:
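As a placeholder, a minimal sketch of MSE applied to box regression, i.e. a squared-error loss directly on the four coordinates (my own illustration):

```python
import numpy as np

def mse_box_loss(pred, target):
    # Mean squared error over the 4 box parameters (e.g. x, y, w, h)
    return np.mean((pred - target) ** 2)
```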
2.2 What is IoU:
Features:
2.3 What is GIoU:
Features:
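A sketch of the two overlap-based measures above (my own illustration, boxes as (x1, y1, x2, y2)): GIoU extends IoU by subtracting the fraction of the smallest enclosing box not covered by the union, so even non-overlapping boxes get a useful gradient.

```python
def iou(a, b):
    # Intersection over union of two axis-aligned boxes
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

def giou(a, b):
    # GIoU = IoU - |C \ (A ∪ B)| / |C|, with C the smallest enclosing box
    c_area = ((max(a[2], b[2]) - min(a[0], b[0]))
              * (max(a[3], b[3]) - min(a[1], b[1])))
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union - (c_area - union) / c_area
```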
2.4 What is CIoU:
Features:
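A sketch of the CIoU loss (my own illustration): on top of 1 − IoU it adds a normalized center-distance penalty (the same ρ²/c² term used by DIoU) and an aspect-ratio consistency term.

```python
import math

def ciou_loss(a, b):
    # Boxes as (x1, y1, x2, y2); CIoU loss = 1 - IoU + rho^2/c^2 + alpha*v
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    wa, ha = a[2] - a[0], a[3] - a[1]
    wb, hb = b[2] - b[0], b[3] - b[1]
    union = wa * ha + wb * hb - inter
    iou = inter / union
    # squared distance between box centers (the DIoU penalty numerator)
    rho2 = (((a[0] + a[2]) - (b[0] + b[2])) ** 2
            + ((a[1] + a[3]) - (b[1] + b[3])) ** 2) / 4.0
    # squared diagonal of the smallest enclosing box
    c2 = ((max(a[2], b[2]) - min(a[0], b[0])) ** 2
          + (max(a[3], b[3]) - min(a[1], b[1])) ** 2)
    # aspect-ratio consistency term and its trade-off weight
    v = (4 / math.pi ** 2) * (math.atan(wa / ha) - math.atan(wb / hb)) ** 2
    alpha = v / (1 - iou + v + 1e-9)
    return 1 - iou + rho2 / c2 + alpha * v
```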
2.5 What is DIoU:
Features:
- Data augmentation
3.1 What is CutOut:
Features:
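A minimal CutOut sketch (my own illustration): zero out a random square patch of the image so the network cannot rely on any single region.

```python
import numpy as np

def cutout(img, size, rng=None):
    # img: (H, W, C); zero a `size` x `size` square at a random center
    rng = rng if rng is not None else np.random.default_rng()
    h, w = img.shape[:2]
    cy, cx = int(rng.integers(0, h)), int(rng.integers(0, w))
    y1, y2 = max(0, cy - size // 2), min(h, cy + size // 2)
    x1, x2 = max(0, cx - size // 2), min(w, cx + size // 2)
    out = img.copy()
    out[y1:y2, x1:x2] = 0
    return out
```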
3.2 What is MixUp:
Features:
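A minimal MixUp sketch (my own illustration): blend two samples and their one-hot labels with a Beta-distributed weight.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    # Convex combination of two samples; labels are mixed with the same weight
    rng = rng if rng is not None else np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2
```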
3.3 What is CutMix:
Features:
- Regularization
4.1 What is DropOut:
Features:
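A minimal inverted-dropout sketch (my own illustration): each unit is zeroed with probability p and survivors are scaled by 1/(1−p), so no rescaling is needed at test time.

```python
import numpy as np

def dropout(x, p=0.5, training=True, rng=None):
    # Inverted dropout: identity at inference, masked + rescaled in training
    if not training or p == 0.0:
        return x
    rng = rng if rng is not None else np.random.default_rng()
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)
```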
4.2 What is DropPath:
Features:
4.3 What is Spatial DropOut:
Features:
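A minimal Spatial Dropout sketch (my own illustration): instead of dropping individual activations, drop whole feature maps, since neighboring pixels in a channel are strongly correlated.

```python
import numpy as np

def spatial_dropout(x, p=0.5, rng=None):
    # x: (N, C, H, W); one keep/drop decision per (sample, channel)
    rng = rng if rng is not None else np.random.default_rng()
    mask = rng.random((x.shape[0], x.shape[1], 1, 1)) >= p
    return x * mask / (1.0 - p)
```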
4.4 What is DropBlock:
Features:
- Normalization
5.1 Batch Normalization (BN)
What is:
Features:
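A minimal training-mode BN forward pass (my own illustration; running statistics for inference are omitted):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # x: (N, D); normalize each feature over the batch, then scale and shift
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta
```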
5.2 Cross-GPU Batch Normalization (CGBN or SyncBN)
What is:
Features:
5.3 Filter Response Normalization (FRN)
What is:
Features:
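A sketch of FRN as I understand it from the paper (my own illustration): each channel is divided by the root mean square of its own spatial activations (no mean subtraction, no batch dependence), then passed through a thresholded linear unit (TLU).

```python
import numpy as np

def frn(x, gamma, beta, tau, eps=1e-6):
    # x: (N, C, H, W); nu2 is the mean squared activation per (sample, channel)
    nu2 = np.mean(x ** 2, axis=(2, 3), keepdims=True)
    y = gamma * x / np.sqrt(nu2 + eps) + beta
    # TLU: learned threshold instead of a plain ReLU
    return np.maximum(y, tau)
```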
5.4 Cross-Iteration Batch Normalization (CBN)
What is:
Features:
- Residual connections
6.1 Residual connections
What is:
Features:
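The idea in one line (my own illustration): the block learns a residual F(x) on top of an identity shortcut, y = x + F(x), which keeps gradients flowing through deep stacks.

```python
import numpy as np

def residual_block(x, f):
    # f is the learned transform (conv layers in practice); shapes must match
    return x + f(x)
```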
6.2 Weighted Residual Connections
What is:
Features:
6.3 Multi-Input Weighted Residual Connections
What is:
Features:
6.4 Cross Stage Partial Connections (CSP)
What is:
Features:
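A channel-level sketch of the CSP idea (my own illustration; real CSPNet blocks add transition layers): split the feature map along channels, send one half through the heavy transform, pass the other half straight through, then concatenate.

```python
import numpy as np

def csp_block(x, f):
    # x: (N, C, ...); f is the expensive stage applied to half the channels
    c = x.shape[1] // 2
    part1, part2 = x[:, :c], x[:, c:]
    return np.concatenate([part1, f(part2)], axis=1)
```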