YOLOv5 training tuning tips

This article is compiled from the original English guide at https://github.com/ultralytics/yolov5/wiki/Tips-for-Best-Training-Results and explains how to improve the mAP and training results of YOLOv5.

Most of the time, good results can be obtained without changing the model or training configuration, provided the dataset is large enough and well labeled. If the results of your first training run are not satisfactory, you will want to take measures to improve them, but before doing so we still recommend training with the default settings. This establishes a performance baseline and helps identify areas for improvement.

The following is a complete guide provided by the official - how to get good results in YOLOv5 training.

data set

  • images per class

≥1500 images per class are recommended.

  • instances per class

≥10000 instances per class are recommended.
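To check whether a dataset meets these counts, per-class instance totals can be tallied directly from the YOLO-format label files, where each line is `class_id x_center y_center width height`. The sketch below is an illustration (the label directory path is whatever your dataset uses), not part of YOLOv5 itself:

```python
import os
from collections import Counter

def count_instances(label_dir):
    """Count labeled instances per class across YOLO-format .txt files.

    Each line of a label file is: <class_id> <x> <y> <w> <h>.
    """
    counts = Counter()
    for name in os.listdir(label_dir):
        if not name.endswith(".txt"):
            continue
        with open(os.path.join(label_dir, name)) as f:
            for line in f:
                parts = line.split()
                if parts:
                    counts[int(parts[0])] += 1
    return counts
```

Classes whose totals fall well below the recommended thresholds are good candidates for collecting more data.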

  • image variety

Images should come from different times of day, seasons, weather conditions, lighting, angles, and sources (scraped online, collected locally, different cameras), etc.

  • labeling consistency

All instances of all classes in all images must be labeled.

  • labeling accuracy

Labels must closely enclose each object; there should be no space between an object and its bounding box. No object should be missing a label.
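Some labeling errors can be caught automatically before training. The following sketch (an illustration, not part of YOLOv5) scans one YOLO-format label file for malformed lines, coordinates outside the normalized [0, 1] range, and zero-area boxes; whether boxes fit tightly still has to be reviewed visually:

```python
def check_label_file(path):
    """Return a list of problems found in one YOLO-format label file.

    Flags malformed lines, coordinates outside [0, 1], and degenerate
    (zero-area) boxes. It cannot judge whether a box fits tightly.
    """
    problems = []
    with open(path) as f:
        for i, line in enumerate(f, 1):
            parts = line.split()
            if len(parts) != 5:
                problems.append(f"line {i}: expected 5 fields, got {len(parts)}")
                continue
            _, x, y, w, h = parts
            x, y, w, h = map(float, (x, y, w, h))
            if not all(0.0 <= v <= 1.0 for v in (x, y, w, h)):
                problems.append(f"line {i}: coordinate outside [0, 1]")
            if w <= 0 or h <= 0:
                problems.append(f"line {i}: zero-area box")
    return problems
```

Running this over every label file before training is a cheap way to find annotation-export bugs early.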

  • label verification

Check the train_batch*.jpg and val_batch*.jpg images in the runs/train/exp folder to verify that your labels appear correct.

  • background images

Images that contain no objects can be added to the dataset as background images to reduce false positives (FP). About 0-10% background images are recommended. Background images do not need labels.
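A quick way to check the background proportion is to count images that have no (or an empty) label file, since that is how a background image is represented in the YOLO label layout. This is a hypothetical sketch; the image and label directory paths are assumptions:

```python
import os

def background_ratio(image_dir, label_dir):
    """Fraction of images with no matching (or empty) label file.

    In the YOLO label layout, an image without a label file is a
    background image, so backgrounds need no annotation work.
    """
    images = [n for n in os.listdir(image_dir)
              if n.lower().endswith((".jpg", ".jpeg", ".png"))]
    if not images:
        return 0.0
    backgrounds = 0
    for name in images:
        label = os.path.join(label_dir, os.path.splitext(name)[0] + ".txt")
        if not os.path.exists(label) or os.path.getsize(label) == 0:
            backgrounds += 1
    return backgrounds / len(images)
```

If the returned ratio is well above 0.10, consider trimming background images; if it is 0, adding some may reduce false positives.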

model selection

Larger models like YOLOv5x and YOLOv5x6 produce better results in almost all cases, but because they have more parameters, they require more CUDA memory for training and are slower to train. For mobile deployment, YOLOv5s/m is recommended; for cloud deployment, YOLOv5l/x is recommended.

  • Start with pretrained weights

Recommended for small and medium-sized datasets (e.g. VOC, VisDrone, GlobalWheat). Pass the model name to the --weights argument; models are downloaded automatically from the latest YOLOv5 release.

python train.py --data custom.yaml --weights yolov5s.pt

  • start from scratch

Recommended for large datasets (COCO, Objects365, OIv6). Pass the model architecture YAML you are interested in, along with an empty --weights argument:

python train.py --data custom.yaml --weights '' --cfg yolov5s.yaml

training settings

Before modifying anything, train once with the default settings to establish a performance baseline. The full list of settings can be found in train.py.

  • epochs

The default is 300 epochs. If overfitting occurs early, reduce the number of epochs. If no overfitting occurs, train longer, e.g. 600 or 1200 epochs.

  • image size

--img defaults to 640. If the dataset contains many small objects, --img 1280 is recommended. If you train with --img 1280, you should also test and detect at 1280.

  • batch size

Use the largest batch size your hardware allows. Small batch sizes produce poor batch-norm statistics and should be avoided.

  • hyperparameters

The default hyperparameters are in hyp.scratch-low.yaml. It is recommended to train with the default values first before considering any changes.


Origin blog.csdn.net/Kigha/article/details/129178428