Training YOLOv5 on VisDrone2019 (using ultralytics)

Library used: ultralytics

Environment: Python 3.8, PyTorch 1.7.0

Clone the ultralytics repository locally:

git clone https://github.com/ultralytics/ultralytics/
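The Python API used below also needs the ultralytics package itself to be installed (the clone above is mainly useful for browsing the model and dataset configuration files):

pip install ultralytics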

Create a new train.py locally with the following content:

from ultralytics import YOLO

# Load a model
model = YOLO("yolov5n.yaml")  # build a new model from scratch

# Use the model
# model.train(data="VisDrone.yaml", epochs=1, batch=1)  # train on VisDrone
model.train(data="coco128.yaml", epochs=1, batch=1)      # quick local test on coco128

metrics = model.val()  # evaluate model performance on the validation set
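As a usage note, the metrics object returned by model.val() holds the detection results; in current ultralytics versions the COCO-style mAP values can be read from metrics.box (attribute names follow the ultralytics docs, adjust if your version differs):

print(metrics.box.map)    # mAP@0.5:0.95
print(metrics.box.map50)  # mAP@0.5
print(metrics.box.map75)  # mAP@0.75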

The 128 images of coco128 are used for a quick local test. Once that worked, on the cloud server I replaced the dataset with VisDrone.yaml, the dataset we actually want to train on.
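For reference, here is a minimal sketch of what a VisDrone.yaml dataset config can look like. The ten class names are the standard VisDrone2019-DET categories; the paths are assumptions that depend on where the dataset is unpacked, and recent ultralytics releases already ship a ready-made VisDrone.yaml among their bundled dataset configs.

# VisDrone.yaml (sketch) -- paths are assumptions, adjust to your local layout
path: ../datasets/VisDrone
train: VisDrone2019-DET-train/images
val: VisDrone2019-DET-val/images
test: VisDrone2019-DET-test-dev/images
names:
  0: pedestrian
  1: people
  2: bicycle
  3: car
  4: van
  5: truck
  6: tricycle
  7: awning-tricycle
  8: bus
  9: motor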

With 40 GB of GPU memory, a batch size of 25 worked best: the real-time GPU usage shown was only 11.4 GB, but peak memory utilization reached 95%, so leave some headroom to avoid running out of memory.
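Putting that together, the cloud-server training call might look like the sketch below; only data and batch come from the text above, while epochs, imgsz, and device are illustrative assumptions (recent ultralytics versions also accept batch=-1 to pick a batch size automatically from available GPU memory):

from ultralytics import YOLO

model = YOLO("yolov5n.yaml")
# batch=25 leaves headroom on a 40 GB GPU per the observation above;
# epochs and imgsz are illustrative values, not from the original post
model.train(data="VisDrone.yaml", epochs=100, batch=25, imgsz=640, device=0)
metrics = model.val()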

Model configuration files referenced: yolov5n.yaml, yolov6n.yaml (from the ultralytics repository).


Source: blog.csdn.net/Albert233333/article/details/131927196