[Object Detection] YOLOv8 Hands-On Tutorial (Part 3): Model Training


Train mode is used to train a YOLOv8 model on a custom dataset. In this mode, the model is trained with the specified dataset and hyperparameters. The training process optimizes the model's parameters so that it can accurately predict the classes and locations of objects in images.
Note: YOLOv8 datasets such as COCO, VOC, ImageNet and many others are downloaded automatically on first use, e.g. yolo train data=coco.yaml.

from ultralytics import YOLO

# Train from a model definition using one of the official dataset configs
# (COCO, VOC, ImageNet, etc.); the dataset is downloaded automatically on first use
model = YOLO('yolov8n.yaml')
results = model.train(data='coco128.yaml', epochs=3)

# Train without an explicit dataset config, using the settings associated
# with the pretrained weights
model = YOLO('yolov8n.pt')
model.train(epochs=5)

# Resume a previously interrupted training run from its last checkpoint
model = YOLO("last.pt")
model.train(resume=True)
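
The last.pt checkpoint referenced above is one of the files a training run writes out: by default, Ultralytics saves logs, plots and the best.pt / last.pt weights under a runs/detect/<name> directory, where the exact path depends on the project and name settings. Below is a minimal sketch of loading the finished weights and validating them; the checkpoint path shown is the default location and may differ on your machine.

from ultralytics import YOLO

# Load the best checkpoint from a finished run. The path is the default
# location used by Ultralytics (runs/detect/train/weights/); adjust it if
# project/name were overridden during training.
model = YOLO('runs/detect/train/weights/best.pt')

# Validate the trained weights on the val split of the dataset config
# that was used for training.
metrics = model.val()
print(metrics.box.map50)  # mAP at IoU 0.5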

The training settings of a YOLOv8 model are the various hyperparameters and configurations used to train the model on a dataset. These settings affect the model's performance, speed, and accuracy. Common YOLOv8 training settings include the batch size, learning rate, momentum, and weight decay. Other factors that can affect the training process include the choice of optimizer, the choice of loss function, and the size and composition of the training set. It is important to carefully tune and experiment with these settings to achieve the best performance for a given task.
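
All of these settings are passed as keyword arguments to model.train() (or as key=value pairs on the yolo command line). The sketch below overrides a few of the core optimization settings; the values are illustrative, not tuned recommendations, and every key in the table that follows can be passed the same way.

from ultralytics import YOLO

model = YOLO('yolov8n.pt')

# Override a few common training settings; the values are examples only.
model.train(
    data='coco128.yaml',    # dataset config
    epochs=50,              # number of training epochs
    batch=16,               # images per batch (-1 enables AutoBatch)
    imgsz=640,              # input image size
    lr0=0.01,               # initial learning rate
    optimizer='SGD',        # 'SGD', 'Adam', 'AdamW', 'RMSProp'
    weight_decay=0.0005,    # optimizer weight decay
    device=0,               # GPU index, or 'cpu'
)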

The relevant parameters are as follows:

| Key | Default | Description |
| --- | --- | --- |
| model | None | path to model file, e.g. yolov8n.pt, yolov8n.yaml |
| data | None | path to data file, e.g. coco128.yaml |
| epochs | 100 | number of epochs to train for |
| patience | 50 | epochs to wait for no observable improvement for early stopping of training |
| batch | 16 | number of images per batch (-1 for AutoBatch) |
| imgsz | 640 | size of input images as integer or w,h |
| save | True | save train checkpoints and predict results |
| save_period | -1 | save checkpoint every x epochs (disabled if < 1) |
| cache | False | True/ram, disk or False; use cache for data loading |
| device | None | device to run on, e.g. cuda device=0, device=0,1,2,3 or device=cpu |
| workers | 8 | number of worker threads for data loading (per RANK if DDP) |
| project | None | project name |
| name | None | experiment name |
| exist_ok | False | whether to overwrite existing experiment |
| pretrained | False | whether to use a pretrained model |
| optimizer | 'SGD' | optimizer to use, choices=['SGD', 'Adam', 'AdamW', 'RMSProp'] |
| verbose | False | whether to print verbose output |
| seed | 0 | random seed for reproducibility |
| deterministic | True | whether to enable deterministic mode |
| single_cls | False | train multi-class data as single-class |
| rect | False | rectangular training with each batch collated for minimum padding |
| cos_lr | False | use cosine learning rate scheduler |
| close_mosaic | 0 | (int) disable mosaic augmentation for final epochs |
| resume | False | resume training from last checkpoint |
| amp | True | Automatic Mixed Precision (AMP) training, choices=[True, False] |
| lr0 | 0.01 | initial learning rate (e.g. SGD=1E-2, Adam=1E-3) |
| lrf | 0.01 | final learning rate (lr0 * lrf) |
| momentum | 0.937 | SGD momentum/Adam beta1 |
| weight_decay | 0.0005 | optimizer weight decay 5e-4 |
| warmup_epochs | 3.0 | warmup epochs (fractions ok) |
| warmup_momentum | 0.8 | warmup initial momentum |
| warmup_bias_lr | 0.1 | warmup initial bias lr |
| box | 7.5 | box loss gain |
| cls | 0.5 | cls loss gain (scale with pixels) |
| dfl | 1.5 | dfl loss gain |
| pose | 12.0 | pose loss gain (pose-only) |
| kobj | 2.0 | keypoint obj loss gain (pose-only) |
| label_smoothing | 0.0 | label smoothing (fraction) |
| nbs | 64 | nominal batch size |
| overlap_mask | True | masks should overlap during training (segment train only) |
| mask_ratio | 4 | mask downsample ratio (segment train only) |
| dropout | 0.0 | use dropout regularization (classify train only) |
| val | True | validate/test during training |
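
As a second illustrative sketch, several of the run-management settings from the table (patience, save_period, cos_lr, project, name) can be combined with AutoBatch as shown below; the project and run names are hypothetical and all values are examples only.

from ultralytics import YOLO

model = YOLO('yolov8n.pt')

# Illustrative combination of settings from the table above.
model.train(
    data='coco128.yaml',
    epochs=100,
    patience=20,            # stop early after 20 epochs without improvement
    batch=-1,               # let AutoBatch pick the batch size (single GPU)
    device=0,               # train on GPU 0; use device='cpu' if no GPU
    cos_lr=True,            # cosine learning-rate schedule
    save_period=10,         # keep a checkpoint every 10 epochs
    project='yolov8-experiments',  # hypothetical output directory
    name='coco128-cos',            # hypothetical run name
)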


Reposted from blog.csdn.net/qq_43456016/article/details/130448115