YOLOv8 Usage

Detection

Usage reference: https://docs.ultralytics.com/tasks/detect/

Dataset preparation

Put the images under /home/data/images

Put the label txt files under /home/data/labels

Each image has a label file (txt) with one line per object in the format <object-class> <cx> <cy> <width> <height>; the center coordinates and the width and height are normalized to [0, 1].
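To make the normalized format concrete, here is a small sketch (the helper name `parse_det_label` is hypothetical, not part of Ultralytics) that parses one label line and converts it back to pixel corner coordinates:

```python
def parse_det_label(line, img_w, img_h):
    """Parse one '<class> <cx> <cy> <w> <h>' label line (normalized values)
    and return (class_id, x_min, y_min, x_max, y_max) in pixels."""
    cls, cx, cy, w, h = line.split()
    cx, cy = float(cx) * img_w, float(cy) * img_h
    w, h = float(w) * img_w, float(h) * img_h
    return int(cls), cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2
```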

Put train.txt and val.txt under /project/train/src_repo/yolov8/dataset/; each file lists one image path per line.
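A minimal sketch for generating the two files, assuming images live under /home/data/images and are .jpg (the function name `split_dataset` and the 80/20 ratio are illustrative choices, not part of Ultralytics):

```python
import random
from pathlib import Path

def split_dataset(image_dir, out_dir, val_ratio=0.2, seed=0):
    """Shuffle images and write train.txt / val.txt, one path per line.
    Returns (n_train, n_val)."""
    images = sorted(str(p) for p in Path(image_dir).glob("*.jpg"))
    random.Random(seed).shuffle(images)
    n_val = int(len(images) * val_ratio)
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    (out / "val.txt").write_text("\n".join(images[:n_val]) + "\n")
    (out / "train.txt").write_text("\n".join(images[n_val:]) + "\n")
    return len(images) - n_val, n_val
```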

Train

Copy coco128.yaml from ultralytics/datasets, rename it my.yaml, and fill in the train and val paths (generated when the dataset is split) and the class names:

train:  /project/train/src_repo/yolov8/dataset/train.txt
val: /project/train/src_repo/yolov8/dataset/val.txt  
names:
  0: head
  1: person
  2: hat

Copy yolov8.yaml from ultralytics/models/v8 (pick the model size you need) and change the number of classes (nc) to match your dataset.

yolo task=detect mode=train data=ultralytics/datasets/my.yaml model=ultralytics/models/v8/yolov8s.yaml pretrained=yolov8s.pt epochs=2 imgsz=640 batch=8 save_period=2

Reference: https://docs.ultralytics.com/modes/train/#arguments

Export

yolo export model=yolov8s.pt format=onnx opset=12 
Or:
from ultralytics import YOLO
model = YOLO('yolov8s.pt')
model.export(format='onnx', opset=12)
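The exported ONNX model returns raw predictions, so your own postprocessing must convert box centers to corner format. A minimal sketch, assuming boxes come as (cx, cy, w, h) tuples (the helper name `xywh2xyxy` is illustrative):

```python
def xywh2xyxy(box):
    """Convert a (cx, cy, w, h) box to (x1, y1, x2, y2) corner format,
    as needed when decoding raw ONNX output yourself."""
    cx, cy, w, h = box
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)
```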

Predict

yolo predict model=yolov8s.pt source='bus.jpg'  
# source can also be a URL (https://ultralytics.com/images/bus.jpg), and model can also be an ONNX file. Results are saved under runs/detect/predict
Or:
results = model('bus.jpg')  # runs preprocessing, inference, and postprocessing
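The postprocessing mentioned above includes non-maximum suppression, which the Ultralytics API performs for you; if you decode raw ONNX outputs yourself, a minimal pure-Python sketch of greedy NMS looks like this (function names are illustrative):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh=0.45):
    """Greedy NMS: keep the highest-scoring box, drop overlapping ones."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < thresh]
    return keep
```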

Segmentation

Usage reference: https://docs.ultralytics.com/tasks/segment/

Dataset preparation (taking the COCO dataset as an example)

Put the images under /home/data/images

Put the segmentation label txt files under /home/data/labels

Put train.txt and val.txt under /project/train/src_repo/yolov8/dataset/

Copy coco128-seg.yaml from ultralytics/datasets to the current directory, rename it my-seg.yaml, and change the train/val paths and the class names.

Copy yolov8-seg.yaml from ultralytics/models/v8 to the current directory.

Each image has a label file (txt) with one line per object in the format <class-index> <x1> <y1> <x2> <y2> ... <xn> <yn>; the polygon coordinates are normalized.
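A small sketch of writing one such label line from a polygon in pixel coordinates (the helper name `seg_label_line` is illustrative, not part of Ultralytics):

```python
def seg_label_line(class_index, polygon, img_w, img_h):
    """Format one segmentation label line: class index followed by
    normalized polygon vertices <x1> <y1> ... <xn> <yn>."""
    coords = []
    for x, y in polygon:
        coords += [f"{x / img_w:.6f}", f"{y / img_h:.6f}"]
    return " ".join([str(class_index)] + coords)
```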

train.txt and val.txt have the same form as in the detection section: one image path per line.

Train

Parameter reference: https://docs.ultralytics.com/usage/cfg/

yolo task=segment mode=train data=my-seg.yaml model=yolov8s-seg.yaml pretrained=yolov8s-seg.pt epochs=2 save_period=2 imgsz=640

The trained model is saved under runs/segment/train/ by default, along with various evaluation files. Validation runs automatically after training.

Predict

Parameter reference: https://docs.ultralytics.com/tasks/segment/

yolo task=segment mode=predict model=yolov8s-seg.pt source='bus.jpg'

Prediction results are saved under runs/segment/predict by default.

Export

yolo export model=path/to/best.pt format=onnx 

Pose

Usage reference: https://docs.ultralytics.com/tasks/pose/

Dataset preparation (taking the COCO dataset as an example)

Put the images under /home/data/images

Put the label txt files under /home/data/labels

Put train.txt and val.txt under /project/train/src_repo/yolov8/dataset/

Copy coco8-pose.yaml from ultralytics/datasets to the current directory, rename it my-pose.yaml, and edit the paths, keypoint shape, and class names:

train:  /project/train/src_repo/yolov8/dataset/train.txt
val: /project/train/src_repo/yolov8/dataset/val.txt 

# Keypoints
kpt_shape: [17, 2]  # number of keypoints, number of dims (2 for x,y or 3 for x,y,visible)
flip_idx: [0, 2, 1, 4, 3, 6, 5, 8, 7, 10, 9, 12, 11, 14, 13, 16, 15]

# Classes
names:
  0: person
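The flip_idx entry above tells the trainer how to swap left/right keypoints when an image is horizontally flipped during augmentation. A minimal sketch of what that mapping does, assuming normalized (x, y) keypoints (the helper name `hflip_keypoints` is illustrative):

```python
def hflip_keypoints(kpts, flip_idx):
    """Horizontally flip normalized (x, y) keypoints: mirror x around the
    image center, then reorder so left/right pairs swap per flip_idx."""
    flipped = [(1.0 - x, y) for x, y in kpts]
    return [flipped[i] for i in flip_idx]
```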

Copy yolov8-pose.yaml from ultralytics/models/v8 to the current directory.

Each image has a label file (txt) with normalized coordinates, one line per object:

2D (kpt_shape dims = 2): <class-index> <x> <y> <width> <height> <px1> <py1> <px2> <py2> ... <pxn> <pyn>

3D (kpt_shape dims = 3): <class-index> <x> <y> <width> <height> <px1> <py1> <p1-visibility> <px2> <py2> <p2-visibility> ... <pxn> <pyn> <pn-visibility>
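A small sketch of writing one pose label line from pixel-space values (the helper name `pose_label_line` and its argument layout are illustrative choices, not part of Ultralytics):

```python
def pose_label_line(class_index, box, keypoints, img_w, img_h, dims=3):
    """Format one pose label line: <class> <cx> <cy> <w> <h> then
    <px> <py> [<visible>] per keypoint, all coordinates normalized.
    box is (cx, cy, w, h) in pixels; keypoints are (x, y, visible)."""
    cx, cy, w, h = box
    parts = [str(class_index),
             f"{cx / img_w:.6f}", f"{cy / img_h:.6f}",
             f"{w / img_w:.6f}", f"{h / img_h:.6f}"]
    for x, y, v in keypoints:
        parts += [f"{x / img_w:.6f}", f"{y / img_h:.6f}"]
        if dims == 3:
            parts.append(str(v))
    return " ".join(parts)
```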

train.txt and val.txt have the same form as in the detection section: one image path per line.

Train

yolo task=pose mode=train data=my-pose.yaml model=yolov8s-pose.yaml pretrained=yolov8s-pose.pt epochs=2 imgsz=640

The trained model is saved under runs/pose/train/ by default, and validation runs automatically after training.

Predict

yolo task=pose mode=predict model=yolov8s-pose.pt source='bus.jpg'

Prediction results are saved under runs/pose/predict by default.

Origin: blog.csdn.net/qq_39066502/article/details/130942633