YOLOv8 supports object detection, instance segmentation, pose estimation, and tracking. Below is a walkthrough of object detection, from annotation to training.
First, download the code from the official repository (https://github.com/ultralytics/ultralytics).
Then prepare the dataset: create a folder, e.g. dataset (name it yourself), with images and labels subfolders.
Create train and val folders under both images and labels,
put the images used for training into images/train and the images used for validation into images/val.
At this point, labels/train and labels/val are both empty, because the annotations do not exist yet.
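The layout described above can be created with a short script (the folder name dataset is just an example):

```python
import os

# Create the expected YOLO dataset layout:
# dataset/images/{train,val} and dataset/labels/{train,val}
for sub in ("images", "labels"):
    for split in ("train", "val"):
        os.makedirs(os.path.join("dataset", sub, split), exist_ok=True)
```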
Annotation
You can use labelme or other tools for annotation; here we use makesense.ai.
Note that it only supports bounding boxes and image classification, not segmentation.
Click "Get Started", load all the images from the train or val folder, select "Object Detection", and then click "Start project" to begin annotating.
After drawing a box, click "Select label" on the right; you will be told the label list is empty. Click there and an "Edit labels" dialog appears.
Once you have defined and saved your labels, you can pick one directly for each subsequent box.
After annotating all the images in the train or val folder, select "Export Annotations" and choose the YOLO format. You will get a zip file.
Unzip the archives into labels/train and labels/val respectively, and the annotation is complete.
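Each exported .txt file contains one line per box in YOLO format: the class index followed by the box center, width, and height, all normalized to [0, 1] by the image size. A minimal sketch of the conversion from pixel coordinates (the function name here is my own):

```python
def to_yolo(cls, x1, y1, x2, y2, img_w, img_h):
    """Convert a pixel-space box (x1, y1, x2, y2) to a YOLO label line."""
    xc = (x1 + x2) / 2 / img_w   # normalized box center x
    yc = (y1 + y2) / 2 / img_h   # normalized box center y
    w = (x2 - x1) / img_w        # normalized box width
    h = (y2 - y1) / img_h        # normalized box height
    return f"{cls} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

# A 320x320 box centered in a 640x640 image:
print(to_yolo(0, 160, 160, 480, 480, 640, 640))
# → 0 0.500000 0.500000 0.500000 0.500000
```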
Training
Place the dataset under ultralytics/datasets.
Create your own custom.yaml file, also under ultralytics/datasets,
and edit the dataset path and the class labels in it.
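A minimal custom.yaml might look like the following; the path and the class names are placeholders, and the class indices must match the label order used during annotation:

```yaml
# Dataset root, relative to this file; image folders are relative to the root
path: ../datasets/dataset
train: images/train
val: images/val

# Class names, indexed from 0 (placeholders - use your own labels)
names:
  0: person
  1: car
```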
Write the training code:

from ultralytics import YOLO

if __name__ == "__main__":
    # Load a pretrained nano model and fine-tune it on the custom dataset
    model = YOLO('yolov8n.pt')
    model.train(
        data="ultralytics/datasets/custom.yaml",
        epochs=50,       # number of training epochs
        imgsz=640,       # input image size
        batch=2,         # batch size
        save_period=10,  # save a checkpoint every 10 epochs
    )
Testing the results
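To check the trained model, load the best checkpoint and run prediction. A sketch, assuming the default run folder; the weight and image paths below are placeholders for your own:

```python
from ultralytics import YOLO

# Load the best weights produced by training (path depends on your run folder)
model = YOLO("runs/detect/train/weights/best.pt")

# Run inference on a validation image and save the annotated result
results = model.predict("dataset/images/val/example.jpg", save=True)
for r in results:
    print(r.boxes)  # detected boxes: class, confidence, coordinates
```

The annotated images are written under a new runs/detect folder, so you can inspect the detections visually.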