Training YOLOv8 on Custom Data on Windows

[YOLOv8](https://github.com/ultralytics/ultralytics) is the latest version of the YOLO object detection and image segmentation model developed by Ultralytics. Compared with previous versions, YOLOv8 can identify and locate objects in images more quickly and efficiently, and classify them more accurately.

1. Environment preparation

Training on a GPU is strongly recommended! Note that only NVIDIA graphics cards are supported; owners of other cards such as AMD (ATI) can skip the GPU setup and train on the CPU.

My environment is as follows:

  • Windows 11
  • torch 1.7.1+cu110
  • torchvision 0.8.2+cu110
  • python 3.7.10

The remaining dependencies can be installed from the requirements.txt file.

First, install the ultralytics package; the core YOLOv8 code is currently bundled in this dependency:

pip install ultralytics

2. Run the tests

Run D:\<your virtual environment>\Lib\site-packages\ultralytics\yolo\v8\detect\predict.py

Check the output to see whether inference ran on the CPU or the GPU.


If it ran on the CPU, please follow my previous article, "[Windows 11] CUDA and cuDNN Detailed Installation Tutorial", to set up CUDA first.


After CUDA is installed, install the PyTorch build that matches your CUDA version (choose a wheel with the `+cu` suffix).

Open the PyTorch website (pytorch.org) and find the install command that matches your setup.

My installation command is as follows:

pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html

3. Data preparation

1. Build your own dataset

To make this process less painful and time-consuming, we can use Roboflow as a labeling tool (accessing it from mainland China may require a VPN).

Log in to the Roboflow website; start by creating an account.

Since I have already finished labeling and exporting my data, here are two Roboflow tutorials for reference:

When exporting, choose the YOLOv8 dataset format, as follows:

Open the "data.yaml" file of your dataset in PyCharm and modify the paths in it. Because relative paths caused many errors for me, I changed them directly to absolute paths. Be especially careful that no Chinese characters appear in the path (don't ask how I know).
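For reference, a data.yaml in YOLOv8 format typically looks like the following; the paths and class names below are placeholders, so substitute your own absolute paths and labels:

```yaml
# Example data.yaml -- paths and class names are placeholders
train: D:/datasets/flyerdata/train/images
val: D:/datasets/flyerdata/valid/images

nc: 2
names: ['flyer', 'background']
```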

2. Start training

After saving the changes, enter the following in the terminal of your YOLO environment:

yolo task=detect mode=train model=yolov8n.pt data=flyerdata/data.yaml epochs=100 imgsz=640 workers=4 batch=4

Change the path after `data=` to your own dataset's configuration file, then press Enter to start training. When training finishes, the results are saved to the path printed on the last line of output.

The parameters above are explained as follows:

  • task: the task type; optional values ['detect', 'segment', 'classify', 'init']
  • mode: whether to train, validate, or predict; optional values ['train', 'val', 'predict']
  • model: the YOLOv8 model configuration file, e.g. yolov8s.yaml, yolov8m.yaml, yolov8l.yaml, yolov8x.yaml (a pretrained weights file such as yolov8n.pt also works)
  • data: the dataset configuration file generated above
  • epochs: how many times the whole dataset is iterated over during training; reduce it if your graphics card is struggling
  • batch: how many images are processed before the weights are updated (the mini-batch of gradient descent); reduce it if your graphics card is weak
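To see how these parameters fit together, they can be assembled into the CLI command string with a small helper. This is a hypothetical illustration of the `key=value` command shape, not part of the ultralytics package:

```python
def yolo_cmd(task="detect", mode="train", **kwargs):
    """Compose a YOLOv8 CLI command string from the parameters above.
    (Hypothetical helper for illustration; the key=value form mirrors
    the yolo command line, it does not run anything itself.)"""
    parts = ["yolo", f"task={task}", f"mode={mode}"]
    parts += [f"{key}={value}" for key, value in kwargs.items()]
    return " ".join(parts)

print(yolo_cmd(model="yolov8n.pt", data="flyerdata/data.yaml",
               epochs=100, imgsz=640, workers=4, batch=4))
# yolo task=detect mode=train model=yolov8n.pt data=flyerdata/data.yaml epochs=100 imgsz=640 workers=4 batch=4
```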
The training process is as follows:

3. Validation dataset

After training completes, you will have your own model: the best weights (best.pt) and the final-epoch weights (last.pt).

Modify the data path in the following validation command according to your own dataset:

yolo task=detect mode=val model=runs/detect/train5/weights/best.pt data=flyerdata/data.yaml

4. Test on new data

Next, modify the following command according to the location of your trained model. "source" is the folder of new images to predict on; since we placed the folder in the repository root, we can simply set "source=images" and start predicting.

yolo task=detect mode=predict model=runs/detect/train33/weights/best.pt source=images

Images are saved in the .../runs/detect/predict folder
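Ultralytics numbers its output folders (predict, predict2, predict3, ...), so after several runs it can help to locate the newest one programmatically. A small standard-library sketch; the helper name and the folder-naming assumption are mine, not part of ultralytics:

```python
from pathlib import Path

def latest_run(runs_dir, prefix="predict"):
    """Return the highest-numbered '<prefix>*' folder under runs_dir.
    Assumes Ultralytics-style naming: predict, predict2, predict3, ..."""
    runs = [p for p in Path(runs_dir).glob(prefix + "*") if p.is_dir()]
    # 'predict' itself has no numeric suffix, so treat it as run number 0
    runs.sort(key=lambda p: int(p.name[len(prefix):] or 0))
    return runs[-1] if runs else None
```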

Because the data is sensitive, I will not post result images for now.


Origin blog.csdn.net/Jin1Yang/article/details/129243590