Running Yolov5 in real time on a drone using only the CPU (implemented with OpenVINO) (Part 2)

In the previous post, we covered the environment configuration and basic operation from the first two sections of the Yolov5 tutorial. In this post, we present the experimental data of the next two sections on different processors and show how to train your own model.

3. Latency and performance on different processors

To measure the latency and detection performance of Yolov5 on different devices, we ran experiments on Intel i3, i5, and i7 processors under the same environment.

1. Intel® Core™ i3-8145UE CPU

After configuring the environment according to the process above, open a terminal and enter the following commands:

```bash
cd <path-to-Prometheus>/
./Scripts/start_yolov5openvino_server.sh
roslaunch prometheus_detection yolov5_intel_openvino.launch
```
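If you want to sanity-check latency numbers like the ones reported below, you can also time raw model inference yourself. Here is a minimal sketch, assuming the OpenVINO 2022+ Python API and an already-converted `yolov5s.xml`/`.bin` IR pair (the file name and timing parameters are assumptions, not part of the Prometheus scripts):

```python
# Time raw CPU inference of a converted Yolov5 IR model.
# Sketch only: assumes the OpenVINO 2022+ Python API and that
# yolov5s.xml / yolov5s.bin already exist (see Part 1 for conversion).
import time
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("yolov5s.xml")       # model path is an assumption
compiled = core.compile_model(model, "CPU")  # CPU only, as in this post

# A random 1x3x640x640 tensor stands in for a preprocessed camera frame.
frame = np.random.rand(1, 3, 640, 640).astype(np.float32)

for _ in range(5):            # warm-up runs, excluded from timing
    compiled([frame])

n = 50
start = time.perf_counter()
for _ in range(n):
    compiled([frame])
elapsed = time.perf_counter() - start
print(f"mean latency: {1000 * elapsed / n:.1f} ms ({n / elapsed:.1f} FPS)")
```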

You can see the following effect:

Yolov5 test on an 8th-gen i3

Latency of Yolov5 on an 8th-gen i3

2. Intel® Core™ i5-8265U CPU

The environment configuration and commands are the same as for the i3; let's look at the test results directly.

Yolov5 test on an 8th-gen i5

Latency of Yolov5 on an 8th-gen i5

3. Intel® Core™ i7-8665UE CPU

The environment configuration and commands are the same as for the i3; let's look at the test results directly.

Yolov5 test on an 8th-gen i7

Latency of Yolov5 on an 8th-gen i7

4. 11th Gen Intel® Core™ i5-1135G7 @ 2.40GHz × 8

The environment configuration and commands are the same as for the i3; let's look at the test results directly.

Yolov5 test on an 11th-gen i5

Latency of Yolov5 on an 11th-gen i5

Conclusion: From the experimental data above, the frame rate of Yolov5 increases as the processor tier goes from i3 to i5 to i7, increases with newer CPU generations, and increases as the base clock frequency rises from 1.1 GHz to 1.6 GHz to 2.4 GHz.

4. Train and deploy your own Yolov5 model

1. Data labeling

Download the dataset annotation tool from Spire Web or Baidu Netdisk (password: l9e7); the dataset management software SpireImageTools is also available on Gitee or GitHub.

Unzip the archive and open the annotation software SpireImageTools_x.x.x.exe.

First click Tools->Setting... and fill in a save path (all annotation files will be stored in this folder).

Convert the captured video to images (skip this step if your data is already images): click Input->Video and select the video to be annotated.

Then click `Tools->Video to Image`.

After clicking OK, wait for the conversion to complete; the results will be stored in the save path configured earlier:
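If you prefer scripting this step instead of using the GUI, the same video-to-image conversion can be done with OpenCV. A minimal sketch; the input path, output folder, and frame interval below are all placeholder assumptions:

```python
# Extract frames from a video into numbered images.
# Sketch only: replace the paths and interval with your own.
import os
import cv2

video_path = "input.mp4"   # assumption: your captured video
out_dir = "frames"         # assumption: any writable folder
step = 5                   # keep every 5th frame to reduce near-duplicates

os.makedirs(out_dir, exist_ok=True)
cap = cv2.VideoCapture(video_path)
idx = saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if idx % step == 0:
        cv2.imwrite(os.path.join(out_dir, f"{saved:06d}.jpg"), frame)
        saved += 1
    idx += 1
cap.release()
print(f"saved {saved} frames to {out_dir}")
```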

To open the images to be labeled, click Input->Image Dir, navigate to the folder containing them, press Ctrl+A to select all, and open all the images:

Click Tools->Annotate Image->Box Label to start labeling images.

Fill in the name of the target class in the label field, then drag the dialog box aside.

Start labeling in the main window: zoom in and out with the mouse wheel, hold the left button to pan the visible image area, and click the left button to draw a frame around the target. For Yolo training, click 2 points (opposite corners of the box):

If you make a mistake while labeling, cancel with the right mouse button. If you are not satisfied after labeling, click the green border (it will turn red, as shown in the figure below) and press `Delete` to remove it.

Continue labeling pedestrian categories:

After all annotations are complete, export them in Yolo format to prepare for training: when you are done annotating, press Ctrl+O.

Click OK and wait for the conversion.

Note that the following two folders, scaled_images and Yolo_labels, are what we need for Yolov5 training.
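For reference, in the Yolo format Yolo_labels contains one .txt file per image, with one line per box of the form `class cx cy w h` (all values normalized to [0, 1]). A minimal sketch of how two clicked corner points map to such a line (the function and variable names are illustrative, not part of SpireImageTools):

```python
# Convert a box given by two corner clicks into a Yolo label line
# (class cx cy w h, all normalized to [0, 1]). Names are illustrative.
def yolo_line(cls_id, x1, y1, x2, y2, img_w, img_h):
    cx = (x1 + x2) / 2 / img_w   # normalized box center x
    cy = (y1 + y2) / 2 / img_h   # normalized box center y
    w = abs(x2 - x1) / img_w     # normalized box width
    h = abs(y2 - y1) / img_h     # normalized box height
    return f"{cls_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

# e.g. a 'car' (class 0) from (100, 150) to (300, 350) in a 640x480 image
print(yolo_line(0, 100, 150, 300, 350, 640, 480))
# -> "0 0.312500 0.520833 0.312500 0.416667"
```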

2. Start training Yolov5

After preparing the scaled_images and Yolo_labels folders, we can train Yolov5. First, create a file named car_person.yaml and put it under the <path-to-Prometheus>/Modules/object_detection_yolov5openvino/data/ folder. The content of car_person.yaml is as follows:

```yaml
# train and val data as 1) directory: path/images/, 2) file: path/images.txt, or 3) list: [path1/images/, path2/images/]
train: data/car_person/images/train/
val: data/car_person/images/train/

# number of classes
nc: 2

# class names
names: ['car', 'person']
```

Note 1: car_person is a custom name; the dataset we labeled this time has only these 2 categories.

Note 2: The order of the categories in names: ['car', 'person'] must be consistent with the order of the categories in Yolo_categories.names (here, car followed by person).

Copy the training images and labels to the corresponding locations.

First, create a new folder car_person under <path-to-Prometheus>/Modules/object_detection_yolov5openvino/data/. Then, inside car_person, create 2 more folders: images and labels. Finally, copy the prepared scaled_images folder into images and rename it to train; copy the prepared Yolo_labels folder into labels and rename it to train.
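If you'd rather script this copying, here is a minimal sketch using Python's shutil (the source paths are assumptions; adjust them to wherever your scaled_images and Yolo_labels folders live):

```python
# Build the dataset layout expected by car_person.yaml.
# Sketch only: source paths are assumptions; adjust to your setup.
import shutil
from pathlib import Path

data_dir = Path("<path-to-Prometheus>/Modules/object_detection_yolov5openvino/data")
dst = data_dir / "car_person"

# scaled_images -> images/train, Yolo_labels -> labels/train
shutil.copytree("scaled_images", dst / "images" / "train")
shutil.copytree("Yolo_labels", dst / "labels" / "train")
print("dataset ready at", dst)
```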

Combined with the content of car_person.yaml above, the meaning of this directory structure should be clear.

Start training

```bash
cd <path-to-Prometheus>/Modules/object_detection_yolov5openvino/
python3 train.py --img 640 --batch 16 --epochs 5 --data data/car_person.yaml --weights weights/yolov5s.pt
```

If the above output is displayed, training succeeded! You can increase the number of training epochs (`--epochs 5`) to improve the results.

Deploy the trained model

The newly trained model is saved at <path-to-Prometheus>/Modules/object_detection_yolov5openvino/runs/exp?/weights/best.pt, where ? depends on your own runs (the most recently trained model has the largest number). Rename best.pt to yolov5s.pt, copy it to <path-to-Prometheus>/Modules/object_detection_yolov5openvino/weights/, and then perform steps 3-5 of Part 1 to deploy it with OpenVINO.
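Picking the newest exp? folder by hand is error-prone, so here is a minimal sketch that automates the rename-and-copy (assumes it is run from object_detection_yolov5openvino/ and that runs/ follows the layout described above):

```python
# Pick best.pt from the most recent runs/exp* folder and install it
# as weights/yolov5s.pt. Sketch only: run from the module directory.
import re
import shutil
from pathlib import Path

runs = Path("runs")

def exp_num(p):
    # exp, exp1, exp2, ... -> take the largest numeric suffix
    m = re.fullmatch(r"exp(\d*)", p.name)
    return int(m.group(1) or 0) if m else -1

latest = max(runs.glob("exp*"), key=exp_num)
shutil.copy(latest / "weights" / "best.pt", Path("weights") / "yolov5s.pt")
print(f"deployed {latest / 'weights' / 'best.pt'} -> weights/yolov5s.pt")
```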
