Quick start: training YOLOv3 on a dataset you made yourself

For object detection and classification, YOLO can solve many problems quickly and well. This post summarizes how to get started with YOLOv3 and train it directly on your own dataset.

I provide a source code package that I have debugged myself; it includes both the dataset and the source code. Download it first and follow along, since the explanation below is based on this package. Download link: add link description, extraction code: 6vxv
After downloading and unzipping the package, have a look at its contents first.
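Based only on the files and folders mentioned in this post (the top-level dataset folder name is my assumption), the unpacked layout looks roughly like this:

```
source package/
├── VOCdevkit/            # dataset folder holding JPEGImages and Annotations
├── model_data/           # cls_classes.txt and yolo_anchors.txt
├── voc_annotation.py     # generates the training .txt lists
├── train.py              # training script
├── yolo.py               # inference configuration (weights path, class list)
└── predict.py            # image / video prediction script
```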
Now let's walk through how to use the source code package to train your own dataset.

1.1 Where the dataset images and labels are stored is sketched below:
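In this family of YOLOv3 code the images and labels usually sit in a VOC-style tree; the VOCdevkit/VOC2007 names below are my assumption and should be checked against the downloaded package:

```
VOCdevkit/
└── VOC2007/
    ├── JPEGImages/       # the dataset images (.jpg)
    └── Annotations/      # one Pascal VOC .xml label file per image
```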
1.2 The JPEGImages folder contains the sample images (.jpg files).
1.3 The Annotations folder contains the corresponding label files, one .xml per image.
1.3.1 Each .xml file is a Pascal VOC annotation: it records the image filename, the image size, and one <object> entry (class name plus bounding-box coordinates) for every labeled target.
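As a quick way to inspect one label, the sketch below parses a VOC-style .xml with Python's standard library; the file path is a hypothetical example:

```python
import xml.etree.ElementTree as ET

# Hypothetical example path; point this at any file in Annotations/
tree = ET.parse("VOCdevkit/VOC2007/Annotations/000001.xml")
root = tree.getroot()

print("image:", root.findtext("filename"))
for obj in root.iter("object"):
    name = obj.findtext("name")              # class name written during labeling
    box = obj.find("bndbox")
    xmin, ymin = box.findtext("xmin"), box.findtext("ymin")
    xmax, ymax = box.findtext("xmax"), box.findtext("ymax")
    print(name, xmin, ymin, xmax, ymax)      # one line per labeled target
```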
When training on your own dataset, you only need to copy your images into the JPEGImages folder and your label files into the Annotations folder. There is no need to rename anything yourself; just reuse the folder structure I provide.
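Before training it is worth checking that every image you copied in has a matching label. The snippet below is a hypothetical helper for that, assuming the VOCdevkit/VOC2007 paths sketched above:

```python
from pathlib import Path

# Assumed locations; adjust if your package uses different folder names
images = {p.stem for p in Path("VOCdevkit/VOC2007/JPEGImages").glob("*.jpg")}
labels = {p.stem for p in Path("VOCdevkit/VOC2007/Annotations").glob("*.xml")}

print("images without a label:", sorted(images - labels))
print("labels without an image:", sorted(labels - images))
```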

2 Making the dataset labels:
For a detailed guide to making VOC-format and YOLO-format datasets, see my other blog post: add link description
For a detailed guide to making a COCO-format dataset, see my other blog post: add link description

3.1 Write the class names you used when labeling into the cls_classes.txt file in the model_data folder, one class per line:
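For example, since the dataset in this post detects faces, the file might contain a single line (the exact name must match what you used when labeling; this is only an illustration):

```
face
```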
3.2 The yolo_anchors.txt file in the model_data folder stores the anchor box sizes. You do not need to modify it; keeping the original defaults is enough.
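For reference, the default yolo_anchors.txt in YOLOv3 projects normally holds the nine standard anchors (widths and heights in pixels), something like:

```
10,13,  16,30,  33,23,  30,61,  62,45,  59,119,  116,90,  156,198,  373,326
```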
3.3 Modify classes_path in the voc_annotation.py file so that it points to your class list:
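The change is typically a single line near the top of the script; the exact variable layout may differ slightly in your copy:

```python
classes_path = 'model_data/cls_classes.txt'   # must point at the class list from step 3.1
```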
3.4 Running voc_annotation.py generates six .txt files that are used for training.
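In this style of repo the six files are typically the train/val/test split lists plus annotation lists whose lines pair an image path with its boxes as x_min,y_min,x_max,y_max,class_id. The path and numbers below are a made-up example of that format:

```
VOCdevkit/VOC2007/JPEGImages/000001.jpg 48,240,195,371,0 8,12,352,498,0
```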

3.5 Modify classes_path in the training script train.py in the same way:
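The relevant settings sit near the top of train.py; the names and filenames below are the usual ones in this kind of repo and are assumptions to verify against your copy:

```python
classes_path = 'model_data/cls_classes.txt'   # same class list as before
anchors_path = 'model_data/yolo_anchors.txt'  # keep the default anchors
model_path   = 'model_data/yolo_weights.pth'  # pretrained weights to start from, if provided
```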
3.6 Run train.py directly to start training:
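Training is started from the command line; where the checkpoint weights end up depends on the repo, commonly a logs/ folder:

```
python train.py
```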

4.1 To test the model after training, point yolo.py at the trained weights file and update its classes_path:
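In yolo.py this usually means editing two entries of its configuration dictionary; the dictionary name and the weight filename below are assumptions, the latter standing in for whatever train.py saved for you:

```python
# Fragment of the configuration dictionary in yolo.py (names assumed, verify in your copy)
_defaults = {
    "model_path"   : 'logs/your_trained_weights.pth',   # weights produced by train.py
    "classes_path" : 'model_data/cls_classes.txt',      # same class list used for training
}
```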
4.2 To check the detection performance of the trained model, run predict.py directly:
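In the default image mode, predict.py typically prompts for an image path once the model has loaded; the filename here is just an example:

```
python predict.py
Input image filename: img/test.jpg
```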
4.3.1 The output after running shows the detection results.
4.3.2 In the result image, each detected target is drawn with a bounding box, its class label, and a confidence score.
4.4.1 To run detection on video instead of single images, a small change in predict.py is needed, as sketched below:
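In this kind of predict.py the switch is usually a mode variable plus a video source; the names below are the common ones and are assumptions to verify against your copy:

```python
mode       = "video"   # switch predict.py from single-image prediction to video
video_path = 0         # 0 = local camera; or a file path such as "video/test.mp4"
```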
4.4.2 When run on a video stream, the model detects faces in real time and efficiently.
That is the quick way to train YOLOv3 on a dataset you made yourself. To use it, just follow the steps above and modify a few file parameters, and you can train on your own data. I hope this is helpful to those of you learning YOLOv3. If you want to get started with YOLOv5 quickly, please see my other blog post for details. Thank you for your support!
