Training a model with the darknet deep learning framework


Foreword

In the previous sections, we configured the deep learning environment and prepared the dataset. Now we can train the model.

1. Preparation for training

Before training on your own dataset, you need to modify some configuration files.

1. Create a new data/voc.names file

You can copy the original data/voc.names and modify it to suit your own situation; you can also rename it, e.g. data/voc-dp.names. The file contains all the category names, one per line, and the order of the names must match the class order used in the format conversion in the previous section.
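For example, with five hypothetical categories (these names are placeholders; use your own), data/voc-dp.names would simply be:

person
car
dog
cat
bottle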

2. Create a new cfg/voc.data file

You can copy the original content of cfg/voc.data and modify it according to your own situation. You can rename it, e.g. cfg/voc-dp.data. The parameters are explained as follows:
classes: The total number of categories; modify as appropriate.
train: The path of the training set, i.e. the path of the 2007_train.txt file generated during the format conversion in the previous section; modify as appropriate.
valid: The path of the validation set, i.e. the path of the 2007_val.txt file generated during the format conversion in the previous section; modify as appropriate.
names: The path of the .names file created in step 1. A relative path works, e.g. data/voc-dp.names.
backup: The directory where weight files are saved during training; generally just fill in backup.
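For reference, a minimal voc-dp.data might look like the following sketch (the train/valid paths are placeholders for wherever your 2007_train.txt and 2007_val.txt were generated):

classes = 5
train = /home/user/darknet/2007_train.txt
valid = /home/user/darknet/2007_val.txt
names = data/voc-dp.names
backup = backup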

3. Create a new cfg/yolov3.cfg file

You can copy the content of cfg/yolov3.cfg or cfg/yolov3-tiny.cfg and modify it according to your own situation; you can rename it, e.g. cfg/yolov3-tiny-dp.cfg. This .cfg file configures the network parameters. yolov3.cfg and yolov3-tiny.cfg are two different templates: yolov3-tiny.cfg has fewer network layers than yolov3.cfg, so the configuration is simpler and it uses fewer resources, and the trained weight file is suitable for use on the nano. We therefore use the yolov3-tiny.cfg template.
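One simple way to create it, assuming you run this from the darknet directory, is to copy the template:

cp cfg/yolov3-tiny.cfg cfg/yolov3-tiny-dp.cfg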
Generally, the parameters that need to be modified according to the situation are as follows:
During training, comment out the batch and subdivisions under "Testing" and uncomment the batch and subdivisions under "Training"; for the later testing stage, do the opposite.
When training, batch=64 and subdivisions=16 are recommended; if GPU memory is large, subdivisions can be set to 8, and if GPU memory is small it can be set to 32.

width and height can be set to 416, 608, etc., and must be multiples of 32. max_batches is the maximum number of training iterations and can be set to classes*2000; for example, with 5 categories in total, set it to 10000. After the initial training it can be adjusted as appropriate.

steps is set to 80% and 90% of max_batches, e.g. steps=8000,9000 when max_batches=10000, as in the sketch below.
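Putting this together, the top of the [net] section configured for training might look like the following sketch (assuming 5 categories, so max_batches=10000 and steps at 80%/90%; the omitted lines keep the template defaults):

[net]
# Testing (uncomment these two for testing)
# batch=1
# subdivisions=1
# Training
batch=64
subdivisions=16
width=416
height=416
channels=3
...
max_batches = 10000
policy=steps
steps=8000,9000
scales=.1,.1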

In addition, the [yolo] layers in the .cfg file, and the [convolutional] layer immediately before each of them, need to be modified. The yolov3.cfg template has three [yolo] layers, while the yolov3-tiny.cfg template has two.
In each [yolo] layer, modify classes to your number of categories. In the [convolutional] layer before each [yolo] layer, set filters = (classes + 5) * 3; for example, with classes=5, filters = (5+5)*3 = 30. These must be modified, otherwise darknet will report an error.
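For example, with classes=5 in the yolov3-tiny template, each [convolutional]+[yolo] pair would be edited roughly as in this sketch (mask and anchors are the template defaults and differ per layer; only filters and classes change):

[convolutional]
size=1
stride=1
pad=1
# filters = (classes + 5) * 3 = (5 + 5) * 3
filters=30
activation=linear

[yolo]
mask = 3,4,5
anchors = 10,14, 23,27, 37,58, 81,82, 135,169, 344,319
classes=5
num=6
jitter=.3
ignore_thresh = .7
truth_thresh = 1
random=1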

4. Anchor clustering

Finally, the anchors (anchor box sizes) can also be modified. We can cluster on our own dataset to make the anchor box sizes fit our data better. Run the following command to get the clustering result:

./darknet detector calc_anchors cfg/voc-dp.data -num_of_clusters 6 -width 512 -height 512

Here cfg/voc-dp.data is the file created in step 2 and can be changed as needed. The value of -num_of_clusters depends on the template: fill in 9 if you use yolov3.cfg, and 6 if you use the yolov3-tiny.cfg template.
The clustering results are printed to the terminal; replace the anchors parameter of each [yolo] layer in the .cfg file with these values.
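For reference, the output is a list of comma-separated width,height pairs, with the same shape as the tiny template's default line (your clustered values will differ):

anchors = 10,14, 23,27, 37,58, 81,82, 135,169, 344,319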

5. Download the pretrained weights file

Save it in the darknet directory:

wget https://pjreddie.com/media/files/darknet53.conv.74

2. Start training

After completing the above steps, you can open the terminal in the darknet directory and run the command to train:

./darknet detector train cfg/voc-dp.data cfg/yolov3-tiny-dp.cfg darknet53.conv.74
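If training is interrupted, a common darknet pattern is to resume from the most recent checkpoint saved in the backup directory, e.g.:

./darknet detector train cfg/voc-dp.data cfg/yolov3-tiny-dp.cfg backup/yolov3-tiny-dp_last.weights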


3. Test

Weight files (.weights) are generated while training runs; after all training iterations complete, the backup directory will contain the saved weight files.
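The exact file names depend on the darknet build, but for a cfg named yolov3-tiny-dp.cfg the backup directory typically ends up with checkpoints along these lines:

backup/yolov3-tiny-dp_1000.weights
backup/yolov3-tiny-dp_2000.weights
...
backup/yolov3-tiny-dp_last.weights
backup/yolov3-tiny-dp_final.weights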
You can use the following command to calculate the mAP of a weight file to check how well our model has trained:

./darknet detector map cfg/voc-dp.data cfg/yolov3-tiny-dp.cfg backup/yolov3-tiny-dp_last.weights
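You can also run detection on a single image to inspect the result visually (data/test.jpg here is a placeholder for one of your own images):

./darknet detector test cfg/voc-dp.data cfg/yolov3-tiny-dp.cfg backup/yolov3-tiny-dp_last.weights data/test.jpg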

Alternatively, the training effect can be evaluated during training via the mAP value updated in real time in the chart_yolov3-tiny-dp.png window, which appears when you run the training command above (depending on the darknet fork, you may need to append the -map flag to the training command for mAP to be plotted; the final chart_yolov3-tiny-dp.png image is written out once the whole training run completes). When the training effect meets our requirements, the weights file can be copied to the nano side for use.
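For example, one way to copy it over the network (the user name, host, and destination path are placeholders):

scp backup/yolov3-tiny-dp_final.weights user@nano:/home/user/darknet/backup/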

Summary

The next time we train, the steps are:
1. Prepare the dataset and convert it to YOLO format (note that you need to modify line 9 of voc_label.py to your own category names)
2. Create a new data/voc.names file that stores the category names
3. Create a new cfg/voc.data file that stores the number of categories and the paths of the training and test sets (these are generated during the YOLO-format conversion)
4. Create a new cfg/yolov3.cfg file; see the requirements above for details, and remember to modify classes in the [yolo] layers according to your situation (this must be modified)
5. Cluster the anchors
6. Start training


Source: blog.csdn.net/qq_51963216/article/details/124239087