Training YOLOv5 on your own txt-label dataset

For the XML-format version of this dataset, see: YOLOv5 trains its own data set (xukobe97's blog, CSDN)

Version: YOLOv5-5.0; weight file: yolov5s.pt

First delete the sample images that ship with YOLOv5.

1. Create three directories under the data folder: images, ImageSets, and labels.

images holds the jpg pictures (add these yourself);

ImageSets will hold the four txt files generated below (leave it empty for now);

labels holds the txt label files (add these yourself).
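The layout above can be created with a short script. The sample label written below is a hypothetical illustration of the YOLO txt format (the file name phone_001 and the numbers are placeholders, not from the original post):

```python
import os

# Create the three directories described above (data/ImageSets stays
# empty until makeTxt_for_txt.py fills it).
for d in ('data/images', 'data/ImageSets', 'data/labels'):
    os.makedirs(d, exist_ok=True)

# A YOLO txt label holds one object per line:
# <class_index> <x_center> <y_center> <width> <height>,
# with all four coordinates normalized to the 0-1 range.
# 'phone_001' is a hypothetical file name used for illustration.
with open('data/labels/phone_001.txt', 'w') as f:
    f.write('0 0.512 0.430 0.210 0.365\n')
```

Each image data/images/NAME.jpg must have a matching label file data/labels/NAME.txt.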

2. Place makeTxt_for_txt.py and voc_label_for_txt.py in the YOLOv5 root directory

(1) The content of makeTxt_for_txt.py is as follows:

import os
import random

# Fraction of the data used for train+val (the rest goes to test),
# and the fraction of train+val used for train (the rest goes to val).
trainval_percent = 0.99
train_percent = 0.99
labelfilepath = 'data/labels'  # here the labels are txt files, not xml
txtsavepath = 'data/ImageSets'
total_files = os.listdir(labelfilepath)

num = len(total_files)
indices = range(num)  # avoid shadowing the built-in name 'list'
tv = int(num * trainval_percent)
tr = int(tv * train_percent)
trainval = random.sample(indices, tv)
train = random.sample(trainval, tr)

os.makedirs(txtsavepath, exist_ok=True)
ftrainval = open('data/ImageSets/trainval.txt', 'w')
ftest = open('data/ImageSets/test.txt', 'w')
ftrain = open('data/ImageSets/train.txt', 'w')
fval = open('data/ImageSets/val.txt', 'w')

for i in indices:
    name = total_files[i][:-4] + '\n'  # strip the .txt extension
    if i in trainval:
        ftrainval.write(name)
        if i in train:
            ftrain.write(name)
        else:
            fval.write(name)
    else:
        ftest.write(name)

ftrainval.close()
ftrain.close()
fval.close()
ftest.close()

(2) The content of voc_label_for_txt.py is as follows:

# xml parsing package
import xml.etree.ElementTree as ET
import pickle
import os
# os.listdir() returns a list of the names of the files and folders in a directory
from os import listdir, getcwd
from os.path import join


sets = ['train', 'test', 'val']
classes= ['00000', '00001', '00010', '00011', '00100', '00101', '00110', '00111', '01000', '01001', '01010', '01011', '01100', '01101', '01110', '01111', '10000', '10001', '10010', '10011', '10100', '10101', '10110', '10111', '11000', '11001', '11010', '11011', '11100', '11101', '11110', '11111']

for image_set in sets:
    '''
    Iterate over every split of the dataset and do two things:
    1. Walk through all the image files and write each one's full path into
       the corresponding txt file, so the images are easy to locate.
    2. (For the xml workflow) parse each annotation and write its bounding
       boxes and class information into the label files, so the label info
       can later be found by reading those files directly.
    '''
    # Create the labels folder first if it does not exist
    if not os.path.exists('data/labels/'):
        os.makedirs('data/labels/')
    # Read the train/test/val files under ImageSets,
    # which contain the corresponding file names
    image_ids = open('data/ImageSets/%s.txt' % (image_set)).read().strip().split()
    # Open the matching data/<set>.txt file for writing
    list_file = open('data/%s.txt' % (image_set), 'w')
    # Write each file id as a full path, one per line
    for image_id in image_ids:
        list_file.write('data/images/%s.jpg\n' % (image_id))
        # For xml labels you would call convert_annotation(image_id) here
        # try:
        #     convert_annotation(image_id)
        # except:
        #     continue
    # Close the file
    list_file.close()

First run makeTxt_for_txt.py to create the four txt files under ImageSets.

Then run voc_label_for_txt.py to turn the bare file names in ImageSets/train.txt, val.txt, and test.txt into full image paths, written to data/train.txt, data/val.txt, and data/test.txt.

3. Place the weight file yolov5s.pt in the YOLOv5 root directory

4. Copy the coco128.yaml file under data, paste the copy into the data directory, and rename it to your own name (such as phone.yaml)

 

Change its contents as shown in the figure.
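A minimal sketch of what the edited phone.yaml might contain; the paths come from the scripts above, while the class names are the 32 binary-string classes from voc_label_for_txt.py (coco128.yaml in v5.0 uses these same keys):

```yaml
# phone.yaml -- dataset config (values shown for illustration)
train: data/train.txt   # txt listing training image paths
val: data/val.txt       # txt listing validation image paths

nc: 32                  # number of classes

# class names, in class-index order (truncated here for brevity;
# list all 32 names from the classes list in voc_label_for_txt.py)
names: ['00000', '00001', '00010', '00011', '00100']
```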

 

5. In the model yaml under models (for example, I chose yolov5s.yaml), change nc to your own number of classes
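Only the first parameter at the top of yolov5s.yaml normally needs to change; the anchors and backbone below it stay as-is. A sketch (the two multiple values are the v5.0 yolov5s defaults):

```yaml
# models/yolov5s.yaml (top of file)
nc: 32                # was 80 for COCO; set to your own number of classes
depth_multiple: 0.33  # model depth multiple (unchanged)
width_multiple: 0.50  # layer channel multiple (unchanged)
```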

 

6. In train.py, change the three arguments shown in the figure to your corresponding files

 

For example, I use the weights yolov5s.pt, for cfg my yolov5s.yaml, and for data my own phone.yaml.
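The three edits amount to changing three argparse defaults. This is a sketch of what they look like after editing, not a verbatim copy of YOLOv5's train.py:

```python
import argparse

# Sketch of the three argparse defaults edited in YOLOv5's train.py
# (hypothetical reconstruction, not the upstream file verbatim).
parser = argparse.ArgumentParser()
parser.add_argument('--weights', type=str, default='yolov5s.pt',
                    help='initial weights path')
parser.add_argument('--cfg', type=str, default='models/yolov5s.yaml',
                    help='model yaml')
parser.add_argument('--data', type=str, default='data/phone.yaml',
                    help='dataset yaml')
opt = parser.parse_args([])  # an empty argv picks up the defaults

print(opt.weights, opt.cfg, opt.data)
```

With the defaults set this way, plain `python train.py` trains on your dataset without extra command-line flags.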

Then you can start training.

When training completes you get last.pt and best.pt.

7. Open detect.py and change the first two arguments:

Change the weights to the best.pt generated by training;

Change the source to the image data you want to test.
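As with train.py, these are two argparse defaults. A sketch with hypothetical paths (the weights path depends on which training run produced your best.pt):

```python
import argparse

# Sketch of the two detect.py defaults changed for inference
# (hypothetical paths; not a verbatim copy of detect.py).
parser = argparse.ArgumentParser()
parser.add_argument('--weights', type=str,
                    default='runs/train/exp/weights/best.pt',
                    help='trained model weights')
parser.add_argument('--source', type=str, default='data/images',
                    help='image file or folder to run inference on')
opt = parser.parse_args([])  # an empty argv picks up the defaults
```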

Then run detect.py to get the inference results.

Then open runs\detect\exp11 to view the detection results.

 

Origin blog.csdn.net/weixin_52950958/article/details/125676839