[Object detection series] Training YOLOv5 on your own dataset (PyTorch version)

0. Some steps were already covered in the earlier yolov3 training post; where that is the case it is just noted in the text below.

1. Download the code:

https://github.com/ultralytics/yolov5

2. Prepare the data set:

        The data format is exactly the same as yolov3's. If you have already prepared a dataset for yolov3, you can use it directly. For details see https://blog.csdn.net/gbz3300255/article/details/106276897, Section 3 (steps to train your own dataset).

The text files to be prepared are: train.txt, test.txt, val.txt, and the per-image label text files.

train.txt records the file names of the images in the training set, similar to the list below; the dataset images themselves are stored in the /data/images/ directory.

BloodImage_00091.jpg
BloodImage_00156.jpg
BloodImage_00389.jpg
BloodImage_00030.jpg
BloodImage_00124.jpg
BloodImage_00278.jpg
BloodImage_00261.jpg

test.txt has the same format as above; its content is the file names of the images to be tested.

BloodImage_00258.jpg
BloodImage_00320.jpg
BloodImage_00120.jpg

val.txt also has the same format as above; its content is the file names of the validation-set images.

BloodImage_00777.jpg
BloodImage_00951.jpg
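If these list files do not exist yet, a minimal sketch like the one below can generate them. It assumes the images sit in data/images/ (adjust the path and the split ratios to your own layout) and does a simple random 80/10/10 split:

# split_lists.py -- build train.txt / val.txt / test.txt from a folder of images.
# The directory name data/images is an assumption; change it to your layout.
import os
import random

image_dir = "data/images"
names = [f for f in os.listdir(image_dir) if f.lower().endswith((".jpg", ".png"))]
random.seed(0)
random.shuffle(names)

n_train = int(len(names) * 0.8)
n_val = int(len(names) * 0.1)
splits = {
    "train.txt": names[:n_train],
    "val.txt": names[n_train:n_train + n_val],
    "test.txt": names[n_train + n_val:],
}

for fname, items in splits.items():
    with open(fname, "w") as f:
        f.write("\n".join(items) + "\n")
    print(fname, len(items), "images")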

Label text files: each image in images has a corresponding label text file, named after the image (e.g. BloodImage_00091.txt), with contents in the following form.

0 0.669 0.5785714285714286 0.032 0.08285714285714285

The label text files all live in the /data/labels/ directory of the code above.
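Each value in a label line is normalized by the image size: first the class index, then the box center x, center y, box width and box height, each divided by the image width or height. A small helper (the function name and the example numbers are mine, purely for illustration) shows the conversion from a pixel box to this format:

# box_to_yolo.py -- convert a pixel bounding box to a YOLO-format label line.
def box_to_yolo_line(cls_id, xmin, ymin, xmax, ymax, img_w, img_h):
    # (xmin, ymin, xmax, ymax) are in pixels; the result is normalized to 0..1
    x_center = (xmin + xmax) / 2.0 / img_w
    y_center = (ymin + ymax) / 2.0 / img_h
    width = (xmax - xmin) / img_w
    height = (ymax - ymin) / img_h
    return f"{cls_id} {x_center} {y_center} {width} {height}"

# example: a 200x200 px box at (100, 150)-(300, 350) in a 640x480 image
print(box_to_yolo_line(0, 100, 150, 300, 350, 640, 480))
# -> 0 0.3125 0.5208333333333334 0.3125 0.4166666666666667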

3. Modify the configuration files:

3.1 Create a new yaml file in the data folder and call it trafficsigns.yaml, with the content below. The train entry is the path to the train.txt file written in step 2, and the val and test entries are analogous. nc is the number of classes; I am only testing 4 classes, so I write 4. Change names to your own class names (I was lazy and left them as the class indices).

# COCO 2017 dataset http://cocodataset.org
# Train command: python train.py --data coco.yaml
# Default dataset location is next to /yolov5:
#   /parent_folder
#     /coco
#     /yolov5


# download command/URL (optional)
download: bash data/scripts/get_coco.sh

# train and val data as 1) directory: path/images/, 2) file: path/images.txt, or 3) list: [path1/images/, path2/images/]
train: ../ImageSets/train.txt
val: ../ImageSets/val.txt
test: ../ImageSets/test.txt

# number of classes
nc: 4

# class names
names: ['0', '1', '2', '3']

# Print classes
# with open('data/coco.yaml') as f:
#   d = yaml.load(f, Loader=yaml.FullLoader)  # dict
#   for i, x in enumerate(d['names']):
#     print(i, x)
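Before training, it is worth a quick check that every image listed in the three txt files actually exists and has a matching label file. A minimal sketch (the data/images and data/labels paths are assumptions, change them to your layout):

# check_dataset.py -- consistency check between the list files and the labels.
import os

image_dir = "data/images"
label_dir = "data/labels"

for list_file in ("train.txt", "val.txt", "test.txt"):
    with open(list_file) as f:
        names = [line.strip() for line in f if line.strip()]
    missing_images = sum(not os.path.isfile(os.path.join(image_dir, n)) for n in names)
    missing_labels = sum(not os.path.isfile(os.path.join(label_dir, os.path.splitext(n)[0] + ".txt"))
                         for n in names)
    print(f"{list_file}: {len(names)} entries, {missing_images} missing images, "
          f"{missing_labels} missing labels")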

3.2 Modify the network configuration file in models. For example, since I plan to use the yolov5l model, I modify yolov5l.yaml to what I need. There are a few places to pay attention to:

    a. nc should be changed to your number of classes.

    b. The anchor sizes should be changed to fit your own dataset. For the specific method of recomputing anchors, refer to https://blog.csdn.net/gbz3300255/article/details/106276897; the idea is clustering the box sizes (a minimal sketch is shown after the model yaml below).

# parameters
nc: 4  # number of classes
depth_multiple: 1.0  # model depth multiple
width_multiple: 1.0  # layer channel multiple

# anchors
anchors:
  - [12,15, 14,20, 18,25]  # P3/8
  - [24,32, 24,18, 33,44]  # P4/16
  - [39,28, 59,49, 115,72]  # P5/32

# YOLOv5 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Focus, [64, 3]],  # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4
   [-1, 3, BottleneckCSP, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8
   [-1, 9, BottleneckCSP, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16
   [-1, 9, BottleneckCSP, [512]],
   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32
   [-1, 1, SPP, [1024, [5, 9, 13]]],
   [-1, 3, BottleneckCSP, [1024, False]],  # 9
  ]

# YOLOv5 head
head:
  [[-1, 1, Conv, [512, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]],  # cat backbone P4
   [-1, 3, BottleneckCSP, [512, False]],  # 13

   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]],  # cat backbone P3
   [-1, 3, BottleneckCSP, [256, False]],  # 17 (P3/8-small)

   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 14], 1, Concat, [1]],  # cat head P4
   [-1, 3, BottleneckCSP, [512, False]],  # 20 (P4/16-medium)

   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 10], 1, Concat, [1]],  # cat head P5
   [-1, 3, BottleneckCSP, [1024, False]],  # 23 (P5/32-large)

   [[17, 20, 23], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]
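As mentioned in point b above, the anchor sizes can be obtained by clustering the box widths and heights found in your own label files. The sketch below is a plain k-means over the label boxes scaled to a 640 px training size (the 640 and the data/labels path are assumptions); the official repo also ships its own autoanchor utility, so treat this only as an illustration:

# cluster_anchors.py -- rough k-means over label box sizes to get 9 anchors.
import glob
import numpy as np

img_size = 640  # assumed training image size
wh = []
for path in glob.glob("data/labels/*.txt"):
    for line in open(path):
        parts = line.split()
        if len(parts) == 5:
            _, _, _, w, h = map(float, parts)
            wh.append([w * img_size, h * img_size])
wh = np.array(wh)

k = 9
rng = np.random.default_rng(0)
centers = wh[rng.choice(len(wh), k, replace=False)]
for _ in range(100):
    # assign every box to its nearest center, then move centers to the cluster means
    dists = ((wh[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    labels = dists.argmin(1)
    new_centers = np.array([wh[labels == i].mean(0) if (labels == i).any() else centers[i]
                            for i in range(k)])
    if np.allclose(new_centers, centers):
        break
    centers = new_centers

# sort by area and print in the yaml format: 3 anchors per detection layer
anchors = centers[np.argsort(centers.prod(1))].round().astype(int)
for i in range(0, k, 3):
    print(anchors[i:i + 3].reshape(-1).tolist())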

4. Training:

python train.py --data data/trafficsigns.yaml --cfg models/yolov5l.yaml --weights '' --batch-size 16 --epochs 100
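The command above trains from scratch (--weights ''). To fine-tune from an official pretrained checkpoint instead, something like the following should work; the exact flags and the automatic download of yolov5l.pt depend on the repo version:

python train.py --data data/trafficsigns.yaml --cfg models/yolov5l.yaml --weights yolov5l.pt --batch-size 16 --epochs 100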

5. Test:

python detect.py --weights best.pt --img 320 --conf 0.4
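To point detect.py at your own images, pass --source. The path to best.pt depends on where your training run saved its weights (it varies between repo versions), so the path below is only an example:

python detect.py --weights runs/exp0/weights/best.pt --source data/images --img 320 --conf 0.4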

I haven't actually trained yet, because my torch version is too old to use CUDA's AMP module. I'll pick this up again when I have time, but that is the overall workflow.

 

 
