Training yolov7 on your own dataset on a Linux server

Foreword

yolov7 had already been out for a while by the time I wrote this post, and there are plenty of blogs describing the paper's innovations and the yolov7 network structure, so I won't repeat that here. This post focuses on how to train yolov7 on your own dataset.

Environment setup

First, download the full project code from the official yolov7 repository [https://github.com/WongKinYiu/yolov7], then create a conda virtual environment on the server. I use Python 3.7 here:

	```
	conda create -n yolo7 python=3.7
	```

Then activate the virtual environment (`conda activate yolo7`), enter the yolov7-main folder, and run the following command:

	```
	pip install -r requirements.txt  -i https://pypi.tuna.tsinghua.edu.cn/simple/ 
	```

The Tsinghua University PyPI mirror is used here to install the Python dependencies, which is much faster from mainland China.
If the installation finishes without errors, the environment can be used directly. If a particular package fails to install, simply reinstall that package on its own with pip.
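Before moving on, it is worth a quick sanity check that PyTorch can actually see the GPU. A minimal snippet, assuming PyTorch was pulled in correctly by requirements.txt:

	```
	# quick sanity check that PyTorch installed correctly and can see the GPU
	import torch

	print(torch.__version__)          # installed PyTorch version
	print(torch.cuda.is_available())  # should print True on a GPU server
	print(torch.cuda.device_count())  # number of visible GPUs
	```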

Finally, you can run the model training command below. If it starts without errors, the environment is fully set up.

	```
	python train.py --workers 8 --device 0 --batch-size 32 --data data/coco.yaml --img 640 640 --cfg cfg/training/yolov7.yaml --weights '' --name yolov7 --hyp data/hyp.scratch.p5.yaml
	```

Modify the configuration files

With the environment set up, training on your own dataset mainly involves labeling the data, converting the labels into the COCO data format (there are many tutorials online, and a conversion sketch is given at the end of this section), and then modifying a few configuration files. I am using the same COCO dataset as the official one; if you use your own dataset, pay attention to the following modifications:
(1) In cfg/training/yolov7.yaml, the number of categories needs to be modified:

	```
	# parameters
	nc: 80  # number of classes  ## modify for your dataset
	depth_multiple: 1.0  # model depth multiple
	width_multiple: 1.0  # layer channel multiple
	```

(2) In data/coco.yaml, the dataset paths and class information need to be modified:

	```
	# download command/URL (optional)
	#download: bash ./scripts/get_coco.sh

	# train and val data as 1) directory: path/images/, 2) file: path/images.txt, or 3) list: [path1/images/, path2/images/]
	train: /data/benchmark/COCO2017/coco/train2017.txt  # 118287 images  ## modify for your dataset
	val: /data/benchmark/COCO2017/coco/val2017.txt  # 5000 images  ## modify for your dataset
	test: /data/benchmark/COCO2017/coco/test-dev2017.txt  ## modify for your dataset

	# number of classes
	nc: 80  ## modify for your dataset

	# class names
	names: [ 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light',
	     'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
	     'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',
	     'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard',
	     'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
	     'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
	     'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
	     'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear',
	     'hair drier', 'toothbrush' ]  ## modify for your dataset
	```
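As promised above, here is a minimal sketch of the label conversion, assuming your annotations sit in a single COCO-style JSON file. This is not the repo's own tooling; the paths and output layout (one YOLO-format txt file per image, which is what train.py reads) are illustrative placeholders to adapt to your directory structure:

	```
	# minimal sketch (not the repo's own tooling): convert COCO-style JSON
	# annotations to YOLO txt labels. Paths below are hypothetical placeholders.
	import json
	import os

	ANN_FILE = "annotations/instances_train2017.json"  # hypothetical input path
	OUT_DIR = "labels/train2017"                       # hypothetical output dir
	os.makedirs(OUT_DIR, exist_ok=True)

	with open(ANN_FILE) as f:
	    coco = json.load(f)

	# map COCO category ids (possibly non-contiguous) to 0-based class indices
	cat_ids = sorted(c["id"] for c in coco["categories"])
	cat2idx = {cid: i for i, cid in enumerate(cat_ids)}
	images = {img["id"]: img for img in coco["images"]}

	labels = {}  # image id -> list of YOLO label lines
	for ann in coco["annotations"]:
	    img = images[ann["image_id"]]
	    x, y, w, h = ann["bbox"]  # COCO box: top-left x, top-left y, width, height
	    # YOLO format: class x_center y_center width height, normalized to [0, 1]
	    xc = (x + w / 2) / img["width"]
	    yc = (y + h / 2) / img["height"]
	    line = f'{cat2idx[ann["category_id"]]} {xc:.6f} {yc:.6f} ' \
	           f'{w / img["width"]:.6f} {h / img["height"]:.6f}'
	    labels.setdefault(ann["image_id"], []).append(line)

	# one txt file per image, named after the image file
	for img_id, img in images.items():
	    stem = os.path.splitext(img["file_name"])[0]
	    with open(os.path.join(OUT_DIR, stem + ".txt"), "w") as f:
	        f.write("\n".join(labels.get(img_id, [])))
	```

Note that the dataloader locates each image's labels by swapping images for labels in the image path, so keep the conventional images/ and labels/ sibling directory layout, and make sure the train/val txt files referenced in data/coco.yaml list the image paths.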

Finally

After making the modifications above, you can run the training command directly to train the model.

The author trained yolov7 for the full 300 epochs on a single V100 with a batch size of 32, which took about two weeks. The three losses on the training set ended up roughly as follows: box_loss around 0.026, obj_loss around 0.024, and cls_loss down to about 0.0065.

[Figure: training loss curves]

The metric trends on the training set are roughly as follows:

[Figure: training-set accuracy curves]
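To regenerate such curves yourself, a minimal plotting sketch along the following lines works. It assumes the YOLOv5-style results.txt log that yolov7 writes under runs/train/<run name>/, with whitespace-separated columns and the box, obj, and cls losses in positions 2-4; verify the column layout against your own log before trusting the plot:

	```
	# minimal sketch: re-plot the three training losses from results.txt
	# (assumes the YOLOv5-style column layout that yolov7 inherits:
	#  epoch, gpu_mem, box, obj, cls, total, labels, img_size, ...)
	import matplotlib.pyplot as plt

	box, obj, cls = [], [], []
	with open("runs/train/yolov7/results.txt") as f:  # path assumes --name yolov7
	    for line in f:
	        parts = line.split()
	        box.append(float(parts[2]))
	        obj.append(float(parts[3]))
	        cls.append(float(parts[4]))

	for series, label in [(box, "box_loss"), (obj, "obj_loss"), (cls, "cls_loss")]:
	    plt.plot(series, label=label)
	plt.xlabel("epoch")
	plt.ylabel("loss")
	plt.legend()
	plt.savefig("loss_curves.png")
	```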

Original article: blog.csdn.net/weixin_42280271/article/details/127752851