Training your own dataset with YOLOv4-pytorch
Introduction to YOLOv4-pytorch
GitHub address: argusswift/YOLOv4-pytorch: https://github.com/argusswift/YOLOv4-pytorch
This is a PyTorch reproduction of the darknet YOLOv4 architecture. It also provides useful variants such as Mobilenetv3-YOLOv4 and attentive YOLOv4, and the code is easy to use and easy to read.
Environment configuration
Operating environment
- Nvidia GeForce RTX 2080 Ti
- CUDA 10.0
- cuDNN 7.0
- Windows or Linux
- Python 3.6
Install dependencies
pip3 install -r requirements.txt --user
Preparation
Clone the YOLOv4-pytorch repository
git clone https://github.com/argusswift/YOLOv4-pytorch.git
Prepare dataset
The repo supports three data formats: PASCAL VOC, COCO, and Customer (your own custom dataset).
Download the PascalVOC/MSCOCO 2017 dataset
PascalVOC: VOC2012_trainval, VOC2007_trainval, VOC2007_test
MSCOCO 2017: train2017_img, train2017_ann, val2017_img, val2017_ann, test2017_img, test2017_list
- Put the dataset in a directory and update "DATA_PATH" in config/yolov4_config.py to point to it;
- (For the COCO dataset) use utils/coco_to_voc.py to convert the COCO data type to the VOC data type;
- Use utils/voc.py to convert PascalVOC *.xml annotations to *.txt format, or utils/coco.py to convert COCO *.json annotations to *.txt format (one line per image: image_path xmin0,ymin0,xmax0,ymax0,class0 xmin1,ymin1,xmax1,ymax1,class1 ...); see the parser sketch after this list.
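Each line of the converted *.txt file describes one image and all of its boxes. A minimal sketch of a parser for this format (parse_annotation_line is a hypothetical helper for illustration, not part of the repo):

# Parse one line of the converted annotation file:
#   image_path xmin0,ymin0,xmax0,ymax0,class0 xmin1,ymin1,xmax1,ymax1,class1 ...
def parse_annotation_line(line):
    parts = line.strip().split()
    image_path = parts[0]
    boxes = []
    for box in parts[1:]:
        xmin, ymin, xmax, ymax, cls = box.split(",")
        boxes.append((float(xmin), float(ymin), float(xmax), float(ymax), int(cls)))
    return image_path, boxes

# Example: parse_annotation_line("images/0001.jpg 48,240,195,371,0 8,12,352,498,1")
# -> ("images/0001.jpg", [(48.0, 240.0, 195.0, 371.0, 0), (8.0, 12.0, 352.0, 498.0, 1)])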
Prepare your own dataset
Build your own dataset in the PascalVOC layout:
- VOC
  - JPEGImage      # original images
  - Annotations    # *.xml annotation files
  - ImageSets
    - Main         # train/test split lists
      - train.txt
      - test.txt
- Put the images in the JPEGImage folder and the annotation files in the Annotations folder;
- Use utils/xml_to_txt.py to write the training and test splits to ImageSets/Main/*.txt (a sketch of the idea follows this section);
- Use utils/voc.py to convert PascalVOC *.xml annotations to *.txt format, or utils/coco.py to convert COCO *.json annotations to *.txt format (one line per image: image_path xmin0,ymin0,xmax0,ymax0,class0 xmin1,ymin1,xmax1,ymax1,class1 ...);
- Modify NUM and CLASSES in Customer_DATA in config/yolov4_config.py:
Customer_DATA = {
    "NUM": 2,  # number of classes in your dataset
    "CLASSES": [
        "name",
        "flag",
    ],  # your class names
}
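For reference, a minimal sketch of the split-writing idea behind utils/xml_to_txt.py (the 80/20 ratio, the fixed seed, and the paths are illustrative assumptions, not the script's actual defaults):

import os
import random

# Collect image IDs and split them into train/test lists for ImageSets/Main/.
random.seed(0)  # fixed seed so the split is reproducible (illustrative)
ids = [os.path.splitext(f)[0] for f in os.listdir("VOC/JPEGImage") if f.endswith(".jpg")]
random.shuffle(ids)
split = int(0.8 * len(ids))  # 80% train / 20% test (assumption)
os.makedirs("VOC/ImageSets/Main", exist_ok=True)
with open("VOC/ImageSets/Main/train.txt", "w") as f:
    f.write("\n".join(ids[:split]))
with open("VOC/ImageSets/Main/test.txt", "w") as f:
    f.write("\n".join(ids[split:]))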
Download the weight files
- Darknet pre-trained weights: YOLOv4;
- Mobilenet pre-trained weights: mobilenetv2, mobilenetv3 (decompression password: args);
- Create a new folder weight/ and put the weight files into it;
- Modify MODEL_TYPE in config/yolov4_config.py:
MODEL_TYPE = {
    "TYPE": "YOLOv4"  # YOLO type: YOLOv4, Mobilenet-YOLOv4 or Mobilenetv3-YOLOv4
}
Train
Modify the parameters in config/yolov4_config.py:
TRAIN = {
    "DATA_TYPE": "Customer",  # VOC, COCO or Customer
    "TRAIN_IMG_SIZE": 416,
    "AUGMENT": True,
    "BATCH_SIZE": 8,
    "MULTI_SCALE_TRAIN": False,
    "IOU_THRESHOLD_LOSS": 0.5,
    "YOLO_EPOCHS": 4000,
    "Mobilenet_YOLO_EPOCHS": 120,
    "NUMBER_WORKERS": 0,
    "MOMENTUM": 0.9,
    "WEIGHT_DECAY": 0.0005,
    "LR_INIT": 1e-4,
    "LR_END": 1e-6,
    "WARMUP_EPOCHS": 2,  # or None
}
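For intuition about how these values interact: training warms the learning rate up for WARMUP_EPOCHS and then decays it from LR_INIT toward LR_END. A minimal sketch of such a schedule (the cosine shape and the lr_at_epoch helper are illustrative assumptions, not the repo's exact scheduler):

import math

def lr_at_epoch(epoch, total_epochs=4000, warmup=2, lr_init=1e-4, lr_end=1e-6):
    # Linear warmup to LR_INIT, then cosine decay to LR_END (illustrative).
    if warmup and epoch < warmup:
        return lr_init * (epoch + 1) / warmup
    t = (epoch - warmup) / max(1, total_epochs - warmup)
    return lr_end + 0.5 * (lr_init - lr_end) * (1 + math.cos(math.pi * t))

# lr_at_epoch(0) == 5e-05 (mid-warmup); lr_at_epoch(3999) is close to LR_END.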
Training command:
python -u train.py --weight_path weight/yolov4.weights --gpu_id 0
or run in the background with nohup:
CUDA_VISIBLE_DEVICES=0 nohup python -u train.py --weight_path weight/yolov4.weights --gpu_id 0 > nohup.log 2>&1 &
or resume from the last saved checkpoint (last.pt):
CUDA_VISIBLE_DEVICES=0 nohup python -u train.py --weight_path weight/last.pt --gpu_id 0 > nohup.log 2>&1 &
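Resuming works because the trainer periodically saves a checkpoint (last.pt). A minimal sketch of the usual PyTorch save/restore pattern (the dictionary keys and helper names are assumptions, not necessarily the repo's exact checkpoint format):

import torch

# Save at the end of each epoch (key names are illustrative assumptions).
def save_checkpoint(model, optimizer, epoch, path="weight/last.pt"):
    torch.save({
        "epoch": epoch,
        "model": model.state_dict(),
        "optimizer": optimizer.state_dict(),
    }, path)

# Resume: restore weights, optimizer state, and the epoch counter.
def load_checkpoint(model, optimizer, path="weight/last.pt"):
    ckpt = torch.load(path, map_location="cpu")
    model.load_state_dict(ckpt["model"])
    optimizer.load_state_dict(ckpt["optimizer"])
    return ckpt["epoch"] + 1  # first epoch to run after resuming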
Test
Image test
For the VOC dataset ($DATA_TEST is the path to your test images):
CUDA_VISIBLE_DEVICES=0 python3 eval_voc.py --weight_path weight/best.pt --gpu_id 0 --visiual $DATA_TEST --eval --mode det
For the COCO dataset:
CUDA_VISIBLE_DEVICES=0 python3 eval_coco.py --weight_path weight/best.pt --gpu_id 0 --visiual $DATA_TEST --eval --mode det
Video test
CUDA_VISIBLE_DEVICES=0 python3 video_test.py --weight_path best.pt --gpu_id 0 --video_path video.mp4 --output_dir $OUTPUT_DIR
Problems encountered
- evaluator.py cannot find the *.xml annotation file
FileNotFoundError: [Errno 2] No such file or directory: '/home/my/YOLOv4-pytorch/data/VOC/Annotations\\18_3_dets0.xml'
Cause: the annotation path is incorrect.
Solution:
1. Check that DATA_PATH in yolov4_config.py points to the right location.
2. In evaluator.py, change line 221 to self.val_data_path, "Annotations/" + "{:s}.xml".
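The root cause is a hard-coded path separator, which breaks when the code runs on a different OS. Building the path with os.path.join sidesteps the issue on both Windows and Linux; a minimal sketch (annotation_path_template is a hypothetical helper, while in the repo the path is built inside evaluator.py):

import os

def annotation_path_template(val_data_path):
    # Let os.path.join pick the separator instead of hard-coding "\\" or "/".
    return os.path.join(val_data_path, "Annotations", "{:s}.xml")

# annotation_path_template("/home/my/YOLOv4-pytorch/data/VOC")
# -> "/home/my/YOLOv4-pytorch/data/VOC/Annotations/{:s}.xml" (on Linux)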