Implementing classroom behavior detection with YOLOv5

1. Environment setup

conda create -n yolov5 python=3.6
conda activate yolov5
conda install pytorch==1.7.0 torchvision==0.8.0 torchaudio==0.7.0 cudatoolkit=11.0 -c pytorch
pip install -r yolov5/requirements.txt
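
Note that yolov5/requirements.txt only exists once the repository has been cloned in step 2, so the last pip command is run after cloning. As a quick sanity check that PyTorch was installed with working GPU support (a minimal sketch, not part of the original post):

import torch
print(torch.__version__)          # expect 1.7.0
print(torch.cuda.is_available())  # True if the cudatoolkit 11.0 build matches a usable GPU driver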

2. Downloading the YOLOv5 code

git clone https://github.com/ultralytics/yolov5/
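
The model configuration in step 5 uses the Focus, BottleneckCSP and SPP modules, which belong to older YOLOv5 releases (newer versions replaced them with C3 and SPPF). If the current master rejects that configuration, checking out an older release tag is one option (the exact tag is an assumption; the original post does not say which version it used):

cd yolov5
git checkout v3.1   # assumed tag whose yolov5s.yaml still uses Focus and BottleneckCSP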

3. Splitting and preparing the dataset

Link: https://pan.baidu.com/s/1-v3jZpOGL86sw-R4W6YL5w
Extraction code: 7vej
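
If you build your own dataset instead, the expected layout is dataset/train/images, dataset/train/labels, dataset/valid/images and dataset/valid/labels, with one YOLO-format .txt label file per image. A minimal 80/20 split script might look like this (a sketch; the raw_images and raw_labels source folders are assumptions):

import os, random, shutil

random.seed(0)
images = [f for f in os.listdir('raw_images') if f.endswith(('.jpg', '.png'))]
random.shuffle(images)
split = int(0.8 * len(images))  # 80% train, 20% validation

for subset, files in [('train', images[:split]), ('valid', images[split:])]:
    os.makedirs(f'dataset/{subset}/images', exist_ok=True)
    os.makedirs(f'dataset/{subset}/labels', exist_ok=True)
    for name in files:
        label = os.path.splitext(name)[0] + '.txt'
        shutil.copy(os.path.join('raw_images', name), f'dataset/{subset}/images/{name}')
        shutil.copy(os.path.join('raw_labels', label), f'dataset/{subset}/labels/{label}')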

4. Loading the data (data.yaml)

# These paths must match the location of the extracted dataset
train: ./dataset/train/images
val: ./dataset/valid/images

nc: 5
names: ['listen', 'write', 'val', 'sleep', 'phone']
# We detect five behavior classes here, so nc is 5 and names lists five labels; replace them with your own dataset's classes in the same way
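
Each label file contains one line per object in the form class_index x_center y_center width height, with coordinates normalized to [0, 1]. A quick per-class count helps catch class-index mistakes before training (a sketch, assuming the layout above):

import glob
from collections import Counter

counts = Counter()
for path in glob.glob('dataset/train/labels/*.txt'):
    with open(path) as f:
        for line in f:
            if line.strip():
                counts[int(line.split()[0])] += 1
print(counts)  # keys should stay within 0..nc-1, i.e. 0..4 here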

5. Custom model configuration file

nc: 5 # number of classes
depth_multiple: 0.33  # model depth multiple
width_multiple: 0.50  # layer channel multiple

# anchors
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# YOLOv5 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Focus, [64, 3]],  # 0-P1/2
   [-1, 1, Conv, [128, 3, 2]],  # 1-P2/4
   [-1, 3, BottleneckCSP, [128]],
   [-1, 1, Conv, [256, 3, 2]],  # 3-P3/8
   [-1, 9, BottleneckCSP, [256]],
   [-1, 1, Conv, [512, 3, 2]],  # 5-P4/16
   [-1, 9, BottleneckCSP, [512]],
   [-1, 1, Conv, [1024, 3, 2]],  # 7-P5/32
   [-1, 1, SPP, [1024, [5, 9, 13]]],
   [-1, 3, BottleneckCSP, [1024, False]],  # 9
  ]

# YOLOv5 head
head:
  [[-1, 1, Conv, [512, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]],  # cat backbone P4
   [-1, 3, BottleneckCSP, [512, False]],  # 13

   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]],  # cat backbone P3
   [-1, 3, BottleneckCSP, [256, False]],  # 17 (P3/8-small)

   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 14], 1, Concat, [1]],  # cat head P4
   [-1, 3, BottleneckCSP, [512, False]],  # 20 (P4/16-medium)

   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 10], 1, Concat, [1]],  # cat head P5
   [-1, 3, BottleneckCSP, [1024, False]],  # 23 (P5/32-large)

   [[17, 20, 23], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]
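
Save this file as models/my.yaml inside the cloned repository (the name must match the --cfg argument used in step 6). The nc value here has to agree with nc and the number of names in data.yaml; a quick consistency check (a sketch, using the PyYAML package already pulled in by the YOLOv5 requirements):

import yaml

with open('data.yaml') as f:
    data_cfg = yaml.safe_load(f)
with open('models/my.yaml') as f:
    model_cfg = yaml.safe_load(f)

assert data_cfg['nc'] == model_cfg['nc'] == len(data_cfg['names']), \
    'nc and the number of class names must agree'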

6. Training

python train.py --img 416 --batch 64 --epochs 100 --data 'data.yaml' --cfg ./models/my.yaml --weights '' --name yolov5s_results  --cache
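
During training, metrics and checkpoints are written under the runs/ directory (in recent releases runs/train/yolov5s_results/, with best.pt and last.pt in its weights subfolder; older releases use a slightly different layout). Progress can also be followed with TensorBoard:

tensorboard --logdir runs   # then open http://localhost:6006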


7. Testing

python detect.py --weights best.pt --img 416 --conf 0.7 --source ./test
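
The annotated images are saved under runs/detect/. For programmatic use, recent YOLOv5 releases also allow loading the trained weights through torch.hub (a sketch; the image path is only an example):

import torch

model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt')
results = model('test/example.jpg')  # hypothetical test image
results.print()                      # summary of detections per class
results.save()                       # writes an annotated copy under runs/detect/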

The detection results look good; message me privately if you need help.


Source: blog.csdn.net/hasque2019/article/details/126875850