【YOLOv5】Deep Model Training (CPU Version)

1. Training with YOLOv5's own models

1.1 Clone the source code to the server with git

  • Clone the repository and install requirements.txt in a Python>=3.7.0 environment, including PyTorch>=1.7
git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install
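Before installing, it is worth confirming that the interpreter meets the stated Python>=3.7.0 requirement. A minimal stdlib-only check (the helper name is mine, not part of YOLOv5):

```python
import sys

def check_python(min_version=(3, 7)):
    """Return True if the running interpreter meets the minimum version.

    PyTorch >= 1.7 is checked separately by YOLOv5's requirements install,
    since torch may not be importable yet at this point.
    """
    return sys.version_info[:2] >= min_version

print("Python OK:", check_python())
```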

1.2 Automatically download models from the latest YOLOv5 release

  • Create a Python file yourself:
import torch

# Model
model = torch.hub.load("ultralytics/yolov5", "yolov5s")  # or yolov5n - yolov5x6, custom
# Images
img = "https://ultralytics.com/images/zidane.jpg"  # or file, Path, PIL, OpenCV, numpy, list
# Inference
results = model(img)
# Results
results.print()  # or .show(), .save(), .crop(), .pandas(), etc.

1.3 detect.py inference

  • Various source arguments can be passed here:
python detect.py --weights yolov5s.pt --source 0                               # webcam
                                               img.jpg                         # image
                                               vid.mp4                         # video
                                               screen                          # screenshot
                                               path/                           # directory
                                               list.txt                        # list of images
                                               list.streams                    # list of streams
                                               'path/*.jpg'                    # glob
                                               'https://youtu.be/Zgi9g1ksQHc'  # YouTube
                                               'rtsp://example.com/media.mp4'  # RTSP, RTMP, HTTP stream
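For orientation, the source list above can be mimicked with a small classification sketch. This is illustrative only, not YOLOv5's actual parsing code; the function name and extension lists are assumptions:

```python
from pathlib import Path

IMG_EXTS = {".jpg", ".jpeg", ".png", ".bmp"}
VID_EXTS = {".mp4", ".avi", ".mov", ".mkv"}

def classify_source(source: str) -> str:
    """Roughly classify a --source argument the way the list above does."""
    s = source.lower()
    if s.isdigit():
        return "webcam"
    if s == "screen":
        return "screenshot"
    if s.startswith(("rtsp://", "rtmp://", "http://", "https://")):
        return "stream/url"
    if "*" in s:
        return "glob"
    suffix = Path(s).suffix
    if suffix in IMG_EXTS:
        return "image"
    if suffix in VID_EXTS:
        return "video"
    if suffix == ".txt":
        return "list of images"
    if suffix == ".streams":
        return "list of streams"
    return "directory"

print(classify_source("0"))        # webcam
print(classify_source("vid.mp4"))  # video
```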

1.4 train.py for training

python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5n.yaml  --batch-size 128
                                                                 yolov5s                    64
                                                                 yolov5m                    40
                                                                 yolov5l                    24
                                                                 yolov5x                    16
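One practical consequence of the batch sizes above: smaller batches mean more optimizer steps per epoch. A quick arithmetic sketch, assuming COCO train2017's roughly 118,287 images (a commonly cited figure, not stated above):

```python
import math

def steps_per_epoch(num_images: int, batch_size: int) -> int:
    """Number of optimizer steps needed to see every image once."""
    return math.ceil(num_images / batch_size)

# Assumed dataset size: COCO train2017 (~118,287 images)
for bs in (128, 64, 40, 24, 16):
    print(f"batch {bs:>3}: {steps_per_epoch(118287, bs)} steps/epoch")
```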

1.5 Principle of YOLO

  • Input an image, run it through convolutions, and obtain the output detections.
    The YOLO algorithm uses a single CNN model to achieve end-to-end object detection. The whole pipeline works as follows: the input image is first resized to 448x448, then fed into the CNN, and finally the network's predictions are post-processed to obtain the detected objects. Compared with the R-CNN family, YOLO is a unified framework: it is faster, and its training is also end-to-end.
    *(figure: YOLO detection pipeline)*
    Before training on detection, the network is pre-trained on ImageNet. The pre-trained classification model uses the first 20 convolutional layers (Figure 8), followed by an average-pooling layer and a fully connected layer. After pre-training, 4 randomly initialized convolutional layers and 2 fully connected layers are appended to those 20 convolutional layers. Since detection tasks generally require higher-resolution input, the network input is increased from 224x224 to 448x448. The flow of the whole network is shown below:
    *(figure: YOLO network architecture)*
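The network described above is the original YOLOv1. From the YOLO paper (not stated above): its final layer predicts, for an S×S grid, B boxes per cell (x, y, w, h, confidence) plus C class probabilities per cell, giving a 7×7×30 output tensor with the paper's S=7, B=2, C=20. A quick check of that arithmetic:

```python
def yolo_output_size(S: int = 7, B: int = 2, C: int = 20) -> tuple:
    """Shape of YOLOv1's prediction tensor: an S x S grid of cells,
    each holding B boxes * 5 values (x, y, w, h, confidence) + C class scores."""
    depth = B * 5 + C
    return (S, S, depth)

shape = yolo_output_size()
print(shape)                            # (7, 7, 30)
print(shape[0] * shape[1] * shape[2])   # 1470 values in total
```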

2. Train your own model

  • Directory structure
    *(figure: yolov5 directory structure)*

2.1 Create a new directory mydata under data, as shown below

  • mydata
    • images
      • test
      • train (stores the original images)
    • labels
      • test
      • train (stores the label files)
        *(figure: mydata directory tree)*
  • After saving the images:

*(figure: saved images)*

  • As shown in the figure, the objects to detect are building blocks.

*(figure: a building-block image)*
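The directory layout above can be scaffolded with a short stdlib script. A minimal sketch; the function name and default root are mine:

```python
from pathlib import Path

def make_dataset_dirs(root: str = "data/mydata") -> list:
    """Create the mydata layout: images/{train,test} and labels/{train,test}."""
    created = []
    for sub in ("images/train", "images/test", "labels/train", "labels/test"):
        p = Path(root) / sub
        p.mkdir(parents=True, exist_ok=True)  # idempotent: safe to re-run
        created.append(str(p))
    return created

print(make_dataset_dirs())
```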

2.2 Data annotation with labelImg

  • If labelImg is not installed, install it first (for example with pip install labelImg), then launch it to annotate the images.
    *(figure: labelImg annotation)*
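labelImg's YOLO export writes one .txt file per image, each line holding class x_center y_center width height, all normalized to [0, 1]. A minimal sketch of that conversion from pixel coordinates (the function name is mine):

```python
def to_yolo_line(cls: int, x1: float, y1: float, x2: float, y2: float,
                 img_w: int, img_h: int) -> str:
    """Convert a pixel-space box (x1, y1, x2, y2) to a YOLO label line:
    'class x_center y_center width height', normalized to [0, 1]."""
    xc = (x1 + x2) / 2 / img_w
    yc = (y1 + y2) / 2 / img_h
    w = (x2 - x1) / img_w
    h = (y2 - y1) / img_h
    return f"{cls} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

# A 100x100 box centered in a 640x480 image, class 0 (building block)
print(to_yolo_line(0, 270, 190, 370, 290, 640, 480))
```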

2.3 Dataset configuration file

# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: /home/hfg/Soft/Idea_Project/deep/yolov5/data/mydata/images/train  # dataset root dir
train: /home/hfg/Soft/Idea_Project/deep/yolov5/data/mydata/images/train  # train images
val: /home/hfg/Soft/Idea_Project/deep/yolov5/data/mydata/images/train  # val images (here the same set as train)
test:  # test images (optional)

# Classes
names:
  0: 积木  # "building block"

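Before training, it is worth checking that every image in images/train has a matching label file in labels/train. A stdlib sketch (the function name is mine; paths assume the layout from 2.1):

```python
from pathlib import Path

IMG_EXTS = {".jpg", ".jpeg", ".png", ".bmp"}

def missing_labels(images_dir: str, labels_dir: str) -> list:
    """Return the stems of images that have no matching .txt label file."""
    imgs = Path(images_dir)
    if not imgs.is_dir():
        return []  # nothing to check
    label_stems = {p.stem for p in Path(labels_dir).glob("*.txt")}
    return sorted(p.stem for p in imgs.iterdir()
                  if p.suffix.lower() in IMG_EXTS and p.stem not in label_stems)

missing = missing_labels("data/mydata/images/train", "data/mydata/labels/train")
print("images without labels:", missing)
```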

2.4 Point train.py at the dataset configuration

*(figure: setting the data path in train.py)*

2.5 Start training

  • Training runs as shown in the figure; here epochs is set to 100.
    *(figure: training console output)*
  • The training output files are shown in the figure.
    *(figure: training output files)*
  • Training results:
    *(figure: training results)*

Origin blog.csdn.net/h609232722/article/details/128933007