[Target Detection] Training a YOLOv5 Model on Your Own Dataset: Animal Detection with YOLOv5 (Cattle and Horse Detection and Recognition Based on the PyTorch Framework)

Description

This experiment is based on the PyTorch framework and uses the YOLOv5 model to perform target detection of cattle and horses. I produced the dataset myself; for the specific production method, please refer to my other blog post on making datasets with labelme. For convenience, you can also download the dataset directly and use it to reproduce the model. The network disk link is as follows: Dataset link: Extraction code: xss6.

1. Environment installation

(This step can be skipped if you already have a conda environment set up; it is included only for completeness.)
(1) Open the Anaconda Prompt tool that comes with Anaconda.
(2) Run the following commands to create the conda environment:

conda env list                       # list existing conda environments
conda create -n yolov5 python=3.8    # create a conda environment named yolov5 with Python 3.8

During creation, the location of the new environment is shown in the command output. You can then browse to that location on your computer to find the created environment.
(3) Next, open PyCharm and add the new environment as the project interpreter. At this point, the newly created interpreter has been added.
Note: My PyCharm has been localized into Chinese. For installing PyCharm and switching it to Chinese, please refer to my other blog post on PyCharm installation and localization.
At this point, the conda environment setup is complete.

2. Install dependency packages

Open the YOLOv5 project and you will see the requirements.txt file, which lists the packages the YOLOv5 model needs:
(1) Open the terminal inside PyCharm.
(2) Enter the conda environment. The steps are as follows: 1. change the terminal directory into the yolov5 project; 2. check the current conda environment (the "*" marks the active environment); 3. switch to the yolov5 conda environment.
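For reference, the three steps correspond to commands like the following (assuming the project folder is named yolov5):

cd yolov5               # 1. enter the yolov5 project directory
conda env list          # 2. list conda environments; "*" marks the active one
conda activate yolov5   # 3. switch to the yolov5 environment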
(3) Install dependency packages

pip install -r requirements.txt


3. Test whether the dependencies are installed successfully

Find the detect.py file and run it. You will see that the test images for the model are located under data/images in the project directory. Run detect.py, and the last line of output reports the path where the test results are saved.
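A minimal test run from the terminal might look like this (with the standard YOLOv5 repository, the yolov5s.pt weights are downloaded automatically if not already present):

python detect.py --weights yolov5s.pt --source data/images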
Open the runs/detect/exp directory and you will find the test results.
A successful test means the environment has been installed correctly.

4. File configuration

(1) Put the prepared dataset in place. For the dataset production process, refer to: yolov5 dataset production. Alternatively, you can use the annotated dataset linked at the beginning of this article.
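Whichever dataset you use, each image comes with a matching .txt label file in YOLO format: one object per line, giving the class index followed by the normalized box center x, center y, width, and height. A label file might look like this (the coordinates below are purely illustrative):

0 0.512 0.430 0.210 0.185
1 0.733 0.621 0.154 0.240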
(2) Create a .yaml file. I named it horse.yaml (there is no naming requirement; choose whatever name suits you):
Fill it in according to the paths of your dataset: one part specifies the dataset paths, and the other specifies the target-detection category labels 0 and 1.

# parent
# ├── yolov5
# └── dataset
#     ├── train
#     │   └── images
#     ├── val
#     │   └── images
#     └── test
#         └── images
train: ../dataset/train/images/
val: ../dataset/val/images/
test: ../dataset/test/images/

# Classes
names:
  0: cattle
  1: horse
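
As a quick sanity check before training, a short Python sketch like the one below (my own addition, not part of the YOLOv5 repository) can confirm that each split directory exists and count its images; it assumes the directory layout from the comment tree above and common .jpg/.png extensions.

from pathlib import Path

# dataset root as a sibling of the yolov5 folder, matching the tree above
root = Path("../dataset")

for split in ("train", "val", "test"):
    images = root / split / "images"
    if images.is_dir():
        count = len(list(images.glob("*.jpg"))) + len(list(images.glob("*.png")))
        print(f"{split}: {count} images")
    else:
        print(f"{split}: missing directory {images}")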

(3) Modify the number of target-detection categories. This experiment detects and recognizes two categories, cattle and horses, so the number of classes in the model configuration is changed to 2. The configuration is now ready.
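In the standard models/yolov5s.yaml, this is a one-line change near the top of the file (the remaining layers stay untouched):

nc: 2  # number of classes (80 by default, for COCO)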

5. Model training

Enter the following command in the terminal:

python train.py --img 640 --batch 32 --epoch 100 --data data/horse.yaml --cfg models/yolov5s.yaml --weights weights/yolov5s.pt

--img: input image size (640)
--batch: batch size (32)
--epoch: number of training epochs (100)
--data: path to the dataset configuration .yaml file (data/horse.yaml)
--cfg: path to the model .yaml file (models/yolov5s.yaml)
--weights: path to the initial weights file (weights/yolov5s.pt)
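
If a GPU is available, the standard YOLOv5 train.py also accepts a --device flag to pin training to it, for example:

python train.py --img 640 --batch 32 --epoch 100 --data data/horse.yaml --cfg models/yolov5s.yaml --weights weights/yolov5s.pt --device 0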

As shown in the training output, after 100 epochs of training, the results are saved under the runs/train/exp path. Enter this path to see the outputs of model training, including several result plots and images.

6. Model evaluation

To evaluate the trained model, run the following command:

python val.py --weights runs/train/exp/weights/best.pt --data ./data/horse.yaml --img 640

--weights: path to the trained model (runs/train/exp/weights/best.pt; the number after exp should match the training output path)
--data: path to the dataset configuration .yaml file (data/horse.yaml)
--img: input image size (640)

You can find that the model evaluation results are saved in the runs/val/exp2 path; enter this folder. The following plots show how different metrics of the model change during training:

● Recall (R_curve.png)

● F1-Score (F1_curve.png)

● Precision (P_curve.png)

● PR curve (PR_curve.png)
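
For reference, all of these curves are computed from the model's precision P and recall R at varying confidence thresholds; the F1 score plotted in F1_curve.png is their harmonic mean:

F1 = 2 * P * R / (P + R)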

7. Model inference

Run inference on the test set with the following command:
python detect.py --source ../dataset/test/images --weights ./runs/train/exp/weights/best.pt

--weights: path to the trained model (runs/train/exp/weights/best.pt, the same path used in model evaluation)
--source: path to the test set (../dataset/test/images)
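
The standard YOLOv5 detect.py additionally accepts a confidence threshold; raising it from the default filters out low-confidence detections, for example:

python detect.py --source ../dataset/test/images --weights ./runs/train/exp/weights/best.pt --conf-thres 0.4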


Origin: blog.csdn.net/weixin_45736855/article/details/129625070