Overview
This experiment is based on the PyTorch framework and uses the YOLOv5 model to detect cattle and horses. I made the dataset myself; for the production method, see my other blog post on making datasets with labelme. For convenience, you can also download the annotated dataset directly and use it to reproduce the model. The network disk link is as follows: Dataset link: Extraction code: xss6.
1. Environment installation
(This step can be skipped if you already have a working conda environment; it is included only for completeness.)
(1) Open the Anaconda Prompt tool that ships with Anaconda.
(2) Run the following commands to create the conda environment:
conda env list  # list the existing conda environments
conda create -n yolov5 python=3.8  # create an environment named yolov5 with the Python 3.8 interpreter
During creation, the location of the new environment is printed, as shown in the red box in the picture above. You can then browse to that location on disk to find the created environment, as shown below:

(3) Next, open PyCharm and add the interpreter with the following operations:

The newly created interpreter has now been added.
Note: my PyCharm is the Chinese-language version. For installing PyCharm and switching it to Chinese, see my other blog post on PyCharm installation and localization.
At this point, the conda environment setup is complete.
2. Install dependency packages
Open the yolov5 project and you will see the requirements.txt file, which lists the packages the YOLOv5 model needs:
(1) Open the terminal in PyCharm.

(2) Switch to the yolov5 conda environment.

The steps are as follows: step 1 changes the terminal path to the yolov5 model directory; step 2 checks the current conda environment (the "*" marks the active one); step 3 switches to the yolov5 conda environment.
(3) Install dependency packages
pip install -r requirements.txt
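After pip finishes, you can sanity-check that the key packages actually resolved. A minimal stdlib-only sketch; the module list below is an assumption based on typical YOLOv5 dependencies (note these are import names, not pip names: opencv-python imports as cv2, PyYAML as yaml), so adjust it to match your requirements.txt:

```python
import importlib.util

def missing_packages(packages):
    """Return the names from `packages` that cannot be imported."""
    return [p for p in packages if importlib.util.find_spec(p) is None]

# Assumed core yolov5 dependencies; extend to match your requirements.txt.
core = ["torch", "torchvision", "numpy", "cv2", "yaml"]
print(missing_packages(core))  # an empty list means everything resolved
```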
3. Test whether the dependencies are installed successfully
Find the detect.py file and run it.

You can see that the model's test images are located under data/images in the current directory:

Run the detect.py file; the last line of output reports the path where the test results are saved, as shown below:

Open the runs/detect/exp directory and you can find the test results:

The test succeeded, which means the environment was installed correctly.
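Each run of detect.py (and later train.py and val.py) saves into a new numbered folder: exp, exp2, exp3, and so on. A small stdlib helper to locate the most recent one, assuming the stock numbering convention of the yolov5 repository:

```python
from pathlib import Path

def latest_exp(base: Path):
    """Return the exp folder with the highest run number ('exp' counts as 1)."""
    def run_number(p: Path) -> int:
        suffix = p.name[len("exp"):]
        return int(suffix) if suffix.isdigit() else 1
    exps = [p for p in base.iterdir()
            if p.is_dir() and p.name.startswith("exp")]
    return max(exps, key=run_number) if exps else None

# Example with a throwaway folder mimicking runs/detect:
import tempfile
with tempfile.TemporaryDirectory() as tmp:
    base = Path(tmp)
    for name in ("exp", "exp2", "exp10"):
        (base / name).mkdir()
    print(latest_exp(base).name)  # -> exp10
```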
4. File configuration
(1) Add the prepared dataset.

Note: for the dataset production process, see yolov5 data set production, or use the annotated dataset from the beginning of the article:
(2) Create a .yaml file
I named it horse.yaml (there is no requirement; name it however you like):

Fill it in according to the paths of the dataset: one part is the paths of the dataset splits, and the other is the category labels 0 and 1 for target detection.
# parent
# ├── yolov5
# └── dataset
#     ├── train
#     │   └── images
#     ├── val
#     │   └── images
#     └── test
#         └── images
train: ../dataset/train/images/
val: ../dataset/val/images/
test: ../dataset/test/images/

# Classes
names:
  0: cattle
  1: horse
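Since training fails when a path in the yaml does not exist on disk, a quick check that the folder layout matches the config can save time. A stdlib-only sketch; the split names mirror the paths in horse.yaml:

```python
from pathlib import Path

def missing_splits(dataset_root: Path):
    """Return the split folders from the yaml config that are absent on disk."""
    return [f"{split}/images" for split in ("train", "val", "test")
            if not (dataset_root / split / "images").is_dir()]

# Example against a temporary skeleton of the expected layout:
import tempfile
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp) / "dataset"
    for split in ("train", "val"):        # deliberately leave out "test"
        (root / split / "images").mkdir(parents=True)
    print(missing_splits(root))  # -> ['test/images']
```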
(3) Modify the number of target-detection classes.

This experiment detects and identifies two classes, cattle and horse, so change the number of classes in the model configuration file to 2. The configuration is now complete.
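This is a one-line edit to the model yaml (models/yolov5s.yaml). If you want to script it, a simple regex rewrite works; the sample text below is an illustrative fragment, assuming the stock file's default of 80 COCO classes:

```python
import re

def set_nc(model_yaml: str, nc: int) -> str:
    """Rewrite the top-level 'nc:' line of a YOLOv5 model yaml."""
    return re.sub(r"^nc:\s*\d+", f"nc: {nc}", model_yaml, count=1, flags=re.M)

# Illustrative fragment of models/yolov5s.yaml:
sample = "# Parameters\nnc: 80  # number of classes\ndepth_multiple: 0.33\n"
print(set_nc(sample, 2).splitlines()[1])  # -> nc: 2  # number of classes
```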
5. Model training
Enter the following code in the terminal:
python train.py --img 640 --batch 32 --epoch 100 --data data/horse.yaml --cfg models/yolov5s.yaml --weights weights/yolov5s.pt
--img: input image size (640)
--batch: batch size (32)
--epoch: number of training epochs (100)
--data: path of the dataset configuration yaml file (data/horse.yaml)
--cfg: path of the model configuration yaml file (models/yolov5s.yaml)
--weights: path of the initial weights file (weights/yolov5s.pt)
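To get a feel for how --batch and --epoch interact, a back-of-the-envelope calculation of the number of optimizer steps; the dataset size here is a made-up example, not a property of the dataset above:

```python
import math

batch_size = 32     # --batch
epochs = 100        # --epoch
dataset_size = 800  # hypothetical number of training images

# Each epoch walks the whole dataset once, in batches of batch_size.
steps_per_epoch = math.ceil(dataset_size / batch_size)
total_steps = steps_per_epoch * epochs
print(steps_per_epoch, total_steps)  # -> 25 2500
```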
As shown in the figure above, after 100 epochs of training the results are saved under runs/train/exp. Enter that path to see the training outputs:

- A few of the result images:
6. Model evaluation
To evaluate the trained model and make predictions, run the following:
python val.py --weights runs/train/exp/weights/best.pt --data ./data/horse.yaml --img 640
--weights: path of the trained model (runs/train/exp/weights/best.pt; the number after exp must match the training output path)
--data: path of the dataset configuration yaml file (data/horse.yaml)
--img: input image size (640)
- Running screenshot:
You can see that the model evaluation results are saved under runs/val/exp2. Enter that folder:

The following pictures show how different metrics of the model changed during training:

●Recall (R_curve.png)
●F1-score (F1_curve.png)
●Precision (P_curve.png)
●PR curve (PR_curve.png)
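All of these curves derive from precision and recall measured at varying confidence thresholds. How the three metrics relate, with made-up detection counts for illustration:

```python
# Illustrative detection counts at a single confidence threshold:
tp, fp, fn = 90, 10, 30  # true positives, false positives, false negatives

precision = tp / (tp + fp)  # fraction of predictions that are correct
recall = tp / (tp + fn)     # fraction of ground-truth objects that were found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
print(round(precision, 3), round(recall, 3), round(f1, 3))  # -> 0.9 0.75 0.818
```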
7. Model inference
python detect.py --source ../dataset/test/images --weights ./runs/train/exp/weights/best.pt
--weights: path of the trained model (runs/train/exp/weights/best.pt, the same as in model evaluation)
--source: path of the test set (../dataset/test/images)
- Running screenshot:
- Inference results display:
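Since --source points at a folder, detect.py runs on every image inside it. A stdlib sketch of that file discovery; the extension set is an assumption (YOLOv5 accepts a similar but longer list):

```python
from pathlib import Path

IMG_EXTS = {".jpg", ".jpeg", ".png", ".bmp"}  # assumed common image extensions

def find_images(source: Path):
    """List image files under `source`, recursively, sorted by path."""
    return sorted(p for p in source.rglob("*") if p.suffix.lower() in IMG_EXTS)

# Example with a throwaway folder:
import tempfile
with tempfile.TemporaryDirectory() as tmp:
    src = Path(tmp)
    for name in ("a.jpg", "b.PNG", "notes.txt"):
        (src / name).touch()
    print([p.name for p in find_images(src)])  # -> ['a.jpg', 'b.PNG']
```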