Completed Series - Detection Topic - Animal Recognition System Based on Convolutional Neural Networks

We previously built a YOLOv5-based mask detection system (teach you to use YOLOv5 to train your own target detection model - mask detection - video tutorial_dejahu's blog - CSDN blog). That code is developed on YOLOv5 6.0 and also works for other datasets: you only need to swap in your dataset and retrain, which is very convenient. However, some readers are beginners and may not know much about data processing, so we are making this derivative video series to teach you how to train and use models on your own datasets.

Bilibili video: Completed Series - Detection Special - Gesture Recognition System Based on YOLOV5

Blog address: Completed series - detection topic - animal recognition system based on convolutional neural network - 412 Blog - CSDN Blog

Code address: YOLOV5-animal-42: Animal detection system based on YOLOV5 (gitee.com)

Dataset and trained model address: YOLOV5 animal detection dataset + code + model 2000 labeled data + teaching video

In the comment area of the last issue, a reader asked to see an animal detection system, so in this issue of the detection series we present the animal detection system and add a counting function on top of the previous features. Let's look at the effect first.

image-20220309211305462

Considering that some readers lack computing power, I also provide the labeled dataset and the trained model here. You can download them from CSDN (no membership required); the resource address is as follows:

YOLOV5 animal detection data set + code + model 2000 labeled data + teaching video

If you need remote debugging or custom course-design work, you can add QQ 3045834499; the prices are fair and honest.

Download the Code

The download address of the code is: YOLOV5-animal-42: Animal Detection System Based on YOLOV5 (gitee.com)

image-20220407163525849

Environment Configuration

If you are not familiar with PyCharm and Anaconda, please read this CSDN blog first to understand their basic operations:

How to configure the virtual environment of anaconda in pycharm_dejahu's blog - CSDN blog_how to configure anaconda in pycharm

After the Anaconda installation is complete, switch to domestic (China) mirrors to improve download speed. The commands are as follows:

conda config --remove-key channels
conda config --add channels https://mirrors.ustc.edu.cn/anaconda/pkgs/main/
conda config --add channels https://mirrors.ustc.edu.cn/anaconda/pkgs/free/
conda config --add channels https://mirrors.bfsu.edu.cn/anaconda/cloud/pytorch/
conda config --set show_channel_urls yes
pip config set global.index-url https://mirrors.ustc.edu.cn/pypi/web/simple

First, create a Python 3.8 virtual environment by executing the following on the command line:

conda create -n yolo5 python==3.8.5
conda activate yolo5
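
You can confirm the environment is active by checking the interpreter version:

python --version  # should print Python 3.8.5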

PyTorch Installation (GPU and CPU Versions)

In actual testing, YOLOv5 runs under both CPU and GPU conditions, but training on CPU is outrageously slow, so readers with a compatible graphics card should definitely install the GPU version of PyTorch; readers without one are better off renting a GPU server.

For the specific steps of GPU version installation, please refer to this article: Install GPU version of Tensorflow and Pytorch under Windows in 2021_dejahu's blog - CSDN Blog

The following points need to be noted:

  • Before installing, be sure to update your graphics card driver; download the matching driver for your card model from the official website (you can check the current driver with nvidia-smi, shown below)
  • 30-series graphics cards can only use the CUDA 11 versions
  • Be sure to create a virtual environment so that the various deep learning frameworks do not conflict with each other
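
If you have an NVIDIA card with a driver installed, nvidia-smi prints the driver version and the highest CUDA version that driver supports:

nvidia-smi  # check driver version and the maximum supported CUDA version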

What I created here is a Python 3.8 environment, and the PyTorch version installed is 1.8.0. The commands are as follows:

conda install pytorch==1.8.0 torchvision torchaudio cudatoolkit=10.2  # note: this command pins both the PyTorch version and the CUDA version
conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 cpuonly  # CPU-only users run this command
conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch  # 30-series GPU users run this command

After the installation is complete, let's test whether the GPU version of PyTorch works:
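
A quick way to check is from Python; torch.cuda.is_available() returns True when the GPU build of PyTorch can see your card:

python -c "import torch; print(torch.__version__); print(torch.cuda.is_available())"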

image-20210726172454406

Installation of pycocotools

Later, I found a simpler installation method under Windows: you can install directly with the following command, without downloading the source and building it yourself.

pip install pycocotools-windows
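
To verify the install, try importing the package (pycocotools-windows installs under the same pycocotools module name):

python -c "from pycocotools.coco import COCO; print('pycocotools OK')"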

Installation of Other Packages

In addition, you need to install the other packages required by the program, including opencv-python and matplotlib. These are simple to install: cd into the yolov5 code directory and execute the following commands directly.

pip install -r requirements.txt
pip install pyqt5
pip install labelme
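
Optionally, verify that the key packages import cleanly (cv2 comes from the opencv-python entry in requirements.txt):

python -c "import cv2, matplotlib, PyQt5; print('dependencies OK')"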

Data Processing

Prepare your dataset in YOLO format ahead of time. In YOLO format, each image corresponds to a .txt annotation file.

image-20220219192930908

Each line of the annotation file records a target's class id, normalized center-point coordinates, and normalized width and height, as shown in the following figure:

image-20220219193042855
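
For reference, one line of such a .txt label looks like the following (the numbers here are made up for illustration); all coordinates are normalized to [0, 1] by the image width and height:

# class_id x_center y_center width height
2 0.514 0.338 0.250 0.412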

Remember the location of the dataset; we will use it in the configuration files later. For example, the location of my dataset here is: C:/Users/chenmingsong/Desktop/hand/hand_gesture_dataset

Configuration File Preparation

  • Preparation of the data configuration file

    The data configuration file is animal_data.yaml in the data directory; you only need to change the dataset path in it to your local dataset location (a minimal sketch of this file is shown right after this list).

    image-20220309215138666

  • Preparation of model configuration files

    There are three model configuration files: animal_yolov5s.yaml, animal_yolov5m.yaml, and animal_yolov5l.yaml, corresponding to the small, medium, and large YOLOv5 models respectively. The main change is to set nc in each configuration file to 6, the number of classes in our dataset.

    image-20220309215624860
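
For reference, here is a minimal sketch of what animal_data.yaml contains; the paths and class names below are placeholders, so substitute your own (nc matches this dataset's 6 classes):

train: C:/path/to/animal_dataset/images/train  # placeholder: training images directory
val: C:/path/to/animal_dataset/images/val      # placeholder: validation images directory
nc: 6                                          # number of classes
names: ['animal0', 'animal1', 'animal2', 'animal3', 'animal4', 'animal5']  # hypothetical names; use your real class names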

Model Training

The main training script is train.py. The following three commands train the small, medium, and large models respectively. Students with a GPU can change --device to 0, which means using GPU card No. 0. Students with more memory can raise the batch size to 4 or 16 for faster training.

python train.py --data animal_data.yaml --cfg animal_yolov5s.yaml --weights pretrained/yolov5s.pt --epoch 100 --batch-size 2 --device cpu
python train.py --data animal_data.yaml --cfg animal_yolov5m.yaml --weights pretrained/yolov5m.pt --epoch 100 --batch-size 2
python train.py --data animal_data.yaml --cfg animal_yolov5l.yaml --weights pretrained/yolov5l.pt --epoch 100 --batch-size 2
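
You can monitor the loss curves and metrics while training runs with TensorBoard (it is included in YOLOv5's requirements); the default log directory is runs/train:

tensorboard --logdir runs/train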

The following progress bar will appear during the training process

image-20220219202818016

After training completes, the results are saved in the runs/train directory (the best weights used later, e.g. runs/train/exps/weights/best.pt), along with various result plots for you to review.

image-20220219202857929

Model Usage

Inference is all integrated in detect.py. Refer to the following commands depending on what you want to detect:

# detect from a webcam
python detect.py --weights runs/train/exps/weights/best.pt --source 0  # webcam
# detect an image file
python detect.py --weights runs/train/exps/weights/best.pt --source file.jpg  # image
# detect a video file
python detect.py --weights runs/train/exps/weights/best.pt --source file.mp4  # video
# detect all files in a directory
python detect.py --weights runs/train/exps/weights/best.pt --source path/  # directory
# detect an online video
python detect.py --weights runs/train/exps/weights/best.pt --source 'https://youtu.be/NUsoVlDFqZg'  # YouTube video
# detect a media stream
python detect.py --weights runs/train/exps/weights/best.pt --source 'rtsp://example.com/media.mp4'  # RTSP, RTMP, HTTP stream

For example, using our trained model, executing the command python detect.py --weights runs/train/exps/weights/best.pt --source data/images/0023.png gives a detection result like this:

single_result

Build a Visual Interface

The visual interface lives in the window.py file and is built with PyQt5. Before launching the interface, you need to point it at the model you trained: the model path is set around line 60 of window.py, so change it to your model's path. If you have a GPU, you can also set device to 0, which means using GPU card No. 0 and speeds up the model's inference.

image-20220309214131940
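
As a rough sketch, the edit looks something like this; the variable and function names here are placeholders, so match them to the actual code around line 60 of your window.py:

# hypothetical sketch, not the exact code in window.py
self.model = self.model_load(
    weights="runs/train/exps/weights/best.pt",  # change to your trained model's path
    device="cpu",                               # set to "0" to use GPU card No. 0
)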

Start it now and see the effect.

Find Me

You can find me in these ways.

Bilibili: Four Twelve-

CSDN: Four Twelve

Zhihu: Four Twelve

Weibo: Four Twelve-

Follow now and be an old friend!

image-20211212195912911

Origin blog.csdn.net/ECHOSON/article/details/123389178