Project Introduction
The previous article, "Python traffic sign recognition based on convolutional neural networks: a nanny-level tutorial (TensorFlow)", introduced traffic sign classification and recognition with a convolutional neural network. It implemented a PyQt5 GUI and a simple Flask web page for basic front-end/back-end interaction. However, it could only classify a single traffic sign image, had no position detection, and did not support real-time detection and recognition on video; overall it was fairly simple. This article introduces a more advanced project: detection and recognition of strawberry pests and diseases based on YOLOv5. It supports multi-target detection and recognition on images as well as real-time detection and recognition on video. You can watch the demonstration at the following link.
Video Demo: Traffic Sign Detection Video Demo
The video demonstration includes the following content:
1. Traffic sign detection
2. Helmet detection
3. Mask detection
4. Fruit detection
5. Gesture detection
6. Fire detection
7. Fall detection
8. Electric bicycle in elevator detection
9. Mycobacterium tuberculosis detection
10. Pest detection
11. UAV detection
12. Fire and smoke detection
13. Strawberry disease detection
14. Supermarket fruit and vegetable detection
Dataset introduction
Get the code
Create a virtual environment
conda create -n yolov5 python=3.8.5
Install PyTorch (if you are not sure whether you have a usable GPU, just install the CPU version)
Install the CPU version of torch:
pip install torch==1.11.0+cpu torchvision==0.12.0+cpu torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cpu -i https://pypi.tuna.tsinghua.edu.cn/simple
Install the GPU version of torch (taking my setup as an example: a 3060 Ti graphics card with CUDA 11.7):
pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cu113 -i https://pypi.tuna.tsinghua.edu.cn/simple
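After installing either build, a quick sanity check confirms that torch imports and whether CUDA is usable. This is a minimal sketch (the helper name `check_torch` is my own); it degrades gracefully if torch is not installed:

```python
import importlib.util

def check_torch():
    """Return (installed, cuda_ok); cuda_ok is None when torch is absent."""
    if importlib.util.find_spec("torch") is None:
        return False, None
    import torch  # imported lazily so the check still runs without torch
    return True, torch.cuda.is_available()

installed, cuda_ok = check_torch()
print("torch installed:", installed, "| CUDA available:", cuda_ok)
```

With the GPU build correctly installed, the second value should be `True`; if it prints `False`, the CPU wheel was most likely installed instead.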
Install other dependencies
pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install pyqt5==5.15.6 -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install pycocotools-windows==2.0.0.2 -i https://pypi.tuna.tsinghua.edu.cn/simple
Test whether the code can run
python detect.py --weights runs/train/exp17/weights/best.pt --source data/images/strawberry.jpeg
This is what the data/images/strawberry.jpeg image in the project folder looks like before recognition:
After the command runs successfully (as shown in the figure above), the recognition results are saved in the project folder under runs/detect/exp:
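Note that each run of detect.py creates a new numbered folder (exp, exp2, exp3, ...), so it is easy to open the wrong one. A small standard-library helper (a sketch; the name `latest_exp` is my own, not part of YOLOv5) picks the most recently modified run folder. It is demonstrated here on a throwaway temporary directory rather than the real runs/detect:

```python
import os
import tempfile
from pathlib import Path

def latest_exp(base):
    """Return the most recently modified exp* subfolder of `base`, or None."""
    runs = [p for p in Path(base).glob("exp*") if p.is_dir()]
    return max(runs, key=lambda p: p.stat().st_mtime, default=None)

# Demonstrate on a throwaway directory that mimics runs/detect.
with tempfile.TemporaryDirectory() as tmp:
    for i, name in enumerate(("exp", "exp2", "exp3")):
        d = Path(tmp) / name
        d.mkdir()
        os.utime(d, (i, i))  # give each folder a distinct, increasing mtime
    print(latest_exp(tmp).name)  # exp3
```

In the real project you would call it as `latest_exp("runs/detect")` to find the newest results.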
Training (can be skipped)
Next, open the project in PyCharm and enter the following command in the Terminal:
python train.py --data strawberry_data.yaml --cfg mask_yolov5s.yaml --weights pretrained/yolov5s.pt --epoch 32 --batch-size 16
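For reference, the `--data` file follows the standard YOLOv5 dataset-config format. The sketch below is a hypothetical example of what strawberry_data.yaml might contain; the paths, class count, and class names are placeholders, not the project's actual values:

```yaml
# Hypothetical YOLOv5 dataset config; adjust paths and names to your data.
train: datasets/strawberry/images/train  # training images (placeholder path)
val: datasets/strawberry/images/val      # validation images (placeholder path)

nc: 3  # number of classes (placeholder)
names: ["healthy", "gray_mold", "powdery_mildew"]  # placeholder class names
```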
Training is time-consuming, and the project archive I provided already contains a trained model, so this step can be skipped.
Model evaluation after training: once training completes, an exp folder is generated under runs/train, containing the training results and some evaluation metrics.
The output recognition results show that the model achieves high accuracy.
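The evaluation metrics in runs/train/exp (precision, recall, mAP) are all built on the intersection-over-union (IoU) between predicted and ground-truth boxes. Here is a minimal sketch of the IoU computation (my own helper, not code from this project), with boxes in (x1, y1, x2, y2) format:

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; width/height clamp to zero when the boxes are disjoint.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.1429
```

A prediction typically counts as a true positive only when its IoU with a ground-truth box exceeds a threshold (e.g. 0.5 for mAP@0.5), which is exactly what the "mAP@0.5" column in the training results reports.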
Run the GUI interface
After training, open display_interface.py and run it; the result is as follows: