Article directory
- Foreword
- 1. Tracking task
- 2. Cross-camera tracking (advanced)
- Summary
Foreword
I have been using YOLOv5 for nearly two years, mainly for object detection, along with YOLOv5 segmentation tasks and model conversion. Now I have new tasks, so it is time to try out the new model.
The best way is to look at the documentation: https://docs.ultralytics.com/
As you can see, compared with the past, YOLO has grown from a pure object detection model into a comprehensive toolbox: an excellent choice for tasks including object detection, tracking, instance segmentation, image classification, and pose estimation.
Its pre-trained model can be downloaded here: https://github.com/ultralytics/ultralytics/blob/main/README.zh-CN.md
For example, detection:
segmentation:
classification:
pose estimation:
1. Tracking task
GitHub: https://github.com/mikel-brostrom/yolov8_tracking
Real-time multi-object, segmentation and pose tracking using Yolov8 with DeepOCSORT and LightMBN
1.1 Build environment
# YOLOv8 has leveled up: it is now packaged as a library
pip install ultralytics
pip install lap filterpy easydict
pip install gdown
I did not install everything from requirements.txt here, since I already have a PyTorch environment and am not starting from zero.
Download the re-identification (ReID) weights here: https://kaiyangzhou.github.io/deep-person-reid/MODEL_ZOO
and put them in the weights folder.
Just run the command:
$ python track.py --yolo-model yolov8n.pt # bboxes only
yolov8n-seg.pt # bboxes + segmentation masks
yolov8n-pose.pt # bboxes + pose estimation
This is the easiest way to run it; everything else uses the defaults.
$ python track.py --source 0 --yolo-model yolov8n.pt --img 640
yolov8s.tflite
yolov8m.pt
yolov8l.onnx
yolov8x.pt --img 1280
...
--source 0 selects the default webcam (the computer's built-in camera).
Running on a camera assumes a GPU by default. One frame takes about 10 ms, with GPU utilization around 30% on my 12 GB 3070, which feels fine.
Since tracking works, turning this into a product only requires drawing a counting line and counting the objects that cross it. Leave a comment if you need this and I will implement it.
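The counting-line idea can be sketched in pure Python. This is a minimal illustration with made-up track centroids, not the tracker's actual output format: the sign of a cross product tells which side of the line a centroid is on, and a sign flip between consecutive frames counts as a crossing.

```python
# Count objects crossing a virtual line, given per-frame track centroids.
# The line is defined by two points; the sign of the cross product tells
# which side of the line a point lies on.

def side(line, point):
    (x1, y1), (x2, y2) = line
    px, py = point
    cross = (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)
    return 1 if cross > 0 else -1 if cross < 0 else 0

def count_crossings(line, tracks):
    """tracks: dict mapping track_id -> list of (x, y) centroids over time."""
    crossings = 0
    for centroids in tracks.values():
        prev = None
        for point in centroids:
            cur = side(line, point)
            if prev is not None and cur != 0 and cur != prev:
                crossings += 1
            prev = cur if cur != 0 else prev
    return crossings

# Example: a vertical counting line at x = 100; track 1 crosses it once.
line = ((100, 0), (100, 200))
tracks = {1: [(80, 50), (95, 52), (110, 55)],   # crosses left -> right
          2: [(30, 120), (40, 125)]}            # never crosses the line
print(count_crossings(line, tracks))  # 1
```

In practice you would feed this the per-frame centroids emitted by the tracker, keyed by its track IDs.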
You can also add a few more parameters: --show displays the recognition results live, and --save writes the output to file.
--classes 16 17 filters which categories are tracked.
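Under the hood, class filtering just keeps detections whose class ID is in the allowed set. A minimal sketch with made-up detection tuples (not the tracker's real data structure; IDs follow whatever scheme the model was trained on, e.g. COCO):

```python
# Filter detections by class ID, in the spirit of the --classes flag.
# Each detection here is (x, y, w, h, confidence, class_id).
def filter_classes(detections, allowed):
    allowed = set(allowed)
    return [d for d in detections if d[5] in allowed]

detections = [
    (10, 20, 50, 80, 0.9, 0),    # class 0
    (60, 40, 30, 60, 0.8, 16),   # class 16
    (90, 10, 40, 70, 0.7, 17),   # class 17
]
print(filter_classes(detections, [16, 17]))  # keeps the last two detections
```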
2. Cross-camera tracking (advanced)
https://blog.csdn.net/qq_42312574/article/details/128880805
Cross-camera tracking has always been a direction I wanted to try, and I finally found the keyword: Multi-Target Multi-Camera Tracking (MTMC Tracking).
https://zhuanlan.zhihu.com/p/35391826, a well-known write-up by Luo, but it is a few years old; I suspect there is a newer SOTA by now.
https://github.com/JunweiLiang/Object_Detection_Tracking TensorFlow-based cross-camera tracking, with demo results
https://github.com/Jason-cs18/Awesome-Multi-Camera-Network lists a lot of learning materials, but no code
https://github.com/SurajDonthi/Multi-Camera-Person-Re-Identification/tree/master a PyTorch-based implementation from 2021
https://github.com/cw1204772/AIC2018_iamai from 2018, also with code; it appears to track vehicles
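A core building block in MTMC pipelines is matching tracks across cameras by appearance: each track gets a ReID embedding, and tracks from different cameras are associated when their embeddings are similar enough. Below is a minimal greedy-matching sketch with toy embeddings; real systems use learned ReID features (e.g. from the deep-person-reid weights above) and a proper assignment step such as the Hungarian algorithm.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match_across_cameras(tracks_a, tracks_b, threshold=0.8):
    """Greedily match track embeddings from camera A to camera B."""
    matches = []
    used = set()
    for id_a, emb_a in tracks_a.items():
        best_id, best_sim = None, threshold
        for id_b, emb_b in tracks_b.items():
            if id_b in used:
                continue
            sim = cosine_similarity(emb_a, emb_b)
            if sim > best_sim:
                best_id, best_sim = id_b, sim
        if best_id is not None:
            used.add(best_id)
            matches.append((id_a, best_id))
    return matches

# Toy embeddings: track 1 in camera A looks like track 7 in camera B.
cam_a = {1: [0.9, 0.1, 0.0], 2: [0.0, 1.0, 0.0]}
cam_b = {7: [0.95, 0.05, 0.0], 8: [0.1, 0.0, 1.0]}
print(match_across_cameras(cam_a, cam_b))  # [(1, 7)]
```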
Summary
A video in which the YOLOv8 author walks through the design of YOLOv8:
https://www.bilibili.com/video/BV17D4y1N7Zz/