YOLOv5 + DeepSort for target tracking

This article shows how to combine YOLOv5 and DeepSort to implement target tracking and counting. Since my expertise is limited, the article focuses more on how to implement it; for the principles, I recommend a few blogs below for your reference.

Principle introduction

I recommend the following blog, which is very detailed:

Yolov5_DeepSort_Pytorch code learning and modification records (_helen_520's blog)

YOLOv5 performs target detection. When a video is split into frames and detection runs frame by frame, and there are multiple targets in a frame, figuring out which target in the current frame is the same object as one in the previous frame is the job of target tracking.

DeepSort is an algorithm for target tracking that evolved from SORT (Simple Online and Realtime Tracking): a Kalman filter predicts the trajectories of already-detected objects, and the Hungarian algorithm matches those predictions with new detections.

Combining the two enables simple target tracking.
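To make the association step concrete, here is a minimal sketch (my own illustration under simplifying assumptions, not the actual Yolov5_DeepSort_Pytorch code): predicted track boxes are matched to new detections by minimizing 1 - IoU with the Hungarian algorithm. DeepSort additionally blends in an appearance (ReID) distance and Kalman gating, which are omitted here.

import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    # IoU of two boxes in (x1, y1, x2, y2) format
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / (union + 1e-9)

def associate(track_boxes, det_boxes, iou_threshold=0.3):
    # Cost matrix: 1 - IoU between every predicted track box and every detection
    if len(track_boxes) == 0 or len(det_boxes) == 0:
        return [], list(range(len(track_boxes))), list(range(len(det_boxes)))
    cost = np.array([[1.0 - iou(t, d) for d in det_boxes] for t in track_boxes])
    rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm
    matches = [(r, c) for r, c in zip(rows, cols)
               if 1.0 - cost[r, c] >= iou_threshold]
    unmatched_tracks = [i for i in range(len(track_boxes))
                        if i not in {r for r, _ in matches}]
    unmatched_dets = [j for j in range(len(det_boxes))
                      if j not in {c for _, c in matches}]
    return matches, unmatched_tracks, unmatched_dets

Matched tracks keep their IDs, unmatched detections start new tracks, and tracks that stay unmatched for several frames are deleted.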

Preparation

DeepSort source code download

https://github.com/mikel-brostrom/Yolov5_DeepSort_Pytorch.git

Note the version correspondence:

DeepSort v3.0 ~ YOLOv5 v5.0

DeepSort v4.0 ~ YOLOv5 v6.1

Since the YOLOv5 code I used for the previous demo is v5.0, DeepSort v3.0 should be downloaded:

DeepSort v3.0
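If you prefer the command line, the matching release can be fetched like this (assuming the repository tags this release as v3.0; if the tag name differs, download the archive from the GitHub Releases page instead):

git clone https://github.com/mikel-brostrom/Yolov5_DeepSort_Pytorch.git
cd Yolov5_DeepSort_Pytorch
git checkout v3.0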

If your network connection is poor, you can use the copy I shared (some necessary files are already included):

Link: https://pan.baidu.com/s/1rZm1XgPDzpCAJc6JyY8PQQ

Extraction code: atld

Environment configuration

The environment required by DeepSort is slightly different from that of YOLOv5, so I recommend using Anaconda to create a new environment. If reconfiguring everything from scratch takes too long, here is a lazy shortcut (experts who can configure environments easily may skip this).

Open Anaconda Prompt and enter:

conda create -n XXX --clone yolo5

This command clones an existing environment; XXX is the name of the new copy. Different codebases require different library versions, so to avoid conflicts you can work in such a mirrored environment: deleting or installing libraries there will not affect the original environment.

This is the new environment I copied

Next, review the steps for importing the environment into PyCharm. For details, see a blog I wrote earlier:

Some details in the YOLOv5 environment configuration

Then enter the following command in the terminal to install the required libraries in one go.

pip install -r requirements.txt

If a library fails to install due to network issues, just repeat the command above. While running the code, the interpreter may also report that some library is missing and abort (No module named XXX); in that case, activate the environment in Anaconda Prompt first and then pip install the corresponding library.
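For example, if the error complains about easydict (a dependency this project commonly needs; substitute whatever module name appears in your error message, and your own environment name for XXX):

conda activate XXX
pip install easydict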

Put the target detection project in place

Put the YOLOv5 project folder from before into the overall project folder. To match the imports in track.py, rename the YOLOv5 project folder to yolov5, so there is no need to change the import paths in track.py (being lazy).
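After this step, the overall layout looks roughly like the following (folder names are assumptions based on the default paths in track.py):

Yolov5_DeepSort_Pytorch/
├── deep_sort_pytorch/
│   ├── configs/deep_sort.yaml
│   └── deep_sort/deep/checkpoint/ckpt.t7
├── yolov5/                  <- your YOLOv5 project, renamed to "yolov5"
│   └── weights/yolov5s.pt
├── 1.mp4
└── track.py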

Achieving target tracking

Change the parameter paths

The first line uses the official weights yolov5s.pt from the yolov5 project (note that your path may not be the same as mine; just make sure it points to yolov5s.pt, or point it to your own trained weights).

The second line loads the official appearance (ReID) model ckpt.t7 from the deep_sort project (other models can also be used).

The third line sets the input to be detected; here it is the video 1.mp4.

The fourth line means the results will be written to the output folder.

After the changes, the parameter section of track.py looks like this:

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--yolo_weights', type=str, default='yolov5/weights/yolov5s.pt', help='model.pt path')
    parser.add_argument('--deep_sort_weights', type=str, default='deep_sort_pytorch/deep_sort/deep/checkpoint/ckpt.t7', help='ckpt.t7 path')
    # file/folder, 0 for webcam
    #parser.add_argument('--source', type=str, default='0', help='source')
    parser.add_argument('--source', type=str, default='1.mp4', help='source')
    parser.add_argument('--output', type=str, default='output', help='output folder')  # output folder
    parser.add_argument('--img-size', type=int, default=640, help='inference size (pixels)')
    parser.add_argument('--conf-thres', type=float, default=0.4, help='object confidence threshold')
    parser.add_argument('--iou-thres', type=float, default=0.5, help='IOU threshold for NMS')
    parser.add_argument('--fourcc', type=str, default='mp4v', help='output video codec (verify ffmpeg support)')
    parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
    parser.add_argument('--show-vid', action='store_true', help='display tracking video results')
    parser.add_argument('--save-vid', action='store_true', help='save video tracking results')
    parser.add_argument('--save-txt', action='store_true', help='save MOT compliant results to *.txt')
    # class 0 is person, 1 is bicycle, 2 is car... 79 is oven
    parser.add_argument('--classes', nargs='+', default=[0], type=int, help='filter by class')
    parser.add_argument('--agnostic-nms', action='store_true', help='class-agnostic NMS')
    parser.add_argument('--augment', action='store_true', help='augmented inference')
    parser.add_argument('--evaluate', action='store_true', help='evaluate tracking results')
    parser.add_argument("--config_deepsort", type=str, default="deep_sort_pytorch/configs/deep_sort.yaml")
    args = parser.parse_args()
    args.img_size = check_img_size(args.img_size)

    with torch.no_grad():
        detect(args)
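One detail worth noting: --classes defaults to [0] above, so only persons are tracked. To track other COCO classes, pass their indices on the command line; for example, persons and cars:

python track.py --source 1.mp4 --save-vid --classes 0 2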

Run

Enter the following command in the terminal

python track.py --source 1.mp4 --save-vid --yolo_weights yolov5/weights/yolov5s.pt

After running, an output folder is generated in the same directory as track.py, containing the annotated video.

Effect

You can also run the code with the following command; it pops up a window that displays the video with tracking annotations as it runs, but this makes the run slower:

python track.py --source 1.mp4 --show-vid --save-vid --yolo_weights yolov5/weights/yolov5s.pt

On the road of learning, you and I encourage each other (๑•̀ㅂ•́)و✧

Origin blog.csdn.net/Albert_yeager/article/details/129321339