Jetson Nano: model conversion, TensorRT acceleration, and Python inference

Reference

Jetson Nano: calling the TensorRT-accelerated C++ code of YOLOv5 v6.0 from Python; inference speed stable at 12 FPS with GPU memory usage under 0.8 GB

Playing with Jetson Nano (5): TensorRT accelerates YOLOv5 object detection (51CTO blog)

Install pycuda and TensorRT
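TensorRT's Python bindings already ship with JetPack for the system Python, so only pycuda needs to be installed. A minimal sketch, assuming CUDA is installed under /usr/local/cuda (nvcc must be on PATH for the pip build to succeed):

export PATH=/usr/local/cuda/bin:$PATH
pip3 install pycuda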

sudo ln -s /usr/lib/python3.6/dist-packages/tensorrt ~/miniforge3/envs/safeguard/lib/python3.6/site-packages/tensorrt
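This symlink exposes the JetPack system TensorRT bindings inside the conda environment (named safeguard here). A quick check that the import resolves from within that environment:

python -c "import tensorrt; print(tensorrt.__version__)"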

Run the detection script: python yolov5_det_trt.py
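yolov5_det_trt.py is the Python demo from the tensorrtx project. For orientation only, here is a minimal sketch of the TensorRT 8.x inference pattern it relies on, not the actual script; the engine path, the dummy input, and the use of an explicit-batch engine are all assumptions:

import numpy as np
import tensorrt as trt
import pycuda.autoinit  # creates and activates a CUDA context
import pycuda.driver as cuda

ENGINE_PATH = "build/yolov5s.engine"  # assumption: path of the serialized engine

logger = trt.Logger(trt.Logger.WARNING)
with open(ENGINE_PATH, "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# One host/device buffer pair per binding (input first, outputs after)
host_bufs, dev_bufs, bindings = [], [], []
for i in range(engine.num_bindings):
    dtype = trt.nptype(engine.get_binding_dtype(i))
    size = trt.volume(engine.get_binding_shape(i))
    host = cuda.pagelocked_empty(size, dtype)
    dev = cuda.mem_alloc(host.nbytes)
    host_bufs.append(host)
    dev_bufs.append(dev)
    bindings.append(int(dev))

# Dummy input in place of a real preprocessed image
host_bufs[0][:] = np.random.rand(host_bufs[0].size).astype(host_bufs[0].dtype)
cuda.memcpy_htod(dev_bufs[0], host_bufs[0])
context.execute_v2(bindings)  # synchronous inference
cuda.memcpy_dtoh(host_bufs[-1], dev_bufs[-1])
print("raw output:", host_bufs[-1][:6])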

Detection speed is slow
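Not from the original post, but two standard things to check on a slow Nano: the first few inferences include CUDA context creation and engine warm-up, and the board may be running in a reduced power mode. Locking the clocks often helps:

sudo nvpmodel -m 0   # select the MAXN power mode
sudo jetson_clocks   # lock CPU/GPU/EMC clocks at their maximums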


Source: blog.csdn.net/qq_32636415/article/details/134447533