Jetson Nano: Creating a Human Pose Estimation Application with NVIDIA DeepStream

Reference: DeepStream-l4t

1. Installing DeepStream-l4t on the Jetson Nano


  1. Pull the DeepStream-l4t image:
docker pull nvcr.io/nvidia/deepstream-l4t:5.0.1-20.09-samples
  2. Run the container:
    2.1 Allow external applications to connect to the host's X display:
    xhost +
    
    2.2 Start the container using nvidia-docker (use the desired container tag in the command line below):
    sudo docker run -it --net=host --runtime nvidia -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-5.0 -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/deepstream-l4t:5.0.1-20.09-samples
    

2. Creating a Human Pose Estimation Application with NVIDIA DeepStream

Reference: Creating a Human Pose Estimation Application with NVIDIA DeepStream
Environment dependencies:

  1. DeepStream SDK 5.0: installed above
  2. CUDA 10.2: pre-installed on the Jetson Nano (via JetPack)
  3. TensorRT 7.x: pre-installed on the Jetson Nano (via JetPack)

Step 1: Write the post-processing code required for your model

Download the post-processing code from NVIDIA-AI-IOT/deepstream_pose_estimation:

git clone https://github.com/NVIDIA-AI-IOT/deepstream_pose_estimation.git
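The repository's post-processing parses the model's part-confidence maps and part-affinity fields to recover keypoints. As a minimal sketch of the first stage only (local-maximum search in one confidence channel), here is a NumPy example; the array shape, threshold, and function name are illustrative, not the repository's actual code:

```python
import numpy as np

def find_peaks(cmap, threshold=0.1, window=5):
    """Return (row, col) coordinates of local maxima above `threshold`
    in a single 2-D part-confidence map."""
    h, w = cmap.shape
    r = window // 2
    peaks = []
    for y in range(h):
        for x in range(w):
            v = cmap[y, x]
            if v < threshold:
                continue
            # compare against the surrounding window, clipped at the borders
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            if v >= cmap[y0:y1, x0:x1].max():
                peaks.append((y, x))
    return peaks

# Toy confidence map with a single clear maximum:
cmap = np.zeros((8, 8))
cmap[3, 4] = 0.9
print(find_peaks(cmap))  # [(3, 4)]
```

The real pipeline then pairs peaks across channels using the part-affinity fields to assemble per-person skeletons.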

Step 2: Download the human pose estimation model and convert it to ONNX

Download the model code from NVIDIA-AI-IOT/trt_pose:

git clone https://github.com/NVIDIA-AI-IOT/trt_pose.git
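trt_pose provides pretrained PyTorch checkpoints (downloaded separately from its model zoo) and an export utility for converting them to ONNX. The script path and checkpoint filename below are assumptions based on the trt_pose repository layout; adjust them to match your checkout:

```shell
cd trt_pose/trt_pose/utils
# checkpoint downloaded from the trt_pose model zoo (filename illustrative)
python3 export_for_isaac.py --input_checkpoint resnet18_baseline_att_224x224_A_epoch_249.pth
# copy the resulting .onnx file into the deepstream_pose_estimation directory
```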

Step 3: Replace the OSD library in the DeepStream install directory
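The deepstream_pose_estimation repository ships a custom libnvds_osd.so that can draw the pose skeleton. A sketch of the replacement, assuming the DeepStream 5.0 default install path and that the Jetson build of the library sits at the repository root (back up the stock library first):

```shell
cd deepstream_pose_estimation
# keep a copy of the stock OSD library before overwriting it
sudo cp /opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_osd.so \
        /opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_osd.so.bak
sudo cp libnvds_osd.so /opt/nvidia/deepstream/deepstream-5.0/lib/
```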

Step 4: Edit the DeepStream configuration file
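The app reads an nvinfer configuration file (deepstream_pose_estimation_config.txt in the repository). The essential edit is pointing onnx-file at the model exported in Step 2; TensorRT builds the engine file on the first run. A fragment along these lines, with illustrative values:

```ini
[property]
gpu-id=0
# ONNX model exported in Step 2
onnx-file=pose_estimation.onnx
# generated by TensorRT on the first run
model-engine-file=pose_estimation.onnx_b1_gpu0_fp16.engine
batch-size=1
network-mode=2
# custom network: raw output tensors are parsed by the app's post-processing
network-type=100
output-tensor-meta=1
```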

Step 5: Edit the makefile to include platform-specific build flags
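On a Jetson Nano the flags that normally need attention are the CUDA and DeepStream versions at the top of the Makefile. The variable names below follow the convention used in DeepStream sample Makefiles; confirm them against your checkout (values shown for JetPack 4.4):

```makefile
# top of the deepstream_pose_estimation Makefile
CUDA_VER?=10.2
NVDS_VERSION:=5.0
```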

Step 6: Compile and run the DeepStream app
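Building and running follows the repository README: the app takes an input stream URI and an output path, and renders the result to a video file. The placeholders are the README's own; the exact output filename may vary:

```shell
cd deepstream_pose_estimation
sudo make
# <file-uri> is e.g. file:///home/user/video.mp4 or an RTSP URL
sudo ./deepstream-pose-estimation-app <file-uri> <output-path>
```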


Origin: https://blog.csdn.net/qq122716072/article/details/112391188