Reference: DeepStream-l4t
1. Installing DeepStream-l4t on Jetson Nano
- Pull the image:
docker pull nvcr.io/nvidia/deepstream-l4t:5.0.1-20.09-samples
- Run the container:
1.1 Allow external applications to connect to the host's X display:
xhost +
1.2 Run the docker container using nvidia-docker (use the desired container tag in the command line below):
sudo docker run -it --net=host --runtime nvidia -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-5.0 -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/deepstream-l4t:5.0.1-20.09-samples
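Once inside the container, a quick way to confirm the install is to run one of the bundled sample pipelines. A minimal sketch; the config filename below follows the standard DeepStream 5.0 samples layout for Jetson Nano, which is worth verifying in your container:

```shell
# Sketch: verify DeepStream inside the container by running a sample pipeline.
# The config path assumes the stock DeepStream 5.0 samples layout (check yours).
CONFIG=samples/configs/deepstream-app/source8_1080p_dec_infer-resnet_tiled_display_fp16_nano.txt
if command -v deepstream-app >/dev/null 2>&1; then
  # Runs from /opt/nvidia/deepstream/deepstream-5.0 (the container's -w workdir)
  deepstream-app -c "$CONFIG"
else
  echo "deepstream-app not found: run this inside the deepstream-l4t container"
fi
```

If a tiled video window appears on the host display, the X11 forwarding set up with `xhost +` is working.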
2. Creating a Human Pose Estimation Application with NVIDIA DeepStream
Reference: Creating a Human Pose Estimation Application with NVIDIA DeepStream
Environment dependencies:
- DeepStream SDK 5.0: installed above
- CUDA 10.2: pre-installed on Jetson Nano (JetPack)
- TensorRT 7.x: pre-installed on Jetson Nano (JetPack)
Step 1: Write the post-processing code required for your model
Download the NVIDIA-AI-IOT/deepstream_pose_estimation repository:
git clone https://github.com/NVIDIA-AI-IOT/deepstream_pose_estimation.git
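The repository contains the DeepStream application and its post-processing code, which is built with `make`. A hedged sketch; the assumption that the Makefile reads `CUDA_VER` follows the convention of the DeepStream sample apps it is based on, so check the repo's README for the exact build steps:

```shell
# Sketch: build the pose-estimation app after cloning (verify against the README).
# CUDA_VER is assumed to be consumed by the Makefile, as in DeepStream sample apps;
# 10.2 matches the JetPack CUDA version on this Jetson Nano setup.
export CUDA_VER=10.2
if [ -f deepstream_pose_estimation/Makefile ]; then
  make -C deepstream_pose_estimation
else
  echo "clone the deepstream_pose_estimation repo first"
fi
```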
Step 2: Download the human pose estimation model and convert it to ONNX
Download the NVIDIA-AI-IOT/trt_pose repository:
git clone https://github.com/NVIDIA-AI-IOT/trt_pose.git
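The PyTorch checkpoint from the trt_pose model zoo then needs to be exported to ONNX for DeepStream. A sketch under stated assumptions: the exporter path, the `--input_checkpoint` flag, and the checkpoint filename below reflect the trt_pose repo layout and README at the time of writing, and may differ in your clone (check the script's `--help`):

```shell
# Sketch: export the trt_pose checkpoint (resnet18_baseline_att_224x224_A from
# the trt_pose README model zoo) to ONNX. Requires trt_pose installed
# (python3 setup.py install) with PyTorch/torchvision available.
CHECKPOINT=resnet18_baseline_att_224x224_A_epoch_249.pth  # downloaded per the README
EXPORTER=trt_pose/utils/export_for_isaac.py               # path may differ by version
if [ -f "$EXPORTER" ] && [ -f "$CHECKPOINT" ]; then
  # Flag name per the version at time of writing; confirm with --help
  python3 "$EXPORTER" --input_checkpoint "$CHECKPOINT"
else
  echo "adjust EXPORTER/CHECKPOINT for your trt_pose clone"
fi
```

The resulting .onnx file is what the deepstream_pose_estimation app consumes; on first run, DeepStream converts it to a TensorRT engine.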