Installing TensorRT on the Jetson TX2

1. The deepstream-l4t image

1.1 Pull the image

Pull the image with the following command:

docker pull nvcr.io/nvidia/deepstream-l4t:5.0.1-20.09-samples

l4t stands for Linux for Tegra. See https://ngc.nvidia.com/catalog/containers/nvidia:deepstream-l4t for instructions on the Base, Samples, and IoT image variants.

1.2 Start the container

docker run --gpus all --name=jetson_test --privileged --ipc=host -p 23222:22 -p 34389:3389 -itd -v /data/yzm_iavs/:/data/yzm_iavs nvcr.io/nvidia/deepstream-l4t:5.0.1-20.09-samples /bin/bash
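Once the container is created, you can confirm it is running and open a shell inside it (the container name comes from the --name flag above):

```shell
# Confirm the container is up
docker ps --filter name=jetson_test
# Open an interactive shell in the running container
docker exec -it jetson_test /bin/bash
```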

1.3 Switch the apt sources

By modifying /etc/apt/sources.list, the apt sources inside the Jetson container can be switched to a domestic (Tsinghua) mirror:

deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-updates main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-updates main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-backports main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-backports main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-security main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-security main restricted universe multiverse
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ xenial main multiverse restricted universe # for opencv

# deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-proposed main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-proposed main restricted universe multiverse
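Before overwriting /etc/apt/sources.list with the entries above, it is prudent to keep a backup; afterwards refresh the package index (a sketch of the standard apt workflow):

```shell
# Keep the stock sources in case the mirror is unreachable
cp /etc/apt/sources.list /etc/apt/sources.list.bak
# Refresh the package index against the new mirror
apt-get update
```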

2. Install the software

2.1 Software upgrade

apt-get update && apt-get upgrade -y

2.2 Install GStreamer

apt-get install libgstreamer-plugins-base1.0-dev libgstreamer1.0-dev libgstrtspserver-1.0-dev libx11-dev
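If the packages above installed cleanly, pkg-config should be able to report the GStreamer development files (module names assumed from the package names above):

```shell
# Print the installed GStreamer core and RTSP server versions
pkg-config --modversion gstreamer-1.0 gstreamer-rtsp-server-1.0
```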

2.3 Install OpenCV

  • Install the OpenCV dependencies (libjasper comes from the xenial-security repository added below)
apt-get install cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev
apt-get install libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libdc1394-22-dev
add-apt-repository "deb http://security.ubuntu.com/ubuntu xenial-security main"
apt-get update
apt-get install libjasper1 libjasper-dev
apt-get install qtbase5-dev qtdeclarative5-dev
  • Configure the build with CMake (run from a build directory inside the opencv source tree, with opencv_contrib checked out alongside it)
cmake -D WITH_QT=ON \
      -D WITH_CUDA=ON \
      -D BUILD_TIFF=ON \
      -D BUILD_TESTS=OFF \
      -D BUILD_PERF_TESTS=OFF \
      -D OPENCV_GENERATE_PKGCONFIG=ON \
      -D CMAKE_INSTALL_PREFIX=/usr/local \
      -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib/modules/ \
      -D BUILD_opencv_xfeatures2d=OFF  ..
  • Build and install
make -j4
make install
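After make install, a quick sanity check: since OPENCV_GENERATE_PKGCONFIG=ON was set above, an opencv4.pc file is installed (the pkg-config name may differ for OpenCV 3.x builds):

```shell
# Refresh the dynamic linker cache so the new libraries are found
ldconfig
# Report the installed OpenCV version via the generated pkg-config file
pkg-config --modversion opencv4
```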

3. Install TensorRT

3.1 TensorRT installation reference

https://developer.download.nvidia.com/assets/embedded/docs/JP_4.5_DP/20.09-Jetson-CUDA-X-AI-Developer-Preview-Installation-Instructions.pdf?Gs3vQ_4IJu3viKzX7vFd0sk11MkIJy_o3nRlQ2mp-GdTRuA4nFdxQMJ2rGcUuqUAiFBn7Hnph_TmrAzEZvj1oCVw-7k2yVfc358KOHTd4BHm54xMjkPELxMp1S2yKPvSJ26plQCKMd5940KCDhjkLjy6HtB43j6rSgqJNvCuUQJD8Eo8Ihw9jcXaO21XgrqOzvdXko4MRI9pzhc2mPURjlvkB7c

3.2 Download the installation package

Installation package link: https://developer.nvidia.com/20.09_Jetson_CUDA-X_AI_DP_Xavier ; download and extract it.

3.3 Install CUDA

dpkg -i cuda-repo-l4t-10-2-local-10.2.89_1.0-1_arm64.deb
apt-key add /var/cuda-repo-10-2-local-10.2.89/7fa2af80.pub
apt-get -y update
apt-get -y install cuda-toolkit-10-2 libgomp1 libfreeimage-dev libopenmpi-dev
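The toolkit installs under /usr/local/cuda-10.2 (path assumed from the package version); add it to the environment and verify that nvcc is reachable:

```shell
# Expose the CUDA 10.2 toolchain; persist these lines in ~/.bashrc if desired
export PATH=/usr/local/cuda-10.2/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-10.2/lib64:$LD_LIBRARY_PATH
# Print the compiler version to confirm the install
nvcc --version
```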

3.4 Install cuDNN

dpkg -i libcudnn8_8.0.2.39-1+cuda10.2_arm64.deb
dpkg -i libcudnn8-dev_8.0.2.39-1+cuda10.2_arm64.deb
dpkg -i libcudnn8-doc_8.0.2.39-1+cuda10.2_arm64.deb
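To confirm the cuDNN version after installing, the version macros live in cudnn_version.h in the cuDNN 8 layout (header path assumed to be the default /usr/include):

```shell
# Print the cuDNN major/minor/patch version macros
grep -A 2 'define CUDNN_MAJOR' /usr/include/cudnn_version.h
```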

3.5 Install TensorRT

dpkg -i libnvinfer7_7.2.0-1+cuda10.2_arm64.deb
dpkg -i libnvinfer-dev_7.2.0-1+cuda10.2_arm64.deb
dpkg -i libnvinfer-plugin7_7.2.0-1+cuda10.2_arm64.deb
dpkg -i libnvinfer-plugin-dev_7.2.0-1+cuda10.2_arm64.deb
dpkg -i libnvonnxparsers7_7.2.0-1+cuda10.2_arm64.deb
dpkg -i libnvonnxparsers-dev_7.2.0-1+cuda10.2_arm64.deb
dpkg -i libnvparsers7_7.2.0-1+cuda10.2_arm64.deb
dpkg -i libnvparsers-dev_7.2.0-1+cuda10.2_arm64.deb
dpkg -i libnvinfer-bin_7.2.0-1+cuda10.2_arm64.deb
dpkg -i libnvinfer-doc_7.2.0-1+cuda10.2_all.deb
dpkg -i libnvinfer-samples_7.2.0-1+cuda10.2_all.deb
dpkg -i tensorrt_7.2.0.14-1+cuda10.2_arm64.deb
dpkg -i python-libnvinfer_7.2.0-1+cuda10.2_arm64.deb
dpkg -i python-libnvinfer-dev_7.2.0-1+cuda10.2_arm64.deb
dpkg -i python3-libnvinfer_7.2.0-1+cuda10.2_arm64.deb
dpkg -i python3-libnvinfer-dev_7.2.0-1+cuda10.2_arm64.deb
dpkg -i graphsurgeon-tf_7.2.0-1+cuda10.2_arm64.deb
dpkg -i uff-converter-tf_7.2.0-1+cuda10.2_arm64.deb
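A quick check that the runtime libraries and the Python bindings landed (package names taken from the list above):

```shell
# List the installed TensorRT packages and their versions
dpkg -l | grep -E 'nvinfer|tensorrt'
# python3-libnvinfer should make the module importable
python3 -c "import tensorrt; print(tensorrt.__version__)"
```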

4. Test the installation

  • Download the TensorRT demo code
git clone https://github.com/linghu8812/tensorrt_inference.git
  • Test the yolov5 demo
    After converting the yolov5 PyTorch model to an ONNX model, compile and run the test:
cd tensorrt_inference/yolov5
mkdir build && cd build
cmake ..
make -j
./yolov5_trt ../config.yaml ../samples

Origin blog.csdn.net/linghu8812/article/details/113702539