Jetson Nano: Deploy the Paddle Inference prediction library via the Python API

System environment


  • I tried JetPack 4.3 first and an error appeared, so be sure to use JetPack 4.4, which works correctly.

If you need the JetPack 4.4 system image, you can download it from the Jetson Download Center.
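Before installing anything, it helps to confirm which L4T/JetPack release the board is actually running. A minimal sketch in Python (assuming the standard Jetson image, which provides /etc/nv_tegra_release; L4T R32.4.x corresponds to JetPack 4.4):

import pathlib

# The first line of /etc/nv_tegra_release reports the L4T release,
# from which the JetPack version can be inferred.
release = pathlib.Path("/etc/nv_tegra_release").read_text()
print(release.splitlines()[0])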

Install PaddlePaddle

There are two ways to get the prediction library. Because an official whl compiled for the Nano's Python 3.6 already exists, we can simply download it instead of compiling it ourselves.

1. Download or compile the prediction library

(1) Directly download the officially compiled Jetson Nano prediction library

Download the prebuilt library from the official Paddle Inference download page, selecting the Python 3.6 version.

(2) Compile the official prediction library

Compile PaddlePaddle (with TensorRT) on the Jetson Nano and run Paddle-Inference-Demo

2. Install whl

Transfer the downloaded whl file to the Nano, then install it:

pip3 install paddlepaddle_gpu-2.0.0-cp36-cp36m-linux_aarch64.whl
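After installation, a quick sanity check (a sketch; the expected version string assumes the 2.0.0 wheel above) confirms that the GPU build imports correctly:

import paddle

print(paddle.__version__)              # expect 2.0.0 for the wheel above
print(paddle.is_compiled_with_cuda())  # expect True for the GPU build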


Test

Open python3:

import paddle
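# Note: on Paddle 2.x, paddle.utils.run_check() is the newer equivalent of the call below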
paddle.fluid.install_check.run_check()

A warning may be reported; it can be ignored and does not affect use.

Test Paddle Inference

Environment preparation

Pull Paddle-Inference-Demo:

git clone https://github.com/PaddlePaddle/Paddle-Inference-Demo.git

If the clone is slow, you can create a mirror repository on Gitee and download from there; I made one: https://gitee.com/irvingao/Paddle-Inference-Demo.git.

Test-run the GPU prediction demos

Give executable permissions:

cd Paddle-Inference-Demo/python
chmod +x run_demo.sh

Note that in every subfolder, the python at the end of run.sh needs to be changed to python3, as shown in the sketch below.
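A minimal sketch that applies this edit to every run.sh automatically (the repository path and the blunt string replacement are assumptions; inspect the results before running the demos):

import pathlib

# Rewrite the interpreter name in every run.sh under the demo directory.
for script in pathlib.Path("Paddle-Inference-Demo/python").rglob("run.sh"):
    text = script.read_text()
    script.write_text(text.replace("python ", "python3 "))
    print("patched:", script)

Then run all the demos: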

./run_demo.sh

You can also run a single model's run.sh instead.
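Beyond the bundled scripts, the Paddle Inference Python API can also be exercised directly. Below is a minimal sketch: the model/parameter file paths are placeholders, and the 1x3x224x224 input shape assumes a typical image-classification model.

import numpy as np
from paddle.inference import Config, create_predictor

# Placeholder paths: point these at an exported inference model.
config = Config("model/inference.pdmodel", "model/inference.pdiparams")
config.enable_use_gpu(100, 0)  # 100 MB initial GPU memory pool on device 0
predictor = create_predictor(config)

# Feed a random tensor of the shape the model expects.
input_handle = predictor.get_input_handle(predictor.get_input_names()[0])
data = np.random.randn(1, 3, 224, 224).astype("float32")
input_handle.reshape([1, 3, 224, 224])
input_handle.copy_from_cpu(data)

predictor.run()

output_handle = predictor.get_output_handle(predictor.get_output_names()[0])
print(output_handle.copy_to_cpu().shape)

On the Nano, config.enable_tensorrt_engine() can additionally be called to enable TensorRT acceleration, provided the wheel was built with TensorRT support.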


Original article: blog.csdn.net/qq_45779334/article/details/114094097