Compiling Paddle-Lite from source on the Raspberry Pi 4B (Ubuntu 18.04) and deploying a Paddle model with the Python API

This is the distilled experience of stepping into countless pitfalls; the whole process basically takes under two hours. I hope everyone compiles successfully! Follow the steps and you should have no problems!

Linux deployment environment preparation

Here I provide the wheel (whl) compiled for Python 3.6 from Paddle Lite v2.6.

1. Compilation environment preparation

sudo apt update
sudo apt-get install -y gcc g++ make wget python unzip patchelf python-dev

# 2. install cmake 3.10 or above
wget https://www.cmake.org/files/v3.10/cmake-3.10.3.tar.gz
tar -zxvf cmake-3.10.3.tar.gz
cd cmake-3.10.3
./configure
make
sudo make install

Check whether cmake is installed successfully:

cmake --version
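If you want to check the version non-interactively, a small helper can compare it against the 3.10 minimum (a sketch assuming GNU `sort -V`; the hard-coded `ver` stands in for the real `cmake --version` output):

```shell
# Compare an installed version string against the required minimum (assumes GNU sort -V)
required=3.10.0
ver=3.10.3   # in practice: ver=$(cmake --version | head -n1 | awk '{print $3}')
lowest=$(printf '%s\n%s\n' "$required" "$ver" | sort -V | head -n1)
if [ "$lowest" = "$required" ]; then
    echo "cmake $ver is new enough"
else
    echo "cmake $ver is too old, need >= $required"
fi
```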


2. Install python dependencies

sudo apt-get install python3-pip
pip3 install --upgrade pip

3. Avoid a known pitfall: install patchelf

If patchelf is not installed, the build will fail when compilation reaches 100% (see the error message below). To avoid this pitfall, install it right away:

sudo apt-get install patchelf

4. Compile Paddle-Lite's python Whl package

# 1. Download the Paddle-Lite source and switch to the release branch; cloning from Gitee saves time
git clone https://gitee.com/paddlepaddle/paddle-lite
cd paddle-lite && git checkout release/v2.6

# Delete this directory; the build script will automatically download the third-party libraries from a CDN in mainland China
rm -rf third-party

Compile

Ubuntu 18.04 ships with Python 3.6.9, so --python_version is set to 3.6 here.

./lite/tools/build_linux.sh --with_python=ON --python_version=3.6 --with_log=ON
Possible error and its fix

If you followed the steps above, you should not hit this error and can skip this part. Otherwise, the build fails at 100% because patchelf is missing; installing it resolves the error:

sudo apt-get install patchelf

Compilation is successful!


5. Install Paddle Lite prediction library

# run from the paddle-lite source directory
cd build.lite.linux.armv8.gcc/inference_lite_lib.armlinux.armv8/python/install/dist

pip3 install xxxxxx.whl

Successful installation!


Model file preparation

You must choose a model that Paddle Lite supports; otherwise an error will be reported for any unsupported operator (op).

For the specific list, see: Paddle Lite supported models

(1) Download a pre-trained model from PaddleHub

PaddleHub: a massive collection of pre-trained models, free to use

(2) Your own model

Models that Paddle uses for inference are saved through the save_inference_model API, which supports two save formats. Here, I download the model parameter files generated by running on AI Studio, convert the model into the .nb format by referring to the blog linked above, and then load the resulting optmodel.nb on the Raspberry Pi.

Two model formats

  • non-combined form: each parameter is saved in its own separate file; set model_filename to None and params_filename to None

  • combined form: all parameters are saved in a single file; set model_filename to "model" and params_filename to "params"
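As a sketch, the two formats correspond to the model_filename/params_filename arguments of fluid.io.save_inference_model (Paddle 1.x API; the directory names and the exe/program/feed/fetch arguments here are illustrative placeholders from your own training code):

```python
# Sketch: saving the same network in both formats (Paddle 1.x fluid API).
# The import is deferred so this snippet parses even without paddle installed.
def save_both_formats(exe, program, feed_names, fetch_vars):
    import paddle.fluid as fluid

    # non-combined form: model_filename/params_filename left as None,
    # so every parameter is written to its own file
    fluid.io.save_inference_model(
        "./infer_non_combined", feed_names, fetch_vars, exe,
        main_program=program)

    # combined form: all parameters go into a single "params" file
    fluid.io.save_inference_model(
        "./infer_combined", feed_names, fetch_vars, exe,
        main_program=program,
        model_filename="model", params_filename="params")
```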


Be sure to convert every model to the xxx.nb format in the end.
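For reference, the conversion can be done with the paddle_lite_opt tool installed along with the wheel (a sketch: the paths here are illustrative, and for a combined-format model you would pass --model_file/--param_file instead of --model_dir):

```shell
# Convert an inference model to the .nb format for the Pi's ARM CPU
paddle_lite_opt \
    --model_dir=./inference_model \
    --optimize_out_type=naive_buffer \
    --optimize_out=./optmodel \
    --valid_targets=arm
```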
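With the .nb model on the Pi, inference through the Python API looks roughly like this (a sketch: the class and method names follow the Paddle-Lite Python bindings and may differ slightly between versions; the model path and input shape are illustrative):

```python
# Sketch: running an optimized .nb model with the Paddle-Lite Python API.
# Imports are deferred so this file parses even without paddlelite installed.
def run_nb_model(model_path, input_shape=(1, 3, 224, 224)):
    import numpy as np
    from paddlelite.lite import MobileConfig, create_paddle_predictor

    config = MobileConfig()
    config.set_model_from_file(model_path)      # the converted .nb file
    predictor = create_paddle_predictor(config)

    tensor = predictor.get_input(0)
    tensor.resize(list(input_shape))
    tensor.from_numpy(np.random.rand(*input_shape).astype("float32"))

    predictor.run()
    return predictor.get_output(0).numpy()      # e.g. classification scores
```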


Origin blog.csdn.net/qq_45779334/article/details/114553308