PaddleOCR: predicting with the ultra-lightweight Chinese detection and recognition models in C++

1. Preparation

1.1 Operating environment

The recommended operating system is Linux. I'm using Ubuntu 16.04.6, and the processor is a 4-core Intel® Core™ i5-7500 CPU @ 3.40GHz.

1.2 Compile dependent libraries

This demo mainly depends on two third-party libraries: OpenCV and the Paddle prediction library.

1.2.1 Compile OpenCV

OpenCV 3.0 or later is recommended; OpenCV 3.4.7 is used as the example here.

1) Download and decompress

wget https://github.com/opencv/opencv/archive/3.4.7.tar.gz
tar xvf 3.4.7.tar.gz

2) Compile

cd opencv-3.4.7
mkdir build
cd build

cmake .. \
-DCMAKE_INSTALL_PREFIX={directory where OpenCV will be installed} \
-DCMAKE_BUILD_TYPE=Release \
-DBUILD_SHARED_LIBS=OFF \
-DWITH_IPP=OFF \
-DBUILD_IPP_IW=OFF \
-DWITH_LAPACK=OFF \
-DWITH_EIGEN=OFF \
-DCMAKE_INSTALL_LIBDIR=lib64 \
-DWITH_ZLIB=ON \
-DBUILD_ZLIB=ON \
-DWITH_JPEG=ON \
-DBUILD_JPEG=ON \
-DWITH_PNG=ON \
-DBUILD_PNG=ON \
-DWITH_TIFF=ON \
-DBUILD_TIFF=ON

make -j4
make install

After make install completes, the OpenCV header and library files needed later to compile the OCR code are placed under the install directory. The final file structure in the installation path is as follows:

opencv3/
|-- bin
|-- include
|-- lib
|-- lib64
|-- share
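
As a quick sanity check (a sketch: opencv3 below stands for whatever directory you passed to CMAKE_INSTALL_PREFIX), you can confirm that the headers and static libraries were produced:

# headers used when compiling the OCR demo
ls opencv3/include/opencv2
# static libraries (BUILD_SHARED_LIBS=OFF), placed in lib64
# because of -DCMAKE_INSTALL_LIBDIR=lib64
ls opencv3/lib64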

1.2.2 Compile the Paddle prediction library

1) Download the code

git clone https://github.com/PaddlePaddle/Paddle.git

2) Compile

cd Paddle
mkdir build
cd build

cmake  .. \
-DWITH_CONTRIB=OFF \
-DWITH_MKL=OFF \
-DWITH_MKLDNN=OFF  \
-DWITH_TESTING=OFF \
-DCMAKE_BUILD_TYPE=Release \
-DWITH_INFERENCE_API_TEST=OFF \
-DON_INFER=ON \
-DWITH_PYTHON=ON

make -j4
make inference_lib_dist

After compilation completes, the following files and folders are generated under the build/paddle_inference_install_dir/ directory:

build/paddle_inference_install_dir/
|-- CMakeCache.txt
|-- paddle
|-- third_party
|-- version.txt

  • Note: if a "too many open files" error is reported during compilation, raise the file-descriptor limit with ulimit -n 63356
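
To confirm which options the library was actually built with, you can inspect version.txt (a sketch; the exact fields vary across Paddle versions):

cat build/paddle_inference_install_dir/version.txt
# records the git commit ID and build switches such as
# WITH_MKL and WITH_MKLDNN (both OFF for the flags used above)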

For more compilation options, refer to the official documentation of the Paddle C++ prediction library:
https://www.paddlepaddle.org.cn/documentation/docs/zh/guides/05_inference_deployment/inference/build_and_install_lib_cn.html#congyuanmabianyi

2. Run

2.1 Export the model as an inference model

You can refer to the model prediction chapter (https://github.com/PaddlePaddle/PaddleOCR/blob/dygraph/doc/doc_ch/inference.md) for how to export an inference model for prediction. Assuming the exported models are placed in the inference directory, the directory structure is as follows (an example export command is sketched after the tree).

inference/
|-- det_db
|   |--inference.pdiparams
|   |--inference.pdmodel
|-- rec_rcnn
|   |--inference.pdiparams
|   |--inference.pdmodel
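
As a sketch of the export step (the config file and weight paths are placeholders for your own training setup), exporting a DB detection model with PaddleOCR's tools/export_model.py might look like:

# export a trained detection model to the inference format
# (replace the weight path with your own checkpoint)
python3 tools/export_model.py \
    -c configs/det/det_mv3_db.yml \
    -o Global.pretrained_model=./your_det_weights/best_accuracy \
       Global.save_inference_dir=./inference/det_db/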

2.2 Compile PaddleOCR C++ prediction demo

2.2.1 Download PaddleOCR

git clone https://github.com/PaddlePaddle/PaddleOCR.git

cd PaddleOCR/deploy/cpp_infer

vi tools/build.sh

In build.sh, set OPENCV_DIR to the OpenCV installation path from section 1.2.1, and set LIB_DIR to the path of the Paddle prediction library compiled in section 1.2.2 (build/paddle_inference_install_dir).

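As a sketch (both paths are placeholders for your own machine), the edited variables might look like:

OPENCV_DIR=/path/to/opencv3                                  # install prefix from 1.2.1
LIB_DIR=/path/to/Paddle/build/paddle_inference_install_dir   # from 1.2.2

Then run the build script: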

sh tools/build.sh

After compilation is completed, an executable file named ocr_system will be generated in the build folder.

Run the program:

sh tools/run.sh
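
run.sh simply wraps a direct call to the executable. A sketch of running it by hand (assuming the demo's config.txt convention, where tools/config.txt carries parameters such as det_model_dir and rec_model_dir pointing at your inference/ models; the image path is a placeholder):

# run detection + recognition on a single sample image
./build/ocr_system ./tools/config.txt ../../doc/imgs/12.jpg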

When the program finishes, the visualized result image ocr_vis.png is generated in the current directory.

Origin: https://blog.csdn.net/santanan/article/details/111886720