OCR: Deploying the latest PaddleServing and PaddleOCR with Docker

Official references:

PaddlePaddle installation guide

https://github.com/PaddlePaddle/PaddleOCR/tree/release/2.5/deploy/pdserving

https://gitee.com/AI-Mart/PaddleOCR/blob/release/2.5/deploy/pdserving/README_CN.md#paddle-serving-pipeline%E9%83%A8%E7%BD%B2

I. Paddle Serving pipeline deployment (Python)

1. Pull the image

docker pull registry.baidubce.com/paddlepaddle/serving:0.9.0-cuda10.1-cudnn7-devel

2. Start a container from the image

nvidia-docker run -it --entrypoint=/bin/bash registry.baidubce.com/paddlepaddle/serving:0.9.0-cuda10.1-cudnn7-devel

3. Install a specific PaddlePaddle build

Download link for the specific PaddlePaddle wheel:

wget https://paddle-inference-lib.bj.bcebos.com/2.3.0/python/Linux/GPU/x86-64_gcc8.2_avx_mkl_cuda10.1_cudnn7.6.5_trt6.0.1.5/paddlepaddle_gpu-2.3.0.post101-cp37-cp37m-linux_x86_64.whl
pip3.7 install paddlepaddle_gpu-2.3.0.post101-cp37-cp37m-linux_x86_64.whl

3.1. Alternatively, install online

PaddlePaddle online installation:

python3.7 -m pip install paddlepaddle-gpu==2.3.0.post101 -f https://www.paddlepaddle.org.cn/whl/linux/mkl/avx/stable.html

4. Install the related packages

pip3.7 install paddle-serving-client==0.9.0 paddle-serving-app==0.9.0 paddle-serving-server-gpu==0.9.0.post101 -i https://mirror.baidu.com/pypi/simple

5. Clone the PaddleOCR repo

git clone https://github.com/PaddlePaddle/PaddleOCR
cd PaddleOCR/deploy/pdserving/

6. Download and extract the OCR text detection model

wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_det_infer.tar -O ch_PP-OCRv3_det_infer.tar && tar -xf ch_PP-OCRv3_det_infer.tar

7. Download and extract the OCR text recognition model

wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_rec_infer.tar -O ch_PP-OCRv3_rec_infer.tar && tar -xf ch_PP-OCRv3_rec_infer.tar

8. Convert the detection model

python3.7 -m paddle_serving_client.convert --dirname ./ch_PP-OCRv3_det_infer/ \
                                         --model_filename inference.pdmodel \
                                         --params_filename inference.pdiparams \
                                         --serving_server ./ppocr_det_v3_serving/ \
                                         --serving_client ./ppocr_det_v3_client/

9. Convert the recognition model

python3.7 -m paddle_serving_client.convert --dirname ./ch_PP-OCRv3_rec_infer/ \
                                         --model_filename inference.pdmodel \
                                         --params_filename inference.pdiparams \
                                         --serving_server ./ppocr_rec_v3_serving/ \
                                         --serving_client ./ppocr_rec_v3_client/

10. Adjust the service configuration file config.yml

Reduce the concurrency values to avoid running out of resources, for example lowering them from 8 to 4 while keeping roughly a 2:1 detection-to-recognition ratio. The relevant field is commented in config.yml as "concurrency: thread-level concurrency when is_thread_op=True, otherwise process-level concurrency". Edit the file with vim (vim config.yml, press i to enter insert mode, save and quit with :wq).
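For reference, the concurrency entries in the pdserving example config.yml look roughly like the excerpt below. The structure follows the example config shipped with the PaddleOCR repo; the exact values shown here are illustrative, not prescriptive:

```yaml
op:
    det:
        # concurrency: thread-level when is_thread_op=True, otherwise process-level
        concurrency: 4
    rec:
        concurrency: 2
```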

11. Start the server

python3.7 web_service.py &>log.txt &

12. Run the client

python3.7 pipeline_http_client.py
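Under the hood, pipeline_http_client.py base64-encodes an image and POSTs it to the pipeline web service. A minimal stdlib-only sketch of that request follows; the URL http://127.0.0.1:9998/ocr/prediction and the {"key": [...], "value": [...]} payload shape are assumptions based on the default PaddleOCR pipeline example, so check your config.yml for the actual port and service name:

```python
import base64
import json
import urllib.request


def build_payload(image_bytes: bytes) -> dict:
    """Wrap a base64-encoded image in the Paddle Serving pipeline
    request format. "image" as the input key is assumed from the
    PaddleOCR pipeline example."""
    b64 = base64.b64encode(image_bytes).decode("utf-8")
    return {"key": ["image"], "value": [b64]}


def request_ocr(image_path: str,
                url: str = "http://127.0.0.1:9998/ocr/prediction") -> dict:
    """Send one image to the running pipeline service and return
    the parsed JSON response."""
    with open(image_path, "rb") as f:
        payload = build_payload(f.read())
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

The response contains the recognized text and confidence scores; this sketch only illustrates the wire format, the bundled pipeline_http_client.py remains the reference client.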

II. Paddle Serving C++ deployment

1. Pull the image

docker pull registry.baidubce.com/paddlepaddle/serving:0.9.0-cuda10.1-cudnn7-devel

2. Start a container from the image

nvidia-docker run -it --entrypoint=/bin/bash registry.baidubce.com/paddlepaddle/serving:0.9.0-cuda10.1-cudnn7-devel

3. Install a specific PaddlePaddle build

Download link for the specific PaddlePaddle wheel:

wget https://paddle-inference-lib.bj.bcebos.com/2.3.0/python/Linux/GPU/x86-64_gcc8.2_avx_mkl_cuda10.1_cudnn7.6.5_trt6.0.1.5/paddlepaddle_gpu-2.3.0.post101-cp37-cp37m-linux_x86_64.whl
pip3.7 install paddlepaddle_gpu-2.3.0.post101-cp37-cp37m-linux_x86_64.whl

3.1. Alternatively, install online

PaddlePaddle online installation:

python3.7 -m pip install paddlepaddle-gpu==2.3.0.post101 -f https://www.paddlepaddle.org.cn/whl/linux/mkl/avx/stable.html

4. Install the related packages

pip3.7 install paddle-serving-client==0.9.0 paddle-serving-app==0.9.0 paddle-serving-server-gpu==0.9.0.post101 -i https://mirror.baidu.com/pypi/simple

5. Clone the PaddleOCR repo

git clone https://github.com/PaddlePaddle/PaddleOCR
cd PaddleOCR/deploy/pdserving/

6. Download and extract the OCR text detection model

wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_det_infer.tar -O ch_PP-OCRv3_det_infer.tar && tar -xf ch_PP-OCRv3_det_infer.tar

7. Download and extract the OCR text recognition model

wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_rec_infer.tar -O ch_PP-OCRv3_rec_infer.tar && tar -xf ch_PP-OCRv3_rec_infer.tar

8. Convert the detection model

python3.7 -m paddle_serving_client.convert --dirname ./ch_PP-OCRv3_det_infer/ \
                                         --model_filename inference.pdmodel \
                                         --params_filename inference.pdiparams \
                                         --serving_server ./ppocr_det_v3_serving/ \
                                         --serving_client ./ppocr_det_v3_client/

9. Convert the recognition model

python3.7 -m paddle_serving_client.convert --dirname ./ch_PP-OCRv3_rec_infer/ \
                                         --model_filename inference.pdmodel \
                                         --params_filename inference.pdiparams \
                                         --serving_server ./ppocr_rec_v3_serving/ \
                                         --serving_client ./ppocr_rec_v3_client/

10. Adjust the service configuration file config.yml

As in Part I, reduce the concurrency values to avoid running out of resources, for example lowering them from 8 to 4 while keeping roughly a 2:1 detection-to-recognition ratio. The relevant field is commented in config.yml as "concurrency: thread-level concurrency when is_thread_op=True, otherwise process-level concurrency". Edit the file with vim (vim config.yml, press i to enter insert mode, save and quit with :wq).

11. Start the server

python3.7 -m paddle_serving_server.serve --model ppocr_det_v3_serving ppocr_rec_v3_serving --op GeneralDetectionOp GeneralInferOp --port 9293 &>log.txt &

12. Modify the client configuration

Pre- and post-processing are performed inside the C++ server, and to speed things up only the base64-encoded string of the image is passed in. You therefore need to manually change the feed_type and shape fields in ppocr_det_v3_client/serving_client_conf.prototxt to the following:

Edit the file with vim (vim ppocr_det_v3_client/serving_client_conf.prototxt, press i to enter insert mode, save and quit with :wq):

feed_var {
  name: "x"
  alias_name: "x"
  is_lod_tensor: false
  feed_type: 20
  shape: 1
}
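With feed_type set to 20 (a string input), the C++ server expects the client to feed the raw base64 string of the image rather than a decoded tensor. A stdlib-only sketch of that encoding step (the function name is illustrative; ocr_cpp_client.py does the equivalent before feeding the "x" input):

```python
import base64


def image_to_b64(image_path: str) -> str:
    """Read an image file and return its base64 string, the form
    fed to the "x" input of the C++ OCR service."""
    with open(image_path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")
```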

13. Run the client

python3.7 ocr_cpp_client.py ppocr_det_v3_client ppocr_rec_v3_client

Reprinted from blog.csdn.net/qq_15821487/article/details/125069057