FastDeploy UIE Model Python Deployment Example
Before deployment, install the FastDeploy Python SDK by referring to the FastDeploy SDK installation documentation.
This directory provides infer.py, a Python deployment example for quickly completing general text classification tasks on CPU/GPU.
0. FastDeploy precompiled library installation
FastDeploy provides precompiled libraries for each platform that developers can download and install directly. Compiling FastDeploy from source is also straightforward, and developers can build it to match their own requirements.
0.1 GPU deployment environment
0.1.1 Environment requirements
- CUDA >= 11.2
- cuDNN >= 8.0
- python >= 3.6
- OS: Linux(x64)/Windows 10(x64)
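As a quick sanity check before installing, the Python-version requirement above can be verified programmatically. This is an illustrative sketch using only the standard library; the helper name `python_meets_requirement` is ours, not part of FastDeploy, and CUDA/cuDNN checks are environment-specific and omitted here:

```python
import sys

# Minimum Python version from the requirements list above.
MIN_PYTHON = (3, 6)

def python_meets_requirement(version_info=sys.version_info, minimum=MIN_PYTHON):
    """Return True if the interpreter version satisfies the minimum requirement."""
    # Compare only (major, minor); patch releases do not affect compatibility here.
    return tuple(version_info[:2]) >= minimum

if not python_meets_requirement():
    raise SystemExit(f"Python >= {MIN_PYTHON[0]}.{MIN_PYTHON[1]} is required")
```

Running this before `pip install` gives an early, readable failure instead of an obscure wheel-resolution error on an unsupported interpreter.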
Supports deployment on CPU and NVIDIA GPU. Integrates the Paddle Inference, ONNX Runtime, OpenVINO, and TensorRT inference backends by default, along with the Vision module for vision models and the Text module for NLP models.
Version information: Paddle Inference 2.4-dev5, ONNX Runtime 1.12.0, OpenVINO 2022.2.0.dev20220829, TensorRT 8.5