Training and deploying a vehicle type classification model service with PaddleClas and PaddleServing on a Huawei Cloud GPU server

0 Preface

The following is a record of my recent process of training and deploying a vehicle type recognition model with PaddleClas and PaddleServing on a Huawei Cloud GPU server, kept for future reference and in the hope that it helps other readers with the same need. I have only worked in this area for a short time, so corrections and criticism are welcome.

For how to set up the GPU version of the PaddlePaddle environment on a Huawei Cloud server, refer to this article: https://blog.csdn.net/loutengyuan/article/details/126527326

1 Environment preparation

Both the PaddleClas runtime environment and the PaddleServing runtime environment need to be prepared.

  • Prepare the PaddleClas runtime environment:
# Clone the repository
git clone https://github.com/PaddlePaddle/PaddleClas
  • Install the PaddleServing runtime environment as follows:
# Install the server package, used to start the service
wget https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server_gpu-0.8.3.post102-py3-none-any.whl
pip3 install paddle_serving_server_gpu-0.8.3.post102-py3-none-any.whl

# Install the client package, used to send requests to the service
wget https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_client-0.8.3-cp38-none-any.whl
pip3 install paddle_serving_client-0.8.3-cp38-none-any.whl

# Install serving-app
wget https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_app-0.8.3-py3-none-any.whl
pip3 install paddle_serving_app-0.8.3-py3-none-any.whl
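
To confirm the three wheels installed correctly, a quick import check can be run. This is a minimal sketch; the try/except mirrors the one used later in classification_web_service.py, since depending on the wheel version the GPU server module may be exposed as either paddle_serving_server_gpu or paddle_serving_server:

# sanity_check.py -- minimal sketch: verify the Serving packages import
try:
    import paddle_serving_server_gpu as serving_server  # older GPU wheel layout
except ImportError:
    import paddle_serving_server as serving_server       # module name in 0.8.x wheels
import paddle_serving_client
import paddle_serving_app

print("server module:", serving_server.__name__)
print("client module:", paddle_serving_client.__name__)
print("app module:   ", paddle_serving_app.__name__)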

2 Dataset and its processing

Sort the images into separate folders by category, then upload the data to the Huawei Cloud server. The directory structure is as follows:

# tree ./TruckType
.
├── test_01.jpg
├── TruckType
│   ├── 0-qyc
│   │   ├── 10765.jpg
│   │   ├── 19994.jpg
│   │   ├── 1029.jpg
│   │   ├── 106710.jpg
│   │   ├── 9610.jpg
│   │   ├── 98388.jpg
│   │   └── 9938.jpg
│   ├── 1-zhc
│   │   ├── 10154.jpg
│   │   ├── 1055.jpg
│   │   ├── 10801.jpg
│   │   ├── 9969.jpg
│   │   ├── 9970.jpg
│   │   ├── 9513.jpg
│   │   └── 9515.jpg
│   ├── 2-zxc
│   │   ├── 5274.jpg
│   │   ├── 69648.jpg
│   │   ├── 6649.jpg
│   │   ├── 5651.jpg
│   │   ├── 3055.jpg
│   │   ├── 7630.jpg
│   │   ├── 58.jpg
│   │   └── 9082.jpg
│   ├── 3-gc
│   │   ├── 9587.jpg
│   │   ├── 855.jpg
│   │   ├── 663.jpg
│   │   ├── 5611.jpg
│   │   ├── 9085.jpg
│   │   └── 2284.jpg
│   ├── 4-jbc
│   │   ├── 874.jpg
│   │   ├── 56456.jpg
│   │   ├── 36576.jpg
│   │   └── 25244.jpg
│   ├── all_list.txt
│   ├── label_list.txt
│   ├── test_list.txt
│   ├── train_list.txt
│   └── val_list.txt
└── write_label_truck_type.py

  • test_01.jpg is used to test the trained model.
  • 0-qyc, 1-zhc, 2-zxc, 3-gc, 4-jbc are the folders of images for the different vehicle types (note: it is best to avoid Chinese characters, brackets, spaces, and other special characters in image file names, as they easily cause errors during training).
  • all_list.txt, label_list.txt, test_list.txt, train_list.txt, val_list.txt are the label files generated after processing.
  • write_label_truck_type.py is the script that automatically generates the label files above.

The label files are generated by the script write_label_truck_type.py, whose code is as follows:

# -*- coding: utf-8 -*-
import os
import sys
from sklearn.utils import shuffle

# Build the overall training data list.
# Following the official PaddleClas docs, the images must be turned into list files:
# train_list.txt (training set)
# val_list.txt (validation set)
# Each line holds a relative path such as foods/beef_carpaccio/855780.jpg plus its label id.
# The label id is derived from the folder names.
# First build the full list, then split it; since a validation set must be carved out
# and the data is originally ordered, the list is shuffled first.
def get_all_txt(image_root, dir_name):
    all_list = []
    label_list = []
    i = 0  # running count of files
    # j = -1  # running file-category index
    for root, dirs, files in os.walk(image_root+dir_name):  # root dir, subfolders, files
        if "ipynb_checkpoints" in root:
            continue
        strs = str(root).replace(image_root+dir_name+"/", "").split('-')
        if len(strs) != 2:
            continue
        label_idx_str = strs[0].replace(" ", "")
        print("root = {}  label_idx_str = {}".format(root, label_idx_str))
        label_list.append("{} {}\n".format(label_idx_str, strs[1]))
        for file in files:
            i = i + 1
            # Each line has the format: <relative image path> <label id> (separated by a space).
            img_path = os.path.join(root, file).replace(image_root, "")
            all_list.append(img_path+" " + label_idx_str + "\n")
        # j = j + 1
        label_list.sort()
    return all_list, i, label_list


if __name__ == "__main__":
    if len(sys.argv) < 3:
        print("Please pass the image root directory and folder name: wrong number of arguments!")
    else:
        # for arg in sys.argv:
        #     print(arg)
        image_root = sys.argv[1]
        dir_name = sys.argv[2]
        print("image_root = {}  dir_name = {}".format(image_root, dir_name))
        # Build the overall training data list
        all_list, all_len, label_list = get_all_txt(image_root, dir_name)
        print(all_len)
        print(label_list)

        # Write the label file
        label_str = ''.join(label_list)
        f = open(image_root+dir_name+'/label_list.txt', 'w', encoding='utf-8')
        f.write(label_str)
        f.close()
        print("Label file written")

        # Shuffle the data
        all_list = shuffle(all_list)
        allstr = ''.join(all_list)
        f = open(image_root+dir_name+'/all_list.txt', 'w', encoding='utf-8')
        f.write(allstr)
        f.close()
        print("Shuffled successfully and written to file")

        # Split the dataset proportionally: 80% train, then 80% of the
        # remainder for validation and the rest for test.
        train_size = int(all_len * 0.8)
        train_list = all_list[:train_size]
        temp_list = all_list[train_size:]
        val_size = int(len(temp_list) * 0.8)
        val_list = temp_list[:val_size]
        test_list = temp_list[val_size:]

        print(len(train_list))
        print(len(val_list))
        print(len(test_list))

        # Generate the training set txt
        train_txt = ''.join(train_list)
        f_train = open(image_root+dir_name+'/train_list.txt', 'w', encoding='utf-8')
        f_train.write(train_txt)
        f_train.close()
        print("train_list.txt generated!")

        # Generate the validation set txt
        val_txt = ''.join(val_list)
        f_val = open(image_root+dir_name+'/val_list.txt', 'w', encoding='utf-8')
        f_val.write(val_txt)
        f_val.close()
        print("val_list.txt generated!")

        # Generate the test set txt
        test_txt = ''.join(test_list)
        f_test = open(image_root+dir_name+'/test_list.txt', 'w', encoding='utf-8')
        f_test.write(test_txt)
        f_test.close()
        print("test_list.txt generated!")

Execute the script:

cd <data-directory>
python write_label_truck_type.py ./ TruckType

The content format of all_list.txt, test_list.txt, train_list.txt, val_list.txt is similar to the following:

TruckType/1-zhc/495218.jpg 1
TruckType/3-gc/543432.jpg 3
TruckType/2-zxc/3453.jpg 2
TruckType/2-zxc/343453.jpg 2
TruckType/3-gc/34545.jpg 3
TruckType/1-zhc/637371.jpg 1
TruckType/0-qyc/32354.jpg 0
TruckType/0-qyc/650456.jpg 0

The format of label_list.txt is as follows:

0 0-qyc
1 1-zhc
2 2-zxc
3 3-gc
4 4-jbc
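
Before training, it is worth sanity-checking the generated lists. The sketch below is my own helper (not part of PaddleClas); it verifies that every line of train_list.txt points to an existing image and uses a label id declared in label_list.txt:

# check_lists.py -- verify a generated list file (sketch, assumes the layout above)
import os

image_root = "./"  # same root passed to write_label_truck_type.py
list_path = os.path.join(image_root, "TruckType/train_list.txt")

# collect the valid label ids from label_list.txt
with open(os.path.join(image_root, "TruckType/label_list.txt"), encoding="utf-8") as f:
    valid_ids = {line.split()[0] for line in f if line.strip()}

with open(list_path, encoding="utf-8") as f:
    for lineno, line in enumerate(f, 1):
        if not line.strip():
            continue
        img_path, label_id = line.rsplit(" ", 1)
        label_id = label_id.strip()
        assert label_id in valid_ids, f"line {lineno}: unknown label id {label_id}"
        assert os.path.isfile(os.path.join(image_root, img_path)), f"line {lineno}: missing file {img_path}"

print("train_list.txt looks consistent")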

3 Model training

Enter the PaddleClas code directory cloned earlier:

# cd PaddleClas
# ll
total 148
drwxr-xr-x  2 root root  4096 Aug 25 14:52 benchmark
drwxr-xr-x  2 root root  4096 Aug 25 14:52 dataset
drwxr-xr-x 22 root root  4096 Sep  2 11:10 deploy
drwxr-xr-x  6 root root  4096 Aug 25 14:52 docs
-rw-r--r--  1 root root 28095 Aug 25 14:52 hubconf.py
drwxr-xr-x  4 root root  4096 Sep  3 09:32 inference
-rw-r--r--  1 root root   705 Aug 25 14:52 __init__.py
-rw-r--r--  1 root root 11357 Aug 25 14:52 LICENSE
-rw-r--r--  1 root root   259 Aug 25 14:52 MANIFEST.in
drwxr-xr-x  6 root root  4096 Sep  3 08:55 output
-rw-r--r--  1 root root 24463 Aug 25 14:52 paddleclas.py
drwxr-xr-x 12 root root  4096 Aug 31 16:34 ppcls
-rw-r--r--  1 root root  9819 Aug 25 14:52 README_ch.md
-rw-r--r--  1 root root  9149 Aug 25 14:52 README_en.md
-rw-r--r--  1 root root    12 Aug 25 14:52 README.md
-rw-r--r--  1 root root   148 Aug 25 14:52 requirements.txt
-rw-r--r--  1 root root  2343 Aug 25 14:52 setup.py
drwxr-xr-x  3 root root  4096 Aug 25 14:52 tests
drwxr-xr-x  5 root root  4096 Aug 25 14:52 test_tipc
drwxr-xr-x  2 root root  4096 Aug 25 14:52 tools

3.1 Modify the configuration file

The main changes are: the number of classes, the training and validation dataset paths, the image size, the data preprocessing, and num_workers for training and prediction (num_workers must be changed to 0, because this is a single-card setup). The following uses ShuffleNetV2_x0_25, the beginner quick-start model, as an example. In fact, every folder under PaddleClas/ppcls/configs/ImageNet/ contains model config files, and you can pick one yourself.
The path is as follows:

PaddleClas/ppcls/configs/quick_start/new_user/ShuffleNetV2_x0_25.yaml

Copy it and rename the copy to ShuffleNetV2_x0_25_truck_type.yaml, at the following path:

PaddleClas/ppcls/configs/quick_start/new_user/ShuffleNetV2_x0_25_truck_type.yaml

Modify the configuration file ShuffleNetV2_x0_25_truck_type.yaml as follows:

# global configs
Global:
  checkpoints: null
  pretrained_model: null
  output_dir: ./output/truck_type/
  # train on the GPU
  device: gpu
  # save a checkpoint every N epochs
  save_interval: 1
  eval_during_train: True
  # run evaluation every N epochs
  eval_interval: 1
  # number of training epochs
  epochs: 100
  print_batch_step: 1
  use_visualdl: True # enable VisualDL visualization (not usable on this platform at the moment)
  # used for static mode and model export
  # image size
  image_shape: [3, 224, 224]
  save_inference_dir: ./inference/clas_truck_type_infer
  # training model under @to_static
  to_static: False

# model architecture
Arch:
  # network architecture to use
  name: ShuffleNetV2_x0_25
  class_num: 5
 
# loss function config for training/eval process
Loss:
  Train:
    - CELoss:
        weight: 1.0
        weight: 1.0
  Eval:
    - CELoss:
        weight: 1.0


Optimizer:
  name: Momentum
  momentum: 0.9
  lr:
    name: Piecewise
    learning_rate: 0.015
    decay_epochs: [30, 60, 90]
    values: [0.1, 0.01, 0.001, 0.0001]
  regularizer:
    name: 'L2'
    coeff: 0.0005


# data loader for train and eval
DataLoader:
  Train:
    dataset:
      name: ImageNetDataset
      # dataset root path
      image_root: /yxdata/truck_type/
      # path to the training list file you generated earlier
      cls_label_path: /yxdata/truck_type/TruckType/train_list.txt
      # data preprocessing
      transform_ops:
        - DecodeImage:
            to_rgb: True
            channel_first: False
        - ResizeImage:
            resize_short: 256
        - CropImage:
            size: 224
        - RandFlipImage:
            flip_code: 1
        - NormalizeImage:
            scale: 1.0/255.0
            mean: [0.485, 0.456, 0.406]
            std: [0.229, 0.224, 0.225]
            order: ''

    sampler:
      name: DistributedBatchSampler
      batch_size: 128
      drop_last: False
      shuffle: True
    loader:
      num_workers: 0
      use_shared_memory: True

  Eval:
    dataset: 
      name: ImageNetDataset
      # dataset root path
      image_root: /yxdata/truck_type/
      # path to the validation list file you generated earlier
      cls_label_path: /yxdata/truck_type/TruckType/val_list.txt
      # data preprocessing
      transform_ops:
        - DecodeImage:
            to_rgb: True
            channel_first: False
        - ResizeImage:
            resize_short: 256
        - CropImage:
            size: 224
        - NormalizeImage:
            scale: 1.0/255.0
            mean: [0.485, 0.456, 0.406]
            std: [0.229, 0.224, 0.225]
            order: ''
    sampler:
      name: DistributedBatchSampler
      batch_size: 128
      drop_last: False
      shuffle: True
    loader:
      num_workers: 0
      use_shared_memory: True

Infer:
  infer_imgs: /yxdata/truck_type/test_01.jpg
  batch_size: 10
  transforms:
    - DecodeImage:
        to_rgb: True
        channel_first: False
    - ResizeImage:
        resize_short: 256
    - CropImage:
        size: 224
    - NormalizeImage:
        scale: 1.0/255.0
        mean: [0.485, 0.456, 0.406]
        std: [0.229, 0.224, 0.225]
        order: ''
    - ToCHWImage:
  PostProcess:
    name: Topk
    # output the top-k most likely classes
    topk: 3
    # label file; you need to create this file yourself
    class_id_map_file: /yxdata/truck_type/TruckType/label_list.txt

Metric:
  Train:
    - TopkAcc:
        topk: [1, 3]
  Eval:
    - TopkAcc:
        topk: [1, 3]

3.2 Start training

python3 tools/train.py \
    -c ./ppcls/configs/quick_start/new_user/ShuffleNetV2_x0_25_truck_type.yaml \
    -o Global.device=gpu

After training, the model files will be generated in the PaddleClas/output/truck_type/ directory:

# tree ./truck_type/
├── ShuffleNetV2_x0_25
│   ├── best_model.pdopt
│   ├── best_model.pdparams
│   ├── best_model.pdstates
│   ├── epoch_100.pdopt
│   ├── epoch_100.pdparams
│   ├── epoch_100.pdstates
│   ├── epoch_10.pdopt
│   ├── epoch_10.pdparams
│   ├── epoch_10.pdstates
│   ├── epoch_11.pdopt
│   ├── epoch_11.pdparams
│   ├── epoch_11.pdstates
│   ├── epoch_1.pdopt
│   ├── epoch_1.pdparams
│   ├── epoch_1.pdstates
│   ├── export.log
│   ├── infer.log
│   ├── latest.pdopt
│   ├── latest.pdparams
│   ├── latest.pdstates
│   └── train.log
└── vdl
    └── vdlrecords.1662166534.log

3.3 Predict a single image

python3 tools/infer.py \
    -c ./ppcls/configs/quick_start/new_user/ShuffleNetV2_x0_25_truck_type.yaml \
    -o Infer.infer_imgs=/yxdata/truck_type/test_01.jpg \
    -o Global.pretrained_model=output/truck_type/ShuffleNetV2_x0_25/best_model

The predicted results are as follows:

[{'class_ids': [4, 0, 1], 'scores': [0.9976, 0.00225, 0.0001], 'file_name': '/yxdata/truck_type/test_01.jpg', 'label_names': ['1-zhc', '3-gc', '2-zxc']}]

3.4 Batch Prediction

python3 tools/infer.py \
    -c ./ppcls/configs/quick_start/new_user/ShuffleNetV2_x0_25_truck_type.yaml \
    -o Infer.infer_imgs=/yxdata/truck_type/ \
    -o Global.pretrained_model=output/truck_type/ShuffleNetV2_x0_25/best_model

The predicted results are as follows:

[{'class_ids': [4, 0, 1], 'scores': [0.9976, 0.00225, 0.0001], 'file_name': '/yxdata/truck_type/test_01.jpg', 'label_names': ['1-zhc', '3-gc', '2-zxc']}]

3.5 Export prediction model

python3 tools/export_model.py \
    -c ppcls/configs/quick_start/new_user/ShuffleNetV2_x0_25_truck_type.yaml \
    -o Global.pretrained_model=output/truck_type/ShuffleNetV2_x0_25/best_model

After the export succeeds, the model files will be generated in the PaddleClas/inference/clas_truck_type_infer/ directory, with the following structure:

# tree ./clas_truck_type_infer/
├── inference.pdiparams
├── inference.pdiparams.info
└── inference.pdmodel
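
Before wrapping the exported model with Serving, it can be smoke-tested locally through the paddle.inference API. The sketch below feeds a random tensor shaped like the configured image_shape, so it only verifies that the model loads and emits 5 class scores:

# smoke_test_infer.py -- load the exported model and run one fake batch (sketch)
import numpy as np
import paddle.inference as paddle_infer

config = paddle_infer.Config(
    "inference/clas_truck_type_infer/inference.pdmodel",
    "inference/clas_truck_type_infer/inference.pdiparams")
predictor = paddle_infer.create_predictor(config)

# feed one random image-shaped batch (matches image_shape: [3, 224, 224])
input_name = predictor.get_input_names()[0]
input_handle = predictor.get_input_handle(input_name)
input_handle.copy_from_cpu(np.random.rand(1, 3, 224, 224).astype("float32"))
predictor.run()

output_name = predictor.get_output_names()[0]
scores = predictor.get_output_handle(output_name).copy_to_cpu()
print(scores.shape)  # expect (1, 5): one score per vehicle class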

4 Model service deployment

4.1 Model conversion

Enter the working directory:

cd PaddleClas/deploy/

Create and enter the models folder:

# create and enter the models folder
mkdir models
cd models

Copy the inference model exported at the end of the model training step into this folder; the structure is as follows:

└── clas_truck_type_infer
    ├── inference.pdiparams
    ├── inference.pdiparams.info
    └── inference.pdmodel

Convert the vehicle type classification inference model to the Serving model:

# convert the vehicle type classification model
python3.8 -m paddle_serving_client.convert \
--dirname ./clas_truck_type_infer/ \
--model_filename inference.pdmodel  \
--params_filename inference.pdiparams \
--serving_server ./clas_truck_type_serving/ \
--serving_client ./clas_truck_type_client/

After the conversion completes, two new folders, clas_truck_type_serving/ and clas_truck_type_client/, will appear in the current folder, with the following structure:

    ├── clas_truck_type_serving/
    │   ├── inference.pdiparams
    │   ├── inference.pdmodel
    │   ├── serving_server_conf.prototxt
    │   └── serving_server_conf.stream.prototxt
    └── clas_truck_type_client/
          ├── serving_client_conf.prototxt
          └── serving_client_conf.stream.prototxt

Modify the model parameters
To be compatible with deploying different models, Serving provides input/output renaming: when deploying a different model for inference, you only need to change the alias_name in the configuration files, without modifying any code. Therefore, after the conversion, modify serving_server_conf.prototxt under clas_truck_type_serving/ and serving_client_conf.prototxt under clas_truck_type_client/, changing the field after alias_name: in fetch_var to prediction. The modified serving_server_conf.prototxt and serving_client_conf.prototxt look like this:

feed_var {
  name: "x"
  alias_name: "x"
  is_lod_tensor: false
  feed_type: 1
  shape: 3
  shape: 224
  shape: 224
}
fetch_var {
  name: "softmax_1.tmp_0"
  alias_name: "prediction"
  is_lod_tensor: false
  fetch_type: 1
  shape: 5
}

The parameters of the conversion command above have the following meanings:

  • dirname (str, required): storage path of the model files to convert; both the program structure file and the parameter files live in this directory.
  • model_filename (str, default None): name of the file storing the Inference Program structure of the model to convert. If set to None, __model__ is used as the default filename.
  • params_filename (str, default None): name of the file storing all parameters of the model to convert. It must be specified if and only if all model parameters are stored in a single binary file; if the parameters are stored in separate files, set it to None.
  • serving_server (str, default "serving_server"): storage path of the converted server-side model and configuration files.
  • serving_client (str, default "serving_client"): storage path of the converted client-side configuration files.

4.2 Service Deployment

Enter the working directory:

  cd ./deploy/paddleserving/

The paddleserving directory contains the code for starting the Python Pipeline service and the C++ Serving service, and for sending prediction requests, including:

__init__.py
classification_web_service.py # script that starts the pipeline server
config.yml                    # configuration file for starting the pipeline service
pipeline_http_client.py       # script that sends pipeline prediction requests over HTTP
pipeline_rpc_client.py        # script that sends pipeline prediction requests over RPC
readme.md                     # documentation for deploying the classification model as a service
run_cpp_serving.sh            # script that starts the C++ Serving deployment
test_cpp_serving_client.py    # script that sends C++ Serving prediction requests over RPC

Modify the config.yml file as follows:

# worker_num: maximum concurrency. When build_dag_each_worker=True, the framework
# creates worker_num processes, each building its own gRPC server and DAG.
# When build_dag_each_worker=False, the framework sets max_workers=worker_num
# for the gRPC thread pool of the main thread.
worker_num: 1

# HTTP port. rpc_port and http_port must not both be empty. When rpc_port is
# available and http_port is empty, http_port is not generated automatically.
http_port: 8877
#rpc_port: 9993

dag:
    # op resource type: True for the thread model, False for the process model
    is_thread_op: False
op:
    clas_truck_type:
        # concurrency: thread concurrency when is_thread_op=True, otherwise process concurrency
        concurrency: 1

        # when the op config has no server_endpoints, the local service
        # configuration is read from local_service_conf
        local_service_conf:

            # model path
            model_config: ../models/clas_truck_type_serving
#            model_config: ../models/ResNet50_vd_serving

            # compute device type: when empty, decided by devices (CPU/GPU);
            # 0=cpu, 1=gpu, 2=tensorRT, 3=arm cpu, 4=kunlun xpu
            device_type: 1

            # compute device IDs: when devices is "" or omitted, predict on the CPU;
            # when devices is "0" or "0,1,2", predict on the listed GPU cards
            devices: "0" # "0,1"

            # client type: brpc, grpc, or local_predictor. local_predictor does not
            # start a Serving service; prediction happens in-process
            client_type: local_predictor

            # list of fetch outputs, matching the alias_name of fetch_var in the client config
            fetch_list: ["prediction"]

Modify the classification_web_service.py file as follows:

# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import datetime
import sys
from paddle_serving_app.reader import Sequential, URL2Image, Resize, CenterCrop, RGB2BGR, Transpose, Div, Normalize, Base64ToImage
try:
    from paddle_serving_server_gpu.web_service import WebService, Op
except ImportError:
    from paddle_serving_server.web_service import WebService, Op
import logging
import numpy as np
import base64, cv2


class TruckTypeClasOp(Op):
    def init_op(self):
        print("------------------------ TruckTypeClasOp init_op ---------------------------")
        self.seq = Sequential([
            Resize(256), CenterCrop(224), RGB2BGR(), Transpose((2, 0, 1)),
            Div(255), Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225],
                                True)
        ])
        self.label_dict = {}
        label_idx = 0
        with open("truck_type_list.label") as fin:
            for line in fin:
                self.label_dict[label_idx] = line.strip()
                label_idx += 1
        print("label_dict --> {}".format(self.label_dict))

    def preprocess(self, input_dicts, data_id, log_id):
        print("{} TruckTypeClasOp preprocess\tbegin\t--> data_id: {}".format(datetime.datetime.now(), data_id))
        (_, input_dict), = input_dicts.items()
        batch_size = len(input_dict.keys())
        imgs = []
        for key in input_dict.keys():
            data = base64.b64decode(input_dict[key].encode('utf8'))
            data = np.frombuffer(data, np.uint8)  # np.fromstring is deprecated for binary data
            im = cv2.imdecode(data, cv2.IMREAD_COLOR)
            img = self.seq(im)
            imgs.append(img[np.newaxis, :].copy())
        input_imgs = np.concatenate(imgs, axis=0)
        print("{} TruckTypeClasOp preprocess\tfinish\t--> data_id: {}".format(datetime.datetime.now(), data_id))
        # return {"inputs": input_imgs}, False, None, ""
        return {"x": input_imgs}, False, None, ""

    def postprocess(self, input_dicts, fetch_dict, data_id, log_id):
        print("{} TruckTypeClasOp postprocess\tbegin\t--> data_id: {}".format(datetime.datetime.now(), data_id))
        score_list = fetch_dict["prediction"]
        print("{} data_id: {}  -->  score_list: {}".format(datetime.datetime.now(), data_id, score_list))
        result = []
        for score in score_list:
            item = {}
            score = score.tolist()
            max_score = max(score)
            idx = score.index(max_score)
            print("{} data_id: {}  -->  max_score = {}  -->  idx = {}".format(datetime.datetime.now(), data_id, max_score, idx))
            if self.label_dict is not None:
                if idx < len(self.label_dict):
                    label = self.label_dict[score.index(max_score)].strip().replace(",", "")
                else:
                    label = 'ErrorType'
            else:
                label = str(idx)
            item["label"] = label
            item["prob"] = max_score
            result.append(item)
        print("{} TruckTypeClasOp postprocess\tfinish\t--> data_id: {} --> result:{}".format(datetime.datetime.now(), data_id, result))
        return {"result": str({"truck_type": result})}, None, ""


class ClassificationService(WebService):
    def get_pipeline_response(self, read_op):
        truck_type_op = TruckTypeClasOp(name="clas_truck_type", input_ops=[read_op])
        return truck_type_op


uci_service = ClassificationService(name="classification")
uci_service.prepare_pipeline_config("config.yml")
uci_service.run_service()

Add the file truck_type_list.label with the following content: the human-readable class names, one per line, in class-index order (tractor unit, cargo truck, dump truck, trailer, concrete mixer):

牵引车
载货车
自卸车
挂车
搅拌车
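
The line order must match the class indices (0-qyc through 4-jbc), and the line count must equal Arch.class_num in the training config. A one-off check, as a sketch:

# verify truck_type_list.label matches the 5 classes (sketch)
with open("truck_type_list.label", encoding="utf-8") as f:
    labels = [line.strip() for line in f if line.strip()]
assert len(labels) == 5, labels  # must equal Arch.class_num
print(dict(enumerate(labels)))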

Start the service:

# start the service; the run log is saved to paddleclas_recognition_log.txt
nohup python3.8 -u classification_web_service.py &>./paddleclas_recognition_log.txt &

View the process:

ps -ef|grep python

Stop the process:

# find the PID with the previous command, then kill that process
kill -9 19913
# or use the following command
python3.8 -m paddle_serving_server.serve stop

View the log:

tail -fn 1000 ./paddleclas_recognition_log.txt

Check whether a port is occupied:

$: netstat -anp | grep 8888
tcp        0      0 127.0.0.1:8888          0.0.0.0:*               LISTEN      13404/python3       
tcp        0      1 172.17.0.10:34036       115.42.35.84:8888       SYN_SENT    14586/python3 

Forcibly kill the processes by PID:

$: kill -9 13404
$: kill -9 14586
$: netstat -anp | grep 8888
$:

4.3 Service Test

Modify the pipeline_http_client.py file as follows:

import requests
import json
import base64
import os


def cv2_to_base64(image):
    return base64.b64encode(image).decode('utf8')


if __name__ == "__main__":
    url = "http://127.0.0.1:8877/classification/prediction"
    with open(os.path.join(".", "path_to_image.jpg"), 'rb') as file:  # replace with your test image
        image_data1 = file.read()
    image = cv2_to_base64(image_data1)

    data = {"key": ["image"], "value": [image]}
    for i in range(1):
        r = requests.post(url=url, data=json.dumps(data))
        print(r.json())

Send a request:

python3.8 pipeline_http_client.py
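
Alternatively, a request can be sent over RPC, which is what pipeline_rpc_client.py does. Below is a minimal sketch of such a client, assuming rpc_port: 9993 has been uncommented in config.yml; the PipelineClient import path follows the PaddleServing pipeline examples, and path_to_image.jpg is a placeholder:

# rpc_client_sketch.py -- send a pipeline prediction request over RPC
# (assumes rpc_port: 9993 is enabled in config.yml)
import base64
from paddle_serving_server.pipeline import PipelineClient

client = PipelineClient()
client.connect(['127.0.0.1:9993'])

# encode the test image exactly like the HTTP client does
with open("path_to_image.jpg", 'rb') as f:  # placeholder: your test image
    image = base64.b64encode(f.read()).decode('utf8')

ret = client.predict(feed_dict={"image": image}, fetch=["result"])
print(ret)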

After a successful run, the prediction returned by the model is printed on the client side, as follows:

{'err_no': 0, 'err_msg': '', 'key': ['result'], 'value': ["{'truck_type': [{'label': '载货车', 'prob': 0.98669669032096863}]}"], 'tensors': []}
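
Note that the value field holds a stringified Python dict (the server returns str({...})), so json.loads cannot parse it; ast.literal_eval can, as in this sketch:

# parse_response.py -- turn the 'value' field back into a structure (sketch)
import ast

r_json = {'err_no': 0, 'err_msg': '', 'key': ['result'],
          'value': ["{'truck_type': [{'label': '载货车', 'prob': 0.98669669032096863}]}"],
          'tensors': []}

result = ast.literal_eval(r_json['value'][0])
for item in result['truck_type']:
    print(item['label'], item['prob'])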

Source: https://blog.csdn.net/loutengyuan/article/details/126674945