Deploying a TensorFlow Serving service with Docker on CentOS

1. Install Docker

curl -sSL https://get.daocloud.io/docker | sh

2. Pull the TensorFlow Serving image

docker pull tensorflow/serving

3. Deploy and start the Serving model

Create a folder in the current directory to hold the model, for example tf_serving_model, and place the exported SavedModel inside it. The directory tree should look like this:

-tf_serving_model
	-1
		-assets
		-variables
		saved_model.pb

When a new model version is added, name its folder 2, 3, and so on; TensorFlow Serving automatically loads the latest version.
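If you need to produce this layout yourself, a trained model can be exported straight into a numbered version folder. A minimal sketch, assuming a trained tf.keras model (the tiny Sequential model here is only a placeholder):

import tensorflow as tf

# Placeholder model; in practice export your own trained model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])

# Write the SavedModel into version folder "1" under tf_serving_model/.
# A later version would simply be exported into "2", "3", and so on.
tf.saved_model.save(model, "tf_serving_model/1")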

Next, start the serving container:

docker run -p 8501:8501 -p 8500:8500 \
--mount type=bind,\
source=$(pwd)/tf_serving_model/,\
target=/models/diamond \
-e MODEL_NAME=diamond -t tensorflow/serving
where:
gRPC listens on port 8500 by default, and the HTTP/REST API listens on port 8501.
source: the host path of the model to load.
target: the mount point inside the Docker container. /models/ is the default model location in the serving image; diamond is my model name and must match MODEL_NAME. The serving binary automatically loads the models it finds under /models/ inside the container, and MODEL_NAME selects which model under /models/ to serve.
-t: allocates a pseudo-terminal; the final argument, tensorflow/serving, is the image to run.
-e: sets the MODEL_NAME environment variable, i.e. the model name.
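Once the container is up, a quick way to confirm that the model loaded is to query the model-status endpoint on the REST port. A minimal sketch using the requests library, assuming the container above is running locally:

import requests

# TF Serving reports model status at /v1/models/<MODEL_NAME> on the REST port (8501).
resp = requests.get("http://localhost:8501/v1/models/diamond")
print(resp.json())  # the loaded version should be listed with state "AVAILABLE"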

4. Use the model

A TF Serving client can talk to the server in two ways: gRPC or the RESTful API. The example below uses gRPC.

import grpc
import requests
import tensorflow as tf
import cv2
import numpy as np

from modules import utils
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc
 
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util
 
server='localhost:8500'
image='http://xxxxxxx.jpeg'
# The idea: fetch the image from its URL, preprocess it into a NumPy array, then run the model's object detection on it.
def main(visualization = True):
    # Set up the gRPC channel and stub
    options = [('grpc.max_send_message_length', 1000 * 1024 * 1024), 
            ('grpc.max_receive_message_length', 1000 * 1024 * 1024)]   
    channel = grpc.insecure_channel(server, options = options)
    stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)
    request = predict_pb2.PredictRequest()
    request.model_spec.name = 'diamond'  # model name, must match MODEL_NAME
    request.model_spec.signature_name = 'serving_default'

    # Fetch the input image and build the request
    response = requests.get(image)
    img, _, _, _, _ = utils.preprocess(response.content)  # my own preprocessing function; you could instead just decode the image into a NumPy array with cv2
    if len(img.shape) == 2:  
        img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
    height = img.shape[0]
    width = img.shape[1]
    print("Image shape:", img.shape)
    request.inputs['input_tensor'].CopyFrom(
        tf.make_tensor_proto(img.astype(dtype=np.uint8), shape=[1, height, width, 3]))

    # Option 1: synchronous call (slower)
    # result = stub.Predict(request, 10.0)  # 10 secs timeout

    # Option 2: asynchronous call via a future (faster)
    result_future = stub.Predict.future(request, 10.0)  # 10 secs timeout
    result = result_future.result() 
    
    boxes = result.outputs['detection_boxes'].float_val
    classes = result.outputs['detection_classes'].float_val
    scores = result.outputs['detection_scores'].float_val
    # Reshape the outputs
    boxes = np.reshape(boxes,[len(boxes)//4,4])
    classes = np.squeeze(classes).astype(np.int32)
    scores = np.squeeze(scores)      
    
    # Visualize the detection results
    if visualization:
        category_index = label_map_util.create_category_index_from_labelmap('./modules/distance/label_map.pbtxt', use_display_name=True)
        tf.keras.backend.clear_session()  
        vis_util.visualize_boxes_and_labels_on_image_array( 
            img,
            boxes,
            classes,
            scores,
            category_index,
            instance_masks=result.outputs.get('detection_masks_reframed', None), 
            use_normalized_coordinates=True,
            max_boxes_to_draw=5,
            min_score_thresh=0.2,
            line_thickness=8)
        # Save the result image
        cv2.imwrite('result.jpg', img)
 
if __name__ == '__main__':
    main()
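Note that the signature name 'serving_default' and the input key 'input_tensor' depend on how the SavedModel was exported. If you are unsure what your model expects, you can inspect the SavedModel directly; a minimal sketch:

import tensorflow as tf

# Load the exported model (version folder 1) and inspect its serving signature.
loaded = tf.saved_model.load('tf_serving_model/1')
infer = loaded.signatures['serving_default']
print(infer.structured_input_signature)  # input tensor names, shapes and dtypes
print(infer.structured_outputs)          # output tensor names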

Run the script; the detection result is saved to result.jpg.
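For comparison, the same model can also be queried over the RESTful API on port 8501 instead of gRPC. A minimal sketch with requests, assuming the deployment above; the test image path is just a placeholder, and the input key must match your model's signature:

import cv2
import numpy as np
import requests

# Read a local test image as a uint8 BGR array (placeholder path).
img = cv2.imread('test.jpg')

# TF Serving REST predict endpoint: /v1/models/<MODEL_NAME>:predict on port 8501.
payload = {"inputs": {"input_tensor": img[np.newaxis, ...].tolist()}}
resp = requests.post("http://localhost:8501/v1/models/diamond:predict", json=payload)
outputs = resp.json()["outputs"]  # detection_boxes, detection_classes, detection_scores, ...
print(list(outputs.keys()))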

Original article: blog.csdn.net/SingDanceRapBall/article/details/123111360