TensorFlow Serving installation and configuration

TensorFlow Serving is a high-performance, open-source library for serving machine learning models. It takes trained machine learning models and deploys them online, using gRPC as the interface for accepting external calls. A further highlight is that it supports hot model updates and automatic model version management. This means that once TensorFlow Serving is deployed, you no longer need to worry about the online service; you only need to care about training your models offline.

1. Install Bazel

Write the following content to /etc/yum.repos.d/vbatts-bazel-epel-7.repo:

[vbatts-bazel]
name=Copr repo for bazel owned by vbatts
baseurl=https://copr-be.cloud.fedoraproject.org/results/vbatts/bazel/epel-7-$basearch/
type=rpm-md
skip_if_unavailable=True
gpgcheck=1
gpgkey=https://copr-be.cloud.fedoraproject.org/results/vbatts/bazel/pubkey.gpg
repo_gpgcheck=0
enabled=1
enabled_metadata=1

Then execute the commands:

yum install bazel
yum install patch

2. Install gRPC and related packages

pip install grpcio
pip install patch
pip install tensorflow
pip install tensorflow-serving-api
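
To quickly confirm that these packages installed correctly, a minimal import check can be run. This is only a sketch; the tensorflow_serving.apis module paths are assumed to match the tensorflow-serving-api version installed above:

# check_install.py - sanity check that the pip packages import correctly
import grpc                      # provided by grpcio
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2               # provided by tensorflow-serving-api
from tensorflow_serving.apis import prediction_service_pb2_grpc

print("grpc:", grpc.__version__)
print("tensorflow:", tf.__version__)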

3. Download the TensorFlow Serving source

git clone --recurse-submodules https://github.com/tensorflow/serving
cd serving
cd tensorflow
./configure
cd ..
# build
bazel build -c opt tensorflow_serving/...
# test
bazel test -c opt tensorflow_serving/...

4. Test case

Export Model

The bazel way:

bazel-bin/tensorflow_serving/example/mnist_saved_model /tmp/mnist_model

The python way:

        You need to have installed tensorflow-serving-api via pip.

python tensorflow_serving/example/mnist_saved_model.py /tmp/mnist_model
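
What mnist_saved_model.py essentially does is write the trained graph out with the TF 1.x SavedModel API. A minimal sketch of that export step is shown below; the tensor names and the version argument are illustrative, not the exact code of the official example:

import tensorflow as tf

def export_model(sess, export_dir, x, y, version=1):
    # x is the input placeholder, y the prediction tensor; both are assumed
    # to already exist in the graph held by sess.
    # tensorflow_model_server expects a numeric version subdirectory.
    builder = tf.saved_model.builder.SavedModelBuilder("%s/%d" % (export_dir, version))

    # Describe the inputs and outputs of the serving signature.
    inputs = {"images": tf.saved_model.utils.build_tensor_info(x)}
    outputs = {"scores": tf.saved_model.utils.build_tensor_info(y)}
    signature = tf.saved_model.signature_def_utils.build_signature_def(
        inputs=inputs,
        outputs=outputs,
        method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME)

    builder.add_meta_graph_and_variables(
        sess,
        [tf.saved_model.tag_constants.SERVING],
        signature_def_map={"predict_images": signature})
    builder.save()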

Deploy Model

bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9000 --model_name=mnist --model_base_path=/tmp/mnist_model/

Test Model

        There are likewise two ways, bazel and python; as before, we do not use the bazel way here.

python tensorflow_serving/example/mnist_client.py --num_tests=1000 --server=localhost:9000
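
Internally, mnist_client.py builds a gRPC PredictRequest and sends it to the model server. A stripped-down sketch of that call is below; the server address matches the deployment above, the random image is a placeholder, and the grpc-stub flavor of the client API is assumed:

import grpc
import numpy as np
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc

# Connect to the model server started above.
channel = grpc.insecure_channel("localhost:9000")
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

# Model name and signature name must match what was exported.
request = predict_pb2.PredictRequest()
request.model_spec.name = "mnist"
request.model_spec.signature_name = "predict_images"

image = np.random.rand(1, 784).astype(np.float32)   # dummy flattened 28x28 image
request.inputs["images"].CopyFrom(tf.make_tensor_proto(image, shape=image.shape))

result = stub.Predict(request, 10.0)                 # 10-second timeout
print(result.outputs["scores"])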

5. Process for implementing a custom model export

 

1. Use tf.app.flags to define parameters so the script can receive command-line arguments.

2. In the main function, write the model training code and the code that exports the trained model (see the sketch after this list).

3. Add the custom code to the BUILD file, following its existing format.

4. Compile, run the export command, and deploy the model; a client can then connect to the service.
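
Putting steps 1 and 2 together, a skeleton of such a custom export script might look like the following. The flag names and the trivial placeholder graph are illustrative only, and export_model() refers to the earlier sketch, not to anything in the official examples:

import tensorflow as tf

# Step 1: define parameters with tf.app.flags so they can be passed on the command line.
tf.app.flags.DEFINE_integer("training_iteration", 1000, "number of training iterations")
tf.app.flags.DEFINE_integer("model_version", 1, "version number of the exported model")
tf.app.flags.DEFINE_string("export_dir", "/tmp/custom_model", "directory to export the model to")
FLAGS = tf.app.flags.FLAGS

def main(_):
    # Step 2: build and train the model (a trivial placeholder graph here), then export it.
    x = tf.placeholder(tf.float32, shape=[None, 784], name="x")
    w = tf.Variable(tf.zeros([784, 10]))
    b = tf.Variable(tf.zeros([10]))
    y = tf.nn.softmax(tf.matmul(x, w) + b, name="y")
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # ... real training over FLAGS.training_iteration iterations would go here ...
        export_model(sess, FLAGS.export_dir, x, y, FLAGS.model_version)

if __name__ == "__main__":
    tf.app.run()

Steps 3 and 4 then amount to registering this script as a target in the BUILD file, building it with bazel, running the export, and pointing tensorflow_model_server at the export directory, just as in the mnist example above.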

Existing problems

    TensorFlow Serving model publishing is tightly coupled with the code; the official examples cover only two models, mnist and inception; publishing a model generally requires writing a fair amount of code, and the documentation is also lacking.
Attachment: an LSTM model (lstm.py), the code that exports the LSTM model (lstm_saved_model.py), and the code that calls the service to make predictions (lstm_client1.py).

 


Origin blog.csdn.net/zwahut/article/details/90637916