Building Standard TensorFlow ModelServer

From https://tensorflow.google.cn/serving/serving_advanced

This tutorial shows you how to use TensorFlow Serving components to build the standard TensorFlow ModelServer that dynamically discovers and serves new versions of a trained TensorFlow model.

Train And Export TensorFlow Model

Clear the export directory if it already exists:

$ rm -rf /tmp/mnist_model/

Train (with 100 iterations) and export the first version of the model:

$ cd serving/
$ python tensorflow_serving/example/mnist_saved_model.py --training_iteration=100 --model_version=1 /tmp/mnist_model

Train (with 2000 iterations) and export the second version of the model:

$ python tensorflow_serving/example/mnist_saved_model.py --training_iteration=2000 --model_version=2 /tmp/mnist_model
$ ls /tmp/mnist_model/
1 2
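The `ls` output above reflects the directory convention TensorFlow Serving relies on: each version of a model lives in a numerically named subdirectory of the model base path. A minimal pure-Python sketch of that layout (the helper name `version_export_path` is ours for illustration, not part of any TensorFlow API):

```python
import os
import tempfile

# Each model version is exported into a numbered subdirectory of the base
# path, e.g. /tmp/mnist_model/1 and /tmp/mnist_model/2 above.
def version_export_path(model_base_path, version):
    return os.path.join(model_base_path, str(version))

base = tempfile.mkdtemp()  # stand-in for /tmp/mnist_model
for version in (1, 2):
    os.makedirs(version_export_path(base, version))

print(sorted(os.listdir(base)))  # ['1', '2'], matching `ls /tmp/mnist_model/`
```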

Test and Run The Server

Copy the first version of the export to the monitored folder and start the server.

$ mkdir /tmp/monitored
$ cp -r /tmp/mnist_model/1 /tmp/monitored
$ tensorflow_model_server --enable_batching --port=9000 --model_name=mnist --model_base_path=/tmp/monitored
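Instead of the `--model_name`/`--model_base_path` flags, the server can also read its model list from a configuration file via the `--model_config_file` flag. A sketch of an equivalent config, in the text protobuf format the TensorFlow Serving documentation describes:

```
model_config_list {
  config {
    name: "mnist"
    base_path: "/tmp/monitored"
    model_platform: "tensorflow"
  }
}
```

Saved as, say, `/tmp/models.conf`, it would be passed as `tensorflow_model_server --port=9000 --model_config_file=/tmp/models.conf`. The config-file form also allows serving several models from one server.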

Test the first version.

$ python tensorflow_serving/example/mnist_client.py --num_tests=1000 --server=localhost:9000 --concurrency=10
...
Inference error rate: 13.1%

Then we copy the second version of the export to the monitored folder and re-run the test:

$ cp -r /tmp/mnist_model/2 /tmp/monitored
$ python tensorflow_serving/example/mnist_client.py --num_tests=1000 --server=localhost:9000 --concurrency=10
...
Inference error rate: 9.5%

The lower error rate confirms that the server automatically discovered the new version and is using it for serving.
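Under the hood, the server periodically polls the model base path and, by default, serves the latest version, i.e. the largest numeric subdirectory. A pure-Python sketch of that selection logic (the function `latest_version` is ours for illustration, not TensorFlow Serving's API):

```python
import os
import tempfile

def latest_version(model_base_path):
    """Return the largest numeric subdirectory name, or None if there is none."""
    versions = [d for d in os.listdir(model_base_path) if d.isdigit()]
    return max((int(v) for v in versions), default=None)

monitored = tempfile.mkdtemp()             # stand-in for /tmp/monitored
os.makedirs(os.path.join(monitored, "1"))
print(latest_version(monitored))           # 1: only version 1 is present

os.makedirs(os.path.join(monitored, "2"))  # like `cp -r /tmp/mnist_model/2 ...`
print(latest_version(monitored))           # 2: the new version is picked up
```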


Reposted from blog.csdn.net/u011026329/article/details/79184184