5.4 TensorFlow model persistence

  TensorFlow provides a very simple API for saving and restoring a neural network model: the tf.train.Saver class. The following code shows how to save a TensorFlow computation graph.

import tensorflow as tf

# Declare two variables and compute their sum.
v1 = tf.Variable(tf.constant(1.0, shape=[1]), name="v1")
v2 = tf.Variable(tf.constant(2.0, shape=[1]), name="v2")
result = v1 + v2

init_op = tf.initialize_all_variables()
# Declare the tf.train.Saver class used to save the model.
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(init_op)
    # Save the model to the file /path/to/model/model.ckpt.
    saver.save(sess, "/path/to/model/model.ckpt")

  This code implements a simple model-saving routine: the saver.save function writes the TensorFlow model to the specified path. Although the program specifies only one file path, three files appear in that directory, because TensorFlow stores the structure of the computation graph and the parameter values separately.

  The first file, model.ckpt.meta, stores the structure of the TensorFlow computation graph, which can be loosely understood as the architecture of the neural network. The second file, model.ckpt, stores the value of every variable in the TensorFlow program. The last file, named checkpoint, keeps a list of all the model files in the directory.

  Here is how to load a TensorFlow model that has already been saved.

import tensorflow as tf

# Declare the variables in the same way as the code that saved the model.
v1 = tf.Variable(tf.constant(1.0, shape=[1]), name="v1")
v2 = tf.Variable(tf.constant(2.0, shape=[1]), name="v2")
result = v1 + v2

saver = tf.train.Saver()

with tf.Session() as sess:
    # Load the saved model and carry out the addition using the variable
    # values stored in the model.
    saver.restore(sess, "/path/to/model/model.ckpt")
    print(sess.run(result))

  The only difference between the two pieces of code is that the loading code does not run the variable-initialization op; instead, the variable values are loaded from the saved model.

  It is also possible to load the persisted graph directly, without repeating the graph definition.

import tensorflow as tf

# Load the persisted graph directly.
saver = tf.train.import_meta_graph("/path/to/model/model.ckpt.meta")
with tf.Session() as sess:
    saver.restore(sess, "/path/to/model/model.ckpt")
    # Get the tensor by its name.
    print(sess.run(tf.get_default_graph().get_tensor_by_name("add:0")))

  By default, the programs above save and load all the variables defined on the TensorFlow computation graph. Sometimes only some of the variables need to be saved or loaded; in that case, a list of the variables to save or load can be provided when declaring the tf.train.Saver class.

  The tf.train.Saver class also supports renaming variables when saving or loading.

# Declare variables whose names differ from the names stored in the model.
v1 = tf.Variable(tf.constant(1.0, shape=[1]), name="other-v1")
v2 = tf.Variable(tf.constant(2.0, shape=[1]), name="other-v2")

# Loading the model directly with tf.train.Saver() would raise an error
# because the saved variables cannot be found.

# Using a dictionary to rename the variables makes it possible to load the
# original model: the value saved under the name "v1" is loaded into the
# variable v1 (now named "other-v1"), and the value saved under "v2" is
# loaded into the variable v2 (now named "other-v2").
saver = tf.train.Saver({"v1": v1, "v2": v2})

  One of the main uses of this renaming mechanism is to make it convenient to work with the moving averages of variables. In TensorFlow, the moving average of each variable is maintained by a shadow variable, so obtaining a variable's moving average really means reading the value of this shadow variable. If the shadow variables can be mapped directly onto the variables themselves when the model is loaded, then code that uses the trained model does not need to call any extra function to obtain the moving averages.

  The following is an example of saving a model that uses moving averages.

import tensorflow as tf

v = tf.Variable(0, dtype=tf.float32, name="v")
# Before the moving-average model is applied there is only the variable v,
# so the following loop prints only "v:0".
for variables in tf.all_variables():
    print(variables.name)

ema = tf.train.ExponentialMovingAverage(0.99)
maintain_average_op = ema.apply(tf.all_variables())
# After the moving-average model is applied, TensorFlow automatically
# generates the shadow variable v/ExponentialMovingAverage, so the loop
# below prints "v:0" and "v/ExponentialMovingAverage:0".
for variables in tf.all_variables():
    print(variables.name)

saver = tf.train.Saver()
with tf.Session() as sess:
    init_op = tf.initialize_all_variables()
    sess.run(init_op)

    sess.run(tf.assign(v, 10))
    sess.run(maintain_average_op)
    # When saving, TensorFlow stores both variables, v:0 and
    # v/ExponentialMovingAverage:0.
    saver.save(sess, "/path/to/model/model.ckpt")
    print(sess.run([v, ema.average(v)]))  # outputs [10.0, 0.099999905]
    

  The following code shows how to read the moving average of a variable directly through renaming. As the program's output shows, the value read into the variable v is actually the moving average of the variable v from the code above. In this way, forward-propagation code written before the moving-average model was introduced can be used to compute results without any changes.

v = tf.Variable(0, dtype=tf.float32, name="v")
# Rename the variable: the saved moving average of v is assigned to v itself.
saver = tf.train.Saver({"v/ExponentialMovingAverage": v})
with tf.Session() as sess:
    saver.restore(sess, "/path/to/model/model.ckpt")
    # Outputs 0.099999905, the moving average of v in the original model.
    print(sess.run(v))

  Using tf.train.Saver saves all the information needed to run a TensorFlow program, but sometimes not all of it is needed. For this, TensorFlow provides the convert_variables_to_constants function, which stores the variables in the computation graph, together with their values, as constants, so that the whole computation graph can be saved in a single file.

import tensorflow as tf
from tensorflow.python.framework import graph_util

v1 = tf.Variable(tf.constant(1.0, shape=[1]), name="v1")
v2 = tf.Variable(tf.constant(2.0, shape=[1]), name="v2")
result = v1 + v2

init_op = tf.initialize_all_variables()
with tf.Session() as sess:
    sess.run(init_op)
    # Export the GraphDef of the current computation graph; this part alone
    # is enough to carry out the computation from input to output.
    graph_def = tf.get_default_graph().as_graph_def()

    # The line below converts the variables in the graph, together with their
    # values, into constants, and removes nodes that are not needed. The last
    # parameter, ['add'], gives the name of the node to keep; the add node is
    # the addition operation on the two variables above.
    output_graph_def = graph_util.convert_variables_to_constants(
        sess, graph_def, ['add'])
    # Export the model to a file.
    with tf.gfile.GFile("/path/to/model/combined_model.pb", "wb") as f:
        f.write(output_graph_def.SerializeToString())
        

  The addition result can then be computed directly by the following program. When only the value of a certain node in the graph is needed, this provides a more convenient approach.

import tensorflow as tf
from tensorflow.python.platform import gfile

with tf.Session() as sess:
    model_filename = "/path/to/model/combined_model.pb"
    # Read the saved model file and parse it into the corresponding
    # GraphDef protocol buffer.
    with gfile.FastGFile(model_filename, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    # Load the saved graph_def into the current graph. return_elements=["add:0"]
    # gives the name of the tensor to return. When the model was saved, the
    # name of the compute node was given, hence "add"; at load time what is
    # given is a tensor name, hence "add:0".
    result = tf.import_graph_def(graph_def, return_elements=["add:0"])
    print(sess.run(result))

  

Origin www.cnblogs.com/CZT-TS/p/11241825.html