First, the background
Deep learning models involve large sets of variables. In the past we could simply manage them as globals in our own code, but doing that in TensorFlow makes the variables hard to manage and hard to encapsulate. TensorFlow therefore provides a dedicated variable-management facility: the variable scope mechanism.
Second, the two important APIs
tf.get_variable(name, shape=None)  # return the variable with the given name, creating it if necessary
tf.variable_scope(name_or_scope, reuse=None)  # group all variables created inside it into the namespace name_or_scope
Third, explanation
Let's start with the first API.
tf.get_variable(name, shape=None) creates a variable just like tf.Variable() does, except that it first checks whether a variable with the same name already exists:
import tensorflow as tf


with tf.variable_scope('const'):
    a = tf.get_variable('a', [1], initializer=tf.constant_initializer(1.))
Now the second API.
Its most important parameter is reuse, which takes three values: None, True, and tf.AUTO_REUSE.
reuse=None: inherit the reuse flag from the parent scope
reuse=True: only reuse existing variables; creating new ones is not allowed
import tensorflow as tf


with tf.variable_scope('const'):
    a = tf.get_variable('a', [1])

with tf.variable_scope('const', reuse=tf.AUTO_REUSE):
    b = tf.get_variable('a', [1])

print(a == b)  # True
reuse=tf.AUTO_REUSE: create the variable if it does not exist, reuse it if it does; this is the safest usage
import tensorflow as tf


def test():
    with tf.variable_scope('const', reuse=tf.AUTO_REUSE):
        a = tf.get_variable('a', [1])

    return a

x = test()  # first call: the variable does not exist yet, so it is created
y = test()  # second call: the existing variable is reused
print(x == y)  # True