1. Example description
variable_scope can also be used in the form with tf.variable_scope("name") as xxxscope to capture the scope as an object. A scope object captured this way (xxxscope) is no longer restricted by the enclosing scope when it is reopened later.
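To make the escape behavior concrete, here is a minimal pure-Python sketch of the mechanism (a hypothetical Scope class invented for illustration; this is not TensorFlow's implementation): a captured scope object stores its full original prefix, so re-entering it ignores whatever scope currently encloses it.

```python
class Scope:
    """Toy stand-in for tf.variable_scope's naming behavior (illustrative only)."""
    _current = ""  # full prefix of the currently active scope

    def __init__(self, name_or_scope):
        if isinstance(name_or_scope, Scope):
            # A captured scope object carries its full original name,
            # so re-entering it ignores the enclosing prefix.
            self.name = name_or_scope.name
        else:
            prefix = Scope._current
            self.name = f"{prefix}/{name_or_scope}" if prefix else name_or_scope

    def __enter__(self):
        self._outer = Scope._current
        Scope._current = self.name
        return self

    def __exit__(self, *exc):
        Scope._current = self._outer


def get_variable_name(var_name):
    # Mimics tf.get_variable naming: active prefix + "/" + name + ":0"
    prefix = Scope._current
    return (f"{prefix}/{var_name}" if prefix else var_name) + ":0"


with Scope("scope1") as sp:
    pass

with Scope("scope2"):
    with Scope(sp):                       # reuse the captured scope object
        print(get_variable_name("v3"))    # scope1/v3:0, not scope2/scope1/v3:0
        with Scope(""):                   # empty name adds one empty level
            print(get_variable_name("v4"))  # scope1//v4:0
```

The key design point mirrored here: passing the scope *object* (rather than its string name) restores the object's absolute prefix instead of appending to the current one.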
2. Code
import tensorflow as tf

tf.reset_default_graph()

with tf.variable_scope("scope1") as sp:
    var1 = tf.get_variable("v", [1])

print("sp:", sp.name)        # sp: scope1
print("var1:", var1.name)    # var1: scope1/v:0

with tf.variable_scope("scope2"):
    var2 = tf.get_variable("v", [1])
    with tf.variable_scope(sp) as sp1:  # open sp inside scope2 as sp1, so v3 is not restricted by the outer scope
        var3 = tf.get_variable("v3", [1])
        with tf.variable_scope(""):     # an empty name gives v4 one extra empty level
            var4 = tf.get_variable("v4", [1])

print("sp1:", sp1.name)    # sp1: scope1 -- sp1 is opened under scope2, yet the output is still scope1, unchanged
print("var2:", var2.name)  # var2: scope2/v:0
print("var3:", var3.name)  # var3: scope1/v3:0 -- var3 is under scope1, again showing sp is not restricted by the outer scope
print("var4:", var4.name)  # var4: scope1//v4:0 -- v4 has one extra empty level

with tf.variable_scope("scope"):
    with tf.name_scope("bar"):
        v = tf.get_variable("v", [1])
        x = 1.0 + v
        with tf.name_scope(""):  # an empty string returns the name scope to the top level
            y = 1.0 + v

print("v:", v.name)        # v: scope/v:0 -- variable names are limited only by variable_scope, not by name_scope
print("x.op:", x.op.name)  # x.op: scope/bar/add -- operator names are limited by name_scope as well as variable_scope
print("y.op:", y.op.name)  # y.op: add -- under the empty name scope, the operator is back at the top level
3. Running results
sp: scope1
var1: scope1/v:0
sp1: scope1
var2: scope2/v:0
var3: scope1/v3:0
var4: scope1//v4:0
v: scope/v:0
x.op: scope/bar/add
y.op: add