Author: Maddock
Please credit the source when reposting: Maddock (computer vision, image processing, machine learning)
① tf.name_scope() and tf.Variable()
The fact that I write these two together tells you they are almost always used together: the goal is convenient management of parameter names. For example, in a TensorFlow implementation of AlexNet there are 5 convolutional layers, and every layer has its own "kernel", "bias", and similar parameters. To keep the parameter names of each layer distinct, tf.name_scope is introduced to manage the naming.
Code:
import tensorflow as tf

# First convolutional layer
with tf.name_scope("cov1") as scope:
    weight1 = tf.Variable([1.0, 2.0], name='weights')
    bias1 = tf.Variable([0.3], name='bias')

# Second convolutional layer
with tf.name_scope('cov2') as scope:
    weight2 = tf.Variable([4.0, 2.0], name='weights')
    bias2 = tf.Variable([0.33], name='bias')

print(weight1.name)
print(weight2.name)

Output:

cov1/weights:0
cov2/weights:0
Note: once the code above has executed, the two name scopes cov1 and cov2 already exist in the default graph. If you execute the same code again, TensorFlow does not reuse them; it generates new, uniquified name scopes instead, as follows:
import tensorflow as tf

# First convolutional layer
with tf.name_scope("cov1") as scope:
    weight1 = tf.Variable([1.0, 2.0], name='weights')
    bias1 = tf.Variable([0.3], name='bias')

# Second convolutional layer
with tf.name_scope('cov2') as scope:
    weight2 = tf.Variable([4.0, 2.0], name='weights')
    bias2 = tf.Variable([0.33], name='bias')

print(weight1.name)
print(weight2.name)

Output (second execution against the same graph):

cov1_1/weights:0
cov2_1/weights:0
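The uniquification above (cov1 becoming cov1_1 on a second run) can be mimicked with a small pure-Python sketch. This is only an illustration of the naming logic, not TensorFlow's actual implementation; the Graph class and unique_scope method here are hypothetical names:

```python
# Toy sketch of name_scope-style uniquification (illustrative only,
# not TensorFlow's real implementation).
class Graph:
    def __init__(self):
        self.used = {}  # scope name -> times it has been requested

    def unique_scope(self, name):
        count = self.used.get(name, 0)
        self.used[name] = count + 1
        # First request keeps the bare name; repeats get a _N suffix.
        return name if count == 0 else "%s_%d" % (name, count)

g = Graph()
print(g.unique_scope("cov1"))  # cov1
print(g.unique_scope("cov2"))  # cov2
# "Running the script again" in the same graph:
print(g.unique_scope("cov1"))  # cov1_1
print(g.unique_scope("cov2"))  # cov2_1
```

The key point the sketch captures is that uniquification is per-graph state: re-running the same scope-creating code in one process mutates that state, which is why the variable names change.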
② tf.variable_scope() and tf.get_variable()
tf.variable_scope() is mainly used together with tf.get_variable(); the goal is variable sharing.
Code:
# Using tf.variable_scope() and tf.get_variable()
import tensorflow as tf

with tf.variable_scope('v_scope') as scope1:
    weights1 = tf.get_variable("Weights", shape=[2, 3])
    bias1 = tf.get_variable("bias", shape=[3])

# Share the variables defined above
# Note: a variable fetched inside the scope below must already have been
# defined before reuse=True is set, otherwise an error is raised
with tf.variable_scope('v_scope', reuse=True) as scope2:
    weights2 = tf.get_variable("Weights")

print(weights1.name)
print(weights2.name)
Output:

v_scope/Weights:0
v_scope/Weights:0
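The sharing rule tf.get_variable() follows (create on first use, and under reuse=True return the existing variable or fail if it does not exist) can be sketched as a registry in plain Python. This is an illustration of the semantics only, not the real API; the get_variable function and _variables dict here are hypothetical:

```python
# Toy sketch of get_variable sharing semantics (illustrative only,
# not TensorFlow's real implementation).
_variables = {}

def get_variable(scope, name, shape=None, reuse=False):
    full_name = "%s/%s:0" % (scope, name)
    if reuse:
        # Under reuse=True the variable must already exist.
        if full_name not in _variables:
            raise ValueError("Variable %s does not exist" % full_name)
        return full_name
    # Without reuse, redefining an existing variable is an error.
    if full_name in _variables:
        raise ValueError("Variable %s already exists" % full_name)
    _variables[full_name] = shape
    return full_name

w1 = get_variable('v_scope', 'Weights', shape=[2, 3])
w2 = get_variable('v_scope', 'Weights', reuse=True)
print(w1)  # v_scope/Weights:0
print(w2)  # v_scope/Weights:0
```

Both branches raising an error is the part that matters: it is what forces you to be explicit about whether a call creates a variable or shares one, which is exactly why the note above says the variable must exist before reuse=True is set.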
end