tf.name_scope() and tf.Variable(), tf.variable_scope() and tf.get_variable()

The following content is reproduced from:
Author: Maddock
When reprinting, please indicate the source: Maddock - Computer Vision, Image Processing, Machine Learning

① tf.name_scope() and tf.Variable():

Since these two are written together, it is clear that they are usually used together. The purpose is to make parameter naming easier to manage. For example, in a hands-on AlexNet implementation in TensorFlow, AlexNet has five convolutional layers, and each layer has parameters such as "kernel" and "bias". To keep the parameter names of each layer distinct, tf.name_scope is introduced for name management.
Code:

import tensorflow as tf

# First convolutional layer
with tf.name_scope("cov1") as scope:
    weight1 = tf.Variable([1.0, 2.0], name='weights')
    bias1 = tf.Variable([0.3], name='bias')

# Second convolutional layer
with tf.name_scope('cov2') as scope:
    weight2 = tf.Variable([4.0, 2.0], name='weights')
    bias2 = tf.Variable([0.33], name='bias')

print(weight1.name)
print(weight2.name)
Output (under TensorFlow 1.x):

cov1/weights:0
cov2/weights:0
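As a side note, tf.name_scope prefixes the names of operations created inside it as well, not only variables. Below is a minimal sketch under TensorFlow 1.x; the op name 'pre_activation' is a hypothetical example and not from the original code:

import tensorflow as tf

with tf.name_scope("cov1") as scope:
    weight1 = tf.Variable([1.0, 2.0], name='weights')
    bias1 = tf.Variable([0.3], name='bias')
    # The add op also receives the 'cov1/' prefix
    pre_activation = tf.add(weight1, bias1, name='pre_activation')

print(pre_activation.name)   # cov1/pre_activation:0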

Note: after the code above has been executed, the two name scopes cov1 and cov2 already exist in the default graph. If the same code is executed again (for example, in the same interactive session), TensorFlow does not reuse them but generates new, uniquified name scopes instead,

as follows:

import tensorflow as tf

# First convolutional layer
with tf.name_scope("cov1") as scope:
    weight1 = tf.Variable([1.0, 2.0], name='weights')
    bias1 = tf.Variable([0.3], name='bias')

# Second convolutional layer
with tf.name_scope('cov2') as scope:
    weight2 = tf.Variable([4.0, 2.0], name='weights')
    bias2 = tf.Variable([0.33], name='bias')

print(weight1.name)
print(weight2.name)

Output (from the second run in the same graph):

cov1_1/weights:0
cov2_1/weights:0
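If you want the original names back instead of the uniquified ones, one option (a sketch, assuming a TensorFlow 1.x interactive session) is to clear the default graph before re-running the definitions:

import tensorflow as tf

# Discard the current default graph, so previously created name scopes disappear
tf.reset_default_graph()

with tf.name_scope("cov1") as scope:
    weight1 = tf.Variable([1.0, 2.0], name='weights')

print(weight1.name)   # cov1/weights:0 again, not cov1_1/weights:0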


② tf.variable_scope() and tf.get_variable()

tf.variable_scope() is mainly used in conjunction with tf.get_variable() to achieve variable sharing.

Code:

# Using tf.variable_scope() and tf.get_variable()
import tensorflow as tf

with tf.variable_scope('v_scope') as scope1:
    weights1 = tf.get_variable("Weights", shape=[2, 3])
    bias1 = tf.get_variable("bias", shape=[3])

# Now share the variables defined above
# Note: the variables in the scope below must already have been defined;
# only then can reuse=True be set, otherwise an error is raised
with tf.variable_scope('v_scope', reuse=True) as scope2:
    weights2 = tf.get_variable("Weights")

print(weights1.name)
print(weights2.name)

Output:

v_scope/Weights:0
v_scope/Weights:0
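A point worth noting (a minimal sketch under TensorFlow 1.x, not from the original article): tf.get_variable ignores tf.name_scope but respects tf.variable_scope, which is why variable sharing is built around the latter:

import tensorflow as tf

tf.reset_default_graph()

# name_scope does NOT affect names created by tf.get_variable
with tf.name_scope('ns'):
    v1 = tf.get_variable('v1', shape=[1])

# variable_scope DOES prefix names created by tf.get_variable
with tf.variable_scope('vs'):
    v2 = tf.get_variable('v2', shape=[1])

print(v1.name)   # v1:0      -- no 'ns/' prefix
print(v2.name)   # vs/v2:0   -- prefixed by the variable scope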

end
