1. tf.get_variable()
When used to create a variable, the tf.get_variable() function is largely equivalent to the tf.Variable() function. The following two lines create the same variable:
v = tf.get_variable("v", shape=[1], initializer=tf.constant_initializer(1.0))
v = tf.Variable(tf.constant(1.0, shape=[1]), name="v")
The shape and initializer arguments of tf.get_variable play much the same role as the corresponding arguments of tf.Variable: shape specifies the dimensions of the variable, and initializer specifies how it is initialized.
The biggest difference between tf.get_variable and tf.Variable is how the variable name is specified. For tf.Variable, the name is an optional argument given as name="v"; for tf.get_variable, the name is a required argument.
By default, tf.get_variable can only create new variables: if a variable with the given name already exists, it raises an error. To fetch an already-created variable with tf.get_variable, you must open a context manager with the tf.variable_scope function and explicitly set reuse=True in it; tf.get_variable will then directly return the variable that has already been created.
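The duplicate-name error can be reproduced with a minimal sketch. This assumes TensorFlow 2.x with the tf.compat.v1 compatibility API (under TF 1.x the same calls are available directly as tf.*):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # get_variable needs graph mode in TF 2.x

v = tf.compat.v1.get_variable(
    "v", shape=[1], initializer=tf.compat.v1.constant_initializer(1.0))

duplicate_rejected = False
try:
    # A second tf.get_variable call with the same name raises ValueError.
    tf.compat.v1.get_variable("v", shape=[1])
except ValueError:
    duplicate_rejected = True
print(duplicate_rejected)  # True
```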
2. tf.variable_scope()
The tf.variable_scope function controls the semantics of tf.get_variable.
#!/usr/bin/env python
# -*- coding:utf-8 -*-
import tensorflow as tf

# Create a variable named v inside the namespace foo.
with tf.variable_scope("foo"):
    v = tf.get_variable('v', [1], initializer=tf.constant_initializer(1.0))

# A variable named v already exists in the namespace foo, so the following
# code raises an error:
# Variable foo/v already exists, disallowed. Did you mean to set reuse=True in VarScope?
with tf.variable_scope("foo"):
    v = tf.get_variable('v', [1])

# Set the reuse parameter to True when creating the context manager;
# tf.get_variable will then directly fetch the variable already declared.
with tf.variable_scope("foo", reuse=True):
    v1 = tf.get_variable('v', [1], initializer=tf.constant_initializer(1.0))
print(v == v1)  # Prints True: v and v1 refer to the same TensorFlow variable.

# With reuse=True, tf.get_variable can only fetch variables that have already
# been created. There is no variable v in the namespace bar yet, so the
# following code raises an error:
# Variable bar/v does not exist, disallowed. Did you mean to set reuse=None in VarScope?
with tf.variable_scope("bar", reuse=True):
    v = tf.get_variable('v', [1])
As the example above shows, when tf.variable_scope creates a context manager with reuse=True, tf.get_variable inside that context directly fetches variables that have already been created, and raises an error if the variable does not exist.
Conversely, when tf.variable_scope creates a context manager with reuse=None or reuse=False, tf.get_variable creates new variables, and raises an error if a variable with the same name already exists.
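Note that tf.variable_scope also prefixes the variable's name with the scope name, which is why the same code run in different scopes produces distinct variables. A sketch, again assuming the tf.compat.v1 API on TensorFlow 2.x:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

with tf.compat.v1.variable_scope("foo"):
    v = tf.compat.v1.get_variable(
        "v", [1], initializer=tf.compat.v1.constant_initializer(1.0))
print(v.name)  # foo/v:0 -- the scope name becomes a path prefix

with tf.compat.v1.variable_scope("foo", reuse=True):
    v1 = tf.compat.v1.get_variable("v", [1])
print(v1 is v)  # True: reuse=True returns the same underlying variable
```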
3. Nesting tf.variable_scope()
The tf.variable_scope function can be nested. The program below shows how the value of the reuse parameter is determined when tf.variable_scope calls are nested.
import tensorflow as tf

with tf.variable_scope("root"):
    # tf.get_variable_scope().reuse returns the value of the reuse parameter
    # in the current context manager.
    print(tf.get_variable_scope().reuse)  # Prints False: the outermost reuse is False.

    # Open a nested context manager and explicitly set reuse to True.
    with tf.variable_scope("foo", reuse=True):
        print(tf.get_variable_scope().reuse)  # Prints True.

        # Open a nested context manager without specifying reuse; it then
        # inherits the value from the enclosing scope.
        with tf.variable_scope("bar"):
            print(tf.get_variable_scope().reuse)  # Prints True.

    # After leaving the context manager with reuse=True, reuse reverts to False.
    print(tf.get_variable_scope().reuse)  # Prints False.
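Nested scopes also compose variable names by joining the scope names with "/". A minimal sketch, assuming the tf.compat.v1 API on TensorFlow 2.x:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# Each enclosing scope contributes one path segment to the variable's name.
with tf.compat.v1.variable_scope("root"):
    with tf.compat.v1.variable_scope("foo"):
        with tf.compat.v1.variable_scope("bar"):
            v = tf.compat.v1.get_variable(
                "v", [1], initializer=tf.compat.v1.constant_initializer(1.0))
print(v.name)  # root/foo/bar/v:0
```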