tensorflow "ValueError: None values not supported."

I ran into this error while working through the TensorFlow tutorials; an answer on GitHub eventually pointed me to the cause.

The original code:

"""
即需要计算参数的移动平均
"""
def inference(input_tensor,avg_class,weights1,biases1,weights2,biases2):
	layer1 = tf.nn.relu(tf.matmul(input_tensor, avg_class.average(weights1)) + avg_class.average(biases1))
	return tf.matmul(layer1, avg_class.average(weights2)) + avg_class.average(biases2)
        
def train(mnist):
    """
    Initialize the parameters.
    """
    x = tf.placeholder...
    y_ = ...
    weights1 = tf.Variable...
    ...
    """
    Define the moving average.
    """
    variable_averages = tf.train.ExponentialMovingAverage(LEARNING_RATE_DECAY, global_step)

    variables_averages_op = variable_averages.apply(tf.trainable_variables())

    average_y = inference(x, variable_averages, weights1, biases1, weights2, biases2)
    ...

    sess.run(variables_averages_op)
	...

After learning about variable scopes, I refactored inference as shown below. The order of the statements inside train stayed the same; the only change was the call site, which became average_y = inference(x, variable_averages). After that it kept raising the exception:
"ValueError: None values not supported."

def inference(input_tensor, avg_class, reuse=False):

    with tf.variable_scope('layer1', reuse=reuse):
        weights = tf.get_variable("weights", [INPUT_NODE, LAYER1_NODE],
                                  initializer=tf.truncated_normal_initializer(stddev=0.1))
        biases = tf.get_variable("biases", [LAYER1_NODE],
                                 initializer=tf.constant_initializer(0.0))

        layer1 = tf.nn.relu(tf.matmul(input_tensor, avg_class.average(weights)) + avg_class.average(biases))

    with tf.variable_scope('layer2', reuse=reuse):
        weights = tf.get_variable("weights", [LAYER1_NODE, OUTPUT_NODE],
                                  initializer=tf.truncated_normal_initializer(stddev=0.1))
        biases = tf.get_variable("biases", [OUTPUT_NODE],
                                 initializer=tf.constant_initializer(0.0))

        layer2 = tf.matmul(layer1, avg_class.average(weights)) + avg_class.average(biases)

    return layer2

After reading the GitHub answer, the cause became clear: variables_averages_op = variable_averages.apply(tf.trainable_variables()) runs before inference is ever called! In the original version the weights and biases were created before variable_averages.apply, so tf.trainable_variables() could collect them. In the refactored version the variables are created inside inference via tf.get_variable, so at the moment apply is called tf.trainable_variables() is still empty, no shadow copies are created, and avg_class.average(...) yields None for those variables, which triggers "None values not supported." To verify this guess, I modified the function as follows:

def inference(input_tensor, avg_class, reuse=False):

    with tf.variable_scope('layer1', reuse=reuse):
        weights = tf.get_variable("weights", [INPUT_NODE, LAYER1_NODE],
                                  initializer=tf.truncated_normal_initializer(stddev=0.1))
        biases = tf.get_variable("biases", [LAYER1_NODE],
                                 initializer=tf.constant_initializer(0.0))

        # apply() now runs after this layer's variables exist,
        # so tf.trainable_variables() is no longer empty.
        variables_averages_op = avg_class.apply(tf.trainable_variables())

        layer1 = tf.nn.relu(tf.matmul(input_tensor, avg_class.average(weights)) + avg_class.average(biases))

    with tf.variable_scope('layer2', reuse=reuse):
        weights = tf.get_variable("weights", [LAYER1_NODE, OUTPUT_NODE],
                                  initializer=tf.truncated_normal_initializer(stddev=0.1))
        biases = tf.get_variable("biases", [OUTPUT_NODE],
                                 initializer=tf.constant_initializer(0.0))

        # Re-apply so that layer2's variables get shadow copies as well.
        variables_averages_op = avg_class.apply(tf.trainable_variables())

        layer2 = tf.matmul(layer1, avg_class.average(weights)) + avg_class.average(biases)

    return layer2, variables_averages_op
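The ordering pitfall can be reproduced without any TensorFlow setup. Below is a toy stand-in (a hypothetical ToyEMA class, not a real TF API) that mimics the one relevant behavior: apply() only takes a snapshot of the variables that exist at the moment it is called.

```python
class ToyEMA:
    """Toy stand-in for tf.train.ExponentialMovingAverage (illustration only)."""
    def __init__(self, decay):
        self.decay = decay
        self.shadow = {}  # shadow copies, keyed by variable name

    def apply(self, var_list):
        # Only variables that already exist when apply() runs get a shadow copy.
        for name, value in var_list:
            self.shadow[name] = value

    def average(self, name):
        # Returns None for variables created after apply() -- the same
        # situation that leads to "None values not supported" in TF.
        return self.shadow.get(name)


trainable = []                 # stand-in for tf.trainable_variables()
ema = ToyEMA(0.99)
ema.apply(trainable)           # called too early: the list is still empty
trainable.append(("weights", 1.0))
print(ema.average("weights"))  # -> None, which TF rejects downstream
```

Calling apply() again after the variable exists makes average() return the expected value instead of None, which is exactly what the modified inference above does per layer.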

Sure enough, it no longer raised the error and ran correctly. But code like this is obviously ugly (apply is called inside inference, once per scope), and for now I don't know how to restructure it into something cleaner.
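For reference, here is one common way to restructure this. This is a sketch under my own assumptions (TF 1.x API, the same INPUT_NODE/LAYER1_NODE/OUTPUT_NODE constants, and the original's LEARNING_RATE_DECAY/global_step names), not code from the post: keep apply() out of inference entirely by building the network once with avg_class=None so the variables get created, calling apply(), and then building the averaged forward pass with reuse=True.

```python
import tensorflow as tf  # TF 1.x API assumed

def inference(input_tensor, avg_class, reuse=False):
    # avg_class=None builds the plain forward pass;
    # otherwise the shadow (moving-average) values are used.
    def get(v):
        return v if avg_class is None else avg_class.average(v)

    with tf.variable_scope('layer1', reuse=reuse):
        weights = tf.get_variable("weights", [INPUT_NODE, LAYER1_NODE],
                                  initializer=tf.truncated_normal_initializer(stddev=0.1))
        biases = tf.get_variable("biases", [LAYER1_NODE],
                                 initializer=tf.constant_initializer(0.0))
        layer1 = tf.nn.relu(tf.matmul(input_tensor, get(weights)) + get(biases))

    with tf.variable_scope('layer2', reuse=reuse):
        weights = tf.get_variable("weights", [LAYER1_NODE, OUTPUT_NODE],
                                  initializer=tf.truncated_normal_initializer(stddev=0.1))
        biases = tf.get_variable("biases", [OUTPUT_NODE],
                                 initializer=tf.constant_initializer(0.0))
        return tf.matmul(layer1, get(weights)) + get(biases)

# Inside train(): create the variables first, then apply(), then reuse.
y = inference(x, None)  # first call creates the weights/biases
variable_averages = tf.train.ExponentialMovingAverage(LEARNING_RATE_DECAY, global_step)
variables_averages_op = variable_averages.apply(tf.trainable_variables())  # list is now non-empty
average_y = inference(x, variable_averages, reuse=True)  # reuses the same variables
```

With this layout, apply() is called exactly once, after every trainable variable exists, so average() never sees an untracked variable.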

Reposted from blog.csdn.net/normol/article/details/88380432