Generally speaking, there are two functions for listing TensorFlow (1.x) variables:
tf.trainable_variables() and tf.all_variables() (the latter is deprecated; recent TF 1.x versions use tf.global_variables() instead)
The difference is:
tf.trainable_variables() returns only the variables that will be trained (those created with trainable=True)
tf.all_variables() returns all variables in the graph
In general, we care most about the trainable variables.
Note that the variables must be initialized (e.g. via tf.global_variables_initializer()) before their values can be fetched from the session.
1. Print the name of the variable that needs to be trained
sess.run(tf.global_variables_initializer())
variable_names = [v.name for v in tf.trainable_variables()]
print(variable_names)
2. Print the variable names and variable values that need to be trained
variable_names = [v.name for v in tf.trainable_variables()]
values = sess.run(variable_names)
for k, v in zip(variable_names, values):
    print("Variable: ", k)
    print("Shape: ", v.shape)
    print(v)
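The pattern in step 2 (collect names, fetch all values in one call, then zip names back to values) can be sketched without TensorFlow; here a plain dict stands in for the session, and the variable names are hypothetical:

```python
# A dict standing in for the graph: maps variable names to (fake) values.
graph = {
    "dense/kernel:0": [[0.1, 0.2], [0.3, 0.4]],
    "dense/bias:0": [0.0, 0.0],
}

# Like [v.name for v in tf.trainable_variables()]
variable_names = list(graph)
# Like values = sess.run(variable_names): one batched fetch for all names.
values = [graph[name] for name in variable_names]

# Pair each name back with the value that was fetched for it.
for k, v in zip(variable_names, values):
    print("Variable:", k)
    print("Value:", v)
```

Fetching all names in a single sess.run call is deliberate: it evaluates the graph once instead of once per variable.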
Here is a function that prints each trainable variable's name and shape, along with the total parameter count:
import logging

import tensorflow as tf

def print_num_of_total_parameters(output_detail=False, output_to_logging=False):
    total_parameters = 0
    parameters_string = ""
    for variable in tf.trainable_variables():
        shape = variable.get_shape()
        # Number of parameters in this variable = product of its dimensions.
        variable_parameters = 1
        for dim in shape:
            variable_parameters *= dim.value
        total_parameters += variable_parameters
        if len(shape) == 1:
            parameters_string += ("%s %d, " % (variable.name, variable_parameters))
        else:
            parameters_string += ("%s %s=%d, " % (variable.name, str(shape), variable_parameters))
    if output_to_logging:
        if output_detail:
            logging.info(parameters_string)
        logging.info("Total %d variables, %s params" % (len(tf.trainable_variables()), "{:,}".format(total_parameters)))
    else:
        if output_detail:
            print(parameters_string)
        print("Total %d variables, %s params" % (len(tf.trainable_variables()), "{:,}".format(total_parameters)))
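The core of the function above is simple arithmetic: each variable contributes the product of its shape's dimensions, and the total is the sum over all variables. A minimal TF-free sketch of that counting logic, using hypothetical layer shapes for illustration:

```python
def count_params(shapes):
    """Sum, over each shape, the product of its dimensions."""
    total = 0
    for shape in shapes:
        n = 1
        for dim in shape:
            n *= dim
        total += n
    return total

# Hypothetical shapes: a 3x3 conv kernel, its bias, and a dense layer.
shapes = [(3, 3, 64, 128), (128,), (1024, 10)]
print(count_params(shapes))  # 73728 + 128 + 10240 = 84096
```

This mirrors the inner loop of print_num_of_total_parameters, where `dim.value` plays the role of `dim` here.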