Introduction to TF flags

1. Introduction to TF flags

Flags let us change parameters in the code dynamically from the command line; TensorFlow provides the flags module for defining such command-line parameters. An ML model typically has a large number of hyperparameters that need tuning, so flags meet the need for a flexible way to adjust these values without editing the code. For example, in a training script the parameters are first defined, then collected in the FLAGS variable, which is equivalent to assigning them their values; later code simply reads them from FLAGS.
Each DEFINE_* call takes three arguments: the first is the parameter name, the second is the default value, and the third is the parameter description.
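For instance, a single definition and its command-line override look like this (a minimal sketch; the flag name batch_size and the script name train.py are only illustrative):

import tensorflow as tf

#arguments: parameter name, default value, parameter description
tf.flags.DEFINE_integer('batch_size', 32, 'number of samples per batch')

FLAGS = tf.flags.FLAGS
print(FLAGS.batch_size)  #prints 32 by default; running  python train.py --batch_size=64  would print 64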

2. Usage

#Step 1: call tf.flags.DEFINE_* (tf.app.flags is an alias) to define each parameter's name, default value, and description
#Step 2: read the parameters directly from FLAGS = tf.flags.FLAGS
#Step 3: run tf.app.run(), which parses the command line and calls main() (see the sketch after the definitions below)

import tensorflow as tf

FLAGS = tf.flags.FLAGS

tf.flags.DEFINE_string('name', 'default', 'name of the model')
tf.flags.DEFINE_integer('num_seqs', 100, 'number of seqs in one batch')
tf.flags.DEFINE_integer('num_steps', 100, 'length of one seq')
tf.flags.DEFINE_integer('lstm_size', 128, 'size of hidden state of lstm')
tf.flags.DEFINE_integer('num_layers', 2, 'number of lstm layers')
tf.flags.DEFINE_boolean('use_embedding', False, 'whether to use embedding')
tf.flags.DEFINE_integer('embedding_size', 128, 'size of embedding')
tf.flags.DEFINE_float('learning_rate', 0.001, 'learning_rate')
tf.flags.DEFINE_float('train_keep_prob', 0.5, 'dropout rate during training')
tf.flags.DEFINE_string('input_file', '', 'utf8 encoded text file')
tf.flags.DEFINE_integer('max_steps', 100000, 'max steps to train')
tf.flags.DEFINE_integer('save_every_n', 1000, 'save the model every n steps')
tf.flags.DEFINE_integer('log_every_n', 10, 'log to the screen every n steps')
tf.flags.DEFINE_integer('max_vocab', 3500, 'max char number')
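
With the parameters defined, steps 2 and 3 look roughly like the sketch below, appended to the definitions above (assuming TensorFlow 1.x; the body of main() is only a placeholder for the real training code, and train.py is whatever the script is saved as):

def main(_):
    #Step 2: read the parsed parameters directly from FLAGS
    print('model name:', FLAGS.name)
    print('lstm size:', FLAGS.lstm_size)
    print('learning rate:', FLAGS.learning_rate)

if __name__ == '__main__':
    #Step 3: tf.app.run() parses the command-line flags and then calls main()
    tf.app.run()

The defaults can then be overridden when launching the script, for example: python train.py --lstm_size=256 --use_embedding=True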

Examples are as follows:

import tensorflow as tf
#Use a subset of the flag definitions above for this experiment
tf.flags.DEFINE_integer('num_seqs', 100, 'number of seqs in one batch')
tf.flags.DEFINE_integer('num_steps', 100, 'length of one seq')
tf.flags.DEFINE_integer('lstm_size', 128, 'size of hidden state of lstm')

#Use print() to check what the following lines do
FLAGS = tf.flags.FLAGS #FLAGS holds the command-line parameter values
FLAGS._parse_flags() #parse the flags into a dict stored in FLAGS.__flags
print(FLAGS.__flags)

print(FLAGS.num_seqs)

print("\nParameters:")
for attr, value in sorted(FLAGS.__flags.items()):
    print("{}={}".format(attr.upper(), value))
print("")



Origin blog.csdn.net/HHTNAN/article/details/102743006