[tensorflow] Reproducing experiments in TensorFlow: tf.set_random_seed()

Copyright notice: Copyright reserved to Hazekiah Wang ([email protected]) https://blog.csdn.net/u010909964/article/details/84068565

Reproducing an experiment means restoring the random seeds.
There are two levels of seeding in TensorFlow:

  • graph-level
    typically by calling tf.set_random_seed(seed)
  • op-level
    by passing the seed= argument to the tf.random functions, as shown in the sketch below.
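
A minimal sketch of both levels, assuming the TensorFlow 1.x API (tf.set_random_seed and tf.random_uniform); the shapes and seed values below are arbitrary choices for illustration:

```python
import tensorflow as tf  # assuming TensorFlow 1.x

# Graph-level seed: applies to all random ops created in the default graph.
tf.set_random_seed(42)

# Op-level seed: fixes the sequence of this particular op.
a = tf.random_uniform([2])             # relies only on the graph-level seed
b = tf.random_uniform([2], seed=1234)  # op-level seed set explicitly

with tf.Session() as sess:
    print(sess.run([a, b]))
```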

What are the differences?

  • graph-level seed has priority over op-level
    when the graph-level seed is set, different sessions running identical graphs with that seed produce the same random behavior.
    Note: the graphs do not need to be the same Python object; they only need the same structure and the same graph-level seed, i.e., the one set by tf.set_random_seed(seed)
  • op-level seed pins a specific op so that its random behavior stays identical across different sessions.
    Note: the graphs do not need to be the same object or even share the same structure; it is enough that both graphs contain an identical op built with the same op-level seed (see the sketch after this list).
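
A sketch of the two cases, again assuming TensorFlow 1.x; build_graph is a hypothetical helper that rebuilds a fresh graph each time, so the two sessions never share a graph object:

```python
import tensorflow as tf  # assuming TensorFlow 1.x

def build_graph(graph_seed=None, op_seed=None):
    # Hypothetical helper: a fresh graph containing one random op.
    g = tf.Graph()
    with g.as_default():
        if graph_seed is not None:
            tf.set_random_seed(graph_seed)
        x = tf.random_uniform([3], seed=op_seed)
    return g, x

# Graph-level: two separately built graphs with the same structure and the
# same graph-level seed should yield the same values in different sessions.
g1, x1 = build_graph(graph_seed=42)
g2, x2 = build_graph(graph_seed=42)
with tf.Session(graph=g1) as s1, tf.Session(graph=g2) as s2:
    print(s1.run(x1), s2.run(x2))  # expected to match

# Op-level: even without a graph-level seed, an op constructed with the same
# op-level seed should reproduce its values across sessions.
g3, x3 = build_graph(op_seed=7)
g4, x4 = build_graph(op_seed=7)
with tf.Session(graph=g3) as s3, tf.Session(graph=g4) as s4:
    print(s3.run(x3), s4.run(x4))  # expected to match
```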

Warning
When tf.random functions are used inside nested functions or lambda functions, the enclosing graph does not seem to be inherited, which means the graph-level seed is not inherited either. It follows that in such cases we have to set the op-level seed manually if we want to reproduce the experiment.
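
A sketch of the workaround, assuming TensorFlow 1.x and a tf.data pipeline as the place where the lambda lives; the explicit op-level seed inside the lambda is what keeps the values repeatable:

```python
import tensorflow as tf  # assuming TensorFlow 1.x

tf.set_random_seed(42)  # graph-level seed

# Inside Dataset.map the lambda is traced into its own function graph,
# which (per the warning above) may not inherit the graph-level seed.
# Setting the op-level seed explicitly keeps the augmentation reproducible.
ds = tf.data.Dataset.range(3).map(
    lambda x: tf.cast(x, tf.float32) + tf.random_uniform([], seed=7))

it = ds.make_one_shot_iterator()
nxt = it.get_next()
with tf.Session() as sess:
    print([sess.run(nxt) for _ in range(3)])
```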
