Implementing BN (Batch Normalization) in TensorFlow

I have never fully understood the BN computation itself, but TensorFlow offers several ways to implement it, so I am noting them down here:

This Stack Overflow answer explains in detail all of the batch normalization options currently available in TensorFlow. The recommended high-level API among them is tf.layers.batch_normalization; if you would rather write the function yourself against a low-level API, use tf.nn.batch_normalization.
From Bob Auditore on Zhihu: https://www.zhihu.com/question/53133249/answer/307250507

Excerpted below:

tf.nn.batch_normalization is a low-level op. The caller is responsible for handling the mean and variance tensors themselves.
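
For example, a minimal TF 1.x sketch of this low-level route (the shapes, epsilon, and variable names here are my own illustration, not from the quoted answer): you compute the batch statistics yourself and pass them in together with your own beta/gamma variables.

```python
import tensorflow as tf

# x: a batch of activations, shape [batch, features]
x = tf.placeholder(tf.float32, shape=[None, 64])

# The caller computes the batch statistics and owns the learnable
# offset (beta) and scale (gamma) variables.
mean, variance = tf.nn.moments(x, axes=[0])
beta = tf.Variable(tf.zeros([64]))
gamma = tf.Variable(tf.ones([64]))

y = tf.nn.batch_normalization(x, mean, variance, beta, gamma,
                              variance_epsilon=1e-3)
```

Note that this sketch normalizes with the current batch's statistics only; maintaining moving averages for use at inference time (e.g. via tf.train.ExponentialMovingAverage) is likewise left to the caller, which is exactly what the higher-level wrappers below automate.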

tf.nn.fused_batch_norm is another low-level op, similar to the previous one. The difference is that it’s optimized for 4D input tensors, which is the usual case in convolutional neural networks. tf.nn.batch_normalization accepts tensors of any rank greater than 1.
tf.layers.batch_normalization is a high-level wrapper over the previous ops. The biggest difference is that it takes care of creating and managing the running mean and variance tensors, and calls a fast fused op when possible. Usually, this should be the default choice for you (see the sketch after this excerpt).
tf.contrib.layers.batch_norm is the early implementation of batch norm, before it graduated to the core API (i.e., tf.layers). Its use is not recommended because it may be dropped in future releases.
tf.nn.batch_norm_with_global_normalization is another deprecated op. Currently it delegates the call to tf.nn.batch_normalization, but it is likely to be dropped in the future.
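
As a concrete illustration of the recommended high-level route, here is a minimal TF 1.x training sketch (the placeholder shapes, dummy loss, and optimizer are my own toy choices, not from the quoted answer). The one real caveat, which the TF 1.x documentation also calls out, is that the layer's moving-average updates are placed in the UPDATE_OPS collection and must be run together with the train op:

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 64])
is_training = tf.placeholder(tf.bool)

# The layer creates and updates the moving mean/variance itself and
# picks the fast fused implementation when it can.
h = tf.layers.batch_normalization(x, training=is_training)

loss = tf.reduce_mean(tf.square(h))  # dummy loss, just for illustration

# The moving-average update ops live in the UPDATE_OPS collection and
# must run with the train step, or the inference statistics never update.
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
```

At inference time you feed training=False (here via the is_training placeholder), and the layer normalizes with the accumulated moving statistics instead of the batch statistics.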

Finally, there is also the Keras layer keras.layers.BatchNormalization, which, in the case of the TensorFlow backend, invokes tf.nn.batch_normalization.
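
For completeness, a minimal Keras sketch (the toy model around the layer is my own illustration, not from the quoted answer):

```python
from keras.models import Sequential
from keras.layers import Activation, BatchNormalization, Dense

# BatchNormalization tracks its own moving statistics; Keras switches
# between batch and moving statistics via its learning phase.
model = Sequential([
    Dense(64, input_shape=(32,)),
    BatchNormalization(),
    Activation('relu'),
    Dense(10, activation='softmax'),
])
model.compile(optimizer='sgd', loss='categorical_crossentropy')
```

As with tf.layers.batch_normalization, the layer owns its moving statistics, so there is no UPDATE_OPS bookkeeping to do by hand here.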

There is also a good write-up on Jianshu:
https://www.jianshu.com/p/7ce4e709fe7d

Reposted from blog.csdn.net/chaowang1994/article/details/80302527