TensorBoard Beginner Tutorial (with a TFlearn Example)

Contents

1. Introduction

2. Launching TensorBoard

3. Code Explanation

4. An Additional Example


1. Introduction

There are many introductions to TensorBoard online, but as a beginner I found them hard to act on, and ran into one difficulty after another. This article explains how to use TensorBoard through a worked example; for background material (what TensorBoard is, what it is for), see the many existing blog posts. The code below is adapted from http://www.jianshu.com/p/61081bba175f and has been verified to run under Python 3.

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/MNIST_data", one_hot=True)

# Input placeholder, a 2-D tensor of floating-point numbers.
# here None means that a dimension can be of any length.
X = tf.placeholder(tf.float32, [None, 784], name = 'X-input')

# New placeholder to input the correct answers.
Y = tf.placeholder(tf.float32, [None, 10], name = 'Y-input')

# Initialize both W and b as tensors full of zeros.
# Since we are going to learn W and b, it doesn't matter very much what their initial values are.
W = tf.Variable(tf.zeros([784, 10]), name = 'Weight')
B = tf.Variable(tf.zeros([10]), name = 'Bias')

# Tensorboard histogram summary.
tf.summary.histogram('WeightSM', W)
tf.summary.histogram('BiasSM', B)

with tf.name_scope('Layer'):
    y = tf.nn.softmax(tf.matmul(X, W) + B)

with tf.name_scope('Cost'):
    cross_entropy = tf.reduce_mean(-tf.reduce_sum(Y * tf.log(y), reduction_indices=[1]))
    # Tensorboard scalar summary.
    tf.summary.scalar('Cost', cross_entropy)

with tf.name_scope('Train'):
    train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

with tf.name_scope('Accuracy'):
    accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(y, 1), tf.argmax(Y, 1)), tf.float32))
    # Tensorboard scalar summary.
    tf.summary.scalar('Accuracy', accuracy)

with tf.Session() as sess:
    # Create a writer for the events file and merge all summaries into one op.
    writer = tf.summary.FileWriter('./logs', sess.graph)
    merged = tf.summary.merge_all()
    tf.global_variables_initializer().run()  # initialize_all_variables() is deprecated
    # Train for 1000 steps, with a mini-batch of 100 examples per step.

    for i in range(1000):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        _, summary = sess.run([train_step, merged], feed_dict={X: batch_xs, Y: batch_ys})
        # Write summary into files.
        writer.add_summary(summary, i)

    # Close summary writer.
    writer.close()

    print('Accuracy', accuracy.eval({X: mnist.test.images, Y: mnist.test.labels}))
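For reference, this plain softmax-regression model typically reaches an accuracy of around 92% on the MNIST test set (the figure cited in the official TensorFlow MNIST tutorial); the exact number varies slightly from run to run.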

2. Launching TensorBoard

TensorBoard requires the Python program to generate results first and save them locally; only then can TensorBoard read and analyze them. Line 39 of the listing above specifies the path where this data is written:

writer = tf.summary.FileWriter('./logs', sess.graph)

After running the code above (note: with Python 3), an events file is generated under the ./logs directory.

Then launch TensorBoard as follows:

(1) In a terminal, change to the directory containing the Python project.

(2) Run the command  tensorboard --logdir=logs , where logs is the path './logs' set above. TensorBoard then prints a URL; copy it into a browser to open the TensorBoard page. The URL is simply http://127.0.0.1:6006/, which can be bookmarked for next time.
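If port 6006 is already taken, tensorboard also accepts a --port flag. As a quick sanity check before launching it (a sketch, not part of the original post), you can confirm from Python that the run actually produced an events file:

import os

# List the event files TensorFlow wrote; an empty list means no summaries
# were saved and TensorBoard will have nothing to display.
print([f for f in os.listdir('./logs') if f.startswith('events')])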


3. Code Explanation

The GRAPHS tab of the TensorBoard page shows the structure of the computation graph.

Lines 5~15 of the code at the beginning of the article declare the variables X, Y, Weight, and Bias; the name argument is the label shown in the graph.

# Input placeholder, a 2-D tensor of floating-point numbers.
# here None means that a dimension can be of any length.
X = tf.placeholder(tf.float32, [None, 784], name = 'X-input')

# New placeholder to input the correct answers.
Y = tf.placeholder(tf.float32, [None, 10], name = 'Y-input')

# Initialize both W and b as tensors full of zeros.
# Since we are going to learn W and b, it doesn't matter very much what their initial values are.
W = tf.Variable(tf.zeros([784, 10]), name = 'Weight')
B = tf.Variable(tf.zeros([10]), name = 'Bias')

Lines 17~19 specify the data to be recorded.

# Tensorboard histogram summary.
tf.summary.histogram('WeightSM', W)
tf.summary.histogram('BiasSM', B)

The data recorded here are of histogram type; open the HISTOGRAMS tab in TensorBoard to see these two variables.
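tf.summary.histogram accepts any tensor, not just variables. As a hypothetical extension (not in the original code), the pre-softmax logits could be tracked the same way:

# Hypothetical addition: also record the distribution of the pre-softmax logits.
logits = tf.matmul(X, W) + B
tf.summary.histogram('LogitsSM', logits)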

Lines 21~22 wrap the matrix multiplication matmul, the addition +, and the softmax function into a single unit named Layer:

with tf.name_scope('Layer'):
    y = tf.nn.softmax(tf.matmul(X, W) + B)

Comparing the graph with and without the scope makes the effect of with tf.name_scope('Layer') clear.

The wrapping makes the graph noticeably cleaner and more compact; double-clicking the collapsed Layer node expands it to reveal the same internal structure as the unwrapped version.
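Under the hood, name_scope simply prefixes the names of the ops created inside it, which is what lets TensorBoard collapse them into a single node. A minimal standalone sketch (not from the original post):

import tensorflow as tf

with tf.name_scope('Layer'):
    a = tf.constant(1.0, name='a')
    b = tf.add(a, a, name='add')

# Ops created inside the scope carry the 'Layer/' prefix.
print(b.name)  # -> Layer/add:0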

Lines 23~35 apply the same kind of wrapping and need no further explanation:

with tf.name_scope('Cost'):
    cross_entropy = tf.reduce_mean(-tf.reduce_sum(Y * tf.log(y), reduction_indices=[1]))
    # Tensorboard scalar summary.
    tf.summary.scalar('Cost', cross_entropy)

with tf.name_scope('Train'):
    train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

with tf.name_scope('Accuracy'):
    accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(y, 1), tf.argmax(Y, 1)), tf.float32))
    # Tensorboard scalar summary.
    tf.summary.scalar('Accuracy', accuracy)

Now look at the core part of the code:

with tf.Session() as sess:
    # Create a writer for the events file and merge all summaries into one op.
    writer = tf.summary.FileWriter('./logs', sess.graph)
    merged = tf.summary.merge_all()
    tf.global_variables_initializer().run()  # initialize_all_variables() is deprecated
    # Train for 1000 steps, with a mini-batch of 100 examples per step.

    for i in range(1000):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        _, summary = sess.run([train_step, merged], feed_dict={X: batch_xs, Y: batch_ys})
        # Write summary into files.
        writer.add_summary(summary, i)

    # Close summary writer.
    writer.close()

    print('Accuracy', accuracy.eval({X: mnist.test.images, Y: mnist.test.labels}))

A computation graph must be launched through a session, which is why a Session is created at the start.

writer = tf.summary.FileWriter('./logs', sess.graph)  specifies the file into which summaries are saved; writer can be thought of as a file handle.
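Equivalently (a sketch using the same tf.summary.FileWriter API), the writer can be created without a graph and the graph attached afterwards:

# Create the writer first, then attach the graph explicitly.
writer = tf.summary.FileWriter('./logs')
writer.add_graph(sess.graph)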

merged = tf.summary.merge_all()  merges all summaries into a single op, so running just the merged node yields every summary created earlier, as one serialized Summary protocol buffer.

tf.global_variables_initializer().run()  initializes all variables, such as the Weight and Bias defined earlier (the older tf.initialize_all_variables is deprecated).

Then comes the training loop. In each iteration, running the merged op produces the summary for that step, and the file handle writer appends it to the events file.
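A common variation (a sketch, not part of the original post) is to also evaluate on the test set every so often and write those summaries with a second FileWriter, so that TensorBoard can overlay the training and test curves:

# Hypothetical extension: a second writer for test-set summaries.
test_writer = tf.summary.FileWriter('./logs/test')
for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    _, summary = sess.run([train_step, merged], feed_dict={X: batch_xs, Y: batch_ys})
    writer.add_summary(summary, i)
    if i % 100 == 0:
        # Run only the merged summary op on the test set (no training step).
        test_summary = sess.run(merged, feed_dict={X: mnist.test.images, Y: mnist.test.labels})
        test_writer.add_summary(test_summary, i)
test_writer.close()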


4. An Additional Example

Here is a supplementary TFlearn example. In TFlearn, the events file is generated by passing tensorboard_dir='<directory>' as an argument to tflearn.DNN(), for example:

model = tflearn.DNN(net, checkpoint_path='model_resnet_mnist', max_checkpoints=10, tensorboard_verbose=0, tensorboard_dir='logs')
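According to the TFlearn documentation, tensorboard_verbose controls how much is logged: 0 records only the loss and metric (fastest), higher levels add gradients and weights, and level 3 additionally logs activations and sparsity, at the cost of larger events files and slower training.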

A TFlearn example that applies a ResNet to the MNIST dataset:

import tflearn
import tflearn.data_utils as du

# Load the data and apply preprocessing.
import tflearn.datasets.mnist as mnist
X, Y, testX, testY = mnist.load_data(one_hot=True)
X = X.reshape([-1, 28, 28, 1])
testX = testX.reshape([-1, 28, 28, 1])
X, mean = du.featurewise_zero_center(X)
testX = du.featurewise_zero_center(testX, mean)

# Build the residual network model.
net = tflearn.input_data(shape=[None, 28, 28, 1])
net = tflearn.conv_2d(net, 64, 3, activation='relu', bias=False)

# Build residual blocks using the bottleneck structure.
net = tflearn.residual_bottleneck(net, 3, 16, 64)
net = tflearn.residual_bottleneck(net, 1, 32, 128, downsample=True)
net = tflearn.residual_bottleneck(net, 2, 32, 128)
net = tflearn.residual_bottleneck(net, 1, 64, 256, downsample=True)
net = tflearn.residual_bottleneck(net, 2, 64, 256)
net = tflearn.batch_normalization(net)
net = tflearn.activation(net, 'relu')
net = tflearn.global_avg_pool(net)
net = tflearn.fully_connected(net, 10, activation='softmax')

# Specify the optimizer, loss function, learning rate, etc.
net = tflearn.regression(net, optimizer='momentum', loss='categorical_crossentropy', learning_rate=0.1)

# Training.
model = tflearn.DNN(net, checkpoint_path='model_resnet_mnist', max_checkpoints=10, tensorboard_verbose=0, tensorboard_dir='logs')
model.fit(X, Y, n_epoch=1, validation_set=(testX, testY), show_metric=True, batch_size=256, run_id='resnet_mnist')
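After model.fit() completes, the events files are written under the tensorboard_dir (TFlearn creates a subdirectory per run_id, here logs/resnet_mnist), and TensorBoard is launched exactly as in section 2: tensorboard --logdir=logs.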

Reposted from blog.csdn.net/hwj_wayne/article/details/78224599