TensorFlow: How to Visualize with TensorBoard (Part 1)

I. Visualizing with TensorBoard

The complete code for visualizing the computation graph in TensorBoard:

from __future__ import print_function
import tensorflow as tf


def add_layer(inputs, in_size, out_size, activation_function=None):
    # add one more layer and return the output of this layer
    with tf.name_scope('layer'):
        with tf.name_scope('weights'):
            Weights = tf.Variable(tf.random_normal([in_size, out_size]), name='W')
        with tf.name_scope('biases'):
            biases = tf.Variable(tf.zeros([1, out_size]) + 0.1, name='b')
        with tf.name_scope('Wx_plus_b'):
            Wx_plus_b = tf.add(tf.matmul(inputs, Weights), biases)
        if activation_function is None:
            outputs = Wx_plus_b
        else:
            outputs = activation_function(Wx_plus_b)
        return outputs


# define placeholder for inputs to network
with tf.name_scope('inputs'):  # shown as a grouped box in the TensorBoard graph
    xs = tf.placeholder(tf.float32, [None, 1], name='x_input')
    ys = tf.placeholder(tf.float32, [None, 1], name='y_input')

# add hidden layer
l1 = add_layer(xs, 1, 10, activation_function=tf.nn.relu)  # the hidden layer in the TensorBoard graph
# add output layer
prediction = add_layer(l1, 10, 1, activation_function=None)  # the output layer in the TensorBoard graph

# the error between prediction and real data
with tf.name_scope('loss'):
    loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction),
                                        reduction_indices=[1]))

with tf.name_scope('train'):
    train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

sess = tf.Session()

# tf.train.SummaryWriter will soon be deprecated; use the following instead.
# Write the whole graph (sess.graph) into an event file under the logs/ folder;
# TensorBoard reads that file and renders the graph in the browser.
if int((tf.__version__).split('.')[1]) < 12 and int((tf.__version__).split('.')[0]) < 1:  # tensorflow version < 0.12
    writer = tf.train.SummaryWriter('logs/', sess.graph)
else: # tensorflow version >= 0.12
    writer = tf.summary.FileWriter("logs/", sess.graph)

# tf.initialize_all_variables() is no longer valid from
# 2017-03-02 if using tensorflow >= 0.12
if int((tf.__version__).split('.')[1]) < 12 and int((tf.__version__).split('.')[0]) < 1:
    init = tf.initialize_all_variables()
else:
    init = tf.global_variables_initializer()
sess.run(init)

# cd to the directory that contains logs/ and run this in a terminal:
# $ tensorboard --logdir=logs
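
The listing above only builds the graph, initializes the variables, and writes the event file; it never actually trains. For completeness, here is a minimal training sketch of my own (the synthetic y = x^2 - 0.5 data below is an assumption, not part of the original post), which would go right after sess.run(init):

# Minimal training sketch (my addition): fit the network to synthetic data
# fed through the xs/ys placeholders defined above.
import numpy as np

x_data = np.linspace(-1, 1, 300)[:, np.newaxis]   # 300 samples, shape (300, 1)
noise = np.random.normal(0, 0.05, x_data.shape)
y_data = np.square(x_data) - 0.5 + noise

for step in range(1000):
    sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
    if step % 50 == 0:
        print(sess.run(loss, feed_dict={xs: x_data, ys: y_data}))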

Here, with tf.name_scope('...') is what makes TensorBoard draw each group of operations as a named, collapsible box in the graph.
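
As a quick illustration of my own (the demo scope and op name a below are made up, not from the original post), the scope shows up as a prefix on every op created inside it, which is exactly what TensorBoard groups on:

# Illustration only: ops created inside a name_scope get the scope name as a prefix.
with tf.name_scope('demo'):
    a = tf.constant(1.0, name='a')
print(a.name)  # prints "demo/a:0"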

writer = tf.summary.FileWriter('logs/', sess.graph) (tf.train.SummaryWriter in versions before 0.12) writes the whole graph into an event file inside the logs/ folder; TensorBoard then loads that file and serves it to the browser, which is how the graph gets displayed.
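
The same writer can record more than the graph. As a sketch of my own (assuming TensorFlow >= 0.12 and reusing the x_data/y_data from the training sketch above; none of this is in the original code), the plain training loop could be replaced by one that also logs the loss curve:

# Sketch: record the loss as a scalar so it appears under TensorBoard's SCALARS tab.
tf.summary.scalar('loss', loss)
merged = tf.summary.merge_all()
for step in range(1000):
    summary, _ = sess.run([merged, train_step], feed_dict={xs: x_data, ys: y_data})
    if step % 50 == 0:
        writer.add_summary(summary, global_step=step)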

II. Displaying TensorBoard from Anaconda

Preparation: install TensorBoard (in a command prompt, run pip install tensorboard) and, if needed, update pip (python -m pip install --upgrade pip).

1. Run the script in Spyder. This generates an event file; on my machine the logs folder is C:\Users\hubinghua\.spyder-py3\logs.

2. Open the Anaconda Prompt and enter activate tensorflow, then press Enter to activate the TensorFlow environment.

3. Next, enter: tensorboard --logdir=C:\Users\hubinghua\.spyder-py3\logs

4. Copy the address shown in the Anaconda Prompt (mine is http://hubinghua-PC:6006), paste it into the Chrome address bar, and press Enter to open TensorBoard; the neural network structure is shown under the GRAPHS tab.


Reposted from blog.csdn.net/weixin_40849273/article/details/81187985