tensorflow learning experience 2

2018.4.22

Last time we said that TensorFlow works in two steps: build the Graph, then run it. At work I built an LSTM classification model with TensorFlow, and the results were strange. With the same model structure, Keras gave an AUC of 0.7, while in TensorFlow the loss kept jumping around and the AUC stayed near 0.5. While debugging, I remembered the graph; after printing the graph out, I found several problems, such as a forgotten sigmoid activation on the last layer.
Embarrassing!!! Still, this shows one benefit of the TensorFlow graph: it lets us inspect the model as if we were reading a diagram.
Today we mainly cover constants, variables, and placeholders.
constant
a = tf.constant([3,6])  # creates a length-2 vector a = [3,6]; a is an op and only has a value after sess.run
#tf.zeros()
#tf.ones()
#these are also constants
######------------------
#tf.random_normal(shape=,dtype=,name='')
#tf.random_uniform()
#these are constants too, just random ones
variable
b = tf.Variable(2,name='scalar')  # creates a variable with initial value 2; it only takes the value 2 after sess.run()
#variables must be initialized before use!!! There are three ways:
#1) initialize all variables
init = tf.global_variables_initializer()
sess.run(init)
#2) initialize a subset of variables
init_ab = tf.variables_initializer([a,b],name='init_ab')
#3) initialize a single variable
sess.run(b.initializer)
#after initialization, b can be read like a Tensor:
print(b)        # <tf.Variable 'scalar:0' ...>
print(b.eval()) # 2 -- use eval() to print the variable's value

Remember: a constant is an op, while tf.Variable() is a class.

placeholder

a = tf.placeholder(dtype,shape=,name='')
A placeholder is a container: it reserves a slot in the graph that holds nothing at first. At run time you fill that slot with
sess.run([a,b],feed_dict={a:a_data,b:b_data}).
Generally we use placeholders to feed training data. Of course, we could build the data directly into the graph as constant ops, but that is clumsy. Placeholders let us feed data freely, in whatever batches we need.

To briefly summarize: tf.constant() builds quantities that the model never changes, tf.Variable() is generally used for weights and other values that are updated continuously, and a placeholder generally serves as a data container.
