Building a Simple NN

I'm a complete beginner who has just gotten into artificial intelligence. Below is a simple TensorFlow deep-learning network I built in Python, following Peking University professor Cao Jian's AI Practice course on MOOC.

# step 0: import modules and generate a synthetic data set
import tensorflow as tf
import numpy as np

BATCH_SIZE = 8
seed = 23455

rng = np.random.RandomState(seed)   # random number generator seeded for reproducibility

X = rng.rand(32, 2)                 # 32x2 matrix of random numbers used as the input data
Y = [[int(x0 + x1 < 1)] for (x0, x1) in X]   # label a row 1 when its two features sum to less than 1
print("X:\n", X)
print("Y:\n", Y)


# step 1: define the input, output, and parameters of the neural network (NN),
#         and define the forward propagation process
x  = tf.placeholder(tf.float32, shape=(None, 2))
y_ = tf.placeholder(tf.float32, shape=(None, 1))

w1 = tf.Variable(tf.random_normal([2, 3], stddev=1, seed=1))
w2 = tf.Variable(tf.random_normal([3, 1], stddev=1, seed=1))

a = tf.matmul(x, w1)                # hidden layer: (None, 2) x (2, 3) -> (None, 3)
y = tf.matmul(a, w2)                # output layer: (None, 3) x (3, 1) -> (None, 1)

# step 2: define the loss function and the backward propagation method
loss = tf.reduce_mean(tf.square(y - y_))                               # mean squared error
train_step = tf.train.GradientDescentOptimizer(0.001).minimize(loss)  # plain gradient descent, learning rate 0.001

# step 3: create a session and train for STEPS iterations

with tf.Session() as sess:
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    print("w1:\n", sess.run(w1))
    print("w2:\n", sess.run(w2))
    print("\n")

    STEPS = 3000
    for i in range(STEPS):
        start = (i * BATCH_SIZE) % 32   # slide a BATCH_SIZE window over the 32 samples, wrapping around
        end = start + BATCH_SIZE
        sess.run(train_step, feed_dict={x: X[start:end], y_: Y[start:end]})
        if i % 500 == 0:
            total_loss = sess.run(loss, feed_dict={x: X, y_: Y})
            print("After %d training steps, loss on all data is %g" % (i, total_loss))
    print("\n")
    print("w1:\n", sess.run(w1))
    print("w2:\n", sess.run(w2))
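One detail worth pausing on is the batch indexing: start = (i * BATCH_SIZE) % 32 slides a fixed window over the 32 samples and wraps back to the beginning, so every sample is revisited many times over the 3000 steps. Here is a standalone sketch of just that arithmetic (my addition, plain Python, no TensorFlow needed):

BATCH_SIZE = 8
DATA_SIZE = 32                           # number of rows in X

for i in range(6):                       # first few training steps
    start = (i * BATCH_SIZE) % DATA_SIZE
    end = start + BATCH_SIZE
    print("step %d: rows %d..%d" % (i, start, end - 1))
# step 0: rows 0..7
# step 1: rows 8..15
# step 2: rows 16..23
# step 3: rows 24..31
# step 4: rows 0..7   <- the window wraps around
# step 5: rows 8..15

This works cleanly here because BATCH_SIZE divides 32 evenly; if it did not, end could run past the array and X[start:end] would silently return a shorter batch.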
 

After running the script, the output is:

X:
[[0.83494319 0.11482951]
 [0.66899751 0.46594987]
 [0.60181666 0.58838408]
 [0.31836656 0.20502072]
 [0.87043944 0.02679395]
 [0.41539811 0.43938369]
 [0.68635684 0.24833404]
 [0.97315228 0.68541849]
 [0.03081617 0.89479913]
 [0.24665715 0.28584862]
 [0.31375667 0.47718349]
 [0.56689254 0.77079148]
 [0.7321604  0.35828963]
 [0.15724842 0.94294584]
 [0.34933722 0.84634483]
 [0.50304053 0.81299619]
 [0.23869886 0.9895604 ]
 [0.4636501  0.32531094]
 [0.36510487 0.97365522]
 [0.73350238 0.83833013]
 [0.61810158 0.12580353]
 [0.59274817 0.18779828]
 [0.87150299 0.34679501]
 [0.25883219 0.50002932]
 [0.75690948 0.83429824]
 [0.29316649 0.05646578]
 [0.10409134 0.88235166]
 [0.06727785 0.57784761]
 [0.38492705 0.48384792]
 [0.69234428 0.19687348]
 [0.42783492 0.73416985]
 [0.09696069 0.04883936]]
Y:
[[1], [0], [0], [1], [1], [1], [1], [0], [1], [1], [1], [0], [0], [0], [0], [0], [0], [1], [0], [0], [1], [1], [0], [1], [0], [1], [1], [1], [1], [1], [0], [1]]
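These labels follow directly from the rule int(x0 + x1 < 1): a row is labeled 1 exactly when its two features sum to less than 1. Row 0, for example, gives 0.8349 + 0.1148 ≈ 0.95 < 1, hence [1], while row 1 gives 0.6690 + 0.4659 ≈ 1.13, hence [0]. A quick check you could run yourself (my addition, not in the original script):

Y_check = [[int(x0 + x1 < 1)] for (x0, x1) in X]
print(Y_check == Y)   # True: the printed labels match the rule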
w1:
[[-0.8113182   1.4845988   0.06532937]
 [-2.4427042   0.0992484   0.5912243 ]]
w2:
[[-0.8113182 ]
 [ 1.4845988 ]
 [ 0.06532937]]
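Notice that w2's initial values repeat the first row of w1. That is no accident: both variables are initialized with tf.random_normal(..., seed=1), and with the same op-level seed the two ops draw identical leading values, which the printed weights confirm. A minimal sketch demonstrating this (my addition, assuming the same TF 1.x API as the post):

import tensorflow as tf

a = tf.random_normal([2, 3], stddev=1, seed=1)
b = tf.random_normal([3, 1], stddev=1, seed=1)
with tf.Session() as sess:
    print(sess.run(a))   # the first row holds the same three values...
    print(sess.run(b))   # ...that b yields as a column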


After 0 training steps, loss on all data is 5.13118
After 500 training steps, loss on all data is 0.429111
After 1000 training steps, loss on all data is 0.409789
After 1500 training steps, loss on all data is 0.399923
After 2000 training steps, loss on all data is 0.394146
After 2500 training steps, loss on all data is 0.390597


w1:
[[-0.7000663   0.9136318   0.08953571]
 [-2.3402493  -0.14641267  0.58823055]]
w2:
[[-0.06024267]
 [ 0.91956186]
 [-0.0682071 ]]
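The script stops here and never uses the trained network for prediction. As a follow-up sketch (my addition, not part of the original post), you can plug the printed final weights into the same forward pass with plain NumPy, assuming X and Y are still defined from the script above; the 0.5 cutoff is my assumption, since the network outputs a raw value rather than a probability:

import numpy as np

# final weights copied from the output above
w1 = np.array([[-0.7000663,   0.9136318,   0.08953571],
               [-2.3402493,  -0.14641267,  0.58823055]])
w2 = np.array([[-0.06024267],
               [ 0.91956186],
               [-0.0682071 ]])

y_pred = X.dot(w1).dot(w2)            # same computation as tf.matmul(tf.matmul(x, w1), w2)
labels = (y_pred > 0.5).astype(int)   # threshold the raw output at 0.5 (my assumption)
accuracy = (labels == np.array(Y)).mean()
print("accuracy on the training data:", accuracy)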


Reposted from blog.csdn.net/qq_39815222/article/details/80181321