This lesson builds a neural network in four steps: preparation, forward propagation, back propagation, and iterating the training loop in a session.
√0 Preparation: import modules and generate the simulated dataset.
import modules
define constants
generate the dataset
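To make step 0 concrete, here is a minimal standalone NumPy sketch of the same dataset generation used in the full script below; the sample values in the comment come from the program output reproduced at the end of this section.

import numpy as np

SEED = 23455
rdm = np.random.RandomState(SEED)            # fixed seed -> reproducible samples
X = rdm.rand(32, 2)                          # 32 samples with 2 features in [0, 1)
Y_ = [[int(x0 + x1 < 1)] for (x0, x1) in X]  # label 1 when x0 + x1 < 1, else 0

# The first sample is [0.83494319, 0.11482951]; 0.83494319 + 0.11482951 < 1,
# so its label is [1].
print(X[0], Y_[0])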
√1 Forward propagation: define the inputs, parameters, and output.
x =        y_ =
w1 =       w2 =
a =        y =
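As a quick check on step 1, this standalone NumPy sketch (my illustration, not part of the lesson script) traces the shapes through the two matmul layers: a (batch, 2) input times a [2, 3] weight matrix gives (batch, 3), which times a [3, 1] weight matrix gives (batch, 1).

import numpy as np

x = np.ones((8, 2))      # stand-in for the (None, 2) input placeholder
w1 = np.ones((2, 3))     # first-layer weights, shape [2, 3]
w2 = np.ones((3, 1))     # second-layer weights, shape [3, 1]

a = np.matmul(x, w1)     # hidden layer, shape (8, 3)
y = np.matmul(a, w2)     # output, shape (8, 1)
print(a.shape, y.shape)  # (8, 3) (8, 1)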
√2 Back propagation: define the loss function and the back-propagation method.
loss =
train_step =
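The loss used in step 2 is mean squared error. Here is a tiny sketch of the same formula in NumPy, with made-up prediction and label values for illustration:

import numpy as np

y  = np.array([[0.9], [0.2], [0.4]])   # hypothetical predictions
y_ = np.array([[1.0], [0.0], [1.0]])   # hypothetical labels

# Same formula as tf.reduce_mean(tf.square(y - y_)):
loss_mse = np.mean(np.square(y - y_))
print(loss_mse)  # (0.1**2 + 0.2**2 + 0.6**2) / 3 ≈ 0.13667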
√3 Generate a session and train for STEPS rounds.
with tf.Session() as sess:
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    STEPS = 3000
    for i in range(STEPS):
        start =
        end =
        sess.run(train_step, feed_dict={ })
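The start/end arithmetic in step 3 slides an 8-sample window over the 32-sample set and wraps around. This standalone snippet shows which indices the first few rounds feed:

BATCH_SIZE = 8
for i in range(5):
    start = (i * BATCH_SIZE) % 32   # 0, 8, 16, 24, then back to 0
    end = start + BATCH_SIZE
    print("round %d feeds X[%d:%d]" % (i, start, end))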
The complete Python code:
# coding: utf-8
# 0 Import modules and generate the simulated dataset.
# TensorFlow study notes (Peking University) tf3_6.py: a complete walk-through of building a neural network.
import tensorflow as tf
import numpy as np

BATCH_SIZE = 8
SEED = 23455

rdm = np.random.RandomState(SEED)
X = rdm.rand(32, 2)
# Take x0 and x1 from every sample in X; the label is int(x0 + x1 < 1),
# i.e. Y_ = 1 when x0 + x1 < 1, otherwise 0.
Y_ = [[int(x0 + x1 < 1)] for (x0, x1) in X]
print("X:\n", X)
print("Y_:\n", Y_)

# 1 Forward propagation: define the inputs, parameters, and output.
x = tf.placeholder(tf.float32, shape=(None, 2))   # placeholder for the input features
y_ = tf.placeholder(tf.float32, shape=(None, 1))  # placeholder for the labels

w1 = tf.Variable(tf.random_normal([2, 3], stddev=1, seed=1))  # normally distributed random numbers
w2 = tf.Variable(tf.random_normal([3, 1], stddev=1, seed=1))  # normally distributed random numbers

a = tf.matmul(x, w1)  # matrix product
y = tf.matmul(a, w2)  # matrix product

# 2 Define the loss function and the back-propagation method.
loss_mse = tf.reduce_mean(tf.square(y - y_))
train_step = tf.train.GradientDescentOptimizer(0.001).minimize(loss_mse)
# train_step = tf.train.MomentumOptimizer(0.001, 0.9).minimize(loss_mse)
# train_step = tf.train.AdamOptimizer(0.001).minimize(loss_mse)

# 3 Generate a session and train for STEPS rounds.
with tf.Session() as sess:
    init_op = tf.global_variables_initializer()  # initialize all variables
    sess.run(init_op)
    # Print the current (untrained) parameter values.
    print("w1:\n", sess.run(w1))
    print("w2:\n", sess.run(w2))
    print("\n")

    # Train the model.
    STEPS = 3000
    for i in range(STEPS):  # 3000 rounds
        start = (i * BATCH_SIZE) % 32  # i*8 % 32
        end = start + BATCH_SIZE       # i*8 % 32 + 8
        sess.run(train_step, feed_dict={x: X[start:end], y_: Y_[start:end]})
        if i % 500 == 0:
            total_loss = sess.run(loss_mse, feed_dict={x: X, y_: Y_})
            print("After %d training step(s), loss_mse on all data is %g" % (i, total_loss))

    # Print the parameter values after training.
    print("\n")
    print("w1:\n", sess.run(w1))
    print("w2:\n", sess.run(w2))

# The statements above only build the computation graph; building the graph
# performs no computation. To obtain results, the graph must be run in a session.
# √ Session: executes the operations of the nodes in the computation graph.
# Printing w1 and w2 directly (outside sess.run) shows the graph nodes, not their values:
print("w1:\n", w1)
print("w2:\n", w2)

"""
Program output:
X:
[[ 0.83494319  0.11482951]
 [ 0.66899751  0.46594987]
 [ 0.60181666  0.58838408]
 [ 0.31836656  0.20502072]
 [ 0.87043944  0.02679395]
 [ 0.41539811  0.43938369]
 [ 0.68635684  0.24833404]
 [ 0.97315228  0.68541849]
 [ 0.03081617  0.89479913]
 [ 0.24665715  0.28584862]
 [ 0.31375667  0.47718349]
 [ 0.56689254  0.77079148]
 [ 0.7321604   0.35828963]
 [ 0.15724842  0.94294584]
 [ 0.34933722  0.84634483]
 [ 0.50304053  0.81299619]
 [ 0.23869886  0.9895604 ]
 [ 0.4636501   0.32531094]
 [ 0.36510487  0.97365522]
 [ 0.73350238  0.83833013]
 [ 0.61810158  0.12580353]
 [ 0.59274817  0.18779828]
 [ 0.87150299  0.34679501]
 [ 0.25883219  0.50002932]
 [ 0.75690948  0.83429824]
 [ 0.29316649  0.05646578]
 [ 0.10409134  0.88235166]
 [ 0.06727785  0.57784761]
 [ 0.38492705  0.48384792]
 [ 0.69234428  0.19687348]
 [ 0.42783492  0.73416985]
 [ 0.09696069  0.04883936]]
Y_:
[[1], [0], [0], [1], [1], [1], [1], [0], [1], [1], [1], [0], [0], [0], [0], [0],
 [0], [1], [0], [0], [1], [1], [0], [1], [0], [1], [1], [1], [1], [1], [0], [1]]
w1:
[[-0.81131822  1.48459876  0.06532937]
 [-2.4427042   0.0992484   0.59122431]]
w2:
[[-0.81131822]
 [ 1.48459876]
 [ 0.06532937]]

After 0 training step(s), loss_mse on all data is 5.13118
After 500 training step(s), loss_mse on all data is 0.429111
After 1000 training step(s), loss_mse on all data is 0.409789
After 1500 training step(s), loss_mse on all data is 0.399923
After 2000 training step(s), loss_mse on all data is 0.394146
After 2500 training step(s), loss_mse on all data is 0.390597

w1:
[[-0.70006633  0.9136318   0.08953571]
 [-2.3402493  -0.14641267  0.58823055]]
w2:
[[-0.06024267]
 [ 0.91956186]
 [-0.0682071 ]]
"""
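A note on running this today (my addition, not from the course): the script uses TensorFlow 1.x APIs (placeholder, Session, the tf.train optimizers). Under TensorFlow 2.x they are still available through the v1 compatibility layer, roughly:

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()   # restore 1.x graph-and-session execution
# ...the rest of tf3_6.py then runs unchanged.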
Source: the course "Artificial Intelligence Practice: TensorFlow Notes" on the MOOC app, taught by Cao Jian, Peking University.