Basic use of TensorFlow and Keras

Edit and run in a Jupyter notebook

Part One: TensorFlow

Steps:

1. Import TensorFlow

import tensorflow as tf

 

2. Load the data (feed it in batches)

 

3. Define placeholders (the training set is fed in through them later)

x = tf.placeholder(tf.float32, [shape], name="")
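For example, a minimal sketch assuming flattened 28x28 MNIST-style inputs with 10 one-hot classes (the shapes and names are illustrative):

import tensorflow as tf

# None leaves the batch size flexible.
x = tf.placeholder(tf.float32, [None, 784], name="x_input")
y = tf.placeholder(tf.float32, [None, 10], name="y_input")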

4. Define the network structure and parameters (w, b)

Write out the prediction function z = activation(wx + b).

Dropout can be added here (to prevent overfitting).

This step defines each layer of the network in turn (the activation function of the last layer is generally different from that of the earlier layers).

w = tf.Variable(tf.zeros([shape]))

wx_plus_b = tf.matmul(x, w) + b   # with x shaped [batch, features], multiply as x * w

L1 = tf.nn.<activation>(wx_plus_b)   # the output of the last layer is usually named prediction

# dropout

keep_prob = tf.placeholder(tf.float32)   # or define it directly as a constant

L1_drop = tf.nn.dropout(L1, keep_prob)
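Putting steps 3 and 4 together, a minimal sketch of one hidden layer with dropout plus an output layer (the layer sizes and tanh activation are illustrative choices, not from the original notes):

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.float32, [None, 10])
keep_prob = tf.placeholder(tf.float32)

# Hidden layer: 784 -> 500, tanh activation, then dropout.
w1 = tf.Variable(tf.truncated_normal([784, 500], stddev=0.1))
b1 = tf.Variable(tf.zeros([500]))
L1 = tf.nn.tanh(tf.matmul(x, w1) + b1)
L1_drop = tf.nn.dropout(L1, keep_prob)

# Output layer: 500 -> 10; left as raw logits for the loss in step 5.
w2 = tf.Variable(tf.truncated_normal([500, 10], stddev=0.1))
b2 = tf.Variable(tf.zeros([10]))
prediction = tf.matmul(L1_drop, w2) + b2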

5. Define the cost function (quadratic cost function or cross-entropy cost function)

loss = tf.reduce_mean(tf.square(y - prediction))   # quadratic cost function; the cross-entropy version follows

loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=prediction))

or tf.nn.sigmoid_cross_entropy_with_logits

Here prediction should be the raw output without the final activation (softmax or sigmoid, generally the two used on the last layer), because the loss function applies it internally.

(tf.reduce_mean takes the mean.)
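As a side-by-side sketch of the two cost functions (placeholders stand in for the one-hot labels and the last layer's logits):

import tensorflow as tf

y = tf.placeholder(tf.float32, [None, 10])          # true labels, one-hot
prediction = tf.placeholder(tf.float32, [None, 10]) # raw logits from the last layer

# Quadratic cost: compare labels against softmax probabilities.
loss_mse = tf.reduce_mean(tf.square(y - tf.nn.softmax(prediction)))

# Cross-entropy cost: pass raw logits; softmax is applied internally.
loss_ce = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=prediction))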

6. Train with an optimizer and define the learning rate (stochastic gradient descent, etc.)

train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

tf.train.AdadeltaOptimizer()

tf.train.AdagradOptimizer()

... and many other optimization methods
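For example, a one-line swap to Adam (the 1e-3 learning rate is a common default, not taken from the original notes):

# Replace the train_step line above; Adam often converges faster than plain SGD.
train_step = tf.train.AdamOptimizer(1e-3).minimize(loss)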

 

7. Compute the accuracy

correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(prediction, 1))

# argmax returns the index of the largest value along the given axis; tf.equal tests element-wise equality

accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
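A toy illustration of how these ops behave (the values are made up):

import tensorflow as tf

labels = tf.constant([[0., 1.], [1., 0.]])    # one-hot labels
logits = tf.constant([[0.2, 0.8], [0.9, 0.1]])

correct = tf.equal(tf.argmax(labels, 1), tf.argmax(logits, 1))  # [True, True]
acc = tf.reduce_mean(tf.cast(correct, tf.float32))              # 1.0

with tf.Session() as sess:
    print(sess.run(acc))  # prints 1.0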

 

8. Initialize the variables

init = tf.global_variables_initializer()

 

9. Run the session

with tf.Session() as sess:

    sess.run(init)

    for epoch in range(51):   # train for 51 epochs

        sess.run([], feed_dict={x: , y: })

        # put the ops to run in the list, e.g. train_step; the feed values follow

        acc = sess.run(accuracy, feed_dict={})

        print("Iter " + str(epoch) + ", Testing Accuracy " + str(acc))

sess.run returns the result of the executed fetches: if the fetches argument is a single op, a single value is returned; if it is a list, a list of values is returned; if it is a dictionary, a dictionary with the same keys is returned.

Ways to improve the results:

1. Batch the data

2. Use more training data

3. Change the activation function

4. Increase the number of neurons and layers

5. Add dropout (tune the keep_prob parameter; see the sketch after this list) or regularization

6. Tune the learning rate

7. Change the optimization method

8. Train for more epochs
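For item 5, a common pattern inside the training loop from step 9 (the 0.7 value is illustrative) is to drop neurons only while training and keep them all at test time:

# During training: keep each neuron with probability 0.7.
sess.run(train_step, feed_dict={x: batch_xs, y: batch_ys, keep_prob: 0.7})

# During testing: disable dropout by keeping everything.
acc = sess.run(accuracy, feed_dict={x: test_xs, y: test_ys, keep_prob: 1.0})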

 

 

Use TensorBoard for network visualization

1. Give each part (basically the blocks from the steps above) a namespace; sub-namespaces can be nested inside it

with tf.name_scope('your own scope name'):

    with tf.name_scope(''):   # only placeholders and variables need the name parameter; nothing else does

        x = tf.placeholder(tf.float32, [None, 10], name="x_input")
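A minimal sketch of how the scopes group the graph for TensorBoard (the scope names are illustrative):

import tensorflow as tf

with tf.name_scope('input'):
    x = tf.placeholder(tf.float32, [None, 784], name="x_input")
    y = tf.placeholder(tf.float32, [None, 10], name="y_input")

with tf.name_scope('layer'):
    with tf.name_scope('weights'):
        w = tf.Variable(tf.zeros([784, 10]), name="w")
    with tf.name_scope('biases'):
        b = tf.Variable(tf.zeros([10]), name="b")
    prediction = tf.matmul(x, w) + b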

 

2. Finally, inside with tf.Session() as sess:, add:

writer = tf.summary.FileWriter('logs/', sess.graph)   # the path and folder name are up to you

 

3. To run it, open a command prompt window (cmd), enter d: (to go to the D drive), then run

tensorboard --logdir=path

where path is the location of the logs folder defined above (with no path, it is created in the project folder by default).

Then open the given URL in Google Chrome (not the 360 browser).

 

After changing the program, delete the old log files, then choose Restart & Clear Output in the notebook so a fresh graph is written.

 

Plotting:

tf.summary.scalar("", w)      # scalar curve of a parameter (how it changes over time)

tf.summary.histogram("", w)   # histogram

You can plot scalar curves for loss and accuracy, and scalar curves (or histograms) for w and b.
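A minimal sketch of attaching the summaries, given the loss, accuracy, w and b defined in Part One (the summary names are illustrative):

tf.summary.scalar('loss', loss)          # loss curve
tf.summary.scalar('accuracy', accuracy)  # accuracy curve
tf.summary.histogram('weights', w)       # weight distribution per step
tf.summary.histogram('biases', b)        # bias distribution per step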

 

Finally, add one line to merge all the summaries:

merged = tf.summary.merge_all()

 

Inside the with block, sess.run(train_step, feed_dict={x: batch_xs, y: batch_ys}) becomes:

summary, _ = sess.run([merged, train_step], feed_dict={x: batch_xs, y: batch_ys})

writer.add_summary(summary, epoch)   # passing the step number lets TensorBoard plot the curves against it
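Put together, a sketch of the modified session block (reusing the graph ops and the mnist loader from the end-to-end sketch in Part One):

merged = tf.summary.merge_all()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    writer = tf.summary.FileWriter('logs/', sess.graph)
    for epoch in range(51):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        summary, _ = sess.run([merged, train_step],
                              feed_dict={x: batch_xs, y: batch_ys})
        writer.add_summary(summary, epoch)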

Part Two: Keras

from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.optimizers import SGD

model = Sequential()

 

# first layer (input layer)

model.add(Dense(units=500, input_dim=28*28))

model.add(Activation('sigmoid'))

# second layer

model.add(Dense(units=500))

model.add(Activation('sigmoid'))

# third layer (output layer)

model.add(Dense(units=10))

model.add(Activation('softmax'))

 

# loss function and optimizer

model.compile(loss='mse',
              optimizer=SGD(lr=0.1),
              metrics=['accuracy'])

# training

model.fit(x_train, y_train, batch_size=100, epochs=20)

# testing

score = model.evaluate(x_test, y_test)

print('Total loss on Testing Set:', score[0])

print('Accuracy of Testing Set:', score[1])

# prediction

result = model.predict(x_test)
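The code above assumes x_train/x_test are flattened 784-dimensional vectors and the labels are one-hot. A minimal sketch of that preparation, using the MNIST loader bundled with Keras:

from keras.datasets import mnist
from keras.utils import to_categorical

(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Flatten 28x28 images to 784-vectors and scale pixel values to [0, 1].
x_train = x_train.reshape(-1, 28 * 28).astype('float32') / 255
x_test = x_test.reshape(-1, 28 * 28).astype('float32') / 255

# One-hot encode the 10 digit classes.
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)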
