Writing my first neural network with TensorFlow

The training course I'm taking uses an LSTM + attention mechanism to do relation extraction in the steel field. As a complete beginner, I have only just picked up a little deep learning, with some theoretical knowledge of RNNs and LSTMs from Bilibili.

But theoretical knowledge alone is not enough to complete a relation extraction task. So I borrowed "TensorFlow in Action: Deep Learning Framework" from the library, and this is where I start recording my TensorFlow neural network programming!

First, a word about how TensorFlow works: a computation is generally split into two stages. The first stage defines the computation graph used for the calculation, and the second stage executes the calculation.
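For example, here is a minimal sketch of the two stages (this assumes TensorFlow 1.x, where the graph-and-session API is the default; the snippet is just an illustration I added):

import tensorflow as tf
a = tf.constant([1.0, 2.0], name="a")
b = tf.constant([3.0, 4.0], name="b")
result = a + b                    # stage one: this only adds a node to the computation graph
with tf.Session() as sess:
    print(sess.run(result))       # stage two: the addition actually runs here, printing [4. 6.]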

Once you have this concept, you will find that this mechanism lets the framework cleanly separate the graph-definition part from the model-training part. Below is my first experiment: a simple classification problem, solved with a 2-3-1 neural network (three layers, with that many nodes in each layer).

import tensorflow as tf
from numpy.random import RandomState

batch_size = 8
w1 = tf.Variable(tf.random_normal((2, 3), stddev=1, seed=1))  # randomly initialized weights; stddev is the standard deviation
w2 = tf.Variable(tf.random_normal((3, 1), stddev=1, seed=1))  # randomly initialized weights

x = tf.placeholder(tf.float32, shape=(None, 2), name="x_input")   # placeholders hold the input data during training; defining the data as constants would consume too much memory
y_ = tf.placeholder(tf.float32, shape=(None, 1), name="y_input")  # a placeholder needs a type and a shape; None means the number of examples is not known in advance
biases1 = tf.Variable(tf.random_normal((1, 3), stddev=1))  # the bias, i.e. the so-called intercept term
biases2 = tf.Variable(tf.random_normal((1, 1), stddev=1))
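# Shape check for the 2-3-1 network described above: x is (None, 2), w1 is (2, 3) and
# biases1 is (1, 3), so the hidden layer a is (None, 3); w2 is (3, 1) and biases2 is
# (1, 1), so the output y is (None, 1).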
# a = tf.matmul(x, w1) + biases1

# The following implements forward propagation
a = tf.sigmoid(tf.matmul(x, w1) + biases1)  # the sigmoid activation function introduces nonlinearity
y = tf.matmul(a, w2) + biases2
y = tf.sigmoid(y)
# The loss function: cross-entropy
cross_entropy = -tf.reduce_mean(y_ * tf.log(tf.clip_by_value(y, 1e-10, 1.0))
                                + (1 - y_) * tf.log(tf.clip_by_value(1 - y, 1e-10, 1.0)))
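# For reference, the loss above is the binary cross-entropy
#   H(y_, y) = -mean( y_ * log(y) + (1 - y_) * log(1 - y) )
# and tf.clip_by_value keeps the values inside [1e-10, 1.0] so that log(0) is never evaluated.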
# Choose the optimization method (i.e. the back-propagation rule used to update the weights;
# I don't yet know what Adam means, I only know gradient descent -- see the note just below)
train_step = tf.train.AdamOptimizer(0.001).minimize(cross_entropy)
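# Note: plain gradient descent, the only optimizer I really know, would be a one-line
# swap here -- just a sketch of the alternative, not what the code above uses:
# train_step = tf.train.GradientDescentOptimizer(0.001).minimize(cross_entropy)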

# Generate a random data set
rdm = RandomState(1)  # the random seed is 1
dataset_size = 128
X = rdm.rand(dataset_size, 2)
Y = [[int(x1 + x2 < 1)] for (x1, x2) in X]  # label is 1 when x1 + x2 < 1, otherwise 0
# Create a session and start training the model; this is the execution stage mentioned earlier
with tf.Session() as sess:
    # In TensorFlow all variables must be initialized
    initall = tf.global_variables_initializer()
    sess.run(initall)
    # print(sess.run(biases1))
    # print(sess.run(w1))
    # print(sess.run(w2))
    # Take a small part of the training set, called a batch; training proceeds one batch at a time
    STEPS = 5000
    for i in range(STEPS):
        start = (i * batch_size) % dataset_size
        end = min(start + batch_size, dataset_size)
        sess.run(train_step, feed_dict={x: X[start:end], y_: Y[start:end]})
        # Every 1000 steps, check the training result, i.e. the cross-entropy: the smaller the better
        if i % 1000 == 0:
            total_cross = sess.run(cross_entropy, feed_dict={x: X, y_: Y})
            print(i, "", total_cross)
    # Check the final updated weights
    print(sess.run(w1))
    print(sess.run(w2))
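A small compatibility note (my own assumption, not something from the book): the code above uses the TensorFlow 1.x graph-and-session API. If only TensorFlow 2.x is installed, the v1 compatibility module should let roughly the same script run; a sketch I have not verified:

import tensorflow.compat.v1 as tf   # assumes TensorFlow 2.x with the v1 compatibility module
tf.disable_v2_behavior()            # restores graph mode so tf.placeholder and tf.Session work

With these two lines replacing the original import, the rest of the script is meant to run as-is.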

This is my first time writing a blog post, and I'm still a beginner, so if there are any problems please point them out.
