TensorFlow deep learning practice notes (1): training a fully connected neural network (FCN) on your own data (read from a txt file)

1. Prepare the data

Put the data into a txt file (if there is a lot of data, write a small program in any language to generate the txt file automatically). Values are separated by commas, and the last value on each row is the label used for classification, e.g. 0 or 1. Each row represents one training sample, as shown below.

 

The first three columns are the data (features), and the last column is the label. Note: the labels must be encoded starting from 0!
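The original screenshot of the file is not reproduced here; using the same sample values that appear at the end of this post (and the one-hot convention used later), a txt file in this format would look roughly like this, with the last column being the label:

1.0,2.0,3.0,1
9,8,5,0
9,5,6,0
7,5,3,0
6,12,7,0
8,3,6,1
2,8,71,1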

2. Implement the fully connected network

I won't explain this step in detail; it is very simple and just standard code. The focus of this post is on using your own data, so the points that need attention are marked in the comments below. Straight to the code:

import tensorflow as tf

# hidden layer parameters
in_units = 3   # number of input neurons
h1_units = 5   # number of neurons in the first hidden layer

# number of neurons in the second hidden layer
h2_units = 6


W1 = tf.Variable(tf.truncated_normal([in_units, h1_units], stddev=0.1))   # first hidden layer weights, initialized from a truncated normal
b1 = tf.Variable(tf.zeros([h1_units]))                                     # first hidden layer biases, set to 0
W2 = tf.Variable(tf.truncated_normal([h1_units, h2_units], stddev=0.1))   # second hidden layer weights, initialized from a truncated normal
b2 = tf.Variable(tf.zeros([h2_units]))                                     # second hidden layer biases, set to 0

W3 = tf.Variable(tf.zeros([h2_units, 2]))   # output layer weights and biases, set to 0
b3 = tf.Variable(tf.zeros([2]))

# define the input placeholder x and the dropout keep probability
x = tf.placeholder(tf.float32, [None, 3])   # 3 columns, one per feature
keep_prob = tf.placeholder(tf.float32)

# define the first hidden layer
hidden1 = tf.nn.relu(tf.matmul(x, W1) + b1)
hidden1_drop = tf.nn.dropout(hidden1, keep_prob)

# define the second hidden layer
hidden2 = tf.nn.relu(tf.matmul(hidden1_drop, W2) + b2)
hidden2_drop = tf.nn.dropout(hidden2, keep_prob)

Points that need attention:

in_units = 3  # the number of input neurons must match the dimensionality of the features

 

x = tf.placeholder(tf.float32, [None, 3])  # the second dimension must also match the dimensionality of the features
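For example (my own illustration, not from the original post), if each row of the txt file had five features instead of three, both lines would change together:

in_units = 5                                 # one input neuron per feature
x = tf.placeholder(tf.float32, [None, 5])    # 5 feature columns per sample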

3. Implement the loss function

      Standard softmax plus cross-entropy, not much to say.

y = tf.nn.softmax(tf.matmul(hidden2_drop, W3) + b3)

# define the loss function and choose an optimizer
y_ = tf.placeholder(tf.float32, [None, 2])   # 2 columns, one per class; None means the number of input samples is not fixed

cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train_step = tf.train.AdagradOptimizer(0.3).minimize(cross_entropy)
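In formula form, the loss computed above is the batch-averaged cross-entropy between the one-hot labels and the softmax outputs:

$$
\text{cross\_entropy} = -\frac{1}{N}\sum_{n=1}^{N}\sum_{i=1}^{2} y'_{n,i}\,\log(y_{n,i})
$$

where $y'$ is the one-hot label (y_ in the code), $y$ is the softmax output, and $N$ is the number of samples in the batch.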

 Important considerations:

y_ = tf.placeholder(tf.float32, [None, 2])  # write as many columns as there are classes; I have two classes here, so it is 2
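As an illustration of my own (not from the original post): with three classes, the placeholder and the output layer would change together, roughly like this:

y_ = tf.placeholder(tf.float32, [None, 3])   # 3 classes -> 3 columns
W3 = tf.Variable(tf.zeros([h2_units, 3]))    # output layer now has 3 output neurons
b3 = tf.Variable(tf.zeros([3]))

The one-hot encoding in the next section would then also need to build vectors of length 3, i.e. np.zeros((label_num, 3)).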

4. Read the data from txt and process it

    This is the key part. First read the data out of the txt file, then one-hot encode the labels. What is one-hot encoding? The index indicates the category: the dimension corresponding to the sample's class is set to 1 and all other dimensions are 0. Code:

import numpy as np

data = np.loadtxt('txt.txt', dtype='float', delimiter=',')

# convert the sample labels to one-hot encoding
def label_change(before_label):
    label_num = len(before_label)
    change_arr = np.zeros((label_num, 2))   # 2 means there are two classes
    for i in range(label_num):
        # this requires the labels in the data file to start from 0
        change_arr[i, int(before_label[i])] = 1
    return change_arr

# extract the training data
def train(data):
    data_train_x = data[:7, :3]   # take the first 7 rows as training data; :3 takes the first three columns, excluding the label column
    data_train_y = label_change(data[:7, -1])
    return data_train_x, data_train_y


data_train_x, data_train_y = train(data)
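As a quick check of what label_change produces (a hypothetical label column, my own example), a label column of 1, 0, 0 becomes:

print(label_change(np.array([1., 0., 0.])))
# [[0. 1.]
#  [1. 0.]
#  [1. 0.]]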

The points that need attention are marked as comments in the code, so I won't repeat them.

5. Start training and testing

Training part:

sess = tf.InteractiveSession()            # a session is needed for the .run()/.eval() calls below
tf.global_variables_initializer().run()

for i in range(5):   # iterate, taking one batch per step
    img_batch, label_batch = tf.train.shuffle_batch([data_train_x, data_train_y],   # sample a random batch
                                                    batch_size=2,
                                                    num_threads=2,
                                                    capacity=7,
                                                    min_after_dequeue=2,
                                                    enqueue_many=True)
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord, sess=sess)

    img_batch, label_batch = sess.run([img_batch, label_batch])

    train_step.run({x: img_batch, y_: label_batch, keep_prob: 0.75})
# prediction part
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(accuracy.eval({x: data_train_x, y_: data_train_y, keep_prob: 1.0}))

That completes the whole process. The network structure can be modified as needed; the core point is how to read your own data from a txt file and feed it into a fully connected neural network (MLP) for training and testing.

Of course, the input data can also be defined directly as variables instead of being read from txt, that is:

image = [[1.0, 2.0, 3.0], [9, 8, 5], [9, 5, 6], [7, 5, 3], [6, 12, 7], [8, 3, 6], [2, 8, 71]]
label = [[0, 1], [1, 0], [1, 0], [1, 0], [1, 0], [0, 1], [0, 1]]
image_test = [[9, 9, 9]]
label_test = [[0, 1]]

Defining the data directly like this is fine for small amounts of data, but it is not practical for large datasets.
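The post defines image_test and label_test but does not show the evaluation step; a minimal sketch, assuming the graph, session, and accuracy node defined above are still live, would feed them in the same way:

# hypothetical evaluation on the single test sample defined above
print(accuracy.eval({x: image_test, y_: label_test, keep_prob: 1.0}))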

  Well, that is the end of this post. Next time: how to handle image data.


Original post: www.cnblogs.com/pypypy/p/11829700.html