Deep learning basics: linear regression to fit two-dimensional data

  Starting today, I'll be recording my deep learning journey here. The purpose is very simple: I have nearly 30,000 labeled captcha images on hand, and I want to train a usable model from scratch.

Although I know there are plenty of relevant models and demos online, I still very much want to build a usable one with my own hands. The book I'm learning from is "Deep Learning with TensorFlow: Introduction, Principles, and Advanced Practice" by teacher Li Jinhong.

In addition, I will fully open-source the code, the trained captcha recognition model, and the nearly 30,000 captcha images. Let's encourage each other.

  

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time: 2019/9/23 21:27
# @Author: SongSa
# @Desc:
# @File: fitting_two_dimensional_data.py
# @Software: PyCharm

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

"""
Deep learning is divided into four steps:
    prepare the data
    build the model
    train iteratively
    use the model
"""

# ####### Prepare the data ########
train_X = np.linspace(-1, 1, 100)
train_Y = 2 * train_X + np.random.randn(100) * 0.3  # y = 2x, with noise added
plt.plot(train_X, train_Y, 'ro', label='Original data')  # show the simulated data points
plt.legend()  # display the legend for the label
plt.show()

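# The scatter plot above should show the 100 sample points spread around the
# line y = 2x (my own note: the 'ro' style draws them as red circles).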

plotdata = {"batchsize": [], "loss": []}
def moving_average(a, w=10):
    if len(a) < w:
        return a[:]
    return [val if idx < w else sum(a[(idx - w):idx]) / w for idx, val in enumerate(a)]


# ####### Build the model ########
# The model has two directions: forward and backward
# Create the model
X = tf.placeholder("float")  # placeholder
Y = tf.placeholder("float")
# Model parameters
W = tf.Variable(tf.random_normal([1]), name="weight")  # W is initialized to a random value from a normal distribution, shape [1]
b = tf.Variable(tf.zeros([1]), name="bias")  # b is initialized to 0
# Forward structure
z = tf.multiply(X, W) + b  # tf.multiply() is multiplication: z = X * W + b

# Backward part of the model
# During training, data flows through the network in two directions: the forward pass
# first produces a predicted value, the gap to the observed true value is measured,
# and then the backward pass adjusts the parameters. The next forward pass produces a
# new prediction to compare against the true value, and the cycle repeats until the
# parameters converge to appropriate values; back-propagation is the algorithm that
# performs this adjustment.
cost = tf.reduce_mean(tf.square(Y - z))  # cost is the mean squared difference between the prediction and the true value
# tf.reduce_mean() computes the mean of a tensor along the specified axis
# tf.square() computes the square of (Y - z)
learning_rate = 0.01  # learning rate (larger means faster adjustment but lower accuracy, and vice versa)
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)  # encapsulated gradient descent algorithm

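# In formula form (my own annotation, a restatement of the code above):
#   cost = (1/n) * sum_i (y_i - z_i)^2          # mean squared error
# and each gradient-descent step performed by minimize() updates
#   W <- W - learning_rate * d(cost)/dW
#   b <- b - learning_rate * d(cost)/db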

# ####### Train iteratively ########
# TensorFlow tasks are carried out inside a session
init = tf.global_variables_initializer()  # initialize all variables
# Custom parameters
training_epochs = 20
display_step = 2
# Start the session
with tf.Session() as sess:
    sess.run(init)
    plotdata = {"batchsize": [], "loss": []}  # store the batch index and loss values
    for epoch in range(training_epochs):
        for (x, y) in zip(train_X, train_Y):
            sess.run(optimizer, feed_dict={X: x, Y: y})

        # Display training details
        if epoch % display_step == 0:
            loss = sess.run(cost, feed_dict={X: train_X, Y: train_Y})
            print("Epoch:", epoch + 1, "cost=", loss, "W=", sess.run(W), "b=", sess.run(b))
            if not np.isnan(loss):  # only record the loss when it is a valid number
                plotdata["batchsize"].append(epoch)
                plotdata["loss"].append(loss)

    print("Finished!")
    print("cost=", sess.run(cost, feed_dict={X: train_X, Y: train_Y}), "W=", sess.run(W), "b=", sess.run(b))

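    # Since the training data was generated as y = 2x plus noise, the learned
    # parameters should come out near W ≈ 2 and b ≈ 0 (my own note; exact values
    # vary from run to run because of the random noise and initialization).
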
    # Visualize the trained model
    plt.plot(train_X, train_Y, 'ro', label="Original data")
    plt.plot(train_X, sess.run(W) * train_X + sess.run(b), label="Fitted line")
    plt.legend()
    plt.show()

    plotdata["avgloss"] = moving_average(plotdata["loss"])
    plt.figure(1)
    plt.subplot(211)
    plt.plot(plotdata["batchsize"], plotdata["avgloss"], 'b--')
    plt.ylabel("Loss")
    plt.title("Minibatch run vs. Training loss")
    plt.show()

    # ####### Use the model ########
    print('Using the model:\n\n')
    print("x=0.2, z=", sess.run(z, feed_dict={X: 0.2}))
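
As a quick cross-check of the result (my own addition, not from the book): the same line can be fitted in closed form with NumPy's least-squares helper np.polyfit, which lets you verify that the W and b found by gradient descent above are sensible.

import numpy as np

train_X = np.linspace(-1, 1, 100)
train_Y = 2 * train_X + np.random.randn(100) * 0.3  # same data recipe as above

# Degree-1 least-squares fit; np.polyfit returns [slope, intercept]
w_ls, b_ls = np.polyfit(train_X, train_Y, 1)
print("least-squares W =", w_ls, "b =", b_ls)  # should come out near 2 and 0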

Finally, a small ad: to learn more Python content on crawlers and data analysis, and to get large amounts of crawler-collected data, you're welcome to follow my WeChat official account: Enlightenment Python.
