Building a Neural Network with TensorFlow (1)

1. Neural network parameters

  (1) tf.random_normal() generates normally distributed random numbers

w = tf.Variable(tf.random_normal([2,3], stddev=2, mean=0, seed=1))
# generates normally distributed random numbers: 2 rows, 3 columns, standard deviation 2, mean 0, random seed 1
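The mean and stddev arguments can be checked empirically. The NumPy sketch below is only an illustration (it is not TensorFlow's implementation): it draws many samples with the same mean and standard deviation as above and prints the sample statistics:

```python
import numpy as np

rng = np.random.RandomState(1)  # fixed seed, analogous to seed=1 above
# mean 0, standard deviation 2, as in the tf.random_normal call
samples = rng.normal(loc=0, scale=2, size=100000)
print(samples.mean(), samples.std())  # close to 0 and 2 respectively
```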

  (2) tf.truncated_normal() generates a truncated normal distribution, discarding points that fall too far out: if a generated value is more than two standard deviations from the mean, it is regenerated

w = tf.Variable(tf.truncated_normal([2,3], stddev=2, mean=0, seed=1))
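The two-standard-deviation rejection rule can be illustrated without TensorFlow. This NumPy sketch (an illustration of the idea, not TensorFlow's actual implementation) redraws any value farther than two standard deviations from the mean:

```python
import numpy as np

def truncated_normal(shape, mean=0.0, stddev=1.0, seed=None):
    """Rejection-sampling sketch of tf.truncated_normal:
    any draw more than 2 standard deviations from the mean is regenerated."""
    rng = np.random.RandomState(seed)
    out = rng.normal(mean, stddev, size=shape)
    bad = np.abs(out - mean) > 2 * stddev
    while bad.any():
        out[bad] = rng.normal(mean, stddev, size=int(bad.sum()))
        bad = np.abs(out - mean) > 2 * stddev
    return out

w = truncated_normal((2, 3), mean=0.0, stddev=2.0, seed=1)
print(w)  # every entry lies within mean ± 2*stddev, i.e. in [-4, 4]
```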

  (3) tf.random_uniform() generates uniformly distributed random numbers

w = tf.Variable(tf.random_uniform([2,3], minval=0, maxval=1, dtype=tf.float32, seed=1))
# samples uniformly from [minval, maxval); note the interval is half-open: minval is included, maxval is not.

  For these functions, if there is no special requirement, the standard deviation, mean, and random seed arguments can be omitted; see each function's documentation for the exact usage.

  (4) Other functions: tf.zeros generates an array filled with 0s
         tf.ones generates an array filled with 1s
         tf.fill generates an array filled with a specified value
         tf.constant generates an array from the given values

tf.zeros([3,2], tf.int32)   # generates [[0,0],[0,0],[0,0]]
tf.ones([3,2], tf.int32)    # generates [[1,1],[1,1],[1,1]]
tf.fill([3,2], 6)           # generates [[6,6],[6,6],[6,6]]
tf.constant([3,2,1])        # generates [3,2,1]
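Since these ops just build constant tensors, the expected values in the comments above can be verified with their NumPy equivalents (a sketch using NumPy, not TensorFlow itself):

```python
import numpy as np

zeros = np.zeros((3, 2), dtype=np.int32)  # like tf.zeros([3,2], tf.int32)
ones = np.ones((3, 2), dtype=np.int32)    # like tf.ones([3,2], tf.int32)
fill = np.full((3, 2), 6)                 # like tf.fill([3,2], 6)
const = np.array([3, 2, 1])               # like tf.constant([3,2,1])
print(zeros, ones, fill, const, sep="\n")
```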

FNN (feed-forward neural network):

##### Using the iris dataset ######
from sklearn.datasets import load_iris
import tensorflow as tf
from numpy.random import RandomState
import numpy as np

iris = load_iris()
iris_data = iris.data
iris_label = iris.target
iris_label = np.reshape(iris_label, (150, 1))  # np.reshape(array, (m, n)) returns the array reshaped to (m, n) without changing its data
# print(np.shape(iris_label))

BATCH_SIZE = 2
x = tf.placeholder(tf.float32, shape=(None, 4))   # placeholder: specify the data type first; the shape can be fed later. Each iris sample has 4 feature values
y_ = tf.placeholder(tf.float32, shape=(None, 1))

w1 = tf.Variable(tf.random_normal([4, 10], stddev=1, seed=1))
w2 = tf.Variable(tf.random_normal([10, 1], stddev=1, seed=1))

a = tf.matmul(x, w1)   # matrix multiplication
y = tf.matmul(a, w2)   # 150*1
# print(np.shape(y))

loss = tf.reduce_mean(tf.square(y - y_))
train_step = tf.train.GradientDescentOptimizer(0.001).minimize(loss)

with tf.Session() as sess:
    init = tf.global_variables_initializer()
    sess.run(init)
    steps = 3000
    for i in range(steps):
        start = (i * BATCH_SIZE) % 32
        end = start + BATCH_SIZE
        sess.run(train_step, feed_dict={x: iris_data[start:end], y_: iris_label[start:end]})
        if i % 100 == 0:
            total_loss = sess.run(loss, feed_dict={x: iris_data, y_: iris_label})
            print(total_loss)

  Thoughts: an array of shape (150,) cannot take part in matrix operations directly; it must first be reshaped into a second-order tensor of shape (150, 1).
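This reshape requirement is easy to demonstrate in NumPy (the array contents below are illustrative; `labels` stands in for `iris.target`):

```python
import numpy as np

labels = np.arange(150)   # shape (150,): a rank-1 array, not a column vector
w2 = np.ones((1, 1))

# np.matmul(labels, w2) would raise a shape error: (150,) is not (150, 1)
col = np.reshape(labels, (150, 1))  # second-order tensor of shape (150, 1)
out = np.matmul(col, w2)            # now the matrix product is well defined
print(col.shape, out.shape)         # (150, 1) (150, 1)
```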

The basic process of building the network:

  (1) Import modules and generate the simulated dataset

          import

          Define constants

          Generate the dataset

  (2) Forward propagation: define the inputs, parameters, and outputs

          x =            y_ =

          w1=            w2=

          a =            y =

  (3) Backpropagation: define the loss function and the backpropagation (training) method

          loss=

          train_step=

  (4) Create a session and train for STEPS rounds

        with tf.Session() as sess:

          init_op = tf.global_variables_initializer()

          sess.run(init_op)

          STEPS = 

            for i in range(STEPS):

            start = 

            end = 

            sess.run(train_step, feed_dict={ })
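Filled in on a tiny synthetic dataset, the four steps above look like the sketch below. NumPy is used in place of TensorFlow so the example runs anywhere, which means the gradients of the MSE loss are written out by hand (tf.train.GradientDescentOptimizer would derive them automatically); all names and data are illustrative:

```python
import numpy as np

# (1) import modules, define constants, generate a simulated dataset
rng = np.random.RandomState(1)
BATCH_SIZE, STEPS, lr = 8, 3000, 0.001
X = rng.rand(32, 2)                             # 32 samples, 2 features
Y_ = (X.sum(axis=1, keepdims=True) < 1) * 1.0   # label: 1 if x1 + x2 < 1

# (2) forward propagation: define inputs, parameters, outputs
w1 = rng.normal(0, 1, (2, 3))
w2 = rng.normal(0, 1, (3, 1))

def forward(x, w1, w2):
    a = x @ w1       # a = tf.matmul(x, w1)
    return a @ w2    # y = tf.matmul(a, w2)

loss_before = np.mean((forward(X, w1, w2) - Y_) ** 2)

# (3)+(4) backpropagation and the training loop: MSE loss with
# hand-written gradients, plain gradient descent
for i in range(STEPS):
    start = (i * BATCH_SIZE) % 32
    end = start + BATCH_SIZE
    xb, yb = X[start:end], Y_[start:end]
    a = xb @ w1
    y = a @ w2
    grad_y = 2 * (y - yb) / len(xb)     # d(MSE)/dy
    grad_w2 = a.T @ grad_y
    grad_w1 = xb.T @ (grad_y @ w2.T)
    w1 -= lr * grad_w1
    w2 -= lr * grad_w2

loss_after = np.mean((forward(X, w1, w2) - Y_) ** 2)
print(loss_before, loss_after)   # training lowers the loss
```

In TensorFlow the loop body would be the `sess.run(train_step, feed_dict={...})` call from the skeleton above; here the update is spelled out explicitly.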




Origin www.cnblogs.com/hanouba/p/11426650.html