deep_learning_Function_rnn_cell.BasicLSTMCell

tf.nn.rnn_cell.BasicLSTMCell(n_hidden, forget_bias=1.0, state_is_tuple=True): n_hidden is the number of hidden units (neurons). forget_bias is a bias added to the LSTM's forget gate: at 1.0 the gate starts mostly open, so the cell initially forgets almost nothing; at 0.0 no bias is added and information is forgotten more readily. state_is_tuple defaults to True, which is the officially recommended setting; it means the state is returned as a tuple (c, h) rather than concatenated into a single tensor. The cell also provides a state-initialization method, zero_state(batch_size, dtype), with two parameters: batch_size is the number of samples in the input batch, and dtype is the data type.
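As a quick illustration of what zero_state returns (a minimal sketch, assuming TensorFlow 1.x, where the state is an LSTMStateTuple):

import tensorflow as tf

cell = tf.nn.rnn_cell.BasicLSTMCell(10, forget_bias=1.0, state_is_tuple=True)
state = cell.zero_state(4, dtype=tf.float32)  # batch_size = 4
print(state.c.shape)  # (4, 10) -- the cell state c, [batch_size, n_hidden]
print(state.h.shape)  # (4, 10) -- the hidden state h, [batch_size, n_hidden]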

For example:

import tensorflow as tf
 
batch_size = 4
inputs = tf.random_normal(shape=[3, batch_size, 6], dtype=tf.float32)  # [steps, batch_size, input_dim], since time_major=True below
cell = tf.nn.rnn_cell.BasicLSTMCell(10, forget_bias=1.0, state_is_tuple=True)
init_state = cell.zero_state(batch_size, dtype=tf.float32)

output, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=init_state, time_major=True)
# time_major=True means the first dimension of the input holds the time steps;
# this is the recommended layout because it runs slightly faster.
# If it is False, the second dimension of the input holds the time steps.
# With time_major=True, output has shape [steps, batch_size, depth];
# otherwise it is [batch_size, max_time, depth]. The input follows the same convention.

# final_state is the final state of the whole LSTM and contains c and h,
# each with shape [batch_size, n_hidden].
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Fetch both tensors in a single run() call so that they are computed
    # from the same random input.
    out, state = sess.run([output, final_state])
    print(out)
    print(state)
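
The two parts of the returned state tuple can then be inspected by name to confirm the shapes noted above (a short sketch reusing the variables from this example; LSTMStateTuple is a namedtuple, so its fields are addressable as .c and .h):

print(state.c.shape)  # (4, 10), i.e. [batch_size, n_hidden]
print(state.h.shape)  # (4, 10), i.e. [batch_size, n_hidden]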

----------------
Original link: https://blog.csdn.net/UESTC_C2_403/article/details/73353145
