[Repost] TensorFlow: the difference between static_rnn and dynamic_rnn

Original Address:

https://blog.csdn.net/qq_20135597/article/details/88980975

 

 

---------------------------------------------------------------------------------------------

 

 

 

TensorFlow provides two RNN interfaces: a static one (static_rnn) and a dynamic one (dynamic_rnn).

 

 

 

Common usage:

1. The static interface: static_rnn

It mainly uses tf.contrib.rnn.static_rnn:

x = tf.placeholder("float", [None, n_steps, n_input])
x1 = tf.unstack(x, n_steps, 1)   # split into a list of n_steps tensors, each [batch_size, n_input]
lstm_cell = tf.contrib.rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
outputs, states = tf.contrib.rnn.static_rnn(lstm_cell, x1, dtype=tf.float32)
pred = tf.contrib.layers.fully_connected(outputs[-1], n_classes, activation_fn=None)
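
As a quick sanity check (a sketch of ours, assuming the snippet above has already been run), the list structure of static_rnn's inputs and outputs can be inspected directly:

print(len(x1))                  # n_steps -- tf.unstack yields one [batch_size, n_input] tensor per step
print(len(outputs))             # n_steps -- static_rnn returns a Python list with one output per step
print(outputs[-1].get_shape())  # (?, n_hidden) -- the last step's output, fed to the classifier above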

 

 

static_rnn builds an unrolled graph for a fixed sequence length (n_steps). This leads to:

 

Disadvantages:

  1. Graph construction is slower, uses more memory, and produces a larger exported model;
  2. Sequences longer than the originally specified n_steps cannot be fed.

Advantage:

The model exposes the intermediate states of the sequence, which can be useful for debugging.

 

 

 

 

 

 

 

 

2. The dynamic interface: dynamic_rnn

It mainly uses tf.nn.dynamic_rnn:

x = tf.placeholder("float", [None, n_steps, n_input])
lstm_cell = tf.contrib.rnn.BasicLSTMCell(n_hidden, forget_bias=1.0)
outputs, _ = tf.nn.dynamic_rnn(lstm_cell, x, dtype=tf.float32)   # outputs: [batch_size, n_steps, n_hidden]
outputs = tf.transpose(outputs, [1, 0, 2])                       # -> [n_steps, batch_size, n_hidden]
pred = tf.contrib.layers.fully_connected(outputs[-1], n_classes, activation_fn=None)  # last time step
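
As a side note (not from the original post), the transpose is only one way to pick out the last time step; slicing the time axis of the raw [batch_size, n_steps, n_hidden] output directly is equivalent (raw_outputs here is a hypothetical name for the un-transposed tensor returned by tf.nn.dynamic_rnn):

last = raw_outputs[:, -1, :]   # [batch_size, n_steps, n_hidden] -> [batch_size, n_hidden], the last time step
pred = tf.contrib.layers.fully_connected(last, n_classes, activation_fn=None)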

 

 

    tf.nn.dynamic_rnn uses a loop (tf.while_loop) to run the cell over the time steps at execution time instead of unrolling the graph. This means:

 

Advantages:

  1. Graph construction is faster and uses less memory;
  2. Batches with different sequence lengths can be fed.

Disadvantages:

  1. Only the final state of the model is available.

 

       dynamic_rnn builds the RNN cell only once in the graph; the data for the remaining time steps is pushed through that cell by the loop operation at run time.
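
Within a single padded batch, sequences of different true lengths can also be handled: tf.nn.dynamic_rnn accepts a sequence_length argument. A minimal sketch (not in the original post; the placeholder names are ours, reusing n_input and n_hidden from above):

x = tf.placeholder(tf.float32, [None, None, n_input])       # [batch, time, features]; the time dimension is not fixed
seq_len = tf.placeholder(tf.int32, [None])                  # the true length of each sample in the batch
cell = tf.contrib.rnn.BasicLSTMCell(n_hidden)
# steps beyond each sample's seq_len produce zero outputs, and the returned state
# is the state taken at that sample's last valid step
outputs, state = tf.nn.dynamic_rnn(cell, x, sequence_length=seq_len, dtype=tf.float32)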

 

 

 

 

The differences:

 1. Different inputs and outputs:

        dynamic_rnn allows the batches fed in at different iterations to have different sequence lengths, although within a single batch all sequences still share one fixed length. For example, the first batch may have shape = [batch_size, 10], the second batch shape = [batch_size, 12], the third batch shape = [batch_size, 8], and so on.

      static_rnn cannot do this: every batch must have shape [batch_size, max_seq], and this shape must stay the same across iterations.
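
A small sketch of this difference (our own toy example, reusing n_input and n_hidden from above and feeding zero-filled dummy batches just to show the shapes): with dynamic_rnn the time dimension of the placeholder can be left as None, so every sess.run call may feed a batch with a different number of steps, which static_rnn cannot do because tf.unstack needs a fixed n_steps.

import numpy as np

x = tf.placeholder(tf.float32, [None, None, n_input])            # the time dimension is left unspecified
cell = tf.contrib.rnn.BasicLSTMCell(n_hidden)
outputs, state = tf.nn.dynamic_rnn(cell, x, dtype=tf.float32)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for steps in (10, 12, 8):                                    # a different sequence length at each iteration
        batch = np.zeros((4, steps, n_input), dtype=np.float32)  # dummy batch of 4 samples
        print(sess.run(outputs, feed_dict={x: batch}).shape)     # (4, 10, n_hidden), (4, 12, ...), (4, 8, ...)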

2. Different training methods:

See Reference 1 for details.
 

 

 

 

 

 

Comparison of multilayer LSTM implementations:

1. Static multi-layer RNN

import tensorflow as tf
# Import the MNIST dataset
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("c:/user/administrator/data/", one_hot=True)
 
n_input = 28 # MNIST data input (img shape: 28*28)
n_steps = 28 # timesteps
n_hidden = 128 # hidden layer num of features
n_classes = 10  # MNIST classes (digits 0-9, 10 classes in total)
batch_size = 128
 
tf.reset_default_graph()
# tf Graph input
x = tf.placeholder("float", [None, n_steps, n_input])
y = tf.placeholder("float", [None, n_classes])
 
gru = tf.contrib.rnn.GRUCell(n_hidden*2)                # second layer: a GRU with 256 units
lstm_cell = tf.contrib.rnn.LSTMCell(n_hidden)           # first layer: an LSTM with 128 units
mcell = tf.contrib.rnn.MultiRNNCell([lstm_cell,gru])    # stack them; the top layer's output size is 256
 
x1 = tf.unstack(x, n_steps, 1)
outputs, states = tf.contrib.rnn.static_rnn(mcell, x1, dtype=tf.float32)
 
pred = tf.contrib.layers.fully_connected(outputs[-1],n_classes,activation_fn = None)
 
learning_rate = 0.001
# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=pred, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
 
# Evaluate model
correct_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
 
training_iters = 100000
 
display_step = 10
 
# Start the session
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    step = 1
    # Keep training until reach max iterations
    while step * batch_size < training_iters:
        batch_x, batch_y = mnist.train.next_batch(batch_size)
        # Reshape data to get 28 seq of 28 elements
        batch_x = batch_x.reshape((batch_size, n_steps, n_input))
        # Run optimization op (backprop)
        sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
        if step % display_step == 0:
            # Calculate the accuracy on the current batch
            acc = sess.run(accuracy, feed_dict={x: batch_x, y: batch_y})
            # Calculate batch loss
            loss = sess.run(cost, feed_dict={x: batch_x, y: batch_y})
            print ("Iter " + str(step*batch_size) + ", Minibatch Loss= " + \
                  "{:.6f}".format(loss) + ", Training Accuracy= " + \
                  "{:.5f}".format(acc))
        step += 1
    print (" Finished!")
 
    # Calculate the accuracy on a subset of MNIST test images
    test_len = 100
    test_data = mnist.test.images[:test_len].reshape((-1, n_steps, n_input))
    test_label = mnist.test.labels[:test_len]
    print ("Testing Accuracy:", \
        sess.run(accuracy, feed_dict={x: test_data, y: test_label}))

 

 

 

 

2. Dynamic multi-layer RNN

import tensorflow as tf
# Import the MNIST dataset
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("c:/user/administrator/data/", one_hot=True)
 
n_input = 28 # MNIST data input (img shape: 28*28)
n_steps = 28 # timesteps
n_hidden = 128 # hidden layer num of features
n_classes = 10  # MNIST classes (digits 0-9, 10 classes in total)
batch_size = 128
 
tf.reset_default_graph()
# tf Graph input
x = tf.placeholder("float", [None, n_steps, n_input])
y = tf.placeholder("float", [None, n_classes])
 
gru = tf.contrib.rnn.GRUCell(n_hidden*2)                # second layer: a GRU with 256 units
lstm_cell = tf.contrib.rnn.LSTMCell(n_hidden)           # first layer: an LSTM with 128 units
mcell = tf.contrib.rnn.MultiRNNCell([lstm_cell,gru])    # stack them; the top layer's output size is 256
 
outputs, states = tf.nn.dynamic_rnn(mcell, x, dtype=tf.float32)  # (?, 28, 256)
outputs = tf.transpose(outputs, [1, 0, 2])  # (28, ?, 256): 28 time steps; take the last one, outputs[-1] = (?, 256)
 
pred = tf.contrib.layers.fully_connected(outputs[-1],n_classes,activation_fn = None)
 
learning_rate = 0.001
# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=pred, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
 
# Evaluate model
correct_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
 
training_iters = 100000
 
display_step = 10
 
# Start the session
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    step = 1
    # Keep training until reach max iterations
    while step * batch_size < training_iters:
        batch_x, batch_y = mnist.train.next_batch(batch_size)
        # Reshape data to get 28 seq of 28 elements
        batch_x = batch_x.reshape((batch_size, n_steps, n_input))
        # Run optimization op (backprop)
        sess.run(optimizer, feed_dict={x: batch_x, y: batch_y})
        if step % display_step == 0:
            # Calculate the accuracy on the current batch
            acc = sess.run(accuracy, feed_dict={x: batch_x, y: batch_y})
            # Calculate batch loss
            loss = sess.run(cost, feed_dict={x: batch_x, y: batch_y})
            print ("Iter " + str(step*batch_size) + ", Minibatch Loss= " + \
                  "{:.6f}".format(loss) + ", Training Accuracy= " + \
                  "{:.5f}".format(acc))
        step += 1
    print (" Finished!")
 
    # Calculate the accuracy on a subset of MNIST test images
    test_len = 100
    test_data = mnist.test.images[:test_len].reshape((-1, n_steps, n_input))
    test_label = mnist.test.labels[:test_len]
    print ("Testing Accuracy:", \
        sess.run(accuracy, feed_dict={x: test_data, y: test_label}))
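
For quick reference, the only lines that really differ between the two listings are the ones that build the recurrence (a side-by-side summary of the code above, not new code from the post):

# static version: unstack into a list of n_steps tensors; outputs is a list
x1 = tf.unstack(x, n_steps, 1)
outputs, states = tf.contrib.rnn.static_rnn(mcell, x1, dtype=tf.float32)

# dynamic version: keep the 3-D tensor; outputs is [batch, n_steps, 256]
outputs, states = tf.nn.dynamic_rnn(mcell, x, dtype=tf.float32)
outputs = tf.transpose(outputs, [1, 0, 2])   # so that outputs[-1] is the last time step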

 

 

 

 

 

References:

1. https://www.jianshu.com/p/1b1ea45fab47

2. What's the difference between tensorflow dynamic_rnn and rnn?

 

 

 

 

 

 

------------------------------------------------------------------------

 
