Natural language processing: implementing a stacked LSTM (two-layer LSTM) in Keras

[Figure: the unrolled structure of a two-layer LSTM recurrent neural network]

Training stacked LSTM layers is computationally expensive, but writing the code to stack them in Keras takes only a few seconds:

import numpy as np
from keras.models import Sequential
from keras.layers import LSTM

num_neurons = 20    # hypothetical layer sizes; tune for your task
num_neurons_2 = 20
# Hypothetical input: 100 samples, each a sequence of 10 time steps with 300 features.
X = np.random.random((100, 10, 300))

model = Sequential()
# To build the model correctly, the first and intermediate layers must be given
# return_sequences=True. This requirement makes sense because the output at each
# time step is needed as the input to the next layer at the corresponding time step.
model.add(LSTM(num_neurons, return_sequences=True, input_shape=X[0].shape))
model.add(LSTM(num_neurons_2, return_sequences=True))
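For completeness, here is a minimal sketch of how such a stacked model might be trained end to end on a per-sequence classification task. The final Dense layer, the loss, and the random data below are illustrative assumptions, not part of the original example; note that the last LSTM layer here drops return_sequences, since the sequence must collapse to a single vector before classification.

import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

# Hypothetical data: 100 sequences, 10 time steps, 300 features; binary labels.
X = np.random.random((100, 10, 300))
y = np.random.randint(2, size=(100, 1)).astype("float32")

model = Sequential()
model.add(LSTM(20, return_sequences=True, input_shape=X[0].shape))
# The last recurrent layer omits return_sequences, so it emits one vector
# per sequence instead of one per time step.
model.add(LSTM(20))
model.add(Dense(1, activation="sigmoid"))

model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
model.summary()
model.fit(X, y, batch_size=32, epochs=2)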

Note: keep in mind that creating a model capable of representing relationships more complex than those actually present in the training data can lead to strange results. Simply piling layers onto a model is intriguing, but it is rarely the best solution for building a useful model.

Source: blog.csdn.net/fgg1234567890/article/details/113533117