Natural Language Processing - Extending Recurrent Neural Networks in Keras: the Gated Recurrent Unit (GRU)

LSTM is one extension of the basic recurrent neural network, and there are various others. All of these extensions amount to fine-tuning the number of gates or the operations inside the cell. For example, the gated recurrent unit (GRU) merges the forget gate and the candidate-selection branch of the candidate gate into a single update gate. This reduces the number of parameters that need to be learned, and GRUs have been shown to perform comparably to a standard LSTM at a much lower computational cost.
Structure:
GRU: there are two inputs (the current input and the previous hidden state) and two outputs (the layer output and the new hidden state).
(figure: GRU cell structure)
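
To make the role of the single update gate concrete, here is a minimal NumPy sketch of one GRU time step (my own illustration, not from the original post; weight matrices are assumed given and bias terms are omitted). The update gate z plays the part that the forget and input gates play in an LSTM, blending the old hidden state with a new candidate state:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU time step (Cho et al. 2014 formulation, biases omitted)."""
    z = sigmoid(Wz @ x_t + Uz @ h_prev)             # update gate: replaces forget + input gates
    r = sigmoid(Wr @ x_t + Ur @ h_prev)             # reset gate
    h_cand = np.tanh(Wh @ x_t + Uh @ (r * h_prev))  # candidate hidden state
    h_t = (1 - z) * h_prev + z * h_cand             # blend old state and candidate
    return h_t                                      # h_t is both the output and the new state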

Keras provides a GRU layer. We can use it just like the LSTM layer, as in the following code:

from keras.models import Sequential
from keras.layers import GRU

model = Sequential()
# num_neurons: number of GRU units; X[0].shape is (sequence length, feature size) of one training sample
model.add(GRU(num_neurons, return_sequences=True, input_shape=X[0].shape))
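
The snippet above assumes num_neurons and the training data X are already defined. For reference, here is a self-contained sketch with illustrative values (the layer sizes, sequence length, random data, and second GRU/Dense layers are my assumptions, not from the original post) showing the GRU layer inside a complete model:

import numpy as np
from keras.models import Sequential
from keras.layers import GRU, Dense

# Illustrative dummy data: 100 samples, 40 time steps, 300 features per step
X = np.random.rand(100, 40, 300)
y = np.random.randint(0, 2, size=(100, 1))
num_neurons = 50  # number of GRU units (hidden state size)

model = Sequential()
# return_sequences=True emits the hidden state at every time step
model.add(GRU(num_neurons, return_sequences=True, input_shape=X[0].shape))
# A second GRU returns only the final hidden state, summarizing the sequence
model.add(GRU(num_neurons))
model.add(Dense(1, activation='sigmoid'))

model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
model.summary()
model.fit(X, y, batch_size=32, epochs=2)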

Origin blog.csdn.net/fgg1234567890/article/details/113533057