Natural language processing: creating a bidirectional recurrent neural network

We read a sentence in one direction, but when new information arrives, the brain can quickly jump back to earlier parts of the text; humans can process information even when it is not presented in the ideal order. It would be useful if a model could likewise move back and forth across its input. This is where the bidirectional recurrent neural network comes in.

The basic idea: place two RNNs side by side. One receives the input as an ordinary one-way RNN would; the other receives the same input in reverse order. At each time step, the outputs of the two networks are concatenated, pairing each output with the other network's output for the same input token. In particular, the forward network's output for the last token of the input is concatenated with the output the reverse network produced for that same token at its first time step. A hand-built sketch of this wiring follows below.
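To make the wiring concrete, here is a minimal hand-built sketch of the same idea using the Keras functional API. The layer arrangement and the dimensions (maxlen, embedding_dims, num_neurons) are illustrative assumptions chosen to match the example at the end of this post; the Bidirectional wrapper shown there does all of this for you.

from keras import backend as K
from keras.models import Model
from keras.layers import Input, Lambda, SimpleRNN, concatenate

maxlen = 100          # time steps per sequence (assumed)
embedding_dims = 300  # length of each token vector (assumed)
num_neurons = 10      # units in each directional RNN (assumed)

inputs = Input(shape=(maxlen, embedding_dims))

# Forward network: reads the sequence left to right.
forward = SimpleRNN(num_neurons, return_sequences=True)(inputs)

# Backward network: reverse the time axis, run a second RNN, then reverse
# its output so each time step lines up with the same input token again.
reversed_inputs = Lambda(lambda x: K.reverse(x, axes=1))(inputs)
backward = SimpleRNN(num_neurons, return_sequences=True)(reversed_inputs)
backward = Lambda(lambda x: K.reverse(x, axes=1))(backward)

# Concatenate the two output sequences at every time step.
outputs = concatenate([forward, backward])  # (maxlen, 2 * num_neurons)

model = Model(inputs=inputs, outputs=outputs)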

Benefits:
A bidirectional RNN can do more than predict and classify text: it can model the language itself and how it is used. With it, we can generate brand-new sentences, not just imitate text the model has seen before.

Code:

from keras.models import Sequential
from keras.layers import SimpleRNN, Bidirectional

num_neurons = 10       # units in each directional RNN
maxlen = 100           # time steps (tokens) per sequence
embedding_dims = 300   # length of each token vector

model = Sequential()
# Bidirectional wraps the SimpleRNN in a second copy that reads the input
# in reverse, and concatenates the two outputs at every time step.
model.add(Bidirectional(
    SimpleRNN(num_neurons, return_sequences=True),
    input_shape=(maxlen, embedding_dims)))
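As a quick sanity check (an illustrative snippet, not part of the original example), note that the wrapper doubles the feature dimension of each output step, since the forward and backward outputs are concatenated:

import numpy as np

model.summary()  # each time step outputs 2 * num_neurons = 20 features

dummy_batch = np.random.rand(2, maxlen, embedding_dims).astype('float32')
print(model.predict(dummy_batch).shape)  # (2, 100, 20)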

