Dive into Deep Learning: Task 4

Machine Translation and seq2seq

Since both the input and output of a machine translation task are variable-length sequences, we can use the encoder-decoder (seq2seq) model: an encoder compresses the source sentence into a state, and a decoder generates the target sentence from that state.
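The original post shows only a diagram here, so the following is a minimal sketch of the encoder-decoder idea, assuming PyTorch; the class name `Seq2Seq` and the hyperparameters (`embed_size`, `hidden_size`, the GRU choice) are illustrative assumptions, not from the original.

```python
import torch
from torch import nn

class Seq2Seq(nn.Module):
    """Minimal encoder-decoder: a GRU encoder compresses the source
    sequence into a hidden state that initializes the GRU decoder."""
    def __init__(self, src_vocab, tgt_vocab, embed_size=32, hidden_size=64):
        super().__init__()
        self.src_embed = nn.Embedding(src_vocab, embed_size)
        self.tgt_embed = nn.Embedding(tgt_vocab, embed_size)
        self.encoder = nn.GRU(embed_size, hidden_size, batch_first=True)
        self.decoder = nn.GRU(embed_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, tgt_vocab)

    def forward(self, src, tgt):
        # Encode the variable-length source sequence; keep only the final state.
        _, state = self.encoder(self.src_embed(src))
        # Decode the target sequence conditioned on that state (teacher forcing).
        dec_out, _ = self.decoder(self.tgt_embed(tgt), state)
        return self.out(dec_out)  # (batch, tgt_len, tgt_vocab) logits

# Toy usage: batch of 2, source length 5, target length 4.
model = Seq2Seq(src_vocab=100, tgt_vocab=120)
src = torch.randint(0, 100, (2, 5))
tgt = torch.randint(0, 120, (2, 4))
print(model(src, tgt).shape)  # torch.Size([2, 4, 120])
```

Because the source is fully summarized into the encoder's final state before decoding begins, sequences of different lengths on either side pose no problem.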


Attention Mechanisms

Attention mimics human attention: it lets the model focus on the parts of the input that are most relevant at each step, rather than weighting all positions equally. In seq2seq translation, the decoder computes attention weights over the encoder's hidden states at every decoding step.
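A minimal sketch of scaled dot-product attention, again assuming PyTorch; the function name `dot_product_attention` and the toy tensor shapes are illustrative assumptions rather than anything specified in the original post.

```python
import math
import torch

def dot_product_attention(query, keys, values):
    """Scaled dot-product attention: scores measure how relevant each
    key is to the query, softmax turns scores into weights, and the
    output is the weighted sum of the values."""
    d = query.shape[-1]
    scores = torch.matmul(query, keys.transpose(-2, -1)) / math.sqrt(d)
    weights = torch.softmax(scores, dim=-1)
    return torch.matmul(weights, values), weights

# Toy usage: one decoder query attending over 5 encoder hidden states.
q = torch.randn(1, 1, 64)   # (batch, 1, d)
k = torch.randn(1, 5, 64)   # (batch, steps, d)
v = torch.randn(1, 5, 64)   # (batch, steps, d)
context, w = dot_product_attention(q, k, v)
print(context.shape, w.shape)  # torch.Size([1, 1, 64]) torch.Size([1, 1, 5])
```

The returned weights sum to 1 across the encoder steps, which is exactly the "focus on local data" behavior described above: steps with higher weights contribute more to the context vector the decoder consumes.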


Source: blog.csdn.net/u012302260/article/details/104398275