The Seq2Seq (sequence-to-sequence) model was proposed in 2014.
By contrast, training a traditional statistical machine translation system is divided into many steps: pre-processing, word alignment, phrase alignment, phrase feature extraction, language model training, learning of feature weights, and so on.
The basic idea of Seq2Seq is: one recurrent neural network reads the input sentence and compresses the information of the entire sentence into a fixed-dimensional encoding; another recurrent neural network then reads this encoding and decodes it into a sentence in the target language.
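Below is a minimal sketch of this encoder-decoder idea in PyTorch, assuming a GRU for both networks; the class names, layer sizes, and the GRU choice are illustrative assumptions, not the exact architecture of the original paper. The encoder's final hidden state serves as the fixed-dimensional sentence encoding, which initializes the decoder.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size, emb_dim, hidden_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)

    def forward(self, src):
        # src: (batch, src_len) source-language token ids.
        # The final hidden state compresses the whole sentence
        # into a fixed-dimensional vector.
        _, hidden = self.rnn(self.embed(src))
        return hidden  # (1, batch, hidden_dim)

class Decoder(nn.Module):
    def __init__(self, vocab_size, emb_dim, hidden_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tgt, hidden):
        # tgt: (batch, tgt_len) previous target tokens (teacher forcing);
        # hidden: the encoder's sentence encoding.
        output, hidden = self.rnn(self.embed(tgt), hidden)
        return self.out(output), hidden  # per-step vocabulary logits

# Toy usage: encode a source batch, then decode conditioned on the encoding.
enc = Encoder(vocab_size=1000, emb_dim=32, hidden_dim=64)
dec = Decoder(vocab_size=1000, emb_dim=32, hidden_dim=64)
src = torch.randint(0, 1000, (2, 7))  # 2 source sentences of length 7
tgt = torch.randint(0, 1000, (2, 5))  # 2 target prefixes of length 5
logits, _ = dec(tgt, enc(src))
print(logits.shape)  # torch.Size([2, 5, 1000])
```

Note how, unlike the multi-step statistical pipeline above, the whole translation model here is a single differentiable network trained end to end.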