August 12, 2019  "TensorFlow: Google Deep Learning Framework in Practice" study notes 2019-08-13

The Seq2Seq model was proposed in 2014.

By contrast, training in the traditional statistical machine translation approach is divided into many steps: preprocessing, word alignment, phrase alignment, phrase feature extraction, language model training, feature weight learning, and so on.

The basic idea is: one recurrent neural network reads the input sentence and compresses the information of the entire sentence into an encoding of fixed dimension; another recurrent neural network then reads this encoding and decodes it into a sentence in the target language.
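
To make this encoder-decoder idea concrete, below is a minimal sketch in tf.keras. The vocabulary sizes, embedding and hidden dimensions, and the choice of LSTM layers are illustrative assumptions for this sketch, not the exact model from the book.

```python
import tensorflow as tf

# Illustrative sizes only (assumptions for this sketch).
SRC_VOCAB, TGT_VOCAB, EMB_DIM, HIDDEN_DIM = 4000, 4000, 128, 256

# Encoder: a recurrent network (here an LSTM) reads the source sentence
# and compresses it into a fixed-dimension state (state_h, state_c).
encoder_inputs = tf.keras.Input(shape=(None,), dtype="int32")
enc_emb = tf.keras.layers.Embedding(SRC_VOCAB, EMB_DIM)(encoder_inputs)
_, state_h, state_c = tf.keras.layers.LSTM(HIDDEN_DIM, return_state=True)(enc_emb)

# Decoder: another recurrent network starts from the encoder's final state
# and predicts the target-language sentence one token at a time.
decoder_inputs = tf.keras.Input(shape=(None,), dtype="int32")
dec_emb = tf.keras.layers.Embedding(TGT_VOCAB, EMB_DIM)(decoder_inputs)
dec_out, _, _ = tf.keras.layers.LSTM(
    HIDDEN_DIM, return_sequences=True, return_state=True
)(dec_emb, initial_state=[state_h, state_c])
logits = tf.keras.layers.Dense(TGT_VOCAB)(dec_out)

# The whole pipeline is trained end to end with a single loss,
# in contrast to the many separate steps of statistical machine translation.
model = tf.keras.Model([encoder_inputs, decoder_inputs], logits)
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.summary()
```

During training the decoder input is the target sentence shifted by one token (teacher forcing); at inference time the decoder is fed its own previous prediction step by step.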

 
