Deep learning: Seq2Seq networks

Knowledge Points

"" " 
Machine Translation:
    History:
        1. word-by-word (verbatim) translation
        2. statistical machine translation
        3. encoder-decoder recurrent networks
Translation pipeline: Input -> Encoder -> Vector -> Decoder -> Output
                               (RNN)                (RNN)
    (a minimal sketch follows this note block)
Seq2Seq applications: text summarization, chatbots, machine translation
Seq2Seq problems:
    1. compressing the whole input into one vector loses information
    2. sentence-length limit (roughly 10-20 words works best)
Solutions:
    Attention mechanism: like viewing a high-resolution picture by focusing on one
    particular region while sensing the surrounding image at low resolution.
    Concretely: a set of weights over the encoder hidden states
    (a sketch follows this note block).

    Bucket mechanism: normally all sentences are padded to a common length;
    buckets group sentences by length so each bucket pads to its own fixed size
    (a padding sketch follows this note block).

A Seq2Seq model mainly comprises three parts:
    1. the encoder
    2. the hidden-state vector (connecting the encoder and decoder)
    3. the decoder
"" "

Hey! Or just go read other people's blogs to understand it.

 

Source: www.cnblogs.com/ywjfx/p/11131256.html