Probabilistic graphical models in NLP

HMM

https://zhuanlan.zhihu.com/p/85454896

A handy "1-2-3" mnemonic for learning HMMs:

(1) 1 set of parameters, λ = (π, a, b): the initial state distribution π, the state transition matrix a, and the observation (emission) matrix b;

(2) 2 assumptions: the first-order Markov assumption (the current state depends only on the previous state) and the observation independence assumption (each observation depends only on the current state);

(3) 3 problems: the probability (evaluation) problem, solved with the forward/backward algorithms; the parameter learning problem, solved with MLE or EM (Baum-Welch); and the prediction (decoding) problem, solved with Viterbi (the third is the hardest).
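The first and third problems can be sketched in a few lines. This is a minimal toy example: π, a, b and the observation sequence below are made-up numbers, not from the source.

```python
import numpy as np

# Hypothetical toy HMM with 2 states and 3 observation symbols
pi = np.array([0.6, 0.4])            # initial state distribution (π)
A  = np.array([[0.7, 0.3],
               [0.4, 0.6]])          # state transition matrix (a)
B  = np.array([[0.5, 0.4, 0.1],
               [0.1, 0.3, 0.6]])     # emission matrix (b)

def forward(obs):
    """Probability problem: P(O | λ) via the forward algorithm."""
    alpha = pi * B[:, obs[0]]                  # initialize α_1(i)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]          # recurse: sum over previous states
    return alpha.sum()

def viterbi(obs):
    """Prediction problem: most likely state sequence via Viterbi."""
    delta = pi * B[:, obs[0]]
    backptr = []
    for o in obs[1:]:
        trans = delta[:, None] * A             # δ_t(i) * a_ij for all i, j
        backptr.append(trans.argmax(axis=0))   # remember the best predecessor
        delta = trans.max(axis=0) * B[:, o]
    path = [int(delta.argmax())]
    for bp in reversed(backptr):               # trace the best path backwards
        path.append(int(bp[path[-1]]))
    return path[::-1]

obs = [0, 1, 2]
print(forward(obs))   # likelihood of the observation sequence
print(viterbi(obs))   # most likely hidden state path
```

The learning problem (Baum-Welch) builds on the same forward/backward quantities, which is why the evaluation problem is usually taught first.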

Maximum entropy / EM

Learning a maximum entropy model ultimately reduces to finding the best weight vector w (typically by maximizing the conditional log-likelihood with GIS/IIS or gradient methods). EM is a separate iterative scheme that is often left vague: the E-step computes the expected values of the hidden variables under the current parameters, and the M-step re-estimates the parameters to maximize the likelihood given those expectations.
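The E-step/M-step alternation is easiest to see on the classic two-coin problem: each session of flips comes from one of two biased coins, but we never observe which. The data and starting biases below are made up for illustration.

```python
import math

# Each row: (heads, tosses) for one session with an unknown coin
data = [(5, 10), (9, 10), (8, 10), (4, 10), (7, 10)]
theta = [0.6, 0.5]                          # initial guesses for the two biases

def binom_pmf(k, n, p):
    """P(k heads in n tosses) for a coin with bias p."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

for _ in range(20):
    heads = [0.0, 0.0]
    tosses = [0.0, 0.0]
    # E-step: responsibility of each coin for each session,
    # i.e. expected counts under the current parameters
    for h, n in data:
        like = [binom_pmf(h, n, t) for t in theta]
        z = sum(like)
        for c in range(2):
            r = like[c] / z
            heads[c] += r * h
            tosses[c] += r * n
    # M-step: re-estimate each bias from its expected counts
    theta = [heads[c] / tosses[c] for c in range(2)]

print([round(t, 3) for t in theta])   # biases pull apart as EM iterates
```

Each iteration provably does not decrease the likelihood, which is why EM is the standard tool for HMM parameter learning (Baum-Welch is exactly this pattern with forward/backward supplying the expectations).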

CRF

CRF (conditional random field) is a sequence labeling algorithm: it models the conditional distribution P(y | x) over whole tag sequences directly.
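What makes it "conditional" is that a linear-chain CRF scores an entire tag sequence and normalizes over all possible sequences (a global softmax), rather than multiplying local probabilities as an HMM does. A brute-force sketch on a tiny chain, with made-up weights (real CRFs learn them from features):

```python
import itertools
import math

# Hypothetical scores for a 3-position sentence with 2 tags (0 and 1)
emit = [[2.0, 0.5],    # emit[t][y]: score of tag y at position t
        [0.3, 1.5],
        [1.0, 1.0]]
trans = [[1.0, 0.2],   # trans[y][y2]: score of the tag bigram y -> y2
         [0.1, 1.2]]

def score(tags):
    """Unnormalized score of a whole tag sequence."""
    s = emit[0][tags[0]]
    for t in range(1, len(tags)):
        s += trans[tags[t - 1]][tags[t]] + emit[t][tags[t]]
    return s

# Partition function Z: sum over ALL 2^3 tag sequences (brute force;
# real implementations use forward-style dynamic programming)
seqs = list(itertools.product([0, 1], repeat=3))
Z = sum(math.exp(score(y)) for y in seqs)

best = max(seqs, key=score)                 # decoding = argmax, as in Viterbi
p_best = math.exp(score(best)) / Z          # its conditional probability P(y|x)
print(best, round(p_best, 3))
```

Decoding still uses Viterbi (argmax does not need Z), but training requires Z, which is where CRFs pay their extra cost compared to HMMs.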


Seq2seq



Origin blog.csdn.net/weixin_45316122/article/details/107785667