Sequence annotation understanding

The differences between the classic sequence-labeling models: Naive Bayes (NB), the Hidden Markov Model (HMM), Logistic Regression (LR), the Maximum-Entropy Markov Model (MEMM), and the Conditional Random Field (CRF).

1. Diagram

(Comparison diagram of the five models from the original post; image not available.)

2. Description

2.1 Differences

The main differences are:

  • They can be divided into generative models, which first model the joint probability distribution P(X, Y) (Naive Bayes, HMM), and discriminative models, which model only the conditional probability distribution P(Y | X) (Logistic Regression, MEMM, CRF).
  • NB and LR score a single label with a sum of (log-)feature terms, while the sequence models MEMM and CRF chain per-step factors across positions (a product). Whether the score is accumulated over the whole sequence or combined step by step corresponds to whether the global observation sequence O(1..n) is considered as a whole.
  • MEMM and CRF differ in how the normalizing denominator of the scoring function is computed. CRF applies one softmax to the score of the whole sequence, normalizing over all possible label paths. MEMM instead decomposes the probability into an item-by-item product, where each factor is normalized locally given the hidden labels already decided up to that point (the earlier hidden sequence is treated as fixed, somewhat like a greedy decision at each step). This local normalization is what gives rise to MEMM's well-known label-bias problem.
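
The local-versus-global normalization contrast in the last point can be sketched with a toy example. The score table `s` below is made-up numbers, not from the original post; the point is only that a MEMM multiplies per-step softmaxes while a CRF applies one softmax over whole-path scores, so the two define different distributions over the same paths.

```python
import itertools
import math

LABELS = [0, 1]
T = 3  # sequence length

# Hypothetical local scores s[t][prev][cur] (emission + transition folded
# together for simplicity). At t == 0 there is a dummy start state, so both
# rows of s[0] are identical and the initial `prev` does not matter.
s = [
    [[1.0, 0.5], [1.0, 0.5]],
    [[0.2, 1.5], [0.8, 0.3]],
    [[1.1, 0.4], [0.6, 0.9]],
]

def memm_prob(path):
    """MEMM: product of per-step, locally normalized probabilities."""
    p, prev = 1.0, 0
    for t, cur in enumerate(path):
        z_local = sum(math.exp(s[t][prev][y]) for y in LABELS)  # per-step denominator
        p *= math.exp(s[t][prev][cur]) / z_local
        prev = cur
    return p

def crf_prob(path):
    """CRF: exp(total path score) divided by the sum over ALL label paths."""
    def score(pth):
        total, prev = 0.0, 0
        for t, cur in enumerate(pth):
            total += s[t][prev][cur]
            prev = cur
        return total
    z_global = sum(math.exp(score(p)) for p in itertools.product(LABELS, repeat=T))
    return math.exp(score(path)) / z_global

all_paths = list(itertools.product(LABELS, repeat=T))
print(sum(memm_prob(p) for p in all_paths))  # ≈ 1.0: a valid distribution
print(sum(crf_prob(p) for p in all_paths))   # ≈ 1.0: also a valid distribution
```

Both constructions yield proper probability distributions over the 2³ label paths, yet they assign different probabilities to individual paths, which is exactly the effect of normalizing step by step versus normalizing once over the whole sequence.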

3. Reference

Origin blog.csdn.net/rensihui/article/details/125853452