Markov chain, hidden Markov model, Bayesian network, factor graph


A Bayesian network is the basis of many probability models, and it is also a mathematical tool that must be mastered for SLAM research.

1. Markov chain and hidden Markov model

1.1 Concept

Let's first go over the concepts related to Markov models.

  1. Markov property: in a random process, the conditional probability distribution of future states depends only on the current state, not on the states that came before it.
  2. Markov chain: also known as a discrete-time Markov chain (DTMC). The probability distribution of the next state is determined only by the current state; earlier states in the time series are irrelevant, which is exactly the Markov property. At each step of a Markov chain, the system can move from one state to another according to the probability distribution, or stay in its current state. A change of state is called a transition, and the probabilities associated with the different state changes are called transition probabilities (a small simulation sketch follows this list).
  3. Hidden Markov model (HMM): a statistical model that describes a Markov process with hidden, unknown parameters. The difficulty is to determine the hidden parameters of the process from the observable parameters, and then use those parameters for further analysis.
  • In an ordinary Markov model, the state is directly visible to the observer, and the state transition probabilities are the only parameters.
  • In a hidden Markov model, the state is not directly visible, but some variables affected by the state are visible. Each state has a probability distribution over the possible output symbols, so the sequence of output symbols reveals some information about the state sequence.
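
To make transitions concrete, here is a minimal sketch of sampling a Markov chain. The two-state weather chain and its transition probabilities are made-up numbers for illustration, not something from the text above.

```python
import random

# Hypothetical two-state Markov chain; each row gives
# P(next state | current state) and sums to 1.
states = ["sunny", "rainy"]
transition = {
    "sunny": [0.8, 0.2],  # sunny -> sunny / rainy
    "rainy": [0.4, 0.6],  # rainy -> sunny / rainy
}

def simulate(start, steps):
    """Sample a state sequence: each next state depends only on the current one."""
    chain = [start]
    for _ in range(steps):
        chain.append(random.choices(states, weights=transition[chain[-1]])[0])
    return chain

print(simulate("sunny", 10))
```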

1.2 An example of a hidden Markov model

Suppose you have three dice: D4, D6, and D8, with 4, 6, and 8 faces respectively (D6 is the ordinary, everyday die). Each face of a die is equally likely, so the per-face probabilities are 1/4, 1/6, and 1/8.
Now suppose you repeatedly pick one of the dice at random, roll it, and record the number, obtaining a string of numbers such as: 1 6 3 5 2 7 3 5 2 4. This string of numbers is called the visible state chain. In addition, there is also a chain of hidden states, namely the sequence of dice that were rolled, for example: D6 D8 D8 D6 D4 D8 D6 D6 D4 D8. The process looks like this:
[Figure: the visible chain of rolled numbers alongside the hidden chain of dice]
The Markov chain referred to in an HMM is actually the chain of hidden states, because there are transition probabilities between the hidden states (the dice). In our example, the state following D6 is D4, D6, or D8, each with probability 1/3, and the same 1/3 transition probabilities apply after D4 and after D8. This setting is chosen so the example is easy to explain at the start, but in fact we can set the transition probabilities however we like. For example, we can specify that D4 can never follow D6, that D6 follows D6 with probability 0.9, and D8 with probability 0.1. That would be a new HMM.
[Figure: the dice HMM with the modified transition probabilities]
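
The generative process just described is easy to simulate. Below is a minimal sketch, assuming the uniform 1/3 transition probabilities of the original example; it produces a hidden chain of dice and the corresponding visible chain of numbers.

```python
import random

# The three dice and their face counts, as in the example above.
dice = {"D4": 4, "D6": 6, "D8": 8}
names = list(dice)

def roll_sequence(length):
    """Generate a hidden chain (dice) and a visible chain (numbers),
    using the uniform 1/3 transition probabilities of the example."""
    hidden, visible = [], []
    die = random.choice(names)  # initial die, chosen uniformly
    for _ in range(length):
        hidden.append(die)
        visible.append(random.randint(1, dice[die]))  # roll the current die
        die = random.choice(names)  # transition: probability 1/3 to each die
    return hidden, visible

hidden, visible = roll_sequence(10)
print("hidden: ", hidden)
print("visible:", visible)
```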
Applications of hidden Markov models:

  1. Filtering (prediction): given the model parameters and an observed output sequence, find the probability distribution over the hidden state at the last moment.
  2. Smoothing: given the model parameters and an observed output sequence, find the probability distribution over the hidden state at an intermediate moment; usually solved with the forward-backward algorithm.
  3. Decoding (most likely explanation): given the model parameters, find the most likely hidden state sequence that could have produced a specific output sequence; usually solved with the Viterbi algorithm (a sketch follows this list).
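
Here is a minimal Viterbi sketch for the dice example, assuming uniform 1/3 transitions, a uniform initial distribution, and per-die uniform emission probabilities; these numbers are illustrative assumptions, not a fitted model.

```python
import numpy as np

# States are the three dice; observations are the rolled numbers 1..8.
states, sides = ["D4", "D6", "D8"], [4, 6, 8]
A = np.full((3, 3), 1.0 / 3.0)  # uniform transition matrix (assumption)
pi = np.full(3, 1.0 / 3.0)      # uniform initial distribution (assumption)

def emission(i, obs):
    """P(obs | die i): uniform over that die's faces, 0 beyond them."""
    return 1.0 / sides[i] if 1 <= obs <= sides[i] else 0.0

def viterbi(observations):
    """Most likely hidden dice sequence for the observed rolls."""
    T, N = len(observations), len(states)
    delta = np.zeros((T, N))           # best path probability per state
    psi = np.zeros((T, N), dtype=int)  # back-pointers
    for i in range(N):
        delta[0, i] = pi[i] * emission(i, observations[0])
    for t in range(1, T):
        for j in range(N):
            probs = delta[t - 1] * A[:, j]
            psi[t, j] = int(np.argmax(probs))
            delta[t, j] = probs[psi[t, j]] * emission(j, observations[t])
    path = [int(np.argmax(delta[-1]))]  # backtrack from the best final state
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return [states[i] for i in reversed(path)]

print(viterbi([1, 6, 3, 5, 2, 7, 3, 5, 2, 4]))
```

With uniform transitions the decoded die at each step is simply the one giving the observed number the highest emission probability; non-uniform transitions make the backtracking genuinely necessary.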

2. Bayesian network

A Bayesian network, also known as a belief network or a directed acyclic graphical model, is a probabilistic graphical model first proposed by Judea Pearl in 1985. It is a model for reasoning under uncertainty that mimics the causal relationships of human inference, and its network topology is a directed acyclic graph (DAG).
Nodes: represent attributes (random variables). They can be observable variables, hidden variables, unknown parameters, etc.
Arcs (directed edges between nodes): represent probabilistic dependence between attributes, e.g. P(B|A).

An arc pointing from attribute A to attribute B indicates that the value of A can influence the value of B. Because the graph is a directed acyclic graph, there is no directed cycle between A and B.
A: arc tail, cause, parent
B: arc head, effect, child

1. Head-to-head
[Figure: head-to-head structure, a → c ← b]
As can be seen from the figure above, P(a,b,c) = P(a)P(b)P(c|a,b) holds. When c is unknown, summing over c gives P(a,b) = P(a)P(b), so a and b are independent; once c is observed, in general a and b are no longer independent.

2. Tail-to-tail
[Figure: tail-to-tail structure, a ← c → b]
When c is unknown: P(a,b,c) = P(c)P(a|c)P(b|c), from which we cannot deduce P(a,b) = P(a)P(b), so a and b are not independent.
When c is known: P(a,b|c) = P(a,b,c)/P(c) = P(a|c)P(b|c), so a and b are conditionally independent given c.

3. Head-to-tail
[Figure: head-to-tail structure, a → c → b]

When c is unknown: P(a,b,c) = P(a)P(c|a)P(b|c), from which we cannot deduce P(a,b) = P(a)P(b), so a and b are not independent.
When c is known: P(a,b|c) = P(a,b,c)/P(c) = P(a|c)P(b|c), so a and b are conditionally independent given c.

Head-to-tail structures chained together form a chain network, which is exactly a Markov chain.
[Figure: a chain of head-to-tail structures, x_1 → x_2 → … → x_n]
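
The (in)dependence claims above can be checked numerically by enumerating a small joint distribution. Below is a minimal sketch for the head-to-head case with binary variables; the CPT numbers are made up for illustration. Marginalising c out leaves a and b independent, while conditioning on c makes them dependent.

```python
import itertools

# Hypothetical CPTs for the head-to-head structure a -> c <- b.
P_a = {0: 0.6, 1: 0.4}
P_b = {0: 0.7, 1: 0.3}
P_c1_given_ab = {(0, 0): 0.1, (0, 1): 0.5, (1, 0): 0.4, (1, 1): 0.9}

def joint(a, b, c):
    """P(a,b,c) = P(a) P(b) P(c|a,b) for the head-to-head graph."""
    p1 = P_c1_given_ab[(a, b)]
    return P_a[a] * P_b[b] * (p1 if c == 1 else 1.0 - p1)

# c unknown: marginalise c out; P(a,b) factorises, so a and b are independent.
for a, b in itertools.product([0, 1], repeat=2):
    p_ab = sum(joint(a, b, c) for c in [0, 1])
    assert abs(p_ab - P_a[a] * P_b[b]) < 1e-12

# c known (say c = 1): P(a,b|c) no longer factorises, so a and b are dependent.
p_c1 = sum(joint(a, b, 1) for a in [0, 1] for b in [0, 1])
p_a1 = sum(joint(1, b, 1) for b in [0, 1]) / p_c1
p_b1 = sum(joint(a, 1, 1) for a in [0, 1]) / p_c1
print(joint(1, 1, 1) / p_c1, "vs", p_a1 * p_b1)  # differ: dependent given c
```

The same enumeration applied to the tail-to-tail and head-to-tail factorisations verifies the opposite pattern: dependence when c is unknown, conditional independence when c is known.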

3. Factor graph

A factor graph is a probabilistic graph obtained by factorizing a function. It generally contains two kinds of nodes: variable nodes and function (factor) nodes. Through factorization, a global function can be decomposed into a product of several local functions, and the factor graph expresses the relationship between these local functions and their variables. For example:
g(x_1, x_2, x_3, x_4, x_5) = f_A(x_1) f_B(x_2) f_C(x_1, x_2, x_3) f_D(x_3, x_4) f_E(x_3, x_5)
where f_A, f_B, f_C, f_D, f_E are the local functions; each one expresses a direct relationship among its variables, which can be a conditional probability or some other relationship. The corresponding factor graph is:
[Figure: the factor graph of g, with variable nodes x_1…x_5 and factor nodes f_A…f_E]
or:
[Figure: an equivalent drawing of the same factor graph]
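
A factor graph is straightforward to represent in code. Here is a minimal sketch that stores each factor together with the indices of the variables it touches, and evaluates g as the product of the local functions; the concrete numeric functions are arbitrary placeholders for illustration.

```python
# Each factor: (indices of the variables it connects to, local function).
factors = {
    "f_A": ((0,),      lambda x1: 0.5 + 0.1 * x1),
    "f_B": ((1,),      lambda x2: 0.3 + 0.2 * x2),
    "f_C": ((0, 1, 2), lambda x1, x2, x3: 1.0 if x3 == (x1 and x2) else 0.5),
    "f_D": ((2, 3),    lambda x3, x4: 0.9 if x3 == x4 else 0.1),
    "f_E": ((2, 4),    lambda x3, x5: 0.8 if x3 == x5 else 0.2),
}

def g(x):
    """Evaluate g(x1..x5) as the product of the local factors."""
    result = 1.0
    for variables, f in factors.values():
        result *= f(*(x[i] for i in variables))
    return result

print(g([1, 0, 1, 1, 0]))
```

The bipartite structure (variable nodes on one side, factor nodes on the other) is exactly what the figures above depict.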
References:
https://www.cnblogs.com/skyme/p/4651331.html
https://blog.csdn.net/v_july_v/article/details/40984699
