The Hidden Markov Model (HMM) algorithm

Algorithm introduction:

        A system is in some state at every moment. These states are produced by complex underlying mechanisms, they change continuously over time, and the mapping from those mechanisms to the observed states is typically a highly nonlinear, complicated one.

        As an example, consider a car driving along a road from left to right. We describe the car's state with three features: speed, acceleration, and displacement. At time t we record these three features as a vector Vt = (v, a, x). Clearly this is only what we observe; it does not represent the true state of the moving car. We can further assume that the car's motion state at any moment follows a 3-dimensional normal distribution, but the exact expression of this distribution is hard to find, and we can only approximate it. There are many methods for doing so, such as Bayesian formulas, neural networks, GANs, and more.
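To make the observation-versus-state distinction concrete, here is a minimal simulation sketch in Python: the car's true state (v, a, x) evolves under simple kinematics, and what we record at each step is that state corrupted by 3-D Gaussian noise. The time step, acceleration, and noise covariance below are illustrative assumptions, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

dt = 1.0                    # time step (assumed)
a_true = 2.0                # constant true acceleration (assumed)

# Assumed observation noise covariance for (v, a, x).
obs_noise = np.diag([0.5, 0.2, 1.0])

v, x = 0.0, 0.0
for t in range(5):
    # True hidden state evolves under simple kinematics.
    v += a_true * dt
    x += v * dt
    state = np.array([v, a_true, x])
    # Observed vector Vt: the true state corrupted by Gaussian noise.
    Vt = rng.multivariate_normal(state, obs_noise)
    print(f"t={t}: true={state}, observed={Vt.round(2)}")
```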

        A hidden Markov process aims to work out how a system transitions from one state to the next, and its purpose is to find a mathematical expression for this transition process. It assumes that the state at time t depends only on the state at time t-1, and that the state value at time t is obtained from the value at time t-1 through a conditional probability. The difficulty is that at each moment we can only obtain an observed value, which is not the true value of the system state. The "hidden" in "hidden Markov" refers to the fact that the system's true state is unknown throughout this process.
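The Markov assumption and the hidden/observed split can be written down directly for a small discrete HMM. The states, observation symbols, and probabilities below are invented for illustration; the point is only that the hidden state at time t is sampled from the state at t-1 via a transition matrix, while we only ever see the emitted observations.

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy discrete HMM (names and numbers are illustrative, not from the post).
states = ["slow", "fast"]
obs_symbols = ["low_speed", "high_speed"]

A = np.array([[0.7, 0.3],    # transition: P(state_t | state_{t-1})
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],    # emission: P(observation | hidden state)
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])    # initial state distribution

# Sample a trajectory: the hidden state at t depends only on t-1 (Markov),
# and only the noisy observations are visible, never the state itself.
s = rng.choice(2, p=pi)
for t in range(6):
    o = rng.choice(2, p=B[s])
    print(f"t={t}: hidden={states[s]:>4}, observed={obs_symbols[o]}")
    s = rng.choice(2, p=A[s])
```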

 

1. Filtering for HMM applications

        Filtering estimates the present: given the observed values from historical times Tn to T, estimate the hidden state of the system at the current time T.
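The standard way to compute this for a discrete HMM is the forward algorithm. Below is a minimal sketch using the same toy parameters as above (again illustrative, not from the post): at each step the belief is pushed one step through the transition matrix and then corrected by the new observation.

```python
import numpy as np

# Toy HMM parameters (illustrative values, as in the earlier sketch).
A = np.array([[0.7, 0.3], [0.4, 0.6]])   # transition probabilities
B = np.array([[0.9, 0.1], [0.2, 0.8]])   # emission probabilities
pi = np.array([0.5, 0.5])                # initial distribution

def forward_filter(obs):
    """Return P(state_T | observations up to T) via the forward algorithm."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()                  # normalise to a distribution
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]     # predict one step, then correct
        alpha /= alpha.sum()
    return alpha

print(forward_filter([0, 0, 1, 1]))      # belief about the state at time T
```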

2. Smoothing of HMM application

        Smoothing estimates the state of the system at an intermediate time Tk (between Tn and T), given the observed values from historical times Tn to T.
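One common realisation of this is the forward-backward algorithm, sketched below with the same illustrative toy parameters: a forward pass accumulates evidence up to each time k, a backward pass accumulates evidence after k, and their product gives the smoothed belief at k.

```python
import numpy as np

# Toy HMM parameters (illustrative values, as in the earlier sketches).
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
pi = np.array([0.5, 0.5])

def smooth(obs):
    """P(state_k | all observations) for every k, via forward-backward."""
    n, S = len(obs), len(pi)
    alpha = np.zeros((n, S))   # evidence up to and including time k
    beta = np.ones((n, S))     # evidence after time k
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, n):
        alpha[t] = (alpha[t-1] @ A) * B[:, obs[t]]
    for t in range(n - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t+1]] * beta[t+1])
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)

print(smooth([0, 0, 1, 1]))   # row k is the smoothed belief at time k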

3. Decoding of HMM application

        The one-sentence summary of decoding: given the observed values at historical times Tn to T, recover the most likely sequence of hidden states that produced them.
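The textbook algorithm for this is Viterbi decoding, sketched here with the same illustrative toy parameters: dynamic programming in log space keeps, for each state at each time, the best-scoring path into it, then backtracks from the final winner.

```python
import numpy as np

# Toy HMM parameters (illustrative values, as in the earlier sketches).
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
pi = np.array([0.5, 0.5])

def viterbi(obs):
    """Most likely hidden state sequence given the observations."""
    n, S = len(obs), len(pi)
    delta = np.log(pi) + np.log(B[:, obs[0]])   # log-probs avoid underflow
    back = np.zeros((n, S), dtype=int)          # best predecessor per state
    for t in range(1, n):
        scores = delta[:, None] + np.log(A)     # score of each transition
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(delta.argmax())]
    for t in range(n - 1, 0, -1):               # backtrack the best path
        path.append(int(back[t][path[-1]]))
    return path[::-1]

print(viterbi([0, 0, 1, 1]))   # -> [0, 0, 1, 1] for these toy numbers
```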

4. Expansion of Markov chain

        Generally, a Markov chain evolves in only one direction. When the dependence extends in multiple directions instead, the model is called a Markov random field; the dimension rises, so to speak, from a line to a plane.

        Since the transition-probability expression of a Markov process can be very complicated, its parameters are often estimated with Monte Carlo methods; from this comes the MCMC method, i.e., Markov chain Monte Carlo.
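As a taste of the idea, here is a minimal random-walk Metropolis sketch (a basic MCMC method). The target density is an assumed stand-in (a standard normal) for some intractable distribution; the chain of accepted samples is itself a Markov chain whose stationary distribution is the target.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_target(x):
    """Unnormalised log-density to sample from (assumed example: a standard
    normal, standing in for an intractable posterior)."""
    return -0.5 * x**2

# Random-walk Metropolis: propose a local move, accept with probability
# min(1, target(proposal) / target(current)).
x, samples = 0.0, []
for _ in range(10_000):
    proposal = x + rng.normal(0.0, 1.0)     # symmetric proposal step
    if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
        x = proposal                        # accept the move
    samples.append(x)

print(np.mean(samples), np.std(samples))    # roughly 0 and 1
```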

Reference: Brother Gengzhi on Bilibili


Origin: blog.csdn.net/weixin_44992737/article/details/131948510