HMM: the Viterbi algorithm

1. Introduction
The Viterbi algorithm solves the third of the classical HMM problems, the decoding (or prediction) problem: finding the most likely sequence of hidden states.

Given a hidden Markov model (HMM) and a corresponding observation sequence, find the hidden state sequence most likely to have generated that observation sequence.

That is, given the HMM parameters λ and an observation sequence O, compute the hidden state sequence that makes the observations most probable, i.e. the sequence I* = argmax_I P(I | O, λ): for the given observation sequence, we want the most probable corresponding hidden state sequence.

For this problem, "Statistical Learning Methods" gives two solutions: one is the approximation algorithm, and the other is the Viterbi algorithm.

2. The approximation algorithm
Idea: at each time step, compute the single most likely hidden state, and take the resulting sequence of states as the prediction.

Algorithm: given the HMM parameters λ and the observation sequence O, the probability of being in state q_i at time t is

γ_t(i) = P(i_t = q_i | O, λ) = α_t(i) β_t(i) / Σ_{j=1}^{N} α_t(j) β_t(j)

where α_t(i) and β_t(i) are the forward and backward probabilities.

Then the most likely state at each time t is

i_t* = argmax_{1 ≤ i ≤ N} γ_t(i), t = 1, 2, ..., T

This feels very much like a greedy algorithm: at each time step it does one computation and takes the largest value.

The advantage is that the computation is simple; the disadvantage is just as clear: since it ignores the temporal dependence between states, it cannot guarantee that the predicted sequence as a whole is the most likely state sequence, and part of the predicted sequence may in fact be impossible (for example, two adjacent predicted states whose transition probability is 0).
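To make the idea concrete, here is a minimal Python sketch (my own illustration, not code from the book): it computes γ_t(i) with the standard forward-backward recursions and takes the argmax at each time step. The names approx_decode, pi, A, B, obs are assumptions of this sketch.

```python
import numpy as np

def approx_decode(pi, A, B, obs):
    """Approximate decoding: pick the individually most likely state
    at each time step using gamma_t(i) from forward-backward.
    pi: (N,) initial probs; A: (N,N) transitions; B: (N,M) emissions;
    obs: list of observation indices."""
    N, T = len(pi), len(obs)
    alpha = np.zeros((T, N))          # forward probabilities
    beta = np.zeros((T, N))           # backward probabilities
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    beta[T - 1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)  # normalize each time step
    return gamma.argmax(axis=1)       # most likely state per time step
```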

 

3. The Viterbi algorithm

The Viterbi algorithm uses dynamic programming to find the maximum-probability path, where one path corresponds to one sequence of hidden states.

The optimality principle it relies on: if the optimal path passes through node i at time t, then the portion of this path from node i to the terminal node must itself be the optimal path from node i to the terminal node.

Based on this principle, we can start at time t = 1 and recursively compute, for each state, the maximum probability of a partial path ending in that state, moving forward one step at a time until we reach the terminal node of the optimal path at time T; then, starting from that terminal node, we trace back to the starting point, which gives us the optimal path.

The specific procedure (from Li Hang, "Statistical Learning Methods"):

Input: the observation sequence O and the model parameters λ

Output: the optimal hidden state path I* = (i_1*, i_2*, ..., i_T*)

(1) Initialization: compute the probabilities of all N states in the first layer:

δ_1(i) = π_i · b_i(o_1), ψ_1(i) = 0, i = 1, 2, ..., N

(2) Recursion: for t = 2, 3, ..., T,

δ_t(i) = max_{1 ≤ j ≤ N} [δ_{t-1}(j) · a_{ji}] · b_i(o_t), i = 1, 2, ..., N

ψ_t(i) = argmax_{1 ≤ j ≤ N} [δ_{t-1}(j) · a_{ji}], i = 1, 2, ..., N

The first recursive formula records, for each node at time t, the maximum joint probability over all partial paths ending at that node.

The second formula records which node that best partial path arrived from, i.e. which hidden state at time t-1 leads to state i at time t with the greatest probability.

Put plainly, this step is computed just like the forward algorithm; the only difference is that the sum over the previous states is replaced by a max.

(3) Termination:

P* = max_{1 ≤ i ≤ N} δ_T(i)

i_T* = argmax_{1 ≤ i ≤ N} δ_T(i)

(4) Backtrack the optimal path: for t = T-1, T-2, ..., 1,

i_t* = ψ_{t+1}(i_{t+1}*)

The optimal path is then I* = (i_1*, i_2*, ..., i_T*).
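The whole procedure fits in a short Python sketch (again my own illustration, not code from the book; the names viterbi, pi, A, B, obs are assumptions):

```python
import numpy as np

def viterbi(pi, A, B, obs):
    """Viterbi decoding of an HMM.
    pi:  (N,) initial state probabilities
    A:   (N,N) transition matrix, A[j, i] = P(state i at t | state j at t-1)
    B:   (N,M) emission matrix,   B[i, k] = P(observation k | state i)
    obs: list of observation indices o_1..o_T
    Returns (probability of the best path, best path as state indices)."""
    N, T = len(pi), len(obs)
    delta = np.zeros((T, N))            # delta[t, i]: max prob of a partial path ending in i
    psi = np.zeros((T, N), dtype=int)   # psi[t, i]: best predecessor of state i at time t
    delta[0] = pi * B[:, obs[0]]        # (1) initialization
    for t in range(1, T):               # (2) recursion: the forward algorithm's sum becomes max
        trans = delta[t - 1][:, None] * A   # trans[j, i] = delta_{t-1}(j) * a_ji
        psi[t] = trans.argmax(axis=0)
        delta[t] = trans.max(axis=0) * B[:, obs[t]]
    best_prob = delta[T - 1].max()      # (3) termination
    path = [int(delta[T - 1].argmax())]
    for t in range(T - 1, 0, -1):       # (4) backtrack via psi
        path.append(int(psi[t, path[-1]]))
    return best_prob, path[::-1]
```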

4. Viterbi algorithm examples
Example 4.1: weather
Taken from a well-known example on Zhihu: https://www.zhihu.com/question/20962240

[Figure: state diagram of the weather HMM (hidden states Rain/Sun, observations walk/shop/clean)]

This diagram gives us the model parameters of the HMM:

Initial probabilities (Rain, Sun): π = [0.6, 0.4]

Transition probabilities (transitions between the weather types, i.e. the hidden states):

        Rain  Sun
Rain    0.7   0.3
Sun     0.4   0.6
Confusion matrix (the probability of each behavior (observation) under each weather type (hidden state)):

        Walk  Shop  Clean
Rain    0.1   0.4   0.5
Sun     0.6   0.3   0.1
The model parameters are known, as is the behavior over three days: (walk, shop, clean).

Find: the most likely weather for each of the three days.

Answer:

[Note] In the notation δ_t(i) below, the subscript t denotes the day (time step), not the hidden state; the argument i denotes the hidden state on that day.

① First, initialize: for each weather state, take the probability of that state emitting the first day's behavior (walk):

δ_1(Rain) = 0.6 × 0.1 = 0.06

δ_1(Sun) = 0.4 × 0.6 = 0.24

At initialization we do not take a maximum, because on the first day there is no most likely path yet: a path links two nodes, and a single node cannot be called a path.

② Path probabilities from day 1 to day 2 (behavior: shop):

δ_2(Rain) = max(0.06 × 0.7, 0.24 × 0.4) × 0.4 = 0.096 × 0.4 = 0.0384, ψ_2(Rain) = Sun

δ_2(Sun) = max(0.06 × 0.3, 0.24 × 0.6) × 0.3 = 0.144 × 0.3 = 0.0432, ψ_2(Sun) = Sun

③ Path probabilities from day 2 to day 3 (behavior: clean):

δ_3(Rain) = max(0.0384 × 0.7, 0.0432 × 0.4) × 0.5 = 0.02688 × 0.5 = 0.01344, ψ_3(Rain) = Rain

δ_3(Sun) = max(0.0384 × 0.3, 0.0432 × 0.6) × 0.1 = 0.02592 × 0.1 = 0.002592, ψ_3(Sun) = Sun

④ Backtrack.

Find the largest probability on the last day:

P* = max(δ_3(Rain), δ_3(Sun)) = δ_3(Rain) = 0.01344

The maximum probability on day 3 belongs to the Rain state, and by ψ_3(Rain) = Rain in ③, this probability was reached from the Rain state on day 2; so day 2 should be Rain.

The maximum-probability way to reach the Rain state on day 2 is in turn from the Sun state on day 1, as ψ_2(Rain) = Sun in ② shows.

Putting it together: day 1 Sun -> day 2 Rain -> day 3 Rain.

So the most likely weather for the three days is (Sun, Rain, Rain).
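We can check this with the viterbi sketch from section 3 (index conventions are my own: states Rain = 0, Sun = 1; observations walk = 0, shop = 1, clean = 2):

```python
import numpy as np

pi = np.array([0.6, 0.4])          # initial probs, order [Rain, Sun]
A = np.array([[0.7, 0.3],          # transition matrix, rows/cols [Rain, Sun]
              [0.4, 0.6]])
B = np.array([[0.1, 0.4, 0.5],     # emissions, cols [walk, shop, clean]
              [0.6, 0.3, 0.1]])

prob, path = viterbi(pi, A, B, obs=[0, 1, 2])   # (walk, shop, clean)
print(prob)   # 0.01344
print(path)   # [1, 0, 0], i.e. (Sun, Rain, Rain)
```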

Example 4.2: balls and boxes
This is the same example used for the forward algorithm, taken from Li Hang's "Statistical Learning Methods".

Known: the three boxes are the hidden states; the ball color is the observation, with observation sequence (red, white, red).

Transition matrix:

     1    2    3
1    0.5  0.2  0.3
2    0.3  0.5  0.2
3    0.2  0.3  0.5
Confusion matrix:

     red  white
1    0.5  0.5
2    0.4  0.6
3    0.7  0.3

Initial probabilities: π = (0.2, 0.4, 0.4)
Solution process:

① Initialization; the first ball drawn is red:

δ_1(1) = 0.2 × 0.5 = 0.10

δ_1(2) = 0.4 × 0.4 = 0.16

δ_1(3) = 0.4 × 0.7 = 0.28

② The second ball drawn is white; the path probabilities are:

δ_2(1) = max(0.10 × 0.5, 0.16 × 0.3, 0.28 × 0.2) × 0.5 = 0.056 × 0.5 = 0.028, ψ_2(1) = 3

δ_2(2) = max(0.10 × 0.2, 0.16 × 0.5, 0.28 × 0.3) × 0.6 = 0.084 × 0.6 = 0.0504, ψ_2(2) = 3

δ_2(3) = max(0.10 × 0.3, 0.16 × 0.2, 0.28 × 0.5) × 0.3 = 0.14 × 0.3 = 0.042, ψ_2(3) = 3

(Thanks to @journey began for pointing out that the formula here used to be wrong, a typesetting mistake when the picture was uploaded.)

③ The third ball drawn is red; the path probabilities are:

δ_3(1) = max(0.028 × 0.5, 0.0504 × 0.3, 0.042 × 0.2) × 0.5 = 0.01512 × 0.5 = 0.00756, ψ_3(1) = 2

δ_3(2) = max(0.028 × 0.2, 0.0504 × 0.5, 0.042 × 0.3) × 0.4 = 0.0252 × 0.4 = 0.01008, ψ_3(2) = 2

δ_3(3) = max(0.028 × 0.3, 0.0504 × 0.2, 0.042 × 0.5) × 0.7 = 0.021 × 0.7 = 0.0147, ψ_3(3) = 3

④ Backtrack. Looking at the third step, the greatest probability is P* = max_i δ_3(i) = δ_3(3) = 0.0147.

This corresponds to hidden state 3, so the third state is box 3. To see which state at step 2 this path was transferred from, look at ψ_3(3) = 3 in ③: it came from state 3 at the second step.

Likewise, to see which state at step 1 led to state 3 at step 2, look at ψ_2(3) = 3 in ②: it came from state 3 at the first step.

In summary, the most likely hidden state sequence is (3, 3, 3).
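Checking again with the viterbi sketch (my own index conventions: boxes are 0-based internally, observations red = 0, white = 1):

```python
import numpy as np

pi = np.array([0.2, 0.4, 0.4])
A = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])
B = np.array([[0.5, 0.5],          # emissions, cols [red, white]
              [0.4, 0.6],
              [0.7, 0.3]])

prob, path = viterbi(pi, A, B, obs=[0, 1, 0])   # (red, white, red)
print(prob)                    # 0.0147
print([i + 1 for i in path])   # [3, 3, 3]
```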
---------------------
Author: wing ice boat
Source: CSDN
Original: https://blog.csdn.net/zb1165048017/article/details/48578183
Disclaimer: This is the blogger's original article; please include a link to the post when reposting!
