MATLAB simulation of a driver driving intention recognition algorithm based on the hidden Markov model (HMM)

Table of contents

1. Algorithm simulation effect

2. Overview of the theoretical knowledge involved in the algorithm

3. MATLAB core program

4. Complete algorithm code file


1. Algorithm simulation effect

The MATLAB 2022a simulation results are as follows:

2. Overview of the theoretical knowledge involved in the algorithm

       With the development of intelligent transportation systems, the recognition of a driver's driving intention has attracted increasing attention. Accurately identifying the driver's intention is of great significance for improving road safety and for realizing automated driving technology. This paper proposes a driver driving intention recognition method based on the Hidden Markov Model (HMM): by modeling and analyzing the driver's behavior data, real-time recognition of the driver's driving intention is achieved. The HMM is a statistical model well suited to data with a time-series structure, and it has produced remarkable results in fields such as speech recognition and handwriting recognition.

          According to the driving process, there are mainly five driving states on the road: rapid acceleration, acceleration, constant-speed maintenance, deceleration, and rapid deceleration, abbreviated HD, D, N, P, and HP respectively. Between time nodes, these five driving states reflect the driver's current driving intention. Each driving state has a corresponding driving action, reflected mainly in the accelerator pedal opening, the brake pedal opening, and their rates of change. Denote the five driving intentions on the road by S_n (n = 1, 2, 3, 4, 5); across consecutive time nodes the driving intention state either transfers probabilistically or remains unchanged. Denote the types of driving observation by O_m (m = 1, 2, 3); the mapping relationship between S_n and O_m is represented by a probability value. The association between driving intentions and observed values in the node network is shown in Figure 1, where a denotes the transition probability between intentions and b the probability of an observation given the driver's driving intention.
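          As a concrete illustration of this node-network structure, the relationship can be written as a 5x5 intention transition matrix A and a 5x3 intention-to-observation matrix B. The sketch below is illustrative only: the probability values are made-up placeholders, not values from the paper, and it uses discrete observation types, whereas the program in Section 3 models the observations with Gaussian mixtures.

% Illustrative only: placeholder transition/observation probabilities for
% the five intentions HD, D, N, P, HP and the three observation types.
states = {'HD','D','N','P','HP'};
A = [0.6 0.3 0.1 0.0 0.0;    % A(i,j): P(next intention = j | current intention = i)
     0.2 0.5 0.2 0.1 0.0;
     0.0 0.2 0.6 0.2 0.0;
     0.0 0.1 0.2 0.5 0.2;
     0.0 0.0 0.1 0.3 0.6];
B = [0.7 0.2 0.1;            % B(i,m): P(observation type m | intention i)
     0.5 0.4 0.1;
     0.2 0.6 0.2;
     0.1 0.4 0.5;
     0.1 0.2 0.7];
assert(all(abs(sum(A,2) - 1) < 1e-12) && all(abs(sum(B,2) - 1) < 1e-12));   % rows are probability distributions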

         The hidden Markov model (HMM) is a type of Markov chain whose states cannot be observed directly; they can only be inferred through a sequence of observation vectors. Each observation vector is generated from one of the states according to a probability density distribution, so the observation sequence is produced by an underlying state sequence with corresponding emission distributions. Hidden Markov models are statistical models used to describe a Markov process with hidden, unknown parameters; the difficulty is to determine the hidden parameters of the process from the observable data, after which these parameters can be used for further analysis such as pattern recognition. The HMM is therefore a doubly stochastic process: a hidden Markov chain with a finite number of states, together with an explicit set of random functions that generate the observations. Since the 1990s, the HMM has also been applied in fields such as pattern recognition and fault diagnosis.

       A hidden Markov model (HMM) is a statistical model used to describe an observable data sequence produced by a hidden Markov process. The HMM consists of two stochastic processes: a hidden Markov process and an observation process. The hidden Markov process is a discrete-time Markov chain whose transitions between states satisfy the Markov property; the observation process generates the observed data according to a conditional probability distribution given the hidden state.
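       To make the two processes concrete, the following minimal MATLAB sketch simulates a hidden state chain and the observations it emits. It is illustrative only: the 3-state transition matrix and the one-dimensional Gaussian emission parameters are invented for the example and are not taken from the paper.

% Minimal illustrative sketch (not part of the original program): simulate
% the two stochastic processes of an HMM with Q = 3 hidden states and a
% 1-D Gaussian emission per state. All numbers below are made-up examples.
Q        = 3;                         % number of hidden states
T        = 50;                        % length of the simulated sequence
prior    = [1 0 0];                   % start in state 1
A        = [0.8 0.2 0.0;              % transition matrix (rows sum to 1)
            0.1 0.8 0.1;
            0.0 0.2 0.8];
mu       = [-2 0 2];                  % emission mean of each state
sigma    = [0.5 0.5 0.5];             % emission std of each state

state    = zeros(1,T);
obs      = zeros(1,T);
state(1) = find(rand < cumsum(prior), 1);              % sample initial hidden state
obs(1)   = mu(state(1)) + sigma(state(1))*randn;       % observation process
for t = 2:T
    state(t) = find(rand < cumsum(A(state(t-1),:)), 1);   % hidden Markov transition
    obs(t)   = mu(state(t)) + sigma(state(t))*randn;       % emit observation for current state
end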
       Driving intention recognition: given a new observation sequence, the Viterbi algorithm can be used to compute the most likely hidden state sequence and thereby identify the driving intention. Specifically, one HMM is trained per intention class; the likelihood of the observation sequence is evaluated under each trained model, and the intention whose model gives the highest likelihood is selected as the recognized driving intention.
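       As a sketch of how this model-selection step might be coded with the five models trained in Section 3 (an assumption about usage, not code from the original post): obs_seq denotes a new observation sequence in the same format as the training data, and mhmm_logprob is the likelihood routine from the same HMM toolbox that provides mhmm_em.

% Sketch of the recognition step (assumed usage of the trained models;
% obs_seq is supplied by the application).
priors  = {prior1,    prior2,    prior3,    prior4,    prior5};
transs  = {transmat1, transmat2, transmat3, transmat4, transmat5};
mus     = {mu1, mu2, mu3, mu4, mu5};
Sigmas  = {Sigma1, Sigma2, Sigma3, Sigma4, Sigma5};
mixmats = {mixmat1, mixmat2, mixmat3, mixmat4, mixmat5};
labels  = {'HD','D','N','P','HP'};

loglik = zeros(1,5);
for k = 1:5
    % log-likelihood of the observation sequence under the k-th intention model
    loglik(k) = mhmm_logprob(obs_seq, priors{k}, transs{k}, mus{k}, Sigmas{k}, mixmats{k});
end
[~, kbest] = max(loglik);            % model that best explains the observations
intention  = labels{kbest};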
      Driving intention prediction: to achieve real-time prediction, a sliding-window method is adopted in this paper. New observation data are appended to the window while the oldest observations are removed, so that the driver's driving intention is predicted in real time on the most recent data.
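      A minimal sketch of such a sliding window follows. It is illustrative only: the window length W, the placeholder data stream, and the classify_window call are assumptions rather than the original code.

% Illustrative sliding-window loop (not the original code). "stream" stands
% for incoming observation vectors (O x N, one column per time step), and
% classify_window stands for the likelihood-comparison step sketched above.
W      = 20;                            % window length in time steps (assumed value)
O      = 1;                             % observation dimension, as in func_HMM_Train
stream = randn(O, 200);                 % placeholder for real-time pedal data
window = zeros(O, 0);                   % empty observation buffer
for t = 1:size(stream, 2)
    window = [window, stream(:, t)];    % append the newest observation
    if size(window, 2) > W
        window(:, 1) = [];              % drop the oldest observation
    end
    if size(window, 2) == W
        % intention = classify_window(window);   % hypothetical recognition call
    end
end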

3. MATLAB core program

function [LL1,prior1,transmat1,mu1,Sigma1,mixmat1, ...
          LL2,prior2,transmat2,mu2,Sigma2,mixmat2, ...
          LL3,prior3,transmat3,mu3,Sigma3,mixmat3, ...
          LL4,prior4,transmat4,mu4,Sigma4,mixmat4, ...
          LL5,prior5,transmat5,mu5,Sigma5,mixmat5] = func_HMM_Train(Dat1,Dat2,Dat3,Dat4,Dat5,Dat1s,Dat2s,Dat3s,Dat4s,Dat5s)
% Train one Gaussian-mixture HMM per driving intention (HD, D, N, P, HP)
% with mhmm_em from the HMM toolbox. Each model's prior, transition matrix,
% mixture means/covariances and mixture weights are initialised randomly
% and then refined by EM on the data set for that intention.

M         = 2;                            % Gaussian mixture components per state
Q         = 3;                            % number of hidden states


O         = 1;                            % dimension of each observation
T         = 3;                            % length of each observation sequence

% ---- intention 1 ----
nex       = length(Dat1);                 % number of training sequences
prior0    = normalise(rand(Q,1));         % random initial state distribution
transmat0 = mk_stochastic(rand(Q,Q));     % random row-stochastic transition matrix
Sigma0    = repmat(eye(O), [1 1 Q M]);    % identity covariance for every state/mixture pair
indices   = randperm(T*nex);
mu0       = reshape(Dat1s(:,indices(1:(Q*M))), [O Q M]);   % means drawn from random training samples
mixmat0   = mk_stochastic(rand(Q,M));     % random mixture weights
[LL1, prior1, transmat1, mu1, Sigma1, mixmat1] = mhmm_em(Dat1s, prior0, transmat0, mu0, Sigma0, mixmat0, 'max_iter', 1000); 


% ---- intention 2: same initialisation and EM training ----
nex       = length(Dat2);
prior0    = normalise(rand(Q,1));
transmat0 = mk_stochastic(rand(Q,Q));
Sigma0    = repmat(eye(O), [1 1 Q M]);
indices   = randperm(T*nex);
mu0       = reshape(Dat2s(:,indices(1:(Q*M))), [O Q M]);
mixmat0   = mk_stochastic(rand(Q,M));
[LL2, prior2, transmat2, mu2, Sigma2, mixmat2] = mhmm_em(Dat2s, prior0, transmat0, mu0, Sigma0, mixmat0, 'max_iter', 1000); 


% ---- intention 3 ----
nex       = length(Dat3);
prior0    = normalise(rand(Q,1));
transmat0 = mk_stochastic(rand(Q,Q));
Sigma0    = repmat(eye(O), [1 1 Q M]);
indices   = randperm(T*nex);
mu0       = reshape(Dat3s(:,indices(1:(Q*M))), [O Q M]);
mixmat0   = mk_stochastic(rand(Q,M));
[LL3, prior3, transmat3, mu3, Sigma3, mixmat3] = mhmm_em(Dat3s, prior0, transmat0, mu0, Sigma0, mixmat0, 'max_iter', 1000); 


% ---- intention 4 ----
nex       = length(Dat4);
prior0    = normalise(rand(Q,1));
transmat0 = mk_stochastic(rand(Q,Q));
Sigma0    = repmat(eye(O), [1 1 Q M]);
indices   = randperm(T*nex);
mu0       = reshape(Dat4s(:,indices(1:(Q*M))), [O Q M]);
mixmat0   = mk_stochastic(rand(Q,M));
[LL4, prior4, transmat4, mu4, Sigma4, mixmat4] = mhmm_em(Dat4s, prior0, transmat0, mu0, Sigma0, mixmat0, 'max_iter', 1000); 

% ---- intention 5 ----
nex       = length(Dat5);
prior0    = normalise(rand(Q,1));
transmat0 = mk_stochastic(rand(Q,Q));
Sigma0    = repmat(eye(O), [1 1 Q M]);
indices   = randperm(T*nex);
mu0       = reshape(Dat5s(:,indices(1:(Q*M))), [O Q M]);
mixmat0   = mk_stochastic(rand(Q,M));
[LL5, prior5, transmat5, mu5, Sigma5, mixmat5] = mhmm_em(Dat5s, prior0, transmat0, mu0, Sigma0, mixmat0, 'max_iter', 1000); 

4. Complete algorithm code file



Origin: blog.csdn.net/hlayumi1234567/article/details/130207828