Gray Markov model MATLAB implementation

Gray prediction GM(1,1)

  • Let the original sequence be $X_0 = \{x_{0,1}, x_{0,2}, \cdots, x_{0,n}\}$. From $X_0$, generate the accumulated sequence $X_1 = \{x_{1,1}, x_{1,2}, \cdots, x_{1,n}\}$, where:
    $x_{1,k} = \sum_{i=1}^{k} x_{0,i} \quad (k = 1, 2, \cdots, n)$
  • Conduct a level-ratio test on the original data. First calculate the level-ratio sequence $\rho_k$ of the original data:
    $\rho_k = \frac{x_{0,k-1}}{x_{0,k}} \quad (k = 2, \cdots, n)$
  • Then judge whether all $\rho_k$ fall inside the admissible coverage interval $\Theta = \left(e^{-2/(n+1)}, e^{2/(n+1)}\right)$. If so, a grey GM(1,1) model can be established from the data sequence; otherwise, apply a translation by a suitable constant $b$ so that the shifted sequence $Y_0 = \{y_{0,1}, y_{0,2}, \cdots, y_{0,n}\}$ has level ratios inside the admissible interval. The translation is:
    $y_{0,k} = x_{0,k} + b$
  • From the accumulated sequence $X_1$, establish the first-order differential (whitening) equation of the grey GM(1,1) model:
    $\frac{dX_1}{dt} + \alpha X_1 = q$
    where $\alpha$ and $q$ are the development coefficient and the grey action quantity, respectively.
  • The parameter vector $a = (\alpha, q)^T$ is solved by least squares:
    $a = (\alpha, q)^T = (B^T B)^{-1} B^T D$
    where:
    $B = \begin{bmatrix} -0.5(x_{1,1} + x_{1,2}) & 1 \\ \vdots & \vdots \\ -0.5(x_{1,n-1} + x_{1,n}) & 1 \end{bmatrix}, \qquad D = \begin{bmatrix} x_{0,2} \\ \vdots \\ x_{0,n} \end{bmatrix}$
  • Obtain the grey GM(1,1) time-response model:
    $\hat{x}_{1,k+1} = \left(x_{0,1} - \frac{q}{\alpha}\right) e^{-\alpha k} + \frac{q}{\alpha}$
  • The accumulated value $\hat{x}_{1,k+1}$ is restored to the predicted value $\hat{x}_{0,k+1}$ by inverse accumulation:
    $\hat{x}_{0,k+1} = \hat{x}_{1,k+1} - \hat{x}_{1,k}$
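The full MATLAB source sits behind the paid link at the end of the post; as an illustrative sketch only, the accumulation, least-squares fit, and inverse-accumulation steps above can be expressed in Python with NumPy (function names `gm11_fit` and `gm11_predict` are my own):

```python
import numpy as np

def gm11_fit(x0):
    """Least-squares estimate of (alpha, q) for a grey GM(1,1) model."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                               # 1-AGO accumulated sequence X_1
    B = np.column_stack([-0.5 * (x1[:-1] + x1[1:]),  # background values of X_1
                         np.ones(len(x0) - 1)])
    D = x0[1:]
    alpha, q = np.linalg.lstsq(B, D, rcond=None)[0]  # a = (B^T B)^-1 B^T D
    return alpha, q

def gm11_predict(x0, steps=0):
    """Fitted values for x0 plus `steps` extrapolated points."""
    x0 = np.asarray(x0, dtype=float)
    alpha, q = gm11_fit(x0)
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - q / alpha) * np.exp(-alpha * k) + q / alpha  # time response
    x0_hat = np.empty_like(x1_hat)
    x0_hat[0] = x0[0]
    x0_hat[1:] = np.diff(x1_hat)                     # inverse accumulation (1-IAGO)
    return x0_hat
```

On a roughly exponential series the fitted values track the data closely; the level-ratio test above decides whether the model is applicable at all.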

Model testing

In order to test the credibility of the model, a posterior-variance test needs to be performed on the predicted values. Establish the residual sequence:
$E_0 = \{e_{0,2}, e_{0,3}, \cdots, e_{0,n}\} = \{x_{0,2} - \hat{x}_{0,2},\ x_{0,3} - \hat{x}_{0,3},\ \cdots,\ x_{0,n} - \hat{x}_{0,n}\}$
Let $s_1$ be the standard deviation of the original data sequence and $s_2$ that of the residual sequence $E_0$. Calculate the posterior-variance ratio $c$ and the small-error probability $p$:
$c = \frac{s_2}{s_1}, \qquad p = P\{\, |e_{0,k} - \bar{e}_0| < 0.6745\, s_1 \,\}$
where $\bar{e}_0$ is the mean of the residuals.
The values of $p$ and $c$ jointly determine the model accuracy level. The table gives four accuracy levels: good, qualified, barely qualified, and unqualified. The smaller $c$ and the larger $p$, the higher the model accuracy. A small $c$ means $s_1$ is large relative to $s_2$: the original data are widely dispersed while the residuals are tightly clustered, so the model's predictions differ little from the original data. A large $p$ means more of the residuals fall within the small-error band. If the test accuracy level meets the requirements, the established grey GM(1,1) model can be used for prediction directly; if not, the predicted data must be corrected.
(Figure: model accuracy grading table)
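A minimal Python sketch of this test follows; since the grading table image did not survive, the threshold values in `accuracy_level` are the cutoffs commonly quoted in the grey-modelling literature and should be treated as an assumption:

```python
import numpy as np

def posterior_variance_test(x0, x0_hat):
    """Compute posterior ratio c and small-error probability p for a GM(1,1) fit."""
    x0, x0_hat = np.asarray(x0, float), np.asarray(x0_hat, float)
    e = x0[1:] - x0_hat[1:]           # residuals (first point is fitted exactly)
    s1 = np.std(x0)                   # dispersion of the original data
    s2 = np.std(e)                    # dispersion of the residuals
    c = s2 / s1
    p = np.mean(np.abs(e - e.mean()) < 0.6745 * s1)
    return c, p

def accuracy_level(c, p):
    """Grade the model; thresholds are the commonly quoted ones (assumed)."""
    if c < 0.35 and p > 0.95:
        return "good"
    if c < 0.50 and p > 0.80:
        return "qualified"
    if c < 0.65 and p > 0.70:
        return "barely qualified"
    return "unqualified"
```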

Gray Markov prediction model

Build a grey GM(1,1) model on the residual sequence $E_0$, giving:
$\hat{e}_{1,k+1} = \left(e_{0,2} - \frac{q'}{\alpha'}\right) e^{-\alpha' k} + \frac{q'}{\alpha'} \quad (k = 2, 3, \cdots, n)$
where $\alpha'$ and $q'$ are the development coefficient and grey action quantity of the residual model.
Inverse-accumulate the model output to obtain the residual correction value $\hat{e}_{0,k+1}$:
$\hat{e}_{0,k+1} = \hat{e}_{1,k+1} - \hat{e}_{1,k}$
Use the residual correction value $\hat{e}_{0,k+1}$ to correct the predictions of the traditional GM(1,1) model, obtaining the corrected values:
$\hat{x}'_{0,k} = \hat{x}_{0,k} + m_{0,k}\, \hat{e}_{0,k+1}$
where:
$m_{0,k} = \begin{cases} 1 & (x_{0,k} - \hat{x}_{0,k} > 0) \\ -1 & (x_{0,k} - \hat{x}_{0,k} < 0) \end{cases}$
The grey Markov model is introduced to determine whether $m_{0,k}$ is positive or negative. It is well suited to predicting random, irregular data, compensating for the traditional GM(1,1) model's low prediction accuracy on volatile and trending data. The calculation process is as follows:

  • Divide states according to $E_0$. This article uses two states: state 1 means the residual is positive, and state 2 means the residual is negative.
  • Compute the probability $p_{ij}$ of moving from state $i$ to state $j$:
    $p_{ij} = \frac{M_{ij}}{M_i} \quad (i = 1, 2;\ j = 1, 2)$
    where $M_{ij}$ is the number of one-step transitions from state $i$ to state $j$, and $M_i$ is the total number of occurrences of state $i$. This yields the state transition matrix $P$:
    $P = \begin{bmatrix} p_{11} & p_{12} \\ p_{21} & p_{22} \end{bmatrix}$
  • Take the state of the last value of the residual sequence as the initial state vector $\mu_0 = (\mu_{0,1}, \mu_{0,2})$, where $\mu_{0,1}$ and $\mu_{0,2}$ are the probabilities of being in state 1 and state 2, respectively. That is, if the last residual value is positive, $\mu_0 = (1, 0)$; if negative, $\mu_0 = (0, 1)$.
  • According to $\mu_t = \mu_0 P^t$, obtain the state probabilities after $t$ state transitions. Select the state with the highest probability as the final result; if the probabilities of the two states are equal, keep the result of the previous step.
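As a hedged sketch (Python, helper name `markov_sign` is my own), the state division, transition counting, and one-step prediction above look like:

```python
import numpy as np

def markov_sign(residuals):
    """Predict the sign m (+1 or -1) of the next residual from a 2-state Markov chain.

    State 1 (index 0): residual positive; state 2 (index 1): residual negative.
    """
    states = [0 if e > 0 else 1 for e in residuals]
    M = np.zeros((2, 2))
    for i, j in zip(states[:-1], states[1:]):   # count one-step transitions M_ij
        M[i, j] += 1
    row_sums = M.sum(axis=1, keepdims=True)
    # p_ij = M_ij / M_i; unseen states get a uniform 0.5 row to avoid 0/0
    P = np.divide(M, row_sums, out=np.full((2, 2), 0.5), where=row_sums > 0)
    mu0 = np.array([1.0, 0.0]) if states[-1] == 0 else np.array([0.0, 1.0])
    mu1 = mu0 @ P                               # one-step transition: mu_t = mu_0 P^t
    if mu1[0] > mu1[1]:
        return 1
    if mu1[1] > mu1[0]:
        return -1
    return 1 if states[-1] == 0 else -1         # tie: keep the current state
```

For a strictly alternating residual sequence the chain predicts a sign flip; for an all-positive sequence it predicts the sign stays positive.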

Simulation results

(Figure: simulation results)

matlab source code

https://mbd.pub/o/bread/YpWWkphs

Origin blog.csdn.net/abcwsp/article/details/123283541