Motor FOC sensorless position estimation (2): the Kalman filter and the corresponding motor system model (KF/EKF)


The MATLAB simulation of brushless-motor FOC extended-Kalman-filter position estimation has been posted on the WeChat public account: BOBO's Experimental Classroom.

Introduction

Kalman filtering combines the predicted state of the system at the current moment with the feedback obtained from measurement, yielding a more accurate state estimate (prediction + measurement feedback). What we usually call the Kalman filter (KF) applies to linear systems, but its ideas also adapt to nonlinear systems; the extensions include the EKF, UKF, and so on.

Let's start with a simple problem to understand the idea behind Kalman filtering.

Suppose we want to know the thickness of a coin and have $k$ measurements $z_1, z_2, \dots, z_k$. To estimate the true value we generally take the mean, so the estimator is

$$\hat{x}_k = \frac{1}{k}(z_1+z_2+\dots+z_k) = \frac{k-1}{k}\cdot\frac{1}{k-1}(z_1+z_2+\dots+z_{k-1}) + \frac{1}{k}z_k = \frac{k-1}{k}\hat{x}_{k-1}+\frac{1}{k}z_k$$

$$\hat{x}_k = \hat{x}_{k-1}+\frac{1}{k}(z_k - \hat{x}_{k-1}) = \hat{x}_{k-1}+K_k(z_k - \hat{x}_{k-1}) \tag{①}$$

From formula ①: when we have only one measurement, i.e. $k = 1$, we can only choose to believe the measurement, and as $k \to \infty$ we can trust the previous estimate. Moreover, the new estimate depends only on the current measurement and the previous estimate, so there is no need to go back to the previous $k-1$ measurements.

where $K_k$ is the Kalman gain. To minimize the estimation error, we simply minimize the variance $\sigma^2_{\hat{x}_k}$: setting $\frac{\partial \sigma^2_{\hat{x}_k}}{\partial K_k} = 0$ gives $K_k = \frac{\sigma^2_{\hat{x}_{k-1}}}{\sigma^2_{\hat{x}_{k-1}}+\sigma^2_{z_k}}$. Transforming this as follows yields the general form of the Kalman gain; this value of $K_k$ is mathematically optimal.

$$K_k = \frac{e_{EST}[k-1]}{e_{EST}[k-1]+e_{MEA}[k]} \tag{②}$$

where $e_{EST}$ is the estimation error and $e_{MEA}$ is the measurement error. Observing equation ②: when $e_{EST} \gg e_{MEA}$, $K_k \approx 1$, and substituting into equation ① gives $\hat{x}_k = z_k$; that is, when the estimation error is relatively large, we choose to believe the measurement. Conversely, when $e_{EST} \ll e_{MEA}$, $K_k \approx 0$ and $\hat{x}_k = \hat{x}_{k-1}$; in this case we choose to believe the estimate.
The general steps of Kalman filtering in this case:

| step | formula |
| --- | --- |
| 1 | $K_k = \frac{e_{EST}[k-1]}{e_{EST}[k-1]+e_{MEA}[k]}$ |
| 2 | $\hat{x}_k = \hat{x}_{k-1}+K_k(z_k - \hat{x}_{k-1})$ |
| 3 | $e_{EST}[k] = (1-K_k)e_{EST}[k-1]$ |
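To make the recursion concrete, here is a minimal sketch of these three steps in C. The initial estimate, the error values, and the five measurements are made-up illustrative numbers, not data from this article:

```c
#include <stdio.h>

/* Minimal scalar Kalman filter for the coin-thickness example.
   All numeric values below are illustrative assumptions. */
int main(void)
{
    double z[] = {50.45, 50.97, 50.30, 50.62, 50.51}; /* measurements z_k */
    double x_hat = 40.0;  /* initial estimate, deliberately off */
    double e_est = 5.0;   /* initial estimation error e_EST     */
    double e_mea = 3.0;   /* measurement error e_MEA            */

    for (int k = 0; k < 5; ++k) {
        /* step 1: Kalman gain */
        double K = e_est / (e_est + e_mea);
        /* step 2: correct the estimate with the innovation z_k - x_hat */
        x_hat += K * (z[k] - x_hat);
        /* step 3: update the estimation error */
        e_est = (1.0 - K) * e_est;
        printf("k=%d  K=%.3f  x_hat=%.3f\n", k + 1, K, x_hat);
    }
    return 0;
}
```

Because $e_{EST}$ shrinks on every step, $K_k$ decays and the filter gradually trusts its own estimate more than new measurements, exactly as equation ② suggests.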

Kalman filter

Consider the system

$$\left\{ \begin{array}{ll} \vec{x}[k] = A\vec{x}[k-1] + B\vec{u}[k-1] + w[k-1] \\ \vec{z}[k] = H\vec{x}[k] + v[k] \end{array} \right. \tag{③}$$

where $w$ is the process noise, which follows the $N(0, Q)$ distribution, and $v$ is the measurement noise, which follows the $N(0, R)$ distribution.

1. Covariance matrix

We represent the variance and covariance through a matrix:

Variance: $\sigma_x^2 = \frac{1}{n-1}\sum_{i=1}^n(x_i - \bar{x})^2$

Covariance: $\sigma_{x,y} = \frac{1}{n-1}\sum_{i=1}^n(x_i - \bar{x})(y_i - \bar{y})$

Consider the following third-order covariance matrix

$$P = \begin{bmatrix} \sigma_x^2 & \sigma_x\sigma_y & \sigma_x\sigma_z \\ \sigma_y\sigma_x & \sigma_y^2 & \sigma_y\sigma_z \\ \sigma_z\sigma_x & \sigma_z\sigma_y & \sigma_z^2 \end{bmatrix}$$

where the off-diagonal products such as $\sigma_x\sigma_y$ denote the covariances $\sigma_{x,y}$.

To find this covariance matrix, construct the matrix $T$, where $\frac{1}{3}\,\mathrm{ones}(3,3)\begin{bmatrix}\vec{x} & \vec{y} & \vec{z}\end{bmatrix}$ forms the matrix of the means of $x, y, z$:

$$T = \begin{bmatrix} x_1 & y_1 & z_1 \\ x_2 & y_2 & z_2 \\ x_3 & y_3 & z_3 \end{bmatrix} - \frac{1}{3}\begin{bmatrix} 1&1&1 \\ 1&1&1 \\ 1&1&1 \end{bmatrix}\begin{bmatrix} x_1 & y_1 & z_1 \\ x_2 & y_2 & z_2 \\ x_3 & y_3 & z_3 \end{bmatrix}$$

$$P = \frac{1}{3}T^TT$$
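As a quick check of this construction, the following C sketch builds $T$ and $P$ for three samples of $(x, y, z)$; the sample values are arbitrary illustrative numbers:

```c
#include <stdio.h>

#define N 3 /* number of samples */

/* Build T = X - (1/N)*ones(N,N)*X, then P = (1/N)*T^T*T. */
int main(void)
{
    double X[N][3] = { {1.0, 2.0, 0.5},   /* (x1, y1, z1), illustrative */
                       {2.0, 3.0, 1.0},
                       {3.0, 7.0, 1.5} };
    double mean[3] = {0}, T[N][3], P[3][3] = {{0}};

    for (int j = 0; j < 3; ++j) {         /* column means */
        for (int i = 0; i < N; ++i) mean[j] += X[i][j];
        mean[j] /= N;
    }
    for (int i = 0; i < N; ++i)           /* deviation matrix T */
        for (int j = 0; j < 3; ++j)
            T[i][j] = X[i][j] - mean[j];

    for (int r = 0; r < 3; ++r)           /* P = (1/N)*T^T*T */
        for (int c = 0; c < 3; ++c) {
            for (int i = 0; i < N; ++i) P[r][c] += T[i][r] * T[i][c];
            P[r][c] /= N;
        }

    for (int r = 0; r < 3; ++r)
        printf("%8.4f %8.4f %8.4f\n", P[r][0], P[r][1], P[r][2]);
    return 0;
}
```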

For the system represented by equation ③

Since $D(X) = E(X^2) - E^2(X)$ and $E(w_i) = 0$, the process-noise covariance matrix can be expressed as:

$$Q = E(ww^T) = E\left(\begin{bmatrix} w_1 \\ \vdots \\ w_n \end{bmatrix}\begin{bmatrix} w_1 & \dots & w_n \end{bmatrix}\right) = \begin{bmatrix} E(w_1^2) & E(w_1w_2) & \dots & E(w_1w_n) \\ E(w_2w_1) & E(w_2^2) & \dots & E(w_2w_n) \\ \vdots & \vdots & & \vdots \\ E(w_nw_1) & E(w_nw_2) & \dots & E(w_n^2) \end{bmatrix}$$

At the same time, $w_1, w_2, \dots, w_n$ are mutually independent, so the off-diagonal covariances vanish:

$$Q = \begin{bmatrix} \sigma_{w_1}^2 & \sigma_{w_1,w_2} & \dots & \sigma_{w_1,w_n} \\ \sigma_{w_2,w_1} & \sigma_{w_2}^2 & \dots & \sigma_{w_2,w_n} \\ \vdots & \vdots & & \vdots \\ \sigma_{w_n,w_1} & \sigma_{w_n,w_2} & \dots & \sigma_{w_n}^2 \end{bmatrix} = \begin{bmatrix} \sigma_{w_1}^2 & 0 & \dots & 0 \\ 0 & \sigma_{w_2}^2 & \dots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \dots & \sigma_{w_n}^2 \end{bmatrix}$$

In the same way, $R = E(vv^T)$.

2. A priori estimation

The prior estimate is calculated from the state-space equation, without considering process or measurement noise:

$$\hat{\vec{x}}^-[k] = A\vec{x}[k-1] + B\vec{u}[k-1]$$

But in the motor model the true $\omega$ and $\theta$ cannot be obtained, so we replace $\vec{x}[k-1]$ with the previous estimate $\hat{\vec{x}}[k-1]$.

At the same time, from the measurement $\vec{z}[k] = H\vec{x}[k]$ we get

$$\hat{\vec{x}}_{mea}[k] = H^{-1}\vec{z}[k]$$

3. Posterior estimation

Estimation + Measurement Feedback

$$\hat{\vec{x}}[k] = \hat{\vec{x}}^-[k] + G\left(H^{-1}\vec{z}[k] - \hat{\vec{x}}^-[k]\right)$$

Let $G = K_kH$; then

$$\hat{\vec{x}}[k] = \hat{\vec{x}}^-[k] + K_k\left(\vec{z}[k] - H\hat{\vec{x}}^-[k]\right)$$

4. Kalman gain

The Kalman gain $K_k$ is the extremum point of this minimization problem, so the Kalman filter is both an observer and an optimal estimator.

Define the estimation error $\vec{e}_k = \vec{x}_k - \hat{\vec{x}}_k$, which follows the $N(0, P)$ distribution, where

$$P = E(e_ke_k^T)$$

To make $tr(P)$ minimal, i.e., to minimize the total variance, set $\frac{\partial\, tr(P)}{\partial K_k} = 0$, which gives

$$K_k = \frac{P^-[k]H^T}{HP^-[k]H^T + R}$$

For the detailed derivation, see the Bilibili uploader DR_CAN's video [3]; the link is at the end of the article.

where $P^-$ is the prior error covariance:

$$P^-[k] = E(e_k^-e_k^{-T}) = AP_{k-1}A^T + Q, \qquad \vec{e}_k^- = \vec{x}_k - \hat{\vec{x}}^-_k$$

General steps of Kalman filtering

| Predict | Correct |
| --- | --- |
| ① Prior estimate: $\hat{\vec{x}}^-[k] = A\hat{\vec{x}}[k-1] + B\vec{u}[k-1]$ | ③ Kalman gain: $K_k = \frac{P^-[k]H^T}{HP^-[k]H^T+R}$ |
| ② Prior error covariance: $P_k^- = E(e_k^-e_k^{-T}) = AP_{k-1}A^T + Q$ | ④ Posterior estimate: $\hat{\vec{x}}[k] = \hat{\vec{x}}^-[k] + K_k(\vec{z}[k] - H\hat{\vec{x}}^-[k])$ |
|  | ⑤ Error covariance update: $P_k = (I - K_kH)P_k^-$ |
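These five steps translate directly into code. Below is a minimal C sketch of one filter cycle for a toy 1-D constant-velocity model (two states, scalar measurement), not the motor model; the matrices $A = \begin{bmatrix}1 & T \\ 0 & 1\end{bmatrix}$, $H = \begin{bmatrix}1 & 0\end{bmatrix}$ and all tuning values are this example's own assumptions:

```c
/* One predict-correct cycle of the five-step Kalman filter for
   x = [position; velocity], A = [1 T; 0 1], H = [1 0], Q = q*I. */
typedef struct { double x[2]; double P[2][2]; } kf_t;

void kf_step(kf_t *kf, double z, double T, double q, double R)
{
    /* (1) prior estimate x- = A*x (this toy model has no input u) */
    double xp[2] = { kf->x[0] + T * kf->x[1], kf->x[1] };

    /* (2) prior covariance P- = A*P*A^T + Q, expanded by hand */
    double (*P)[2] = kf->P;
    double Pp[2][2] = {
        { P[0][0] + T*(P[0][1] + P[1][0]) + T*T*P[1][1] + q,
          P[0][1] + T*P[1][1] },
        { P[1][0] + T*P[1][1],
          P[1][1] + q }
    };

    /* (3) Kalman gain K = P-*H^T / (H*P-*H^T + R) */
    double S = Pp[0][0] + R;
    double K[2] = { Pp[0][0] / S, Pp[1][0] / S };

    /* (4) posterior estimate x = x- + K*(z - H*x-) */
    double innov = z - xp[0];
    kf->x[0] = xp[0] + K[0] * innov;
    kf->x[1] = xp[1] + K[1] * innov;

    /* (5) covariance update P = (I - K*H)*P- */
    kf->P[0][0] = (1.0 - K[0]) * Pp[0][0];
    kf->P[0][1] = (1.0 - K[0]) * Pp[0][1];
    kf->P[1][0] = Pp[1][0] - K[1] * Pp[0][0];
    kf->P[1][1] = Pp[1][1] - K[1] * Pp[0][1];
}
```

Calling `kf_step` once per sample with each new measurement `z` realizes the predict-correct loop in the table above.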

Motor state equation

1. State vector

In the Luenberger observer of the previous article we selected the state vector $\vec{x} = \begin{bmatrix}i_\alpha & i_\beta & e_\alpha & e_\beta\end{bmatrix}^T$, but in order to use the Kalman filter to estimate $\omega$ and $\theta$ we choose the state vector $\vec{x} = \begin{bmatrix}i_\alpha & i_\beta & \omega_e & \theta\end{bmatrix}^T$ with the measurement $\vec{z} = \begin{bmatrix}i_\alpha & i_\beta\end{bmatrix}^T$, and rewrite the back-EMF in the following form (the current lags the voltage by 90°, so the α-axis back-EMF is generated by the β axis and is sign-reversed):

$$\left\{ \begin{array}{ll} e_\alpha = -k_e\omega_e\sin(\omega_et) \\ e_\beta = k_e\omega_e\cos(\omega_et) \end{array} \right.$$

$$\left\{ \begin{array}{ll} \frac{di_\alpha}{dt} = -\frac{r}{L_S}i_\alpha + \frac{1}{L_S}k_e\omega_e\sin\theta + \frac{1}{L_S}u_\alpha \\ \frac{di_\beta}{dt} = -\frac{r}{L_S}i_\beta - \frac{1}{L_S}k_e\omega_e\cos\theta + \frac{1}{L_S}u_\beta \\ \frac{d\omega_e}{dt} = 0 \\ \frac{d\theta}{dt} = \omega_e \end{array} \right.$$

2. Linearization

At this point the system is nonlinear, and Gaussian noise passed through a nonlinear system no longer remains normally distributed. In order to use the Kalman filter we must first linearize the system; that is, we use the EKF to estimate the system state. Let

$$G(\vec{x}) = \begin{bmatrix} -\frac{r}{L_S}i_\alpha + \frac{1}{L_S}k_e\omega_e\sin\theta \\ -\frac{r}{L_S}i_\beta - \frac{1}{L_S}k_e\omega_e\cos\theta \\ 0 \\ \omega_e \end{bmatrix}$$

When calculating the prior estimate, the nonlinear function of the original system is used

Accordingly, the prior estimate in step ① of the general Kalman filter procedure above is modified to

$$\hat{\vec{x}}^-[k] = \hat{\vec{x}}[k-1] + T\,G(\hat{\vec{x}}[k-1]) + B\vec{u}[k-1]$$

where $T$ is the sampling period.
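As a hedged C sketch, this modified step ① for the motor model could look like the following; the function name, state layout, and the optional angle wrapping are this sketch's own choices, with $r$, $L_S$, $k_e$ the motor parameters from the model above:

```c
#include <math.h>

#define TWO_PI 6.283185307179586

/* EKF state indices: x = [i_alpha, i_beta, omega_e, theta]^T */
enum { IA, IB, WE, TH };

/* Modified step (1): x-[k] = x[k-1] + T*G(x[k-1]) + B*u[k-1] */
static void ekf_predict(double x[4], double ua, double ub,
                        double r, double Ls, double ke, double T)
{
    /* nonlinear model G(x) from the equations above */
    double G[4] = {
        -r / Ls * x[IA] + ke * x[WE] * sin(x[TH]) / Ls,
        -r / Ls * x[IB] - ke * x[WE] * cos(x[TH]) / Ls,
        0.0,      /* omega_e is modeled as constant */
        x[WE]     /* d(theta)/dt = omega_e          */
    };
    x[IA] += T * G[IA] + T / Ls * ua;  /* B*u only feeds the currents */
    x[IB] += T * G[IB] + T / Ls * ub;
    x[WE] += T * G[WE];
    x[TH] += T * G[TH];
    x[TH] = fmod(x[TH], TWO_PI);       /* optional: keep theta wrapped */
}
```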

Jacobian matrix

$$J = \nabla_xf = \frac{df(x)}{dx} = \begin{bmatrix} \frac{\partial f(x)}{\partial x_1} & \dots & \frac{\partial f(x)}{\partial x_n} \end{bmatrix} = \begin{bmatrix} \frac{\partial f_1(x)}{\partial x_1} & \dots & \frac{\partial f_1(x)}{\partial x_n} \\ \vdots & & \vdots \\ \frac{\partial f_n(x)}{\partial x_1} & \dots & \frac{\partial f_n(x)}{\partial x_n} \end{bmatrix}$$

then

$$\frac{\partial G(\vec{x})}{\partial\vec{x}} = \begin{bmatrix} -\frac{r}{L_S} & 0 & \frac{k_e\sin\theta}{L_S} & \frac{k_e\omega_e\cos\theta}{L_S} \\ 0 & -\frac{r}{L_S} & -\frac{k_e\cos\theta}{L_S} & \frac{k_e\omega_e\sin\theta}{L_S} \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}$$

3. Discretization

$$\left\{ \begin{array}{ll} i_\alpha[k] = \left(1-\frac{rT}{L_S}\right)i_\alpha[k-1] + \frac{Tk_e\sin(\theta_{k-1})}{L_S}\omega_e[k-1] + \frac{Tk_e\omega_e[k-1]\cos(\theta_{k-1})}{L_S}\theta[k-1] + \frac{T}{L_S}u_\alpha[k-1] \\ i_\beta[k] = \left(1-\frac{rT}{L_S}\right)i_\beta[k-1] - \frac{Tk_e\cos(\theta_{k-1})}{L_S}\omega_e[k-1] + \frac{Tk_e\omega_e[k-1]\sin(\theta_{k-1})}{L_S}\theta[k-1] + \frac{T}{L_S}u_\beta[k-1] \\ \omega_e[k] = \omega_e[k-1] \\ \theta[k] = \theta[k-1] + T\omega_e[k-1] \end{array} \right.$$

So the linearized discrete system matrices are

$$A = \begin{bmatrix} 1-\frac{rT}{L_S} & 0 & \frac{Tk_e\sin(\theta_{k-1})}{L_S} & \frac{Tk_e\omega_e[k-1]\cos(\theta_{k-1})}{L_S} \\ 0 & 1-\frac{rT}{L_S} & -\frac{Tk_e\cos(\theta_{k-1})}{L_S} & \frac{Tk_e\omega_e[k-1]\sin(\theta_{k-1})}{L_S} \\ 0 & 0 & 1 & 0 \\ 0 & 0 & T & 1 \end{bmatrix}$$

$$B = \begin{bmatrix} \frac{T}{L_S} & 0 \\ 0 & \frac{T}{L_S} \\ 0 & 0 \\ 0 & 0 \end{bmatrix}$$

$$H = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}$$
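With these matrices, steps ② to ⑤ can be written out explicitly. The following C sketch assumes `ekf_predict` from the sketch above has already produced the prior state in `x`; `Q` and `R` are diagonal tuning arrays, all names are this sketch's own, and the Jacobian is evaluated at the prior estimate (for small $T$ this differs negligibly from evaluating it at the previous posterior):

```c
#include <math.h>

/* Steps (2)-(5): P- = A*P*A^T + Q; K = P-*H^T*(H*P-*H^T + R)^-1;
   x = x- + K*(z - H*x-); P = (I - K*H)*P-, with H = [I2 0]. */
static void ekf_correct(double x[4], double P[4][4],
                        double ia_meas, double ib_meas,
                        double r, double Ls, double ke, double T,
                        const double Q[4], const double R[2])
{
    double s = sin(x[3]), c = cos(x[3]);
    double A[4][4] = {          /* linearized discrete A from above */
        { 1.0 - r*T/Ls, 0.0,          T*ke*s/Ls,  T*ke*x[2]*c/Ls },
        { 0.0,          1.0 - r*T/Ls, -T*ke*c/Ls, T*ke*x[2]*s/Ls },
        { 0.0,          0.0,          1.0,        0.0 },
        { 0.0,          0.0,          T,          1.0 }
    };

    /* (2) P- = A*P*A^T + Q (Q diagonal) */
    double AP[4][4] = {{0}}, Pm[4][4] = {{0}};
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            for (int k = 0; k < 4; k++)
                AP[i][j] += A[i][k] * P[k][j];
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++) {
            for (int k = 0; k < 4; k++)
                Pm[i][j] += AP[i][k] * A[j][k];
            if (i == j) Pm[i][j] += Q[i];
        }

    /* (3) with H = [I2 0], S = (top-left 2x2 of P-) + R */
    double S00 = Pm[0][0] + R[0], S01 = Pm[0][1];
    double S10 = Pm[1][0],        S11 = Pm[1][1] + R[1];
    double det = S00 * S11 - S01 * S10;

    double K[4][2];             /* K = (first two columns of P-)*S^-1 */
    for (int i = 0; i < 4; i++) {
        K[i][0] = ( Pm[i][0] * S11 - Pm[i][1] * S10) / det;
        K[i][1] = (-Pm[i][0] * S01 + Pm[i][1] * S00) / det;
    }

    /* (4) x = x- + K*(z - H*x-) */
    double e0 = ia_meas - x[0], e1 = ib_meas - x[1];
    for (int i = 0; i < 4; i++)
        x[i] += K[i][0] * e0 + K[i][1] * e1;

    /* (5) P = (I - K*H)*P- = P- minus K times the first two rows of P- */
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            P[i][j] = Pm[i][j] - K[i][0] * Pm[0][j] - K[i][1] * Pm[1][j];
}
```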

MATLAB simulation

The simulation model is open source (WeChat public account: BOBO's Experimental Classroom).

Current loop and EKF position estimation

![EKF](https://img-blog.csdnimg.cn/126bceb4c47b4b8d87af0f3e105b6063.png)

EKF

It can be seen that the θ estimate output by the Kalman filter almost coincides with the actual position, but the speed estimate ω is affected by the flux-linkage constant: a flux-linkage value set too large makes the ω estimate too small, and conversely a value set too small makes it too large, so the speed estimate is not accurate. It is recommended to differentiate θ and use that as the ω output, as sketched below.
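One way to implement that suggestion is sketched below in C: difference the θ estimate, unwrap the step across the 0/2π boundary, then low-pass filter the result. The 10 kHz rate matches the experiment below, while the filter coefficient is a made-up tuning value:

```c
/* Speed output from the differentiated theta estimate. */
static double omega_from_theta(double theta_hat)
{
    static double theta_prev = 0.0, omega_filt = 0.0;
    const double PI = 3.14159265358979;
    const double T = 1.0 / 10000.0;  /* 10 kHz sampling period      */
    const double alpha = 0.05;       /* low-pass filter coefficient */

    double dtheta = theta_hat - theta_prev;
    theta_prev = theta_hat;
    /* unwrap: a jump near 2*pi means theta crossed the boundary */
    if (dtheta >  PI) dtheta -= 2.0 * PI;
    if (dtheta < -PI) dtheta += 2.0 * PI;

    omega_filt += alpha * (dtheta / T - omega_filt);
    return omega_filt;
}
```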


Actual operation on the microcontroller

As shown in the figure, at a 10 kHz sampling frequency the current and position are tracked well; xd0 and xd3 are ialpha_hat and theta_hat respectively, while foc.ialpha and foc.theta are the actual values. The waveforms were captured with CubeMonitor, whose maximum sampling rate is only 1 kHz, so the curves are not very smooth.

References

[1] FOC control of PMSM (13): building an EKF observer
[2] Extended Kalman filter (EKF): theory and examples (MATLAB, Python, and C++ code)
[3] Bilibili, DR_CAN: detailed mathematical derivation of the Kalman gain (part 3)
[4] Derivations of the Kalman filter (KF) and extended Kalman filter (EKF)
[5] Motor FOC sensorless position estimation (1): Luenberger observer
