"Array Signal Processing and MATLAB Realization" Array Covariance Matrix Eigendecomposition, Source Number Estimation Algorithm

2.8 Eigendecomposition of Array Covariance Matrix

In practice, the data available is usually a finite number of snapshots collected over a finite time interval. During this interval, the directions of the spatial source signals are assumed not to change; although the signal envelopes vary with time, they are generally modeled as stationary random processes whose statistical properties do not change with time. The covariance matrix of the array output signal X(t) can then be defined as:

R=E\{[X(t)-m_x(t)][X(t)-m_x(t)]^H\}

Since m_x(t)=E[X(t)]=0, this reduces to:

R=E\{X(t)X^H(t)\}=E\{[A(\theta)S(t)+N(t)][A(\theta)S(t)+N(t)]^H\}

In addition, the following conditions must be met.

(1) M>K, that is, the number of array elements M is greater than the number of spatial signals (number of signal sources) that the array system may receive.

(2) For distinct signal directions \theta_i (i=1,2,...,K), the steering vectors \vec a(\theta_i) are linearly independent.

(3) The noise process N(t) at the array is Gaussian distributed, with

E\{N(t)\}=0

E\{N(t)N^H(t)\}=\sigma^2I

E\{N(t)N^T(t)\}=0

where \sigma^2 is the noise power.

(4) The covariance matrix of the spatial source signal vector S(t), R_s=E\{S(t)S^H(t)\}, is a nonsingular diagonal matrix, which indicates that the spatial source signals are mutually uncorrelated.
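Before stating the result below, it may help to expand the definition of R. Using conditions (3) and (4), and additionally assuming that the signal and noise are uncorrelated so that E\{S(t)N^H(t)\}=0 (a standard assumption, not listed explicitly above), the cross terms vanish:

R=A(\theta)E\{S(t)S^H(t)\}A^H(\theta)+A(\theta)E\{S(t)N^H(t)\}+E\{N(t)S^H(t)\}A^H(\theta)+E\{N(t)N^H(t)\}

=A(\theta)R_sA^H(\theta)+\sigma^2I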

From the above formulas it follows that R=A(\theta)R_sA^H(\theta)+\sigma^2I, and it can be proved that R is nonsingular. Moreover, R^H=R, so R is a positive definite Hermitian matrix; when it is diagonalized by a unitary transformation, the resulting diagonal matrix consists of M positive real numbers, and the corresponding M eigenvectors are linearly independent. Therefore, the eigendecomposition of R can be written as:

R=U\Sigma U^H=\sum_{i=1}^M\lambda_i\vec u_i\vec u_i^H

Among them, \Sigma=diag(\lambda_1,\lambda_2,...,\lambda_M), and it can be shown that the eigenvalues obey the ordering \lambda_1\geq ...\geq \lambda_K>\lambda_{K+1}=...=\lambda_M=\sigma^2. That is, the first K eigenvalues \lambda_1,\lambda_2,...,\lambda_K are related to the signals, and their values are greater than \sigma^2; the eigenvectors \vec u_1,\vec u_2,...,\vec u_K corresponding to these K larger eigenvalues constitute the signal subspace U_S, and \Sigma_S is the diagonal matrix formed by the K larger eigenvalues. The remaining M-K eigenvalues \lambda_{K+1},\lambda_{K+2},...,\lambda_M depend entirely on the noise, and their values are all equal to \sigma^2; the corresponding eigenvectors constitute the noise subspace U_N, and \Sigma_N is the diagonal matrix formed by the M-K smaller eigenvalues.

Therefore, R can be decomposed into:

R=U_S\Sigma _SU_S^H+U_N\Sigma _NU_N^H

In the formula, \Sigma_S is a diagonal matrix composed of the larger eigenvalues, and \Sigma_N is a diagonal matrix composed of the smaller eigenvalues, namely

\Sigma_S=\begin{bmatrix} \lambda_1 & & & \\ & \lambda_2 & & \\ & & \ddots & \\ & & & \lambda_K \end{bmatrix},\quad \Sigma_N=\begin{bmatrix} \lambda_{K+1} & & & \\ & \lambda_{K+2} & & \\ & & \ddots & \\ & & & \lambda_M \end{bmatrix}

Obviously, when the spatial noise is white noise, \Sigma_N=\sigma^2I_{(M-K)\times(M-K)}.
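This eigenvalue structure can be observed numerically. The following is a minimal MATLAB sketch assuming a uniform linear array with half-wavelength spacing; the element count, source directions, snapshot count, and noise power are illustrative values, not taken from the text:

% Illustrative simulation: M-element half-wavelength ULA, K uncorrelated sources
M = 8;                                       % number of array elements
K = 2;                                       % number of sources (M > K)
theta = [-10 20];                            % source directions (degrees)
L = 1000;                                    % number of snapshots
sigma2 = 0.1;                                % noise power

d = (0:M-1).';                               % element positions in half-wavelengths
A = exp(-1j*pi*d*sind(theta));               % M x K steering matrix
S = (randn(K,L) + 1j*randn(K,L))/sqrt(2);    % uncorrelated unit-power source signals
N = sqrt(sigma2/2)*(randn(M,L) + 1j*randn(M,L));  % spatially white Gaussian noise
X = A*S + N;                                 % array output snapshots

Rhat = (X*X')/L;                             % sample covariance matrix
lambda = sort(real(eig(Rhat)),'descend');    % eigenvalues, large to small
disp(lambda.')                               % first K clearly > sigma2, rest approx sigma2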

Some properties of these eigen-subspaces under the condition that the signal sources are independent are given below; they also lay the groundwork for the subsequent spatial spectrum estimation algorithms and their theoretical analysis.

Property 2.8.1   The space spanned by the eigenvectors corresponding to the large eigenvalues of the covariance matrix is the same as the space spanned by the steering vectors of the incident signals, namely: span\{\vec u_1,\vec u_2,...,\vec u_K\}=span\{\vec a_1,\vec a_2,...,\vec a_K\}

Property 2.8.2   The signal subspace U_S is orthogonal to the noise subspace U_N, and A^H\vec u_i=0, i=K+1,...,M

Property 2.8.3   The signal subspace U_S and the noise subspace U_N satisfy:

U_SU_S^H+U_NU_N^H=I,U_S^HU_S=I,U_N^HU_N=I

Property 2.8.4   The signal subspace U_S, the noise subspace U_N, and the array manifold A satisfy

U_SU_S^H=A(A^HA)^{-1}A^H,U_NU_N^H=I-A(A^HA)^{-1}A^H

Property 2.8.5   Defining \Sigma'=\Sigma_S-\sigma^2I, the following formula holds

AR_SA^HU_S=U_S\Sigma ^{'}
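Continuing the same illustrative simulation (reusing A, K, M, and Rhat from the sketch above), the properties can be checked numerically; with a finite number of snapshots they hold only approximately:

[U,D] = eig(Rhat);                           % eigendecomposition of the sample covariance
[~,idx] = sort(real(diag(D)),'descend');
U  = U(:,idx);                               % eigenvectors ordered by decreasing eigenvalue
Us = U(:,1:K);                               % estimated signal subspace
Un = U(:,K+1:end);                           % estimated noise subspace

disp(norm(A'*Un))                            % Property 2.8.2: approximately 0
disp(norm(Us*Us' + Un*Un' - eye(M)))         % Property 2.8.3: approximately 0
P_A = A/(A'*A)*A';                           % projector A(A^H A)^(-1) A^H
disp(norm(Us*Us' - P_A))                     % Property 2.8.4: small for uncorrelated sources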

It should be noted that, in a concrete implementation, the data covariance matrix R is replaced by the sample covariance matrix \hat{R}, namely

\hat{R}=\frac{1}{L}\sum_{l=1}^LX(t_l)X^H(t_l)

In the formula, L is the number of data snapshots. Eigendecomposition of \hat{R} then yields the estimated noise subspace \hat{U}_N, the signal subspace \hat{U}_S, and the diagonal matrix of eigenvalues \hat{\Sigma}.
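This step can be packaged as a small MATLAB helper; the function name and interface below are illustrative, not from the text:

function [Us, Un, Sigma] = subspace_decomp(X, K)
% subspace_decomp  Sample covariance and subspace split (illustrative helper).
%   X : M x L matrix of snapshots,  K : assumed number of sources.
    L = size(X,2);
    Rhat = (X*X')/L;                         % sample covariance matrix
    [U,D] = eig(Rhat);
    [lambda,idx] = sort(real(diag(D)),'descend');
    U  = U(:,idx);
    Us = U(:,1:K);                           % estimated signal subspace U_S
    Un = U(:,K+1:end);                       % estimated noise subspace U_N
    Sigma = diag(lambda);                    % eigenvalues arranged large to small
end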

2.9 Source number estimation algorithm

Most algorithms in array signal processing require the number of incident signals to be known. In practical applications, however, the number of sources is usually unknown, so it must either be estimated first or assumed known before the source directions are estimated. From the analysis of the eigen-subspaces, under certain conditions the number of large eigenvalues of the data covariance matrix equals the number of sources, while the remaining small eigenvalues are all equal (to the noise power). This means the number of sources can, in principle, be determined directly from the large eigenvalues of the data covariance matrix.

In practice, however, because the number of snapshots and the signal-to-noise ratio are limited, the eigendecomposition of the actual data covariance matrix does not produce clearly separated large and small eigenvalues. Many scholars have therefore proposed more effective methods for estimating the number of signals, including information-theoretic methods, the smoothed rank method, matrix decomposition methods, the Gerschgorin disk method, and the canonical correlation method.

2.9.1 Eigenvalue decomposition method

When observation noise is present, the received signal model is X=AS+N, and \hat{R} denotes the covariance matrix of the mixed signal in the presence of observation noise, namely:

\hat{R}=XX^H/L=R+R_N

Among them, R=AE[\vec x(t)\vec x(t)^H]A^H, R_N=\sigma^2I, and \sigma^2 is the noise power. It is easy to verify that if \lambda_1\geq \lambda_2\geq...\geq\lambda_K>\lambda_{K+1}=...=\lambda_M=0 are the M eigenvalues of R, and \mu_1\geq \mu_2\geq...\geq\mu_K\geq \mu_{K+1}\geq ...\geq \mu_M\geq 0 are the M eigenvalues of \hat{R}, then

\mu_1\approx \lambda_1+\sigma^2,\mu_2\approx \lambda_2+\sigma^2,...,\mu_K\approx \lambda_K+\sigma^2,...,\mu_M\approx \lambda_M+\sigma^2

Therefore, at high signal-to-noise ratio, the number of principal eigenvalues of the covariance matrix \hat{R} equals the number of sources, K.

Arrange the eigenvalues of the obtained covariance matrix from large to small, namely \mu_1\geq \mu_2\geq...\geq\mu_K\geq \mu_{K+1}\geq ...\geq \mu_M

With \gamma_k=\mu_k/\mu_{k+1}, k=1,2,...,M-1 formed from the eigenvalues of the observed sample covariance matrix, the number of sources K is taken as the value of k for which \gamma_k=\max(\gamma_1,\gamma_2,...,\gamma_{M-1}). The advantage of this method is that the calculation is simple and the estimation accuracy is high.
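A minimal MATLAB sketch of this ratio test, reusing the sample covariance Rhat from the earlier simulation:

mu = sort(real(eig(Rhat)),'descend');        % eigenvalues mu_1 >= ... >= mu_M
gamma = mu(1:end-1) ./ mu(2:end);            % gamma_k = mu_k / mu_(k+1), k = 1..M-1
[~, K_est] = max(gamma);                     % estimated number of sources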

2.9.2 Information Theoretic Approach

 The information theory method has a unified expression:

J(k)=L(k)+P(k)

In the formula, L(k) is the logarithmic likelihood function, and P(k) is the penalty function. Different criteria can be obtained by different choices of the two functions.

The EDC information-theoretic criterion:

EDC(k)=L(M-k)\ln\Lambda(k)+k(2M-k)C(L)

Among them, k is the number of signal sources (degrees of freedom) to be estimated, L is the number of samples, and \Lambda(k) is the likelihood function, given by

\Lambda(k)=\frac{\frac{1}{M-k}\sum_{i=k+1}^M\lambda_i}{(\prod_{i=k+1}^{M}\lambda_i)^{\frac{1}{M-k}}}

In addition, C(L) in the above formula needs to meet the conditions:

\lim_{L\rightarrow \infty }(C(L)/L)=0

\lim_{L\rightarrow \infty }(C(L)/\ln\ln L)=\infty

When C(L) satisfies the above conditions, the EDC criterion has estimation consistency.
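A sketch of the EDC criterion in MATLAB, reusing Rhat and the snapshot count L from the earlier simulation. C(L)=\sqrt{L\ln\ln L} is used here as one admissible penalty satisfying both limit conditions (an illustrative choice), and the estimated source number is taken as the k that minimizes EDC(k):

mu  = sort(real(eig(Rhat)),'descend');       % eigenvalues, large to small
M   = numel(mu);
CL  = sqrt(L*log(log(L)));                   % one admissible choice of C(L), valid for L > e
EDC = zeros(1,M);
for k = 0:M-1
    ev = mu(k+1:M);                          % the M-k smallest eigenvalues
    am = mean(ev);                           % arithmetic mean
    gm = exp(mean(log(ev)));                 % geometric mean (log form for stability)
    EDC(k+1) = L*(M-k)*log(am/gm) + k*(2*M-k)*CL;
end
[~,imin] = min(EDC);
K_est = imin - 1;                            % estimated number of sources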


Origin blog.csdn.net/APPLECHARLOTTE/article/details/127467932