2.8 Eigendecomposition of Array Covariance Matrix
In practice, the available data is a finite number of snapshots collected over a limited time interval. During this interval it is assumed that the directions of the spatial source signals do not change; although the signal envelopes vary with time, they are modeled as stationary random processes whose statistical properties are time-invariant. Under the narrowband model X(t) = A S(t) + N(t), with signal and noise uncorrelated, the covariance matrix of the array output X(t) can be defined as:
R = E[X(t) X^H(t)] = A E[S(t) S^H(t)] A^H + E[N(t) N^H(t)] = A R_S A^H + R_N
where R_S = E[S(t) S^H(t)] is the covariance matrix of the source signal vector and R_N = E[N(t) N^H(t)] is the noise covariance matrix.
In addition, the following conditions must be met.
(1) M > K, that is, the number of array elements M is greater than the number of spatial signals (number of signal sources) that the array system may receive.
(2) For distinct signal directions θ_1, θ_2, …, θ_K, the direction (steering) vectors a(θ_1), a(θ_2), …, a(θ_K) are linearly independent.
(3) The noise process N(t) in the array is Gaussian, with
E[N(t)] = 0, E[N(t) N^H(t)] = σ² I
where σ² is the noise power.
(4) The covariance matrix R_S of the spatial source signal vector is a diagonal non-singular matrix, which indicates that the spatial source signals are mutually uncorrelated.
From the above formulas it follows that R = A R_S A^H + σ² I, and it can be proved that R is non-singular; moreover R^H = R, so R is a positive definite Hermitian matrix. It can therefore be diagonalized by a unitary transformation; the resulting diagonal matrix consists of M positive real eigenvalues, and the corresponding M eigenvectors are linearly independent. The eigendecomposition of R can thus be written as:
R = U Σ U^H = ∑_{i=1}^{M} λ_i u_i u_i^H
where Σ = diag(λ_1, λ_2, …, λ_M), and it can be proved that the eigenvalues obey the ordering
λ_1 ≥ λ_2 ≥ … ≥ λ_K > λ_{K+1} = … = λ_M = σ²
That is, the first K eigenvalues are related to the signal and are greater than σ². The eigenvectors corresponding to these K larger eigenvalues, u_1, …, u_K, constitute the signal subspace U_S = [u_1, …, u_K], and Σ_S is the diagonal matrix composed of the K larger eigenvalues. The remaining M − K eigenvalues depend entirely on the noise and are all equal to σ²; the corresponding eigenvectors constitute the noise subspace U_N = [u_{K+1}, …, u_M], and Σ_N is the diagonal matrix composed of the M − K smaller eigenvalues.
Therefore, R can be decomposed as:
R = U_S Σ_S U_S^H + U_N Σ_N U_N^H
where Σ_S is the diagonal matrix composed of the larger eigenvalues and Σ_N is the diagonal matrix composed of the smaller eigenvalues, namely
Σ_S = diag(λ_1, λ_2, …, λ_K), Σ_N = diag(λ_{K+1}, λ_{K+2}, …, λ_M)
Obviously, when the spatial noise is white, Σ_N = σ² I.
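The eigenvalue structure described above can be checked numerically. The sketch below builds an exact covariance matrix for a half-wavelength-spaced uniform linear array; the element count, source directions, source powers, and noise power are illustrative assumptions, not values from the text.

```python
import numpy as np

# Verify the eigenvalue structure of R = A R_S A^H + sigma^2 I.
M, K = 8, 2                        # array elements, sources (M > K)
sigma2 = 0.1                       # noise power sigma^2 (assumed)
angles = np.deg2rad([10.0, 40.0])  # source directions (assumed)

# Steering matrix A for a half-wavelength-spaced ULA
m = np.arange(M)[:, None]
A = np.exp(1j * np.pi * m * np.sin(angles)[None, :])

Rs = np.diag([2.0, 1.0])           # diagonal, non-singular: uncorrelated sources
R = A @ Rs @ A.conj().T + sigma2 * np.eye(M)

lam = np.linalg.eigvalsh(R)[::-1]  # eigenvalues, largest first
# The first K eigenvalues exceed sigma^2; the remaining M-K equal sigma^2.
print(np.round(lam, 4))
```

Because A R_S A^H has rank K, exactly M − K eigenvalues of R collapse onto the noise power, which is what the printed spectrum shows.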
Some properties of the eigen-subspaces under the condition of independent sources are given below; they also prepare for the subsequent spatial spectrum estimation algorithms and their theoretical analysis.
Property 2.8.1 The space spanned by the eigenvectors corresponding to the large eigenvalues of the covariance matrix is the same as the space spanned by the steering vectors of the incident signals, namely:
span{u_1, u_2, …, u_K} = span{a(θ_1), a(θ_2), …, a(θ_K)}
Property 2.8.2 The signal subspace U_S is orthogonal to the noise subspace U_N, and there is
U_S^H U_N = 0
Property 2.8.3 The signal subspace and the noise subspace satisfy:
U_S^H U_S = I, U_N^H U_N = I, U_S U_S^H + U_N U_N^H = I
Property 2.8.4 The signal subspace, the noise subspace, and the array manifold satisfy
A^H U_N = 0, span{U_S} = span{A}
Property 2.8.5 According to the definition of the signal subspace U_S, the following holds: there exists a non-singular K × K matrix T such that U_S = A T.
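The subspace properties above can be verified numerically on an exact covariance matrix. A minimal sketch, in which the array geometry, directions, and powers are illustrative assumptions:

```python
import numpy as np

# Numerically check the orthogonality and completeness of U_S and U_N.
M, K, sigma2 = 8, 2, 0.1           # assumed scenario parameters
m = np.arange(M)[:, None]
A = np.exp(1j * np.pi * m * np.sin(np.deg2rad([10.0, 40.0]))[None, :])
R = A @ np.diag([2.0, 1.0]) @ A.conj().T + sigma2 * np.eye(M)

lam, U = np.linalg.eigh(R)
U = U[:, ::-1]                     # reorder: largest eigenvalues first
Us, Un = U[:, :K], U[:, K:]        # signal / noise subspaces

print(np.linalg.norm(Us.conj().T @ Un))   # U_S^H U_N = 0
print(np.linalg.norm(A.conj().T @ Un))    # A^H U_N = 0
print(np.linalg.norm(Us @ Us.conj().T + Un @ Un.conj().T - np.eye(M)))
print(np.linalg.norm(A - Us @ Us.conj().T @ A))  # columns of A lie in span{U_S}
```

All four printed norms are at machine-precision level, confirming the properties for this example.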
It should be noted that in a practical implementation the data covariance matrix is replaced by the sampling covariance matrix, namely
R̂ = (1/L) ∑_{l=1}^{L} X(l) X^H(l)
where L is the number of data snapshots. Eigendecomposition of R̂ yields the estimated noise subspace Û_N, the estimated signal subspace Û_S, and the diagonal matrix Σ̂ composed of the eigenvalues.
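This sampling step can be sketched with simulated snapshots; the scenario parameters (array size, directions, snapshot count, noise power) are illustrative assumptions.

```python
import numpy as np

# Form the sampling covariance matrix R_hat = (1/L) sum_l X(l) X(l)^H.
rng = np.random.default_rng(0)
M, K, L, sigma2 = 8, 2, 2000, 0.1  # assumed scenario parameters
m = np.arange(M)[:, None]
A = np.exp(1j * np.pi * m * np.sin(np.deg2rad([10.0, 40.0]))[None, :])

# Uncorrelated unit-power complex Gaussian envelopes, white Gaussian noise
S = (rng.standard_normal((K, L)) + 1j * rng.standard_normal((K, L))) / np.sqrt(2)
N = np.sqrt(sigma2 / 2) * (rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L)))
X = A @ S + N                      # snapshots X(1), ..., X(L)

R_hat = X @ X.conj().T / L         # sampling covariance matrix
lam = np.linalg.eigvalsh(R_hat)[::-1]
print(np.round(lam, 3))            # K large eigenvalues, M-K near sigma^2
```

With finite L the small eigenvalues only cluster around σ² rather than equalling it exactly, which motivates the source number estimation methods of the next section.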
2.9 Source Number Estimation Algorithms
Most algorithms in array signal processing require the number of incident signals to be known. In practical applications, however, the number of sources is usually unknown, so one must either estimate it first or assume it is known before estimating the source directions. From the analysis of the eigen-subspaces it is known that, under certain conditions, the number of large eigenvalues of the data covariance matrix equals the number of sources, while the remaining small eigenvalues are all equal (to the noise power σ²). This means that the number of sources can be judged directly from the large eigenvalues of the data covariance matrix.
In actual situations, however, because the number of snapshots and the signal-to-noise ratio are limited, the eigendecomposition of the actual data covariance matrix does not produce clearly separated large and small eigenvalues. Many scholars have proposed more effective methods for source number estimation, including information-theoretic methods, the smoothed rank method, matrix decomposition methods, the Gerschgorin disk method, and the canonical correlation method.
2.9.1 Eigenvalue decomposition method
When observation noise is present, the received signal model is X(t) = A S(t) + N(t), and the covariance matrix of the mixed signal is
R = A R_S A^H + σ² I
where σ² is the noise power. It is easy to verify that if λ_1, λ_2, …, λ_M are the M eigenvalues of R and μ_1, μ_2, …, μ_M are the M eigenvalues of A R_S A^H, then
λ_i = μ_i + σ², i = 1, 2, …, M
Since A R_S A^H has rank K, only μ_1, …, μ_K are nonzero. Therefore, at high signal-to-noise ratio, the number of main eigenvalues of the covariance matrix equals the number of sources K.
Arrange the eigenvalues of the obtained covariance matrix in descending order, namely
λ̂_1 ≥ λ̂_2 ≥ … ≥ λ̂_M
Taking the first eigenvalues as the main eigenvalues of the observed sample covariance matrix, the source number K should take the value for which λ̂_K ≫ λ̂_{K+1}. The advantages of this method are that the computation is simple and, at high signal-to-noise ratio, the estimation accuracy is high.
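The eigenvalue-decomposition method amounts to counting the eigenvalues that dominate the noise floor. A minimal sketch; the decision rule (eigenvalues exceeding a fixed multiple of the smallest one) and all scenario parameters are illustrative assumptions, not a rule stated in the text.

```python
import numpy as np

def estimate_k_eig(R_hat, ratio=10.0):
    """Count 'main' eigenvalues: those exceeding ratio * smallest eigenvalue.
    The ratio threshold is an assumed illustrative choice."""
    lam = np.linalg.eigvalsh(R_hat)[::-1].real  # descending eigenvalues
    return int(np.sum(lam > ratio * lam[-1]))

# Simulated high-SNR scenario (assumed parameters)
rng = np.random.default_rng(1)
M, K, L, sigma2 = 8, 2, 1000, 0.1
m = np.arange(M)[:, None]
A = np.exp(1j * np.pi * m * np.sin(np.deg2rad([10.0, 40.0]))[None, :])
S = (rng.standard_normal((K, L)) + 1j * rng.standard_normal((K, L))) / np.sqrt(2)
N = np.sqrt(sigma2 / 2) * (rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L)))
X = A @ S + N
R_hat = X @ X.conj().T / L
print(estimate_k_eig(R_hat))       # at high SNR this recovers K = 2
```

At low SNR or with few snapshots this simple threshold becomes unreliable, which is exactly the motivation for the information-theoretic criteria that follow.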
2.9.2 Information Theoretic Approach
The information-theoretic methods share a unified expression:
J(k) = L(k) + P(k)
where L(k) is the log-likelihood function and P(k) is the penalty function. Different criteria are obtained by different choices of the two functions; the source number estimate is the value of k that minimizes J(k).
The EDC information-theoretic criterion:
EDC(k) = −L(M − k) ln Λ(k) + k(2M − k) C(L)
where k is the number of sources (degrees of freedom) to be estimated, L is the number of samples, and Λ(k) is the likelihood function, given by the ratio of the geometric mean to the arithmetic mean of the M − k smallest eigenvalues:
Λ(k) = ( ∏_{i=k+1}^{M} λ̂_i )^{1/(M−k)} / ( (1/(M−k)) ∑_{i=k+1}^{M} λ̂_i )
In addition, C(L) in the above formula must satisfy the conditions:
lim_{L→∞} C(L)/L = 0, lim_{L→∞} C(L)/ln ln L = ∞
When C(L) satisfies these conditions, the EDC criterion is a consistent estimator (for example, C(L) = (1/2) ln L, which yields the MDL criterion).
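As a concrete instance of the unified form L(k) + P(k), the sketch below implements the MDL criterion, taking C(L) = (1/2) ln L; this specific penalty choice and the simulation parameters are assumptions for illustration.

```python
import numpy as np

def mdl(lam, L):
    """MDL criterion. lam: sample-covariance eigenvalues, sorted descending."""
    M = len(lam)
    costs = []
    for k in range(M):
        tail = lam[k:]
        # ln of (geometric mean / arithmetic mean) of the M-k smallest eigenvalues
        log_ratio = np.mean(np.log(tail)) - np.log(np.mean(tail))
        Lk = -L * (M - k) * log_ratio           # log-likelihood term
        Pk = 0.5 * k * (2 * M - k) * np.log(L)  # penalty, C(L) = (1/2) ln L
        costs.append(Lk + Pk)
    return int(np.argmin(costs))                # estimated source number

# Simulated scenario (assumed parameters)
rng = np.random.default_rng(2)
M, K, L, sigma2 = 8, 2, 1000, 0.1
m = np.arange(M)[:, None]
A = np.exp(1j * np.pi * m * np.sin(np.deg2rad([10.0, 40.0]))[None, :])
S = (rng.standard_normal((K, L)) + 1j * rng.standard_normal((K, L))) / np.sqrt(2)
N = np.sqrt(sigma2 / 2) * (rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L)))
X = A @ S + N
lam = np.linalg.eigvalsh(X @ X.conj().T / L)[::-1].real
print(mdl(lam, L))
```

Unlike the simple eigenvalue threshold, this criterion needs no user-chosen decision level: the penalty term automatically balances model fit against model order.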