[Selected Papers | Comparison and Analysis of Capon Algorithm and MUSIC Algorithm Performance]

Editor of this article: Naughty brother's assistant



First of all, the conclusion:

When the signal-to-noise ratio (SNR) is large enough, the spatial spectra of the Capon algorithm and the MUSIC algorithm are very similar, so their performance is nearly identical at high SNR. When the incident angles of different sources are close together, however, MUSIC outperforms Capon; this is why the MUSIC algorithm (and subspace methods in general) are called high-resolution algorithms.

Original text: "On one hand, if the SNR is large enough, the spectrums of Capon and MUSIC are approximately the same, and hence their performances may be similar. On the other hand, MUSIC algorithm performs better than Capon algorithm when the separation angle of sources is quite small, and this is why MUSIC (or saying subspace-based methods) is called as high-resolution algorithm."

Remember this conclusion: it has come up as an interview question during job hunting.

Below we will use the content of the paper "The Difference Between Capon and MUSIC Algorithm" to discuss this conclusion and give a simulation example.

1. Capon algorithm principle

Capon is the name of the researcher who proposed the algorithm, which is named after him. We consider the data model:

$$\mathbf{x}(t)=\mathbf{A}\mathbf{s}(t)+\mathbf{n}(t) \tag{1}$$

where x(t) is the observed data vector, A is the so-called steering matrix in array signal processing, s(t) and n(t) represent the signal and noise vectors, respectively, and t is the time index. Applying a weight vector w to the observation vector x(t) gives the output:

$$y(t)=\mathbf{w}^H \mathbf{x}(t) \tag{2}$$

Therefore, the power delivered by the array can be formulated as follows:

$$R_y=\mathrm{E}\left\{|y(t)|^2\right\}=\mathbf{w}^H \mathbf{R}_{\mathbf{x}} \mathbf{w} \tag{3}$$

where $\mathrm{E}\{\cdot\}$ and $(\cdot)^H$ denote mathematical expectation and Hermitian transpose, respectively, and $\mathbf{R}_{\mathbf{x}}=\mathrm{E}\left\{\mathbf{x}(t)\mathbf{x}^H(t)\right\}$ is the covariance matrix of the observed data. The Capon algorithm [1] minimizes the output power while maintaining unit gain in the look direction, which can be formulated as:

$$\min_{\mathbf{w}}\ \mathbf{w}^H \mathbf{R}_{\mathbf{x}} \mathbf{w} \quad \text{subject to} \quad \mathbf{w}^H \mathbf{a}(\theta)=1.$$

As I understand it, this is essentially beamforming: the unit-gain constraint in the look direction makes that direction the direction of maximum gain after beamforming. The problem above can be solved with the Lagrange multiplier method, and the solution is:

$$\mathbf{w}_{Lag}=\frac{\mathbf{R}_{\mathbf{x}}^{-1} \mathbf{a}(\theta)}{\mathbf{a}^H(\theta) \mathbf{R}_{\mathbf{x}}^{-1} \mathbf{a}(\theta)} \tag{4}$$

Substituting this weight into equation (3) gives the output power as a function of direction:

$$P_{\text{Capon}}(\theta)=\frac{1}{\mathbf{a}^H(\theta) \mathbf{R}_{\mathbf{x}}^{-1} \mathbf{a}(\theta)} \tag{5}$$

Searching over the angle and locating the peaks of this spectrum yields the Capon angle estimates.
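As an illustration of equation (5) (not from the paper), the Capon spectrum for a half-wavelength ULA can be sketched in a few lines of NumPy. The element count, source angle, and powers below are arbitrary assumptions, and the exact covariance is used in place of a sample estimate:

```python
import numpy as np

M = 8                          # number of array elements (assumed)
d = 0.5                        # element spacing in wavelengths
theta0 = 20.0                  # true source angle in degrees (assumed)
sigma_s, sigma_n = 10.0, 1.0   # source and noise powers (assumed)

def steering(theta_deg):
    # a(theta) for a uniform linear array with half-wavelength spacing
    phase = -2j * np.pi * d * np.arange(M) * np.sin(np.deg2rad(theta_deg))
    return np.exp(phase)

# Exact (infinite-snapshot) covariance: Rx = sigma_s * a a^H + sigma_n * I
a0 = steering(theta0)
Rx = sigma_s * np.outer(a0, a0.conj()) + sigma_n * np.eye(M)
iR = np.linalg.inv(Rx)

# Capon spectrum of equation (5), evaluated on an angle grid
grid = np.arange(-90, 90.1, 0.1)
P = np.array([1.0 / np.real(steering(t).conj() @ iR @ steering(t)) for t in grid])

print(grid[np.argmax(P)])  # peak of the spectrum; should land near theta0
```

With a single source and the exact covariance, the spectrum is maximized exactly at the look direction where the unit-gain constraint passes the signal through.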

2. Principle of MUSIC algorithm

MUSIC is an abbreviation of MUltiple SIgnal Classification. The signal model is as above. Given the covariance matrix Rx of the observed data, we perform an eigenvalue decomposition and separate the signal and noise components, as follows:

$$\mathbf{R}_{\mathbf{x}}=\mathbf{U}_{\mathbf{s}} \boldsymbol{\Sigma}_{\mathbf{s}} \mathbf{U}_{\mathbf{s}}^H+\mathbf{U}_{\mathbf{n}} \boldsymbol{\Sigma}_{\mathbf{n}} \mathbf{U}_{\mathbf{n}}^H=\sum \sigma_s \mathbf{u}_{\mathbf{s}} \mathbf{u}_{\mathbf{s}}^H+\sum \sigma_n \mathbf{u}_{\mathbf{n}} \mathbf{u}_{\mathbf{n}}^H \tag{6}$$

According to the orthogonality between signal and noise subspaces [2], we can form the MUSIC space spectrum as follows:

$$P_{\text{MUSIC}}(\theta)=\frac{1}{\mathbf{a}^H(\theta) \mathbf{U}_{\mathbf{n}} \mathbf{U}_{\mathbf{n}}^H \mathbf{a}(\theta)} \tag{7}$$
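A minimal NumPy sketch of equation (7), again with assumed parameters and the exact covariance: the noise subspace is taken from the smallest eigenvalues of Rx, and the spectrum peaks where a(θ) is orthogonal to it.

```python
import numpy as np

M, d = 8, 0.5                    # elements and spacing in wavelengths (assumed)
doas = [-10.0, 25.0]             # true source angles in degrees (assumed)
K = len(doas)

def steering(theta_deg):
    return np.exp(-2j * np.pi * d * np.arange(M) * np.sin(np.deg2rad(theta_deg)))

# Exact covariance for unit-power uncorrelated sources plus unit-power noise
A = np.column_stack([steering(t) for t in doas])
Rx = A @ A.conj().T + np.eye(M)

# eigh returns eigenvalues in ascending order, so the first M-K columns of V
# span the noise subspace
_, V = np.linalg.eigh(Rx)
Un = V[:, :M - K]

# MUSIC pseudospectrum of equation (7) on an angle grid
grid = np.arange(-90, 90.1, 0.1)
P = np.array([1.0 / np.linalg.norm(Un.conj().T @ steering(t))**2 for t in grid])
```

At the true angles the denominator collapses toward zero, producing the characteristically sharp MUSIC peaks.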

3. Algorithm comparison and analysis

It is easy to see that $\mathbf{R}_{\mathbf{x}}^{-1}$ in equation (5) can be written as:

$$\mathbf{R}_{\mathbf{x}}^{-1}=\left(\mathbf{U}_{\mathbf{s}} \boldsymbol{\Sigma}_{\mathbf{s}} \mathbf{U}_{\mathbf{s}}^H+\mathbf{U}_{\mathbf{n}} \boldsymbol{\Sigma}_{\mathbf{n}} \mathbf{U}_{\mathbf{n}}^H\right)^{-1}=\mathbf{U}_{\mathbf{s}} \boldsymbol{\Sigma}_{\mathbf{s}}^{-1} \mathbf{U}_{\mathbf{s}}^H+\mathbf{U}_{\mathbf{n}} \boldsymbol{\Sigma}_{\mathbf{n}}^{-1} \mathbf{U}_{\mathbf{n}}^H=\sum \frac{1}{\sigma_s} \mathbf{u}_{\mathbf{s}} \mathbf{u}_{\mathbf{s}}^H+\sum \frac{1}{\sigma_n} \mathbf{u}_{\mathbf{n}} \mathbf{u}_{\mathbf{n}}^H$$

That is, $\mathbf{R}_{\mathbf{x}}^{-1}$ equals a "signal" term plus a "noise" term. When the SNR is large enough, i.e., σs/σn is large, the 1/σs signal term becomes negligible compared with the 1/σn noise term, so formula (5) can be approximately rewritten as:

$$P_{\text{Capon}}(\theta) \simeq \frac{1}{\mathbf{a}^H(\theta) \mathbf{U}_{\mathbf{n}} \boldsymbol{\Sigma}_{\mathbf{n}}^{-1} \mathbf{U}_{\mathbf{n}}^H \mathbf{a}(\theta)}$$

Since the noise eigenvalues are all approximately equal, $\boldsymbol{\Sigma}_{\mathbf{n}}^{-1} \approx \sigma_n^{-1}\mathbf{I}$ is merely a scale factor that does not change the shape of the spectrum, so:

$$P_{\text{Capon}}(\theta) \simeq \frac{1}{\mathbf{a}^H(\theta) \mathbf{U}_{\mathbf{n}} \mathbf{U}_{\mathbf{n}}^H \mathbf{a}(\theta)}=P_{\text{MUSIC}}(\theta)$$

That is, at high SNR the Capon spectrum is approximately equal to the MUSIC spectrum (up to a scale factor), which proves the first part of the conclusion mathematically: if the SNR is large enough, the spectra of Capon and MUSIC are roughly the same, so their performance is similar.
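This approximation is easy to check numerically. The sketch below (assumed array geometry and angles, exact covariance, 30 dB source power standing in for "SNR large enough") computes both spectra and locates the two largest local maxima of each; at high SNR the peak locations coincide:

```python
import numpy as np

M, d = 8, 0.5                        # assumed array: 8 elements, half-wavelength
doas = [-10.0, 25.0]                 # assumed source angles in degrees
K = len(doas)
ps = 10 ** (30 / 10)                 # 30 dB source power over unit noise

def steering(t):
    return np.exp(-2j * np.pi * d * np.arange(M) * np.sin(np.deg2rad(t)))

A = np.column_stack([steering(t) for t in doas])
Rx = ps * (A @ A.conj().T) + np.eye(M)   # exact covariance
iR = np.linalg.inv(Rx)
_, V = np.linalg.eigh(Rx)                # eigenvalues ascending
Un = V[:, :M - K]                        # noise subspace

grid = np.arange(-90, 90.1, 0.5)
Pc = np.array([1 / np.real(steering(t).conj() @ iR @ steering(t)) for t in grid])
Pm = np.array([1 / np.linalg.norm(Un.conj().T @ steering(t))**2 for t in grid])

def top2_peaks(P):
    # angles of the two largest interior local maxima
    idx = [i for i in range(1, len(P) - 1) if P[i] > P[i-1] and P[i] > P[i+1]]
    idx.sort(key=lambda i: -P[i])
    return sorted(float(grid[i]) for i in idx[:2])
```

Both `top2_peaks(Pc)` and `top2_peaks(Pm)` should return the two assumed source angles, illustrating that the peak positions, which is what matters for DOA estimation, agree when the SNR is high.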

The relationship between DOA RMSE and SNR for the two algorithms, with sources at 10° and 20°:
(figure: DOA RMSE versus SNR)

At SNR = 10 dB, the relationship between the separation angle of the sources and the DOA RMSE for Capon and MUSIC:

(figure: DOA RMSE versus separation angle)

4. MATLAB simulation

Set the number of array elements to 10, the element spacing to half a wavelength, three sources (-10°, 0°, 20°), and 1024 snapshots. The figures below show the estimated spatial spectra; the low-SNR setting is -8 dB and the high-SNR setting is 10 dB.

Low SNR:

(figure: spatial spectra at SNR = -8 dB)

High SNR:

(figure: spatial spectra at SNR = 10 dB)

As the figures show, the performance of both estimators degrades sharply at low SNR, though MUSIC remains slightly better than Capon; at high SNR the two algorithms are essentially the same. The MUSIC spectral peak reflects only the orthogonality between the array manifold vector and the noise subspace and is independent of the SNR, whereas the Capon spectral peak is an actual output power and therefore depends on the SNR. This is what I meant earlier by saying that Capon is essentially a beamformer.

Simulation code:

%MUSIC ALGORITHM
%DOA ESTIMATION BY CLASSICAL MUSIC
% Environment: MATLAB R2022b
clear all;
%close all;
clc;
source_number=3;            % number of sources
sensor_number=10;           % number of array elements
N_x=1024;                   % signal length
snapshot_number=N_x;        % number of snapshots
w=[pi/4 pi/6 pi/3].';       % signal frequencies
l=sum(2*pi*3e8./w)/3;       % (average) signal wavelength
d=0.5*l;                    % element spacing
snr=10;                     % signal-to-noise ratio in dB

source_doa=[-10 0 20];      % incidence angles of the three sources

A=[exp(-1j*(0:sensor_number-1)*d*2*pi*sin(source_doa(1)*pi/180)/l);exp(-1j*(0:sensor_number-1)*d*2*pi*sin(source_doa(2)*pi/180)/l);exp(-1j*(0:sensor_number-1)*d*2*pi*sin(source_doa(3)*pi/180)/l)].'; % array manifold

s=sqrt(10.^(snr/10))*exp(1j*w*[0:N_x-1]); % simulated source signals
%x=awgn(s,snr);
x=A*s+(1/sqrt(2))*(randn(sensor_number,N_x)+1j*randn(sensor_number,N_x)); % array output with additive white Gaussian noise

R=x*x'/snapshot_number;     % sample covariance matrix
iR=inv(R);
%[V,D]=eig(R);
%Un=V(:,1:sensor_number-source_number);
%Gn=Un*Un';
[U,S,V]=svd(R);
Un=U(:,source_number+1:sensor_number); % noise-subspace basis
Gn=Un*Un';

searching_doa=-90:0.1:90;   % search range of the linear array: -90 to 90 degrees
for i=1:length(searching_doa)
   a_theta=exp(-1j*(0:sensor_number-1)'*2*pi*d*sin(pi*searching_doa(i)/180)/l); % steering vector for the scanned angle
   Pmusic(i)=a_theta'*a_theta./abs((a_theta)'*Gn*a_theta); % MUSIC spectrum, eq. (7)
   Pcapon(i)=1./abs((a_theta)'*iR*a_theta);                % Capon spectrum, eq. (5)
end
plot(searching_doa,10*log10(Pmusic),'k-',searching_doa,10*log10(Pcapon),'b--');
%axis([-90 90 -90 15]);
xlabel('DOAs/degree');
ylabel('Normalized Spectrum/dB');
legend('Music Spectrum','Capon Spectrum');
title('Comparison of MUSIC and Capon for DOA Estimation');
grid on;

5. Angular resolution/accuracy

Set the number of array elements to 10, the element spacing to half a wavelength, three sources (-0.5°, 0°, 0.5°), 1024 snapshots, and an SNR of 20 dB. The figure below shows the estimated spatial spectrum, normalized for ease of observation.

(figure: normalized Capon and MUSIC spectra for sources at -0.5°, 0°, 0.5°)

It can be seen that in this case the resolution of MUSIC is better than that of the Capon method. See https://MLiyPUV6F for the code.
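As a rough numerical check of this resolution claim (a sketch, not the linked code: exact covariance in place of 1024 snapshots, assumed 20 dB source power), note that MUSIC's noise subspace is exactly orthogonal to the steering vectors at the true angles, so it still produces distinct peaks even for the 0.5° spacing:

```python
import numpy as np

M, d = 10, 0.5                      # 10 elements, half-wavelength spacing
doas = [-0.5, 0.0, 0.5]             # three closely spaced sources, as above
p = 10 ** (20 / 10)                 # 20 dB source power over unit noise

def steering(t):
    return np.exp(-2j * np.pi * d * np.arange(M) * np.sin(np.deg2rad(t)))

A = np.column_stack([steering(t) for t in doas])
Rx = p * (A @ A.conj().T) + np.eye(M)       # exact (infinite-snapshot) covariance
_, V = np.linalg.eigh(Rx)
Un = V[:, :M - len(doas)]                   # noise subspace

# MUSIC pseudospectrum on a fine grid around broadside
grid = np.arange(-5, 5.001, 0.05)
Pm = np.array([1 / np.linalg.norm(Un.conj().T @ steering(t))**2 for t in grid])

# interior local maxima of the MUSIC spectrum
maxima = [float(grid[i]) for i in range(1, len(grid) - 1)
          if Pm[i] > Pm[i-1] and Pm[i] > Pm[i+1]]
```

The list `maxima` should contain peaks near each of the three true angles, even though they sit far inside the conventional beamwidth of a 10-element array. With a finite-snapshot sample covariance the orthogonality is only approximate, which is where the resolution threshold seen in the figure comes from.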


Origin blog.csdn.net/qq_35844208/article/details/129043319