Principles of the MUSIC algorithm (physical interpretation + mathematical derivation + Matlab code implementation)

Parts of this article come from online tutorials; if there is any infringement, please contact me and I will delete it.

Tutorial link: Intuitive explanation of MUSIC algorithm: 1. Background and basic knowledge of MUSIC algorithm_哔哩哔哩_bilibili

 Intuitive explanation of the MUSIC algorithm: 2. My understanding of the MUSIC algorithm_哔哩哔哩_bilibili

https://blog.csdn.net/zhangziju/article/details/100730081

 1. Function of MUSIC algorithm

MUSIC (Multiple Signal Classification) is a spatial spectrum estimation algorithm. The idea is to perform an eigendecomposition of the covariance matrix (Rx) of the received data, separate the signal subspace from the noise subspace, use the orthogonality between the signal direction vectors and the noise subspace to form a spatial scanning spectrum, and perform a global search for spectral peaks, thereby estimating the signal parameters.

The MUSIC algorithm is commonly used for sound source localization using microphone arrays.

For example, suppose a microphone array is placed in a room containing one sound source. While the source is sounding, the array receives the signal from the target's direction, but it also receives reflected copies arriving from other directions. The MUSIC algorithm can reject these reflected signals, pick out the signal coming from the target's direction, and thus obtain the target's direction.

Sound waves are mechanical waves, usually received by a microphone array and converted into electrical signals for processing. When the signal is an electromagnetic wave, such as a Wi-Fi signal, we use an antenna array to receive it, and the MUSIC algorithm can still be used to compute the angles of signals arriving from different directions.

2. Principle of MUSIC algorithm

The MUSIC algorithm assumes the incoming waves are plane (parallel) waves, i.e., the distance L between the target and the microphone array is much greater than the element spacing d. In that case the azimuth of the target signal is essentially the same at every array element, as shown in the figure below:

Figure 1

1. Relationship between time delay, phase difference and target azimuth

Suppose the source transmits the signal x=e^{jf}.

When the signal propagates from the sound source S to array element 1, it travels a distance L_{1}; assuming the sound speed is c, this takes time L_{1}/c.

This makes the phase of array element 1's received signal y_{1} differ from that of the transmitted signal x by the delay factor e^{j2\pi f(L_{1}/c)}.

So the signal received by array element 1 is: y_{1}=e^{jf}\cdot e^{j2\pi f(L_{1}/c)}=x\cdot e^{j2\pi f(L_{1}/c)}=x\beta, where \beta=e^{j2\pi f(L_{1}/c)}.

The signal reaching array element 2 travels d\cos\theta farther than that reaching array element 1, so the signal received by array element 2 is:

y_{2}=e^{jf}\cdot e^{j2\pi f((L_{1}+d\cos\theta)/c)}=x\cdot e^{j2\pi f((L_{1}+d\cos\theta)/c)}=y_{1}\phi_{1}=x\beta\phi_{1}

where \phi_{1}=e^{j2\pi f(d\cos\theta/c)} represents the phase difference between the signals received at array elements 2 and 1.

The signal reaching array element 3 travels 2d\cos\theta farther than that reaching array element 1, so the corresponding phase difference is \phi_{2}=e^{j2\pi f(2d\cos\theta/c)}=\phi_{1}^{2}.

Then the signal received by array element 3 is: y_{3}=y_{1}\phi_{1}^{2}=x\beta\phi_{1}^{2}

PS: If it is still unclear how a longer path produces a phase difference, think of it this way (taking array elements 1 and 2 as an example):

Suppose the received signal of array element 1 is y_{1}=\sin(\omega t).

Because the signal travels a longer distance to reach array element 2, it always arrives at array element 2 later than at array element 1, by a delay \Delta t (usually called the time delay; in fact, the phase difference is caused by the time delay).

Then the received signal of array element 2 is y_{2}=\sin(\omega(t+\Delta t)).

Clearly, the phase difference between y_{1} and y_{2} is \phi=\omega(t+\Delta t)-\omega t=\omega\Delta t.

From digital signal processing we know \omega=2\pi f, and from Figure 1 the time delay between array elements 1 and 2 is \Delta t=(d\cos\theta)/c.

So the phase difference is \phi=\omega\Delta t=2\pi f(d\cos\theta/c).
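This relationship is easy to check numerically. The NumPy sketch below (the frequency, spacing, angle, and sampling rate are arbitrary example values, not taken from the text) delays a complex tone by \Delta t, measures the resulting phase difference between the two "elements", and compares it with the predicted 2\pi f(d\cos\theta/c):

```python
import numpy as np

f = 1000.0                  # signal frequency in Hz (arbitrary example value)
c = 340.0                   # speed of sound in m/s
d = 0.05                    # element spacing in m (arbitrary example value)
theta = np.deg2rad(60.0)    # incidence angle (arbitrary example value)
fs = 48000.0                # sampling rate (arbitrary example value)

t = np.arange(0, 0.01, 1 / fs)
delta_t = d * np.cos(theta) / c              # time delay between elements 1 and 2
y1 = np.exp(2j * np.pi * f * t)              # element 1's received signal
y2 = np.exp(2j * np.pi * f * (t + delta_t))  # element 2: same signal, delayed

# Measured phase difference between the two received signals
phi_measured = np.angle(np.mean(y2 * np.conj(y1)))
# Predicted phase difference: 2*pi*f*d*cos(theta)/c (well below pi for these values)
phi_predicted = 2 * np.pi * f * d * np.cos(theta) / c

print(phi_measured, phi_predicted)
```

The two values agree, confirming that the time delay alone accounts for the phase difference.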

2. The core principle of the MUSIC algorithm (source of ideas)

The ultimate goal of the MUSIC algorithm is to compute \theta.

From the derivation above, \theta is directly tied to the phase difference \phi between the received signals of two array elements: if you can solve for \phi, you can solve for \theta.

Under ideal conditions, i.e., no reflection or refraction and only one sound source, the phase difference \phi can be obtained by directly dividing the received signals of two array elements, y_{2}/y_{1}, and from it the target azimuth \theta.

In reality, however, the microphone array receives many reflected and refracted signals, and there may be more than one sound source. What then? This is the problem the MUSIC algorithm needs to solve.

OK, so suppose there are two sound sources A and B, with transmitted signals x_{1} and x_{2} (reflection and refraction are ignored for now).

Then at a certain time t, the received signals of the three array elements are:

y_{1}[t]=\beta _{1}x_{1}[t]+ \beta _{2}x_{2}[t]

y_{2}[t]=\beta _{1}x_{1}[t]\phi_{1}+ \beta _{2}x_{2}[t]\phi_{2}

y_{3}[t]=\beta _{1}x_{1}[t]\phi_{1}^{2}+ \beta _{2}x_{2}[t]\phi_{2}^{2}

Then within a certain period of time, the signal received by the microphone array is:

\begin{bmatrix} y_{1}[1]&y_{1}[2]&\cdots&y_{1}[n]\\ y_{2}[1]&y_{2}[2]&\cdots&y_{2}[n]\\ y_{3}[1]&y_{3}[2]&\cdots&y_{3}[n] \end{bmatrix} = \begin{bmatrix} 1 & 1\\ \phi_{1} & \phi_{2}\\ \phi_{1}^{2} & \phi_{2}^{2} \end{bmatrix} \begin{bmatrix} \beta_{1}x_{1}[1]&\beta_{1}x_{1}[2]&\cdots&\beta_{1}x_{1}[n]\\ \beta_{2}x_{2}[1]&\beta_{2}x_{2}[2]&\cdots&\beta_{2}x_{2}[n] \end{bmatrix}

In compact form: Y=\Phi X

Here Y is known and \Phi is what we need to find; X may or may not be known (when X is known and invertible, \Phi=YX^{-1} can be computed directly from the inverse matrix, but such cases are rare).
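To make the model Y=\Phi X concrete, here is a small NumPy sketch (the phase angles and snapshot count are arbitrary assumptions, not values from the text). The key observation for what follows: with 3 array elements but only 2 sources, Y is rank-deficient, which is exactly why a cancelling vector can exist:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                                           # number of snapshots (arbitrary)
phi1, phi2 = np.exp(1j * 0.8), np.exp(1j * 2.1)   # per-element phase factors (arbitrary angles)

# Phi: 3 elements x 2 sources, with rows 1, phi, phi^2 as in the text
Phi = np.array([[1, 1],
                [phi1, phi2],
                [phi1**2, phi2**2]])

# X: the two source signal rows (already scaled by beta_1, beta_2), shape 2 x n
X = rng.standard_normal((2, n)) + 1j * rng.standard_normal((2, n))

Y = Phi @ X                                       # received data, shape 3 x n

# Y is 3 x n, but its rank is only 2 (= the number of sources)
print(Y.shape, np.linalg.matrix_rank(Y))
```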

Can we process Y so as to eliminate the X on the right-hand side of the equation? (This is the core of the MUSIC algorithm.)

How?

Suppose we can find three complex numbers c_{1}, c_{2} and c_{3} that apply an amplitude and phase transformation to the received signal of each array element (multiplying a signal by a complex number scales its amplitude and shifts its phase), such that the transformed signals cancel completely, namely:

y_{1}c_{1}+y_{2}c_{2}+y_{3}c_{3}=0

or in matrix representation:

\begin{bmatrix} y_{1}[1]&y_{2}[1]&y_{3}[1]\\ y_{1}[2]&y_{2}[2]&y_{3}[2]\\ \vdots&\vdots&\vdots\\ y_{1}[n]&y_{2}[n]&y_{3}[n] \end{bmatrix} \begin{bmatrix} c_{1}\\c_{2}\\c_{3} \end{bmatrix}=\vec{0}

Substituting y_{1}, y_{2} and y_{3} into the formula above, we get:

(\beta _{1}x_{1}+ \beta _{2}x_{2})c_{1}+(\beta _{1}x_{1}\phi_{1}+ \beta _{2}x_{2}\phi_{2})c_{2}+(\beta _{1}x_{1}\phi_{1}^{2}+ \beta _{2}x_{2}\phi_{2}^{2})c_{3}=0

That is:

(\beta_{1}c_{1}+\beta_{1}\phi_{1}c_{2}+\beta_{1}\phi_{1}^{2}c_{3})x_{1}+(\beta_{2}c_{1}+\beta_{2}\phi_{2}c_{2}+\beta_{2}\phi_{2}^{2}c_{3})x_{2}=0

At this point the MUSIC algorithm makes an assumption: the signals x_{1} and x_{2} are uncorrelated (assumption 1 of the MUSIC algorithm). (When x_{1} and x_{2} are linearly correlated, a non-zero complex number c can be found such that x_{1}=c\,x_{2}.)

Since x_{1} and x_{2} are uncorrelated, the coefficients of x_{1} and x_{2} in the formula above must each be 0, that is:

\begin{cases} \beta_{1}c_{1}+\beta_{1}\phi_{1}c_{2}+\beta_{1}\phi_{1}^{2}c_{3}=0\\ \beta_{2}c_{1}+\beta_{2}\phi_{2}c_{2}+\beta_{2}\phi_{2}^{2}c_{3}=0 \end{cases}

The factors \beta_{1} and \beta_{2} in the formula above can be cancelled directly, so once c_{1}, c_{2} and c_{3} are known, \phi_{1} and \phi_{2} can be obtained.

The question now becomes: how do we find this set of complex numbers c_{1}, c_{2} and c_{3}?

For such a set of complex numbers to exist, we must have: number of array elements > number of source signals (assumption 2 of the MUSIC algorithm).

In fact, the solution finally satisfies:

(c_{1},c_{2},c_{3})\cdot(1,\phi_{1},\phi_{1}^{2})^{T}=0 and (c_{1},c_{2},c_{3})\cdot(1,\phi_{2},\phi_{2}^{2})^{T}=0

That is, the MUSIC algorithm solves \begin{vmatrix}\vec{c}\cdot\vec{a}\end{vmatrix}=0 by searching for the maxima of 1/\begin{vmatrix}\vec{c}\cdot\vec{a}\end{vmatrix} (spectral peak search); the \phi found this way corresponds to the target azimuth \theta.
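Such a vector c can be found numerically as the null space of the stacked snapshot matrix. The NumPy sketch below (noiseless data, arbitrary phase angles and snapshot count, all my assumptions) extracts c via the SVD and confirms it is orthogonal to both (1,\phi_{1},\phi_{1}^{2}) and (1,\phi_{2},\phi_{2}^{2}):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                                           # snapshots (arbitrary)
phi1, phi2 = np.exp(1j * 0.8), np.exp(1j * 2.1)   # phase factors (arbitrary angles)

Phi = np.array([[1, 1],
                [phi1, phi2],
                [phi1**2, phi2**2]])
X = rng.standard_normal((2, n)) + 1j * rng.standard_normal((2, n))
Y = Phi @ X                                       # 3 x n noiseless received data

# Stack the snapshots as rows (n x 3); c spans the null space of that matrix.
# The right-singular vector of the smallest singular value solves (Y^T) c = 0.
_, _, Vh = np.linalg.svd(Y.T)
c = Vh[-1].conj()

a1 = np.array([1, phi1, phi1**2])                 # (1, phi_1, phi_1^2)
a2 = np.array([1, phi2, phi2**2])                 # (1, phi_2, phi_2^2)
print(abs(a1 @ c), abs(a2 @ c))                   # both essentially zero
```

With noisy data the null space is no longer exact, which is why the full algorithm below works with the eigenvectors of the covariance matrix instead.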

3. Summary of MUSIC algorithm steps

The DOA mathematical model of the narrowband far-field signal is:

X(t) = A(\theta)s(t)+N(t)

Here X is the signal matrix received by the array; its two dimensions are the number of array elements and the number of sampling points (snapshots). A is the array direction (steering) matrix; its two dimensions are the number of array elements and the number of sources, its columns being the direction vectors of the signal directions. s is the source signal matrix; its two dimensions are the number of sources and the number of sampling points. N is the noise matrix; its two dimensions are the number of array elements and the number of sampling points.

Then the covariance matrix of the array receiving data is:

R = E[XX^H]=AE[SS^H]A^H+\sigma^2I=AR_{s}A^H+\sigma^2I

Since the signals and the noise are mutually independent, the data covariance matrix decomposes into a signal part and a noise part, where R_{s} is the source covariance matrix and AR_{s}A^{H} is the signal part.

The eigendecomposition of R is:

R = U_{s}\Lambda_{s}U_{s}^{H}+U_{N}\Lambda_{N}U_{N}^{H}

In this formula, U_{s} is the subspace spanned by the eigenvectors of the largest eigenvalues of R (as many as there are sources), called the signal subspace; U_{N} is the subspace spanned by the eigenvectors of the remaining, smaller eigenvalues, called the noise subspace.

According to the conditions of the MUSIC algorithm derived earlier, ideally the signal subspace and the noise subspace are orthogonal, i.e., a direction vector a(\theta) in the signal subspace is orthogonal to the noise subspace:

a^{H}(\theta)U_{N}=0

Due to noise, a(\theta) and U_{N} are not exactly orthogonal in practice. The DOA is therefore actually found by a minimum optimization search:

\theta_{MUSIC} = argmin_{\theta}\ a^{H}(\theta)U_{N}U_{N}^{H}a(\theta)

As stated above, MUSIC actually finds the optimal \theta through a spectral peak search:

P_{MUSIC}=\frac{1}{a^{H}(\theta)U_{N}U_{N}^{H}a(\theta)}

PS: Since the data received by the array is finite in practice, the covariance matrix is usually replaced by its maximum likelihood estimate:

\hat{R} = \frac{1}{L}\sum_{i=1}^{L}X(i)X^{H}(i)
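The subspace split can be verified numerically. The NumPy sketch below (element count, source phases, noise level, and snapshot count are all arbitrary assumptions) builds a sample covariance from noisy data, eigendecomposes it, and checks that the true steering vectors are nearly orthogonal to the estimated noise subspace:

```python
import numpy as np

rng = np.random.default_rng(1)
M, D, L = 6, 2, 2000            # elements, sources, snapshots (all arbitrary)
phis = [0.7, 1.9]               # per-element phase shifts of the two sources (arbitrary)
sigma = 0.1                     # noise standard deviation (arbitrary)

# Steering matrix A (M x D) with one Vandermonde column per source
A = np.exp(1j * np.outer(np.arange(M), phis))
# Uncorrelated unit-power complex sources plus white noise
S = (rng.standard_normal((D, L)) + 1j * rng.standard_normal((D, L))) / np.sqrt(2)
noise = sigma * (rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L))) / np.sqrt(2)
X = A @ S + noise

# Maximum-likelihood covariance estimate and its eigendecomposition
R = X @ X.conj().T / L
w, V = np.linalg.eigh(R)        # eigenvalues in ascending order
Un = V[:, :M - D]               # eigenvectors of the M-D smallest eigenvalues

# The D signal eigenvalues sit far above the noise floor (about sigma^2) ...
print(w)
# ... and the true steering vectors are nearly orthogonal to the noise subspace
res = [np.linalg.norm(A[:, k].conj() @ Un) for k in range(D)]
print(res)                      # both small compared to ||a|| = sqrt(6) ~ 2.45
```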

Summarizing the above algorithm principles, the steps of the MUSIC algorithm are:

1. Compute the estimate of the covariance matrix from the N received signal vectors:

R_{x}=\frac{1}{N}\sum_{i=1}^{N}X(i)X^{H}(i)

2. Perform eigendecomposition on the covariance matrix obtained in step 1

R_{x}=AR_{s}A^{H}+\sigma^{2}I

3. The matrix R_{x} has M eigenvalues. Arrange them in descending order: \lambda_{1}>\lambda_{2}>...>\lambda_{M}>0

The D largest eigenvalues (D = number of sources) correspond to the signals, and their eigenvectors span the signal subspace.

The M−D smallest eigenvalues (M = number of array elements) correspond to the noise, and their eigenvectors span the noise subspace, giving the noise matrix E_{n}.

4. Vary \theta continuously and compute the spectral function:

P(\theta)=\frac{1}{a^{H}(\theta)E_{n}E_{n}^{H}a(\theta)}

An estimate of the direction of arrival is obtained by searching for the peaks of this spectrum. Here a(\theta) is the direction vector of the array:

a(\theta_{k})=[1,e^{-j\phi_{k}},...,e^{-j(M-1)\phi_{k}}], where \phi_{k}=\frac{2\pi d}{\lambda}\sin\theta_{k}

4. Matlab code implementation

clear all
close all
clc
%---------------- MUSIC algorithm with a uniform linear array ----------------%
ang2rad = pi/180;                   % degrees-to-radians factor
N = 10;                             % number of array elements
M = 3;                              % number of sources
theta = [-65,0,45];                 % directions of arrival (degrees)
snr = 10;                           % signal SNR in dB
K = 512;                            % total number of snapshots
delta_d = 0.05;                     % element spacing (m)
f = 2400;                           % source frequency (Hz)
c = 340;                            % speed of sound (m/s)

d = 0:delta_d:(N-1)*delta_d;
A = exp(-1i*2*pi*(f/c)*d.'*sin(theta*ang2rad));   % steering matrix of the received signals
S = randn(M,K);                     % source signals arriving at the array
X = A*S;                            % received signal carrying the direction information
X1 = awgn(X,snr,'measured');        % add white Gaussian noise to the signal
Rx = X1*X1'/K;                      % covariance matrix
[Ev,D] = eig(Rx);                   % eigendecomposition
% [V,D] = eig(A) returns a diagonal matrix D of eigenvalues and a matrix V
% whose columns are the corresponding right eigenvectors, so that A*V = V*D
EVA = diag(D)';                     % extract the eigenvalues into one row
[EVA,I] = sort(EVA);                % sort eigenvalues in ascending order; I holds the indices 1,2,...,10
EV = fliplr(Ev(:,I));               % reorder the eigenvectors to match (descending)
En = EV(:,M+1:N);                   % columns M+1 to N of the eigenvector matrix span the noise subspace

% Sweep all angles and compute the spatial spectrum
for i = 1:361
    angle(i) = (i-181)/2;           % map index to -90..90 degrees
    theta_m = angle(i)*ang2rad;
    a = exp(-1i*2*pi*(f/c)*d*sin(theta_m)).';
    p_music(i) = abs(1/(a'*En*En'*a));
end
p_max = max(p_music);
p_music = 10*log10(p_music/p_max);  % normalize
figure()
plot(angle,p_music,'b-')
grid on
xlabel('Incident angle (deg)')
ylabel('Spatial spectrum (dB)')
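For readers without Matlab, here is a rough NumPy port of the same script (same parameters; `awgn` is replaced by manually scaled complex white noise, and the plot is replaced by a simple local-maximum search — both are my substitutions, not part of the original code):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 10, 3, 512                        # elements, sources, snapshots
theta_true = np.array([-65.0, 0.0, 45.0])   # directions of arrival (degrees)
f, c, delta_d, snr_db = 2400.0, 340.0, 0.05, 10.0

d = np.arange(N) * delta_d
A = np.exp(-2j * np.pi * (f / c) * np.outer(d, np.sin(np.deg2rad(theta_true))))
S = rng.standard_normal((M, K))             # random source signals, as in the Matlab code
X = A @ S

# Add complex white noise at the requested SNR (stand-in for awgn(...,'measured'))
p_sig = np.mean(np.abs(X) ** 2)
sigma = np.sqrt(p_sig / 10 ** (snr_db / 10))
X1 = X + sigma * (rng.standard_normal(X.shape) + 1j * rng.standard_normal(X.shape)) / np.sqrt(2)

Rx = X1 @ X1.conj().T / K                   # sample covariance
w, V = np.linalg.eigh(Rx)                   # eigenvalues in ascending order
En = V[:, :N - M]                           # noise subspace

angles = np.arange(-90.0, 90.5, 0.5)        # scan grid in degrees
p_music = np.empty(len(angles))
for i, ang in enumerate(angles):
    a = np.exp(-2j * np.pi * (f / c) * d * np.sin(np.deg2rad(ang)))
    p_music[i] = 1.0 / np.abs(a.conj() @ En @ En.conj().T @ a)

# Take the three largest local maxima of the spectrum as the DOA estimates
loc = [i for i in range(1, len(angles) - 1)
       if p_music[i] > p_music[i - 1] and p_music[i] > p_music[i + 1]]
top3 = sorted(loc, key=lambda i: p_music[i])[-3:]
est = np.sort(angles[np.array(top3)])
print(est)                                  # close to [-65, 0, 45]
```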


Source: blog.csdn.net/APPLECHARLOTTE/article/details/127215848