Digital Signal Processing (4): Adaptive Filters

Introduction to adaptive filters

Because an FIR structure is unconditionally stable, adaptive filters in engineering applications are generally built on FIR filters.
The block diagram of the adaptive filter is shown in the figure below, where x(n) is the input signal, w(n) is the FIR filter coefficient vector, d(n) is the desired signal, and e(n) is the error signal.
[Figure: block diagram of the adaptive filter]

The operation of an adaptive filter involves two basic processes: the filtering process and the adaptive process. The filtering process is the convolution of the input signal with the filter coefficients, producing an output response to a series of input data; the adaptive process adjusts the filter parameters through a specific algorithm so as to continuously reduce the mean square error between the response signal and the desired signal. The specific operation steps are as follows:
(1) Before the adaptive filter runs, the input signal x(n) and the desired signal d(n) are known;
(2) While the adaptive filter runs, at each step the FIR filter produces an output d^(n), and the difference between d^(n) and d(n) is the error signal e(n);
(3) The update algorithm takes x(n) and e(n) as input and outputs Δw(n) to correct the parameters of the FIR filter;
(4) Steps (2) and (3) are repeated until the mean square value of the error signal falls below a set threshold, which shows that the resulting FIR filter can be applied in a real-time processing system.
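The steps above can be sketched in a few lines. Here is a minimal NumPy sketch (Python rather than the article's MATLAB, with an LMS-style correction standing in for the update algorithm; `num_taps` and `mu` are illustrative choices):

```python
import numpy as np

def adaptive_fir(x, d, num_taps=8, mu=0.01):
    """Run the two basic processes per sample: filter, then adapt.
    num_taps and mu are illustrative, not values from the article."""
    w = np.zeros(num_taps)                    # step (1): weights start from zero
    u = np.zeros(num_taps)                    # tap-input (delay line) vector
    e = np.zeros(len(x))
    for n in range(len(x)):
        u = np.concatenate(([x[n]], u[:-1]))  # shift the newest sample in
        y = w @ u                             # filtering process: output d^(n)
        e[n] = d[n] - y                       # step (2): e(n) = d(n) - d^(n)
        w = w + mu * e[n] * u                 # step (3): correct the weights
    return w, e
```

If d(n) is the output of an unknown FIR system driven by x(n), the weight vector w converges toward that system's coefficients, which is exactly the system-identification setup used in the MATLAB program later in this post.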

Algorithms related to adaptive filters include the least mean squares (LMS) algorithm, the normalized least mean squares (NLMS) algorithm, and the recursive least squares (RLS) algorithm.

LMS

The weight-update formula of the LMS algorithm is (here u(n) plays the role of x(n) above):

e(n) = d(n) − wᵀ(n)u(n)
w(n+1) = w(n) + μ e(n) u(n)

The initialization is: the weight vector w can be initialized to the zero vector, and the step size μ is adjusted to the operating conditions.
The advantage of the LMS algorithm is its small computational load; its disadvantages are weak instantaneous tracking ability and poor resistance to interference.

NLMS

The weight-update formula of the NLMS algorithm is:

w(n+1) = w(n) + μ / (ε + ‖u(n)‖²) · e(n) u(n)

where ε is a small positive constant that keeps the update well defined when the input energy ‖u(n)‖² is close to zero.

The initialization is the same as for LMS: the weight vector w can be initialized to the zero vector, and the step size μ is adjusted to the operating conditions.
The advantages of the NLMS algorithm are faster convergence and better steady-state behavior for signals whose energy is large or fluctuates strongly; its disadvantage is the extra multiplications required by the normalization compared with LMS.
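For comparison with the LMS loop, a minimal NumPy sketch of the normalized update (again Python for illustration; `mu`, `eps` and `num_taps` are illustrative choices):

```python
import numpy as np

def nlms(x, d, num_taps=4, mu=0.5, eps=1e-6):
    """NLMS: the step size is divided by the tap-input energy each sample."""
    w = np.zeros(num_taps)
    u = np.zeros(num_taps)
    e = np.zeros(len(x))
    for n in range(len(x)):
        u = np.concatenate(([x[n]], u[:-1]))
        e[n] = d[n] - w @ u
        # the normalization term (eps + u'u) is the extra work versus LMS
        w = w + (mu / (eps + u @ u)) * e[n] * u
    return w, e
```

Because the effective step scales inversely with the instantaneous input energy, the same μ works over a wide range of input power levels, which is the advantage claimed above.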

RLS

The weight-update formulas of the RLS algorithm are (λ is the forgetting factor, with λ = 1 giving infinite memory):

k(n) = P(n−1)u(n) / (λ + uᵀ(n)P(n−1)u(n))
ξ(n) = d(n) − wᵀ(n−1)u(n)
w(n) = w(n−1) + k(n)ξ(n)
P(n) = λ⁻¹ [ P(n−1) − k(n)uᵀ(n)P(n−1) ]

The initialization is: the weight vector w can be initialized to the zero vector, with P(0) = δ⁻¹I; the value of δ is chosen according to the signal-to-noise ratio of the input signal or the actual operating conditions (a small positive constant at high SNR, a larger one at low SNR).
The advantage of the RLS algorithm is that its convergence rate is roughly an order of magnitude faster than that of the basic LMS algorithm, and it resists interference well, converging reliably under noisy conditions. Its disadvantages are a more complicated iteration and a larger computational load.
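The four RLS recursions map directly onto code. A compact NumPy sketch (Python for illustration; `delta` and `num_taps` are illustrative, and `lam` is the forgetting factor λ):

```python
import numpy as np

def rls(x, d, num_taps=4, delta=0.01, lam=1.0):
    """RLS with P(0) = delta^-1 * I, mirroring the four update equations."""
    w = np.zeros(num_taps)
    u = np.zeros(num_taps)
    P = np.eye(num_taps) / delta          # inverse correlation matrix P(0)
    e = np.zeros(len(x))
    for n in range(len(x)):
        u = np.concatenate(([x[n]], u[:-1]))
        Pu = P @ u                        # computed once to save work
        k = Pu / (lam + u @ Pu)           # gain vector k(n)
        e[n] = d[n] - w @ u               # a priori estimation error
        w = w + k * e[n]                  # weight update
        P = (P - np.outer(k, Pu)) / lam   # P update (u'P = (Pu)' since P is symmetric)
    return w, e
```

The O(M²) matrix update of P per sample is where the extra computational load relative to LMS comes from.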

MATLAB program

The MATLAB program below demonstrates the LMS and RLS algorithms and compares their convergence speed and stability.

The reference filter wo set in the program is:
wo=[-0.1 0.2 0.7 0.4 -0.2 -0.1 0.12 -0.25 0 0 0 0];

As the LMS or RLS algorithm converges, the filter parameters approach the wo vector step by step from the initial zero vector.

clc
clear
close all
 
dotnumber=1000000;% number of data points
 
%construct the raw data
u=wgn(dotnumber,1,0);% wgn(m,n,p) generates an m-by-n matrix of white Gaussian noise; p specifies the output noise power in dBW
noise=wgn(dotnumber,1,-60);
b=1;a=[1 -0.9];
u=filter(b,a,u);% filter(b,a,X): X is the input data, b the numerator coefficients, a the denominator coefficients (a=1 gives an FIR filter)
u(dotnumber+1:end)=[];% end marks the end of the array; delete any data from dotnumber+1 to end
h=[-0.1 0.2 0.7 0.4 -0.2 -0.1 0.12 -0.25].';% .' is the non-conjugate transpose
wo=[h;zeros(4,1)];% the tap-weight vector has 12 taps, zero-padded at the end
 
%construct the reference output
d=filter(h,1,u);% pass the input through the reference system h (FIR, a=1)
d(dotnumber+1:end)=[];% delete any data from dotnumber+1 to end
d=d+noise;% add measurement noise
 
%initialization
len=12;
w=zeros(len,1);% initialize the weights to zero
w_error=zeros(dotnumber,1);% records the weight error
e=zeros(dotnumber,1);
sai=zeros(dotnumber,1);% preallocate the a priori estimation error
vector_u=zeros(len,1);% the tap-input vector is a 12-element column vector
 
%% RLS_1
%initialize the RLS parameters
delta=0.001;% use a small positive constant at high SNR (small values converge fast); use a larger one at low SNR
P=1/delta*eye(len);% eye(n) returns the n-by-n identity matrix
ramda=1;% ramda=1 gives infinite memory; with ramda<1 the filter never fully reaches steady state and the overall error is larger
for n=1:dotnumber
    vector_u=[u(n);vector_u(1:end-1)];% tap-input vector
    p_i=P*vector_u;% compute P*u once to reduce the work below
    k=p_i/(ramda+vector_u.'*p_i);% update the gain vector k
    sai(n)=d(n)-w'*vector_u;% update the a priori estimation error sai
    w=w+k*sai(n);% RLS update of the tap weights w
    P=(1/ramda).*P-(1/ramda)*k*vector_u.'*P;% update the inverse correlation matrix P
    w_error(n)=norm(w-wo)^2;% compute and record the weight error
    % norm(A) returns the 2-norm of the vector A, equivalent to norm(A,2): sqrt of the sum of squared elements
end
figure;
plot(10*log10(w_error));
 
%% RLS_2
%reset the previous run
w=zeros(len,1);
w_error=zeros(dotnumber,1);
e=zeros(dotnumber,1);
vector_u=zeros(len,1);
 
%reset the RLS parameters (larger delta this time)
delta=2;
P=1/delta*eye(len);
ramda=1;
for n=1:dotnumber
    vector_u=[u(n);vector_u(1:end-1)];
    p_i=P*vector_u;
    k=p_i/(ramda+vector_u.'*p_i);
    sai(n)=d(n)-w'*vector_u;
    w=w+k*sai(n);% RLS weight update
    P=(1/ramda).*P-(1/ramda)*k*vector_u.'*P;
    w_error(n)=norm(w-wo)^2;
end
hold on;
plot(10*log10(w_error));
 
%% LMS_1
%reset the previous run
w=zeros(len,1);
w_error=zeros(dotnumber,1);
e=zeros(dotnumber,1);
vector_u=zeros(len,1);
 
%initialize the LMS parameters
mu=0.003;
for n=1:dotnumber
    vector_u=[u(n);vector_u(1:end-1)];
    y=vector_u.'*w;
    e(n)=d(n)-y;
    w=w+mu*e(n)*vector_u;% LMS weight update
    w_error(n)=norm(w-wo)^2;
end
hold on;
plot(10*log10(w_error));
 
%% LMS_2
%reset the previous run
w=zeros(len,1);
w_error=zeros(dotnumber,1);
e=zeros(dotnumber,1);
vector_u=zeros(len,1);
 
%initialize the LMS parameters
mu=0.001;
for n=1:dotnumber
    vector_u=[u(n);vector_u(1:end-1)];
    y=vector_u.'*w;
    e(n)=d(n)-y;
    w=w+mu*e(n)*vector_u;% LMS weight update
    w_error(n)=norm(w-wo)^2;
end
hold on;
plot(10*log10(w_error));
legend('RLS 0.001','RLS 2.000','LMS 0.003','LMS 0.001');% delta differs between the two RLS runs
xlabel('Number of samples (iterations)');ylabel('System error /dB');
title('Adaptive filter simulation');

Operation result

[Figure: convergence curves of the four runs — system error ‖w − wo‖² in dB versus number of iterations]

The result graph shows that for LMS, a larger step size μ gives faster convergence but also a larger steady-state system error; compared with LMS, RLS reaches a smaller system error after convergence and fluctuates less.
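This step-size trade-off can be checked numerically. The following Python sketch reruns LMS at two step sizes in a scenario loosely modeled on the program above (white input instead of the AR(1) input, and illustrative values for the run length and noise level) and compares the tail-averaged weight error:

```python
import numpy as np

def lms_misalignment(mu, seed=0):
    """Tail-averaged squared weight error ||w - wo||^2 for one LMS run.
    White input and -40 dB noise are illustrative stand-ins for the
    article's setup; wo reuses the reference taps from the program."""
    rng = np.random.default_rng(seed)
    wo = np.array([-0.1, 0.2, 0.7, 0.4, -0.2, -0.1, 0.12, -0.25])
    N = 50000
    x = rng.standard_normal(N)
    d = np.convolve(x, wo)[:N] + 0.01 * rng.standard_normal(N)  # add noise
    w = np.zeros(8)
    u = np.zeros(8)
    err = np.zeros(N)
    for n in range(N):
        u = np.concatenate(([x[n]], u[:-1]))
        e = d[n] - w @ u
        w = w + mu * e * u
        err[n] = np.sum((w - wo) ** 2)
    return err[-5000:].mean()  # average over the steady-state tail
```

With these settings the run with the larger μ should settle at a higher error floor than the smaller one, consistent with the plot.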



Origin blog.csdn.net/meng1506789/article/details/113093912