Adaptive echo cancellation algorithm development

  The parameters of traditional IIR and FIR filters are fixed while the input signal is being processed, so when the environment changes the filter can no longer meet its design objectives. An adaptive filter, by contrast, adjusts its filter weights according to its own state and changes in the environment.

Adaptive Filter Theory

  $x(n)$ is the input signal, $y(n)$ is the output signal, $d(n)$ is the desired (reference) signal, and $e(n) = d(n) - y(n)$ is the error signal. The adaptive algorithm uses the error signal $e(n)$ to adjust the filter coefficients.
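For reference, these relations can be collected into the standard transversal (FIR) adaptive filter formulation; the filter length $L$ and the vector notation are conventions assumed here rather than taken from the text:

$$
y(n) = \mathbf{w}^{\mathsf T}(n)\,\mathbf{x}(n) = \sum_{k=0}^{L-1} w_k(n)\,x(n-k), \qquad e(n) = d(n) - y(n), \qquad \mathbf{w}(n+1) = \mathbf{w}(n) + \Delta\mathbf{w}\bigl(e(n), \mathbf{x}(n)\bigr),
$$

where the form of the weight increment $\Delta\mathbf{w}$ is what distinguishes the algorithms discussed below.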

  Adaptive filter types. Adaptive filters can be divided into two categories: nonlinear adaptive filters and linear adaptive filters. Nonlinear adaptive filters include neural-network-based adaptive filters and Volterra filters. Nonlinear adaptive filters have stronger signal-processing capability but higher computational complexity, so in practice linear adaptive filters are used more often.

Adaptive filter structure

By structure, adaptive filters can be divided into two categories: FIR filters and IIR filters.

  1. An FIR filter is a non-recursive system: the current output sample is a function only of present and past input samples, and the system impulse response h(n) is a finite-length sequence. It has good linear phase, no phase distortion, and good stability.
  2. An IIR filter is a recursive system: the current output sample is a function of past output samples as well as present and past input samples, and the system impulse response h(n) is an infinite sequence. The phase-frequency characteristic of an IIR system is nonlinear and stability cannot be guaranteed; the benefits are a lower realized order and a smaller amount of computation.

  Adaptive filtering algorithms differ according to their optimization criteria. Common algorithms include: the recursive least squares (RLS) algorithm, the least mean square (LMS) algorithm, the normalized least mean square (NLMS) algorithm, fast exact least mean square algorithms, subband filtering, frequency-domain adaptive filtering, and the like.

Performance

  • Convergence speed
  • Stability
  • Computational complexity

Full-band and sparse adaptive algorithms

The least mean square (LMS) algorithm

The most widely used adaptive filtering algorithm for the AEC problem is the least mean square (Least Mean Square, LMS) algorithm, first proposed by Widrow and Hoff in 1959.

The LMS algorithm is based on Wiener filter theory; it uses the steepest-descent method to update the adaptive filter weights by minimizing the energy of the error signal.
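A minimal sketch of the LMS recursion described above, assuming a real-valued transversal filter; the function and variable names (`lms_step`, `mu`, the toy echo path) are illustrative assumptions, not part of the cited literature:

```python
import numpy as np

def lms_step(w, x_buf, d, mu):
    """One LMS iteration: y = w.x, e = d - y, w <- w + mu * e * x."""
    y = np.dot(w, x_buf)        # filter output y(n)
    e = d - y                   # error signal e(n) = d(n) - y(n)
    w = w + mu * e * x_buf      # steepest-descent weight update
    return w, y, e

# Usage sketch: identify a toy echo path from far-end signal x and microphone signal d.
L, mu = 128, 0.005
rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)                            # far-end (loudspeaker) signal
h = rng.standard_normal(L) * np.exp(-np.arange(L) / 20.0)  # assumed toy echo path
d = np.convolve(x, h)[: len(x)]                            # microphone signal (echo only)

w = np.zeros(L)
for n in range(L, len(x)):
    x_buf = x[n - L + 1 : n + 1][::-1]                     # most recent L samples, newest first
    w, _, _ = lms_step(w, x_buf, d[n], mu)
```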

  • Advantage: good side-lobe suppression.
  • Disadvantage: the computational complexity of LMS is low, but its convergence rate is slow, and as the filter order increases the stability of the system decreases. Even choosing the smallest step-size parameter that guarantees minimal misadjustment may not satisfy the convergence criterion.

Normalized least mean square (NLMS) algorithm

  The normalized least mean square (Normalized Least Mean Squares, ==NLMS==) algorithm is a modification of the LMS algorithm: the product of the error signal and the far-end input signal used in the LMS update is normalized by the squared Euclidean norm of the far-end input signal. The fixed step-size factor of the LMS algorithm thus becomes a step size that varies with the input signal.
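The corresponding normalized update, reusing the notation of the LMS sketch above; the regularization constant `eps` (to avoid division by zero) is a common convention assumed here, not mentioned in the text:

```python
import numpy as np

def nlms_step(w, x_buf, d, mu=0.5, eps=1e-8):
    """One NLMS iteration: the LMS update divided by the squared norm of the input."""
    y = np.dot(w, x_buf)
    e = d - y
    norm = np.dot(x_buf, x_buf) + eps    # squared Euclidean norm of the input vector
    w = w + (mu / norm) * e * x_buf      # effective step size now varies with input power
    return w, y, e
```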

  • Advantages: improves the slow convergence of the LMS algorithm; the computation is simple and convergence is fast.
  • Drawback: convergence is still slow for highly correlated input signals such as speech.

Variable step size LMS

Advantages: fast convergence.

Disadvantages: input noise more readily degrades the stability and tracking ability of the algorithm.

Affine projection algorithm


Sparse adaptive algorithms

Analysis of the echo path model shows that the more active, higher-energy echo coefficients are clustered in the time domain and make up only a very small proportion of the path: only a few values are effectively non-zero, while most are zero or near zero. In other words, the echo path has sparse characteristics.

PNLMS

To exploit the sparseness of the echo path, Duttweiler introduced the proportionate idea and proposed the proportionate normalized least mean square (PNLMS) algorithm, which distributes the adaptation step in proportion to the magnitudes of the filter weight vector. This algorithm was a very important development for echo cancellation.

  The algorithm adjusts the convergence rate by giving each tap of the sparse filter a variable step-size parameter proportional to its weight; the proportionate value of each tap reflects whether the current weight is in an active or inactive state, and the assigned step size differs accordingly. Active taps are assigned larger step-size parameters, which accelerates the convergence of those tap coefficients; inactive taps, on the contrary, are assigned smaller step-size parameters to reduce the steady-state error of the algorithm. Because each filter tap receives its own step-size estimate, the steady-state convergence of the algorithm is significantly improved. However, PNLMS has a significant disadvantage: the proportionate step-size parameters it introduces lead to an accumulation of estimation error, which ultimately slows convergence in the later stage.
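A sketch of the proportionate step-size idea; the gain rule below (with parameters `rho` and `delta_p`) follows the form commonly attributed to Duttweiler's PNLMS, and the specific constants are illustrative assumptions:

```python
import numpy as np

def pnlms_step(w, x_buf, d, mu=0.5, rho=0.01, delta_p=0.01, eps=1e-8):
    """One PNLMS iteration: each tap gets a step size proportional to its magnitude."""
    y = np.dot(w, x_buf)
    e = d - y
    # Proportionate gains: large (active) taps get large gains, small (inactive) taps small gains.
    # With an all-zero weight vector the gains are equal, so the update starts out like NLMS.
    gamma = np.maximum(rho * max(delta_p, np.max(np.abs(w))), np.abs(w))
    g = gamma / np.mean(gamma)                   # normalize so the gains average to one
    norm = np.dot(g * x_buf, x_buf) + eps        # x^T G x, used for normalization
    w = w + (mu / norm) * e * g * x_buf          # active taps adapt fast, inactive taps slowly
    return w, y, e
```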

Advantages: for a sparse echo path, the algorithm converges quickly in the initial stage while also reducing the steady-state error.
Disadvantages:

  1. Because PNLMS over-emphasizes the convergence of the large coefficients, when a coefficient is small its proportionate step size is also small; as the algorithm runs, convergence in the later stage may become slow and fail to complete in time.
  2. Compared with the NLMS algorithm, the complexity of the algorithm increases.
  3. When the echo path is not sparse, the convergence rate becomes even slower than that of the NLMS algorithm.

PNLMS++

In each sampling period, the PNLMS++ algorithm alternates between the NLMS algorithm and the PNLMS algorithm, which improves the convergence speed. However, PNLMS++ performs well only when the echo path is either highly sparse or highly non-sparse.

CPNLMS

The CPNLMS (Composite PNLMS) algorithm compares the error signal power against a preset threshold to decide whether to run the PNLMS algorithm or the NLMS algorithm.

  • Disadvantage: the threshold choice depends on the actual environment, so the algorithm is not widely used in practical applications.

$\mu$-law MPNLMS

The $\mu$-law MPNLMS algorithm: to resolve the slow late-stage convergence of the PNLMS algorithm, the steepest-descent method is applied to the PNLMS algorithm and the proportionate scale factors are computed using the $\mu$-law criterion (a logarithmic function replaces the absolute-value function used in the PNLMS algorithm).
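A commonly cited form of the $\mu$-law mapping used by MPNLMS to compute the proportionate factors (normalizations differ slightly between papers, so treat this as an assumed standard variant rather than the thesis's exact formula):

$$
F\bigl(|\hat w_l(n)|\bigr) = \frac{\ln\bigl(1 + \mu\,|\hat w_l(n)|\bigr)}{\ln(1 + \mu)}, \qquad 0 \le |\hat w_l(n)| \le 1,
$$

and the proportionate gains are then built from $F(|\hat w_l|)$ rather than from $|\hat w_l|$ as in PNLMS.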

  • Advantages: makes the updating of the filter weight vector more balanced and improves the convergence rate of the PNLMS algorithm near steady state.
  • Disadvantages: increased complexity.

Improved proportionate normalized least mean square (IPNLMS)

  The IPNLMS algorithm uses, as the diagonal elements of its proportionate step-size matrix, a weighted sum of a uniform (mean) term and a proportionate term based on the L1 norm of the estimated echo path vector (i.e. it adjusts the proportionate steps of the filter). As a result, IPNLMS has an initial convergence rate similar to PNLMS, and under non-sparse echo path conditions its convergence rate is improved compared with PNLMS; the improved performance, however, also increases the computational complexity.
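A commonly cited form of the IPNLMS proportionate gains, with mixing parameter $\alpha \in [-1, 1]$ and a small regularization $\varepsilon$ (again an assumed standard formulation rather than a quotation from the source):

$$
k_l(n) = \frac{1-\alpha}{2L} + (1+\alpha)\,\frac{|\hat w_l(n)|}{2\,\lVert \hat{\mathbf w}(n)\rVert_1 + \varepsilon}, \qquad l = 0, \dots, L-1 .
$$

For $\alpha = -1$ all gains are equal and the update reduces to NLMS; for $\alpha$ close to $1$ it behaves like PNLMS, which is why IPNLMS keeps a reasonable convergence rate on both sparse and non-sparse echo paths.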

Improved IPNLMS

  In the iterative update of PNLMS-class adaptive algorithms, taps with large weights receive large step-size factors, which speeds up convergence; but when the adaptive filter approaches steady state, those large-weight taps produce a larger steady-state error. To solve this problem, P. A. Naylor proposed an improved IPNLMS (Improved IPNLMS, IIPNLMS) algorithm. Based on the IPNLMS algorithm, IIPNLMS assigns reduced proportionate step-size parameters to the large-valued weights, thereby reducing the influence of noise.

MPNLMS

The MPNLMS algorithm introduces an optimal step-size control matrix into the update of the filter coefficients, correcting the slow late-stage convergence of PNLMS.

  • Disadvantage: the MPNLMS update involves logarithm calculations, so the computational complexity of the algorithm is relatively high.

SPNLMS

The SPNLMS algorithm reduces the logarithm calculation in the MPNLMS filter update to a simple two-segment piecewise function.

  • Advantages: reduced complexity relative to MPNLMS.

An improved MPNLMS algorithm approximates the logarithm function of MPNLMS with a multi-segment piecewise function, thereby reducing the complexity of the algorithm.

An improved SPNLMS algorithm reduces the complexity by controlling how frequently the step-size control matrix is iterated; the convergence rate, however, also declines.

The above improvements to the MPNLMS algorithm sacrifice some stability and convergence in exchange for reduced computational complexity.

Sparse control (Sparse Control, SC)

In practice the sparsity of the echo path can vary with temperature, pressure, the sound absorption coefficient of the room walls, and other factors, so AEC algorithms need a way to adapt to changing sparsity.

Sparseness-controlled proportionate echo cancellation algorithms (SC-PNLMS, SC-MPNLMS, SC-IPNLMS) use a new sparseness-control method to adapt dynamically to the degree of sparseness of the echo path, so that the algorithms perform well on both sparse and non-sparse echo paths. This shows that sparseness-controlled adaptive filtering algorithms can improve robustness to the degree of sparseness of the echo path.

 

Summary: from the above analysis of the PNLMS algorithm we can see that the main reason for the overall slow convergence of PNLMS is that the convergence of the large coefficients and the small coefficients is not balanced. Although many scholars have proposed corrections for this defect, such as PNLMS++ and CPNLMS, the fundamental defect that PNLMS ignores the convergence of the small coefficients has not been fixed, so the effect of these improved algorithms is not very satisfactory. Deng quantitatively analyzed the convergence process of the filter coefficients and derived the optimal step-size calculation for the weight-update process, yielding an improved algorithm, the MPNLMS algorithm. The way MPNLMS computes the proportionate steps overcomes PNLMS's excessive focus on the convergence of the large coefficients at the expense of the small ones, correcting the slow late-stage convergence of PNLMS.

  A new improvement of PNLMS: because PNLMS focuses only on updating the large coefficients and ignores the convergence of the small coefficients, the convergence speed of the algorithm declines in the later stage; the proportionate steps must therefore also pay attention to updating the small coefficients. The MPNLMS algorithm establishes the proportionate step as a function of the current filter weights and, to a certain extent, solves the slow late convergence of PNLMS, but the filtering process involves an amount of computation that is unfavorable for real-time system implementation.

By quantitatively analyzing the filtering process and taking both the large and the small coefficients into account, a new mapping between the proportionate step size and the current filter coefficients can be established, which reduces the computational complexity of the algorithm. Such an improved PNLMS algorithm overcomes the convergence defects of PNLMS by changing this step-size mapping.

Subband adaptive filters

  In acoustic echo cancellation applications, the far-end input speech signal is highly correlated; however, traditional stochastic-gradient algorithms such as the conventional NLMS and other full-band LMS methods are based on the assumption of an independent (white) input signal, so although their computational complexity is low, their convergence rate decreases significantly.

The correlation of the far-end speech signal has two aspects:

  • In the time domain: it is characterized by the eigenvalue spread of the correlation matrix of the speech signal.
  • In the frequency domain: it is characterized by the dynamic range of the spectrum of the far-end speech signal.

  In general, compared with a white signal, the speech signal has a significantly larger spectral dynamic range, i.e. greater signal correlation. The convergence rate can therefore be accelerated by reducing the correlation of the input signal, and an effective method is to combine adaptive filter theory with filter bank theory, giving the subband adaptive filtering (SAF) algorithm.

Subband adaptive filter: the SAF algorithm divides the correlated signal into approximately mutually independent subband signals via a filter bank (subband decomposition). The subband signals are then decimated to obtain multirate, down-sampled signals, on which the adaptive processing is performed. To study subband adaptive filters, one first needs to understand multirate systems (decimation) and filter banks.

Multirate system [1]

  The decimation systems used in multirate subband adaptive filtering are of two kinds, down-sampling and up-sampling, realized mainly through decimation and interpolation to obtain different sampling rates. If the input signal were simply split through N filters without decimation, the total number of samples would be N times that of the original signal, and this substantial increase in samples would increase the amount of computation.
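A minimal sketch of the two basic multirate operations mentioned above, N-fold decimation and N-fold interpolation by zero insertion; the anti-aliasing and interpolation filters that normally accompany them are omitted for brevity:

```python
import numpy as np

def decimate(x, N):
    """Down-sample by keeping every N-th sample (anti-aliasing filter omitted)."""
    return x[::N]

def interpolate(x, N):
    """Up-sample by inserting N-1 zeros between samples (interpolation filter omitted)."""
    y = np.zeros(len(x) * N, dtype=x.dtype)
    y[::N] = x
    return y

x = np.arange(8, dtype=float)
print(decimate(x, 2))                   # [0. 2. 4. 6.]
print(interpolate(decimate(x, 2), 2))   # [0. 0. 2. 0. 4. 0. 6. 0.]
```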

Filter banks [1]

  Subband decomposition of a signal is achieved with a filter bank. A filter bank consists of an analysis filter bank and a synthesis filter bank; in essence, a filter bank is a set of bandpass filters.

  The analysis filter bank divides the digital signal into several subband signals, which are then decimated. After processing, the subband signals are interpolated and passed through the synthesis filter bank, whose outputs are added to restore the original signal.

Subband adaptive algorithm structure [1]

   In the conventional SAF, the adaptive algorithm minimizes the error signal of each subband, so the objective function is a local error minimization that does not necessarily minimize the global error energy. In addition, splitting and reconstructing the full-band signal with the analysis and synthesis filter banks introduces delay; in an AEC application this delay affects the full-band error signal (containing the near-end speech) that is transmitted back to the far end. To eliminate the influence of this delay, a delayless closed-loop subband structure is used, in which the filter coefficients are adjusted under the constraint of globally minimizing the error energy, finally ensuring that the adaptive filtering algorithm can converge to the optimal filter coefficients.

  • Advantages: improves the convergence rate of the adaptive filtering algorithm when the full-band signal is correlated.
  • Disadvantages:
    • The steady-state error increases significantly because of the aliasing components present in the output.
    • When QMF banks are used, the aliasing components of the subband system should in principle cancel each other, but in practice this cannot be achieved.

Subsequent development of subband adaptive algorithms

Problem: the SAF algorithm suffers from a high steady-state error.

Solution: the normalized SAF (normalized SAF, NSAF) algorithm was proposed based on the principle of minimal disturbance.

Advantages: thanks to the inherent decorrelating property of SAF-class algorithms, NSAF converges faster than full-band NLMS when processing correlated input signals, while its computational cost is comparable to that of NLMS.

In recent years, to improve the convergence and steady-state performance of AEC algorithms, researchers have combined full-band adaptive filtering ideas with NSAF theory and proposed several improved NSAF algorithms, such as various variable-step-size NSAF variants and NSAF variants with different regularization parameters. For fast convergence when identifying sparse echo paths, references [22, 23] transplanted the proportionate idea of the PNLMS-type algorithms into NSAF, yielding the proportionate NSAF (proportionate NSAF, PNSAF) algorithm and the μ-law PNSAF (μ-law PNSAF, MPNSAF) algorithm.

To address the aliasing components present in the subband structure:

  1. In 1988 Kellermann used a filter bank sampling technique to eliminate aliasing, but this increases the complexity of the algorithm.
  2. Leave a guard band between adjacent subbands; disadvantage: the introduced blank band reduces the signal quality.
  3. Use overlapping sub-filter compensation; disadvantage: the cross terms increase the amount of computation and also reduce the convergence rate.

In 2004, K. A. Lee and W. S. Gan, based on the principle of minimal disturbance, proposed the multiband-structured SAF (Multiband Structured SAF, MSAF) adaptive filtering algorithm and gave the update equations for the adaptive filter tap coefficients. With this structure the aliasing-component problem at the filter output is completely avoided.

Multi-band adaptive filters

  A subband adaptive filter uses a separate sub-filter in each subband. This conventional structure produces aliasing components, and solving that problem comes at the cost of reduced signal quality or increased steady-state error at the output. Lee and Gan therefore proposed a new multiband structure in 2004: instead of using a different sub-filter per subband, the multiband structure uses the same full-band filter for all subbands, which effectively overcomes the aliasing-component problem at the output.
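A commonly cited form of the multiband tap-weight update attributed to Lee and Gan's MSAF, where a single full-band weight vector $\mathbf{w}$ is updated from all subbands; $\mathbf{u}_i(k)$ is the regressor of subband $i$, $e_{i,D}(k)$ the decimated subband error, $N$ the number of subbands, and $\varepsilon$ a small regularization constant (notation assumed here, not quoted from the thesis):

$$
\mathbf{w}(k+1) = \mathbf{w}(k) + \mu \sum_{i=0}^{N-1} \frac{\mathbf{u}_i(k)\, e_{i,D}(k)}{\mathbf{u}_i^{\mathsf T}(k)\,\mathbf{u}_i(k) + \varepsilon}.
$$

Because the subband errors drive one shared full-band filter instead of separate sub-filters, no subband outputs need to be recombined through a synthesis filter bank, which is why the aliasing problem at the output disappears.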


Frequency domain adaptive filtering

Problem: for a long, complex echo path with a long reverberation time, time-domain adaptive filtering algorithms have a high computational complexity.

Solution: references [12-14] proposed the ==multidelay block frequency-domain filter (MDF)== algorithm. The MDF algorithm partitions the adaptive filter of length L into several sub-blocks, with L an integer multiple of the FFT block length, and applies the LMS algorithm in the frequency domain to each input-signal sub-block. A sketch of one frequency-domain block step is given below.

  • Advantage: when the echo path is long and complicated the amount of computation is small, and the convergence rate is **slightly improved**.
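A minimal sketch of one frequency-domain block-LMS step using overlap-save with a single block; the MDF algorithm referred to above applies the same idea to several shorter sub-blocks of the filter. The constrained-gradient details and names follow common FDAF conventions and are assumptions, not taken from references [12-14]:

```python
import numpy as np

def fdaf_block_step(W, x_prev, x_curr, d_block, mu=0.1, eps=1e-8):
    """One overlap-save frequency-domain LMS block update.

    W       : frequency-domain weights, length 2N (N = block length)
    x_prev  : previous N far-end samples;  x_curr : current N far-end samples
    d_block : current N microphone (desired) samples
    """
    N = len(x_curr)
    X = np.fft.fft(np.concatenate([x_prev, x_curr]))    # 2N-point input spectrum
    y = np.real(np.fft.ifft(X * W))[N:]                 # last N output samples are valid (overlap-save)
    e = d_block - y                                     # time-domain block error
    E = np.fft.fft(np.concatenate([np.zeros(N), e]))    # zero-padded error spectrum
    P = np.abs(X) ** 2 + eps                            # per-bin input power (normalization)
    grad = np.real(np.fft.ifft(np.conj(X) * E / P))
    grad[N:] = 0.0                                      # gradient constraint -> linear convolution
    W = W + mu * np.fft.fft(grad)
    return W, y, e
```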

 

In summary

From "Subband Acoustic Echo Cancellation and Block-Sparse Adaptive Algorithms" by Weidan Dan:

The far-end input speech signal is highly correlated, and the acoustic impulse response of the echo channel contains only a small number of non-zero coefficients, so the echo channel is a sparse channel.

The thesis studies improved subband acoustic echo cancellation and block-sparse adaptive algorithms, with the goal of improving the tracking ability of the algorithms and their robustness against impulses. The main contributions are as follows:

First, unlike the traditional normalized subband adaptive echo cancellation algorithms in the literature,

a new switching normalized subband adaptive filtering algorithm for acoustic echo cancellation (LMS-NSAF) is proposed.

The core idea of the algorithm is to switch according to the state of the speech signal, using a VAD algorithm with fast envelope tracking: when the instantaneous energy of the far-end input signal is large, the fast-converging subband NLMS algorithm is used; when the instantaneous energy of the input signal is small, a low-complexity weight-vector update is used. The improved subband NLMS algorithm can thus reduce the computational complexity while improving convergence.

Second, an improved multiband adaptive filter structure based on the NLMS-NSAF switching algorithm:

First, the envelope method is applied to the far-end speech signal to decide whether a voice segment is present;

then the detected state is passed to the adaptive algorithm module inside the multiband structure.

If the input speech signal is in a region of large short-term energy, the fast-converging adaptive filtering algorithm (NLMS) is used;

if the input speech signal is in a region of small short-term energy, the low-complexity algorithm (NSAF) is used;

and when there is no voice, the iteration of the algorithm stops.

The energy level of the input speech signal is determined by comparing it with a threshold. The switching algorithm fully exploits the characteristics of speech and the convergence-speed advantages of each algorithm, while at the same time optimizing the complexity of the algorithm selected. In the end, the filtering performance of the algorithm is improved and the amount of computation is reduced. The switching logic is sketched below.
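A minimal control-flow sketch of the energy-based switching described above; the envelope/energy detector, the thresholds, and the labels returned are illustrative assumptions, not the thesis's actual implementation:

```python
import numpy as np

def short_term_energy(frame):
    """Short-term energy of one far-end speech frame (a NumPy array)."""
    return float(np.mean(frame ** 2))

def select_update(frame, vad_threshold, energy_threshold):
    """Choose the update for this frame: 'nlms' (fast), 'nsaf' (cheap), or 'idle' (no voice)."""
    energy = short_term_energy(frame)
    if energy < vad_threshold:        # no voice segment: stop iterating
        return "idle"
    if energy >= energy_threshold:    # high short-term energy: fast-converging NLMS update
        return "nlms"
    return "nsaf"                     # low short-term energy: low-complexity subband update
```

For each frame, the returned label then decides whether the full-band NLMS update, the subband NSAF update, or no update is applied to the shared weight vector.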

Some say that the LMS algorithm, developed from Wiener filtering, is still the most widely applied adaptive filtering algorithm today because of its simple structure, small amount of computation, and good stability.


A classic echo path is a sparse path, and the far-end input speech signal is strongly correlated.

The most typical and representative adaptive algorithms are the least mean square (LMS) algorithm and the recursive least squares (RLS) algorithm. [Reference](https://blog.csdn.net/tianwenzhe00/article/details/88192783)

References

"Subband Acoustic Echo Cancellation and Block-Sparse Adaptive Algorithms", Weidan Dan

"Hands-Free Car System Noise Reduction Algorithms and Hardware Implementation Research", Zhang Xue

 
