Signal parameter estimation

1. Criteria for evaluating estimates

Suppose a is a characteristic quantity of a wide-sense stationary random signal x(n), and let \hat a denote an estimator of a. The bias of the estimate reflects how close the estimator is to the true value, and is defined as follows:

B=E[a-\hat a]=a-E[\hat a]

Intuitively, the smaller B is, the better \hat a estimates a. If the bias vanishes as the number of samples N tends to infinity, the estimate is said to be asymptotically unbiased:

\lim_{N\to \infty}B=a-\lim_{N\to\infty}E[\hat a]=0

The variance of the estimate indicates how widely the estimated values are dispersed about their mean. It is defined as follows:

var[\hat a]=\sigma^2_{\hat a}=E\lbrace [\hat a-E(\hat a)]^2\rbrace=E[\hat a^2]-[E(\hat a)]^2

The mean square error of the estimate reflects both effects at once. Writing the estimation error as \tilde a=\hat a-a, it is defined as follows:

E[\tilde a^2]=E[(\hat a-a)^2]

If the mean square error tends to zero as the number of samples N tends to infinity, the estimate is called a consistent estimate:

\lim_{N\to\infty}E[\tilde a^2]=0
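As a quick numerical illustration of these three criteria, here is a minimal Monte Carlo sketch in Python (the true value a = 2.0, the noise level, and the sample size N = 100 are arbitrary choices for the demonstration, not values from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo sketch of bias, variance, and mean square error.
# Illustrative setup: estimate the mean a = 2.0 of a Gaussian signal
# from N = 100 samples, repeated over many independent trials.
a_true, sigma, N, trials = 2.0, 1.5, 100, 20000

x = rng.normal(a_true, sigma, size=(trials, N))
a_hat = x.mean(axis=1)                # one estimate per trial

B = a_true - a_hat.mean()             # bias  B = a - E[a_hat]
var = a_hat.var()                     # variance of the estimator
mse = np.mean((a_hat - a_true) ** 2)  # mean square error

print(f"bias     ≈ {B:+.5f}")
print(f"variance ≈ {var:.5f}  (theory: sigma^2/N = {sigma**2 / N:.5f})")
print(f"MSE      ≈ {mse:.5f}  (≈ B^2 + variance)")
```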

2. Consistent estimates

For a consistent estimate, we show that the bias and the variance both tend to zero.

The mean square error decomposes into variance and squared bias:

E[\tilde a^2]=E\lbrace [\hat a-E(\hat a)+E(\hat a)-a]^2\rbrace=\sigma_{\hat a}^2+B^2

since the cross term vanishes: E[\hat a-E(\hat a)]=0.

By the consistency condition:

\lim_{N\to\infty}E[\tilde a^2]=0

So we obtain:

\lim_{N\to\infty}(B^2+\sigma_{\hat a}^2)=0

Since both terms are non-negative, the bias and the variance both tend to zero.

Let \hat a denote the estimate produced by one algorithm, and write the estimates produced by other algorithms as:

\hat a_1,\hat a_2,\cdots,\hat a_k,\cdots

If the following inequality holds for every k:

E(\hat a-a)^2\leq E(\hat a_k-a)^2

then \hat a is called an efficient estimate.
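To make the idea concrete, the following sketch compares the mean square errors of two estimators of the mean of a Gaussian signal: the sample mean and the sample median (both estimator choices are illustrative, not from the original text). The one with the smaller MSE is the efficient estimate of the pair:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical comparison: sample mean vs. sample median as
# estimators of the mean of a Gaussian signal.
a_true, sigma, N, trials = 0.0, 1.0, 101, 20000
x = rng.normal(a_true, sigma, size=(trials, N))

mse_mean = np.mean((x.mean(axis=1) - a_true) ** 2)
mse_median = np.mean((np.median(x, axis=1) - a_true) ** 2)

# For Gaussian data the sample mean has the smaller MSE,
# so it is the efficient estimate of the two.
print(f"MSE(mean)   ≈ {mse_mean:.5f}")
print(f"MSE(median) ≈ {mse_median:.5f}  (≈ pi/2 times larger)")
```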

3. Estimating the mean

Let N denote the number of observations; the observed samples of the stationary signal sequence x(n) are:

x_0,x_1,\cdots,x_{N-1}

From this an estimate of the mean can be calculated:

\hat m_x=\frac{1}{N}\sum_{i=0}^{N-1}x_i

3.1 Bias

First, we have:

E[\hat m_x]=E(\frac{1}{N}\sum_{i=0}^{N-1}x_i)=\frac{1}{N}\sum_{i=0}^{N-1}E[x_i]=m_x

From this the deviation can be calculated:

B=m_x-E[\hat m_x]=0

So this method is an unbiased estimate.

3.2 Variance

From the definition, we first compute:

E(\hat m_x^2)=E[(\frac{1}{N}\sum_{i=0}^{N-1}x_i)(\frac{1}{N}\sum_{j=0}^{N-1}x_j)]=\frac{1}{N^2}\sum_{j=0}^{N-1}\sum_{i=0}^{N-1}E[x_ix_j]

(1) When x_i and x_j (i\neq j) are uncorrelated:

E[x_ix_j]=E[x_i]E[x_j]=m_x^2

Substituting into the expression above and simplifying:

E[\hat m_x^2]=\frac{1}{N^2}\left[N E[x_i^2]+N(N-1)m_x^2\right]=\frac{1}{N}E[x_i^2]+\frac{N-1}{N}m_x^2

So the variance of this estimate is:

\sigma_{m_x}^2=E[\hat m_x^2]-m_x^2=\frac{1}{N}E[x_i^2]-\frac{1}{N}m_x^2=\frac{1}{N}\sigma_x^2

The following limit can be obtained:

\lim_{N\to\infty}\sigma_{m_x}^2=0

Therefore, the estimate is unbiased and consistent. 
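A short numerical check of both results, bias B = 0 and variance \sigma_x^2/N, for uncorrelated samples (the values m_x = 1.0 and \sigma_x = 2.0 are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

# Check B = 0 and var = sigma_x^2 / N for uncorrelated samples
# (illustrative values: m_x = 1.0, sigma_x = 2.0).
m_x, sigma_x, trials = 1.0, 2.0, 10000
for N in (10, 100, 1000):
    m_hat = rng.normal(m_x, sigma_x, size=(trials, N)).mean(axis=1)
    print(f"N={N:5d}  bias≈{m_x - m_hat.mean():+.4f}  "
          f"var≈{m_hat.var():.5f}  theory={sigma_x**2 / N:.5f}")
```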

(2) When x_i and x_j are correlated:

\sigma_{m_x}^2=E\lbrace [\hat m_x-E(\hat m_x)]^2\rbrace

Expanding the square gives:

\sigma_{m_x}^2=\frac{1}{N^2}\sum_{i=0}^{N-1}\sum_{j=0}^{N-1}E[(x_i-m_x)(x_j-m_x)]

When the difference between i and j is m, each term is the covariance at lag m:

E[(x_i-m_x)(x_j-m_x)]=cov(m)

Since there are N-|m| pairs of samples separated by m points among the N data, we obtain:

\sigma_{m_x}^2=\frac{1}{N^2}\sum_{m=-(N-1)}^{N-1}(N-|m|)cov(m)=\frac{1}{N}\sum_{m=-(N-1)}^{N-1}\left(1-\frac{|m|}{N}\right)cov(m)

When the signal data are correlated, the variance of the estimate depends on the covariance sequence, and the estimate is in general not consistent. Increasing N can, of course, still reduce the estimated variance.
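The effect of correlation can be seen numerically. The sketch below uses a hypothetical AR(1) signal x(n) = a x(n-1) + w(n), for which cov(m) = \sigma_x^2 a^{|m|}, and compares the empirical variance of \hat m_x with the formula above and with the uncorrelated value \sigma_x^2/N:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical AR(1) signal: x(n) = a*x(n-1) + w(n), w ~ N(0, 1),
# with stationary variance sigma_x^2 = 1/(1 - a^2) and
# cov(m) = sigma_x^2 * a^|m|.
a, N, trials, burn = 0.9, 200, 5000, 200
w = rng.normal(size=(trials, N + burn))
x = np.zeros((trials, N + burn))
for n in range(1, N + burn):
    x[:, n] = a * x[:, n - 1] + w[:, n]
x = x[:, burn:]                      # discard transient so x is stationary

sigma_x2 = 1.0 / (1.0 - a * a)
m_hat = x.mean(axis=1)

# Variance formula from the text: (1/N) * sum (1 - |m|/N) cov(m)
m = np.arange(-(N - 1), N)
theory = np.sum((1 - np.abs(m) / N) * sigma_x2 * a ** np.abs(m)) / N

print(f"empirical var[m_hat] ≈ {m_hat.var():.5f}")
print(f"formula              ≈ {theory:.5f}")
print(f"iid value sigma^2/N  ≈ {sigma_x2 / N:.5f}  (too small)")
```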

4. Estimating the variance

4.1 Known mean

When the signal mean m_x is known, the variance can be estimated as:

\hat \sigma_x^2=\frac{1}{N}\sum_{i=0}^{N-1}(x_i-m_x)^2

We show that this formula gives an unbiased and consistent estimate.

Solution:

(1) First verify the bias:

E[\hat \sigma_x^2]=\frac{1}{N}\sum_{i=0}^{N-1}E[(x_i-m_x)^2]=\sigma_x^2

B=\sigma_x^2-E[\hat \sigma_x^2]=0

(2) Then verify the consistency. For independent samples, the variance of the estimate is:

var[\hat \sigma_x^2]=\frac{1}{N}\lbrace E[(x_i-m_x)^4]-\sigma_x^4\rbrace

\lim_{N\to\infty}var[\hat \sigma_x^2]=0

So the estimate is both unbiased and consistent.
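A minimal Monte Carlo check of both claims, assuming independent Gaussian samples (for which E[(x-m_x)^4]=3\sigma_x^4, so the variance of the estimate is 2\sigma_x^4/N; the values m_x = 3.0 and \sigma_x = 2.0 are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)

# Variance estimate when the mean m_x is known (illustrative values).
m_x, sigma_x, N, trials = 3.0, 2.0, 50, 20000
x = rng.normal(m_x, sigma_x, size=(trials, N))

var_hat = np.mean((x - m_x) ** 2, axis=1)   # (1/N) sum (x_i - m_x)^2

print(f"E[var_hat]   ≈ {var_hat.mean():.4f}  (true sigma_x^2 = {sigma_x**2})")
print(f"var[var_hat] ≈ {var_hat.var():.4f}  "
      f"(Gaussian theory: 2*sigma_x^4/N = {2 * sigma_x**4 / N:.4f})")
```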

4.2 Unknown mean

When the mean m_x is unknown, its estimate \hat m_x is used instead, and the variance can be estimated as follows:

\hat \sigma_x^2=\frac{1}{N}\sum_{i=0}^{N-1}(x_i-\hat m_x)^2

(1) Show that this is a biased estimate.

(2) Modify the formula to obtain an unbiased estimate.

Solution:

(1) Taking the expectation:

E[\hat \sigma_x^2]=\frac{1}{N}\sum_{i=0}^{N-1}E[(x_i-\hat m_x)^2]=\frac{N-1}{N}\sigma_x^2\neq\sigma_x^2

Obviously this is a biased estimate.

(2) The form of unbiased estimation is as follows:

\hat \sigma_x^{'2}=\frac{1}{N-1}\sum_{i=0}^{N-1}(x_i-\hat m_x)^2

The following proves that this formula is an unbiased estimate:

Clearly:

\hat \sigma_x^{'2}=\frac{N}{N-1}\hat \sigma_x^2

Taking the expectation of both sides:

E(\hat \sigma_x^{'2})=\frac{N}{N-1}E(\hat \sigma_x^2)=\sigma_x^2

So B=0 and this is an unbiased estimate.
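NumPy exposes both normalizations through the ddof parameter of var, which makes the bias easy to observe numerically; a small N is used here so the (N-1)/N factor is clearly visible (all numeric values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

# Compare the biased (1/N) and unbiased (1/(N-1)) variance estimates
# when the mean is unknown; N is kept small so the bias is visible.
sigma_x, N, trials = 2.0, 5, 200000
x = rng.normal(0.0, sigma_x, size=(trials, N))

biased = x.var(axis=1, ddof=0)     # (1/N)     sum (x_i - m_hat)^2
unbiased = x.var(axis=1, ddof=1)   # (1/(N-1)) sum (x_i - m_hat)^2

print(f"true sigma_x^2       = {sigma_x**2}")
print(f"E[biased estimate]   ≈ {biased.mean():.4f}  "
      f"(theory: (N-1)/N * sigma_x^2 = {(N - 1) / N * sigma_x**2:.4f})")
print(f"E[unbiased estimate] ≈ {unbiased.mean():.4f}")
```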

5. Estimating the autocorrelation function

5.1 Unbiased autocorrelation function estimation

The estimation formula is:

\hat r_{xx}(m)=\frac{1}{N-|m|}\sum_{n=0}^{N-|m|-1}x(n)x(n+m)

First, we compute:

E[\hat r_{xx}(m)]=\frac{1}{N-|m|}\sum_{n=0}^{N-|m|-1}E[x(n)x(n+m)]=r_{xx}(m)

From this the deviation can be calculated as:

B=r_{xx}(m)-E[\hat r_{xx}(m)]=0

So this estimate is an unbiased estimate.

The exact variance of this estimate is more complicated to compute; for a zero-mean real Gaussian signal it can be approximated as follows:

var[\hat r_{xx}(m)]\approx\frac{N}{(N-|m|)^2}\sum_{n=-\infty}^{\infty}\left[r_{xx}^2(n)+r_{xx}(n+m)r_{xx}(n-m)\right]

When N satisfies the following, the variance tends to 0:

N\gg m,\quad N\to\infty
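A direct implementation of the unbiased estimator, checked on white noise, for which r_{xx}(0)=\sigma^2 and r_{xx}(m)\approx 0 at other lags (the signal choice is illustrative):

```python
import numpy as np

def acf_unbiased(x, max_lag):
    """Unbiased autocorrelation estimate r_hat(m) for m = 0..max_lag."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    # Divide each lag-m sum by N - m, matching the 1/(N-|m|) normalization.
    return np.array([np.sum(x[:N - m] * x[m:]) / (N - m)
                     for m in range(max_lag + 1)])

# Illustrative check on unit-variance white noise.
rng = np.random.default_rng(6)
x = rng.normal(0.0, 1.0, size=4096)
print(acf_unbiased(x, 4).round(3))   # ≈ [1, 0, 0, 0, 0]
```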

5.2 Biased autocorrelation function estimation

The estimation formula is as follows:

\hat r_{xx}(m)=\frac{1}{N}\sum_{n=0}^{N-|m|-1}x(n)x(n+m)

First, we compute:

E[\hat r_{xx}(m)]=\frac{1}{N}\sum_{n=0}^{N-|m|-1}E[x(n)x(n+m)]=\left(1-\frac{|m|}{N}\right)r_{xx}(m)

So the estimated bias is:

B=r_{xx}(m)-E[\hat r_{xx}(m)]=\frac{|m|}{N}r_{xx}(m)

The estimate is therefore biased, but asymptotically unbiased for fixed m as N\to\infty.

If x(n) is a real Gaussian signal with zero mean, the estimated variance is approximately:

var[\hat r_{xx}(m)]\approx\frac{1}{N}\sum_{n=-\infty}^{\infty}\left[r_{xx}^2(n)+r_{xx}(n+m)r_{xx}(n-m)\right]

Obviously the following limit holds:

\lim_{N\to\infty}var[\hat r_{xx}(m)]=0

So for a fixed m, \hat r_{xx}(m) is a consistent estimate of r_{xx}(m).
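A Monte Carlo check of the bias formula E[\hat r_{xx}(m)]=(1-|m|/N)r_{xx}(m), using the hypothetical signal x(n)=c+w(n) with unit-variance white noise w(n), for which r_{xx}(m)=c^2 at every nonzero lag:

```python
import numpy as np

rng = np.random.default_rng(7)

def acf_biased(x, max_lag):
    """Biased autocorrelation estimate: normalize by N, not N - |m|."""
    N = x.shape[-1]
    return np.stack([np.sum(x[..., :N - m] * x[..., m:], axis=-1) / N
                     for m in range(max_lag + 1)], axis=-1)

# Illustrative signal: x(n) = c + w(n) with c = 2, so r(m) = c^2 = 4
# for m != 0.  A small N makes the |m|/N bias clearly visible.
c, N, trials = 2.0, 8, 100000
x = c + rng.normal(size=(trials, N))
r_hat = acf_biased(x, 3).mean(axis=0)   # average over trials ≈ E[r_hat(m)]

for m in range(1, 4):
    print(f"m={m}: E[r_hat]≈{r_hat[m]:.3f}, "
          f"theory (1-{m}/{N})*4 = {(1 - m / N) * 4:.3f}")
```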

Taking the Fourier transform of the biased autocorrelation estimate gives:

\hat P_{xx}(e^{j\omega})=\sum_{m=-(N-1)}^{N-1}\hat r_{xx}(m)e^{-j\omega m}

In order to use the FFT to compute the linear (rather than circular) correlation, x(n) can be zero-extended to a 2N-point sequence, as follows:

x_{2N}(n)=\begin{cases}x(n), & 0\le n\le N-1\\0, & N\le n\le 2N-1\end{cases}

Letting l=n+m, we get:

\hat r_{xx}(m)=\frac{1}{N}\sum_{l=0}^{2N-1}x_{2N}(l)x_{2N}(l-m)

whose Fourier transform yields:

\hat P_{xx}(e^{j\omega})=\frac{1}{N}|X_{2N}(e^{j\omega})|^2

In the above formula, |X_{2N}(e^{j\omega})|^2 is the energy spectrum of the finite-length signal, and dividing by N gives the power spectrum.
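A numerical check of this transform pair (a sketch with an arbitrary white-noise sequence; np.correlate computes the lag sums directly, and the FFT length 2N avoids circular wrap-around):

```python
import numpy as np

rng = np.random.default_rng(8)

# The biased autocorrelation estimate and |X|^2/N are a Fourier pair
# when x(n) is zero-padded to 2N points.
N = 64
x = rng.normal(size=N)

# Direct biased estimate for all lags -(N-1)..(N-1).
r_hat = np.correlate(x, x, mode="full") / N        # length 2N - 1

# Zero-pad x to 2N points and form |X(k)|^2 / N.
X = np.fft.fft(x, 2 * N)
P = np.abs(X) ** 2 / N

# Place the lags on the circular 2N grid: index m holds lag m,
# index 2N - j holds lag -j.
r_padded = np.zeros(2 * N)
r_padded[:N] = r_hat[N - 1:]       # lags 0 .. N-1
r_padded[N + 1:] = r_hat[:N - 1]   # lags -(N-1) .. -1

print(np.allclose(np.fft.fft(r_padded).real, P))   # True
```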


Source: blog.csdn.net/forest_LL/article/details/124802708