Estimation Theory (5): Definition of BLUE (6.3)

Excerpt from Steven M. Kay, "Fundamentals of Statistical Signal Processing: Estimation Theory".
  In practical applications it frequently happens that the MVU estimator, even if it exists, cannot be found. For example, we may not know the PDF of the data, or may be unwilling to hypothesize a model for it. In that case our previous methods, which rely on the CRLB and on the theory of sufficient statistics, are no longer applicable. Even when the PDF is known, the latter approach is not guaranteed to produce the MVU estimator.
  Since we cannot determine the optimal MVU estimator, it is reasonable to resort to a suboptimal one. In doing so we never know how much performance may have been lost (because the minimum variance attained by the MVU estimator is unknown). However, if the variance of the suboptimal estimator can be determined, and if it meets our system specifications, we may consider it adequate for the problem at hand. If its variance is too large, we must look for other suboptimal estimators in the hope of finding one that meets the specifications.
  A common approach is to restrict the estimator to be linear in the data and then find the linear unbiased estimator with minimum variance, termed the best linear unbiased estimator (BLUE). As described below, the BLUE can be determined with knowledge of only the first and second moments of the PDF. Since complete knowledge of the PDF is not required, the BLUE is often simpler to implement in practice.

This chapter introduces how to determine the BLUE, a suboptimal estimator, from the first and second moments of the PDF. The BLUE is constrained to be linear in the data samples.

6.3 Definition of BLUE

  For the data set $x[0],x[1],\ldots,x[N-1]$, whose PDF $p(\mathbf{x};\theta)$ depends on an unknown parameter $\theta$, the BLUE is restricted to the form
$$\hat\theta=\sum_{n=0}^{N-1}a_nx[n]\tag{6.1}$$
Clearly $\hat\theta$ is linear in the data, that is, a linear combination of the samples of $\mathbf{x}$. Different choices of the coefficients $a_n$ yield different estimators. The BLUE is the choice that makes the estimator unbiased and its variance minimal.
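
As a minimal numerical sketch (my illustration, not from Kay's text), the following Monte Carlo check uses the coefficients $a_n=1/N$ in (6.1), i.e., the sample mean, and confirms that for a DC level $A$ in WGN this linear estimator is unbiased with variance $\sigma^2/N$:

```python
# Minimal sketch: a linear estimator of the form (6.1) with a_n = 1/N
# (the sample mean), applied to x[n] = A + w[n] with w[n] ~ WGN of power sigma2.
import numpy as np

rng = np.random.default_rng(0)
A, sigma2, N, trials = 3.0, 2.0, 50, 100_000

a = np.full(N, 1.0 / N)                                     # coefficients a_n
x = A + rng.normal(0.0, np.sqrt(sigma2), size=(trials, N))  # data records
theta_hat = x @ a                                           # sum_n a_n x[n] per record

print("mean of estimates:    ", theta_hat.mean())  # ~ A, so unbiased
print("variance of estimates:", theta_hat.var())   # ~ sigma2 / N
print("theoretical variance: ", sigma2 / N)
```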
  Let us now discuss the optimality of the BLUE.

  • The BLUE is optimal (that is, it is the MVU estimator) only when the MVU estimator happens to be linear.
      As shown in Figure 6.1(a), for the estimation of a DC level in WGN (Example 3.3), the MVU estimator is linear, so the BLUE is the MVU estimator. In Figure 6.1(b), for the estimation of the mean of uniformly distributed noise (Example 6.8), the MVU estimator is nonlinear, so the BLUE (the sample mean) is not the MVU estimator; a numerical illustration follows the figure.

[Figure 6.1: The BLUE versus the MVU estimator: (a) DC level in WGN, where the BLUE is the MVU estimator; (b) mean of uniform noise, where it is not]
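
Here is a self-contained sketch of case (b). The uniform model below is my choice for concreteness and may differ in detail from Kay's Example 6.8: for $x[n]\sim\mathcal{U}(0,\beta)$ the sample mean is a linear unbiased estimator of the mean $\beta/2$, while the nonlinear estimator $\frac{N+1}{2N}\max_n x[n]$ is also unbiased but has noticeably smaller variance, so the best linear estimator is not the MVU estimator:

```python
# Sketch (assumed model, not Kay's exact example): estimating the mean beta/2
# of x[n] ~ U(0, beta). The sample mean is linear and unbiased; the
# order-statistic estimator (N+1)/(2N) * max(x) is nonlinear, unbiased,
# and has lower variance.
import numpy as np

rng = np.random.default_rng(1)
beta, N, trials = 4.0, 20, 200_000

x = rng.uniform(0.0, beta, size=(trials, N))
linear = x.mean(axis=1)                          # candidate BLUE (sample mean)
nonlinear = (N + 1) / (2 * N) * x.max(axis=1)    # unbiased, but not linear

print("true mean:", beta / 2)
print("linear    mean / var:", linear.mean(), linear.var())        # var ~ beta^2/(12N)
print("nonlinear mean / var:", nonlinear.mean(), nonlinear.var())  # smaller variance
```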

  • The BLUE may be totally inappropriate for some problems.
      For example, consider the estimation of the power of WGN. We know that the MVU estimator (Example 3.6)
    $$\hat\sigma^2=\frac{1}{N}\sum_{n=0}^{N-1}x^2[n]$$
    is nonlinear in the data. If we force the estimator to be linear (Problem 6.1),
    $$\hat\sigma^2=\frac{1}{N}\sum_{n=0}^{N-1}a_nx[n],$$
    then its expected value is
    $$\mathrm{E}(\hat\sigma^2)=\frac{1}{N}\sum_{n=0}^{N-1}a_n\mathrm{E}(x[n])=0$$
    since the noise has zero mean. Clearly, no choice of the $a_n$ yields an unbiased estimator, so the BLUE is unsuitable for this problem. However, if we transform the data to $y[n]=x^2[n]$, we can use a linear estimator of the form
    $$\hat\sigma^2=\frac{1}{N}\sum_{n=0}^{N-1}a_ny[n]=\frac{1}{N}\sum_{n=0}^{N-1}a_nx^2[n].$$
    Since $\mathrm{E}(x^2[n])=\sigma^2$, the unbiasedness requirement becomes
    $$\mathrm{E}(\hat\sigma^2)=\frac{1}{N}\sum_{n=0}^{N-1}a_n\sigma^2=\sigma^2,$$
    which is satisfied, for example, by $a_n=1$ for all $n$, recovering the MVU estimator above; a numerical check follows.
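
The following sketch (illustrative, with the model as in the text) checks both halves of the argument numerically: an estimator linear in the zero-mean samples $x[n]$ averages to 0 and cannot be unbiased for $\sigma^2$, while the estimator linear in $y[n]=x^2[n]$ with $a_n=1$ is unbiased:

```python
# Sketch: estimating the power sigma^2 of zero-mean WGN. A linear function
# of x[n] has expectation 0; after the transformation y[n] = x^2[n], the
# choice a_n = 1 gives the unbiased estimator (1/N) * sum x^2[n].
import numpy as np

rng = np.random.default_rng(2)
sigma2, N, trials = 2.0, 50, 100_000

x = rng.normal(0.0, np.sqrt(sigma2), size=(trials, N))

linear_in_x = x.mean(axis=1)          # a_n = 1 on x[n]: expectation ~ 0
linear_in_y = (x ** 2).mean(axis=1)   # a_n = 1 on y[n] = x^2[n]

print("E[linear in x]:", linear_in_x.mean())  # ~ 0, cannot be unbiased for sigma^2
print("E[linear in y]:", linear_in_y.mean())  # ~ sigma2, unbiased
print("true sigma^2:  ", sigma2)
```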

Source: blog.csdn.net/tanghonghanhaoli/article/details/108194516