Magnitude-squared coherence (MSC): probability density of the MSC estimate

Introduction to coherence

The coherence function between two jointly stationary random processes x(t) and y(t) is the cross-power spectrum divided by the square root of the product of the two auto-power spectra. Specifically, the complex coherence is defined as

$$\gamma_{xy}(f)=\frac{G_{xy}(f)}{\sqrt{G_{xx}(f)\,G_{yy}(f)}},$$

where the complex cross-power spectrum

$$G_{xy}(f)=\int_{-\infty}^{\infty} R_{xy}(\tau)\,e^{-j2\pi f\tau}\,d\tau$$

is the Fourier transform of the cross-correlation function

$$R_{xy}(\tau)=E\!\left[x(t)\,y(t+\tau)\right].$$

Here x and y are real-valued, and E denotes mathematical expectation (for ergodic stochastic processes, the ensemble average can be replaced by a time average). Coherence is thus a normalized cross-power spectral density; the magnitude-squared coherence (MSC) is defined as

$$C_{xy}(f)=\left|\gamma_{xy}(f)\right|^{2}=\frac{\left|G_{xy}(f)\right|^{2}}{G_{xx}(f)\,G_{yy}(f)}.$$

The coherence function has applications in many fields, including system identification, signal-to-noise ratio (SNR) measurement, and time-delay estimation. Coherence, and the MSC in particular, is useful only if its value can be estimated accurately, so understanding the statistics of the estimator is highly desirable. This section therefore introduces the coherence function; the following sections describe how the MSC is estimated and the statistics of the estimator.

An interesting interpretation of coherence, and of the MSC in particular, is as a measure of the degree to which the two processes are linearly related. To illustrate this, consider Figure 1:

The sample function y(t) of an arbitrary stationary random process is modeled as the response of a linear filter driven by x(t) plus an error component e(t). When the linear filter is chosen to minimize the mean-square value of e(t), that is, the area under the error spectrum, its output is the part of y(t) that is linearly related to x(t). The spectrum of e(t) is given by

$$G_{ee}(f)=G_{yy}(f)-H(f)\,G_{xy}^{*}(f)-H^{*}(f)\,G_{xy}(f)+\left|H(f)\right|^{2}G_{xx}(f),$$

where * denotes complex conjugation and H(f) is the filter transfer function.

The error spectrum is minimized by choosing the optimal filter

$$H_{o}(f)=\frac{G_{xy}(f)}{G_{xx}(f)}.$$

Note the relationship between the coherence and this optimal linear filter:

$$H_{o}(f)=\gamma_{xy}(f)\,\sqrt{\frac{G_{yy}(f)}{G_{xx}(f)}}.$$

These results hold regardless of the source of y(t). When the linear filter is optimal in the mean-square sense, the error is uncorrelated with x(t); that is,

$$G_{xe}(f)=0.$$

Furthermore, the minimum value of G_ee(f) is given by

$$\min_{H}\,G_{ee}(f)=G_{yy}(f)\left[1-C_{xy}(f)\right].$$

It can therefore be seen that C_xy(f) represents the fraction of G_yy(f) contributed by the component of y(t) that is linearly related to x(t), while 1 - C_xy(f) is the fraction contributed by the error, that is, the component of y(t) not linearly related to x(t). These results can be applied to the configurations shown in Figures 2 and 3.
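As a quick check of this interpretation (and of the MSC definition above), the sketch below builds the spectra analytically for the model y(t) = h(t)*x(t) + n(t), with n(t) uncorrelated with x(t); the input spectrum, transfer function, and noise level are arbitrary illustrative choices, not values from the text. It verifies that G_yy(f)[1 - C_xy(f)] reproduces the error (noise) spectrum and that G_xy/G_xx recovers H.

% Analytic check of the linear-filter interpretation of the MSC.
% Model: y(t) = h(t)*x(t) + n(t), with n(t) uncorrelated with x(t).
f   = linspace(0, 0.5, 512);            % normalized frequency grid
Gxx = ones(size(f));                    % input auto-spectrum (arbitrary choice)
H   = 1 ./ (1 + 1j*10*pi*f);            % example transfer function
Gnn = 0.2*ones(size(f));                % additive-noise spectrum (arbitrary)

Gxy = H .* Gxx;                         % cross-spectrum for this model
Gyy = abs(H).^2 .* Gxx + Gnn;           % output auto-spectrum
C   = abs(Gxy).^2 ./ (Gxx .* Gyy);      % magnitude-squared coherence

Ho  = Gxy ./ Gxx;                       % optimal (minimum mean-square) filter
Gee = Gyy .* (1 - C);                   % G_yy(1-C): minimized error spectrum
disp(max(abs(Gee - Gnn)))               % ~0: error spectrum equals noise spectrum
disp(max(abs(Ho - H)))                  % ~0: optimal filter recovers H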

 

Statistical Analysis of MSC Estimation by Welch Overlapped Segment Averaging (WOSA)

A. Introduction

Much of the historical work on the statistics of MSC estimation centers on the WOSA method; with a reasonable interpretation of the variables, these results also apply to the lag-reshaping method. Recall that the WOSA method begins by obtaining two finite time series from the stochastic processes under investigation. Each time series, sampled at equally spaced points, is divided into segments of equal length. The segments may overlap; however, the statistics were developed analytically for non-overlapping segments, and empirical results are given for overlapping segments. The samples of each segment are multiplied by a weighting function, and an FFT of the weighted sequence is computed. The Fourier coefficients of the weighted segments are then used to estimate the auto-power and cross-power spectral densities, and the resulting spectral density estimates are combined to form the MSC estimate.
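As a concrete illustration of the procedure just described, here is a minimal sketch of a WOSA MSC estimate with non-overlapping, Hann-weighted segments (the Signal Processing Toolbox function mscohere implements the same idea). The signals, filter, segment length, and sampling rate are illustrative assumptions, not values from the text.

% Minimal sketch of the WOSA MSC estimate described above, using
% non-overlapping, Hann-weighted segments. The signals, filter, and
% segment length below are made-up examples, not values from the text.
fs = 1000; N = 8*fs;
x  = randn(1, N);                               % input process
y  = filter([1 0.5], 1, x) + 0.5*randn(1, N);   % linearly related output + noise

L  = 256;                                       % segment length (samples)
w  = 0.5 - 0.5*cos(2*pi*(0:L-1)/L);             % Hann weighting function
nd = floor(N/L);                                % number of non-overlapping segments
Gxx = zeros(1, L); Gyy = zeros(1, L); Gxy = zeros(1, L);
for m = 1:nd
    idx = (m-1)*L + (1:L);
    X = fft(w .* x(idx)); Y = fft(w .* y(idx));
    Gxx = Gxx + abs(X).^2;                      % accumulate auto-spectra
    Gyy = Gyy + abs(Y).^2;
    Gxy = Gxy + conj(X) .* Y;                   % accumulate cross-spectrum
end
C_hat = abs(Gxy).^2 ./ (Gxx .* Gyy);            % MSC estimate at each FFT bin
f = (0:L-1)*fs/L;
plot(f(1:L/2), C_hat(1:L/2)); xlabel('f (Hz)'); ylabel('estimated MSC')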

The spectral resolution of the estimate varies inversely with the segment length T. Appropriate weighting or "windowing" of the T-second segments also helps achieve good sidelobe suppression. On the other hand, for independent segments with ideal windows, the bias and variance of the MSC estimate are inversely proportional to the number of segments n. Therefore, to produce good estimates from a limited amount of data, segment overlap can be used to increase n for a given T. When the segments are disjoint, that is, non-overlapping, the number of segments is denoted n_d. However, as the overlap percentage increases, the computational requirements grow rapidly, while the improvement levels off because of the increased correlation between data segments. A small illustration of this trade-off follows.
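% Illustration of the segment-count trade-off: for a fixed record length N
% and segment length L (made-up values), more overlap means more segments.
N = 8192; L = 256;
for ov = [0 0.25 0.5 0.75]
    step = round(L*(1 - ov));               % hop between segment start points
    nseg = floor((N - L)/step) + 1;         % number of segments that fit
    fprintf('overlap = %2.0f%%  ->  %d segments\n', 100*ov, nseg);
end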

B. Probability density of the MSC estimate

The first-order probability density and distribution functions of the MSC estimate, given the true MSC value C and the number of independent segments n_d, are given in Table 1. Symbolically, \hat{C} denotes the estimate of C. Equations (1b) and (1c) in this table are convenient to use because the hypergeometric functions they contain reduce to finite polynomials (of order at most n_d - 1).
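For reference, the density (1b) and distribution (1d) as evaluated by the code later in this post can be written as

$$p\!\left(\hat C \mid C, n_d\right)=(n_d-1)\,(1-C)^{n_d}\,(1-\hat C)^{\,n_d-2}\,(1-C\hat C)^{\,1-2n_d}\;{}_2F_1\!\left(1-n_d,\,1-n_d;\,1;\,C\hat C\right),$$

$$P\!\left(\hat C \mid C, n_d\right)=\hat C\left(\frac{1-C}{1-C\hat C}\right)^{n_d}\sum_{k=0}^{n_d-2}\left(\frac{1-\hat C}{1-C\hat C}\right)^{k}{}_2F_1\!\left(-k,\,1-n_d;\,1;\,C\hat C\right),$$

for 0 ≤ \hat C ≤ 1. (This is a transcription of the expressions implemented in the MATLAB code below, not of Table 1 itself.)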

Figures 5 and 6 show the probability density and distribution functions for several cases, calculated from (1b) and (1d) in Table 1. It is evident from Figure 6 that as n_d increases, the variance of the MSC estimate decreases.

The resulting bias and variance expressions are shown in Table 2. Approximations (2c) and (2d) result from truncating the series (2a) and (2b). Equations (2e)-(2g) apply for large n_d; they show that the MSC estimate is asymptotically unbiased, and the following conclusions can be drawn for large n_d (a rough numerical check follows the list):

1) The bias is largest when the MSC equals 0 and smallest, namely zero, when the MSC equals 1.

2) The variance is zero when the MSC equals 1 and largest when the MSC equals 1/3.

3) When the MSC is nonzero, the mean-square error of the estimate about the true value is equal to the variance.
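A rough numerical check of conclusions 1) and 2): the snippet below integrates the density (1b) on a grid to obtain the bias and variance of the estimate as functions of the true MSC. Here n_d = 32 is an illustrative choice, and hypergeom requires the Symbolic Math Toolbox, as in the code below. The bias should peak at C = 0 and vanish as C approaches 1, and the variance should peak near C = 1/3.

% Numerical bias and variance of the MSC estimate from the density (1b)
nd = 32; Chat = 0.001:0.002:0.999;            % grid over the MSC estimate
C_true = 0:0.05:0.95;
bias = zeros(size(C_true)); vr = zeros(size(C_true));
for i = 1:length(C_true)
    C = C_true(i);
    p = (nd-1)*(1-C)^nd*(1-Chat).^(nd-2).*(1-C*Chat).^(1-2*nd)...
        .*hypergeom([1-nd,1-nd],1,C*Chat);    % density (1b)
    m1 = trapz(Chat, Chat.*p);                % E[Chat]
    bias(i) = m1 - C;                         % bias of the MSC estimate
    vr(i)   = trapz(Chat, (Chat - m1).^2.*p); % variance of the MSC estimate
end
figure; plot(C_true, bias, C_true, vr)
legend('bias', 'variance'); xlabel('true MSC C')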

 

 Related code

clear
% PDF and CDF of the MSC estimate for true MSC C = 0, 0.3, 0.6, 0.9 (nd = 32)
nd = 32; C_list = [0 0.3 0.6 0.9];
estimate_C = 0:0.01:1;              % grid of MSC estimate values
figure(1); hold on
for i = 1:length(C_list)
    C = C_list(i);
    % density (1b); hypergeom requires the Symbolic Math Toolbox
    PDF = (nd-1)*((1-C).^nd)*((1-estimate_C).^(nd-2))...
        .*((1-C.*estimate_C).^(1-2*nd))...
        .*(hypergeom([1-nd,1-nd],1,C*estimate_C));
    PDF = PDF/max(PDF);             % normalized to unit peak for display
    plot(estimate_C, PDF)
end
xlabel('estimate C'); ylabel('PDF (normalized)')
legend('C=0', 'C=0.3', 'C=0.6', 'C=0.9')

figure(2); hold on
for j = 1:length(C_list)
    C = C_list(j);
    % distribution (1d): finite sum of hypergeometric polynomials
    CDF1 = hypergeom([0,1-nd],1,C*estimate_C);      % k = 0 term (equals 1)
    for k = 1:nd-2
        CDF1 = CDF1 + (((1-estimate_C)./(1-C*estimate_C)).^k)...
            .*hypergeom([(-k),1-nd],1,C*estimate_C);
    end
    CDF = estimate_C.*(((1-C)./(1-C*estimate_C)).^nd).*CDF1;
    CDF = CDF/max(CDF);             % force CDF(1) = 1 (guards against round-off)
    plot(estimate_C, CDF)
end
xlabel('estimate C'); ylabel('CDF')
legend('C=0', 'C=0.3', 'C=0.6', 'C=0.9')

% PDF and CDF for C = 0.3 with nd = 32 and nd = 64
estimate_C = 0:0.01:1;
C = 0.3; nd_list = [32 64];
figure(3); hold on
for i = 1:length(nd_list)
    nd = nd_list(i);
    PDF = (nd-1)*((1-C).^nd)*((1-estimate_C).^(nd-2))...
        .*((1-C.*estimate_C).^(1-2*nd))...
        .*(hypergeom([1-nd,1-nd],1,C*estimate_C));
    plot(estimate_C, PDF)
end
xlabel('estimate C'); ylabel('PDF')
legend('nd=32', 'nd=64')

figure(4); hold on
for j = 1:length(nd_list)
    nd = nd_list(j);
    CDF1 = hypergeom([0,1-nd],1,C*estimate_C);      % k = 0 term
    for k = 1:nd-2
        CDF1 = CDF1 + (((1-estimate_C)./(1-C*estimate_C)).^k)...
            .*hypergeom([(-k),1-nd],1,C*estimate_C);
    end
    CDF = estimate_C.*(((1-C)./(1-C*estimate_C)).^nd).*CDF1;
    CDF = CDF/max(CDF);
    plot(estimate_C, CDF)
end
xlabel('estimate C'); ylabel('CDF')
legend('nd=32', 'nd=64')

 Simulation results

 

If you have any questions, please leave a message~~ 

 

 
