Summary of the course formulas of "Modern Signal Processing" of the University of Chinese Academy of Sciences (1)

1. Discrete-time signals and systems

Unit sample sequence:

\delta (n)=\begin{cases}1&n=0\\0&n\neq0 \end{cases}

Unit step sequence:

u(n)=\begin{cases}1&n\geq0\\0&n<0 \end{cases}

In addition to the above two sequences, other commonly used discrete sequences include the sinusoidal sequence \sin(n\omega) and the rectangular sequence R_N(n).

Real-valued exponential sequence:

x(n)=a^n,\quad a\in R

Any sequence (for example, a real exponential sequence) can be represented via the unit sample sequence, using the sifting property of convolution:

x(n)=x(n)*\delta (n)=\sum_{k=-\infty}^\infty x(k)\delta(n-k)
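
The two basic sequences and the sifting property can be sketched numerically; this is a minimal illustration (the sequence length and the exponent base a are assumed values, not from the notes):

```python
def delta(n):
    """Unit sample sequence: 1 at n = 0, else 0."""
    return 1 if n == 0 else 0

def u(n):
    """Unit step sequence: 1 for n >= 0, else 0."""
    return 1 if n >= 0 else 0

# Example real exponential sequence a^n * u(n), truncated to n = 0..4
a = 0.5
x = [a**n * u(n) for n in range(5)]

# Rebuild x(n) as a weighted sum of shifted unit samples (sifting property)
rebuilt = [sum(x[k] * delta(n - k) for k in range(5)) for n in range(5)]
assert rebuilt == x
```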

2. Sampling theorem

The sampling theorem needs to satisfy the condition that the spectrum does not overlap, as follows:

\Omega_s\geq 2\Omega_{max}

(Placeholder: a detailed explanation will be added later.)
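
The need for \Omega_s\geq 2\Omega_{max} can be illustrated by aliasing: a tone at f and one at f+f_s produce identical samples. The rates below are assumed values for illustration only:

```python
import math

# Sampling cos(2*pi*f*t) at rate fs: a tone at f and one at f + fs
# are indistinguishable after sampling, i.e. their spectra overlap.
fs = 8.0          # sampling rate (illustrative)
f = 3.0           # in-band tone, f < fs/2
f_alias = f + fs  # out-of-band tone that aliases onto f

samples = [math.cos(2 * math.pi * f * n / fs) for n in range(8)]
aliased = [math.cos(2 * math.pi * f_alias * n / fs) for n in range(8)]

# The two sampled sequences coincide
assert all(abs(s - t) < 1e-9 for s, t in zip(samples, aliased))
```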

3. Fourier transform

Any periodic signal can be decomposed into an infinite number of sinusoidal signals with different frequencies, namely the Fourier series.

The positive transformation is as follows:

X(j\Omega)=\int_{-\infty}^\infty x(t)e^{-j\Omega t}dt

The inverse transformation is as follows:

x(t)=\frac{1}{2\pi}\int_{-\infty}^\infty X(j\Omega)e^{j\Omega t}d\Omega

The Fourier transform provides no joint time-frequency localization, so it is only suitable for stationary signals.
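
The discrete counterpart of this transform pair can be sketched in a few lines; this is a plain DFT/IDFT (a stand-in for the continuous integrals above, with an assumed test sequence):

```python
import cmath

def dft(x):
    """Analysis: X(k) = sum_n x(n) e^{-j 2 pi k n / N}."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Synthesis: x(n) = (1/N) sum_k X(k) e^{j 2 pi k n / N}."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

x = [0.0, 1.0, 2.0, 1.0]
x_rec = idft(dft(x))
# The forward/inverse pair recovers the original sequence
assert all(abs(a - b) < 1e-9 for a, b in zip(x, x_rec))
```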

4. Wavelet transform

Wavelet transform provides an adjustable analysis window on the time and frequency plane, as follows:

WT(a,b)=\frac{1}{\sqrt{a}}\int x(t)\phi^*\left(\frac{t-b}{a}\right)dt

Wavelet transform expands the concept of signal time-frequency analysis, and has adaptability to signal characteristics in terms of signal resolution.
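
A crude numerical sketch of WT(a,b): the integral is approximated by a Riemann sum, with the real Mexican-hat function standing in for \phi. The wavelet choice, integration limits, and step size are all illustrative assumptions:

```python
import math

def mexican_hat(t):
    """Real Mexican-hat wavelet (illustrative choice for phi)."""
    return (1 - t * t) * math.exp(-t * t / 2)

def wt(x, a, b, t0=-10.0, t1=10.0, dt=0.01):
    """Riemann-sum approximation of WT(a, b) = (1/sqrt(a)) int x(t) phi*((t-b)/a) dt."""
    total, t = 0.0, t0
    while t < t1:
        total += x(t) * mexican_hat((t - b) / a) * dt
        t += dt
    return total / math.sqrt(a)

# A Gaussian bump centred at t = 2 responds far more strongly when the
# analysis window is shifted to b = 2 than to b = -2.
bump = lambda t: math.exp(-(t - 2.0) ** 2)
assert abs(wt(bump, 1.0, 2.0)) > abs(wt(bump, 1.0, -2.0))
```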

5. Bilinear transformation

The bilinear transformation formula is derived from the digital simulation of the basic unit 1/s (an integrator) of the analog filter:

s=\frac{2(1-z^{-1})}{T(1+z^{-1})}

Rearranging this formula gives:

z=\frac{2/T+s}{2/T-s}
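
A quick numerical check of this mapping (T is set to 1 for illustration): points on the analog frequency axis s=j\Omega land on the unit circle, and stable analog poles (\mathrm{Re}\,s<0) land inside it:

```python
# Bilinear mapping z = (2/T + s) / (2/T - s); T = 1 is an assumed value.
T = 1.0

def s_to_z(s):
    return (2 / T + s) / (2 / T - s)

# The j*Omega axis maps onto the unit circle |z| = 1
for Omega in (0.1, 1.0, 10.0):
    assert abs(abs(s_to_z(1j * Omega)) - 1.0) < 1e-12

# A stable analog pole maps inside the unit circle
assert abs(s_to_z(-1.0 + 2.0j)) < 1.0
```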

6. Energy signal and power signal

6.1 Energy Signals

If the energy of the signal x(n) satisfies:

E=\sum_{n=-\infty}^\infty|x(n)|^2<\infty

Then the signal x(n) is called the energy-limited signal, referred to as the energy signal.

6.2 Power Signals

For periodic signals, random signals, and step signals, E is infinite, so such signals are usually studied through their average power instead. The average power of a signal x(n) is defined as follows:

P=\lim_{N\to \infty}\frac{1}{2N+1}\sum_{n=-N}^N x^2(n)

If P<\infty, then x(n) is called a power-limited signal, referred to as a power signal.
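
The distinction can be checked numerically: a decaying exponential has finite energy, while the unit step has infinite energy but finite average power. The truncation length N is an assumed computational cutoff:

```python
def energy(x, N=10000):
    """Truncated energy sum E = sum |x(n)|^2 over n = -N..N."""
    return sum(x(n) ** 2 for n in range(-N, N + 1))

def avg_power(x, N=10000):
    """Truncated average power P = (1/(2N+1)) sum x(n)^2 over n = -N..N."""
    return sum(x(n) ** 2 for n in range(-N, N + 1)) / (2 * N + 1)

decay = lambda n: 0.5 ** n if n >= 0 else 0.0  # energy signal
step = lambda n: 1.0 if n >= 0 else 0.0        # power signal

# Energy of 0.5^n u(n): geometric series, sum 0.25^n = 1/(1 - 0.25) = 4/3
assert abs(energy(decay) - 4.0 / 3.0) < 1e-9
# Average power of u(n) tends to 1/2
assert abs(avg_power(step) - 0.5) < 1e-3
```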

7. Random signals

If the value of the signal at each moment is a random variable, it is called a random signal, and it can also be called a random process, a random function, or a random sequence.

Random signals satisfy the following two properties:

  1. The value of a random signal at any time is a random variable that cannot be determined a priori;
  2. The value of a random signal can obey a certain statistical law and can be described statistically by the characteristics of a probability distribution;

Example

s(t)=\cos(\omega_0 t+\theta), where the random variable \theta obeys a uniform distribution on [-\pi,\pi]. Find its probability density function.

Solution:

The probability density function is as follows:

f(\theta)=\begin{cases}\frac{1}{2\pi},&-\pi\leq\theta\leq\pi\\0,&\text{otherwise} \end{cases}
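
As a sanity check, this density integrates to 1 over [-\pi,\pi]; a midpoint Riemann sum (step size is an illustrative choice) confirms it:

```python
import math

def f(theta):
    """Uniform density on [-pi, pi]."""
    return 1 / (2 * math.pi) if -math.pi <= theta <= math.pi else 0.0

# Midpoint Riemann sum of f over [-pi, pi]
dt = 1e-4
total = sum(f(-math.pi + (k + 0.5) * dt) * dt
            for k in range(int(2 * math.pi / dt)))
assert abs(total - 1.0) < 1e-3
```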

8. Error energy

8.1 Minimum Error Energy

Let x(n) and y(n) be energy signals, and scale y(n) by a factor a so that it matches x(n) as closely as possible; the resulting error energy can then be used to measure their similarity. The error energy is defined as follows:

\epsilon^2=\sum_{n=-\infty}^\infty[x(n)-ay(n)]^2

When a satisfies the following equation, \epsilon^2 takes its minimum value:

\frac{\partial\epsilon^2}{\partial a}=0

Solving for a gives:

a_{opt}=\frac{\sum_{n=-\infty}^\infty x(n)y(n)}{\sum_{n=-\infty}^\infty y^2(n)}

Substituting this value of a into the error energy yields its minimum:

\epsilon^2_{min}=\sum_{n=-\infty}^\infty x^2(n)-\frac{[\sum_{n=-\infty}^\infty x(n)y(n)]^2}{\sum_{n=-\infty}^\infty y^2(n)}

This result tells us that when a=a_{opt}, the error energy between x(n) and ay(n) is smallest, which can be understood as x(n) and ay(n) being most similar at that point.
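
The formulas for a_{opt} and \epsilon^2_{min} can be verified on a small example; the two sequences below are assumed values chosen so that y(n)=2x(n), which forces a_{opt}=1/2 and zero minimum error:

```python
# Two illustrative finite-length sequences with y = 2x
x = [1.0, 2.0, 3.0, 2.0]
y = [2.0, 4.0, 6.0, 4.0]

rxy = sum(a * b for a, b in zip(x, y))  # sum x(n) y(n)
Ex = sum(a * a for a in x)              # sum x^2(n)
Ey = sum(b * b for b in y)              # sum y^2(n)

a_opt = rxy / Ey                         # optimal scaling factor
eps_min = Ex - rxy ** 2 / Ey             # minimum error energy

assert abs(a_opt - 0.5) < 1e-12
assert abs(eps_min) < 1e-12              # y is proportional to x, so zero error
```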

8.2 Relative minimum error

The relative minimum error is defined as follows:

\bar \epsilon_{min}^2=\frac{\epsilon_{min}^2}{\sum_{n=-\infty}^\infty x^2(n)}

Substituting the minimum error found above and simplifying, we get:

\bar\epsilon_{min}^2=1-\frac{[\sum_{n=-\infty}^\infty x(n)y(n)]^2}{[\sum_{n=-\infty}^\infty x^2(n)\sum_{n=-\infty}^\infty y^2(n)]}

Extract the following representative quantity from the above formula:

\rho_{xy}=\frac{\sum_{n=-\infty}^\infty x(n)y(n)}{[\sum_{n=-\infty}^\infty x^2(n)\sum_{n=-\infty}^\infty y^2(n)]^{\frac{1}{2}}}

Schwarz's inequality:

|\sum_{n=-\infty}^\infty x(n)y(n)|\leq[\sum_{n=-\infty}^\infty x^2(n)\sum_{n=-\infty}^\infty y^2(n)]^\frac{1}{2}

According to the Schwarz inequality, we can conclude:

|\rho_{xy}|\leq1

Since x(n) and y(n) are energy signals, their energies

\sum_n x^2(n),\quad\sum_n y^2(n)

are finite, so their product is a constant:

E_{xy}=\sum_{n=-\infty}^\infty x^2(n)\times \sum_{n=-\infty}^\infty y^2(n)=\text{const}

So in fact the size of \rho_{xy} is completely determined by its numerator, which is denoted separately as follows:

r_{xy}=\sum_{n=-\infty}^\infty x(n)y(n)

So \rho_{xy} can be expressed as follows:

\rho_{xy}=\frac{r_{xy}}{\sqrt{E_{xy}}}

In fact, both \rho_{xy} and r_{xy} characterize the correlation between the signals x(n) and y(n).

At this point, the following five conclusions can be drawn:

  • When r_{xy}=\sqrt{E_{xy}}, \rho_{xy}=1 and \bar\epsilon_{min}^2=0, meaning x(n) and y(n) are most similar;
  • When r_{xy}=0, \rho_{xy}=0 and \bar\epsilon_{min}^2=1, meaning x(n) and y(n) are completely dissimilar;
  • Both \rho_{xy} and r_{xy} can be used to describe the similarity between x(n) and y(n);
  • \rho_{xy} is the normalized correlation coefficient and ranges between \pm1;
  • r_{xy} reflects the actual degree of similarity between x(n) and y(n);
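
The conclusions above can be checked numerically; the sequences here are illustrative, and the Schwarz inequality guarantees |\rho_{xy}|\leq1 in every case:

```python
import math

def rho(x, y):
    """Normalized correlation coefficient rho_xy = r_xy / sqrt(E_xy)."""
    rxy = sum(a * b for a, b in zip(x, y))
    return rxy / math.sqrt(sum(a * a for a in x) * sum(b * b for b in y))

x = [1.0, 2.0, 3.0]
r_same = rho(x, [2.0, 4.0, 6.0])   # proportional sequences: rho = 1
r_orth = rho(x, [1.0, 1.0, -1.0])  # r_xy = 1 + 2 - 3 = 0, so rho = 0

assert abs(r_same - 1.0) < 1e-12
assert abs(r_orth) < 1e-12
assert abs(r_same) <= 1.0 and abs(r_orth) <= 1.0  # Schwarz bound
```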

Original post: blog.csdn.net/forest_LL/article/details/124696229