Postgraduate Mathematics Notes 1: Subject Division Framework

Calculus

Calculus taught me why the area of an ellipse is $\pi ab$.
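This can be checked numerically: writing the upper half of the ellipse as $y = b\sqrt{1-(x/a)^2}$, the area $4\int_0^a b\sqrt{1-(x/a)^2}\,dx$ comes out to $\pi ab$. A minimal sketch (the values of $a$ and $b$ are arbitrary):

```python
import math

def ellipse_area(a, b, n=100_000):
    """Midpoint-rule approximation of 4 * integral_0^a b*sqrt(1-(x/a)^2) dx."""
    h = a / n
    return 4 * h * sum(b * math.sqrt(1 - ((i + 0.5) * h / a) ** 2) for i in range(n))

a, b = 3.0, 2.0
print(ellipse_area(a, b))   # should be close to pi*a*b
print(math.pi * a * b)
```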

Implicit function differentiation rule

How to understand extrema of a function via Lagrange multipliers?

Everything can be integrated (does every function have an antiderivative?) - number is the universe
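On "does every function have an antiderivative": every continuous function does, by the fundamental theorem of calculus, even when the antiderivative is not elementary. A small numeric sketch with $e^{-x^2}$ (whose antiderivative is non-elementary): build $F(x)=\int_0^x e^{-t^2}\,dt$ by quadrature and check that its derivative recovers the integrand.

```python
import math

def f(t):
    return math.exp(-t * t)   # no elementary antiderivative

def F(x, n=100_000):
    """The antiderivative guaranteed by the FTC: F(x) = integral of f from 0 to x."""
    h = x / n
    return h * sum(f((i + 0.5) * h) for i in range(n))

eps = 1e-4
deriv = (F(1 + eps) - F(1 - eps)) / (2 * eps)   # central difference for F'(1)
print(deriv, f(1))   # the two values should be close
```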

Linear Algebra


  • It is doubtless that GitMind & Blog are the best places for taking notes!

Probability Theory

  1. What is the negative binomial distribution?
  2. Density functions, means, and variances of the three major distributions - in essence this is calculation; the key point is memorization (with understanding)

  1. The chi-square distribution originates from the $\tau$ function (i.e. the Gamma function, usually written $\Gamma$): $\tau(x)=\int_0^{+\infty} e^{-t}t^{x-1}\,dt$, with property 1: $\tau(x+1)=x\,\tau(x)$.
    This produces the density $k_n(x)=\dfrac{e^{-x/2}\,x^{(n-2)/2}}{\tau(n/2)\,2^{n/2}}$ for $x>0$, recorded as $\chi_n^2$, satisfying the law of addition;

Proof: the probability density of the square of a standard normal variable is the same as the chi-square density with one degree of freedom; the general case follows by induction in the calculation, using the law of addition;
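The $n=1$ case can be checked numerically: by the change-of-variables formula, $Z^2$ with $Z\sim N(0,1)$ has density $e^{-x/2}/\sqrt{2\pi x}$, which should coincide with $k_1(x)$. A minimal sketch:

```python
import math

def chi2_pdf(x, n):
    """Chi-square density k_n(x) = e^{-x/2} x^{(n-2)/2} / (Gamma(n/2) 2^{n/2})."""
    return math.exp(-x / 2) * x ** ((n - 2) / 2) / (math.gamma(n / 2) * 2 ** (n / 2))

def sq_normal_pdf(x):
    """Density of Z^2 for Z ~ N(0,1), by the change-of-variables formula."""
    return math.exp(-x / 2) / math.sqrt(2 * math.pi * x)

for x in (0.5, 1.0, 2.0, 5.0):
    print(x, chi2_pdf(x, 1), sq_normal_pdf(x))   # the two columns agree
```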

Corollary 1: if $X_i$ follows the exponential distribution with rate $\lambda$, then $2\lambda X_i$ follows $\chi_2^2$
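A Monte Carlo sanity check of this corollary (the rate $\lambda = 0.7$ is an arbitrary choice): $\chi_2^2$ has mean 2 and variance 4, so $2\lambda X_i$ should show the same moments.

```python
import math
import random

random.seed(0)
lam = 0.7            # hypothetical rate of the exponential distribution
n = 200_000
# X ~ Exp(lam) by inverse transform; Y = 2*lam*X should follow chi-square with 2 df
ys = [2 * lam * (-math.log(1 - random.random()) / lam) for _ in range(n)]
mean_y = sum(ys) / n
var_y = sum((y - mean_y) ** 2 for y in ys) / n
print(mean_y, var_y)   # chi-square(2) has mean 2 and variance 4
```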

  • The chi-square distribution in turn produces the t distribution and the F distribution

  1. The normal distribution is the approximation, as $n \to +\infty$, of the (standardized) sum of independent and identically distributed variables (and of other statistics)
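A quick simulation of this approximation, using Uniform(0,1) summands (an arbitrary choice): the standardized sum of $n = 30$ of them already matches the normal CDF closely.

```python
import random
from statistics import NormalDist

random.seed(1)
n, trials = 30, 100_000
mu, sigma = 0.5, (1 / 12) ** 0.5   # mean and std of Uniform(0,1)

def standardized_sum():
    s = sum(random.random() for _ in range(n))
    return (s - n * mu) / (sigma * n ** 0.5)

hits = sum(standardized_sum() <= 1.0 for _ in range(trials))
print(hits / trials)          # empirical P(standardized sum <= 1)
print(NormalDist().cdf(1.0))  # normal approximation, ~0.841
```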

Mathematical Statistics

Common pipeline:

  1. Given the variable's distribution and a known sample, estimate the parameters (point estimation) or judge the parameters (parameter test);
    Bayes estimation obtains the estimated value $\leftarrow$ the conditional probabilities sum to 1
  2. With the distributions of the parameters and variables known (or forced via the large-sample method / Bayes), estimate the sample interval at a given probability (interval estimation);
  3. With the parameters and sample known, one can also judge the confidence in the variable's distribution (goodness-of-fit test).
  • Visibly, the 3 pivots = variable distribution, parameters, sample
  • The three major distributions exist for the sake of the subsequent interval estimation and parameter testing
  1. How to obtain a Bayes estimate of the variance of a normal distribution?
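Step 1 of the pipeline (point estimation) in its simplest form, with a hypothetical normal sample: the sample mean estimates $\mu$ and the sample standard deviation estimates $\sigma$.

```python
import random
import statistics

random.seed(2)
true_mu, true_sigma = 10.0, 3.0   # hypothetical "unknown" parameters
sample = [random.gauss(true_mu, true_sigma) for _ in range(5_000)]

mu_hat = statistics.fmean(sample)      # point estimate of mu (also the MLE)
sigma_hat = statistics.stdev(sample)   # estimate of sigma via the unbiased sample variance
print(mu_hat, sigma_hat)   # should land near 10 and 3
```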

Process of a hypothesis test for a parameter at a specified level $\alpha$:

  • Given a sample and a hypothesis (about the distribution of the variable)
  • Type I error = the parameters fit the distribution ($H_0$ holds), but the sample deviates from the norm
  1. Write down the Type I error probability (based on the given hypothesis $H_0$ and the rejection region, e.g. $\overline X \geq C$)
  2. Transform it identically into a statement about a sample statistic, using the classical conclusions corresponding to the distribution functions of the three major statistics,
  3. According to "with level $\alpha$" (for every parameter value satisfying $H_0$), require the maximum value of this probability to be $\leq \alpha$; solving the resulting single-variable equation yields the parameter $C$ of the test
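A worked instance of this process, under the hypothetical setting of a normal mean test with known $\sigma$: solving $P(\overline X \geq C) = \alpha$ under $H_0$ gives $C = \mu_0 + z_{1-\alpha}\,\sigma/\sqrt{n}$.

```python
from statistics import NormalDist

# Hypothetical setting: X ~ N(mu, sigma^2) with sigma known,
# H0: mu <= mu0, rejection region {X-bar >= C}, significance level alpha.
mu0, sigma, n, alpha = 5.0, 2.0, 25, 0.05

# Worst case in H0 is mu = mu0, where X-bar ~ N(mu0, sigma^2/n), so
# P(X-bar >= C) = alpha  <=>  C = mu0 + z_{1-alpha} * sigma / sqrt(n)
z = NormalDist().inv_cdf(1 - alpha)
C = mu0 + z * sigma / n ** 0.5
print(C)   # critical value of the test

# Sanity check: the Type I error probability at mu0 equals alpha
p = 1 - NormalDist(mu0, sigma / n ** 0.5).cdf(C)
print(p)   # equals alpha up to floating point
```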


Origin: blog.csdn.net/shuia64649495/article/details/132125392