Calculus
Calculus taught me why the area of an ellipse is $\pi ab$.
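A quick numerical check of that result: the area is $4\int_0^a b\sqrt{1-(x/a)^2}\,dx$, which the sketch below approximates with a midpoint Riemann sum (`ellipse_area` is my own helper name, not a library function):

```python
import math

def ellipse_area(a, b, n=100_000):
    """Approximate 4 * integral_0^a b*sqrt(1 - (x/a)^2) dx
    with the midpoint rule (a simple Riemann sum)."""
    h = a / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h          # midpoint of the i-th subinterval
        total += b * math.sqrt(1.0 - (x / a) ** 2)
    return 4.0 * h * total

a, b = 3.0, 2.0
approx = ellipse_area(a, b)
print(approx, math.pi * a * b)     # the two values should nearly agree
```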
Implicit function differentiation rule
How to understand Lagrange's multiplier method for function extrema?
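One way in is a minimal worked example (my own choice of $f$ and constraint, purely for illustration): maximize $f(x,y)=xy$ subject to $g(x,y)=x+y-1=0$. Setting $\nabla f = \lambda \nabla g$ gives $y=\lambda$ and $x=\lambda$, so $x=y=1/2$ and the constrained maximum is $1/4$. A brute-force scan along the constraint line confirms this:

```python
# Lagrange multipliers by hand for f(x, y) = x*y subject to x + y = 1:
# grad f = (y, x), grad g = (1, 1), so y = lambda and x = lambda,
# hence x = y = 1/2 and the constrained maximum is f(1/2, 1/2) = 1/4.
# Cross-check numerically by scanning points on the constraint line.
def constrained_max(n=10_000):
    best = float("-inf")
    for i in range(n + 1):
        x = i / n          # x in [0, 1]; y = 1 - x stays on the constraint
        y = 1.0 - x
        best = max(best, x * y)
    return best

print(constrained_max())   # ≈ 0.25, attained at x = y = 1/2
```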
All things can be integrated (can every function's antiderivative be found?); numbers are the universe
Linear Algebra
- Doubtless, Gitmind & blog are the best places for taking notes!
Probability theory
- What is the negative binomial distribution?
- The density functions, means, and variances of the three major distributions: the essence is calculation; the key point is memorization (with understanding)
- The chi-square distribution originates from the Gamma function $\Gamma(x)=\int_0^{+\infty} e^{-t}\,t^{x-1}\,dt$; property 1: $\Gamma(x+1)=x\,\Gamma(x)$.
This produces $k_n(x)=\dfrac{e^{-x/2}\,x^{(n-2)/2}}{\Gamma(n/2)\,2^{n/2}}$ for $x>0$, denoted $\chi_n^2$, which satisfies the additivity law;
Proof: by a change of variables, the square of a standard normal variable has the same density as the chi-square distribution with one degree of freedom; then, by induction using additivity (convolution), the sum of $n$ such squares has the $\chi_n^2$ density;
Corollary 1: if $X_i$ follows an exponential distribution with rate $\lambda$, then $2\lambda X_i$ follows $\chi_2^2$
- The chi-square distribution in turn produces the t distribution and the F distribution
- The normal distribution is, as $n \rightarrow +\infty$, an approximation of the (normalized) sum of independent and identically distributed variables (and other statistics): the central limit theorem
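These chi-square facts can be sanity-checked numerically. The sketch below (function names are my own) verifies the Gamma recursion, the exponential-to-$\chi_2^2$ corollary by a change of variables, and the additivity law by simulation, using that $\chi_n^2$ has mean $n$ and variance $2n$:

```python
import math
import random

def chi2_pdf(x, n):
    """Density k_n(x) = e^{-x/2} x^{(n-2)/2} / (Gamma(n/2) 2^{n/2}), x > 0."""
    return math.exp(-x / 2) * x ** ((n - 2) / 2) / (math.gamma(n / 2) * 2 ** (n / 2))

# Property 1 of the Gamma function: Gamma(x+1) = x * Gamma(x).
for x in (0.5, 1.0, 3.7):
    assert math.isclose(math.gamma(x + 1), x * math.gamma(x))

# Corollary 1: if X ~ Exponential(rate lam), then Y = 2*lam*X ~ chi^2_2.
# Change of variables: f_Y(y) = f_X(y / (2 lam)) / (2 lam) = e^{-y/2} / 2 = k_2(y).
lam = 1.3
for y in (0.5, 1.0, 4.0):
    f_y = lam * math.exp(-lam * (y / (2 * lam))) / (2 * lam)
    assert math.isclose(f_y, chi2_pdf(y, 2))

# Additivity: the sum of squares of n independent N(0,1) variables is chi^2_n,
# so its sample mean should be about n and its sample variance about 2n.
random.seed(0)
n, trials = 5, 100_000
samples = [sum(random.gauss(0, 1) ** 2 for _ in range(n)) for _ in range(trials)]
mean = sum(samples) / trials
var = sum((s - mean) ** 2 for s in samples) / trials
print(round(mean, 2), round(var, 2))   # close to n = 5 and 2n = 10
```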
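And a small simulation of the central limit theorem: standardized sums of $n$ i.i.d. Uniform(0,1) variables should look approximately $N(0,1)$ (the choices $n=30$ and 50,000 trials are just for illustration):

```python
import math
import random
import statistics

# CLT check: Z = (sum of n Uniform(0,1) draws - n/2) / sqrt(n/12) ~ N(0, 1) approx.
random.seed(42)
n, trials = 30, 50_000
mu, sigma = n * 0.5, math.sqrt(n / 12)    # mean and sd of the un-normalized sum
z = [(sum(random.random() for _ in range(n)) - mu) / sigma for _ in range(trials)]

print(round(statistics.mean(z), 3), round(statistics.stdev(z), 3))  # near 0 and 1
within_1sd = sum(abs(v) <= 1 for v in z) / trials
print(round(within_1sd, 3))               # near 0.683, as for a standard normal
```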
Mathematical statistics
Common pipeline:
- Given the variable's distribution and a known sample, estimate the parameters (point estimation) or judge a claimed parameter value (parametric test);
Bayes estimation obtains the estimated value $\leftarrow$ conditional probabilities sum to 1
- The distributions of the parameters and variables are known (or forced by the large-sample method / Bayes), and an interval for the parameter is estimated at a given probability (interval estimation);
- When the parameters and a sample are known, one can also judge the plausibility of the assumed variable distribution (test of goodness of fit).
- So we can see three pivots: the variable's distribution, the parameters, and the sample
- The three major distributions exist for the subsequent interval estimation and parameter tests
- How to estimate the variance of a normal distribution in a Bayesian way?
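A minimal sketch of the point-estimation and interval-estimation steps of this pipeline, assuming a normal sample with $\sigma$ treated as known for the interval (all parameter values below are illustrative):

```python
import math
import random
import statistics

# "Distribution known, sample known, estimate the parameters."
random.seed(1)
true_mu, true_sigma = 10.0, 2.0
sample = [random.gauss(true_mu, true_sigma) for _ in range(400)]

# Point estimates (MLE for the normal): sample mean and 1/n sample variance.
mu_hat = statistics.mean(sample)
sigma_hat = statistics.pstdev(sample)          # pstdev uses the 1/n (MLE) form

# Interval estimate for mu with sigma known: since
# (Xbar - mu) / (sigma / sqrt(n)) ~ N(0, 1), a 95% confidence interval is
# Xbar +/- z_{0.975} * sigma / sqrt(n).
n = len(sample)
z = statistics.NormalDist().inv_cdf(0.975)     # about 1.96
half = z * true_sigma / math.sqrt(n)
print(round(mu_hat, 2), (round(mu_hat - half, 2), round(mu_hat + half, 2)))
```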
Process of a hypothesis test for a parameter at a specified level $\alpha$:
- Given a sample and assumptions (distribution of variables)
- Type I error = the parameter actually satisfies the hypothesis, but the sample deviates from the norm (so $H_0$ is wrongly rejected)
- $\alpha$ represents the type I error probability (given the hypothesis $H_0$ and a rejection region such as $\overline X \geq C$)
- The rejection event is rewritten as a function of the sample; using the classical conclusions, it corresponds to the distribution function of one of the three major statistics,
- According to "with level $\alpha$" (for any parameter value satisfying $H_0$), it suffices that the maximum of that rejection probability is $\leq \alpha$; solving the resulting equation in one unknown yields the parameter $C$ of the test
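The steps above can be sketched for the simplest case, a one-sided test of $H_0:\mu \leq \mu_0$ with $\sigma$ known, where the maximum rejection probability over $H_0$ is attained at $\mu=\mu_0$, so solving $P(\overline X \geq C)=\alpha$ gives the critical value (the numbers below are purely illustrative):

```python
import math
import statistics

# Solve for C in the test "reject H0 when Xbar >= C" at level alpha.
# Under mu = mu0 (the worst case allowed by H0: mu <= mu0),
# Xbar ~ N(mu0, sigma^2 / n), so P(Xbar >= C) = alpha gives
# C = mu0 + z_{1-alpha} * sigma / sqrt(n).
def critical_value(mu0, sigma, n, alpha):
    z = statistics.NormalDist().inv_cdf(1 - alpha)
    return mu0 + z * sigma / math.sqrt(n)

C = critical_value(mu0=100.0, sigma=15.0, n=36, alpha=0.05)
print(round(C, 3))   # reject H0 when the sample mean is at least this large
```

By construction, under $\mu=\mu_0$ the probability that $\overline X$ lands in the rejection region is exactly $\alpha$.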