Mathematical Fundamentals Course - Research Notes Sharing

Foreword

For more organized notes, see: Basic Mathematics Course - Postgraduate Notes Sharing


General Fundamentals - Key Points

  • Elementary mathematics primarily studies constant quantities, while advanced mathematics studies variable quantities.

  • All elementary functions are continuous on the intervals where they are defined

  • Zero Point Theorem and Intermediate Value Theorem

    • **Zero point theorem:** if a function is continuous on a closed interval and its values at the two endpoints have opposite signs, then there must be at least one point inside the interval where the function value equals 0
    • **Intermediate value theorem (more general):** if a function is continuous on a closed interval, then it attains every value between the values at the two endpoints somewhere inside the interval
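The zero point theorem is exactly the guarantee behind the bisection method. A minimal sketch (the example equation cos(x) = x is illustrative, not from the notes):

```python
import math

def bisect(f, a, b, tol=1e-10):
    """Find a root of f on [a, b] by bisection.
    Requires f continuous on [a, b] with f(a) and f(b) of opposite
    signs -- precisely the hypothesis of the zero point theorem."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > tol:
        m = (a + b) / 2
        fm = f(m)
        if fa * fm <= 0:        # sign change in [a, m]: keep left half
            b, fb = m, fm
        else:                   # otherwise the root lies in [m, b]
            a, fa = m, fm
    return (a + b) / 2

# cos(x) - x is positive at 0 and negative at 1, so a root exists in (0, 1)
root = bisect(lambda x: math.cos(x) - x, 0.0, 1.0)
```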
  • Several important "points"

    • Stationary point
      • A point where the first derivative of the function equals zero
    • Inflection point: a point where the concavity/convexity of the function's curve changes
      • Candidates: solve $f''(x) = 0$, together with the points where $f''(x)$ does not exist
      • If $f''(x)$ has opposite signs on the two sides of such a point, that point is an inflection point
    • Unbounded point (infinite discontinuity)
      • If the function is unbounded in every neighborhood of a point $\alpha$, then $\alpha$ is an unbounded (infinite) discontinuity of the function
    • Accumulation point (a concept for planar point sets, used with multivariate functions)
      • If for every $\delta > 0$ the deleted $\delta$-neighborhood of a point $P$ contains at least one point of the set $E$, then $P$ is called an accumulation point of $E$
      • The existence of accumulation points guarantees that the limit process $(x, y) \rightarrow (x_0, y_0)$ is well defined
      • Accumulation points include interior points and (possibly) boundary points
  • Gradient, Divergence, Curl

    • Nabla (del) operator: $\nabla = \frac{\partial}{\partial x}\vec{i} + \frac{\partial}{\partial y}\vec{j} + \frac{\partial}{\partial z}\vec{k}$
    • The gradient of a scalar is a vector: the gradient of a scalar field is a vector field; at any point it points in the direction in which the field increases fastest, and its magnitude is the rate of change of the field in that direction
    • The divergence of a vector is a scalar: the flux of a vector field through any closed surface in space equals the volume integral of the field's divergence over the region enclosed by that surface (divergence theorem)
    • The curl of a vector is a vector: the circulation (line integral) of a vector field along any closed curve equals the flux (surface integral) of the field's curl through any surface bounded by that curve (Stokes' theorem)
  • Least Squares

    • To best fit the data, minimize the total sum of squared errors
    • The "squares" in the name refers to squaring the errors
    • By the central limit theorem, the limiting distribution of the errors is the normal distribution
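A minimal least-squares sketch: fitting a line by minimizing the sum of squared errors (the sample points approximate a hypothetical line y = 2x + 1 and are made up for illustration):

```python
import numpy as np

# Illustrative noisy samples of a line roughly following y = 2x + 1
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# Design matrix for the model y = a*x + b
A = np.column_stack([x, np.ones_like(x)])

# Least squares: minimize ||A @ coef - y||^2
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
a, b = coef

residual = y - A @ coef          # the errors whose squares were minimized
sse = float(residual @ residual)  # total sum of squared errors
```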
  • The Law of Large Numbers and the Central Limit Theorem

    • Law of large numbers: although the outcome of a random phenomenon is uncertain in any single trial, regularities emerge over a large number of repeated trials; this regularity is the law of large numbers (necessity within randomness)
      • In plain terms: keeping the experimental conditions unchanged and repeating the trial many times, the frequency of a random event is approximately equal to its probability
      • Three classical forms: Bernoulli's, Khinchin's, and Chebyshev's laws of large numbers
    • Central Limit Theorem (normal distribution)
      • Sums (or averages) of many samples, even drawn from different distributions, approach a normal distribution in the limit; in this sense the normal distribution is the "final destination" of these distributions
      • Examples: height, test scores, the weights of objects from the same production batch, etc.
      • In practice we do not know the true mean, standard deviation, or other statistical parameters of the population we want to study
      • The central limit theorem guarantees, in theory, that sampling only part of the population is enough to infer the population's statistical parameters
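Both theorems can be illustrated by simulation, a minimal sketch (the coin-flip and uniform-draw setups are my own illustrative choices):

```python
import random
import statistics

random.seed(0)

# Law of large numbers: the frequency of heads in many fair coin flips
# approaches the true probability 0.5
n = 100_000
frequency = sum(random.random() < 0.5 for _ in range(n)) / n

# Central limit theorem: averages of 48 uniform(0, 1) draws are
# approximately normal with mean 0.5 and std sqrt(1/12) / sqrt(48)
n_draws, n_means = 48, 4000
means = [statistics.fmean(random.random() for _ in range(n_draws))
         for _ in range(n_means)]
center = statistics.fmean(means)
spread = statistics.stdev(means)
expected_spread = (1 / 12) ** 0.5 / n_draws ** 0.5
```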
  • The principle of impossibility of small probability events

    • An event with very small probability is almost certain not to occur in a single trial
  • NP problem

    • P problems: problems that can be solved in polynomial time
    • NP problems: problems whose candidate solutions can be verified in polynomial time
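The solve/verify distinction can be sketched with subset sum, a classic NP problem; the example numbers and the certificate format (a list of indices) are my own illustrative choices:

```python
def verify_subset_sum(numbers, target, indices):
    """Polynomial-time verifier for subset sum.
    The certificate `indices` names a candidate subset of `numbers`;
    checking it takes linear time, while *finding* such a subset is
    believed to require super-polynomial time in general."""
    if len(set(indices)) != len(indices):
        return False  # each element may be used at most once
    return sum(numbers[i] for i in indices) == target

nums = [3, 34, 4, 12, 5, 2]
ok = verify_subset_sum(nums, 9, [2, 4])   # 4 + 5 == 9: valid certificate
bad = verify_subset_sum(nums, 9, [0, 1])  # 3 + 34 != 9: rejected
```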
  • Common Algorithms for Data Dimensionality Reduction

    Replace many original variables with fewer new variables, such that the new variables retain as much of the information carried by the original variables as possible

    • Singular Value Decomposition (SVD)
    • Principal Component Analysis (PCA)
      • The main idea of PCA is to map n-dimensional features onto k dimensions (k < n); these k dimensions are brand-new orthogonal features known as principal components
      • The k-dimensional features are reconstructed on the basis of the original n-dimensional features
      • PCA works by sequentially finding a set of mutually orthogonal coordinate axes in the original space; the choice of new axes is determined by the data itself
      • Compute the covariance matrix of the data matrix, obtain its eigenvalues and eigenvectors, and select the matrix formed by the eigenvectors corresponding to the k largest eigenvalues (i.e., the largest variances)
      • The data matrix can then be transformed into this new space, achieving dimensionality reduction of the data features
    • Factor Analysis (FA)
    • Independent Component Analysis (ICA)
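The PCA steps above (center, covariance matrix, eigendecomposition, keep the top-k eigenvectors) can be sketched directly in NumPy; the toy dataset is my own illustration, built so that one direction carries most of the variance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples in 3-D whose variance is concentrated along
# one direction, so k = 1 principal component captures most of it
t = rng.normal(size=200)
X = np.column_stack([t, 0.5 * t, 0.1 * rng.normal(size=200)])

# 1. Center the data; 2. covariance matrix; 3. eigendecomposition
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order

# Keep the k eigenvectors with the largest eigenvalues (largest variance)
k = 1
components = eigvecs[:, ::-1][:, :k]
reduced = Xc @ components                 # data mapped into the new k-dim space

# Fraction of total variance retained by the k components
explained = eigvals[::-1][:k].sum() / eigvals.sum()
```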
  • Dirichlet function

    • If the argument is rational, the function value is 1
    • If the argument is irrational, the function value is 0
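The rational/irrational dichotomy can only be sketched over exact number types, since every float is rational. A minimal sketch using the standard library's Fraction, where any non-rational-typed argument stands in for an irrational input (this representation convention is my own):

```python
from fractions import Fraction

def dirichlet(x):
    """Dirichlet function over exact number types: 1 if x is rational,
    0 otherwise. Rationals must be passed as int or Fraction; any other
    object is treated as a stand-in for an irrational number, because a
    float can never represent an irrational value exactly."""
    return 1 if isinstance(x, (int, Fraction)) else 0

on_rational = dirichlet(Fraction(22, 7))
on_irrational = dirichlet("sqrt(2)")  # marker object standing in for sqrt(2)
```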
  • scalar, vector, matrix, tensor (Tensor)

    • Scalars, vectors, and matrices can all be regarded as tensors of increasing order
    • A scalar (a single number: magnitude only, no direction) is a 0th-order tensor
    • A vector (magnitude and direction) is a 1st-order tensor
    • A matrix is a 2nd-order tensor, and so on to higher orders
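The order of a tensor is just its number of axes, which NumPy exposes as `ndim` (the example arrays are illustrative):

```python
import numpy as np

scalar = np.array(3.0)          # 0th-order tensor: a single number
vector = np.array([1.0, 2.0])   # 1st-order tensor: one axis
matrix = np.eye(2)              # 2nd-order tensor: two axes
tensor3 = np.zeros((2, 2, 2))   # 3rd-order tensor, and so on

orders = [a.ndim for a in (scalar, vector, matrix, tensor3)]
```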
  • Orthogonality of Trigonometric Function Family


  • Within the trigonometric function system, functions of different frequencies are orthogonal to each other on [-π, π] (their inner product is zero)
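The orthogonality relation can be checked numerically by approximating the inner product ∫f(x)g(x)dx on [-π, π] (the midpoint rule and the chosen frequencies are illustrative):

```python
import math

def inner_product(f, g, n=20_000):
    """Approximate the inner product of f and g on [-pi, pi]
    with the midpoint rule."""
    a, b = -math.pi, math.pi
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) * g(a + (i + 0.5) * h)
               for i in range(n)) * h

# Different frequencies: inner product is (numerically) zero
cross = inner_product(lambda x: math.sin(2 * x), lambda x: math.sin(3 * x))

# Same frequency: inner product is pi, not zero
self_ip = inner_product(lambda x: math.sin(2 * x), lambda x: math.sin(2 * x))
```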

Origin blog.csdn.net/weixin_43799388/article/details/123882971