Numerical Analysis (for personal use)

Solving nonlinear equations:
1. Convergence criteria for the fixed-point iteration method on an interval [a,b]: 1) the iteration function $\phi$ maps [a,b] into [a,b]; 2) $|\phi'(x)| \le L < 1$ on [a,b], i.e. the absolute value of the derivative of the iteration function is bounded by a constant less than 1.

Condition for local convergence of fixed-point iteration:
1. The absolute value of the first derivative of the iteration function at the fixed point is less than 1, i.e. $|\phi'(x^*)| < 1$.
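
A minimal sketch of fixed-point iteration, assuming Python is the working language; the iteration function $\phi(x) = \cos x$ is only an illustrative choice (near its fixed point $|\phi'(x)| = |\sin x| < 1$, so the local convergence condition above holds):

```python
import math

def fixed_point(phi, x0, tol=1e-10, max_iter=100):
    """Iterate x_{k+1} = phi(x_k) until successive iterates agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = phi(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

# phi(x) = cos(x): |phi'(x)| = |sin(x)| < 1 near the fixed point,
# so the iteration converges locally (to approximately 0.739085).
print(fixed_point(math.cos, x0=1.0))
```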

Steffensen iteration can convert a non-convergent fixed-point iteration into a convergent one; its iteration format is
$y_k = \phi(x_k), \quad z_k = \phi(y_k), \quad x_{k+1} = x_k - \dfrac{(y_k - x_k)^2}{z_k - 2y_k + x_k}$
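
A sketch of the Steffensen format above; the iteration function $\phi(x) = x^3 - 1$ is an illustrative choice: at its fixed point (the root of $x^3 - x - 1 = 0$, roughly 1.3247) we have $|\phi'| > 1$, so the plain fixed-point iteration diverges while Steffensen converges from a nearby start.

```python
def steffensen(phi, x0, tol=1e-10, max_iter=100):
    """Steffensen acceleration: y_k = phi(x_k), z_k = phi(y_k),
    x_{k+1} = x_k - (y_k - x_k)^2 / (z_k - 2*y_k + x_k)."""
    x = x0
    for _ in range(max_iter):
        y = phi(x)
        z = phi(y)
        denom = z - 2.0 * y + x
        if denom == 0.0:          # already (numerically) at the fixed point
            return x
        x_new = x - (y - x) ** 2 / denom
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("Steffensen iteration did not converge")

# Plain iteration x_{k+1} = x_k**3 - 1 diverges near x* ~ 1.3247 (|phi'| > 1),
# but the Steffensen form converges from x0 = 1.3.
print(steffensen(lambda x: x**3 - 1, x0=1.3))
```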

How to select the initial value of Newton's iteration in the interval [a,b] so that it converges to the root:
1. $f(a)f(b) < 0$
2. On the interval, $f'(x) \neq 0$ and $f''(x) \neq 0$ (so $f''$ does not change sign)
3. The initial value $x_0$ satisfies $f(x_0)f''(x_0) > 0$
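
A minimal sketch of Newton's iteration under the three conditions above; $f(x) = x^2 - 2$ on $[1,2]$ is an illustrative choice: $f(1)f(2) < 0$, $f'$ and $f''$ are nonzero on the interval, and $x_0 = 2$ satisfies $f(x_0)f''(x_0) > 0$.

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton iteration x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("Newton iteration did not converge")

# f(x) = x^2 - 2 on [1, 2]: f(1)f(2) < 0, f'(x) = 2x != 0, f''(x) = 2 != 0,
# and x0 = 2 gives f(x0) f''(x0) = 2 * 2 > 0, so the iteration converges to sqrt(2).
print(newton(lambda x: x * x - 2, lambda x: 2 * x, x0=2.0))
```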


Direct methods for solving linear systems (six digits after the decimal point are generally kept during calculation):
1. A necessary and sufficient condition for Gauss elimination (without pivoting) to proceed is that the leading principal minors of orders 1 through n-1 are all nonzero.

2. A symmetric positive definite matrix has a square-root (Cholesky) decomposition.
(How to judge whether a matrix is positive definite: all of its leading principal minors are positive.)
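
A sketch (assuming numpy is available; the matrix is only an illustrative example) that checks both conditions above: leading principal minors nonzero for Gauss elimination without pivoting, and all positive for symmetric positive definiteness, in which case the square-root decomposition exists.

```python
import numpy as np

def leading_principal_minors(A):
    """Determinants of the leading principal submatrices A[:k, :k]."""
    n = A.shape[0]
    return [np.linalg.det(A[:k, :k]) for k in range(1, n + 1)]

A = np.array([[4.0, 2.0, 1.0],
              [2.0, 5.0, 3.0],
              [1.0, 3.0, 6.0]])

minors = leading_principal_minors(A)
print("leading principal minors:", minors)
print("Gauss elimination without pivoting possible:",
      all(abs(m) > 1e-12 for m in minors[:-1]))
print("symmetric positive definite:",
      np.allclose(A, A.T) and all(m > 0 for m in minors))

# For an SPD matrix the square-root (Cholesky) decomposition A = L L^T exists:
L = np.linalg.cholesky(A)
print(np.allclose(L @ L.T, A))
```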

Simple iterative method for solving linear systems:
1. Convergence criteria
1) Necessary and sufficient condition: the spectral radius of the iteration matrix is less than 1.
2) Sufficient condition: some norm of the iteration matrix is less than 1. (This is only a sufficient condition because the spectral radius of a matrix is less than or equal to any norm of the matrix, so the condition is stronger than necessary.)
2. Convergence speed: the convergence rate is defined as $-\ln(\rho(B))$, where $B$ is the iteration matrix.
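
A quick sketch (numpy assumed; $B$ is an arbitrary illustrative iteration matrix) of checking the criteria above: compute the spectral radius $\rho(B)$, the convergence rate $-\ln(\rho(B))$, and a matrix norm for the sufficient condition.

```python
import numpy as np

def spectral_radius(B):
    """rho(B) = maximum absolute value of the eigenvalues of B."""
    return max(abs(np.linalg.eigvals(B)))

# Illustrative iteration matrix; any B with rho(B) < 1 gives a convergent iteration.
B = np.array([[0.0, 0.3],
              [0.2, 0.1]])

rho = spectral_radius(B)
print("spectral radius:", rho, "-> convergent:", rho < 1)
print("convergence rate -ln(rho):", -np.log(rho))
# A norm < 1 (here the infinity norm) is only sufficient, since rho(B) <= ||B||.
print("infinity norm:", np.linalg.norm(B, np.inf))
```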

Convergence criteria for the Jacobi iteration method:
1. Same as above
2. The coefficient matrix is strictly diagonally dominant
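
A minimal sketch of the Jacobi iteration (numpy assumed; the system is an illustrative example), applied to a strictly diagonally dominant coefficient matrix so the criterion above guarantees convergence.

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=1000):
    """Jacobi iteration: x_{k+1} = D^{-1} (b - (L + U) x_k)."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    D = np.diag(A)                  # diagonal entries
    R = A - np.diagflat(D)          # off-diagonal part L + U
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    raise RuntimeError("Jacobi iteration did not converge")

# Strictly diagonally dominant coefficient matrix -> Jacobi converges.
A = np.array([[10.0, -1.0, 2.0],
              [-1.0, 11.0, -1.0],
              [2.0, -1.0, 10.0]])
b = np.array([6.0, 25.0, -11.0])
print(jacobi(A, b))
print(np.linalg.solve(A, b))        # reference solution
```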

Gauss-Seidel iteration method:
1., 2. Same as above
3. The coefficient matrix is symmetric positive definite

Successive over-relaxation (SOR) iteration method:
1. The spectral radius of the iteration matrix is less than 1
2. The coefficient matrix is symmetric positive definite and $0 < \omega < 2$
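
A sketch of the SOR sweep (numpy assumed; the system is an illustrative example); setting $\omega = 1$ reduces it to Gauss-Seidel. The example coefficient matrix is symmetric positive definite and $0 < \omega < 2$, so the criterion above guarantees convergence.

```python
import numpy as np

def sor(A, b, omega=1.1, x0=None, tol=1e-10, max_iter=1000):
    """SOR sweep: each component is a weighted mix of its old value and the
    Gauss-Seidel update; omega = 1 gives plain Gauss-Seidel."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x
    raise RuntimeError("SOR iteration did not converge")

# Symmetric positive definite A and 0 < omega < 2 -> SOR converges.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
print(sor(A, b, omega=1.1))
print(np.linalg.solve(A, b))
```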

The F (Frobenius) norm of a matrix: the square root of the sum of the squares of all the entries, $\|A\|_F = \sqrt{\sum_{i,j} a_{ij}^2}$.
(An orthogonal matrix leaves the 2-norm of a vector unchanged and leaves the F norm of a matrix unchanged.)
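
A quick numerical check (numpy assumed; random matrices are only for illustration) of the parenthetical claim: multiplying by an orthogonal $Q$ preserves the vector 2-norm and the matrix F norm.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
x = rng.standard_normal(4)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # Q is orthogonal

print(np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x)))                # 2-norm preserved
print(np.isclose(np.linalg.norm(Q @ A, 'fro'), np.linalg.norm(A, 'fro')))  # F norm preserved
```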

What is an orthogonal matrix: $AA^T = I$
1. The Givens matrix (plane rotation matrix) is an orthogonal matrix.

2. The Householder matrix (reflection matrix) is also orthogonal.
They can be used to reduce a real symmetric matrix by orthogonal similarity to a tridiagonal matrix.
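
A sketch (numpy assumed; the vector is an illustrative example) of a Householder reflection $H = I - 2vv^T/(v^Tv)$: it is orthogonal and maps a vector onto a multiple of $e_1$, which is the basic step used in the orthogonal-similarity reduction mentioned above.

```python
import numpy as np

def householder_matrix(x):
    """Reflection H = I - 2 v v^T / (v^T v) that maps x to ||x|| * e1.
    (A numerically stable variant would add sign(x[0]) * ||x|| instead.)"""
    e1 = np.zeros_like(x)
    e1[0] = 1.0
    v = x - np.linalg.norm(x) * e1
    return np.eye(len(x)) - 2.0 * np.outer(v, v) / (v @ v)

x = np.array([3.0, 4.0, 0.0])
H = householder_matrix(x)
print(np.allclose(H @ H.T, np.eye(3)))   # H is orthogonal
print(H @ x)                             # approximately [5, 0, 0] = ||x|| * e1
```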

Chapter 9: Finding the eigenvalues and eigenvectors of a matrix
1. The power method only obtains the eigenvalue of largest absolute value of the matrix and its corresponding eigenvector.
2. The inverse power method obtains the eigenvalue of smallest absolute value and its corresponding eigenvector; combined with the origin-shift technique it can obtain the eigenvalue of the matrix closest to a given value (see the sketch after this list).

3. The Jacobi rotation method finds all eigenvalues and eigenvectors of a real symmetric matrix.
What is a real symmetric matrix: a real matrix $A$ satisfying $A^T = A$.
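
A sketch (numpy assumed; the symmetric matrix is an illustrative example) of the power method for the dominant eigenpair, plus the inverse power method with an origin shift $\sigma$ for the eigenvalue closest to $\sigma$.

```python
import numpy as np

def power_method(A, num_iter=200):
    """Dominant eigenpair by repeated multiplication and normalization."""
    x = np.ones(A.shape[0])
    for _ in range(num_iter):
        x = A @ x
        x /= np.linalg.norm(x)
    lam = x @ A @ x          # Rayleigh quotient of the normalized iterate
    return lam, x

def shifted_inverse_power(A, sigma, num_iter=200):
    """Inverse power method with origin shift: eigenvalue closest to sigma."""
    n = A.shape[0]
    M = A - sigma * np.eye(n)
    x = np.ones(n)
    for _ in range(num_iter):
        x = np.linalg.solve(M, x)   # one step of the inverse iteration
        x /= np.linalg.norm(x)
    lam = x @ A @ x
    return lam, x

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
print(power_method(A)[0])                # largest-magnitude eigenvalue
print(shifted_inverse_power(A, 1.5)[0])  # eigenvalue closest to 1.5
print(np.linalg.eigvals(A))              # reference
```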

Numerically solving ordinary differential equations:
1. Determining the order of a method: the order is the exponent of $h$ in the local truncation error minus 1, e.g. a local truncation error of $O(h^3)$ means the method has order 2.
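
A sketch that estimates the order empirically for the explicit Euler method (the test problem $y' = y$, $y(0) = 1$ is an illustrative choice): halving the step size roughly halves the global error, consistent with local truncation error $O(h^2)$ and hence order 1.

```python
import math

def euler(f, y0, t0, t_end, h):
    """Explicit Euler: y_{n+1} = y_n + h * f(t_n, y_n)."""
    t, y = t0, y0
    while t < t_end - 1e-12:
        y += h * f(t, y)
        t += h
    return y

# Test problem y' = y, y(0) = 1, exact solution y(1) = e.
f = lambda t, y: y
for h in (0.1, 0.05, 0.025):
    err = abs(euler(f, 1.0, 0.0, 1.0, h) - math.e)
    print(h, err)
# The error roughly halves when h is halved: global error O(h),
# i.e. local truncation error O(h^2) -> the method has order 1.
```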


Source: blog.csdn.net/ambu1230/article/details/129029487