25.5 Introduction to 10 optimization methods in MATLAB - Newton's method (MATLAB program)

1. Brief description


1 Introduction to Newton's method
Newton's iterative method, also known as the Newton-Raphson method, is a method for finding approximate solutions of equations over the real and complex fields, proposed by Newton in the 17th century.

Most equations do not have a root-finding formula, so computing exact roots is difficult or even impossible, which makes finding approximate roots of equations particularly important. Newton's method uses the first few terms of the Taylor series of a function f(x) to find roots of the equation f(x) = 0, and it is one of the important methods for finding roots of equations. Its biggest advantage is quadratic convergence near a simple root of f(x) = 0. The method can also be used to find multiple roots and complex roots of an equation; in those cases convergence is only linear, although it can be made superlinear by certain modifications. In addition, the method is widely used in computer programming.

2 Principle of Newton's method
Let r be a root of f(x) = 0 and choose x0 as an initial approximation to r. Through the point (x0, f(x0)) draw the tangent line L to the curve y = f(x):

    L: y = f(x0) + f'(x0)(x - x0).

The abscissa of the intersection of L with the x-axis,

    x1 = x0 - f(x0)/f'(x0),

is an approximation of r. Through the point (x1, f(x1)) draw the tangent to the curve y = f(x) again; the abscissa of the intersection of this tangent with the x-axis,

    x2 = x1 - f(x1)/f'(x1),

is called the second approximation to r. Repeating this process yields a sequence of approximations to r, in which

    x_{n+1} = x_n - f(x_n)/f'(x_n)

is called the (n+1)-th approximation of r. This formula is called the Newton iteration formula.

Using Newton's iterative method to solve a nonlinear equation is an approximate method that linearizes the nonlinear equation f(x) = 0. Expand f in a Taylor series in a neighborhood of x0:

    f(x) = f(x0) + f'(x0)(x - x0) + f''(x0)(x - x0)^2 / 2! + ... + f^(n)(x0)(x - x0)^n / n! + R_n(x).

Take its linear part (the first two terms of the Taylor expansion) and set it equal to 0:

    f(x0) + f'(x0)(x - x0) = 0,

which serves as an approximate equation for the nonlinear equation f(x) = 0. If f'(x0) ≠ 0, its solution is

    x1 = x0 - f(x0)/f'(x0).

In this way the iteration relation of Newton's method is obtained:

    x_{n+1} = x_n - f(x_n)/f'(x_n).

It has been proved that if f' is continuous and the zero to be found is isolated, then there is a neighborhood of that zero such that Newton's method converges for any initial value in that neighborhood. Moreover, if f'(r) ≠ 0 at the root r, Newton's method converges quadratically. Roughly speaking, this means that the number of significant digits of the approximation doubles with each iteration.
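As a concrete illustration of the iteration formula above, here is a minimal Newton root-finder, sketched in Python for illustration; the function names and the test equation x^2 - 2 = 0 are my own choices, not part of the original post:

```python
def newton_root(f, fprime, x0, tol=1e-12, max_iter=50):
    """Solve f(x) = 0 by the Newton iteration x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:          # stop once the residual is small enough
            break
        x = x - fx / fprime(x)     # Newton iteration step
    return x

# Example: the positive root of x^2 - 2 = 0 (i.e. sqrt(2)), starting from x0 = 1
root = newton_root(lambda x: x*x - 2.0, lambda x: 2.0*x, 1.0)
print(root)  # converges to about 1.41421356 in a handful of iterations
```

Starting from x0 = 1, the successive errors shrink roughly quadratically (0.41, 0.086, 0.0025, 2e-6, ...), which is the digit-doubling behavior described above.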

2. Code

Main program:

%% Use Newton's method to find the optimal solution
f1205 = inline('x(1)*(x(1)-5-x(2))+x(2)*(x(2)-4)','x');  % objective function
grad = inline('[2*x(1)-5-x(2), -x(1)+2*x(2)-4]','x');    % gradient of the objective function
x0 = [-8; -8];                                           % initial point
options = optimset('TolX',1e-4,'TolFun',1e-9,'MaxIter',100);
xo = fsolve(grad,x0,options)   % find a zero of the gradient with fsolve (a stationary point)
yo = f1205(xo)                 % objective value at the stationary point
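For readers without MATLAB, the same computation can be sketched in Python as a hand-rolled Newton iteration on the gradient, using the constant Hessian of this quadratic objective. All names below are my own; this is an illustrative port under those assumptions, not the original program:

```python
import numpy as np

def f1205(x):
    # objective: x1*(x1 - 5 - x2) + x2*(x2 - 4)
    return x[0]*(x[0] - 5 - x[1]) + x[1]*(x[1] - 4)

def grad(x):
    # gradient of the objective
    return np.array([2*x[0] - 5 - x[1], -x[0] + 2*x[1] - 4])

def hess(x):
    # Hessian of the objective (constant, since the objective is quadratic)
    return np.array([[2.0, -1.0], [-1.0, 2.0]])

def newton_opt(x0, tol=1e-9, max_iter=100):
    """Newton's method for optimization: solve grad(x) = 0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - np.linalg.solve(hess(x), g)   # Newton step
    return x

xo = newton_opt([-8.0, -8.0])
print(xo, f1205(xo))   # stationary point (14/3, 13/3) ~ (4.667, 4.333)
```

Because the gradient here is linear, a single Newton step lands exactly on the stationary point, which matches what fsolve finds for the gradient system.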

3. Running results



Origin blog.csdn.net/m0_57943157/article/details/131969861