Optimization algorithms: unconstrained one-dimensional extrema


Before introducing the various methods, let us first explain what an unconstrained one-dimensional extremum problem is. It can be expressed simply as

min f(x), x in R,

that is, minimizing a function of a single variable over the whole real line, with no constraints on x.
The MATLAB implementations of the methods below are not given here. If you want to study them carefully, you can send me a private message; when I see it, I will try to write the MATLAB programs and reply.

1. The advance-and-retreat method

The advance-and-retreat method is an algorithm for determining an interval that contains the minimum point. Its theoretical basis is: if f(x) is a single-valley (unimodal) function and [a, b] is a search interval for its minimum point, then for any x_1 < x_2 in [a, b], if f(x_1) < f(x_2), then [a, x_2] is a search interval for the minimum point; otherwise [x_1, b] is. Therefore, when programming this in MATLAB, the input must include an initial point x_0 and an initial search step size h_0, and a given precision is needed to terminate the iteration and return the search interval.
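Since the article's MATLAB code is not shown, here is a minimal Python sketch of the bracketing idea. The function name, the step-doubling factor t, and the termination test are my own choices, not taken from the article:

```python
def advance_retreat(f, x0, h0, t=2.0):
    """Find an interval [a, b] that brackets a minimum of a unimodal f.

    Starts at x0 with step h0; the step is scaled by t each iteration
    (t=2 is an assumed, common choice).
    """
    x1, f1 = x0, f(x0)
    x2 = x1 + h0
    f2 = f(x2)
    if f2 > f1:                 # first step went uphill: reverse direction
        h0 = -h0
        x1, x2 = x2, x1
        f1, f2 = f2, f1
    while True:
        h0 *= t                 # advance with a growing step
        x3 = x2 + h0
        f3 = f(x3)
        if f3 > f2:             # function turned upward: minimum is bracketed
            return min(x1, x3), max(x1, x3)
        x1, f1 = x2, f2         # otherwise keep marching
        x2, f2 = x3, f3
```

For example, bracketing the minimum of (x - 3)^2 from x0 = 0 with h0 = 0.1 returns an interval containing 3.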

2. The golden section method

The golden section method is also called the 0.618 method. It is a minimum-point search algorithm based on interval shrinking. Once the search interval has been determined by the advance-and-retreat method, we only know that the minimum point lies in this interval, but not where it is. The idea of the golden section method is very direct: since the minimum point is contained in the interval, the search interval can be shrunk repeatedly so that its endpoints approach the minimum point. If [a, b] is the search interval, the golden section method first generates two interior points x_1, x_2 according to the golden ratio:

x_1 = a + 0.382 (b - a), x_2 = a + 0.618 (b - a)

Then the search interval is reselected according to the relative sizes of f(x_1) and f(x_2):
1. If f(x_1) < f(x_2), the search interval becomes [a, x_2].
2. If f(x_1) > f(x_2), the search interval becomes [x_1, b].

When programming this in MATLAB, you need to enter the interval endpoints and a precision value; the routine returns the minimum point and the corresponding objective function value.
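As an illustration of the interval shrinking described above, here is a Python sketch (not the article's MATLAB code). Note that one interior point from the previous step can be reused, so only one new function evaluation is needed per iteration:

```python
def golden_section(f, a, b, eps=1e-6):
    """Shrink [a, b] by the golden ratio until its width is below eps."""
    r = (5 ** 0.5 - 1) / 2          # 0.618..., the golden ratio
    x1 = b - r * (b - a)            # same as a + 0.382*(b - a)
    x2 = a + r * (b - a)            # same as a + 0.618*(b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > eps:
        if f1 < f2:                 # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1  # reuse x1 as the new x2
            x1 = b - r * (b - a)
            f1 = f(x1)
        else:                       # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2  # reuse x2 as the new x1
            x2 = a + r * (b - a)
            f2 = f(x2)
    x = (a + b) / 2
    return x, f(x)
```

For example, minimizing (x - 2)^2 on [0, 5] returns a point very close to 2.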

3. Fibonacci Method

The Fibonacci method is also an interval-shrinking algorithm, but it differs from the golden section method: the golden section method changes only one endpoint of the search interval at a time, so it is a one-way shrinkage method, while the Fibonacci method changes both endpoints of the search interval at the same time, so it is a two-way shrinkage method. The MATLAB program differs little from that of the golden section method.
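A Python sketch of the standard Fibonacci search, for comparison with the golden section code (the tolerance handling, the midpoint return value, and the reflection trick for placing the new interior point are my own simplifications, not from the article):

```python
def fibonacci_search(f, a, b, eps=1e-4):
    """Fibonacci interval-shrinking search for the minimum of a unimodal f."""
    F = [1, 1]                          # build Fibonacci numbers until
    while F[-1] < (b - a) / eps:        # F[n] exceeds (b - a) / eps
        F.append(F[-1] + F[-2])
    n = len(F) - 1
    x1 = a + F[n - 2] / F[n] * (b - a)  # interior points from Fibonacci ratios
    x2 = a + F[n - 1] / F[n] * (b - a)
    f1, f2 = f(x1), f(x2)
    for _ in range(n - 2):
        if f1 < f2:                     # minimum lies in [a, x2]
            b = x2
            x2, f2 = x1, f1
            x1 = a + b - x2             # points are symmetric in the interval
            f1 = f(x1)
        else:                           # minimum lies in [x1, b]
            a = x1
            x1, f1 = x2, f2
            x2 = a + b - x1
            f2 = f(x2)
    x = (a + b) / 2
    return x, f(x)
```

The symmetry x1 + x2 = a + b holds at every stage, which is why the surviving interior point can be reused and only one new evaluation is needed per iteration, just as in the golden section method.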

4. Newton's method

Newton's method solves unconstrained one-dimensional extremum problems very quickly, and another advantage is that it can approximate the optimal solution to high accuracy. The basic Newton's method is an algorithm that uses derivatives; at each step the iteration moves in a direction in which the function value decreases from the current point. As a consequence, the effectiveness of the algorithm depends heavily on the choice of the initial point: with a good initial point it approaches the minimum point very quickly. The basic iteration formula is

x_{k+1} = x_k - f'(x_k) / f''(x_k)

When programming this in MATLAB, only an initial point, a precision, and the objective function are needed as input; the routine returns the values of the independent variable and the objective function.
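A Python sketch of the iteration formula above; here the first and second derivatives are passed in directly as functions (an assumption on my part, since the article does not show its MATLAB interface):

```python
def newton_min(df, d2f, x0, eps=1e-8, max_iter=100):
    """Newton's iteration x_{k+1} = x_k - f'(x_k)/f''(x_k) for a 1-D minimum.

    df and d2f are the first and second derivatives of the objective.
    """
    x = x0
    for _ in range(max_iter):
        h = d2f(x)
        if h == 0:                  # guard against division by zero
            break
        step = df(x) / h
        x = x - step
        if abs(step) < eps:         # step small enough: converged
            break
    return x
```

For example, for f(x) = x^4 - 2x^2 (minima at x = -1 and x = 1), starting from x0 = 2 the iteration converges to 1, illustrating how the result depends on the initial point.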

5. Secant method

Unlike Newton's method, the secant method replaces the second derivative with a secant (difference quotient) of the first derivative. The basic iteration formula is

x_{k+1} = x_k - f'(x_k) (x_k - x_{k-1}) / (f'(x_k) - f'(x_{k-1}))

so the MATLAB program requires two initial points.
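A Python sketch of the secant iteration above; as with the Newton sketch, passing the derivative in as a function is my own interface choice:

```python
def secant_min(df, x0, x1, eps=1e-8, max_iter=100):
    """Secant iteration on f': replaces f'' in Newton's formula with a
    difference quotient, so only the first derivative is needed."""
    for _ in range(max_iter):
        d0, d1 = df(x0), df(x1)
        if d1 == d0:                # flat secant: cannot divide
            break
        x2 = x1 - d1 * (x1 - x0) / (d1 - d0)
        if abs(x2 - x1) < eps:      # step small enough: converged
            return x2
        x0, x1 = x1, x2             # slide the pair of points forward
    return x1
```

With f(x) = x^4 - 2x^2 and the two initial points 2.0 and 1.9, the iteration converges to the minimum at x = 1.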

6. Parabola method

The parabola method is also called quadratic interpolation. Its theoretical basis is that a quadratic polynomial can approximate the shape of a function well near its minimum point. The method takes three construction points near the minimum point of the function, constructs a parabola through these three points, and uses the extreme point of this parabola as an approximation of the extreme point of the function. After each parabola is constructed, its extreme point can serve as a new construction point; the new construction point and the original three construction points are combined by a selection rule to obtain the three construction points for the next parabolic approximation. This is the algorithmic process of the parabola method. The parabola method can obtain a fairly accurate solution, but it is slower, so it is better to use other methods first to obtain a relatively small interval containing the extremum before applying the parabola method.
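A Python sketch of successive quadratic interpolation. The vertex formula is the standard one for a parabola through three points; the rule for choosing the next three construction points (keep the three points with the smallest function values) is one simple possibility, not necessarily the article's:

```python
def parabola_min(f, x1, x2, x3, eps=1e-8, max_iter=100):
    """Successive quadratic interpolation: fit a parabola through three
    points and take its vertex as the next approximation of the minimizer."""
    f1, f2, f3 = f(x1), f(x2), f(x3)
    for _ in range(max_iter):
        num = (x2 - x1) ** 2 * (f2 - f3) - (x2 - x3) ** 2 * (f2 - f1)
        den = (x2 - x1) * (f2 - f3) - (x2 - x3) * (f2 - f1)
        if den == 0:                    # points are collinear or coincide
            break
        x_new = x2 - 0.5 * num / den    # vertex of the fitted parabola
        if abs(x_new - x2) < eps:
            break
        # assumed selection rule: keep the 3 points with the lowest f values
        pts = sorted([(x1, f1), (x2, f2), (x3, f3), (x_new, f(x_new))],
                     key=lambda p: p[1])[:3]
        pts.sort()                      # restore left-to-right order
        (x1, f1), (x2, f2), (x3, f3) = pts
    return x2, f2
```

For a function that is exactly quadratic, such as (x - 2)^2, the very first fitted parabola already has its vertex at the true minimum.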

7. Cubic interpolation

The cubic interpolation method also takes three construction points near the minimum point of the function, then uses these three points to construct a cubic curve and takes an extreme point of this curve as an approximation of the extreme point of the function. Since three points cannot uniquely determine a cubic curve, a derivative condition at one of the points must be added. A cubic curve has two extreme points, and only the one at which the second derivative is greater than 0 can be used as the approximation of the function's minimum point. Clearly the idea is the same as that of the parabola method, so both can obtain solutions of high accuracy, but the cubic interpolation method is slower.
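To illustrate the idea, here is a Python sketch of a widely used two-point variant of cubic interpolation, which determines the cubic from function values and derivatives at two points (cubic Hermite interpolation). This is not necessarily the exact three-points-plus-one-derivative construction described above, but it uses the same principle: jump to the extremum of the interpolating cubic at which the second derivative is positive.

```python
import math

def cubic_interp_min(f, df, x1, x2, eps=1e-8, max_iter=100):
    """Two-point cubic interpolation: fit a cubic from f and f' at x1, x2,
    then move to the cubic's local minimum (where its second derivative > 0).
    This two-point form is an assumption; the article describes a
    three-point variant."""
    for _ in range(max_iter):
        f1, f2, g1, g2 = f(x1), f(x2), df(x1), df(x2)
        d1 = g1 + g2 - 3 * (f1 - f2) / (x1 - x2)
        rad = d1 * d1 - g1 * g2
        if rad < 0:                     # cubic has no real extremum here
            break
        d2 = math.copysign(math.sqrt(rad), x2 - x1)
        denom = g2 - g1 + 2 * d2
        if denom == 0:
            break
        x_new = x2 - (x2 - x1) * (g2 + d2 - d1) / denom
        if abs(x_new - x2) < eps:       # step small enough: converged
            return x_new
        x1, x2 = x2, x_new              # slide the pair of points forward
    return x2
```

For a function that is exactly cubic, such as f(x) = x^3 - 3x with its local minimum at x = 1, the first interpolation step already lands on the minimum.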

Origin blog.csdn.net/woaiyyt/article/details/113771026