1. Brief description
A MATLAB implementation of the steepest descent method.
Definition: an algorithm that searches along the negative gradient direction (the negative gradient is the direction of steepest descent).
algorithm steps:
Step 0: Choose an initial point x0 and an error tolerance ε ∈ (0, 1); let k = 1.
Step 1: Compute the gradient of the objective function, gk = ∇f(xk).
If ||gk|| <= ε, the error requirement is met: stop and output xk as an approximate optimal solution.
Step 2: Take the search direction as dk=-gk (that is, the negative gradient direction).
Step 3: Determine the step size αk by a line search (the Armijo rule is used here).
The step size has the form αk = β^mk, so the task is to find mk.
The Armijo rule is:
(1) Given β ∈ (0, 1) and σ ∈ (0, 0.5), let m = 0.
(2) If the inequality
f(xk + β^m*dk) <= f(xk) + σ*β^m*gk'*dk
holds, let mk = m and xk+1 = xk + β^mk*dk; stop and output the step size αk = β^mk.
(3) Otherwise, set m = m+1 and return to (2).
Step 4: With the step size determined, let xk+1 = xk + αk*dk and k = k+1, then go to Step 1.
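As a cross-check of the steps above, here is a minimal sketch in Python/NumPy rather than MATLAB. The function name steepest_descent_armijo and the default values for β, σ, and the iteration cap are illustrative choices, not part of the original; the objective is the one used in the main program of Section 2.

```python
import numpy as np

def steepest_descent_armijo(f, grad, x0, eps=1e-6, beta=0.5, sigma=0.2, max_iter=1000):
    """Steepest descent with the Armijo rule (Steps 0-4 above)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)                      # Step 1: gradient g_k at x_k
        if np.linalg.norm(g) <= eps:     # stopping test ||g_k|| <= eps
            break
        d = -g                           # Step 2: search direction d_k = -g_k
        m = 0                            # Step 3: Armijo backtracking
        while f(x + beta**m * d) > f(x) + sigma * beta**m * (g @ d):
            m += 1
        x = x + beta**m * d              # Step 4: x_{k+1} = x_k + alpha_k d_k
    return x

# Objective from the main program: f(x) = x1^2 - 5*x1 - x1*x2 + x2^2 - 4*x2
f = lambda x: x[0]*(x[0] - 5 - x[1]) + x[1]*(x[1] - 4)
grad = lambda x: np.array([2*x[0] - 5 - x[1], -x[0] + 2*x[1] - 4])
x_opt = steepest_descent_armijo(f, grad, [1.0, 4.0])
```

For this objective the iterates approach the stationary point (14/3, 13/3), where the gradient vanishes.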
The MATLAB code is as follows:
2. Code
Main program:
%% Use the steepest descent method to find the optimal solution
f1204 = inline('x(1)*(x(1)-5-x(2))+x(2)*(x(2)-4)','x'); % objective function
grad=inline('[2*x(1)-5-x(2),-x(1)+2*x(2)-4]','x'); % Gradient function of objective function
x0 = [1 4];
TolX = 1e-4;
TolFun = 1e-9;
MaxIter = 100;
dist0=1;
[xo,fo] = Opt_Steepest(f1204,grad,x0,TolX,TolFun, dist0,MaxIter)
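Because this objective is quadratic, its minimizer can be obtained exactly by setting the gradient to zero, which gives a reference value for the result returned by Opt_Steepest. A quick check (a sketch in Python/NumPy, not part of the original MATLAB code):

```python
import numpy as np

# Setting the gradient [2*x1 - 5 - x2, -x1 + 2*x2 - 4] to zero gives the
# linear system  [[2, -1], [-1, 2]] @ [x1, x2] = [5, 4].
A = np.array([[2.0, -1.0], [-1.0, 2.0]])
b = np.array([5.0, 4.0])
x_star = np.linalg.solve(A, b)          # exact minimizer [14/3, 13/3]

f = lambda x: x[0]*(x[0] - 5 - x[1]) + x[1]*(x[1] - 4)
f_star = f(x_star)                      # minimum value -183/9
```

So the call above should return xo close to (4.6667, 4.3333) with fo close to -20.3333.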
Subroutine:
function [xo,fo] = Opt_Steepest(f,grad,x0,TolX,TolFun,dist0,MaxIter)
% Use the steepest descent method to find the optimal solution
% Inputs:  f       - objective function
%          grad    - gradient function of f
%          x0      - initial point
%          TolX    - tolerance on the change in x
%          TolFun  - tolerance on the change in the function value
%          dist0   - initial step size
%          MaxIter - maximum number of iterations
% Outputs: xo      - point where the minimum is attained
%          fo      - minimum function value
%%%%%% check the number of input arguments and set defaults
if nargin < 7
MaxIter = 100; % The default maximum number of iterations is 100
end
if nargin < 6
dist0 = 10; % The default initial step size is 10
end
if nargin < 5
TolFun = 1e-8; % function value error is 1e-8
end
if nargin < 4
TolX = 1e-6; % independent variable distance error
end
%%%%% initialization: starting point and starting function value
x = x0;
fx0 = feval(f,x0);
fx = fx0;
dist = dist0;
kmax1 = 25; % maximum number of line-search trials for determining the step size
warning = 0;
%%%%% iterate to find the optimal solution
for k = 1: MaxIter
g = feval(grad,x);
g = g/norm(g); % normalized gradient direction at x
%% line search to determine the step size
dist = dist*2; % double the previous step size
fx1 = feval(f,x-dist*2*g);
for k1 = 1:kmax1
fx2 = fx1;
fx1 = feval(f,x-dist*g);
if fx0 > fx1+TolFun && fx1 < fx2-TolFun % fx0 > fx1 < fx2: a bracketing triple has been found
den = 4*fx1 - 2*fx0 - 2*fx2; num = den - fx0 + fx2; % quadratic approximation method
dist = dist*num/den;
x = x - dist*g; fx = feval(f,x); % determine the next point
break;
else
dist = dist/2;
end
end
if k1 >= kmax1
warning = warning + 1; % cannot determine the optimal step size
else
warning = 0;
end
if warning >= 2 || (norm(x - x0) < TolX && abs(fx - fx0) < TolFun)
break;
end
x0 = x;
fx0 = fx;
end
xo = x; fo = fx;
if k == MaxIter
fprintf('Reached the maximum of %d iterations without meeting the tolerances\n',MaxIter);
end
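The quadratic approximation inside the inner loop fits a parabola through the three samples f(x), f(x - dist*g), and f(x - 2*dist*g) (that is, fx0, fx1, fx2) and jumps to its vertex; the den and num expressions in the code are exactly that vertex. A small check in Python, where quad_step is a hypothetical helper mirroring the MATLAB expressions:

```python
def quad_step(f0, f1, f2, d):
    """Vertex of the parabola through (0, f0), (d, f1), (2d, f2),
    using the same den/num expressions as Opt_Steepest."""
    den = 4*f1 - 2*f0 - 2*f2
    num = den - f0 + f2
    return d * num / den

# Sampling an actual parabola q(t) = (t - 0.7)^2 + 1 at t = 0, 0.5, 1.0
# (so d = 0.5) must recover its true minimizer t = 0.7 exactly, since
# quadratic interpolation is exact for a quadratic.
q = lambda t: (t - 0.7)**2 + 1
t_min = quad_step(q(0.0), q(0.5), q(1.0), 0.5)
```

This is why the code requires fx0 > fx1 < fx2 before taking the interpolation step: the bracketing triple guarantees the fitted parabola opens upward, so its vertex is a minimum.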
3. Running results