25.9 Introduction to the 10 optimization methods in MATLAB: the penalty function method for constrained optimization problems (MATLAB program)

1. Brief description

1. Algorithm principle
1. Problem introduction
Most of the algorithms we have studied so far are for unconstrained optimization problems; they include the golden section method, Newton's method, quasi-Newton methods, the conjugate gradient method, the simplex method, and so on. In practical engineering, however, most optimization problems are constrained. The penalty function method transforms a constrained optimization problem into an unconstrained one, so that an unconstrained optimization algorithm can be applied.

2. Classification of constrained optimization problems
Constrained optimization problems can be roughly divided into three categories: equality constraints, inequality constraints, and equality + inequality constraints.

Its mathematical model is:

Equality-constrained problem:

min f(x)
s.t.  hv(x) = 0, v = 1, 2, ..., p < n

Inequality-constrained problem:

min f(x)
s.t.  gu(x) <= 0, u = 1, 2, ..., m

Equality + inequality constrained problem:

min f(x)
s.t.  hv(x) = 0, v = 1, 2, ..., p < n
      gu(x) <= 0, u = 1, 2, ..., m
 

3. Definition of the Penalty Function Method
The penalty function method, also known as the sequential unconstrained minimization technique (SUMT), adds suitably constructed composite terms built from the equality and inequality constraints to the original objective function, forming a penalty function. The constraints are thereby removed, and a series of unconstrained optimization problems is solved instead.

Depending on whether the iteration points generated while minimizing the penalty function stay inside the feasible region of the constraints, the method is divided into the interior point method, the exterior point method, and the hybrid method.

Interior point method: the iteration points stay inside the feasible region of the constraints; it can only be used for inequality constraints.

Exterior point method: the iteration points may lie outside the feasible region of the constraints; it can be used for both inequality and equality constraints.
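
To make the two constructions concrete, here is a minimal MATLAB sketch of a typical exterior penalty term and a typical interior (barrier) penalty term; the objective and constraint functions are hypothetical, chosen only for illustration.

f = @(x) x(1)^2 + x(2)^2;        % hypothetical objective
h = @(x) x(1) - 2;               % hypothetical equality constraint h(x) = 0
g = @(x) x(1) + x(2) - 1;        % hypothetical inequality constraint g(x) <= 0

% Exterior penalty: only penalizes points that violate the constraints,
% so the iterates may approach the solution from outside the feasible region.
F_ext = @(x,M) f(x) + M*( h(x)^2 + max(0, g(x))^2 );

% Interior (barrier) penalty: blows up near the boundary g(x) = 0 and is
% only defined for g(x) < 0, which is why it handles inequality constraints only.
F_int = @(x,r) f(x) - r*log( -g(x) );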

4. Exterior point penalty function method
Consider an example with equality constraints:

min f(x)
s.t.   h1(x) = x1 - 2 = 0,  h2(x) = x2 + 3 = 0

 

Algorithm steps (a MATLAB sketch follows these steps):

a. Construct the penalty function F = f + M * { [h1(x)]^2 + [h2(x)]^2 }, where M is the initial penalty factor;

b. Solve the resulting unconstrained problem with an unconstrained extreme-value algorithm (e.g., Newton's method);

c. If the distance between two adjacent unconstrained minimizers of the penalty function is small enough [norm(x1-x0) < eps], the iteration has converged;

        otherwise, amplify the penalty factor, M = C*M, where C is the penalty factor amplification coefficient;

d. Return to step a and continue iterating.
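
Below is a minimal MATLAB sketch of these steps for the example above. It is not code from this post: the objective f is an assumed placeholder, and fminsearch is used in place of Newton's method for the unconstrained subproblem.

f  = @(x) (x(1)-1)^2 + (x(2)+1)^2;   % placeholder objective (assumption, not from the post)
h1 = @(x) x(1) - 2;                  % equality constraint h1(x) = 0
h2 = @(x) x(2) + 3;                  % equality constraint h2(x) = 0

M   = 1;       % initial penalty factor
C   = 10;      % penalty factor amplification coefficient
tol = 1e-6;    % tolerance on the distance between successive minimizers
x0  = [0 0];   % starting point

for k = 1:50
    F  = @(x) f(x) + M*( h1(x)^2 + h2(x)^2 );  % step a: penalty function for the current M
    x1 = fminsearch(F, x0);                    % step b: unconstrained minimization
    if norm(x1 - x0) < tol                     % step c: convergence check
        break;
    end
    x0 = x1;
    M  = C*M;                                  % amplify M and return to step a
end
x1   % approximate constrained minimizer; approaches [2 -3] as M grows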

 

2. Code

 

Main program:

 

clear
f = 'f1209';
x0 = [3 0];
TolX = 1e-4;
TolFun = 1e-9;
MaxIter = 100;
alpha0 = 1;
% Correct results:
[xo_Nelder,fo_Nelder] = Opt_Nelder(f,x0,TolX,TolFun,MaxIter) % Nelder method
[fc_Nelder,fo_Nelder,co_Nelder] = f1209(xo_Nelder) % Nelder method result
[xo_s,fo_s] = fminsearch(f,x0) % MATLAB built-in function fminsearch()
[fc_s,fo_s,co_s] = f1209(xo_s) % corresponding result
%%% Gradient-based methods (steepest descent, etc.) get the wrong result
grad = inline('[2*(x(1)+1)*((x(1)-1.2)^2+0.4*(x(2)-0.5)^2)+((x(1)+1)^2+4*(x(2)-1.5)^2)*2*(x(1)-1.2), 8*(x(2)-1.5)*((x(1)-1.2)^2+0.4*(x(2)-0.5)^2)+((x(1)+1)^2+4*(x(2)-1.5)^2)*0.8*(x(2)-0.5)]','x');
xo_steep = Opt_Steepest(f,grad,x0,TolX,TolFun,alpha0) % steepest descent method
[fc_steep,fo_steep,co_steep] = f1209(xo_steep) % corresponding result
[xo_u,fo_u] = fminunc(f,x0); % MATLAB built-in function fminunc()
[fc_u,fo_u,co_u] = f1209(xo_u) % corresponding result
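
The function file f1209 is not included in the post. Judging only from its three outputs in the calls above, it presumably returns the penalized cost, the original objective value, and the constraint value(s). A purely hypothetical sketch of that structure (the objective, constraint, and penalty weight below are assumptions, not the original f1209):

function [fc,f,c] = f1209_sketch(x)   % hypothetical stand-in, not the original f1209
% fc : penalized objective, f : original objective, c : constraint value(s)
f  = (x(1)-1.2)^2 + 0.4*(x(2)-0.5)^2;   % assumed objective
c  = x(1)^2 + x(2)^2 - 4;               % assumed inequality constraint c(x) <= 0
fc = f + 100*max(0,c)^2;                % exterior penalty with an assumed weight of 100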

 

Subroutine:

 

function [xo,fo] =Opt_Nelder(f,x0,TolX,TolFun,MaxIter)
% Nelder-Mead method for multidimensional optimization problems (dimension >= 2).
N = length(x0);
if N == 1 % One-dimensional case, use quadratic approximation to calculate
    [xo,fo] = Opt_Quadratic(f,x0,TolX,TolFun,MaxIter);
    return
end
S = eye(N);
for i = 1:N % when the dimension is greater than 2, repeat the calculation over each coordinate sub-plane
    i1 = i + 1;
    if i1 > N
        i1 = 1;
    end
    abc = [x0; x0 + S(i,:); x0 + S(i1,:)]; % simplex in each directional sub-plane
    fabc = [feval(f,abc(1,:)); feval(f,abc(2,:)); feval(f,abc(3,:))];
    [x0,fo] = Nelder0(f,abc,fabc,TolX,TolFun,MaxIter);
    if N < 3 % two-dimensional case does not need to repeat
        break;
    end 
end
xo = x0;
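
The helper routines Opt_Quadratic, Nelder0, and Opt_Steepest called above are not reproduced in the post. As an illustration of the kind of routine expected, here is a minimal steepest descent sketch with the same calling signature as Opt_Steepest, using a simple backtracking line search; it is an assumption, not the original code.

function [xo,fo] = Opt_Steepest(f,grad,x0,TolX,TolFun,alpha0)
% Minimal steepest descent sketch: step along the negative gradient with a
% backtracking line search until the step or the function decrease is small.
x = x0(:)';  fx = feval(f,x);
for k = 1:200
    g = feval(grad,x);                  % gradient at the current point
    d = -g/max(norm(g),eps);            % normalized descent direction
    alpha = alpha0;
    while feval(f,x + alpha*d) > fx && alpha > TolX
        alpha = alpha/2;                % backtracking: halve the step until f decreases
    end
    xnew = x + alpha*d;  fnew = feval(f,xnew);
    if norm(xnew - x) < TolX || abs(fnew - fx) < TolFun
        x = xnew;  fx = fnew;
        break;
    end
    x = xnew;  fx = fnew;
end
xo = x;  fo = fx;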

 

 

3. Running results

 


Origin blog.csdn.net/m0_57943157/article/details/131988551