3. Nonlinear programming in mathematical modeling

1. Definition
2. Examples and Matlab solution code

1. Definition

1. Nonlinear programming (NLP) is a branch of mathematical optimization in which the objective function and/or the constraints contain nonlinear terms. Unlike linear programming, it seeks optimal solutions under nonlinear relationships between the variables. NLP has wide applications in engineering, economics, logistics, finance, and other fields, and can be used to solve practical problems such as production optimization, portfolio optimization, and engineering design. However, nonlinear programming problems are usually harder than linear ones: the solution process may run into local optima and numerical instability, so careful problem modeling and appropriate numerical techniques are required.

2. The general form of a nonlinear programming problem can be written as:

    min f(x)
    s.t.  g_i(x) <= 0,  i = 1, ..., m
          h_j(x) = 0,   j = 1, ..., l

where f, the g_i, and the h_j may all be nonlinear functions of the decision vector x.

Such problems can be solved with mathematical optimization algorithms such as gradient descent, quasi-Newton methods, and global optimization algorithms. The appropriate choice often depends on the nature of the problem, the complexity of the constraints, and the accuracy required of the solution.
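As a rough illustration of this general form (sketched in Python with scipy.optimize; the same setup maps directly onto Matlab's fmincon), consider minimizing a simple nonlinear objective under one nonlinear inequality constraint:

```python
from scipy.optimize import minimize

# Objective: f(x, y) = (x - 1)^2 + (y - 2)^2, a nonlinear (convex) function
def f(v):
    x, y = v
    return (x - 1)**2 + (y - 2)**2

# One nonlinear inequality constraint: x^2 + y^2 <= 4
# (scipy's "ineq" convention is fun(x) >= 0)
cons = [{"type": "ineq", "fun": lambda v: 4 - v[0]**2 - v[1]**2}]

res = minimize(f, x0=[0.0, 0.0], constraints=cons, method="SLSQP")
print(res.x)  # the point on the circle x^2 + y^2 = 4 closest to (1, 2)
```

The unconstrained minimizer (1, 2) is infeasible, so the solver returns the boundary point closest to it.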

3. Differences from linear programming

(1) Linearity of the objective function and constraints:

Linear programming: both the objective function and the constraints are linear, i.e. the variables enter only through expressions such as ax + by + cz.
Nonlinear programming: the objective function and/or constraints may contain nonlinear terms such as squares, exponentials, or logarithms, which makes the problem more complex.
(2) Differences in algorithms:

Linear programming: can be solved with efficient algorithms such as the simplex method, which usually find the optimal solution in polynomial time.
Nonlinear programming: usually requires more complex optimization algorithms such as gradient descent, quasi-Newton methods, or genetic algorithms. Their performance can depend on the characteristics of the problem and the initial guess, and solution times can be long.
(3) Properties of solution:

Linear programming: the solution, if it exists, is either a unique optimum or one of infinitely many optima; otherwise the problem is infeasible or unbounded. This makes linear programs relatively easy to analyze.
Nonlinear programming: problems often have multiple local optima, and finding the global optimum requires more computing resources and search strategies. The solution space is therefore usually much more complex.
(4) Application fields:

Linear programming : often used in resource allocation, production planning, transportation problems, etc. These problems can usually be well described by linear models.
Nonlinear programming : more suitable for problems involving nonlinear phenomena, such as curve fitting, portfolio optimization, engineering design, etc.
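One way to see the difference between local and global behavior in practice: on a nonconvex objective, a local method converges to whichever minimum lies in the basin of its starting point. A small illustrative sketch in plain Python (the function and step size are hypothetical choices):

```python
# f(x) = x^4 - 4x^2 + x has two local minima (near x = -1.47 and x = +1.35).
# Plain gradient descent with a small step converges to whichever minimum
# lies in the basin of attraction of the starting point.
fprime = lambda x: 4*x**3 - 8*x + 1  # derivative of f

def descend(x, lr=0.01, steps=2000):
    for _ in range(steps):
        x -= lr * fprime(x)  # step against the gradient
    return x

left, right = descend(-2.0), descend(+2.0)
print(left, right)  # two different local minima from two different starts
```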

4. Classification of different characteristics and constraints of nonlinear programming problems

(1) Constrained nonlinear programming:
The most common type, containing one or more equality constraints and/or inequality constraints. The goal is to minimize the objective function while satisfying all constraints.

(2) Unconstrained nonlinear programming:
No constraints; only the objective function needs to be minimized. Here, finding the local minima of the function is the main challenge.

(3) Semi-unconstrained nonlinear programming:
Such problems mix integer and continuous variables and carry constraints: some variables must take integer values while the rest vary continuously. This couples mixed-integer programming with nonlinear programming and makes the problem more complex.

(4) Mixed-integer nonlinear programming:
The objective function and/or constraints contain nonlinear terms, and the problem also contains both integer and continuous variables. This is a very hard class of optimization problems that requires specialized algorithms and techniques.

(5) Global nonlinear programming:
Most nonlinear programming algorithms find local rather than global minima. Global nonlinear programming seeks the global minimum of the objective function, which is usually more challenging and calls for global optimization algorithms such as genetic algorithms or simulated annealing.

(6) Multi-objective nonlinear programming:
Involves several conflicting objective functions and seeks a set of solutions that balance them; this set of solutions is called the Pareto front.

(7) Convex nonlinear programming:
The objective function and all constraints are convex functions. Such problems are usually easier to solve thanks to the good properties of convex optimization.

(8) Non-convex nonlinear programming:
The objective function and/or constraints contain at least one non-convex function. Non-convex problems are generally more challenging because they may have multiple local optima.
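A quick sketch of the global-search idea in Python: scipy's differential_evolution (a population-based global method in the same family of ideas as genetic algorithms) applied to a one-dimensional function with two local minima, without needing a good starting guess:

```python
from scipy.optimize import differential_evolution

# f(x) = x^4 - 4x^2 + x has two local minima; the deeper (global) one
# is near x = -1.47. A population-based global method searches the whole
# bounded interval rather than one basin.
f = lambda x: x[0]**4 - 4*x[0]**2 + x[0]
res = differential_evolution(f, bounds=[(-3, 3)], seed=0)
print(res.x)  # close to the global minimum near x = -1.47
```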

5. Methods and steps for solving nonlinear programming problems

Step 1: Problem Modeling

Define the objective function: clearly define the objective function to be minimized or maximized. It is usually a nonlinear function of the decision variables.

Determine the constraints: identify the problem's equality and inequality constraints. These functions limit the feasible solutions and must be satisfied.

Define variable ranges: if applicable, set upper and lower bounds on the decision variables. These bounds help narrow the search space.

Step 2: Choose a solution method

Select an optimization algorithm: choose an algorithm suited to the nature of the problem, the constraints, and the solution requirements. Common choices include gradient descent, quasi-Newton methods, and global optimization algorithms.

Initialization: choose initial values for the decision variables. The initial point can strongly affect the algorithm's performance, so it should be chosen with care.

Step 3: Iterative optimization

Run the selected optimization algorithm iteratively. In each iteration, the algorithm updates the decision variables using gradient information from the objective function and constraints, gradually approaching the optimal solution.

Step 4: Result analysis

Check whether the final solution meets the requirements of the problem: is the objective value close enough to the optimum, and are the constraints satisfied?

Supplementary notes:

Gradient descent: an iterative method for minimizing an objective function. It updates the variables in the direction of steepest descent, given by the negative gradient (derivative) of the objective, gradually decreasing the objective value.
Gradient descent has several variants, including batch, stochastic, and mini-batch gradient descent; which variant to choose depends on the size and nature of the problem.
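The batch and stochastic variants can be sketched side by side on a tiny least-squares problem (in Python with numpy; the data, learning rates, and epoch counts here are illustrative choices):

```python
import numpy as np

# Batch vs. stochastic gradient descent on a tiny least-squares problem:
# fit w in  y = w * x  to data generated with w_true = 2.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 100)
y = 2.0 * x

def batch_gd(w, lr=0.5, epochs=200):
    for _ in range(epochs):
        grad = np.mean(2 * (w * x - y) * x)  # gradient over the full batch
        w -= lr * grad
    return w

def sgd(w, lr=0.1, epochs=200):
    for _ in range(epochs):
        for i in rng.permutation(len(x)):    # one sample at a time
            w -= lr * 2 * (w * x[i] - y[i]) * x[i]
    return w

print(batch_gd(0.0), sgd(0.0))  # both approach w_true = 2
```

Batch descent uses the full dataset per step; SGD trades exactness per step for much cheaper updates, which matters on large datasets.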

Quasi-Newton method : The quasi-Newton method is an iterative method that updates variables by estimating the inverse of the Hessian matrix (second derivative matrix) of the objective function. It converges faster than gradient descent in many cases.
The BFGS (Broyden-Fletcher-Goldfarb-Shanno) algorithm is a commonly used variant of the quasi-Newton method.
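For instance, BFGS as implemented in scipy.optimize handles the classic Rosenbrock test function, which is notoriously slow for plain fixed-step gradient descent:

```python
from scipy.optimize import minimize

# Rosenbrock function, a standard test case for quasi-Newton methods;
# its minimum is at (1, 1) inside a narrow curved valley.
def rosen(v):
    x, y = v
    return (1 - x)**2 + 100*(y - x**2)**2

res = minimize(rosen, x0=[-1.2, 1.0], method="BFGS")
print(res.x)  # converges to the minimum at (1, 1)
```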

Global optimization method : For non-convex problems or multi-modal problems, global optimization methods can find the global optimal solution of the objective function, not just the local optimal solution. This includes genetic algorithms, simulated annealing, particle swarm optimization, etc.

Constrained optimization methods : For nonlinear programming problems with constraints, constrained optimization methods, such as penalty function method, Lagrange multiplier method, or interior point method, can be used to handle constraints.
The interior point method is particularly suitable for large-scale nonlinear programming problems.
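A sketch of the penalty function method in Python: the equality constraint is folded into the objective as a quadratic penalty, and the penalty weight is increased between outer iterations (the problem and the specific weights are illustrative):

```python
from scipy.optimize import minimize

# Penalty method for  min x1^2 + x2^2  s.t.  x1 + x2 = 1.
# The constraint g(x) = x1 + x2 - 1 = 0 is folded into the objective
# as mu * g(x)^2, and mu is increased between outer iterations.
def penalized(v, mu):
    f = v[0]**2 + v[1]**2     # original objective
    g = v[0] + v[1] - 1.0     # equality constraint, g = 0
    return f + mu * g**2

v0 = [0.0, 0.0]
for mu in [1.0, 10.0, 100.0, 1000.0]:  # increasing penalty weights
    v0 = minimize(penalized, v0, args=(mu,)).x  # warm-start each solve
print(v0)  # approaches the constrained optimum (0.5, 0.5)
```

Each unconstrained solve is warm-started from the previous one; as mu grows, the minimizer of the penalized problem approaches the true constrained optimum.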

2. Examples and Matlab solution code

Example 1. Solve the nonlinear program

    min f(x) = x1^2 + x2^2 + x3^2 + 8
    s.t.  x1^2 - x2 + x3^2 >= 0
          x1 + x2^2 + x3^3 <= 20
          -x1 - x2^2 + 2 = 0
          x2 + 2*x3^2 - 3 = 0
          x1, x2, x3 >= 0

1. Matlab solution:
Write the M-file fun1.m to define the objective function:

function f=fun1(x)
f=sum(x.^2)+8;   % objective: x1^2 + x2^2 + x3^2 + 8

Write the M-file fun2.m to define the nonlinear constraints:

function [g,h]=fun2(x)
g=[-x(1)^2+x(2)-x(3)^2;
   x(1)+x(2)^2+x(3)^3-20];  % nonlinear inequality constraints g(x) <= 0
h=[-x(1)-x(2)^2+2;
   x(2)+2*x(3)^2-3];        % nonlinear equality constraints h(x) = 0

Write the main program as follows:

[x,y]=fmincon('fun1',rand(3,1),[],[],[],[],zeros(3,1),[],'fun2')
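For comparison, the same problem can be set up in Python with scipy.optimize.minimize (an equivalent sketch, not part of the original Matlab solution); note that scipy's "ineq" constraints mean fun(x) >= 0, the opposite sign convention from fmincon's c(x) <= 0:

```python
import numpy as np
from scipy.optimize import minimize

# Python counterpart of the fmincon call: min sum(x.^2) + 8 subject to
# the same nonlinear inequality/equality constraints and x >= 0.
f = lambda x: np.sum(x**2) + 8

cons = [
    # scipy "ineq" means fun(x) >= 0, i.e. the negation of fmincon's c <= 0
    {"type": "ineq", "fun": lambda x: x[0]**2 - x[1] + x[2]**2},
    {"type": "ineq", "fun": lambda x: 20 - x[0] - x[1]**2 - x[2]**3},
    {"type": "eq",   "fun": lambda x: -x[0] - x[1]**2 + 2},
    {"type": "eq",   "fun": lambda x: x[1] + 2*x[2]**2 - 3},
]
res = minimize(f, x0=[1.0, 1.0, 1.0], bounds=[(0, None)]*3,
               constraints=cons, method="SLSQP")
print(res.x, res.fun)
```

Here a fixed feasible start [1, 1, 1] replaces rand(3,1) so the run is reproducible.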

2. Matlab solution of an unconstrained problem.

Example 2. Find and classify the extreme points of

    f(x, y) = x^3 - y^3 + 3*x^2 + 3*y^2 - 9*x
The Matlab program is as follows (note that the symbolic f must not be overwritten inside the loop, so the substituted value is stored in fi):

clc, clear
syms x y
f=x^3-y^3+3*x^2+3*y^2-9*x;
df=jacobian(f);    % first-order partial derivatives
d2f=jacobian(df);  % Hessian matrix
[xx,yy]=solve(df)  % stationary points
xx=double(xx); yy=double(yy);
for i=1:length(xx)
    a=subs(d2f,{x,y},{xx(i),yy(i)});
    b=eig(a);                        % eigenvalues of the Hessian
    fi=subs(f,{x,y},{xx(i),yy(i)});  % value of f at the stationary point
    if all(b>0)
        fprintf('(%f,%f) is a local minimum with value %f\n',xx(i),yy(i),fi);
    elseif all(b<0)
        fprintf('(%f,%f) is a local maximum with value %f\n',xx(i),yy(i),fi);
    elseif any(b>0) && any(b<0)
        fprintf('(%f,%f) is not an extreme point\n',xx(i),yy(i));
    else
        fprintf('cannot determine whether (%f,%f) is an extreme point\n',xx(i),yy(i));
    end
end
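The same symbolic workflow (stationary points, then Hessian eigenvalues) can be sketched with Python's sympy, for readers without the Symbolic Math Toolbox:

```python
import sympy as sp

# Classify the stationary points of f = x^3 - y^3 + 3x^2 + 3y^2 - 9x
x, y = sp.symbols('x y', real=True)
f = x**3 - y**3 + 3*x**2 + 3*y**2 - 9*x

grad = [sp.diff(f, v) for v in (x, y)]   # first-order partial derivatives
H = sp.hessian(f, (x, y))                # Hessian matrix
points = sp.solve(grad, [x, y], dict=True)  # stationary points

results = {}
for p in points:
    eigs = list(H.subs(p).eigenvals())   # eigenvalues of the Hessian at p
    if all(e > 0 for e in eigs):
        kind = "local min"
    elif all(e < 0 for e in eigs):
        kind = "local max"
    else:
        kind = "saddle"
    results[(p[x], p[y])] = (kind, f.subs(p))
print(results)  # local min at (1, 0) with f = -5; local max at (-3, 2) with f = 31
```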

3. Solving a constrained extreme-value (quadratic programming) problem in Matlab.

Example 3. Solve

    min f(x) = 2*x1^2 - 4*x1*x2 + 4*x2^2 - 6*x1 - 3*x2
    s.t.  x1 + x2 <= 3
          4*x1 + x2 <= 9
          x1, x2 >= 0

Write the following program:

h=[4,-4;-4,8];   % quadratic term: min 0.5*x'*h*x + f'*x
f=[-6;-3];
a=[1,1;4,1];     % linear inequality constraints a*x <= b
b=[3;9];
[x,value]=quadprog(h,f,a,b,[],[],zeros(2,1))
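SciPy has no direct quadprog counterpart, but the same quadratic program can be solved with a general constrained solver (an equivalent sketch using the same H, f, A, b):

```python
import numpy as np
from scipy.optimize import minimize

# Quadratic program min 0.5*x'Hx + f'x with linear constraints A x <= b
# and x >= 0 (the same data passed to Matlab's quadprog), solved here
# with a general NLP solver.
H = np.array([[4.0, -4.0], [-4.0, 8.0]])
f = np.array([-6.0, -3.0])
A = np.array([[1.0, 1.0], [4.0, 1.0]])
b = np.array([3.0, 9.0])

obj = lambda x: 0.5 * x @ H @ x + f @ x
cons = [{"type": "ineq", "fun": lambda x: b - A @ x}]  # A x <= b
res = minimize(obj, x0=[0.0, 0.0], bounds=[(0, None)]*2,
               constraints=cons, method="SLSQP")
print(res.x, res.fun)  # optimum near (1.95, 1.05) with value -11.025
```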

4. Using gradients to solve a constrained optimization problem in Matlab.

Example 4. Solve

    min f(x) = exp(x1) * (4*x1^2 + 2*x2^2 + 4*x1*x2 + 2*x2 + 1)
    s.t.  x1*x2 - x1 - x2 <= -1.5
          x1*x2 >= -10

The M-file fun10.m defines the objective function and its gradient:

function [f,df]=fun10(x)
f=exp(x(1))*(4*x(1)^2+2*x(2)^2+4*x(1)*x(2)+2*x(2)+1);
df=[exp(x(1))*(4*x(1)^2+2*x(2)^2+4*x(1)*x(2)+8*x(1)+6*x(2)+1);  % df/dx1
    exp(x(1))*(4*x(2)+4*x(1)+2)];                                % df/dx2

The M-file fun11.m defines the constraints and their gradients:

function [c,ceq,dc,dceq]=fun11(x)
c=[x(1)*x(2)-x(1)-x(2)+1.5;  % nonlinear inequality constraints c(x) <= 0
   -x(1)*x(2)-10];
dc=[x(2)-1,-x(2);            % gradients of c, one column per constraint
    x(1)-1,-x(1)];
ceq=[]; dceq=[];             % no equality constraints

Write the main program file as follows

options=optimset('GradObj','on','GradConstr','on');
[x,y]=fmincon(@fun10,rand(2,1),[],[],[],[],[],[],@fun11,options)
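The analogue of 'GradObj','on' in scipy.optimize is returning the objective and its gradient together and passing jac=True (a sketch of the same objective, shown here without the constraints for brevity):

```python
import numpy as np
from scipy.optimize import minimize

# Supplying the analytic gradient (like 'GradObj','on' for fmincon):
# the function returns (f, grad f) and minimize is told so via jac=True.
def fun_and_grad(x):
    e = np.exp(x[0])
    q = 4*x[0]**2 + 2*x[1]**2 + 4*x[0]*x[1] + 2*x[1] + 1
    f = e * q
    df = np.array([e * (q + 8*x[0] + 4*x[1]),  # product rule: e*q + e*dq/dx1
                   e * (4*x[1] + 4*x[0] + 2)])
    return f, df

res = minimize(fun_and_grad, x0=[-1.0, 1.0], jac=True, method="BFGS")
print(res.x, res.fun)  # unconstrained minimum at (0.5, -1) with f = 0
```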


Origin blog.csdn.net/qq_55433305/article/details/132874084