Optimization Model (2): Nonlinear programming explained with examples, solving nonlinear programming with Scipy.optimize

Mathematical modeling series:

The following are model algorithms and code I compiled while preparing for the national mathematical modeling competition; I will update the content as time permits:

Evaluation Model (1): Analytic Hierarchy Process (AHP), entropy weight method, TOPSIS analysis, with Python implementation code and worked examples
Evaluation Model (2): Principal component analysis, factor analysis, a comparison of the two, with Python implementation code and worked examples
Optimization Model (zero): Overview, classification and analysis of optimization models, and general problem-solving steps
Optimization Model (1): Linear programming explained with examples, solved with Python's PuLP library
Optimization Model (2): Nonlinear programming explained with examples, solved with Scipy.optimize

3.3 Scipy.optimize solves nonlinear programming

Overview of Nonlinear Programming:

When the objective function f(x) or any constraint is a nonlinear expression of the decision variables x, the problem is a nonlinear program. Nonlinear programs are much harder to solve than linear programs: there is no universal algorithm, and most methods choose an initial value for the decision variables and then search iteratively for the optimum. This article uses functions from the SciPy library.

scipy solution process – programming steps:

  1. Import the scipy and numpy packages.
  2. Define the objective function objf3(x); the input x is a vector and the return value fx is the objective value. minimize() always minimizes; to maximize, negate the objective.
  3. Define boundary constraints, i.e. the upper and lower limits of the optimization variables:
    • minimize() defaults to no boundary constraints, i.e. each variable's value range is unrestricted;
    • if boundary constraints are set, upper and lower limits must be defined for every decision variable; pay attention to the required format;
    • if a variable has no upper (or lower) limit, write it as None.
  4. Define the initial value of x.
  5. Solve the minimization problem: the objective function objf3 and the initial search point xIni are required; the optimization method and boundary conditions are optional. A maximization problem maxFx is handled via the transformation minFx = -maxFx.
  6. Obtain the optimal point xOpt from the return value resRosen.x.
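The steps above can be sketched with a minimal bound-constrained problem (the quadratic objective and the bounds here are illustrative, not from the original post):

```python
import numpy as np
from scipy.optimize import minimize

# Step 2: objective function (minimize() always minimizes)
def objf(x):
    return (x[0] - 1)**2 + (x[1] - 2.5)**2

# Step 3: bounds for each variable; None means unbounded
bnds = ((0.0, None), (0.0, None))

# Step 4: initial guess
xIni = np.array([2.0, 0.0])

# Step 5: solve; step 6: read the optimum from res.x
res = minimize(objf, xIni, method='SLSQP', bounds=bnds)
print(res.x)  # -> approximately [1.  2.5]
```

The unconstrained minimum (1, 2.5) already satisfies the bounds, so the solver returns it directly.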

Explanation of some Scipy.optimize parameters:

minimize(fun, x0, method, bounds, constraints) solves the minimization problem.

fun: the objective function defined in step 2;

x0: the initial guess, as a list, tuple, or (typically) an N-dimensional ndarray;

method: usually 'SLSQP' (Sequential Least Squares Programming);

bounds: the variable bounds / value ranges, e.g. b0 = (0.0, None) means 0.0 <= x[0] < Inf (None stands for infinity);

constraints: the constraints defined previously, passed in here.

jac: optional; the Jacobian of the objective function (its first-order partial derivatives). It is used by these methods: CG, BFGS, Newton-CG, L-BFGS-B, TNC, SLSQP, dogleg, trust-ncg;

cons: the constraint specification, e.g. cons = ({'type': 'eq', 'fun': lambda x: f(x)})

cons = ({'type': 'eq', 'fun': lambda x: x[0]*x[1]*x[2] - 12})

type: the kind of constraint; 'eq' is an equality (fun(x) == 0), 'ineq' is an inequality (fun(x) >= 0).

lambda: the constraint function expression. The convention is to move all terms to the left-hand side so that the function value must be >= 0. For example, lambda x: x[0]*x[1]*x[2] - 12 means x*y*z >= 12. For a <= constraint, negate the expression: -(x*y*z - 12) >= 0 means x*y*z <= 12. To approximate a strict inequality, subtract a very small number: lambda x: x[0]*x[1]*x[2] - 12 - 1e-20 is approximately x*y*z > 12.
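As an illustrative sketch of passing such a lambda constraint to minimize() (the objective x + y + z here is made up; the constraint x*y*z >= 12 is the one from the text):

```python
import numpy as np
from scipy.optimize import minimize

# constraint x*y*z >= 12, written in SciPy's fun(x) >= 0 convention
cons = ({'type': 'ineq', 'fun': lambda x: x[0]*x[1]*x[2] - 12},)

# illustrative objective: minimize x + y + z with x, y, z >= 0
res = minimize(lambda x: x[0] + x[1] + x[2], np.array([2.0, 2.0, 3.0]),
               method='SLSQP', constraints=cons, bounds=((0, None),) * 3)
print(res.x)  # by symmetry the optimum is x = y = z = 12**(1/3), product 12
```

The constraint is active at the optimum: the solver pushes the product down to exactly 12.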

Code example one:

from scipy.optimize import minimize
import numpy as np

# Multivariable bound-constrained optimization (scipy.optimize.minimize)
# Define the objective function
def objf3(x):
    fx = x[0] + 2*x[1] + 3*x[2] + 1
    return fx

# Equality constraint: x0 * x1 * x2 = 12
cons = ({'type': 'eq', 'fun': lambda x: x[0]*x[1]*x[2] - 12})

# Define boundary constraints (upper and lower limits) for each variable
b0 = (0.0, None)  # 0.0 <= x[0] <= Inf
b1 = (0.0, None)  # 0.0 <= x[1] <= Inf
b2 = (0.0, None)  # 0.0 <= x[2] <= Inf
bnds = (b0, b1, b2)  # boundary constraints

# Solve
xIni = np.array([1., 2., 3.])  # initial guess
resRosen = minimize(objf3, xIni, method='SLSQP', constraints=cons, bounds=bnds)
xOpt = resRosen.x

print("xOpt = {:.4f}, {:.4f}, {:.4f}".format(xOpt[0], xOpt[1], xOpt[2]))
print("min f(x) = {:.4f}".format(objf3(xOpt)))
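As a sanity check (my addition, not in the original post): with the constraint x·y·z = 12, the AM-GM inequality gives x + 2y + 3z >= 3·(x · 2y · 3z)^(1/3) = 3·72^(1/3), with equality when x = 2y = 3z, so the minimum of f should be about 13.4805:

```python
# analytic optimum via AM-GM: x = 2y = 3z = 72**(1/3)
t = (6 * 12) ** (1 / 3)              # common value of x, 2y and 3z
x, y, z = t, t / 2, t / 3
print(round(x * y * z, 6))           # -> 12.0  (constraint holds)
print(round(x + 2*y + 3*z + 1, 4))   # -> 13.4805 (matches the solver output)
```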

Code example two:

Solve the following optimization problem (shown as an image in the original post; reconstructed here from the code below, where the parameters (a, b, c, d) are set to (1, 2, 3, 8)):

min f(x) = a·x1² + b·x2² + c·x3² + d
s.t.  x1² − x2 + x3² >= 0
      x1 + x2² + x3³ <= 20
      −x1 − x2² + 2 = 0
      x2 + 2·x3² − 3 = 0
      x1, x2, x3 >= xmin (here 0)

from scipy.optimize import minimize
import numpy as np

# Constrained nonlinear programming (scipy.optimize.minimize)
def objF6(args):  # define the objective function
    a, b, c, d = args
    fx = lambda x: a*x[0]**2 + b*x[1]**2 + c*x[2]**2 + d
    return fx

def constraint2(args):
    xmin0, xmin1, xmin2 = args
    cons = ({'type': 'ineq', 'fun': lambda x: (x[0]**2 - x[1] + x[2]**2)},        # inequality constraint f(x) >= 0
            {'type': 'ineq', 'fun': lambda x: -(x[0] + x[1]**2 + x[2]**3 - 20)},  # inequality constraint, converted to standard form
            {'type': 'eq', 'fun': lambda x: (-x[0] - x[1]**2 + 2)},               # equality constraint
            {'type': 'eq', 'fun': lambda x: (x[1] + 2*x[2]**2 - 3)},              # equality constraint
            {'type': 'ineq', 'fun': lambda x: (x[0] - xmin0)},  # x0 >= xmin0
            {'type': 'ineq', 'fun': lambda x: (x[1] - xmin1)},  # x1 >= xmin1
            {'type': 'ineq', 'fun': lambda x: (x[2] - xmin2)})  # x2 >= xmin2
    return cons

# Solve the optimization problem
args1 = (1, 2, 3, 8)     # parameters of the objective function
args2 = (0.0, 0.0, 0.0)  # xmin0, xmin1, xmin2
cons2 = constraint2(args2)

x0 = np.array([1., 2., 3.])  # initial guess for the search
res2 = minimize(objF6(args1), x0, method='SLSQP', constraints=cons2)

print("Optimization problem (res2):\t{}".format(res2.message))  # whether the optimization succeeded
print("xOpt = {}".format(res2.x))            # optimized values of the decision variables
print("min f(x) = {:.4f}".format(res2.fun))  # optimized objective value

Constraints can also be defined like this:

# Define the constraint functions
def constraint1(x):  # inequality constraint f(x) >= 0
    return x[0]**2 - x[1] + x[2]**2
def constraint2(x):  # inequality constraint, converted to standard form
    return -(x[0] + x[1]**2 + x[2]**3 - 20)
def constraint3(x):  # equality constraint
    return -x[0] - x[1]**2 + 2
def constraint4(x):  # equality constraint
    return x[1] + 2*x[2]**2 - 3

# Define boundary constraints
b = (0.0, None)
bnds = (b, b, b)

# Define the constraints
con1 = {'type': 'ineq', 'fun': constraint1}
con2 = {'type': 'ineq', 'fun': constraint2}
con3 = {'type': 'eq', 'fun': constraint3}
con4 = {'type': 'eq', 'fun': constraint4}
cons = [con1, con2, con3, con4]  # 4 constraints
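A sketch of how these named constraint functions plug into the same solve (the objective reuses (a, b, c, d) = (1, 2, 3, 8) and the initial point from code example two):

```python
import numpy as np
from scipy.optimize import minimize

def objf(x):  # a=1, b=2, c=3, d=8 as in code example two
    return x[0]**2 + 2*x[1]**2 + 3*x[2]**2 + 8

def constraint1(x):  # inequality constraint f(x) >= 0
    return x[0]**2 - x[1] + x[2]**2
def constraint2(x):  # inequality constraint, converted to standard form
    return -(x[0] + x[1]**2 + x[2]**3 - 20)
def constraint3(x):  # equality constraint
    return -x[0] - x[1]**2 + 2
def constraint4(x):  # equality constraint
    return x[1] + 2*x[2]**2 - 3

cons = [{'type': 'ineq', 'fun': constraint1},
        {'type': 'ineq', 'fun': constraint2},
        {'type': 'eq', 'fun': constraint3},
        {'type': 'eq', 'fun': constraint4}]
bnds = ((0.0, None),) * 3  # x0, x1, x2 >= 0

res = minimize(objf, np.array([1., 2., 3.]), method='SLSQP',
               bounds=bnds, constraints=cons)
print(res.x, res.fun)
```

Eliminating x0 and x2 with the two equality constraints (x0 = 2 − x1², x2² = (3 − x1)/2) and minimizing over x1 gives f ≈ 13.88, which the solver should reproduce from this starting point.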

References:

Mathematical modeling course for Python beginners-12. Nonlinear programming

Origin blog.csdn.net/m0_63669388/article/details/132703640