Nonlinear programming solution method: Sequential Linear Programming (SLP)

Source: Cornell University Computational Optimization Open Textbook: SLP

Table of contents

1. Introduction

2. Theory and Methods

2.1 Problem Form

2.1.1 NLP Problem Form

2.1.2 SLP Problem Form

2.2 Step Bounds

2.3 Complete SLP Algorithm

3. Examples

3.1 Example 1

3.2 Example 2

4. Applications

1. Introduction

Sequential linear programming (SLP), also called successive linear programming, is a mathematical programming method for solving nonlinear programming (NLP) problems. SLP transforms an NLP into a sequence of linear programming (LP) problems via first-order Taylor series expansion, and each LP can be solved with the simplex method or an LP solver. SLP was first called approximation programming and was developed in 1961 by R. Griffith and R. Stewart of Shell Oil Company. SLP can solve NLP problems without a (usually expensive) NLP solver and without higher-order derivative information. It is not necessarily the fastest NLP algorithm: on a 1994 set of 79 design problems, its performance was similar to the convex approximation method of moving asymptotes, and somewhat worse than CONLIN (a dual optimizer based on convex approximation). But because it requires only an LP solver, it is more cost-effective and is still used today.

2. Theory and Methods

2.1 Problem Form

2.1.1 NLP Problem Form

A general NLP problem has the following form: x = \left[ x_{1}, x_{2}, \ldots, x_{n} \right] is the vector of decision variables, f(x) is the objective function to be minimized, g(x) and h(x) are constraint functions, and x has upper and lower bounds.
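In standard form (using the common convention, consistent with the description above, that g(x) collects the inequality constraints and h(x) the equality constraints):

\min_{x} \; f(x) \quad \text{s.t.} \quad g(x) \le 0, \quad h(x) = 0, \quad x^{L} \le x \le x^{U}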

2.1.2 SLP Problem Form

All constraints and the objective function in the NLP problem are replaced by their first-order Taylor expansions at a point x^{a}, converting the NLP into an SLP problem. This requires that the functions in the NLP problem be differentiable for the SLP algorithm to work. Solving the SLP problem yields a new solution x^{a+1}, which may be infeasible for the NLP, but the degree of infeasibility decreases as the number of iterations increases.
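As a sketch of the expansion (the standard first-order Taylor formula, matching the description above), each function F \in \left\{ f, g, h \right\} is replaced by

F(x) \approx F(x^{a}) + \nabla F(x^{a})^{T} \left( x - x^{a} \right),

which is linear in x, so the resulting subproblem is an LP.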

If the optimal solution of the NLP is a vertex of the feasible region, SLP will converge. In general, however, there is no guarantee that SLP converges, because it does not take superbasic variables into account; it therefore performs poorly on problems with many superbasic variables, i.e., problems whose optimal solution is not at a vertex.

Superbasic variables are a linear programming concept used to describe the status of variables in an optimal solution. In an optimal solution of a linear program, some variables take nonzero values and are called basic variables, while the other variables take the value zero and are called nonbasic variables. Superbasic variables are those nonbasic variables that still carry optimization potential: changing the value of a superbasic variable can improve the objective value, and a better solution can be found by moving superbasic variables into the set of basic variables.

To achieve convergence on problems with superbasic variables, the step bounds method is introduced.

2.2 Step Bounds

The error between the first-order Taylor expansion at a point x^{a} and the original nonlinear function grows as the distance of x from x^{a} increases. It is therefore necessary to impose upper and lower bounds on the SLP search region, also known as moving limits or step bounds: \alpha < x - x^{a} < \beta. If the step bounds are too large, SLP oscillates around the current best solution for a long time; if they are too small, SLP may fail to reach the optimal solution of the NLP. Because the SLP algorithm itself is simple, most improvements to SLP have focused on better heuristics and new algorithms for adjusting the step bounds. Here is a simple algorithm for updating the step bounds:

  • Calculation: first compute two quantities q and r. With x^{a} the old solution and x^{a+1} the new solution, q is the reduction in objective value between the two solutions under the NLP problem, and r is the reduction in objective value between the two solutions under the SLP problem (see the formulas after this list).

  • Update: adjust the step bounds based on the ratio q/r. If q/r = 1, the SLP problem mirrors the NLP problem perfectly. If q/r is close to 1, for example 0.75 \leq q/r \leq 1.25, increase the step bounds. If q/r is far from 1, say q/r \leq 0.25 or q/r \geq 1.25, reduce the step bounds. If q/r is negative, then x^{a+1} is worse than x^{a} for the NLP; reject x^{a+1}, reduce the step bounds, and re-solve the SLP problem at x^{a} with the smaller step bounds.
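Written as formulas (inferred from the computations in Example 1 below, where both quantities are evaluated as new value minus old value):

q = f(x^{a+1}) - f(x^{a}), \qquad r = f_{L}(x^{a+1}) - f(x^{a}),

where f is the NLP objective and f_{L} is the linearized (SLP) objective. Both are negative when the step improves the respective model, so a ratio q/r near 1 means the linear model predicted the actual change well.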

2.3 Complete SLP Algorithm
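The complete algorithm combines the linearization of Section 2.1.2 with the step-bound update of Section 2.2. As an illustration, here is a minimal Python sketch, not the textbook's own code, under the following assumptions: the caller supplies gradient functions; only inequality constraints g(x) \le 0 are handled (equality constraints and variable bounds are omitted for brevity); the LP subproblem is solved with scipy.optimize.linprog; and the names (slp, jac_g, step, and so on) are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def slp(f, grad_f, g, jac_g, x0, step=16.0, tol=1e-6, max_iter=50):
    """Sequential linear programming for: minimize f(x) s.t. g(x) <= 0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # Linearize at x: f(x + d) ~ f(x) + c.d, and g(x + d) ~ g(x) + J d <= 0.
        c = np.asarray(grad_f(x), dtype=float)
        A_ub = np.atleast_2d(jac_g(x))
        b_ub = -np.asarray(g(x), dtype=float)
        # Step bounds (move limits): -step <= d_i <= step for every component.
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(-step, step)] * x.size)
        if not res.success:
            break
        d = res.x
        q = f(x + d) - f(x)  # actual change in the NLP objective
        r = float(c @ d)     # change predicted by the linear model
        ratio = q / r if r != 0.0 else 1.0
        if ratio < 0:
            step *= 0.5      # new point is worse for the NLP: reject and shrink
            continue
        x = x + d            # accept the step
        if 0.75 <= ratio <= 1.25:
            step *= 2.0      # the LP mirrors the NLP well: expand the bounds
        else:
            step *= 0.5      # poor prediction: shrink (simplified rule)
        if np.linalg.norm(d) < tol:
            break            # negligible movement: converged
    return x
```

Solving the LP for the step d = x - x^{a} rather than for x directly keeps the move limits as plain box bounds on the LP variables; the update here collapses the ratio tests of Section 2.2 into "expand near 1, otherwise shrink".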

3. Examples

3.1 Example 1

The given problem is as follows, shown as a graph in which the shaded part is the feasible region; the stopping conditions are set to the aforementioned (3) and (4).

First, the gradient is calculated for the nonlinear term, after writing g(x_{1}, x_{2}) in standard form. The gradient should be [-0.5x_{1}+1, -1]; the source data writes [-0.5x_{1}, -1], which I believe is a mistake. However, since this example only illustrates the process, the calculations below follow the original data. h(x_{1}, x_{2}) is a linear function and does not need to be expanded. The step bounds are updated with the method introduced earlier; the initial bound is set to 16 and the initial point is x_{1}=1, x_{2}=5.

  • First iteration

First calculate the gradients at the point (1,5): the gradient of f is [6,2] and the gradient of g is [-0.5,-1]. Adding the step-bound constraints gives the following SLP model at (1,5).

Solving this SLP model gives the optimal solution f^{*}=2.5, x_{1}^{*}=0, x_{2}^{*}=3.25. Then q = 4.25 - 12 = -7.75 and r = 2.5 - 12 = -9.5, so q/r = 0.815; the point is accepted and the step bound is increased to 32.
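As a check against the update rule from Section 2.2: q/r = (4.25 - 12)/(2.5 - 12) = (-7.75)/(-9.5) \approx 0.82, which lies inside [0.75, 1.25], so the point is accepted and the step bound is doubled from 16 to 32.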

  • Second iteration

As in the previous iteration, calculate the gradients at the point (0, 3.25): the gradient of f is [4.25, 1] and the gradient of g is [1, -1], with the step bound now 32.

Solving this SLP model gives the optimal solution f^{*}=4, x_{1}^{*}=0, x_{2}^{*}=3. Then q = -0.062 and r = 1.5, so q/r = -0.041; the point is rejected and the step bound is reduced to 16.

  • Third iteration

Subsequent iterations show that each new solution found is rejected and the step bound keeps shrinking. This is because the initial step bound was chosen too large; the run eventually terminates after 20 iterations. The graph shows the solution history for initial step bounds of 16 and 0.5, respectively.

3.2 Example 2

In Example 1 the optimal solution is at a vertex, so SLP converges after a few iterations. In Example 2 the optimal solution is not at a vertex. Given the following problem, the process is the same as in Example 1: compute the gradients and replace the functions by their first-order Taylor expansions, with the initial step bound set to 16 and the initial point to (1,5).

  • First iteration: SLP optimal value -3.714, NLP value 7.739 at (4.375, 5.428), q/r = 0.022; accepted, step bound reduced to 8

  • Second iteration: SLP optimal value -10.060, NLP value 11.420, q/r = -0.579; rejected, step bound reduced to 4

  • Third iteration: SLP optimal value -8.965, NLP value 8.040, q/r = -0.057; rejected, step bound reduced to 2

  • Fourth iteration: SLP optimal value -3.857, NLP value 0.623 at (3.663, 3.428), q/r = 49.704; accepted, step bound reduced to 1

  • Fifth iteration: SLP optimal value -0.093, NLP value 1.412 at (2.663, 4.139), q/r = 0.209; accepted, step bound reduced to 0.5

  • Sixth iteration: SLP optimal value 0.128, NLP value 0.0551 at (3.163, 3.724); no better solution than 0.0551 appears in subsequent iterations

In SLP, the step bounds and their update method are very important and directly affect the results. Much of the literature is therefore devoted to developing more effective update methods, such as heuristics.

4. Applications

  • Hybrid renewable energy system operation planning: Vaccari and colleagues (2019) used SLP to generate short-term operating plans that meet power and heat load requirements while minimizing operating costs; each device is modeled as an NLP problem.
  • Optimal power flow (OPF): Mhanna and Mancarella (2021) developed an SLP algorithm to solve optimal power flow problems, which are non-convex and nonlinear.
  • Obstacle avoidance and trajectory planning: Plessen and colleagues (2017) addressed trajectory planning and obstacle avoidance for autonomous vehicles, using a spatial-based problem formulation and SLP to successively improve the planned trajectory.

Origin: blog.csdn.net/weixin_45526117/article/details/131089011