Integer Programming - Chapter 2 Linear Programming

2.1 Preliminary convex analysis

2.1.1 Convex sets and separation theorem

Definition 2.1 Let $C \subseteq \mathbb{R}^n$. If for any $x, y \in C$ and $\lambda \in [0,1]$ we have
$$\lambda x + (1-\lambda)y \in C,$$
then $C$ is called a convex set.

In convex geometry, a convex set is a subset of the affine space closed under convex combinations. More specifically, in Euclidean space, a convex set is such that for every pair of points in the set, every point on the straight line segment connecting the pair of points is also in the set. For example, a cube is a convex set, but anything that is hollow or has indentations such as a crescent is not a convex set.

Theorem 2.1 Suppose $C \subseteq \mathbb{R}^n$ is a non-empty closed convex set and $y \in \mathbb{R}^n$ with $y \notin C$. Then:

  1. There is a unique point $\overline{x} \in C$ such that
     $$\|\overline{x} - y\| = \inf\{\|x - y\| \mid x \in C\}.$$

     In mathematical notation, "inf" stands for "infimum", a concept from mathematical analysis and set theory. The infimum of a set is its greatest lower bound, i.e., the largest real number that is less than or equal to every element of the set.

     This theorem shows that a non-empty closed convex set contains a point closest to the given point $y$, and that this point is unique.

  2. $\overline{x} \in C$ is the point of $C$ nearest to $y$ if and only if
     $$(x - \overline{x})^T(\overline{x} - y) \ge 0, \quad \forall x \in C. \tag{2.1}$$

Theorem 2.2 (Convex Set Separation Theorem) Let $C \subseteq \mathbb{R}^n$ be a non-empty closed convex set and $y \in \mathbb{R}^n$ with $y \notin C$. Then there exist a non-zero vector $a \in \mathbb{R}^n$ and a number $\beta$ such that
$$a^T x \le \beta < a^T y, \quad \forall x \in C.$$
Specifically:

  • $a$ is a non-zero vector; it defines a hyperplane.
  • $\beta$ is a real number; it gives the offset of the hyperplane in the direction of $a$.
  • Every point $x$ of the set $C$ lies on one side of this hyperplane, and the hyperplane separates the point $y$ from the set $C$.

Intuitively, this theorem says that between the convex set $C$ and the point $y$ there is a hyperplane that completely separates them: $y$ lies on one side and $C$ on the other. This hyperplane is a line when $n = 2$, a plane when $n = 3$, or a hyperplane in higher-dimensional space.
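The projection characterization in Theorem 2.1 gives a constructive way to build the separating hyperplane: project $y$ onto $C$ to get $\overline{x}$ and take $a = y - \overline{x}$, $\beta = a^T \overline{x}$. A minimal numerical sketch, assuming for illustration the box $C = [0,1]^n$ (whose projection is simple clipping; this instance is not from the text):

```python
import numpy as np

# Separating a point from a convex set by projection (illustrative sketch;
# the box C = [0,1]^n is an assumption, not from the text).
def separate_from_box(y):
    """Return (a, beta) with a^T x <= beta < a^T y for all x in C = [0,1]^n."""
    x_bar = np.clip(y, 0.0, 1.0)   # projection of y onto the box (unique nearest point)
    a = y - x_bar                  # normal vector of the separating hyperplane
    beta = a @ x_bar               # offset: a^T x <= a^T x_bar for all x in C
    return a, beta

y = np.array([2.0, -0.5, 0.3])    # a point outside the box
a, beta = separate_from_box(y)
print(a, beta, a @ y)             # beta < a^T y confirms separation
```

Here the inequality $a^T x \le \beta$ for all $x \in C$ follows directly from condition (2.1) applied to the projection.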

2.1.2 Basics of Polyhedra

Definition 2.2 If $S \subseteq \mathbb{R}^n$ is the intersection of a finite number of half-spaces, then $S$ is called a polyhedron, i.e.,
$$S = \{x \in \mathbb{R}^n \mid a_i^T x \le b_i,\ i = 1, \dots, m\},$$
where $a_i \in \mathbb{R}^n$, $a_i \neq 0$, $b_i \in \mathbb{R}$.

It is easy to see that polyhedra are closed convex sets. Let $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$; the following sets are all polyhedra:
$$\begin{aligned} &S = \{x \in \mathbb{R}^n \mid Ax \le b\},\\ &S = \{x \in \mathbb{R}^n \mid Ax = b,\ x \ge 0\},\\ &S = \{x \in \mathbb{R}^n \mid Ax \le b,\ x \ge 0\}. \end{aligned}$$

A polyhedron is the solution set of a finite number of linear equations and inequalities.

A half-space is the region consisting of a hyperplane together with all points on one specific side of it.

Definition 2.3 Let $S \subseteq \mathbb{R}^n$ be a non-empty convex set and $x \in S$. If for any $\lambda \in (0,1)$ and $x_1, x_2 \in S$, $x = \lambda x_1 + (1-\lambda) x_2$ implies $x_1 = x_2 = x$, then $x$ is called a vertex (extreme point) of $S$.

That is, if there are no two distinct points in $S$ such that $x$ lies on the line segment connecting them, then $x$ is a vertex.

Definition 2.4 Let $S \subseteq \mathbb{R}^n$ be a non-empty convex set and $d \in \mathbb{R}^n$. If for any $x \in S$ and $\lambda \ge 0$ we have $x + \lambda d \in S$, then $d$ is called a direction of $S$. If for any $\lambda_1, \lambda_2 > 0$ and directions $d_1, d_2$ of $S$, $d = \lambda_1 d_1 + \lambda_2 d_2$ implies $d_1 = \alpha d_2$ for some $\alpha > 0$, then $d$ is called an extreme direction of $S$. (In other words, if a direction $d$ of $S$ cannot be expressed as a positive linear combination of two distinct directions of the set, then $d$ is an extreme direction of $S$.)

Intuitively, an extreme direction can be viewed as a "farthest extension direction" at a boundary point $x_0$ of the convex set $S$: you can extend infinitely far along it while still remaining inside $S$.
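For the polyhedron $S = \{x \mid Ax = b,\ x \ge 0\}$ used throughout this chapter, the directions of $S$ are exactly the vectors $d$ with $Ad = 0$ and $d \ge 0$ (a standard fact). A small sketch checking this condition; the matrix $A$ below is an illustrative assumption:

```python
import numpy as np

# For S = {x : Ax = b, x >= 0}, d is a direction of S exactly when
# Ad = 0 and d >= 0 (standard fact; the small A below is an assumption).
def is_direction(A, d, tol=1e-9):
    return bool(np.all(np.abs(A @ d) <= tol) and np.all(d >= -tol))

A = np.array([[1.0, -1.0, 0.0],
              [0.0,  1.0, -1.0]])
print(is_direction(A, np.array([1.0, 1.0, 1.0])))   # True: moves along x1 = x2 = x3
print(is_direction(A, np.array([1.0, 0.0, 0.0])))   # False: violates Ad = 0
```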

Theorem 2.3 Let $S = \{x \in \mathbb{R}^n \mid Ax = b,\ x \ge 0\}$, where $A$ has full row rank. Then $x \in S$ is a vertex of $S$ if and only if $x$ can be expressed as $x = \begin{pmatrix} B^{-1}b \\ 0 \end{pmatrix}$, where $A = (B, N)$, $B$ is invertible, and $B^{-1}b \ge 0$.

Theorem 2.4 Let $S = \{x \in \mathbb{R}^n \mid Ax = b,\ x \ge 0\}$ be non-empty, where $A$ has full row rank. Then $S$ has at least one vertex.

2.2 Linear Programming and the Primal Simplex Method

Consider the standard form of linear programming (LP):
$$(\text{LP}) \quad \min\{c^T x \mid Ax = b,\ x \ge 0\}$$
Here $A \in \mathbb{R}^{m \times n}$ and $A$ has full row rank. Let $S = \{x \in \mathbb{R}^n \mid Ax = b,\ x \ge 0\}$; $S$ is called the feasible region of (LP).

(Recall that when the coefficient matrix of a linear system is square and of full rank, the system has a unique solution.)

Theorem 2.7 (Fundamental Theorem of Linear Programming) Let the feasible region $S$ of the linear program be non-empty, with vertices $x_1, \dots, x_k$ and extreme directions $d_1, \dots, d_l$. Then (LP) has a finite optimal solution if and only if $c^T d_i \ge 0$ for $i = 1, \dots, l$. Moreover, if (LP) has a finite optimal solution, then (LP) attains its optimum at one of the vertices $x_j$.

Definition 2.5 Consider the feasible region $S$ of the linear program (LP), and let $A = (B, N)$, where $B$ is invertible. Then $B$ is called a basis of $A$, and $x = \begin{pmatrix} x_B \\ x_N \end{pmatrix} = \begin{pmatrix} B^{-1}b \\ 0 \end{pmatrix}$ is called a basic solution of $Ax = b$; the components of $x_B$ are called basic variables and those of $x_N$ non-basic variables. If $B^{-1}b \ge 0$, then $B$ is called a primal feasible basis, and $x = \begin{pmatrix} B^{-1}b \\ 0 \end{pmatrix}$ is called a basic feasible solution.

Given an $m \times n$ matrix $A$, assume that $A$ has full row rank (that is, every row of the system is meaningful) and that $m < n$. Then $A$ has $m$ basis vectors. Rearranging $A$ by columns, write $A = (B, N)$, where $B$ consists of the $m$ basis vectors and $N$ of the remaining $n - m$ non-basic vectors. Correspondingly, $x$ can be written in terms of basic and non-basic variables as $x = \begin{pmatrix} x_B \\ x_N \end{pmatrix}$, the objective function as $c_B^T x_B + c_N^T x_N$, and the equation $Ax = b$ as $Bx_B + Nx_N = b$.

The solution obtained by setting all non-basic variables $x_N$ to 0 is called a basic solution; there are at most $C_n^m$ basic solutions. If in addition $x \ge 0$, it is called a basic feasible solution.
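Theorem 2.3 and the $C_n^m$ bound suggest a brute-force way to list all basic feasible solutions (vertices): try every choice of $m$ columns as a basis $B$ and keep those with $B$ invertible and $B^{-1}b \ge 0$. A sketch; the small $A$ and $b$ below are illustrative assumptions, not from the text:

```python
import numpy as np
from itertools import combinations

# Enumerate basic (feasible) solutions of {x : Ax = b, x >= 0} by trying
# every choice of m columns of A as a basis B (assumed small instance).
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
b = np.array([2.0, 1.0])
m, n = A.shape

vertices = []
for cols in combinations(range(n), m):      # at most C(n, m) candidate bases
    B = A[:, cols]
    if abs(np.linalg.det(B)) < 1e-12:       # B must be invertible
        continue
    xB = np.linalg.solve(B, b)              # x_B = B^{-1} b
    x = np.zeros(n)
    x[list(cols)] = xB
    if (xB >= -1e-12).all():                # B^{-1} b >= 0: basic feasible solution
        vertices.append(x)

print(len(vertices))                        # number of vertices found
for v in vertices:
    print(np.round(v, 6))
```

This enumeration is exponential in general and is only meant to illustrate the correspondence between bases and vertices; the simplex method below avoids it.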

Assumption 2.1 (Non-degeneracy assumption) For every basic feasible solution $x = \begin{pmatrix} B^{-1}b \\ 0 \end{pmatrix}$, we have $B^{-1}b > 0$.

Property 2.1 The vertices of the feasible region $S$ of a linear program are in one-to-one correspondence with its basic feasible solutions. If a linear program has a finite optimal solution, then some basic feasible solution must be optimal.

The theorem and property above show that we only need to search among the vertices of the feasible region for an optimal solution of a linear program.

The Simplex Method

A simplex is a polyhedron with $n + 1$ vertices in $n$-dimensional space.

Algorithm 2.1 (Primal simplex algorithm)

Step 0: Compute an initial feasible basis $B$ and the corresponding basic feasible solution $x = \begin{pmatrix} B^{-1}b \\ 0 \end{pmatrix}$.

Step 1: If $r_N = c_N^T - c_B^T B^{-1} N \ge 0$, stop; the current basic feasible solution $x$ is optimal. Otherwise, go to Step 2.

Step 2: Select $j$ satisfying $c_j - c_B^T B^{-1} a_j < 0$. If $\overline{a}_j = B^{-1} a_j \le 0$, stop; the problem is unbounded. Otherwise, go to Step 3.

Step 3: Compute $\lambda$ from the formula below, set $x := x + \lambda d_j$, where $d_j = \begin{pmatrix} -B^{-1} a_j \\ e_j \end{pmatrix}$, and go to Step 1.
$$\lambda = \min\left\{ \frac{\overline{b}_i}{\overline{a}_{ij}} \,\middle|\, \overline{a}_{ij} > 0 \right\} = \frac{\overline{b}_r}{\overline{a}_{rj}} \ge 0$$

The underlying idea: first determine a basis and use it to obtain a basic feasible solution; this is the work of Step 0. We then need to decide whether this solution is optimal; this is the optimality test of Step 1. If the test shows the solution we found is optimal, we are lucky and done. If not, it does not matter: Steps 2 and 3 change the basic variables, producing a new basic feasible solution, and the optimality test is repeated until an optimal solution is found.
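The steps above can be sketched as a small revised-simplex routine. A minimal illustration assuming a known starting feasible basis and non-degeneracy; names follow the text's notation, and the test instance at the bottom is an assumption, not from the text:

```python
import numpy as np

# Minimal revised primal simplex sketch following Algorithm 2.1
# (assumes full row rank A, a feasible starting basis, non-degeneracy).
def primal_simplex(c, A, b, basis):
    """Minimize c^T x s.t. Ax = b, x >= 0, from a feasible basis (m column indices)."""
    m, n = A.shape
    basis = list(basis)
    while True:
        B = A[:, basis]
        b_bar = np.linalg.solve(B, b)              # B^{-1} b
        y = np.linalg.solve(B.T, c[basis])         # simplex multipliers
        r = c - A.T @ y                            # reduced costs r_j = c_j - c_B^T B^{-1} a_j
        entering = [j for j in range(n) if j not in basis and r[j] < -1e-9]
        if not entering:                           # Step 1: r_N >= 0 -> optimal
            x = np.zeros(n)
            x[basis] = b_bar
            return x, c @ x
        j = entering[0]                            # Step 2: pick an entering column
        a_bar = np.linalg.solve(B, A[:, j])        # B^{-1} a_j
        if (a_bar <= 1e-12).all():
            raise ValueError("problem is unbounded")
        ratios = [(b_bar[i] / a_bar[i], i) for i in range(m) if a_bar[i] > 1e-12]
        _, r_idx = min(ratios)                     # Step 3: ratio test gives leaving row
        basis[r_idx] = j                           # basis change

# Assumed toy instance: min -x1 - 2x2 s.t. x1 + x3 = 1, x2 + x4 = 1, x >= 0.
c = np.array([-1.0, -2.0, 0.0, 0.0])
A = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
b = np.array([1.0, 1.0])
x, val = primal_simplex(c, A, b, basis=[2, 3])     # slack columns form the initial basis
print(x, val)                                      # expect x = (1, 1, 0, 0), value -3
```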

Geometrically, the simplex method moves from one vertex of the feasible polyhedron to an adjacent, better vertex. When no better vertex can be found, the optimal solution has been reached.

Theorem 2.8 Under the non-degeneracy assumption, the primal simplex algorithm terminates in finitely many steps, either finding an optimal solution of (LP) or determining that (LP) is unbounded.

The algorithm above can be expressed in tableau form. Assuming $b \ge 0$, let the initial feasible basis be $B$ with $A = (B, N)$; one iteration of the simplex method can then be performed on the following tableau:

(Image: the simplex tableau)

"rhs" is an abbreviation of "right-hand side" (the right-hand constants).

Example 2.1 Use the simplex algorithm to solve the following linear program:
$$\begin{aligned} \min\quad & -7x_1 - 2x_2 \\ \text{s.t.}\quad & -x_1 + 2x_2 + x_3 = 4,\\ & 5x_1 + x_2 + x_4 = 20,\\ & 2x_1 + 2x_2 - x_5 = 7,\\ & x \ge 0 \end{aligned}$$
The initial simplex table is as follows:

(Image: the initial simplex tableau)

Iteration 1. Select the initial basis $B = (a_1, a_3, a_4)$; the simplex tableau with $x_B = (x_1, x_3, x_4)^T$ as basic variables is shown in the table below. The corresponding initial basic feasible solution is $x_B = (x_1, x_3, x_4)^T = (3\tfrac{1}{2}, 7\tfrac{1}{2}, 2\tfrac{1}{2})^T$, $x_N = (x_2, x_5)^T = (0, 0)^T$.

(Image: simplex tableau for iteration 1)

Iteration 2. Because $-\tfrac{7}{2} < 0$, select $x_5$ as the entering variable and compute $\lambda = \frac{2\frac{1}{2}}{2\frac{1}{2}} = 1$, so $x_4$ is the leaving variable and the new basic variables are $x_B = (x_1, x_3, x_5)$. The new simplex tableau is Table 2.3:

(Image: Table 2.3)

Iteration 3. Select $x_2$ as the entering variable and compute $\min\left\{ \frac{8}{11/5}, \frac{4}{1/5} \right\}$, so $x_3$ is the leaving variable, giving the new simplex tableau, Table 2.4. Since $r_N = (\frac{3}{11}, \frac{6}{11}) \ge 0$, the current basic feasible solution $x = (\frac{36}{11}, \frac{40}{11}, 0, 0, \frac{75}{11})^T$ is an optimal solution of the linear program, with optimal value $-30\tfrac{2}{11}$.

(Image: Table 2.4)
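As a cross-check, Example 2.1 can be handed to an off-the-shelf LP solver. A sketch using `scipy.optimize.linprog`, which is external to the text's method:

```python
import numpy as np
from scipy.optimize import linprog

# Example 2.1 in standard form: min c^T x s.t. Ax = b, x >= 0.
c = [-7, -2, 0, 0, 0]
A_eq = [[-1, 2, 1, 0,  0],
        [ 5, 1, 0, 1,  0],
        [ 2, 2, 0, 0, -1]]
b_eq = [4, 20, 7]
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 5)
print(res.x)    # expect (36/11, 40/11, 0, 0, 75/11)
print(res.fun)  # expect -332/11 = -30 2/11
```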

In the simplex algorithm above, we required the problem to have an initial basic feasible solution. Let us now discuss how to find an initial feasible solution in the general case. Consider the standard form of linear programming:
$$(\text{LP}) \quad \min\{c^T x \mid Ax = b,\ x \ge 0\}$$
Here $b \ge 0$. Introducing artificial variables $y \in \mathbb{R}^m_+$, consider the following linear program:
$$(\text{LP}^a) \quad \min\{e^T y \mid Ax + y = b,\ x \ge 0,\ y \ge 0\}$$
The following conclusions clearly hold:

  • The linear program ($\text{LP}^a$) is feasible, with feasible solution $(x, y) = (0, b)$, and its optimal value is greater than or equal to 0.
  • The linear program (LP) has a feasible solution if and only if the optimal value of ($\text{LP}^a$) is 0. In particular, if in an optimal solution of ($\text{LP}^a$) the artificial variables $y_i$ ($i = 1, \dots, m$) are all non-basic variables, then that optimal solution is also a basic feasible solution of (LP).

Finding an initial basic feasible solution by the method above is called Phase I; finding the optimal solution of the original problem starting from this initial feasible solution is called Phase II. The entire solution process is called the two-phase method.

Example 2.1 (continued) To find an initial feasible solution using Phase I, the following problem can be constructed:
$$\begin{aligned} \min\quad & y_1 + y_2 + y_3,\\ \text{s.t.}\quad & -x_1 + 2x_2 + x_3 + y_1 = 4,\\ & 5x_1 + x_2 + x_4 + y_2 = 20,\\ & 2x_1 + 2x_2 - x_5 + y_3 = 7,\\ & x \ge 0,\ y \ge 0 \end{aligned}$$
Note that $x_3$ and $x_4$ can be regarded as slack variables, so the artificial variables $y_1$ and $y_2$ are redundant; only $y_3$ needs to be introduced as an artificial variable. The Phase I linear program therefore simplifies to:
$$\begin{aligned} \min\quad & y_3,\\ \text{s.t.}\quad & -x_1 + 2x_2 + x_3 = 4,\\ & 5x_1 + x_2 + x_4 = 20,\\ & 2x_1 + 2x_2 - x_5 + y_3 = 7,\\ & x \ge 0,\ y_3 \ge 0 \end{aligned}$$
Clearly the initial basic variables are $x_B = (x_3, x_4, y_3)^T$; the corresponding simplex tableau is shown in Table 2.5. Select $x_1$ as the entering variable and compute $\lambda = \min(\frac{20}{5}, \frac{7}{2})$, so $y_3$ leaves the basis and the new basic variables are $x_B = (x_1, x_3, x_4)^T$; the corresponding simplex tableau is shown in Table 2.6. This gives the initial basic feasible solution $x = (\frac{7}{2}, 0, \frac{15}{2}, \frac{5}{2}, 0)^T$. Deleting the corresponding columns of simplex tableau 2.6 yields the initial simplex tableau of Phase II.

(Image: Tables 2.5 and 2.6)
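The simplified Phase I problem can also be checked numerically. A sketch using `scipy.optimize.linprog`, with $y_3$ encoded as an assumed sixth variable; since the original problem is feasible, the Phase I optimal value should be 0:

```python
import numpy as np
from scipy.optimize import linprog

# Phase I LP for Example 2.1: min y3 subject to the three equalities,
# with variable order (x1, ..., x5, y3).
c = [0, 0, 0, 0, 0, 1]
A_eq = [[-1, 2, 1, 0,  0, 0],
        [ 5, 1, 0, 1,  0, 0],
        [ 2, 2, 0, 0, -1, 1]]
b_eq = [4, 20, 7]
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 6)
print(res.fun)    # optimal value 0 -> the original LP is feasible
print(res.x[:5])  # a feasible point of the original problem
```

Note that the solver may return any optimal point of the Phase I problem, so only the optimal value (and $y_3 = 0$) is checked, not the specific basic feasible solution found in the tableau.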

2.3 Duality and the Dual Simplex Method

Consider the following linear programming problem:
$$(\text{P}) \quad \max\{c^T x \mid Ax \le b,\ x \in \mathbb{R}^n_+\}$$
Here $A \in \mathbb{R}^{m \times n}$ and $A$ has full row rank. The dual problem of (P) is defined as:
$$(\text{D}) \quad \min\{b^T u \mid A^T u \ge c,\ u \in \mathbb{R}^m_+\}$$
Here $\mathbb{R}^m_+$ denotes the set of $m$-dimensional nonnegative vectors. It is easy to verify that the dual problem of the standard form of linear programming
$$(\text{LP}) \quad \min\{c^T x \mid Ax = b,\ x \in \mathbb{R}^n_+\}$$
is:
$$(\text{LD}) \quad \max\{b^T u \mid A^T u \le c,\ u \in \mathbb{R}^m\}$$
(Since the constraints of (LP) are equalities, the dual variable $u$ is free rather than sign-constrained.)
Theorem 2.9 (Weak duality) Let $x$ be a feasible solution of (LP) and $u$ a feasible solution of (LD). Then $c^T x \ge b^T u$.

Corollary 2.1 If $v(\text{LP}) = -\infty$, then (LD) is infeasible; conversely, if $v(\text{LD}) = +\infty$, then (LP) is infeasible. In other words, when the primal linear program (LP) is unbounded, the dual problem (LD) must be infeasible.

Corollary 2.2 (Complementary slackness) Let $x$ and $u$ be feasible solutions of (LP) and (LD) respectively, and let $s = c - A^T u$. Then $x$ and $u$ are optimal solutions of (LP) and (LD) respectively if and only if $s_i x_i = 0$, $i = 1, \dots, n$.

Theorem 2.10 (Strong duality) Suppose (LP) or (LD) is feasible with finite optimal value. Then $v(\text{LP}) = v(\text{LD})$.

That is, if the primal problem (LP) and the dual problem (LD) each have at least one feasible solution and both have finite optimal solutions, then their optimal objective function values are equal.
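Strong duality can be verified numerically on Example 2.1. A sketch using `scipy.optimize.linprog`; the dual variable $u$ is treated as free, matching the equality-constrained primal:

```python
import numpy as np
from scipy.optimize import linprog

# Primal (LP) and dual (LD) of Example 2.1; check v(LP) = v(LD).
A = np.array([[-1, 2, 1, 0,  0],
              [ 5, 1, 0, 1,  0],
              [ 2, 2, 0, 0, -1]], dtype=float)
b = np.array([4, 20, 7], dtype=float)
c = np.array([-7, -2, 0, 0, 0], dtype=float)

primal = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 5)      # (LP)
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(None, None)] * 3)  # (LD), u free
v_lp, v_ld = primal.fun, -dual.fun                               # linprog minimizes
print(v_lp, v_ld)   # both should equal -332/11
```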

Corollary 2.3 For a linear program (LP) and its dual problem (LD), exactly one of the following four situations occurs:

  1. Both (LP) and (LD) have finite optimal solutions, and $v(\text{LP}) = v(\text{LD})$;
  2. $v(\text{LP}) = -\infty$ and (LD) is infeasible;
  3. $v(\text{LD}) = +\infty$ and (LP) is infeasible;
  4. Both (LP) and (LD) are infeasible.

Corollary 2.4 (Farkas' lemma) Let $A \in \mathbb{R}^{m \times n}$ and $c \in \mathbb{R}^n$. Then exactly one of the following two systems has a solution:

  1. $Ax \le 0$, $c^T x > 0$;
  2. $A^T u = c$, $u \ge 0$.

Property 2.2 Let $A = (B, N)$, where $B$ is a dual feasible basis of (LP), i.e., $r_N = c_N^T - c_B^T B^{-1} N \ge 0$. Let $\mathcal{B}$ and $\mathcal{N}$ denote the column index sets of $B$ and $N$ respectively, and write $\overline{a}_j = B^{-1} a_j$ for $j \in \mathcal{N}$ and $\overline{b} = B^{-1} b$. Suppose $\overline{b}_s < 0$.

  • If $\overline{a}_{sj} \ge 0$ for all $j \in \mathcal{N}$, then (LP) is infeasible.
  • If there exists $j \in \mathcal{N}$ with $\overline{a}_{sj} < 0$, then $B$ has an adjacent dual feasible basis $\overline{B}$ with column index set $\overline{\mathcal{B}} = \mathcal{B} \cup \{t\} \setminus \{s\}$, where

$$t = \arg\min_{j \in \mathcal{N}} \left\{ -\frac{r_j}{\overline{a}_{sj}} \,\middle|\, \overline{a}_{sj} < 0 \right\} \tag{2.14}$$

Dual simplex algorithm

  • Step 0: Compute an initial dual feasible basis $B$, with $A = (B, N)$.
  • Step 1: If $\overline{b} = B^{-1} b \ge 0$, then $x^T = (x_B^T, 0) = (\overline{b}^T, 0) \ge 0$ is an optimal solution of the original problem; otherwise, go to Step 2.
  • Step 2: Select $s$ satisfying $\overline{b}_s < 0$. If $\overline{a}_{sj} \ge 0$ for all $j \in \mathcal{N}$, then the original problem is infeasible; otherwise compute $t$ by (2.14), set $\mathcal{B} := \mathcal{B} \cup \{t\} \setminus \{s\}$, and go to Step 1.
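The steps above can be sketched in code. A minimal illustration assuming a known dual feasible starting basis and non-degeneracy; it is exercised on Example 2.1 with the dual feasible basis $B = (a_2, a_3, a_5)$ used in the text:

```python
import numpy as np

# Minimal dual simplex sketch following the steps above (assumes a known
# dual feasible starting basis with r_N >= 0; not production code).
def dual_simplex(c, A, b, basis):
    """Minimize c^T x s.t. Ax = b, x >= 0, from a dual feasible basis."""
    m, n = A.shape
    basis = list(basis)
    while True:
        B = A[:, basis]
        b_bar = np.linalg.solve(B, b)                 # B^{-1} b
        if (b_bar >= -1e-9).all():                    # Step 1: primal feasible -> optimal
            x = np.zeros(n)
            x[basis] = b_bar
            return x, c @ x
        s = int(np.argmin(b_bar))                     # Step 2: a row with b_bar_s < 0
        y = np.linalg.solve(B.T, c[basis])
        r = c - A.T @ y                               # reduced costs (>= 0 by dual feasibility)
        a_bar_s = np.linalg.inv(B)[s] @ A             # row s of B^{-1} A
        cand = [j for j in range(n) if j not in basis and a_bar_s[j] < -1e-9]
        if not cand:
            raise ValueError("primal problem is infeasible")
        t = min(cand, key=lambda j: -r[j] / a_bar_s[j])  # ratio test (2.14)
        basis[s] = t                                  # t enters, the row-s variable leaves

# Example 2.1, starting from the dual feasible basis B = (a2, a3, a5).
c = np.array([-7.0, -2.0, 0.0, 0.0, 0.0])
A = np.array([[-1.0, 2.0, 1.0, 0.0,  0.0],
              [ 5.0, 1.0, 0.0, 1.0,  0.0],
              [ 2.0, 2.0, 0.0, 0.0, -1.0]])
b = np.array([4.0, 20.0, 7.0])
x, val = dual_simplex(c, A, b, basis=[1, 2, 4])       # 0-based columns of (a2, a3, a5)
print(x, val)                                         # expect (36/11, 40/11, 0, 0, 75/11)
```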

Example 2.1 (continued) The linear program can be written as
$$\begin{aligned} \min\quad & -7x_1 - 2x_2 \\ \text{s.t.}\quad & -x_1 + 2x_2 + x_3 = 4,\\ & 5x_1 + x_2 + x_4 = 20,\\ & 2x_1 + 2x_2 - x_5 = 7,\\ & x \ge 0 \end{aligned}$$
Solve using the dual simplex method:

Iteration 1. Select the initial basis $B = (a_2, a_3, a_5)$; the corresponding simplex tableau is Table 2.7. Because $r_N = c_N^T - c_B^T B^{-1} N = (3, 2) \ge 0$, $B = (a_2, a_3, a_5)$ is a dual feasible basis. However, $x_B = \overline{b} = (-36, 20, 33)^T$, so $B$ is not a primal feasible basis.

(Image: Table 2.7)

Iteration 2. Select $\overline{b}_3 = -36 < 0$ and compute $\lambda = \min\left(\frac{3}{11}, \frac{3}{2}\right) = \frac{3}{11}$, so $x_1$ enters the basis and $x_3$ leaves. The new dual feasible basis is $B = (a_1, a_2, a_5)$, with the corresponding simplex tableau below. At this point the dual feasible solution $x = (\frac{36}{11}, \frac{40}{11}, 0, 0, \frac{75}{11})$ is also primal feasible, and hence a primal optimal solution.

References

  1. Sun Xiaoling, Li Rui. Integer Programming. Beijing: Science Press, 2010.
  2. Convex set
  3. Operations Research S01E02: Simplex Method
  4. A simple understanding of the simplex algorithm for linear programming

Origin blog.csdn.net/weixin_47692652/article/details/132091500