Operations Research - Du Gang (Tianjin University) (to be continued)

  • Table of contents

    1. Introduction

    1. Operations research and its origin and development

    2. Disciplinary Status of Operations Research

    3. Content System of Operations Research

    4. Applications of operations research

    2.1 Linear programming - model and graphical method

    1. Linear programming problems and their mathematical models

    2. Graphical method: a method to solve linear programming by drawing

    3. Some properties of linear programming solutions obtained by the graphical method

    2.2 Linear programming - simplex method

    1. Simplex method

    2. Preliminary knowledge

    2.1 The standard form of linear programming

    2.2 Converting non-standard forms to standard form

    3. Basic concepts

    3.1 Feasible solution and optimal solution

    3.2 Basic matrix and basic variables

    3.3 Basic solution and basic feasible solution

    4. Fundamental Theorem

    5. Steps of the simplex method


    1. Introduction

  • 1. Operations research and its origin and development

    • 1.1 Operations Research (OR) is a discipline that provides scientific basis for management decision-making with quantitative methods
      • 1.1.1 Management decision-making is the source and application object of operations research; quantitative analysis is the technical feature of operations research
      • 1.1.2 Research on management issues based on operations research forms the school of management science; the method of management science is mainly the method of operations research. Therefore, operations research can also be called management science (Management Science, MS)
  • 2. Disciplinary Status of Operations Research

    • 2.1 Philosophy (systematic materialist dialectics) → basic science (mathematics, systems science) → technical science (operations research) → engineering technology (management decision-making, engineering optimization)
  • 3. Content System of Operations Research

    • 3.1 Operations research is a multi-branch discipline. Its main branches include: mathematical programming (linear/nonlinear), dynamic programming, graph and network methods, decision analysis, inventory theory, queuing theory, game theory, stochastic simulation, etc.
    • 3.2 If operations research and its related content are compared to a big tree, the trunk of the tree is "optimization"
    • 3.3 The root system of the tree is its basic sciences; the branches are optimization pursued from various angles and in various directions; the dense foliage is its rich content; and the fruits are the results of applying operations research in various fields
  • 4. Applications of operations research

    • 4.1 The general procedure of application: clarify the problem → select the model → determine the parameters → compute the solution → analyze the results (then feed back to earlier stages for appropriate corrections)
    • 4.2 Correspondence between methods and typical applications:

      | Operations research method | Typical application |
      | --- | --- |
      | Linear programming | Production structure optimization |
      | Nonlinear programming | Portfolio optimization |
      | 0-1 programming | Site selection |
      | Dynamic programming | Resource allocation |
      | Network analysis | Engineering plan optimization |
      | Queuing theory | Service system optimization |
      | Inventory theory | Order/inventory optimization |
      | Game/strategy analysis | Opportunity selection |
  • 2.1 Linear programming - model and graphical method

  • 1. Linear programming problems and their mathematical models

    • 1.1 In production management and business activities, it is often necessary to solve a problem: how to rationally use limited resources to obtain maximum benefits
    • 1.2 Three elements of linear programming model
      • 1.2.1 Decision variables: the quantities to be decided, i.e., the unknowns to be determined
      • 1.2.2 Objective function: the quantity to be optimized, i.e., the goal to be achieved, expressed in terms of the decision variables
      • 1.2.3 Constraints: restrictions on achieving the optimization goal, expressed as equations or inequalities in the decision variables
    • 1.3 A basic feature of the linear programming model: both the objective and the constraints are linear expressions of variables. If there are nonlinear expressions (such as exponential, logarithmic) in the model, it does not belong to linear programming
    • 1.4 General model: Max z=CX, s.t. AX\leqslant b, X\geqslant 0
      • 1.4.1 Among them, X is called the decision variable vector, C is called the price coefficient vector, A is called the technology coefficient matrix, and b is called the resource restriction vector 
      • 1.4.2 Each column of A records the unit consumption of resources by one activity. Different enterprises consume resources differently per unit of output, reflecting differences in their technical levels, hence the name technology coefficient matrix
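As a quick illustration of this notation, the sketch below builds a small LP instance with made-up numbers (C, A, b are hypothetical, not from the lecture) and checks a candidate decision vector against AX ≤ b, X ≥ 0:

```python
# Hypothetical LP instance for the model Max z = CX, s.t. AX <= b, X >= 0.
C = [3.0, 5.0]             # price coefficient vector
A = [[9.0, 4.0],           # technology coefficient matrix: row i = resource i,
     [4.0, 5.0],           # column j = unit resource consumption of activity j
     [3.0, 10.0]]
b = [360.0, 200.0, 300.0]  # resource limit vector

def is_feasible(X):
    """Check X >= 0 and AX <= b for a candidate decision vector X."""
    if any(x < 0 for x in X):
        return False
    return all(sum(a * x for a, x in zip(row, X)) <= limit
               for row, limit in zip(A, b))

def objective(X):
    """Objective value z = CX."""
    return sum(c * x for c, x in zip(C, X))

print(is_feasible([20.0, 24.0]), objective([20.0, 24.0]))  # True 180.0
```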
  • 2. Graphical method: a method to solve linear programming by drawing

    • 2.1 Although it can only be used to solve two-dimensional (two-variable) problems, its main value is not the solving itself but the intuitive illustration of some important characteristics of linear programming solutions
    • 2.2 Graphical method steps
      • 2.2.1 Constrained graph : first make the graph of non-negative constraints ; then make the graph of resource constraints
      • 2.2.2 The graph of the objective: for the objective function z=CX, give z two different values and draw the two corresponding straight lines. From these one can see in which direction the line moves as z increases (and thus find the maximum of z over the feasible region; at that point the line lies on the boundary of the feasible region)
      • 2.2.3 Find the optimal solution: translate the objective line in the direction that increases z until it is about to leave the feasible region; the last point of contact with the feasible region, X^{*}, is the optimal solution
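The graphical-method steps can be mimicked in code: because the optimum sits at a corner point, a two-variable LP can be solved by enumerating the intersections of constraint boundary lines and keeping the best feasible one. A minimal sketch with hypothetical coefficients (Python 3.8+):

```python
from itertools import combinations

# Hypothetical instance: Max 3x1 + 5x2
# s.t. 9x1 + 4x2 <= 360, 4x1 + 5x2 <= 200, x1, x2 >= 0.
C = (3.0, 5.0)
# Each constraint as (a1, a2, rhs) meaning a1*x1 + a2*x2 <= rhs;
# non-negativity is written in the same form.
cons = [(9.0, 4.0, 360.0), (4.0, 5.0, 200.0),
        (-1.0, 0.0, 0.0), (0.0, -1.0, 0.0)]

def intersect(c1, c2):
    """Intersection of the boundary lines a1*x + a2*y = rhs, if it exists."""
    (a1, a2, r1), (b1, b2, r2) = c1, c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:          # parallel lines: no vertex here
        return None
    return ((r1 * b2 - r2 * a2) / det, (a1 * r2 - b1 * r1) / det)

def feasible(p):
    return all(a1 * p[0] + a2 * p[1] <= r + 1e-9 for a1, a2, r in cons)

vertices = [p for c1, c2 in combinations(cons, 2)
            if (p := intersect(c1, c2)) and feasible(p)]
best = max(vertices, key=lambda p: C[0] * p[0] + C[1] * p[1])
print(best)
```

For these numbers the enumeration finds four feasible vertices and the maximum of z is attained at one of them, consistent with the vertex property above.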
  • 3. Obtain some properties of linear programming solution by graphical method

    • 3.1 The constraint set (feasible region) of linear programming is a convex polyhedron
      • 3.1.1 A convex polyhedron is a type of convex set
      • 3.1.2 A convex set is a set in which the line segment connecting any two points of the set still lies entirely within the set
      • 3.1.3 An "extreme point" of a convex set, also called a vertex or corner point, is a point that belongs to the set but cannot be expressed as an interior point of a line segment connecting two other points of the set, e.g., a vertex of a polygon
    • 3.2 The optimal solution of linear programming (if it exists) must be obtained at the vertex of the feasible region
      • 3.2.1 Hence, as the graphical method shows, the objective z reaches its maximum only when the objective line has been translated all the way to the boundary of the feasible region
      • 3.2.2 The significance of this property: it makes the optimization work in the feasible domain rise from "infinite" to "finite", thus providing an important basis for the algorithm design of linear programming
    • 3.3 Several situations of linear programming solutions
      • 3.3.1 Unique optimal solution : the optimum is attained at exactly one vertex
      • 3.3.2 Multiple optimal solutions : if the objective line has the same slope as some constraint line and that boundary edge is where the optimum is attained, then every point on that edge is an optimal solution
      • 3.3.3 No feasible solution : the feasible domain is the intersection of all constraints, if the intersection is empty, the feasible domain is empty. At this time, there is no feasible solution
      • 3.3.4 Unlimited optimal solution (unbounded solution): the feasible domain is unbounded, and the optimization direction of the target line is an unbounded direction
      • 3.3.5 If either of the last two cases appears when solving an actual model, check whether the model was set up correctly: if the constraints have an empty intersection, look for unrealistic or contradictory constraints; for an unbounded solution, check whether some constraints are missing
  • 2.2 Linear programming - simplex method

  • 1. Simplex method

    • 1.1 This method is the main algorithm for solving linear programming. It was proposed in 1947 by George B. Dantzig, later a professor at Stanford University. Although many other algorithms have appeared since, the simplex method retains its position as the main algorithm because it is simple and practical.
  • 2. Preliminary knowledge

    • 2.1 The standard form of linear programming

      • 2.1.1 Linear programming models arising from practical problems take various forms. Before applying the simplex method, the model must be transformed into a unified form: the standard form
      • 2.1.2 Standard form : Max z=CX; s.t. AX=b, X\geqslant 0. Here the rank of A_{m\times n} is m (m<n) and b\geqslant 0
      • 2.1.3 Features of standard type : Max type, equality constraint, non-negative constraint 
    • 2.2 Converting non-standard forms to standard form

      • 2.2.1 Change from Min to Max
        • 2.2.1.1 Min z=CX → Max z^{'}=-CX
        • 2.2.1.2 Because finding the minimum point of a function is equivalent to finding the maximum point of the negative function of the function
        • 2.2.1.3 Note : after changing Min to Max, the optimal solution is unchanged, while the optimal values differ by a sign
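A toy numerical check of the Min-to-Max equivalence (the coefficients and candidate points are made up):

```python
# Check that min CX over a finite candidate set equals -max(-CX),
# i.e., the minimizer is unchanged and the optimal values differ by a sign.
C = [2.0, -1.0]
candidates = [[0.0, 0.0], [1.0, 3.0], [2.0, 1.0]]  # hypothetical feasible points

obj = lambda X: sum(c * x for c, x in zip(C, X))
min_val = min(obj(X) for X in candidates)
max_neg = max(-obj(X) for X in candidates)
print(min_val, -max_neg)  # -1.0 -1.0
```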
      • 2.2.2 Inequality Constraints Converted to Equality Constraints
        • 2.2.2.1 Take 9x_{1}+4x_{2}\leqslant 360as an example
        • 2.2.2.2 The inequality holds because there is a gap between the two sides, which can be called "slack". Adding this slack to the smaller side turns the inequality into an equation. The slack is itself a variable, denoted x_{3}, giving 9x_{1}+4x_{2}+x_{3}=360
        • 2.2.2.3 After introducing the slack variable into the equality constraint, also add it to the non-negativity constraints
        • 2.2.2.4 Relationship between the standard form and the original model: the non-slack components of the optimal solution of the standard form give the optimal solution of the original model
        • 2.2.2.5 In general, writing the slack variables as a vector X_{s}, we have s.t.\left\{\begin{matrix} AX\leqslant b\\ X\geqslant 0 \end{matrix}\right. \rightarrow s.t.\left\{\begin{matrix} AX+IX_{s}=b\\ X,X_{s}\geqslant 0 \end{matrix}\right., or s.t.\left\{\begin{matrix} AX\geqslant b\\ X\geqslant 0 \end{matrix}\right. \rightarrow s.t.\left\{\begin{matrix} AX-IX_{s}=b\\ X,X_{s}\geqslant 0 \end{matrix}\right.
      • 2.2.3 When a variable x_{k} in the model has no non-negativity requirement, it is called a free variable. Substituting x_{k}=x_{k}^{'}-x_{k}^{''} with x_{k}^{'},x_{k}^{''}\geqslant 0 reduces the model to standard form
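A minimal sketch of the slack-variable step for "≤" rows: appending an identity block to A (the numbers are hypothetical; "≥" rows would subtract surplus variables instead):

```python
def to_standard(A):
    """Append one slack column per '<=' row, producing the enlarged matrix [A | I].
    The right-hand side b is unchanged, and the slack variables get objective
    coefficient 0."""
    m = len(A)
    return [row + [1.0 if i == j else 0.0 for j in range(m)]
            for i, row in enumerate(A)]

A = [[9.0, 4.0],   # e.g. 9x1 + 4x2 <= 360 becomes 9x1 + 4x2 + x3 = 360
     [4.0, 5.0]]
A_std = to_standard(A)
print(A_std)  # [[9.0, 4.0, 1.0, 0.0], [4.0, 5.0, 0.0, 1.0]]
```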
  • 3. Basic concepts

    • 3.1 Feasible solution and optimal solution

      • 3.1.1 Feasible solution : the solution that satisfies all constraints, denoted as X
      • 3.1.2 Optimal solution : the best among the feasible solutions, denoted X^{*}; for any feasible solution X we have CX\leqslant CX^{*}
      • 3.1.3 Intuitively, a feasible solution is a point inside the feasible region, while an optimal solution is a corner point on its boundary
    • 3.2 Basic matrix and basic variables

      • 3.2.1 Basis matrix (basis for short): an invertible submatrix of order m (the number of rows) of the coefficient matrix A, denoted B. The remaining columns form the non-basis matrix, denoted N
      • 3.2.2 Basis vector : each column of the basis matrix B; each column of the non-basis matrix N is called a non-basis vector
      • 3.2.3 Basic variables : the decision variables corresponding to the basis vectors; the vector they form is denoted X_{B}. The remaining (non-basic) variables form the vector X_{N}
      • 3.2.4 In general, an m\times n matrix A has at most C_{n}^{m} bases
    • 3.3 Basic solution and basic feasible solution

      • 3.3.1 Once a basis B in A is fixed (for convenience, let B consist of the first m columns of A), we can write A=(B\ N) and, correspondingly, X=(X_{B}\ X_{N})^{T}
        • The constraint AX=b can then be written as (B\ N) \begin{pmatrix} X_{B}\\ X_{N} \end{pmatrix} =b, that is, BX_{B}+NX_{N}=b
        • Solving for the basic variables gives X_{B}=B^{-1}b-B^{-1}NX_{N}; setting X_{N}=0 yields X_{B}=B^{-1}b, i.e., X=(B^{-1}b\ 0)^{T}
        • This solution X=(B^{-1}b\ 0)^{T} of AX=b is called a basic solution of the linear program. Evidently, each basic solution is determined by a basis
      • 3.3.2 Note : a basic solution satisfies only the resource constraints AX=b; it is not required to be non-negative, so it may be infeasible
      • 3.3.3 A non-negative basic solution is called a basic feasible solution
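These definitions can be checked by brute force: enumerate every m-column subset of a small standard-form A, test invertibility, and classify the resulting basic solution by non-negativity. A sketch with hypothetical numbers (a 2x2 basis matrix is solved by Cramer's rule):

```python
from itertools import combinations

# All bases of a hypothetical standard-form constraint matrix A (m = 2, n = 4)
# and the basic solution X = (B^{-1}b, 0) each one determines.
A = [[9.0, 4.0, 1.0, 0.0],
     [4.0, 5.0, 0.0, 1.0]]
b = [360.0, 200.0]
m, n = len(A), len(A[0])

results = []
for cols in combinations(range(n), m):         # at most C(n, m) candidate bases
    B = [[A[i][j] for j in cols] for i in range(m)]
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    if abs(det) < 1e-12:                       # singular submatrix: not a basis
        continue
    # Solve B xB = b by Cramer's rule (fine for a 2x2 basis matrix)
    xB = [(b[0] * B[1][1] - b[1] * B[0][1]) / det,
          (B[0][0] * b[1] - B[1][0] * b[0]) / det]
    x = [0.0] * n                              # non-basic variables are set to 0
    for j, v in zip(cols, xB):
        x[j] = v
    results.append((cols, x, min(xB) >= 0))

for cols, x, feas in results:
    print(cols, x, "basic feasible" if feas else "basic but infeasible")
```

For this instance all six column pairs are invertible, and four of the six basic solutions turn out to be feasible, matching the one-to-one correspondence between basic feasible solutions and vertices stated below.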
  • 4. Fundamental Theorem

    • 4.1 The feasible region of linear programming is a convex polyhedron
    • 4.2 There is a one-to-one correspondence between the vertices of the feasible region of linear programming and the basic feasible solutions
    • 4.3 The optimal solution of linear programming (if it exists) must be attained at a vertex of the feasible region . If an entire boundary edge is optimal, then since that edge has two vertices, those two vertices are also optimal solutions.
  • 5. Steps of the simplex method

    • 5.1 The simplex method is an iterative algorithm. Its idea is to search among the vertices of the feasible region, i.e., the basic feasible solutions. Since the number of vertices is finite, the algorithm terminates after finitely many steps
    • 5.2 Steps
      • 5.2.1 Determine the initial basic feasible solution
      • 5.2.2 Check whether it is optimal . If yes, stop the algorithm; if not, proceed to the next step
      • 5.2.3 Find a better basic feasible solution . Then, go back to the second step to check whether the feasible solution is optimal
    • 5.3 Determine the initial basic feasible solution : since a basic feasible solution is determined by a feasible basis, finding the initial basic feasible solution X_{0} is equivalent to finding an initial feasible basis B_{0}
      • 5.3.1 If A contains the identity matrix I, take B_{0}=I; if A does not contain I, construct one with the artificial-variable method
      • 5.3.2 When B_{0}=I, we have X_{0}=(B_{0}^{-1}b\ 0)^{T}=(b\ 0)^{T}\geqslant 0, which is feasible
    • 5.4 Optimality test (test with target)
      • 5.4.1 Objective z=CX, with X=(X_{B}\ X_{N})^{T}
        • z=CX=(C_{B}\ C_{N})(X_{B}\ X_{N})^{T}=C_{B}X_{B}+C_{N}X_{N}
        • Since X_{B}=B^{-1}b-B^{-1}NX_{N}, we get z=C_{B}(B^{-1}b-B^{-1}NX_{N})+C_{N}X_{N}=C_{B}B^{-1}b+(C_{N}-C_{B}B^{-1}N)X_{N}
        • Let \sigma =C_{N}-C_{B}B^{-1}N. When \sigma \leqslant 0, since the first term C_{B}B^{-1}b is a constant, z attains its maximum exactly when X_{N}=0. This shows that the current solution is optimal (recall that the basic feasible solution is obtained by setting X_{N}=0)
        • This \sigma is called the test-number vector , and each of its components is called a test number
      • 5.4.2 Methods
        • 5.4.2.1 Compute the test number \sigma _{j}=c_{j}-C_{B}B^{-1}P_{j} for each x_{j}
        • 5.4.2.2 If all \sigma_{j} \leqslant 0, the current solution is optimal; otherwise it is not, and a better basic feasible solution must be sought
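A sketch of the optimality test for the easy case B_{0}=I (so B^{-1}=I and B^{-1}P_{j} is simply column j of A); the instance is hypothetical:

```python
# Test numbers sigma_j = c_j - C_B B^{-1} P_j for the initial basis B0 = I
# (the slack columns) of a hypothetical standard-form LP:
# Max 3x1 + 5x2 with slacks x3, x4.
c = [3.0, 5.0, 0.0, 0.0]
A = [[9.0, 4.0, 1.0, 0.0],
     [4.0, 5.0, 0.0, 1.0]]
basis = [2, 3]                      # B0 = I, hence B^{-1} = I as well
C_B = [c[j] for j in basis]

# With B^{-1} = I, B^{-1} P_j is just column j of A.
sigma = [c[j] - sum(cb * A[i][j] for i, cb in enumerate(C_B))
         for j in range(len(c))]
print(sigma)  # [3.0, 5.0, 0.0, 0.0] -- positive entries: not optimal yet
```

The test numbers of the basic columns come out as 0 by construction; the positive entries for x1 and x2 signal that a basis change can still improve z.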
    • 5.5 Finding a better basic feasible solution
      • 5.5.1 Since basic feasible solutions correspond to bases, finding a new basic feasible solution is equivalent to changing from the current basis B_{0} to a new basis B_{1}; this step is therefore also called a basis change
      • 5.5.2 Principles of the basis change. Improvement: z_{1}>z_{0}; feasibility: B_{1}^{-1}b\geqslant 0
      • 5.5.3 Method of the basis change:


Origin blog.csdn.net/qq_44681809/article/details/113308174