Notes on divide and conquer, dynamic programming, and greedy algorithms

Divide and conquer, dynamic programming, and greedy algorithms look similar at first glance: each breaks a problem into subproblems and then solves the original problem by way of those subproblems. In practice, however, the differences between the three are substantial.

 

1. Divide and Conquer

    The divide-and-conquer method splits the original problem into n subproblems of smaller size that have the same structure as the original, solves the subproblems recursively, and then combines their results to obtain a solution to the original problem.

   Divide and conquer has three steps at each level of the recursion:

  • Divide: decompose the original problem into a series of subproblems;

  • Conquer: solve each subproblem recursively; if a subproblem is small enough, solve it directly;

  • Combine: combine the solutions of the subproblems into a solution of the original problem.

 

   Merge sort is a typical example of the divide-and-conquer method. Intuitively, it operates as follows:

  • Divide: split the sequence of n elements into two subsequences of n/2 elements each;

  • Conquer: sort the two subsequences recursively with merge sort;

  • Combine: merge the two sorted subsequences to obtain the sorted result.

 

2. Dynamic Programming

   The design of a dynamic programming algorithm can be divided into the following four steps:

  • Characterize the structure of an optimal solution

  • Recursively define the value of an optimal solution

  • Compute the value of an optimal solution bottom-up

  • Construct an optimal solution from the computed values

     Divide and conquer partitions a problem into independent subproblems, solves each subproblem recursively, and merges the subproblem solutions to obtain a solution of the original problem. Dynamic programming, by contrast, applies when the subproblems are not independent, that is, when subproblems share sub-subproblems. In that case a divide-and-conquer algorithm does much unnecessary work, repeatedly solving the common sub-subproblems. A dynamic programming algorithm solves each sub-subproblem only once and stores its result in a table, thereby avoiding recomputing the answer each time the sub-subproblem is encountered.

   An optimization problem must have two key elements for the dynamic programming method to apply: optimal substructure and overlapping subproblems.

   Optimal substructure: a problem exhibits optimal substructure if an optimal solution to the problem contains within it optimal solutions to its subproblems.

   Overlapping subproblems: the second requirement for dynamic programming to apply is that the space of subproblems be small, so that a recursive algorithm for the problem solves the same subproblems over and over rather than always generating new ones. Two subproblems are overlapping if they are really the same subproblem that merely appears as a subproblem of different problems.

    "Divide and conquer: the subproblems are independent. Dynamic programming: the subproblems overlap."

https://my.oschina.net/feistel/blog/1633592

     Introduction to Algorithms: it may seem strange that dynamic programming requires its subproblems to be both independent and overlapping. Although the two requirements may sound contradictory, they describe two different notions, not two sides of the same issue. Two subproblems of the same problem are independent if they do not share resources. Two subproblems are overlapping if they are really the same subproblem that merely appears as a subproblem of different problems.

Recursion underlies both techniques: a recursive algorithm solves a large problem in terms of one or more smaller instances of the same problem. To implement a recursive algorithm in C, we typically use a recursive function, that is, a function that calls itself. The basic features of a recursive procedure are: it calls itself (with smaller parameter values), it has a termination condition, and in the base case the result can be computed directly.

      When using a recursive procedure, we must bear in mind that the programming environment has to maintain a call stack whose size is proportional to the recursion depth. For large problems, the stack space required may limit how we can use recursion.

     Divide and conquer is a recursive model whose most essential feature is that it splits a problem into independent subproblems. If the subproblems are not independent, matters become more complicated, chiefly because even the most straightforward recursive implementation may then require an unreasonable amount of time; dynamic programming techniques avoid this defect.

     For example, a recursive implementation of the Fibonacci numbers looks like this:

    int F(int i)
    {
        if (i < 1)  return 0;
        if (i == 1) return 1;
        return F(i - 1) + F(i - 2);
    }

    Never use such a program: it is extremely inefficient, requiring exponential time. In contrast, if we compute the first N Fibonacci numbers in order and store them in an array, we can compute F in linear time (proportional to N):

      F[0] = 0; F[1] = 1;
      for (i = 2; i <= N; i++)
          F[i] = F[i - 1] + F[i - 2];

     This technique gives us a quick way to obtain a numerical solution to any recurrence relation. In the Fibonacci example we can even dispense with the array and keep only the last two values.

     From the discussion above we can conclude: we can compute all the function values in order, starting from the smallest, using previously computed values at each step to compute the current value. This technique is called bottom-up dynamic programming. As long as there is storage space for the values already computed, it applies to any recursive computation and improves the algorithm's running time from exponential to linear.

    Top-down dynamic programming is an even simpler technique. It lets us evaluate the function at a cost equal to (or less than) that of bottom-up dynamic programming, but the bookkeeping is automatic: we modify the recursive procedure so that it stores each value it computes (as its last step) and checks the stored values to avoid recomputing any of them (as its first step). This method is also sometimes called memoization.

                       Fibonacci (dynamic programming)

By storing the computed values in an array outside the recursive procedure, we explicitly avoid repeated computation. This program computes F(i) in time proportional to N.

                  int F(int i)
                  {
                          int t;
                          if (knownF[i] != unknown)
                                 return knownF[i];
                          if (i == 0) t = 0;
                          if (i == 1) t = 1;
                          if (i > 1)  t = F(i - 1) + F(i - 2);
                          return knownF[i] = t;
                  }

       Property: dynamic programming reduces the running time of a recursive function to be at most the time required to evaluate the function for all arguments less than or equal to the given argument, where the cost of each recursive call is treated as constant.

       We need not restrict ourselves to recurrences with a single integer parameter. When a function has several integer parameters, we can store the solutions of the smaller subproblems in a multi-dimensional array, one dimension per parameter. Other situations that involve no integer parameters at all may use an abstract discrete problem formulation that lets us decompose the problem into smaller subproblems.

      In top-down dynamic programming we store known values; in bottom-up dynamic programming we precompute those values. We often prefer top-down over bottom-up dynamic programming for the following reasons:

     1. Top-down dynamic programming is a natural, mechanical transformation of the recursive solution.

     2. The order in which the subproblems are computed takes care of itself.

     3. We may not need to compute solutions to all of the subproblems.

     One crucial point we cannot ignore: when the number of possible function values we might need is too large to store (top-down) or to precompute (bottom-up), dynamic programming becomes inefficient. Since top-down dynamic programming is an essential technique for developing efficient implementations of recursive algorithms, it belongs in the toolkit of anyone engaged in algorithm design and implementation.

3. Greedy Algorithms

    For many optimization problems, using dynamic programming to determine the best choices is overkill; simpler, more efficient algorithms will do. A greedy algorithm always makes the choice that looks best at the moment, hoping that a sequence of locally optimal choices yields a globally optimal solution. For many optimization problems a greedy algorithm does produce an optimal solution, but not always.

    A greedy algorithm only ever considers one choice (the greedy choice); when the greedy choice is made, one of the resulting subproblems must be empty, so that only a single nonempty subproblem remains.

    Greedy algorithms and dynamic programming have much in common. In particular, problems amenable to greedy algorithms also exhibit optimal substructure. One significant difference is that greedy algorithms use the optimal substructure in a top-down fashion: a greedy algorithm makes its first choice, the one that looks best at the time, and then solves the resulting subproblem, rather than first finding optimal solutions to subproblems and then making a choice.

        A greedy algorithm obtains an optimal solution to a problem by making a sequence of choices. At each decision point, it makes the choice that seems best at that moment. This is where greedy algorithms differ from dynamic programming. In dynamic programming a choice is made at each step as well, but the choice depends on the solutions to subproblems; hence dynamic programming problems are typically solved bottom-up, progressing from smaller subproblems to larger ones. The choice a greedy algorithm makes may depend on the choices made so far, but it does not depend on any future choices or on the solutions to subproblems. Thus a greedy algorithm usually proceeds top-down, making one greedy choice after another and reducing each problem instance to a smaller one. After a greedy algorithm makes its choice, there is usually only one nonempty subproblem left.

https://juejin.im/post/5d859087f265da03bd055832

Original: http://hxrs.iteye.com/blog/1055478


Origin blog.csdn.net/usstmiracle/article/details/104769710