Common algorithm design methods

    The most commonly used algorithm design techniques include the iterative method, the exhaustive search method, the recurrence method, the greedy method, the backtracking method, the divide-and-conquer method, and the dynamic programming method. In addition, in order to design and describe an algorithm in a more concise form, recursion is often employed, and the algorithm is described recursively.

1. Iterative method

  The iterative method is a commonly used algorithm design technique for finding approximate roots of an equation or a system of equations.
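
  As a concrete illustration, here is a minimal Python sketch (an example chosen for this text, not taken from the original) that uses Newton's iteration to approximate a square root, i.e. a root of x^2 - a = 0:

    def newton_sqrt(a, tol=1e-10, max_iter=100):
        """Approximate sqrt(a) by iterating x <- (x + a/x) / 2 (Newton's method)."""
        x = a if a > 1 else 1.0            # initial guess
        for _ in range(max_iter):
            nxt = (x + a / x) / 2          # compute the next iterate from the current one
            if abs(nxt - x) < tol:         # stop once successive iterates are close enough
                return nxt
            x = nxt
        return x

    print(newton_sqrt(2))                  # about 1.4142135623730951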

2. Exhaustive search method

  The exhaustive search method enumerates and tests, one by one in some order, all candidates that might be solutions, and takes those candidates that meet the requirements as the solutions of the problem.
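
  A minimal Python sketch (the example is chosen here, not part of the original text): the classic "hundred chickens" puzzle asks for ways to buy exactly 100 birds with 100 coins when roosters cost 5, hens cost 3, and chicks are 3 for 1 coin. Every candidate combination is enumerated and tested against the constraints:

    def hundred_chickens():
        solutions = []
        for roosters in range(0, 21):          # at most 100 // 5 = 20 roosters
            for hens in range(0, 34):          # at most 100 // 3 = 33 hens
                chicks = 100 - roosters - hens
                # test each candidate against both constraints
                if chicks % 3 == 0 and 5 * roosters + 3 * hens + chicks // 3 == 100:
                    solutions.append((roosters, hens, chicks))
        return solutions

    print(hundred_chickens())   # [(0, 25, 75), (4, 18, 78), (8, 11, 81), (12, 4, 84)]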

3. Recurrence method

  The recurrence method solves a problem by making use of a recurrence relation inherent in the problem itself.
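
  For instance, the Fibonacci numbers satisfy the recurrence F(n) = F(n-1) + F(n-2); a bottom-up Python sketch (illustrative code added here) builds the answer directly from that relation:

    def fib_recurrence(n):
        # build F(1..n) bottom-up from the recurrence F(k) = F(k-1) + F(k-2)
        if n <= 2:
            return 1
        prev, cur = 1, 1                 # F(1), F(2)
        for _ in range(3, n + 1):
            prev, cur = cur, prev + cur  # advance the recurrence by one step
        return cur

    print(fib_recurrence(10))            # 55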

4. Recursion

   Algorithms described by recursion usually have the following characteristics: to solve a problem of size N, it is decomposed into smaller problems, from whose solutions a solution to the larger problem can easily be constructed; these smaller problems can in turn be decomposed into still smaller ones by the same decomposition-and-composition scheme, and solutions to the larger problems are built up from the solutions of the smaller ones. In particular, when the size N = 1, the solution can be obtained directly.
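
   A minimal recursive sketch (illustrative Python, not from the original text): the factorial of N is reduced to a problem of size N - 1, and the case N = 1 is solved directly:

    def factorial(n):
        if n == 1:
            return 1                     # base case: a problem of size 1 is solved directly
        return n * factorial(n - 1)      # reduce to a smaller problem of the same form

    print(factorial(5))                  # 120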

5. Backtracking

    The backtracking method, also known as the trial-and-error method, first temporarily sets aside the restriction on the size of the problem, and enumerates and tests the candidate solutions of the problem one by one in a certain order.
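
    A minimal backtracking sketch (the N-queens puzzle, an example chosen here): a partial placement is extended row by row; as soon as a candidate column conflicts with the queens already placed it is rejected, and when no column works the algorithm backtracks to the previous row:

    def solve_n_queens(n):
        # count placements of n non-attacking queens via backtracking
        count = 0
        cols = []                          # cols[r] = column of the queen in row r

        def safe(row, col):
            for r, c in enumerate(cols):
                if c == col or abs(c - col) == abs(r - row):
                    return False
            return True

        def place(row):
            nonlocal count
            if row == n:
                count += 1                 # a complete placement has been found
                return
            for col in range(n):
                if safe(row, col):         # tentatively extend the partial solution
                    cols.append(col)
                    place(row + 1)
                    cols.pop()             # backtrack: undo the choice, try the next column
        place(0)
        return count

    print(solve_n_queens(8))               # 92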

6. Greedy method

  The greedy method does not pursue the optimal solution; it only hopes to obtain a reasonably satisfactory one. It can usually produce such a solution quickly, because it avoids the large amount of time needed to exhaust all possible solutions in search of the optimum. The greedy method makes the best choice it can based on the current situation, without considering all possible overall situations, and therefore it never backtracks.
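
  A minimal greedy sketch (coin change, an example chosen here): at every step the largest denomination that still fits is taken, and that choice is never reconsidered. For the canonical denominations used below the result happens to be optimal, but in general the greedy method only promises a reasonably good answer:

    def greedy_change(amount, denominations=(25, 10, 5, 1)):
        result = []
        for coin in denominations:
            while amount >= coin:       # best local choice; never reconsidered (no backtracking)
                amount -= coin
                result.append(coin)
        return result

    print(greedy_change(68))            # [25, 25, 10, 5, 1, 1, 1]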

7. Divide and conquer method

1. The basic idea of divide and conquer

The computational time required for any computer-solvable problem is related to its size N. The smaller the problem size, the easier it is to solve directly, and the less computational time is required to solve the problem.  

The design idea of the divide-and-conquer method is to split a large problem that is hard to solve directly into several smaller problems of the same form, and to conquer each of them separately.

For example, the Fibonacci sequence can be defined recursively as:

            F(n) = 1                    (n = 1 or n = 2)
            F(n) = F(n-1) + F(n-2)      (n > 2)
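
  A direct Python transcription of this definition (illustrative code added here; note that the naive recursion recomputes overlapping sub-problems, which is exactly the cost the dynamic programming method below avoids):

    def fib(n):
        if n <= 2:                       # a problem of size 1 or 2 is solved directly
            return 1
        return fib(n - 1) + fib(n - 2)   # combine the solutions of two smaller problems

    print(fib(10))                       # 55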

2. Applicable conditions of the divide-and-conquer method

  The problems that can be solved by the divide and conquer method generally have the following characteristics:

  (1) The problem can be easily solved by reducing the scale to a certain extent;

  (2) The problem can be decomposed into several smaller-scale identical problems, that is, the problem has the property of optimal substructure;

  (3) The solution of the sub-problem decomposed by the problem can be combined into the solution of the problem;

  (4) The sub-problems decomposed by this problem are independent of each other, that is, the sub-problems do not contain common sub-sub-problems.

3. The basic steps of divide and conquer

 Divide and conquer has three steps at each level of recursion:

  (1) Decomposition: decompose the original problem into several smaller, independent sub-problems with the same form as the original problem;

  (2) Solve: if a sub-problem is small enough to be solved easily, solve it directly; otherwise solve each sub-problem recursively;

  (3) Merge: merge the solutions of each sub-problem into the solution of the original problem.
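
  To make the three steps concrete, here is a merge-sort sketch in Python (a standard divide-and-conquer example, added here for illustration); the decompose, solve and merge steps are marked in the comments:

    def merge_sort(a):
        if len(a) <= 1:
            return a                       # a problem of size <= 1 is solved directly
        mid = len(a) // 2
        # decompose: two independent sub-problems of the same form, solved recursively
        left = merge_sort(a[:mid])
        right = merge_sort(a[mid:])
        # merge: combine the sub-solutions into a solution of the original problem
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged

    print(merge_sort([5, 2, 9, 1, 7]))     # [1, 2, 5, 7, 9]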

8. Dynamic programming method

  To save the time spent repeatedly solving the same sub-problems, a table (array) is introduced and the solutions of all sub-problems are stored in it, regardless of whether each of them is eventually needed for the final answer. This is the basic approach of the dynamic programming method.
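
  A minimal Python sketch (illustrative, not from the original text): the Fibonacci numbers computed with a table that stores the solution of every sub-problem, so each F(k) is computed exactly once:

    def fib_dp(n):
        table = [0] * (n + 1)              # table[k] will hold F(k) for every sub-problem
        for k in range(1, n + 1):
            if k <= 2:
                table[k] = 1
            else:
                table[k] = table[k - 1] + table[k - 2]   # reuse stored sub-solutions
        return table[n]

    print(fib_dp(10))                      # 55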
