Algorithm Overview - A Summary of Four Classic Algorithms

1. What is an algorithm

What is an algorithm? In short, it is a well-defined method or process for solving a problem: given inputs that describe the actual situation, it produces the required results within a finite amount of time. Because a computer's computing resources and storage are limited, we need to choose appropriate algorithms to keep programs efficient. Different algorithms for the same problem can differ enormously in efficiency, and this gap sometimes matters more than the gap between computer configurations, so choosing a correct and suitable algorithm is essential when facing a problem. Algorithms and data structures together form executable programs; the algorithm is the soul of the program.

1.1. Characteristics of an Algorithm

Feasibility: every step performed in the algorithm can be completed in a finite amount of time;
Determinacy: each step of the algorithm must have an exact meaning, with no ambiguity;
Finiteness: the algorithm must terminate after executing a finite number of steps and cannot loop forever;
Input: conditions can be supplied or set according to the actual situation to describe the initial state of the objects being operated on;
Output: the output is the result computed from the input data; an algorithm without output is meaningless.

1.2. Algorithm Evaluation

Usually a good algorithm should achieve the following goals:
Correctness: the algorithm should solve the problem correctly.
Readability: the algorithm should be easy to read, so that people can understand what it does.
Robustness: it should tolerate abnormal situations; for example, when illegal data is input, the algorithm should respond appropriately rather than crashing or producing inexplicable results.

1.3. Algorithm Complexity

Algorithm complexity refers to the time and memory an algorithm consumes once it runs as an executable program.
Time complexity: an estimate of the running time the program requires.
Space complexity: an estimate of the storage space the program requires.
Note: time complexity is usually the primary concern.
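A small sketch of why time complexity matters in practice: summing the integers 1..n with a loop takes O(n) time, while Gauss's closed-form formula takes O(1). Both functions below are illustrative examples, not from the original article.

```python
def sum_loop(n):
    # O(n): one addition per integer from 1 to n.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_formula(n):
    # O(1): Gauss's formula n(n+1)/2 gives the same result in constant time.
    return n * (n + 1) // 2

print(sum_loop(1000))     # 500500
print(sum_formula(1000))  # 500500
```

For n = 1000 the results are identical, but the loop performs a thousand additions while the formula performs three arithmetic operations regardless of n.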

2. Four classic algorithms

2.1. Divide and conquer algorithm

2.1.1. What is it

What is the divide and conquer algorithm? Literally, it means to divide and conquer. It has two main parts. First, keep splitting the original large-scale problem into smaller sub-problems until they exhibit optimal substructure. Then merge: once the sub-problems are small enough to compute easily, repeatedly combine their answers until the solution to the original problem is found. This is the so-called divide-and-conquer method. The technique underlies many efficient algorithms, such as sorting algorithms and recursive algorithms, which will be explained in detail with examples in subsequent articles.

2.1.2. Why

Why use divide and conquer? The resources a program needs to solve a problem are related to the problem's scale: the smaller the problem, the easier it is to solve directly and the fewer computational resources it requires. Directly solving a large-scale problem can be quite difficult, but after splitting it, the scale keeps shrinking until the final sub-problems can be solved directly. This can greatly simplify the code, improve running speed, and reduce resource consumption.

2.1.3. How to use

How do you use divide and conquer? When tackling a problem with this method, each level of recursion has three steps:
1) Decompose: analyze the problem and break the original problem into several smaller, easier-to-solve sub-problems of the same form as the original;
2) Solve: if a sub-problem is small and easy to solve, solve it directly; otherwise solve it recursively;
3) Merge: combine the solutions of the sub-problems into the solution of the original problem.
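The three steps above can be sketched with merge sort, a classic divide-and-conquer algorithm (an illustrative example, not code from the original article):

```python
def merge_sort(arr):
    # 2) Solve: a list of length <= 1 is already sorted; solve it directly.
    if len(arr) <= 1:
        return arr
    # 1) Decompose: split the list in half and sort each half recursively.
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # 3) Merge: combine the two sorted halves into one sorted list.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```

Each recursive call handles a sub-problem of half the size, and the merge step combines the sub-solutions, exactly matching the decompose/solve/merge pattern.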

2.1.4. Summary

The divide-and-conquer method is problem-oriented. It can be used when a problem has all of the following characteristics at the same time:
1) The problem can be decomposed into several smaller, easier-to-solve problems of the same form; that is, the problem has the optimal-substructure property;
2) The solutions of the sub-problems can be combined into a solution of the original problem;
3) The sub-problems are independent of each other; that is, they share no common sub-problems.

2.2. Backtracking Algorithm

2.2.1. What is it

The backtracking algorithm, also known as the trial-and-error method, enumerates and tests candidate solutions one by one in some order. When the current candidate is found to be unable to be a correct solution, the next candidate is tried. If the current candidate satisfies every requirement except the problem-size requirement, its size is expanded and testing continues. If the current candidate satisfies all the requirements, including the size requirement, it is a solution to the problem. Consider the common maze problem: by choosing different branches at each fork, you try routes one by one toward the exit; if you hit a dead end, you return to the previous fork and choose another route, until you finally get out of the maze. This idea of retreating from a path that does not work is the idea of backtracking, and a state at which the search turns back is called a "backtracking point".

2.2.2. Why

Compared with brute-force search, the backtracking algorithm evaluates the current situation at each step. If the current partial solution can no longer satisfy the requirements, there is no need to continue from it; this helps us avoid many dead ends and improves search efficiency.

2.2.3. How to use

1) Analyze the problem and define its solution space;
2) Determine a solution-space structure that is easy to search;
3) From the current state, traverse all possible moves and make the next attempt;
4) Judge whether the current state is valid; if it is illegal, prune it: return immediately and cancel the attempt made in the previous step;
5) Check whether the current state meets the recursion's end condition; if so, save the current result and return.
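The steps above can be sketched with the N-queens puzzle, a standard backtracking example (illustrative code, not from the original article): place N queens on an N x N board so that no two attack each other, pruning any column or diagonal that is already occupied.

```python
def solve_n_queens(n):
    solutions = []

    def backtrack(row, cols, diag1, diag2, placement):
        if row == n:                        # recursion end condition: all rows filled
            solutions.append(placement[:])  # save the current result
            return
        for col in range(n):                # traverse all candidate columns
            # Pruning: skip squares attacked by an earlier queen.
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue
            cols.add(col)
            diag1.add(row - col)
            diag2.add(row + col)
            placement.append(col)
            backtrack(row + 1, cols, diag1, diag2, placement)
            placement.pop()                 # cancel the attempt (backtrack)
            cols.discard(col)
            diag1.discard(row - col)
            diag2.discard(row + col)

    backtrack(0, set(), set(), set(), [])
    return solutions

print(len(solve_n_queens(8)))  # 92 solutions to the classic 8-queens puzzle
```

The pruning check discards an entire subtree of the solution space the moment a conflict appears, which is exactly what makes backtracking faster than brute-force enumeration.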

2.2.4. Summary

The backtracking algorithm searches the solution space of a problem in a depth-first manner and uses a pruning function to avoid invalid searches along the way, thereby improving efficiency.

2.3. Greedy Algorithm

2.3.1. What is it

The greedy algorithm, also known as the greedy method, always tries to make the best choice available at the moment. This kind of algorithm does not consider the problem from the standpoint of overall optimality; it only finds a local optimum in some sense. Although the greedy algorithm cannot obtain the globally optimal solution for every problem, for a wide range of problems it produces a globally optimal solution or a good approximation to one. In other words, the greedy algorithm only pursues the optimum within a limited scope, which might be called "gentle greed".

2.3.2. Why

Starting from the local view, decompose the problem into several sub-problems, find the optimal solution to each sub-problem, and then combine these local optima into a final answer, yielding an approximately optimal solution to the whole problem. Obtaining local optima by decomposition, instead of exhaustively searching all possibilities for the global optimum, can save a great deal of time. Of course, a combination of local optima is not necessarily the overall optimal solution, so it should be analyzed according to the actual situation.

2.3.3. How to use

1) Establish a mathematical model that describes the problem;
2) Divide the problem to be solved into several sub-problems;
3) Solve each sub-problem to obtain its locally optimal solution;
4) Combine the locally optimal solutions of the sub-problems into a solution of the original problem.
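The steps above can be sketched with the classic coin-change problem (an illustrative example, not from the original article): at each step, greedily take as many of the largest remaining coin as possible.

```python
def greedy_change(amount, coins=(25, 10, 5, 1)):
    # At each step make the locally optimal choice: use as many of the
    # largest coin as still fits into the remaining amount.
    result = []
    for coin in sorted(coins, reverse=True):
        count, amount = divmod(amount, coin)
        result.extend([coin] * count)
    return result

print(greedy_change(63))  # [25, 25, 10, 1, 1, 1]
```

For a canonical coin system like (25, 10, 5, 1) the local choices combine into a globally optimal answer. This also illustrates the article's caveat that a local optimum is not always globally optimal: with coins (4, 3, 1) and amount 6, the greedy answer 4+1+1 uses three coins, while the true optimum 3+3 uses two.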

2.3.4. Summary

The greedy algorithm does not consider overall optimality; it only makes the choice that is locally optimal in some sense. For some problems, the globally optimal solution can be reached through a series of locally optimal choices, where each choice may depend on the previous choices but never on future ones. This is the greedy-choice property.

2.4. Dynamic programming algorithm

2.4.1. What is it

The dynamic programming method is similar to the divide-and-conquer method: its basic idea is to decompose the problem into several smaller problems and obtain the solution of the original problem by solving them. The difference is that the sub-problems of divide and conquer are independent of each other, whereas in dynamic programming the solution of the next sub-problem is built on the solutions of earlier sub-problems. In other words, the solution stages form a recursive chain, such as problem 1 -> problem 2 -> problem 3 -> final solution.

2.4.2. Why

As with divide and conquer, the problem is decomposed: the smaller the problem, the fewer resources it consumes. By dividing a large problem into small ones and solving them stage by stage, we obtain the optimal solution step by step and improve the efficiency of program execution.

2.4.3. How to use

The main difficulty of dynamic programming lies in the design; once the design is done, the implementation is very simple. Designing a dynamic programming solution generally takes several steps:
1) Divide into stages: according to the actual problem, divide it into an ordered sequence of stage problems;
2) Determine states and decisions: when solving the problem at each stage, make sure the stage solutions have no after-effects, i.e. future decisions depend only on the current state, not on how it was reached;
3) Write the state transition equation: decisions and state transitions are naturally linked, since a state transition derives the state of this stage from the state and decision of the previous stage. So once the decisions are determined, the state transition equation can be written. In practice it is often done in reverse: the decisions are determined from the relationship between the states of two adjacent stages;
4) Find the boundary conditions: the state transition equation is a recursive formula, which requires a termination condition or boundary conditions.
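The four design steps above can be sketched with the classic climbing-stairs problem (an illustrative example, not from the original article): count the ways to climb n steps taking 1 or 2 steps at a time.

```python
def climb_stairs(n):
    # Stage/state: dp[i] = number of distinct ways to reach step i.
    # State transition equation: dp[i] = dp[i-1] + dp[i-2]
    #   (the last move was either a 1-step from i-1 or a 2-step from i-2).
    # Boundary conditions: dp[1] = 1, dp[2] = 2.
    if n <= 2:
        return n
    dp = [0] * (n + 1)
    dp[1], dp[2] = 1, 2
    for i in range(3, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]  # each stage builds on earlier stages
    return dp[n]

print(climb_stairs(10))  # 89
```

Note how the dp table makes the trade of space for time described in the summary below: each sub-result is stored once and reused, instead of being recomputed by a naive recursion.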

2.4.4. Summary

The key to dynamic programming is eliminating redundant computation; that is its fundamental purpose. Dynamic programming is essentially a technique that trades space for time: during execution it must store the various intermediate states it generates, so its space complexity is greater than that of other algorithms. We choose dynamic programming when we can afford the space but a plain search cannot afford the time, so we spend space to save time.

Origin blog.csdn.net/xianren95/article/details/126674531