The meaning of the algorithm


An algorithm is an accurate and complete description of a problem-solving scheme: a finite sequence of clear instructions for solving a class of problems. Algorithms represent a systematic strategy for describing how problems are solved, and they are used for calculation, data processing, and automated reasoning. An algorithm can be understood as a complete problem-solving procedure composed of basic operations performed in a prescribed order, or as a finite, exact sequence of computational steps designed to meet given requirements, such that carrying out the steps in order solves every problem of the class.

The instructions in an algorithm describe a computation that, when run, starts from an initial state and a (possibly empty) initial input, passes through a finite, well-defined sequence of states, finally produces an output, and stops at a final state. The transition from one state to the next is not necessarily deterministic: some algorithms, known as randomized algorithms, incorporate random input.

The concept of the formal algorithm arose in part from attempts to solve the decision problem (Entscheidungsproblem) posed by Hilbert, and took shape in subsequent attempts to define effective calculability or effective methods. These attempts included the recursive functions proposed by Kurt Gödel, Jacques Herbrand, and Stephen Cole Kleene in 1930, 1934, and 1935 respectively, Alonzo Church's lambda calculus in 1936, Emil Leon Post's Formulation 1 in 1936, and the Turing machine proposed by Alan Turing in 1937. Even today, there are often cases where an intuitive idea is difficult to capture as a formal algorithm.

Basic Information

Chinese name: 算法
Foreign name: Algorithm
Pinyin: suanfa
Source: calculator
Type: mathematics, computer terminology
Features: finite, definite, with input and output, feasible
Subject: mathematics
Commonly used for: calculation, data processing, and automated reasoning
Elements: calculations and operations on data objects, etc.


Overview

An algorithm is a mechanical, unified method for solving a class of problems, consisting of a finite number of steps which, for each specific problem in the class, can be carried out mechanically to obtain that problem's solution. This characteristic makes it possible for calculations to be performed not only by humans but also by computers. The process of solving a problem with a computer can be divided into three stages: analyzing the problem, designing an algorithm, and implementing the algorithm. [1]

Historical development

In ancient China, counting-rod and abacus calculation and their rules of execution were an embryonic form of the algorithm; the class of problems solved there was arithmetic. The ancient Greek mathematician Euclid proposed an algorithm in the 3rd century BC for finding the greatest common divisor of two positive integers. Words such as "arithmetic" and "algorithm" have long existed in China, but at that time they referred to all mathematical knowledge and computing skills, which differs from the modern meaning of "algorithm". The English word algorithm has also undergone an evolution: the original spelling was algorism or algorismi, which meant the process of calculating with Arabic numerals. The word is derived from the last part of the name of the 9th-century AD Persian mathematician al-Khwarizmi. [1]

In ancient times, calculation usually referred to numerical computation. Modern computing goes far beyond numerical calculation and includes a large amount of non-numerical computation, such as retrieval, table processing, judgment, decision-making, and formal logical deduction.

Before the 20th century, it was generally believed that every class of problems had an algorithm. At the beginning of the 20th century, mathematicians discovered that some problem classes have no algorithm, and so began to study computability. In the course of this research, the modern concept of the algorithm was gradually clarified. In the 1930s, mathematicians proposed computational models such as recursive functions and Turing machines, and put forward the Church-Turing thesis (see computability theory), which made it possible to formalize the concept of the algorithm. According to the Church-Turing thesis, any algorithm can be realized by a Turing machine and, conversely, any Turing machine represents an algorithm.

On this understanding, an algorithm is composed of a finite number of steps and has the following two basic characteristics: each step clearly specifies what operation is to be performed, and each step can be completed by a human or a machine in a finite amount of time. There is another, different understanding of the algorithm, which requires a third basic characteristic in addition to the two above: although some steps may be repeated many times, the answer to the problem must be obtained after a finite number of executions. In other words, a Turing machine that halts everywhere (that is, halts on every input) represents an algorithm, and every such algorithm can be implemented by a Turing machine that halts everywhere. [1]

Algorithm classification

Algorithms can be roughly divided into basic algorithms, data structure algorithms, number theory and algebra algorithms, computational geometry algorithms, graph theory algorithms, dynamic programming and numerical analysis, encryption algorithms, sorting algorithms, retrieval algorithms, randomized algorithms, and parallel algorithms. [1]

Algorithms can be broadly divided into three categories:

Bounded, deterministic algorithms: these algorithms terminate after a finite period of time. They may take a very long time to perform the assigned task, but they will still terminate within some bounded time. The results of such algorithms depend on the input values.

Finite, non-deterministic algorithms: these algorithms also terminate in a finite amount of time. However, for a given input (or some inputs) the result of the algorithm is not unique or deterministic.

Infinite algorithms: these are algorithms that do not terminate, either because no termination condition is defined or because the defined condition cannot be satisfied by the input data. Often, infinite algorithms arise from an undefined termination condition. [1]
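As a small illustration of the first two categories, the following Python sketch (a hypothetical example, not from the original text) contrasts a deterministic procedure, whose result is fixed by its input, with a randomized one, which terminates but whose result differs between runs on the same input.

```python
import random

def deterministic_max(values):
    """Bounded, deterministic: the same input always yields the same output."""
    best = values[0]
    for v in values[1:]:
        if v > best:
            best = v
    return best

def monte_carlo_pi(samples=10_000):
    """Finite but non-deterministic result: terminates after a fixed number of
    samples, yet returns a slightly different estimate of pi on every run."""
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4 * inside / samples

print(deterministic_max([3, 7, 2, 5]))  # always 7
print(monte_carlo_pi())                 # e.g. 3.1388 or 3.1464, varies per run
```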

Algorithm features

1. Input: an algorithm has zero or more inputs, which describe the initial state of the operands. For example, Euclid's algorithm has two inputs, m and n. [1]

2. Definiteness: each step of the algorithm must be defined exactly; that is, every action to be executed must be specified strictly and unambiguously. For example, step 1 of Euclid's algorithm clearly stipulates "divide m by n", rather than leaving open two possible readings such as dividing m by n or dividing n by m.

3. Finiteness: an algorithm must terminate after a finite number of steps; that is, the number of computational steps it contains is bounded. For example, in Euclid's algorithm m and n are both positive integers, and after step 1 the remainder r must be smaller than n. If r ≠ 0, then the next time step 1 is performed the value of n has decreased, and a descending sequence of positive integers must eventually terminate. Therefore, however large the original values of m and n, step 1 is performed only finitely many times.

4. Output: an algorithm has one or more outputs, that is, quantities that bear a definite relation to the inputs; simply put, the output is the final result of the algorithm. For example, Euclid's algorithm has a single output, the value of n in step 2.

5. Feasibility: the calculations and operations to be performed in the algorithm must be sufficiently basic; in other words, each of them can be carried out exactly, and the executor of the algorithm does not even need to understand the meaning of the algorithm in order to follow its steps mechanically and still obtain the correct result. [1]
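To tie the five features together, here is a minimal Python sketch of Euclid's algorithm as described above (two inputs m and n; step 1 divides m by n to obtain a remainder r; the output is the value of n once r becomes 0). This is an illustrative rendering, not code from the original text.

```python
def euclid_gcd(m, n):
    """Greatest common divisor of two positive integers m and n (Euclid's algorithm)."""
    while True:
        r = m % n          # step 1: divide m by n and take the remainder r
        if r == 0:         # step 2: if r is 0, the output is the current n
            return n
        m, n = n, r        # step 3: replace m by n, n by r, and repeat step 1
                           # finiteness: n strictly decreases, so the loop must end

print(euclid_gcd(1071, 462))  # 21
```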

Algorithm elements

Data calculation and manipulation

The basic operations that a computer can perform are described in the form of instructions. The set of all instructions that a computer system can execute is called the instruction set of that system. The basic calculations and operations of a computer fall into the following four categories (a small illustrative sketch follows the list): [1]

1. Arithmetic operations: addition, subtraction, multiplication, division, etc.

2. Logical operations: or, and, not, etc.

3. Relational operations: greater than, less than, equal to, not equal to, etc.

4. Data transmission: input, output, assignment and other operations
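A tiny hypothetical Python sketch showing one instance of each of the four categories:

```python
a, b = 6, 4                     # data transfer: assignment
total = a + b                   # arithmetic operation
differs = (a != b)              # relational operation
ok = differs and (total > 0)    # logical operation
print(total, differs, ok)       # data transfer: output -> 10 True True
```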

Algorithmic Control Structure

The functional structure of an algorithm depends not only on which operations are selected, but also on the order in which those operations are executed. [1]
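A tiny hypothetical sketch of why execution order matters: the same two operations (add 3, multiply by 2) give different results when performed in a different order.

```python
x = 10
a = (x + 3) * 2   # add first, then multiply: 26
b = (x * 2) + 3   # multiply first, then add: 23
print(a, b)       # 26 23 -- identical operations, different order, different result
```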

Algorithm evaluation

The same problem can be solved by different algorithms, and the quality of an algorithm affects the efficiency of the algorithm and even of the program. The purpose of algorithm analysis is to select a suitable algorithm and to improve it. An algorithm is evaluated mainly in terms of its time complexity and space complexity.

The complexity of the algorithm

1. Time complexity: the time complexity of an algorithm refers to the time resources the algorithm needs to consume, that is, the amount of computational work required to execute it. In general, this is a function of the problem size n, so the time complexity of the algorithm is written as

T(n) = O(f(n))

Thus, as the problem size n grows, the growth rate of the algorithm's execution time is positively related to the growth rate of f(n); this is called the asymptotic time complexity (Asymptotic Time Complexity). A small sketch contrasting two growth rates follows item 5 below. [1]

2. Space complexity: the space complexity of an algorithm refers to the space (memory) resources it needs to consume. It is computed and expressed in a manner similar to time complexity, generally in asymptotic form. Compared with time complexity, space complexity is usually much simpler to analyze.

3. Correctness: The correctness of an algorithm is the most important criterion for evaluating the quality of an algorithm.

4. Readability: The readability of an algorithm refers to the ease with which an algorithm can be read by humans.

5. Robustness: the ability of an algorithm to respond to and handle unreasonable data input, also known as fault tolerance.
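To illustrate items 1 and 2 above, here is a small hypothetical sketch comparing two ways of summing the integers 1..n: the loop performs O(n) operations, while the closed-form formula performs O(1); both need only O(1) extra space.

```python
def sum_loop(n):
    """O(n) time: performs n additions; O(1) extra space."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_formula(n):
    """O(1) time: one multiplication and one division; O(1) extra space."""
    return n * (n + 1) // 2

print(sum_loop(1000), sum_formula(1000))  # 500500 500500
```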

Description

1. Describe the algorithm in natural language [1]

The earlier descriptions of Euclid's algorithm and of the example algorithms were all given in natural language. Natural language is the language people use every day, such as Chinese, English, or German. No special training is needed to use it, and algorithms described in it are easy to understand.

2. Describe the algorithm with a flow chart

In mathematics courses we learned to describe an algorithm with a block diagram. Among program diagrams, the flowchart is a common tool for describing an algorithm: the algorithm is represented by a set of graphical symbols.

3. Describe the algorithm in pseudocode

Pseudocode is a tool for describing algorithms in words and symbols that lie between natural language and a computer language. It does not use graphical symbols, so it is convenient to write, compact in format, easy to understand, and easy to translate into a computer programming language.

Historical records

Algorithms are called "shu" in ancient Chinese literature, first appearing in the "Zhou Bi Suan Jing" and the "Nine Chapters on Arithmetic". In particular, the "Nine Chapters on Arithmetic" gives algorithms for the four arithmetic operations, the greatest common divisor, the least common multiple, square roots, cube roots, the sieve of Eratosthenes for prime numbers, and the solution of systems of linear equations. Liu Hui of the Three Kingdoms period gave an algorithm for calculating pi: Liu Hui's circle-cutting method. [1]

Since the Tang Dynasty, there have been many monographs devoted to "algorithms" in successive dynasties:

Tang Dynasty: One volume of "One Algorithm", one volume of "Algorithm";

Song Dynasty: one volume of "Introduction to Algorithms", one volume of "Algorithm Secrets"; the most famous is "Yang Hui's Algorithm" by Yang Hui;

Yuan Dynasty: "Ding Ju Algorithm";

Ming Dynasty: Cheng Dawei "Algorithm Tongzong"

Qing Dynasty: "Kaiping Algorithm", "Algorithm Yide", "Algorithm Complete Book".

The English name Algorithm comes from the 9th-century Persian mathematician al-Khwarizmi, who introduced the concept of the algorithm into mathematics. "Algorithm" was originally "algorism", meaning calculation with Arabic numerals, and evolved into "algorithm" in the 18th century. Euclid's algorithm is considered the first algorithm in history. The first program was written by Ada Byron in 1842; she is widely regarded as the world's first programmer because she wrote a program for Babbage's Analytical Engine to compute Bernoulli numbers. Because Charles Babbage never completed his Analytical Engine, the program was never executed on it. Since "well-defined procedure" lacked a mathematically precise definition, mathematicians and logicians of the 19th and early 20th centuries had difficulty defining the algorithm. In the 20th century, the British mathematician Turing put forward the famous Turing thesis and proposed an abstract model of a hypothetical computer, now called the Turing machine. The appearance of the Turing machine resolved the problem of defining the algorithm, and Turing's ideas played an important role in the development of algorithms. Classical examples include the sieve of Eratosthenes for prime numbers and the square-root formula (an algorithm is not the same as a formula, but a formula can provide an algorithm). [1]

Methods

1. Recurrence method

The recurrence method finds the solution of a problem by using a recurrence relation contained in the problem itself: the problem is divided into a number of steps, and the relation between adjacent steps is used to reach the goal step by step. [1]
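A minimal sketch (a hypothetical example) of the recurrence method: the Fibonacci numbers are computed step by step from the recurrence relation F(n) = F(n-1) + F(n-2).

```python
def fibonacci(n):
    """Compute F(n) by stepping forward through F(k) = F(k-1) + F(k-2)."""
    prev, curr = 0, 1                   # F(0), F(1)
    for _ in range(n):
        prev, curr = curr, prev + curr  # advance the recurrence by one step
    return prev

print([fibonacci(k) for k in range(8)])  # [0, 1, 1, 2, 3, 5, 8, 13]
```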

2. Recursive method

Recursion refers to a process in which a function keeps calling itself until the case it refers to is known directly (the base case).
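A minimal recursive sketch (a hypothetical example): the factorial function calls itself on a smaller problem until it reaches the known base case.

```python
def factorial(n):
    """Recursive factorial: n! = n * (n-1)!, with 0! = 1! = 1 as the base case."""
    if n <= 1:                       # base case: the value is known directly
        return 1
    return n * factorial(n - 1)      # recursive call on a smaller problem

print(factorial(5))  # 120
```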

3. Exhaustive search method

The exhaustive search method enumerates, in some order, the many candidates that might be solutions, tests them one by one, and takes the candidates that satisfy the requirements as the solutions of the problem.
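A small sketch of exhaustive search (a hypothetical example): enumerate every candidate and keep the ones that satisfy the requirement. Here we look for all pairs of positive integers (x, y) with x + y = 10 and x * y = 21.

```python
solutions = []
for x in range(1, 10):                     # enumerate every candidate x
    for y in range(1, 10):                 # and every candidate y
        if x + y == 10 and x * y == 21:    # test the requirement
            solutions.append((x, y))

print(solutions)  # [(3, 7), (7, 3)]
```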

4. Greedy method

The greedy method does not pursue the optimal solution; it only hopes to obtain a reasonably satisfactory one. It can usually find such a solution quickly, because it avoids the large amount of time that would be needed to exhaust every possibility in search of the optimum. The greedy method makes the locally optimal choice based on the current situation, without considering all possible overall situations, and it never backtracks.
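A minimal greedy sketch (a hypothetical example): making change by always taking the largest coin that still fits. For many coin systems this is satisfactory, though it is not guaranteed to be optimal in general, which matches the description above.

```python
def greedy_change(amount, coins=(25, 10, 5, 1)):
    """Repeatedly take the largest coin that does not exceed the remaining amount."""
    result = []
    for coin in coins:                 # coins listed from largest to smallest
        while amount >= coin:          # locally best choice, no backtracking
            result.append(coin)
            amount -= coin
    return result

print(greedy_change(63))  # [25, 25, 10, 1, 1, 1]
```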

5. Divide and conquer

Divide and conquer splits a complex problem into two or more identical or similar sub-problems, then splits those sub-problems into still smaller ones, until the final sub-problems can be solved simply and directly; the solution of the original problem is then the combination of the solutions of the sub-problems.
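A classic divide-and-conquer sketch (a hypothetical example): merge sort splits the list, sorts each half, and combines the two sorted halves.

```python
def merge_sort(values):
    """Divide-and-conquer sort: split, sort each half, then combine (merge)."""
    if len(values) <= 1:               # small enough to solve directly
        return values
    mid = len(values) // 2
    left = merge_sort(values[:mid])    # solve the sub-problems...
    right = merge_sort(values[mid:])
    merged = []                        # ...then combine their solutions
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```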

6. Dynamic programming method

Dynamic programming is a method used in mathematics and computer science to solve optimization problems that contain overlapping sub-problems. The basic idea is to decompose the original problem into similar sub-problems and to obtain the solution of the original problem from the solutions of those sub-problems computed along the way. The idea of dynamic programming underlies many algorithms and is widely used in computer science and engineering.
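A short dynamic-programming sketch (a hypothetical example): the minimum number of coins needed for an amount, where the sub-problem "fewest coins for a smaller amount" overlaps and is reused. It also shows a case where the greedy choice above would be worse.

```python
def min_coins(amount, coins=(1, 5, 11)):
    """dp[a] = fewest coins that sum to a; each dp[a] reuses smaller sub-problems."""
    INF = float("inf")
    dp = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for coin in coins:
            if coin <= a and dp[a - coin] + 1 < dp[a]:
                dp[a] = dp[a - coin] + 1
    return dp[amount]

print(min_coins(15))  # 3 (5 + 5 + 5); a greedy 11 + 1 + 1 + 1 + 1 would use 5 coins
```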

7. Iterative method

In numerical analysis, the iterative method solves a problem (usually an equation or a system of equations) by starting from an initial estimate and computing a sequence of successively better approximate solutions; the techniques used to realize this process are collectively called iterative methods.
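A minimal iterative sketch (a hypothetical example): Newton's iteration x_{k+1} = (x_k + a / x_k) / 2 for approximating the square root of a, starting from an initial guess and stopping when successive estimates are close enough.

```python
def newton_sqrt(a, tolerance=1e-10):
    """Iteratively refine an estimate of sqrt(a) from an initial guess."""
    x = a if a > 1 else 1.0            # initial estimate
    while True:
        next_x = (x + a / x) / 2       # one iteration step
        if abs(next_x - x) < tolerance:
            return next_x
        x = next_x

print(newton_sqrt(2.0))  # 1.4142135623730951
```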

8. Branch and bound method

Like the greedy method, the branch and bound method is used to design algorithms for combinatorial optimization problems. The difference is that it searches the entire space of possible solutions, so the time complexity of the resulting algorithm is higher than that of a greedy algorithm; its advantage is that, like the exhaustive method, it can guarantee the optimal solution of the problem. Yet it is not a blind exhaustive search: during the search, bounds allow it to stop partway and discard sub-spaces that cannot contain an optimal solution (similar to pruning in artificial intelligence), so it is more efficient than the exhaustive method.
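A compact branch-and-bound sketch (a hypothetical example) for the 0/1 knapsack problem: each item is branched on (include or exclude), and a simple bound (current value plus everything still available) prunes branches that cannot beat the best solution found so far.

```python
def knapsack_branch_and_bound(weights, values, capacity):
    """0/1 knapsack: explore include/exclude branches, prune with a simple bound."""
    n = len(weights)
    best = 0

    def search(i, weight, value, remaining_value):
        nonlocal best
        if weight > capacity:                 # infeasible branch
            return
        best = max(best, value)
        if i == n:
            return
        if value + remaining_value <= best:   # bound: even taking everything left
            return                            # cannot beat the best found so far
        rest = remaining_value - values[i]
        search(i + 1, weight + weights[i], value + values[i], rest)  # include item i
        search(i + 1, weight, value, rest)                           # exclude item i

    search(0, 0, 0, sum(values))
    return best

print(knapsack_branch_and_bound([2, 3, 4, 5], [3, 4, 5, 6], 5))  # 7 (items 0 and 1)
```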

 


Origin blog.csdn.net/2301_76571514/article/details/130739561