Data structure: preliminary understanding of algorithms

An algorithm is a description of the steps to solve a specific problem, represented in a computer as a finite sequence of instructions, and each instruction represents one or more operations.

What is an algorithm?

You are asked to write a program that obtains the result of 1+2+3+…+100. How should you write it?
Most people will immediately write the following C language code (or code in other languages):

int i, sum = 0, n = 100;
for (i = 1; i <= n; i++) {
	sum = sum + i;
}
printf("%d", sum);

The great mathematician Gauss, born in a small German village in the 18th century, gave a different answer: pair the terms as 1+100, 2+99, and so on, so that sum = (1 + n) × n / 2.

The program is implemented as follows:
int sum = 0, n = 100;
sum = (1 + n) * n / 2;
printf("%d", sum);

To solve a particular problem, or class of problems, the steps must be represented as a definite sequence of operations, each of which completes a specific function. This is the algorithm. (An algorithm, in essence, describes a method of solving a problem.)

The example above also shows that, for a given problem, there can be more than one algorithm that solves it.
Is there a universal algorithm — a medicine that cures all diseases? Real-world problems come in endless variety, and so do the algorithms for them; no single algorithm solves every problem. Even an excellent algorithm for one problem may be entirely unsuitable for another.

Characteristics of algorithms

Algorithms have five basic characteristics: input, output, finiteness, certainty, and feasibility.

  • Input
    An algorithm has zero or more inputs. Most algorithms require input parameters, but some, such as code that prints "hello world!", need none.
  • Output
    An algorithm has at least one output. An algorithm must produce output — otherwise, why run it at all? The output can take the form of printing, or of returning one or more values.
  • Finiteness
    An algorithm terminates automatically after a finite number of steps, with no infinite loop, and each step completes within an acceptable time. If an algorithm would take a computer twenty years to finish, it is finite in the mathematical sense, but by the time the daughter-in-law has become a mother-in-law, the result is of little use.
  • Certainty
    Each step of an algorithm has a definite meaning, with no ambiguity. The same input always produces the same output.
  • Feasibility
    Each step of an algorithm must be feasible, completable in a finite number of executions. Feasibility means the algorithm can be turned into a program that runs on a computer and produces correct results.

Algorithm design requirements

Algorithms are not unique: the same problem can be solved by more than one algorithm. Still, among them, relatively good algorithms do exist.

  • Correctness
    The correctness of an algorithm means that it has unambiguous input, output, and processing, correctly reflects the needs of the problem, and produces the right answer.
  • Readability
    Another goal of algorithm design is to make the algorithm easy to read, understand, and communicate.
  • Robustness
    When the input data is illegal, the algorithm should handle it gracefully rather than produce abnormal or inexplicable results.
  • High time efficiency and low storage requirements
    An algorithm should aim for high time efficiency and low storage use. Just as people want to achieve the most with the least money in the shortest time, a good algorithm accomplishes the same task with the least storage space and the least time.

In summary, a good algorithm should have the characteristics of correctness, readability, robustness, high efficiency, and low storage requirements.

Algorithm time complexity

When analyzing an algorithm, the total number of statement executions T(n) is a function of the problem size n. We analyze how T(n) changes as n grows and determine its order of magnitude. The time complexity of the algorithm, that is, its time measure, is written as:

T(n) = O(f(n))

where f(n) is some function of the problem size n. This means that as n increases, the growth rate of the algorithm's execution time is the same as the growth rate of f(n); this is called the asymptotic time complexity of the algorithm, or time complexity for short. This notation, which uses a capital O to express the time complexity, is called Big O notation.

Derive the big O order:
  • Replace all additive constants in the run-time function with the constant 1.
  • In the modified function, keep only the highest-order term.
  • If the highest-order term exists and its coefficient is not 1, drop the coefficient.

Common time complexity

Execution count T(n)       Big O order     Informal term
12                         O(1)            constant order
2n + 3                     O(n)            linear order
3n^2 + 2n + 3              O(n^2)          square order
5·log2(n) + 20             O(log n)        logarithmic order
2n + 3n·log2(n) + 19       O(n·log n)      nlogn order
6n^3 + 2n^2 + 3n + 4       O(n^3)          cubic order
2^n                        O(2^n)          exponential order

The commonly used time complexities, from smallest to largest, are:
O(1) < O(log n) < O(n) < O(n·log n) < O(n^2) < O(n^3) < O(2^n) < O(n!) < O(n^n)

One way to analyze an algorithm is to average over all possible inputs; the result is called the average time complexity. Another is to take the worst case, called the worst-case time complexity. Unless otherwise specified, the time complexity of an algorithm refers to the worst-case time complexity.

Algorithm space complexity

When we write code, we can trade space for time. For example, to determine whether a given year is a leap year, one approach is to write an algorithm that computes the answer; being an algorithm, it must recompute every time a year is given. Another approach is to build, in advance, an array of 2050 elements (a few more years than we actually need) and map every year to an array index: the entry is 1 if the year is a leap year and 0 if it is not. Deciding whether a year is a leap year then reduces to looking up one entry of the array. The computation is minimized, but those 2050 zeros and ones must be stored on disk or in memory. This is a small trick that trades space overhead for computation time.

The space complexity of an algorithm is obtained by calculating the storage space the algorithm requires. It is written as S(n) = O(f(n)), where n is the problem size and f(n) is a function of n describing the storage space the statements occupy.

If the auxiliary space required during execution is constant with respect to the amount of input data, the algorithm is said to work in place, and its space complexity is O(1).

Origin blog.csdn.net/wujakf/article/details/127885423