Learning about time and space complexity

Algorithm time complexity definition

When analyzing an algorithm, the total number of statement executions T(n) is treated as a function of the problem size n; we then examine how T(n) changes with n and determine the order of magnitude of T(n). The time complexity of an algorithm is its time measure, recorded as T(n) = O(f(n)). This means that as the problem size n grows, the growth rate of the algorithm's execution time is the same as the growth rate of f(n); this is called the asymptotic time complexity of the algorithm, or time complexity for short. Here, f(n) is some function of the problem size n.

Standard units for measuring algorithms

Asymptotic notation

1. Θ (big-theta)

If there are positive constants c1, c2 and n0 such that when n ≥ n0 the inequality 0 ≤ c1g(n) ≤ f(n) ≤ c2g(n) always holds, then g(n) is said to be an asymptotically tight bound of f(n), denoted Θ. It combines an asymptotic upper bound and an asymptotic lower bound.

A simple understanding is that when n ≥ n0, f(n) is sandwiched between c1g(n) and c2g(n): c1g(n) is the asymptotic lower bound of f(n), and c2g(n) is the asymptotic upper bound of f(n), as shown in the figure below.

[Figure: f(n) sandwiched between c1g(n) and c2g(n) for n ≥ n0]

2. O (big-oh)

If there are positive constants c and n0 such that when n ≥ n0 the inequality 0 ≤ f(n) ≤ cg(n) always holds, then g(n) is said to be an asymptotic upper bound of f(n), denoted O.

A simple understanding is that when n ≥ n0, cg(n) always lies above f(n); cg(n) is the asymptotic upper bound of f(n). As shown below.

[Figure: cg(n) lies above f(n) for n ≥ n0]

3. Ω (big-omega)

If there are positive constants c and n0 such that when n ≥ n0 the inequality 0 ≤ cg(n) ≤ f(n) always holds, then g(n) is said to be an asymptotic lower bound of f(n), denoted Ω.

A simple understanding is that when n ≥ n0, cg(n) always lies below f(n); cg(n) is the asymptotic lower bound of f(n). As shown below.
[Figure: cg(n) lies below f(n) for n ≥ n0]
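
As a quick check of these definitions, take f(n) = 3n + 3 and g(n) = n: for every n ≥ 3 we have 3n ≤ 3n + 3 ≤ 4n, so the constants c1 = 3, c2 = 4, n0 = 3 satisfy the Θ inequality, and each half of the inequality gives the O and Ω bounds on its own. Hence 3n + 3 = Θ(n), and likewise O(n) and Ω(n).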

Conventionally, O is used for the algorithm's time complexity in the worst case, Ω for the best case, and Θ for the average case; this is the relatively standard usage in algorithm books. Nowadays most material on the Internet simply uses O for everything. It is enough to understand the distinction here.

This section draws on an article I found online.
Original link: https://blog.csdn.net/qq_31116753/article/details/81602582

Steps and rules for deriving time complexity

Steps

  1. Find the basic statement in the algorithm. The statement executed the most times is the basic statement; it is usually the body of the innermost loop.

  2. Calculate the order of magnitude of the number of executions of the basic statement, keeping only the highest-order term. This simplifies the analysis and focuses attention on what matters most: the growth rate.

  3. Replace all additive constants in the running time with the constant 1.

  4. If the highest-order term is present and its coefficient is not 1, drop that coefficient. The result is the Big O order.
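
As a hypothetical illustration of these steps (not from the original article), consider:

```python
def count_pairs(items):
    n = len(items)
    count = 0
    for i in range(n):        # outer loop: runs n times
        for j in range(n):    # inner loop: runs n times per outer iteration
            count += 1        # basic statement: executes n * n times in total
    return count

# Step 2: the basic statement runs n^2 times, so f(n) = n^2 + lower-order terms.
# Steps 3-4: drop additive constants and the leading coefficient -> O(n^2).
```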

O(log n), O(n), O(n log n), O(n^2) and O(n^3) are called polynomial time, while O(2^n) and O(n!) are called exponential time. Computer scientists generally regard the former as efficient algorithms and call such problems P (Polynomial) problems, while the latter are called NP (Non-Deterministic Polynomial) problems.

Rules

  1. Simple input, output, or assignment statements are estimated to take O(1) time.

  2. For a sequential structure, the time to execute a series of statements one after another uses the "summation rule" under Big O. Summation rule: if the two parts of an algorithm have time complexities T1(n) = O(f(n)) and T2(n) = O(g(n)), then T1(n) + T2(n) = O(max(f(n), g(n))). In particular, if T1(m) = O(f(m)) and T2(n) = O(g(n)), then T1(m) + T2(n) = O(f(m) + g(n)).

  3. For selection structures such as if statements, the main cost is executing the then clause or the else clause; note that the test condition itself also takes O(1) time.

  4. For loop structures, the running time comes mainly from executing the loop body and testing the loop condition over many iterations; this generally uses the "multiplication rule" under Big O. Multiplication rule: if the two parts of an algorithm have time complexities T1(n) = O(f(n)) and T2(n) = O(g(n)), then T1(n) * T2(n) = O(f(n) * g(n)).

  5. For a complex algorithm, split it into parts that are easy to estimate, then combine them with the summation and multiplication rules to get the time complexity of the whole algorithm. Two further rules also hold: (1) if g(n) = O(f(n)), then O(f(n)) + O(g(n)) = O(f(n)); (2) O(C*f(n)) = O(f(n)), where C is a positive constant.
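
A small sketch of these rules in action (my own illustration):

```python
def demo(items):
    n = len(items)
    total = 0                  # rule 1: an assignment takes O(1)
    for x in items:            # first block: a single loop, O(n)
        total += x
    pairs = 0
    for i in range(n):         # rule 4 (multiplication rule): n iterations,
        for j in range(n):     # each with an O(n) body -> O(n * n) = O(n^2)
            pairs += 1
    # rule 2 (summation rule): O(1) + O(n) + O(n^2) = O(max(1, n, n^2)) = O(n^2)
    return total, pairs
```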

Examples

1. Constant order

This is the complexity of a sequential structure. Next, we use Gauss's formula to compute the sum 1 + 2 + 3 + … + n.

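The original code screenshot is missing; a minimal sketch consistent with the description below (three statements, none depending on n) might be:

```python
def gauss_sum(n):
    total = 0                  # statement 1: runs once
    total = (1 + n) * n // 2   # statement 2: Gauss's formula, runs once
    return total               # statement 3: runs once
```
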
Treating this as a function, it has three statements, so f(n) = 3. By the rules above, the count is unaffected by n; it is a constant term, so the time complexity is written O(1). An algorithm like this, whose execution time is constant regardless of the problem size n, is said to have time complexity O(1), also called constant order.

2. Linear order

The loop structure of linear order is considerably more involved. To determine an algorithm's order, we often need to determine how many times a particular statement or group of statements runs; hence the key to analyzing an algorithm's complexity is analyzing its loop structure.

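Here too the screenshot is lost; a loop matching the statement frequencies listed below would be something like:

```python
def linear_sum(n):
    i = 0                  # statement 1: runs once
    total = 0              # statement 2: runs once
    step = 1               # statement 3: runs once
    while i < n:
        total = total + i  # statement 4: runs n times
        i = i + step       # statement 5: runs n times
        print(total)       # statement 6: runs n times
    return total
```
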
In the sketch above, statements 1, 2 and 3 each have frequency 1, and statements 4, 5 and 6 each have frequency n, so f(n) = 1 + 1 + 1 + n + n + n = 3n + 3. By the rules, the time complexity of this algorithm is O(n).

3. Logarithmic order

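The screenshot is missing; a doubling loop consistent with the analysis below (assuming n > 1) is:

```python
def log_loop(n):
    i = 1           # statement 1: runs once
    while i < n:
        i = i * 2   # statement 2: runs f(n) times, where 2^f(n) covers n
    return i
```
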
As in the sketch above, with n > 1: statement 1 has frequency 1, and statement 2 has frequency f(n), where 2^f(n) ≤ n, i.e. f(n) ≤ log2 n. Taking the maximum value, f(n) = log2 n, so the time complexity is written O(log n).

4. Square order

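Likewise, a nested double loop matching the analysis below:

```python
def square_order(n):
    count = 0              # statement 1: runs once
    for i in range(n):
        for j in range(n):
            count += 1     # statement 2: runs n * n times
    return count
```
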
In the sketch above, statement 1 has frequency 1 and statement 2 has frequency n * n, so f(n) = 1 + n * n = n^2 + 1, and the time complexity is written O(n^2).

5. Cubic order

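And a triple loop for the cubic case:

```python
def cubic_order(n):
    count = 0                  # statement 1: runs once
    for i in range(n):
        for j in range(n):
            for k in range(n):
                count += 1     # statement 2: runs n * n * n times
    return count
```
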
As above, f(n) = 1 + n * n * n = n^3 + 1, so the time complexity is written O(n^3).

An interesting problem I encountered before

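Both screenshots of the problem are missing. Judging from the formulas and the derivation below, the code was presumably a triply nested loop with dependent bounds; a sketch consistent with that count is:

```python
def dependent_loops(n):
    count = 0
    for i in range(1, n + 1):
        for j in range(1, i + 1):
            for k in range(1, j + 1):
                # the body runs 1*2/2 + 2*3/2 + ... + n(n+1)/2 times in total
                count += 1
    return count
```
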
Math formula supplement

Formula 1: 1*2/2 + 2*3/2 + 3*4/2 + … + n(n+1)/2 = n(n+1)(n+2)/6
Formula 2: 1^2 + 2^2 + 3^2 + … + n^2 = n(n+1)(2n+1)/6
Formula 3: 1^3 + 2^3 + 3^3 + … + n^3 = [n(n+1)/2]^2
f(n) = n(n+1)(n+2)/6
= n(n^2 + 3n + 2)/6
= (n^3 + 3n^2 + 2n)/6
The time complexity is O(n^3).

Common time complexity comparisons

The common time complexities, ordered from smallest to largest, are:
O(1) < O(log n) < O(n) < O(n log n) < O(n^2) < O(n^3) < O(2^n) < O(n!) < O(n^n)

As for exponential order O(2^n), factorial order O(n!), and the like: unless n is very small, the running time is a nightmare, even for n as small as 100. We therefore generally do not discuss such impractical time complexities.
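
To make the gap concrete, a small script (my own addition) tabulates a few of these growth functions:

```python
import math

for n in (10, 20, 30):
    print(f"n={n}: log n={math.log2(n):.1f}, n^2={n**2}, n^3={n**3}, "
          f"2^n={2**n}, n!={math.factorial(n)}")
# Already at n = 30, 2^n exceeds one billion and n! has 33 digits.
```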

Worst case vs. average case

Suppose we search for a number in an array of n random numbers. In the best case the first element is the one we want, and the time complexity is O(1); but the number may also be sitting in the last position, in which case the time complexity is O(n). That is the worst case.
The worst-case running time is a guarantee: the running time will never be worse than it. In applications this is one of the most important requirements, so generally, unless otherwise specified, the running time we quote is the worst-case running time.
The average running time looks at the problem from a probabilistic perspective: the target is equally likely to be at any position, so on average about n/2 comparisons are needed to find it. The average running time is the most meaningful of all, because it is the expected running time: when we run a piece of code, what we would like to know is its average behavior. In reality, however, the average running time is hard to obtain by analysis and is usually estimated by running a certain amount of experimental data. Generally, unless otherwise specified, "time complexity" means worst-case time complexity.
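
A sketch of the search just described (my own illustration):

```python
def find(nums, target):
    for i, x in enumerate(nums):
        if x == target:     # best case: target is nums[0] -> 1 comparison, O(1)
            return i
    return -1               # worst case: target is last or absent -> n comparisons, O(n)
# On average, about n/2 comparisons are needed, which is still O(n).
```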

Algorithm space complexity

Analogously to time complexity, the space complexity S(n) of an algorithm is defined as the storage space the algorithm consumes, which is also a function of the problem size n. Asymptotic space complexity is often simply called space complexity; it measures the storage an algorithm temporarily occupies while it runs.

The storage an algorithm occupies in memory has three parts: the space occupied by the algorithm itself, the space occupied by the algorithm's input and output, and the space the algorithm temporarily occupies while running. The space occupied by the input and output data is determined by the problem being solved and is passed in via the calling function's parameter list; it does not vary from algorithm to algorithm. The space occupied by the algorithm itself is proportional to the length of its code; to compress this part, one must write shorter code. The space an algorithm temporarily occupies while running, however, varies with the algorithm: some algorithms need only a few temporary working units that do not change with the problem size. Such algorithms are called "in-place" and are storage-efficient. Other algorithms need a number of temporary working units related to the problem size n, growing as n grows; when n is large they occupy many storage units.

When an algorithm's space complexity is a constant, i.e. it does not change with the size n of the data being processed, it is written O(1); when its space complexity is proportional to the base-2 logarithm of n, it is written O(log n); when it is linearly proportional to n, it is written O(n). If a formal parameter is an array, only the space for one address pointer passed by the actual parameter needs to be allocated, i.e. one machine word; if a formal parameter is a reference, only the space for one address needs to be allocated, used to store the address of the corresponding actual parameter variable so that the system can automatically reference the actual parameter variable.

The space complexity of an algorithm is obtained by calculating the storage space the algorithm requires, and is recorded as S(n) = O(f(n)), where n is the problem size and f(n) is a function of n describing the storage the statements occupy.

Generally, when a program executes on a machine, besides storing its own instructions, constants, variables, and input data, it also needs storage units for manipulating data. If the space occupied by the input data depends only on the problem itself and has nothing to do with the algorithm, then we only need to analyze the auxiliary units the algorithm requires. If the auxiliary space required is constant relative to the amount of input data, the algorithm is said to work in place, and its space complexity is O(1). Note that O(1) means the number of temporary variables is unrelated to the data size, not that only one temporary variable is defined: if I define 100 variables no matter how big the data is, the variable count is still unrelated to the data size, and the space complexity is still O(1).
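
For example (an illustration of mine, not from the source), compare reversing a list in place with reversing into a new list:

```python
def reverse_in_place(nums):
    # O(1) auxiliary space: a fixed number of temporaries (i, j),
    # no matter how long nums is -- the algorithm works "in place".
    i, j = 0, len(nums) - 1
    while i < j:
        nums[i], nums[j] = nums[j], nums[i]
        i += 1
        j -= 1
    return nums

def reverse_into_copy(nums):
    # O(n) auxiliary space: the new list grows with the input.
    return [nums[k] for k in range(len(nums) - 1, -1, -1)]
```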

For an algorithm, time complexity and space complexity often influence each other. Pursuing better time complexity may worsen space complexity, i.e. occupy more storage; conversely, pursuing better space complexity may worsen time performance, i.e. lengthen the running time. All of an algorithm's performance characteristics interact to some degree. Therefore, when designing an algorithm (especially a large one), one must weigh its performance, how frequently it will be used, the amount of data it processes, the characteristics of the language used to describe it, and the machine environment it runs in. Only by considering all these factors can a better algorithm be designed.

The time complexity and space complexity of commonly used algorithms

[Table: time and space complexities of common algorithms]
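
The original image is not preserved; a typical version of such a table, with standard textbook values, looks like this:

  Bubble sort      average O(n^2)      worst O(n^2)      space O(1)
  Insertion sort   average O(n^2)      worst O(n^2)      space O(1)
  Selection sort   average O(n^2)      worst O(n^2)      space O(1)
  Quick sort       average O(n log n)  worst O(n^2)      space O(log n)
  Merge sort       average O(n log n)  worst O(n log n)  space O(n)
  Heap sort        average O(n log n)  worst O(n log n)  space O(1)
  Binary search    average O(log n)    worst O(log n)    space O(1)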

Some calculation rules

1. Addition rule

 T(n,m) = T1(n) + T2(m) = O(f(n) + g(m)); for a single size n, T1(n) + T2(n) = O(max{f(n), g(n)})

2. Multiplication rule

 T(n,m) = T1(n) * T2(m) = O(f(n) * g(m))
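
A quick sketch (my own) applying both rules to two independent input sizes n and m:

```python
def combine(a, b):
    n, m = len(a), len(b)
    total = sum(a)         # O(n): one pass over a
    for x in a:            # multiplication rule: n iterations,
        for y in b:        # each scanning m elements -> O(n * m)
            total += x * y
    # addition rule: O(n) + O(n * m) = O(n * m)
    return total
```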

3. The relationship between complexity and time efficiency

c (constant) < log n < n < n*log n < n^2 < n^3 < 2^n < 3^n < n!
|--------------------------------|---------------------------|--------------|
              better                         fair                  worse

Original link: https://blog.csdn.net/daijin888888/article/details/66970902#commentBox

Origin: blog.csdn.net/qq_42194657/article/details/135438355