Data Structures from Scratch (C Language Version, Yan Weimin): Time Complexity and Space Complexity

Once an algorithm is written as an executable program, running it consumes time resources and space (memory) resources. Therefore, the quality of an algorithm is generally measured along two dimensions, time and space, namely time complexity and space complexity.

Time complexity measures how fast an algorithm runs, while space complexity measures the extra space the algorithm needs while it runs. In the early days of computing, storage capacity was very small, so space complexity mattered a great deal. With the rapid growth of the computer industry, storage capacity has become very large, so today we usually pay less special attention to an algorithm's space complexity.

1. Time complexity

1.1 The concept of time complexity

Definition of time complexity: in computer science, the time complexity of an algorithm is a function that quantitatively describes the algorithm's running time. The actual time an algorithm takes cannot be worked out on paper; you only know it once you run the program on a machine. But do we really need to test every algorithm on a computer? We could, but it would be very troublesome, which is why time complexity analysis was introduced. The time an algorithm takes is proportional to the number of times its statements execute, and the number of times the algorithm's basic operations execute is its time complexity.

In other words, finding the mathematical expression that relates the execution count of a basic statement to the problem size N gives the time complexity of the algorithm.

Example: (nested loop time complexity calculation)
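
The code image for Func1 is not reproduced here; the following is a sketch of a typical Func1 for this kind of example (the name and loop bounds are my assumptions), whose exact count of basic operations is F(N) = N*N + 2*N + 10.

#include <stdio.h>

// Hypothetical Func1: an N*N nested loop, a 2*N loop, and a constant loop.
// Exact count of ++count operations: F(N) = N*N + 2*N + 10.
void Func1(int N)
{
	int count = 0;
	for (int i = 0; i < N; ++i)        // executed N * N times in total
	{
		for (int j = 0; j < N; ++j)
		{
			++count;
		}
	}
	for (int k = 0; k < 2 * N; ++k)    // executed 2 * N times
	{
		++count;
	}
	int M = 10;
	while (M--)                        // executed 10 times
	{
		++count;
	}
	printf("%d\n", count);
}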

In practice, when we calculate time complexity, we do not actually need the exact number of executions, only its approximate order, so we use Big O asymptotic notation.

In practice we are generally concerned with the worst-case behavior of an algorithm. The time complexity of Func1 above is therefore O(N^2).

1.2 Big O asymptotic notation

Big O notation: a mathematical notation used to describe the asymptotic behavior of a function.

Rules for deriving the Big O order (a worked application follows the list):

1. Replace all additive constants in the running time with the constant 1.

2. In the resulting operation-count function, keep only the highest-order term.

3. If the highest-order term exists and its coefficient is not 1, drop that coefficient. What remains is the Big O order.
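
For example, applied to the hypothetical Func1 above with F(N) = N*N + 2*N + 10: rule 1 turns the additive constant 10 into 1, rule 2 keeps only the highest-order term N*N, and rule 3 has no coefficient to strip, so the Big O order is O(N^2).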

Example 1: (two consecutive loops time complexity calculation)
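
The code image is missing; a sketch of what such a Func2 typically looks like (name assumed), with F(N) = 2*N + 10:

#include <stdio.h>

// Hypothetical Func2: F(N) = 2*N + 10 basic operations.
void Func2(int N)
{
	int count = 0;
	for (int k = 0; k < 2 * N; ++k)   // 2 * N operations
	{
		++count;
	}
	int M = 10;
	while (M--)                       // 10 operations
	{
		++count;
	}
	printf("%d\n", count);
}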

The time complexity is O(N)

Example 2:
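
Again the code image is missing; a typical Func3 for this case (names assumed) runs one loop M times and another N times, so F(N, M) = M + N:

#include <stdio.h>

// Hypothetical Func3: the basic operation runs M + N times in total.
void Func3(int N, int M)
{
	int count = 0;
	for (int k = 0; k < M; ++k)   // M operations
	{
		++count;
	}
	for (int k = 0; k < N; ++k)   // N operations
	{
		++count;
	}
	printf("%d\n", count);
}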

The time complexity is O(M+N)

If the problem adds a premise:

1) M is much larger than N -> O(M)

2) N is much larger than M -> O(N)

3) M and N are about the same size -> O(M) or O(N) can be used

Note: normally N is used as the variable for the problem size when calculating time complexity, but M, K, etc. can also be used.

Example 3: (constant loop time complexity calculation)
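
A sketch of a typical Func4 for this case (my assumption of what the missing code shows): the loop count is a fixed constant, independent of N.

#include <stdio.h>

// Hypothetical Func4: the loop always runs 100 times, no matter what N is.
void Func4(int N)
{
	int count = 0;
	for (int k = 0; k < 100; ++k)
	{
		++count;
	}
	printf("%d\n", count);
}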

The time complexity is O(1) (Note: this does not mean the basic operation runs exactly once, but a constant number of times)

Example 4: (strchr time complexity)

Note: strchr() searches a string for a character and returns a pointer to the first occurrence of that character in the string (NULL if the character is not found).
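
The code image is missing; below is a simplified hand-written search in the spirit of strchr (not the library implementation itself), which is enough to see where the complexity comes from.

#include <stddef.h>

// Simplified strchr-style search (sketch): scan the string once.
// Best case: the character is at the front, O(1).
// Average case: roughly N/2 characters inspected.
// Worst case: the character is at the end or absent, all N characters inspected.
const char* my_strchr(const char* str, char character)
{
	while (*str != '\0')
	{
		if (*str == character)
			return str;
		++str;
	}
	return NULL;
}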

 

When an algorithm's time complexity differs for different inputs, we take the pessimistic view and look at the worst case. The time complexity here is therefore O(N).

Example 5: (Bubble sort time complexity calculation)
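
The code image is missing; a sketch of the usual BubbleSort written for this kind of example (names assumed; the variables end and i also matter for the space-complexity section later):

// Hypothetical BubbleSort: in the worst case the inner loop performs
// (N-1) + (N-2) + ... + 1 comparisons in total.
void BubbleSort(int* a, int n)
{
	for (int end = n; end > 0; --end)
	{
		int exchange = 0;              // did this pass swap anything?
		for (int i = 1; i < end; ++i)  // end - 1 comparisons in this pass
		{
			if (a[i - 1] > a[i])
			{
				int tmp = a[i - 1];
				a[i - 1] = a[i];
				a[i] = tmp;
				exchange = 1;
			}
		}
		if (exchange == 0)             // no swap: already sorted, stop early
			break;
	}
}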

Exact worst-case count: F(N) = (N-1) + (N-2) + ... + 1 = N*(N-1)/2

The time complexity is O(N^2)

Example 6: (Binary search time complexity calculation)
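
The code image is missing; a sketch of an iterative BinarySearch on a sorted array (names assumed):

// Hypothetical BinarySearch: each comparison halves the remaining range.
int BinarySearch(int* a, int n, int x)
{
	int begin = 0;
	int end = n - 1;
	while (begin <= end)
	{
		int mid = begin + (end - begin) / 2;  // avoids overflow of begin + end
		if (a[mid] < x)
			begin = mid + 1;
		else if (a[mid] > x)
			end = mid - 1;
		else
			return mid;                       // found: return the index
	}
	return -1;                                // not found
}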

Worst case: each search inspects only half of the previous range, i.e. the remaining range is divided by 2 at every step.

So N/2/2/.../2 = 1 (the process ends when a single value is left: either it is the one we are looking for, or the value is not in the array).

If X is the number of halvings, then 2^X = N, i.e. X = log2(N).

The final time complexity is therefore O(log2N), usually written O(logN).

Example 7: (Factorial time complexity calculation)
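
The code image is missing; the usual recursive factorial for this example looks like this (a sketch):

#include <stddef.h>

// Hypothetical Fac: N!, computed recursively.
// The function calls itself about N times and does O(1) work per call.
long long Fac(size_t N)
{
	if (N == 0)
		return 1;
	return Fac(N - 1) * N;
}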

Note: for a recursive algorithm, time complexity = number of recursive calls * work done per call.

The time complexity is O(N)

Example 8: (Fibonacci time complexity calculation)
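
The code image is missing; the naive recursive Fibonacci that this example analyses is usually written like this (a sketch):

#include <stddef.h>

// Hypothetical Fib: each call above the base case spawns two more calls,
// so the call tree is roughly a binary tree of depth N:
// about 2^0 + 2^1 + ... + 2^(N-2) calls, minus the branches that bottom
// out early. That is still on the order of 2^N.
long long Fib(size_t N)
{
	if (N < 3)
		return 1;
	return Fib(N - 1) + Fib(N - 2);
}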


Note: the recursion tree is not a perfect binary tree: some branches reach the base case early, so a number of calls (the X circled on the original diagram) are missing from the full count 2^0 + 2^1 + ... + 2^(N-2); subtracting them does not change the order of magnitude.

The time complexity is O(2^N)

2. Space complexity

Space complexity is also a mathematical function; it measures the amount of additional storage space an algorithm temporarily occupies while it runs.

Space complexity is not the number of bytes the program occupies, because that is not very meaningful; instead, space complexity is counted in terms of the number of variables. Its calculation rules are basically the same as those for time complexity, and Big O asymptotic notation is used as well.

Note: the stack space a function needs while running (to store parameters, local variables, some register information, and so on) is determined at compile time, so space complexity is mainly determined by the additional space the function requests at run time.

Example 1: (a bubble sort like the BubbleSort sketched in Example 5 of the time-complexity section)

The only additional variables are end and i. i is destroyed when the inner loop finishes and is defined again on the next pass, so the new i reuses the space of the previous one. At any moment there is only one end and one i alive.

Space complexity: O(1)

Example 2:
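
The code image is missing; a typical example with this pair of results (my reconstruction, names assumed) builds a Fibonacci table on the heap:

#include <stdlib.h>

// Hypothetical Fibonacci: returns the first n+1 Fibonacci numbers in a
// malloc'ed array. The extra space requested is n+1 long longs -> O(N);
// the single loop runs about n times -> O(N) time.
long long* Fibonacci(size_t n)
{
	long long* fibArray = (long long*)malloc((n + 1) * sizeof(long long));
	if (fibArray == NULL)
		return NULL;
	fibArray[0] = 0;
	if (n > 0)
		fibArray[1] = 1;
	for (size_t i = 2; i <= n; ++i)
	{
		fibArray[i] = fibArray[i - 1] + fibArray[i - 2];
	}
	return fibArray;   // the caller must free() this array
}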

Space complexity: O(N)

Time complexity: O(N)

Example 3:

The function calls itself about N times (e.g. the recursive Fac sketched in Example 7 above), opening N stack frames, and each stack frame uses a constant amount of space.

Space complexity: O(N)

Example 4: (presumably the recursive Fib from Example 8: although its time is O(2^N), at most N stack frames are alive at the same time)

Space complexity: O(N)

Note: space can be reused rather than accumulated: when a call returns, its stack frame is released and the next call can occupy the same memory. Time, on the other hand, accumulates; once spent, it is gone.
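
A small demonstration of this note (a sketch; printing the same address twice is what one typically observes with common compilers, not something the C standard guarantees):

#include <stdio.h>

// F1 returns before F2 is called, so F1's stack frame is released and F2's
// frame typically occupies the same memory; a and b then print the same address.
void F1(void)
{
	int a = 0;
	printf("%p\n", (void*)&a);
}

void F2(void)
{
	int b = 0;
	printf("%p\n", (void*)&b);
}

int main(void)
{
	F1();
	F2();
	return 0;
}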

The common complexity orders, roughly from best to worst, are: O(1), O(logN), O(N), O(NlogN), O(N^2), O(2^N), O(N!).

Exercise 1: (the missing number) The input is N distinct numbers taken from the range 0..N, so exactly one number of that range is absent; find it. In the code below N = 5.

Idea 2 code (written by myself, for reference only): sum the full range 0..N, subtract the sum of the given numbers, and the difference is the missing number.

#include <stdio.h>

int main(void)
{
	int num[5] = {0};
	int i = 0;
	for (i = 0; i < 5; i++)        // read the 5 given numbers
	{
		scanf("%d", &num[i]);
	}
	int sum = 0;                   // sum of the full range 0..5
	int sum_num = 0;               // sum of the numbers actually given
	for (i = 0; i < 6; i++)
	{
		sum = sum + i;
	}
	for (i = 0; i < 5; i++)
	{
		sum_num = sum_num + num[i];
	}
	printf("Missing %d.\n", sum - sum_num);   // the difference is the missing number
	return 0;
}

Idea 3 main code
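
The Idea 3 code image is not shown. Assuming Idea 3 is the usual XOR solution to the missing-number problem (an assumption on my part), its main would look roughly like this:

#include <stdio.h>

// Hypothetical Idea 3 (assumed XOR approach): XOR all values 0..5 together
// with the 5 inputs; every present value cancels out, leaving the missing one.
int main(void)
{
	int num[5] = {0};
	for (int i = 0; i < 5; i++)
	{
		scanf("%d", &num[i]);
	}
	int x = 0;
	for (int i = 0; i < 6; i++)    // XOR the full range 0..5
	{
		x ^= i;
	}
	for (int i = 0; i < 5; i++)    // XOR the given numbers
	{
		x ^= num[i];
	}
	printf("Missing %d.\n", x);
	return 0;
}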


Origin blog.csdn.net/Chen298/article/details/132512299