[Data structure] Time complexity and space complexity

Contents

1. Concept

       1.1. Algorithm efficiency

       1.2. Time complexity

       1.3. Space complexity

2. Calculation

        2.1. Asymptotic representation of big O

        2.2. Time complexity calculation

                 Examples

        2.3. Space complexity calculation

                 Examples

3. Exercises with complexity requirements


1. Concept

1.1. Algorithm efficiency

How do we measure the quality of an algorithm? Take, for example, the following recursive Fibonacci function:

long long Fib(int N)
{
	if (N < 3)
		return 1;
	return Fib(N - 1) + Fib(N - 2);
}

This recursive implementation of the Fibonacci sequence is very concise, but does concise mean good? How do we measure whether it is good or bad? Time complexity, studied below, will answer that question.

Algorithm efficiency is analyzed in two ways: time efficiency and space efficiency. Time efficiency is called time complexity, and space efficiency is called space complexity. Time complexity measures how fast an algorithm runs, while space complexity measures how much extra space it requires. In the early days of computing, storage capacity was very small, so space complexity mattered a great deal. With the rapid development of the computer industry, storage capacity has grown enormously, so we usually no longer need to pay special attention to the space complexity of an algorithm.

1.2. Time complexity

The time an algorithm takes is proportional to the number of times its statements execute; the number of executions of the basic operations in an algorithm is its time complexity.

1.3. Space complexity

Space complexity measures the amount of storage space an algorithm temporarily occupies during execution. It is not how many bytes the program occupies, since that is not very meaningful; rather, it counts the number of variables. The rules for calculating space complexity are much like those for time complexity, and it also uses big-O asymptotic notation.

2. Calculation

2.1. Asymptotic representation of big O

First, look at a piece of code:

// How many times does Func1 execute its basic operation?
void Func1(int N)
{
	int count = 0;
	for (int i = 0; i < N; ++i)
	{
		for (int j = 0; j < N; ++j)
		{
			++count;
		}
	}
	for (int k = 0; k < 2 * N; ++k)
	{
		++count;
	}
	int M = 10;
	while (M--)
	{
		++count;
	}
	printf("%d\n", count);
}

The number of executions of the basic operations in an algorithm is its time complexity. Obviously, the exact count of basic operations performed by Func1 is: F(N) = N*N + 2*N + 10

For example, F(10)=130, F(100)=10210, F(1000)=1002010

You might expect the time complexity to be exactly this formula, but it is not. Time complexity is an estimate that focuses on the term with the greatest impact on the expression. As N grows, the N^2 term dominates the result.

In practice, when we calculate time complexity we do not necessarily need the exact number of executions, only the approximate order, so we use big-O asymptotic notation. The time complexity of the code above is therefore O(N^2).

Big-O notation: a mathematical notation used to describe the asymptotic behavior of a function.

  • Rules for deriving big-O:
  1. Replace all additive constants in the run-time function with the constant 1.
  2. In the modified run-time function, keep only the highest-order term.
  3. If the highest-order term exists and its coefficient is not 1, remove that coefficient. The result is the big-O order.

From the above, we can see that big-O asymptotic notation removes the terms that have little effect on the result and expresses the number of executions succinctly and clearly. In addition, the time complexity of some algorithms has best, average, and worst cases:

  1. Worst case: the maximum number of operations for any input of size N (upper bound)
  2. Average case: the expected number of operations for any input of size N
  3. Best case: the minimum number of operations for any input of size N (lower bound)
  • For example, searching for a value x in an array of length N:
  1. Best case: found in 1 comparison
  2. Worst case: found in N comparisons
  3. Average case: found in about N/2 comparisons

In practice we generally care about the worst case, so the time complexity of searching for a value in an array is O(N).

Note: time complexity of recursive algorithms

  1. If each call is O(1), the complexity is determined by the number of recursive calls.
  2. If each call is not O(1), the complexity is the work accumulated over all recursive calls.

2.2. Time complexity calculation

Examples:

  • Example 1:
// What is the time complexity of Func2?
void Func2(int N)
{
	int count = 0;
	for (int k = 0; k < 2 * N; ++k)
	{
		++count;
	}
	int M = 10;
	while (M--)
	{
		++count;
	}
	printf("%d\n", count);
}
  • Answer: O(N)

Analysis : the exact count here is 2*N + 10, and the dominant term is N. Some might think it is 2*N, but as N keeps growing the factor 2 has little impact on the order, and rule 3 above applies: if the highest-order term exists and its coefficient is not 1, the coefficient is removed. So the 2 is dropped and the time complexity is O(N).

  • Example 2:
// What is the time complexity of Func3?
void Func3(int N, int M)
{
	int count = 0;
	for (int k = 0; k < M; ++k)
	{
		++count;
	}
	for (int k = 0; k < N; ++k)
	{
		++count;
	}
	printf("%d\n", count);
}
  • Answer: O(M+N)

Analysis : because M and N are both unknowns, both must be kept. If it is known that M is much larger than N, the time complexity is O(M); if M and N are about the same size, it can be written as O(M) or O(N).

  • Example 3:
// What is the time complexity of Func4?
void Func4(int N)
{
	int count = 0;
	for (int k = 0; k < 100; ++k)
	{
		++count;
	}
	printf("%d\n", count);
}
  • Answer : O(1)

Explanation : the exact count here is 100, but by rule 1 of big-O notation, all additive constants are replaced by the constant 1, so the time complexity is O(1).

  • Example 4:
// What is the time complexity of strchr?
const char* strchr(const char* str, char character)
{
	while (*str != '\0')
	{
		if (*str == character)
			return str;
		++str;
	}
	return NULL;
}
  • Answer : O(N)

Analysis : this one depends on the input. Suppose the string is abcdefghijklmn. If the target character is g, about N/2 iterations run; if it is a, just 1; if it is n, then N. So the cases must be distinguished: best case O(1), average case N/2 (still O(N)), worst case O(N). In practice we generally care about the worst case, so the time complexity of this function is O(N).

  • Example 5:
// What is the time complexity of BubbleSort?
void BubbleSort(int* a, int n)
{
	assert(a);
	for (size_t end = n; end > 0; --end)
	{
		int exchange = 0;
		for (size_t i = 1; i < end; ++i)
		{
			if (a[i - 1] > a[i])
			{
				Swap(&a[i - 1], &a[i]);
				exchange = 1;
			}
		}
		if (exchange == 0)
			break;
	}
}
  • Answer : O(N^2)

Analysis : this code is bubble sort. The first pass makes about N comparisons, the second N-1, the third N-2, ..., and the last 1. The counts form an arithmetic sequence whose sum is N*(N+1)/2. That is the exact figure; keeping only the term with the greatest impact, N^2, the time complexity is O(N^2).

  • Example 6:
// What is the time complexity of BinarySearch?
int BinarySearch(int* a, int n, int x)
{
	assert(a);
	int begin = 0;
	int end = n;
	while (begin < end)
	{
		int mid = begin + ((end - begin) >> 1);
		if (a[mid] < x)
			begin = mid + 1;
		else if (a[mid] > x)
			end = mid;
		else
			return mid;
	}
	return -1;
}
  • Answer : O(logN)

Analysis : this is clearly binary search. Suppose the array length is N and the target is found after X halvings; then 2^X = N, so X = log2(N). In complexity analysis the base-2 logarithm is usually abbreviated as logN, because the base is awkward to write in many places, so the time complexity of this function is O(logN).

  • Example 7:
// What is the time complexity of the recursive factorial Factorial?
long long Factorial(size_t N)
{
	return N < 2 ? N : Factorial(N - 1) * N;
}
  • Answer : O(N)

Analysis : each call does O(1) work, and Factorial(N) calls Factorial(N-1), which calls Factorial(N-2), and so on down to the base case: about N calls in total. For example, if N is 10 there are 10 calls, so the time complexity is O(N).

  • Example 8:
long long Fib(int N)
{
	if (N < 3)
		return 1;
	return Fib(N - 1) + Fib(N - 2);
}

This is the code shown at the beginning. Its style is very simple: the Fibonacci computation takes just a few lines. But is such seemingly simple code really "good"? Let's first calculate its time complexity:

  • Answer : O(2^N)

Analysis:

 Picture the call tree: the first level has 1 call, the second level 2^1 calls, the third level 2^2, and so on; the level sizes form a geometric sequence. Summing them gives on the order of 2^N calls, so by the big-O rules the time complexity of this recursive Fibonacci is O(2^N).

Since 2^N grows extremely fast, the recursive version is only usable for small inputs: when N = 10 the answer appears instantly in the VS environment, but when N is a bit larger, say 50, it takes a long time to finish. This shows that concise code is not necessarily the best code.

 Common time complexities: O(1), O(logN), O(N), O(N^2), O(2^N)

  • Complexity comparison (slowest-growing first): O(1) < O(logN) < O(N) < O(N^2) < O(2^N)

2.3. Space complexity calculation

  • Space complexity is also a mathematical expression; it measures the amount of storage space an algorithm temporarily occupies while running. It is not how many bytes the program occupies, since that is not very meaningful; instead it counts the number of variables.
  • The rules for calculating space complexity are much like those for time complexity, and it also uses big-O asymptotic notation.
  • Note: the stack space a function needs to run (parameters, local variables, some register information, etc.) is determined at compile time, so space complexity is mainly determined by the extra space the function explicitly requests at run time.

Examples

  • Example 1:
// What is the space complexity of BubbleSort?
void BubbleSort(int* a, int n)
{
	assert(a);
	for (size_t end = n; end > 0; --end)
	{
		int exchange = 0;
		for (size_t i = 1; i < end; ++i)
		{
			if (a[i - 1] > a[i])
			{
				Swap(&a[i - 1], &a[i]);
				exchange = 1;
			}
		}
		if (exchange == 0)
			break;
	}
}
  • Answer : O(1)

Analysis: only three variables are created here: end, exchange, and i. Since that is a constant number of variables, the space complexity is O(1); int* a and int n are parameters and do not change that. Some might think that, because of the for loop, exchange is created n times; in fact exchange is created each time the loop body is entered and destroyed when that iteration ends, so every iteration reuses the same space.

So when does O(N) appear?

  • 1. malloc an array
int *a = (int*)malloc(sizeof(int)*numsSize); //O(N)

This requires that numsSize be unknown; if it is a specific constant, the space complexity is still O(1).

  • 2. Variable length array
int a[numsSize]; // numsSize unknown, O(N)
  • Example 2:
// What is the space complexity of Fibonacci?
// Returns the first n terms of the Fibonacci sequence
long long* Fibonacci(size_t n)
{
	if (n == 0)
		return NULL;
	long long* fibArray = (long long*)malloc((n + 1) * sizeof(long long));
	fibArray[0] = 0;
	fibArray[1] = 1;
	for (int i = 2; i <= n; ++i)
	{
		fibArray[i] = fibArray[i - 1] + fibArray[i - 2];
	}
	return fibArray;
}
  • Answer: O(N)

Analysis: malloc allocates an array of n + 1 long longs here. There is no need to count the few extra variables created afterwards, because space complexity is an estimate, so it is simply O(N).

  • Example three:
// What is the space complexity of the recursive factorial Fac?
long long Fac(size_t N)
{
	if (N == 0)
		return 1;
	return Fac(N - 1) * N;
}
  • Answer : O(N)

Analysis: each recursive call creates a stack frame, and N frames are created in total; each frame holds a constant number of variables, so the space complexity is O(N).

  • Example 4:
// What is the space complexity of the recursive Fib?
long long Fib(size_t N)
{
	if (N < 3)
		return 1;
	return Fib(N - 1) + Fib(N - 2);
}
  • Answer : O(N)

Analysis : time, once spent, is gone forever and accumulates, but space can be reused after it is reclaimed. The recursion first goes down the Fib(N-1) chain, creating at most about N-1 stack frames at once. When, say, Fib(2) returns, its stack frame is destroyed, and the subsequent call to Fib(1) reuses that same space; likewise Fib(N-2) reuses the space that the Fib(N-1) subtree has already released. So although the number of calls is exponential, the peak stack depth is only about N-1 frames, and the space complexity is O(N).

3. Exercises with complexity requirements

  • Question 1: missing number

Link: https://leetcode-cn.com/problems/missing-number-lcci/

 This problem states an explicit requirement: find a way to complete it in O(N) time. Two effective, workable methods are given below.

Method 1: summation

  • Idea:

A number is missing from a sequence of consecutive integers, so we sum the integers that should be present, subtract the sum of all the elements actually in the array, and the difference is the missing number.

The code is as follows:

int missingNumber(int* nums, int numsSize){
    int sum1=0;
    int sum2=0;
    for(int i=0;i<numsSize+1;i++)
    {
        sum1+=i;
    }
    for(int i=0;i<numsSize;i++)
    {
        sum2+=nums[i];
    }
    return sum1-sum2;
}

Method 2: XOR

  • Idea:

Suppose there are 10 numbers in total; then nums should contain [0, 9], but one number is missing. We know the XOR rule (same bits give 0, different bits give 1) and two important consequences: 1. the XOR of two identical numbers is 0; 2. the XOR of 0 with any number is that number. So we can XOR together all elements of the array, then XOR in every value from 0 to n; every value that is present cancels out in pairs, leaving only the missing number.


 The code is as follows:

int missingNumber(int* nums, int numsSize){
    int n=0;
    for(int i=0;i<numsSize;i++)
    {
        n^=nums[i];
    }
    for(int i=0;i<numsSize+1;i++)
    {
        n^=i;
    }
    return n;
}

Note : the second for loop runs numsSize + 1 times, because one number is missing: the full range 0..numsSize has one more value than the array holds.

  • Question 2: rotate array

Link: https://leetcode-cn.com/problems/rotate-array/

 The advanced part of this problem explicitly asks for an algorithm with O(1) space complexity. Let's begin.

Method 1: rotate right K times, moving one element at a time

  • Idea:

First, define a variable tmp to store the last element of the array. Second, shift the first N-1 values back by one position. Finally, put the value of tmp in the first position. Repeat this K times.

The time complexity of this method is O(N*K) and the space complexity is O(1). The space complexity meets the requirement, but the time complexity is risky: when K % N = N - 1 it degrades to O(N^2). So let's see whether there is a better way:

Method 2: Extra open array

  • Idea:

 Create an additional new array, put the last K elements at the front of it, and then copy the first N-K elements of the original array behind them. However, this method's time complexity is O(N) and its space complexity is also O(N), which does not meet the requirement, so we change again:

Method 3: three reversals

  • Idea:

The first pass reverses the first N-K elements, the second pass reverses the last K elements, and finally the whole array is reversed.

This method is very clever: the time complexity is O(N) and the space complexity is O(1), which meets the requirement.

The code is as follows:

void reverse(int*nums,int left,int right)
{
    while(left<right)
    {
        int tmp=nums[left];
        nums[left]=nums[right];
        nums[right]=tmp;
        left++;
        right--;
    }
}
void rotate(int* nums, int numsSize, int k){
    k%=numsSize;
    reverse(nums,0,numsSize-k-1);
    reverse(nums,numsSize-k,numsSize-1);
    reverse(nums,0,numsSize-1);
}

 Note: when k equals numsSize (for example k = 7 with an array of length 7), the rotation completes a full cycle and the array returns to its original state. So the effective number of positions to rotate is k %= numsSize;


Origin blog.csdn.net/bit_zyx/article/details/123266353