How do we evaluate the time complexity and space complexity of an algorithm?

Table of contents

1. Time Complexity (Big O)

Big O asymptotic notation
Example 1
Example 2
Example 3: Time complexity of bubble sort
Example 4: Time complexity of binary search
A note on writing logarithms
Example 5
Example 6
Using Time Complexity to Solve Programming Problems
Idea 1
Idea 2
Source code
Idea 3
Review of bitwise operators

2. Space Complexity in Detail

Concept
Example 1: What is the space complexity of bubble sort?
Example 2: One-way recursion
Example 3
Analysis
Example 4 (a harder one: two-way recursion)
Using Space Complexity to Solve Programming Problems
Idea 1
Code
Idea 2
Code
Idea 3
Code


1. Time Complexity (Big O)

First of all, we cannot judge an algorithm's time complexity by how long a machine takes to run it, because even the same algorithm runs at different speeds on different machines (individual differences between machines). Instead we use

Big O notation: the asymptotic complexity of the algorithm, T(n) = O(f(n)).

It is the number of executions!

To interpret this formula: f(n) is the number of times the basic statement is executed, O expresses the proportional relationship, and T(n) is the asymptotic complexity of the algorithm (that is, how the running time grows as the problem size grows).

In other words, finding the mathematical expression relating a basic statement's execution count to the problem size N is what it means to compute an algorithm's time complexity.

Big O asymptotic notation:

In practice, when we calculate time complexity we do not necessarily need the exact number of executions, only the approximate order of magnitude.

Rules for big O asymptotic notation:

  1. Replace all additive constants in the running time with the constant 1.
  2. In the resulting execution-count function, keep only the highest-order term.
  3. If the highest-order term exists and its coefficient is not 1, drop the coefficient so that it becomes 1; the result is the big O asymptotic expression.
  4. Use the worst case when computing time complexity.

Example 1:

We can calculate the algorithm's time complexity by counting how many times the ++count statement is executed, and it is not difficult to write that count as a mathematical function of the problem size N.
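The original code appears only as an image; below is a minimal sketch of the kind of code this example typically uses (an assumption: Func1 is a hypothetical name, not necessarily the author's exact code):

#include <stdio.h>

void Func1(int N)
{
	int count = 0;
	for (int i = 0; i < N; ++i)        // executes N * N times
	{
		for (int j = 0; j < N; ++j)
			++count;
	}
	for (int k = 0; k < 2 * N; ++k)    // executes 2 * N times
		++count;
	int M = 10;
	while (M--)                        // executes 10 times
		++count;
	printf("%d\n", count);
}

For this sketch the execution count is F(N) = N^2 + 2N + 10; applying the rules above keeps only the highest-order term, so the time complexity is O(N^2).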

Example 2:

strchr is a library function that finds the position of a particular character in a string; it is implemented by looping through the string.
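Since the original implementation is shown only as an image, here is a sketch of a strchr-style linear scan (an assumption: my_strchr is a hypothetical name, and the real library source differs):

const char* my_strchr(const char* str, int ch)
{
	while (*str != '\0')
	{
		if (*str == ch)
			return str;    // found: return the position of the character
		++str;
	}
	return NULL;               // reached the end without finding it
}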

A scan like this has three possible cases for its running time:

1. Best case: we are lucky and find the character after one comparison.

2. Worst case: we are unlucky and traverse all the way to the end before finding it.

3. Average case: on average we find it around the middle of the string.

Which of these counts as the time complexity? The answer is the worst case! That is, O(N).

Below are some more complex examples of computing time complexity.

For more complex code we cannot compute the time complexity just by staring at the statements; we have to pay attention to the idea behind the code, its underlying logic!

Example 3: Time complexity of bubble sort
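The original code is an image; here is a minimal sketch of the bubble sort being analyzed (assuming ascending order, with the early-exit flag that the best-case analysis below relies on):

void BubbleSort(int* a, int n)
{
	for (int end = n; end > 1; --end)      // each pass bubbles one element into place
	{
		int exchange = 0;              // records whether this pass swapped anything
		for (int i = 1; i < end; ++i)
		{
			if (a[i - 1] > a[i])
			{
				int tmp = a[i - 1];
				a[i - 1] = a[i];
				a[i] = tmp;
				exchange = 1;
			}
		}
		if (exchange == 0)             // no swaps: already sorted, stop early
			break;
	}
}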

 

First consider the worst case: the data is sorted in exactly the opposite of the required order, so every element must move. The first pass makes n-1 comparisons, the second n-2, the third n-3, and so on down to 1. This is the sum of an arithmetic sequence with common difference 1, about N(N-1)/2, whose highest-order term is N^2, so the time complexity is O(N^2).

Is the best case O(1)?

No. Even if the data is already sorted, we still have to make one full pass to confirm that it is sorted, and that check takes time. So the best case is O(N).

Example 4: Time complexity of binary search

The worst case for binary search is when the value we are looking for sits at a boundary, so the search interval keeps shrinking until only one value remains. How many lookups does the worst case take? As many times as the interval can be divided by 2.

Assume the interval holds N values. Halving repeatedly (/2 /2 /2 ...) until only one value remains takes x halvings, where 2^x = N, so x = log2(N); the time complexity is O(logN).
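A minimal binary search sketch over a sorted ascending array (the original code is an image; this is an assumption about its shape):

int BinarySearch(const int* a, int n, int x)
{
	int left = 0;
	int right = n - 1;
	while (left <= right)
	{
		int mid = left + (right - left) / 2;   // avoids the overflow risk of (left + right) / 2
		if (a[mid] < x)
			left = mid + 1;    // discard the left half
		else if (a[mid] > x)
			right = mid - 1;   // discard the right half
		else
			return mid;        // found
	}
	return -1;                         // not found
}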

A note on writing logarithms:

Logarithms are awkward to type in plain text and not every editor supports math formulas, so the time complexity is abbreviated as logN. Only the base-2 logarithm of N may be abbreviated logN; logarithms with any other base must be written out in full.

The gap between brute-force search O(N) and binary search O(logN) is enormous: for N = 1,000,000, a linear scan may need a million comparisons while binary search needs about 20.

Example 5:

Compute the time complexity of the factorial recursion.
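The original code is an image; a minimal sketch of the usual factorial recursion (an assumption about its shape):

long long Fac(size_t N)
{
	if (N == 0)
		return 1;
	return Fac(N - 1) * N;   // one call for each value from N down to 0
}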

Note that the time complexity of a recursive algorithm depends mainly on the number of function calls, and then on the complexity inside each call.

The time complexity of a recursive algorithm is the sum over all of its calls.

The recursive function above is called N+1 times (Fac(N), Fac(N-1), ..., Fac(0)), and the body of each call is O(1), so the final time complexity is O(N). It is the equivalent of adding up N+1 ones.

Example 6:
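Again the original code is an image; a minimal sketch of the usual two-way Fibonacci recursion (an assumption about its shape):

long long Fib(size_t N)
{
	if (N < 3)
		return 1;
	return Fib(N - 1) + Fib(N - 2);   // each call spawns two further calls
}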

The difference from the previous code is that this is two-way recursion, while the factorial was one-way.

The calls of the two-way recursion form a rough binary tree: level k of the tree contains up to 2^k calls, so the count grows on the order of 2^k per level. Summing this geometric sequence (1 + 2 + 4 + ...) gives an order of magnitude of 2^N, so the time complexity is O(2^N).

The call tree is missing a piece in its lower right corner, because the shorter Fib(N-2) branches bottom out earlier, but that does not affect the order of magnitude.

An algorithm with 2^N time complexity is hopelessly slow: a CPU can handle on the order of billions of operations per second, and 2^N exceeds that almost immediately.

So using plain recursion to solve the Fibonacci sequence is only theoretically feasible.

Using Time Complexity to Solve Programming Problems

The problem (the classic "missing number", inferred from the code below): an array contains N of the numbers 0..N, each at most once; find the number that is missing.

Idea 1:

Sort + traverse: after sorting, scan the array; at the first place where the next element is not equal to the current element + 1, the missing number is the current element + 1.

Time complexity: O(N*logN), on the premise of sorting with qsort.
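A sketch of Idea 1 (no code was shown in the original; cmp_int is a helper introduced here). The scan checks arr[i] != i, which for values drawn from 0..N is equivalent to the "next is not current + 1" test and also handles the boundaries:

#include <stdio.h>
#include <stdlib.h>

int cmp_int(const void* a, const void* b)
{
	return *(const int*)a - *(const int*)b;
}

int main()
{
	int arr[] = { 0, 1, 3 };                  // 3 of the numbers 0..3; 2 is missing
	int sz = sizeof(arr) / sizeof(arr[0]);
	qsort(arr, sz, sizeof(arr[0]), cmp_int);  // O(N*logN)
	int missing = sz;                         // if every slot matches, the largest number is missing
	for (int i = 0; i < sz; i++)              // O(N) scan
	{
		if (arr[i] != i)                  // first mismatch: i is the number that disappeared
		{
			missing = i;
			break;
		}
	}
	printf("%d\n", missing);
	return 0;
}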

Idea 2:

Sum 0..N with the arithmetic-series formula, then subtract every value in the array from that sum; what remains is the missing number.

Time complexity: O(N)

Source code:

#include <stdio.h>

int main()
{
	int arr[] = { 0,1,3 };
	int sum = 3 * (3 + 1) / 2;   // sum of 0..3 computed with the arithmetic-series formula
	for (int i = 0; i < 3; i++)
	{
		sum -= arr[i];           // subtract each value that is present
	}
	printf("%d\n", sum);             // what remains is the missing number
	return 0;
}

Idea 3:

The "lone number" trick (nicknamed the "single dog" idea): XOR together all the numbers that should be present and all the numbers actually in the array. Every number that is present appears twice and cancels to 0, so only the missing number survives.

#include <stdio.h>

int main()
{
	int arr[] = { 1,3,4 };       // 3 of the numbers 1..4; 2 is missing
	int ret = 0;
	for (int i = 1; i <= 4; i++)
	{
		ret ^= i;            // XOR in every number that should be present
	}
	for (int i = 0; i < 3; i++)
	{
		ret ^= arr[i];       // XOR in the numbers actually present; pairs cancel
	}
	printf("%d\n", ret);         // only the missing number survives
	return 0;
}
Review of bitwise operators

^ (XOR): for each bit position, the result bit is 0 if the corresponding bits are the same and 1 if they differ. Note that XORing two identical numbers gives 0, XORing any number with 0 gives the number itself, and XOR is commutative.

& (bitwise AND): a result bit is 0 as long as either corresponding bit is 0.

| (bitwise OR): a result bit is 1 as long as either corresponding bit is 1.
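A quick demonstration of these properties (a sketch added for illustration, not from the original):

#include <stdio.h>

int main()
{
	printf("%d\n", 5 ^ 5);      // 0: identical numbers cancel
	printf("%d\n", 5 ^ 0);      // 5: XOR with 0 is the identity
	printf("%d\n", 1 ^ 2 ^ 1);  // 2: commutativity lets the pair of 1s cancel
	printf("%d\n", 6 & 3);      // 2: 110 & 011 = 010
	printf("%d\n", 6 | 3);      // 7: 110 | 011 = 111
	return 0;
}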

2. Space Complexity in Detail

Concept:

Space complexity is also a mathematical expression: it measures the additional temporary storage an algorithm occupies while it runs.

Space complexity is not the number of bytes the program occupies; it counts the number of variables, and it also uses big O asymptotic notation.

Note: the stack space a function needs while running (parameters, local variables, some register state, and so on) is determined at compile time, so space complexity is determined mainly by the extra space the function requests at run time.

Example 1: What is the space complexity of bubble sort?

First of all, the array passed in as a parameter is not counted toward the space complexity; only an extra array created to carry out the sorting would be counted.

Looking at the code (see the bubble sort sketch in Example 3 of the time complexity section), only end, exchange, and i are extra variables we created, 3 in total, a constant number. So the space complexity is O(1). Note that O(1) does not mean the space is 1; it means a constant number of variables.

Example 2: One-way recursion

It is not difficult to see that, to solve the problem, the code allocates an additional array whose size grows with N, so the final space complexity is O(N).
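The original code is an image. Given that the text says an extra array is created, one common version of this example fills a dynamically allocated Fibonacci array (a sketch under that assumption; the actual code may differ):

#include <stdlib.h>

long long* Fibonacci(size_t n)
{
	if (n == 0)
		return NULL;
	long long* fibArray = (long long*)malloc((n + 1) * sizeof(long long));   // N+1 extra elements: O(N) space
	if (fibArray == NULL)
		return NULL;
	fibArray[0] = 0;
	fibArray[1] = 1;
	for (size_t i = 2; i <= n; ++i)
	{
		fibArray[i] = fibArray[i - 1] + fibArray[i - 2];
	}
	return fibArray;   // the caller is responsible for free()
}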

Example 3:

 

Analysis:

Suppose the recursion goes N levels deep. Each recursive call creates a stack frame, and each stack frame takes a constant amount of space. A frame is destroyed when its call returns, but space complexity measures the peak space occupied, so we count the moment the recursion is at its deepest, when all N frames exist at once (as in the factorial recursion of Example 5 above). So the final space complexity is O(N).

Example 4 (a harder one: two-way recursion)

We must first be clear on this principle: time is cumulative and, once spent, is gone; space can be reused.

The rules of function stack frame creation and destruction

First understand this fact: when a function is called and returns, the stack frame space it created is handed back, and when another function is called next, the stack frame the second function creates occupies exactly the same space the first one used! The example below demonstrates that the addresses of a and b come out the same.
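A sketch of that address demo (an assumption about the original image; note that the C standard does not guarantee this reuse, it is just the typical behavior):

#include <stdio.h>

void f1()
{
	int a = 0;
	printf("address of a: %p\n", (void*)&a);
}

void f2()
{
	int b = 0;
	printf("address of b: %p\n", (void*)&b);   // typically the same address: f1's frame was released and reused
}

int main()
{
	f1();
	f2();
	return 0;
}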

With that foundation, we also need to know the call order of the two-way recursion (the original shows it as a figure): one branch is evaluated completely, and its frames destroyed, before the other branch begins.

As we recurse down one branch, the stack frames created along it are destroyed on the way back up, and the next branch reuses that same space. So at most N extra frames exist at any moment, and the space complexity is O(N).

Using Space Complexity to Solve Programming Problems

The problem (the classic "rotate array", inferred from the code below): rotate an array of N elements right by k positions.

Idea 1:

Save the last element, shift the preceding elements right by one position, and put the saved element at the front; that counts as one right rotation. Loop k times in total.

Space complexity O(1), time complexity O(N^2): it cannot be written O(K*N), because K is a variable whose value may be good or bad, and complexity is taken directly as the worst case, where K is on the order of N.

Code:

#include <stdio.h>

int main()
{
	int arr[] = { 1,2,3,4,5,6,7 };
	int sz = sizeof(arr) / sizeof(arr[0]);
	int k = 0;
	scanf("%d", &k);
	k %= sz;                                 // rotating by sz is a no-op, so reduce k first
	for (int i = 0; i < k; i++)              // one right rotation, k times
	{
		int tmp = arr[sz - 1];           // save the last element
		for (int j = 0; j < sz - 1; j++)
		{
			arr[sz - 1 - j] = arr[sz - 2 - j];   // shift everything right by one
		}
		arr[0] = tmp;                    // the saved element wraps around to the front
	}
	for (int i = 0; i < sz; i++)
	{
		printf("%d ", arr[i]);
	}
	return 0;
}

Idea 2:

Trade space for time: open up a second array, copy the data into it already rotated, then copy the whole thing back into the original array.

The time complexity is O(N); because we opened up an additional array, the space complexity is O(N).

Code:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main()
{
	int arr[] = { 1,2,3,4,5,6,7 };
	int sz = sizeof(arr) / sizeof(arr[0]);
	int* tmp = (int*)malloc(sizeof(arr));    // an extra array of the same size: O(N) space
	if (tmp == NULL)
	{
		perror("malloc");
		exit(-1);
	}
	int k = 0;
	scanf("%d", &k);
	// note: memcpy's last parameter is in bytes!
	memcpy(tmp, arr + sz - k % sz, k % sz * sizeof(arr[0]));     // the last k elements go to the front
	memcpy(tmp + k % sz, arr, (sz - k % sz) * sizeof(arr[0]));   // the first sz-k elements follow them
	memcpy(arr, tmp, sz * sizeof(arr[0]));                       // copy the rotated data back
	for (int i = 0; i < sz; i++)
	{
		printf("%d ", arr[i]);
	}
	free(tmp);
	tmp = NULL;
	return 0;
}

Idea 3:

The three-step reversal method: reverse the first sz-k elements, reverse the last k elements, then reverse the whole array.

The space complexity is O(1) and the time complexity is O(N). This is the optimal solution!

It is not easy to think of on the spot; tricks like this have to be accumulated!

Code:

#include <stdio.h>

void reverse(int* left, int* right)
{
	while (left < right)
	{
		// note: swapping the pointers themselves would not change the array;
		// we must swap the values they point to
		int tmp = *left;
		*left = *right;
		*right = tmp;
		left++;
		right--;
	}
}

int main()
{
	int arr[] = { 1,2,3,4,5,6,7 };
	int sz = sizeof(arr) / sizeof(arr[0]);
	int k = 0;
	scanf("%d", &k);
	k %= sz;                               // guard against k >= sz
	reverse(arr, arr + sz - k - 1);        // step 1: reverse the first sz-k elements
	reverse(arr + sz - k, arr + sz - 1);   // step 2: reverse the last k elements
	reverse(arr, arr + sz - 1);            // step 3: reverse the whole array
	for (int i = 0; i < sz; i++)
	{
		printf("%d ", arr[i]);
	}
	return 0;
}

Origin blog.csdn.net/hanwangyyds/article/details/131918003