Data Structure Savior [Time Complexity, Space Complexity, Part 1]

Contents

Data Structure Preface

1. The complexity of the algorithm

2. Time complexity

2.1 The concept of time complexity

2.2 Asymptotic Notation for Big O

2.3 Best, average, and worst cases of time complexity

2.4 Common time complexity calculation examples

3. Space complexity

Note: time is accumulated (gone forever), space is not accumulated (reusable)

4. Common time complexities and complexity OJ exercises


 

Data Structure Preface

What is a data structure?

A data structure is a way for a computer to store and organize data, and refers to a collection of data elements that have one or more specific relationships with each other.

What is an algorithm?

Algorithm: a well-defined computational process that takes one value or a set of values as input and produces one value or a set of values as output. Simply put, an algorithm is a series of computational steps that transform input data into output results.


1. The complexity of the algorithm

After an algorithm is written into an executable program, running it consumes both time resources and space (memory) resources. Therefore, the quality of an algorithm is generally measured along two dimensions, time and space, namely time complexity and space complexity.

Time complexity mainly measures how fast an algorithm runs, while space complexity mainly measures the extra space an algorithm requires. In the early days of computing, storage capacity was very small, so space complexity mattered a great deal. After the rapid development of the computer industry, storage capacity has grown enormously, so we usually no longer need to pay special attention to the space complexity of an algorithm.


2. Time complexity

2.1 The concept of time complexity

The time complexity of an algorithm is a function that quantitatively describes its running time. In theory, the exact time an algorithm takes cannot be calculated in advance; you only know it once you run the program on a machine. However, the time an algorithm spends is proportional to the number of times its statements execute, so the number of executions of the algorithm's basic operations is taken as its time complexity.

Question: how many times in total is the ++count statement in Func1 executed?

void Func1(int N)
{
    int count = 0;
    // N * N executions
    for (int i = 0; i < N; ++i)
    {
        for (int j = 0; j < N; ++j)
        {
            ++count;
        }
    }
    // 2 * N executions
    for (int k = 0; k < 2 * N; ++k)
    {
        ++count;
    }
    // 10 executions
    int M = 10;
    while (M--)
    {
        ++count;
    }
    printf("%d\n", count);
}
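As a sanity check, the closed-form count can be verified against a counting variant of Func1 (a small sketch; `func1_count` is a hypothetical helper that returns the count instead of printing it):

```c
#include <assert.h>

// Variant of Func1 that returns the number of ++count executions
// instead of printing it.
int func1_count(int N)
{
    int count = 0;
    for (int i = 0; i < N; ++i)       // N * N iterations
        for (int j = 0; j < N; ++j)
            ++count;
    for (int k = 0; k < 2 * N; ++k)   // 2 * N iterations
        ++count;
    int M = 10;                       // 10 iterations
    while (M--)
        ++count;
    return count;                     // N*N + 2*N + 10
}
```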

Finding the mathematical relationship between the number of executions of a basic statement and the problem size N is what it means to calculate the time complexity of the algorithm.

                                

F(N) = N^2 + 2*N + 10

N=10      F(N) = 130
N=100     F(N) = 10,210
N=1000    F(N) = 1,002,010
N=10000   F(N) = 100,020,010

We can see that as N grows, the 2*N + 10 part has less and less effect on the result; N^2 accounts for the bulk of it. If we used the exact F(N) to represent time complexity, we would have to carefully derive the precise function for every algorithm, which is tedious. In practice, when we calculate time complexity we do not need the exact number of executions, only an approximate order of magnitude, so we use big O asymptotic notation.


2.2 Asymptotic Notation for Big O

Big O notation: a mathematical notation used to describe the asymptotic behavior of a function.

Rules for deriving the big O order:

1. Replace all additive constants in the running time with the constant 1.

2. In the resulting function, keep only the highest-order term.

3. If the highest-order term exists and its coefficient is not 1, drop the coefficient. The result is the big O order.

After applying big O asymptotic notation, the time complexity of Func1 is O(N^2):

N=10      F(N) = 100
N=100     F(N) = 10,000
N=1000    F(N) = 1,000,000
N=10000   F(N) = 100,000,000

The terms that have little effect on the result are removed, and the growth in the number of executions is shown concisely and clearly.

Calculate the time complexity of Func3?

void Func3(int N, int M)
{
    int count = 0;
    for (int k = 0; k < M; ++k)
    {
        ++count;
    }
    for (int k = 0; k < N; ++k)
    {
        ++count;
    }
    printf("%d\n", count);
}

Answer: O(M+N). Normally N is used as the variable, but here the relative sizes of M and N are unknown, so both must be kept.

If it is stated that N is much larger than M, the complexity is O(N); if M is much larger than N, it is O(M); if M and N are of the same order, either O(N) or O(M) works.


2.3 Best, average, and worst cases of time complexity

Worst case: the maximum number of operations for any input of size N (upper bound)

Average case: the expected number of operations for any input of size N

Best case: the minimum number of operations for any input of size N (lower bound)

Example: searching for a value x in an array of length N.
Best case: 1 comparison. Worst case: N comparisons. Average case: N/2 comparisons.

In practice we generally care about the worst-case behavior of an algorithm, so the time complexity of searching an array for a value is O(N).
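The search described above can be sketched as a minimal linear search (an illustrative example; `linear_search` is not a function from the text):

```c
#include <assert.h>

// Minimal linear search: returns the index of x in a[0..n-1], or -1.
// Best case: x is a[0] (1 comparison). Worst case: x is last or absent
// (N comparisons). Taking the worst case, the complexity is O(N).
int linear_search(const int* a, int n, int x)
{
    for (int i = 0; i < n; ++i)
        if (a[i] == x)
            return i;
    return -1;
}
```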


2.4 Common time complexity calculation examples

Calculate the time complexity of Func2?   

void Func2(int N)
{
    int count = 0;
    for (int k = 0; k < 2 * N; ++k)
    {
        ++count;
    }
    int M = 10;
    while (M--)
    {
        ++count;
    }
    printf("%d\n", count);
}

Answer: O(N)

By the big O rules (keep only the highest-order term; drop its coefficient if it is not 1), the exact count 2*N + 10 reduces to O(N). As N tends to infinity, only the order of magnitude matters (someone with 100 million and someone with 200 million are both billionaires): the +10 has little effect, and the coefficient 2 is dropped.

Calculate the time complexity of Func4?

void Func4(int N)
{
    int count = 0;
    for (int k = 0; k < 100; ++k)
    {
        ++count;
    }
    printf("%d\n", count);
}

Answer: O(1)

Any constant number of operations, no matter how large, is written as O(1). Big O rule 1: replace all additive constants in the running time with the constant 1.

Calculate the time complexity of strchr?

const char* strchr(const char* str, int character); // finds the first occurrence of a character in a string

Answer: O(N). In the best case the basic operation executes once; in the worst case it executes N times. Since time complexity is generally taken as the worst case, the time complexity is O(N).
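A quick usage sketch of strchr (the `index_of` wrapper is illustrative, not part of the standard library):

```c
#include <string.h>
#include <assert.h>

// Returns the index of the first occurrence of c in str, or -1.
// Built on strchr, whose worst case scans all N characters: O(N);
// best case the character is first: O(1).
int index_of(const char* str, int c)
{
    const char* p = strchr(str, c);
    return p ? (int)(p - str) : -1;
}
```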


Calculate the time complexity of bubble sort?

void BubbleSort(int* a, int n)
{
    assert(a);
    for (size_t end = n; end > 0; --end)
    {
        int exchange = 0;
        for (size_t i = 1; i < end; ++i)
        {
            if (a[i - 1] > a[i])
            {
                Swap(&a[i - 1], &a[i]);
                exchange = 1;
            }
        }
        if (exchange == 0)
            break;
    }
}

Answer: O(N^2)

When calculating time complexity, do not just count the loops mechanically; that is not necessarily accurate. It must be worked out from the idea of the algorithm.

Bubble sort makes at most N-1 passes. Exact worst-case count: F(N) = (N-1) + (N-2) + ... + 2 + 1 = N*(N-1)/2 (an arithmetic series). In the best case (already sorted, thanks to the exchange flag) the basic operation executes N-1 times. Applying the big O derivation rules, and taking the worst case as usual, the time complexity is O(N^2); the best case is O(N).
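The arithmetic-series count above can be checked directly (a small sketch; `bubble_worst_compares` is an illustrative helper):

```c
#include <assert.h>

// Worst-case comparison count for bubble sort on n elements:
// (n-1) + (n-2) + ... + 1, which the arithmetic-series formula
// gives as n*(n-1)/2 -- the N^2 term dominates, hence O(N^2).
int bubble_worst_compares(int n)
{
    int sum = 0;
    for (int len = n - 1; len >= 1; --len)
        sum += len;          // comparisons in one pass
    return sum;              // equals n*(n-1)/2
}
```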


Calculate the time complexity of BinarySearch (binary search)?

int BinarySearch(int* a, int n, int x)
{
    assert(a);
    int begin = 0;
    int end = n - 1;
    // [begin, end]: a closed interval on both ends, hence the <=
    while (begin <= end)
    {
        int mid = begin + ((end - begin) >> 1);
        if (a[mid] < x)
            begin = mid + 1;
        else if (a[mid] > x)
            end = mid - 1;
        else
            return mid;
    }
    return -1;
}

 Answer: O(logN)

In the best case the basic operation executes once: O(1). The worst case is that x is not found, or is found only when a single element remains. Working backwards from that last element, each step doubles the remaining range (x2, x2, ...) until the original array is recovered; equivalently, the number of halvings is the number of times N can be divided by 2. If the search halves x times, then 2^x = N, so x = log2(N).

Because logarithms are awkward to write in plain text, in algorithm analysis log2(N) is abbreviated as logN, i.e. base 2 is implied. Some places write lgN, but in mathematics lgN means base 10, so that notation is ambiguous; it is best to write logN.
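The halving argument can be made concrete (an illustrative helper, not part of the search itself):

```c
#include <assert.h>

// Counts how many times n can be halved before reaching 1 -- the
// worst-case number of probes binary search makes on n elements.
// If 2^x = n, then x = log2(n), hence O(logN).
int halvings(int n)
{
    int x = 0;
    while (n > 1)
    {
        n /= 2;
        ++x;
    }
    return x;
}
```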


Calculate the time complexity of the recursive factorial Fac?

long long Fac(size_t N)
{
    if (0 == N)
        return 1;

    return Fac(N - 1) * N;
}

Answer: O(N)

By analysis, the recursion runs N+1 times in total (Fac(N), Fac(N-1), Fac(N-2), ..., Fac(0)), and each call does a constant amount of work, so the time complexity is O(N).


Calculate the time complexity of the recursive Fibonacci Fib?

long long Fib(size_t N)
{
    if (N < 3)
        return 1;

    return Fib(N - 1) + Fib(N - 2);
}

Answer: O(2^N)

The number of calls roughly doubles at each level of the recursion tree (a geometric series): 2^0 + 2^1 + ... + 2^(N-2) = 2^(N-1) - 1, so the time complexity is O(2^N).
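The exponential growth can be observed by instrumenting the recursion (a sketch; `fib_counted` is an illustrative variant that also counts calls):

```c
#include <assert.h>

// Instrumented version of Fib that also counts how many calls are made.
// The call count grows roughly geometrically with N, consistent with
// the O(2^N) bound.
long long fib_counted(int N, long long* calls)
{
    ++*calls;                 // one more call made
    if (N < 3)
        return 1;
    return fib_counted(N - 1, calls) + fib_counted(N - 2, calls);
}
```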

 


 

3. Space complexity

Space complexity is also a mathematical function: a measure of the amount of storage space an algorithm temporarily occupies while it runs.

Space complexity is not the number of bytes the program occupies, since that is not very meaningful; rather, it counts the number of variables. The rules for calculating space complexity are basically similar to those for time complexity, and big O asymptotic notation is used as well.

Note: the stack space a function needs to run (parameters, local variables, some register information, etc.) is determined at compile time, so space complexity is mainly determined by the extra space the function explicitly requests at run time.

Note: time is accumulated (gone forever), space is not accumulated (reusable)


Calculate the space complexity of BubbleSort?

void BubbleSort(int* a, int n)
{
    assert(a);
    for (size_t end = n; end > 0; --end)
    {
        int exchange = 0;
        for (size_t i = 1; i < end; ++i)
        {
            if (a[i - 1] > a[i])
            {
                Swap(&a[i - 1], &a[i]);
                exchange = 1;
            }
        }
        if (exchange == 0)
            break;
    }
}

Answer: O(1)

A constant amount of extra space is used, so the space complexity is O(1). The array of N numbers is provided by the caller and is not counted; space complexity counts only what this algorithm itself creates. Only a few temporary variables (end, exchange, i) are created by the algorithm, and although exchange is destroyed and recreated each pass, it reuses the same space.


Calculate the space complexity of Fibonacci?

long long* Fibonacci(size_t n)
{
    if (n == 0)
        return NULL;

    long long* fibArray = (long long*)malloc((n + 1) * sizeof(long long));
    fibArray[0] = 0;
    fibArray[1] = 1;
    for (size_t i = 2; i <= n; ++i)
    {
        fibArray[i] = fibArray[i - 1] + fibArray[i - 2];
    }
    return fibArray;
}

Answer: The space complexity is O(N)

malloc allocates n+1 long longs because of this algorithm, hence O(N).
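Since that O(N) block outlives the function, a caller must eventually free it (a usage sketch; `fib_array` mirrors the Fibonacci function above, with a null check added):

```c
#include <stdlib.h>
#include <assert.h>

// Same idea as the Fibonacci above: heap-allocates n+1 long longs,
// so the space complexity is O(N). The caller owns the buffer.
long long* fib_array(size_t n)
{
    if (n == 0)
        return NULL;
    long long* a = malloc((n + 1) * sizeof *a);
    if (a == NULL)
        return NULL;
    a[0] = 0;
    a[1] = 1;
    for (size_t i = 2; i <= n; ++i)
        a[i] = a[i - 1] + a[i - 2];
    return a;
}
```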


Calculate the space complexity of the recursive factorial Fac?

long long Fac(size_t N)
{
    if (N == 0)
        return 1;

    return Fac(N - 1) * N;
}

Answer: The space complexity is O(N)

Each recursive call opens a stack frame; N+1 frames are opened in total (Fac(N) down to Fac(0)), each using O(1) space, so the space complexity is O(N).


Calculate the space complexity of the recursive Fibonacci Fib?

long long Fib(size_t N)
{
    if (N < 3)
        return 1;

    return Fib(N - 1) + Fib(N - 2);
}

Answer: The space complexity is O(N)

The recursion first descends the left branch: Fib(N), Fib(N-1), Fib(N-2), ..., so at most N frames exist at once. Each call's frame is destroyed when it returns, and the right branch then reuses that freed space, so the space complexity is O(N), not O(2^N). The following program shows that stack space is reused:

void test1()
{
	int a = 0;
	printf("%p\n", &a);
}
void test2()
{
	int b = 0;
	printf("%p\n", &b);
}
int main()
{
	test1();
	test2();
	return 0;
}

You can see that the two printed addresses are the same: test2's stack frame reuses the space freed when test1 returned. For details, see my article on function stack frames (the creation and destruction of function stack frames).

 


4. Common time complexity and complexity oj exercises

114514                  O(1)       constant order
3n+4                    O(N)       linear order
3n^2+4n+5               O(N^2)     quadratic order
3log(2)n+4              O(logN)    logarithmic order
2n+3nlog(2)n+14         O(NlogN)   NlogN order
n^3+2n^2+4n+6           O(N^3)     cubic order
2^n                     O(2^N)     exponential order

4.1 LeetCode: the array nums contains all integers from 0 to n except one. Write code to find the missing integer. Can you do it in O(n) time?

 Ideas:

We are required to finish in O(N) time, which we can do with a property of XOR: two identical numbers XORed together give 0, and XOR does not care about order. So XOR all the array elements together with all the numbers 0..n; every present number cancels in pairs, leaving only the missing one.

int missingNumber(int* nums, int numsSize)
{
    int x = 0;
    for (int i = 0; i < numsSize; i++)
    {
        x ^= nums[i];
    }
    for (int j = 0; j < numsSize + 1; j++)
    {
        x ^= j;
    }
    return x;
}
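The XOR cancellation above can be verified on a small case (a self-contained copy of the same approach for testing):

```c
#include <assert.h>

// XOR every array element with every value in 0..n; paired values
// cancel to 0, leaving only the missing number. O(N) time, O(1) space.
int missing_number(const int* nums, int numsSize)
{
    int x = 0;
    for (int i = 0; i < numsSize; ++i)
        x ^= nums[i];
    for (int j = 0; j <= numsSize; ++j)
        x ^= j;
    return x;
}
```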

4.2 LeetCode: Rotate Array

Ideas:

Any method must traverse the array, so the time complexity is at least O(N). To also keep the extra space at O(1), we use the three-reversal trick (hard to come up with on your own): reverse the last k elements, reverse the first n-k elements, then reverse the whole array.

void reverse(int*nums,int left,int right)
{
    while(left<right)
    {
        int tmp=nums[left];
        nums[left]=nums[right];
        nums[right]=tmp;
        --right;
        ++left;
    }
}


void rotate(int* nums, int numsSize, int k)
{
    k %= numsSize;                              // rotating by numsSize is a no-op
    reverse(nums, numsSize - k, numsSize - 1);  // 1. reverse the last k
    reverse(nums, 0, numsSize - k - 1);         // 2. reverse the first n-k
    reverse(nums, 0, numsSize - 1);             // 3. reverse the whole array
}
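A quick check of the three-reversal trick on a concrete array (a self-contained copy of the same approach for testing; `rotate_right` and `rev` are illustrative names):

```c
#include <assert.h>

// Reverses a[l..r] in place.
static void rev(int* a, int l, int r)
{
    while (l < r)
    {
        int t = a[l];
        a[l++] = a[r];
        a[r--] = t;
    }
}

// Rotates a[0..n-1] right by k using three reversals: last k,
// first n-k, then the whole array. O(N) time, O(1) extra space.
void rotate_right(int* a, int n, int k)
{
    k %= n;
    rev(a, n - k, n - 1);
    rev(a, 0, n - k - 1);
    rev(a, 0, n - 1);
}
```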


Origin blog.csdn.net/weixin_63543274/article/details/124237885