Analysis of data structure complexity

Preface

In the process of learning data structures, we often need to analyze the complexity of code. There are two kinds of complexity analysis: time complexity and space complexity. Before analyzing either one, let's first discuss why we need to analyze space and time complexity at all.

Why do complexity analysis

You might say: I can run the code once and, through statistics and monitoring, get the algorithm's execution time and the amount of memory it occupies. Why do we still need to analyze time and space complexity? Can such analysis be more accurate than the data I get from a real run? Why analyze the complexity yourself?

1. Test results are very dependent on the test environment

Differences in hardware in the test environment have a great impact on the test results. For example, take the same piece of code and run it on an Intel Core i9 processor and an Intel Core i3 processor; needless to say, the i9 executes it much faster than the i3. Likewise, code a may execute faster than code b on one machine, yet when we switch to another machine the result may be exactly the opposite.

2. Test results are greatly affected by the scale of the data

For example, for the same sorting algorithm, if the initial order of the data to be sorted is different, the execution time will differ greatly. In the extreme case where the data is already in order, the sorting algorithm hardly needs to do anything, and the execution time will be very short. In addition, if the test data set is too small, the results may not truly reflect the algorithm's performance: for small-scale data, insertion sort may well be faster than quicksort!
From this we can see that if we want to know the true efficiency of an algorithm, we must analyze its complexity ourselves.

Complexity Analysis Rules

  1. For a single piece of code, look at the high-frequency part: for example, loops.
  2. For multiple pieces of code, take the largest: for example, if a method contains both a single loop and a double loop, the complexity of the double loop dominates.
  3. For nested code, multiply: for example, recursion and nested loops.
  4. For multiple scales, add: for example, if a method has two parameters that each control the length of a separate loop, the two complexities are added.
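A minimal sketch of rules 3 and 4 (the method names here are illustrative, not from the original):

```java
public class ComplexityRules {
    // Rule 4: two independent scales add up: O(m + n),
    // where m = a.length and n = b.length.
    static int sumTwoArrays(int[] a, int[] b) {
        int total = 0;
        for (int x : a) total += x; // O(m)
        for (int y : b) total += y; // O(n)
        return total;
    }

    // Rule 3: nested loops multiply: O(m * n).
    static int countEqualPairs(int[] a, int[] b) {
        int count = 0;
        for (int x : a)
            for (int y : b)
                if (x == y) count++;
        return count;
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3};
        int[] b = {3, 4};
        System.out.println(sumTwoArrays(a, b));    // 13
        System.out.println(countEqualPairs(a, b)); // 1
    }
}
```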

Analysis of time complexity

Big O notation only expresses a trend of growth. We usually ignore the constants, low-order terms, and coefficients in the formula and record only the magnitude of the largest order. Therefore, when we analyze the time complexity of an algorithm or a piece of code, we only pay attention to the code that is executed the most times; the order of magnitude of that core code's execution count is the time complexity of the entire piece of code.
The common time complexities, from low to high, are: O(1), O(log n), O(n), O(n log n), O(n^2), O(2^n), and O(n!).

Let me say a word about O(1). As long as the execution time of the code does not grow as n grows, the time complexity of the code is recorded as O(1). In other words, in general, as long as there are no loop or recursive statements in the algorithm, even if there are thousands of lines of code, the time complexity is still O(1).
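A minimal O(1) sketch (the method name is illustrative): no matter how large the array is, the method performs the same fixed number of operations.

```java
public class ConstantTime {
    // Two array accesses and one addition, regardless of a.length: O(1).
    static int firstPlusLast(int[] a) {
        return a[0] + a[a.length - 1];
    }

    public static void main(String[] args) {
        System.out.println(firstPlusLast(new int[]{5, 2, 9})); // 14
    }
}
```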
A brief word about O(log n). In the loop below, i doubles on every iteration, so after x iterations i = 2^x; the loop ends when i exceeds n. From 2^x = n we get x = log2(n), so the time complexity is O(log n).

i = 1;
while (i <= n)
{
    i = i * 2; // i takes the values 1, 2, 4, ..., so the loop runs about log2(n) times
}

Break down time complexity

  1. Best-case time complexity: the time complexity of code execution in the most ideal case.
  2. Worst case time complexity: the time complexity of the code execution in the worst case.
  3. Average time complexity: expressed by the weighted average of the number of times the code is executed in all cases.
  4. Amortized time complexity: when the code has low-order complexity in most executions and high-order complexity only in a few cases, and the high- and low-order cases occur in a regular sequence, the occasional high-order cost can be amortized over the many low-order executions. The amortized result is basically equal to the low-order complexity.
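A linear search makes the best, worst, and average cases concrete (the method name is illustrative): in the best case the target is at index 0, O(1); in the worst case it is absent and every element is checked, O(n); averaging over all equally likely positions still gives O(n).

```java
public class LinearSearch {
    // Returns the index of target in a, or -1 if absent.
    // Best case O(1): target at index 0.
    // Worst case O(n): target absent, all n elements checked.
    // Average case O(n): assuming each position is equally likely,
    // about n/2 elements are checked, and constants are dropped.
    static int indexOf(int[] a, int target) {
        for (int i = 0; i < a.length; i++) {
            if (a[i] == target) return i;
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] a = {4, 8, 15, 16};
        System.out.println(indexOf(a, 4));  // best case: 0
        System.out.println(indexOf(a, 99)); // worst case: -1
    }
}
```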

How to use average time complexity and amortized time complexity

  1. Average time complexity: used when the complexity of the code differs in order of magnitude in different situations; it is expressed by the weighted average of the execution counts over all possible cases.
  2. Amortized time complexity: used when two conditions are met: 1) the code has low-order complexity in most cases and high-order complexity in only a few cases; 2) the low- and high-order cases appear in a regular timing pattern. The amortized result is generally equal to the low-order complexity.
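The classic case that meets both conditions is appending to a dynamic array. A minimal sketch (class and method names are illustrative): most appends cost O(1); only when the buffer is full does a single O(n) copy occur, and that happens once per n cheap appends, so the amortized cost is O(1).

```java
public class DynamicArray {
    private int[] data = new int[2];
    private int size = 0;

    // Usually O(1); when full, copy everything into a buffer twice
    // as large, which is O(n) but occurs only after n cheap appends.
    // Amortized cost per append: O(1).
    void append(int value) {
        if (size == data.length) {
            int[] bigger = new int[data.length * 2];
            System.arraycopy(data, 0, bigger, 0, size);
            data = bigger;
        }
        data[size++] = value;
    }

    int get(int i) { return data[i]; }
    int size() { return size; }

    public static void main(String[] args) {
        DynamicArray arr = new DynamicArray();
        for (int i = 0; i < 100; i++) arr.append(i);
        System.out.println(arr.size()); // 100
        System.out.println(arr.get(99)); // 99
    }
}
```

This is the same doubling strategy used by real dynamic-array containers such as Java's ArrayList.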

Space complexity

The three common space complexities are O(1), O(n), and O(n^2). Let's analyze an example:

void print(int n)
{
    int i = 0;
    int[] a = new int[n]; // allocates n ints: O(n) space
    for (i = 0; i < n; ++i)
    {
        System.out.println(a[i]);
    }
}

In this code we allocate space for the variable i, but it is of constant order and has nothing to do with the data size n, so we can ignore it. The array a, however, is an int array of size n. The rest of the code takes no additional space, so the space complexity of the whole function is O(n).
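For comparison, a sketch of O(n^2) space (the method name is illustrative): allocating an n-by-n matrix requires space proportional to n squared.

```java
public class QuadraticSpace {
    // Allocates an n-by-n matrix: O(n^2) extra space.
    static int[][] multiplicationTable(int n) {
        int[][] table = new int[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                table[i][j] = (i + 1) * (j + 1);
        return table;
    }

    public static void main(String[] args) {
        int[][] t = multiplicationTable(3);
        System.out.println(t[2][2]); // 9
    }
}
```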

That concludes our analysis of complexity. If you want to fully grasp complexity analysis, you must practice it yourself. I hope this helps; if you found it useful, please consider liking it.


Origin blog.csdn.net/Freedom_cao/article/details/107285243