Data Structure and Algorithm-01 Time Complexity Analysis

Time complexity analysis

1. Thinking

  • An algorithm is a set of well-defined steps for manipulating data and solving a computational problem. Different algorithms for the same problem may produce the same final result, but the time and resources they consume along the way can differ enormously.
    For example, to tighten a nut, both a wrench and pliers can do the job, but a wrench is clearly the more efficient tool.
  • So how do we measure the pros and cons of different algorithms?
  1. Post-hoc statistics
    Measuring an algorithm's actual execution time and memory usage through instrumentation and monitoring is the most accurate approach, but it has serious limitations:
    A. The results depend heavily on the test environment
    B. The results are strongly affected by the scale of the input data
  2. A priori analysis and estimation
    Think about it: when we want to implement a feature, we hope to quickly identify the best of several candidate solutions and then implement it, rather than painstakingly building each one and benchmarking it afterwards, which would be far too inefficient.
    So we need to evaluate the factors that affect code efficiency (time complexity, space complexity, etc.) before the code is executed; in other words, we make decisions through complexity analysis. Below we focus on time complexity, the kind most frequently asked about in interviews.

2. Introduction to time complexity

Concept

What is the time complexity? Let's first understand it with a piece of code:

int sum(int n){
    int sum = 0;
    int i = 1;
    for(; i <= n; ++i){
        sum = sum + i;
    }
    return sum;
}
  • This code sums the integers from 1 to n. The computer takes roughly the same amount of time to execute each line of code. If we use a variable time to represent the execution time of one line of code, the total time of the sum method is calculated as follows:
    • Line 2 (int sum = 0;) executes once: time
    • Line 3 (int i = 1;) executes once: time
    • Line 4 (the loop header) executes n times, because the loop runs n times: n × time
    • Line 5 (the loop body) executes n times, because the loop runs n times: n × time
  • The final result is 2n × time + 2 × time, i.e. 2(n+1) × time. In other words: total running time = execution time of one line of code × total number of lines executed

Summary

  1. Because time can be treated as a constant, the total running time is proportional to the number of times the code executes. As long as we can count the executions, we know how the total running time grows, so we can compare programs simply by looking at execution counts.
  2. If we denote the total running time by T(n), the proportionality by O, and the total number of lines executed by f(n), we get a common formula. This way of expressing code running time is called Big O notation.

T(n) = O(f(n))

  1. Note: O does not represent the actual execution time; it only describes a growth trend, so it is called asymptotic time complexity, or time complexity for short.
  2. When n is large, constants, coefficients, and lower-order terms do not affect the growth trend, so they are ignored when calculating time complexity and only the highest-order term is kept. For the code above, dropping the constant and coefficient leaves T(n) = O(n).


3. Common time complexities and how to analyze them

In ascending order, the common time complexities are: O(1) < O(log n) < O(n) < O(n log n) < O(n^2) < O(n^3) < O(2^n) < O(n!). The further right in this chain, the larger the time complexity and the lower the execution efficiency. The following examples explain each in turn.

1. Constant order O(1)

  • As long as the code has no loops or other structures whose work grows with the input, its execution count does not grow with the variable, and its complexity is O(1) no matter how many lines it contains, for example:
int i = 1;
int j = 2;
++i;
j++;
int m = i + j;

2. Linear order O(n)

  • "One-level loop", the number of operations that the algorithm needs to perform is represented by a function of input size n, that is, O(n).
for(int i = 1; i <= n; i++){
    System.out.println(i);
}
for(int i = 1; i <= n; i++){
    System.out.println(i);
}
  • Note: the two for loops are not nested, so the total number of executions is 2 × n, i.e. O(2n); dropping the constant gives O(n).

3. Logarithmic order O(log n)

  • Understand through the code:
int i = 1;
while(i < n){
    i = i * 2;
}
  • To analyze the time complexity of this code, we mainly look at how many times the loop executes.
  • Suppose the loop exits after x iterations, when i >= n. Since i is multiplied by 2 each time, after x iterations i equals 2 raised to the power x, giving the condition 2^x >= n. So x = log n, the logarithm of n to base 2. In Big O notation, the time complexity is O(log n) (the base is a constant factor and is dropped).
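A classic place this pattern appears is binary search, which halves the search range on every step just as the loop above doubles i. The sketch below is illustrative and not from the original article:

```java
public class BinarySearchDemo {
    // Returns the index of target in a sorted array, or -1 if absent.
    // Each iteration halves the range [lo, hi], so the loop runs at most
    // about log2(n) times: O(log n).
    static int search(int[] arr, int target) {
        int lo = 0, hi = arr.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;   // avoids int overflow of (lo+hi)/2
            if (arr[mid] == target) return mid;
            if (arr[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] sorted = {1, 3, 5, 7, 9, 11};
        System.out.println(search(sorted, 7));  // found at index 3
        System.out.println(search(sorted, 4));  // not present: -1
    }
}
```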

4. Linear-logarithmic order O(n log n)

  • n log n = n × log n, so it is equivalent to repeating the O(log n) loop from the previous case n times. The time complexity of the following code is therefore O(n log n):
for(int m = 1; m < n; m++){
    int i = 1;
    while(i < n){
        i = i * 2;
    }
}

5. Square order O(n^2)

  • One loop executes n times; nesting one such loop inside another gives n × n executions, which the following code illustrates
for(int x = 1; x <= n; x++){
    for(int i = 1; i <= n; i++){
        int j = i;
        j++;
    }
}

6. Higher orders, and so on

  • Cubic order O(n³) corresponds to three nested loops over n, but for large n the running time already becomes impractical. O(2^n) and O(n!) are worse still: unless n is very small, even n = 100 means a nightmarish running time, so algorithms with these complexities are generally not discussed.
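The textbook example of O(2^n) is the naive recursive Fibonacci: each call spawns two more calls, so the call tree has on the order of 2^n nodes. A minimal sketch (not from the original article):

```java
public class FibDemo {
    // Naive recursion: fib(n) calls fib(n-1) and fib(n-2), so the number of
    // calls roughly doubles with each level of the recursion tree -- O(2^n).
    // Around n = 50 this already takes far too long to run.
    static long fib(int n) {
        if (n < 2) return n;
        return fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        System.out.println(fib(10)); // 55
        System.out.println(fib(20)); // 6765
    }
}
```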

4. Time complexity analysis tips

Through the examples above, you may have noticed a pattern:

  1. The complexity of nested code equals the product of the complexities of the inner and outer code
  2. Pay attention only to the code that executes the most times, because constants, lower-order terms, and coefficients in the formula are ignored
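Both rules can be seen by counting basic operations directly. The sketch below (an illustration I added, not from the article) runs an O(n) loop followed by an O(n²) nested loop; the total count is n + n², and for large n the n² term dominates, so the whole method is O(n²):

```java
public class DominantTermDemo {
    // Counts the basic operations of a method containing one single loop
    // (n operations) followed by one nested loop (n * n operations).
    static long countOps(int n) {
        long ops = 0;
        for (int i = 0; i < n; i++) {
            ops++;                      // runs n times
        }
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                ops++;                  // runs n * n times
            }
        }
        return ops;                     // total: n + n^2
    }

    public static void main(String[] args) {
        System.out.println(countOps(10));   // 110  = 10 + 100
        System.out.println(countOps(100));  // 10100, dominated by 100^2
    }
}
```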

5. An interview question for practice

Question: Suppose the time complexity of an algorithm satisfies the recurrence T(n) = T(n-1) + n (n a positive integer) with T(0) = 1. The time complexity of the algorithm is ().

  • A. O (logn)
  • B. O (nlogn)
  • C. O(n)
  • D. O(n^2)

The answer is D: expanding the recurrence gives T(n) = n + (n-1) + … + 1 + T(0) = n(n+1)/2 + 1, which grows as O(n^2).
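You can check the expansion numerically. This small sketch (my addition, not part of the original) unrolls T(n) = T(n-1) + n iteratively and compares it against the closed form n(n+1)/2 + 1:

```java
public class RecurrenceDemo {
    // Unrolls T(n) = T(n-1) + n with T(0) = 1 by adding 1, 2, ..., n to the
    // base case. The result matches the closed form n*(n+1)/2 + 1, so T(n)
    // grows quadratically: O(n^2).
    static long T(int n) {
        long t = 1;                     // T(0) = 1
        for (int k = 1; k <= n; k++) {
            t += k;                     // T(k) = T(k-1) + k
        }
        return t;
    }

    public static void main(String[] args) {
        System.out.println(T(4));                          // 11 = 4*5/2 + 1
        System.out.println(T(100) == 100L * 101 / 2 + 1);  // true
    }
}
```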

Summary

  • That wraps up time complexity. There are deeper topics such as average-case, amortized, best-case, and worst-case time complexity; if time permits and there is interest, I may write about them later.
  • I hope this helps you while it consolidates my own understanding~


Origin blog.csdn.net/Cathy_2000/article/details/114240964