How to judge the quality of an algorithm

1. Introduction

        An excellent algorithm should satisfy four criteria: correctness, readability, robustness, and efficiency. Generally speaking, the first three are easier to achieve. Inefficiency, however, is hard for developers to notice in their own code. Low efficiency itself is not terrible; what is terrible is that the person who wrote the code does not know it is inefficient. The choice of algorithm has a great impact on performance, and this article focuses on efficiency.

2. Measuring efficiency through actual testing

        One approach is to measure the algorithm's execution time directly by running it.

Advantages:

        The specific execution time of the algorithm can be obtained.

Disadvantages:

        (1) The results depend on the device: the same algorithm may take different amounts of time on different machines, so the measurements are not reliable;

        (2) If the algorithm is time-consuming, you must wait for it to finish before getting a result, which wastes time.
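As a minimal sketch of this measurement approach (assuming Java; the class and method names here are made up for illustration), wall-clock timing can be done with `System.nanoTime()`:

```java
public class TimingDemo {
    // The workload we want to time: sum the integers 1..n with a loop.
    static long sum(long n) {
        long total = 0;
        for (long i = 1; i <= n; i++) {
            total += i;
        }
        return total;
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        long result = sum(10_000_000L);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // The measured time depends on hardware, JIT warm-up, and system load,
        // which is exactly why such measurements vary across devices.
        System.out.println("sum = " + result + ", took " + elapsedMs + " ms");
    }
}
```

Running this twice on two machines will generally give two different timings for the same algorithm, which is the unreliability described above.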

3. Analyzing efficiency in advance using theory

        There are two kinds of algorithm efficiency analysis: time efficiency and space efficiency. Time efficiency is measured by time complexity, and space efficiency by space complexity. Time complexity measures how fast an algorithm runs, while space complexity measures the additional memory it requires. Sometimes the two are in tension: saving time may cost space, and saving space may cost time. This is the familiar trade-off of space for time (or time for space).

        In the early days of computing, machines had very little storage, so space complexity mattered a great deal. With the rapid development of the computer industry, storage capacity has grown enormously, so space complexity usually receives less attention today. This article focuses on time complexity.
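The space-for-time trade-off mentioned above can be sketched with a classic example (assuming Java; the `Memo` class and its cache are hypothetical names for illustration): memoizing Fibonacci spends O(n) extra memory on a cache to avoid exponential recomputation.

```java
import java.util.HashMap;
import java.util.Map;

public class Memo {
    private static final Map<Integer, Long> cache = new HashMap<>();

    // Naive recursive fib recomputes subproblems and takes exponential time;
    // storing each result in a map (extra space) brings the time down to O(n).
    static long fib(int n) {
        if (n <= 1) return n;
        Long hit = cache.get(n);
        if (hit != null) return hit;            // answer already paid for in space
        long value = fib(n - 1) + fib(n - 2);
        cache.put(n, value);                    // trade memory for future speed
        return value;
    }

    public static void main(String[] args) {
        // Without the cache this call would take an impractically long time.
        System.out.println(fib(50));
    }
}
```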

3.1. Concept:

        The time an algorithm takes is proportional to the number of times its statements execute. The number of executions of an algorithm's basic operations, expressed as a function of the input size, is its time complexity.
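To make "count the executions" concrete, here is a small hypothetical sketch (assuming Java; `countOps` is a made-up helper that counts only assignment statements, ignoring loop bookkeeping):

```java
public class OpCount {
    // Returns how many basic operations (assignments) the loop performs
    // for a given n, so the count can be read off as a function of n.
    static long countOps(int n) {
        long ops = 0;
        int sum = 0;                     // 1 operation before the loop
        ops++;
        for (int i = 0; i < n; i++) {    // the body runs n times
            sum += i;                    // 1 operation per iteration
            ops++;
        }
        return ops;                      // total f(n) = n + 1, which is O(n)
    }

    public static void main(String[] args) {
        System.out.println(countOps(100)); // f(100) = 101
    }
}
```

Big O then keeps only the dominant term of f(n) = n + 1 and drops constants, giving O(n).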

3.2. Representation method:

        Big O notation describes the asymptotic behavior of a function as the input grows. The common complexity classes, ordered from slowest-growing to fastest-growing, are:

        O(1) < O(log n) < O(n) < O(n log n) < O(n^2) < O(n^3) < O(n^k) < O(n!)

        When writing algorithms, we should try to choose a complexity class whose growth curve is as flat as possible.

3.2.1. Constant O(1)

  • Commonly seen in simple operations such as assignment and variable access
  • The cost does not grow with the input size; this is the best possible performance
  • No matter how many statements execute, as long as the count is a fixed constant (even tens of thousands), the time complexity is O(1)
  • O(1) does not mean "one operation": any constant number of operations that does not depend on n is still O(1)
int i = 1;
int j = 2;
i++;
j--;
int k = i + j;

Code analysis: after execution, i is 2, j is 1, and k is 3. The snippet performs a fixed number of operations regardless of any input, so the time complexity is O(1).

3.2.2. Logarithmic O(log n)

  • Typical pattern: a loop variable doubles (or halves) each iteration until it reaches the target n. If the loop runs x times, then 2^x = n, so x = log2(n), written O(log n)
  • The cost grows with n, but very slowly; performance is good
int i = 100;
int j = 1;
while(j < i){
    j = j * 2;
}

Code analysis: the loop doubles j until it reaches i, so with i = 100 it runs ceil(log2(100)) = 7 times and j ends at 128. Big O drops constant factors and concrete values, so the time complexity is O(log n), not O(6.64).
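A real-world example of this halving pattern, sketched here under the assumption of a sorted input array (a standard textbook binary search, not code from this article):

```java
public class BinarySearch {
    // Binary search over a sorted array: the search range halves each step,
    // so at most about log2(n) + 1 comparisons are needed -> O(log n).
    static int search(int[] sorted, int target) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;   // avoids overflow of (lo + hi)
            if (sorted[mid] == target) return mid;
            if (sorted[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1; // not found
    }

    public static void main(String[] args) {
        int[] a = {1, 3, 5, 7, 9, 11};
        System.out.println(search(a, 7));  // index 3
        System.out.println(search(a, 4));  // -1
    }
}
```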

3.2.3. Linear O(n)

  • Commonly seen in a single for loop or while loop
  • The cost grows linearly with n; performance is average
  • Regardless of the concrete value of n, the complexity is written O(n), never O(100) or O(1000)
int n = 100;
int j = 0;
for(int i = 0; i < n; i++){
    j = i;
}

Code analysis: after the loop, i is 100 and j is 99. The loop body executes n = 100 times; since Big O describes growth rather than a concrete count, the time complexity is O(n).

3.2.4. Linear logarithmic O(n log n)

  • Typically arises when a loop that runs n times contains an O(log n) body
  • The cost grows somewhat faster than linear; performance is still acceptable (many efficient sorting algorithms are O(n log n))
int n = 100;
for(int m = 0; m < n; m++){
    int i = 1;
    while(i < n){
        i = i * 2;
    }
}

Code analysis: the inner loop runs ceil(log2(100)) = 7 times per outer iteration (i ends at 128), so the inner body executes 100 × 7 = 700 times in total. Dropping constants, the time complexity is O(n log n), not O(664.39).
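The total count above can be checked by instrumenting the same loop pattern (a hypothetical `steps` helper, assuming Java):

```java
public class NLogN {
    // Counts the total number of inner-loop iterations for the pattern above:
    // an outer loop of n iterations, each doubling an index from 1 up to n.
    // The count grows like n * log2(n).
    static long steps(int n) {
        long steps = 0;
        for (int m = 0; m < n; m++) {
            for (int i = 1; i < n; i *= 2) {
                steps++;
            }
        }
        return steps;
    }

    public static void main(String[] args) {
        System.out.println(steps(100));   // 100 * ceil(log2(100)) = 700
        System.out.println(steps(1000));  // 10x the input, but ~14x the steps
    }
}
```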

3.2.5. Quadratic O(n^2), cubic O(n^3), k-th power O(n^k)

  • Among the most common time complexities; easy to produce when quickly implementing business logic
  • Commonly seen in two, three, or k nested for loops
  • The cost grows rapidly with n; performance is poor
  • In practice, avoid loops nested to a large depth k: they are both slow and very hard to maintain
int n = 100;
int v = 0;
for(int i = 0; i < n; i++){
    for(int j = 0; j < n; j++){
        v = v + j + i;
    }
}

Code analysis: after the loops, v is 990000 and both i and j are 100. The inner body executes n × n = 10000 times; dropping the concrete value, the time complexity is O(n^2).
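A practical algorithm with this shape is bubble sort, sketched here as a standard textbook example (not code from this article): two nested loops over the array give O(n^2) comparisons.

```java
import java.util.Arrays;

public class BubbleSort {
    // Bubble sort: repeatedly swap adjacent out-of-order elements.
    // The nested loops perform on the order of n^2 comparisons.
    static void sort(int[] a) {
        for (int i = 0; i < a.length - 1; i++) {
            // After pass i, the last i elements are already in place.
            for (int j = 0; j < a.length - 1 - i; j++) {
                if (a[j] > a[j + 1]) {
                    int tmp = a[j];
                    a[j] = a[j + 1];
                    a[j + 1] = tmp;
                }
            }
        }
    }

    public static void main(String[] args) {
        int[] data = {5, 2, 9, 1, 7};
        sort(data);
        System.out.println(Arrays.toString(data)); // [1, 2, 5, 7, 9]
    }
}
```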

Cubic O(n^3) and k-th power O(n^k) are analogous to quadratic O(n^2), just with more levels of nesting.

// Cubic O(n^3)
for(int i = 0; i < n; i++){
    for(int j = 0; j < n; j++){
        for(int m = 0; m < n; m++){

        }
    }
}
// K-th power O(n^k)
for(int i = 0; i < n; i++){
    for(int j = 0; j < n; j++){
        for(int m = 0; m < n; m++){
            for(int p = 0; p < n; p++){
                ... // keep nesting for loops; each level increases k
            }
        }
    }
}

3.2.6. Factorial type O(n!)

  • Extremely uncommon in practice
  • The cost explodes as n grows; this is the worst performance of the classes listed here
void method(int n) {
    if (n <= 0) return; // base case; without it the recursion never ends
    for (int i = 0; i < n; i++) {
        method(n - 1);
    }
}

For factorial type O(n!), method(n) makes n recursive calls to method(n-1), so the call count satisfies T(n) = n·T(n-1) + 1. Expanding gives T(n) = n! + n!/1! + n!/2! + ··· + n!/n!, which is bounded by e·n!, so the time complexity is O(n!).
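The recurrence can be verified by counting the calls of that pattern directly (a hypothetical `calls` helper, assuming Java):

```java
public class FactorialCalls {
    // Counts every call made by the pattern above: each call to method(n)
    // triggers n calls to method(n-1), so the total satisfies
    // T(n) = 1 + n * T(n-1), which grows like O(n!).
    static long calls(int n) {
        if (n <= 0) return 1;        // base case: count the call itself
        long total = 1;              // this call
        for (int i = 0; i < n; i++) {
            total += calls(n - 1);   // plus everything each child triggers
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(calls(3)); // 16 calls
        System.out.println(calls(5)); // 326 calls; already exploding
    }
}
```

Even for small n the count grows explosively, which is why O(n!) algorithms are almost never usable in practice.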

Origin blog.csdn.net/qq_42014561/article/details/129557673