Algorithm and data structure interview guide - detailed explanation of time complexity

Time complexity

Running time intuitively and accurately reflects an algorithm's efficiency. If we want to accurately estimate the running time of a piece of code, how should we go about it?

  1. Determine the running platform, including hardware configuration, programming language, system environment, etc. All of these affect how efficiently the code runs.
  2. Evaluate the running time of each kind of operation, for example the addition operation + taking 1 ns, the multiplication operation * taking 10 ns, the print operation print() taking 5 ns, and so on.
  3. Count all the operations in the code and sum their execution times to obtain the total running time.

For example, in the following code the input data size is $n$:

=== “Python”

```python title=""
# On a given platform
def algorithm(n: int):
    a = 2      # 1 ns
    a = a + 1  # 1 ns
    a = a * 2  # 10 ns
    # Loop n times
    for _ in range(n):  # 1 ns
        print(0)        # 5 ns
```

=== “C++”

```cpp title=""
// On a given platform
void algorithm(int n) {
    int a = 2;  // 1 ns
    a = a + 1;  // 1 ns
    a = a * 2;  // 10 ns
    // Loop n times
    for (int i = 0; i < n; i++) {  // 1 ns, i++ runs in every round
        cout << 0 << endl;         // 5 ns
    }
}
```

=== “Java”

```java title=""
// On a given platform
void algorithm(int n) {
    int a = 2;  // 1 ns
    a = a + 1;  // 1 ns
    a = a * 2;  // 10 ns
    // Loop n times
    for (int i = 0; i < n; i++) {  // 1 ns, i++ runs in every round
        System.out.println(0);     // 5 ns
    }
}
```

=== “C#”

```csharp title=""
// On a given platform
void algorithm(int n) {
    int a = 2;  // 1 ns
    a = a + 1;  // 1 ns
    a = a * 2;  // 10 ns
    // Loop n times
    for (int i = 0; i < n; i++) {  // 1 ns, i++ runs in every round
        Console.WriteLine(0);      // 5 ns
    }
}
```

=== “Go”

```go title=""
// On a given platform
func algorithm(n int) {
    a := 2     // 1 ns
    a = a + 1  // 1 ns
    a = a * 2  // 10 ns
    // Loop n times
    for i := 0; i < n; i++ {  // 1 ns
        fmt.Println(0)        // 5 ns
    }
}
```

=== “Swift”

```swift title=""
// On a given platform
func algorithm(n: Int) {
    var a = 2 // 1 ns
    a = a + 1 // 1 ns
    a = a * 2 // 10 ns
    // Loop n times
    for _ in 0 ..< n { // 1 ns
        print(0) // 5 ns
    }
}
```

=== “JS”

```javascript title=""
// On a given platform
function algorithm(n) {
    var a = 2; // 1 ns
    a = a + 1; // 1 ns
    a = a * 2; // 10 ns
    // Loop n times
    for (let i = 0; i < n; i++) { // 1 ns, i++ runs in every round
        console.log(0); // 5 ns
    }
}
```

=== “TS”

```typescript title=""
// On a given platform
function algorithm(n: number): void {
    var a: number = 2; // 1 ns
    a = a + 1; // 1 ns
    a = a * 2; // 10 ns
    // Loop n times
    for (let i = 0; i < n; i++) { // 1 ns, i++ runs in every round
        console.log(0); // 5 ns
    }
}
```

=== “Dart”

```dart title=""
// On a given platform
void algorithm(int n) {
  int a = 2; // 1 ns
  a = a + 1; // 1 ns
  a = a * 2; // 10 ns
  // Loop n times
  for (int i = 0; i < n; i++) { // 1 ns, i++ runs in every round
    print(0); // 5 ns
  }
}
```

=== “Rust”

```rust title=""
// On a given platform
fn algorithm(n: i32) {
    let mut a = 2;      // 1 ns
    a = a + 1;          // 1 ns
    a = a * 2;          // 10 ns
    // Loop n times
    for _ in 0..n {     // 1 ns
        println!("{}", 0);  // 5 ns
    }
}
```

=== “C”

```c title=""
// On a given platform
void algorithm(int n) {
    int a = 2;  // 1 ns
    a = a + 1;  // 1 ns
    a = a * 2;  // 10 ns
    // Loop n times
    for (int i = 0; i < n; i++) {   // 1 ns, i++ runs in every round
        printf("%d", 0);            // 5 ns
    }
}
```

=== “Zig”

```zig title=""

```

Following this method, the algorithm's running time works out to $(6n + 12)$ ns:

$$1 + 1 + 10 + (1 + 5) \times n = 6n + 12$$

In practice, however, counting an algorithm's actual running time is neither reasonable nor realistic. First, we do not want to tie the estimate to a particular running platform, because an algorithm must run on many different platforms. Second, it is hard to know the running time of each kind of operation, which makes the estimation process extremely difficult.

Assessing the time growth trend

Time complexity analysis does not count the algorithm's running time itself, but rather the growth trend of the running time as the volume of data increases.

The concept of "time growth trend" is rather abstract, so let's understand it through an example. Suppose the input data size is $n$, and consider three algorithm functions A, B, and C:

=== “Python”

```python title=""
# Time complexity of algorithm A: constant order
def algorithm_A(n: int):
    print(0)
# Time complexity of algorithm B: linear order
def algorithm_B(n: int):
    for _ in range(n):
        print(0)
# Time complexity of algorithm C: constant order
def algorithm_C(n: int):
    for _ in range(1000000):
        print(0)
```

=== “C++”

```cpp title=""
// Time complexity of algorithm A: constant order
void algorithm_A(int n) {
    cout << 0 << endl;
}
// Time complexity of algorithm B: linear order
void algorithm_B(int n) {
    for (int i = 0; i < n; i++) {
        cout << 0 << endl;
    }
}
// Time complexity of algorithm C: constant order
void algorithm_C(int n) {
    for (int i = 0; i < 1000000; i++) {
        cout << 0 << endl;
    }
}
```

=== “Java”

```java title=""
// Time complexity of algorithm A: constant order
void algorithm_A(int n) {
    System.out.println(0);
}
// Time complexity of algorithm B: linear order
void algorithm_B(int n) {
    for (int i = 0; i < n; i++) {
        System.out.println(0);
    }
}
// Time complexity of algorithm C: constant order
void algorithm_C(int n) {
    for (int i = 0; i < 1000000; i++) {
        System.out.println(0);
    }
}
```

=== “C#”

```csharp title=""
// Time complexity of algorithm A: constant order
void algorithm_A(int n) {
    Console.WriteLine(0);
}
// Time complexity of algorithm B: linear order
void algorithm_B(int n) {
    for (int i = 0; i < n; i++) {
        Console.WriteLine(0);
    }
}
// Time complexity of algorithm C: constant order
void algorithm_C(int n) {
    for (int i = 0; i < 1000000; i++) {
        Console.WriteLine(0);
    }
}
```

=== “Go”

```go title=""
// Time complexity of algorithm A: constant order
func algorithm_A(n int) {
    fmt.Println(0)
}
// Time complexity of algorithm B: linear order
func algorithm_B(n int) {
    for i := 0; i < n; i++ {
        fmt.Println(0)
    }
}
// Time complexity of algorithm C: constant order
func algorithm_C(n int) {
    for i := 0; i < 1000000; i++ {
        fmt.Println(0)
    }
}
```

=== “Swift”

```swift title=""
// Time complexity of algorithm A: constant order
func algorithmA(n: Int) {
    print(0)
}

// Time complexity of algorithm B: linear order
func algorithmB(n: Int) {
    for _ in 0 ..< n {
        print(0)
    }
}

// Time complexity of algorithm C: constant order
func algorithmC(n: Int) {
    for _ in 0 ..< 1000000 {
        print(0)
    }
}
```

=== “JS”

```javascript title=""
// Time complexity of algorithm A: constant order
function algorithm_A(n) {
    console.log(0);
}
// Time complexity of algorithm B: linear order
function algorithm_B(n) {
    for (let i = 0; i < n; i++) {
        console.log(0);
    }
}
// Time complexity of algorithm C: constant order
function algorithm_C(n) {
    for (let i = 0; i < 1000000; i++) {
        console.log(0);
    }
}

```

=== “TS”

```typescript title=""
// Time complexity of algorithm A: constant order
function algorithm_A(n: number): void {
    console.log(0);
}
// Time complexity of algorithm B: linear order
function algorithm_B(n: number): void {
    for (let i = 0; i < n; i++) {
        console.log(0);
    }
}
// Time complexity of algorithm C: constant order
function algorithm_C(n: number): void {
    for (let i = 0; i < 1000000; i++) {
        console.log(0);
    }
}
```

=== “Dart”

```dart title=""
// Time complexity of algorithm A: constant order
void algorithmA(int n) {
  print(0);
}
// Time complexity of algorithm B: linear order
void algorithmB(int n) {
  for (int i = 0; i < n; i++) {
    print(0);
  }
}
// Time complexity of algorithm C: constant order
void algorithmC(int n) {
  for (int i = 0; i < 1000000; i++) {
    print(0);
  }
}
```

=== “Rust”

```rust title=""
// Time complexity of algorithm A: constant order
fn algorithm_A(n: i32) {
    println!("{}", 0);
}
// Time complexity of algorithm B: linear order
fn algorithm_B(n: i32) {
    for _ in 0..n {
        println!("{}", 0);
    }
}
// Time complexity of algorithm C: constant order
fn algorithm_C(n: i32) {
    for _ in 0..1000000 {
        println!("{}", 0);
    }
}
```

=== “C”

```c title=""
// Time complexity of algorithm A: constant order
void algorithm_A(int n) {
    printf("%d", 0);
}
// Time complexity of algorithm B: linear order
void algorithm_B(int n) {
    for (int i = 0; i < n; i++) {
        printf("%d", 0);
    }
}
// Time complexity of algorithm C: constant order
void algorithm_C(int n) {
    for (int i = 0; i < 1000000; i++) {
        printf("%d", 0);
    }
}
```

=== “Zig”

```zig title=""

```

The figure below shows the time complexity of the above three algorithm functions.

  • Algorithm A has only 1 print operation; its running time does not grow as $n$ increases. We call the time complexity of this algorithm "constant order".
  • The print operation in algorithm B loops $n$ times, so its running time grows linearly with $n$. The time complexity of this algorithm is called "linear order".
  • The print operation in algorithm C loops $1000000$ times; although its running time is long, it is unrelated to the input data size $n$. Therefore the time complexity of C is the same as that of A, namely "constant order".


Compared with directly counting algorithm running time, what are the characteristics of time complexity analysis?

  • Time complexity effectively evaluates algorithm efficiency. For example, algorithm B's running time grows linearly: it is slower than algorithm A for $n > 1$ and slower than algorithm C for $n > 1000000$. In fact, as long as the input data size $n$ is large enough, a "constant order" algorithm is always better than a "linear order" one; this is exactly what the time growth trend expresses.
  • Time complexity is simpler to compute. Clearly, neither the running platform nor the type of computational operation affects the growth trend of running time. Therefore, in time complexity analysis we can treat the execution time of every operation as the same "unit time", reducing "counting the running time of operations" to "counting the number of operations", which greatly lowers the difficulty of estimation.
  • Time complexity also has limitations. For example, although algorithms A and C have the same time complexity, their actual running times differ greatly. Likewise, although algorithm B has higher time complexity than C, algorithm B is clearly better when the input data size $n$ is small. In such cases it is hard to judge efficiency from time complexity alone. Nevertheless, complexity analysis remains the most effective and most commonly used way to evaluate algorithm efficiency.

Asymptotic upper bound of functions

Given a function with input of size $n$:

=== “Python”

```python title=""
def algorithm(n: int):
    a = 1      # +1
    a = a + 1  # +1
    a = a * 2  # +1
    # Loop n times
    for i in range(n):  # +1
        print(0)        # +1
```

=== “C++”

```cpp title=""
void algorithm(int n) {
    int a = 1;  // +1
    a = a + 1;  // +1
    a = a * 2;  // +1
    // Loop n times
    for (int i = 0; i < n; i++) { // +1 (i++ runs in every round)
        cout << 0 << endl;    // +1
    }
}
```

=== “Java”

```java title=""
void algorithm(int n) {
    int a = 1;  // +1
    a = a + 1;  // +1
    a = a * 2;  // +1
    // Loop n times
    for (int i = 0; i < n; i++) { // +1 (i++ runs in every round)
        System.out.println(0);    // +1
    }
}
```

=== “C#”

```csharp title=""
void algorithm(int n) {
    int a = 1;  // +1
    a = a + 1;  // +1
    a = a * 2;  // +1
    // Loop n times
    for (int i = 0; i < n; i++) {   // +1 (i++ runs in every round)
        Console.WriteLine(0);   // +1
    }
}
```

=== “Go”

```go title=""
func algorithm(n int) {
    a := 1      // +1
    a = a + 1   // +1
    a = a * 2   // +1
    // Loop n times
    for i := 0; i < n; i++ {   // +1
        fmt.Println(a)         // +1
    }
}
```

=== “Swift”

```swift title=""
func algorithm(n: Int) {
    var a = 1 // +1
    a = a + 1 // +1
    a = a * 2 // +1
    // Loop n times
    for _ in 0 ..< n { // +1
        print(0) // +1
    }
}
```

=== “JS”

```javascript title=""
function algorithm(n) {
    var a = 1; // +1
    a += 1; // +1
    a *= 2; // +1
    // Loop n times
    for(let i = 0; i < n; i++){ // +1 (i++ runs in every round)
        console.log(0); // +1
    }
}
```

=== “TS”

```typescript title=""
function algorithm(n: number): void{
    var a: number = 1; // +1
    a += 1; // +1
    a *= 2; // +1
    // Loop n times
    for(let i = 0; i < n; i++){ // +1 (i++ runs in every round)
        console.log(0); // +1
    }
}
```

=== “Dart”

```dart title=""
void algorithm(int n) {
  int a = 1; // +1
  a = a + 1; // +1
  a = a * 2; // +1
  // Loop n times
  for (int i = 0; i < n; i++) { // +1 (i++ runs in every round)
    print(0); // +1
  }
}
```

=== “Rust”

```rust title=""
fn algorithm(n: i32) {
    let mut a = 1;   // +1
    a = a + 1;      // +1
    a = a * 2;      // +1

    // Loop n times
    for _ in 0..n { // +1
        println!("{}", 0); // +1
    }
}
```

=== “C”

```c title=""
void algorithm(int n) {
    int a = 1;  // +1
    a = a + 1;  // +1
    a = a * 2;  // +1
    // Loop n times
    for (int i = 0; i < n; i++) {   // +1 (i++ runs in every round)
        printf("%d", 0);            // +1
    }
}  
```

=== “Zig”

```zig title=""

```

Let the number of operations of the algorithm, as a function of the input data size $n$, be denoted $T(n)$. Then the number of operations of the function above is:

$$T(n) = 3 + 2n$$

$T(n)$ is a linear function, which means the growth trend of its running time is linear; therefore its time complexity is linear order.

We denote linear-order time complexity as $O(n)$. This mathematical symbol is called "big-$O$ notation", and it represents the "asymptotic upper bound" of the function $T(n)$.

Time complexity analysis is essentially computing the asymptotic upper bound of the operation-count function $T(n)$, and this has a precise mathematical definition.

!!! abstract "Asymptotic upper bound of a function"

    If there exist a positive real number $c$ and a real number $n_0$ such that $T(n) \leq c \cdot f(n)$ for all $n > n_0$, then $f(n)$ gives an asymptotic upper bound of $T(n)$, denoted $T(n) = O(f(n))$.

As shown in the figure below, computing the asymptotic upper bound means finding a function $f(n)$ such that, as $n$ tends to infinity, $T(n)$ and $f(n)$ have the same order of growth, differing only by a constant factor $c$.


Calculation method

Asymptotic upper bounds are somewhat mathematical, so don't worry if you don't fully understand them yet. In practice we only need to master the calculation method; the mathematical meaning can be absorbed gradually.

By definition, once we determine $f(n)$ we obtain the time complexity $O(f(n))$. So how do we determine the asymptotic upper bound $f(n)$? The overall process has two steps: first count the number of operations, then determine the asymptotic upper bound.

Step 1: Count the number of operations

For code, simply count line by line from top to bottom. However, since the constant $c$ in $c \cdot f(n)$ can be arbitrarily large, all coefficients and constant terms in the operation count $T(n)$ can be ignored. From this principle we can derive the following counting simplification techniques.

  1. Ignore the constant terms in $T(n)$. They are all unrelated to $n$, so they have no impact on the time complexity.
  2. Omit all coefficients. For example, looping $2n$ times, $5n + 1$ times, etc., can simply be written as $n$ times, because the coefficient in front of $n$ has no effect on the time complexity.
  3. Multiply for nested loops. The total number of operations equals the product of the outer and inner loop counts, and techniques 1 and 2 can still be applied to each loop separately.

Given a function, we can count the number of operations using the above techniques.

=== “Python”

```python title=""
def algorithm(n: int):
    a = 1      # +0 (technique 1)
    a = a + n  # +0 (technique 1)
    # +n (technique 2)
    for i in range(5 * n + 1):
        print(0)
    # +n*n (technique 3)
    for i in range(2 * n):
        for j in range(n + 1):
            print(0)
```

=== “C++”

```cpp title=""
void algorithm(int n) {
    int a = 1;  // +0 (technique 1)
    a = a + n;  // +0 (technique 1)
    // +n (technique 2)
    for (int i = 0; i < 5 * n + 1; i++) {
        cout << 0 << endl;
    }
    // +n*n (technique 3)
    for (int i = 0; i < 2 * n; i++) {
        for (int j = 0; j < n + 1; j++) {
            cout << 0 << endl;
        }
    }
}
```

=== “Java”

```java title=""
void algorithm(int n) {
    int a = 1;  // +0 (technique 1)
    a = a + n;  // +0 (technique 1)
    // +n (technique 2)
    for (int i = 0; i < 5 * n + 1; i++) {
        System.out.println(0);
    }
    // +n*n (technique 3)
    for (int i = 0; i < 2 * n; i++) {
        for (int j = 0; j < n + 1; j++) {
            System.out.println(0);
        }
    }
}
```

=== “C#”

```csharp title=""
void algorithm(int n) {
    int a = 1;  // +0 (technique 1)
    a = a + n;  // +0 (technique 1)
    // +n (technique 2)
    for (int i = 0; i < 5 * n + 1; i++) {
        Console.WriteLine(0);
    }
    // +n*n (technique 3)
    for (int i = 0; i < 2 * n; i++) {
        for (int j = 0; j < n + 1; j++) {
            Console.WriteLine(0);
        }
    }
}
```

=== “Go”

```go title=""
func algorithm(n int) {
    a := 1     // +0 (technique 1)
    a = a + n  // +0 (technique 1)
    // +n (technique 2)
    for i := 0; i < 5 * n + 1; i++ {
        fmt.Println(0)
    }
    // +n*n (technique 3)
    for i := 0; i < 2 * n; i++ {
        for j := 0; j < n + 1; j++ {
            fmt.Println(0)
        }
    }
}
```

=== “Swift”

```swift title=""
func algorithm(n: Int) {
    var a = 1 // +0 (technique 1)
    a = a + n // +0 (technique 1)
    // +n (technique 2)
    for _ in 0 ..< (5 * n + 1) {
        print(0)
    }
    // +n*n (technique 3)
    for _ in 0 ..< (2 * n) {
        for _ in 0 ..< (n + 1) {
            print(0)
        }
    }
}
```

=== “JS”

```javascript title=""
function algorithm(n) {
    let a = 1;  // +0 (technique 1)
    a = a + n;  // +0 (technique 1)
    // +n (technique 2)
    for (let i = 0; i < 5 * n + 1; i++) {
        console.log(0);
    }
    // +n*n (technique 3)
    for (let i = 0; i < 2 * n; i++) {
        for (let j = 0; j < n + 1; j++) {
            console.log(0);
        }
    }
}
```

=== “TS”

```typescript title=""
function algorithm(n: number): void {
    let a = 1;  // +0 (technique 1)
    a = a + n;  // +0 (technique 1)
    // +n (technique 2)
    for (let i = 0; i < 5 * n + 1; i++) {
        console.log(0);
    }
    // +n*n (technique 3)
    for (let i = 0; i < 2 * n; i++) {
        for (let j = 0; j < n + 1; j++) {
            console.log(0);
        }
    }
}
```

=== “Dart”

```dart title=""
void algorithm(int n) {
  int a = 1; // +0 (technique 1)
  a = a + n; // +0 (technique 1)
  // +n (technique 2)
  for (int i = 0; i < 5 * n + 1; i++) {
    print(0);
  }
  // +n*n (technique 3)
  for (int i = 0; i < 2 * n; i++) {
    for (int j = 0; j < n + 1; j++) {
      print(0);
    }
  }
}
```

=== “Rust”

```rust title=""
fn algorithm(n: i32) {
    let mut a = 1;    // +0 (technique 1)
    a = a + n;        // +0 (technique 1)

    // +n (technique 2)
    for _ in 0..(5 * n + 1) {
        println!("{}", 0);
    }

    // +n*n (technique 3)
    for _ in 0..(2 * n) {
        for _ in 0..(n + 1) {
            println!("{}", 0);
        }
    }
}
```

=== “C”

```c title=""
void algorithm(int n) {
    int a = 1;  // +0 (technique 1)
    a = a + n;  // +0 (technique 1)
    // +n (technique 2)
    for (int i = 0; i < 5 * n + 1; i++) {
        printf("%d", 0);
    }
    // +n*n (technique 3)
    for (int i = 0; i < 2 * n; i++) {
        for (int j = 0; j < n + 1; j++) {
            printf("%d", 0);
        }
    }
}
```

=== “Zig”

```zig title=""

```

The formulas below show the counts before and after applying these techniques; both yield a time complexity of $O(n^2)$:

$$\begin{aligned} T(n) & = 2n(n + 1) + (5n + 1) + 2 & \text{complete count (-.-|||)} \newline & = 2n^2 + 7n + 3 \newline T(n) & = n^2 + n & \text{lazy count (o.O)} \end{aligned}$$

Step 2: Determine the asymptotic upper bound

The time complexity is determined by the highest-order term of the polynomial $T(n)$. This is because, as $n$ approaches infinity, the highest-order term dominates and the influence of all other terms can be ignored.

The table below shows some examples, several with exaggerated values, to emphasize that coefficients cannot change the order: as $n$ approaches infinity, these constants become insignificant.

Table: time complexity for different operation counts

| Operation count $T(n)$ | Time complexity $O(f(n))$ |
| ---------------------- | ------------------------- |
| $100000$               | $O(1)$                    |
| $3n + 2$               | $O(n)$                    |
| $2n^2 + 3n + 2$        | $O(n^2)$                  |
| $n^3 + 10000n^2$       | $O(n^3)$                  |
| $2^n + 10000n^{10000}$ | $O(2^n)$                  |
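
To see numerically why only the highest-order term matters, here is a quick, illustrative check in Python (the function `T` is a hypothetical count taken from the table's $2n^2 + 3n + 2$ row, not code from the book):

```python
def T(n: int) -> int:
    # Operation count from the table: 2n^2 + 3n + 2
    return 2 * n * n + 3 * n + 2

# As n grows, T(n) / n^2 approaches the coefficient 2, so the n^2 term
# alone determines the order of growth.
for n in (10, 1000, 100000):
    print(n, T(n) / (n * n))
```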

Common types

Let the input data size be $n$; the common time complexity types are shown in the figure below (arranged from low to high):

$$\begin{aligned} O(1) < O(\log n) < O(n) < O(n \log n) < O(n^2) < O(2^n) < O(n!) \newline \text{constant} < \text{logarithmic} < \text{linear} < \text{linear-logarithmic} < \text{quadratic} < \text{exponential} < \text{factorial} \end{aligned}$$


Constant order $O(1)$

The number of constant-order operations is independent of the input data size $n$, i.e., it does not change as $n$ changes.

In the following function, although the operation count `size` may be large, the time complexity is still $O(1)$ because it is unrelated to the input data size $n$:

=== “Python”

```python title="time_complexity.py"
[class]{}-[func]{constant}
```

=== “C++”

```cpp title="time_complexity.cpp"
[class]{}-[func]{constant}
```

=== “Java”

```java title="time_complexity.java"
[class]{time_complexity}-[func]{constant}
```

=== “C#”

```csharp title="time_complexity.cs"
[class]{time_complexity}-[func]{constant}
```

=== “Go”

```go title="time_complexity.go"
[class]{}-[func]{constant}
```

=== “Swift”

```swift title="time_complexity.swift"
[class]{}-[func]{constant}
```

=== “JS”

```javascript title="time_complexity.js"
[class]{}-[func]{constant}
```

=== “TS”

```typescript title="time_complexity.ts"
[class]{}-[func]{constant}
```

=== “Dart”

```dart title="time_complexity.dart"
[class]{}-[func]{constant}
```

=== “Rust”

```rust title="time_complexity.rs"
[class]{}-[func]{constant}
```

=== “C”

```c title="time_complexity.c"
[class]{}-[func]{constant}
```

=== “Zig”

```zig title="time_complexity.zig"
[class]{}-[func]{constant}
```
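
The tabbed listings above pull the `constant` function from the book's companion source files at build time. As a minimal, illustrative Python sketch of the idea (an assumption, not necessarily the published code):

```python
def constant(n: int) -> int:
    """Constant order: the operation count does not depend on n."""
    count = 0
    size = 100000  # large, but fixed and independent of n
    for _ in range(size):
        count += 1
    return count
```

No matter how large `n` becomes, the function performs the same number of operations.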

Linear order $O(n)$

The number of linear-order operations grows linearly with the input data size $n$. Linear order usually appears in single-level loops:

=== “Python”

```python title="time_complexity.py"
[class]{}-[func]{linear}
```

=== “C++”

```cpp title="time_complexity.cpp"
[class]{}-[func]{linear}
```

=== “Java”

```java title="time_complexity.java"
[class]{time_complexity}-[func]{linear}
```

=== “C#”

```csharp title="time_complexity.cs"
[class]{time_complexity}-[func]{linear}
```

=== “Go”

```go title="time_complexity.go"
[class]{}-[func]{linear}
```

=== “Swift”

```swift title="time_complexity.swift"
[class]{}-[func]{linear}
```

=== “JS”

```javascript title="time_complexity.js"
[class]{}-[func]{linear}
```

=== “TS”

```typescript title="time_complexity.ts"
[class]{}-[func]{linear}
```

=== “Dart”

```dart title="time_complexity.dart"
[class]{}-[func]{linear}
```

=== “Rust”

```rust title="time_complexity.rs"
[class]{}-[func]{linear}
```

=== “C”

```c title="time_complexity.c"
[class]{}-[func]{linear}
```

=== “Zig”

```zig title="time_complexity.zig"
[class]{}-[func]{linear}
```
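
The `linear` listings above are likewise injected at build time; a minimal sketch of the idea in Python (illustrative only):

```python
def linear(n: int) -> int:
    """Linear order: one operation per loop round, n rounds in total."""
    count = 0
    for _ in range(n):
        count += 1
    return count
```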

Operations such as traversing an array or a linked list have time complexity $O(n)$, where $n$ is the length of the array or list:

=== “Python”

```python title="time_complexity.py"
[class]{}-[func]{array_traversal}
```

=== “C++”

```cpp title="time_complexity.cpp"
[class]{}-[func]{arrayTraversal}
```

=== “Java”

```java title="time_complexity.java"
[class]{time_complexity}-[func]{arrayTraversal}
```

=== “C#”

```csharp title="time_complexity.cs"
[class]{time_complexity}-[func]{arrayTraversal}
```

=== “Go”

```go title="time_complexity.go"
[class]{}-[func]{arrayTraversal}
```

=== “Swift”

```swift title="time_complexity.swift"
[class]{}-[func]{arrayTraversal}
```

=== “JS”

```javascript title="time_complexity.js"
[class]{}-[func]{arrayTraversal}
```

=== “TS”

```typescript title="time_complexity.ts"
[class]{}-[func]{arrayTraversal}
```

=== “Dart”

```dart title="time_complexity.dart"
[class]{}-[func]{arrayTraversal}
```

=== “Rust”

```rust title="time_complexity.rs"
[class]{}-[func]{array_traversal}
```

=== “C”

```c title="time_complexity.c"
[class]{}-[func]{arrayTraversal}
```

=== “Zig”

```zig title="time_complexity.zig"
[class]{}-[func]{arrayTraversal}
```
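
As with the other listings, `arrayTraversal` is injected at build time; a minimal Python sketch of array traversal (illustrative only):

```python
def array_traversal(nums: list[int]) -> int:
    """Traversing an array: the operation count equals the array length."""
    count = 0
    for _ in nums:  # one operation per element
        count += 1
    return count
```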

It is worth noting that the input data size $n$ must be determined according to the type of the input data: in the first example the variable `n` is the data size, while in the second example the array length `n` is the data size.

Quadratic order $O(n^2)$

The number of quadratic-order operations grows quadratically with the input data size $n$. Quadratic order usually appears in nested loops: both the outer and the inner loop are $O(n)$, so the total is $O(n^2)$:

=== “Python”

```python title="time_complexity.py"
[class]{}-[func]{quadratic}
```

=== “C++”

```cpp title="time_complexity.cpp"
[class]{}-[func]{quadratic}
```

=== “Java”

```java title="time_complexity.java"
[class]{time_complexity}-[func]{quadratic}
```

=== “C#”

```csharp title="time_complexity.cs"
[class]{time_complexity}-[func]{quadratic}
```

=== “Go”

```go title="time_complexity.go"
[class]{}-[func]{quadratic}
```

=== “Swift”

```swift title="time_complexity.swift"
[class]{}-[func]{quadratic}
```

=== “JS”

```javascript title="time_complexity.js"
[class]{}-[func]{quadratic}
```

=== “TS”

```typescript title="time_complexity.ts"
[class]{}-[func]{quadratic}
```

=== “Dart”

```dart title="time_complexity.dart"
[class]{}-[func]{quadratic}
```

=== “Rust”

```rust title="time_complexity.rs"
[class]{}-[func]{quadratic}
```

=== “C”

```c title="time_complexity.c"
[class]{}-[func]{quadratic}
```

=== “Zig”

```zig title="time_complexity.zig"
[class]{}-[func]{quadratic}
```
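
The `quadratic` listings above are injected at build time; a minimal Python sketch of nested $O(n)$ loops (illustrative only):

```python
def quadratic(n: int) -> int:
    """Quadratic order: an O(n) inner loop nested in an O(n) outer loop."""
    count = 0
    for _ in range(n):
        for _ in range(n):
            count += 1
    return count
```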

The figure below compares the constant-order, linear-order, and quadratic-order time complexities.


Taking bubble sort as an example: the outer loop runs $n - 1$ times, and the inner loop runs $n - 1$, $n - 2$, $\dots$, $2$, $1$ times, averaging $n / 2$ times, so the time complexity is $O((n - 1) n / 2) = O(n^2)$:

=== “Python”

```python title="time_complexity.py"
[class]{}-[func]{bubble_sort}
```

=== “C++”

```cpp title="time_complexity.cpp"
[class]{}-[func]{bubbleSort}
```

=== “Java”

```java title="time_complexity.java"
[class]{time_complexity}-[func]{bubbleSort}
```

=== “C#”

```csharp title="time_complexity.cs"
[class]{time_complexity}-[func]{bubbleSort}
```

=== “Go”

```go title="time_complexity.go"
[class]{}-[func]{bubbleSort}
```

=== “Swift”

```swift title="time_complexity.swift"
[class]{}-[func]{bubbleSort}
```

=== “JS”

```javascript title="time_complexity.js"
[class]{}-[func]{bubbleSort}
```

=== “TS”

```typescript title="time_complexity.ts"
[class]{}-[func]{bubbleSort}
```

=== “Dart”

```dart title="time_complexity.dart"
[class]{}-[func]{bubbleSort}
```

=== “Rust”

```rust title="time_complexity.rs"
[class]{}-[func]{bubble_sort}
```

=== “C”

```c title="time_complexity.c"
[class]{}-[func]{bubbleSort}
```

=== “Zig”

```zig title="time_complexity.zig"
[class]{}-[func]{bubbleSort}
```
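
The `bubbleSort` listings above come from the source files at build time. As an illustrative Python sketch (counting comparisons here, which may differ from the book's exact counting):

```python
def bubble_sort(nums: list[int]) -> int:
    """Bubble sort; returns the number of comparisons, about n(n-1)/2."""
    count = 0
    # Outer loop: the unsorted range [0, i] shrinks by one each round
    for i in range(len(nums) - 1, 0, -1):
        # Inner loop: bubble the largest element of nums[0..i] to position i
        for j in range(i):
            count += 1
            if nums[j] > nums[j + 1]:
                nums[j], nums[j + 1] = nums[j + 1], nums[j]
    return count
```

For a list of length 6, the comparison count is $5 + 4 + 3 + 2 + 1 = 15$, matching the $n(n-1)/2$ formula.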

Exponential order $O(2^n)$

Biological "cell division" is a classic example of exponential growth: the initial state is $1$ cell; after one round of division there are $2$, after two rounds $4$, and so on; after $n$ rounds there are $2^n$ cells.

The figure and code below simulate the cell-division process, whose time complexity is $O(2^n)$:

=== “Python”

```python title="time_complexity.py"
[class]{}-[func]{exponential}
```

=== “C++”

```cpp title="time_complexity.cpp"
[class]{}-[func]{exponential}
```

=== “Java”

```java title="time_complexity.java"
[class]{time_complexity}-[func]{exponential}
```

=== “C#”

```csharp title="time_complexity.cs"
[class]{time_complexity}-[func]{exponential}
```

=== “Go”

```go title="time_complexity.go"
[class]{}-[func]{exponential}
```

=== “Swift”

```swift title="time_complexity.swift"
[class]{}-[func]{exponential}
```

=== “JS”

```javascript title="time_complexity.js"
[class]{}-[func]{exponential}
```

=== “TS”

```typescript title="time_complexity.ts"
[class]{}-[func]{exponential}
```

=== “Dart”

```dart title="time_complexity.dart"
[class]{}-[func]{exponential}
```

=== “Rust”

```rust title="time_complexity.rs"
[class]{}-[func]{exponential}
```

=== “C”

```c title="time_complexity.c"
[class]{}-[func]{exponential}
```

=== “Zig”

```zig title="time_complexity.zig"
[class]{}-[func]{exponential}
```
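
The `exponential` listings above are injected at build time; a minimal Python sketch of the cell-division simulation (illustrative only):

```python
def exponential(n: int) -> int:
    """Simulate cell division: the population doubles each round."""
    count, cells = 0, 1
    for _ in range(n):
        # Every existing cell divides once this round
        for _ in range(cells):
            count += 1
        cells *= 2
    # Total divisions: 1 + 2 + 4 + ... + 2^(n-1) = 2^n - 1
    return count
```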

Insert image description here

In real algorithms, exponential order often appears in recursive functions. For example, the following code recursively splits into two, stopping after $n$ splits:

=== “Python”

```python title="time_complexity.py"
[class]{}-[func]{exp_recur}
```

=== “C++”

```cpp title="time_complexity.cpp"
[class]{}-[func]{expRecur}
```

=== “Java”

```java title="time_complexity.java"
[class]{time_complexity}-[func]{expRecur}
```

=== “C#”

```csharp title="time_complexity.cs"
[class]{time_complexity}-[func]{expRecur}
```

=== “Go”

```go title="time_complexity.go"
[class]{}-[func]{expRecur}
```

=== “Swift”

```swift title="time_complexity.swift"
[class]{}-[func]{expRecur}
```

=== “JS”

```javascript title="time_complexity.js"
[class]{}-[func]{expRecur}
```

=== “TS”

```typescript title="time_complexity.ts"
[class]{}-[func]{expRecur}
```

=== “Dart”

```dart title="time_complexity.dart"
[class]{}-[func]{expRecur}
```

=== “Rust”

```rust title="time_complexity.rs"
[class]{}-[func]{exp_recur}
```

=== “C”

```c title="time_complexity.c"
[class]{}-[func]{expRecur}
```

=== “Zig”

```zig title="time_complexity.zig"
[class]{}-[func]{expRecur}
```
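
The `expRecur` listings above are injected at build time; a minimal Python sketch of the recursive splitting (illustrative only):

```python
def exp_recur(n: int) -> int:
    """Each call spawns two recursive calls until depth n: O(2^n) calls."""
    if n == 1:
        return 1
    return exp_recur(n - 1) + exp_recur(n - 1) + 1
```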

Exponential growth is extremely rapid, and it commonly appears in exhaustive methods (brute-force search, backtracking, etc.). For problems with large data sizes, exponential order is unacceptable; such problems usually call for dynamic programming or greedy algorithms instead.

Logarithmic order $O(\log n)$

In contrast to exponential order, logarithmic order reflects a "halve each round" pattern. Let the input data size be $n$; since each round reduces it by half, the number of loop iterations is $\log_2 n$, the inverse function of $2^n$.

The following figure and code simulate the process of “halving each round,” with a time complexity of $O(\log_2 n)$, abbreviated as $O(\log n)$:

=== “Python”

```python title="time_complexity.py"
[class]{}-[func]{logarithmic}
```

=== “C++”

```cpp title="time_complexity.cpp"
[class]{}-[func]{logarithmic}
```

=== “Java”

```java title="time_complexity.java"
[class]{time_complexity}-[func]{logarithmic}
```

=== “C#”

```csharp title="time_complexity.cs"
[class]{time_complexity}-[func]{logarithmic}
```

=== “Go”

```go title="time_complexity.go"
[class]{}-[func]{logarithmic}
```

=== “Swift”

```swift title="time_complexity.swift"
[class]{}-[func]{logarithmic}
```

=== “JS”

```javascript title="time_complexity.js"
[class]{}-[func]{logarithmic}
```

=== “TS”

```typescript title="time_complexity.ts"
[class]{}-[func]{logarithmic}
```

=== “Dart”

```dart title="time_complexity.dart"
[class]{}-[func]{logarithmic}
```

=== “Rust”

```rust title="time_complexity.rs"
[class]{}-[func]{logarithmic}
```

=== “C”

```c title="time_complexity.c"
[class]{}-[func]{logarithmic}
```

=== “Zig”

```zig title="time_complexity.zig"
[class]{}-[func]{logarithmic}
```
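The tabs above stand in for the book's source listings. A minimal Python sketch of the halving loop (the name `logarithmic` is assumed) could be:

```python
def logarithmic(n: int) -> int:
    """Halve n each round and count the rounds: about log2(n) iterations."""
    count = 0
    while n > 1:
        n = n // 2  # reduce the remaining size by half
        count += 1
    return count
```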


Similar to the exponential order, the logarithmic order also frequently appears in recursive functions. The following code forms a recursive tree of height $\log_2 n$:

=== “Python”

```python title="time_complexity.py"
[class]{}-[func]{log_recur}
```

=== “C++”

```cpp title="time_complexity.cpp"
[class]{}-[func]{logRecur}
```

=== “Java”

```java title="time_complexity.java"
[class]{time_complexity}-[func]{logRecur}
```

=== “C#”

```csharp title="time_complexity.cs"
[class]{time_complexity}-[func]{logRecur}
```

=== “Go”

```go title="time_complexity.go"
[class]{}-[func]{logRecur}
```

=== “Swift”

```swift title="time_complexity.swift"
[class]{}-[func]{logRecur}
```

=== “JS”

```javascript title="time_complexity.js"
[class]{}-[func]{logRecur}
```

=== “TS”

```typescript title="time_complexity.ts"
[class]{}-[func]{logRecur}
```

=== “Dart”

```dart title="time_complexity.dart"
[class]{}-[func]{logRecur}
```

=== “Rust”

```rust title="time_complexity.rs"
[class]{}-[func]{log_recur}
```

=== “C”

```c title="time_complexity.c"
[class]{}-[func]{logRecur}
```

=== “Zig”

```zig title="time_complexity.zig"
[class]{}-[func]{logRecur}
```
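As with the other listings, the code itself lives in the book's source files; a minimal Python sketch of a recursion whose depth is $\log_2 n$ (the name `log_recur` is assumed) might look like:

```python
def log_recur(n: int) -> int:
    """Recurse on n // 2 until n <= 1; the recursion depth is about log2(n)."""
    if n <= 1:
        return 0
    return log_recur(n // 2) + 1
```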

The logarithmic order often appears in algorithms based on the divide-and-conquer strategy, embodying the algorithmic ideas of “dividing one into many” and “simplifying the complex.” It grows slowly and is the ideal time complexity second only to constant order.

!!! tip "What is the base of $O(\log n)$?"

    To be precise, “splitting one into $m$” corresponds to a time complexity of $O(\log_m n)$. By the change-of-base formula for logarithms, we can obtain equal time complexities with different bases:

    $$
    O(\log_m n) = O(\log_k n / \log_k m) = O(\log_k n)
    $$

    That is, the base $m$ can be changed without affecting the complexity. We therefore usually omit the base $m$ and write the logarithmic order simply as $O(\log n)$.

Linear-logarithmic order $O(n \log n)$

The linear-logarithmic order often appears in nested loops, where the two loop levels have time complexities of $O(\log n)$ and $O(n)$ respectively. The relevant code is as follows:

=== “Python”

```python title="time_complexity.py"
[class]{}-[func]{linear_log_recur}
```

=== “C++”

```cpp title="time_complexity.cpp"
[class]{}-[func]{linearLogRecur}
```

=== “Java”

```java title="time_complexity.java"
[class]{time_complexity}-[func]{linearLogRecur}
```

=== “C#”

```csharp title="time_complexity.cs"
[class]{time_complexity}-[func]{linearLogRecur}
```

=== “Go”

```go title="time_complexity.go"
[class]{}-[func]{linearLogRecur}
```

=== “Swift”

```swift title="time_complexity.swift"
[class]{}-[func]{linearLogRecur}
```

=== “JS”

```javascript title="time_complexity.js"
[class]{}-[func]{linearLogRecur}
```

=== “TS”

```typescript title="time_complexity.ts"
[class]{}-[func]{linearLogRecur}
```

=== “Dart”

```dart title="time_complexity.dart"
[class]{}-[func]{linearLogRecur}
```

=== “Rust”

```rust title="time_complexity.rs"
[class]{}-[func]{linear_log_recur}
```

=== “C”

```c title="time_complexity.c"
[class]{}-[func]{linearLogRecur}
```

=== “Zig”

```zig title="time_complexity.zig"
[class]{}-[func]{linearLogRecur}
```
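For illustration, here is a minimal Python sketch (the name `linear_log_recur` is assumed) that pairs an $O(\log n)$ recursive split with an $O(n)$ loop at each level:

```python
def linear_log_recur(n: int) -> int:
    """Split in half per level (log n levels) and do O(n) work at each level."""
    if n <= 1:
        return 1
    # Two recursive halves: log2(n) levels in total
    count = linear_log_recur(n // 2) + linear_log_recur(n // 2)
    # Linear pass over the current level
    for _ in range(n):
        count += 1
    return count
```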

The figure below shows how the linear-logarithmic order arises. The total number of operations at each level of the binary tree is $n$, and the tree has $\log_2 n + 1$ levels, so the time complexity is $O(n \log n)$.


Mainstream sorting algorithms usually have a time complexity of $O(n \log n)$, for example quick sort, merge sort, and heap sort.

Factorial order $O(n!)$

The factorial order corresponds to the “full permutation” problem in mathematics: given $n$ distinct elements, find all possible ways to arrange them. The number of arrangements is:

$$
n! = n \times (n - 1) \times (n - 2) \times \dots \times 2 \times 1
$$

Factorials are usually computed by recursion. As shown in the figure and the code below, the first level splits into $n$ branches, the second level into $n - 1$, and so on, until splitting stops at the $n$-th level:

=== “Python”

```python title="time_complexity.py"
[class]{}-[func]{factorial_recur}
```

=== “C++”

```cpp title="time_complexity.cpp"
[class]{}-[func]{factorialRecur}
```

=== “Java”

```java title="time_complexity.java"
[class]{time_complexity}-[func]{factorialRecur}
```

=== “C#”

```csharp title="time_complexity.cs"
[class]{time_complexity}-[func]{factorialRecur}
```

=== “Go”

```go title="time_complexity.go"
[class]{}-[func]{factorialRecur}
```

=== “Swift”

```swift title="time_complexity.swift"
[class]{}-[func]{factorialRecur}
```

=== “JS”

```javascript title="time_complexity.js"
[class]{}-[func]{factorialRecur}
```

=== “TS”

```typescript title="time_complexity.ts"
[class]{}-[func]{factorialRecur}
```

=== “Dart”

```dart title="time_complexity.dart"
[class]{}-[func]{factorialRecur}
```

=== “Rust”

```rust title="time_complexity.rs"
[class]{}-[func]{factorial_recur}
```

=== “C”

```c title="time_complexity.c"
[class]{}-[func]{factorialRecur}
```

=== “Zig”

```zig title="time_complexity.zig"
[class]{}-[func]{factorialRecur}
```
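As a sketch (the function name `factorial_recur` is assumed), the factorial-order recursion can be written in Python as:

```python
def factorial_recur(n: int) -> int:
    """Level 1 splits into n branches, level 2 into n - 1, ...; count the leaves."""
    if n == 0:
        return 1
    count = 0
    # Spawn n subcalls, each of which spawns n - 1, and so on
    for _ in range(n):
        count += factorial_recur(n - 1)
    return count
```

The number of leaves of this call tree is exactly $n!$.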


Note that since $n! > 2^n$ always holds when $n \geq 4$, the factorial order grows even faster than the exponential order, and is likewise unacceptable for larger $n$.

Worst, best, and average time complexity

The time efficiency of an algorithm is often not fixed but depends on the distribution of the input data. Suppose the input is an array `nums` of length $n$, consisting of the numbers $1$ to $n$ with each number appearing exactly once, but with the element order randomly shuffled; the task is to return the index of the element $1$. We can draw the following conclusions.

  • When `nums = [?, ?, ..., 1]`, i.e. when the last element is $1$, the array must be traversed completely, reaching the worst time complexity $O(n)$.
  • When `nums = [1, ?, ?, ...]`, i.e. when the first element is $1$, the traversal can stop at once no matter how long the array is, reaching the best time complexity $\Omega(1)$.

The “worst time complexity” corresponds to the asymptotic upper bound of the function, denoted by the big-$O$ notation. Correspondingly, the “best time complexity” corresponds to the asymptotic lower bound, denoted by the symbol $\Omega$:

=== “Python”

```python title="worst_best_time_complexity.py"
[class]{}-[func]{random_numbers}

[class]{}-[func]{find_one}
```

=== “C++”

```cpp title="worst_best_time_complexity.cpp"
[class]{}-[func]{randomNumbers}

[class]{}-[func]{findOne}
```

=== “Java”

```java title="worst_best_time_complexity.java"
[class]{worst_best_time_complexity}-[func]{randomNumbers}

[class]{worst_best_time_complexity}-[func]{findOne}
```

=== “C#”

```csharp title="worst_best_time_complexity.cs"
[class]{worst_best_time_complexity}-[func]{randomNumbers}

[class]{worst_best_time_complexity}-[func]{findOne}
```

=== “Go”

```go title="worst_best_time_complexity.go"
[class]{}-[func]{randomNumbers}

[class]{}-[func]{findOne}
```

=== “Swift”

```swift title="worst_best_time_complexity.swift"
[class]{}-[func]{randomNumbers}

[class]{}-[func]{findOne}
```

=== “JS”

```javascript title="worst_best_time_complexity.js"
[class]{}-[func]{randomNumbers}

[class]{}-[func]{findOne}
```

=== “TS”

```typescript title="worst_best_time_complexity.ts"
[class]{}-[func]{randomNumbers}

[class]{}-[func]{findOne}
```

=== “Dart”

```dart title="worst_best_time_complexity.dart"
[class]{}-[func]{randomNumbers}

[class]{}-[func]{findOne}
```

=== “Rust”

```rust title="worst_best_time_complexity.rs"
[class]{}-[func]{random_numbers}

[class]{}-[func]{find_one}
```

=== “C”

```c title="worst_best_time_complexity.c"
[class]{}-[func]{randomNumbers}

[class]{}-[func]{findOne}
```

=== “Zig”

```zig title="worst_best_time_complexity.zig"
// Generate an array with elements { 1, 2, ..., n }, in shuffled order
pub fn randomNumbers(comptime n: usize) [n]i32 {
    var nums: [n]i32 = undefined;
    // Generate the array nums = { 1, 2, 3, ..., n }
    for (nums) |*num, i| {
        num.* = @intCast(i32, i) + 1;
    }
    // Shuffle the array elements randomly
    const rand = std.crypto.random;
    rand.shuffle(i32, &nums);
    return nums;
}

// Find the index of the number 1 in the array nums
pub fn findOne(nums: []i32) i32 {
    for (nums) |num, i| {
        // When element 1 is at the head of the array, the best time complexity O(1) is reached
        // When element 1 is at the tail of the array, the worst time complexity O(n) is reached
        if (num == 1) return @intCast(i32, i);
    }
    return -1;
}
```

It is worth mentioning that the best time complexity is rarely used in practice, because it is usually reached only with a small probability and can be misleading. The worst time complexity is more practical, because it gives a guaranteed bound on efficiency, so we can use the algorithm with confidence.

As the examples above show, the worst and best time complexities occur only under “special data distributions.” Such cases may arise with very small probability and thus cannot truly reflect the algorithm's running efficiency. In contrast, the average time complexity reflects the algorithm's running efficiency under random input data, and is denoted by the symbol $\Theta$.

For some algorithms we can directly derive the average case under a random data distribution. For example, in the problem above, since the input array is shuffled, element $1$ is equally likely to appear at any index, so the average number of loop iterations is half the array length, $n / 2$, and the average time complexity is $\Theta(n / 2) = \Theta(n)$.
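This average can also be checked empirically. The sketch below (the helper names `find_one` and `average_steps` are assumptions for illustration) shuffles the array many times and measures the mean number of scan steps, which should come out near $n / 2$:

```python
import random

def find_one(nums: list[int]) -> int:
    """Linear scan: return the index of the value 1, or -1 if absent."""
    for i, num in enumerate(nums):
        if num == 1:
            return i
    return -1

def average_steps(n: int, trials: int = 10_000) -> float:
    """Estimate the mean number of loop iterations over random shuffles."""
    total = 0
    for _ in range(trials):
        nums = list(range(1, n + 1))
        random.shuffle(nums)
        total += find_one(nums) + 1  # steps taken = index + 1
    return total / trials
```

For $n = 100$ the estimate clusters around $(n + 1) / 2 = 50.5$, consistent with $\Theta(n / 2) = \Theta(n)$.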

But for more complex algorithms, the average time complexity is often hard to calculate, because it is difficult to analyze the overall mathematical expectation under the data distribution. In such cases, we usually take the worst time complexity as the criterion for the algorithm's efficiency.


Origin blog.csdn.net/zy_dreamer/article/details/132910908