Java time complexity and space complexity (detailed explanation)

Table of contents

1. Complexity analysis

2. Time complexity

Big O asymptotic notation

3. Space complexity


1. Complexity analysis

When we design an algorithm, how do we measure its quality?

Once an algorithm is written as an executable program, running it consumes time resources and space (memory) resources. Therefore, the quality of an algorithm is generally measured from two aspects: time and space.

2. Time complexity

We could run the code and measure how long an algorithm actually takes. However, testing every algorithm this way is time-consuming and labor-intensive, and the results depend heavily on the test environment: different hardware affects the results, and so does the scale of the test data.

If we assume every line of code takes the same time to execute, then the factor that determines the running time is the number of times each statement executes; that is, the time is proportional to the statement execution count.

Time complexity: the number of times the basic operations in the algorithm are performed

public static int factorial(int N) {
    int ret = 1;
    for (int i = 2; i <= N; i++) {
        ret *= i;
    }
    return ret;
}

The above code computes the factorial of N. The statement before the for loop executes once; the loop update (i++) and the loop body (ret *= i) each execute N-1 times. So the statements execute 1 + 2*(N-1) = 2*N-1 times in total; the exact count depends on the input N.

When N = 100, the code is executed 199 times

When N = 1000, the code is executed 1999 times

When N = 10000, the code is executed 19999 times

……

The larger N is, the smaller the impact of the constant -1 and the coefficient 2 on the execution count.
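To make the count concrete, here is a hypothetical instrumented sketch (the class and the countOps method are made up for illustration) that tallies statement executions for the factorial code above:

```java
public class OpCount {
    // Returns the total number of statement executions for input N.
    static long countOps(int N) {
        long ops = 0;
        ops++;                      // int ret = 1; runs once
        for (int i = 2; i <= N; i++) {
            ops++;                  // i++ runs N - 1 times
            ops++;                  // ret *= i runs N - 1 times
        }
        return ops;                 // total: 1 + 2*(N - 1) = 2*N - 1
    }

    public static void main(String[] args) {
        System.out.println(countOps(100));   // 199
        System.out.println(countOps(1000));  // 1999
    }
}
```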

Therefore, when calculating time complexity we do not need the precise execution count, only its approximate order of growth, which we express with Big O asymptotic notation.

Big O asymptotic notation

Big O notation : mathematical notation used to describe the asymptotic behavior of functions

Derive the Big O method:

1. Replace all additive constants in the run-time function with the constant 1

2. In the resulting function, keep only the highest-order term

3. If the highest-order term exists and its coefficient is not 1, remove that coefficient

For example, for the factorial code above, the execution count is 2*N-1. Applying Big O notation: drop the -1, which barely affects the result, then remove the coefficient 2 from the highest-order term, giving O(N).

For the same algorithmic problem, different inputs can lead to different execution counts.

For example, when searching for a value x in an array of length N, it may be found after one comparison, or not found at all after N comparisons.

We therefore distinguish best, average, and worst cases:

Best case : minimum number of runs for any input size

Average case : expected number of runs for any input size

Worst case : maximum number of runs for any input size

We generally focus on the worst-case behavior of an algorithm.
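For instance, a linear-search sketch makes the three cases concrete (the class and the searchSteps helper are made up for illustration):

```java
public class LinearSearch {
    // Returns the number of comparisons made while searching for x.
    static int searchSteps(int[] arr, int x) {
        int steps = 0;
        for (int v : arr) {
            steps++;               // one comparison per element inspected
            if (v == x) return steps;
        }
        return steps;              // x absent: all N elements were checked
    }

    public static void main(String[] args) {
        int[] a = {5, 3, 9, 7};
        System.out.println(searchSteps(a, 5)); // best case: 1 step -> O(1)
        System.out.println(searchSteps(a, 2)); // worst case: 4 steps -> O(N)
    }
}
```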

We use some examples to further familiarize ourselves with the asymptotic notation of Big O:

Since we drop terms that have little impact on the result, we likewise do not need to analyze statements that have little impact on the execution count.

Example 1:

public static int add(int a, int b) {
    int ret = a + b;
    return ret;
}

The above code executes a constant number of statements regardless of its inputs; replacing the additive constant with 1 gives O(1).

Example 2:

int fun(int N, int M) {
    int count = 0;
    for (int i = 0; i < N; i++) {
        count++;
    }
    for (int i = 0; i < M; i++) {
        count++;
    }
    return count;
}

The above code contains two for loops. The statement with the greatest impact on the execution count (i.e., the one executed most often) is count++, which runs a total of N+M times.

When N is much larger than M, the time complexity is O(N)

When M is much larger than N, the time complexity is O(M)

When M is about the same size as N, the time complexity is O(M+N)

Since the size relationship between N and M is not explained, the time complexity is O(N+M)
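As a quick sanity check, the value returned by the function above is exactly the number of count++ executions, N + M (the wrapper class name here is made up for illustration):

```java
public class TwoLoops {
    // Same code as above: counts how many times count++ runs.
    static int fun(int N, int M) {
        int count = 0;
        for (int i = 0; i < N; i++) {
            count++;   // runs N times
        }
        for (int i = 0; i < M; i++) {
            count++;   // runs M times
        }
        return count;  // N + M increments in total
    }

    public static void main(String[] args) {
        System.out.println(fun(3, 4));    // 7
        System.out.println(fun(100, 5));  // 105
    }
}
```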

Example 3:

    void fun(int N){
        int count = 0;
        for (int i = 0; i < N; i++) {
            for (int j = 0; j < i; j++) {
                count += j;
            }
        }
    }

 There is a nested loop in the above code. The statement that has the greatest impact on the number of executions is count += j. Let’s analyze it.

When i = 0, the statement is executed 0 times,

When i = 1, the statement is executed once,

When i = 2, the statement is executed 2 times,

……

When i = N-1, the statement is executed N-1 times

Then the total number of executions is 0 + 1 + 2 + …… + (N-1). By the arithmetic-series summation formula, this equals (N-1)*N/2. The highest-order term is N^2/2; removing the coefficient 1/2, the time complexity is O(N^2).
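The sum can be verified by counting the inner-statement executions directly (class and method names are made up for illustration):

```java
public class NestedCount {
    // Returns how many times the innermost statement runs for input N.
    static long runs(int N) {
        long count = 0;
        for (int i = 0; i < N; i++) {
            for (int j = 0; j < i; j++) {
                count++;          // executed 0 + 1 + ... + (N-1) times
            }
        }
        return count;             // equals N*(N-1)/2
    }

    public static void main(String[] args) {
        System.out.println(runs(10));   // 45 == 10*9/2
        System.out.println(runs(100));  // 4950 == 100*99/2
    }
}
```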

Example 4:


    void fun(int M){
        int count = 0;
        for (int i = 0; i < 10; i++) {
            count++;
        }
    }

The time complexity is O(1)

Note: the count++ statement executes exactly 10 times, independent of M.

Example 5:

  
    public static int binarySearch(int[] arr, int x) {
        if (arr.length == 0) {
            return -1;
        }
        int left = 0;
        int right = arr.length - 1;
        while (left <= right) {
            int mid = left + ((right - left) >> 1);
            if (arr[mid] == x) {
                return mid;
            } else if (arr[mid] > x) {
                right = mid - 1;  // exclude mid, otherwise the loop may never terminate
            } else {
                left = mid + 1;
            }
        }
        return -1;
    }

The above code is a binary search. Consider the worst case scenario, that is, it cannot be found:

Let’s first analyze how binary search finds data:

Binary search requires the array arr to be sorted. Each step computes the middle index mid, splitting the array into two parts. If x < arr[mid], the right part is excluded and the search continues on the left; if x > arr[mid], the left part is excluded and the search continues on the right; if x == arr[mid], the target has been found and its index is returned.


Since a binary search will exclude half of the elements in the array, that is

N/2

N/4

N/8

……

1 — the loop ends when only one element remains

Therefore 1*2*2*……*2 = N, that is, 2^x = N, so the number of iterations is x = log2 N.

The time complexity is O(log2 N), usually written O(log N), since the base of the logarithm only changes a constant factor.
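The logarithmic step count can be checked empirically; the sketch below (class and method names are made up) always moves right, simulating the worst case where the target is larger than every element:

```java
public class BinarySteps {
    // Worst-case loop-iteration count of binary search on n elements.
    static int steps(int n) {
        int left = 0, right = n - 1, s = 0;
        while (left <= right) {
            s++;
            int mid = left + ((right - left) >> 1);
            left = mid + 1;        // always discard the left half and mid
        }
        return s;                  // roughly log2(n) + 1 iterations
    }

    public static void main(String[] args) {
        System.out.println(steps(1000));     // 10
        System.out.println(steps(1000000));  // 20
    }
}
```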

Example 6:

    long factorial(int N){
        return N < 2? N: factorial(N-1)*N;
    }

The end condition of the above recursion is N < 2. When N >= 2, the recursion continues:

N

N-1

N-2

……

2

1 The recursion ends at this point

The number of recursions is N, so the time complexity is O(N)
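The claim can be checked by counting invocations; a minimal sketch (the class, the calls counter, and the countCalls helper are made up for illustration):

```java
public class FactorialDepth {
    static long calls;  // incremented once per invocation

    static long factorial(int N) {
        calls++;
        return N < 2 ? N : factorial(N - 1) * N;
    }

    // Resets the counter, runs the recursion, and reports the call count.
    static long countCalls(int N) {
        calls = 0;
        factorial(N);
        return calls;
    }

    public static void main(String[] args) {
        System.out.println(countCalls(10));  // 10 calls for N = 10
    }
}
```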

Example 7:

    int fibonacci(int N){
        return N < 2? N: fibonacci(N-1)+ fibonacci(N-2);
    }

Fibonacci recursion can be viewed as a binary tree: each stack frame created during the recursion is a node, and each node represents one function call. The recursion's time complexity is therefore the number of nodes in this tree.

The last layer of the tree may be incomplete, but since time complexity only requires an approximate count, the gaps in the last layer can be ignored.

Then the total number of calls is approximately 2^0 + 2^1 + ……+ 2^(N-1).

Using the geometric-series summation formula, this is 2^N - 1, so the time complexity is O(2^N).
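The exponential growth can be observed directly by counting calls; a small sketch (the class name and the countCalls helper are made up for illustration):

```java
public class FibCalls {
    static long calls;  // incremented on every invocation

    static int fibonacci(int N) {
        calls++;
        return N < 2 ? N : fibonacci(N - 1) + fibonacci(N - 2);
    }

    // Resets the counter, runs the recursion, and reports the call count.
    static long countCalls(int N) {
        calls = 0;
        fibonacci(N);
        return calls;
    }

    public static void main(String[] args) {
        for (int n = 5; n <= 25; n += 5) {
            System.out.println("N=" + n + " calls=" + countCalls(n));
        }
    }
}
```

The exact call count is 2*Fib(N+1) - 1, which grows roughly like 1.618^N; 2^N is the conventional upper bound used in the O(2^N) estimate.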

3. Space complexity

Space complexity measures the additional storage space an algorithm temporarily occupies while running. It counts the number of variables allocated, and its rules are essentially the same as those for time complexity, also using Big O asymptotic notation.

Example 1:

    void fun(int N){
        int count = 0;
        for (int i = 0; i < N; i++) {
            for (int j = 0; j < i; j++) {
                count += j;
            }
        }
    }

A constant number of extra variables (count, i, j) is allocated, so the space complexity is O(1)

Example 2:

    int[] fun(int N){
        int[] arr = new int[N];
        for (int i = 0; i < N; i++) {
            arr[i] = i;
        }
        return arr;
    }

An integer array of size N plus a constant number of extra variables are allocated, so the space complexity is O(N)

Example 3:

    long factorial(int N){
        return N < 2? N: factorial(N-1)*N;
    }

The above code recurses N times, creating N stack frames; each frame uses a constant amount of space, so the space complexity is O(N).

Origin blog.csdn.net/2301_76161469/article/details/132646916