[Algorithm] Time complexity and space complexity analysis


Preface

Interviews are getting harder and harder — the questions are basically "rocket science" — and algorithms are an indispensable part of them. The bar for programmers keeps rising, and without a solid algorithm foundation, getting into a big company is basically hopeless. So let's review the topic here and make up for the knowledge we handed back to our teachers (TT).

Algorithm time complexity analysis

Definition:

In algorithm analysis, the total number of statement executions T(n) is a function of the problem size n. We analyze how T(n) changes as n grows and determine the order of magnitude of T(n).

The time complexity of an algorithm is its time measure, denoted T(n) = O(f(n)). It means that as the problem size n increases, the growth rate of the algorithm's execution time is the same as the growth rate of f(n). This is called the asymptotic time complexity of the algorithm, or time complexity for short, where f(n) is some function of the problem size n.

Here, one thing needs to be clear:

Number of executions = execution time — that is, we measure an algorithm's running time by counting how many times its statements execute.

The notation that expresses algorithm time complexity with a capital O() is called big O notation. In general, as the input size n increases, the algorithm whose T(n) grows most slowly is the optimal algorithm.

The following uses Big O notation to express the time complexity of some summation algorithms:

Algorithm 1:

public static void main(String[] args) {
    int sum = 0;       // executed once
    int n = 100;       // executed once
    sum = (n+1)*n/2;   // executed once
    System.out.println("sum=" + sum);
}

Algorithm 2:

public static void main(String[] args) {
    int sum = 0;      // executed once
    int n = 100;      // executed once
    for (int i = 1; i <= n; i++) {
        sum += i;     // executed n times
    }
    System.out.println("sum=" + sum);
}

Algorithm 3:

public static void main(String[] args) {
    int sum = 0;  // executed once
    int n = 100;  // executed once
    for (int i = 1; i <= n; i++) {
        for (int j = 1; j <= n; j++) {
            sum += i;  // executed n^2 times
        }
    }
    System.out.println("sum=" + sum);
}

If we ignore the executions of the loop conditions and the output statements, then for input size n the above algorithms execute:

  • Algorithm 1: 3 times

  • Algorithm 2: n+3 times

  • Algorithm 3: n^2+2 times

Based on our analysis of the asymptotic growth of these functions, the following rules can be used to derive the big O order:

1. Replace all additive constants in the running time with the constant 1;

2. In the modified count, keep only the highest-order term;

3. If the highest-order term exists and its constant factor is not 1, remove that constant factor.

Therefore, the big O notation of the above algorithm is:

  • Algorithm 1: O(1)

  • Algorithm 2: O(n)

  • Algorithm 3: O(n^2)

Common Big O

1. Linear order

public static void main(String[] args) {
    int sum = 0;
    int n = 100;
    for (int i = 1; i <= n; i++) {
        sum += i;
    }
    System.out.println("sum=" + sum);
}

In the code above, the loop's time complexity is O(n), because the loop body executes n times.

2. Square order

public static void main(String[] args) {
    int sum = 0, n = 100;
    for (int i = 1; i <= n; i++) {
        for (int j = 1; j <= n; j++) {
            sum += i;
        }
    }
    System.out.println(sum);
}

In the code above, n = 100: every time the outer loop runs once, the inner loop runs 100 times, so to exit both loops the program executes the inner statement 100 * 100 times. That is the square of n, so the time complexity of this code is O(n^2).

3. Cubic order

public static void main(String[] args) {
    int x = 0, n = 100;
    for (int i = 1; i <= n; i++) {
        for (int j = 1; j <= n; j++) {
            for (int k = 1; k <= n; k++) {
                x++;
            }
        }
    }
    System.out.println(x);
}

In the code above, n = 100: every time the outer loop runs once, the middle loop runs 100 times, and every time the middle loop runs once, the innermost loop runs another 100 times. To exit all three loops, the program executes the innermost statement 100 * 100 * 100 times. That is the cube of n, so the time complexity of this code is O(n^3).

4. Logarithmic order

int i = 1, n = 100;
while (i < n) {
    i = i * 2;
}

Each time i is multiplied by 2, it gets one step closer to n. Suppose the loop exits after x multiplications by 2, i.e. when 2^x >= n; then x = log2(n), so the time complexity of this loop is O(log n).
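The classic algorithm with logarithmic cost is binary search on a sorted array: each comparison halves the remaining range, so at most about log2(n) comparisons are needed. A minimal sketch (the array contents are just sample data):

```java
public class BinarySearchDemo {
    // Each iteration halves the [lo, hi] range, so the loop
    // runs at most about log2(n) times: O(log n).
    static int binarySearch(int[] arr, int target) {
        int lo = 0, hi = arr.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            if (arr[mid] == target) return mid;
            else if (arr[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1; // not found
    }

    public static void main(String[] args) {
        int[] sorted = {0, 7, 8, 9, 10, 11, 22, 23};
        System.out.println(binarySearch(sorted, 22)); // prints 6
        System.out.println(binarySearch(sorted, 5));  // prints -1
    }
}
```

Note that binary search requires the array to be sorted first, unlike the linear scan used later in this article.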

5. Linear logarithmic order

for (int m = 1; m < n; m++) {
    int i = 1;
    while (i < n) {
        i = i * 2;
    }
}

The outer loop runs n times, and on each pass the inner while loop executes log2(n) times, so the time complexity of this loop is O(n log n).
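To make the n * log2(n) count concrete, here is a sketch that instruments the loop above with a counter; the countOps helper is added for illustration and is not part of the original code:

```java
public class LinearithmicDemo {
    // Count how many times the inner statement runs for a given n.
    static long countOps(int n) {
        long count = 0;
        for (int m = 1; m < n; m++) {
            int i = 1;
            while (i < n) {
                i = i * 2;
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // For n = 16: the outer loop runs 15 times, and the inner loop
        // doubles i through 1, 2, 4, 8 — four steps — so 15 * 4 = 60.
        System.out.println(countOps(16)); // prints 60
    }
}
```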

6. Constant order

public static void main(String[] args) {
    int n = 100;
    int i = n + 2;
    System.out.println(i);
}

The code above executes a fixed number of statements regardless of the input size n. According to the big O derivation rules, the constant is replaced with 1, so the time complexity of the code above is O(1).

The following is a summary of common time complexities, from low to high:

O(1) < O(log2 n) < O(n) < O(n log2 n) < O(n^2) < O(n^3) < … < O(2^n) < O(n!)


From the line chart analyzed earlier, we can see that starting from the square order, the time cost grows sharply as the input size increases. Therefore, we prefer algorithms with O(1), O(log n), O(n), or O(n log n) time complexity; if an algorithm turns out to be square order, cubic order, or worse, we consider it undesirable and in need of optimization.

Worst case

From a psychological point of view, everyone has expectations about what will happen. For example, on seeing a half-full glass of water, some people say, "Wow, there is still half a glass of water!" while others say, "Oh no, there is only half a glass left." Most people worry about failure and tend to plan for the worst. That way, even if the worst outcome occurs, they are psychologically prepared and find it easier to accept; and if the worst does not happen, they are pleasantly surprised.

Algorithm analysis is similar. Suppose there is a requirement:
an array stores n random numbers; find the position of a specified number in it.

public int search(int num) {
    int[] arr = {11, 10, 8, 9, 7, 22, 23, 0};
    for (int i = 0; i < arr.length; i++) {
        if (num == arr[i]) {
            return i;
        }
    }
    return -1;
}

Best case:

If the number we are looking for is 11, it is found on the very first comparison, and the time complexity of the algorithm is O(1).

Worst case:

If the last number, 0, is the one we are looking for, every element must be examined, and the time complexity of the algorithm is O(n).

Average situation:

On average, finding a number takes about n/2 comparisons; after dropping the constant factor, this is still O(n).

The worst case is a guarantee: it is the most basic assurance an application can give — that even in the worst case, the service still works normally. Therefore, unless otherwise specified, the running time we talk about is the worst-case running time.
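The best and worst cases above can be made measurable by counting comparisons; the comparisons counter below is added for illustration and is not part of the original search method:

```java
public class SearchCostDemo {
    static int comparisons; // tracks how many elements were inspected

    static int search(int[] arr, int num) {
        comparisons = 0;
        for (int i = 0; i < arr.length; i++) {
            comparisons++;
            if (num == arr[i]) {
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] arr = {11, 10, 8, 9, 7, 22, 23, 0};
        search(arr, 11);
        System.out.println(comparisons); // best case: 1 comparison
        search(arr, 0);
        System.out.println(comparisons); // worst case: 8 comparisons (n = 8)
    }
}
```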

Algorithm's space complexity analysis

Computer hardware and software have gone through a long evolution. Memory in particular — the environment programs run in — has grown from 512 KB in the early days, through 1 MB, 2 MB, 4 MB and so on, to today's 8 GB, 16 GB, or even 32 GB. In earlier times, the memory an algorithm occupied at run time was therefore a problem that often had to be considered. We can use the space complexity of an algorithm to describe its memory usage.

Common memory usage in java

1. Memory usage of the basic data types:

  • byte: 1 byte

  • short: 2 bytes

  • int: 4 bytes

  • long: 8 bytes

  • float: 4 bytes

  • double: 8 bytes

  • char: 2 bytes

  • boolean: 1 byte

2. The way computers access memory is one byte at a time

3. A reference (machine address) requires 8 bytes to represent:
for example:

Date date = new Date();

The date variable needs to take up 8 bytes to represent

4. Create an object, such as

new Date()

In addition to the memory occupied by the data stored in the Date object (such as year, month, and day), the object itself also has memory overhead: each object carries 16 bytes of overhead used to store its header information (JVM-related knowledge).

5. In general, if an object's memory usage is not a multiple of 8 bytes, it is automatically padded up to one:

public class Demo1 {
    public int a = 1;
}

Creating an object through new Demo1() occupies memory as follows:

1. Integer member variable a occupies 4 bytes

2. The object itself occupies 16 bytes

So creating the object requires 20 bytes in total, but since that is not a multiple of 8, it is automatically padded to 24 bytes.

6. Arrays in Java are also objects, and they generally need extra memory to record their length. An array of a primitive type generally needs 24 bytes of header information (16 bytes of object overhead, 4 bytes to store the length, and 4 padding bytes), plus the memory needed to store the values.

The space complexity of the algorithm

Understanding Java's most basic memory mechanisms can effectively help us estimate the memory usage of many programs.
The space complexity of an algorithm is written as:

S(n)=O(f(n))

where n is the input size, and f(n) is a function of n giving the amount of storage the statements occupy.

Requirement: reverse the elements of a given array and return the reversed array.

Solution 1: Use an intermediate variable temp to swap the first and last elements of the array in place, then the second and second-to-last, and so on.

public static int[] reverse1(int[] arr) {
    int n = arr.length;  // allocates 4 bytes
    int temp;            // allocates 4 bytes
    for (int start = 0, end = n - 1; start <= end; start++, end--) {
        temp = arr[start];
        arr[start] = arr[end];
        arr[end] = temp;
    }
    return arr;
}

Solution 2: Create a new array and copy the original array into it in reverse order.

public static int[] reverse2(int[] arr) {
    int n = arr.length;       // allocates 4 bytes
    int[] temp = new int[n];  // allocates n*4 bytes plus 24 bytes of array header overhead
    for (int i = n - 1; i >= 0; i--) {
        temp[n - 1 - i] = arr[i];
    }
    return temp;
}

Ignoring the memory occupied by the judgment condition, we get the following memory usage:

Algorithm 1:
Regardless of the size of the incoming array, it always allocates an extra 4 + 4 = 8 bytes;

Algorithm 2:
4 + 4n + 24 = 4n + 28 bytes;

According to the big-O derivation rule, the space complexity of algorithm one is O(1), and the space complexity of algorithm two is O(n), so from the perspective of space occupation, algorithm one is better than algorithm two.

In addition, if you think the article is useful to you, please click like below!
Your support is my motivation for persistence.


Origin blog.csdn.net/i_nclude/article/details/112622265