Time complexity T(n) = O(f(n)) and space complexity S(n) = O(f(n))

Original link: https://blog.csdn.net/jsjwk/article/details/84315770

An algorithm is a set of methods for manipulating data and solving programming problems. For the same problem, different algorithms may produce the same result, but the time and resources consumed in the process can differ greatly.

So how should we measure the pros and cons of different algorithms?

We mainly consider this along two dimensions: the "time" and the "space" an algorithm occupies.

Time dimension: the time consumed by executing the algorithm, which we usually describe with "time complexity".

Space dimension: how much memory space is required to execute the algorithm, which we usually describe with "space complexity".

Therefore, evaluating the efficiency of an algorithm mainly comes down to its time complexity and space complexity. However, time and space are sometimes like the proverbial fish and the bear's paw: you cannot have both, so we need to strike a balance between them.

Below, I introduce how to calculate "time complexity" and "space complexity" respectively.

1. Time complexity
In the time frequency T(n), n represents the size of the problem. As n changes, T(n) changes accordingly; the concept of time complexity was introduced to capture the law of this change. In general, the number of times an algorithm's basic operation is repeated is a function of the problem size n, and this function is the time frequency T(n). If there is an auxiliary function f(n) such that the limit of T(n)/f(n), as n tends to infinity, is a nonzero constant, then f(n) is a function of the same order of magnitude as T(n). We denote this T(n) = O(f(n)) and call it the asymptotic time complexity of the algorithm, or the time complexity for short.
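
For example, if T(n) = 2n + 1 and we take f(n) = n, then as n tends to infinity, T(n)/f(n) = (2n + 1)/n tends to the nonzero constant 2, so T(n) = O(n).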

This gives rise to a more general method: "big O notation", written T(n) = O(f(n)), which uses O(f(n)) to express the algorithm's time complexity. There are three rules for deriving the big O order (see the worked example after this list):
1. Replace all additive constants in the running time with the constant 1.
2. Keep only the highest-order term.
3. Remove the coefficient of the highest-order term.
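
For example, suppose a piece of code executes 3n² + 2n + 5 times: replacing the additive constant 5 with the constant 1 gives 3n² + 2n + 1, keeping only the highest-order term gives 3n², and removing its coefficient 3 gives n². So the time complexity is O(n²).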

Let's look at an example first:

for (i = 1; i <= n; ++i)
{
   j = i;
   j++;
}

Through the "big O notation", the time complexity of this code is: O(n), why?

In big O notation, the time complexity formula is T(n) = O(f(n)), where f(n) represents the sum of the number of times each line of code executes, and O denotes a proportional relationship. The full name of this formula is: the asymptotic time complexity of the algorithm.

Let's continue with the example above. Assume every line of code takes the same time to execute, which we will call one unit of time. Then the first line of this example takes 1 unit of time, the third line takes n units of time, and the fourth line also takes n units of time (the second and fifth lines are just braces and are ignored for now). The total time is 1 unit + n units + n units, that is, T(n) = (1 + 2n) units of time. From this result we can see that the time consumed by this algorithm changes as n changes, so we can simplify its time complexity to T(n) = O(n).
Why can we simplify it this way? Because big O notation is not meant to represent the actual execution time of the algorithm; it represents the trend of growth in execution time.
So in the above example, as n tends to infinity, the constant 1 in T(n) = (1 + 2n) units becomes meaningless, and so does the factor 2. Therefore it can be simplified directly to T(n) = O(n).
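
To make this growth trend concrete, here is a minimal Java sketch (the class and method names are my own, for illustration) that counts the operations of the example above and prints the totals for several values of n; the count comes out to exactly 1 + 2n, i.e. it grows linearly:

public class GrowthDemo {
    // Counts the basic operations of the example loop: one for the
    // initial assignment i = 1, plus two (j = i and j++) per iteration.
    static long countOperations(int n) {
        long count = 0;
        int j;
        count++;                     // i = 1
        for (int i = 1; i <= n; ++i) {
            j = i;
            count++;                 // j = i
            j++;
            count++;                 // j++
        }
        return count;
    }

    public static void main(String[] args) {
        for (int n : new int[]{10, 100, 1000}) {
            System.out.println("n = " + n + ": " + countOperations(n) + " operations");
        }
    }
}

Running it prints 21, 201, and 2001 operations: the constant 1 and the factor 2 quickly stop mattering compared to the overall linear growth.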

Common time complexity measures are:
1. Constant order O(1)
2. Logarithmic order O(log n)
3. Linear order O(n)
4. Linear-logarithmic order O(n log n)
5. Square order O(n²)
6. Cubic order O(n³)
7. K-th order O(n^k)
8. Exponential order O(2ⁿ)

The common time complexities, from fastest to slowest, compare as:
O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(n³) < O(2ⁿ) < O(n!)

Here are explanations of the more commonly used ones (not strictly in the order above):
Constant order O(1):
No matter how many lines of code are executed, as long as there are no complex structures such as loops, the time complexity of the code is O(1). For example:

int i = 1;
int j = 2;
++i;
j++;
int m = i + j;

When the above code executes, its time consumption does not grow with any variable, so no matter how long this kind of code is, even tens or hundreds of thousands of lines, its time complexity can be expressed as O(1).
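
Another typical O(1) case, sketched below with a hypothetical helper: reading an array element by index takes the same time no matter how large the array is.

// Hypothetical helper: a single indexed access, no loops -> O(1) time.
static int firstElement(int[] a) {
    return a[0];
}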

Linear order O(n):
This was explained in the first code example, such as:

for(i=1; i<=n; ++i)
{
    
    
   j = i;
   j++;
}

In this code, the body of the for loop executes n times, so the time it consumes grows as n grows; such code can use O(n) to express its time complexity.
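
A common real-world O(n) case, sketched with a hypothetical helper, is summing an array: every one of the n elements must be visited exactly once.

// Hypothetical helper: visits each of the n elements once -> O(n) time.
static long sum(int[] a) {
    long total = 0;
    for (int x : a) {
        total += x;
    }
    return total;
}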

Logarithmic order O(log n):
Let’s look at the code first:

int i = 1;
while (i < n)
{
    i = i * 2;
}

As you can see from the above code, in the while loop, i is multiplied by 2 on each iteration, so i gets closer and closer to n. Let's try to work it out: suppose that after looping x times, i becomes greater than or equal to n and the loop exits. That means 2 to the power x equals n, i.e. x = log₂n.
In other words, after looping log₂n times, the code ends. Therefore, the time complexity of this code is O(log n).
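
A classic real-world O(log n) algorithm is binary search on a sorted array: each comparison halves the remaining search range, so at most about log₂n comparisons are needed. A minimal sketch:

// Binary search: the range [lo, hi] halves on every iteration,
// so the loop runs at most about log2(n) times -> O(log n).
static int binarySearch(int[] sorted, int target) {
    int lo = 0, hi = sorted.length - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   // avoids overflow of lo + hi
        if (sorted[mid] == target) return mid;
        if (sorted[mid] < target) lo = mid + 1;
        else hi = mid - 1;
    }
    return -1;   // not found
}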

Linear-logarithmic order O(n log n):
Linear-logarithmic order O(n log n) is actually easy to understand: if code whose time complexity is O(log n) is looped n times, its time complexity becomes n * O(log n), which is O(n log n).

Take the above code with a small modification as an example:

for(m=1; m<n; m++)
{
    
    
    i = 1;
    while(i<n)
    {
    
    
        i = i * 2;
    }
}
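
A well-known algorithm in this class is merge sort: the array is split in half about log₂n times, and each level of merging touches all n elements, giving O(n log n) overall. A minimal sketch (sorting the half-open range [left, right)):

// Merge sort: log2(n) levels of splitting, O(n) merge work per level
// -> O(n log n) time overall.
static void mergeSort(int[] a, int left, int right) {
    if (right - left < 2) return;        // 0 or 1 element: already sorted
    int mid = left + (right - left) / 2;
    mergeSort(a, left, mid);
    mergeSort(a, mid, right);
    int[] merged = new int[right - left];
    int i = left, j = mid, k = 0;
    while (i < mid && j < right) {
        merged[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    }
    while (i < mid) merged[k++] = a[i++];
    while (j < right) merged[k++] = a[j++];
    System.arraycopy(merged, 0, a, left, merged.length);
}

Note that this sketch also allocates an extra array at every merge, which is relevant to the space complexity discussed later.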

Square order O(n²):
The square order O(n²) is easy to understand: if you nest O(n) code inside another loop of n iterations, its time complexity becomes O(n²).
For example:

for(x=1; i<=n; x++)
{
    
    
   for(i=1; i<=n; i++)
    {
    
    
       j = i;
       j++;
    }
}

This code actually nests two levels of n-iteration loops, so its time complexity is O(n*n), that is, O(n²).
If you change the n of one level of the loop to m, that is:

for(x=1; i<=m; x++)
{
    
    
   for(i=1; i<=n; i++)
    {
    
    
       j = i;
       j++;
    }
}

Then its time complexity becomes O(m*n).
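
Returning to O(n²): a classic algorithm in this class is bubble sort, which compares adjacent elements in two nested loops over the n elements. A minimal sketch:

// Bubble sort: two nested loops over the n elements -> O(n²) time.
static void bubbleSort(int[] a) {
    for (int pass = 0; pass < a.length - 1; pass++) {
        for (int i = 0; i < a.length - 1 - pass; i++) {
            if (a[i] > a[i + 1]) {       // swap out-of-order neighbors
                int tmp = a[i];
                a[i] = a[i + 1];
                a[i + 1] = tmp;
            }
        }
    }
}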

Cubic order O(n³), K-th order O(n^k):
By analogy with O(n²) above, O(n³) is equivalent to three levels of nested n-iteration loops, and the others are similar.

In addition, there are also average-case, amortized, worst-case, and best-case time complexity analysis methods, which are a bit more involved, so I won't expand on them here.

2. Space complexity

Just as time complexity is not used to calculate the actual running time of a program, you should understand that space complexity is not used to calculate the actual space a program occupies.

Space complexity is a measure of the amount of storage space an algorithm temporarily occupies while it runs. It also reflects a trend, and we use S(n) to denote it.

The commonly used space complexities are O(1), O(n), and O(n²). Let's take a look below:

Space complexity O(1)

If the temporary space required by the algorithm's execution does not change with the size of the variable n, that is, the algorithm's space complexity is a constant, it can be expressed as O(1).
Example:

int i = 1;
int j = 2;
++i;
j++;
int m = i + j;

The space allocated for i, j, and m in the code does not change with the amount of data processed, so its space complexity is S(n) = O(1).

Space complexity O(n)

We first look at a code:

int[] m = new int[n];
for (i = 1; i <= n; ++i)
{
   j = i;
   j++;
}

In this code, the first line allocates a new array of size n. Lines 2-6 contain a loop, but no new space is allocated inside it. Therefore, the space complexity of this code depends mainly on the first line, that is, S(n) = O(n).
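
Recursion is another common source of O(n) space: each recursive call adds a stack frame, so n levels of recursion occupy O(n) space even though no array is allocated. A minimal sketch with a hypothetical function:

// Hypothetical example: computes n + (n-1) + ... + 1 recursively.
// The recursion is n levels deep, and each level holds a stack frame,
// so the space complexity is S(n) = O(n).
static long sumTo(int n) {
    if (n <= 1) return n;
    return n + sumTo(n - 1);
}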
————————————————
Copyright statement: This is an original article by the CSDN blogger "More than thinking" and complies with the CC 4.0 BY-SA copyright agreement. Please attach the original source link and this statement when reprinting.
Original link: https://blog.csdn.net/jsjwk/article/details/84315770
