The meaning of time and space complexity in algorithm analysis

Let’s start from the beginning:

A program is data structures plus algorithms; modern program design also adds design methods, language tools, and environments. Here, an algorithm is a description of operations: the methods and steps taken to solve a problem. It is like the sheet music for a song, or the recipe for a dish.

The complexity of an algorithm refers to the amount of computer resources it requires while running. The amount of time resources needed is called time complexity, and the amount of space resources needed is called space complexity.

When analyzing complexity, there are three cases:

Average case: expressed with Θ

Worst case: expressed with big O

Best case: expressed with Ω

To represent complexity, we generally use the notations O, Ω, Θ, and o.

For the meaning of these symbols, and for convenience of understanding, assume we have two functions: f(n) and g(n).


(1) Big O notation: if there exist a natural number n0 and a constant c > 0 such that f(n) <= c·g(n) for all n >= n0, then we say the complexity of f(n) is O(g(n)).

What does that mean? For example:

               f(n)=8n,   g(n)=10n;

When n >= 1, we have 8n <= 10n, that is, f grows no faster than a constant multiple of g (an upper bound), so the complexity of f(n) is O(10n). And because complexity describes how the cost scales with n, when n is very large the constant 10 becomes meaningless, so constants are generally omitted and we just write O(n).

Expressed as a limit: lim (n→∞) f(n)/g(n) = c < ∞, i.e., the ratio f(n)/g(n) stays bounded.



(2) Big Ω notation: if there exist a natural number n0 and a constant c > 0 such that f(n) >= c·g(n) for all n >= n0, then we say the complexity of f(n) is Ω(g(n)).

It is the opposite of big O: O describes the worst case (an upper bound), while Ω describes the best case and corresponds to a lower bound on the algorithm's complexity.

Expressed as a limit: lim (n→∞) f(n)/g(n) = c > 0, i.e., the ratio stays bounded away from zero.


(3) Big Θ notation: if there exist a natural number n0 and constants c1 > 0 and c2 > 0 such that c1·g(n) <= f(n) <= c2·g(n) for all n >= n0, then we say f(n) is Θ(g(n)).

Θ gives a precise (tight) description of the growth rate of the algorithm's running time.

f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)). Why?

Because big O requires f(n) to be at most c2·g(n), and Ω requires it to be at least c1·g(n); f(n) is sandwiched between the two, which gives the equivalence above.

(4) Little o notation: the difference from big O is that little o requires f(n) to be strictly less than c·g(n) for every constant c > 0 (once n is sufficiently large).

Expressed as a limit: lim (n→∞) f(n)/g(n) = 0; here it is strictly less than, no longer less than or equal.

This equation shows that f is negligible relative to g when n is sufficiently large.


For an algorithm we generally analyze the worst case, that is, big O notation, because if a bound holds in the worst case, it holds in every other case as well.

The basic operation rules are given below:

1. Lower-order terms can be dropped: if g grows more slowly than f, then O(f + g) = O(f).
2. Product rule: O(f) · O(g) = O(f · g).
3. Sum rule: O(f) + O(g) = O(max(f, g)).
4. Constant factors can be dropped: O(c · f) = O(f) for any constant c > 0.
5. If g(n) = O(f(n)), then O(f(n)) + O(g(n)) = O(f(n)).
For the first rule, for example:

n^2 + 20n can be recorded as O(n^2), because when n is very large, 20n is negligible relative to n^2. In program design this is like a single for loop whose termination condition is n:

 
 
int i = 0;
for (int k = 1; k < n; k++) {
	i++;
}


We can call the running time of this loop O(n).

2. For the second rule, consider two nested for loops like the ones below; we say the running time of these two loops is O(mn).

 
 
 
 
int count = 0;
for (int r = 1; r < m; r++) {
	for (int c = 1; c < n; c++) {
		count++;
	}
}

3. The third rule covers two for loops placed one after the other (in parallel), like the ones below; their running time is O(max(n, m)).
 
 
 
 
 
 
int i = 0;

for (int k = 1; k < n; k++) {
	i++;
}

for (int k = 1; k < m; k++) {
	i++;
}



4. The fourth rule says that a constant times the data scale n, like multiplying the termination condition n of a for loop by a constant, can still be written as O(n) when we express the worst case in big O.


5. The fifth rule: if the function g(n) has running time O(f), then for g(n) + f(n) we can still treat the two parts like two parallel for loops, just as in the third rule, so the running time is still O(f).


II. Space complexity

The calculation of space complexity is similar to that of time complexity, but simpler, because it only counts the storage that is temporarily occupied. That is, we only consider the storage allocated for local variables during execution, which includes two parts: the space allocated for the formal parameters in the parameter list, and the space allocated for local variables defined in the function body. Take insertion sort: although its time complexity is O(n^2), its space complexity is only O(1), because it needs just one extra local variable.




Later posts will analyze the complexity of specific sorting algorithms; the various sorts can be reached through the links below.

Three simple sorts: click to open the link

Shell sort: click to open the link

Quick sort: click to open the link







