Calculation of time complexity

1. Concept
The time complexity is the term in the expression for the total number of operations that is most affected as n grows (coefficients excluded).
For example, a typical expression for the total number of operations looks like this:
a*2^n + b*n^3 + c*n^2 + d*n*lg(n) + e*n + f
When a != 0, the time complexity is O(2^n);
when a = 0 and b != 0, it is O(n^3);
when a = 0, b = 0 and c != 0, it is O(n^2); and so on.
Examples:
(1) for(i=1;i<=n;i++)        // loops n*n times, so it is O(n^2)
        for(j=1;j<=n;j++)
            s++;
(2) for(i=1;i<=n;i++)        // loops n+(n-1)+...+1 = n(n+1)/2 ≈ (n^2)/2 times; since time complexity ignores coefficients, it is also O(n^2)
            for(j=i;j<=n;j++)
                 s++;
(3) for(i=1;i<=n;i++)        // loops 1+2+...+n = n(n+1)/2 ≈ (n^2)/2 times, so it is also O(n^2)
            for(j=1;j<=i;j++)
                 s++;
(4) i=1; k=0;
    while(i<=n-1)
    {
        k+=10*i; i++;
    }
    // loops n-1 ≈ n times, so it is O(n)

(5)
for(i=1;i<=n;i++)
             for(j=1;j<=i;j++)
                 for(k=1;k<=j;k++)
                       x=x+1;
//loops 1 + (1+2) + (1+2+3) + ... + (1+2+...+n) = n(n+1)(n+2)/6 times (remember this formula) ≈ (n^3)/6; ignoring coefficients, it is naturally O(n^3)
2. Calculation method

1. The time an algorithm takes to execute cannot be computed exactly in theory; it can only be known by running the algorithm on a machine. But it is neither possible nor necessary to test every algorithm this way: we only need to know which algorithm takes more time and which takes less. The time an algorithm spends is proportional to the number of statement executions in the algorithm, so the algorithm that executes more statements takes more time. The number of statement executions in an algorithm is called its statement frequency or time frequency, denoted T(n).
2. In general, the number of times the algorithm's basic operation is repeated is some function f(n) of the problem size n, so the time complexity is written T(n) = O(f(n)). As n increases, the algorithm's execution time grows at a rate proportional to that of f(n); the slower f(n) grows, the lower the algorithm's time complexity and the higher its efficiency.
To calculate the time complexity: first identify the algorithm's basic operation, then determine its execution count from the corresponding statements, and then find the order of magnitude of T(n) (the common orders are: 1, log2(n), n, n*log2(n), n^2, n^3, 2^n, n!). Take f(n) to be that order of magnitude; if the limit of T(n)/f(n) is a constant c, then the time complexity is T(n) = O(f(n)).
3. Common time complexity
Arranged in increasing order of magnitude, the common time complexities are:
constant order O(1), logarithmic order O(log2(n)), linear order O(n), linearithmic order O(n*log2(n)), square order O(n^2), cubic order O(n^3), ..., k-th power order O(n^k), exponential order O(2^n).
1. O(n), O(n^2), O(n^3), ..., O(n^k) are polynomial-order time complexities, called first-order, second-order, ..., k-th-order time complexity respectively.
2. O(2^n) is exponential time complexity; algorithms of this kind are impractical except for very small n.
3. Logarithmic order O(log2(n)) and linearithmic order O(n*log2(n)) are the most efficient orders after constant order.
Example algorithm:
  for(i=1;i<=n;++i)
  {
     for(j=1;j<=n;++j)
     {
         c[ i ][ j ]=0;  // basic operation, executed n^2 times
          for(k=1;k<=n;++k)
               c[ i ][ j ]+=a[ i ][ k ]*b[ k ][ j ];  // basic operation, executed n^3 times
     }
  }
  Then T(n) = n^2 + n^3. From the orders of magnitude listed above, n^3 is the order of magnitude of T(n).
  So f(n) = n^3, and taking the limit of T(n)/f(n) gives a constant c.
  The time complexity of this algorithm is therefore T(n) = O(n^3).

4. Asymptotic time complexity


Definition: if the size of a problem is n, and the time required by a given algorithm to solve it is T(n), some function of n, then T(n) is called the "time complexity" of that algorithm.

As the input size n grows, the limiting behavior of the time complexity is called the algorithm's "asymptotic time complexity".

We usually express time complexity in big-O notation; note that it describes the time complexity of one particular algorithm. Big-O only asserts an upper bound: by definition, if f(n) = O(n), then f(n) = O(n^2) obviously holds as well. It gives an upper bound, not necessarily the least upper bound, but by convention people state the tighter form.

In addition, a problem itself has its own complexity; if some algorithm's complexity reaches the lower bound of the problem's complexity, that algorithm is called an optimal algorithm.

"Big-O notation": the basic parameter in this description is n, the size of the problem instance, and the complexity or running time is expressed as a function of n. The "O" stands for order (of magnitude): for example, saying "binary search is O(log n)" means it "searches an array of size n in on the order of log n steps". The notation O(f(n)) means that as n grows, the running time grows at most at a rate proportional to f(n).

This asymptotic estimate is very valuable for theoretical analysis and rough comparison of algorithms, but in practice details can make a difference. For example, an O(n^2) algorithm with low overhead may run faster than an O(n log n) algorithm with high overhead when n is small. Of course, once n is large enough, the algorithm with the slower-growing function is bound to be faster.

O(1)

temp=i; i=j; j=temp;

The frequency of each of the three statements above is 1; the execution time of this fragment is a constant independent of the problem size n. The algorithm's time complexity is constant order, written T(n) = O(1). If an algorithm's execution time does not grow with the problem size n, then even if the algorithm contains thousands of statements, its execution time is just a larger constant, and its time complexity is still O(1).

O(n^2)

2.1.
     sum=0;                  (1 time)
     for(i=1;i<=n;i++)       (n times)
        for(j=1;j<=n;j++)    (n^2 times)
         sum++;              (n^2 times)
Solution: T(n) = 2n^2+n+1 = O(n^2)

2.2.   
    for (i=1;i<n;i++)
    {
        y=y+1;         ①   
        for (j=0;j<=(2*n);j++)    
           x++;        ②      
    }         
Solution: the frequency of statement ① is n-1;
          the frequency of statement ② is (n-1)*(2n+1) = 2n^2-n-1;
          f(n) = 2n^2-n-1+(n-1) = 2n^2-2,
          so the time complexity of this fragment is T(n) = O(n^2).

O(n)      
                                                      
2.3.
    a=0;
    b=1;                   
