Algorithm basics review: the nature of algorithms and methods for comparing time and space (computational) complexity

 

 

The three basic structures of an algorithm

  (1) Sequential structure

  (2) Conditional (selection) structure

  (3) Loop structure
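As a minimal sketch in C (the function name `demo` is invented for illustration), all three structures can appear in a few lines:

```c
/* Illustrative only: each basic structure appears once. */
int demo(int n) {
    int sum = 0;

    sum = sum + 1;                 /* (1) sequential: statements run in order */

    if (n > 0)                     /* (2) conditional: branch on a test */
        sum = sum + 1;

    for (int i = 0; i < n; i++)    /* (3) loop: repeat a statement */
        sum = sum + 1;

    return sum;                    /* for n > 0 this is 1 + 1 + n */
}
```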

 

The four general properties of an algorithm

  (1) Generality: for any input data that matches the input type, the algorithm can be carried out to solve the problem, and the correctness of the result is guaranteed.

  (2) Effectiveness: every instruction that makes up the algorithm must be precisely executable by a person or a machine.

  (3) Determinism: after each step of the algorithm is performed, there must be a clear instruction for the next step. That is, every step must unambiguously specify the next move; the next instruction can neither be missing nor be merely vague.

  (4) Finiteness: the execution of the algorithm must end after a finite number of steps.

 

 

First, the time complexity

  (1) Time frequency. The time an algorithm takes cannot, in theory, be calculated exactly; it can only be known by running the algorithm on a machine. But we neither can nor need to test every algorithm on a machine; we only need to know which algorithm takes more time and which takes less. The time an algorithm consumes is proportional to the number of times its statements are executed: whichever algorithm's statements execute more often takes more time. The number of statement executions in an algorithm is called the statement frequency or time frequency, denoted T(n).
 
  (2) Time complexity. In the time frequency just mentioned, n is called the size of the problem; as n changes, the time frequency T(n) changes as well. But sometimes we want to know what law governs this change, and for that we introduce the concept of time complexity. In general, the number of times the basic operations of an algorithm are repeated is a function of the problem size n, denoted T(n). If there is an auxiliary function f(n) such that the limit of T(n)/f(n) as n approaches infinity is a nonzero constant, then f(n) is said to be a function of the same order as T(n), written T(n) = O(f(n)); O(f(n)) is called the asymptotic time complexity of the algorithm, or time complexity for short.
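As a worked instance of this definition (the numbers are illustrative, not from the text): suppose an algorithm's statement frequency is T(n) = 2n^2 + n + 1 and take f(n) = n^2. Then

```latex
\lim_{n \to \infty} \frac{T(n)}{f(n)}
  = \lim_{n \to \infty} \frac{2n^2 + n + 1}{n^2}
  = 2 \neq 0,
```

so T(n) and f(n) are of the same order and T(n) = O(n^2).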
  
  (3) Evaluating the time performance of an algorithm is done mainly through the order of its time complexity, i.e. its asymptotic time complexity.

  

 

  • The concrete steps for finding the time complexity of an algorithm are:

  ⑴ Find the basic statement of the algorithm.

  The statement that executes the greatest number of times is the basic statement; it is usually the body of the innermost loop.

  ⑵ Compute the order of magnitude of the basic statement's execution count.

  Only the order of magnitude of the execution count is needed. This means that in the function giving the number of executions of the basic statement, only the highest power of n has to be correct; all lower powers and the coefficient of the highest power can be ignored. This simplifies the analysis and concentrates attention on the most important point: the growth rate.
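As a small numerical illustration (the count function and its coefficients are hypothetical, not from the text), dropping the lower-order terms barely changes the value once n is large:

```c
/* Hypothetical execution-count function T(n) = 3n^2 + 10n + 5. */
double t_exact(double n)   { return 3*n*n + 10*n + 5; }
double t_leading(double n) { return 3*n*n; }
/* For n = 1e6 the ratio t_exact/t_leading is about 1.0000033:
   the lower-order terms have essentially no effect on the growth rate. */
```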

  ⑶ Express the algorithm's time performance in big-Ο notation.

  That is, put the order of magnitude of the basic statement's execution count into big-Ο notation.

  If the algorithm contains nested loops, the basic statement is usually the innermost loop body; if the algorithm contains sequential (parallel) loops, the time complexities of those loops are added. For example:

  for (i=1; i<=n; i++)
      x++;

  for (i=1; i<=n; i++)
      for (j=1; j<=n; j++)
          x++;

  The time complexity of the first loop is Ο(n) and that of the second is Ο(n^2), so the time complexity of the whole algorithm is Ο(n + n^2) = Ο(n^2).
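The two loop patterns above can be checked empirically; this sketch (helper names invented) counts how many times the statement playing the role of x++ runs in each:

```c
/* Count executions of the basic statement in the single loop. */
long single_loop(long n) {
    long count = 0;
    for (long i = 1; i <= n; i++)
        count++;                     /* stands in for x++ */
    return count;                    /* n executions: O(n) */
}

/* Count executions of the basic statement in the nested loop. */
long nested_loop(long n) {
    long count = 0;
    for (long i = 1; i <= n; i++)
        for (long j = 1; j <= n; j++)
            count++;
    return count;                    /* n*n executions: O(n^2) */
}
```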

  Common algorithm time complexities in ascending order:

  Ο(1) < Ο(log2n) < Ο(n) < Ο(nlog2n) < Ο(n^2) < Ο(n^3) < ... < Ο(2^n) < Ο(n!)

  Ο(1) means the number of times the basic statement executes is a constant; in general, as long as an algorithm contains no loops, its time complexity is Ο(1). Ο(log2n), Ο(n), Ο(nlog2n), Ο(n^2) and Ο(n^3) are called polynomial time, while Ο(2^n) and Ο(n!) are called exponential time. Computer scientists generally regard the former as effective algorithms and call such problems P (polynomial) problems, while the latter are called NP problems.

 

 

Next, several common time complexities are illustrated with examples:

Example 1: time complexity O(1)

  1.1

temp=i;  i=j;  j=temp;
   Solution: each of the three statements has frequency 1, and the running time of this fragment is a constant, independent of the problem size n. The algorithm's time complexity is constant order, denoted T(n) = O(1).
 If an algorithm's execution time does not grow with the problem size n, then even if the algorithm has thousands of statements, its execution time is just a large constant. The time complexity of such an algorithm is O(1).

 

 

 

Example 2: time complexity O(n^2)

  2.1

     sum=0;                  (1 time)
     for(i=1;i<=n;i++)       (n times)
        for(j=1;j<=n;j++)    (n^2 times)
         sum++;              (n^2 times)

  Solution: T(n) = 2n^2 + n + 1 = O(n^2)
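As a cross-check of this count, a sketch (helper name invented) that tallies one tick for each counted statement execution, following the per-line counts of the fragment:

```c
/* Tally statement executions in fragment 2.1:
   sum=0 once, the outer loop body entry n times,
   the inner loop body entry n^2 times, and sum++ n^2 times. */
long count_statements(long n) {
    long count = 0;
    long sum;
    count++; sum = 0;                       /* 1       */
    for (long i = 1; i <= n; i++) {
        count++;                            /* n       */
        for (long j = 1; j <= n; j++) {
            count++;                        /* n^2     */
            count++; sum++;                 /* n^2     */
        }
    }
    return count;                           /* 2n^2 + n + 1 */
}
```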

 

  2.2

    for (i=1;i<n;i++)
    { 
        y=y+1;         ①   
        for (j=0;j<=(2*n);j++)    
           x++;        ②      
    }       

  Solution: the frequency of statement ① is n-1;
    the frequency of statement ② is (n-1)*(2n+1) = 2n^2-n-1;
    f(n) = 2n^2-n-1 + (n-1) = 2n^2-2;
  so the time complexity of the program is T(n) = O(n^2).
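A quick empirical check of f(n) (the helper name is invented; it counts executions of statements ① and ② directly):

```c
/* Count executions of statements ① and ② in fragment 2.2. */
long count_fragment(long n) {
    long c1 = 0, c2 = 0;
    long x = 0, y = 0;
    for (long i = 1; i < n; i++) {
        c1++; y = y + 1;                    /* ①: n-1 times         */
        for (long j = 0; j <= 2 * n; j++) {
            c2++; x++;                      /* ②: (n-1)(2n+1) times */
        }
    }
    return c1 + c2;                         /* 2n^2 - 2             */
}
```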

     

 

Example 3: time complexity O(n) 

  a=0;
  b=1;                      ①
  for (i=1;i<=n;i++) ②
  {  
       s=a+b;    ③
       b=a;     ④  
       a=s;     ⑤
  }

  Solution: the frequency of statement ① (a=0; b=1) is 2,
    the frequency of statement ② is n,
    the frequency of statement ③ is n-1,
    the frequency of statement ④ is n-1,
    the frequency of statement ⑤ is n-1,
  so T(n) = 2 + n + 3(n-1) = 4n-1 = O(n).

 

 

Example 4: time complexity O(log2n)

  i=1;       ①
  while (i<=n)
    i=i*2; ②

  Solution: the frequency of statement ① is 1. Let the frequency of statement ② be f(n); then 2^f(n) <= n, i.e. f(n) <= log2n. Its maximum value is f(n) = log2n, so T(n) = O(log2n).
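An empirical sketch of the doubling count (helper name invented). Note that a direct count gives ⌊log2 n⌋ + 1 executions of statement ②, which is still O(log2 n):

```c
/* Count how many times i = i*2 executes before i exceeds n. */
long doubling_steps(long n) {
    long count = 0;
    long i = 1;
    while (i <= n) {
        i = i * 2;
        count++;       /* runs for i = 1, 2, 4, ..., i.e. floor(log2 n) + 1 times */
    }
    return count;
}
```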
    

 

 

Example 5: time complexity O(n^3)

    for(i=0;i<n;i++)
    {  
       for(j=0;j<i;j++)  
       {
          for(k=0;k<j;k++)
             x=x+2;  
       }
    }

  Solution: when i = m and j = k, the innermost loop executes k times. When i = m, j can take the values 0, 1, ..., m-1, so for that value of i the innermost loop body executes 0 + 1 + ... + (m-1) = m(m-1)/2 times. As i runs from 0 to n-1, the loop body therefore executes a total of the sum of m(m-1)/2 for m = 0, 1, ..., n-1, which is n(n-1)(n-2)/6 times, so the time complexity is O(n^3).
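A direct count of the innermost statement confirms the closed form n(n-1)(n-2)/6 (the helper name is invented):

```c
/* Count executions of x = x+2 in the triple loop of Example 5.
   Each execution corresponds to one triple 0 <= k < j < i < n,
   so the total is C(n,3) = n(n-1)(n-2)/6. */
long triple_loop_count(long n) {
    long count = 0;
    for (long i = 0; i < n; i++)
        for (long j = 0; j < i; j++)
            for (long k = 0; k < j; k++)
                count++;
    return count;
}
```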

 

 

  [Figure: growth curves of the common time complexities]

  Common complexities in ascending order: Ο(1) < Ο(log2n) < Ο(n) < Ο(nlog2n) < Ο(n^2) < Ο(n^3) < ... < Ο(2^n) < Ο(n!)

 

 

Second, the space complexity

  Similar to the discussion of time complexity, the space complexity (Space Complexity) S(n) of an algorithm is defined as the storage space the algorithm consumes; it too is a function of the problem size n. Asymptotic space complexity is often referred to simply as space complexity. 
  Space complexity is a measure of the amount of storage space an algorithm temporarily occupies while it runs. The memory an algorithm occupies on a computer includes three parts: the space occupied by storing the algorithm itself, the space occupied by the algorithm's input and output data, and the temporary working space the algorithm occupies during execution. The space occupied by the input and output data is determined by the problem to be solved and is passed in through the function's parameter list; it does not vary from one algorithm to another. The space occupied by the algorithm itself is proportional to the length of the algorithm as written; to compress this part, one must write a shorter algorithm. The temporary working space occupied during execution varies with the algorithm: some algorithms need only a few temporary working units that do not change with the size of the problem; such algorithms are called "in-place" and are storage-efficient, like the algorithms introduced in this section. Other algorithms need a number of temporary working units that depends on the problem size n, growing as n grows, so that when n is large they occupy many storage units.

  For example, when an algorithm's space complexity is a constant that does not change with the size n of the data being processed, it can be written O(1); when it is proportional to the binary logarithm of n, it can be written O(log2n); and when it is linearly proportional to n, it can be written O(n). If the formal parameter is an array, only the space for one address pointer passed from the actual argument needs to be allocated, i.e. one machine word; if the formal parameter is a reference, only the space for one address needs to be allocated, which stores the address of the corresponding actual-argument variable so that the system can automatically dereference it. 
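A minimal sketch contrasting O(1) and O(n) auxiliary space (the function names are illustrative):

```c
#include <stdlib.h>

/* O(1) extra space: swapping uses one temporary cell regardless of n. */
void swap(int *a, int *b) {
    int temp = *a;      /* one extra word, independent of problem size */
    *a = *b;
    *b = temp;
}

/* O(n) extra space: reversing into a fresh array needs n extra cells. */
int *reversed_copy(const int *src, int n) {
    int *dst = malloc(n * sizeof *dst);   /* temporary space grows with n */
    for (int i = 0; i < n; i++)
        dst[i] = src[n - 1 - i];
    return dst;                           /* caller must free() */
}
```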

 


Origin www.cnblogs.com/1138720556Gary/p/11069570.html