Time and space complexity of computer programs

Editorial note: throughout this article \(\log_2 n\) is abbreviated as \(\log n\). In fact, for complexity purposes \(\ln x\), \(\lg x\) and \(\log_2 x\) are interchangeable, because by the change-of-base formula

\[\log_a b = \frac{\log_c b}{\log_c a}\]

we have

\[\log_2 x = \frac{\ln x}{\ln 2}, \qquad \log_2 x = \frac{\lg x}{\lg 2},\]

i.e. the different bases differ only by constant factors, and constant factors are ignored in big-\(O\) notation.

Part 1: Time complexity

(A) Concept

If the size of a problem is \(n\) and the time an algorithm needs to solve it is \(T(n)\), then \(T(n)\) is a function of \(n\) called the "time complexity" of the algorithm.

As the input size \(n\) grows, the limiting behaviour of \(T(n)\) is called the "asymptotic time complexity" of the algorithm, and this is usually what is meant by "time complexity". It is determined by the term of the total operation-count expression that grows fastest with \(n\) (its coefficient is ignored).

"Large \ (O \) notation": the basic parameters used in this description are \ (n-\) , i.e. the problem instance size, complexity, or the running time is expressed as \ (n-\) function. Here " \ (O \) " represents the magnitude of \ ((the Order) \) , such as "binary search is \ (O (nlogn) \) ", that it needs "by the order \ (logn \ ) steps to retrieve a scale of \ (n-\) array "notation indicates when \ (n-\) is increased, \ (O (F (n-)) \) run up time will be proportional to the \ ( f (n) \) growth rate.

This kind of asymptotic estimate is very valuable for theoretical analysis and rough comparison of algorithms, but the details can make a difference in practice. For example, an \(O(n^2)\) algorithm with low overhead may run faster than an \(O(n\log n)\) algorithm with high overhead when \(n\) is small. Of course, once \(n\) is large enough, the algorithm whose cost function grows more slowly is necessarily faster.
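As a rough illustration (the constants 2 and 100 here are invented purely for the example): suppose one algorithm takes \(2n^2\) steps and another takes \(100\,n\log_2 n\) steps. At \(n = 10\), \(2n^2 = 200\) while \(100\,n\log_2 n \approx 3322\), so the "slower" quadratic algorithm actually wins; at \(n = 10^6\), \(2n^2 = 2\times10^{12}\) while \(100\,n\log_2 n \approx 2\times10^9\), and the \(O(n\log n)\) algorithm is roughly a thousand times faster.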

The key factors when computing time complexity are the number of loop iterations and the depth of recursion.

(B) The time complexity of several typical program fragments

\(O(1)\)

temp=i;i=j;j=temp; 

We often hear experts say "use an \(O(1)\) lookup...". This simply means that the cost does not depend on \(n\) at all; the code is just a fixed number of statements.

\(O(n^2)\)

Example 1

1 sum=0;             
2 for(i=1;i<=n;i++) {  
3 for(j=1;j<=n;j++) {  
4 sum++;   } 
5 }

Line 1: executed \(1\) time

Line 2: executed \(n\) times

Line 3: executed \(n^2\) times

Line 4: executed \(n^2\) times

Solution: \(T(n) = 2n^2 + n + 1 = O(n^2)\)

The above is the usual way to work out the time complexity of a program: ignore the coefficient of the highest-order term of \(T(n)\) and drop all lower-order terms (the aim is to keep only the dominant term).

(This is the same idea a physics teacher uses: when \(m \ll M\), the expression \(a = \frac{M}{M+m}g\) simply ignores \(m\), because \(m\) is too small to affect the result.)
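For instance, applying the same rule to a made-up operation count: if \(T(n) = 3n^3 + 2n^2 + 7n + 10\), we keep only the fastest-growing term \(3n^3\), drop its coefficient, and conclude \(T(n) = O(n^3)\).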

Example 2

for (i=1;i<n;i++) {
    y=y+1;         ①   
    for(j=0;j<=(2*n);j++)   { 
    x++; }       ②      
}   

Solution:

The frequency of statement ① is \(n-1\).

The frequency of statement ② is \((n-1)(2n+1) = 2n^2 - n - 1\).

\(f(n) = (2n^2 - n - 1) + (n-1) = 2n^2 - 2\)

So the time complexity of the program is \(T(n) = O(n^2)\).

\(O(\log n)\)

i=1;       ①
while (i<=n)
    i=i*2; ②

Solution: the frequency of statement ① is \(1\).

Let the frequency of statement ② be \(f(n)\). Then \(2^{f(n)} \leq n\), so \(f(n) \leq \log_2 n\).

Taking the largest value, \(f(n) = \log_2 n\), so \(T(n) = O(\log n)\).
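For comparison, here is a minimal binary-search sketch (my own illustration, assuming the array \(a[0..n-1]\) is sorted in ascending order): each iteration halves the remaining range \([lo, hi]\), so the loop body runs at most about \(\log_2 n\) times and the search is \(O(\log n)\).

// Return an index of x in the sorted array a[0..n-1], or -1 if x is not present.
int binary_search(int a[], int n, int x) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {                 // the range [lo, hi] is halved on every pass
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == x) return mid;
        else if (a[mid] < x) lo = mid + 1;
        else hi = mid - 1;
    }
    return -1;
}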

\(O(n^3)\)

 for(i=0;i<n;i++)
    {  
       for(j=0;j<i;j++)  
       {
          for(k=0;k<j;k++)
             x=x+2;  
       }
    }

Solution: when \(i = m\) and \(j = k\), the innermost loop executes \(k\) times.

When \(i = m\), \(j\) takes the values \(0, 1, \ldots, m-1\),

so for this value of \(i\) the innermost statement is executed \(0 + 1 + \cdots + (m-1) = \frac{m(m-1)}{2}\) times.

Therefore, as \(i\) runs from \(0\) to \(n-1\), the innermost statement is executed a total of \(\sum\limits_{m=0}^{n-1}\frac{m(m-1)}{2} = \frac{n(n-1)(n-2)}{6}\) times.
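One way to verify this closed form is with the standard summation identities \(\sum_{m=0}^{n-1} m = \frac{n(n-1)}{2}\) and \(\sum_{m=0}^{n-1} m^2 = \frac{n(n-1)(2n-1)}{6}\):

\[\sum_{m=0}^{n-1}\frac{m(m-1)}{2} = \frac{1}{2}\left(\frac{n(n-1)(2n-1)}{6} - \frac{n(n-1)}{2}\right) = \frac{n(n-1)(n-2)}{6}\]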

Therefore, the time complexity is \(O(n^3)\).

(C) Some special cases of time complexity

1. Unstable time complexity

We should also distinguish between the worst-case behaviour of an algorithm and its expected behaviour. For example, the worst-case running time of quicksort is \(O(n^2)\), but its expected running time is \(O(n\log n)\). By choosing the pivot carefully each time, we can make the probability of the quadratic (i.e. \(O(n^2)\)) case nearly zero; see the sketch below. In practice, a carefully implemented quicksort usually runs in \(O(n\log n)\) time.
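A minimal sketch of the idea (my own illustration, not the article's code; the name quick_sort and the use of rand() for pivot selection are assumptions): picking the pivot at random makes the quadratic case extremely unlikely, so the expected running time is \(O(n\log n)\).

#include <cstdlib>   // rand()
#include <algorithm> // std::swap

// Sort a[l..r] in place using a randomly chosen pivot.
void quick_sort(int a[], int l, int r) {
    if (l >= r) return;
    std::swap(a[l + rand() % (r - l + 1)], a[r]); // move a random pivot to the end
    int pivot = a[r], i = l;
    for (int j = l; j < r; ++j)        // partition: elements smaller than the pivot go left
        if (a[j] < pivot) std::swap(a[i++], a[j]);
    std::swap(a[i], a[r]);             // place the pivot between the two parts
    quick_sort(a, l, i - 1);
    quick_sort(a, i + 1, r);
}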

2. Some common rules of thumb:

(1) Accessing an element of an array is a constant-time operation, i.e. an \(O(1)\) operation.

(2) If an algorithm can discard half of the remaining data in each step, as binary search does, it typically takes \(O(\log n)\) time.

(3) Comparing two strings of \(n\) characters with \(strcmp\) takes \(O(n)\) time.

(4) The conventional matrix multiplication algorithm is \(O(n^3)\): computing each element requires adding up \(n\) products, and there are \(n^2\) elements in total.

(5) Exponential-time algorithms usually arise from having to enumerate all possible outcomes. For example, a set of \(n\) elements has \(2^n\) subsets, so an algorithm that must examine every subset is \(O(2^n)\).

Exponential algorithms are generally too slow to be practical unless \(n\) is very small, because adding a single element to the problem doubles the running time. Unfortunately, there are indeed many problems (such as the famous "Travelling Salesman Problem") for which every algorithm found so far is exponential. When that really is the case, the best approach is usually an algorithm that finds an approximate answer instead.

The passage above is taken from the web.
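As an illustration of point (5) above (a sketch of my own, with an arbitrarily chosen \(n = 4\)): enumerating every subset of an \(n\)-element set with bitmasks already requires the outer loop to run \(2^n\) times, so any algorithm built on it is at least \(O(2^n)\).

#include <cstdio>

int main() {
    int n = 4;                                    // a small example set {0, 1, 2, 3}
    for (int mask = 0; mask < (1 << n); ++mask) { // 2^n bitmasks = 2^n subsets
        printf("{");
        for (int i = 0; i < n; ++i)
            if (mask & (1 << i)) printf(" %d", i); // element i belongs to this subset
        printf(" }\n");
    }
    return 0;
}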

Calculation

In general, the number of times the basic operation of an algorithm is repeated is a function \(f(n)\) of the problem size \(n\), so the time complexity of the algorithm is written \(T(n) = O(f(n))\). As \(n\) grows, the running time of the algorithm grows at a rate proportional to the growth rate of \(f(n)\): the more slowly \(f(n)\) grows, the lower the algorithm's time complexity and the more efficient the algorithm.

To compute the time complexity, first identify the basic operation of the algorithm, then determine its execution frequency from the corresponding statements, and then find the function of the same order as \(T(n)\) among the following: \(\log n,\ n,\ n\log n,\ n^2,\ n^3,\ 2^n,\ n!\). Take \(f(n)\) to be that function; if \(\frac{T(n)}{f(n)}\) has a constant limit \(c\), then the time complexity is \(T(n) = O(f(n))\).

Common time complexities

In increasing order of magnitude, the common time complexities are: constant order \(O(1)\), logarithmic order \(O(\log n)\), linear order \(O(n)\), linearithmic order \(O(n\log n)\), quadratic order \(O(n^2)\), cubic order \(O(n^3)\), ..., \(k\)-th power order \(O(n^k)\), and exponential order \(O(2^n)\).

Among these:

1. \(O(n)\), \(O(n^2)\), \(O(n^3)\), ..., \(O(n^k)\) are polynomial time complexities, called first-order, second-order, ..., \(k\)-th order time complexity respectively.

2. \(O(2^n)\) is an exponential time complexity; algorithms of this kind are not practical.

3. The logarithmic order \(O(\log n)\) and the linearithmic order \(O(n\log n)\) are, apart from the constant order, the most efficient.

Example algorithm (matrix multiplication):

 for(i=1;i<=n;++i)
  {
     for(j=1;j<=n;++j)
     {
         c[i][j]=0; //1

          for(k=1;k<=n;++k)
               c[i][j]+=a[i][k]*b[k][j]; //2
     }
  }

Statement 1 executes \(n^2\) times.

Statement 2 executes \(n^3\) times.

So \(T(n) = n^3 + n^2\); by the same-order rule above, \(n^3\) is of the same order as \(T(n)\).

Taking \(f(n) = n^3\), the limit \(\lim\limits_{n \to +\infty}\frac{T(n)}{f(n)}\) is a constant (here \(1\)).

So the complexity of this algorithm is \(T(n) = O(n^3)\).

Important:

\(O(1)<O(logn)<O(n)<O(nlogn)<O(n^2)<O(n^3)<...<O(n^k)<O(2^n)<O(n!)\)
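For intuition (taking \(\log\) to be base 2 and rounding): at \(n = 1000\), \(\log n \approx 10\), \(n\log n \approx 10^4\), \(n^2 = 10^6\) and \(n^3 = 10^9\), while \(2^n\) already has about 300 decimal digits and \(n!\) has far more.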

(A method from the web, for reference)

You can record the time at the start and at the end of the program, and obtain the running time by subtraction.

#include <cstdio>
#include <ctime>   // be sure to include the header that declares clock() (<ctime> or <time.h>)

int main() {
    clock_t start_time = clock();   // at the start of the code you want to time

    // ... the code whose running time you want to measure ...

    clock_t end_time = clock();     // at the end of the code you want to time
    printf("Time used=%.2lf\n", (double)(end_time - start_time) / CLOCKS_PER_SEC);
    return 0;
}

Exercise: calculate the time complexity \(O(k)\) of the following program.

#include <cstdio>
using namespace std;
int a[100],b[100];
int k;
int main(){
    for(int i=0;i<10;i++){//1
        a[i]=i+1;
    }
    for(int i=0;i<10;i++){//2
        for(int j=0;j<a[i];j++){//3
            b[k]=a[i];
            k++; 
        }
    }
    for(int i=0;i<k;i++){//4
        if(i==0){
            printf("%d",b[i]);
        }
        else{
            printf(",%d",b[i]);
        }
    }
    return 0;
}

Part 2: Space complexity

The space complexity of an algorithm measures how much temporary storage it occupies while running; like time complexity, it describes a growth trend and is denoted \(S(n)\).

The space complexity of an algorithm is a measure of the temporary storage space occupied during its execution, written \(S(n) = O(f(n))\). For example, the time complexity of straight insertion sort is \(O(n^2)\) while its space complexity is \(O(1)\); an ordinary recursive algorithm, on the other hand, has \(O(n)\) space complexity, because the information of each recursive call has to be stored. The quality of an algorithm is mainly judged by two things: its execution time and the storage space it requires.
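A minimal sketch of why a recursive routine typically needs \(O(n)\) extra space (my own illustration; the function rec_sum is hypothetical): each of the \(n\) nested calls keeps a stack frame alive until the recursion reaches the base case.

// Recursively sum a[0..n-1]; the call chain is n frames deep,
// so the temporary (stack) space is O(n).
int rec_sum(int a[], int n) {
    if (n == 0) return 0;                 // base case: empty range
    return a[n - 1] + rec_sum(a, n - 1);  // one extra stack frame per element
}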

The commonly used space complexities are \(O(1)\), \(O(n)\) and \(O(n^2)\):

1. Space complexity \(O(1)\)

If the temporary space an algorithm needs does not change as the problem size \(n\) changes, i.e. the space complexity is a constant, it can be written \(O(1)\).

For example:

int i = 1;
int j = 2;
++i;
j++;
int m = i + j;

The space allocated for \(i\), \(j\) and \(m\) in this code does not change with the amount of data being processed, so the space complexity is \(S(n) = O(1)\).

2. Space complexity \(O(n)\)

Consider the following code:

int n = 1000;
int *a = new int[n];         // allocate an array of n ints: this dominates the space usage
for (int i = 1; i <= n; ++i)
{
    int j = i;

    j++;
}
delete[] a;

In this code, `new int[n]` allocates an array of \(n\) ints, so the space it occupies grows with \(n\); the loop that follows allocates no additional space. The space complexity of this code is therefore determined by that single allocation, i.e. \(S(n) = O(n)\).

\(125MB=131072000B\approx10^8B\)

| Type | Bytes occupied |
| --- | --- |
| int | 4 |
| long long | 8 |
| unsigned long long | 8 |
| float | 4 |
| double | 8 |
| long double | 16 |
| bool | 1 |

Calculation method:

Suppose the program allocates arrays of \(n\) different types, with \(d_i\) arrays of the \(i\)-th type, so that there are \(N_T = \sum_{i=1}^{n} d_i\) arrays in total. Let \(p_i\) be the number of bytes occupied by one element of the \(i\)-th type, and let \(y_{i_j}\) be the number of elements in the \(j\)-th array of the \(i\)-th type \((j \in [1, d_i],\ i \in [1, n])\).

If the maximum space allowed for the program is \(P\) \(MB\), this equals \(1048576P\) \(B = C\) \(B\).

The requirement is then \(S = \sum_{i=1}^{n} \left[ \sum_{j=1}^{d_i} (p_i \times y_{i_j}) \right] \leq C\); if this inequality holds, the program fits within the memory limit.

Example: three arrays, \(bool[1000]\), \(int[20000]\) and \(double[3000000]\), with a memory limit of \(12.5\,MB = 13107200\,B\):

\(S = 1\times1000 + 4\times20000 + 8\times3000000 = 24081000 > 13107200\)

So this program does not fit within the space limit.

2019.11.28

Source: www.cnblogs.com/liuziwen0224/p/fuzadu.html