Data Structures and Algorithms (1): Time Complexity and Space Complexity

Let's start with three soul-searching questions:

1. Why do we need time complexity and space complexity at all?

2. What exactly are time complexity and space complexity?

3. How do we determine the time complexity and space complexity of an algorithm?

1. Why do we need time complexity and space complexity at all?

At first you may not care about efficiency. In practical work, many people write an algorithm just to make a scenario work: as long as the function is implemented, it's fine. With a small amount of data nothing looks wrong, but when the data volume grows explosively, a badly written algorithm can bring disaster to the system. Asked what makes an algorithm good, most people will first say that it executes fast and is readable; few mention another key indicator, whether it saves space. Some will say that hardware is so advanced nowadays that this doesn't matter, but that view is very narrow. When the amount of data explodes, every byte of memory and every millisecond of execution time is magnified without bound. This was even more true under the hardware constraints of the last century, which is why computer scientists proposed time complexity and space complexity as measures of an algorithm.

2. What exactly are time complexity and space complexity?

When analyzing an algorithm, the total number of statement executions T(n) is a function of the problem size n; we analyze how T(n) varies with n and determine the order of magnitude of T(n). The time complexity of an algorithm is a measure of its running time, written T(n) = O(f(n)). It means that as the problem size n grows, the algorithm's execution time grows at the same rate as f(n); this is called the asymptotic time complexity of the algorithm, or time complexity for short. Here f(n) is some function of the problem size n. The notation O() expresses the time complexity of the algorithm and is called big-O notation. In general, as the input size n increases, the algorithm whose T(n) grows slowest is the optimal algorithm.

Space complexity measures the storage space an algorithm requires as it runs, written S(n) = O(f(n)), where n is the problem size and f(n) is a function of n describing the storage occupied by the algorithm's statements. In general, we use "time complexity" to refer to running-time requirements and "space complexity" to refer to space requirements.

The formal derivations behind time complexity and space complexity are fairly involved; interested readers can look them up on Google or Wikipedia.
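To make the space-complexity side concrete, here is a minimal sketch (the class and method names are my own, purely illustrative): reversing an array with O(n) extra space versus O(1) extra space.

```java
public class SpaceDemo {
    // O(n) extra space: allocates a whole new array proportional to the input size.
    static int[] reverseCopy(int[] a) {
        int[] r = new int[a.length];
        for (int i = 0; i < a.length; i++) {
            r[i] = a[a.length - 1 - i];
        }
        return r;
    }

    // O(1) extra space: swaps elements in place using only a few temporary variables,
    // regardless of how large the array is.
    static void reverseInPlace(int[] a) {
        for (int i = 0, j = a.length - 1; i < j; i++, j--) {
            int tmp = a[i];
            a[i] = a[j];
            a[j] = tmp;
        }
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3, 4};
        System.out.println(java.util.Arrays.toString(reverseCopy(a)));  // [4, 3, 2, 1]
        reverseInPlace(a);
        System.out.println(java.util.Arrays.toString(a));               // [4, 3, 2, 1]
    }
}
```

Both versions run in O(n) time; they differ only in how much extra memory they need.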

3. How do we determine the time complexity and space complexity of an algorithm?

* Replace all additive constants in the running time with the constant 1.

The following code has no loops, so its running time does not depend on n and the time complexity is O(1):

        int n = 100;
        int sum = 0;
        System.out.println("hello");
        System.out.println("hello");
        System.out.println("hello");
        System.out.println("hello");
        sum = (1 + n) * n / 2;
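The last line above computes the sum 1 + 2 + ... + n with Gauss's formula in constant time. As a contrast, here is a small sketch (my own, not from the original post) comparing that O(1) formula with an O(n) loop that computes the same sum:

```java
public class SumDemo {
    // O(1): Gauss's formula, a fixed number of operations no matter how large n is.
    static int sumFormula(int n) {
        return (1 + n) * n / 2;
    }

    // O(n): the loop body runs n times, so the time grows linearly with n.
    static int sumLoop(int n) {
        int sum = 0;
        for (int i = 1; i <= n; i++) {
            sum += i;
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumFormula(100));  // 5050
        System.out.println(sumLoop(100));     // 5050
    }
}
```

Both produce the same answer; only the growth of the running time differs.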

* In the resulting function of run counts, retain only the highest-order term.

The following code contains a double loop, where each for loop executes n times, so the two nested for loops execute n^2 times in total. Time complexity is determined by the fastest-growing term, so here it is O(n^2).

        int i, j, n = 100;
        for (i = 0; i < n; i++) {
            for (j = 0; j < n; j++) {
                System.out.println("hello");
            }
        }

* If the highest-order term exists and its coefficient is not 1, remove that coefficient.

In the following code, when i = 0 the inner loop executes n times; when i = 1 it executes n-1 times; ...; when i = n-1 it executes once. So the total number of executions is:

n + (n-1) + (n-2) + ... + 1 = n(n+1)/2 = n^2/2 + n/2

First, there is no additive constant to replace, so that rule does not apply.

Second, retain only the highest-order term, so the n/2 term is removed.

Third, remove the coefficient of the highest-order term (here 1/2), giving the final result O(n^2).

        int i, j, n = 100;
        for (i = 0; i < n; i++) {
            for (j = i; j < n; j++) {
                System.out.println("hello");
            }
        }
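The derivation above can be checked empirically. This small sketch (my own addition, not from the original post) counts the inner-loop executions of the triangular loop and compares the count against the formula n(n+1)/2:

```java
public class CountDemo {
    // Count how many times the inner statement of the triangular loop runs.
    static int countExecutions(int n) {
        int count = 0;
        for (int i = 0; i < n; i++) {
            for (int j = i; j < n; j++) {
                count++;  // stands in for the println in the original code
            }
        }
        return count;
    }

    public static void main(String[] args) {
        int n = 100;
        System.out.println(countExecutions(n));  // 5050
        System.out.println(n * (n + 1) / 2);     // 5050, matches the formula
    }
}
```

For n = 100 both print 5050, confirming that the total work is n(n+1)/2, whose highest-order term gives O(n^2).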

* The final result is the big-O order.

Here is a picture of the common complexities:

[Figure: table of common time complexities]

And the corresponding growth curves showing their efficiency:

[Figure: growth curves of the common time complexities]

The common time complexities, ordered from least to most time consumed, are: O(1) < O(log n) < O(n) < O(n log n) < O(n^2) < O(n^3) < O(2^n) < O(n!) < O(n^n)
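O(log n) has not appeared in the examples so far; a classic source of logarithmic time is binary search, which halves the search range at every step. Below is an illustrative sketch (my own, not from the original post):

```java
public class LogDemo {
    // Binary search on a sorted array: each iteration halves the remaining
    // range, so the loop runs at most about log2(n) times -> O(log n).
    static int binarySearch(int[] a, int target) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;  // avoids overflow of (lo + hi)
            if (a[mid] == target) return mid;
            if (a[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1;  // not found
    }

    public static void main(String[] args) {
        int[] sorted = {1, 3, 5, 7, 9, 11};
        System.out.println(binarySearch(sorted, 7));  // 3
        System.out.println(binarySearch(sorted, 4));  // -1
    }
}
```

Doubling the array size adds only one extra iteration, which is exactly why O(log n) sits so far to the left in the ordering above.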

As noted above, "time complexity" refers to running-time requirements and "space complexity" to space requirements. When we say "complexity" without qualification, it usually means time complexity.

Don't rush to judgment; take a look at the time and space complexity of the algorithms you write. Here is a great place to practice data structures and algorithms, which most of you have probably used:

[LeetCode] https://leetcode.com/

After you submit an answer to a problem there, it evaluates and reports the time and space complexity of your algorithm, which is great. It looks like this:

[Screenshot: LeetCode submission result showing runtime and memory statistics]

Go give it a try.


Origin blog.51cto.com/14745357/2480558