Data Structure Review Induction Essay (Part 1)

        It has been a year since I studied data structures, and I suspect many of my peers never studied this subject systematically, even though its status in IT is extremely important, so I am writing about this programming knowledge. This essay will help you review the data structures course. Friends, please note that data structures are destined to be a bit dry; please read patiently — it will be helpful for your programming. Experts are welcome to point out the article's shortcomings. If you like it, leave a comment; I will visit regularly and we can learn together;

       First of all, the first chapter naturally begins by introducing what a data structure is. There is no need to describe it in overly professional terms here (I don't really understand the professional terminology anyway, haha!).

           Data structures + related algorithms = programming. In layman's terms, a data structure describes the relationships between data elements — the queues, stacks, trees, graphs, and so on mentioned above — while something like the recursion used to traverse a tree is an algorithm: a procedure that manipulates a data structure, written for that data structure.

          Data structures are divided into logical structure and physical structure. For the physical structure, the main object of study is memory: it is mainly about storage, i.e. the ways data elements can be laid out in memory.

          Physical storage is divided into sequential storage and chained storage; an array is sequential, and a pointer-based structure is chained. Their main difference: in sequential storage the elements occupy contiguous memory addresses, while in chained storage the elements can sit at arbitrary, non-contiguous addresses and rely on pointers to maintain the links between them;
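The contrast between the two storage styles can be sketched in a few lines of C. This is only an illustration; the function names (`sequential_third`, `push_front`, `chained_third`) are made up for this example:

```c
#include <stdlib.h>

/* Sequential storage: elements sit in one contiguous block,
   so element i lives at a fixed offset from the base address. */
int sequential_third(const int *arr) {
    return arr[2];              /* address = base + 2 * sizeof(int) */
}

/* Chained storage: each node may live anywhere in memory;
   the pointer field is what ties the elements together. */
struct Node {
    int data;
    struct Node *next;
};

/* Prepend a value; returns the new head of the list. */
struct Node *push_front(struct Node *head, int value) {
    struct Node *n = malloc(sizeof *n);
    n->data = value;
    n->next = head;             /* link to the previous head */
    return n;
}

int chained_third(const struct Node *head) {
    return head->next->next->data;   /* must follow pointers one by one */
}
```

The array answers "give me element 2" with one address computation; the list has to walk the pointers, which is exactly the trade-off the two storage styles make.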

          Logical structures: set structure, linear structure, tree structure, and graph structure;

          That is about all for the first chapter. The second chapter is about efficiency, where time and space begin to take center stage;

          Making programs that merely work is the first step for beginners. As experience accumulates, you should also start to consider optimization: how to program better, save resources, and improve efficiency;

         Of course, most students reading this do not need to worry about that yet, but they should still learn it well.

      The second chapter: space complexity and time complexity:

         The computer's hardware performance and the choice of compiler affect how long a program runs, but those are external factors. The internal factors are the size of the input and the quality of the algorithm. For example, the same program takes very different times when 1,000 people access it versus 100,000; and traversing an array is not the same as traversing a tree — do you use a plain loop for the array and recursion for the tree?

          Therefore, it is necessary to study the complexity well;

          So what is the focus of studying the complexity of algorithms?

          Should you study the exact number of executions and compare the totals? Can such a comparison even be carried out when the amount of data is very large?

         So the focus is on an abstraction: how fast the algorithm's cost grows with the size of the input.

         The key to analyzing an algorithm's running time is to relate the number of basic operations to the input size. Suppose one algorithm costs f(n) = 2n + 3 operations: you can read this as two loops of n executed one after the other, plus three fixed operations. Suppose another costs g(n) = 3n + 1: three non-nested loops of n in a row, plus one more operation. These two functions intersect at n = 2, where both equal y = 7. For n > 2, the curve of 3n + 1 rises faster than that of 2n + 3, so from that point on 3n + 1 performs worse: for the same n, the 2n algorithm finishes with fewer executions, while the 3n algorithm needs more. This leads us to introduce a concept.
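The crossover at n = 2 can be checked numerically with a tiny sketch; the function names `cost_f` and `cost_g` are my own labels for the two hypothetical cost functions in the text:

```c
/* The two hypothetical operation-count functions discussed above. */
long cost_f(long n) { return 2 * n + 3; }   /* two loops of n + 3 fixed steps */
long cost_g(long n) { return 3 * n + 1; }   /* three loops of n + 1 fixed step */

/* Usage: cost_f(2) and cost_g(2) both return 7 (the crossover point);
   for any n > 2, cost_g(n) > cost_f(n), so the 3n + 1 algorithm
   does strictly more work from then on. */
```

Evaluating both at a few values of n makes the "growth rate beats constants" point concrete before the big-O rules are stated.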

         

         When the scale is particularly large, it is the n in 2n and 3n that determines the steepness of the function curve; the trailing constants can even be ignored.

         Then, after looking at examples with higher-order terms, we can summarize the following rule:

         The highest-order term (the one with the largest exponent) is the primary concern; the constant multiplied by it can be ignored, and the other lower-order terms are simply dropped.

         Well, all of the above was groundwork. Now pay attention:

         

         Assume each basic operation takes one unit of time (one CPU step); then the running time is proportional to the number of executions.

         

         Inside O() goes the expression f(n). Another point is understanding the growth rate, which corresponds to the slope of the function curve: the steeper the curve rises, the faster the cost grows, that is, the worse the algorithm.

         Then the rules for deriving the big-O order are:

         1. Replace all additive constants in the running-time count with the constant 1;
         2. Keep only the highest-order term;
         3. If the highest-order term has a coefficient other than 1, remove that coefficient. The result is the big-O order.

         Here are a few classic examples:

          Constant order, linear order, square order, logarithmic order;

          Look at the constant order:

          int n = 6, sum;

          printf("aaa");

          sum=n*(n*45)+1;

          The declarations and output statements are not what we are concerned with; we look directly at the instructions that perform the operation, and what matters is their scale. The summation statement above, even if you write a few more like it, still gives O(1), because the number of operations does not depend on n;

          Then to the linear order:     

          for(i=0;i<n;i++){}

           This section executes n times, so it is O(n);

          Then to the square order:

        for(i=0;i<n;i++){

            for(j=0;j<n;j++){}

        }

        Nesting like this is O(n^2);

       There is also a special example:

       for(i=0;i<n;i++){

        for(j=i;j<n;j++){}      

        }

        Analyzing this: when i=0, j loops n times; when i=1, j loops n-1 times; and so on, so we get:

        sum = n + (n-1) + (n-2) + ... + 1 = n(n+1)/2 = n^2/2 + n/2;

         According to the rule:

        Drop the lower-order term and the constant coefficient on the highest term to get: O(n^2);
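The triangular count n(n+1)/2 can be verified by simply counting the inner-loop passes; the function name `count_iterations` is mine, introduced just for this check:

```c
/* Count the total iterations of the nested loop where
   the inner loop starts at j = i, exactly as in the example. */
long count_iterations(long n) {
    long sum = 0;
    for (long i = 0; i < n; i++)
        for (long j = i; j < n; j++)
            sum++;              /* one unit of work per inner pass */
    return sum;                 /* equals n*(n+1)/2 */
}
```

For n = 10 this counts 10 + 9 + ... + 1 = 55 passes, matching the formula, and the n^2/2 term is what survives into the O(n^2) order.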

       Of course, one more level of nesting in the loop gives cubic order, but each specific problem still needs its own analysis;

       

        Logarithmic order:

         i = 1;

         while(i < n){

           i = i * 2;}

           That is to say, the loop ends when i >= n. Starting from i = 1 and doubling each pass, after x passes we have i = 2^x, so the loop stops when 2^x >= n, i.e. x = logN (the small base-2 subscript is omitted); at this point the order is O(logN);

          
           I will stop here for today; part 1 of these notes will continue up through queues. Writing in this editor is painful, and the formatting problems are very annoying.
