C language data structures - Algorithm

(Data structure algorithms)

1. Definition of the algorithm

Algorithm: a description of the steps for solving a specific problem, expressed in a computer as a finite sequence of instructions, where each instruction represents one or more operations. (In short, an algorithm is a description of a method for solving a problem.)

2. Characteristics of the algorithm

  1. Input and output
    An algorithm has zero or more inputs. For example, printing "hello world!" requires no input parameters.
    An algorithm may have no input, but it must have at least one output.
  2. Finiteness
    Finiteness: an algorithm terminates after executing a finite number of steps rather than running in an infinite loop, and each step completes within an acceptable amount of time.
    Of course, in practice, if an algorithm takes years to finish, it is finite in the mathematical sense but not in the sense required of an algorithm.
  3. Definiteness
    Definiteness: each step of the algorithm has a well-defined meaning, with no ambiguity.
    Under given conditions there is only one execution path, and the same input always produces the same unique output. Every step of the algorithm must be precisely defined.
  4. Feasibility
    Feasibility: every step of the algorithm must be feasible; that is, each step can be completed by executing it a finite number of times.
    Feasibility means the algorithm can be translated into a program that runs on a machine and produces the correct result.

3. Algorithm design requirements

  1. Correctness
    Correctness: the algorithm should at least have unambiguous input, output, and processing, correctly reflect the requirements of the problem, and be able to produce the correct answer.
    Correctness can be divided into four levels:
    1. The program has no syntax errors.
    2. The program produces output that satisfies the requirements for valid input data.
    3. The program produces results that meet the specification even for illegal input data.
    4. The program produces the required output even for carefully selected, extremely difficult test data.
  2. Readability
    Readability: another goal of algorithm design is for the algorithm to be easy to read, understand, and communicate.
    Algorithms are written first for people to read and discuss, and only secondarily for machines to execute. Good readability helps people understand the algorithm, while an obscure program easily hides errors and is hard to debug and modify.
  3. Robustness
    Robustness: when the input data is invalid, the algorithm should handle it appropriately rather than producing an exception or inexplicable results.
    For example, an input representing time or distance should not be accepted as negative (a small validation sketch follows this list).
  4. High time efficiency and low storage requirements
    Time efficiency: the execution time of the algorithm.
    Storage requirement: the maximum storage space the algorithm needs during execution, mainly the memory or external disk space occupied while it runs.
    A well-designed algorithm should aim for both high time efficiency and low storage requirements.
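
To illustrate the robustness requirement above, here is a minimal validation sketch (an example of my own, not part of the original text): the function rejects a negative distance or a non-positive time instead of returning a meaningless result.

#include <stdio.h>

/* Illustrative robustness check: validate the input before computing. */
double average_speed(double distance, double time)
{
    if (distance < 0 || time <= 0)      /* invalid input is handled explicitly */
    {
        printf("invalid input\n");
        return -1.0;                    /* sentinel value, chosen only for this sketch */
    }
    return distance / time;
}

int main(void)
{
    printf("%f\n", average_speed(100.0, 2.0));   /* valid input: prints 50.000000 */
    printf("%f\n", average_speed(-5.0, 2.0));    /* invalid input: reported, no strange result */
    return 0;
}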

4. Algorithm efficiency measurement methods

  • Post-hoc statistical method
    Post-hoc statistical method: design suitable test programs and data, then use the computer's timer to compare the running times of programs written with different algorithms, and so determine which algorithm is more efficient (a rough timing sketch follows below).
    This method has major drawbacks:
    1. A program implementing each algorithm must first be written and run.
    2. The measured time depends on the computer's hardware and software environment, which can easily mask the merits of the algorithms themselves.
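
    As a rough sketch of this approach (my own illustration, using the standard C clock() timer, not code from the original), one can time a candidate program like this:

#include <stdio.h>
#include <time.h>

/* Post-hoc timing sketch: measure how long a placeholder workload takes. */
int main(void)
{
    long n = 100000000L;                 /* problem size chosen for illustration */
    long long sum = 0;
    clock_t start = clock();
    for (long i = 1; i <= n; i++)        /* placeholder workload: sum 1..n */
        sum += i;
    clock_t end = clock();
    printf("sum = %lld, elapsed = %f s\n",
           sum, (double)(end - start) / CLOCKS_PER_SEC);
    return 0;
}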

  • A priori analysis and estimation method
    A priori analysis and estimation method: before the program is written, estimate the algorithm using statistical methods.
    The time consumed by a program written in a high-level language when it runs on a computer depends on the following factors:
    1. The strategy and method the algorithm adopts. (The fundamental quality of the algorithm.)
    2. The quality of the code generated by the compiler. (Determined by the supporting software.)
    3. The scale of the input problem. (The amount of input.)
    4. The speed at which the machine executes instructions. (Determined by hardware performance.)
    For example, compare the following two methods:

int i,sum = 0,n=100;      // executed 1 time
for(i=1;i<=n;i++)         // executed n+1 times
{
    sum = sum + i;        // executed n times
}
printf("%d",sum);         // executed 1 time

In total, 1 + (n + 1) + n + 1 = 2n + 3 statements are executed.

int sum = 0,n = 100;      // executed 1 time
sum = (1 + n) * n/2;      // executed 1 time
printf("%d",sum);         // executed 1 time

In total, 1 + 1 + 1 = 3 statements are executed.
Clearly the second algorithm is better.
The most reliable way to measure running time is to count the number of basic operations that consume running time; the running time is proportional to this count.
Finally, when analyzing the running time of a program, the most important thing is to view the program as an algorithm or a series of steps independent of the programming language.

5. Asymptotic growth of functions

Asymptotic growth of a function: given two functions f(n) and g(n), if there exists an integer N such that for all n > N, f(n) is always greater than g(n), then we say that f(n) grows asymptotically faster than g(n).
In general, the following conclusions hold:

  1. Additive constants such as +3 or +1 can be ignored.
  2. The constant multiplying the highest-order term is not important.
  3. The larger the exponent of the highest-order term, the faster the function grows as n increases.
  4. When judging the efficiency of an algorithm, constants and lower-order terms can often be ignored; what matters is the order of the main (highest-order) term.
  5. As n increases, one algorithm becomes better and better than another, or worse and worse than it.
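
To make conclusion 4 concrete, here is a small illustrative program (the functions 3n + 3 and 2n^2 are my own choice, not from the original): past a certain n, the quadratic term dominates no matter what the constants are.

#include <stdio.h>

/* Print a linear and a quadratic function side by side as n grows. */
int main(void)
{
    for (long n = 1; n <= 100000; n *= 10)
    {
        long long linear    = 3LL * n + 3;
        long long quadratic = 2LL * n * n;
        printf("n = %6ld   3n+3 = %10lld   2n^2 = %14lld\n",
               n, linear, quadratic);
    }
    return 0;
}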

6. Algorithm time complexity

  1. Definition of algorithm time complexity
    Definition of algorithm time complexity: in algorithm analysis, the total number of statement executions T(n) is a function of the problem size n. We analyze how T(n) changes with n and determine the order of magnitude of T(n). The time complexity of the algorithm, which is a measure of the time the algorithm requires, is written T(n) = O(f(n)). It means that as the problem size n grows, the execution time of the algorithm grows at the same rate as f(n). This is called the asymptotic time complexity of the algorithm, or simply the time complexity, where f(n) is some function of the problem size n.
    The capital O(...) used to express the time complexity of an algorithm is called big-O notation.
    In general, as n increases, the algorithm whose T(n) grows most slowly is the best algorithm.
    • O(1) is called constant order
    • O(n) is called linear order
    • O(n^2) is called quadratic order
  2. Method for deriving the big-O order
    Deriving the big-O order: 1. Replace all additive constants in the running time with the constant 1. 2. In the modified count of executions, keep only the highest-order term. 3. If the highest-order term exists and its coefficient is not 1, remove that coefficient. The result is the big-O order.
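    For example, applying these steps to the counts computed earlier: T(n) = 2n + 3 keeps only the term 2n and drops the coefficient 2, giving O(n); T(n) = 3 is replaced by the constant 1, giving O(1); and T(n) = n^2/2 + n/2 keeps only n^2/2 and drops the coefficient 1/2, giving O(n^2).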

  3. Constant order
    No matter how large the constant is, it is written as O(1),
    not O(3), O(4), or any other number.
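
    As a minimal constant-order sketch (an illustrative program of my own, not from the original), the statement count below is fixed and never depends on n:

#include <stdio.h>

int main(void)
{
    int n = 100000;       /* the problem size never changes the statement count */
    int x = 0;            /* executed 1 time */
    x = x + 1;            /* executed 1 time */
    x = x * 2;            /* executed 1 time */
    x = x + n;            /* executed 1 time */
    printf("%d\n", x);    /* executed 1 time */
    return 0;             /* a fixed number of statements, independent of n: O(1) */
}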

  4. Linear order
    Analyzing loop structures is the key to analyzing the complexity of an algorithm.
    In the following code, the time complexity is O(n):
int i;
for(i=0;i<n;i++)
{
    /* some step with O(1) time complexity */   // executed n times
}
  5. Logarithmic order
int count = 1;
while(count < n)
{
    count = count * 2;
    /* some step with O(1) time complexity */
}

Analysis: each time count is multiplied by 2, it gets closer to n; in other words, the loop exits once count has been multiplied by 2 some number of times x such that 2^x exceeds n. From 2^x = n we get x = log2(n), so the time complexity of the code above is O(log n).

  6. Quadratic order
    When loops are nested, the time complexity of a loop equals the complexity of the loop body multiplied by the number of times the loop runs.
    For example, the following code:
int i,j;
for(i=0;i<n;i++)
{
    for(j=i;j<n;j++)
    {
        /* some step with O(1) time complexity */
    }
}

The total number of executions is n + (n-1) + (n-2) + ... + 2 + 1 = n^2/2 + n/2, so the time complexity of this code is O(n^2).

In general:
O(1) < O(log n) < O(n) < O(n log n) < O(n^2) < O(n^3) < O(2^n) < O(n!) < O(n^n)

7. Other

The worst-case running time is a guarantee that the running time will never be worse than this. In applications this is one of the most important requirements; unless otherwise specified, the running time we refer to is the worst-case running time.
The average running time is the most meaningful measure of all cases, because it is the expected running time.
Space complexity: the storage space required by the algorithm, written S(n) = O(f(n)), where n is the problem size and f(n) is a function of n describing the storage space the statements occupy.
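
As an illustrative sketch (my own example, reusing the summation task from section 4): summing 1..n with a few variables needs a fixed amount of extra storage, S(n) = O(1), while first copying the n values into an auxiliary array needs storage proportional to n, S(n) = O(n).

#include <stdio.h>
#include <stdlib.h>

/* O(1) extra space: a fixed number of variables, regardless of n. */
long long sum_constant_space(int n)
{
    long long sum = 0;
    for (int i = 1; i <= n; i++)
        sum += i;
    return sum;
}

/* O(n) extra space: an auxiliary array whose size grows with n. */
long long sum_linear_space(int n)
{
    int *a = malloc((size_t)n * sizeof *a);   /* n integers of extra storage */
    if (a == NULL)
        return -1;                            /* allocation failure handled */
    long long sum = 0;
    for (int i = 0; i < n; i++)
        a[i] = i + 1;
    for (int i = 0; i < n; i++)
        sum += a[i];
    free(a);
    return sum;
}

int main(void)
{
    printf("%lld %lld\n", sum_constant_space(100), sum_linear_space(100));
    return 0;
}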

Origin www.cnblogs.com/PursuingtheLight/p/11359721.html