Chapter 2: Algorithms

Algorithm: a description of the steps for solving a specific problem, expressed in a computer as a finite sequence of instructions, where each instruction represents one or more operations. In short, an algorithm is a description of a solution to a problem.

Characteristics of an algorithm: input, output, finiteness, determinism, and feasibility.

    Input/Output: an algorithm has zero or more inputs and at least one output.

    Finiteness: the algorithm terminates after executing a finite number of steps, without entering an infinite loop, and each step completes in an acceptable amount of time.

    Determinism: each step of the algorithm has a definite meaning, with no ambiguity.

    Feasibility: each step of the algorithm must be feasible; that is, each step can be completed by being executed a finite number of times.
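As a minimal sketch (the function name `sum_to_n` is illustrative, not from the text), a simple summing routine exhibits all of these characteristics:

```python
def sum_to_n(n):
    """Sum the integers 1..n -- one input, one output."""
    total = 0                  # determinism: every step has one unambiguous meaning
    for i in range(1, n + 1):  # finiteness: exactly n iterations, then the loop ends
        total += i             # feasibility: each step is a basic, executable operation
    return total               # output: at least one result is produced

print(sum_to_n(100))
```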

 

Algorithm Design Requirements

    Correctness: the algorithm should have no ambiguity in its input, output, or processing; it should correctly reflect the requirements of the problem and produce the correct answer. Correctness can be graded in four levels:

        1. The program contains no syntax errors

        2. The program produces output that meets the requirements for legal input data

        3. The program produces results that meet the specification for illegal input data

        4. The program produces output that meets the requirements even for carefully selected, difficult test data

    Readability: another goal of algorithm design is to make the algorithm easy to read, understand, and discuss.

    Robustness: when the input data is invalid, the algorithm should handle it appropriately instead of producing abnormal or inexplicable results.
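A hedged sketch of robustness (the function `average` is a hypothetical example, not from the text): invalid input is rejected with a clear error rather than producing an inexplicable result such as a division-by-zero crash.

```python
def average(values):
    """Return the arithmetic mean, handling the invalid empty-input case explicitly."""
    if not values:
        # Robustness: report the problem clearly instead of crashing with
        # a confusing ZeroDivisionError from len(values) == 0.
        raise ValueError("average() requires at least one value")
    return sum(values) / len(values)

print(average([1, 2, 3]))
```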

    High time efficiency and low storage requirement: time efficiency refers to the execution time of the algorithm; the storage requirement refers to the maximum storage space the algorithm needs during execution, mainly the memory or external disk space occupied by the program while it runs.

    

A measure of algorithm efficiency

    After-the-fact measurement: run the compiled program on test data and compare the recorded running times. Its drawback is that a program must first be written and run, and the timings depend on the hardware and the test data chosen.

    Pre-analysis estimation: before the program is compiled, estimate the algorithm's efficiency statistically, e.g., by counting its basic operations.

            The time it takes to run depends on:

                1. The method and strategy adopted by the algorithm---the basis for the quality of the algorithm

                2. The quality of the compiled code

                3. The input size of the problem

                4. The speed at which the machine executes the code

        When analyzing a program's running time, the most important thing is to view the program as an algorithm, a series of steps independent of any programming language, and to relate the number of basic operations to the input size; that is, the number of basic operations must be expressed as a function of the input size.
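For instance (a sketch; `count_ops_sum` is an illustrative name), the basic-operation count of a simple summing loop can be written directly as a function of the input size n:

```python
def count_ops_sum(n):
    """Count the basic operations executed by a simple summing loop."""
    ops = 0
    total = 0
    ops += 1                # one initial assignment
    for i in range(n):
        total += i
        ops += 1            # n additions inside the loop
    return ops              # so T(n) = n + 1, a function of the input size n

print(count_ops_sum(10))
```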

        Asymptotic growth of functions: given two functions f(n) and g(n), if there exists an integer N such that f(n) > g(n) for all n > N, then we say that f(n) grows asymptotically faster than g(n).

    When judging the efficiency of an algorithm, the constant term and lower-order terms of the function can often be ignored; attention should focus on the order of the highest-order term.

----------------------------

Theoretical basis for the pre-analysis estimation method: by comparing the asymptotic growth of the key operation counts of different algorithms, we can conclude that as n increases, one algorithm will become better and better, or worse and worse, relative to another.

 

Algorithm time complexity: in algorithm analysis, the total number of statement executions T(n) is a function of the problem size n. We analyze how T(n) varies with n and determine its order of magnitude. The time complexity of the algorithm, that is, its time measure, is written T(n) = O(f(n)). It means that as the problem size n grows, the growth rate of the algorithm's execution time is the same as the growth rate of f(n); this is called the asymptotic time complexity of the algorithm, or time complexity for short. Here f(n) is some function of the problem size n. The uppercase O() used to express the time complexity of the algorithm is called big-O notation.

Deriving the big-O order:

    1. Replace all additive constants in the running-time function with the constant 1

    2. In the modified running-time function, keep only the highest-order term

    3. If the highest-order term exists and its coefficient is not 1, remove that coefficient

    The result is the big-O order
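As a worked sketch, applying the three rules to a hypothetical operation count T(n) = 3n^2 + 2n + 10:

```python
# Rule 1: the additive constant 10 becomes 1   ->  3n^2 + 2n + 1
# Rule 2: keep only the highest-order term     ->  3n^2
# Rule 3: drop the constant coefficient 3      ->  n^2, so T(n) = O(n^2)

def T(n):
    """Hypothetical operation count used only to illustrate the rules."""
    return 3 * n * n + 2 * n + 10

# Sanity check: T(n) / n^2 approaches the constant 3 as n grows,
# confirming that n^2 is the dominant order.
print(T(10_000) / 10_000 ** 2)
```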

Constant order: regardless of the problem size n, an algorithm whose execution time is constant has time complexity O(1), also called constant order.
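A minimal sketch of constant order (the name `swap_demo` is illustrative): the number of operations is fixed no matter what the inputs are.

```python
def swap_demo(a, b):
    """Constant order O(1): three assignments regardless of the input values."""
    temp = a
    a = b
    b = temp
    return a, b

print(swap_demo(1, 2))
```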

*****

To analyze the complexity of an algorithm, the key is to analyze the operation of its loop structures.

*****

Linear order: O(n)

Logarithmic order: O(log n)

Square order: O(n^2)
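These three orders can be sketched as loop skeletons (illustrative names; each function returns its own pass count so the growth is visible):

```python
def linear(n):
    """O(n): the loop body runs exactly n times."""
    count = 0
    for _ in range(n):
        count += 1
    return count

def logarithmic(n):
    """O(log n): i doubles each pass, so roughly log2(n) passes."""
    count, i = 0, 1
    while i < n:
        i *= 2
        count += 1
    return count

def square(n):
    """O(n^2): two nested loops of n passes each."""
    count = 0
    for _ in range(n):
        for _ in range(n):
            count += 1
    return count

print(linear(8), logarithmic(8), square(8))
```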

Common time complexity:

    O(1) < O(log n) < O(n) < O(n log n) < O(n^2) < O(n^3) < O(2^n) < O(n!) < O(n^n)

 

Worst case and average case

The worst-case running time is a guarantee: the running time will never be worse than that. The average-case running time is the expected time over all possible inputs. In general, unless otherwise specified, the running times discussed are worst-case running times.
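Linear search is the standard illustration of the distinction (a sketch; `linear_search` is an illustrative name): the best case finds the target at the first position in O(1), the worst case scans all n elements, and on average about n/2 comparisons are needed if the target is equally likely anywhere.

```python
def linear_search(values, target):
    """Return the index of target in values, or -1 if absent.

    Best case: target is first, 1 comparison (O(1)).
    Worst case: target is last or absent, n comparisons (O(n)).
    """
    for i, v in enumerate(values):
        if v == target:
            return i
    return -1

print(linear_search([3, 1, 4], 4))
```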

 

Algorithm space complexity: measured by the storage space the algorithm requires. It is written S(n) = O(f(n)), where n is the problem size and f(n) is a function of n describing the storage space the algorithm occupies.

If the storage space required during the execution of the algorithm is constant relative to the amount of input data, the algorithm is said to work in place, and its space complexity is O(1).
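As a sketch of the contrast (illustrative names), reversing a list in place uses O(1) extra space, while building a reversed copy uses O(n) extra space:

```python
def reverse_in_place(lst):
    """O(1) extra space: swap ends moving inward, no auxiliary list."""
    i, j = 0, len(lst) - 1
    while i < j:
        lst[i], lst[j] = lst[j], lst[i]
        i, j = i + 1, j - 1
    return lst

def reverse_copy(lst):
    """O(n) extra space: builds a second list of the same size."""
    return lst[::-1]

print(reverse_in_place([1, 2, 3]))
```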
