Data structure TWO

1. Algorithm

An algorithm is a finite collection of instructions: a prescribed series of operations for solving a specific class of problems. It is a clearly defined computational process that takes a data set as input and produces a data set as output.
An algorithm generally has the following five characteristics:
1) Input: An algorithm should take the information of the problem to be solved as input.
2) Output: An algorithm should produce, as output, the information obtained after the input has been processed by the instruction set.
3) Feasibility: The algorithm is feasible, that is, every instruction in the algorithm is achievable and can be completed in a limited time.
4) Finiteness: The number of instructions executed by the algorithm is limited, and each instruction is completed in a finite time, so the entire algorithm can also be completed in a finite time.
5) Determinism: For a specific legal input, the corresponding output of the algorithm is unique. That is, when the algorithm starts from a specific input, the result of executing the same instruction set multiple times is always the same.
Simply put, an algorithm is the process by which a computer solves a problem.
In this process, whether forming a problem-solving idea or writing a program, a certain algorithm is being implemented.
The former is the logical form of the algorithm; the latter is its code form.

2. Definition of Time Complexity

1. Time frequency:
The time an algorithm takes to execute cannot be calculated exactly in theory; it can only be measured by running the algorithm on a computer.
But it is neither possible nor necessary to test every algorithm on a computer.
The time an algorithm takes is proportional to the number of times its statements are executed: the more statement executions, the more time it takes.
The number of statement executions in an algorithm is called its statement frequency or time frequency, denoted T(n), where n represents the size of the problem.

2. Time complexity:
Time frequency gives an exact count, but often we want to know how the running time grows with the problem size, not the specific number of executions. Time complexity is introduced for this purpose.
In general, the number of times the basic operation is repeated in an algorithm is a function of the problem size n, written T(n).
If there is an auxiliary function f(n) such that the limit of T(n)/f(n) as n approaches infinity is a constant not equal to zero, then f(n) is said to be of the same order of magnitude as T(n).
This is denoted T(n) = O(f(n)), and O(f(n)) is called the asymptotic time complexity of the algorithm, or simply the time complexity.
T(n) = O(f(n))
Put another way: the time complexity is obtained from the time frequency by dropping the lower-order terms and the coefficient of the highest-order term.
Note: time frequency and time complexity are different; algorithms with different time frequencies may have the same time complexity.
For example, three algorithms may have time frequencies T(n) = 100000n² + 10n + 6, T(n) = 10n² + 10n + 6, and T(n) = n²,
but all three have the same time complexity T(n) = O(n²): they are of the same order, which is determined by comparing the highest-order terms.

3. Worst-case time complexity and average time complexity
The time complexity in the worst case is called the worst-case time complexity. In general, the time complexity discussed is the worst-case time complexity.
The reason is that the worst-case time complexity is an upper bound on the running time of the algorithm over any input instance, which guarantees that the running time will never exceed this bound.
For example, if the worst-case time complexity is T(n) = O(n), then for any input instance the running time of the algorithm cannot grow faster than O(n).
The average time complexity is the expected running time of the algorithm when all possible input instances appear with equal probability. The average complexity is discussed less often because,
first, it is difficult to calculate, and
second, for many algorithms the average and worst-case complexities are the same.
So we generally discuss the worst-case time complexity.

The O symbol gives an upper bound on the algorithm's time complexity (worst case, <=).
The Ω symbol gives a lower bound on the time complexity (best case, >=).
The Θ symbol gives the exact order of the time complexity (best and worst cases are of the same order, =).

4. Time complexity calculation

There is no need to compute the exact time frequency. Since the calculation ignores constant coefficients and all but the highest power anyway, the following simple method can be used:
(1) Find the basic statement in the algorithm:
the statement executed most often is the basic statement, usually the body of the innermost loop.
(2) Calculate the order of magnitude of the basic statement's execution count:
only the highest power in the execution-count function needs to be correct; all lower powers and the coefficient of the highest power can be ignored.
This simplifies the analysis and focuses attention on the most important thing: the growth rate.
(3) Express the time performance of the algorithm in big-O notation:
put the order of magnitude of the basic statement's execution count inside O( ).

Examples

Time complexity of 100 simple statements (100 is a constant; it does not tend to infinity):
int count = 0;
……
count = 0;

T(n) = 100
T(n) = O(1)


Time complexity of a loop:
int n = 8, count = 0;
for (int i = 0; i <= n; i++)
    count++;

T(n) = n + 1
T(n) = O(n)

int n = 8, count = 0;
for (int i = 1; i <= n; i *= 2)
    count++;

O(log2n)

(Note: i must start at 1 here; if i started at 0, i *= 2 would leave it at 0 and the loop would never terminate.)





int n = 8, count = 0;
for (int i = 0; i <= n; i++)
    for (int j = 0; j <= n; j++)
        count++;

O(n²)



int n = 8, count = 0;
for (int i = 1; i <= n; i *= 2)
    for (int j = 0; j <= n; j++)
        count++;

O(n·log2n)


int n = 8, count = 0;
for (int i = 0; i <= n; i++)
    for (int j = 0; j <= i; j++)
        count++;

Since 1 + 2 + 3 + … + n = n(n+1)/2, this is

O(n²)
Ranking of common time complexity classes:
Constant order O(1)
Logarithmic order O(log2n)
Linear order O(n)
Linearithmic order O(n·log2n)
Quadratic order O(n²)
Cubic order O(n³)
k-th power order O(n^k)
Exponential order O(2^n)
Factorial order O(n!)
The further down the list, the more time the algorithm takes.

5. Space complexity
The storage required by an algorithm includes:
1. The space occupied by the program itself;
2. The space occupied by the input data;
3. The space occupied by auxiliary variables.
The space occupied by the input data depends only on the problem itself and has nothing to do with the algorithm, so the analysis focuses on the extra space occupied by auxiliary variables beyond the input and the program.
Space complexity is a measure of the amount of storage space an algorithm temporarily occupies during its execution. It is also generally given as a function of the problem size n, in order-of-magnitude form, denoted S(n) = O(g(n)).

Note:
1. Space complexity is analyzed less often than time complexity.
2. A recursive algorithm generally has shorter code, and the algorithm itself occupies less storage space, but it requires more temporary working storage at runtime;
written as a non-recursive algorithm, the code is generally longer and the algorithm itself occupies more storage, but it may require fewer storage units at runtime.

Origin blog.csdn.net/MAKEJAVAMAN/article/details/106785746