Understanding time complexity and space complexity

1. The efficiency of the algorithm
2. Time complexity
3. Space complexity

1. The efficiency of the algorithm

Analyzing an algorithm's efficiency has two sides: the first is time efficiency, the second is space efficiency. Time efficiency is called time complexity, and space efficiency is called space complexity. Time complexity mainly measures how fast an algorithm runs, while space complexity mainly measures how much additional storage an algorithm needs. In the early days of computing, storage capacity was very small, so space complexity mattered a great deal. But after the rapid development of the computer industry, storage capacity has grown enormously, so today we usually do not need to pay special attention to an algorithm's space complexity.

2. Time complexity

2.1 The concept of time complexity
Definition of time complexity: in computer science, the time complexity of an algorithm is a function that quantitatively describes the algorithm's running time. The exact time an algorithm takes cannot, in theory, be worked out on paper; you only know it once you run the program on a machine. But do we really need to test every algorithm on a machine? We could test them all, but it would be very troublesome, which is why the time complexity analysis method exists. The time an algorithm takes is proportional to the number of times its statements execute, so the number of executions of the algorithm's basic operations is its time complexity.
In practice, when we calculate time complexity, we do not actually need the exact number of executions, only its approximate order, so we use asymptotic big O notation.
Big O notation: a mathematical notation used to describe the asymptotic behavior of a function.
Rules for deriving the big O order:
1. Replace every additive constant in the running time with the constant 1.
2. In the revised count function, keep only the highest-order term.
3. If the highest-order term exists and its coefficient is not 1, remove the coefficient. The result is the big O order.

// How many times is Func1's basic operation executed?
void Func1(int N)
{
    int count = 0;
    for (int i = 0; i < N; ++i)
    {
        for (int j = 0; j < N; ++j)
        {
            ++count;
        }
    }
    for (int k = 0; k < 2 * N; ++k)
    {
        ++count;
    }
    int M = 10;
    while (M--)
    {
        ++count;
    }
    printf("%d\n", count);
}

The number of basic operations Func1 performs:
N = 10    F(N) = 130
N = 100   F(N) = 10210
N = 1000  F(N) = 1002010
After applying asymptotic big O notation, the operation count of Func1 becomes:
N = 10    F(N) ≈ 100
N = 100   F(N) ≈ 10000
N = 1000  F(N) ≈ 1000000
From the above we can see that big O asymptotic notation strips out the terms that contribute little to the result, giving a concise representation of the execution count.


The number of basic operations Func1 performs is F(N) = N^2 + 2*N + 10. As N grows toward infinity, the lower-order terms become negligible, so by the big O derivation rules the time complexity of Func1 is O(N^2).

Special cases
Some algorithms also have best-, average-, and worst-case time complexities:
Worst case: the maximum number of operations for any input of size N (an upper bound)
Average case: the expected number of operations for any input of size N
Best case: the minimum number of operations for any input of size N (a lower bound)
For example, searching for a value x in an array of length N:
Best case: found in 1 comparison
Worst case: found in N comparisons
Average case: found in about N/2 comparisons
In practice we are generally interested in an algorithm's worst case, so the time complexity of searching the array is O(N).

3. Space complexity

The space complexity of an algorithm measures the amount of storage space it temporarily occupies while running. Space complexity does not count how many bytes the space takes, because that would not be very meaningful; instead, it counts the number of variables. The basic rules for computing space complexity are similar to those for time complexity, and it also uses asymptotic big O notation.

  1. Using a constant amount of extra space gives a space complexity of O(1).
  2. Dynamically allocating N elements of space gives a space complexity of O(N).
  3. A function that recurses N times opens N stack frames, each using a constant amount of space, giving a space complexity of O(N).

Origin blog.csdn.net/qq_44785014/article/details/103199385