Understanding algorithm time complexity through simple examples


In programming, the execution efficiency of a piece of code is difficult to estimate or predict precisely, because it is affected by several factors:

1. The mathematical basis on which the algorithm is built.

2. The quality of the code produced by the compiler and the execution efficiency of the language.

3. The input size of the problem.

4. The execution speed of the hardware.

Of these, the input size and the mathematical underpinnings of the algorithm are what programmers need to consider. Time complexity is the standard criterion used to describe the efficiency of algorithm execution.

Before tackling time complexity, you should first understand the time frequency of an algorithm. The so-called time frequency is the number of times the algorithm's statements are executed when solving a problem, and it usually depends on the input. For example, given an integer n, print all non-negative even numbers smaller than n:

let n = 10;

for (let i = 0; i < n; i++) {
	if (i % 2 === 0) {
		console.log(i);
	}
}

In the above code, when the input n is 10, the loop body is executed 10 times. If we call that time frequency t, then when n is 20 the time frequency is 2t. Time complexity describes how the time frequency t grows as the problem size n grows. Here is a more mathematical description:

In general, the number of repetitions of the basic operations in an algorithm is a function of the problem size n, written T(n). If there is an auxiliary function f(n) such that, as n approaches infinity, the limit of T(n)/f(n) is a nonzero constant, then f(n) is said to be of the same order of magnitude as T(n). This is denoted T(n) = O(f(n)), and O(f(n)) is called the asymptotic time complexity of the algorithm, or simply the time complexity.
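To make the limit definition concrete, here is a small sketch. T(n) = 5n + 2 is just an assumed operation count, not taken from any particular algorithm; with f(n) = n, the ratio T(n)/f(n) settles toward the nonzero constant 5, so T(n) = O(n).

```javascript
// A minimal sketch of the limit definition above.
// T(n) = 5n + 2 is a hypothetical operation count; with f(n) = n,
// T(n)/f(n) approaches the nonzero constant 5 as n grows.
function T(n) { return 5 * n + 2; }
function f(n) { return n; }

for (const n of [10, 1000, 1000000]) {
	console.log(n, T(n) / f(n)); // the ratio tends toward 5
}
```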

When calculating the time complexity of an algorithm, we can break it into individual statements, count the frequency of each statement, and sum them. The following code sums the integers from the input n down to 1:

let n = 10;
let res = 0; // 1
for (let i = n; i > 0; i--) { // 1 + (n+1) + n
	res = i + res; // n
}
console.log(res); // 1

Whatever the input n, the time frequency is 1 + 1 + (n+1) + n + n + 1 = 3n + 4: the two assignments, the loop initialization, and the final console.log run once each, the loop test runs n + 1 times, and the loop body and the decrement run n times each. Let f(n) be an auxiliary function and consider (3n+4)/f(n) as n tends to infinity: the constant terms vanish, and taking f(n) = n gives the nonzero constant 3, so the time complexity of this algorithm is O(n).
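As a sanity check, the statement counts can be tallied mechanically. The sketch below mirrors the accumulation loop but increments a counter t (a bookkeeping helper introduced here, not part of the original code) once per executed statement:

```javascript
// Count every statement execution of the accumulation loop.
// The test i > 0 runs n + 1 times; the body and the decrement run n times each.
function frequency(n) {
	let t = 0;
	let res = 0; t++;       // res = 0
	let i = n; t++;         // loop initialization
	while (true) {
		t++;                // the test i > 0
		if (!(i > 0)) break;
		res = i + res; t++; // the loop body
		i--; t++;           // the decrement
	}
	t++;                    // console.log(res)
	return t;
}

console.log(frequency(10)); // 34, i.e. 3 * 10 + 4
```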

When the time frequency of an algorithm is independent of n, its time complexity is O(1), the smallest possible time complexity. Note, however, that a small time complexity does not mean a short execution time: 10,000 lines of code, each executed exactly once, still have a time complexity of O(1).
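For example, the sketch below performs the same single operation no matter how large the input array is, so its time frequency is constant and its complexity is O(1):

```javascript
// O(1): one array access, independent of arr.length.
function firstElement(arr) {
	return arr[0];
}

console.log(firstElement([7, 8, 9]));              // 7
console.log(firstElement(new Array(1000000).fill(0))); // still a single operation
```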

The time complexities of common algorithms, from smallest to largest, are:

O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(n³) < … < O(2ⁿ) < O(n!)
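Evaluating each class at a single point, say n = 10, makes the gaps in this ordering concrete (a quick sketch; the `growth` table is introduced here for illustration):

```javascript
// Evaluate each common growth function at n = 10.
const n = 10;
const factorial = (k) => (k <= 1 ? 1 : k * factorial(k - 1));

const growth = {
	"O(1)":       1,
	"O(log n)":   Math.log2(n),     // ~3.32
	"O(n)":       n,                // 10
	"O(n log n)": n * Math.log2(n), // ~33.2
	"O(n^2)":     n ** 2,           // 100
	"O(n^3)":     n ** 3,           // 1000
	"O(2^n)":     2 ** n,           // 1024
	"O(n!)":      factorial(n),     // 3628800
};

for (const [name, value] of Object.entries(growth)) {
	console.log(name, value);
}
```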

where O(log n) has the smallest time complexity other than O(1); the base of the logarithm changes only a constant factor, so it is usually omitted. For example, the following code:

let n = 10;
let i = 1;
let tip = 0; // counts the loop iterations

while (i < n) { // runs t times, where 2^t < n, so t < log2(n); t is the time frequency
	i = i * 2;
	tip++;
}

Ignoring the constant-time initializations, the loop test, the doubling of i, and the counter update each run about log2(n) times, for a time frequency of roughly 3log2(n); after dropping the constant factor, the time complexity is O(log2(n)). If you wrap a similar loop inside another loop, the time complexity becomes O(nlog3(n)), which is usually written O(n log n) since the log base contributes only a constant factor:

let n = 10;

for (let i = 0; i < n; i++) { // 1 + (n+1) + n
	let j = 1; // n
	while (j < n) { // about log3(n) iterations each time: 3^t < n, so t < log3(n)
		j = j * 3;
	}
}
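The total work of this nested loop can be verified by counting inner-loop iterations directly (a sketch; `nestedCount` is a helper name introduced here):

```javascript
// Count the inner-loop iterations of the nested loop above:
// for each of the n outer iterations, j triples until it reaches n,
// about log3(n) steps, so roughly n * log3(n) iterations in total.
function nestedCount(n) {
	let t = 0;
	for (let i = 0; i < n; i++) {
		let j = 1;
		while (j < n) {
			j = j * 3;
			t++;
		}
	}
	return t;
}

console.log(nestedCount(10)); // 30: 10 outer iterations * 3 inner steps (3^3 >= 10)
```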

From the above examples it is also easy to see that each additional layer of looping sharply increases the time complexity of an algorithm. Mixing loops with recursion deserves particular care: in the sample below, each recursive level runs its own loop, so the total work is n + (n-1) + ... + 1, which is O(n²), and careless nesting of recursive calls can push the cost as high as O(2ⁿ) or O(n!). From a mathematical point of view, the performance of such code drops sharply as the input grows. The sample code is as follows:

function func(n) {
	if (n < 0) {
		return;
	}
	let i = 0;
	for (; i < n; i++) { // n iterations at this level; n + (n-1) + ... + 1 across all levels
		console.log("tip");
	}
	func(--i); // after the loop i === n, so this calls func(n - 1)
}
func(10);

When the input n is 150, the JavaScript code above performs 150 + 149 + ... + 1 = 11,325 loop iterations in total, roughly the same cost as an ordinary loop executed 10,000 times.
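That figure can be checked by counting iterations instead of printing (`opCount` is a helper introduced here, mirroring the recursion above):

```javascript
// Total loop iterations performed by the recursive example:
// each level of recursion contributes n iterations, so the total is
// n + (n-1) + ... + 1 = n * (n + 1) / 2.
function opCount(n) {
	if (n < 0) {
		return 0;
	}
	return n + opCount(n - 1);
}

console.log(opCount(150)); // 11325 = 150 * 151 / 2
```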
