Four major sorting algorithms and their time and space complexity analysis

time complexity

In algorithm analysis, the total number of statement executions T(n) is a function of n (the size of the problem). We study how T(n) changes as n changes and take its order of magnitude. If there is an auxiliary function f(n) such that T(n)/f(n) tends to a non-zero constant as n tends to infinity, we write T(n) = O(f(n)). This is the asymptotic time complexity of the algorithm, which is what we usually just call the time complexity.

Big O notation: expressing the time complexity as O(f(n)) is called Big O notation. For example, O(1) is called constant order, O(n) linear order, and O(n^2) square order. To derive the Big O order:

  1. Replace all additive constants in the run-time function with the constant 1.
  2. In the modified function, keep only the highest-order term.
  3. If the highest-order term exists and its coefficient is not 1, remove that coefficient. The result is the Big O order.
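Applying these three rules to a concrete run-time function, say T(n) = 3n^2 + 2n + 10 (an invented example):

```
T(n) = 3n^2 + 2n + 10
-> replace the additive constant with 1:   3n^2 + 2n + 1
-> keep only the highest-order term:       3n^2
-> drop the constant coefficient:          n^2
=> T(n) = O(n^2)
```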

A simple example:

int i;
for (i = 0; i < n; i++)   // this loop runs n times: O(n)
{
	cout << i;            // this statement is O(1)
}

 
 
  
  

The time complexity of this code is O(n): each execution of "cout << i" is constant work, so it is O(1), and the loop "for (i = 0; i < n; i++)" executes it n times, which is linear order, O(n). The complexity of the whole fragment is O(1 * n), which is O(n).

Common time complexities:

Constant order O(1):

If the execution time of the algorithm does not grow with the problem size n, then even if the algorithm contains thousands of statements, its execution time is just a comparatively large constant. The time complexity of such an algorithm is O(1).

Logarithmic order O(log2 n),

Linear order O(n),

Linearithmic order O(n log2 n),

Square order O(n^2),

Cubic order O(n^3),

kth order O(n^k),

Exponential order O(2^n).

As n grows, each of these complexities grows in turn, and so does the cost of running the algorithm.


Common time complexities, ordered from cheapest to most expensive:
O(1) < O(log n) < O(n) < O(n log n) < O(n^2) < O(n^3) < O(2^n) < O(n!) < O(n^n)

space complexity

Space complexity refers to the amount of temporary memory a program occupies during execution: S(n) = O(f(n)), where n is the size of the problem and f(n) is a function of n describing the memory occupied by the statements.

A simple example: exchanging the values of two variables requires defining one temporary variable. That extra variable is the space cost; since it is a constant amount, the space complexity is O(1).

	int i = 10, j = 100;   // the two values to be exchanged
	// swap the values of the variables
	int temp = i;          // temporary variable
	i = j;
	j = temp;

 
 
  
  

Space complexity is where recursion deserves a mention. Each recursive call saves its state (parameters, local variables, return address) on the call stack, so recursive methods can consume a lot of memory; that is, their space complexity is relatively high. If the recursion goes too deep, the stack overflows and the required result cannot be computed. Therefore, when many iterations are needed, prefer a loop over recursion.

insertion sort

Principle analysis

Insert each record into an already sorted subsequence, producing a new, longer sorted subsequence.

Treat the first element of the sequence as an ordered subsequence, then insert the records one by one, starting from the second, into that ordered subsequence until the whole sequence is ordered.

Code

#include <iostream>
using namespace std;

void printArray(int *arr, int len)
{
	for (int i = 0; i < len; ++i)
	{
		cout << arr[i] << " ";
	}
	cout << endl;
}

// Direct insertion sort (ascending): insert arr[i] into the already
// ordered subsequence arr[0..i-1].
void InsertSort(int *arr, int len)
{
	for (int i = 1; i < len; ++i)
	{
		if (arr[i] < arr[i - 1])              // only insert when out of order
		{
			int temp = arr[i];                // the element to insert
			int j = i - 1;
			for (; j >= 0 && temp < arr[j]; j--)
			{
				arr[j + 1] = arr[j];          // shift larger elements right
			}
			arr[j + 1] = temp;
		}
	}
}

int main()
{
	int arr[] = { 4, 5, 8, 9, 1, 2 };
	int len = sizeof(arr) / sizeof(int);   // number of elements in the array
	printArray(arr, len);
	InsertSort(arr, len);
	printArray(arr, len);
	return 0;
}


Complexity Calculation

1. When the initial sequence is already in order, only n-1 passes of the outer loop are needed; each pass performs a single comparison and moves no elements. The numbers of comparisons (Cmin) and moves (Mmin) reach their minimum:
C min = n - 1;
M min = 0;

In this case the time complexity is O(n).
2. When the initial sequence is in reverse order, n-1 passes are still needed, but in pass i the element to be inserted must be compared with all i elements in [0, i-1], and those i elements must each be moved back. The numbers of comparisons and moves reach their maximum:
C max = 1 + 2 + 3 + ... + (n-1) = n(n-1)/2;
M max = 1 + 2 + 3 + ... + (n-1) = n(n-1)/2;

In this case the time complexity is O(n^2).
3. Direct insertion sort uses only the three auxiliary variables i, j, and temp, which is independent of the problem size, so its space complexity is O(1).

Bubble Sort

Concept and idea

Repeatedly traverse the array to be sorted, comparing adjacent elements two at a time and swapping them if they are in the wrong order. The traversal is repeated until no more swaps are needed, which means the sequence is sorted. The algorithm's name comes from the way larger elements slowly "float" to the end of the sequence through these swaps, hence "bubble sort".

Code

#include <iostream>
using namespace std;

void BubbleSort(int *a, int size)
{
	for (int i = 0; i < size; i++)          // outer loop: one pass per element
	{
		for (int j = 1; j < size - i; j++)  // inner loop: compare elements pairwise
		{
			if (a[j] < a[j - 1])            // swap adjacent elements if out of order
			{
				int temp = a[j];
				a[j] = a[j - 1];
				a[j - 1] = temp;
			}
		}
	}
}

int main()
{
	int a[10] = { 2, 7, 34, 54, 12, 5, 19, 33, 88, 23 };
	cout << "Original array:" << endl;
	for (int i = 0; i < 10; i++)
	{
		cout << a[i] << " ";
	}
	cout << endl;
	BubbleSort(a, 10);
	cout << "Array after bubble sort:" << endl;
	for (int i = 0; i < 10; i++)
	{
		cout << a[i] << " ";
	}
	return 0;
}

 
 
  
  

time complexity

The cost comes from the outer and inner loops, plus the comparisons and swaps of elements.
The best case is that the array is sorted from the start, so no elements ever need to be swapped. The outer loop still runs n times, and the inner loop performs (n-1), (n-2), ..., 1 comparisons; summing this arithmetic series gives n(n-1)/2, so for this implementation even the best-case time complexity is O(n^2).
The worst case is that the elements start in reverse order, so every comparison is followed by a swap; the cost is then 3n(n-1)/2 (each swap adds three assignment steps on top of the best-case comparisons). The worst-case time complexity is therefore also O(n^2).

space complexity

Bubble sort's only auxiliary space is a single temporary variable, which does not grow with the size of the input, so its space complexity is O(1).

selection sort

void Efferve()
{
	int m[5] = { 12, 8, 6, 9, 10 };
	// Selection sort (descending): each pass i places the largest of the
	// remaining elements at position i.
	int max;
	for (int i = 0; i < 4; i++)
	{
		for (int j = i; j < 4; j++)
		{
			if (m[i] < m[j + 1])   // a later element is larger: swap it to the front
			{
				max = m[j + 1];
				m[j + 1] = m[i];
				m[i] = max;
			}
		}
	}
}

From the code above, it can be seen that selection sort uses two nested loops:

 for (int i = 0; i < 4; i++)
 {
     for (int j = i; j < 4; j++)
     {
     }
 }

When i = 0 the inner loop runs 4 times, and in general each outer iteration i runs the inner loop 4-i times, so the total count for 5 numbers is T = 4 + 3 + 2 + 1 = [5*(5-1)]/2 = 10.
When N numbers are sorted, T = [N*(N-1)]/2 comparisons are performed; keeping only the highest-order term N^2 according to the derivation rules, the time complexity of selection sort is O(N^2).

Because the sort works in place, using only the array itself plus a constant number of auxiliary variables, the space complexity is O(1).

quick sort

The basic idea

In the array, pick one number as the pivot (base number), move every number larger than the pivot to its right, and move every number smaller than the pivot to its left.

For example, for the array "6 1 2 7 9 3 4 5 10 8",
use 6 as the pivot, put the numbers smaller than 6 on its left and the numbers larger than 6 on its right, to
get: 3 1 2 5 4 6 9 7 10 8.
All numbers to the left of 6 are smaller than 6 and all to the right are larger, so 6 is now in its final place.

The specific steps are:

  1. First choose a pivot (usually the first element). Scan from the right toward the left for the first number smaller than the pivot, then scan from the left toward the right for the first number larger than the pivot, and swap the two; repeat until the two scans meet, then swap the pivot into the meeting position.
  2. Using the pivot's final position as a dividing line, split the array into left and right parts, and apply step 1 recursively to each part.
  3. When a part can no longer be split (it has at most one element), the recursion ends and the array is sorted.

Code

void QuickSort(int *n, int left, int right)
{
	if (left > right)
		return;
	int temp = n[left];                    // temp holds the pivot
	int i = left, j = right;
	int t;
	while (i != j)
	{
		// the scan must start from the right
		while (i < j && n[j] >= temp)
			j--;
		while (i < j && n[i] <= temp)
			i++;
		if (i < j)
		{
			t = n[i];
			n[i] = n[j];
			n[j] = t;
		}
	}
	n[left] = n[i];
	n[i] = temp;
	QuickSort(n, left, i - 1);             // sort the left part
	QuickSort(n, i + 1, right);            // sort the right part
}

 
 
  
  

Complexity Calculation

Space complexity: O(log n),
due mainly to the stack space used by recursion.
In the best case the recursion tree has depth log2(n),
so the space complexity is O(log n).
In the worst case, n-1 recursive calls are needed and the sort degenerates into something like bubble sort;
the space complexity is then O(n).
The average space complexity is O(log n).

Time complexity: O(n log n)
Since quicksort is recursive, computing its time complexity uses the recurrence for recursive algorithms:
T[n] = aT[n/b] + f(n)

Time complexity in the best case
The best case for quicksort is that every chosen pivot splits the array exactly in half.

The recurrence is then T[n] = 2T[n/2] + f(n), where T[n/2] is the cost of each half and f(n) = n is the cost of partitioning the array.
First expansion:
T[n] = 2T[n/2] + n
Second expansion (substitute n/2 for n):
T[n] = 2^2 T[n/(2^2)] + 2n
Third expansion (substitute n/(2^2)):
T[n] = 2^3 T[n/(2^3)] + 3n
...
mth expansion (substitute n/(2^(m-1))):
T[n] = 2^m T[1] + mn

The expansion stops when the pieces can no longer be halved, i.e. when the recursion bottoms out at T[1], which is a constant.
From T[n/(2^m)] = T[1] we get n = 2^m, hence m = log2(n).
Then T[n] = 2^m T[1] + mn with m = log n gives:
T[n] = 2^(log n) T[1] + n log n = n + n log n
Since n log n >= n when n >= 2 (i.e. log n >= 1), the n log n term dominates.

In summary: the best-case time complexity of quicksort is O(n log n).

Time complexity in the worst case
The worst case is that the pivot chosen each time is the smallest (or largest) element of its subarray. The sort then behaves like bubble sort: each pass fixes only one element into place. The cost is
T[n] = (n-1) + (n-2) + ... + 1 = n(n-1)/2, which is on the order of n^2.
In summary: the worst-case time complexity of quicksort is O(n^2).

Average time complexity
The average time complexity of quicksort is also: O(nlogn)


Origin blog.csdn.net/Wu_0526/article/details/107959363