Common sorting algorithms (implemented in C)

Introduction to sorting

  Sorting is the operation of arranging a sequence of records in ascending or descending order according to one or more of their key values.
  Suppose the sequence to be sorted contains multiple records with equal keys. If the relative order of these records is unchanged after sorting, the sorting algorithm is said to be stable. For example, if A = B and A precedes B in the original sequence, a stable sort leaves A in front of B.
  A sort in which all the data elements reside in memory is called an internal sort.
  When there are too many data elements to hold in memory at once, so that data must be moved back and forth between internal and external storage as the sort proceeds, the sort is called an external sort.
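To make the stability definition concrete, here is a minimal sketch (the record type and function names are illustrative, not from the original post): each record carries a tag remembering its original position, and an adjacent-swap pass that swaps only on a strict `>` comparison never reorders equal keys.

```c
#include <assert.h>

// Illustrative record: sorted by key; tag remembers the original position.
typedef struct { int key; int tag; } Rec;

// Bubble-style passes that swap only on a strict '>' comparison:
// records with equal keys can never pass each other, so the sort is stable.
void BubbleRecs(Rec* a, int n)
{
	for (int i = 0; i < n; i++)
		for (int j = 1; j < n - i; j++)
			if (a[j - 1].key > a[j].key)
			{
				Rec t = a[j - 1];
				a[j - 1] = a[j];
				a[j] = t;
			}
}
```

Sorting the records (2,0), (1,1), (2,2), (1,3) by key gives keys 1, 1, 2, 2 with tags 1, 3, 0, 2: within each group of equal keys the original tag order survives.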

insertion sort

  Basic idea: insert the records to be sorted, one by one according to their key values, into an already sorted sequence, until all records have been inserted and a new ordered sequence is obtained.

direct insertion sort

  1. Basic introduction:
      In the array to be sorted, assume the first n-1 elements are already in order; compare the nth element against them one by one and put it into its proper position.
      The closer a group of elements is to being ordered, the more time-efficient direct insertion sort becomes.
  2. code:
void InsertSort(int* a, int n)
{
	for (int i = 0; i < n - 1; i++)
	{
		// [0, end] is ordered; insert a[end + 1] into it
		int end = i;
		int tmp = a[end + 1];
		while (end >= 0)
		{
			if (tmp < a[end])
			{
				a[end + 1] = a[end];
				end--;
			}
			else
			{
				break;
			}
		}
		a[end + 1] = tmp;
	}
}
  1. Time complexity and space complexity:
      The average time complexity of insertion sort is O(n^2), and the space complexity is constant, O(1). The exact running time also depends on how ordered the array already is.
      The best case is an array that is already sorted: each number is compared only with the one before it, for N-1 comparisons in total, giving O(N). The worst case is a reverse-ordered array, which maximizes the number of comparisons, giving O(n^2).
  2. Animation demo: (image omitted)
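The N-1 best case and the O(n^2) worst case can be checked with an instrumented copy of the InsertSort above; the comparison counter is an illustrative addition, not part of the original code.

```c
#include <assert.h>

// Same algorithm as InsertSort above, but returns the number of
// key comparisons performed (counter added for illustration only).
int InsertSortCount(int* a, int n)
{
	int cmps = 0;
	for (int i = 0; i < n - 1; i++)
	{
		int end = i;
		int tmp = a[end + 1];
		while (end >= 0)
		{
			cmps++;
			if (tmp < a[end])
			{
				a[end + 1] = a[end];
				end--;
			}
			else
			{
				break;
			}
		}
		a[end + 1] = tmp;
	}
	return cmps;
}
```

A sorted array of 5 elements costs exactly 4 comparisons (N-1); the reversed array costs 10 (N*(N-1)/2).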

Hill sort

  1. Basic introduction:
      Hill sort (Shell sort) is an insertion sort, also known as diminishing-increment sort. It adds grouping on top of direct insertion sort: first pre-sort by groups, then finish with an ordinary direct insertion sort.
      First choose a number gap (smaller than the total number of elements to be sorted) as the initial increment, treat elements that are gap apart as one group, and direct-insertion sort each group. After each round, reduce the increment and repeat, until gap = 1, which completes the final sort.
  2. code:
void ShellSort(int* a, int n)
{
	int gap = n;
	while (gap > 1)
	{
		gap = gap / 3 + 1;

		for (int i = 0; i < n - gap; i++)
		{
			int end = i;
			int tmp = a[end + gap];
			while (end >= 0)
			{
				if (tmp < a[end])
				{
					a[end + gap] = a[end];
					end -= gap;
				}
				else
				{
					break;
				}
			}
			a[end + gap] = tmp;
		}
	}
}
  1. Time complexity and space complexity:
      The time complexity of Hill sort is roughly O(n^1.3) to O(n^2), and the space complexity is constant, O(1). Hill sort is not as fast as O(n*logn) algorithms such as quick sort, so it performs well on medium-sized inputs but is not the best choice for very large data sets. In short, it is much faster than typical O(n^2) algorithms.
      During execution only a few temporary variables are needed, so the space complexity is constant, O(1).
  2. Graphic: (image omitted)
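The update rule gap = gap/3 + 1 used above can be checked in isolation; the helper below is a sketch added for illustration. The "+1" guarantees the sequence always ends at gap = 1, i.e. the final pass is a plain direct insertion sort.

```c
#include <assert.h>

// Collect the successive gap values produced by gap = gap / 3 + 1,
// starting from gap = n, into out[]; returns how many values were produced.
int CollectGaps(int n, int* out)
{
	int count = 0;
	int gap = n;
	while (gap > 1)
	{
		gap = gap / 3 + 1;
		out[count++] = gap;
	}
	return count;
}
```

For n = 10 the gaps are 4, 2, 1; for n = 100 they are 34, 12, 5, 2, 1.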

selection sort

  Basic idea: each time, select the smallest (or largest) element from the data elements still to be sorted and place it at the start of the sequence, until all elements have been placed.

direct selection sort

  1. Basic introduction:
      Direct selection sort is another simple sorting method. Its basic idea: on the first pass, select the minimum of R[0]..R[n-1] and exchange it with R[0]; on the second pass, select the minimum of R[1]..R[n-1] and exchange it with R[1]; ...; on the ith pass, select the minimum of R[i-1]..R[n-1] and exchange it with R[i-1]; ...; on the (n-1)th pass, select the minimum of R[n-2]..R[n-1] and exchange it with R[n-2]. After n-1 passes the sequence is in ascending order by sort key.
      A simple optimization: since each pass traverses the range anyway, find both the maximum and the minimum and place them at the end and the beginning respectively, which improves efficiency somewhat.
  2. code:
void SelectSort(int* a, int n)
{
	int begin = 0, end = n - 1;

	while (begin < end)
	{
		int minI = begin, maxI = begin;
		for (int i = begin + 1; i <= end; i++)
		{
			if (a[i] < a[minI])
				minI = i;
			if (a[i] > a[maxI])
				maxI = i;
		}
		int tmp = a[begin];
		a[begin] = a[minI];
		a[minI] = tmp;
		// if the maximum was at begin, it has just been moved to minI
		if (maxI == begin)
			maxI = minI;
		tmp = a[end];
		a[end] = a[maxI];
		a[maxI] = tmp;

		begin++;
		end--;
	}
}
  1. Time complexity and space complexity:
      The time complexity of direct selection sort is O(n^2). Because it performs very few record moves, it is often faster in practice than direct insertion sort when each record occupies many bytes.
      For space, direct selection sort needs only one temporary unit for exchanging records, so the space complexity is O(1).
      Since direct selection sort exchanges non-adjacent elements, it is an unstable sorting method.
  2. Graphic: (image omitted)

heap sort

  1. Basic introduction:
      Heap sort is a sorting algorithm designed around the heap data structure; it is a kind of selection sort that uses the heap to select data. For ascending order, build a max-heap; for descending order, build a min-heap.
  2. code:
void Swap(int* p1, int* p2)
{
	int tmp = *p1;
	*p1 = *p2;
	*p2 = tmp;
}

void AdjustDown(int* a, int n, int parent)
{
	int child = parent * 2 + 1;
	while (child < n)
	{
		// make child point to the larger of the two children
		if (child + 1 < n && a[child + 1] > a[child])
		{
			++child;
		}
		// 1. child greater than parent: swap and continue adjusting downward
		// 2. child not greater than parent: adjustment ends
		if (a[child] > a[parent])
		{
			Swap(&a[child], &a[parent]);
			parent = child;
			child = parent * 2 + 1;
		}
		else
		{
			break;
		}
	}
}

void HeapSort(int* a, int n)
{
	// build the heap by adjusting downward -- O(N)
	// ascending order: build a max-heap
	for (int i = (n - 1 - 1) / 2; i >= 0; --i)
	{
		AdjustDown(a, n, i);
	}
	// O(N*logN)
	int end = n - 1;
	while (end > 0)
	{
		Swap(&a[0], &a[end]);
		AdjustDown(a, end, 0);
		--end;
	}
}
  1. Time complexity and space complexity:
      In the worst case each sift-down exchanges nodes all the way down, looping h - 1 times, where h is the height of the tree and h = log(N+1) (N is the total number of nodes). So one adjustment costs O(logN), and the N-1 adjustments of the sorting phase cost O(N*logN) in total. Building the heap beforehand costs O(N), so heap sort as a whole is O(N*logN).
      For space, heap sort needs only a few variables to record subscript positions, so the space complexity is O(1).
  2. Graphic: (image omitted)

swap sort

  The so-called exchange sort swaps the positions of pairs of records in the sequence according to the comparison of their keys: larger records move toward one end of the sequence while smaller records move toward the front.

Bubble Sort

  1. Basic introduction:
      Bubble sort is vividly named: elements "bubble up" one by one, as in bubbling water. Starting from the first element, compare adjacent pairs and move the larger (or, for descending order, smaller) element backward.
      If a full pass performs no exchange, the sequence is already ordered and the loop can exit early. This improves efficiency a little, but bubble sort is still slower than the other sorting algorithms here.
  2. code:
void BubbleSort(int* a, int n)
{
	for (int i = 0; i < n; i++)
	{
		int exchange = 0;
		for (int j = 1; j < n - i; j++)
		{
			if (a[j - 1] > a[j])
			{
				int tmp = a[j - 1];
				a[j - 1] = a[j];
				a[j] = tmp;
				exchange = 1;
			}
		}
		// no exchange in this pass: already sorted
		if (exchange == 0)
			break;
	}
}
  1. Time complexity and space complexity:
      The average and worst-case time complexity of bubble sort is O(n^2); with the early-exit flag, the best case (an already sorted array) is O(n). Only a constant number of temporary variables are needed, so the space complexity is O(1). Because equal adjacent elements are never exchanged, bubble sort is stable.
  2. Graphic: (image omitted)

quick sort

  Quick sort is an exchange sort with a binary-tree structure, proposed by Hoare in 1962. Its basic idea: take some element of the sequence as the reference value, the key (often the first or last element), and partition the sequence into left and right subsequences, so that every element of the left subsequence is no greater than the reference value and every element of the right subsequence is no less than it; then repeat the process on the left and right subsequences until every element is in its final position.

recursive implementation

Hoare version
  1. Basic introduction:
      Select a key value and define two indices, L and R; L moves right and R moves left. (Note: if the key is the first element, R must move first; if the key is the last element, L must move first.)
      R moves first and stops when it meets a value smaller than the key; then L moves and stops when it meets a value larger than the key; the values at L and R are exchanged, and the steps repeat. When L and R meet, the value there must be no greater than the key; exchange it with the key, and the key is now in its final position, dividing the sequence into left and right subsequences.
      Why must the value where L and R meet be smaller than the key? R stops at a value smaller than the key and waits for L. L's move has two possible outcomes: either it meets a value larger than the key, the two are exchanged, and R continues; or it meets nothing larger before reaching R, in which case the meeting position holds the small value R had stopped on. So the meeting value is smaller than the key precisely because R moves first, which is a very clever detail. If L moved first instead, the meeting value would be no smaller than the key, which is why in that case the key should be the last element.
  2. code:
void QuickSortHoare(int* a, int begin, int end)
{
	if (begin >= end)
		return;

	int left = begin, right = end;
	int keyI = left;
	while (left < right)
	{
		// right side moves first, looking for a value smaller than the key
		while (left < right && a[right] >= a[keyI])
			right--;

		// then the left side moves, looking for a value larger than the key
		while (left < right && a[left] <= a[keyI])
			left++;

		int tmp = a[left];
		a[left] = a[right];
		a[right] = tmp;
	}

	int tmp = a[left];
	a[left] = a[keyI];
	a[keyI] = tmp;
	keyI = left;
	// left subsequence [begin, keyI), right subsequence (keyI, end]
	QuickSortHoare(a, begin, keyI - 1);
	QuickSortHoare(a, keyI + 1, end);
}
  1. Time complexity and space complexity:
      Each partition pass makes at most O(n) comparisons, so what matters is the number of partition levels. If the input is already ordered and the first element is always chosen as the key, every partition is maximally unbalanced, the process resembles bubble sort, and the time complexity reaches its worst case, O(n^2).
      If the key lands near the middle each time, the partitioning resembles a dichotomy: there are O(logn) levels, and the overall time complexity is O(n*logn).
      Because recursion is used, information must be saved on the stack during execution; the space required is proportional to the recursion depth, i.e. the number of partition levels. The best case is O(logn) and the worst case is O(n).
  2. Graphic: (image omitted)
pit digging
  1. Basic introduction:
      The pit-digging method is based on Hoare's version. By "digging a pit" it sidesteps the argument about why the value where L and R meet must be smaller than the key, which not everyone finds easy to follow. Like Hoare's version, whether R or L moves first still depends on which element is chosen as the key.
      The pit-digging method first takes out the key value, leaving a pit. R then moves to find a value smaller than the key and fills that value into the pit, which moves the pit to R's position; then L moves to find a value larger than the key and fills it into the pit at R's position; then R moves again, and so on, until R and L meet. Finally the key value is filled into the last pit.
  2. code:
void QuickSortPit(int* a, int begin, int end)
{
	// only one element, or the interval does not exist
	if (begin >= end)
		return;

	int left = begin;
	int right = end;
	int key = a[left];
	int pit = left;
	while (left < right)
	{
		// right side moves first, looking for a value smaller than key
		while (left < right && a[right] >= key)
		{
			right--;
		}
		a[pit] = a[right];
		pit = right;

		// then the left side moves, looking for a value larger than key
		while (left < right && a[left] <= key)
		{
			left++;
		}
		a[pit] = a[left];
		pit = left;
	}

	a[pit] = key;
	QuickSortPit(a, begin, pit - 1);
	QuickSortPit(a, pit + 1, end);
}
  1. Time complexity and space complexity:
      The core idea is unchanged, only the presentation differs, so the complexity is the same as the Hoare version.
  2. Graphic: (image omitted)
Front and back pointer version
  1. Basic introduction:
      This is a variant of Hoare's version. Again take a key value, then let prev and cur point to the first and second elements respectively. cur moves backward through the array: when it meets a value smaller than the key, that value is exchanged with the value at ++prev; when the value at cur is larger than the key, cur simply keeps going. As a result, prev either stays right behind cur or stays on a value greater than the key. After cur reaches the end, the values at prev and the key position are exchanged, which completes the division into left and right subsequences.
      The process is less intuitive than Hoare's version, but the code is easy to implement.
  2. code:
void QuickSortPoint(int* a, int begin, int end)
{
	if (begin >= end)
		return;

	int keyI = begin;
	int prev = begin;
	int cur = begin + 1;
	while (cur <= end)
	{
		// on finding a value smaller than the key, swap it with position ++prev:
		// small values move forward, large values move backward
		if (a[cur] < a[keyI] && ++prev != cur)
		{
			int tmp = a[prev];
			a[prev] = a[cur];
			a[cur] = tmp;
		}
		cur++;
	}

	int tmp = a[prev];
	a[prev] = a[keyI];
	a[keyI] = tmp;

	keyI = prev;

	QuickSortPoint(a, begin, keyI - 1);
	QuickSortPoint(a, keyI + 1, end);
}
  1. Time complexity and space complexity:
      The time complexity and space complexity are still the same as the Hoare version.
  2. Graphic: (image omitted)

Non-recursive implementation

  First, note that each recursive call opens up a stack frame, and stack frames have the property that the space opened first is destroyed last. This also brings a problem: if the recursion goes too deep, the stack overflows. Quick sort, however, relies on exactly this last-in, first-out behavior to order its work, so a non-recursive quick sort must reproduce it. The stack data structure has precisely this property, which is why the non-recursive implementations use an explicit stack.
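The drivers in this section call StackInit, StackPush, StackTop, StackPop, StackEmpty and StackDestroy without defining them; the original post assumes a stack implemented elsewhere. A minimal dynamic-array sketch of that interface (an assumption, not the author's original stack) might look like this:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

// Minimal dynamic-array stack of ints, matching the interface
// used by the non-recursive quick sort drivers.
typedef struct Stack
{
	int* data;
	int top;       // number of elements currently stored
	int capacity;
} Stack;

void StackInit(Stack* st)
{
	st->data = NULL;
	st->top = 0;
	st->capacity = 0;
}

void StackPush(Stack* st, int x)
{
	if (st->top == st->capacity)
	{
		// grow the array: start at 4 slots, then double
		int newCap = st->capacity == 0 ? 4 : st->capacity * 2;
		int* p = (int*)realloc(st->data, sizeof(int) * newCap);
		if (p == NULL)
		{
			perror("realloc fail");
			exit(-1);
		}
		st->data = p;
		st->capacity = newCap;
	}
	st->data[st->top++] = x;
}

int StackTop(Stack* st)
{
	assert(st->top > 0);
	return st->data[st->top - 1];
}

void StackPop(Stack* st)
{
	assert(st->top > 0);
	st->top--;
}

int StackEmpty(Stack* st)
{
	return st->top == 0;
}

void StackDestroy(Stack* st)
{
	free(st->data);
	st->data = NULL;
	st->top = st->capacity = 0;
}
```

The drivers always push intervals as a (begin, end) pair, so pops always come in matching pairs as well.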

Hoare version
int Hoare(int* a, int begin, int end)
{
	int left = begin, right = end;
	int keyI = left;
	while (left < right)
	{
		// right side moves first, looking for a value smaller than the key
		while (left < right && a[right] >= a[keyI])
			right--;

		// then the left side moves, looking for a value larger than the key
		while (left < right && a[left] <= a[keyI])
			left++;

		int tmp = a[left];
		a[left] = a[right];
		a[right] = tmp;
	}

	int tmp = a[left];
	a[left] = a[keyI];
	a[keyI] = tmp;
	keyI = left;

	return keyI;
}

void QuickSortNonR(int* a, int begin, int end)
{
	// create and initialize the stack, then push begin and end
	Stack st;
	StackInit(&st);
	StackPush(&st, begin);
	StackPush(&st, end);
	// loop while the stack is not empty
	while (!StackEmpty(&st))
	{
		int right = StackTop(&st);
		StackPop(&st);
		if (StackEmpty(&st))
			break;
		int left = StackTop(&st);
		StackPop(&st);

		int keyI = Hoare(a, left, right);

		if (keyI + 1 < right)
		{
			StackPush(&st, keyI + 1);
			StackPush(&st, right);
		}

		if (left < keyI - 1)
		{
			StackPush(&st, left);
			StackPush(&st, keyI - 1);
		}
	}

	StackDestroy(&st);
}
pit digging
int Pit(int* a, int begin, int end)
{
	int left = begin;
	int right = end;
	int key = a[left];
	int pit = left;
	while (left < right)
	{
		// right side moves first, looking for a value smaller than key
		while (left < right && a[right] >= key)
		{
			right--;
		}
		a[pit] = a[right];
		pit = right;

		// then the left side moves, looking for a value larger than key
		while (left < right && a[left] <= key)
		{
			left++;
		}
		a[pit] = a[left];
		pit = left;
	}

	a[pit] = key;

	return pit;
}
void QuickSortNonR(int* a, int begin, int end)
{
	// create and initialize the stack, then push begin and end
	Stack st;
	StackInit(&st);
	StackPush(&st, begin);
	StackPush(&st, end);
	// loop while the stack is not empty
	while (!StackEmpty(&st))
	{
		int right = StackTop(&st);
		StackPop(&st);
		if (StackEmpty(&st))
			break;
		int left = StackTop(&st);
		StackPop(&st);

		int keyI = Pit(a, left, right);

		if (keyI + 1 < right)
		{
			StackPush(&st, keyI + 1);
			StackPush(&st, right);
		}

		if (left < keyI - 1)
		{
			StackPush(&st, left);
			StackPush(&st, keyI - 1);
		}
	}

	StackDestroy(&st);
}
Front and back pointer version
int Point(int* a, int begin, int end)
{
	int keyI = begin;
	int prev = begin;
	int cur = begin + 1;
	while (cur <= end)
	{
		// on finding a value smaller than the key, swap it with position ++prev:
		// small values move forward, large values move backward
		if (a[cur] < a[keyI] && ++prev != cur)
		{
			int tmp = a[prev];
			a[prev] = a[cur];
			a[cur] = tmp;
		}
		cur++;
	}

	int tmp = a[prev];
	a[prev] = a[keyI];
	a[keyI] = tmp;

	keyI = prev;

	return keyI;
}
void QuickSortNonR(int* a, int begin, int end)
{
	// create and initialize the stack, then push begin and end
	Stack st;
	StackInit(&st);
	StackPush(&st, begin);
	StackPush(&st, end);
	// loop while the stack is not empty
	while (!StackEmpty(&st))
	{
		int right = StackTop(&st);
		StackPop(&st);
		if (StackEmpty(&st))
			break;
		int left = StackTop(&st);
		StackPop(&st);

		int keyI = Point(a, left, right);

		if (keyI + 1 < right)
		{
			StackPush(&st, keyI + 1);
			StackPush(&st, right);
		}

		if (left < keyI - 1)
		{
			StackPush(&st, left);
			StackPush(&st, keyI - 1);
		}
	}

	StackDestroy(&st);
}

Quick sort optimization

Median of three

  As mentioned earlier, if the key happens to land at the far end of the interval every time, quick sort degrades to O(n^2). The probability of this is small, but it can still happen, and the median-of-three method avoids it. Since the key value is what determines how balanced the partitions are, median-of-three takes the first, middle, and last values of the interval, compares them, and uses the middle one of the three as the key (exchanging it into the key position), which guarantees the key is never the extreme value of the three.

int GetMidIndex(int* a, int begin, int end)
{
	int mid = (begin + end) / 2;
	if (a[begin] < a[mid])
	{
		if (a[mid] < a[end])
		{
			return mid;
		}
		else if (a[begin] > a[end])
		{
			return begin;
		}
		else
		{
			return end;
		}
	}
	else // a[begin] >= a[mid]
	{
		if (a[mid] > a[end])
		{
			return mid;
		}
		else if (a[begin] < a[end])
		{
			return begin;
		}
		else
		{
			return end;
		}
	}
}
Small-interval optimization

  Each level of recursion roughly doubles the number of calls: 1, 2, 4, 8, 16... From this sequence we can see that cutting off just the bottom level of recursion removes about half of all recursive calls. So we can switch to another sort when only a few elements remain, which effectively avoids the deepest recursion.

void QuickSort(int* a, int begin, int end)
{
	if (begin >= end)
	{
		return;
	}

	if ((end - begin + 1) < 15)
	{
		// replace small intervals with direct insertion sort
		// to reduce the number of recursive calls
		InsertSort(a + begin, end - begin + 1);
	}
	else
	{
		// PartSort3 stands for any of the partition functions above
		// (Hoare, pit-digging, or front-and-back pointers)
		int keyi = PartSort3(a, begin, end);

		// [begin, keyi-1]  keyi  [keyi+1, end]
		QuickSort(a, begin, keyi - 1);
		QuickSort(a, keyi + 1, end);
	}
}

merge sort

recursive implementation

  1. Basic introduction:
      Merge sort is an efficient sorting algorithm based on the merge operation and the divide-and-conquer idea: merge already-sorted subsequences to obtain a fully sorted sequence. That is, first make each subsequence ordered, then merge the ordered subsequences into one. Merging two ordered lists into one is called a two-way merge. Merge sort first decomposes, then merges.
    (image omitted)

  2. code:

void _MergeSort(int* a, int begin, int end, int* tmp)
{
	if (begin >= end)
		return;

	int mid = (begin + end) / 2;
	// recurse so the sub-intervals [begin, mid] and [mid+1, end] become ordered
	_MergeSort(a, begin, mid, tmp);
	_MergeSort(a, mid + 1, end, tmp);

	// merge [begin, mid] and [mid+1, end]
	int begin1 = begin, end1 = mid;
	int begin2 = mid + 1, end2 = end;
	int i = begin;
	while (begin1 <= end1 && begin2 <= end2)
	{
		if (a[begin1] <= a[begin2])
		{
			tmp[i++] = a[begin1++];
		}
		else
		{
			tmp[i++] = a[begin2++];
		}
	}

	while (begin1 <= end1)
	{
		tmp[i++] = a[begin1++];
	}

	while (begin2 <= end2)
	{
		tmp[i++] = a[begin2++];
	}

	memcpy(a + begin, tmp + begin, sizeof(int) * (end - begin + 1));
}

// requires <stdlib.h> (malloc/free/exit), <stdio.h> (perror), <string.h> (memcpy)
void MergeSort(int* a, int n)
{
	int* tmp = (int*)malloc(sizeof(int) * n);
	if (tmp == NULL)
	{
		perror("malloc fail");
		exit(-1);
	}

	_MergeSort(a, 0, n - 1, tmp);

	free(tmp);
	tmp = NULL;
}

  1. Time complexity and space complexity:
      Merge sort has a structure similar to a binary tree of height O(logn), and each level does O(n) work, so the time complexity is O(n*logn).
      Merge sort allocates an extra array of n elements, plus O(logn) of recursion stack; O(n+logn) simplifies to a space complexity of O(n).
  2. Graphic: (image omitted)

Non-recursive implementation

  1. Basic introduction:
      The non-recursive version of merge sort does not need an explicit stack; using one would actually be quite troublesome. We only need to control how many elements take part in each merge, doubling it each round, until the whole sequence becomes ordered.
      We do need to handle some special cases, because merging is done pairwise: the group size grows as 1, 2, 4, 8, 16... If the number of elements is not an exact multiple of this, three situations arise:
      (1) In the last group, the right interval is short of elements; its boundary must be corrected before merging.
      (2) In the last group, the right interval has no elements at all (the elements end exactly with the left interval); this group needs no merge, since the left interval is already ordered.
      (3) In the last group, the left interval itself is short of elements; this group needs no merge either.

  2. code:

void MergeSortNonR(int* a, int n)
{
	int* tmp = (int*)malloc(sizeof(int) * n);
	if (tmp == NULL)
	{
		perror("malloc fail");
		exit(-1);
	}

	// number of elements per group to merge; start from 1, since a
	// single element is considered ordered and can be merged directly
	int rangeN = 1;
	while (rangeN < n)
	{
		for (int i = 0; i < n; i += 2 * rangeN)
		{
			// merge [begin1, end1] and [begin2, end2]
			int begin1 = i, end1 = i + rangeN - 1;
			int begin2 = i + rangeN, end2 = i + 2 * rangeN - 1;
			int j = i;

			// end1 / begin2 / end2 may go out of bounds: correct the intervals
			if (end1 >= n)
			{
				end1 = n - 1;
				// right interval does not exist
				begin2 = n;
				end2 = n - 1;
			}
			else if (begin2 >= n)
			{
				// right interval does not exist
				begin2 = n;
				end2 = n - 1;
			}
			else if (end2 >= n)
			{
				end2 = n - 1;
			}

			while (begin1 <= end1 && begin2 <= end2)
			{
				if (a[begin1] <= a[begin2])
				{
					tmp[j++] = a[begin1++];
				}
				else
				{
					tmp[j++] = a[begin2++];
				}
			}

			while (begin1 <= end1)
			{
				tmp[j++] = a[begin1++];
			}

			while (begin2 <= end2)
			{
				tmp[j++] = a[begin2++];
			}
		}

		// copy the whole pass back (per-group copying would also work)
		memcpy(a, tmp, sizeof(int) * n);

		rangeN *= 2;
	}

	free(tmp);
	tmp = NULL;
}

counting sort

  1. Basic introduction:
      Counting sort, also known as pigeonhole sort, is an application of hashing by direct addressing and is a non-comparison sort. It first counts how many times each value occurs, then writes the values back into the original array in order according to those counts.
      Counting sort suits sequences whose values fall in a compact range; it is very efficient there, but its applicable range and scenarios are limited.
  2. code:
// requires <stdlib.h> (calloc/free/exit), <stdio.h> (perror)
void CountSort(int* a, int n)
{
	int max = a[0], min = a[0];
	for (int i = 1; i < n; i++)
	{
		if (a[i] < min)
			min = a[i];

		if (a[i] > max)
			max = a[i];
	}

	int range = max - min + 1;
	int* countA = (int*)calloc(range, sizeof(int));
	if (NULL == countA)
	{
		perror("calloc fail");
		exit(-1);
	}
	// count occurrences
	for (int i = 0; i < n; i++)
		countA[a[i] - min]++;

	// write back in sorted order
	int k = 0;
	for (int j = 0; j < range; j++)
	{
		while (countA[j]--)
			a[k++] = j + min;
	}

	free(countA);
}
}
  1. Time complexity and space complexity:
      Both are determined by the span of the values: the time complexity is O(MAX(n, range)) and the space complexity is O(range), where range = max - min + 1.
  2. Graphic: (image omitted)
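Because the counts are indexed by a[i] - min, the CountSort above also handles negative values. A self-contained usage sketch (the function is repeated verbatim so the example compiles on its own):

```c
#include <stdio.h>
#include <stdlib.h>

// identical to the CountSort above, repeated here for self-containment
void CountSort(int* a, int n)
{
	int max = a[0], min = a[0];
	for (int i = 1; i < n; i++)
	{
		if (a[i] < min)
			min = a[i];
		if (a[i] > max)
			max = a[i];
	}

	int range = max - min + 1;
	int* countA = (int*)calloc(range, sizeof(int));
	if (NULL == countA)
	{
		perror("calloc fail");
		exit(-1);
	}
	// count occurrences, offset by min so negatives index from 0
	for (int i = 0; i < n; i++)
		countA[a[i] - min]++;

	// write back in sorted order
	int k = 0;
	for (int j = 0; j < range; j++)
	{
		while (countA[j]--)
			a[k++] = j + min;
	}

	free(countA);
}
```

Sorting { 3, -1, 2, -1, 0 } with this function yields { -1, -1, 0, 2, 3 }.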

Complexity and Stability Analysis of Sorting Algorithms

Sorting algorithm        Average case      Best case    Worst case   Auxiliary space   Stability
Bubble sort              O(n^2)            O(n)         O(n^2)       O(1)              stable
Simple selection sort    O(n^2)            O(n^2)       O(n^2)       O(1)              unstable
Direct insertion sort    O(n^2)            O(n)         O(n^2)       O(1)              stable
Hill sort                O(nlogn)~O(n^2)   O(n^1.3)     O(n^2)       O(1)              unstable
Heap sort                O(nlogn)          O(nlogn)     O(nlogn)     O(1)              unstable
Merge sort               O(nlogn)          O(nlogn)     O(nlogn)     O(n)              stable
Quick sort               O(nlogn)          O(nlogn)     O(n^2)       O(logn)~O(n)      unstable

The operating efficiency of different algorithms

  Very large inputs may overflow the call stack with the recursive versions, so the non-recursive quick sort and merge sort are used for the test.

// requires <stdio.h>, <stdlib.h> (malloc/free/rand/srand), <time.h> (time/clock)
void TestOP()
{
	srand(time(0));
	const int N = 50000;
	int* a1 = (int*)malloc(sizeof(int) * N);
	int* a2 = (int*)malloc(sizeof(int) * N);
	int* a3 = (int*)malloc(sizeof(int) * N);
	int* a4 = (int*)malloc(sizeof(int) * N);
	int* a5 = (int*)malloc(sizeof(int) * N);
	int* a6 = (int*)malloc(sizeof(int) * N);
	int* a7 = (int*)malloc(sizeof(int) * N);
	int* a8 = (int*)malloc(sizeof(int) * N);

	// fill all eight arrays with the same random data
	for (int i = 0; i < N; ++i)
	{
		a1[i] = rand();
		a2[i] = a1[i];
		a3[i] = a1[i];
		a4[i] = a1[i];
		a5[i] = a1[i];
		a6[i] = a1[i];
		a7[i] = a1[i];
		a8[i] = a1[i];
	}

	int begin1 = clock();
	InsertSort(a1, N);
	int end1 = clock();

	int begin2 = clock();
	ShellSort(a2, N);
	int end2 = clock();

	int begin3 = clock();
	SelectSort(a3, N);
	int end3 = clock();

	int begin4 = clock();
	HeapSort(a4, N);
	int end4 = clock();

	int begin5 = clock();
	BubbleSort(a5, N);
	int end5 = clock();

	int begin6 = clock();
	QuickSortNonR(a6, 0, N - 1);
	int end6 = clock();

	int begin7 = clock();
	MergeSortNonR(a7, N);
	int end7 = clock();

	int begin8 = clock();
	CountSort(a8, N);
	int end8 = clock();

	printf("InsertSort:%d\n", end1 - begin1);
	printf("ShellSort:%d\n", end2 - begin2);
	printf("SelectSort:%d\n", end3 - begin3);
	printf("HeapSort:%d\n", end4 - begin4);
	printf("BubbleSort:%d\n", end5 - begin5);
	printf("QuickSort:%d\n", end6 - begin6);
	printf("MergeSort:%d\n", end7 - begin7);
	printf("CountSort:%d\n", end8 - begin8);

	free(a1);
	free(a2);
	free(a3);
	free(a4);
	free(a5);
	free(a6);
	free(a7);
	free(a8);
}

(image omitted)

Origin blog.csdn.net/qq_47658735/article/details/129780381