Data Structures - Eight Sorts

 All sorts in this article are written for ascending order as the example

Contents

1. Direct insertion sort

2. Shell sort

3. Selection sort

4. Heap sort

5. Bubble sort

6. Quick Sort

Recursive version

1. Hoare version

2. Hole-digging method

3. Front-and-back pointer method (recommended)

Optimization of Quick Sort

1. Median-of-three

2. Small-interval optimization

Non-recursive version

7. Merge sort

Recursive implementation

Non-recursive implementation

8. Counting sort

The stability summary of the eight sorts



1. Direct insertion sort

Basic idea: insertion sort works the way we sort a hand of cards while drawing them in a card game.

1. When inserting the nth element, the previous n-1 numbers are already in order

2. Compare this nth number with the previous n-1 numbers, from back to front, to find the position where it should be inserted, then insert it (the number being inserted is saved in advance, so it is not overwritten)

3. The data after the insertion position are shifted backward one by one

 Implementation:

① One pass (insert x into the ordered interval [0, end])

That is, a single insertion: take some numbers that are already in order, and the number x to be inserted falls into one of two cases

(1) x belongs somewhere among the ordered numbers: keep comparing and shifting until a[end] <= x, then assign x to position end + 1

(2) x is smaller than all of them: end reaches -1, and x is again assigned to position end + 1 (which is position 0)

② Sorting the entire array

Since we don't know at the start whether the array is ordered, we control the subscripts: end starts at 0, the value at position end + 1 is always saved into x first, and one pass of the inner loop inserts it. On the last pass end equals n - 2 and the number at position n - 1 is the one saved into x, which is why the outer loop runs i from 0 to n - 2.

 Overall code:

void InsertSort(int* a, int n)
{
	assert(a);

	for (int i = 0; i < n - 1; ++i)
	{
		int end = i;
		int x = a[end + 1];//save the value right after end into x
		//insert x into the ordered interval [0, end]
		while (end >= 0)
		{
			if (a[end] > x)
			{
				a[end + 1] = a[end];  //shift it one position backward
				--end;
			}
			else
			{
				break;
			}
		}
		a[end + 1] = x;      //x always goes into the position right after end
	}
	
}
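
A minimal test driver, assuming InsertSort (together with the <assert.h> header it relies on) is defined in the same file; the values are made up for illustration:

#include <stdio.h>

int main(void)
{
	int a[] = { 5, 2, 9, 1, 7, 3 };   //made-up test data
	int n = sizeof(a) / sizeof(a[0]);

	InsertSort(a, n);                 //sort in ascending order

	for (int i = 0; i < n; i++)
		printf("%d ", a[i]);          //expected output: 1 2 3 5 7 9
	printf("\n");

	return 0;
}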

Direct insertion sort summary:

① The closer the array already is to being ordered, the more efficient direct insertion sort is

②Time complexity: O(N^2)

In the worst case (reverse order), every insertion has to shift all of the previous numbers, a total of 1 + 2 + 3 + ... + (n - 1) = n(n - 1)/2 moves

③ Space complexity: O(1)

No extra space is needed, only a constant number of variables

2. Shell sort

 

Basic idea:

1. First choose a number less than n as the gap; elements that are gap positions apart form one group, and each group is pre-sorted with direct insertion sort

2. Choose a smaller gap and repeat step 1

3. When gap = 1, the whole array is a single group, and one more insertion sort pass makes the whole array ordered.


Implementation:

① Sorting a single group

This is the same as the direct insertion sort above, except that the step between elements is gap instead of 1, and each group is pre-sorted separately; a sketch is given below.
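
A sketch of pre-sorting just the one group that starts at subscript 0 (the function name and this standalone form are assumptions for illustration; the full version below loops over every group and every gap):

//pre-sort only the group a[0], a[gap], a[2*gap], ... with gap-stride insertion (hypothetical helper)
void ShellSortOneGroup(int* a, int n, int gap)
{
	for (int i = 0; i + gap < n; i += gap)
	{
		int end = i;
		int x = a[end + gap];          //value to insert into the ordered part of this group
		while (end >= 0)
		{
			if (a[end] > x)
			{
				a[end + gap] = a[end]; //shift backward by one gap
				end -= gap;
			}
			else
			{
				break;
			}
		}
		a[end + gap] = x;
	}
}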

② Sorting all the groups

③ Sorting the entire array (controlling the gap)

Several pre-sort passes (gap > 1) + one insertion sort pass (gap == 1)

(1) The larger the gap, the faster a pre-sort pass runs, but the farther the array still is from being ordered.

(2) The smaller the gap, the slower a pre-sort pass runs, but the closer the array gets to being ordered.


Overall code:

void ShellSort(int* a, int n)
{

	int gap = n;
	while (gap > 1)
	{
		gap /= 2;

		for (int i = 0; i < n - gap; i++)
		{
			int end = i;
			int x = a[end + gap];
			while (end >= 0)
			{
				if (a[end] > x)
				{
					a[end + gap] = a[end];
					end -= gap;
				}
				else
				{
					break;
				}
			}
			a[end + gap] = x;
		}
	}
}

Shell sort summary:

① Shell sort is an optimization of direct insertion sort

②Time complexity: O(N^1.3)

③ Space complexity: O(1) 

3. Selection sort 

Basic idea:

Each pass selects the largest or smallest element from the remaining range and places it at the rightmost or leftmost position of that range, until everything is sorted.

Implementation:

An optimization is made here: in a single pass both the largest number (a[maxi]) and the smallest number (a[mini]) are selected and placed at the rightmost and leftmost positions, which roughly doubles the sorting speed.

① A single pass

Find the smallest number (a[mini]) and the largest number (a[maxi]) and place them at the far left and far right

Note: begin and end hold the left and right subscripts of the current range, and mini and maxi hold the subscripts of the minimum and maximum values

② Sorting the entire array

After begin++ and end--, the next pass arranges the remaining n - 2 numbers, and a single pass is performed again; this loop continues until begin is no longer less than end

Overall code:

void SelectSort(int* a, int n)
{
	int begin = 0,end = n - 1;

	while (begin<end)
	{
		int mini = begin, maxi = begin;

		for (int i = begin; i <= end; i++)
		{
			if (a[i] < a[mini])
			{
				mini = i;
			}
			if (a[i] > a[maxi])
			{
				maxi = i;
			}
		}
		Swap(&a[mini], &a[begin]);
		//when begin == maxi, the maximum has just been swapped away, so fix up maxi
		if (begin==maxi)
		{
			maxi=mini;
		}
		Swap(&a[maxi], &a[end]);
		begin++;
		end--;
	}
}
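
The Swap helper used above (and in several of the later sorts) is not shown in the article; a minimal version, assumed to be what the code expects:

void Swap(int* p1, int* p2)
{
	int tmp = *p1;
	*p1 = *p2;
	*p2 = tmp;
}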

Direct selection sort summary:

① Direct selection sort is easy to understand, but its actual efficiency is not high, so it is rarely used

②Time complexity: O(N^2)

③ Space complexity: O(1)

4. Heap sort

Basic idea:

1. Build the sequence to be sorted into a max heap; by the heap property, the root node (heap top) is the largest element in the sequence;

2. Swap the heap-top element with the last element, then re-adjust the remaining elements into a max heap;

3. Repeat step 2: each adjustment yields the maximum of the remaining elements, which is then placed at the current tail. Eventually an ordered sequence is obtained.

Small conclusion:

To sort in ascending order, build a max heap

To sort in descending order, build a min heap

Implementation:

① The downward adjustment algorithm

We build the given array into a max heap. Building the heap requires applying the downward adjustment algorithm repeatedly.

Precondition for the downward adjustment algorithm:
(1) To adjust into a min heap, the left and right subtrees of the root must already be min heaps.
(2) To adjust into a max heap, the left and right subtrees of the root must already be max heaps.

The basic idea of the downward adjustment algorithm (max heap as the example):

1. Starting from the root node, pick the larger of the left and right children

2. If that child's value is greater than the parent's value, swap the two

3. Treat that child's position as the new parent and keep adjusting downward until a leaf node is reached

//downward adjustment algorithm
//building a max heap as the example
void AdJustDown(int* a, int n, int parent)
{
	int child = parent * 2 + 1;
	//assume the left child is the larger one by default
	while (child < n)
	{
		if (child + 1 < n && a[child + 1] > a[child])//if the right child exists and is larger,
                                       //then the larger child becomes the right child
		{
			child++;
		}
		if(a[child]>a[parent])
		{
			Swap(&a[child], &a[parent]);
			parent = child;
			child = parent * 2 + 1;
		}
		else
		{
			break;
		}
	}
}

② Building the heap (turn an arbitrary array into a max heap)

The idea of building the heap:

Starting from the last non-leaf node (the parent of the last leaf) and working backwards, take each node in turn as the parent and adjust it downward, until the root has been adjusted


    //the parent of the last leaf is at index i; go from back to front, adjusting each subtree downward, until the root has been adjusted
	for (int i = (n - 1 - 1) / 2;i>=0;--i)
	{
		AdJustDown(a,n,i);
	}

③ Heap sorting (using the idea of heap deletion)

The idea of heap sort:

1. After the heap is built, swap the number at the top of the heap with the last number
2. Ignore that last number, adjust the remaining n - 1 numbers back down into a max heap, then go back to step 1

3. Stop when only one number is left; the array is then in ascending order

for (int end = n - 1; end > 0; --end)
	{
		Swap(&a[end],&a[0]);
		AdJustDown(a,end,0);
	}

The overall code is as follows:

void AdJustDown(int* a, int n, int parent)
{
	int child = parent * 2 + 1;
	
	while (child < n)
	{
		if (child + 1 < n && a[child + 1] > a[child])
		{
			child++;
		}
		if(a[child]>a[parent])
		{
			Swap(&a[child], &a[parent]);
			parent = child;
			child = parent * 2 + 1;
		}
		else
		{
			break;
		}
	}
}

//heap sort
void HeapSort(int*a,int n)
{
	
	for (int i = (n - 1 - 1) / 2;i>=0;--i)
	{
		AdJustDown(a,n,i);
	}
	
	for (int end = n - 1; end > 0; --end)
	{
		Swap(&a[end],&a[0]);
		AdJustDown(a,end,0);
	}
}

5. Bubble sort

The basic idea of bubble sort:

In one pass, adjacent numbers are compared in turn and the larger one is pushed toward the back. The next pass then only needs to compare the remaining n - 1 numbers, and so on.

//optimized version of bubble sort
void BubbleSort(int* a, int n)
{
	int end = n-1;
	while (end>0)
	{
		int exchange = 0;
		for (int i = 0; i < end; i++)
		{
			if (a[i] > a[i + 1])
			{
				Swap(&a[i], &a[i + 1]);
				exchange = 1;
			}
		}
		if (exchange == 0)//if no swap happened during this pass, the array is already sorted and there is no need to continue
		{
			break;
		}
		end--;
	}
}

 Bubble sort summary:

① A very easy sort to understand

②Time complexity: O(N^2)

③ Space complexity: O(1)

6. Quick Sort

Recursive version

1. Hoare version

The idea of a single Hoare pass:

1. Take the leftmost element as the key; the right pointer moves first, looking for a value smaller than the key

2. Then the left pointer moves, looking for a value greater than the key

3. Swap the values at left and right

4. Keep repeating steps 1, 2 and 3 (the key stays fixed at the far left)

5. When the two pointers meet, the value at the meeting position is swapped with the key at the far left

This puts the key in its correct position


//Hoare version
//single pass: move the key to its correct position; keyi is the subscript of the key, not the value at that position
int partion1(int* a, int left, int right)
{
	int keyi = left;//the left end is taken as keyi
	while (left < right)
	{   //the right side moves first, looking for a value smaller than a[keyi]
		while (left < right && a[right] >= a[keyi])
		{
			right--;
		}
		//then the left side moves, looking for a value greater than a[keyi]
		while (left < right && a[left] <= a[keyi])
		{
			left++;
		}
		Swap(&a[left], &a[right]);
	}
	Swap(&a[left], &a[keyi]);
	return left;
}

void QuickSort(int* a, int left, int right)
{
	if (left >= right)
		return;

	int keyi = partion1(a, left, right);
	//[left,keyi-1] keyi [keyi+1,right]
	QuickSort(a, left, keyi - 1);
	QuickSort(a, keyi + 1, right);
}
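
Note that this QuickSort takes the subscripts of the first and last elements rather than a length; a call for a whole array of n elements might look like this (the values are made up for illustration):

int a[] = { 6, 1, 2, 7, 9, 3, 4, 5, 10, 8 };
int n = sizeof(a) / sizeof(a[0]);
QuickSort(a, 0, n - 1);   //pass the first and last subscripts, not the length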

2. Hole-digging method

It is essentially a variation of the Hoare version

The idea of a single hole-digging pass:

1. First store the leftmost element in a temporary variable key, which leaves a pit at that position

2. Starting from the right, find a value smaller than key and throw it into the pit; its old position becomes the new pit

3. Starting from the left, find a value greater than key and throw it into the pit; its old position becomes the new pit

4. Keep repeating steps 2 and 3

5. When the two sides meet, the meeting position is the final pit, and key is thrown into it

This way the key has reached the correct position



//hole-digging method
int partion2(int* a, int left, int right)
{
	int key = a[left];
	int pit = left;
	while (left < right)
	{
		while (left < right && a[right] >= key)
		{
			right--;
		}
		a[pit] = a[right];//fill the pit
		pit=right;


		while (left < right && a[left] <= key)
		{
			left++;
		}
		a[pit] = a[left];//fill the pit
		pit=left;
	}
	a[pit] = key;
	return pit;
}

void QuickSort(int* a, int left, int right)
{
	if (left >= right)
		return;

	int keyi = partion2(a, left, right);
	//[left,keyi-1] keyi [keyi+1,right]
	QuickSort(a, left, keyi - 1);
	QuickSort(a, keyi + 1, right);
}

3. Front-and-back pointer method (the recommended way to write it)

The idea of the front-and-back pointer method:

1. Initially prev points to the start of the sequence, cur points to the position right after prev, and the first number on the left is chosen as the key

2. cur moves first and stops when it finds a value less than the key

3. ++prev

4. Swap the values at subscripts prev and cur

5. Repeat steps 2, 3 and 4 in a loop; when cur runs off the end, swap the values at subscripts keyi and prev

This way the key also reaches the correct position


int partion3(int* a, int left, int right)
{
	int prev = left;
	int cur = left + 1;
	int keyi = left;
	while (cur <= right)
	{
		if (a[cur] < a[keyi] && ++prev != cur)//the "!= cur" check avoids swapping an element with itself when cur and prev are equal; it can be omitted
		{                                   //prefix ++ has higher precedence than !=
			Swap(&a[prev], &a[cur]);
		}
		++cur;
	}
	Swap(&a[keyi], &a[prev]);
	return prev;
}

void QuickSort(int* a, int left, int right)
{
	if (left >= right)
		return;

	int keyi = partion3(a, left, right);
	//[left,keyi-1] keyi [keyi+1,right]
	QuickSort(a, left, keyi - 1);
	QuickSort(a, keyi + 1, right);
}


Optimization of Quick Sort

1. Median-of-three

Quick sort is sensitive to its input. If the sequence is thoroughly shuffled, quick sort is very efficient, but if the sequence is already ordered, its time complexity degrades from O(N*logN) to O(N^2), on a par with bubble sort.

If the key chosen for each pass is close to the median of the sequence, i.e. after a single pass the key sits near the middle, then the time complexity of quick sort is O(N*logN).

That, however, is the ideal case. In the extreme case of an already ordered array, choosing the leftmost element as the key degenerates to O(N^2). So we instead take three numbers: the one at the first position, the one at the last position, and the one at the middle position; we pick the one whose value is the median of the three and swap it to the far left. The key is still taken from the left, but after this optimization the ordered case behaves like the ideal case.

//optimization of quick sort
//median-of-three: return the subscript of the median of a[left], a[mid], a[right]
int GetMidIndex(int* a, int left, int right)
{
	int mid = (left + right) / 2;
	
	if (a[left] < a[right])
	{
		if (a[mid] < a[left])
		{
			return left;
		}
		else if (a[mid] > a[right])
		{
			return right;
		}
		else
		{
			return mid;
		}
	}

	else
	{
		
		if (a[mid] > a[left])
		{
			return left;
		}
		else if (a[mid] < a[right])
		{
			return right;
		}
		else
		{
			return mid;
		}
	}

}
int partion5(int* a, int left, int right)
{
	//median-of-three: an ordered array used to be the worst case O(N^2); now the key chosen each time is close to the median, which is the best case
	int midi = GetMidIndex(a, left, right);
	Swap(&a[midi], &a[left]);//the leftmost position still serves as the key

	int prev = left;
	int cur = left + 1;
	int keyi = left;
	while (cur <= right)
	{
		if (a[cur] < a[keyi] && ++prev != cur)//the "!= cur" check avoids swapping an element with itself when cur and prev are equal; it can be omitted
		{                                   //prefix ++ has higher precedence than !=
			//++prev;
			Swap(&a[prev], &a[cur]);
		}
		++cur;
	}
	Swap(&a[keyi], &a[prev]);
	return prev;
}

2. Small-interval optimization (stop recursing on small subintervals)

As the recursion gets deeper, the number of recursive calls roughly doubles with each level, which hurts efficiency. Once a subinterval has been divided down to a certain size, continuing to divide and recurse is less efficient than simply insertion-sorting it, so insertion sort can be used in place of quick sort there.

Here we switch to insertion sort when the length of the divided interval is less than 10.

//small-interval optimization: direct insertion sort can be used for small subintervals
void QuickSort(int* a, int left, int right)
{
	if (left >= right)
		return;

	if (right - left + 1 < 10)
	{
		InsertSort(a + left, right - left + 1);
	}
	else
	{
		int keyi = partion5(a, left, right);
		//[left,keyi-1] keyi [keyi+1,right]
		QuickSort(a, left, keyi - 1);
		QuickSort(a, keyi + 1, right);
	}
}

Non-recursive version

The recursive algorithm mainly works by dividing the array into subintervals, so to implement quick sort non-recursively we only need a stack to save those intervals. A stack is usually the first thing that comes to mind when turning a recursive program into a non-recursive one, because recursion itself is a process of pushing stack frames.

The basic idea of the non-recursive version:

1. Create a stack to store the start and end positions of the subintervals to be sorted.

2. Push the start and end positions of the entire array onto the stack.

3. Since a stack is last in, first out, and right was pushed after left, right is popped first.

Define end to receive the top of the stack and pop it, then define begin to receive the top of the stack and pop it.

4. Do a single partition pass over that interval and get back the subscript of the key.

5. Next, the sequence to the left of the key needs to be sorted.

If only the start and end positions of the left sequence were pushed onto the stack, the right interval would be lost once the left side has been sorted. So the start and end positions of the right sequence are pushed first, and then those of the left sequence.

6. Check whether the stack is empty. If it is not, repeat steps 4 and 5; if it is empty, the sort is complete.

void QuickSortNonR(int* a, int left, int right)
{
	Stack st;
	StackInit(&st);
	StackPush(&st,left);
	StackPush(&st, right);

	while (!StackEmpty(&st))
	{
		int end = StackTop(&st);
		StackPop(&st);

		int begin = StackTop(&st);
		StackPop(&st);

		int keyi = partion5(a,begin,end);
		//the interval is split into two parts: [begin, keyi-1] keyi [keyi+1, end]
		if (keyi + 1 < end)
		{
			StackPush(&st,keyi+1);
			StackPush(&st,end);
		}
		if (keyi-1>begin)
		{
			StackPush(&st, begin);
			StackPush(&st, keyi -1);
		}
	}
	StackDestroy(&st);
}
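
The non-recursive version relies on a Stack type and the functions StackInit, StackPush, StackTop, StackPop, StackEmpty and StackDestroy from an earlier stack implementation that is not reproduced in this article. A minimal array-based sketch, assuming this is the interface the code above expects:

#include <stdlib.h>
#include <assert.h>

typedef struct Stack
{
	int* data;
	int top;        //number of elements currently on the stack
	int capacity;
} Stack;

void StackInit(Stack* ps)
{
	ps->data = NULL;
	ps->top = 0;
	ps->capacity = 0;
}

void StackPush(Stack* ps, int x)
{
	if (ps->top == ps->capacity)      //grow the array when it is full
	{
		int newCap = ps->capacity == 0 ? 4 : ps->capacity * 2;
		int* tmp = (int*)realloc(ps->data, sizeof(int) * newCap);
		assert(tmp);
		ps->data = tmp;
		ps->capacity = newCap;
	}
	ps->data[ps->top++] = x;
}

void StackPop(Stack* ps)
{
	assert(ps->top > 0);
	ps->top--;
}

int StackTop(Stack* ps)
{
	assert(ps->top > 0);
	return ps->data[ps->top - 1];
}

int StackEmpty(Stack* ps)
{
	return ps->top == 0;              //non-zero when the stack is empty
}

void StackDestroy(Stack* ps)
{
	free(ps->data);
	ps->data = NULL;
	ps->top = ps->capacity = 0;
}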

Summary of Quick Sort:

① Quick sort has good all-round performance and suits a wide range of scenarios, which is why it earns the name "quick" sort

② The weak spot of quick sort is sequences such as 2,3,2,3,2,3,2,3 that consist of many repeated values, or ordered and nearly ordered sequences without the optimizations above, where the time complexity can degrade to O(N^2)

③Time complexity O(N*logN)

④ Space complexity O(logN)

7. Merge sort

The basic idea of merge sort (divide and conquer):

1. (Split) Divide the array into a left half and a right half and make each half ordered; to do that, each half is again divided into a left and right part, and so on, until an interval can no longer be divided, i.e. it contains only one number

2. (Merge) Merge the ordered parts obtained in step 1 back into one ordered interval

Implementation: the split and the merge can both be seen in the recursive code below.

Recursive implementation:

The idea is very similar to a binary tree traversal, so merge sort can be implemented recursively

The code is as follows:

void _MergeSort(int* a, int left, int right, int* tmp)
{
	if (left >= right)
	{
		return;
	}
	int mid = (left + right) / 2;
	_MergeSort(a, left, mid, tmp);
	_MergeSort(a, mid+1, right, tmp);
	
	int begin1 = left, end1 = mid;
	int begin2 = mid + 1, end2 = right;
	int i = left;
	while (begin1 <= end1 && begin2 <= end2)
	{
		if (a[begin1] < a[begin2])
		{
			tmp[i++] = a[begin1++];
		}
		else
		{
			tmp[i++] = a[begin2++];
		}
	}
	while (begin1 <= end1)
	{
		tmp[i++] = a[begin1++];
	}
	while (begin2 <= end2)
	{
		tmp[i++] = a[begin2++];
	}
	for (int j = left; j <= right; j++)
	{
		a[j] = tmp[j];
	}
}
//merge sort
void MergeSort(int* a, int n)
{
	int* tmp = (int*)malloc(sizeof(int)*n);
	if (tmp == NULL)
	{
		printf("malloc fail\n");
		exit(-1);
	}
	_MergeSort(a,0,n-1,tmp);

	free(tmp);
	tmp = NULL;
}

Non-recursive implementation:

We know the disadvantage of the recursive implementation is that it keeps consuming call-stack frames, and stack memory is usually quite small, so we try to implement it with a loop instead.

Since we are manipulating array subscripts directly, we control the subinterval boundaries ourselves instead of obtaining them through recursion. The difference from recursion is that the recursive version keeps subdividing, first recursing into the left interval and then into the right; the non-recursive version processes all the pairs of a given size in one sweep, copying each merged range back to the original array as it goes.

The basic idea is to regard the sequence a[0...n-1] as n ordered sequences of length 1, merge adjacent sequences in pairs to obtain n/2 ordered sequences of length 2, merge again to obtain n/4 ordered sequences of length 4, and so on, until a single ordered sequence of length n is obtained.

That is the ideal case, where the element count divides evenly. When it does not, some of the gap-sized groups will run past the end of the array, so the boundaries need special handling.

The first case: end1 or begin2 is already out of range, so that pair needs no merging at all.

The second case: only end2 is out of range; the pair is still merged, but end2 is clamped to n - 1.

The code is as follows:

void MergeSortNonR(int* a, int n)
{
	int* tmp = (int*)malloc(sizeof(int)*n);
	if (tmp == NULL)
	{
		printf("malloc fail\n");
		exit(-1);
	}

	int gap = 1;
	while (gap < n)
	{
		for (int i = 0; i < n; i += 2 * gap)
		{
			// [i,i+gap-1] [i+gap,i+2*gap-1]
			int begin1 = i, end1 = i + gap - 1;
			int begin2 = i + gap, end2 = i + 2 * gap - 1;

			// key point: end1, begin2 and end2 may all go out of range
			// if end1 or begin2 is out of range, no merging is needed
			if (end1 >= n || begin2 >= n)
			{
				break;
			}
			
			// end2 out of range: still merge, but clamp end2
			if (end2 >= n)
			{
				end2 = n- 1;
			}

			int index = i;
			while (begin1 <= end1 && begin2 <= end2)
			{
				if (a[begin1] < a[begin2])
				{
					tmp[index++] = a[begin1++];
				}
				else
				{
					tmp[index++] = a[begin2++];
				}
			}

			while (begin1 <= end1)
			{
				tmp[index++] = a[begin1++];
			}

			while (begin2 <= end2)
			{
				tmp[index++] = a[begin2++];
			}

			// copy the merged subinterval back to the original array
			for (int j = i; j <= end2; ++j)
			{
				a[j] = tmp[j];
			}
		}

		gap *= 2;
	}

	free(tmp);
	tmp = NULL;
}

Summary of Merge Sort:

① The disadvantage is that it needs O(N) extra space; merge sort is used more to solve external sorting (sorting data on disk)

②Time complexity: O(N*logN)

③ Space complexity: O(N)

8. Counting sort

Counting sort is a non-comparison sort, related to the pigeonhole principle; it is a variant application of the direct-addressing idea from hashing

Basic idea:

1. Count the number of occurrences of each element

2. Based on those counts, write the data back into the original array

Implementation:

① Counting the occurrences of each element

For a given array a, we open up a counting array count: for each element a[i] with value v, we increment count at subscript v.

That is absolute mapping: an element with value v increments count[v]. But when the data are clustered and do not start near zero, for example values like 1001, 1002, 1003 and 1004, we can use relative mapping instead to avoid wasting array space: the size of the count array is the maximum value in a minus the minimum value plus 1 (range = max - min + 1), and the count subscript for a[i] is j = a[i] - min.
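
For example (values made up for illustration): for a = {1003, 1001, 1002, 1001}, min = 1001 and range = 3, so count subscripts 0, 1 and 2 stand for the values 1001, 1002 and 1003. After counting, count = {2, 1, 1}, and the write-back step produces a = {1001, 1001, 1002, 1003}.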

② Writing the data back into a according to the count array

count[j] records how many times the corresponding value appeared; if it is 0, nothing is written for that value.

The code is as follows:

void CountSort(int* a, int n)
{
	int min = a[0], max = a[0];//without initialization, min and max would hold garbage values, so initialize them to a[0]

	for (int i=1;i<n;i++)//find the maximum and minimum values in array a
	{
		if (a[i] < min)
		{
			min=a[i];
		}
		if (a[i]>max)
		{
			 max=a[i];
		}
	}
	int range = max - min + 1;//size of the newly allocated array, to avoid wasting space
	int* count = (int*)malloc(sizeof(int) * range);
	if (count == NULL)
	{
		printf("malloc fail\n");
		exit(-1);
	}
	memset(count, 0, sizeof(int) * range);//initialize all counts to 0

	//1. count the occurrences of each value
	for (int i=0;i<n;i++)
	{
		count[a[i]-min]++;
	}
	//2. write the data back into array a
	int j = 0;
	for (int i=0;i<range;i++)
	{
		while (count[i]--)
		{
			a[j++] = i + min;
		}
	}
	free(count);
	count = NULL;
}

 Counting sort summary:

① When the data range is concentrated, counting sort is very efficient, but its usage scenarios are limited: negative integers can be handled, but it cannot do anything about floating-point numbers

② Time complexity: O(max(N, range))

③ Space complexity: O(range)

The stability summary of the eight sorts:

Stable sorts are: direct insertion sort, bubble sort, merge sort

Unstable sorts are: Shell sort, selection sort, heap sort, quick sort, counting sort

Origin blog.csdn.net/weixin_57675461/article/details/121903270