Sorting (1): Exchange Sorts in C/C++ and Python

Exchange sorts

An exchange sort swaps the positions of two records in the sequence based on the comparison of their keys. The main exchange sorts are bubble sort and quick sort.

Bubble Sort

The basic idea of bubble sort: going from front to back (or back to front), compare each pair of adjacent elements and swap them if they are out of order. Each bubbling pass moves at least one element to its final position; repeat this n-1 times and the sorting of n data items is complete.

If you sort the data 7, 8, 9, 6, 5, 4 in ascending order, the first bubbling pass compares 7 with 8 and 8 with 9 (no swaps), then swaps 9 successively with 6, 5, and 4, giving 7, 8, 6, 5, 4, 9. As you can see, after one bubbling pass, one element has been moved to its final position. After n-1 such bubbling passes, n-1 elements have been moved to their final positions, and the remaining one is naturally in place.
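
To make the pass concrete, here is a minimal sketch (not from the original post) that runs the first bubbling pass and prints the array after each comparison, standing in for the missing figure:

# A minimal sketch of the first bubbling pass on [7, 8, 9, 6, 5, 4],
# printing the array after each comparison.
a = [7, 8, 9, 6, 5, 4]
for j in range(len(a) - 1):
    if a[j] > a[j + 1]:  # adjacent pair out of order
        a[j], a[j + 1] = a[j + 1], a[j]
    print(a)
# [7, 8, 9, 6, 5, 4]
# [7, 8, 9, 6, 5, 4]
# [7, 8, 6, 9, 5, 4]
# [7, 8, 6, 5, 9, 4]
# [7, 8, 6, 5, 4, 9]   <- 9 has bubbled to its final position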

In fact, the bubbling process can be optimized: when a bubbling pass performs no swaps, the array is already fully ordered, and the remaining passes can be skipped.

// Bubble sort in C: a is the array, n is its size
/**
 * Author: gamilian
*/
#include <stdbool.h>

void bubble_sort(int a[], int n) {
	if (n <= 1)
		return;
	for (int i = 0; i < n; ++i) {
		bool flag = false;	// flag for exiting the bubbling loop early
		for (int j = 0; j < n - i - 1; ++j) {
			if (a[j] > a[j + 1]) {	// swap
				int tmp = a[j];
				a[j] = a[j + 1];
				a[j + 1] = tmp;
				flag = true;	// a swap occurred in this pass
			}
		}
		if (!flag)
			break;	// no swaps in this pass, exit early
	}
}
# Bubble sort in Python
"""
    Author: gamilian
"""
def bubble_sort(a):
    """ Bubble sort (in place)
        args:
            a: List[int]
    """
    length = len(a)
    if length <= 1:
        return

    for i in range(length):
        made_swap = False
        for j in range(length - i - 1):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                made_swap = True
        if not made_swap:
            break
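
A quick usage check (a hypothetical driver, not part of the original post):

data = [7, 8, 9, 6, 5, 4]
bubble_sort(data)
print(data)  # [4, 5, 6, 7, 8, 9]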

The stability of the algorithm : in bubble sort, only a swap can change the relative order of two elements. If we simply do not swap adjacent elements that are equal, data of equal value keep their original relative order after sorting, so bubble sort is a stable sorting algorithm.

Space complexity : the bubbling process only swaps adjacent data and needs only a constant amount of temporary space, so its space complexity is O(1); it is an in-place sorting algorithm.

Time complexity : in the best case, the data is already ordered and a single bubbling pass finishes the job, so the best-case time complexity is O(n). In the worst case, the data is in reverse order and we need n-1 bubbling passes, so the worst-case time complexity is O(n^2). The average time complexity is trickier and can be derived via the degree of reverse order.
The degree of reverse order is the number of element pairs in the array that are out of order.

Inverted element pair: a[i] > a[j], where i < j.

The degree of order is the number of element pairs in the array that are in order.

Ordered element pair: a[i] <= a[j], where i < j.

For a completely ordered array, the degree of order is n*(n-1)/2. We call the degree of order of such a fully ordered array the full degree of order.
Thus, degree of reverse order = full degree of order - degree of order.

Bubble sort consists of two atomic operations: compare and swap. Each swap increases the degree of order by exactly one. No matter how the algorithm is tuned, the number of swaps is therefore fixed: it equals the degree of reverse order, that is, n*(n-1)/2 minus the initial degree of order.

Consider bubble sorting an array of n data items. In the worst case, the initial data is in reverse order and the degree of order is 0, so n*(n-1)/2 swaps are performed. In the best case, the initial data is ordered and the degree of order is n*(n-1)/2, so no swaps are needed. On average, about n*(n-1)/4 swap operations are required; there are certainly more comparisons than swaps, and the upper bound on the complexity is O(n^2), so the average time complexity is O(n^2).
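
To make the degree of reverse order concrete, here is a minimal O(n^2) counter (a hypothetical helper, not from the original post):

def inversion_count(a):
    """Count pairs (i, j) with i < j and a[i] > a[j]."""
    n = len(a)
    return sum(1 for i in range(n)
                 for j in range(i + 1, n) if a[i] > a[j])

# For [7, 8, 9, 6, 5, 4]: the full degree of order is 6*5/2 = 15 and the
# degree of order is 3 (the pairs (7,8), (7,9), (8,9)), so 15 - 3 = 12.
print(inversion_count([7, 8, 9, 6, 5, 4]))  # 12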

Quick sort

The idea of quick sort is based on divide and conquer: to sort the part of the array between subscripts p and r, we pick any element between p and r as the pivot (partition point).

We traverse the data between p and r, placing elements smaller than the pivot on its left and elements larger than the pivot on its right, with the pivot in between. After this step, the data from p to r is divided into three parts: elements p through q-1 are less than the pivot, position q holds the pivot, and elements q+1 through r are greater than the pivot.

Following the divide-and-conquer idea, we recursively sort the data with subscripts p to q-1 and the data with subscripts q+1 to r, until each interval shrinks to length 1, at which point all the data is in order.

Recursion formula: quick_sort(left...right) = quick_sort(left...pivot-1) + quick_sort(pivot+1...right)
Termination condition: left >= right

//	Quick sort in C
/**
 * Author: gamilian
*/
//	Partition the interval [left, right]
int Partition(int A[], int left, int right){
	int pivot = A[left];	// use the first element as the pivot
	while(left < right){	// until left and right meet
		while(left < right && A[right] > pivot) right--;
		// keep moving right leftwards while A[right] is greater than the pivot
		A[left] = A[right];	// move an element no greater than the pivot to the left
		while(left < right && A[left] <= pivot) left++;
		// keep moving left rightwards while A[left] is no greater than the pivot
		A[right] = A[left];	// move an element greater than the pivot to the right
	}
	A[left] = pivot;		// place the pivot where left and right meet
	return left;			// return the subscript of the pivot
}
// A is the array; left and right start as the first and last subscripts
void quick_sort(int A[], int left, int right){
	if(left < right){						// current interval is longer than 1
		int pivot = Partition(A, left, right);	// partition the interval
		quick_sort(A, left, pivot - 1);			// quick sort the left subinterval
		quick_sort(A, pivot + 1, right);		// quick sort the right subinterval
	}
}
# Quick sort in Python, partitioning with swaps
"""
    Author: gamilian
"""
import random

def quick_sort(a, left, right):
    """ Quick sort
        args:
            a: List[int]
            left: int
            right: int
    """
    if left < right:
        # pick a random position as the pivot and move it to the front
        k = random.randint(left, right)
        a[left], a[k] = a[k], a[left]

        m = partition(a, left, right)  # a[m] is in its final position
        quick_sort(a, left, m - 1)
        quick_sort(a, m + 1, right)


def partition(a, left, right):
    """ Partition the interval [left, right]
        args:
            a: List[int]
            left: int
            right: int
    """
    pivot, j = a[left], left
    for i in range(left + 1, right + 1):
        if a[i] <= pivot:
            j += 1
            a[j], a[i] = a[i], a[j]  # swap
    a[left], a[j] = a[j], a[left]
    return j
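
A quick usage check (a hypothetical driver, not part of the original post):

data = [7, 8, 9, 6, 5, 4]
quick_sort(data, 0, len(data) - 1)
print(data)  # [4, 5, 6, 7, 8, 9]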

The stability of the algorithm : because the partitioning process involves swaps, two equal elements can have their relative order changed by a partition; for example, partitioning 6, 8, 7, 6, 3, 5 around the first 6 leaves the two 6s in reversed relative order. Therefore, quick sort is not a stable sorting algorithm.

Space complexity : if the recursion stack is counted, then because quick sort is recursive, a recursion work stack is needed to save the necessary information for each level of recursive calls, and its size matches the maximum depth of the recursion. In the best case the recursion depth is O(logn), so the space complexity is O(logn); in the worst case n-1 nested recursive calls are needed, so the worst-case space complexity is O(n); the average space complexity of quick sort is O(logn).
If the recursion stack is not counted, quick sort's space complexity is O(1), and it is an in-place sorting algorithm.

Time complexity : if each partition operation splits the array into two subintervals of nearly equal size, the recurrence for the time complexity is the same as for merge sort: each level of recursion does O(n) work in total, and there are about logn levels. Therefore, the best case of quick sort is O(nlogn).

T(1) = C (when n = 1, only constant-level execution time is needed, expressed as C)
T(n) = 2 * T(n/2) + n (n > 1)

In the worst case, for example when the data is already ordered and we choose the last element as the pivot every time, the two intervals obtained by each partition are extremely unequal. We then need about n partition operations to finish the whole quick sort, and each partition scans about n/2 elements on average, so the worst-case time complexity of quick sort is O(n^2).

Now assume instead that each partition operation splits the interval into two subintervals whose sizes are in a 9:1 ratio.

T(1) = C (when n = 1, only constant-level execution time is needed, expressed as C)
T(n) = T(n/10) + T(9n/10) + n (n > 1)

Even with such an uneven 9:1 split, the recursion depth is only logarithmic in n (log base 10/9) and each level does at most n work. Therefore, the average time complexity of quick sort is O(nlogn).
