Several common sorting algorithms and their advantages and disadvantages

Common sorting algorithms are:

  1. Bubble Sort: Repeatedly traverse the sequence to be sorted, comparing adjacent elements and swapping them when they are out of order, until a full pass makes no swaps. The time complexity is O(n^2), and it is a stable sorting algorithm.

  2. Selection Sort: On each pass, find the smallest remaining element and place it at the end of the already-sorted prefix. The time complexity is O(n^2), and it is not a stable sorting algorithm.

  3. Insertion Sort: Traverse the sequence to be sorted and insert each element into its proper position within the already-sorted prefix. The time complexity is O(n^2), and it is a stable sorting algorithm.

  4. Shell Sort: An improved insertion sort that sorts interleaved subsequences defined by a shrinking gap, finishing with an ordinary insertion sort at gap 1. The time complexity depends on the gap sequence, ranging from roughly O(n log^2 n) to O(n^2); it is not a stable sorting algorithm.

  5. Merge Sort: Recursively split the sequence into halves, sort each half, then merge the sorted halves into the final ordered sequence. The time complexity is O(n log n), and it is a stable sorting algorithm.

  6. Quick Sort: Choose a pivot element, partition the sequence so that elements smaller than the pivot end up on its left and larger elements on its right, then recursively sort both parts. The average time complexity is O(n log n), degrading to O(n^2) in the worst case; it is not a stable sorting algorithm.

  7. Heap Sort: Build the sequence into a max-heap (or min-heap), repeatedly swap the top element with the last unsorted element, and restore the heap property. The time complexity is O(n log n), and it is not a stable sorting algorithm.

The advantages and disadvantages are as follows; each entry is followed by a short Python sketch for reference:

Bubble Sort
  Advantages: 1. Simple to implement, easy to understand and code; 2. a reasonable choice for sorting small data sets.
  Disadvantages: 1. High time complexity, O(n^2); 2. inefficient for sorting large amounts of data; 3. the naive version still performs every pass even when the array is already sorted (an early-exit flag that stops when a pass makes no swaps avoids this).
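
A minimal Python sketch of bubble sort, including the early-exit flag described above (the function name is illustrative, not from the original post):

```python
def bubble_sort(a):
    """Sort list a in place; stable, O(n^2) in the worst case."""
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:  # early exit: a full pass made no swaps
            break
```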
Selection Sort
  Advantages: 1. The algorithm idea is simple and easy to implement; 2. it sorts in place, occupying no additional memory; 3. among the O(n^2) algorithms it performs the fewest swaps (at most n - 1), giving it a slight edge over bubble sort when moving data is expensive.
  Disadvantages: 1. High time complexity, O(n^2); 2. the running time is independent of the initial state of the input: whether the data is ordered or not, the same number of comparisons is performed; 3. it is not stable.
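
A minimal selection-sort sketch, showing the at-most-one-swap-per-pass property:

```python
def selection_sort(a):
    """Sort list a in place; O(n^2) comparisons, at most n - 1 swaps."""
    n = len(a)
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):  # find the smallest remaining element
            if a[j] < a[min_idx]:
                min_idx = j
        if min_idx != i:           # at most one swap per pass
            a[i], a[min_idx] = a[min_idx], a[i]
```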
Insertion Sort
  Advantages: 1. Simple to implement, easy to understand and code; 2. efficient for small-scale data; 3. especially efficient on partially ordered arrays.
  Disadvantages: 1. Inefficient for large out-of-order arrays, with O(n^2) time complexity; 2. when implemented with swaps, each exchange takes three assignments, and insertion sort performs many more exchanges than selection sort, so its data-movement cost can be higher (shifting elements instead of swapping reduces this to one assignment per moved element).
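
A minimal insertion-sort sketch using shifting rather than swapping, so each moved element costs a single assignment:

```python
def insertion_sort(a):
    """Sort list a in place; stable, fast on nearly sorted input."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:  # shift larger elements one slot right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key                # drop the element into its slot
```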
Shell Sort
  Advantages: 1. An improved algorithm based on insertion sort that quickly handles nearly sorted sequences; 2. an in-place algorithm requiring no additional storage; 3. the upper bound of the time complexity is O(n^2), but the actual running time is usually much shorter.
  Disadvantages: 1. The time complexity depends on the gap sequence, and choosing a good gap sequence is not easy; 2. it is unstable: the relative order of equal elements is not guaranteed to be preserved.
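
A minimal Shell-sort sketch using the simple halving gap sequence (Shell's original choice; better sequences exist):

```python
def shell_sort(a):
    """Sort list a in place; gapped insertion sort, not stable."""
    gap = len(a) // 2
    while gap > 0:
        for i in range(gap, len(a)):     # insertion sort within each gap-slice
            key = a[i]
            j = i
            while j >= gap and a[j - gap] > key:
                a[j] = a[j - gap]
                j -= gap
            a[j] = key
        gap //= 2                        # final pass (gap 1) is plain insertion sort
```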
Merge Sort
  Advantages: 1. Stable: the relative order of equal elements is preserved; 2. the time complexity is a consistent O(n log n) in all cases; 3. well suited to sorting large amounts of data.
  Disadvantages: 1. Requires O(n) additional storage for the temporary array; 2. the recursive implementation consumes system stack space; 3. it performs the full O(n log n) work even when the array is already sorted.
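
A minimal merge-sort sketch; the `<=` comparison in the merge step is what keeps it stable:

```python
def merge_sort(a):
    """Return a new sorted list; stable, O(n log n), O(n) extra space."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:   # <= keeps equal elements in original order
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])       # append whichever half has leftovers
    merged.extend(right[j:])
    return merged
```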
Quick Sort
  Advantages: 1. Low space overhead: only the recursion stack is needed, O(log n) on average; 2. good time complexity: O(n log n) in the best and average cases; 3. performs very well on randomly distributed data.
  Disadvantages: 1. Unstable, since partitioning moves elements past equal ones; 2. O(n^2) in the worst case: with a naive pivot choice (e.g. always the first element), already-sorted input degrades it badly; 3. performance suffers on data with many identical elements (a three-way partition mitigates this).
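
A minimal quick-sort sketch; a random pivot is used here to avoid the sorted-input worst case mentioned above:

```python
import random

def quick_sort(a, lo=0, hi=None):
    """Sort list a in place; average O(n log n), not stable."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    pivot = a[random.randint(lo, hi)]  # random pivot guards against sorted input
    i, j = lo, hi
    while i <= j:                      # partition around the pivot value
        while a[i] < pivot:
            i += 1
        while a[j] > pivot:
            j -= 1
        if i <= j:
            a[i], a[j] = a[j], a[i]
            i += 1; j -= 1
    quick_sort(a, lo, j)               # recurse into both partitions
    quick_sort(a, i, hi)
```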
Heap Sort
  Advantages: 1. Consistent time complexity: O(n log n) in the best, average, and worst cases; 2. sorts in place with O(1) additional space, since the heap is stored inside the array itself; 3. handles randomly distributed data and data with many equal elements reliably.
  Disadvantages: 1. Not stable: sifting elements through the heap changes the relative order of equal elements; 2. poor locality of reference (heap accesses jump around the array), so in practice it is usually slower than quick sort despite the same asymptotic complexity.
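
A minimal heap-sort sketch; the max-heap lives inside the array itself, so no extra storage is needed:

```python
def heap_sort(a):
    """Sort list a in place; O(n log n) in all cases, O(1) extra space."""
    n = len(a)

    def sift_down(root, end):
        """Restore the max-heap property for a[root:end]."""
        while 2 * root + 1 < end:
            child = 2 * root + 1
            if child + 1 < end and a[child + 1] > a[child]:
                child += 1                  # pick the larger child
            if a[root] >= a[child]:
                return
            a[root], a[child] = a[child], a[root]
            root = child

    for i in range(n // 2 - 1, -1, -1):     # heapify bottom-up
        sift_down(i, n)
    for end in range(n - 1, 0, -1):         # move the max to the end, shrink heap
        a[0], a[end] = a[end], a[0]
        sift_down(0, end)
```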

Source: blog.csdn.net/qq_42133976/article/details/130417604