Quick sort vs. merge sort vs. heap sort: which sorting algorithm is the most powerful?

There is a question along these lines that almost everyone has seen:

Heap sort is an asymptotically optimal sorting algorithm, reaching the O(nlgn) lower bound, while quick sort has a certain probability of producing its worst-case partition, in which case its time complexity degrades to O(n^2). Why, then, does quick sort usually beat heap sort in actual use?

Just yesterday I wrote an article about optimizing quick sort, so today let's take the comparison a step further. First, look at a summary table of sorting algorithms:

Sort method             Average case        Best case   Worst case   Auxiliary space    Stability
Bubble sort             O(n^2)              O(n)        O(n^2)       O(1)               Stable
Simple selection sort   O(n^2)              O(n^2)      O(n^2)       O(1)               Stable
Direct insertion sort   O(n^2)              O(n)        O(n^2)       O(1)               Stable
Shell sort              O(nlogn) ~ O(n^2)   O(n^1.3)    O(n^2)       O(1)               Unstable
Heap sort               O(nlogn)            O(nlogn)    O(nlogn)     O(1)               Unstable
Merge sort               O(nlogn)            O(nlogn)    O(nlogn)     O(n)               Stable
Quick sort              O(nlogn)            O(nlogn)    O(n^2)       O(logn) ~ O(n)     Unstable

As the table shows, three algorithms reach the nlogn level: heap sort, merge sort, and quick sort, and among them only merge sort is stable. So why is quick sort the fastest in the average case?

In fact, in algorithm analysis, big-O notation only describes the order of growth, not the exact running time. An algorithm's complexity simply states how its time cost grows as the amount of data increases; two algorithms in the same class do not take the same time to run, because each hides a different constant factor. The formulas for the sorting algorithms above all omit such a constant c, which might be 100 or might be 10; since it is only a constant, it does not affect the big-O, but it is exactly this constant that separates quick sort from heap sort.
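To make the constant-factor point concrete, here is a minimal quick sort sketch in C++ (Lomuto-style partitioning, chosen only for illustration; it is not the code from the earlier article). The partition loop scans memory sequentially, compares each element once, and swaps it at most once, which is part of why the constant hidden in its O(nlogn) is small.

#include <utility>
#include <vector>

// Lomuto-style partition: one sequential pass, one comparison per element,
// at most one swap per element -- the hidden constant is small.
int partition(std::vector<int>& a, int lo, int hi) {
    int pivot = a[hi];               // last element as the pivot (illustration only)
    int i = lo;                      // a[lo..i-1] holds the elements < pivot
    for (int j = lo; j < hi; ++j) {
        if (a[j] < pivot) {
            std::swap(a[i], a[j]);
            ++i;
        }
    }
    std::swap(a[i], a[hi]);          // place the pivot between the two halves
    return i;
}

void quick_sort(std::vector<int>& a, int lo, int hi) {
    if (lo >= hi) return;
    int p = partition(a, lo, hi);
    quick_sort(a, lo, p - 1);        // sort the elements < pivot
    quick_sort(a, p + 1, hi);        // sort the elements >= pivot
}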

In addition, even for the same algorithm, code written by different people can run very differently in different scenarios. Here is some test data:

Average sorting time on random integers; time in seconds.

Data size       Quick sort    Merge sort    Shell sort    Heap sort
10 million         0.75          1.22          1.77          3.57
50 million         3.78          6.29          9.48         26.54
100 million        7.65         13.06         18.79         61.31
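The original test program is not reproduced here, but numbers of this kind can be measured with a small harness along the following lines. This is only a sketch: it uses std::sort, std::stable_sort, and std::make_heap/std::sort_heap from the C++ standard library as stand-ins for quick sort, merge sort, and heap sort, and it omits Shell sort.

#include <algorithm>
#include <chrono>
#include <cstdio>
#include <random>
#include <vector>

// Time one sorting routine on n random integers and return the elapsed seconds.
template <typename Sorter>
double time_sort(std::size_t n, Sorter sort_fn) {
    std::mt19937 gen(42);
    std::uniform_int_distribution<int> dist(0, 1 << 30);
    std::vector<int> a(n);
    for (auto& x : a) x = dist(gen);

    auto start = std::chrono::steady_clock::now();
    sort_fn(a);
    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(stop - start).count();
}

int main() {
    const std::size_t n = 10000000;  // 10 million elements, as in the first row above
    // std::sort (introsort, quick-sort based), std::stable_sort (merge-sort based),
    // and make_heap + sort_heap (heap sort) stand in for the three algorithms.
    std::printf("quick sort (std::sort)        : %.2f s\n", time_sort(n, [](std::vector<int>& a) {
        std::sort(a.begin(), a.end());
    }));
    std::printf("merge sort (std::stable_sort) : %.2f s\n", time_sort(n, [](std::vector<int>& a) {
        std::stable_sort(a.begin(), a.end());
    }));
    std::printf("heap sort  (std::sort_heap)   : %.2f s\n", time_sort(n, [](std::vector<int>& a) {
        std::make_heap(a.begin(), a.end());
        std::sort_heap(a.begin(), a.end());
    }));
}

Built with optimizations (for example -O2), the relative ordering of the three should resemble the table above, although the absolute times depend on the machine.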

In heap sort, every round swaps the current maximum at the top of the heap with an element X taken from the bottom of the heap, and then re-sifts the heap to move X into place. It is very likely that X ends up near the bottom again (X was at the bottom precisely because it is relatively small), only to be swapped with the new maximum at the top and sifted down once more. In that sense, heap sort does a lot of wasted work.
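The heap sort sketch below shows exactly that pattern (illustrative code, not from the original post): after the root, which holds the current maximum, is swapped with the last element of the heap, sift_down pushes that small element back toward the bottom, and the cycle repeats for every element.

#include <utility>
#include <vector>

// Sift a[i] down within a[0..n-1] so the max-heap property is restored.
static void sift_down(std::vector<int>& a, int n, int i) {
    while (true) {
        int largest = i, l = 2 * i + 1, r = 2 * i + 2;
        if (l < n && a[l] > a[largest]) largest = l;
        if (r < n && a[r] > a[largest]) largest = r;
        if (largest == i) break;
        std::swap(a[i], a[largest]);
        i = largest;                       // the small element keeps moving toward the bottom
    }
}

void heap_sort(std::vector<int>& a) {
    int n = static_cast<int>(a.size());
    for (int i = n / 2 - 1; i >= 0; --i)   // build the max-heap
        sift_down(a, n, i);
    for (int end = n - 1; end > 0; --end) {
        std::swap(a[0], a[end]);           // move the current maximum to the back
        sift_down(a, end, 0);              // re-sift the element brought up from the bottom
    }
}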

To sum up: although quick sort's worst-case time complexity is higher, in a statistical sense the probability of encountering such data is very small; and although the per-swap cost is constant in both quick sort and heap sort, heap sort's constant is a lot larger.


Origin www.cnblogs.com/linhaostudy/p/11785412.html