The Seven Classic Sorting Algorithms, All in One Place

 

 

Table of Contents

Notes

Bubble Sort
    Working principle
    Implementation idea
    Pseudocode
    Features

Insertion Sort
    Working principle
    Implementation idea
    Pseudocode
    Features

Merge Sort
    Working principle
    Implementation idea
    Pseudocode
    Features

Selection Sort
    Working principle
    Implementation idea
    Details
    Pseudocode
    Features

Shell Sort
    Working principle
    Implementation idea
    Details
    Features

Heap Sort
    Working principle
    Implementation idea
    C++ code
    Features

Quick Sort
    Working principle
    Implementation idea
    Details
    Pseudocode
    Features

Summary

Notes:

        The region of elements still to be sorted is referred to as the unsorted region; the region of elements already sorted is referred to as the sorted region.

        Initially, the sorted region is empty and the unsorted region is the entire sequence.

Bubble Sort

  • Working principle

        In the unsorted region, compare the sizes of adjacent elements in turn and swap any pair that is out of order; repeat until no inversions remain and the sequence is ordered.

  • Implementation idea

        Each round of comparing and swapping adjacent elements moves the largest (or smallest) element of the unsorted region to the end of the sequence, so the sorted region at the end of the sequence grows by one element per pass.

  • Pseudocode

//ascending order (small to large)
void bubbleSort(int n, keyType arr[]) {
     for(index i=0; i<n-1; i++) {
         for(index j=0; j<n-1-i; j++) {
             if (arr[j] > arr[j+1])
                 exchange(arr[j],arr[j+1]);
         }
     }
}
  • Features

  1. The complexity depends on the initial order: the pseudocode can be optimized so that if a full pass finds no inversion, sorting stops early (a C++ sketch follows this list).

  2. Best O(n), worst/average O(n^2), space O(1), in-place: with the early-exit optimization, an already-sorted sequence finishes in a single pass, giving the best case O(n); only a constant amount of extra space is used for temporary values, so space is O(1).

  3. Stable: each comparison and swap involves only adjacent elements, and equal elements are never swapped, so the relative order of equal elements is preserved.

  4. Suitable for small data sets: fast when there are few elements; as the number of elements grows, the O(n^2) complexity becomes the bottleneck.
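
The early-exit optimization mentioned in point 1 can be sketched in C++ as follows (a minimal illustration; the function name bubbleSortOptimized and the swapped flag are not part of the original pseudocode):

#include <utility>  // std::swap

//ascending bubble sort that stops as soon as a full pass makes no swaps
void bubbleSortOptimized(int arr[], int n) {
    for (int i = 0; i < n - 1; ++i) {
        bool swapped = false;
        for (int j = 0; j < n - 1 - i; ++j) {
            if (arr[j] > arr[j + 1]) {
                std::swap(arr[j], arr[j + 1]);
                swapped = true;
            }
        }
        if (!swapped)   // no inversion in this pass: the array is already sorted
            break;
    }
}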

Insertion Sort

  • Working principle

        Take one element at a time from the unsorted region and insert it into the sorted region; each insertion shrinks the unsorted region by one element while the sorted region stays ordered. When the unsorted region is empty, the sorted region is the whole ordered sequence.

  • Implementation idea

        Take the next element from the unsorted region and insert it at the appropriate position so that the sorted region remains in order.

  • Pseudocode


//ascending order (small to large)
void insertSort(int n, keyType arr[]) {
     for(index i=1; i<n; i++)
         for(index j=i; 0<j && arr[j-1] > arr[j]; j--)
              exchange(arr[j-1],arr[j]);
}
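
The pseudocode above swaps on every step. A common refinement (a sketch, not part of the original pseudocode) shifts the larger elements to the right and writes the element being inserted only once:

//ascending insertion sort that shifts instead of swapping on every step
void insertSortShift(int arr[], int n) {
    for (int i = 1; i < n; ++i) {
        int key = arr[i];        // element to insert into the sorted prefix
        int j = i - 1;
        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j]; // shift the larger element one slot to the right
            --j;
        }
        arr[j + 1] = key;        // place the element at its final position
    }
}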
  • Features

  1. The complexity depends on the initial order: when most of the data is already ordered, the running time improves considerably.

  2. Best O(n), worst/average O(n^2), space O(1), in-place: if the sequence is already ordered, each element needs only one comparison, so the best case is O(n).

  3. Stable: elements are taken and inserted one at a time, and insertion stops at the first element that is not greater, so a later duplicate is always placed after an earlier duplicate and the relative order of equal elements is preserved.

  4. Suitable for small data sets.

Merge Sort

  • Working principle

        Split the target sequence in half repeatedly until each group contains a single element, then merge the groups pairwise in sorted order: first the smallest subsequences become ordered, then adjacent ordered subsequences are merged, and finally the two ordered halves are merged into one ordered sequence.

  • Implementation idea

        Divide and conquer. From the top down, split the sequence into blocks until each subsequence has length 1; from the bottom up, sort and merge adjacent subsequences step by step.

  • Pseudocode

void mergeSort(keyType arr[], int left,int right) {
     if(left < right) {
         int mid = (left + right) / 2;   //midpoint of the range
         mergeSort(arr,left, mid);
         mergeSort(arr,mid+1, right);

         merge(arr,left, mid, right);
     }
}
void merge(keyType arr[], int left, int mid, int right) {
     int nl = mid-left+1;     //number of elements in the left half
     int nr = right-mid;      //number of elements in the right half

     arrLeft = Malloc(nl);
     arrRight = Malloc(nr);
     arrLeft[0...nl-1] = arr[left...mid];
     arrRight[0...nr-1] = arr[mid+1...right];

     //merge the two sorted buffers back into arr[left...right]
     arr[left...right] = compareAndMerge(arrLeft, arrRight);
     free(arrLeft); free(arrRight);
}
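
For concreteness, here is a runnable C++ version of the merge step sketched above (assumes std::vector<int> as the container; the "<=" comparison is what keeps equal elements in their original order):

#include <vector>

//merge the two sorted halves arr[left..mid] and arr[mid+1..right] back into arr
void merge(std::vector<int>& arr, int left, int mid, int right) {
    std::vector<int> arrLeft(arr.begin() + left, arr.begin() + mid + 1);
    std::vector<int> arrRight(arr.begin() + mid + 1, arr.begin() + right + 1);

    std::size_t i = 0, j = 0;
    int k = left;
    while (i < arrLeft.size() && j < arrRight.size()) {
        // "<=" takes the left element first on ties, which keeps the sort stable
        if (arrLeft[i] <= arrRight[j]) arr[k++] = arrLeft[i++];
        else                           arr[k++] = arrRight[j++];
    }
    while (i < arrLeft.size())  arr[k++] = arrLeft[i++];
    while (j < arrRight.size()) arr[k++] = arrRight[j++];
}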
  • Features

  1. The complexity is independent of the initial order: in every case the sequence is first decomposed and then sorted and merged.

  2. Best/worst/average O(nlgn), space O(n), not in-place.

  3. Stable: when two equal elements meet during a merge, the one from the left subsequence is taken first, so the relative order of equal elements is preserved within and across blocks.

  4. The data can be processed in segments, so it is suitable for medium and large data sets.

  5. Each level of merging costs O(n) and the recursion depth is ⌈lgn⌉, so even the worst case is O(nlgn), which is optimal for comparison-based sorting (the recurrence below makes the bound explicit).
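
The O(nlgn) bound in point 5 follows from the standard divide-and-conquer recurrence, shown here for reference in LaTeX notation:

T(n) = 2\,T(n/2) + O(n), \qquad T(1) = O(1) \;\Longrightarrow\; T(n) = O(n \lg n)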

Selection Sort

  • Working principle

        In the unsorted region, select the smallest (or largest) element and place it at the start of the unsorted region, shrinking the unsorted region by one. Repeat until the unsorted region is empty, at which point the sorted region is the whole ordered sequence.

  • Implementation idea

        Traverse the unsorted region, keeping track of the smallest (or largest) element seen so far; once the traversal is complete, the global minimum (or maximum) of the unsorted region is known. Swap it to the starting position of the unsorted region, which shrinks the unsorted region by one element.

  • Details

        The sequence can be backed by an array or a linked list, and the operational details differ: to track the current smallest (or largest) element, an array implementation usually records its index, while a linked list needs a pointer to the node.

  • Pseudocode

//ascending order (small to large)
void selectSort(int n, keyType arr[]) {
     index i,j,smallest;
     for (i=0; i<n; i++) {
         smallest = i;
         for(j=i+1; j<n; j++)
            if (arr[smallest] > arr[j])
               smallest = j;
         exchange(arr[i],arr[smallest]);
     }
}
  • Features

  1. The complexity is independent of the initial order: every round traverses the whole unsorted region to select the smallest (or largest) element and shrinks the unsorted region by one.

  2. Best/worst/average time O(n^2), space O(1), in-place: the traversals of the unsorted region cost n + (n-1) + (n-2) + … + 2 + 1 = n(n+1)/2 = O(n^2) comparisons.

  3. Unstable: for example, when sorting in ascending order, if an element smaller than a pair of equal elements lies behind them in the unsorted region, a round of selection may swap that smaller element with the first of the pair, moving the first duplicate behind the second and changing their relative order.

  4. Concretely, take a sequence (…, b1, …, b2, …, c) with c < b1 = b2, where b1 is at the front of the unsorted region. In that round c is the minimum, so b1 is swapped with c and the result is (…, c, …, b2, …, b1): b1 now follows b2, so the relative order of the equal elements has changed. If instead the element at the front of the round is already the minimum, no swap happens and the duplicates keep their order; but one input that breaks the order is enough to make the algorithm unstable (a small demo follows this list).

  5. Suitable for small-scale data
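
A minimal C++ demo of this instability (the labels 'a', 'b', 'c' are only there to track the original order of the keys):

#include <cstdio>
#include <utility>

int main() {
    // keys 2a, 2b, 1c: the two 2s are equal and the smaller key 1 lies behind them
    int  key[]   = {2, 2, 1};
    char label[] = {'a', 'b', 'c'};
    const int n = 3;

    for (int i = 0; i < n - 1; ++i) {            // plain selection sort on key[]
        int smallest = i;
        for (int j = i + 1; j < n; ++j)
            if (key[j] < key[smallest]) smallest = j;
        std::swap(key[i], key[smallest]);
        std::swap(label[i], label[smallest]);    // keep each label attached to its key
    }

    for (int i = 0; i < n; ++i)
        std::printf("%d%c ", key[i], label[i]);  // prints: 1c 2b 2a — 2a now follows 2b
    return 0;
}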

Shell Sort

  • Working principle

        A variant of insertion sort. Divide the sequence into several groups according to a gap (step length) and perform insertion sort within each group; gradually reduce the gap, and when it reaches 1 the final pass completes the sort.

  • Implementation idea

        With this strategy the whole sequence becomes roughly ordered in the early passes, with the small (or large) elements mostly near the front and the large (or small) elements mostly near the back; when the gap reaches 1, only fine adjustments are needed.

  • Details

        The running time of Shell sort depends on the choice of gap sequence; any sequence works as long as the final gap is 1. It is an improved version of insertion sort: with good gap sequences the worst case is close to the average, on the order of O(n lg^2 n).

The sequence is usually backed by an array, where the index of each element within a gap group is easy to compute from the gap. A C++ sketch follows.
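
A minimal C++ sketch of Shell sort, assuming the simple halving gap sequence n/2, n/4, …, 1 (the gap choice is illustrative; the text above does not fix a particular sequence):

#include <vector>

//Shell sort in ascending order, using the halving gap sequence
void shellSort(std::vector<int>& nums) {
    int n = static_cast<int>(nums.size());
    for (int gap = n / 2; gap > 0; gap /= 2) {
        // insertion sort of each gap-spaced group, interleaved over the array
        for (int i = gap; i < n; ++i) {
            int key = nums[i];
            int j = i - gap;
            while (j >= 0 && nums[j] > key) {
                nums[j + gap] = nums[j];  // shift the larger element one gap to the right
                j -= gap;
            }
            nums[j + gap] = key;
        }
    }
}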

  • Features

  1. The complexity depends on the initial order: for data that is almost sorted, it approaches the efficiency of a linear-time pass.

  2. Best O(n^1.3), worst O(n^2), average O(n lg^2 n), space O(1), in-place. Unstable: insertion sort itself is stable, but sorting within gap groups moves elements across each other over long distances, so equal elements can change their relative order.

  3. Suitable for small and medium data sets, or for data that is almost sorted.

Heap Sort

  • Working principle

        Build a max heap from the array, swap the element at the top of the heap with the last element of the unsorted part of the array, rebuild the heap from the remaining elements, and repeat.

  • Implementation idea

        An algorithm designed around the properties of a heap. A heap is a complete binary tree with one of the following properties: every node's value is greater than or equal to the values of its children (a max heap), or every node's value is less than or equal to the values of its children (a min heap). Numbering the nodes level by level maps the heap onto an array.

  • C++ code

#include <vector>
#include <utility>   // std::swap
using std::vector;
using std::swap;

//end is exclusive (one past the last valid index)
//sift down: restore the max-heap property for the subtree rooted at start
void makeHeap(vector<int>& nums, int start, int end) {
     if(start >= end)
         return;

     int dad = start, son = (start<<1) + 1;
     //while the child index is still inside the heap
     while(son < end) {
         //make son point to the larger of the two children
         if(son+1 < end && nums[son] < nums[son+1])
              ++son;

         //parent already >= larger child: the heap property holds, stop
         if(nums[dad] >= nums[son])
              return;
         else{
              //sink the smaller parent down to the child position;
              //the subtree below may still violate the property, so keep going
              swap(nums[dad],nums[son]);
              dad = son;
              son = (dad<<1) + 1;
         }
     }
}
void heap(vector<int>& nums) {
     int len = nums.size();
     //build the heap, starting from the lowest parent node
     //larger values float up level by level; building the heap costs O(n) in total
     for(int i = len/2-1; i >= 0; --i)
         makeHeap(nums,i, len);

     //n-1 rounds: sink the current maximum to the end, then rebuild the heap
     for(int i = len-1; i > 0; --i) {
         swap(nums[0],nums[i]);
         makeHeap(nums,0, i); //rebuild the heap, O(lgn)
     }
}
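
A short usage sketch for the functions above (the sample values are illustrative):

#include <cstdio>
//assumes makeHeap() and heap() as defined above

int main() {
    vector<int> nums = {4, 10, 3, 5, 1};
    heap(nums);                                // heap sort, ascending
    for (int x : nums) std::printf("%d ", x);  // prints: 1 3 4 5 10
    return 0;
}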
  • Features

  1. The complexity is independent of the initial order: each run first builds the heap in O(n), then performs n-1 rounds of moving the root to the end and rebuilding the heap in O(lgn) each.

  2. Best/worst/average O(nlgn), space O(1), in-place.

  3. Unstable: the heap operations treat the array as a binary tree and move elements over long distances, so the relative order of equal elements cannot be guaranteed.

  4. Suitable for medium and large data sets.

Quick Sort

  • Working principle

        Partition the sequence into two (possibly empty) subsequences Al = A[p,…,q-1] and Ar = A[q+1,…,r] so that every element of Al is less than or equal to A[q], and A[q] is less than or equal to every element of Ar. Computing the index q is itself part of the partition step.

  • Implementation idea

        Divide and conquer. Choose a cutoff value, the pivot, and in one partitioning pass split the data to be sorted into two independent parts, such that every element in one part is no larger than every element in the other. Repeat the divide-and-conquer step until the whole sequence is ordered.

        In implementations, the chosen pivot is often parked at one end of the range, and the scan then starts comparing and exchanging from the other end.

  • Details

        The running time of quick sort depends on whether the partitions are balanced, i.e. on the choice of pivot. When the partitions are balanced it performs like merge sort; when they are unbalanced it degrades to quadratic time, close to but slightly worse than insertion sort. Common pivot-selection optimizations:

  1. Median-of-three: pick three elements from the subarray and use the middle one as the pivot; typically the first, middle and last elements are chosen (a sketch follows this list).

  2. Random pivot: choose the pivot at random so that the worst case no longer depends on the order of the input data. Drawback: when the data contains many equal keys, the effect of randomization is directly weakened.
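
A minimal C++ sketch of the median-of-three selection from point 1 (the function name medianOfThree and the int element type are assumptions; selectPrincipal in the pseudocode below could be implemented this way):

//return the index of the median of arr[left], arr[mid] and arr[right]
int medianOfThree(const int arr[], int left, int right) {
    int mid = left + (right - left) / 2;
    if (arr[left] <= arr[mid]) {
        if (arr[mid] <= arr[right]) return mid;          // left <= mid <= right
        return (arr[left] <= arr[right]) ? right : left; // mid is the largest
    } else {
        if (arr[left] <= arr[right]) return left;        // mid < left <= right
        return (arr[mid] <= arr[right]) ? right : mid;   // left is the largest
    }
}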

  • Pseudocode

//ascending; park the pivot at the left end of the range, keeping elements less than or equal to the pivot on its left
void quickSort(keyType arr[], int left, int right) {
     if (left < right) {
         int x = partition(arr, left, right);
         quickSort(arr, left, x-1);
         quickSort(arr, x+1, right);
     }
}
int partition(keyType arr[], int left, int right) {
     int index = selectPrincipal(arr, left, right); //choose a pivot position within [left, right]
     exchange(arr[left], arr[index]);               //park the pivot at the left end
     keyType principal = arr[left];

     while (left < right) {
         while (left < right &&principal < arr[right])
              right--;
         exchange(arr[left], arr[right]);
         while (left < right &&arr[left] <= principal)
              left++;
         exchange(arr[left], arr[right]);
     }

     arr[left] = principal;
     return left;
}
  • Features

  1. The complexity depends on the initial order: when the array is already completely ordered (ascending or descending) and the pivot is taken from one end, the complexity is still O(n^2), whereas insertion sort is O(n) in that case. Quick sort is therefore a poor choice for data that is already nearly ordered.

  2. Best/average O(nlgn), worst O(n^2), space O(lgn), in-place: in the best case the partitions are balanced; as long as every split keeps a constant proportion on each side, the recursion tree has depth O(lgn) and each level costs O(n), so the running time is O(nlgn). In the worst case the two subproblems contain n-1 and 1 elements respectively, i.e. only one element is placed per partition, and the recursion tree degenerates into a skewed tree.

  3. Unstable: when equal keys are present, the chosen pivot cannot be guaranteed to always be the first (or always the last) of the duplicates, so regardless of whether the left subsequence holds elements less than or equal to the pivot, two equal elements may exchange their relative positions. The mere possibility of this is enough to make the algorithm unstable.

  4. Good average performance; suitable for medium and large data sets.

 

Summary

Sort        Best T      Average T    Worst T     Space     In-place   Stable
Bubble      O(n)        O(n^2)       O(n^2)      O(1)      yes        yes
Insertion   O(n)        O(n^2)       O(n^2)      O(1)      yes        yes
Merge       O(nlgn)     O(nlgn)      O(nlgn)     O(n)      no         yes
Selection   O(n^2)      O(n^2)       O(n^2)      O(1)      yes        no
Shell       O(n^1.3)    O(nlg^2n)    O(n^1.5)    O(1)      yes        no
Heap        O(nlgn)     O(nlgn)      O(nlgn)     O(1)      yes        no
Quick       O(nlgn)     O(nlgn)      O(n^2)      O(lgn)    yes        no

 

           Everyone is unique, but some people's uniqueness makes you want to get closer. I'm Little Yellow Flower: do a little, and do it a little better. You are welcome to follow, share, bookmark and like; every bit of your support powers my output.


    This is an original article and may not be reproduced. If you need to reproduce it, please contact me by private message.
