Sorting Algorithm Analysis: Bubble, Selection, Quick and Insertion Sort

Table of contents

Introduction

1. Bubble sort

1.1 Sorting steps

1.2 Code implementation

1.3 Analysis

2. Selection sort

2.1 Sorting steps

2.2 Code implementation

2.3 Analysis

3. Quick sort

3.1 Sorting steps

3.2 Code implementation

3.3 Analysis

4. Insertion sort

4.1 Sorting steps

4.2 Code implementation

4.3 Analysis


Introduction

Sorting algorithms are a fundamental topic in computer science: a sorting algorithm rearranges a collection of elements into a specified order. In real-world development, we often need to choose a suitable sorting algorithm for the task at hand. This post explores four common sorting algorithms in depth: bubble sort, selection sort, quick sort, and insertion sort, covering their principles, implementations, advantages, and disadvantages.

1. Bubble sort

Bubble sort is a simple and intuitive sorting algorithm that sorts by repeatedly comparing and swapping adjacent elements.

1.1 Sorting steps

  1. Starting from the first element of the array, compare each pair of adjacent elements in turn.
  2. If the first element is larger than the second, swap them so that the larger element moves toward the back.
  3. Continue moving through the array, repeating the compare-and-swap step until the last pair is reached.
  4. After one pass, the largest element has "bubbled" to the end of the array.
  5. Repeat the above steps, but skip the already-sorted elements at the end of the array in each pass; only the unsorted part needs attention.
  6. Continue making passes until all elements are sorted.

By repeating this process, the largest (or smallest) elements gradually bubble up into their correct positions at one end of the array, while the remaining elements settle toward the other end. Eventually, the entire array is sorted in ascending (or descending) order.

1.2 Code implementation

#include <utility>  // std::swap

template <typename T>
void bubbleSort(T arr[], int n) {
    for (int i = 0; i < n - 1; ++i) {
        // In each pass, compare adjacent elements and swap them if needed
        for (int j = 0; j < n - i - 1; ++j) {
            if (arr[j] > arr[j + 1]) {
                // If the previous element is larger than the next one, swap them
                std::swap(arr[j], arr[j + 1]);
            }
        }
    }
}
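A minimal usage sketch (the main function and sample data are illustrative additions, assuming the bubbleSort template above is in the same file):

#include <iostream>

// Assumes the bubbleSort template defined above is visible here.
int main() {
    int data[] = {64, 25, 12, 22, 11};
    int n = sizeof(data) / sizeof(data[0]);

    bubbleSort(data, n);

    // Expected output: 11 12 22 25 64
    for (int i = 0; i < n; ++i)
        std::cout << data[i] << ' ';
    std::cout << '\n';
    return 0;
}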

1.3 Analysis

The time complexity of bubble sort is O(n^2), where n is the length of the sequence to be sorted.

In the best case, the input data is already ordered. With a small early-exit optimization (a flag that stops the outer loop as soon as a pass performs no swaps, sketched below), only one pass is needed and the time complexity is O(n); the plain implementation above still runs every pass. The worst case is input in completely reversed order, which requires n-1 passes and O(n^2) time.
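Note that the O(n) best case relies on an early-exit check that the implementation above does not have. A hedged sketch of that variant (the name bubbleSortEarlyExit is illustrative):

#include <utility>  // std::swap

// Variant of bubble sort with an early-exit flag: if a full pass performs
// no swaps, the array is already sorted and the outer loop stops early.
template <typename T>
void bubbleSortEarlyExit(T arr[], int n) {
    for (int i = 0; i < n - 1; ++i) {
        bool swapped = false;
        for (int j = 0; j < n - i - 1; ++j) {
            if (arr[j] > arr[j + 1]) {
                std::swap(arr[j], arr[j + 1]);
                swapped = true;
            }
        }
        if (!swapped)
            break;  // no swaps in this pass, so the array is already sorted
    }
}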

On average , the time complexity of bubble sort is also O(n^2).

Bubble sort is a stable sorting algorithm: equal elements keep their relative order. This is because bubble sort only compares and swaps adjacent elements, and it never swaps two elements that compare equal.

Although its time complexity is high, bubble sort is simple to implement and its code is easy to understand. It is suitable for small data sets or data that is already nearly sorted. On large data sets, however, its performance degrades significantly, so in practice a more efficient algorithm is usually chosen.

2. Selection sort

The basic idea of selection sort is to repeatedly select the smallest (or largest) element from the unsorted data and place it at the end of the sorted part. Repeating this process eventually sorts the entire sequence.

2.1 Sorting steps

  1. Divide the sequence to be sorted into a sorted part and an unsorted part. Initially, the sorted part is empty and the unsorted part contains the entire sequence.
  2. Find the smallest (or largest) element in the unsorted part and record its position (index).
  3. Swap the smallest (or largest) element with the first element of the unsorted part. (In the first pass the whole sequence is unsorted, so the minimum (or maximum) is simply swapped with the first element of the array.)
  4. Grow the sorted part by one element and shrink the unsorted part by one element.
  5. Repeat steps 2~4 until the unsorted part is empty.

2.2 Code implementation

#include <utility>  // std::swap

template <typename T>
void selectionSort(T arr[], int n) {
    for (int i = 0; i < n - 1; ++i) {
        // Assume the first element of the unsorted part is the minimum
        int min_idx = i;

        // Find the index of the minimum in the unsorted part
        for (int j = i + 1; j < n; ++j) {
            if (arr[j] < arr[min_idx]) {
                min_idx = j;
            }
        }

        // Swap the minimum with the first element of the unsorted part
        std::swap(arr[i], arr[min_idx]);
    }
}

2.3 Analysis

The time complexity of selection sort is O(n^2), where n is the length of the sequence to be sorted. This is because there are two levels of nested loops in the selection sort algorithm.

The outer loop index i runs from 0 to n-2 and marks the boundary of the sorted section. In each iteration of the outer loop, the inner loop starts at i+1 and traverses to the end of the array, looking for the smallest (or largest) element in the unsorted part.

For each iteration of the outer loop, the inner loop compares the elements in the unsorted part to find the index of the smallest (or largest) one. The inner loop therefore runs n-i-1 times, where i is the current index of the outer loop.

The total number of comparisons can be summed to get:

(n-1) + (n-2) + ... + 1 = (n-1) * n / 2 = (n^2 - n) / 2

The time complexity of this sum can be expressed as O(n^2).

Selection sort performs the same number of comparisons regardless of the order of the input data. Therefore, its best, worst, and average time complexity are all O(n^2).

Selection sort is an unstable sorting algorithm: selecting the smallest (or largest) element and swapping it into place can change the relative order of equal elements. For example, for the sequence [5, 5, 3], selection sort swaps the first 5 with the 3, producing [3, 5, 5] in which the two 5s have exchanged their original relative order, as the sketch below makes explicit.
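To make the [5, 5, 3] example concrete, here is a small sketch (the Card struct and its tags are illustrative additions, assuming the selectionSort template above is in scope); each 5 carries a tag so the two equal keys can be told apart after sorting:

#include <iostream>

// Illustrative record type: compared by value only, so equal values
// stay distinguishable by their tag.
struct Card {
    int value;
    char tag;
};

bool operator<(const Card& a, const Card& b) {
    return a.value < b.value;
}

int main() {
    // Corresponds to the sequence [5, 5, 3] in the text.
    Card cards[] = {{5, 'a'}, {5, 'b'}, {3, 'c'}};

    selectionSort(cards, 3);  // selectionSort template from above

    // Prints: 3c 5b 5a  -- the two 5s have swapped their relative order.
    for (const Card& c : cards)
        std::cout << c.value << c.tag << ' ';
    std::cout << '\n';
    return 0;
}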

Selection sort is an in-place sorting algorithm: it needs only a constant amount of extra space for a few auxiliary variables, so its space complexity is O(1).

Although the time complexity of selection sort is high, its simple implementation and easy-to-understand code still make it useful for small data sets. On large data sets, however, it performs poorly and is usually not the algorithm of choice. If sorting performance matters, pick a more efficient algorithm such as quick sort or merge sort.

3. Quick sort

The basic idea of quick sort is to pick a pivot element and partition the sequence to be sorted into two subsequences: one whose elements are all smaller than the pivot and one whose elements are all greater. The two subsequences are then sorted recursively, and the whole sequence ends up ordered.

3.1 Sorting steps

  1. Select a pivot element (here, the first element of the sequence to be sorted).
  2. Compare the other elements with the pivot, placing elements smaller than the pivot to its left and elements greater than the pivot to its right.
  3. Repeat the above steps for the subsequences to the left and right of the pivot until each subsequence has at most one element.
  4. Finally, all the subsequences are combined (in place) to give the sorted sequence.

Example:

Array to be sorted: [64, 25, 12, 22, 11]

  1. Select the first element, 64, as the pivot.
  2. Compare the other elements with the pivot, placing elements less than 64 on the left and elements greater than 64 on the right. The sequence becomes [25, 12, 22, 11, 64], and 64 is now in its final position.
  3. Repeat the above steps for the left subsequence [25, 12, 22, 11]; the right subsequence is empty, so there is nothing left to sort on that side.
    • Quick sort the left subsequence: choose its first element, 25, as the pivot, and the subsequence eventually becomes [11, 12, 22, 25].
  4. Combine the sorted left subsequence with the pivot (and the empty right subsequence) to get the final ordered sequence: [11, 12, 22, 25, 64].

In this way, by recursively dividing and sorting the subsequences, eventually the whole sequence will be sorted.

3.2 Code implementation

template <typename T>
int partition(T arr[], int low, int high) {
    T pivot = arr[low];  // use the first element as the pivot
    int i = low, j = high;

    while (i < j) {
        // Scan from right to left for the first element smaller than the pivot
        while (i < j && arr[j] >= pivot)
            j--;

        // Move that smaller element to the left side
        arr[i] = arr[j];

        // Scan from left to right for the first element greater than or equal to the pivot
        while (i < j && arr[i] < pivot)
            i++;

        // Move that larger-or-equal element to the right side
        arr[j] = arr[i];
    }

    // Put the pivot into its final position
    arr[i] = pivot;

    // Return the index of the pivot
    return i;
}

template <typename T>
void quickSort(T arr[], int low, int high) {
    if (low < high) {
        // Partition the subsequence and get the pivot's index
        int pivotIndex = partition(arr, low, high);

        // Recursively sort the subsequence to the left of the pivot
        quickSort(arr, low, pivotIndex - 1);

        // Recursively sort the subsequence to the right of the pivot
        quickSort(arr, pivotIndex + 1, high);
    }
}
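A minimal usage sketch for the two functions above, reusing the array from the example in section 3.1 (the main function is an illustrative addition):

#include <iostream>

// Assumes the partition and quickSort templates defined above are visible here.
int main() {
    int data[] = {64, 25, 12, 22, 11};
    int n = sizeof(data) / sizeof(data[0]);

    quickSort(data, 0, n - 1);  // note: high is the index of the last element

    // Expected output: 11 12 22 25 64
    for (int i = 0; i < n; ++i)
        std::cout << data[i] << ' ';
    std::cout << '\n';
    return 0;
}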

3.3 Analysis

The partition function divides the sequence to be sorted into two subsequences and returns the index of the pivot element. It uses two pointers (i and j) that move from the two ends of the sequence toward the middle, finding elements that need to be moved: elements smaller than the pivot end up to the pivot's left, and elements greater than or equal to the pivot end up to its right.

The quickSort function drives the recursion. It first partitions the subsequence and obtains the pivot's index, then recursively quick sorts the subsequences to the left and right of the pivot; the recursion stops when a subsequence has length 1 or 0.

The average time complexity of quick sort is O(n log n).

In the best case, each partition splits the sequence into two subsequences of (nearly) equal length, and the time complexity is O(n log n).

In the worst case, each partition produces one empty subsequence and one of length n-1 (for example, when the input is already sorted and the first element is always chosen as the pivot); the time complexity then degrades to O(n^2).

On average, however, quick sort runs in O(n log n) time and performs very well in practice.
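Because the implementation above always takes the first element as the pivot, an already-sorted input triggers the O(n^2) worst case. A common mitigation, sketched here under the assumption that the partition template above is available (the name quickSortRandomized is illustrative), is to swap a randomly chosen element to the front before partitioning:

#include <cstdlib>   // std::rand
#include <utility>   // std::swap

// Randomized variant: move a random element into position low so that it
// becomes the pivot chosen by partition, making the worst case unlikely
// on already-sorted input.
template <typename T>
void quickSortRandomized(T arr[], int low, int high) {
    if (low < high) {
        int randomIndex = low + std::rand() % (high - low + 1);
        std::swap(arr[low], arr[randomIndex]);

        int pivotIndex = partition(arr, low, high);  // partition from above
        quickSortRandomized(arr, low, pivotIndex - 1);
        quickSortRandomized(arr, pivotIndex + 1, high);
    }
}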

Quick sort needs no auxiliary array, but it is not O(1) in space: the recursive calls consume stack space, about O(log n) when the partitions are balanced and up to O(n) in the worst case.

Quick sort is an unstable sorting algorithm: it may change the relative order of equal elements, because the partitioning step gives no guarantee about how equal elements are rearranged.

Quick sort is a common and efficient sorting algorithm, especially suitable for sorting large-scale data sets . It is simple in principle, relatively easy to implement, and has good performance.

4. Insertion sort

The basic idea is to divide the sequence to be sorted into a sorted part and an unsorted part, and insert the elements of the unsorted part into the appropriate position of the sorted part one by one until the whole sequence is sorted.

When we use the insertion sort algorithm to sort an array, we can illustrate the process with an everyday example.

  1. Suppose you have a pile of playing cards in random order and you want to arrange them in ascending order. You can use the idea of insertion sort to do this.
  2. At the beginning, you hold only one card, say a 2; this is the sorted part. Next, you take a card from the table, say a 4. You compare the 4 with the cards already in your hand to find where it belongs, and your hand becomes [2, 4].
  3. Then you take another card from the table, say a 7. You compare the 7 with the cards in your hand one by one to find its place. Given the sorted hand [2, 4], the 7 goes after the 4, giving [2, 4, 7].
  4. You repeat this process, picking up one card at a time, comparing it with the cards in your hand, and inserting it in the right place. Eventually, every card has been inserted into its proper position in the sorted hand.

Through this example, you can see that insertion sort works like sorting a hand of cards: pick up one card at a time from the table and insert it into the right place among the cards already in order. Step by step, the entire sequence becomes sorted.

4.1 Sorting steps

  1. Divide the array into a sorted part and an unsorted part. Initially, the sorted part contains only the first element of the array; the unsorted part contains the remaining elements.

  2. Take the next element from the unsorted part; it will be inserted at the appropriate position in the sorted part.

  3. Compare the selected element with the elements in the sorted part to find the insertion position.

  4. Shift the elements that are larger than the selected element one position to the right to make room for it.

  5. Insert the selected element into the freed position.

  6. Repeat steps 2~5 until the unsorted part is empty.

4.2 Code implementation

template <typename T>
void insertionSort(T arr[], int n) {
    for (int i = 1; i < n; ++i) {
        T key = arr[i];  // the element to be inserted
        int j = i - 1;

        // Shift elements larger than key one position to the right to make room for it
        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j];
            j--;
        }

        // Insert key into its correct position
        arr[j + 1] = key;
    }
}

Here is an example to illustrate the principle of insertion sort:

Suppose there is an array to be sorted: [7, 2, 4, 1, 5]

Initially, [7] is regarded as the sorted part, and [2, 4, 1, 5] as the unsorted part.

We sequentially select elements from the unsorted part and insert them into the appropriate place in the sorted part:

  • Select 2, inserting it into the sorted section. At this point, the sorted part becomes [2, 7] and the unsorted part becomes [4, 1, 5].

  • Select 4, inserting it into the sorted section. At this point, the sorted part becomes [2, 4, 7] and the unsorted part becomes [1, 5].

  • Select 1, inserting it into the sorted section. At this point, the sorted part becomes [1, 2, 4, 7] and the unsorted part becomes [5].

  • Select 5, inserting it into the sorted section. At this point, the sorted part becomes [1, 2, 4, 5, 7] and the unsorted part is empty.

Finally, by inserting the elements of the unsorted part one by one into the appropriate positions of the sorted part, we get a sorted array.
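The walkthrough above can be reproduced with a small tracing sketch (the function insertionSortTrace and its printing code are illustrative additions, not part of the original implementation):

#include <iostream>

// Same logic as insertionSort above, but prints the array after each
// insertion so the intermediate states of the walkthrough are visible.
void insertionSortTrace(int arr[], int n) {
    for (int i = 1; i < n; ++i) {
        int key = arr[i];
        int j = i - 1;
        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j];
            --j;
        }
        arr[j + 1] = key;

        // Print the array after inserting the i-th element.
        for (int k = 0; k < n; ++k)
            std::cout << arr[k] << (k + 1 < n ? ' ' : '\n');
    }
}

int main() {
    int arr[] = {7, 2, 4, 1, 5};
    insertionSortTrace(arr, 5);
    // Expected output:
    // 2 7 4 1 5
    // 2 4 7 1 5
    // 1 2 4 7 5
    // 1 2 4 5 7
    return 0;
}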

4.3 Analysis

The time complexity of insertion sort is O(n^2), where n is the length of the sequence to be sorted.

In the worst case, when the sequence to be sorted is in reverse order, insertion sort performs n(n-1)/2 comparisons, which is O(n^2).

In the best case, when the sequence is already sorted, it performs only n-1 comparisons, which is O(n).

On average, the number of comparisons is also on the order of n^2, so the average time complexity is O(n^2).

Insertion sort is a stable sorting algorithm: equal elements keep their relative order. This is because elements of the sorted part are shifted only when they are strictly greater than the element being inserted.

Insertion sort is an in-place sorting algorithm: it needs only a constant amount of extra space for a few auxiliary variables, so its space complexity is O(1).

In summary, insertion sort is a simple and intuitive sorting algorithm that builds an ordered sequence by inserting elements one by one into their proper positions. Because of these characteristics, it is well suited to small or partially ordered data sets.

Source: blog.csdn.net/weixin_57082854/article/details/131718888