The beauty of data structures and algorithms (sorting)

First, basic sorting algorithms and their time complexities

The most classic and commonly used sorting methods are: bubble sort, insertion sort, selection sort, quick sort, merge sort, counting sort, radix sort, and bucket sort.

Second, how to analyze a sorting algorithm?

How do we analyze the performance of a sorting algorithm? We look at three aspects: execution efficiency, memory consumption, and stability.

1. Execution efficiency (measured from the following 3 aspects)

1) Best-case, worst-case, and average-case time complexity
2) Coefficients, constants, and low-order terms of the time complexity (these matter when the amount of data to sort is small)
3) The number of comparisons and the number of exchanges (or moves)

2. Memory consumption

The memory consumption of an algorithm is measured by its space complexity, and sorting algorithms are no exception. For sorting algorithms, however, we also introduce a new concept: sorting in place. An in-place sorting algorithm is one with a space complexity of O(1). The bubble sort and insertion sort discussed below are both in-place sorting algorithms.

3. Stability

Execution efficiency and memory consumption alone are not enough to judge the quality of a sorting algorithm. There is another important metric: stability. A sorting algorithm is stable if, whenever the sequence to be sorted contains elements with equal values, the original relative order of those equal elements is preserved after sorting.

Let me explain with an example. Suppose we have the data 2, 9, 3, 4, 8, 3. Sorted by size, it becomes 2, 3, 3, 4, 8, 9.

This data set contains two 3s. If, after sorting by a certain algorithm, the relative order of the two 3s is unchanged, we call that algorithm a stable sorting algorithm; if their order is changed, the algorithm is called an unstable sorting algorithm.
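The distinction can be shown concretely: tag each value with its original index, sort with a stable algorithm, and observe that equal values keep their original order. A minimal sketch using `std::stable_sort` (the helper name is my own):

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Tag each value with its original position, then sort by value only.
// A stable sort must keep equal values in their original relative order.
std::vector<std::pair<int, int>> stableSortByValue(std::vector<int> data) {
    std::vector<std::pair<int, int>> tagged;  // {value, original index}
    for (int i = 0; i < (int)data.size(); ++i)
        tagged.push_back({data[i], i});
    std::stable_sort(tagged.begin(), tagged.end(),
                     [](const auto& a, const auto& b) { return a.first < b.first; });
    return tagged;
}
```

On the example data 2, 9, 3, 4, 8, 3, the sorted values are 2, 3, 3, 4, 8, 9, and the first 3 (original index 2) still precedes the second 3 (original index 5).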

Third, the sorting algorithms

1. Bubble sort

Algorithm principle

1) Bubble sort only operates on two adjacent elements at a time.
2) Each pair of adjacent elements is compared to see whether they satisfy the required size relationship; if not, the two are exchanged.
3) One pass of bubbling moves at least one element to its final position; repeating this n times completes the sorting of n elements.
4) Optimization: if a pass performs no exchange at all, the data is already fully ordered and the bubbling can be terminated early.
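The four steps above, including the early-exit optimization, can be sketched in C++ as follows (a minimal sketch; the function name is my own):

```cpp
#include <utility>
#include <vector>

// Bubble sort with the early-termination optimization: if a full pass
// performs no swap, the array is already sorted and we can stop.
void bubbleSort(std::vector<int>& a) {
    int n = (int)a.size();
    for (int i = 0; i < n; ++i) {
        bool swapped = false;                  // did this pass move anything?
        for (int j = 0; j < n - 1 - i; ++j) {
            if (a[j] > a[j + 1]) {             // adjacent pair out of order
                std::swap(a[j], a[j + 1]);
                swapped = true;
            }
        }
        if (!swapped) break;                   // no exchange: fully ordered
    }
}
```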

Performance analysis

Execution efficiency: best-case, worst-case, and average time complexity.
Best case: when the data is already fully ordered, only one bubbling pass is needed, so the time complexity is O(n).
Worst case: when the data is in reverse order, n bubbling passes are needed, so the time complexity is O(n^2).
Average case: analyzed via the degree of order and the degree of inversion.

What is the degree of order?
The degree of order is the number of pairs of elements in the array that are in the correct relative order. For
example, the data [2, 4, 3, 1, 5, 6] has a degree of order of 11; the ordered pairs are [2,4], [2,3], [2,5], [2,6], [4,5], [4,6], [3,5], [3,6], [1,5], [1,6], [5,6]. Likewise, a fully reversed array such as [6,5,4,3,2,1] has a degree of order of 0, while a fully sorted array such as [1,2,3,4,5,6] has a degree of order of n*(n-1)/2, here 15; this value for a completely ordered array is called the full degree of order.

What is the degree of inversion? It is defined as exactly the opposite of the degree of order. Core formula: degree of inversion = full degree of order - degree of order.
Sorting is the process of increasing the degree of order and decreasing the degree of inversion; when the full degree of order is reached, the sorting is complete.
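The degree of order can be computed by brute force, which is handy for checking the numbers above (a sketch; the function name is my own):

```cpp
#include <vector>

// Degree of order: number of pairs (i, j) with i < j and a[i] <= a[j].
// For a fully sorted array of n distinct elements this is n*(n-1)/2.
int orderDegree(const std::vector<int>& a) {
    int n = (int)a.size(), count = 0;
    for (int i = 0; i < n; ++i)
        for (int j = i + 1; j < n; ++j)
            if (a[i] <= a[j]) ++count;
    return count;
}
```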

2) Space complexity

Only one temporary variable is needed for each exchange, so the space complexity is O(1), which is an in-place sorting algorithm.

3) Algorithm stability

If two values are equal, their positions are not swapped, so bubble sort is a stable sorting algorithm.

2. Insertion sort

Algorithm principle

First, we divide the data in the array into two intervals: the sorted interval and the unsorted interval. Initially, the sorted interval contains only one element, the first element of the array. The core idea of insertion sort is to take an element from the unsorted interval, find a suitable position for it in the sorted interval, and insert it there, keeping the sorted interval ordered at all times. This process is repeated until the unsorted interval is empty, and the algorithm ends.
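This idea translates almost directly into code; a minimal sketch (the function name is my own):

```cpp
#include <vector>

// Insertion sort: a[0..i-1] is the sorted interval; each step takes a[i]
// from the unsorted interval and shifts larger sorted elements right
// until the correct insertion position is found.
void insertionSort(std::vector<int>& a) {
    for (int i = 1; i < (int)a.size(); ++i) {
        int value = a[i];                 // element to insert
        int j = i - 1;
        while (j >= 0 && a[j] > value) {  // strict '>' keeps equal elements
            a[j + 1] = a[j];              //   in order, so the sort is stable
            --j;
        }
        a[j + 1] = value;
    }
}
```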

Performance analysis

1) Time complexity: best, worst, average case

If the array to be sorted is already sorted, no data needs to be moved; a single traversal of the array suffices, so the time complexity is O(n).

If the array is in reverse order, each insertion is equivalent to inserting new data in the first position of the array, so a lot of data needs to be moved, so the time complexity is O(n^2).

The average time complexity of inserting an element in an array is O(n), and insertion sort requires n insertions, so the average time complexity is O(n^2).

2) Space complexity

Insertion sort needs no additional storage space beyond a single temporary variable, so the space complexity is O(1): it is an in-place sorting algorithm.

3) Algorithm stability

In insertion sort, when we meet an element equal to one already in the sorted interval, we can choose to insert the later element after the earlier one, so the original relative order is preserved and the algorithm is stable.

3. Merge Sort

To be added...  

Suggested Problems (LeetCode)

912. Sort an Array (Supplementary Problem 4: hand-written quicksort)

https://leetcode-cn.com/problems/sort-an-array/

Given an integer array nums, please sort the array in ascending order.

Example 1:

Input:
nums = [5,2,3,1]
Output:
[1,2,3,5]
Example 2:
Input:
nums = [5,1,1,2,0,0]
Output:
[0,0,1,1,2,5]

Constraints:

1 <= nums.length <= 5 * 10^4
-5 * 10^4 <= nums[i] <= 5 * 10^4

/*
This problem doesn't play fair: it requires an optimized quicksort. I charged straight in with a plain quicksort and, unsurprisingly, got a time-limit exceeded. Not giving up, I submitted a tweaked version, which still failed on test cases 11 and 12, so I read the editorial. It suggests choosing the pivot with a random function, which led me to look at several quicksort optimizations.
About quicksort: O(n log n) on average, and, key point, it is NOT stable.
First approach:
    The most basic one: use a fixed position as the pivot, usually the first or last element. If the input is random, the running time is acceptable. But if the array is already sorted, every partition is a very bad split: each one shrinks the range to be sorted by only one element. This is the worst case, where quicksort degenerates to bubble-sort-like behavior with Θ(n^2) time. And sorted or partially sorted inputs are quite common.
Second approach:
    What this problem wants: pick the pivot with a random function. This avoids the situation just described, unless all elements of the array are identical, in which case the complexity again falls back to Θ(n^2).
    Honestly, I don't feel it optimizes much; it only avoids the pathology of the first approach.
       (ps: personally tested, I replaced the random function and simply took the middle element as the pivot.)
Third approach:
    Median-of-three, which can make up for the weakness of the second approach, but is harder to write than either. Not covered here (because I don't understand it well enough yet, hhhh; there is also something called three-way quicksort online that I want to study first).
*/
class Solution {
    // Lomuto partition scheme: nums[r] is the pivot; elements <= pivot are
    // moved to the front, and the pivot's final index is returned.
    int partitionRange(vector<int>& nums, int l, int r) {
        int pivot = nums[r];
        int i = l - 1;                     // end of the "<= pivot" region
        for (int j = l; j <= r - 1; ++j) {
            if (nums[j] <= pivot) {
                i = i + 1;
                swap(nums[i], nums[j]);
            }
        }
        swap(nums[i + 1], nums[r]);
        return i + 1;
    }
    int choosePivot(vector<int>& nums, int l, int r) {
        int mid = l + (r - l) / 2;         // take the middle element as the pivot
        swap(nums[r], nums[mid]);
        return partitionRange(nums, l, r);
    }
    void quickSort(vector<int>& nums, int l, int r) {
        if (l < r) {
            int pos = choosePivot(nums, l, r);
            quickSort(nums, l, pos - 1);
            quickSort(nums, pos + 1, r);
        }
    }
public:
    vector<int> sortArray(vector<int>& nums) {
        quickSort(nums, 0, (int)nums.size() - 1);
        return nums;
    }
};
// Quicksort needs a well-chosen pivot (random, or the middle element as above), otherwise it times out on this problem.
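For reference, the randomized-pivot variant the editorial describes only changes the pivot-selection step; a sketch (function names are my own, using the C-style `rand()`):

```cpp
#include <cstdlib>
#include <utility>
#include <vector>

// Lomuto partition with a uniformly random pivot: swap a random element
// into position r, then partition around it as usual.
int randomizedPartition(std::vector<int>& nums, int l, int r) {
    int p = l + std::rand() % (r - l + 1);   // random index in [l, r]
    std::swap(nums[p], nums[r]);
    int pivot = nums[r], i = l - 1;
    for (int j = l; j < r; ++j)
        if (nums[j] <= pivot) std::swap(nums[++i], nums[j]);
    std::swap(nums[i + 1], nums[r]);
    return i + 1;
}

void randomizedQuickSort(std::vector<int>& nums, int l, int r) {
    if (l < r) {
        int pos = randomizedPartition(nums, l, r);
        randomizedQuickSort(nums, l, pos - 1);
        randomizedQuickSort(nums, pos + 1, r);
    }
}
```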

82. Remove Duplicate Elements in Sorted List II

Given the head of a sorted linked list, delete all nodes that have duplicate numbers, leaving only distinct numbers from the original list. Return the sorted linked list.

Example 1:

Input: head = [1,2,3,3,4,4,5]
Output: [1,2,5]

Example 2:

Input: head = [1,1,1,2,3]
Output: [2,3]

/**
 * Definition for singly-linked list.
 * struct ListNode {
 *     int val;
 *     ListNode *next;
 *     ListNode() : val(0), next(nullptr) {}
 *     ListNode(int x) : val(x), next(nullptr) {}
 *     ListNode(int x, ListNode *next) : val(x), next(next) {}
 * };
 */
class Solution {
public:
    ListNode* deleteDuplicates(ListNode* head) {
        if (head == NULL || head->next == NULL) {
            // Empty or single-node list: nothing can be duplicated.
            return head;
        }
        if (head->val != head->next->val) {
            // The current value differs from the next one, so this node is
            // kept; recurse to clean up the rest of the list.
            head->next = deleteDuplicates(head->next);
            return head;
        } else {
            int value = head->val;              // remember the duplicated value
            while (head != NULL && head->val == value) {
                head = head->next;              // skip every node with this value
            }
            if (head == NULL)                   // ran off the end of the list
                return NULL;
            // head is already the next node to examine, so recurse on head
            // itself, not head->next. (I originally wrote head->next here and
            // nodes were sometimes dropped or kept incorrectly, so be careful.)
            return deleteDuplicates(head);
        }
    }
};

Origin blog.csdn.net/qq_54729417/article/details/123345047