C++ Classic Sorting Algorithms Summary

  First, we need to understand the algorithms themselves.

  0.1 Algorithm classification

    The ten common sorting algorithms can be divided into two categories:

      Non-linear time comparison sorts: the relative order of elements is determined by comparisons. Because their time complexity cannot break the O(n log n) lower bound, they are called non-linear time comparison sorts.

      Linear time non-comparison sorts: they determine the order of elements without comparisons, so they can break through the lower bound of comparison-based sorting and run in linear time; hence the name linear time non-comparison sorts.

 

 0.2 Algorithm complexity

    Algorithm        Average       Worst        Space     Stable
    ---------        -------       -----        -----     ------
    Bubble sort      O(n^2)        O(n^2)       O(1)      Yes
    Selection sort   O(n^2)        O(n^2)       O(1)      No
    Insertion sort   O(n^2)        O(n^2)       O(1)      Yes
    Shell sort       O(n^1.3)      O(n^2)       O(1)      No
    Merge sort       O(n log n)    O(n log n)   O(n)      Yes
    Quick sort       O(n log n)    O(n^2)       O(log n)  No
    Heap sort        O(n log n)    O(n log n)   O(1)      No
    Radix sort       O(n*k)        O(n*k)       O(n+k)    Yes

 

 0.3 related concepts

    Stable: if a was originally in front of b and a = b, then after sorting a is still in front of b.

    Unstable: if a was originally in front of b and a = b, then after sorting a might appear after b.

    Time complexity: the total number of operations performed on the data, expressed as a function of the data size n.

    Space complexity: a measure of the extra storage space the algorithm needs while running, also a function of the data size n.
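A minimal sketch of the stability definition using the standard library (the helper name and sample data are ours, not from the original post):

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

// Sort records by key only; std::stable_sort guarantees that records with
// equal keys keep their original relative order.
std::vector<std::pair<int,char>> stableByKey(std::vector<std::pair<int,char>> v)
{
    std::stable_sort(v.begin(), v.end(),
                     [](const std::pair<int,char>& l, const std::pair<int,char>& r)
                     { return l.first < r.first; });
    return v;
}
```

For the input {(7,'x'), (5,'a'), (5,'b'), (3,'y')}, the two equal-key records come out with 'a' still in front of 'b', i.e. a stable result.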

Second, quick sort

  Suppose we want to sort the 10 numbers "6 1 2 7 9 3 4 5 10 8". First, pick a number from this sequence as the base reference (don't be scared by the term; it is just a number used for reference, and you will see what it is for shortly). For convenience, let the first number, 6, be the base. Next, place all numbers in the sequence that are larger than the base to the right of 6, and those smaller than the base to the left, similar to the following arrangement:

3  1  2 5  4  6  9 7  10  8

 

  

In the initial state, the number 6 is in the first position of the sequence. Our goal is to move 6 to some position k in the middle of the sequence. Taking position k as the cut point, the numbers on its left are all less than or equal to 6 and the numbers on its right are all greater than or equal to 6; then we just recursively sort the left and right intervals. This is exactly the problem quick sort solves.

Quick sort is a partition-exchange sort proposed by C. A. R. Hoare in 1962. It adopts a divide-and-conquer strategy, usually called the Divide-and-Conquer Method. Its average time complexity is O(n log n) and its worst-case time complexity is O(n^2).

First, a diagram (the image from the original post is not included here):

 From the figure we can see:

Left pointer, right pointer, base reference number.

In fact, the idea is quite simple: a single traversal finds the cut point of the array (the position where the left and right pointers meet).

The first step: take the number at the left position of the array (20) as the base reference. (If you pick the pivot at random, swap the chosen sentinel with the first element first, then run the normal quick sort.)

The second step: scan from the right position of the array toward the left until a number smaller than the base is found, then assign it to the left position (that is, 10 overwrites 20). The array is now: 10, 40, 50, 10, 60, with the left and right pointers both on a 10.

The third step: scan from the left position of the array toward the right until a number larger than the base is found, then assign it to the right position (that is, 40 overwrites 10). The array is now: 10, 40, 50, 40, 60, with the left and right pointers both on a 40.

The fourth step: repeat the second and third steps until the left and right pointers coincide, then put the base into that position (over the second 40). The array is now: 10, 20, 50, 40, 60. One partition pass is complete.

The fifth step: 20 has now settled into place: every number on its left is smaller than 20 and every number on its right is larger. Using 20 as the cut point, apply steps one to four to the left and right parts, and the quick sort is finally completed.
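The steps above can be condensed into a single partition pass. Below is a minimal, self-contained sketch of the fill-the-hole partition (the function name is ours); it reproduces the trace on {20, 40, 50, 10, 60}:

```cpp
#include <cassert>

// One fill-the-hole partition pass over h[left..right].
// Returns the final position of the base reference.
int partitionPass(int* h, int left, int right)
{
    int key = h[left], i = left, j = right;
    while (i < j)
    {
        while (h[j] >= key && i < j) --j;  // step 2: scan from the right
        if (i < j) h[i] = h[j];            // smaller number fills the left hole
        while (h[i] < key && i < j) ++i;   // step 3: scan from the left
        if (i < j) h[j] = h[i];            // larger number fills the right hole
    }
    h[i] = key;                            // step 4: base drops into the meeting point
    return i;
}
```

On {20, 40, 50, 10, 60} this returns 1 and leaves the array as 10, 20, 50, 40, 60, exactly the result of the fourth step.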

The quick sort code is as follows:

// Quick sort: randomly pick a sentinel (pivot) and move it to the front
void QuickSort(int* h, int left, int right)
{
    if(h==NULL) return;
    if(left>=right) return;
    
    // Randomize the pivot so already-ordered input does not degrade performance.
    // (srand() is called once in main; reseeding on every recursive call would
    // repeat the same pivot choices within the same second.)
    int len=right-left;
    int kindex=rand()%(len+1)+left;
    Swap(h[left],h[kindex]);
    
    int key=h[left],i=left,j=right;
    while(i<j)
    {
        while(h[j]>=key && i<j) --j;
        if(i<j) h[i]=h[j];
        while(h[i]<key && i<j) ++i;
        if(i<j) h[j]=h[i];
    }
    
    h[i]=key;
    
    //QuickSort(&h[left],0,i-1);
    //QuickSort(&h[j+1],0,right-j-1);
    
    QuickSort(h,left,i-1);
    QuickSort(h,j+1,right);
}

  

Three, bubble sort

1. Principle

Bubble sort compares adjacent records pair by pair during each scan and swaps them if they are out of order. After the first pass, the largest record has "sunk" to the last position of the sequence; the second pass sinks the next largest to the penultimate position. Repeat until, after n-1 passes, the whole sequence is sorted.
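The claim that one pass sinks the largest record to the end can be checked in isolation (a sketch; the helper name is ours):

```cpp
#include <algorithm>
#include <cassert>

// A single bubbling pass: swap each adjacent out-of-order pair once.
void bubblePass(int* h, int len)
{
    for (int j = 0; j < len - 1; ++j)
        if (h[j] > h[j + 1]) std::swap(h[j], h[j + 1]);
}
```

After one pass over {3, 9, 1, 7, 2}, the largest value 9 sits in the last position, while the rest of the array is not yet sorted.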

 

 

 

 Algorithm implementation:

// Bubble sort
void BubbleSort(int* h, size_t len)
{
    if(h==NULL) return;
    if(len<=1) return;
    // i counts the passes; j is the index being compared
    for(int i=0;i<len-1;++i)
        for(int j=0;j<len-1-i;++j)
            if(h[j]>h[j+1])
                Swap(h[j],h[j+1]);

    return;
}

  

Four, selection sort

Selection sort is another simple and intuitive sorting algorithm. Its working principle is easy to understand: first find the smallest (or largest) element in the sequence and place it at the beginning as the start of the sorted part; then repeatedly find the smallest element among the remaining unsorted elements and append it to the end of the sorted part. Continue until all elements are sorted.

Note the difference between selection sort and bubble sort: bubble sort moves the current largest element into place by repeatedly swapping adjacent out-of-order pairs, while selection sort only remembers the index of the smallest element during each traversal and then performs a single swap to put it in the right place.

Algorithm implementation:

// Selection sort
void SelectionSort(int* h, size_t len)
{
    if(h==NULL) return;
    if(len<=1) return;
    
    int minindex,i,j;
    // i counts the passes, i.e. how many elements are already sorted; j scans the rest
    for(i=0;i<len-1;++i)
    {
        minindex=i;
        for(j=i+1;j<len;++j)
        {
            if(h[j]<h[minindex]) minindex=j;
        }
        Swap(h[i],h[minindex]);
    }
    
    return;
}

  

Five, insertion sort

Direct insertion sort (straight insertion sort), often simply called insertion sort, is a typical application of the decrease-and-conquer method. The basic idea is as follows:

  • For the problem of sorting an array A[0..n], suppose the subproblem of sorting A[0..n-1] has already been solved.
  • Consider the value A[n]: scan the ordered part A[0..n-1] from right to left until the first element less than or equal to A[n] is found, and insert A[n] just after that element.

  Clearly, this decrease-and-conquer idea solves the problem efficiently.

For the worst case (a strictly decreasing array), the numbers of comparisons and shifts are both n(n-1)/2; for the best case (a strictly increasing array), the number of comparisons is n-1 and the number of shifts is 0. Of course, the best and worst cases are not very meaningful on their own, because such extremes rarely occur in practice. However, direct insertion sort performs well on nearly ordered arrays, and this property opens the door to further optimization (Shell sort). The time complexity of direct insertion sort is O(n^2), the space complexity is O(1), and it is a stable sort.
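The worst-case count n(n-1)/2 quoted above can be verified with an instrumented sketch (the counter and function name are ours):

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Straight insertion sort that counts comparisons.
long long insertSortComparisons(std::vector<int> v)
{
    long long cmp = 0;
    for (std::size_t i = 1; i < v.size(); ++i)
        for (std::size_t j = i; j > 0; --j)
        {
            ++cmp;                                   // one comparison per inner step
            if (v[j] < v[j - 1]) std::swap(v[j], v[j - 1]);
            else break;
        }
    return cmp;
}
```

For a strictly decreasing array of n = 7 elements this yields 21 = 7*6/2 comparisons; for a strictly increasing one, just n-1 = 6.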

The following concrete scenario gives an intuitive feel for the direct insertion sort process:

Scenario:

There is an unordered array of 7 numbers: 89 45 54 29 90 34 68.

Use the direct insertion sort method to sort the array in ascending order.

89 45 54 29 90 34 68

45 89 54 29 90 34 68

45 54 89 29 90 34 68

29 45 54 89 90 34 68

29 45 54 89 90 34 68

29 34 45 54 89 90 68

29 34 45 54 68 89 90

 

Algorithm implementation:

// Insertion sort
void InsertSort(int* h, size_t len)
{
    if(h==NULL) return;
    if(len<=1) return;
    
    int i,j;
    // i is the index of the element being inserted; j walks it back to its place
    for(i=1;i<len;++i)
        for(j=i;j>0;--j)
            if(h[j]<h[j-1]) Swap(h[j],h[j-1]);
            else break;
            
    return;
}

  

Six, merge sort

Basic idea

  Merge sort (MERGE-SORT) is built on the idea of merging. The algorithm uses the classic divide-and-conquer strategy: the divide phase splits the problem into smaller subproblems and solves them recursively, and the conquer phase "patches" the partial answers together; in other words, divide and conquer.

Divide and conquer

 

 

 

It can be seen that this structure looks very much like a complete binary tree. This article implements merge sort recursively (it can also be done iteratively). The divide phase can be understood as recursively splitting the sequence in half, with recursion depth log2(n).

Merge adjacent ordered subsequences

  Now let's look at the conquer phase. We need to merge two already sorted subsequences into one sorted sequence. For example, in the last merge in the figure above, the two sorted subsequences [4,5,7,8] and [1,2,3,6] are merged into the final sequence [1,2,3,4,5,6,7,8]. Let's look at the implementation steps.
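The merge step in that example can be sketched with the standard library's std::merge:

```cpp
#include <algorithm>
#include <cassert>
#include <iterator>
#include <vector>

// Merge two already sorted subsequences into one sorted sequence.
std::vector<int> mergeSorted(const std::vector<int>& a, const std::vector<int>& b)
{
    std::vector<int> out;
    std::merge(a.begin(), a.end(), b.begin(), b.end(), std::back_inserter(out));
    return out;
}
```

mergeSorted({4,5,7,8}, {1,2,3,6}) yields {1,2,3,4,5,6,7,8}, the final sequence from the figure.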

 

 

 

 

 Algorithm implementation:

// Merge sort
void  MergeArray(int* arr, size_t left, size_t mid, size_t right, int* temp)
{
    if(arr==NULL) return;
    
    size_t i=left,j=mid+1,k=0;
    while(i<=mid && j<=right)
    {
        if(arr[i]<=arr[j])
        {
            temp[k++]=arr[i++];
            continue;
        }
        
        temp[k++]=arr[j++];
    }
    
    while(i<=mid)
        temp[k++]=arr[i++];
        
    while(j<=right)
        temp[k++]=arr[j++];
        
    memcpy(&arr[left],temp,k*sizeof(int));
        
    return;
}

void MMergeSort(int* arr, size_t left, size_t right, int* temp)
{
    if(left<right)
    {
        size_t mid=(left+right)/2;
        MMergeSort(arr, left, mid, temp);
        MMergeSort(arr, mid+1,right, temp);
        MergeArray(arr,left, mid, right, temp);
    }
}

void MergeSort(int* h, size_t len)
{
    if(h==NULL) return;
    if(len<=1) return;
    int* temp=(int*)calloc(len,sizeof(int));
    MMergeSort(h, 0, len-1, temp);
    
    memcpy(h,temp,sizeof(int)*len);
    
    free(temp);
    
    return;
}

  

Seven, Shell sort

Shell sort (often transliterated as "Hill sort" in Chinese sources) is a sorting algorithm proposed by Donald Shell in 1959. It is a type of insertion sort, a more efficient improvement of simple insertion sort, also called diminishing increment sort. It was one of the first algorithms to break through O(n^2). This article introduces the basic idea of Shell sort and its code implementation in detail, with diagrams.

Basic idea

  Shell sort groups records by a certain increment of the subscript and sorts each group with direct insertion sort; as the increment gradually decreases, each group contains more and more elements. When the increment is reduced to 1, the whole sequence is treated as a single group, and the algorithm terminates.

  Simple insertion sort is very pedestrian: whatever the distribution of the array, it compares, moves, and inserts elements step by step. For a reversed sequence such as [5,4,3,2,1,0], moving the 0 at the end of the array back to the first position takes a lot of effort: n-1 rounds of comparing and moving elements. Shell sort instead adopts a jumping grouping strategy: it splits the array into several groups by an increment, insertion-sorts each group, then gradually reduces the increment and repeats the grouped insertion sort until the increment is 1. Through this strategy, Shell sort makes the whole array basically ordered at a macro level in the early stages, with small elements mostly first and large elements mostly last. Then, when the increment reaches 1, in most cases only fine-tuning is needed, without large amounts of data movement.

  Let's look at the basic steps of Shell sort. Here we choose the increment gap = length / 2 and shrink it by gap = gap / 2. This choice can be written as the sequence {n/2, (n/2)/2, ..., 1}, called the increment sequence. Choosing and proving an increment sequence for Shell sort is a mathematical problem; the one used here is common and was suggested by Shell himself (Shell's increments), though it is in fact not optimal. The example below uses Shell's increments.
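A tiny sketch of the increment sequence described above (the helper name is ours):

```cpp
#include <cassert>
#include <vector>

// Shell's increment sequence {n/2, (n/2)/2, ..., 1}.
std::vector<int> shellIncrements(int n)
{
    std::vector<int> gaps;
    for (int gap = n / 2; gap >= 1; gap /= 2)
        gaps.push_back(gap);
    return gaps;
}
```

For n = 10 the sequence is {5, 2, 1}: first insertion-sort 5 interleaved groups, then 2, then finish with an ordinary insertion sort at gap 1.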

 

 Algorithm implementation:

// Shell sort
void ShellSort(int* h, size_t len)
{
    if(h==NULL) return;
    if(len<=1) return;
    
    for(int div=len/2;div>=1;div/=2)
        for(int k=0;k<div;++k)
            for(int i=div+k;i<len;i+=div)
                for(int j=i;j>k;j-=div)
                    if(h[j]<h[j-div]) Swap(h[j],h[j-div]);
                    else break;
                
    return;
}

  

 

Eight, heap sort

Heap sort uses the properties of the heap to sort. To understand how heap sort works, we must first know what a heap is. 
Definition of a heap: 
a heap is essentially a complete binary tree that satisfies two properties: 
1. every parent node of the heap is greater than (or less than) its child nodes; 
2. every left and right subtree of the heap is also a heap. 
Heap classification: 
heaps are divided into two categories: 
1. max heap (big top heap): every parent node of the heap is larger than its child nodes; 
2. min heap (small top heap): every parent node of the heap is smaller than its child nodes.

  

 

 

 Storage of the heap: 
a heap is generally represented by an array. For the node with index i, its parent node has index (i - 1) / 2, and its left and right child nodes have indexes 2*i + 1 and 2*i + 2, respectively.
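The index arithmetic can be captured in three one-liners (the names are ours):

```cpp
#include <cassert>

// 0-based array-heap index arithmetic from the text.
int parentIndex(int i) { return (i - 1) / 2; }
int leftChild(int i)   { return 2 * i + 1; }
int rightChild(int i)  { return 2 * i + 2; }
```

Note that the formulas are inverses of each other: parentIndex(leftChild(i)) == i and parentIndex(rightChild(i)) == i for every node i.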

 

 Heap sort: 
from the introduction above, the first element of the heap is either the maximum (max heap) or the minimum (min heap). So when sorting (say there are n nodes), we swap the first element with the last, then sift the new first element down over the remaining n-1 elements, and repeat. Therefore, to sort in ascending order build a max heap; to sort in descending order build a min heap. 
Heap sort has three steps: 
1. build the heap (a max heap for ascending order, a min heap for descending order); 
2. swap data; 
3. sift down. 
Suppose we now want to sort the array arr[] = {8,5,0,3,7,1,2} in descending order: 
first we need to build a min heap:

 

 After the heap is built, sorting begins:

 

 The array is now ordered.
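Before the hand-written implementation, the same descending sort via a min heap can be sketched with the standard library's heap algorithms:

```cpp
#include <algorithm>
#include <cassert>
#include <functional>
#include <vector>

// Descending heap sort: build a min heap, then repeatedly move the root to the
// back and sift down (std::sort_heap performs the swap-and-adjust loop for us).
std::vector<int> heapSortDescending(std::vector<int> v)
{
    std::greater<int> cmp;
    std::make_heap(v.begin(), v.end(), cmp);   // min heap: v[0] is the smallest
    std::sort_heap(v.begin(), v.end(), cmp);   // leaves v sorted descending
    return v;
}
```

heapSortDescending({8,5,0,3,7,1,2}) returns {8,7,5,3,2,1,0}, matching the example above.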

Algorithm implementation:

// Heap sort
/* After sorting with a big top heap, the array is ordered from small to large */

//==== Sift down =====
void AdjustHeap(int* h, int node, int len)  // node: index to sift down (from 0); len: heap length
{
    int index=node;
    int child=2*index+1;  // left child; the root has index 0
    while(child<len)
    {
        // pick the larger of the left and right children
        if(child+1<len && h[child]<h[child+1])
        {
            child++;
        }
        if(h[index]>=h[child]) break;
        Swap(h[index],h[child]);
        index=child;
        child=2*index+1;
    }
}

//==== Build heap =====
void MakeHeap(int* h, int len)
{
    for(int i=len/2;i>=0;--i)
    {
        AdjustHeap(h, i, len);
    }
}

//==== Sort =====
void HeapSort(int* h, int len)
{
    MakeHeap(h, len);
    for(int i=len-1;i>=0;--i)
    {
        Swap(h[i],h[0]);
        AdjustHeap(h, 0, i);
    }
}

  

Nine, radix sort

Radix sort is different from the seven sorting methods explained earlier in this series: it does not need to compare keys at all.

Based on the value of each digit of the keys, it sorts the N elements through several rounds of "distribution" and "collection".

1. LSD (least significant digit first)

Let's walk through a concrete example to show how radix sort works.

There is an initial sequence: R {50, 123, 543, 187, 49, 30, 0, 2, 11, 100}.

We know that every digit of any decimal number is one of 0-9.

So we can treat 0-9 as 10 buckets.

We first distribute the numbers into the designated buckets according to their units digit. For example, R[0] = 50 has units digit 0, so it goes into the bucket numbered 0.

 

 

After distribution, we take the numbers out of the buckets in order, from bucket 0 to bucket 9.

The resulting sequence is now increasing in the units digit.

Sorted by units digit: {50, 30, 0, 100, 11, 2, 123, 543, 187, 49}.

Next, distribute and collect in the same way by the tens digit and then the hundreds digit, and the final sequence is sorted.
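The units-digit pass above can be sketched directly with ten buckets (the names are ours):

```cpp
#include <cassert>
#include <vector>

// One LSD pass: distribute by units digit into buckets 0-9, then collect in order.
std::vector<int> unitsDigitPass(const std::vector<int>& in)
{
    std::vector<std::vector<int>> bucket(10);
    for (int x : in)
        bucket[x % 10].push_back(x);           // distribute
    std::vector<int> out;
    for (const auto& b : bucket)               // collect from bucket 0 to 9
        out.insert(out.end(), b.begin(), b.end());
    return out;
}
```

For the sequence R above this produces {50, 30, 0, 100, 11, 2, 123, 543, 187, 49}, matching the text.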

LSD algorithm implementation:

int GetMaxDight(int* h, int len)
{
    if(h==NULL) return 0;
    if(len<1) return 0;
    
    int max=h[0];
    for(int i=1;i<len;++i)
    {
        if(h[i]>max) max=h[i];
    }
    
    int digit=1;
    while(max/10!=0)
    {
        max/=10;
        ++digit;
    }
    
    return digit;
} 

int GetReminder(int value,int digit)
{
    int div=1;
    for(int i=1;i<digit;++i)
        div*=10;
    
    return value/div%10;
}

void RadixSort_LSD(int* h, int len)
{
    if(h==NULL) return;
    if(len<=1) return;
    
    int digit=GetMaxDight(h,len);
    //printf("MaxDigit:%d\n", digit);
    
    int count[10]={0};
    int *tmp=(int*)calloc(len,sizeof(int));
    
    for(int d=1;d<=digit;++d)
    {
        memset(count,0,sizeof(count));
        
        for(int i=0;i<len;++i)
        {
            count[GetReminder(h[i],d)]++;
        }
        
        // compute right boundaries (prefix sums over the digit counts)
        for(int i=1;i<10;++i)
        {
            count[i]+=count[i-1];
        }
        
        for(int i=len-1;i>=0;--i)
        {
            int r=GetReminder(h[i],d);
            int index=count[r];
            tmp[index-1]=h[i];
            count[r]--;
        }
        
        memcpy(h,tmp,len*sizeof(int));
    }
    
    free(tmp);
}

void RadixSort_LSD_Reverse(int* h, int len)
{
    if(h==NULL) return;
    if(len<=1) return;
    
    int digit=GetMaxDight(h,len);
    //printf("MaxDigit:%d\n", digit);

    int count[10]={0};
    
    int *tmp=(int*)calloc(len,sizeof(int));
    
    for(int d=1;d<=digit;++d)
    {
        memset(count,0,sizeof(count));
        
        for(int i=0;i<len;++i)
        {
            count[GetReminder(h[i],d)]++;
        }
        
        // compute right boundaries (suffix sums, for descending order)
        for(int i=8;i>=0;--i)
        {
            count[i]+=count[i+1];
        }
        
        for(int i=len-1;i>=0;--i)
        {
            int r=GetReminder(h[i],d);
            int index=count[r];
            tmp[index-1]=h[i];
            count[r]--;
        }
        
        memcpy(h,tmp,len*sizeof(int));
    }
    
    free(tmp);
}

  

2. MSD (most significant digit first)

Below we introduce the steps of MSD radix sort directly.

MSD radix sort groups the sequence from the highest digit down to the lowest digit. Its implementation differs from LSD radix sort: the sorting process needs a recursive function.

Sequence to be sorted

170, 45, 75, 90, 2, 24, 802, 66

We can see that the largest number has 3 digits, so we start grouping these numbers by the hundreds digit:

0: 045, 075, 090, 002, 024, 066
1: 170
2-7: empty
8: 802
9: empty

This is where MSD differs from LSD radix sort. In LSD radix sort, the data in the buckets is collected after every distribution round, and the collected sequence is then distributed again by the next digit. In MSD, the bucket data is not collected right away. Instead, we examine each bucket: whenever a bucket holds more than one element, we recursively distribute that bucket's contents by the next lower digit.

In this example, we need to group all the elements in bucket 0 by their tens digit:

0: 002
1: Empty
2: 024
3: Empty
4: 045
5: Empty
6: 066
7: 075
8: Empty
9: 090

Next we would recursively group the elements in each bucket by the units digit. But every bucket now holds at most one element, so we begin to back out: we collect the data in each bucket and return to the previous layer. The buckets grouped by the hundreds digit now look like this:

0: 002, 024, 045, 066, 075, 090
1: 170
2-7: empty
8: 802
9: empty

Then we collect the data from these buckets. The collected sequence is as follows:

2, 24, 45, 66, 75, 90, 170, 802

The whole MSD radix sort proceeds by the process above.

The recursive part of the MSD description above may not be clear enough. If you are not comfortable with recursive functions, I suggest first understanding how recursion works and then rereading this process; the MSD radix sort should then be easier to follow.

Algorithm implementation:

// GetMaxDight and GetReminder are the same helper functions already defined in the LSD section above.

void RRadixSort_MSD(int* h, int begin, int end, int digit)
{
    if(h==NULL) return;
    if(begin>=end) return;
    if(digit<1) return;
    
    int start[10];
    int count[10]={0};
    int *tmp=(int*)calloc(end-begin+1,sizeof(int));
    
    for(int i=begin;i<=end;++i)
    {
        count[GetReminder(h[i],digit)]++;
    }
    
    memcpy(start,count,sizeof(start));
    
    // compute right boundaries (prefix sums over the digit counts)
    for(int i=1;i<10;++i)
    {
        start[i]+=start[i-1];
    } 
    
    for(int i=end;i>=begin;--i)
    {
        int r=GetReminder(h[i],digit);
        int index=start[r];
        tmp[index-1]=h[i];
        start[r]--;
    }
    
    memcpy(&h[begin],tmp,(end-begin+1)*sizeof(int));
    
    free(tmp);
    
    for(int i=0;i<10;++i)
    {
        if(count[i]>1)
        {
            // start[i] now holds the left boundary of bucket i *relative to begin*,
            // so the recursive range must be offset by begin
            RRadixSort_MSD(h, begin+start[i], begin+start[i]+count[i]-1, digit-1);
        }
    }
}

void RadixSort_MSD(int* h, int len)
{
    if(h==NULL) return;
    if(len<=1) return;
    
    int digit=GetMaxDight(h,len);
    
    //printf("MaxDigit:%d\n",digit);
    
    RRadixSort_MSD(h, 0, len-1, digit);
}

void RRadixSort_MSD_Reverse(int* h, int begin, int end, int digit)
{
    if(h==NULL) return;
    if(begin>=end) return;
    if(digit<1) return;
    
    int start[10];
    int count[10]={0};
    int *tmp=(int*)calloc(end-begin+1,sizeof(int));
    
    for(int i=begin;i<=end;++i)
    {
        count[GetReminder(h[i],digit)]++;
    }
    
    memcpy(start,count,sizeof(start));
    
    // compute right boundaries (suffix sums, for descending order)
    for(int i=8;i>=0;--i)
    {
        start[i]+=start[i+1];
    } 
    
    for(int i=end;i>=begin;--i)
    {
        int r=GetReminder(h[i],digit);
        int index=start[r];
        tmp[index-1]=h[i];
        start[r]--;
    }
    
    memcpy(&h[begin],tmp,(end-begin+1)*sizeof(int));
    
    free(tmp);
    
    for(int i=0;i<10;++i)
    {
        if(count[i]>1)
        {
            // start[i] now holds the left boundary of bucket i *relative to begin*,
            // so the recursive range must be offset by begin
            RRadixSort_MSD_Reverse(h, begin+start[i], begin+start[i]+count[i]-1, digit-1);
        }
    }
}

void RadixSort_MSD_Reverse(int* h, int len)
{
    if(h==NULL) return;
    if(len<=1) return;
    
    int digit=GetMaxDight(h,len);
    
    //printf("MaxDigit:%d\n",digit);
    
    RRadixSort_MSD_Reverse(h, 0, len-1, digit);
}

  Ten, the main function

 

  

// Headers required by the code in this post
#include <cstdio>   // printf
#include <cstdlib>  // srand, rand, calloc, free
#include <cstring>  // memcpy, memset
#include <ctime>    // time

void Swap(int& a, int& b)
{
    int t=a;
    a=b;
    b=t;
    
    return;
}

int main()
{
    int A[10]={0};
    srand((unsigned)time(NULL));
    
    printf("before:\n");
    for(int i=0;i<10;++i)
    {
        A[i]=rand()%100;
        printf("%d ",A[i]);
    }
    printf("\n");
    
    printf("after:\n");
    //QuickSort(A,0,9);
    //BubbleSort(A,sizeof(A)/sizeof(int));
    //SelectionSort(A,sizeof(A)/sizeof(int));
    //InsertSort(A,sizeof(A)/sizeof(int));
    //MergeSort(A,sizeof(A)/sizeof(int));
    //ShellSort(A,sizeof(A)/sizeof(int));
    //HeapSort(A,sizeof(A)/sizeof(int));
    //RadixSort_LSD(A,sizeof(A)/sizeof(int));
    //RadixSort_MSD(A,sizeof(A)/sizeof(int));
    //RadixSort_LSD_Reverse(A,sizeof(A)/sizeof(int));
    RadixSort_MSD_Reverse(A,sizeof(A)/sizeof(int));
    for(int i=0;i<sizeof(A)/sizeof(int);++i)
    {
        printf("%d ",A[i]);
    }
    printf("\n");
    
    return 0;
}

  


Origin www.cnblogs.com/WinkJie/p/12728335.html