[Algorithm Basics (2)] Binary Search and Fibonacci Sequence

  1. Binary search

    The premise of efficient binary search is that the data structure is ordered:

    1. Sort the array (if it is not already sorted).
    2. Split the array into a left half and a right half at the middle position.
    3. Compare the target value with the value at the middle position to decide which half the target must lie in.
    4. Continue searching in that half, halving again each time, until the target is found.

    Illustration:

    First, initialize the pointers (left, right, and mid);

    When target > arr[mid], set left = mid + 1 and recompute mid;

    When target < arr[mid], set right = mid - 1 and recompute mid;

    Finally arr[mid] === target is found, and the search ends.

    // Iterative binary search: returns the index of target in the sorted array arr, or -1 if absent
    function binarySearch(arr, target){
        let start = 0;
        let end = arr.length - 1;
        // Single-element array: only arr[0] can match
        if(!end){
            return arr[0] === target ? 0 : -1;
        }
        // Two-element array: check both positions directly
        if(end == 1){
            return arr[0] === target ? 0 : arr[1] === target ? 1 : -1;
        }
        let middle;
        while(start <= end){
            // Midpoint of the current search range
            middle = start + ((end - start) >> 1);
            if(arr[middle] === target){
                return middle;
            }else if(target > arr[middle]){
                start = middle + 1;
            }else{
                end = middle - 1;
            }
        }
        return -1;
    }
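
    A quick usage check of the function above (the sample array and values are illustrative, not from the original post):

    // Example: searching a small sorted array
    let nums = [1, 3, 5, 7, 9, 11];
    console.log(binarySearch(nums, 7));  // 3  (index of 7)
    console.log(binarySearch(nums, 4));  // -1 (not present)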
    
  2. Using the master theorem to calculate the time complexity of recursion

  3. When an algorithm handles a large problem, it often splits the problem into several sub-problems, solves one or more of them recursively, and does some pre-processing or combining work before or after the divide-and-conquer step. This yields a recurrence equation for the complexity of the algorithm, and solving that equation gives the complexity. One of the most common forms of such a recurrence is the following:

    Assume constants a >= 1 and b > 1, f(n) is a function, and T(n) is defined on the non-negative integers by T(n) = a T(n / b) + f(n). Then:

    1. If f(n) = O(n^(log_b a − ε)) for some ε > 0, then T(n) = Θ(n^(log_b a)).
    2. If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · log n).
    3. If f(n) = Ω(n^(log_b a + ε)) for some ε > 0, and for some constant c < 1 and all sufficiently large n, a · f(n / b) ≤ c · f(n), then T(n) = Θ(f(n)).

    For example, for the common binary search algorithm, the recurrence for the time complexity is T(n) = T(n / 2) + Θ(1). Here n^(log_b a) = n^(log_2 1) = n^0 = Θ(1), which matches the second case of the master theorem, so the time complexity is T(n) = Θ(log n).
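
    As a minimal sketch (not part of the original post), here is a recursive binary search whose running time satisfies exactly this recurrence, since each call does constant work and then recurses on one half of the range:

    // Recursive binary search: T(n) = T(n / 2) + Θ(1)
    function binarySearchRecursive(arr, target, start = 0, end = arr.length - 1) {
        if (start > end) {
            return -1;                     // empty range: target not present
        }
        let middle = start + ((end - start) >> 1);
        if (arr[middle] === target) {
            return middle;                 // constant work at each level
        }
        return target > arr[middle]
            ? binarySearchRecursive(arr, target, middle + 1, end)    // search the right half
            : binarySearchRecursive(arr, target, start, middle - 1); // search the left half
    }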

    Let’s look at another example, T(n) = 9 T(n / 3) + n. Here n^(log_b a) = n^(log_3 9) = n^2. Taking ε = 1, we have f(n) = n = O(n^(2 − 1)), so the first case of the master theorem applies and T(n) = Θ(n^2).

    Let’s take a slightly more complicated example, T(n) = 3 T(n / 4) + n log n. Here n^(log_b a) = n^(log_4 3) = O(n^0.793); take ε = 0.2, so f(n) = n log n = Ω(n^(0.793 + 0.2)). With c = 3 / 4 and sufficiently large n, a · f(n / b) = 3 · (n / 4) · log(n / 4) ≤ (3 / 4) · n · log n = c · f(n), which matches the third case of the master theorem, so T(n) = Θ(n log n).

    One thing to pay special attention to when applying the master theorem is that the ε in the first and third cases must be strictly greater than zero. If no such ε > 0 can be found, these two cases cannot be used.

    Reference: https://blog.gocalf.com/algorithm-complexity-and-master-theorem.html

  4. Fibonacci sequence

    Problem description: the Fibonacci numbers are defined by F(n) = F(n - 1) + F(n - 2), with F(0) = 0 and F(1) = 1. For any given integer n (n ≥ 0), compute the exact value of F(n) and analyze the time and space complexity of the algorithm.

    // Fibonacci sequence: plain recursion
    function fibonacci(n){
        if (n === 0) {
            return 0;
        }
        if (n === 1) {
            return 1;
        }
        // Each call spawns two more calls, giving exponential time
        return fibonacci(n - 1) + fibonacci(n - 2);
    }
    

    The recursive method looks intuitive but performs terribly. Following the Fibonacci recurrence, for an input n the function simply calls itself to obtain F(n - 1) and F(n - 2), and their sum is the result. The recursion bottoms out at the initial values of the recurrence, i.e. when n is 0 or 1.

    The time complexity of this algorithm satisfies a recurrence of the same shape as the Fibonacci sequence itself: T(n) = T(n - 1) + T(n - 2) + O(1), which gives T(n) = O(1.618^n), where 1.618 is the golden ratio (1 + √5) / 2. The space complexity is determined by the depth of the recursion, which is O(n).
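
    Where the 1.618 comes from (a short side derivation, not spelled out in the original post): dropping the constant term, T(n) essentially satisfies the Fibonacci recurrence itself, and trying a solution of the form T(n) = x^n gives the characteristic equation

    x^2 = x + 1 \;\Rightarrow\; x = \frac{1 + \sqrt{5}}{2} \approx 1.618

    so T(n) grows on the order of 1.618^n.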

    // Fibonacci sequence: iterative (recurrence) method
    function fibonacci(n){
        if (n === 0) {
            return 0;
        }
        if (n === 1) {
            return 1;
        }
        // Keep only the last two Fibonacci numbers
        let a = [0, 1];
        let n_i = 1;
        while (n_i < n) {
            n_i++;
            // Shift the window: compute the next number and discard the oldest
            let next = a[0] + a[1];
            a[0] = a[1];
            a[1] = next;
        }
        return a[1];
    }
    

    Although the two names differ by only one word (recursion vs. recurrence), the iterative recurrence method has a much smaller complexity. It follows the recurrence equation, starting from n = 0 and n = 1, and computes the Fibonacci numbers one by one up to F(n). Since each new value only needs the previous two Fibonacci numbers, the older ones can be discarded, which reduces the space usage to a minimum. The time complexity is O(n) and the space complexity is O(1).

    Comparing the recursive method and the recurrence (iterative) method, both use the idea of breaking the problem down: the target problem is split into smaller subproblems, and the solution of the target problem is built from the solutions of those subproblems. The difference between the two is essentially the difference between a plain divide-and-conquer algorithm and dynamic programming.

    Matrix method: ~
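
    The post leaves the matrix method out; as a hedged sketch of the standard technique (the names and code below are illustrative, not from the original post), F(n) can be read off from the n-th power of the matrix [[1, 1], [1, 0]], and fast exponentiation computes that power in O(log n) matrix multiplications:

    // Matrix method sketch: [[1, 1], [1, 0]]^n = [[F(n+1), F(n)], [F(n), F(n-1)]]
    function fibonacciMatrix(n) {
        if (n === 0) return 0;
        // Multiply two 2x2 matrices stored as flat arrays [a, b, c, d]
        const mul = (x, y) => [
            x[0] * y[0] + x[1] * y[2], x[0] * y[1] + x[1] * y[3],
            x[2] * y[0] + x[3] * y[2], x[2] * y[1] + x[3] * y[3]
        ];
        let result = [1, 0, 0, 1];   // identity matrix
        let base = [1, 1, 1, 0];     // the Fibonacci matrix
        let e = n;
        while (e > 0) {
            if (e & 1) result = mul(result, base);  // fold in this bit of the exponent
            base = mul(base, base);                 // square for the next bit
            e >>= 1;
        }
        return result[1];            // F(n) is the top-right entry
    }

    This brings the time complexity down to O(log n), ignoring the cost of arithmetic on large numbers (which exceed Number precision for large n).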

  5. Merge sort

    // Merge sort: recursively sort arr[L..R] in place
    function process(arr, L, R) {
        if (L >= R) {
            // Ranges of size 0 or 1 are already sorted
            return;
        }
        let mid = L + ((R - L) >> 1);
        process(arr, L, mid);
        process(arr, mid + 1, R);
        merge(arr, L, mid, R);
    }

    // Merge the sorted halves arr[L..mid] and arr[mid+1..R]
    function merge(arr, L, mid, R) {
        let help = [];
        let i = 0;
        let p1 = L;
        let p2 = mid + 1;
        // Take the smaller front element from either half
        while (p1 <= mid && p2 <= R) {
            help[i++] = arr[p1] <= arr[p2] ? arr[p1++] : arr[p2++];
        }
        // Copy whatever is left in the first half
        while (p1 <= mid) {
            help[i++] = arr[p1++];
        }
        // Copy whatever is left in the second half
        while (p2 <= R) {
            help[i++] = arr[p2++];
        }
        // Write the merged result back into arr[L..R]
        for (let i = 0; i < help.length; i++) {
            arr[L + i] = help[i];
        }
    }
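
    A small entry point and usage example for the two functions above (the wrapper name mergeSort is illustrative, not from the original post):

    // Convenience wrapper: sorts the whole array in place and returns it
    function mergeSort(arr) {
        if (arr.length > 1) {
            process(arr, 0, arr.length - 1);
        }
        return arr;
    }

    console.log(mergeSort([5, 2, 9, 1, 5, 6]));  // [ 1, 2, 5, 5, 6, 9 ]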
    

Origin blog.csdn.net/sinat_29843547/article/details/128727143