Four fast ways to compute a power (without using the pow function)

Pow(x, n)

  • Method 1: Brute force
  • Method 2: Recursive fast power algorithm
  • Method 3: Iterative fast power algorithm
  • Method 4: Bit operations

Method 1: Brute force

Ideas

Simply simulate the process of multiplying \(x\) by itself \(n\) times.

If \(n < 0\), we can replace \(x\) with \(\dfrac{1}{x}\) and \(n\) with \(-n\) to ensure \(n \ge 0\). This restriction simplifies the rest of the discussion.

But we need to pay attention to edge cases, especially the asymmetric ranges of negative and positive 32-bit integers: negating \(n\) overflows when \(n\) equals INT_MIN, which is why the code below first copies \(n\) into a long long.
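
For a concrete sense of the overflow issue, here is a minimal sketch (not part of the original solution, assuming a 32-bit int) of why the exponent is widened before it is negated:

#include <climits>
#include <iostream>

int main() {
    int n = INT_MIN;         // -2147483648 with a 32-bit int
    long long N = n;         // widen first ...
    N = -N;                  // ... so the negation cannot overflow
    std::cout << N << '\n';  // prints 2147483648, which does not fit in an int
    // Negating the int directly (-n) would overflow, which is undefined behavior.
    return 0;
}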

Algorithm

We can use a simple loop to calculate the result.

class Solution {
public:
    double myPow(double x, int n) {
        long long N = n;            // widen so that -N cannot overflow when n == INT_MIN
        if (N < 0) {                // x^(-n) == (1/x)^n
            x = 1 / x;
            N = -N;
        }
        double ans = 1;
        for (long long i = 0; i < N; i++)   // multiply by x exactly N times
            ans = ans * x;
        return ans;
    }
};
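
A quick sanity check (a small driver added for illustration, assuming the Solution class above is in scope):

#include <iostream>

int main() {
    Solution s;
    std::cout << s.myPow(2.0, 10) << '\n';  // 1024
    std::cout << s.myPow(2.0, -2) << '\n';  // 0.25
    return 0;
}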

Complexity analysis

  • Time complexity: \(O(n)\). We multiply by \(x\) a total of \(n\) times.
  • Space complexity: \(O(1)\). We only need one variable to store the final result.

Method 2: Recursive fast power algorithm

Ideas

Use the identity \(x^n = (x^{n/2})^2\) when \(n\) is even and \(x^n = (x^{\lfloor n/2 \rfloor})^2 \cdot x\) when \(n\) is odd: the exponent is halved at every recursive call.

class Solution {
public:
    double fastPow(double x, long long n) {
        if (n == 0) {
            return 1.0;                      // x^0 == 1
        }
        double half = fastPow(x, n / 2);     // compute x^(n/2) only once
        if (n % 2 == 0) {
            return half * half;              // even exponent: x^n = (x^(n/2))^2
        } else {
            return half * half * x;          // odd exponent: one extra factor of x
        }
    }
    double myPow(double x, int n) {
        long long N = n;                     // widen before negating to avoid INT_MIN overflow
        if (N < 0) {
            x = 1 / x;
            N = -N;
        }
        return fastPow(x, N);
    }
};
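
As a worked illustration (a hand trace added here, not in the original post), computing \(x^{10}\) unfolds as

\(x^{10} = (x^5)^2, \quad x^5 = (x^2)^2 \cdot x, \quad x^2 = (x^1)^2, \quad x^1 = (x^0)^2 \cdot x, \quad x^0 = 1\)

so the exponent shrinks 10 → 5 → 2 → 1 → 0 and only a handful of multiplications are needed.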

Complexity analysis

  • Time complexity: \(O(\log n)\). Every time we apply the formula \((x^n)^2 = x^{2n}\), \(n\) is reduced by half, so we need at most \(O(\log n)\) multiplications to get the result.
  • Space complexity: \(O(\log n)\). Each recursive call stores the intermediate result \(x^{n/2}\), and there are \(O(\log n)\) such calls on the stack, so the space complexity is \(O(\log n)\).

Method 3: Iterative fast power algorithm

The recursive and iterative fast power algorithms are two different routes to the same result. The iterative version walks through the binary representation of \(N\) from the lowest bit: it repeatedly squares current_product (giving \(x, x^2, x^4, \dots\)) and multiplies it into the answer whenever the current bit of \(N\) is 1.

class Solution {
public:
    double myPow(double x, int n) {
        long long N = n;                     // widen before negating to avoid INT_MIN overflow
        if (N < 0) {
            x = 1 / x;
            N = -N;
        }
        double ans = 1;
        double current_product = x;          // x^(2^k) for the bit currently examined
        for (long long i = N; i; i /= 2) {   // walk through the bits of N, lowest first
            if ((i % 2) == 1) {
                ans = ans * current_product;     // this bit is set: include its factor
            }
            current_product = current_product * current_product;   // square for the next bit
        }
        return ans;
    }
};
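
As an illustration (a hand trace added here, not in the original post), take \(N = 13\), whose binary representation is 1101. Starting from ans = 1 and current_product = \(x\), the loop proceeds as follows:

  • i = 13 (1101): the bit is 1, so ans = \(x\); current_product becomes \(x^2\)
  • i = 6 (110): the bit is 0, ans is unchanged; current_product becomes \(x^4\)
  • i = 3 (11): the bit is 1, so ans = \(x \cdot x^4 = x^5\); current_product becomes \(x^8\)
  • i = 1 (1): the bit is 1, so ans = \(x^5 \cdot x^8 = x^{13}\), and the loop ends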

Complexity analysis

  • Time complexity: \(O(\log n)\). We spend at most constant time on each bit of \(n\), so the total time complexity is \(O(\log n)\).
  • Space complexity: \(O(1)\). We only need two variables: one for the current product and one for the final result.

Method 4: Bit operations to implement pow(x, n)

Following the brute force idea is particularly simple, but how do we do it with bit operations?

The bit operation trick

Take \(n = 13\) as an example. The binary representation of 13 is 1101, i.e. \(13 = 8 + 4 + 1\), so the thirteenth power of \(m\) can be decomposed as:

\(m^{1101} = m^{0001} * m^{0100} * m^{1000}\)

We can read 1101 bit by bit using & 1 and >> 1. Whenever the current bit is 1, the factor that bit represents is multiplied into the final result. The code is short and easy to follow:

int pow(int m, int n) {        // assumes n >= 0 and that the result fits in an int
    int sum = 1;
    int tmp = m;               // m^(2^k) for the bit currently examined
    while (n != 0) {
        if ((n & 1) == 1) {    // lowest bit is 1: multiply this factor into the result
            sum *= tmp;
        }
        tmp *= tmp;            // square for the next bit: m, m^2, m^4, ...
        n = n >> 1;            // drop the lowest bit
    }
    return sum;
}

The time complexity is \(O(\log n)\), which looks pretty good.
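
A quick sanity check (a small driver added for illustration, assuming the pow function above is defined in the same file):

#include <iostream>

int main() {
    std::cout << pow(3, 13) << '\n';  // 1594323
    std::cout << pow(2, 10) << '\n';  // 1024
    return 0;
}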

Origin: www.cnblogs.com/RioTian/p/12713515.html