[Algorithm questions] Dynamic programming basics: best time to buy and sell stock, counting bits, and judging a subsequence

Foreword

Dynamic programming (DP) is a method for solving optimization problems over multi-stage decision processes. It decomposes a complex problem into overlapping subproblems and derives the optimal solution to the whole problem by maintaining the optimal solution of each subproblem.

The main idea of dynamic programming is to use the optimal solutions of already-solved subproblems to derive the optimal solution of a larger problem, thereby avoiding repeated computation. Dynamic programming therefore usually works bottom-up: it solves small-scale problems first, then gradually builds up to larger ones until the optimal solution of the entire problem is obtained.

Dynamic programming usually includes the following basic steps:

  1. Define the state: divide the problem into subproblems, and define a state that represents the solution of each subproblem;
  2. Define the state transition equation: based on the relationship between subproblems, design the equation that derives an unknown state from known states;
  3. Determine the initial state: define the solution of the smallest subproblem;
  4. Solve bottom-up: compute the optimal value of every state according to the state transition equation;
  5. Construct the solution of the original problem from the optimal state values.

Dynamic programming solves many practical problems, such as the shortest path problem, the knapsack problem, the longest common subsequence problem, and the edit distance problem. It is also closely related to other algorithmic paradigms, such as divide and conquer and greedy algorithms.


1. The best time to buy and sell stocks

Given an array prices, its i-th element prices[i] represents the price of a given stock on the i-th day.

You can only choose to buy the stock on one day and sell it on a different day in the future. Design an algorithm to calculate the maximum profit you can make.

Return the maximum profit you can make from this trade. If you cannot make any profit, return 0.

Example 1:

Input: [7,1,5,3,6,4]
Output: 5
Explanation: Buy on day 2 (stock price = 1) and sell on day 5 (stock price = 6); maximum profit = 6 - 1 = 5.
Note that the profit cannot be 7 - 1 = 6, because the selling day must come after the buying day; you cannot sell the stock before buying it.

Example 2:

Input: prices = [7,6,4,3,1]
Output: 0
Explanation: In this case, no trades are completed, so the maximum profit is 0.

Source: LeetCode.

1.1. Ideas

Dynamic programming problems are generally one-dimensional, two-dimensional, or multi-dimensional (often combined with state compression), with the corresponding forms dp[i], dp[i][j], and dp tables of higher dimension.

  • Specify what dp[i] represents.
  • Derive the state transition equation from the relationship between dp[i] and dp[i-1].
  • Determine the initial conditions, such as dp[0].

Let dp[i] be the maximum profit obtainable within the first i + 1 days (days 0 through i). Since we always want to maximize profit, the state transition equation is:
dp[i] = max(dp[i-1], prices[i] - minPrice),
where minPrice is the minimum price seen on days 0 through i.


1.2. Code implementation

class Solution {
public:
    int maxProfit(vector<int>& prices) {
        int n = prices.size();
        if (n == 0)
            return 0;
        int minPrice = prices[0];
        vector<int> dp(n, 0);
        // Initial condition: no profit is possible on day 0
        dp[0] = 0;
        for (int i = 1; i < n; i++) {
            // Track the minimum price so far, then either keep yesterday's
            // best profit or sell today at the lowest price seen
            minPrice = min(minPrice, prices[i]);
            dp[i] = max(dp[i - 1], prices[i] - minPrice);
        }
        return dp[n - 1];
    }
};

Time complexity: O(n).
Space complexity: O(n).

2. Bit Counting

Given an integer n, for each i in 0 <= i <= n, count the number of 1s in its binary representation, and return an array ans of length n + 1 as the answer.

Example 1:

Input: n = 2
Output: [0,1,1]
Explanation:
0 --> 0
1 --> 1
2 --> 10

Example 2:

Input: n = 5
Output: [0,1,1,2,1,2]
Explanation:
0 --> 0
1 --> 1
2 --> 10
3 --> 11
4 --> 100
5 --> 101

Source: LeetCode.

2.1. Ideas

Split the numbers into odd and even:

  • Even numbers are easy: an even number i equals a smaller number multiplied by 2, and multiplying by 2 in binary is a left shift by one bit, which appends a 0 in the lowest position. Appending a 0 does not change the count of 1s, so dp[i] = dp[i / 2].
  • Odd numbers take slightly more thought: an odd number i is obtained by adding 1 to the even number i - 1. Adding 1 to an even number sets the lowest bit to 1, so dp[i] = dp[i - 1] + 1; since i and i - 1 share the same value of i / 2 here, this can also be written as dp[i] = dp[i / 2] + 1.

Every number falls into one of two classes:

Odd number: in binary, an odd number has exactly one more 1 than the even number just before it; the extra 1 is the lowest bit.

         0 = 0       1 = 1
         2 = 10      3 = 11

Even number: in binary, an even number has exactly as many 1s as the number obtained by dividing it by 2. Because its lowest bit is 0, dividing by 2 is a right shift by one bit that erases that 0, so the count of 1s is unchanged.

          2 = 10       4 = 100       8 = 1000
          3 = 11       6 = 110       12 = 1100

In addition, 0 contains zero 1s, so starting from 0 the values can be computed in a single traversal using parity.

2.2. Code implementation

State transition equation: dp[i] = dp[i >> 1] + (i & 1).

class Solution {
public:
    vector<int> countBits(int n) {
        vector<int> ans(n + 1, 0);
        for (int i = 0; i <= n; i++) {
            // i >> 1 drops the lowest bit; i & 1 adds it back if it was set
            ans[i] = ans[i >> 1] + (i & 0x01);
        }
        return ans;
    }
};

Time complexity: O(n). Each integer's bit count is derived from a previously computed result in O(1) time.

Space complexity: O(1), not counting the returned array.

3. Judgment subsequence

Given strings s and t, determine whether s is a subsequence of t.

A subsequence of a string is a new string formed by deleting some (or not) characters from the original string without changing the relative positions of the remaining characters. (For example, "ace" is a subsequence of "abcde", but "aec" is not).

Example 1:

Input: s = "abc", t = "ahbgdc"
Output: true

Example 2:

Input: s = "axc", t = "ahbgdc"
Output: false

Source: LeetCode.

3.1. Ideas

If plain enumeration is used, a lot of time is spent scanning t for the next matching character. Instead, for every position of t, we can preprocess the first occurrence of each character at or after that position.
This preprocessing can be done with dynamic programming. Let f[i][j] be the first position at or after i in string t where character j occurs. For the transition: if the character at position i of t is j, then f[i][j] = i; otherwise j first occurs at or after position i + 1, so f[i][j] = f[i + 1][j]. The iteration therefore runs backwards, enumerating i from the end of t to the front.

This gives the state transition equation:

    f[i][j] = i,           if t[i] = j
    f[i][j] = f[i+1][j],   otherwise

with the boundary f[m][j] = m (meaning character j does not occur at or after position m, where m is the length of t).

3.2. Code implementation

class Solution {
public:
    bool isSubsequence(string s, string t) {
        int n = s.size(), m = t.size();

        // f[i][j]: first position >= i in t where character 'a' + j occurs;
        // the value m means the character does not occur at or after i
        vector<vector<int>> f(m + 1, vector<int>(26, 0));
        for (int i = 0; i < 26; i++) {
            f[m][i] = m;
        }

        for (int i = m - 1; i >= 0; i--) {
            for (int j = 0; j < 26; j++) {
                if (t[i] == j + 'a')
                    f[i][j] = i;
                else
                    f[i][j] = f[i + 1][j];
            }
        }

        // Jump through t via the table, one character of s at a time
        int add = 0;
        for (int i = 0; i < n; i++) {
            if (f[add][s[i] - 'a'] == m) {
                return false;
            }
            add = f[add][s[i] - 'a'] + 1;
        }
        return true;
    }
};


Summary

Dynamic programming (DP) is a method for solving optimization problems over multi-stage decision processes. It decomposes a complex problem into overlapping subproblems and derives the optimal solution of the whole problem by maintaining the optimal solution of each subproblem. Dynamic programming solves many practical problems, such as the shortest path problem, the knapsack problem, the longest common subsequence problem, and the edit distance problem.

Its basic idea is to use the optimal solutions of already-solved subproblems to derive the optimal solution of a larger problem, thereby avoiding repeated computation. It usually works bottom-up, solving small-scale problems first and gradually building up until the optimal solution of the entire problem is obtained.

Dynamic programming usually includes the following basic steps:

  1. Define the state: divide the problem into subproblems, and define a state that represents the solution of each subproblem;
  2. Define the state transition equation: based on the relationship between subproblems, design the equation that derives an unknown state from known states;
  3. Determine the initial state: define the solution of the smallest subproblem;
  4. Solve bottom-up: compute the optimal value of every state according to the state transition equation;
  5. Construct the solution of the original problem from the optimal state values.

The time complexity of dynamic programming is usually O(n^2) or O(n^3), and the space complexity is O(n), where n is the size of the problem. In practice, techniques such as rolling arrays can often be used to reduce the space complexity.



Origin blog.csdn.net/Long_xu/article/details/131429299