[Algorithm Questions] Dynamic programming basics: the Fibonacci sequence, the frog jumping steps problem, and the maximum sum of a contiguous subarray

Foreword

Dynamic programming (DP for short) is a method for solving optimization problems that involve multi-stage decision processes. It is a strategy that decomposes a complex problem into overlapping subproblems and derives the optimal solution to the whole problem by maintaining the optimal solution of each subproblem.

The main idea of dynamic programming is to use the optimal solutions of already-solved subproblems to derive the optimal solution of a larger problem, thereby avoiding repeated computation. Dynamic programming therefore usually works bottom-up: it solves small-scale problems first and then gradually builds up to larger ones until the optimal solution of the entire problem is obtained.

Dynamic programming usually includes the following basic steps:

  1. Define the state: divide the problem into subproblems, and define a state that represents the solution of each subproblem;
  2. Define the state transition equation: based on the relationship between subproblems, design how an unknown state is computed from known states;
  3. Determine the initial state: define the solution of the smallest subproblem;
  4. Solve bottom-up: compute the optimal solution of every state according to the state transition equation;
  5. Construct the solution of the problem from the optimal solutions.

Dynamic programming can solve many practical problems, such as the shortest path problem, the knapsack problem, the longest common subsequence problem, and the edit distance problem. It is also closely related to other algorithmic paradigms, such as divide and conquer and greedy algorithms.

In short, dynamic programming decomposes a complex multi-stage decision problem into overlapping subproblems and maintains the optimal solution of each one, following the steps above: define the states, design the state transition equation, determine the initial state, solve bottom-up, and construct the final solution.

1. Fibonacci sequence

Write a function that takes n and returns the nth item of the Fibonacci sequence, F(n). The Fibonacci sequence is defined as follows:

F(0) = 0, F(1) = 1;
F(N) = F(N - 1) + F(N - 2), where N > 1.

The Fibonacci sequence starts with 0 and 1, and subsequent Fibonacci numbers are obtained by adding the previous two numbers.

The answer needs to be taken modulo 1e9+7 (1000000007). If the initial calculation result is 1000000008, return 1.

Example 1:

Input: n = 2
Output: 1

Example 2:

Input: n = 5
Output: 5

Source: LeetCode.

1.1. Ideas

The boundary conditions of the Fibonacci sequence are F(0)=0 and F(1)=1. When n>1, each item equals the sum of the two preceding items, which gives the following recurrence relation:

F(n)=F(n−1)+F(n−2)

Since Fibonacci numbers have a recursive relationship, dynamic programming can be used to solve them. The state transition equation of dynamic programming is the above recurrence relation, and the boundary conditions are F(0) and F(1).

From the state transition equation and the boundary conditions, a straightforward implementation with O(n) time and O(n) space follows. Since F(n) depends only on F(n−1) and F(n−2), the "rolling array" idea can reduce the space complexity to O(1); this is the implementation given in the code below.

During the calculation, the answer needs to be modulo 1e9+7.
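The straightforward O(n)-space version described above can be sketched as follows (a minimal illustration; `fibWithTable` is a hypothetical name, and the rolling-array O(1) version appears in the next section):

```cpp
#include <vector>

// Straightforward DP with an explicit table: dp[i] = F(i) mod 1e9+7.
// Time O(n), space O(n); the rolling-array version reduces space to O(1).
int fibWithTable(int n) {
    const int MOD = 1000000007;
    if (n < 2) return n;
    std::vector<int> dp(n + 1);
    dp[0] = 0;                                  // boundary conditions
    dp[1] = 1;
    for (int i = 2; i <= n; ++i)
        dp[i] = (dp[i - 1] + dp[i - 2]) % MOD;  // take the modulus at each step
    return dp[n];
}
```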

1.2. Code implementation

class Solution {
public:
    int fib(int n) {
        if (n < 2)
            return n;
        int prev2 = 0, prev1 = 1;  // F(i-2) and F(i-1)
        int cur = 0;
        for (int i = 2; i <= n; i++) {
            cur = (prev2 + prev1) % 1000000007;  // F(i) = F(i-1) + F(i-2), mod 1e9+7
            prev2 = prev1;
            prev1 = cur;
        }
        return cur;
    }
};

Time complexity: O(n).
Space complexity: O(1).

2. The problem of frog jumping steps

A frog can jump up 1 step or 2 steps at a time. Find how many different ways the frog can jump up a staircase of n steps.

The answer needs to be taken modulo 1e9+7 (1000000007). If the initial calculation result is 1000000008, return 1.

Example 1:

Input: n = 2
Output: 2

Example 2:

Input: n = 7
Output: 21

Example 3:

Input: n = 0
Output: 1

Source: LeetCode.

2.1. Ideas

Suppose there are f(n) ways to jump up n steps. Among all jumping methods, the frog's last jump has only two possibilities: it jumped up 1 step, or it jumped up 2 steps.

  • Last jump was 1 step: n−1 steps remain, giving f(n−1) ways in this case;
  • Last jump was 2 steps: n−2 steps remain, giving f(n−2) ways in this case.

f(n) is the sum of these two cases, that is, f(n)=f(n−1)+f(n−2), which is exactly the Fibonacci recurrence. The question therefore reduces to computing the nth item of a Fibonacci-like sequence; the only difference is the starting values.

  • Frog jumping steps problem: f(0)=1 , f(1)=1 , f(2)=2 ;
  • Fibonacci sequence problem: f(0)=0 , f(1)=1 , f(2)=1 .


2.2. Code implementation

class Solution {
public:
    int numWays(int n) {
        if (n < 2)
            return 1;
        int prev2 = 1, prev1 = 1;  // f(i-2) and f(i-1)
        int cur = 1;
        for (int i = 2; i <= n; i++) {
            cur = (prev2 + prev1) % 1000000007;  // f(i) = f(i-1) + f(i-2), mod 1e9+7
            prev2 = prev1;
            prev1 = cur;
        }
        return cur;
    }
};

Time complexity: O(n).
Space complexity: O(1).

3. Maximum sum of a contiguous subarray

Input an integer array; one or more consecutive integers in the array form a subarray. Find the maximum sum over all subarrays.

The required time complexity is O(n).

Example 1:

Input: nums = [-2,1,-3,4,-1,2,1,-5,4]
Output: 6
Explanation: The sum of consecutive subarrays [4,-1,2,1] is the largest, which is 6.

Source: LeetCode.

3.1. Ideas

Dynamic programming is the optimal solution to this problem.

Dynamic programming analysis:
State definition: let dp[i] denote the maximum sum of a contiguous subarray ending with element nums[i].

Why dp[i] must include nums[i] in its definition: this guarantees that the recursion from dp[i] to dp[i+1] is correct; if nums[i] were not required to be included, the recursion would not preserve the contiguity requirement of the problem.

Transition equation: if dp[i−1] ≤ 0, then dp[i−1] contributes negatively to dp[i], i.e. dp[i−1]+nums[i] is no larger than nums[i] itself.

  • When dp[i−1] > 0: dp[i] = dp[i−1] + nums[i];
  • When dp[i−1] ≤ 0: dp[i] = nums[i].

Initial state:
dp[0]=nums[0], that is, the maximum sum of consecutive subarrays ending with nums[0] is nums[0].

Return value: return the maximum value in the dp list, which is the global maximum.
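The analysis above can first be written out with an explicit dp list (a sketch using O(n) extra space; `maxSubArrayDP` is an illustrative name):

```cpp
#include <algorithm>
#include <vector>

// dp[i] = maximum sum of a contiguous subarray ending at nums[i].
int maxSubArrayDP(const std::vector<int>& nums) {
    int n = nums.size();
    std::vector<int> dp(n);
    dp[0] = nums[0];                               // initial state
    for (int i = 1; i < n; ++i)
        dp[i] = std::max(dp[i - 1], 0) + nums[i];  // transition equation
    return *std::max_element(dp.begin(), dp.end()); // global maximum over the dp list
}
```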

Reducing the space complexity:
Since dp[i] depends only on dp[i−1] and nums[i], the original array nums can serve as the dp list, i.e. the recursion can be carried out directly in nums.
Because the extra space for the dp list is no longer needed, the space complexity drops from O(n) to O(1).

3.2. Code implementation

class Solution {
public:
    int maxSubArray(vector<int>& nums) {
        int n = nums.size();
        int res = nums[0];
        for (int i = 1; i < n; i++) {
            nums[i] += max(nums[i - 1], 0);  // nums[i] becomes the max sum ending at i
            res = max(res, nums[i]);
        }
        return res;
    }
};

Time complexity: O(n).
Space complexity: O(1).

Summary

Dynamic programming is a method for solving multi-stage decision optimization problems. It decomposes a complex problem into overlapping subproblems and derives the optimal solution to the whole problem by maintaining the optimal solution of each subproblem. Dynamic programming can solve many practical problems, such as the shortest path problem, the knapsack problem, the longest common subsequence problem, and the edit distance problem.

The basic idea of dynamic programming is to use the optimal solutions of already-solved subproblems to derive the optimal solution of a larger problem, thereby avoiding repeated computation. It usually works bottom-up, solving small-scale problems first and then gradually building up to larger ones until the optimal solution of the entire problem is obtained.

Dynamic programming usually includes the following basic steps:

  1. Define the state: divide the problem into subproblems, and define a state that represents the solution of each subproblem;
  2. Define the state transition equation: based on the relationship between subproblems, design how an unknown state is computed from known states;
  3. Determine the initial state: define the solution of the smallest subproblem;
  4. Solve bottom-up: compute the optimal solution of every state according to the state transition equation;
  5. Construct the solution of the problem from the optimal solutions.

The time complexity of dynamic programming is often O(n^2) or O(n^3), and the space complexity is O(n), where n is the scale of the problem. In practice, techniques such as rolling arrays can be used to reduce the space complexity of a dynamic programming algorithm.



Origin blog.csdn.net/Long_xu/article/details/131434443