A first look at the dynamic programming algorithm

1. Problem introduction

Take the Fibonacci problem I solved on LeetCode yesterday as an example:

Write a function that takes n and returns the nth term of the Fibonacci sequence. The sequence is defined as follows:
F(0) = 0, F(1) = 1
F(N) = F(N-1) + F(N-2), where N > 1.
The Fibonacci sequence starts with 0 and 1, and each subsequent number is the sum of the two preceding ones.
The answer must be taken modulo 1e9+7 (1000000007). For example, if the raw result is 1000000008, return 1.

The problem itself is simple; the first approach that comes to mind is plain recursion. But recursion times out here and also overflows for large n. An iterative solution passes (AC) on this problem, and reading the official solution shows that this problem is also an introductory example for dynamic programming algorithms.
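For comparison, here is a minimal sketch of the naive recursion mentioned above (my own illustration, not code from the original post). The two recursive calls overlap heavily, so the call tree has roughly 2^n nodes, which is why it times out for large n:

```java
class NaiveFib {
    static final int MOD = 1000000007;

    static int fib(int n) {
        if (n == 0 || n == 1) {
            return n;
        }
        // fib(n - 1) itself recomputes fib(n - 2), so the same
        // subproblem is solved over and over (overlapping subproblems).
        return (fib(n - 1) + fib(n - 2)) % MOD;
    }
}
```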
Dynamic programming method:

class Solution {
    public int fib(int n) {
        if (n == 0 || n == 1) {
            return n;
        }
        long[] dp = new long[n + 1];
        dp[0] = 0;
        dp[1] = 1;
        for (int i = 2; i <= n; i++) {
            dp[i] = dp[i - 1] + dp[i - 2]; // state transition
            dp[i] = dp[i] % 1000000007;    // take the modulo to avoid overflow
        }
        return (int) dp[n];
    }
}

As you can see, compared with the plain recursive method, a dp array is created here to store the result of each subproblem bottom-up, so nothing has to be recomputed from scratch.
The state transition equation here is essentially the mathematical recurrence from the brute-force solution (a new state is derived from existing states, which feels much like a recurrence relation).
Strictly speaking, the Fibonacci problem is not really a dynamic programming problem, because the problems DP solves generally have three elements: overlapping subproblems, optimal substructure, and a state transition equation. Fibonacci lacks the optimal-substructure element, because each of its subproblems has exactly one answer, so there is no notion of "optimal". From this one can also infer that dynamic programming algorithms are generally used to find optimal solutions.
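Since dp[i] only depends on dp[i - 1] and dp[i - 2], the array can be replaced by two rolling variables, reducing space from O(n) to O(1). A sketch of this common optimization (my own addition, not from the original post):

```java
class SolutionO1 {
    static final int MOD = 1000000007;

    public int fib(int n) {
        if (n == 0 || n == 1) {
            return n;
        }
        long prev = 0; // plays the role of dp[i - 2]
        long curr = 1; // plays the role of dp[i - 1]
        for (int i = 2; i <= n; i++) {
            long next = (prev + curr) % MOD; // same state transition as before
            prev = curr;
            curr = next;
        }
        return (int) curr;
    }
}
```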

2. Summary

In my understanding, the essence of the dynamic programming algorithm is still exhaustive enumeration: the result of every subproblem is computed bottom-up. But during the enumeration, the repeated computation of overlapping subproblems is optimized away, each subproblem must have an optimal solution, and a state transition equation (similar to the mathematical recurrence in the brute-force solution) is found. Together these form the three elements of a dynamic programming problem: overlapping subproblems, optimal substructure, and a state transition equation.
The typical form of a dynamic programming problem is to find an extremum (optimal solution), building the optimal solution of the original problem from the optimal solutions of its subproblems, from the bottom up.
There are many dynamic programming problem models: the knapsack problem, interval DP, and so on (a long way to go...)
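To show what the optimal-substructure element looks like when it does exist, here is a minimal 0/1 knapsack sketch (my own illustration, not from the original post). Here dp[w] is the best value achievable with capacity w: each item either extends the optimal packing of capacity w - weight[i] or is skipped, and we keep the better choice.

```java
class Knapsack {
    static int maxValue(int[] weight, int[] value, int capacity) {
        int[] dp = new int[capacity + 1]; // dp[w] = best value with capacity w
        for (int i = 0; i < weight.length; i++) {
            // Iterate capacity downwards so each item is used at most once.
            for (int w = capacity; w >= weight[i]; w--) {
                // State transition: skip item i, or take it on top of the
                // optimal solution for the remaining capacity.
                dp[w] = Math.max(dp[w], dp[w - weight[i]] + value[i]);
            }
        }
        return dp[capacity];
    }
}
```

Unlike Fibonacci, each subproblem here has many candidate answers, and the transition picks the best one: that is the "optimal" in optimal substructure.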

Origin blog.csdn.net/m0_46550452/article/details/107416138