Algorithm and data structure interview guide - a preliminary exploration of dynamic programming

A first look at dynamic programming

"Dynamic programming" is an important algorithm paradigm. It decomposes a problem into a series of smaller sub-problems and avoids repeated calculations by storing the solutions to the sub-problems, thus greatly improving time efficiency.

In this section, we start with a classic example problem, first give its brute force backtracking solution, observe the overlapping sub-problems contained in it, and then gradually derive a more efficient dynamic programming solution.

!!! question "Climbing stairs"

Given a staircase with $n$ steps, where you can climb $1$ or $2$ steps at a time, how many ways are there to climb to the top?

As shown in the figure below, for a staircase with $3$ steps, there are $3$ ways to climb to the top.

[Figure: the three ways to climb a $3$-step staircase]

The goal of this problem is to count the number of plans. We can enumerate all possibilities with brute-force backtracking. Specifically, think of climbing the stairs as a multi-round choice process: starting from the ground, in each round we choose to climb $1$ step or $2$ steps; whenever we reach the top of the stairs, we add $1$ to the count, and we prune any path that climbs past the top.
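This process can be sketched in Python as follows. This is a minimal illustrative version; the tabbed listings that follow are the book's own implementations and may differ in detail:

```python
def climbing_stairs_backtrack(n: int) -> int:
    """Count the ways to climb n stairs by enumerating every choice sequence."""
    count = 0

    def backtrack(current: int):
        nonlocal count
        if current == n:  # reached the top: record one valid plan
            count += 1
            return
        for step in (1, 2):  # each round: climb 1 step or 2 steps
            if current + step <= n:  # prune paths that overshoot the top
                backtrack(current + step)

    backtrack(0)
    return count
```

For $n = 3$ this enumerates the plans $1+1+1$, $1+2$, and $2+1$, returning $3$.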

=== "Python"

```python title="climbing_stairs_backtrack.py"
[class]{}-[func]{backtrack}

[class]{}-[func]{climbing_stairs_backtrack}
```

=== "C++"

```cpp title="climbing_stairs_backtrack.cpp"
[class]{}-[func]{backtrack}

[class]{}-[func]{climbingStairsBacktrack}
```

=== "Java"

```java title="climbing_stairs_backtrack.java"
[class]{climbing_stairs_backtrack}-[func]{backtrack}

[class]{climbing_stairs_backtrack}-[func]{climbingStairsBacktrack}
```

=== "C#"

```csharp title="climbing_stairs_backtrack.cs"
[class]{climbing_stairs_backtrack}-[func]{backtrack}

[class]{climbing_stairs_backtrack}-[func]{climbingStairsBacktrack}
```

=== "Go"

```go title="climbing_stairs_backtrack.go"
[class]{}-[func]{backtrack}

[class]{}-[func]{climbingStairsBacktrack}
```

=== "Swift"

```swift title="climbing_stairs_backtrack.swift"
[class]{}-[func]{backtrack}

[class]{}-[func]{climbingStairsBacktrack}
```

=== "JS"

```javascript title="climbing_stairs_backtrack.js"
[class]{}-[func]{backtrack}

[class]{}-[func]{climbingStairsBacktrack}
```

=== "TS"

```typescript title="climbing_stairs_backtrack.ts"
[class]{}-[func]{backtrack}

[class]{}-[func]{climbingStairsBacktrack}
```

=== "Dart"

```dart title="climbing_stairs_backtrack.dart"
[class]{}-[func]{backtrack}

[class]{}-[func]{climbingStairsBacktrack}
```

=== "Rust"

```rust title="climbing_stairs_backtrack.rs"
[class]{}-[func]{backtrack}

[class]{}-[func]{climbing_stairs_backtrack}
```

=== "C"

```c title="climbing_stairs_backtrack.c"
[class]{}-[func]{backtrack}

[class]{}-[func]{climbingStairsBacktrack}
```

=== "Zig"

```zig title="climbing_stairs_backtrack.zig"
[class]{}-[func]{backtrack}

[class]{}-[func]{climbingStairsBacktrack}
```

Method 1: Brute-force search

Backtracking algorithms usually do not explicitly decompose the problem; instead, they treat it as a sequence of decision steps and search for all feasible solutions by trial and pruning.

We can also analyze this problem from the perspective of decomposition. Let $dp[i]$ denote the number of ways to climb to the $i$-th step; then $dp[i]$ is the original problem, and its subproblems include:

$$
dp[i-1], dp[i-2], \dots, dp[2], dp[1]
$$

Since each round can climb only $1$ or $2$ steps, when we stand on the $i$-th step, in the previous round we must have been standing on either the $(i-1)$-th or the $(i-2)$-th step. In other words, we can only reach step $i$ from step $i-1$ or step $i-2$.

From this we can draw an important inference: the number of ways to climb to step $i-1$ plus the number of ways to climb to step $i-2$ equals the number of ways to climb to step $i$. The formula is as follows:

$$
dp[i] = dp[i-1] + dp[i-2]
$$

This means that in the stair-climbing problem there is a recurrence relationship between subproblems: the solution to the original problem can be built from the solutions to its subproblems. The figure below illustrates this recurrence.

[Figure: the recurrence relationship between subproblems]

Based on this recurrence formula we can obtain a brute-force search solution. Taking $dp[n]$ as the starting point, we recursively decompose a larger problem into the sum of two smaller problems until we reach the smallest subproblems $dp[1]$ and $dp[2]$, whose solutions are known: $dp[1] = 1$ and $dp[2] = 2$, meaning there is $1$ way to climb to step $1$ and there are $2$ ways to climb to step $2$.

Observe the following code: like standard backtracking code, it is a depth-first search, but more concise.

=== "Python"

```python title="climbing_stairs_dfs.py"
[class]{}-[func]{dfs}

[class]{}-[func]{climbing_stairs_dfs}
```

=== "C++"

```cpp title="climbing_stairs_dfs.cpp"
[class]{}-[func]{dfs}

[class]{}-[func]{climbingStairsDFS}
```

=== "Java"

```java title="climbing_stairs_dfs.java"
[class]{climbing_stairs_dfs}-[func]{dfs}

[class]{climbing_stairs_dfs}-[func]{climbingStairsDFS}
```

=== "C#"

```csharp title="climbing_stairs_dfs.cs"
[class]{climbing_stairs_dfs}-[func]{dfs}

[class]{climbing_stairs_dfs}-[func]{climbingStairsDFS}
```

=== "Go"

```go title="climbing_stairs_dfs.go"
[class]{}-[func]{dfs}

[class]{}-[func]{climbingStairsDFS}
```

=== "Swift"

```swift title="climbing_stairs_dfs.swift"
[class]{}-[func]{dfs}

[class]{}-[func]{climbingStairsDFS}
```

=== "JS"

```javascript title="climbing_stairs_dfs.js"
[class]{}-[func]{dfs}

[class]{}-[func]{climbingStairsDFS}
```

=== "TS"

```typescript title="climbing_stairs_dfs.ts"
[class]{}-[func]{dfs}

[class]{}-[func]{climbingStairsDFS}
```

=== "Dart"

```dart title="climbing_stairs_dfs.dart"
[class]{}-[func]{dfs}

[class]{}-[func]{climbingStairsDFS}
```

=== "Rust"

```rust title="climbing_stairs_dfs.rs"
[class]{}-[func]{dfs}

[class]{}-[func]{climbing_stairs_dfs}
```

=== "C"

```c title="climbing_stairs_dfs.c"
[class]{}-[func]{dfs}

[class]{}-[func]{climbingStairsDFS}
```

=== "Zig"

```zig title="climbing_stairs_dfs.zig"
[class]{}-[func]{dfs}

[class]{}-[func]{climbingStairsDFS}
```
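Condensed into a few lines, the brute-force search might look like this in Python (an illustrative sketch, not the official listing):

```python
def climbing_stairs_dfs(n: int) -> int:
    """Solve dp[n] = dp[n-1] + dp[n-2] top-down, without any caching."""
    def dfs(i: int) -> int:
        if i == 1 or i == 2:  # known smallest subproblems: dp[1] = 1, dp[2] = 2
            return i
        # decompose a larger problem into the sum of two smaller ones
        return dfs(i - 1) + dfs(i - 2)

    return dfs(n)
```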

The figure below shows the recursion tree formed by the brute-force search. For the problem $dp[n]$, the depth of the recursion tree is $n$, and the time complexity is $O(2^n)$. Exponential time grows explosively: if we input a relatively large $n$, we will be stuck in a long wait.

[Figure: the recursion tree of the brute-force search]

Observing the figure above, the exponential time complexity is caused by "overlapping subproblems". For example, $dp[9]$ is decomposed into $dp[8]$ and $dp[7]$, and $dp[8]$ is decomposed into $dp[7]$ and $dp[6]$; both contain the subproblem $dp[7]$.

By analogy, the subproblems contain smaller overlapping subproblems, without end. Most computing resources are wasted on these overlapping subproblems.
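The waste can be measured directly. The following helper (a hypothetical instrument, not part of the original listings) counts how many times each subproblem is solved by the uncached search:

```python
from collections import Counter

def count_subproblem_calls(n: int) -> Counter:
    """Count how often each dp[i] is recomputed by the uncached search."""
    calls = Counter()

    def dfs(i: int) -> int:
        calls[i] += 1  # record every visit to subproblem dp[i]
        if i == 1 or i == 2:
            return i
        return dfs(i - 1) + dfs(i - 2)

    dfs(n)
    return calls
```

For $n = 9$, the subproblem $dp[7]$ is solved $2$ times and $dp[2]$ is solved $21$ times; the counts grow like Fibonacci numbers as $i$ decreases.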

Method 2: Memoized search

To improve the efficiency of the algorithm, we want every overlapping subproblem to be computed only once. To this end, we declare an array `mem` to record the solution of each subproblem, and prune overlapping subproblems during the search:

  1. When $dp[i]$ is computed for the first time, we record it in `mem[i]` for later use.
  2. When $dp[i]$ needs to be computed again, we fetch the result directly from `mem[i]`, avoiding repeated computation.
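The two rules above can be sketched as follows (a minimal Python version, where `-1` marks a not-yet-computed entry; the official listings may differ in detail):

```python
def climbing_stairs_dfs_mem(n: int) -> int:
    """Top-down search with a mem array, so each dp[i] is computed once."""
    def dfs(i: int) -> int:
        if i == 1 or i == 2:  # dp[1] = 1, dp[2] = 2
            return i
        if mem[i] != -1:  # rule 2: reuse the recorded solution
            return mem[i]
        mem[i] = dfs(i - 1) + dfs(i - 2)  # rule 1: record on first computation
        return mem[i]

    mem = [-1] * (n + 1)
    return dfs(n)
```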

=== "Python"

```python title="climbing_stairs_dfs_mem.py"
[class]{}-[func]{dfs}

[class]{}-[func]{climbing_stairs_dfs_mem}
```

=== "C++"

```cpp title="climbing_stairs_dfs_mem.cpp"
[class]{}-[func]{dfs}

[class]{}-[func]{climbingStairsDFSMem}
```

=== "Java"

```java title="climbing_stairs_dfs_mem.java"
[class]{climbing_stairs_dfs_mem}-[func]{dfs}

[class]{climbing_stairs_dfs_mem}-[func]{climbingStairsDFSMem}
```

=== "C#"

```csharp title="climbing_stairs_dfs_mem.cs"
[class]{climbing_stairs_dfs_mem}-[func]{dfs}

[class]{climbing_stairs_dfs_mem}-[func]{climbingStairsDFSMem}
```

=== "Go"

```go title="climbing_stairs_dfs_mem.go"
[class]{}-[func]{dfsMem}

[class]{}-[func]{climbingStairsDFSMem}
```

=== "Swift"

```swift title="climbing_stairs_dfs_mem.swift"
[class]{}-[func]{dfs}

[class]{}-[func]{climbingStairsDFSMem}
```

=== "JS"

```javascript title="climbing_stairs_dfs_mem.js"
[class]{}-[func]{dfs}

[class]{}-[func]{climbingStairsDFSMem}
```

=== "TS"

```typescript title="climbing_stairs_dfs_mem.ts"
[class]{}-[func]{dfs}

[class]{}-[func]{climbingStairsDFSMem}
```

=== "Dart"

```dart title="climbing_stairs_dfs_mem.dart"
[class]{}-[func]{dfs}

[class]{}-[func]{climbingStairsDFSMem}
```

=== "Rust"

```rust title="climbing_stairs_dfs_mem.rs"
[class]{}-[func]{dfs}

[class]{}-[func]{climbing_stairs_dfs_mem}
```

=== "C"

```c title="climbing_stairs_dfs_mem.c"
[class]{}-[func]{dfs}

[class]{}-[func]{climbingStairsDFSMem}
```

=== "Zig"

```zig title="climbing_stairs_dfs_mem.zig"
[class]{}-[func]{dfs}

[class]{}-[func]{climbingStairsDFSMem}
```

Observe the figure below: after memoization, every overlapping subproblem needs to be computed only once, and the time complexity is optimized to $O(n)$, a huge leap.

[Figure: the recursion tree after memoization]

Method 3: Dynamic programming

Memoized search is a "top-down" method: we start from the original problem (the root node) and recursively decompose larger subproblems into smaller ones until the known smallest subproblems (the leaf nodes) are solved. After that, the solutions of the subproblems are collected layer by layer while backtracking, building up the solution of the original problem.

In contrast, dynamic programming is a "bottom-up" method: starting from the solutions of the smallest subproblems, it iteratively builds the solutions of larger subproblems until the solution of the original problem is obtained.

Since dynamic programming contains no backtracking process, it can be implemented with simple loop iteration, without recursion. In the following code, we initialize an array `dp` to store the solutions of the subproblems; it plays the same recording role as the array `mem` in memoized search.
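A minimal Python sketch of this loop (the tabbed listings that follow are the official versions):

```python
def climbing_stairs_dp(n: int) -> int:
    """Bottom-up dynamic programming over a dp table."""
    if n == 1 or n == 2:
        return n
    dp = [0] * (n + 1)
    dp[1], dp[2] = 1, 2  # initial states
    for i in range(3, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]  # state transition equation
    return dp[n]
```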

=== "Python"

```python title="climbing_stairs_dp.py"
[class]{}-[func]{climbing_stairs_dp}
```

=== "C++"

```cpp title="climbing_stairs_dp.cpp"
[class]{}-[func]{climbingStairsDP}
```

=== "Java"

```java title="climbing_stairs_dp.java"
[class]{climbing_stairs_dp}-[func]{climbingStairsDP}
```

=== "C#"

```csharp title="climbing_stairs_dp.cs"
[class]{climbing_stairs_dp}-[func]{climbingStairsDP}
```

=== "Go"

```go title="climbing_stairs_dp.go"
[class]{}-[func]{climbingStairsDP}
```

=== "Swift"

```swift title="climbing_stairs_dp.swift"
[class]{}-[func]{climbingStairsDP}
```

=== "JS"

```javascript title="climbing_stairs_dp.js"
[class]{}-[func]{climbingStairsDP}
```

=== "TS"

```typescript title="climbing_stairs_dp.ts"
[class]{}-[func]{climbingStairsDP}
```

=== "Dart"

```dart title="climbing_stairs_dp.dart"
[class]{}-[func]{climbingStairsDP}
```

=== "Rust"

```rust title="climbing_stairs_dp.rs"
[class]{}-[func]{climbing_stairs_dp}
```

=== "C"

```c title="climbing_stairs_dp.c"
[class]{}-[func]{climbingStairsDP}
```

=== "Zig"

```zig title="climbing_stairs_dp.zig"
[class]{}-[func]{climbingStairsDP}
```

The figure below simulates the execution process of the above code.

[Figure: the execution process of the dynamic programming code]

Like the backtracking algorithm, dynamic programming also uses the concept of "state" to represent a specific stage of solving the problem; each state corresponds to a subproblem and its local optimal solution. For example, the state of the stair-climbing problem is defined as the current step index $i$.

Based on the above, we can summarize the common terms of dynamic programming.

  • The array `dp` is called the "dp table", and $dp[i]$ represents the solution of the subproblem corresponding to state $i$.
  • The states corresponding to the smallest subproblems (steps $1$ and $2$) are called the "initial states".
  • The recurrence formula $dp[i] = dp[i-1] + dp[i-2]$ is called the "state transition equation".

Space optimization

Careful readers may notice that since $dp[i]$ is related only to $dp[i-1]$ and $dp[i-2]$, we do not need an array `dp` to store the solutions of all subproblems; two variables rolling forward are enough.
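Sketched in Python, the rolling variables might look like this (an illustrative version; the official listings may differ in detail):

```python
def climbing_stairs_dp_comp(n: int) -> int:
    """Space-optimized DP: keep only dp[i-2] and dp[i-1]."""
    if n == 1 or n == 2:
        return n
    a, b = 1, 2  # a = dp[i-2], b = dp[i-1]
    for _ in range(3, n + 1):
        a, b = b, a + b  # roll the two variables forward
    return b  # b now holds dp[n]
```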

=== "Python"

```python title="climbing_stairs_dp.py"
[class]{}-[func]{climbing_stairs_dp_comp}
```

=== "C++"

```cpp title="climbing_stairs_dp.cpp"
[class]{}-[func]{climbingStairsDPComp}
```

=== "Java"

```java title="climbing_stairs_dp.java"
[class]{climbing_stairs_dp}-[func]{climbingStairsDPComp}
```

=== "C#"

```csharp title="climbing_stairs_dp.cs"
[class]{climbing_stairs_dp}-[func]{climbingStairsDPComp}
```

=== "Go"

```go title="climbing_stairs_dp.go"
[class]{}-[func]{climbingStairsDPComp}
```

=== "Swift"

```swift title="climbing_stairs_dp.swift"
[class]{}-[func]{climbingStairsDPComp}
```

=== "JS"

```javascript title="climbing_stairs_dp.js"
[class]{}-[func]{climbingStairsDPComp}
```

=== "TS"

```typescript title="climbing_stairs_dp.ts"
[class]{}-[func]{climbingStairsDPComp}
```

=== "Dart"

```dart title="climbing_stairs_dp.dart"
[class]{}-[func]{climbingStairsDPComp}
```

=== "Rust"

```rust title="climbing_stairs_dp.rs"
[class]{}-[func]{climbing_stairs_dp_comp}
```

=== "C"

```c title="climbing_stairs_dp.c"
[class]{}-[func]{climbingStairsDPComp}
```

=== "Zig"

```zig title="climbing_stairs_dp.zig"
[class]{}-[func]{climbingStairsDPComp}
```

Observing the above code: since the space occupied by the array `dp` is saved, the space complexity drops from $O(n)$ to $O(1)$.

In dynamic programming problems, the current state often depends only on a limited number of preceding states. In that case, we can retain only the necessary states and save memory through "dimensionality reduction". This space-optimization technique is commonly called "rolling variables" or "rolling array".


Origin blog.csdn.net/zy_dreamer/article/details/132923961