Algorithm exam review

Table of contents

1. Short answer questions (30 points)

The meaning of the symbols O, Ω and Θ

Basic steps of divide and conquer

Two basic elements of dynamic programming algorithms

Steps to Design a Dynamic Programming Algorithm

Similarities and Differences between Divide and Conquer and Dynamic Programming Algorithms

Two basic elements of the greedy method

The basic strategy of proving the correctness of the algorithm of the greedy method

Similarities and Differences between Greedy Algorithm and Dynamic Programming

In the depth-first search process of a graph, what three types of states of vertices can be distinguished, and what are the four types of edges in a directed graph?

If there is a situation where the weight of an edge is negative in a directed weighted graph, how to determine whether there is a negative cycle in the graph?

The concepts of maximum flow and minimum cut

Maximum flow min cut theorem and its proof

Circulation problem with demand and lower bound

The concept and use of polynomial reduction

The concepts of P problem, NP problem, NPC problem and NPH problem

Practical proof methods for NP-complete problems

Common NP-complete problems

The basic ideas of search algorithms such as Hill_climbing and best_first

2. Application questions (40 points)

Dynamic programming for shortest paths on multistage graphs and its generalizations

Extensions and applications of depth-first search algorithms for graphs

Partitioning Algorithm for Strongly Connected Components

The basic idea and process of Warshall's transitive closure algorithm for graphs

The basic idea and process of Bellman_Ford algorithm

The solution algorithm of the maximum flow minimum cut problem Ford_Fulkerson (augmented path algorithm)

Applications of max-flow/min-cut algorithms, such as bipartite graph matching, edge-disjoint path problems, and project selection problems

Solving recursive equations

3. Algorithm Design Questions (30 points)

Divide and conquer method: counting inversions, median finding, majority element search

Greedy method: Dijkstra algorithm, interval scheduling problem, interval partitioning problem

Dynamic programming problems: longest increasing subsequence, edit distance problem, matrix multiplication problem, knapsack problem, minimum independent set of trees, etc.

Graph algorithms: depth-first search, shortest path problem, etc.


The meaning of the symbols O, Ω and Θ

  • O: asymptotic upper bound on the time-complexity function

  • Ω: asymptotic lower bound on the time-complexity function

  • Θ: asymptotically tight bound (simultaneously an upper and a lower bound) on the time-complexity function

Basic steps of divide and conquer

  • Divide the original problem into sub-problems of the same type
  • Solve subproblems recursively
  • Merge recursive solutions
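As an illustration (not from the original notes), the three steps map directly onto merge sort; a minimal Python sketch:

```python
def merge_sort(a):
    # Divide: split the problem into two subproblems of the same type
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])   # Solve subproblems recursively
    right = merge_sort(a[mid:])
    # Merge: combine the two sorted halves into one sorted list
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```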

Two basic elements of dynamic programming algorithms

  • Optimal substructure: the optimal solution contains the optimal solutions to its subproblems
  • Overlapping subproblems: the same subproblems recur during the computation, so each is solved once and its solution reused

Steps to Design a Dynamic Programming Algorithm

  • Find the optimal substructure
  • Establish DP equation
  • Solve the DP equation
  • Backtracking to the optimal solution

Similarities and Differences between Divide and Conquer and Dynamic Programming Algorithms

  • Same: both require the original problem to have optimal substructure, that is, the problem can be decomposed into subproblems of the same type
  • different:
    1. Divide and conquer method uses recursive solution; dynamic programming mostly uses iterative method.
    2. The sub-problems of the divide-and-conquer method are independent of each other; the sub-problems of dynamic programming are overlapping.

Two basic elements of the greedy method

  • Greedy-choice property: a globally optimal solution can be reached through locally optimal (greedy) choices
  • optimal substructure

The basic strategy of proving the correctness of the algorithm of the greedy method

  • Prove that at each decision stage, the greedy choice is at least as good as any other possible choice.
  • Exchange argument:
    1. Assume there is an optimal solution to the original problem that differs from the solution produced by the greedy method
    2. Identify one place where the optimal solution and the greedy solution differ
    3. Show that the optimal solution's choice at that point can be replaced by the greedy choice without reducing the quality of the solution
  • Bound argument: give a bound that every solution to the original problem must satisfy, then show that the solution produced by the greedy method always attains this bound.
  • Mathematical induction
  • Proof by contradiction

Similarities and Differences between Greedy Algorithm and Dynamic Programming

  • Same: both are used to solve optimization decision-making problems
  • Difference: the greedy method commits to the currently best choice and works top-down; dynamic programming searches for the global optimum and mostly works bottom-up.

In the depth-first search process of a graph, what three types of states of vertices can be distinguished, and what are the four types of edges in a directed graph?

  1. Vertex states
    • white: not yet visited
    • gray: discovered, but not all of its descendants have been finished yet
    • black: finished; all of its descendants have been visited as well
  2. Edges in a directed graph
    • tree edge (u, v): v is an unvisited (white) vertex first discovered via this edge
    • back edge (u, v): v is an ancestor of u and has already been visited (v is gray)
    • forward edge (u, v): u is an ancestor of v, and v has already been finished (v is black)
    • cross edge: a general term for all remaining edges that belong to none of the above three types
      (figure: the four types of edges in a directed graph)

If there is a situation where the weight of an edge is negative in a directed weighted graph, how to determine whether there is a negative cycle in the graph?

Run the complete Bellman_Ford algorithm to find shortest paths from a single source, then perform one more relaxation pass over all edges: if any dist[] value still becomes smaller, the graph contains a negative cycle.

Bellman_Ford:

shortest_path(G, l, s)
    for all u in V
        dist[u] = INF
    dist[s] = 0
    repeat |V|-1 times:
        for all e in E
            update(e)

procedure update( edge(u,v) )
    if dist[v] > dist[u] + l(u,v)
        dist[v] = dist[u] + l(u,v)
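A runnable Python version of this pseudocode, with the extra relaxation pass that performs the negative-cycle test described above (the edge-list representation is an assumption):

```python
INF = float('inf')

def bellman_ford(n, edges, s):
    # n vertices labeled 0..n-1, edges = [(u, v, weight)], source s
    dist = [INF] * n
    dist[s] = 0
    # Relax every edge |V|-1 times
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One more pass: any further improvement means a negative cycle exists
    has_negative_cycle = any(dist[u] + w < dist[v] for u, v, w in edges)
    return dist, has_negative_cycle
```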

The concepts of maximum flow and minimum cut

Maximum flow min cut theorem and its proof

Circulation problem with demand and lower bound

The concept and use of polynomial reduction

  1. concept

Given two problems A and B, we say that problem A can be polynomially reduced to problem B when it satisfies the following conditions:

There is a function f that can transform the input of problem A into the input of problem B in polynomial time.

A(x) = Yes if and only if B(f(x)) = Yes, denoted A ≤p B. Solving problem B is at least as hard as solving problem A.

  2. Uses:
    • Determine the polynomial time solvability of a problem
    • Determine the difficulty of the problem
    • Determine the equal difficulty of the problem
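As a concrete illustration of a reduction (a standard textbook example, not taken from these notes): S is an independent set of G exactly when V − S is a vertex cover, which gives Independent Set ≤p Vertex Cover, with f computable in polynomial time. A small Python check of this equivalence on a toy graph:

```python
from itertools import combinations

def is_independent_set(edges, S):
    # No edge may have both endpoints inside S
    return all(not (u in S and v in S) for u, v in edges)

def is_vertex_cover(edges, C):
    # Every edge must have at least one endpoint in C
    return all(u in C or v in C for u, v in edges)

# Reduction f: an Independent Set instance (G, k) maps to the
# Vertex Cover instance (G, |V|-k); verify the equivalence exhaustively.
V = {0, 1, 2, 3}
edges = [(0, 1), (1, 2), (2, 3)]
for r in range(len(V) + 1):
    for S in combinations(V, r):
        S = set(S)
        assert is_independent_set(edges, S) == is_vertex_cover(edges, V - S)
```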

The concepts of P problem, NP problem, NPC problem and NPH problem

  1. P problem: a decision problem that can be solved in polynomial time, e.g. graph search, shortest paths, minimum spanning tree

  2. NP problem: a decision problem whose yes-instances can be verified in polynomial time, e.g. the Hamiltonian cycle problem

  3. NPC problem (NP-complete problem): a problem that (a) is in NP and (b) every problem in NP polynomially reduces to it. No polynomial-time algorithm is currently known for any NPC problem, but it has not been proven that none exists.

  4. NPH problem (NP-hard problem): satisfies condition (b) above, but not necessarily condition (a), i.e. it need not be in NP

Practical proof methods for NP-complete problems

  • Prove that A is in NP
  • Select a known NPC problem and show that it polynomially reduces to problem A

Common NP-complete problems

  • Clique problem: deciding whether a graph contains a clique of a given size (the decision version of finding the largest clique)
  • Maximum independent set problem
  • Vertex cover problem: In a given graph, find the vertex cover with minimum size
  • Hamiltonian cycle problem
  • traveling salesman problem

The basic ideas of search algorithms such as Hill_climbing and best_first

  • Hill_climbing (hill climbing algorithm)

The hill climbing algorithm is a heuristic search that aims to avoid traversing all elements of the search space. Like a climber who, wanting to reach the highest peak, always climbs the highest hill directly ahead, it adopts a greedy, locally optimizing strategy and may therefore stop at a local optimum.
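A minimal sketch of this strategy (Python; the toy objective function and integer neighborhood are illustrative assumptions):

```python
def hill_climbing(f, start, neighbors):
    # Repeatedly move to the best neighbor; stop at a local optimum
    current = start
    while True:
        best = max(neighbors(current), key=f)
        if f(best) <= f(current):
            return current  # no neighbor improves: local (not necessarily global) optimum
        current = best

# Toy objective with a single peak at x = 5
f = lambda x: -(x - 5) ** 2
result = hill_climbing(f, 0, lambda x: [x - 1, x + 1])
```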

  • Best_First search algorithm

Best-first search adds an evaluation function on top of depth-first search. Best_First Search = DFS + BFS

Best_First Search algorithm description:
1. N is a list of nodes kept sorted by the evaluation function, from smallest to largest (initially, N contains only the root node)
2. If N is empty, stop
3. Take the first element n of N and delete it from N. If n is the goal, stop; otherwise, go to step 4
4. Insert the successors of n into N, denoted N', sort N', and go to step 2
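The numbered steps amount to a priority-queue loop; a minimal Python sketch using heapq (the toy graph and heuristic values are assumptions for illustration):

```python
import heapq

def best_first_search(start, goal, successors, h):
    # N is the sorted node list, kept here as a min-heap ordered by h
    frontier = [(h(start), start)]
    visited = set()
    while frontier:                      # step 2: stop when N is empty
        _, n = heapq.heappop(frontier)   # step 3: take and remove the first element
        if n == goal:
            return True
        if n in visited:
            continue
        visited.add(n)
        for m in successors(n):          # step 4: insert successors, keeping N sorted
            if m not in visited:
                heapq.heappush(frontier, (h(m), m))
    return False

# Toy graph and heuristic (illustrative assumptions)
graph = {'S': ['A', 'B'], 'A': ['G'], 'B': [], 'G': []}
h = {'S': 3, 'A': 1, 'B': 2, 'G': 0}
found = best_first_search('S', 'G', lambda n: graph[n], lambda n: h[n])
```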


  • BFS (breadth-first search)

Algorithm description:
    1. Enqueue the root node into the **queue** Q
    2. If the front element of the queue is the goal, stop; otherwise, go to step 3
    3. Dequeue the front element and enqueue its successors
    4. If Q is empty, stop; otherwise, go to step 2

BFS pseudocode:

BFS(G, s)
    for all u in V
        dist[u] = INF
    dist[s] = 0
    Q = [s] // queue containing just s
    while Q is not empty
        u = DeQueue(Q)
        for all edges (u,v) in E
            if dist[v] = INF
                EnQueue(Q, v)
                dist[v] = dist[u] + 1
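The same procedure as runnable Python (an adjacency-list dictionary is assumed as the graph representation):

```python
from collections import deque

INF = float('inf')

def bfs(adj, s):
    # adj: adjacency list {u: [v, ...]}; returns edge-count distances from s
    dist = {u: INF for u in adj}
    dist[s] = 0
    Q = deque([s])  # queue containing just s
    while Q:
        u = Q.popleft()
        for v in adj[u]:
            if dist[v] == INF:  # v has not been discovered yet
                Q.append(v)
                dist[v] = dist[u] + 1
    return dist
```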


  • DFS (depth-first search)

Algorithm description:
    1. Push the root node onto the **stack** S
    2. If the top element of the stack is the goal, stop; otherwise, go to step 3
    3. Pop the top element and push its successors onto the stack
    4. If S is empty, stop; otherwise, go to step 2

DFS pseudocode:
DFS(s)
    for each u in V
        color[u] = white
    clock = 1
    for each u in V
        if color[u] = white
        then explore(G,u)

explore(G,u)
    color[u] = gray
    pre[u] = clock
    clock = clock + 1
    for each (u,v) in E
        if color[v] = white
        then explore(G,v)
    color[u] = black
    post[u] = clock
    clock = clock + 1
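The DFS pseudocode as runnable Python, recording the pre/post clock values (white/gray/black states as in the earlier answer about vertex states):

```python
def dfs(adj):
    # adj: adjacency list {u: [v, ...]}; returns pre/post clock values
    color = {u: 'white' for u in adj}
    pre, post = {}, {}
    clock = [1]  # mutable counter shared across recursive calls

    def explore(u):
        color[u] = 'gray'            # discovered, descendants not yet finished
        pre[u] = clock[0]; clock[0] += 1
        for v in adj[u]:
            if color[v] == 'white':  # (u, v) is a tree edge
                explore(v)
        color[u] = 'black'           # finished
        post[u] = clock[0]; clock[0] += 1

    for u in adj:
        if color[u] == 'white':
            explore(u)
    return pre, post
```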

Application questions (40 points)

Dynamic programming for shortest paths on multistage graphs and its generalizations

Extensions and applications of depth-first search algorithms for graphs

Partitioning Algorithm for Strongly Connected Components

The basic idea and process of Warshall's transitive closure algorithm for graphs

The basic idea and process of Bellman_Ford algorithm

The solution algorithm of the maximum flow minimum cut problem Ford_Fulkerson (augmented path algorithm)

Applications of max-flow/min-cut algorithms, such as bipartite graph matching, edge-disjoint path problems, and project selection problems

Solving recursive equations

Algorithm Design Questions (30 points)

Divide and conquer method: counting inversions, median finding, majority element search

  • Counting inversions
  • Median finding

    
    Find_kth(A, low, high, kth)
        if low == high
            return A[low]
        q = random_partition(A, low, high)
        k = q - low + 1
        if kth == k
            return A[q]
        else if kth < k
            return Find_kth(A, low, q-1, kth)
        else
            return Find_kth(A, q+1, high, kth-k)
    
  • Majority element search
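The Find_kth routine above can be sketched as runnable randomized quickselect in Python (kth is 1-indexed):

```python
import random

def find_kth(a, kth):
    # Return the kth smallest element (1-indexed) of the non-empty list a
    pivot = random.choice(a)
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    if kth <= len(less):
        return find_kth(less, kth)          # answer lies left of the pivot
    if kth <= len(less) + len(equal):
        return pivot                        # pivot itself is the answer
    greater = [x for x in a if x > pivot]
    return find_kth(greater, kth - len(less) - len(equal))
```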

Greedy method: Dijkstra algorithm, interval scheduling problem, interval partitioning problem

  • Dijkstra's algorithm

  • Interval scheduling (recursive)

    
    Greedy choice: earliest finish time
    Recursive_Interval_Scheduling(s, f, i, j)
        m = i + 1
        while m < j and s[m] < f[i]
            m = m + 1
        if m < j
            return {a_m} ∪ Recursive_Interval_Scheduling(s, f, m, j)
        else
            return ∅
    
  • Interval scheduling (iteration)

    
    Greedy_Interval_Scheduling(s, f)
        n = length[s]
        A = {a_1}
        i = 1
        for m = 2 to n
            if s[m] > f[i]
                A = A ∪ {a_m}
                i = m
        return A
    
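The iterative greedy schedule above can be run as Python (intervals as (start, finish) pairs; sorting by finish time implements the earliest-finish greedy choice):

```python
def interval_scheduling(intervals):
    # Greedy choice: always pick the compatible interval that finishes earliest
    A = []
    last_finish = float('-inf')
    for s, f in sorted(intervals, key=lambda iv: iv[1]):
        if s >= last_finish:  # compatible with everything chosen so far
            A.append((s, f))
            last_finish = f
    return A
```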

Dynamic programming problems: longest increasing subsequence, edit distance problem, matrix multiplication problem, knapsack problem, minimum independent set of trees, etc.

  • longest increasing subsequence

    
    Let L(i) denote the length of the longest increasing subsequence ending at a[i]; L(0) = 1
    L(i) = 1
    for k = 0 to i-1
        if a[k] < a[i]:    L(i) = max(L(i), L(k) + 1)
    
  • Matrix multiplication

    
    for i = 1 to n
        m(i,i) = 0
    for len = 1 to n-1
        for i = 1 to n-len
            j = i + len
            m(i,j) = INFINITY
            for k = i to j-1
                q = m(i,k) + m(k+1,j) + P[i-1]*P[k]*P[j]
                if q < m(i,j)
                then m(i,j) = q
    return m(1,n)
    
  • edit distance

    1
    2
    3
    4
    5
    6
    7
    8
    9
    10
    11
    
    EditDistance(A[1...m], B[1...n])
        // base cases
        for i = 0 to m
            E(i,0) = i
        for j = 1 to n
            E(0,j) = j
        // fill in the table
        for i = 1 to m
            for j = 1 to n
                diff = 0 if A[i] == B[j], else 1
                E(i,j) = min{ 1 + E(i-1,j), 1 + E(i,j-1), diff + E(i-1,j-1) }
        return E(m,n)
    
  • 0-1 knapsack problem

    
    0-1 Knapsack Algorithm
    for w = 0 to W
        B[0,w] = 0
    for i = 0 to n
        B[i,0] = 0
    for i = 1 to n
        for w = 1 to W
            if wi <= w
                if vi + B[i-1, w-wi] > B[i-1, w]
                    B[i,w] = vi + B[i-1, w-wi]
                else
                    B[i,w] = B[i-1, w]
            else
                B[i,w] = B[i-1, w]
    
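The 0-1 knapsack recurrence above as a runnable Python sketch:

```python
def knapsack_01(weights, values, W):
    # B[i][w] = best value using the first i items with capacity w
    n = len(weights)
    B = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        wi, vi = weights[i - 1], values[i - 1]
        for w in range(W + 1):
            if wi <= w and vi + B[i - 1][w - wi] > B[i - 1][w]:
                B[i][w] = vi + B[i - 1][w - wi]  # take item i
            else:
                B[i][w] = B[i - 1][w]            # skip item i
    return B[n][W]
```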

Graph algorithms: depth-first search, shortest path problem, etc.


Origin blog.csdn.net/weixin_42459772/article/details/100069748