Detailed explanation of algorithm time complexity

1: Usage of the master formula

T(N) = a * T(N / b) + f(N)   (where f(N) is a polynomial of degree d, i.e. f(N) = O(N^d))


(1) log(b, a) > d ==> the time complexity is O(N^(log(b,a)))

(2) log(b, a) = d ==> the time complexity is O(N^d * log N)

(3) log(b, a) < d ==> the time complexity is O(N^d), i.e. O(f(N))
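The three cases are mechanical to apply, so here is a minimal Python sketch (assuming f(N) is exactly a polynomial of degree d; it does not handle the extra log factor that appears in examples 2-1 and 2-4 below):

```python
import math

def master_theorem(a: int, b: int, d: float) -> str:
    """Classify T(N) = a*T(N/b) + O(N^d) by comparing log(b, a) with d."""
    c = math.log(a, b)              # the critical exponent log(b, a)
    if c > d:
        return f"O(N^{c:.3f})"      # case (1): recursion dominates
    if abs(c - d) < 1e-9:
        return f"O(N^{d} * log N)"  # case (2): balanced
    return f"O(N^{d})"              # case (3): work at the root dominates

# Example 2-3 below: T(N) = 2T(N/2) + O(N)  ->  O(N^1 * log N)
print(master_theorem(2, 2, 1))
```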

2: Practical examples

2-1: Use the divide-and-conquer method to solve a problem of size N. Each step divides the problem into 8 sub-problems of size N/3, and the solving step takes O(N^2 log N). The time complexity of the entire algorithm is:

Analysis: T(N) = 8T(N/3) + O(N^2 log N)
so: a = 8, b = 3, d = 2
log(3, 8) ≈ 1.89 < 2 ==> the N^2 log N term dominates, so the time complexity is O(N^2 log N). The answer is A.

A. O(N^2 log N)
B. O(N^2 log^2 N)
C. O(N^3 log N)
D. O(N^(log(3,8)))
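As a numerical sanity check (a sketch assuming the base case T(1) = 1, which the problem does not specify), the recurrence can be evaluated directly; the ratio T(N) / (N^2 log N) stays bounded, which is exactly what O(N^2 log N) predicts:

```python
import math

# Evaluate T(N) = 8*T(N/3) + N^2 * log N directly (assumed base case T(1) = 1).
# The ratio below stays bounded, climbing slowly toward 9 = 1 / (1 - 8/9),
# so T(N) really is Theta(N^2 log N).
def T(n: float) -> float:
    if n <= 1:
        return 1.0
    return 8 * T(n / 3) + n * n * math.log(n)

for k in (5, 10, 20, 40):
    n = float(3 ** k)
    print(f"N = 3^{k:<2}  T(N) / (N^2 log N) = {T(n) / (n * n * math.log(n)):.3f}")
```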

2-2: Use the divide-and-conquer method to solve a problem of size N. Which of the following methods is the slowest?

A. At each step, the problem is divided into two sub-problems of size N/3, and the solving step takes O(N).
Analysis: T(N) = 2T(N/3) + O(N)
so: a = 2, b = 3, d = 1
log(3, 2) < 1 ==> the time complexity is O(N)

B. At each step, the problem is divided into two sub-problems of size N/3, and the solving step takes O(N log N).
Analysis: T(N) = 2T(N/3) + O(N log N)
so: a = 2, b = 3, d = 1
log(3, 2) < 1 ==> the N log N term dominates, so the time complexity is O(N log N)

C. At each step, the problem is divided into 3 sub-problems of size N/2, and the solving step takes O(N).
Analysis: T(N) = 3T(N/2) + O(N)
so: a = 3, b = 2, d = 1
log(2, 3) ≈ 1.585 > 1 ==> the time complexity is O(N^(log(2,3)))

D. At each step, the problem is divided into 3 sub-problems of size N/3, and the solving step takes O(N log N).
Analysis: T(N) = 3T(N/3) + O(N log N)
so: a = 3, b = 3, d = 1
log(3, 3) = 1, and f(N) carries an extra log factor, so (as in 2-4 below) the complexity picks up one more log: O(N log^2 N)

Since N^(log(2,3)) ≈ N^1.585 eventually dominates both O(N) and N · polylog(N), method C is the slowest.
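To make the comparison concrete, a small sketch evaluates the four complexities derived above at increasing N; the polynomial exponent log(2, 3) ≈ 1.585 eventually beats every N · polylog(N) term:

```python
import math

# Growth comparison for 2-2: evaluate each option's complexity at increasing N.
options = {
    "A: N":            lambda n: float(n),
    "B: N log N":      lambda n: n * math.log(n),
    "C: N^(log(2,3))": lambda n: n ** math.log2(3),
    "D: N log^2 N":    lambda n: n * math.log(n) ** 2,
}
for n in (10**3, 10**6, 10**9):
    row = ",  ".join(f"{name} = {f(n):.1e}" for name, f in options.items())
    print(f"N = {n:.0e}:  {row}")
```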

2-3: The time complexity of a given program satisfies the recurrence T(1) = 1, T(N) = 2T(N/2) + N. Which of the following best describes the program's time complexity?

Analysis: T(N) = 2T(N/2) + O(N)
so: a = 2, b = 2, d = 1
log(2, 2) = 1 ==> the time complexity is O(N log N). The answer is C.

A. O(log N)
B. O(N)
C. O(N log N)
D. O(N^2)
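A recursion-tree view makes this bound concrete: level i has 2^i subproblems of size N/2^i, so every level does total work N, and there are log_2 N levels. A short sketch (for N a power of two, ignoring the O(1) leaf costs):

```python
# Sum the work of T(N) = 2*T(N/2) + N level by level.
N = 1 << 10                          # N = 1024, a power of two for clean halving
total, level, size = 0, 0, N
while size > 1:
    total += (2 ** level) * size     # work at level i: 2^i * (N / 2^i) = N
    level, size = level + 1, size // 2
print(total, N * level)              # both print 10240 = N * log2(N)
```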

2-4: When using the divide-and-conquer method to solve a problem of size N, each step divides the problem into 4 sub-problems of size N/2 and takes O(N^2 log N) for the solving step. Which of the following is closest to the overall time complexity?

Analysis: T(N) = 4T(N/2) + O(N^2 log N)
so: a = 4, b = 2, d = 2
log(2, 4) = 2, and f(N) carries an extra log factor, so the balanced case gains one more log: O(N^2 log N · log N) = O(N^2 log^2 N). The answer is D.

A. O(N^2 log N)
B. O(N^2)
C. O(N^3 log N)
D. O(N^2 log^2 N)
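The extra log factor can be read off the recursion tree: level i contributes 4^i · (N/2^i)^2 · log_2(N/2^i) = N^2 · (log_2 N − i), and summing log_2 N such levels gives roughly N^2 log^2 N / 2. A short numerical sketch (for N a power of two, leaf costs ignored):

```python
import math

# Sum the work of T(N) = 4*T(N/2) + N^2 * log2(N) level by level.
N = 1 << 16
levels = int(math.log2(N))
total = sum(N * N * (math.log2(N) - i) for i in range(levels))
print(total / (N * N * math.log2(N) ** 2))   # 0.53125 -- a constant fraction,
                                             # so T(N) = Theta(N^2 log^2 N)
```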

2-5: Given two n × n matrices A and B, consider the following divide-and-conquer method for computing the matrix product C = A·B. Divide each matrix into four (n/2) × (n/2) submatrices as follows:

[Figure: the 2×2 block partitions of A, B, and C, and the definitions of the seven products P1, …, P7]
All matrix multiplications here are done recursively. Each block of matrix C can be obtained by addition and subtraction using P1, P2, ⋯, P7.
Which of the following is closest to the actual algorithm time complexity?
Analysis: the problem splits into 7 sub-problems of size n/2, plus O(n^2) work for the block additions and subtractions,
namely: T(n) = 7T(n/2) + O(n^2), with a = 7, b = 2, d = 2
log(2, 7) ≈ 2.81 > 2 ==> the time complexity is O(n^(log(2,7))). The answer is C.

A. O(n^2 log_2 n)
B. O(n^e)
C. O(n^(log(2,7)))
D. O(n^3)
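Counting only the recursive multiplications (a sketch assuming n is a power of two and ignoring the O(n^2) additions) confirms the exponent: M(n) = 7·M(n/2) with M(1) = 1 gives exactly n^(log(2,7)):

```python
import math

# Multiplication count of the Strassen-style recursion in 2-5:
# M(n) = 7 * M(n/2), M(1) = 1  =>  M(n) = 7^(log2 n) = n^(log2 7) ~ n^2.807.
def mults(n: int) -> int:
    return 1 if n == 1 else 7 * mults(n // 2)

for k in (4, 6, 8, 10):
    n = 1 << k
    print(f"n = {n:4d}  M(n) = {mults(n):>10}  n^(log2 7) = {n ** math.log2(7):.4e}")
```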

2-6: Suppose you need to compute the chained product of the following 6 matrices:

[Figure: the six matrices A1, …, A6 and their dimensions]
Recursively computing the optimal number of scalar multiplications yields the following table m[i][j]:
[Figure: the table of optimal multiplication counts m[i][j]]
So, what is the minimum number of multiplications for sub-problems A2 to A5?
Analysis: by the optimal-substructure property of dynamic programming, each entry m[i][j] already stores the optimal cost of multiplying A_i through A_j, so the answer is read directly from the table as entry m[2][5].

A. 5000
B. 2500
C. 4375
D. 7125
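The table itself comes from the standard matrix-chain dynamic program, sketched below. Since the figure with the actual dimensions is missing here, the dimension list p is an assumption: it uses the classic CLRS six-matrix example (30×35, 35×15, 15×5, 5×10, 10×20, 20×25), whose m-table contains all four answer choices and gives m[2][5] = 7125, i.e. option D:

```python
# Matrix-chain DP: m[i][j] = minimum scalar multiplications for A_i ... A_j,
# where matrix A_i has dimensions p[i-1] x p[i].
def matrix_chain(p: list[int]) -> list[list[int]]:
    n = len(p) - 1                              # number of matrices
    m = [[0] * (n + 1) for _ in range(n + 1)]   # 1-indexed table
    for length in range(2, n + 1):              # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = min(m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                          for k in range(i, j))
    return m

p = [30, 35, 15, 5, 10, 20, 25]   # assumed dims (the classic CLRS example)
print(matrix_chain(p)[2][5])      # 7125 -- the optimal cost for A2 ... A5
```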

Continuously updating...


Origin blog.csdn.net/qq_52331221/article/details/128110757