Stop wasting time, classmates! Time complexity is no longer difficult to understand, just read this article!

1. What is time complexity

Time complexity is a measure of how the execution time of an algorithm changes as the size of the input grows; it describes the relationship between the time an algorithm takes to run and the size of its input.

Usually, we use Big O notation (O) to denote time complexity and divide algorithms into several categories, such as constant time, logarithmic time, linear time, quadratic time, etc., based on how their execution time grows with the size of the input.

2. Common time complexity

Time complexity O(1)

A time complexity of O(1) means that the execution time of the algorithm is constant, independent of the input size.

An example of O(1) time complexity is getting the value of the first element in an array. Regardless of the length of the array, simply accessing the first position by index takes a fixed amount of time.

For example, here is a function for getting the first element of an array:

def get_first_element(arr):
    return arr[0]

In this example, whether the length of the array is 10, 100, or 1000, the time to get the first element is the same; it does not grow as the array gets longer. Therefore, the time complexity of this function is O(1).
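
To see this in practice, here is a minimal sketch using Python's built-in timeit module (the array sizes and repeat count are arbitrary) that times the function on arrays of very different lengths:

import timeit

for size in (10, 1000, 100000):
    arr = list(range(size))
    # Time 100,000 calls; the result barely changes with the array size
    t = timeit.timeit(lambda: get_first_element(arr), number=100000)
    print(f"size={size}: {t:.4f}s")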

Time complexity O(n)

O(n) time complexity is even more common; it means that the execution time of an algorithm scales linearly with the size of the input.

A simple example of O(n) time complexity is computing the sum of elements in an array.

Here is a sample code that calculates the sum of an array:

def calculate_sum(arr):
    total = 0
    # Visit each element exactly once
    for num in arr:
        total += num
    return total

In this example, the algorithm iterates over every element of the input array and adds it to the variable total. Since each element is visited exactly once, the execution time grows linearly with the array length n, so the time complexity is O(n).

It should be noted that in actual programming there may be multiple steps or loops, but as long as they are executed one after another rather than nested, the overall time complexity is still O(n).
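
For instance, here is a hypothetical function (min_max is our own example, not from the original article) with two consecutive, non-nested loops; each loop is O(n), and O(n) + O(n) simplifies to O(n):

def min_max(arr):
    # Assumes a non-empty array
    # First pass: find the minimum (O(n))
    smallest = arr[0]
    for num in arr:
        if num < smallest:
            smallest = num

    # Second pass: find the maximum (O(n))
    largest = arr[0]
    for num in arr:
        if num > largest:
            largest = num

    # Two sequential O(n) passes are still O(n) overall
    return smallest, largest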

Time complexity O(log n)

A time complexity of O(log n) usually occurs in divide-and-conquer algorithms such as binary search, where the execution time of the algorithm grows logarithmically as the input size increases.

An example of O(log n) time complexity is the binary search algorithm, which finds the position of a particular element in a sorted array.

The following is a sample code for binary search:

def binary_search(arr, target):
    left = 0
    right = len(arr) - 1

    while left <= right:
        mid = (left + right) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            # Target is in the right half; discard the left half
            left = mid + 1
        else:
            # Target is in the left half; discard the right half
            right = mid - 1

    return -1  # Return -1 if the target element is not found

In this example, each iteration halves the search range until the target element is found or determined not to exist. For an input array of length n, at most about log2(n) iterations are therefore needed, so the number of iterations grows logarithmically with n.

Therefore, the time complexity of the binary search algorithm is O(log n), where n is the length of the input array.
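
A quick usage example (the array must already be sorted; the values here are made up):

arr = [1, 3, 5, 7, 9, 11]
print(binary_search(arr, 7))   # 3 (the index of 7)
print(binary_search(arr, 4))   # -1 (not present)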

Time complexity O(n log n)

A time complexity of O(n log n) usually arises in divide-and-conquer algorithms and efficient sorting algorithms, where the execution time grows proportionally to n log n as the size of the input increases.

An example of O(n log n) time complexity is merge sort. Merge sort is a common sorting algorithm that works by recursively dividing an array into smaller sub-arrays, sorting them, and then merging the sorted sub-arrays back together.

The following is a sample code for merge sort:

def merge_sort(arr):
    # Base case: an array of length 0 or 1 is already sorted
    if len(arr) <= 1:
        return arr

    # Split the array into two halves
    mid = len(arr) // 2
    left = arr[:mid]
    right = arr[mid:]

    # Recursively sort each half
    left = merge_sort(left)
    right = merge_sort(right)

    # Merge the two sorted halves
    return merge(left, right)

def merge(left, right):
    merged = []
    i = 0
    j = 0

    # Repeatedly take the smaller of the two front elements
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1

    # Append any remaining elements from the left half
    while i < len(left):
        merged.append(left[i])
        i += 1

    # Append any remaining elements from the right half
    while j < len(right):
        merged.append(right[j])
        j += 1

    return merged

In this example, merge sort uses a divide-and-conquer strategy: it splits the array into halves, sorts them recursively, and merges the sorted halves back together. At each level of the recursion, the sub-arrays are half as long as at the level above, and merging all the sub-arrays on a given level takes O(n) time in total. Since the array can be halved only about log n times, there are log n levels, so the time complexity of the entire merge sort is O(n log n).

Therefore, merge sort is a typical O(n log n) time complexity algorithm.
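
A brief usage example (the input values are arbitrary):

arr = [5, 2, 9, 1, 5, 6]
print(merge_sort(arr))  # [1, 2, 5, 5, 6, 9]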

Time complexity O(n^2)

A time complexity of O(n^2) usually occurs in the case of nested loops, where the execution time of the algorithm grows quadratically with the size of the input.

An example of O(n^2) time complexity is Selection Sort. Selection sort is a simple but inefficient sorting algorithm. Its basic idea is to select the smallest element from the unsorted part each time and put it at the end of the sorted part.

Here is the sample code for selection sort:

def selection_sort(arr):
    n = len(arr)

    for i in range(n):
        # Find the smallest element in the unsorted part arr[i:]
        min_idx = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j

        # Swap it to the end of the sorted part
        arr[i], arr[min_idx] = arr[min_idx], arr[i]

    return arr

In this example, the outer loop executes n times, and on the i-th pass the inner loop performs n-i-1 comparisons, where n is the length of the input array. Therefore, the total number of comparisons is (n-1) + (n-2) + ... + 1 = n*(n-1)/2, which grows quadratically in n, so the time complexity of selection sort is O(n^2).

It should be noted that selection sort is inefficient in most cases and is not suitable for sorting large-scale data. Its advantages are that it sorts in place, requiring no additional storage space, and that it performs at most n-1 swaps, so for small-scale data it can still be a reasonable choice.
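
A short usage example (arbitrary values); note that the function sorts the list in place and also returns it:

arr = [64, 25, 12, 22, 11]
print(selection_sort(arr))  # [11, 12, 22, 25, 64]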

Time complexity O(2^n)

A time complexity of O(2^n) typically occurs in brute-force or exhaustive-search algorithms, where the execution time grows exponentially with the size of the input.

A classic example of O(2^n) time complexity is the naive recursive computation of the Fibonacci sequence. The Fibonacci sequence is a recursively defined sequence in which each number is the sum of the two preceding numbers.

The following is a sample code to solve the Fibonacci sequence using recursion:

def fibonacci(n):
    # Base cases: fib(0) = 0, fib(1) = 1
    if n <= 1:
        return n

    # Each call spawns two more recursive calls
    return fibonacci(n-1) + fibonacci(n-2)

In this example, computing the nth Fibonacci number requires recursively computing and summing the two previous Fibonacci numbers. Each call spawns two further recursive calls whose inputs (n-1 and n-2) are only slightly smaller than the original, so the recursion tree roughly doubles in size at each level, and the total number of calls grows exponentially, bounded by O(2^n).

Note that computing Fibonacci numbers recursively in this way is very inefficient for large values of n due to the exponential growth. The time complexity can be reduced to O(n) by using dynamic programming or an iterative method to avoid repeated calculations, as sketched below.
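
As one possible optimization, here is a minimal iterative sketch (fibonacci_iterative is our own example name) that computes the same value in O(n) time and O(1) extra space:

def fibonacci_iterative(n):
    # Build the sequence bottom-up so each value is computed only once
    prev, curr = 0, 1
    for _ in range(n):
        prev, curr = curr, prev + curr
    return prev

For example, fibonacci_iterative(10) returns 55, the same result as the recursive version, but with only 10 loop iterations instead of an exponential number of calls.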

Origin blog.csdn.net/weixin_46475607/article/details/131966593