Explore the magic of the programming world: a brief analysis of the mysteries of classic algorithms

        

        Every programmer runs into many algorithms over a career, but a handful come up again and again and are well worth mastering. Let's talk about these essential "must-know" algorithms today.

One: Introduction

        Algorithms play an extremely important role in computer science and programming, and their importance is reflected in the following aspects:

  1. Problem-solving skills: Algorithms are a key tool for solving problems. They provide a way to describe a problem precisely, break it down into manageable sub-problems, and solve those sub-problems in a clear, systematic way. Mastering different types of algorithms enables programmers to solve a variety of complex problems more efficiently.

  2. Performance optimization: The choice of algorithm directly affects the performance of a program. An efficient algorithm can greatly reduce execution time and resource consumption. In applications that process large-scale data or require high performance, choosing the right algorithm is crucial.

  3. Generality: Many algorithms are general-purpose and can be applied across problem domains. For example, sorting algorithms, search algorithms, and graph algorithms are used in contexts ranging from database querying to image processing to artificial intelligence and machine learning.

  4. Fundamentals of computer science: Algorithms are one of the core concepts of computer science. Understanding and studying them is foundational to the discipline and shapes both its theory and its practice.

  5. Interviews and competitions: Programming interviews and contests frequently test algorithmic questions. Mastering algorithms not only helps you stand out in interviews, but also helps you do well in programming competitions.

  6. Innovation and new technology: New computer technology and innovation often require new algorithms. For example, fields such as artificial intelligence, blockchain, and cryptography all rely on advanced algorithms to drive technological development.

        In short, algorithms are the cornerstone of programming and computer science. They are not only the key to solving problems; they also improve program performance, drive technological innovation, and are a key element of career success. Learning and mastering algorithms is therefore crucial for programmers.

Two: Introduction to common algorithms

        As a programmer, it is very important to master some classic algorithms because they play a key role in solving various problems and optimizing program performance. Here are some classic algorithms that programmers should master:

  1. Sorting algorithms:

    • Bubble sort
    • Insertion sort
    • Selection sort
    • Quick sort
    • Merge sort
    • Heap sort
    • Counting sort
    • Bucket sort
    • Radix sort
  2. Search algorithms:

    • Linear search
    • Binary search
    • Depth-first search (DFS)
    • Breadth-first search (BFS)
  3. Graph algorithms:

    • Shortest path algorithms (such as Dijkstra's algorithm and the Bellman-Ford algorithm)
    • Minimum spanning tree algorithms (such as Prim's algorithm and Kruskal's algorithm)
    • Topological sort
    • Graph traversal algorithms (DFS and BFS)
  4. String matching algorithms:

    • Brute-force matching
    • KMP algorithm
    • Boyer-Moore algorithm
    • Rabin-Karp algorithm
  5. Dynamic programming:

    • Knapsack problem
    • Longest common subsequence problem
    • Minimum edit distance problem
    • Longest increasing subsequence problem
    • Shortest path problems (Dijkstra, Floyd-Warshall, etc.)
  6. Greedy algorithms:

    • The greedy-choice property
    • Applications of greedy algorithms (such as Huffman coding and the activity selection problem)
  7. Divide-and-conquer algorithms:

    • Merge sort
    • Quick sort
    • Tower of Hanoi problem
    • Closest pair of points problem
  8. Advanced data structures:

    • Heaps (max-heap and min-heap)
    • Balanced binary trees (such as AVL trees and red-black trees)
    • Hash tables
    • Graphs (adjacency matrix and adjacency list representations)
  9. Advanced search algorithms:

    • A* search algorithm
    • Game-tree search algorithms (such as Alpha-Beta pruning)

        These algorithms cover a variety of common programming and computer science problems. Mastering them can help you better understand and solve complex problems, and improve the efficiency and performance of your code.

Three: Introduction to key algorithms (string matching)

        The following is a brief discussion of four classic string matching algorithms:

(1): Brute-force matching

        The brute-force matching algorithm is the simplest string matching approach: it compares characters one by one against the main string to determine whether a pattern appears in it. Here is a Python code example implementing brute-force matching:

def brute_force_match(text, pattern):
    n = len(text)  # length of the main string
    m = len(pattern)  # length of the pattern string

    # Slide the pattern over the main string
    for i in range(n - m + 1):
        j = 0  # index into the pattern string
        while j < m and text[i + j] == pattern[j]:
            j += 1
        if j == m:  # every character of the pattern matched
            return i  # return the starting position of the match

    return -1  # no match found, return -1

# Test example
text = "Hello, World!"
pattern = "World"
result = brute_force_match(text, pattern)
if result != -1:
    print(f"Pattern found starting at index {result}")
else:
    print("Pattern not found in the text.")

        In this example, we first compute the lengths of the main string and the pattern string, and then use two nested loops to compare characters. The outer loop walks over the main string and the inner loop walks over the pattern string. If the inner loop matches the entire pattern, the function returns the starting position of the match; otherwise the outer loop moves on to the next position.

        Note that this is the simplest form of brute-force matching: it finds the first matching position and returns it. If you want to find all matching positions, you can collect them in a list and return the list, as sketched below.
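
        A minimal sketch of that variant (the helper name brute_force_match_all is ours, introduced only for illustration):

def brute_force_match_all(text, pattern):
    # Illustrative helper, not part of the original example
    n = len(text)  # length of the main string
    m = len(pattern)  # length of the pattern string
    positions = []
    for i in range(n - m + 1):
        if text[i:i + m] == pattern:  # slice comparison replaces the inner loop
            positions.append(i)
    return positions

# Test example
print(brute_force_match_all("abababa", "aba"))  # [0, 2, 4]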

        The time complexity of the brute-force matching algorithm is O((n-m+1)*m): in the worst case it compares the pattern against every position of the main string, so it may not be the most efficient choice for large texts. Its implementation is simple and easy to understand, however, which makes it suitable for small texts or as a first step in learning string matching algorithms.

(2): KMP algorithm

        The KMP algorithm (Knuth-Morris-Pratt algorithm) is an efficient string matching algorithm. It uses information about the pattern string itself to reduce the number of character comparisons, thereby improving matching efficiency. The core idea of the KMP algorithm is to construct a partial match table (also called the failure function or next array) to guide jumps during the matching process.

        Below is the basic idea of the KMP algorithm along with a Python code example:

Basic idea:

  1. Construct a partial match table for the pattern string; each entry records the length of the longest proper prefix of the pattern that is also a suffix ending at that position.

  2. During matching, when a character mismatch occurs, the partial match table tells us how far the pattern can be shifted, reducing the number of character comparisons.

def build_partial_match_table(pattern):
    m = len(pattern)
    partial_match_table = [0] * m
    length = 0  # length of the current longest prefix that is also a suffix
    i = 1

    while i < m:
        if pattern[i] == pattern[length]:
            length += 1
            partial_match_table[i] = length
            i += 1
        else:
            if length != 0:
                length = partial_match_table[length - 1]
            else:
                partial_match_table[i] = 0
                i += 1

    return partial_match_table

def kmp_search(text, pattern):
    n = len(text)
    m = len(pattern)
    partial_match_table = build_partial_match_table(pattern)
    i = 0  # index into the main string
    j = 0  # index into the pattern string

    while i < n:
        if pattern[j] == text[i]:
            i += 1
            j += 1

            if j == m:  # full match found
                return i - j

        else:
            if j != 0:
                j = partial_match_table[j - 1]
            else:
                i += 1

    return -1  # no match found

# Test example
text = "ABABABABABCABAABABAB"
pattern = "ABAB"
result = kmp_search(text, pattern)
if result != -1:
    print(f"Pattern found starting at index {result}")
else:
    print("Pattern not found in the text.")

 

        In the above example, the build_partial_match_table function builds the partial match table of the pattern string, and the kmp_search function then uses that table to perform the string matching. In this way, the KMP algorithm finds matches in linear time, O(n+m) in the worst case, where n is the length of the main string and m is the length of the pattern string.
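
        For the pattern "ABAB" used in the test above, build_partial_match_table returns [0, 0, 1, 2]: for example, the longest proper prefix of "ABAB" that is also a suffix is "AB", of length 2, which is the final entry.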

The advantage of the KMP algorithm is that it avoids unnecessary character comparisons; it performs especially well when the main string is long and the pattern string contains repeated characters. It is therefore a commonly used and efficient string matching algorithm.

(3): Boyer-Moore algorithm

        The Boyer-Moore algorithm is an efficient string search algorithm used to find occurrences of a pattern string in a text. Its main idea is to compare the pattern against the text from right to left, which allows large portions of the text to be skipped and minimizes the number of comparisons.

The core of the Boyer-Moore algorithm has two main parts:

  1. Bad Character Rule: When a mismatch occurs, the algorithm decides how far to shift the pattern to the right based on the mismatched text character. If that character does not occur in the pattern at all, the pattern can be shifted entirely past it. If it does occur in the pattern, the pattern is shifted so that the rightmost occurrence of that character in the pattern lines up with the mismatched character in the text.

  2. Good Suffix Rule: When a mismatch occurs after part of the pattern has already matched, the algorithm decides how far to shift based on that matched suffix. If the matched suffix occurs elsewhere in the pattern (or a prefix of the pattern matches part of it), the pattern is shifted so that this other occurrence lines up with the matched portion of the text.

The following is the Python code implementation of the Boyer-Moore algorithm:

def bad_character_table(pattern):
    # Map each character (except the last) to its distance from the end of the
    # pattern; the rightmost occurrence wins
    table = {}
    pattern_length = len(pattern)
    for i in range(pattern_length - 1):
        table[pattern[i]] = pattern_length - 1 - i
    return table

def good_suffix_table(pattern):
    # table[k] is the shift to apply when a suffix of length k has matched,
    # built in two passes following the standard Boyer-Moore construction
    pattern_length = len(pattern)
    table = [0] * pattern_length
    last_prefix_position = pattern_length

    # Case 1: the matched suffix also occurs as a prefix of the pattern
    for i in range(pattern_length, 0, -1):
        if is_prefix(pattern, i):
            last_prefix_position = i
        table[pattern_length - i] = last_prefix_position - i + pattern_length

    # Case 2: the matched suffix occurs again inside the pattern
    for i in range(pattern_length - 1):
        suffix_length = get_suffix_length(pattern, i)
        table[suffix_length] = pattern_length - 1 - i + suffix_length
    return table

def is_prefix(pattern, p):
    # Is pattern[p:] also a prefix of the pattern?
    pattern_length = len(pattern)
    j = 0
    for i in range(p, pattern_length):
        if pattern[i] != pattern[j]:
            return False
        j += 1
    return True

def get_suffix_length(pattern, p):
    # Length of the longest suffix of pattern[:p+1] that is also a suffix of the whole pattern
    pattern_length = len(pattern)
    length = 0
    j = pattern_length - 1
    for i in range(p, -1, -1):
        if pattern[i] == pattern[j]:
            length += 1
        else:
            break
        j -= 1
    return length

def boyer_moore(text, pattern):
    pattern_length = len(pattern)
    text_length = len(text)
    if pattern_length == 0:
        return 0

    bad_char = bad_character_table(pattern)
    good_suffix = good_suffix_table(pattern)

    i = pattern_length - 1
    while i < text_length:
        j = pattern_length - 1
        while pattern[j] == text[i]:
            if j == 0:
                return i
            i -= 1
            j -= 1
        # Shift by the larger of the two rules; a character that does not
        # occur in the pattern allows a full pattern-length shift
        i += max(good_suffix[pattern_length - 1 - j],
                 bad_char.get(text[i], pattern_length))
    
    return -1  # Pattern not found in text

# Example usage
text = "This is an example text for Boyer-Moore algorithm."
pattern = "Boyer-Moore"
result = boyer_moore(text, pattern)
if result != -1:
    print(f"Pattern found at index {result}")
else:
    print("Pattern not found in text")

        This code implements the main components of the Boyer-Moore algorithm: building the bad character table and the good suffix table, and using them to perform the search. To use it, simply pass the text and the pattern to the boyer_moore function. If the pattern is found, it returns the starting index of the pattern in the text; otherwise it returns -1.
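
        As a concrete illustration, for the pattern "Boyer-Moore" the bad_character_table function above produces {'B': 10, 'o': 2, 'y': 8, 'e': 7, 'r': 1, '-': 5, 'M': 4}: each character maps to the distance of its rightmost occurrence from the end of the pattern, with the final character itself deliberately excluded.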

(4): Rabin-Karp algorithm

        The Rabin-Karp algorithm is a fast string matching algorithm. It determines whether a pattern string appears in a text string and returns the matching positions. The main idea is to use a hash function to compute the hash value of each candidate substring of the text, and to compare the hash of the pattern with the hash of each substring in order to quickly locate possible matches. When the hashes match, a further character-by-character comparison is still required to confirm the match.

The following is a Python example implementation of the Rabin-Karp algorithm:

def rabin_karp(text, pattern):
    if not text or not pattern or len(pattern) > len(text):
        return []

    # Hash function parameters
    base = 256  # radix, usually the size of the character set
    prime = 101  # a small prime used for the hash computation

    # Compute the hash of the pattern and of the first text window
    pattern_hash = 0
    text_hash = 0
    for i in range(len(pattern)):
        pattern_hash = (pattern_hash * base + ord(pattern[i])) % prime
        text_hash = (text_hash * base + ord(text[i])) % prime

    # Precompute base^(m-1), where m is the length of the pattern
    base_power = 1
    for i in range(len(pattern) - 1):
        base_power = (base_power * base) % prime

    matches = []
    for i in range(len(text) - len(pattern) + 1):
        # If the hashes match, confirm with a character comparison
        if pattern_hash == text_hash:
            if text[i:i + len(pattern)] == pattern:
                matches.append(i)

        # Roll the hash forward to the next text window
        if i < len(text) - len(pattern):
            text_hash = (base * (text_hash - ord(text[i]) * base_power) + ord(text[i + len(pattern)])) % prime

            # Keep the hash non-negative
            if text_hash < 0:
                text_hash += prime

    return matches

# Example usage
text = "ABABDABACDABABCABAB"
pattern = "ABABCABAB"
matches = rabin_karp(text, pattern)
print("Match positions of the pattern in the text:", matches)

        Note that the Rabin-Karp algorithm speeds up matching by comparing hashes, but hash collisions can produce false positives, so when the hashes match a character-by-character comparison is still required to confirm the match. The hash parameters in this example can be tuned to specific needs: for example, a larger prime such as 10**9 + 7 makes spurious collisions far less likely at the cost of slightly larger intermediate values.
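
        To make the hashing concrete: with base = 256 and prime = 101, the two-character pattern "AB" hashes to (ord('A') * 256 + ord('B')) mod 101 = (65 * 256 + 66) mod 101 = 16706 mod 101 = 41, and every two-character text window is reduced to a number in the same way before being compared.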

Four: Introduction to key algorithms (search algorithms)

(1): Linear search

        The linear search algorithm, also known as the sequential search algorithm, is a simple and intuitive search method for finding specific elements in a list or array. It checks each element one by one until the target element is found or the entire dataset is traversed.

        Here is an example of implementing a linear search algorithm in Python:

def linear_search(arr, target):
    for i in range(len(arr)):
        if arr[i] == target:
            return i  # target found, return its index
    return -1  # target not in the array, return -1

# Example usage
my_list = [1, 3, 5, 7, 9, 11, 13]
target_element = 7

result = linear_search(my_list, target_element)

if result != -1:
    print(f"Target element {target_element} found at index {result}")
else:
    print(f"Target element {target_element} not found")

        In this example, the linear_search function accepts an array arr and the target element target as parameters. It looks for the target by iterating over each element of the array. If the target is found, it returns that element's index; otherwise it returns -1, indicating that the target is not in the array.

        Note that the time complexity of the linear search algorithm is O(n), where n is the size of the array. It works well for small or unordered data sets. If you want to find elements in a large ordered data set, a more efficient algorithm such as binary search may be more suitable.

(2): Binary search

        Binary search is an efficient search algorithm for finding a target element in a sorted array or list. It works by comparing the target with the element in the middle of the current range and narrowing the search to one half based on the comparison, until the target is found or determined not to exist.

        The following is an example of implementing a binary search algorithm using Python:

def binary_search(arr, target):
    left = 0  # left boundary
    right = len(arr) - 1  # right boundary

    while left <= right:
        mid = (left + right) // 2  # middle position

        if arr[mid] == target:
            return mid  # target found, return its index
        elif arr[mid] < target:
            left = mid + 1  # target is in the right half
        else:
            right = mid - 1  # target is in the left half

    return -1  # target not in the array, return -1

# Example usage
my_list = [1, 3, 5, 7, 9, 11, 13]
target_element = 7

result = binary_search(my_list, target_element)

if result != -1:
    print(f"Target element {target_element} found at index {result}")
else:
    print(f"Target element {target_element} not found")

        In this example, the binary_search function accepts a sorted array arr and the target element target as parameters. It uses two pointers, left and right, to mark the boundaries of the search range. In a loop it computes the middle position mid and compares the target with the middle element; depending on the result it moves the left or right boundary to narrow the range, until the target is found or determined not to exist.

        The time complexity of the binary search algorithm is O(log n), where n is the size of the array, which makes it very efficient on large sorted data sets. Be aware, however, that the data must be sorted, otherwise it will not work correctly.
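
        As a side note, Python's standard library already provides a binary search building block in the bisect module; a minimal sketch of the same lookup using it (under the same sorted-input assumption):

import bisect

my_list = [1, 3, 5, 7, 9, 11, 13]
target_element = 7

index = bisect.bisect_left(my_list, target_element)  # leftmost insertion point
if index < len(my_list) and my_list[index] == target_element:
    print(f"Target element {target_element} found at index {index}")
else:
    print(f"Target element {target_element} not found")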

(3): Depth-first search DFS

        Depth-First Search (DFS) is an algorithm for traversing or searching graph and tree data structures. The basic idea is to start from a starting node and explore as deep as possible along one path until no unvisited neighbor remains, then backtrack and explore the other branches. DFS is usually implemented with recursion or an explicit stack.

        The following is an example of implementing a depth-first search algorithm in Python, assuming we have an adjacency list representation of a graph:

def dfs(graph, node, visited):
    if node not in visited:
        print(node, end=' ')  # visit the current node
        visited.add(node)
        for neighbor in graph[node]:
            dfs(graph, neighbor, visited)

# Example usage
# Adjacency list representation of a simple directed graph
graph = {
    'A': ['B', 'C'],
    'B': ['D', 'E'],
    'C': ['F'],
    'D': [],
    'E': ['F'],
    'F': []
}

# Initialize the set of visited nodes
visited = set()

# Run depth-first search starting from node 'A'
print("DFS result:")
dfs(graph, 'A', visited)

        In this example, the dfs function takes as arguments the graph's adjacency list graph, the current node node, and the set of visited nodes visited. It first checks whether the current node has been visited; if not, it marks it as visited and prints it, and then recursively runs DFS on each of the current node's neighbors. Starting from node 'A', the example prints the traversal sequence from 'A'. The traversal order of DFS depends on the structure of the graph and the starting node.

        Note that on a graph with cycles, DFS without any bookkeeping could loop forever; the visited set plays exactly this role, and together with an appropriate termination condition it guarantees that the algorithm finishes.
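
        Because Python limits recursion depth, very deep graphs can overflow the call stack. A minimal sketch of an equivalent iterative DFS using an explicit stack (the function name dfs_iterative is ours; it reuses the graph defined above):

def dfs_iterative(graph, start):
    # Illustrative helper, not part of the original example
    visited = set()
    stack = [start]
    order = []
    while stack:
        node = stack.pop()
        if node not in visited:
            visited.add(node)
            order.append(node)
            # Push neighbors in reverse so they are expanded in listed order
            for neighbor in reversed(graph[node]):
                if neighbor not in visited:
                    stack.append(neighbor)
    return order

print("Iterative DFS order:", dfs_iterative(graph, 'A'))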

(4): Breadth-first search BFS

        Breadth-First Search (BFS) is a search algorithm for data structures such as graphs and trees. It starts from the starting node and expands outward layer by layer, first exploring nodes close to the start and then nodes further away. BFS is often used to find shortest paths in unweighted graphs, or to find a specific node in a tree or graph.

        The following is an example of using Python to implement the BFS algorithm:

from collections import deque

# Graph representation using an adjacency list
graph = {
    'A': ['B', 'C'],
    'B': ['A', 'D', 'E'],
    'C': ['A', 'F'],
    'D': ['B'],
    'E': ['B', 'F'],
    'F': ['C', 'E']
}

# BFS implementation
def bfs(graph, start):
    visited = set()  # stores the nodes already visited
    queue = deque()  # queue of nodes waiting to be visited

    queue.append(start)
    visited.add(start)

    while queue:
        node = queue.popleft()  # dequeue the next node
        print(node, end=' ')

        for neighbor in graph[node]:
            if neighbor not in visited:
                queue.append(neighbor)
                visited.add(neighbor)

# Run BFS starting from node 'A'
print("BFS result:")
bfs(graph, 'A')

        In this example, we use an adjacency list to represent a simple undirected graph. The bfs function starts from node 'A', traverses the graph layer by layer, and uses a queue to manage the nodes still to be visited, printing the BFS traversal order over the whole graph.
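
        Because BFS expands the graph level by level, a small extension that records each node's distance from the start yields shortest path lengths (in number of edges) in an unweighted graph. A minimal sketch, reusing the same adjacency list and the deque already imported above (the helper name bfs_shortest_distances is ours):

def bfs_shortest_distances(graph, start):
    # Illustrative helper, not part of the original example
    distances = {start: 0}  # node -> number of edges from start
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in distances:
                distances[neighbor] = distances[node] + 1
                queue.append(neighbor)
    return distances

print("Distances from 'A':", bfs_shortest_distances(graph, 'A'))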

Five: Introduction to key algorithms (divide-and-conquer algorithms)

(1): Merge sort

        Merge Sort is a classic sorting algorithm based on the divide-and-conquer strategy. It divides the array to be sorted into two sub-arrays, sorts the two sub-arrays separately, and then merges them into a sorted array. This merging process is a key step in merge sort. Here is an example of using Python to implement merge sort:

def merge_sort(arr):
    if len(arr) <= 1:
        return arr

    # Split the array in half
    mid = len(arr) // 2
    left_half = arr[:mid]
    right_half = arr[mid:]

    # Recursively sort the two halves
    left_half = merge_sort(left_half)
    right_half = merge_sort(right_half)

    # Merge the two sorted halves
    return merge(left_half, right_half)

def merge(left, right):
    merged = []
    left_idx, right_idx = 0, 0

    while left_idx < len(left) and right_idx < len(right):
        if left[left_idx] <= right[right_idx]:  # <= keeps the sort stable
            merged.append(left[left_idx])
            left_idx += 1
        else:
            merged.append(right[right_idx])
            right_idx += 1

    # Append any remaining elements
    merged.extend(left[left_idx:])
    merged.extend(right[right_idx:])

    return merged

# Example
arr = [38, 27, 43, 3, 9, 82, 10]
sorted_arr = merge_sort(arr)
print("Sorted array:", sorted_arr)

        In this example, the merge_sort function recursively splits the input array in half, sorts the two halves separately, and finally calls the merge function to combine the two sorted subarrays into a single sorted array. The recursion continues until every subarray has length at most one and is therefore already sorted.

        Merge sort is a stable sorting algorithm with a guaranteed O(n log n) time complexity in every case, which makes it suitable for a wide range of data sets.

(2): Quick sort

        Quick Sort is an efficient divide-and-conquer sorting algorithm. Its basic idea is to select a pivot element and partition the array so that elements smaller than the pivot end up on its left and larger elements on its right, and then to sort the left and right parts recursively. Here is an example of quick sort using Python:

def quick_sort(arr):
    if len(arr) <= 1:
        return arr

    pivot = arr[len(arr) // 2]  # choose the middle element as the pivot
    left = [x for x in arr if x < pivot]  # all elements smaller than the pivot
    middle = [x for x in arr if x == pivot]  # all elements equal to the pivot
    right = [x for x in arr if x > pivot]  # all elements greater than the pivot

    return quick_sort(left) + middle + quick_sort(right)

# Example
arr = [38, 27, 43, 3, 9, 82, 10]
sorted_arr = quick_sort(arr)
print("Sorted array:", sorted_arr)

        In this example, the quick_sort function first selects a pivot element (here the middle element) and then divides the array into three parts: elements less than the pivot, elements equal to the pivot, and elements greater than the pivot. It then recursively sorts the left and right parts and concatenates the sorted pieces.

        Quicksort has an average-case time complexity of O(n log n) but can degrade to O(n^2) in the worst case. In practice, however, it is usually faster than other comparison sorts because of its small constant factors.
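
        The version above also allocates new lists at every level of recursion. A minimal sketch of an in-place variant, using Lomuto partitioning and a randomly chosen pivot to make the worst case unlikely (the function name quick_sort_in_place is ours):

import random

def quick_sort_in_place(arr, low=0, high=None):
    # Illustrative in-place variant, not part of the original example
    if high is None:
        high = len(arr) - 1
    if low >= high:
        return
    # Move a random pivot to the end, then partition around it (Lomuto scheme)
    pivot_index = random.randint(low, high)
    arr[pivot_index], arr[high] = arr[high], arr[pivot_index]
    pivot = arr[high]
    i = low
    for j in range(low, high):
        if arr[j] <= pivot:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[i], arr[high] = arr[high], arr[i]
    quick_sort_in_place(arr, low, i - 1)
    quick_sort_in_place(arr, i + 1, high)

# Example
data = [38, 27, 43, 3, 9, 82, 10]
quick_sort_in_place(data)
print("Sorted array:", data)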

(3): Tower of Hanoi problem

        The Tower of Hanoi problem is a classic recursive problem that involves moving a stack of plates from one pillar to another while obeying the following rules:

  1. Only one plate can be moved at a time.
  2. A larger plate can never be placed on top of a smaller one, so the plates on each pillar always remain ordered by size from top to bottom.
  3. An additional, initially empty pillar may be used as an intermediate stop.

        Here is an example of using Python to implement the Tower of Hanoi problem recursively:

def hanoi(n, source, auxiliary, target):
    if n == 1:
        # With a single disk, move it directly to the target pillar
        print(f"Move disk {n} from {source} to {target}")
        return
    # Move n-1 disks from the source pillar to the auxiliary pillar
    hanoi(n - 1, source, target, auxiliary)
    # Move the n-th disk to the target pillar
    print(f"Move disk {n} from {source} to {target}")
    # Move the n-1 disks from the auxiliary pillar to the target pillar
    hanoi(n - 1, auxiliary, source, target)

# Example
n = 3  # 3 disks
hanoi(n, 'A', 'B', 'C')

        In this example, the hanoi function solves the Tower of Hanoi problem recursively. It moves n-1 disks from the source pillar to the auxiliary pillar, then moves the n-th disk to the target pillar, and finally moves the n-1 disks from the auxiliary pillar to the target pillar. The recursion continues until every disk has been moved to the target pillar.

        The Tower of Hanoi problem is a classic application of recursion and nicely demonstrates how recursive algorithms work.
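
        As a quick sanity check on the recursion, the number of moves T(n) satisfies T(n) = 2T(n-1) + 1 with T(1) = 1, which solves to T(n) = 2^n - 1; the n = 3 example above therefore prints exactly 7 moves.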

(4): Closest pair of points problem

        The Closest Pair of Points problem is the classic problem of finding the smallest distance between any two points in the plane. A common way to solve it is the divide-and-conquer approach. Here is an example of solving the closest pair problem in Python:

import math
import sys

# Distance between two points
def distance(point1, point2):
    return math.sqrt((point1[0] - point2[0]) ** 2 + (point1[1] - point2[1]) ** 2)

# Brute-force search, used for small inputs
def brute_force(points):
    min_dist = sys.maxsize
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dist = distance(points[i], points[j])
            if dist < min_dist:
                min_dist = dist
    return min_dist

# Divide-and-conquer solution to the closest pair problem
def closest_pair(points):
    n = len(points)

    # For a small number of points, fall back to brute force
    if n <= 3:
        return brute_force(points)

    # Sort the points by x coordinate
    points.sort(key=lambda x: x[0])

    # Split into left and right halves
    mid = n // 2
    left = points[:mid]
    right = points[mid:]

    # Recursively solve the two halves
    left_min_dist = closest_pair(left)
    right_min_dist = closest_pair(right)

    # Smallest distance found so far
    min_dist = min(left_min_dist, right_min_dist)

    # Check for an even closer pair that straddles the dividing line
    strip = [point for point in points if abs(point[0] - points[mid][0]) < min_dist]
    strip.sort(key=lambda x: x[1])

    # Look for a closer pair inside the strip
    for i in range(len(strip)):
        for j in range(i + 1, len(strip)):
            if strip[j][1] - strip[i][1] >= min_dist:
                break  # points are sorted by y, so no later j can be closer
            dist = distance(strip[i], strip[j])
            if dist < min_dist:
                min_dist = dist

    return min_dist

# Example
points = [(1, 2), (2, 4), (0, 0), (3, 1), (4, 2), (0, 5)]
closest_dist = closest_pair(points)
print("Distance of the closest pair of points:", closest_dist)

        In this example, we first use divide and conquer to split the point set into a left and a right half and recursively compute the closest pair distance within each half. We then look for a possibly closer pair that straddles the two halves and compute its distance. Finally, we return the smallest of these distances.

        With the strip scan bounded as above, this divide-and-conquer approach is far faster than the O(n^2) brute-force method: the classic version that pre-sorts the points runs in O(n log n), while this simplified implementation re-sorts inside each recursive call and therefore takes about O(n log^2 n). Either way it is an efficient solution to the closest pair problem for large point sets.

Six: Closing words

        When we delve into algorithms, we are not only exploring the mysteries of computer science, but also unlocking infinite possibilities. Algorithms are the essence of the programming world, and they drive our applications, systems, and technologies forward. Whether you are a beginner or an experienced developer, you can improve your programming skills by continuously learning and practicing algorithms. Algorithms are the cornerstone of computer science and the source of innovation. Let us continue to explore, learn, and create to build a smarter and more efficient digital world together.

Origin blog.csdn.net/qq_38563206/article/details/133124518