Finding the closest point with the best performance

To find the closest point efficiently, a common approach is to use a spatial data structure to speed up the search. Here are two common data structures and their applications:

  1. KD tree (k-d tree): a binary tree that partitions and organizes points in k-dimensional space and efficiently supports nearest-neighbor search. When building a KD tree, points are recursively divided into left and right subtrees so that each node represents a hyperrectangular region. During a search, you descend the tree according to the target point's location, pruning branches and shrinking the region that must be examined. For n points, building a balanced KD tree takes O(n log n) time, and a nearest-neighbor query takes O(log n) time on average.
  2. Quadtree: a tree that subdivides two-dimensional space. It splits the space into four quadrants and recursively subdivides each quadrant until a termination condition is reached; each node stores point references or density information. Because the quadtree divides space into levels, the nearest point can be located quickly. For n roughly evenly distributed points, construction takes O(n log n) time and a query takes O(log n) time on average.
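The quadtree idea can be sketched as follows. This is a minimal illustration in Python, not code from the original post; the class name, the node capacity, and the square-region representation are my own assumptions.

```python
# Minimal quadtree sketch: each node covers a square region; once it holds
# more than `capacity` points it splits into four child quadrants.

class Quadtree:
    def __init__(self, x, y, size, capacity=4):
        self.x, self.y, self.size = x, y, size   # region: [x, x+size) x [y, y+size)
        self.capacity = capacity
        self.points = []
        self.children = None                     # four sub-quadrants after a split

    def contains(self, px, py):
        return (self.x <= px < self.x + self.size and
                self.y <= py < self.y + self.size)

    def insert(self, px, py):
        if not self.contains(px, py):
            return False
        if self.children is None:
            if len(self.points) < self.capacity:
                self.points.append((px, py))
                return True
            self._split()
        # Exactly one child region contains the point.
        return any(child.insert(px, py) for child in self.children)

    def _split(self):
        half = self.size / 2
        self.children = [
            Quadtree(self.x,        self.y,        half, self.capacity),
            Quadtree(self.x + half, self.y,        half, self.capacity),
            Quadtree(self.x,        self.y + half, half, self.capacity),
            Quadtree(self.x + half, self.y + half, half, self.capacity),
        ]
        for p in self.points:                    # push existing points down
            any(child.insert(*p) for child in self.children)
        self.points = []

    def query_range(self, qx, qy, qsize):
        """Collect points inside the query square [qx, qx+qsize) x [qy, qy+qsize)."""
        # Prune this node entirely if its region does not overlap the query.
        if (qx >= self.x + self.size or qx + qsize <= self.x or
                qy >= self.y + self.size or qy + qsize <= self.y):
            return []
        found = [p for p in self.points
                 if qx <= p[0] < qx + qsize and qy <= p[1] < qy + qsize]
        if self.children:
            for child in self.children:
                found.extend(child.query_range(qx, qy, qsize))
        return found
```

The range query shows where the speedup comes from: whole quadrants whose regions do not overlap the query are skipped without looking at their points.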

Choose and implement whichever of these structures fits your needs. Building the structure has an up-front cost, but subsequent nearest-neighbor queries become fast, which improves overall performance. Note that for very large data sets, you may also want to consider more advanced data structures or algorithms to optimize search efficiency further.

Alternatively, if the number of points is small, you can simply traverse all points, computing distances and tracking the nearest point as you go. This takes O(n) time per query, where n is the number of points. Although this method scales poorly, it is often sufficient for small data sets.
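The linear-scan fallback can be written in a few lines. A minimal Python sketch (the function name and tuple representation of points are my own):

```python
import math

def find_nearest(points, target):
    """Brute-force nearest point: scan every candidate and keep the closest.

    O(n) per query, with no build cost -- often fast enough for small data sets.
    """
    nearest, best = None, math.inf
    for p in points:
        d = math.dist(p, target)        # Euclidean distance (Python 3.8+)
        if d < best:
            nearest, best = p, d
    return nearest
```

There is no tree to build or maintain, which also makes this the easiest version to keep correct when points move every frame.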

How to implement a KD tree

Implementing a KD tree can be a complex task involving tree construction, insertion, and search operations. The following is a simple KD tree implementation in C# for Unity, showing the basic ideas behind construction, insertion, and nearest-neighbor search:

using UnityEngine;

public class KdTree
{
    private class KdTreeNode
    {
        public Vector3 position;
        public KdTreeNode left;
        public KdTreeNode right;

        public KdTreeNode(Vector3 position)
        {
            this.position = position;
            left = null;
            right = null;
        }
    }

    private KdTreeNode root;

    public void Insert(Vector3 position)
    {
        root = InsertRecursive(root, position, 0);
    }

    private KdTreeNode InsertRecursive(KdTreeNode node, Vector3 position, int depth)
    {
        if (node == null)
        {
            return new KdTreeNode(position);
        }

        int axis = depth % 3; // 3D space: choose the splitting axis from the depth

        if (position[axis] < node.position[axis])
        {
            node.left = InsertRecursive(node.left, position, depth + 1);
        }
        else
        {
            node.right = InsertRecursive(node.right, position, depth + 1);
        }

        return node;
    }

    // Returns the position of the nearest stored point, or null if the tree is
    // empty. (KdTreeNode is private, so we return a Vector3? rather than the
    // node itself; returning the private type would not compile.)
    public Vector3? FindNearest(Vector3 targetPosition)
    {
        if (root == null)
        {
            return null;
        }

        KdTreeNode nearestNode = null;
        float nearestDistance = float.MaxValue;

        FindNearestRecursive(root, targetPosition, 0, ref nearestNode, ref nearestDistance);

        return nearestNode.position;
    }

    private void FindNearestRecursive(KdTreeNode node, Vector3 targetPosition, int depth, ref KdTreeNode nearestNode, ref float nearestDistance)
    {
        if (node == null)
        {
            return;
        }

        float distance = Vector3.Distance(node.position, targetPosition);

        if (distance < nearestDistance)
        {
            nearestNode = node;
            nearestDistance = distance;
        }

        int axis = depth % 3; // 3D space: choose the splitting axis from the depth

        if (targetPosition[axis] < node.position[axis])
        {
            FindNearestRecursive(node.left, targetPosition, depth + 1, ref nearestNode, ref nearestDistance);

            // Check whether the other subtree could still contain a closer point
            if (Mathf.Abs(targetPosition[axis] - node.position[axis]) < nearestDistance)
            {
                FindNearestRecursive(node.right, targetPosition, depth + 1, ref nearestNode, ref nearestDistance);
            }
        }
        else
        {
            FindNearestRecursive(node.right, targetPosition, depth + 1, ref nearestNode, ref nearestDistance);

            // Check whether the other subtree could still contain a closer point
            if (Mathf.Abs(targetPosition[axis] - node.position[axis]) < nearestDistance)
            {
                FindNearestRecursive(node.left, targetPosition, depth + 1, ref nearestNode, ref nearestDistance);
            }
        }
    }
}

In the code above, the nested class KdTreeNode represents a node of the KD tree. Each node stores a position together with references to its left and right children.

The tree is built recursively: InsertRecursive inserts new nodes into the tree. At each node, the splitting axis is chosen from the current depth, and the new node is placed in the left or right subtree according to its coordinate on that axis.

Nearest-neighbor search is also recursive: FindNearestRecursive looks for the closest node. At each node we compute the distance between the target and the node's position and update the nearest node and nearest distance found so far. We then descend into the left or right subtree according to the splitting axis, and afterwards compare the target's distance to the splitting plane against the nearest distance to decide whether the other subtree must be searched as well.

Please note that the code above is a simplified KD tree that only shows the basic ideas of construction, insertion, and nearest-neighbor search. More complex applications may need additional details and optimizations, such as building a balanced tree by median splitting instead of inserting points one by one. Also, this implementation is hard-coded for three-dimensional space; to handle data of other dimensions, adjust the axis selection accordingly.
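To illustrate how the same idea generalizes beyond three dimensions, here is a compact Python sketch (my own illustration, not part of the original post) that builds a balanced k-dimensional KD tree by median splitting, with k taken from the points themselves:

```python
import math

def build_kdtree(points, depth=0):
    """Build a balanced k-d tree by splitting at the median on each axis."""
    if not points:
        return None
    k = len(points[0])
    axis = depth % k                         # cycle through the k axes
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2                   # median becomes the node's point
    return {
        "point": points[mid],
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

def find_nearest(node, target, depth=0, best=None):
    """Exact nearest-neighbor search with plane-distance pruning."""
    if node is None:
        return best
    if best is None or math.dist(node["point"], target) < math.dist(best, target):
        best = node["point"]
    axis = depth % len(target)
    near, far = ((node["left"], node["right"])
                 if target[axis] < node["point"][axis]
                 else (node["right"], node["left"]))
    best = find_nearest(near, target, depth + 1, best)
    # Only cross the splitting plane if it could hide a closer point.
    if abs(target[axis] - node["point"][axis]) < math.dist(best, target):
        best = find_nearest(far, target, depth + 1, best)
    return best
```

Because every point list is split at its median, the tree stays balanced regardless of input order, unlike one-by-one insertion, which can degenerate on sorted input.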

If you need a more complete, high-performance KD tree, consider using an existing KD tree library, or consult more detailed tutorials and documentation before implementing your own.

Origin blog.csdn.net/hhh314159/article/details/131210413