Sword Offer (剑指Offer) problem series -- GZ23 -- Postorder traversal sequence of a binary search tree

[Figure: problem description]
Problem-solving ideas:
From O(nlogn) to O(n): an upper-bound method that is more efficient than recursion.

Method 1: Recursion
Reading the comments, I found that the recursive partition method is the common approach. Recursion is simple to understand and easy to implement: first traverse the sequence to find the boundary between the left and right subtrees, then recursively check each subtree. Now let us analyze the time complexity of the recursive method:

Take a perfect binary search tree as an example. Each level of recursion traverses essentially the whole sequence. Although deeper levels have fewer nodes serving as subtree roots, the reduction is insignificant: even the lowest level still contains about n/2 nodes (in a perfect binary tree, the number of nodes at level i equals the number of all nodes above it plus 1). So the traversal cost at each recursion level is O(n), and for a binary tree the average number of recursion levels is O(logn), so the overall complexity of the recursive method is O(nlogn).
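For reference, here is a minimal sketch of this recursive approach (my own illustration under the same problem constraints, not the original poster's code; names such as verifyPostorder and verify are made up):

public class RecursiveSolution {

    public boolean verifyPostorder(int[] sequence) {
        if (sequence == null || sequence.length < 1) {
            return false;
        }
        return verify(sequence, 0, sequence.length - 1);
    }

    // Checks whether sequence[start..end] can be the postorder traversal of a BST.
    private boolean verify(int[] sequence, int start, int end) {
        if (start >= end) {
            return true;                        // empty or single-node subtree
        }
        int root = sequence[end];               // the last element is the subtree root
        int split = start;
        while (split < end && sequence[split] < root) {
            split++;                            // left subtree: all values smaller than the root
        }
        for (int i = split; i < end; i++) {
            if (sequence[i] < root) {
                return false;                   // right subtree must not contain values smaller than the root
            }
        }
        return verify(sequence, start, split - 1)   // check the left subtree
            && verify(sequence, split, end - 1);    // check the right subtree
    }
}

Each call does a linear scan of its range and the recursion depth is the height of the tree, which gives the O(nlogn) bound discussed above (and O(n^2) in the worst case of a completely skewed tree).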

Method 2: Upper bound method
Being greedy, we can't help but wonder: is there a better way? O(n), or even O(logn)?

Since a correct traversal sequence can be tampered with at any position, every element must be examined to verify correctness, so I personally think the lower bound on the time complexity of this problem is O(n).

Specifically, the key property of a binary search tree is that for any subtree, left subtree < root node < right subtree. A root node therefore restricts the range of values in its left and right subtrees: the root of a subtree is the upper bound (max) of its left subtree's values and at the same time the lower bound (min) of its right subtree's values. If we start from the root and move down, the higher-level ancestor nodes keep imposing upper and lower bound constraints on the lower-level nodes that have not yet been traversed. As long as a lower-level node does not violate these constraints, it is legal; otherwise the sequence is not legal.

Because of the characteristics of postorder traversal, we can scan the given sequence in reverse, from right to left: the last element of a postorder traversal is the root, so reverse access is equivalent to visiting nodes from root to leaf and from right to left. Going from root to leaf, we can use the upper and lower bound information provided by the ancestor nodes to judge the legality of each child node. The specific steps are as follows:

If the current element > the previous element (the one just visited), the current element may be the right child of the previous element. At this point:
- if the current element breaks the max upper-bound constraint, some ancestor would have a "left subtree > root" situation, which violates the definition of a search tree (try changing the value 4 in the figure to 7: the child 7 exceeds the ancestor max constraint 5, so the search tree no longer holds);
- otherwise, the current element is the right child of the previous element, and it becomes a new ancestor node that provides constraints for subsequent nodes.
If the current element < the previous element, the current element is the left child of some ancestor node. At this point:
- we need to find that ancestor and discard all nodes of the ancestor's right subtree (its right subtree structure is already determined and can no longer help with subsequent nodes); the ancestor's value becomes the new max upper-bound constraint;
- the current element becomes the new ancestor node and continues to provide constraints for subsequent nodes.
[Figure: example binary search tree illustrating the ancestor max constraint]
Therefore, we use a stack to store the confirmed ancestor nodes: when a new node is determined to be legal, push it onto the stack; when the algorithm needs to find the ancestor whose left child the current node is, pop nodes off the stack.
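To make the procedure concrete, here is a small trace with an example of my own (not taken from the original post). Take the postorder sequence 2, 4, 3, 6, 9, 8, 5 of a BST whose root is 5 (left child 3 with children 2 and 4, right child 8 with children 6 and 9). Scan it from right to left, with max initially unbounded and the ancestor stack empty:

5: nothing on the stack is larger, push; stack [5]
8: 8 > 5 and 8 < max, right child; push; stack [5, 8]
9: 9 > 8 and 9 < max, right child; push; stack [5, 8, 9]
6: 6 < 9, pop 9 and 8, max becomes 8; push 6; stack [5, 6]
3: 3 < 6, pop 6 and 5, max becomes 5; push 3; stack [3]
4: 4 > 3 and 4 < max 5, right child; push; stack [3, 4]
2: 2 < 4, pop 4 and 3, max becomes 3; push 2; stack [2]

All elements pass, so the sequence is valid. If the 4 were changed to 7, the scan would reach 7 when max is 5; the check 7 ≥ max fails and the sequence is rejected, which mirrors the "change 4 to 7" example mentioned above.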

Complexity analysis: without popping, the sequence is accessed from right to left, each value is pushed onto the stack, and only the top of the stack is examined each time, which costs O(n). With popping, since a node that has been popped is never pushed again, the worst case is that every node undergoes one pop, so the cost grows from O(n) to O(2n). Ignoring constants, the final time complexity of the algorithm is O(n), and the space complexity is also O(n).

import java.util.Stack;

public class Solution {

    public boolean VerifySquenceOfBST(int [] sequence) {
        if(sequence == null || sequence.length < 1){
            return false;
        }
        // The roots stack holds the values of the ancestor nodes on the current path.
        // A sentinel is pushed first so we never have to check for an empty stack.
        Stack<Integer> roots = new Stack<>();
        roots.push(Integer.MIN_VALUE);
        int max = Integer.MAX_VALUE;
        for(int i = sequence.length - 1; i > -1; i--){
            // If the current node violates the max constraint, the sequence cannot be
            // the postorder traversal of a binary search tree.
            if(sequence[i] >= max){
                return false;
            }
            // If the current node is smaller than the top of roots, it is the left child of
            // some ancestor. Keep popping to find that ancestor (using the BST property);
            // that ancestor also provides the new max constraint.
            while(sequence[i] < roots.peek()){
                max = roots.peek();
                // Pop to locate the ancestor of the current node.
                roots.pop();
            }
            // The current node becomes the newest ancestor, providing the bound that
            // later nodes use to determine their own position.
            roots.push(sequence[i]);
        }
        return true;
    }
}
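As a quick sanity check (my own addition, not part of the original post), the valid sequence from the trace above should pass and the tampered one should fail:

public class Main {
    public static void main(String[] args) {
        Solution solution = new Solution();
        // Postorder sequence of the example BST rooted at 5: expected true.
        System.out.println(solution.VerifySquenceOfBST(new int[]{2, 4, 3, 6, 9, 8, 5}));
        // Same sequence with 4 changed to 7: expected false.
        System.out.println(solution.VerifySquenceOfBST(new int[]{2, 7, 3, 6, 9, 8, 5}));
    }
}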

Reprinted at: https://blog.nowcoder.net/n/8fe97e67996249ccbe71328d3a49c4af?f=comment

Origin blog.csdn.net/weixin_42118981/article/details/113288631