[Algorithm Basics] Stacks and queues: common variants and uses, including min stacks, two stacks, fixed-size and resizable stacks, stack iterators, double-ended queues, priority queues, concurrent queues, and delay queues

Table of contents

1. Stack

2. Queue

3. Common variants and uses of stacks and queues

3.1 Common variations and uses of stacks

3.1.1 Min Stack

3.1.2 Two Stacks

3.1.3 Fixed-Size Stack

3.1.4 Resizable Stack

3.1.5 Stack iterator

3.2 Common variants and uses of queues

3.2.1 Double-ended queue (Deque)

3.2.2 Priority Queue

3.2.3 Concurrent Queue

3.2.4 Delay Queue


1. Stack

  1. Basic concepts of stack

            The stack is a linear data structure that follows the Last-In-First-Out (LIFO) principle. The last element added to the stack is the first one removed.
  2. Stack operations

    • Push: Add an element to the top of the stack.
    • Pop: Remove an element from the top of the stack.
    • Peek: View the top element of the stack without removing it.
    • isEmpty: Check whether the stack is empty.
  3. Sample code and comments

# Create an empty stack
stack = []

# Push operations
stack.append(1)    # add element 1 to the stack
stack.append(2)    # add element 2 to the stack

# Pop operation
top_element = stack.pop()  # remove and return the top element

# Peek at the top element
top_element = stack[-1]  # view the top element without removing it

# Check whether the stack is empty
is_empty = len(stack) == 0  # returns True if the stack is empty

  4. Stack application scenarios

  • Function calls and recursion: Used to save the context of function calls.
  • Expression evaluation: used to parse and evaluate expressions.
  • Browser's back and forward functions: used to record access history.
  • Compiler's syntax analysis: used to track bracket matching (see the sketch below).
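
As a concrete illustration of the bracket-matching use case, here is a minimal sketch (the is_balanced helper is our own illustrative code, not from the original text):

# A minimal bracket-matching sketch using a stack
def is_balanced(expression):
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for ch in expression:
        if ch in '([{':
            stack.append(ch)          # opening bracket: push it
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False          # mismatched or missing opener
    return not stack                  # balanced only if nothing is left

print(is_balanced("(a + [b * c])"))   # True
print(is_balanced("(a + b]"))         # False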


2. Queue

  1. Basic concepts of queues

            Definition: A queue is a linear data structure that follows the First-In-First-Out (FIFO) principle. The earliest element added to the queue is the first to be removed.
  2. Queue operations

    • Enqueue: Add elements to the end of the queue.
    • Dequeue: Remove elements from the head of the queue.
    • Front: View the element at the head of the queue without removing it.
    • isEmpty: Check whether the queue is empty.
  3. Sample code and comments

from collections import deque

# Create an empty queue
queue = deque()

# Enqueue operations
queue.append(1)   # add element 1 to the tail of the queue
queue.append(2)   # add element 2 to the tail of the queue

# Dequeue operation
front_element = queue.popleft()  # remove and return the front element

# Peek at the front element
front_element = queue[0]  # view the front element without removing it

# Check whether the queue is empty
is_empty = len(queue) == 0  # returns True if the queue is empty

  4. Queue application scenarios

  • Task scheduling: used to execute tasks in sequence.
  • Data buffer: used to buffer data transmission.
  • Breadth-First Search (BFS): used in graph algorithms to traverse the graph level by level (see the sketch below).
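
As a concrete illustration of breadth-first search with a queue, here is a minimal sketch (the bfs helper and the sample graph are made-up examples):

from collections import deque

# A minimal BFS sketch driven by a queue
def bfs(graph, start):
    visited = {start}
    queue = deque([start])
    order = []
    while queue:
        node = queue.popleft()          # dequeue the next node to visit
        order.append(node)
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)  # enqueue unvisited neighbors
    return order

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(bfs(graph, 'A'))  # ['A', 'B', 'C', 'D']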

3. Common variants and uses of stacks and queues

  • Stacks can be implemented using arrays or linked lists.
  • An array (Python list) can implement a stack, where push maps to append and pop maps to pop.
  • A linked list can also implement a stack, using insertion and deletion at the head of the list.
  • Queues can be implemented using arrays, linked lists, or double-ended queues (deques).
  • An array (Python list) can implement a queue, but it is less efficient because dequeuing from the front requires shifting the remaining elements.
  • A linked list can also implement a queue, with insertion at the tail (enqueue) and deletion at the head (dequeue); a sketch appears after this list.
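
As a sketch of the linked-list approach mentioned above, here is a minimal queue backed by a singly linked list (the LinkedListQueue and _Node names are our own):

# A minimal linked-list queue sketch
class _Node:
    def __init__(self, value):
        self.value = value
        self.next = None

class LinkedListQueue:
    def __init__(self):
        self.head = None   # dequeue from the head
        self.tail = None   # enqueue at the tail

    def enqueue(self, value):
        node = _Node(value)
        if self.tail:
            self.tail.next = node
        else:
            self.head = node
        self.tail = node

    def dequeue(self):
        if self.head is None:
            raise IndexError("Queue is empty")
        value = self.head.value
        self.head = self.head.next
        if self.head is None:
            self.tail = None
        return value

q = LinkedListQueue()
q.enqueue(1)
q.enqueue(2)
print(q.dequeue())  # 1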

3.1 Common variations and uses of stacks

  1. Min Stack : A min stack supports retrieving the smallest element in the stack in constant time. It usually provides push, pop, and getMin (retrieve the smallest element) operations.

  2. Two Stacks : A dual stack is a data structure composed of two stacks. It is usually used to implement queues or perform some complex operations on the stack.

  3. Fixed-Size Stack : This type of stack has a fixed capacity limit. Once the capacity limit is reached, no more elements can be added.

  4. Resizable Stack : A resizable stack can dynamically increase or decrease its capacity to adapt to changing needs.

  5. Stack Iterator : A stack iterator is a data structure that allows sequential traversal of elements in the stack without changing the state of the stack.

  6. Advanced Stack : A stack extended with additional functionality, such as evaluating expressions or computing Reverse Polish Notation (postfix) expressions.

Example:

3.1.1 Min Stack

        A min stack is a special stack data structure. In addition to supporting the regular stack operations (push and pop), it can also efficiently retrieve the smallest element in the stack, usually in constant time. Min stacks are typically used for problems that require keeping track of the smallest element while preserving the performance of regular stack operations.

Here is a Python example demonstrating how to implement a min stack:

class MinStack:
    def __init__(self):
        # Main stack, stores the elements
        self.stack = []
        # Auxiliary stack, stores the current minimum values
        self.min_stack = []

    def push(self, x):
        self.stack.append(x)
        # If the auxiliary stack is empty or the new element is <= the current minimum, push it onto the auxiliary stack too
        if not self.min_stack or x <= self.min_stack[-1]:
            self.min_stack.append(x)

    def pop(self):
        # When popping, if the popped element equals the current minimum, also pop the top of the auxiliary stack
        if self.stack:
            if self.stack[-1] == self.min_stack[-1]:
                self.min_stack.pop()
            self.stack.pop()

    def top(self):
        if self.stack:
            return self.stack[-1]

    def get_min(self):
        if self.min_stack:
            return self.min_stack[-1]

# Create a min stack
min_stack = MinStack()

# Push operations
min_stack.push(3)
min_stack.push(5)
min_stack.push(2)
min_stack.push(1)

# Get the top element and the minimum element
print(min_stack.top())    # Output: 1
print(min_stack.get_min())  # Output: 1

# Pop operation
min_stack.pop()
print(min_stack.top())    # Output: 2
print(min_stack.get_min())  # Output: 2

        In the above example, we used two stacks to implement a min stack. The main stack stores the elements, while the auxiliary stack stores the current minimums. During a push, if the new element is less than or equal to the current minimum, it is pushed onto both the main stack and the auxiliary stack. During a pop, if the popped element equals the current minimum, the top of the auxiliary stack is popped as well. This lets us retrieve the smallest element in the stack in constant time.

        The min stack is often used for problems that require frequent access to the minimum value in the stack, such as designing a stack that supports push, pop, and getMin operations. This data structure lets us retrieve the smallest element without degrading the performance of ordinary stack operations.

3.1.2 Two Stacks

A dual stack is a data structure composed of two stacks. It is usually used to implement queues or perform some complex operations on the stack.

class TwoStacks:
    def __init__(self):
        # Two stacks: one for enqueue, one for dequeue
        self.stack1 = []
        self.stack2 = []

    def enqueue(self, val):
        # Push the element onto stack1 to simulate enqueue
        self.stack1.append(val)

    def dequeue(self):
        if not self.stack2:
            # If stack2 is empty, pop each element from stack1 and push it onto stack2 to reverse the order
            while self.stack1:
                self.stack2.append(self.stack1.pop())
        # Pop the top of stack2 to simulate dequeue
        if self.stack2:
            return self.stack2.pop()

Expression evaluation:

        A pair of stacks can be used to parse and evaluate expressions, such as arithmetic expressions: one stack holds operators and the other holds operands. While traversing the expression, operands are pushed onto the operand stack; when an operator can be applied (for example, based on precedence), operands are popped, combined according to that operator, and the result is pushed back. A sketch of this idea follows.
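
Here is a minimal, hypothetical sketch of that idea for space-separated infix expressions limited to +, -, * and / without parentheses (the evaluate helper is our own illustration, not a full parser):

# A two-stack evaluator sketch: one stack for operands, one for operators
def evaluate(expression):
    precedence = {'+': 1, '-': 1, '*': 2, '/': 2}
    operands = []    # stack of numbers
    operators = []   # stack of operators

    def apply_top():
        # Pop one operator and two operands, push the result back
        op = operators.pop()
        b = operands.pop()
        a = operands.pop()
        if op == '+':
            operands.append(a + b)
        elif op == '-':
            operands.append(a - b)
        elif op == '*':
            operands.append(a * b)
        else:
            operands.append(a / b)

    for token in expression.split():
        if token in precedence:
            # Reduce while the operator on top has equal or higher precedence
            while operators and precedence[operators[-1]] >= precedence[token]:
                apply_top()
            operators.append(token)
        else:
            operands.append(float(token))

    while operators:
        apply_top()
    return operands[-1]

print(evaluate("3 + 4 * 2"))  # 11.0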

Cooperation of the two stacks:

        In some algorithms, two stacks work together; for example, an iterative depth-first search (DFS) of a graph can use one stack to drive the traversal and a second stack or list to record the path visited.

        The two-stack pattern is very flexible and can be applied to many kinds of problems as needed. With careful design, it can improve both the efficiency and the readability of an algorithm.

3.1.3 Fixed-Size Stack

Fixed-size stacks have a fixed capacity limit, and once the capacity limit is reached, no more elements can be added.

class FixedSizeStack:
    def __init__(self, max_size):
        self.max_size = max_size
        self.stack = []

    def push(self, item):
        if len(self.stack) < self.max_size:
            self.stack.append(item)
        else:
            raise IndexError("Stack is full")

    def pop(self):
        if self.stack:
            return self.stack.pop()
        else:
            raise IndexError("Stack is empty")

    def peek(self):
        if self.stack:
            return self.stack[-1]

    def is_empty(self):
        return len(self.stack) == 0

    def is_full(self):
        return len(self.stack) == self.max_size

         In the above example, we created a FixedSizeStack class that accepts a max_size parameter representing the maximum capacity of the stack. The push operation checks whether the current stack size is below the maximum capacity; if so, the element is added, otherwise an exception is raised. The pop operation checks whether the stack is empty before removing the top element, and a peek method for reading the top element is also provided.

        Using a fixed-size stack ensures that the stack's capacity does not exceed limits, which is useful in certain resource-constrained situations, such as memory allocation or hardware resource management. This data structure helps developers better control and manage limited resources. 
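
Here is a short usage sketch for the FixedSizeStack above (the values are arbitrary):

# Usage sketch for FixedSizeStack
stack = FixedSizeStack(max_size=2)
stack.push(10)
stack.push(20)
print(stack.is_full())   # True
print(stack.peek())      # 20

try:
    stack.push(30)       # exceeds the capacity limit
except IndexError as e:
    print(e)             # Stack is full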

3.1.4 Dynamic Stack (Resizable Stack)

        A dynamic (resizable) stack can dynamically increase or decrease its capacity to adapt to changing needs.

class ResizableStack:
    def __init__(self):
        # Initial capacity
        self.capacity = 2
        # Underlying list that stores the elements
        self.stack = [None] * self.capacity
        # Current number of elements
        self.size = 0

    def push(self, val):
        if self.size == self.capacity:
            # If the stack is full, double the capacity
            self._resize(self.capacity * 2)
        # Push operation
        self.stack[self.size] = val
        self.size += 1

    def pop(self):
        if self.size > 0:
            # Pop operation
            self.size -= 1
            val = self.stack[self.size]
            self.stack[self.size] = None
            if self.size <= self.capacity // 4:
                # If the stack is at most a quarter full, halve the capacity
                # (never shrinking below the initial capacity of 2)
                self._resize(max(self.capacity // 2, 2))
            return val
        else:
            print("Stack is empty. Cannot pop element.")

    def _resize(self, new_capacity):
        # Adjust the capacity of the stack
        new_stack = [None] * new_capacity
        for i in range(self.size):
            new_stack[i] = self.stack[i]
        self.stack = new_stack
        self.capacity = new_capacity

                

        The above example demonstrates a resizable stack that can dynamically grow or shrink to accommodate changing needs. When the stack is full, its capacity is doubled; when the number of elements falls to a quarter of the capacity or less, the capacity is halved (down to a small minimum) to save memory. This type of stack is suitable when flexible memory management is required.

        A dynamic stack is a common data structure used to manage and manipulate data sets of varying sizes. In practical applications, dynamic stacks can improve memory utilization while maintaining stack performance.
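
Here is a short usage sketch for the ResizableStack above, showing the capacity growing and then shrinking (the printed capacities assume the doubling and halving policy shown in the class):

# Usage sketch for ResizableStack
stack = ResizableStack()
for value in (1, 2, 3, 4, 5):
    stack.push(value)
print(stack.capacity)  # 8  (the capacity doubled from 2 -> 4 -> 8)

for _ in range(3):
    stack.pop()
print(stack.capacity)  # 4  (halved once the stack fell to a quarter of its capacity)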

3.1.5 Stack iterator

        A stack iterator allows sequential traversal of the elements in a stack without changing the state of the stack. Such an iterator is usually implemented by copying the stack's elements into another data structure (such as a list) and then iterating over that copy.

Here is an example that demonstrates how to implement a stack iterator and how to use it to iterate over the elements in the stack:

class StackIterator:
    def __init__(self, stack):
        # Receive a stack as a parameter
        self.stack = stack
        # Copy the stack's elements into a list so they can be iterated
        # (the Stack class below stores its elements in the `items` list)
        self.elements = list(stack.items)

    def __iter__(self):
        # Return self as the iterator object
        self.current = 0
        return self

    def __next__(self):
        # Yield the next element until the copied list is exhausted
        if self.current < len(self.elements):
            element = self.elements[self.current]
            self.current += 1
            return element
        else:
            # No elements left: raise StopIteration
            raise StopIteration


class Stack:
    def __init__(self):
        self.items = []

    def push(self, item):
        self.items.append(item)

    def pop(self):
        if not self.is_empty():
            return self.items.pop()

    def is_empty(self):
        return len(self.items) == 0

    def size(self):
        return len(self.items)


# Create a stack
stack = Stack()

# Push elements
stack.push(1)
stack.push(2)
stack.push(3)

# Create an iterator over the stack
stack_iterator = StackIterator(stack)

# Use the iterator to traverse the elements in the stack
for element in stack_iterator:
    print(element)

# Output:
# 1
# 2
# 3

        In the above example, we first define a stack iterator class named StackIterator, which receives a stack as a parameter and copies its elements into the list self.elements. We then implement the __iter__ and __next__ methods so that the class can be used as an iterator. Finally, we create an iterator over the stack and use a for loop to traverse its elements. In this way we can visit the elements of the stack in order without modifying the original stack.


3.2 Common variants and uses of queues

In addition to standard queues (FIFO, first in first out), there are also some common queue variants that can adapt to different needs and application scenarios. Here are some common queue variants:

  1. Priority Queue : Elements have priority and are dequeued according to priority. Usually implemented as a min-heap or a max-heap.

  2. Double-ended Queue (Deque) : allows insertion and deletion operations at both ends of the queue; useful for bidirectional searches, sliding windows, and similar problems.

  3. Concurrent Queue : A queue that supports concurrent access by multiple threads and usually provides thread-safe operations.

  4. Circular Queue : the tail of the queue wraps around to the head so that buffer slots can be reused; often used to implement circular (ring) buffers (a sketch appears after this list).

  5. Blocking Queue : When the queue is empty, the dequeue operation will block the thread. When the queue is full, the enqueue operation will block the thread. Commonly used for inter-thread communication and task scheduling.

  6. Delay Queue : Elements can be dequeued after a certain delay time. It is often used for scheduled task scheduling and delayed processing.

  7. Lock-free Queue : A queue implemented without using a lock mechanism. It is used in high-performance concurrency scenarios and is usually implemented based on atomic operations.

  8. Priority Deque : It combines the characteristics of priority queue and double-ended queue. It can insert and delete elements at both ends and dequeue according to priority.

  9. Bounded Queue : Limits the maximum capacity of the queue. Once the capacity limit is reached, the enqueuing operation will block or discard elements.

  10. Queue with Timeout (Timeout Queue) : each element has a timeout set when it enters the queue and is automatically dequeued once the time limit is exceeded.

These queue variants can be selected and used according to different needs and scenarios, and they have wide applications in computer science and engineering. It is important to choose the appropriate queue variant based on the characteristics and performance requirements of the specific problem.
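
As an illustration of the circular queue mentioned in item 4, here is a minimal ring-buffer sketch (the CircularQueue class is our own illustrative code, not a standard API):

# A minimal circular-queue (ring buffer) sketch
class CircularQueue:
    def __init__(self, capacity):
        self.buffer = [None] * capacity
        self.capacity = capacity
        self.head = 0   # index of the front element
        self.size = 0

    def enqueue(self, value):
        if self.size == self.capacity:
            raise IndexError("Queue is full")
        tail = (self.head + self.size) % self.capacity
        self.buffer[tail] = value
        self.size += 1

    def dequeue(self):
        if self.size == 0:
            raise IndexError("Queue is empty")
        value = self.buffer[self.head]
        self.buffer[self.head] = None
        self.head = (self.head + 1) % self.capacity
        self.size -= 1
        return value

cq = CircularQueue(3)
cq.enqueue(1)
cq.enqueue(2)
print(cq.dequeue())  # 1
cq.enqueue(3)
cq.enqueue(4)        # wraps around into the slot freed by the dequeue
print(cq.dequeue())  # 2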

3.2.1 Double-ended queue (Deque)

        A double-ended queue (deque) is a data structure that supports insertion and deletion at both ends, allowing enqueue and dequeue operations at either the head or the tail. This flexibility makes it suitable for a variety of scenarios, including bidirectional search, sliding windows, and implementing both queues and stacks.

        In Python, you can create a deque using the deque class from the collections module. The following is an example of basic deque operations:

from collections import deque

# Create a deque
deque_obj = deque()

# Insert elements at the front
deque_obj.appendleft(1)
deque_obj.appendleft(2)

# Insert elements at the back
deque_obj.append(3)
deque_obj.append(4)

# The deque now contains [2, 1, 3, 4]

# Dequeue from the front
front_element = deque_obj.popleft()
print(front_element)  # Output: 2

# Dequeue from the back
rear_element = deque_obj.pop()
print(rear_element)  # Output: 4

# The deque now contains [1, 3]

        In the above example, we first import the deque class and then create a deque object deque_obj. The appendleft method inserts elements at the front of the queue and the append method inserts elements at the back. Dequeue operations can be performed at the front and back using the popleft and pop methods respectively.

        Deques are useful when dealing with problems that require adding or removing elements from both ends, such as sliding window problems, implementing queues and stacks, etc. Since it supports bidirectional operation, it can provide more efficient solutions in some specific scenarios.
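
As one example of the sliding-window use case, here is a common sketch that computes the maximum of each window of size k using a deque of indices (the sliding_window_max helper and the sample input are our own illustration):

from collections import deque

# Sliding-window maximum using a deque of indices
def sliding_window_max(nums, k):
    window = deque()   # stores indices; their values stay in decreasing order
    result = []
    for i, value in enumerate(nums):
        # Drop indices that have slid out of the window
        if window and window[0] <= i - k:
            window.popleft()
        # Drop smaller values from the back; they can never be the maximum
        while window and nums[window[-1]] < value:
            window.pop()
        window.append(i)
        if i >= k - 1:
            result.append(nums[window[0]])
    return result

print(sliding_window_max([1, 3, -1, -3, 5, 3, 6, 7], 3))
# [3, 3, 5, 5, 6, 7]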

3.2.2 Priority Queue

        Priority Queue is a special queue in which each element is associated with a priority, and elements are dequeued in order of priority. In a priority queue, elements with higher priority are dequeued first, and elements with lower priority are dequeued last. This data structure is typically used to handle tasks or elements that need to be processed in priority order.

        In Python, you can use the heapq module to implement a min-heap priority queue, or use the standard library's queue.PriorityQueue class (which is thread-safe) to create a priority queue.

Here is an example of using the heapq module to implement a min-heap priority queue:

import heapq

class PriorityQueue:
    def __init__(self):
        self.elements = []

    def push(self, item, priority):
        heapq.heappush(self.elements, (priority, item))

    def pop(self):
        if self.elements:
            return heapq.heappop(self.elements)[1]
        else:
            raise IndexError("Priority queue is empty")

# Create a priority queue
pq = PriorityQueue()

# Add elements to the priority queue with their priorities
pq.push("Task 1", 3)
pq.push("Task 2", 1)
pq.push("Task 3", 2)

# Dequeue operations follow priority order
print(pq.pop())  # Output: "Task 2", because it has the smallest priority value and is dequeued first
print(pq.pop())  # Output: "Task 3"
print(pq.pop())  # Output: "Task 1"

        In this example, we create a PriorityQueue class that uses heapq to implement a min-heap priority queue. The push method adds an element with its priority, and the pop method removes elements in priority order.

        Note that a min-heap priority queue dequeues the element with the smallest priority value first. If max-heap behavior is required, it can be achieved by negating the priority values, as shown below.
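
A small sketch of the negation trick (the task names and priorities are arbitrary examples):

import heapq

# Build a max-priority queue by storing negated priorities in a min-heap
max_pq = []
heapq.heappush(max_pq, (-3, "Task 1"))   # priority 3
heapq.heappush(max_pq, (-1, "Task 2"))   # priority 1
heapq.heappush(max_pq, (-2, "Task 3"))   # priority 2

print(heapq.heappop(max_pq)[1])  # Output: "Task 1" (highest priority dequeued first)
print(heapq.heappop(max_pq)[1])  # Output: "Task 3"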

        Priority queues are commonly used in task scheduling, search algorithms (such as Dijkstra's algorithm), and other applications that require elements to be processed according to priority.

3.2.3 Concurrent Queue

       A concurrent queue is a queue data structure that supports concurrent access from multiple threads and usually provides thread-safe enqueue and dequeue operations. This type of queue is used in multi-threaded environments to ensure that multiple threads can safely access and modify the queue, preventing race conditions and data corruption.

The following is a Python example that demonstrates how to create a concurrent queue using Python's queue module:

import queue
import threading
import time

# Create a concurrent queue
concurrent_queue = queue.Queue()

# Define a producer thread that adds data to the queue
def producer():
    for i in range(5):
        item = f"Item {i}"
        concurrent_queue.put(item)
        print(f"Produced: {item}")
        time.sleep(1)

# Define a consumer thread that takes data from the queue
def consumer():
    while True:
        item = concurrent_queue.get()
        if item is None:
            break
        print(f"Consumed: {item}")
        concurrent_queue.task_done()

# Start the producer thread
producer_thread = threading.Thread(target=producer)
producer_thread.start()

# Start two consumer threads
consumer_thread1 = threading.Thread(target=consumer)
consumer_thread2 = threading.Thread(target=consumer)

consumer_thread1.start()
consumer_thread2.start()

# Wait for the producer thread to finish
producer_thread.join()

# Put one "None" sentinel per consumer to signal the consumer threads to exit
concurrent_queue.put(None)
concurrent_queue.put(None)

# Wait for the consumer threads to finish
consumer_thread1.join()
consumer_thread2.join()

         In the above example, we created a concurrent queue, concurrent_queue, and defined a producer thread (producer) responsible for adding data to the queue and two consumer threads (consumer) responsible for taking data from it. The enqueue operation is thread-safe, so multiple threads can add data simultaneously; the dequeue operation is also thread-safe, so multiple consumer threads can take data at the same time.

        Data sharing and synchronization in multi-threaded applications can be easily managed using concurrent queues, ensuring coordination between threads and data integrity.

3.2.4 Delay Queue

        Delay Queue is a special queue in which elements can be dequeued after a certain delay time. Typically, delay queues are used to implement task scheduling and delay processing, allowing you to schedule a specific task to be executed at a future time. Each element in the delay queue has a delay time associated with it. When the delay time elapses, the element is dequeued from the queue and the relevant operations are performed.

Here is a Python example demonstrating how to use a delay queue:

import queue
import threading
import time

# Create a delay queue (a priority queue ordered by execution time)
delay_queue = queue.PriorityQueue()

# Define a delayed task
def delayed_task(task, delay):
    time.sleep(delay)
    print(f"Executing task: {task}")

# Start delayed-task threads
threading.Thread(target=delayed_task, args=("Task 1", 3)).start()
threading.Thread(target=delayed_task, args=("Task 2", 2)).start()
threading.Thread(target=delayed_task, args=("Task 3", 1)).start()

# Simulate adding delayed tasks to the queue
delay_queue.put((time.time() + 5, "Task 4", 5))  # execute after a 5-second delay
delay_queue.put((time.time() + 2, "Task 5", 2))  # execute after a 2-second delay
delay_queue.put((time.time() + 3, "Task 6", 3))  # execute after a 3-second delay

# Dequeue and execute delayed tasks
while not delay_queue.empty():
    item = delay_queue.get()
    execute_time, task_name, delay = item
    current_time = time.time()
    if current_time >= execute_time:
        print(f"Executing delayed task: {task_name}")
    else:
        # The task is not yet due, so put it back on the queue
        delay_queue.put(item)
        time.sleep(1)  # wait 1 second before checking again

         In the above example, we first created a priority-based delay queue, delay_queue, and then started several delayed tasks using threads. Each delayed task is simulated by the delayed_task function, which runs the task after the specified delay. We also simulated adding delayed tasks to the queue, as well as the process of dequeuing and executing them once their scheduled time arrives.

        Delay queues are very useful for applications that need to execute tasks in the future as planned, such as scheduled task scheduling, messaging systems, etc. It allows you to schedule and manage the execution of tasks to meet specific time requirements.
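
For comparison, here is a hedged sketch of a small thread-safe delay queue built on heapq and a condition variable; the DelayQueue class and its methods are our own illustration, not a standard-library API:

import heapq
import threading
import time

# A minimal delay-queue sketch: get() blocks until the earliest item is due
class DelayQueue:
    def __init__(self):
        self._heap = []
        self._cond = threading.Condition()

    def put(self, item, delay):
        ready_at = time.time() + delay
        with self._cond:
            heapq.heappush(self._heap, (ready_at, item))
            self._cond.notify()

    def get(self):
        with self._cond:
            while True:
                if not self._heap:
                    self._cond.wait()
                    continue
                ready_at, item = self._heap[0]
                now = time.time()
                if ready_at <= now:
                    heapq.heappop(self._heap)
                    return item
                # Sleep only until the earliest item becomes ready
                self._cond.wait(timeout=ready_at - now)

dq = DelayQueue()
dq.put("Task A", 2)
dq.put("Task B", 1)
print(dq.get())  # "Task B" after about 1 second
print(dq.get())  # "Task A" after about 2 seconds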
