Geek Time "The Beauty of Data Structures and Algorithms" notes, 09. Queue: Queues in Thread Pools and Other Bounded Resource Pools

We know that CPU resources are limited, and task throughput does not grow linearly with the number of threads. On the contrary, too many threads cause frequent CPU context switches and hurt performance. So the size of a thread pool is generally set in advance, based on the characteristics of the tasks and the hardware environment.

When we request a thread from a fixed-size thread pool and no idle thread is available, how should the pool handle the request? Reject it, or queue it? And how are these strategies implemented? The underlying data structure behind all of this is what we will study today: the queue.

1. How to Understand a "Queue"?

The concept of a queue is easy to grasp. Think of people lining up to buy tickets: whoever comes first buys first, newcomers stand at the end of the line, and cutting in is not allowed. First in, first out: that is a typical "queue".

We know that a stack supports only two basic operations: push() and pop(). A queue is very similar. Its operations are just as limited, with two basic ones: enqueue(), which appends an item to the tail of the queue, and dequeue(), which takes an item from the head of the queue.

[Figure: enqueue and dequeue operations on a queue]

So, like a stack, a queue is a linear data structure with restricted operations.

As a very basic data structure, the queue is used very widely, especially queues with certain extra features, such as circular queues, blocking queues, and concurrent queues. They play key roles in many low-level systems, frameworks, and middleware. For example, the high-performance queue Disruptor and the Linux ring buffer both use circular concurrent queues, and Java's java.util.concurrent package uses ArrayBlockingQueue to implement fair locks.

2. Array-Based Queues and Linked Queues

How do we implement a queue?

Like a stack, a queue can be implemented with an array or with a linked list. A stack implemented with an array is called a sequential stack, and one implemented with a linked list is called a linked stack. Similarly, a queue implemented with an array is called a sequential (array-based) queue, and a queue implemented with a linked list is called a linked queue.

The following is an array-based implementation in Java:

// An array-based queue
public class ArrayQueue {
  // array: items, array size: n
  private String[] items;
  private int n = 0;
  // head is the index of the queue head, tail the index of the queue tail
  private int head = 0;
  private int tail = 0;

  // allocate an array of size capacity
  public ArrayQueue(int capacity) {
    items = new String[capacity];
    n = capacity;
  }

  // enqueue
  public boolean enqueue(String item) {
    // tail == n means the queue is full
    if (tail == n) return false;
    items[tail] = item;
    ++tail;
    return true;
  }

  // dequeue
  public String dequeue() {
    // head == tail means the queue is empty
    if (head == tail) return null;
    String ret = items[head];
    ++head;
    return ret;
  }
}
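As a quick check of the behavior above, here is a minimal usage sketch. It re-declares a trimmed copy of the ArrayQueue class (same fields and methods as above) so the snippet compiles on its own; the demo class name is mine, not the article's.

```java
public class ArrayQueueDemo {
    // Trimmed copy of the ArrayQueue above, so this snippet is self-contained.
    static class ArrayQueue {
        private String[] items;
        private int n = 0;
        private int head = 0;
        private int tail = 0;

        ArrayQueue(int capacity) {
            items = new String[capacity];
            n = capacity;
        }

        boolean enqueue(String item) {
            if (tail == n) return false;   // queue is full
            items[tail++] = item;
            return true;
        }

        String dequeue() {
            if (head == tail) return null; // queue is empty
            return items[head++];
        }
    }

    public static void main(String[] args) {
        ArrayQueue q = new ArrayQueue(2);
        System.out.println(q.enqueue("a")); // true
        System.out.println(q.enqueue("b")); // true
        System.out.println(q.enqueue("c")); // false: tail == n, queue is full
        System.out.println(q.dequeue());    // a (first in, first out)
        System.out.println(q.dequeue());    // b
        System.out.println(q.dequeue());    // null: head == tail, queue is empty
    }
}
```

Note that after this sequence head == tail == 2 == n, so the queue reports both full and empty even though the array has unused slots at the front; that is exactly the problem discussed next.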

For a stack we only need one pointer, to the top of the stack. A queue needs two: head, pointing to the front of the queue, and tail, pointing to the rear.

After a, b, c, d are enqueued in order, head points to index 0 and tail points to index 4.

[Figure: head = 0, tail = 4 after enqueuing a, b, c, d]

After we call dequeue twice, head points to index 2 while tail still points to index 4.

[Figure: head = 2, tail = 4 after two dequeue operations]

As enqueue and dequeue operations continue, head and tail keep moving backward. Once tail reaches the far right of the array, no more data can be added to the queue, even if there is still free space at the front. How do we solve this?

We can move the data! But if every dequeue, after removing the element at index 0, shifted the whole remaining queue forward, the time complexity of dequeue would go from O(1) to O(n). Can we do better?

Actually, we don't need to move data on every dequeue. We only trigger one batch move inside enqueue, and only when there is no free space left at the tail. With this idea, dequeue() stays unchanged, and a small modification to enqueue() solves the problem. Here is the code:

// enqueue: put item at the tail of the queue
public boolean enqueue(String item) {
    // tail == n means there is no free space at the end of the array
    if (tail == n) {
        // tail == n && head == 0 means the whole array is full
        if (head == 0) return false;
        // move the data
        for (int i = head; i < tail; ++i) {
            items[i-head] = items[i];
        }
        // update head and tail after the move
        tail -= head;
        head = 0;
    }

    items[tail] = item;
    ++tail;
    return true;
}

As the code shows, when tail has reached the rightmost end of the array and new data arrives, we move the elements between head and tail as one block down to positions 0 through tail - head.

[Figure: the elements between head and tail moved down to the start of the array]

With this implementation, the amortized time complexity of enqueue is still O(1).
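To see the on-demand move at work, here is a self-contained sketch: a trimmed array queue combining the original dequeue() with the modified enqueue() above (same field names as the article's code; the class name and the main method are mine).

```java
public class MoveOnDemandQueue {
    private String[] items;
    private int n;
    private int head = 0;
    private int tail = 0;

    public MoveOnDemandQueue(int capacity) {
        items = new String[capacity];
        n = capacity;
    }

    public boolean enqueue(String item) {
        if (tail == n) {
            if (head == 0) return false;       // truly full
            for (int i = head; i < tail; ++i)  // shift live elements down to index 0
                items[i - head] = items[i];
            tail -= head;
            head = 0;
        }
        items[tail++] = item;
        return true;
    }

    public String dequeue() {
        if (head == tail) return null;
        return items[head++];
    }

    public static void main(String[] args) {
        MoveOnDemandQueue q = new MoveOnDemandQueue(3);
        q.enqueue("a"); q.enqueue("b"); q.enqueue("c"); // tail == 3 == n
        q.dequeue();                                    // head == 1: a slot is free up front
        System.out.println(q.enqueue("d")); // true: enqueue shifts b, c down, then stores d
        System.out.println(q.dequeue());    // b
        System.out.println(q.dequeue());    // c
        System.out.println(q.dequeue());    // d
    }
}
```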

Next, let's look at the linked-list-based queue implementation.

The linked implementation also needs two pointers, head and tail, pointing to the first and last nodes of the list. On enqueue: tail->next = new_node; tail = tail->next. On dequeue: head = head->next.

[Figure: linked queue with head and tail pointers]

Code:

public class LinkedQueue {
    // node class
    private class Node {
        String value;
        Node next;
    }

    // number of elements in the queue
    private int size = 0;

    // head points to the head node, tail points to the tail node
    private Node head;
    private Node tail;

    // enqueue
    public boolean enqueue(String item) {
        Node newNode = new Node();
        newNode.value = item;
        if (size == 0) {
            head = newNode;
            tail = newNode;
        } else {
            tail.next = newNode;
            tail = newNode;
        }
        size++;
        return true;
    }
    // dequeue
    public String dequeue() {
        String res = null;
        if (size == 0) return res;
        res = head.value;
        head = head.next;
        size--;
        return res;
    }
    public static void main(String[] args) {
        LinkedQueue test = new LinkedQueue();
        System.out.println("initial size:" + test.size);

        // enqueue a, b, c, d, e, f
        for (int i = 0; i < 6; i++) {
            test.enqueue(String.valueOf((char)(i+97)));
        }
        System.out.println("enqueued size:" + test.size);

        // dequeue four elements
        for (int i = 0; i < 4; i++) {
            test.dequeue();
        }
        System.out.println("dequeued size:" + test.size);
    }
}

3. Circular Queue

In the array-based queue we just implemented, a data move happens whenever tail == n, so enqueue performance suffers. Is there a way to avoid moving data altogether? Let's look at the circular queue's solution.

As the name suggests, a circular queue looks like a ring. The original array is a straight line with a beginning and an end; now we join the two ends into a circle.

[Figure: a circular queue of size 8, head = 4, tail = 7]

In the figure, the queue has size 8, with head = 4 and tail = 7. When a new element a is enqueued, we put it at index 7. But instead of updating tail to 8, we move it one step around the ring, to index 0. When another element b is enqueued, we place it at index 0 and update tail to 1. So after a and b are enqueued in turn, the circular queue looks like this:

[Figure: the circular queue after enqueuing a and b; tail has wrapped around to 1]

With this approach we successfully avoid data moves. It doesn't look hard to understand, but writing a circular queue is considerably trickier than the non-circular queue shown earlier. To write bug-free circular queue code, the key is to get the conditions for "queue full" and "queue empty" right.

In the non-circular array implementation, the queue-full condition is tail == n and the queue-empty condition is head == tail. For a circular queue, what are the corresponding conditions?

The queue-empty condition is still head == tail. The queue-full condition, however, is a bit more involved. I drew a diagram of a full queue; look at it and try to work out the rule.

[Figure: a full circular queue with tail = 3, head = 4, n = 8]

In the full-queue situation I drew, tail = 3, head = 4, n = 8, so the rule works out to (3 + 1) % 8 = 4. Draw a few more full-queue diagrams and you will find that when the queue is full, (tail + 1) % n == head.

Notice that when the queue is full, the slot tail points to does not actually store data. So a circular queue wastes one slot of the array's storage.

public class CircularQueue {
    // array: items, array size: n
    private String[] items;
    private int n = 0;
    // head is the index of the queue head, tail the index of the queue tail
    private int head = 0;
    private int tail = 0;
    // allocate an array of size capacity
    public CircularQueue(int capacity) {
        items = new String[capacity];
        n = capacity;
    }
    // enqueue
    public boolean enqueue(String item) {
        // the queue is full
        if ((tail + 1) % n == head) return false;
        items[tail] = item;
        tail = (tail + 1) % n;
        return true;
    }
    // dequeue
    public String dequeue() {
        // head == tail means the queue is empty
        if (head == tail) return null;
        String ret = items[head];
        head = (head + 1) % n;
        return ret;
    }
}
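A quick check of the two conditions: with capacity 8 the queue reports full after only 7 enqueues, because one slot is deliberately left unused to tell "full" apart from "empty". The snippet below re-declares a trimmed copy of CircularQueue so it compiles on its own; the demo class name is mine.

```java
public class CircularQueueDemo {
    // Trimmed copy of the CircularQueue above, so this snippet is self-contained.
    static class CircularQueue {
        private String[] items;
        private int n;
        private int head = 0;
        private int tail = 0;

        CircularQueue(int capacity) {
            items = new String[capacity];
            n = capacity;
        }

        boolean enqueue(String item) {
            if ((tail + 1) % n == head) return false; // full: one slot stays empty
            items[tail] = item;
            tail = (tail + 1) % n;
            return true;
        }

        String dequeue() {
            if (head == tail) return null;            // empty
            String ret = items[head];
            head = (head + 1) % n;
            return ret;
        }
    }

    public static void main(String[] args) {
        CircularQueue q = new CircularQueue(8);
        int stored = 0;
        while (q.enqueue("x")) stored++;
        System.out.println(stored);         // 7: a queue of size n holds at most n - 1 elements
        q.dequeue();                        // free one slot
        System.out.println(q.enqueue("y")); // true: tail wraps around via (tail + 1) % n
    }
}
```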

4. Blocking Queues and Concurrent Queues

The content so far has been fairly theoretical and may seem removed from day-to-day project development. It's true that in ordinary business development you are unlikely to implement a queue from scratch, or even use one directly. But queues with certain special characteristics have broad applications, such as blocking queues and concurrent queues.

A blocking queue is simply a queue with blocking operations added. In short, when the queue is empty, taking data from the head blocks, because there is nothing to take yet; the call returns only once the queue has data. If the queue is full, inserting data blocks until the queue has a free slot, and only then does the insert complete and return.
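In Java, java.util.concurrent.ArrayBlockingQueue already provides exactly these semantics: put() blocks while the queue is full and take() blocks while it is empty. A minimal sketch (the thread structure and the capacity of 2 are my illustrative choices):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BlockingQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(2);

        // Consumer: take() blocks until an element is available.
        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 3; i++) {
                    System.out.println("took " + queue.take());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        // Producer: put() blocks while the queue already holds 2 elements.
        queue.put("a");
        queue.put("b");
        queue.put("c"); // may block briefly until the consumer takes "a"
        consumer.join();
        // elements are taken in FIFO order: a, then b, then c
    }
}
```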

[Figure: a blocking queue connecting a producer and a consumer]

You may have noticed that the definition above is precisely the "producer-consumer model"! Yes, with a blocking queue we can easily implement a producer-consumer model.

A producer-consumer model built on a blocking queue can effectively coordinate production and consumption rates. When the "producers" generate data faster than the "consumers" can handle it, the queue fills up quickly. At that point the producers block, and only after the consumers take some data are the producers woken up to continue "producing".

And that's not all. Based on a blocking queue, we can also improve throughput by coordinating the numbers of "producers" and "consumers". For example, in the scenario above we could configure several "consumers" to keep up with one "producer":

[Figure: one producer feeding multiple consumers through a blocking queue]

The blocking queues we just discussed are used in multithreaded settings, where multiple threads operate on the queue at the same time. That raises thread-safety issues, so how do we implement a thread-safe queue?

A thread-safe queue is called a concurrent queue. The most straightforward implementation is simply to lock enqueue() and dequeue(), but with such coarse lock granularity concurrency is low: only one enqueue or dequeue can run at a time. In fact, an array-based circular queue using CAS atomic operations can achieve a very efficient concurrent queue. This is also why circular queues see wider use than linked queues. When I cover Disruptor in detail later, I'll discuss the application of concurrent queues.
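To make the CAS idea concrete, here is a minimal sketch of a circular queue whose head and tail advance via compareAndSet instead of a lock: a thread reads tail, checks for fullness, and claims the slot only if tail has not changed in the meantime, retrying otherwise. This is an illustration only, with names of my own choosing; note it leaves a small window between claiming a slot and writing/reading its contents, which production queues such as Disruptor close with per-slot sequence numbers.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReferenceArray;

public class CasRingQueue {
    private final AtomicReferenceArray<String> items;
    private final int n;
    private final AtomicInteger head = new AtomicInteger(0);
    private final AtomicInteger tail = new AtomicInteger(0);

    public CasRingQueue(int capacity) {
        items = new AtomicReferenceArray<>(capacity);
        n = capacity;
    }

    public boolean enqueue(String item) {
        while (true) {
            int t = tail.get();
            if ((t + 1) % n == head.get()) return false;   // full
            // claim the slot only if no other thread advanced tail first
            if (tail.compareAndSet(t, (t + 1) % n)) {
                items.set(t, item);
                return true;
            }
            // lost the race: retry with the fresh tail
        }
    }

    public String dequeue() {
        while (true) {
            int h = head.get();
            if (h == tail.get()) return null;              // empty
            String v = items.get(h);
            if (head.compareAndSet(h, (h + 1) % n)) return v;
            // lost the race: retry with the fresh head
        }
    }
}
```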

Back to the Opening Question

Now that we've covered queues, let's return to the opening question. When the thread pool has no idle threads and a new task requests a thread, how should the pool handle it? And how are the various strategies implemented?

There are generally two strategies. The first is non-blocking: reject the task request outright. The other is blocking: queue the request, and when an idle thread becomes available, take a queued request and continue processing. So how should queued requests be stored?

We want every queued request to be treated fairly, first come, first served, so a queue is a very suitable structure for storing them. As we've said, queues have linked-list-based and array-based implementations. What difference do the two make for queued requests?

A linked-list implementation gives an unbounded queue that supports effectively unlimited queuing, but it may let so many requests pile up that response times grow too long. So for systems that are sensitive to response time, a thread pool backed by an unbounded linked-list queue is not appropriate.

An array implementation gives a bounded queue of finite size. Once the queued requests exceed the queue size, subsequent requests are rejected, which is relatively more reasonable for response-time-sensitive systems. However, choosing a sensible queue size takes real care: too large, and too many requests wait; too small, and system resources go underused, never reaching peak performance.
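In Java this is exactly how ThreadPoolExecutor behaves: the work queue you pass in determines the queuing strategy, and the RejectedExecutionHandler determines what happens when a bounded queue is full. A sketch with one worker thread and a bounded queue of size 2 (the sleeping task and the counts are my illustrative choices):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPoolDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(2),            // bounded queue: at most 2 waiting tasks
                new ThreadPoolExecutor.AbortPolicy());  // reject when the queue is full

        Runnable slowTask = () -> {
            try { Thread.sleep(200); } catch (InterruptedException e) { }
        };

        int rejected = 0;
        // 1 task runs, 2 wait in the queue, the rest are rejected
        for (int i = 0; i < 5; i++) {
            try {
                pool.execute(slowTask);
            } catch (RejectedExecutionException e) {
                rejected++;
            }
        }
        System.out.println("rejected " + rejected + " of 5 tasks"); // typically 2
        pool.shutdown();
    }
}
```

Swapping AbortPolicy for CallerRunsPolicy or DiscardPolicy changes the rejection strategy without touching the queue itself, which is the separation of concerns the text describes.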

Beyond queuing requests in thread pools, queues can queue requests for any pool of limited resources, such as database connection pools. In fact, for most limited-resource scenarios, when no resource is free, request queuing can basically be implemented with the "queue" data structure.

Summary

Today we covered the queue, a data structure very similar to the stack. If you master the following points, you've got this lesson down.

A queue's biggest feature is first in, first out, and its two main operations are enqueue and dequeue. Like a stack, it can be implemented with an array or with a linked list; the array version is called a sequential queue and the linked-list version a linked queue. There is also a special queue shaped like a ring, the circular queue. An array-based queue needs data moves when the tail reaches the end of the array; to eliminate those moves, we use the ring-shaped circular queue.

The circular queue is the focus of this lesson. To write bug-free circular queue code, the key is to get the queue-full and queue-empty conditions right, and you should be able to write the code yourself.

We also looked at several advanced queue structures: blocking queues and concurrent queues. Underneath, they are still queues, just with extra features layered on top. A blocking queue has blocking enqueue and dequeue operations; a concurrent queue is a queue that is safe under multithreaded operation.

After-Class Thinking

  1. Besides the thread pool, what other pool-like structures or scenarios do you know of that use a queue to queue requests?

  2. Regarding the concurrent queues discussed today, there is a lot of online discussion about how to implement a lock-free concurrent queue. What's your take on this question?

  3. Queues are applied very widely, especially queues with extra features such as circular queues, blocking queues, and concurrent queues. They play key roles in many low-level systems, frameworks, and middleware: the high-performance queue Disruptor and the Linux ring buffer use circular concurrent queues, and Java's concurrent package uses ArrayBlockingQueue to implement fair locks. Message queues in distributed applications are also queue structures.

  4. In a CAS-based lock-free queue, the enqueue first reads the tail position, then at enqueue time compares whether tail has changed; if not, the enqueue is allowed, otherwise it fails and retries. Dequeue does the same with the head position, also using CAS.



Origin blog.csdn.net/hsk6543210/article/details/104406248