Multi-threading cases | singleton pattern, blocking queue, timer, thread pool

Multi-threading cases

1. Case 1: thread-safe singleton pattern

Singleton pattern

The singleton pattern is a kind of design pattern.

What are design patterns?
A design pattern is like a "game record" in chess: for certain standard moves by one side, the other side has fixed, well-tested responses, and following those established lines keeps you from losing ground. Software developers similarly compiled a set of such "records" for programming, called design patterns.

There are also many common "problem scenarios" in software development, and design patterns give typical solutions for these typical scenarios.

Two design patterns are especially common:
one is the singleton pattern, and the other is the factory pattern.

Singleton pattern => single instance (object)

In some scenarios, a specific class should have exactly one instance; creating multiple instances would be wrong.

The singleton pattern guarantees that a program contains only one instance of a certain class. It is very common and useful in real development: many "concepts" in development are naturally singletons, such as a JDBC DataSource — such objects should be singletons.

There are many ways to implement the singleton pattern in Java. Two typical implementations are:

  • hungry man (eager) mode
  • lazy man (lazy) mode

Example: washing dishes
1. At lunch I used 4 bowls; right after eating, I wash all 4 immediately~~ [hungry man]
2. At lunch I used 4 bowls but don't wash them right away; at dinner I only need 2 bowls, so I wash just those 2~~ [lazy man]

The second approach does less work overall. In everyday speech "lazy" is derogatory, but in computing laziness usually means higher efficiency and is generally a compliment.

The hungry-man singleton creates the instance eagerly, as early as possible.
The lazy-man singleton is in no hurry: the instance is actually created only when it is first used.


1.1. Hungry man mode

private static Singleton instance;

Notice:

  1. Members modified with static in a class are called "class members" => "class attributes/class methods"; the memory for such an attribute lives on the class object.
    Members without static are called "instance members" => "instance attributes/instance methods".

    Static variables belong to the class, are stored in the method area, and are loaded when the class is loaded;
    member variables belong to the object, are stored on the heap, and are created when the object is created.

    • static turns what would be an instance attribute into a class attribute

    • A class object is unique within a Java process (guaranteed by the JVM), and class attributes hang off the class object, which in turn guarantees there is only one copy of the class's static members

  2. Class object != object
    Class: the template for instances; many objects can be created from the template.
    Object (instance): one concrete thing created from that template.

    • Each class in Java code produces a .class file after compilation. When the JVM runs, it loads this .class file, reads the binary instructions in it, parses them, and constructs a corresponding class object in memory (class loading), e.g. Singleton.class.
    • The class object contains all the information from the .class file,
      including: the class name, what attributes the class has, each attribute's name and type, whether each attribute is public or private...
      Reflection is implemented on top of this information.
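The class member vs. instance member distinction above can be sketched with a small example (the class and field names here are made up for illustration):

```java
class CounterDemo {
    static int classCount = 0; // one copy, shared by every instance (lives on the class object)
    int instanceCount = 0;     // one copy per object (lives on the heap with the object)

    CounterDemo() {
        classCount++;
        instanceCount++;
    }
}

public class StaticDemo {
    public static void main(String[] args) {
        CounterDemo a = new CounterDemo();
        CounterDemo b = new CounterDemo();
        // the static field accumulated across both constructions; each instance field did not
        System.out.println(CounterDemo.classCount); // 2
        System.out.println(a.instanceCount);        // 1
        System.out.println(b.instanceCount);        // 1
    }
}
```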
// Use the Singleton class to implement the singleton pattern: guarantee Singleton has only one instance
class Singleton {
    // 1. Use static to create an instance, and instantiate it immediately.
    //    The instance referenced here is the only instance of this class.
    private static Singleton instance = new Singleton();

    // 2. Provide a method so the outside can obtain the unique instance
    public static Singleton getInstance() {
        return instance;
    }

    // 3. To prevent programmers from accidentally new-ing a Singleton elsewhere, make the constructor private.
    //    With a private constructor, code outside the class can no longer create a Singleton instance via new!
    private Singleton() {
    }
}

public class demo1 {
    public static void main(String[] args) {
        Singleton instance = Singleton.getInstance();
        Singleton instance2 = Singleton.getInstance();
        System.out.println(instance == instance2); // true: both references point to the same object
    }
}

The initialization of this unique instance is eager: the instance is created directly during the class loading stage
(a class is loaded as soon as it is first used in the program).

getInstance in hungry-man mode only reads the variable.
If multiple threads merely read the same variable without modifying it, the code is still thread-safe.
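This read-only behavior can be checked with a small sketch (the class name EagerDemo is made up; it mirrors the hungry-man Singleton above): two threads call getInstance concurrently and both receive the same reference.

```java
class EagerDemo {
    private static EagerDemo instance = new EagerDemo(); // created at class-loading time

    public static EagerDemo getInstance() {
        return instance; // a pure read: safe for concurrent callers
    }

    private EagerDemo() {
    }
}

public class EagerTest {
    public static void main(String[] args) throws InterruptedException {
        EagerDemo[] seen = new EagerDemo[2];
        Thread t1 = new Thread(() -> seen[0] = EagerDemo.getInstance());
        Thread t2 = new Thread(() -> seen[1] = EagerDemo.getInstance());
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(seen[0] == seen[1]); // true: both threads saw the one instance
    }
}
```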


1.2. Lazy mode - single thread

class Singleton2 {
    // 1. Do not initialize the instance immediately.
    private static Singleton2 instance = null;

    // 2. Make the constructor private
    private Singleton2() {
    }

    // 3. Provide a method to obtain the singleton instance.
    //    The instance is actually created only when it is really needed.
    public static Singleton2 getInstance() {
        if (instance == null) {
            instance = new Singleton2();
        }
        return instance;
    }
}

The instance is actually created only when getInstance is first called.

A typical analogy:
A program like Notepad is very slow when opening a large file (to open a 1 GB file, Notepad tries to read the entire 1 GB into memory) [hungry man].
Some other programs optimize for large files (to open a 1 GB file, they first load only the part that fits on the screen) [lazy man].


1.3. Lazy mode - thread safety

The real problem to solve is implementing a thread-safe singleton.
Concretely, "thread safety" here asks: in a multi-threaded environment, can concurrent calls to getInstance produce bugs?

——Are the lazy-man and hungry-man modes thread-safe in a multi-threaded environment?

  • In hungry-man mode, multi-threaded calls to getInstance involve only read operations

  • In lazy-man mode, the calls involve both reads and a write, so there is a thread-safety problem

Consider one possible interleaving: t1 checks instance == null (true) and is scheduled off the CPU; t2 then checks the same condition (still true) and creates an instance; when t1 resumes, it creates a second instance. This is just one ordering among many possible ones.
This analysis shows the current code has a bug: multiple instances may be created.

How do we make lazy-man mode thread-safe? Locking!

Merely having synchronized somewhere in the code does not make it thread-safe; the lock must be placed in the right position, not added casually.

The essence of the problem is that the read, the comparison, and the write are three operations that are not atomic. As a result, the value t2 reads may predate t1's write, leading to multiple news. Therefore the lock must wrap the whole sequence, so that the read and the modification form one indivisible unit.


Use the class object as the lock object here
(a program has only one copy of the class object, which guarantees that all threads calling getInstance lock on the same object).

public static Singleton2 getInstance() {
    synchronized (Singleton2.class) { // the class object serves as the lock object
        if (instance == null) {
            instance = new Singleton2();
        }
    }
    return instance;
}



1.4. Lazy mode - lock competition

Although adding the lock solves the thread-safety problem, it creates a new one:

In the lazy-man code, the unsafety occurs only before the instance is initialized: during that window, concurrent getInstance calls mix reads with a write. Once the instance has been initialized, every later call sees a non-null instance, the if condition is false, and getInstance performs only two read operations (a comparison and a return), which is thread-safe without any lock.

With the locking above, however, every call to getInstance locks, regardless of whether initialization has already happened. Locking has overhead, so even after initialization (when the code is already safe) there is still heavy lock contention. Locking buys thread safety, but at a price: the program slows down.

This, incidentally, is why Vector and Hashtable are not recommended: those two classes lock on every operation without a second thought.

The improvement is to add another conditional check around the locking: lock only while the object has not been created; once it has been created, skip locking entirely.
The condition is whether initialization has completed: (instance == null).

class Singleton2 {
    // 1. Do not initialize the instance immediately.
    private static Singleton2 instance = null;

    // 2. Make the constructor private
    private Singleton2() {
    }

    // 3. Provide a method to obtain the singleton instance.
    //    The instance is actually created only when it is really needed.
    public static Singleton2 getInstance() {
        if (instance == null) { // this check decides whether to lock
            synchronized (Singleton2.class) { // the class object serves as the lock object
                if (instance == null) { // this check decides whether to create the instance
                    instance = new Singleton2();
                }
            }
        }
        return instance;
    }
}

The two conditions look identical, but that is a beautiful coincidence: their purposes are completely different.
The outer condition decides whether to lock.
The inner condition decides whether to create the instance.
It just so happens that both purposes reduce to checking whether instance is null.

In this code the two if conditions look adjacent, but their execution times can differ greatly!
Acquiring the lock may block the thread, so by the time the second if runs after the lock is obtained, a long time may have passed since the first if, and the program's internal state and variable values may have changed substantially.
For example, if the outer condition runs at 10:16, the inner condition might not run until 10:30, by which point instance may already have been set by another thread.

If the inner if is removed, the code degenerates into the earlier buggy version: the lock no longer wraps the read + modify as a single unit.

public static Singleton2 getInstance() {
    if (instance == null) { // decides whether to lock
        synchronized (Singleton2.class) {
            instance = new Singleton2(); // bug: no re-check, so two threads can both reach here
        }
    }
    return instance;
}

1.5. Lazy mode - memory visibility and instruction reordering

There is still one important problem in this code.
If many threads call getInstance here, there will be many reads of instance from memory => the compiler may optimize this memory read into a register read.
Once that optimization kicks in, even after the first thread completes its write to instance, subsequent threads may not perceive the modification and may still see instance as null.

In addition, instruction reordering is also involved!!

instance = new Singleton2(); breaks down into three steps:

1. Allocate memory space
2. Call the constructor, initializing the memory into a valid object
3. Assign the address of the memory to the instance reference


Normally the compiler executes these in the order 1, 2, 3. But the compiler may also perform instruction reordering: to improve efficiency, it can adjust the execution order, so 1 2 3 may become 1 3 2.

In a single thread, 123 and 132 make no essential difference.
For example, a cafeteria worker serving food: 1 is grabbing a plate, 2 is loading the food, and 3 is handing me the plate. If she hands me the empty plate first and then loads the food, the end result is the same for me.

But in a multi-threaded environment there is a problem!!!
Suppose t1 executes in the order 1, 3, 2.
After t1 finishes 1 and 3, but before executing 2, it is scheduled off the CPU and t2 runs.
Because t1 already executed 3, t2 sees a non-null reference, so t2 returns the instance reference directly and may try to use its attributes. But since t1 has not yet executed step 2 (initialization), what t2 gets is an invalid, incompletely constructed object.

Solution: add volatile to instance

// This is the complete thread-safe singleton (lazy-man mode)
class Singleton2 {
    // 1. Do not initialize the instance immediately.
    private static volatile Singleton2 instance = null;

    // 2. Make the constructor private
    private Singleton2() {
    }

    // 3. Provide a method to obtain the singleton instance.
    //    The instance is actually created only when it is really needed.
    public static Singleton2 getInstance() {
        if (instance == null) { // decides whether to lock
            synchronized (Singleton2.class) { // the class object serves as the lock object
                if (instance == null) { // decides whether to create the instance
                    instance = new Singleton2();
                }
            }
        }
        return instance;
    }
}

2. Case 2: blocking queue

2.1. Producer-consumer model

A queue is first-in, first-out.
A blocking queue is a special queue that still follows the first-in-first-out rule, but compared with an ordinary queue it adds two features:
1. Thread safety
2. A blocking effect

1) If the queue is empty, a dequeue operation blocks until another thread adds an element to the queue (the queue becomes non-empty).
2) If the queue is full, an enqueue operation likewise blocks until another thread takes an element from the queue (the queue is no longer full).

A message queue is also a special queue: it is roughly a blocking queue with "message types" added, doing first-in-first-out within each specified category. The message queue we are talking about at this point is still a "data structure".

Based on these features, the "producer-consumer model" can be implemented.

The blocking queue acts as the trading place in the producer-consumer model.

The producer-consumer model is a very useful multi-threaded development pattern in practice, especially in server development.
Suppose there are two servers, A and B: A is the entry server that directly receives users' network requests, and B is the application server that provides data to A.

Advantage 1: Decoupling

It decouples the sender from the receiver.

——Typical scenario in development: Mutual calls between servers


The client sends a recharge request to server A. A forwards the request to B for processing, and after processing B returns the result to A. This can be described as "A calls B".

Without the producer-consumer model, the coupling between A and B in this scenario is high: for A to call B, A must know of B's existence, and if B goes down, it easily causes bugs in A!!! (When writing A's code you must fully understand the interfaces B provides, and when writing B's code you must fully understand how A calls it.)

Moreover, if a server C is added later, a lot of A's code has to be modified, and A must then be re-tested, re-released, and re-deployed, which is very troublesome.

For the above scenarios, using the producer-consumer model can effectively reduce coupling


For the request: A is the producer and B is the consumer.
For the response: A is the consumer and B is the producer.
The blocking queue acts as the trading place in both directions.

A does not need to know about B; it only interacts with the queue (A's code contains not a single line related to B).
B does not need to know about A; it only interacts with the queue (B's code contains not a single line related to A).

If B goes down, A is unaffected: the queue is still fine, and A can still insert elements into it; if the queue fills up, A simply blocks for a while.
If A goes down, B is unaffected: the queue is still fine, and B can still take elements from it; if the queue is empty, B simply blocks for a while.
Whichever of A and B goes down, the other side is not affected!!!

If a new consumer C is added, A is completely unaware of it...


Advantage 2: Peak shaving and valley filling

The model can "shave peaks and fill valleys" for requests, keeping the system stable.

——The Three Gorges Dam has exactly this "peak shaving, valley filling" effect.

In the flood season, the water flow is heavy: the dam closes its gates and stores water, absorbing the upstream surge and protecting the downstream from excessive flow, preventing floods — peak shaving.
In the dry season, the dam opens its gates and releases water, supplying the downstream more adequately and preventing drought — valley filling.
It is hard to predict when the upstream will flood, so the dam guards against problems before they occur.

The "upstream" is the requests sent by users; the "downstream" is the servers that perform the concrete business work.
How many requests users send is uncontrollable: sometimes there are many, sometimes few...

——Without the producer-consumer model:

If the request volume suddenly spikes (which is uncontrollable):

A spike at A => a spike at B.
A, as the entry server, does very light computation per request, so a spike is not a big problem for it.
B, as the application server, may do heavy computation and need more system resources per request. With more requests, the resources needed grow further, and if the host's hardware is insufficient, the program may crash.

——With the producer-consumer model:

A spike at A => a spike of requests in the blocking queue.

Since the blocking queue does no computation and merely stores data, it can withstand much greater pressure.
B keeps consuming data at its original pace and does not spike just because A spiked; B is well protected and will not crash due to such request fluctuations.

"Peak shaving": such peaks are usually not sustained — they last a while and then subside.
"Valley filling": afterwards, B keeps processing the backlog of data at its original rate.

The "blocking queue" used in real development is not a bare data structure but a dedicated server program (or a set of them), and it provides far more than blocking-queue behavior: persistent storage of data, multiple data channels, multi-node disaster-recovery and redundancy, a management panel for convenient configuration... Such a queue has a new name: "message queue" (a component used very widely in later development).
Kafka is one of the more mainstream message queues in the industry. There are many message-queue implementations, and their core functions are similar.


2.2. Implementing a blocking queue

First learn to use the blocking queue in the Java standard library, and build a simple producer-consumer model on top of it.
Then **implement a simple blocking queue** by hand (to better understand the principles of blocking queues, and of threads and lock operations in particular).

The blocking queue BlockingQueue in the standard library

The Java standard library has a built-in blocking queue; when a program needs one, we can use the standard library's directly.

  • BlockingQueue is an interface; real implementation classes include LinkedBlockingQueue

  • Queue provides three methods: offer to enqueue, poll to dequeue, and peek to read the front element

    A blocking queue's two main methods are put (blocking enqueue) and take (blocking dequeue)

  • BlockingQueue also has offer, poll, peek, etc., but those methods do not block

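A quick sketch of the non-blocking methods (using ArrayBlockingQueue, whose fixed capacity makes the full/empty cases easy to trigger immediately):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class NonBlockingOps {
    public static void main(String[] args) {
        // a capacity-1 queue makes "full" and "empty" easy to demonstrate
        BlockingQueue<String> q = new ArrayBlockingQueue<>(1);
        System.out.println(q.offer("a")); // true: inserted
        System.out.println(q.offer("b")); // false: queue full, returns instead of blocking
        System.out.println(q.peek());     // a: reads the front without removing it
        System.out.println(q.poll());     // a: removes the front element
        System.out.println(q.poll());     // null: queue empty, returns instead of blocking
    }
}
```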

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class demo3 {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> blockingQueue = new LinkedBlockingQueue<>();
        blockingQueue.put("hello");
        String s1 = blockingQueue.take();
        System.out.println(s1);
        // the queue is now empty, so this take blocks until another thread puts an element
        String s2 = blockingQueue.take();
        System.out.println(s2);
    }
}

After taking out "hello", the queue is empty. If we take again at this point, the thread blocks, waiting for some other thread to add an element to the queue.



producer consumer model

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ThreadDemo1 {
    public static void main(String[] args) {
        BlockingQueue<Integer> blockingQueue = new LinkedBlockingQueue<>();

        // create two threads, acting as consumer and producer
        Thread customer = new Thread(() -> {
           while (true) {
               try {
                   Integer result = blockingQueue.take();
                   System.out.println("consumed: " + result);
               } catch (InterruptedException e) {
                   e.printStackTrace();
               }
           }
        });
        customer.start();

        Thread producer = new Thread(() -> {
            int count = 0;
            while (true) {
                try {
                    blockingQueue.put(count);
                    System.out.println("produced: " + count);
                    count++;
                    Thread.sleep(500); // produce one element every 500 ms
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        });
        producer.start();
    }
}



Blocking queue - single thread

To implement a blocking queue, first write an ordinary queue, then add thread safety, then add blocking.

A queue can be implemented with an array or a linked list.

——Linked list: head deletion and tail insertion are both easy.
Head deletion on a linked list is O(1); tail insertion "can be O(1)" by keeping an extra reference that records the current tail node.

——Array: circular queue

Initially head and tail both point to index 0 (the valid range is [head, tail)).
Enqueue: put the new element at the tail position, then tail++.
Dequeue: return the element at the head position, then head++.
When head/tail reaches the end of the array, it wraps around to the beginning and the space is reused.

When implementing a circular queue there is an important question: how do we distinguish an empty queue from a full queue?
Without extra measures, head and tail coincide in both cases.

  1. Waste one slot: head == tail means empty,
    head == tail + 1 means full

  2. Keep an extra variable size recording the element count: size == 0 means empty,
    size == arr.length means full
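The code below uses option 2 (a size counter). For contrast, here is a minimal sketch of option 1, where one array slot is deliberately sacrificed (the class name CircularQueue is made up for illustration):

```java
class CircularQueue {
    private final int[] items;
    private int head = 0;
    private int tail = 0;

    CircularQueue(int capacity) {
        items = new int[capacity + 1]; // one extra, permanently empty slot
    }

    boolean isEmpty() {
        return head == tail; // head catching up to tail means empty
    }

    boolean isFull() {
        return (tail + 1) % items.length == head; // tail one step behind head means full
    }

    boolean offer(int value) {
        if (isFull()) return false;
        items[tail] = value;
        tail = (tail + 1) % items.length;
        return true;
    }

    Integer poll() {
        if (isEmpty()) return null;
        int ret = items[head];
        head = (head + 1) % items.length;
        return ret;
    }
}
```

With capacity 2, the array has 3 slots: after two successful offers the third fails, and after two polls the queue reports empty again.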

class MyBlockingQueue {
    // the underlying storage
    private int[] items = new int[1000];
    // index of the queue head
    private int head = 0;
    // index of the queue tail
    private int tail = 0;
    // number of valid elements
    private int size = 0;

    // enqueue
    public void put(int value) {
        // 1. if the queue is full, just return for now
        if (size == items.length) {
            return;
        }
        // 2. put the new element at the tail position
        items[tail] = value;
        tail++;
        // 3. handle tail reaching the end of the array
        if (tail >= items.length) { // comparison + assignment: two operations, but both cheap
            tail = 0;
        }
        // tail = tail % items.length; // less readable, and division is slower than a comparison
        // 4. insertion done; update the element count
        size++;
    }

    // dequeue
    public Integer take() {
        // 1. if the queue is empty, return an invalid value
        if (size == 0) {
            return null;
        }
        // 2. take the element at the head position
        int ret = items[head];
        head++;
        // 3. when head reaches the end, reset it to 0
        if (head >= items.length) {
            head = 0;
        }
        // 4. decrement the element count
        size--;
        return ret;
    }
}

public class TestDemo {
    public static void main(String[] args) {
        MyBlockingQueue queue = new MyBlockingQueue();
        queue.put(1);
        queue.put(2);
        queue.put(3);
        queue.put(4);
        System.out.println(queue.take()); // 1
        System.out.println(queue.take()); // 2
        System.out.println(queue.take()); // 3
        System.out.println(queue.take()); // 4
    }
}

Blocking queue - thread safety

The ordinary queue is now done; next, add thread safety. The point of a blocking queue is to be used in multi-threaded environments, so we must guarantee that concurrent calls to put and take are problem-free.

Every line of code in put and take manipulates shared variables, so the simplest approach is to lock the entire method
(with synchronized added, the queue is already thread-safe).

// enqueue
public void put(int value) {
    // synchronized here wraps all the code in the method; putting synchronized on the method itself has the same effect
    synchronized (this) { // put and take on the same MyBlockingQueue compete for this lock
        if (size == items.length) {
            return;
        }
        items[tail] = value;
        tail++;
        if (tail >= items.length) {
            tail = 0;
        }
        size++;
    }
}

// dequeue
public Integer take() {
    int ret = 0;
    synchronized (this) {
        if (size == 0) {
            return null;
        }
        ret = items[head];
        head++;
        if (head >= items.length) {
            head = 0;
        }
        size--;
    }
    return ret;
}

Blocking queue - blocking

Next comes the key point: implementing the blocking effect, using the wait and notify mechanism.
For put, the blocking condition is "the queue is full"; for take, the blocking condition is "the queue is empty".

wait must be called on the same object that was locked; since we lock on this, we call this.wait().
The wait inside put must be woken by take: as soon as take removes one element, the queue is no longer full, so put can be woken.
For the wait inside take, the condition is "the queue is empty"; once a put succeeds, the queue is non-empty, so take can be woken.


In the current code, put and take will never both be waiting at the same time (their waiting conditions are opposites: one requires empty, the other full).

If some thread is waiting, notify wakes it up; if no thread is waiting, notify has no side effect.

notify wakes just one waiting thread, chosen arbitrarily; it cannot target a specific thread.
To be precise, different lock objects must be used:
to wake t1, have t1 call o1.wait() and then call o1.notify(); to wake t2, have t2 call o2.wait() and then call o2.notify().
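A small sketch of waking a specific thread via its own lock object (the names o1 and t1 follow the text; the polling loop on the thread state is just to make the demo deterministic):

```java
public class PreciseWakeup {
    static final Object o1 = new Object();
    static volatile boolean t1Woken = false;

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> {
            synchronized (o1) {
                try {
                    o1.wait(); // t1 parks on o1; only a notify on o1 can wake it
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                t1Woken = true;
            }
        });
        t1.start();

        // wait until t1 is actually parked inside o1.wait()
        while (t1.getState() != Thread.State.WAITING) {
            Thread.sleep(10);
        }
        synchronized (o1) {
            o1.notify(); // wakes the thread waiting on o1 specifically
        }
        t1.join();
        System.out.println("t1 woken: " + t1Woken); // t1 woken: true
    }
}
```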

When a wait is woken up, is its if condition guaranteed to no longer hold? Concretely: the wait in put is woken because the queue stopped being full, but after the wakeup, is the queue still guaranteed to be not-full?
In our current code this situation does not arise: a wakeup happens only after an element is taken successfully, once per take.
But to be safe, the best practice is to re-check the condition after wait returns, to see whether it still holds!!

Change the if to while; the standard library's documentation recommends writing it this way.

while (size == items.length) {
    // the queue is full: instead of returning, wait
    this.wait();
}

while (size == 0) {
    // the queue is empty: instead of returning null, wait
    this.wait();
}


The complete code:

// a hand-written blocking queue; generics are omitted here, and int is used as the element type
class MyBlockingQueue {
    // the underlying storage
    private int[] items = new int[1000];
    // index of the queue head
    private int head = 0;
    // index of the queue tail
    private int tail = 0;
    // number of valid elements
    private int size = 0;

    // enqueue
    public void put(int value) throws InterruptedException {
        synchronized (this) { // put and take on the same MyBlockingQueue compete for this lock
            while (size == items.length) {
                // the queue is full: instead of returning, wait
                this.wait();
            }
            // 2. put the new element at the tail position
            items[tail] = value;
            tail++;
            // 3. handle tail reaching the end of the array
            if (tail >= items.length) {
                tail = 0;
            }
            // 4. insertion done; update the element count
            size++;

            // the enqueue succeeded, so the queue is non-empty: wake the wait in take
            this.notify();
        }
    }

    // dequeue
    public Integer take() throws InterruptedException {
        int ret = 0;
        synchronized (this) {
            while (size == 0) {
                // the queue is empty: instead of returning null, wait
                this.wait();
            }
            // 2. take the element at the head position
            ret = items[head];
            head++;
            // 3. when head reaches the end, reset it to 0
            if (head >= items.length) {
                head = 0;
            }
            // 4. decrement the element count
            size--;

            // the take succeeded, so the queue is not full: wake the wait in put
            this.notify();
        }
        return ret;
    }
}

public class ThreadDemo2 {
    public static void main(String[] args) {
        // producer-consumer model, this time using the hand-written queue
        MyBlockingQueue blockingQueue = new MyBlockingQueue();

        // create two threads, acting as consumer and producer
        Thread customer = new Thread(() -> {
            while (true) {
                try {
                    Integer result = blockingQueue.take();
                    System.out.println("consumed: " + result);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        });
        customer.start();

        Thread producer = new Thread(() -> {
            int count = 0;
            while (true) {
                try {
                    blockingQueue.put(count);
                    System.out.println("produced: " + count);
                    count++;
                    Thread.sleep(500); // produce one element every 500 ms
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        });
        producer.start();
    }
}

3. Case 3: Timer

3.1. The Timer class in the standard library

A timer is another important component in software development. It works like an "alarm clock": once a set time is reached, it wakes up and executes a task that was registered in advance.

There are two styles of alarm clock in daily life: 1. ring at a designated moment; 2. ring after a specified period of time has elapsed.
The timer here does not merely remind; it executes a method/piece of code that was prepared ahead of time.

Timer is a very commonly used component in actual development

For example, in network communication it is easy to hit a "cannot connect" situation. Rather than waiting forever, you can use a timer to "cut losses": if the peer does not return data within 500 ms, disconnect and try to reconnect. Or consider a
Map in which a certain key should expire (be automatically deleted) after 3 s.

Scenarios like these all require a timer.

join(timeout) and sleep(timeout) are both implemented on top of the system's internal timer.

First introduce the timer usage of the standard library, and then see how to implement a timer yourself

The standard library provides a Timer class. Its core method is schedule ("arrange"). The effect of this method is to register a task with the timer; the task is not executed immediately, but at the specified time.
schedule takes two parameters: the first specifies the task code to execute (a TimerTask), and the second specifies how long from now it should run (in milliseconds).

import java.util.Timer;
import java.util.TimerTask;

public class demo5 {
    public static void main(String[] args) {
        Timer timer = new Timer();
        timer.schedule(new TimerTask() {
            @Override
            public void run() {
                System.out.println("hello time");
            }
        }, 3000);
        timer.schedule(new TimerTask() {
            @Override
            public void run() {
                System.out.println("hello time2");
            }
        }, 2000);
        System.out.println("main");
    }
}

Running result:
"main" is printed first; about 2 seconds later, "hello time2"; about 1 second after that, "hello time".
But the program does not exit.

There is a dedicated thread inside the Timer, which is responsible for executing the registered tasks
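Since that internal worker thread is a foreground thread, the process above never exits. To let it exit, the standard library offers `timer.cancel()` (or constructing the timer as a daemon with `new Timer(true)`); a small sketch with an invented class name:

```java
import java.util.Timer;
import java.util.TimerTask;

public class TimerCancelDemo {
    public static void main(String[] args) throws InterruptedException {
        Timer timer = new Timer(); // its internal worker is a foreground thread
        timer.schedule(new TimerTask() {
            @Override
            public void run() {
                System.out.println("task done");
            }
        }, 1000);
        Thread.sleep(2000);
        timer.cancel(); // stops the worker thread, so the process can exit
    }
}
```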

Internally, a Timer needs to:

  1. manage many tasks
  2. execute each task whose time has arrived

Implementing a timer yourself: a timer can register N tasks, and the N tasks are executed in order, each at its originally agreed time.

1). A scanning thread is responsible for checking whether the time has come and executing tasks (inside the timer, a dedicated thread scans periodically, determines whether the head task is due, executes it if so, and otherwise keeps waiting).

2). A data structure (a priority queue) holds all registered tasks.


3.2. Describing a task

Create a dedicated class (MyTask) to represent a task in the timer.

The tasks stored in the queue cannot simply be Runnable: a Runnable only describes what the task does; we also need to describe when the task executes.

// A class representing one task
class MyTask {
    // What the task does
    private Runnable runnable;
    // When the task runs (millisecond timestamp)
    private long time;

    public MyTask(Runnable runnable, long time) {
        this.runnable = runnable;
        this.time = time;
    }

    // Get the task's scheduled time
    public long getTime() {
        return time;
    }

    // Execute the task
    public void run() {
        runnable.run();
    }
}

3.3. Organizing tasks

Use a data structure to hold the tasks together and organize them.

Suppose several tasks arrive: do homework in an hour, go to class in three hours, rest in 10 minutes. The order in which tasks are registered is arbitrary, but their execution is not: tasks run in chronological order!
Our requirement is to quickly find, among all the tasks, the one with the smallest time.

A heap fits this requirement. The standard library provides a dedicated data structure, PriorityQueue.

Each task here executes after a certain "time"; the earlier the time, the sooner it must execute, so a smaller time means a higher priority.
Although the order of all elements in the queue cannot be fully determined, the head of the queue is guaranteed to be the earliest in time. The scanning
thread therefore only needs to examine the head of the queue instead of traversing the entire queue.

private PriorityQueue<MyTask> queue = new PriorityQueue<>();

However, the priority queue here will be used in a multi-threaded environment: tasks may be registered from multiple threads, and a dedicated thread also takes tasks out for execution. Thread safety must therefore be considered,
so we use a queue that is both a priority queue and a blocking queue: PriorityBlockingQueue.

private PriorityBlockingQueue<MyTask> queue = new PriorityBlockingQueue<>();

// A simple hand-written timer
class MyTimer {
    // Scanning thread
    private Thread t = null;

    // The timer holds multiple tasks in a blocking priority queue
    private PriorityBlockingQueue<MyTask> queue = new PriorityBlockingQueue<>();

    public MyTimer() {
        // TODO
    }

    /** The timer exposes a schedule method for registering tasks
     * @param runnable the task to run
     * @param after how many milliseconds from now to run it
     */
    public void schedule(Runnable runnable, long after) {
        MyTask task = new MyTask(runnable, System.currentTimeMillis() + after);
        queue.put(task); // put the task into the heap
    }
}

—— Executing tasks whose time has come:

The task with the earliest time must be executed first,
so a scanning thread constantly checks the head element of the priority queue to see whether the earliest task is due.

Create the scanning thread in the timer's constructor.

With a blocking queue, the only way to inspect the head element is to take it out of the queue first, and put it back if the condition is not met.
This is unlike an ordinary queue, where the head element can be examined directly.

public MyTimer() {
    t = new Thread(() -> {
        while (true) {
            try {
                // Take the head element, then check whether it is due
                MyTask myTask = queue.take();
                long curTime = System.currentTimeMillis();
                if (curTime < myTask.getTime()) {
                    // 1. Not due yet: put the task back into the heap
                    queue.put(myTask);
                } else {
                    // 2. Due: run the task
                    myTask.run();
                }
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    });
    t.start();
}

3.4. Two flaws

There are two very serious problems in the above code:

Flaw #1: MyTask does not specify comparison rules


The comparison rules for our MyTask class (compare by time) do not exist by default; we must specify them manually. Many of the ordered collection classes in the standard library
impose such constraints: not just any class can be put into them.

--test:

public class ThreadDemo {
    public static void main(String[] args) {
        MyTimer myTimer = new MyTimer();
        myTimer.schedule(new Runnable() {
            @Override
            public void run() {
                System.out.println("任务1");
            }
        }, 1000);
        myTimer.schedule(new Runnable() {
            @Override
            public void run() {
                System.out.println("任务2");
            }
        }, 2000);
    }
}

(Running this throws a ClassCastException: MyTask cannot be cast to Comparable.)

Let the MyTask class implement the Comparable interface; alternatively, write a separate comparator with Comparator.

Revise:

class MyTask implements Comparable<MyTask> {
    @Override
    public int compareTo(MyTask o) {
        // Long.compare avoids the int overflow risk of casting (this.time - o.time)
        return Long.compare(this.time, o.time);
    }
}
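As mentioned above, a separate Comparator also works: pass it to the queue's constructor instead of making the element type Comparable. A minimal sketch (the `Task` stand-in class here is invented for illustration; `PriorityBlockingQueue(int, Comparator)` is a real constructor):

```java
import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;

public class ComparatorDemo {
    // Minimal stand-in for MyTask: only the timestamp matters here.
    static class Task {
        final long time;
        Task(long time) { this.time = time; }
    }

    public static long takeEarliest() throws InterruptedException {
        // Supply the ordering externally; Long.compare under the hood also
        // sidesteps the int-cast overflow of subtracting timestamps
        PriorityBlockingQueue<Task> queue = new PriorityBlockingQueue<>(
                11, Comparator.comparingLong((Task t) -> t.time));
        queue.put(new Task(3000));
        queue.put(new Task(1000));
        queue.put(new Task(2000));
        return queue.take().time; // the earliest time comes out first
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(takeEarliest()); // 1000
    }
}
```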

Flaw #2: without any restriction, this loop spins very fast

while (true) spins too quickly, wasting CPU for no benefit.

If the queue is empty, that is fine: the thread blocks inside take() (no problem).
The problem is when the queue is non-empty but the head task's time has not yet arrived.


The operation above is called "busy waiting": it is indeed waiting, but not idle waiting. It produces no useful work, yet never rests.
Waiting should release the CPU so it can do other things; busy waiting holds on to CPU resources the whole time it waits, so this kind of operation is a waste of CPU.

Since we are waiting for a specified amount of time anyway, why not simply use sleep instead of wait?
sleep cannot be woken up early, while wait can.
During the waiting period, a new task may need to be inserted! The new task may be earlier than all existing tasks, and with sleep the new task's execution time could be missed.

Such a mechanism can be built on wait: there is an overload of wait that takes a timeout (no notify required; it wakes up on its own when the time is up). Compute the difference between the task's target time and the current time, and wait exactly that long.

In the schedule operation, add a notify. Every time a new task arrives (someone calls schedule), just notify; the scanning thread rechecks the head and recomputes the waiting time.

This way the scanning thread can both wait for a specified time and be woken at any moment: waiting does not occupy the CPU, and new tasks are not missed.

Revise:

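The mechanism can be seen in isolation with a standalone sketch (not MyTimer itself, and the class name `WaitTimeoutDemo` is invented): a thread plans to wait 5 s, but a notify, standing in for a newly scheduled earlier task, cuts the wait short; a sleep could not be cut short this way.

```java
public class WaitTimeoutDemo {
    public static void main(String[] args) throws InterruptedException {
        Object lock = new Object();
        Thread waiter = new Thread(() -> {
            synchronized (lock) {
                try {
                    long start = System.currentTimeMillis();
                    lock.wait(5000); // plan to wait 5 s...
                    System.out.println("woke after " +
                            (System.currentTimeMillis() - start) + " ms");
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        });
        waiter.start();
        Thread.sleep(500);
        synchronized (lock) {
            lock.notify(); // ...but a "new task" wakes it after ~500 ms
        }
        waiter.join();
    }
}
```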


3.5. Flaw #3: notify with no one waiting

With the code written so far, there is still a serious problem, again closely tied to thread safety / preemptive scheduling.

Consider an edge case:

Suppose the scanning thread has just executed the put line and is then scheduled off the CPU...


When the thread gets the CPU back, the next step is the wait, and the waiting time has already been computed.
For example, curTime is 13:00 and the task's getTime is 14:00, so it is about to wait 1 hour (the wait has not executed yet, because the thread was descheduled right after put).

At this moment, another thread calls schedule to add a new task whose execution time is 13:30.


schedule executes notify, intending to wake up the wait.

But the scanning thread has not reached its wait yet!
So this notify wakes nothing. Although the new task has been inserted into the queue, and is even at the head of the queue, when the scanning thread returns to the CPU it still waits for 1 hour. The 13:30 task is therefore missed!

Once the problem is understood, the cause is easy to see: the take operation and the wait operation are not atomic.
If a lock covers the span from take to wait, no new task can slip in during that window, and the problem disappears
(in other words, it guarantees that every notify fires while the scanning thread is genuinely waiting).

Revise:

Simply enlarge the scope of the lock. After enlarging it, we can guarantee that by the time notify executes, the wait has indeed been reached, preventing
the situation where the notify fires before the wait is ready.


Code:

import java.util.concurrent.PriorityBlockingQueue;

// A class representing one task
class MyTask implements Comparable<MyTask> {
    // What the task does
    private Runnable runnable;
    // When the task runs (millisecond timestamp)
    private long time;

    public MyTask(Runnable runnable, long time) {
        this.runnable = runnable;
        this.time = time;
    }

    // Get the task's scheduled time
    public long getTime() {
        return time;
    }

    // Execute the task
    public void run() {
        runnable.run();
    }

    @Override
    public int compareTo(MyTask o) {
        // Long.compare avoids int overflow on large timestamp differences
        return Long.compare(this.time, o.time);
    }
}

// A simple hand-written timer
class MyTimer {
    // Scanning thread
    private Thread t = null;

    // The timer holds multiple tasks in a blocking priority queue
    private PriorityBlockingQueue<MyTask> queue = new PriorityBlockingQueue<>();

    public MyTimer() {
        t = new Thread(() -> {
            while (true) {
                try {
                    synchronized (this) {
                        // Take the head element, then check whether it is due
                        MyTask myTask = queue.take();
                        long curTime = System.currentTimeMillis();
                        if (curTime < myTask.getTime()) {
                            // 1. Not due yet: put the task back into the heap
                            queue.put(myTask);
                            // wait after the put; schedule() can cut this short
                            this.wait(myTask.getTime() - curTime);
                        } else {
                            // 2. Due: run the task
                            myTask.run();
                        }
                    }
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        });
        t.start();
    }

    /** The timer exposes a schedule method for registering tasks
     * @param runnable the task to run
     * @param after how many milliseconds from now to run it
     */
    public void schedule(Runnable runnable, long after) {
        // Note the conversion: time stores an absolute timestamp, not the relative delay
        MyTask task = new MyTask(runnable, System.currentTimeMillis() + after);
        queue.put(task); // put the task into the heap
        // A new task arrived: notify the scanning thread
        synchronized (this) {
            this.notify();
        }
    }
}

public class ThreadDemo {
    public static void main(String[] args) {
        MyTimer myTimer = new MyTimer();
        myTimer.schedule(new Runnable() {
            @Override
            public void run() {
                System.out.println("任务1");
            }
        }, 1000);
        myTimer.schedule(new Runnable() {
            @Override
            public void run() {
                System.out.println("任务2");
            }
        }, 2000);
    }
}

Running result:

任务1
任务2

Summary:

  1. Describe a task: runnable + time
  2. Organize tasks with a priority blocking queue, PriorityBlockingQueue
  3. Implement the schedule method to register tasks into the queue
  4. Create a scanning thread that keeps taking the head of the queue and checking whether its time has come
  5. Note: make MyTask support comparison; solve the busy-waiting problem; handle the case where notify fires before wait executes

4. Case 4: thread pool

4.1. User mode / kernel mode

Processes are relatively heavyweight; creating and destroying them frequently is costly.
Solution: a process pool, or threads.

Threads ("lightweight processes") are lighter than processes: creating a thread is cheaper than creating a process, destroying a thread is cheaper than destroying a process, scheduling a thread is cheaper than scheduling a process... But if the frequency of creation and destruction rises further, the overhead still shows.
Solution: a thread pool, or coroutines/fibers (not yet in the Java standard library; Go has built-in coroutines, which gives it an advantage for concurrent programming).

A thread pool reduces the overhead of creating/destroying threads.

Create threads in advance and keep them in a pool:
1. When a thread is needed later, take one directly from the pool instead of requesting one from the system.
2. When a thread is done, put it back into the pool for future use instead of returning it to the system.
These two actions are cheaper than creating/destroying threads.
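The borrow-and-return idea can be sketched in a few lines (an illustration only; the `MiniPool` class and its queue capacity are invented here, and real code should use the standard library's ThreadPoolExecutor, introduced below):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// A minimal sketch of "create threads up front, reuse them".
public class MiniPool {
    private final BlockingQueue<Runnable> tasks = new ArrayBlockingQueue<>(1000);

    public MiniPool(int n) {
        for (int i = 0; i < n; i++) {          // create all threads in advance
            Thread t = new Thread(() -> {
                while (true) {
                    try {
                        tasks.take().run();    // block until a task arrives
                    } catch (InterruptedException e) {
                        break;
                    }
                }
            });
            t.setDaemon(true);                 // don't keep the process alive
            t.start();
        }
    }

    // "Getting a thread" is now just a queue operation in user mode:
    // no system call, no thread created per task.
    public void submit(Runnable task) throws InterruptedException {
        tasks.put(task);
    }

    public static void main(String[] args) throws InterruptedException {
        MiniPool pool = new MiniPool(4);
        CountDownLatch done = new CountDownLatch(10);
        for (int i = 0; i < 10; i++) {
            pool.submit(done::countDown);
        }
        done.await(2, TimeUnit.SECONDS);
        System.out.println("all tasks ran");
    }
}
```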

—— Why is taking threads from a pool faster than requesting/releasing them from the system?


"User mode" in a program:
user mode executes the code the programmer writes; it runs in the application layer at the top. Code running here is said to run in "user mode".

"Kernel mode" in a program:
the kernel provides APIs for programs, called system calls. Some code needs to call the operating system's API, and the deeper logic executes inside the kernel. Operations performed in kernel mode are done inside the operating system kernel.

For example, a call to System.out.println essentially enters the kernel through the write system call; the kernel then executes a pile of logic to drive the display and output the string...

Code running inside the kernel is said to run in "kernel mode".

Creating/destroying a thread must be completed by the operating system kernel (creating a thread essentially builds a PCB in the kernel and links it into a list).
In effect, calling Thread.start requires entering kernel mode.

At that point, you have no idea how many tasks the kernel is already carrying (the kernel serves not just your application but every program on the machine). So
when a system call executes kernel code, you cannot be sure what work the kernel will have to do; the overall process is "uncontrollable".

Putting an already-created thread into a "pool", by contrast, is implemented in user mode:
putting a thread into the pool and taking one out do not involve kernel mode at all; they complete with pure user-mode code.

Pure user-mode operations are generally considered more efficient than operations handled in kernel mode.

For example: someone goes to the bank to handle some business, and the teller says a photocopy of their ID card is needed.

1. They walk over to the photocopier in the lobby and make the copy themselves. A pure user-mode operation. (Done entirely by themselves; the whole process is controllable.)
2. They hand the ID card to the teller and let the teller copy it for them. This is equivalent to handing some work over to kernel mode. (Not done by themselves; the whole process is uncontrollable.)
We do not know how many tasks the teller has. Perhaps, after disappearing behind the counter, the teller copies it right away.
But the teller might also do other things along the way: count money / tally receipts / use the restroom / reply to a message...

Saying kernel mode is less efficient does not mean it is necessarily slow. Rather, once code enters kernel mode it becomes uncontrollable:
the kernel returns the result whenever it finishes the work (sometimes fast, sometimes slow).


4.2. The thread pool ThreadPoolExecutor in the standard library

First learn how to use the thread pool in the Java standard library, then implement one yourself.

The standard library's thread pool is called ThreadPoolExecutor, and it is somewhat cumbersome to use.
It lives under java.util.concurrent ("concurrent");
many of Java's multithreading-related components are in this concurrent package.


—— Constructor:

(The parameters of the ThreadPoolExecutor constructor are a high-frequency interview topic; they are important to master!!!)

ThreadPoolExecutor(int corePoolSize,
                   int maximumPoolSize, 
                   long keepAliveTime, 
                   TimeUnit unit, 
                   BlockingQueue<Runnable> workQueue, 
                   ThreadFactory threadFactory, 
                   RejectedExecutionHandler handler)
    Creates a new ThreadPoolExecutor with the given initial parameters.

int corePoolSize — number of core threads ("regular employees")

int maximumPoolSize — maximum number of threads (regular employees + temporary workers)

Think of the thread pool as a "company" with many employees.
Threads (employees) fall into two categories:
1. Regular employees (core threads), who are allowed to slack off when idle.
2. Temporary workers, who are not allowed to slack off.
At first, assume the company does not have much work: the regular employees can handle all of it, and no temporary workers are needed.
If the company's workload suddenly surges and the regular employees cannot cope even with overtime, a batch of temporary workers (extra threads) is hired.
But a program's workload will not stay high forever. After a while it drops again to where the regular employees can handle it, with capacity to spare (the regular employees can slack off), and the temporary workers are even more idle, so the surplus threads (temporary workers) are retired to some extent.

The overall strategy: keep the regular employees guaranteed, and adjust the temporary workers dynamically.

long keepAliveTime — how long temporary workers may stay idle before being retired

TimeUnit unit — the unit of that time (s, ms, us...)

BlockingQueue<Runnable> workQueue — the task queue.
The thread pool provides a submit method through which programmers register tasks; tasks are added to this queue.
Each worker thread repeatedly tries to take from the queue: if a task is there, the take succeeds; if not, the thread blocks.

ThreadFactory threadFactory — a factory class used to create threads; the pool needs it when spawning workers.

RejectedExecutionHandler handler — describes the thread pool's rejection policy, also a dedicated object. It describes what happens if tasks keep being added after the pool's task queue is full...
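Putting the seven parameters together, a sketch of constructing a pool directly (the specific numbers are illustrative assumptions, not recommendations):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolConfigDemo {
    // Illustrative parameter choices, not tuned for any real workload.
    public static ThreadPoolExecutor build() {
        return new ThreadPoolExecutor(
                2,                    // corePoolSize: 2 "regular employees"
                4,                    // maximumPoolSize: at most 2 extra "temps"
                60, TimeUnit.SECONDS, // idle temps are retired after 60 s
                new ArrayBlockingQueue<>(100),         // bounded task queue
                Executors.defaultThreadFactory(),      // how worker threads are created
                new ThreadPoolExecutor.AbortPolicy()); // reject by throwing
    }

    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = build();
        pool.submit(() -> System.out.println("task running"));
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```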

The following are the four rejection policies provided by the standard library:

  1. AbortPolicy: directly throw a RejectedExecutionException
  2. CallerRunsPolicy: the extra task is executed by whoever tried to add it
  3. DiscardOldestPolicy: directly discard the oldest task
  4. DiscardPolicy: discard the newest task

For example, I already have a pile of tasks to finish, and someone suddenly gives me a new job, but I am already very busy and my task queue is full. I might (1) break down and refuse on the spot, (2) say "I don't have time, do it yourself", (3) drop the oldest item on my list to make room, or (4) simply ignore the new request.

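A small sketch of a rejection actually firing, using the real `ThreadPoolExecutor.AbortPolicy` (the class `RejectDemo` is invented for illustration): one worker, a queue of capacity 1, and a third task with nowhere to go.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectDemo {
    // Returns true if the third task was rejected (AbortPolicy throws).
    public static boolean thirdTaskRejected() {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(1),
                new ThreadPoolExecutor.AbortPolicy()); // policy 1: throw
        try {
            pool.execute(RejectDemo::sleepOneSecond); // occupies the only worker
            pool.execute(() -> {});                   // fills the queue (capacity 1)
            pool.execute(() -> {});                   // no room left -> rejected
            return false;
        } catch (RejectedExecutionException e) {
            return true;
        } finally {
            pool.shutdownNow();
        }
    }

    private static void sleepOneSecond() {
        try { Thread.sleep(1000); } catch (InterruptedException ignored) {}
    }

    public static void main(String[] args) {
        System.out.println("third task rejected? " + thirdTaskRejected());
    }
}
```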


The number of threads in the thread pool:

Although the thread pool has many parameters, the most important one in practice is the first: the number of threads.

—— A program needs concurrency/multithreading to finish some tasks. If it uses a thread pool, what is an appropriate number of threads? [Not only an interview question, but also something to consider at work.]

Many answers found online are incorrect!
A typical online answer: suppose the machine has an N-core CPU, then set the pool size to N (the number of CPU cores), N + 1, 1.2N, 1.5N, 2N... Any answer that names a
specific number is necessarily wrong!

Different programs have different characteristics, and the right thread count differs accordingly.
Consider two extreme cases:

  1. CPU-intensive

    Each thread's task is to hammer the CPU (a long series of arithmetic operations).
    Here, the pool size should not exceed the number of CPU cores.
    Setting it higher is useless: CPU-intensive tasks occupy the CPU the whole time,
    and with more threads than cores there simply isn't enough CPU to go around.

  2. I/O-intensive

    Each thread's work is waiting on I/O (reading/writing disk, reading/writing the network card, waiting for user input) and barely consumes CPU.
    Such threads spend their time blocked and do not participate in CPU scheduling...
    More threads are fine here; the count is no longer limited by the number of CPU cores.
    In theory you could set the thread count to infinity (in practice, of course not).

However, real programs rarely fit either ideal model... A real program usually partly consumes CPU and partly waits for I/O.
How much of each is uncertain...

Determining the thread count in practice: find the appropriate value through performance testing.

For example, if you write a server program that uses a thread pool to process user requests with multiple threads, you can performance-test that server.

Construct some requests and send them to the server. A performance test needs many requests, e.g. 500/1000/2000 per second... Pick a value appropriate to the actual business scenario.

By varying the pool's thread count, observe how fast the program processes tasks and how much CPU it occupies.
As the thread count rises, overall speed increases, but so does CPU usage;
as it falls, overall speed drops, but CPU usage drops too. You need to find a
balance point where the program's speed is acceptable and CPU usage is reasonable.

Because different kinds of programs split a single task differently between CPU time and blocked time,
pulling a number out of thin air is usually unreliable.

The purpose of multithreading is to make the program run faster. Why also keep CPU usage from getting too high?
An online server must keep some headroom, ready to deal with possible emergencies at any moment (for example, a sudden spike in requests)!
If the CPU is nearly exhausted and a traffic peak arrives at that moment, the server may simply go down.
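As a starting point before any measurement, `Runtime.getRuntime().availableProcessors()` reports the core count. The I/O-bound formula below is a widely quoted rule of thumb, not something from the text, and it is no substitute for the performance testing described above:

```java
public class PoolSizeHint {
    public static int cores() {
        return Runtime.getRuntime().availableProcessors();
    }

    public static void main(String[] args) {
        int n = cores();
        System.out.println("CPU cores: " + n);
        // Common starting points only; the real number must come from
        // load-testing your own workload.
        System.out.println("CPU-bound starting point: " + n);
        // A popular heuristic for IO-bound work is n * (1 + wait/compute);
        // e.g. a task that waits 9x longer than it computes:
        System.out.println("IO-bound starting point: " + (n * (1 + 9)));
    }
}
```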


Executors

ThreadPoolExecutor is somewhat cumbersome to use (it exposes more powerful functionality), so a factory class is provided to make it easier.

The standard library offers a simplified interface to thread pools: Executors.
It is essentially a wrapper around ThreadPoolExecutor that supplies some default parameters.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class demo6 {
    public static void main(String[] args) {
        // Create a thread pool with a fixed number of threads; the argument is the thread count
        ExecutorService pool = Executors.newFixedThreadPool(10);
        // Create an auto-scaling thread pool; the thread count grows dynamically with the workload
        Executors.newCachedThreadPool();
        // Create a thread pool with a single thread
        Executors.newSingleThreadExecutor();
        // Create a thread pool with timer functionality, similar to Timer, except
        // tasks are executed by the pool's threads rather than by the scanning thread itself
        Executors.newScheduledThreadPool(10);
    }
}

—— Using Executors:

Construct a pool of 10 threads.
The pool provides an important method, submit, which submits tasks to the pool.

Submit a task described by a Runnable to the thread pool; its run method is then called not by the main thread, but by one of the pool's 10 threads.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class demo6 {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(10);
        pool.submit(new Runnable() {
            @Override
            public void run() {
                System.out.println("hello threadPool!");
            }
        });
    }
}

Running result:

hello threadPool!

After running the program, the main thread finishes but the process does not exit: the threads in the pool are all foreground threads, which prevents the process from ending (the same was true of the Timer earlier).

—— Submit 1000 tasks in a loop:

public class ThreadDemo2 {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(10);
        for (int i = 0; i < 1000; i++) {
            int n = i;
            pool.submit(new Runnable() {
                @Override
                public void run() {
                    System.out.println("hello pool! " + n);
                }
            });
        }
    }
}


Note that 1,000 tasks are placed in the thread pool, and the 10 threads split them roughly evenly: about 100 tasks each. But this is not a strict average; one thread doing a few more or a few fewer is perfectly normal (scheduling is effectively random).
(Each thread takes the next task immediately after finishing one... Since each task takes about the same time, each thread ends up doing about the same number of tasks.)

Put another way, the 1,000 tasks line up in a queue,
and the 10 threads fetch tasks from the queue one by one, executing each as it is fetched and taking the next when done.


Factory pattern

ExecutorService pool = Executors.newFixedThreadPool(10);
Here "new" is part of the method name, not the new keyword.
This uses a static method of a class to construct an object directly (effectively hiding the new operation behind the method).

A method like this is called a "factory method",
and the class providing it is called a "factory class". This code uses the "factory pattern", which is a design pattern.

Factory pattern in one sentence: use an ordinary method, instead of a constructor, to create objects.
Why replace the constructor? Because constructors have a pitfall!
The pitfall: constructing objects in only one way is easy to handle,
but constructing objects differently for different situations is awkward...

—— An example:
A class that constructs a point on the plane in multiple ways:

class Point {
    // Construct a point from Cartesian coordinates
    public Point(double x, double y) {
    }

    // Construct a point from polar coordinates
    public Point(double r, double a) {
    }
}

Obviously this code has a problem!!! Normally, multiple construction paths are provided through "overloading",
and overloading requires the same method name but a different number or different types of parameters.

The two constructors above have the same name, the same number of parameters, and the same parameter types, so they do not constitute an overload, and the code does not compile in Java.

To solve this problem, use the factory pattern:

class PointFactory {
    public static Point makePointByXY(double x, double y) {
        return new Point(x, y);
    }

    public static Point makePointByRA(double r, double a) {
        // convert polar coordinates to Cartesian, then construct
        return new Point(r * Math.cos(a), r * Math.sin(a));
    }
}
Point p = PointFactory.makePointByXY(10, 20);

Ordinary methods place no restrictions on the method name,
so there can be many construction paths, each with its own name; whether the parameters differ no longer matters.

In many cases, design patterns exist to route around pitfalls in a programming language's syntax.
Different languages have different grammatical rules, so the usable design patterns may differ between languages, and some patterns are already baked into a language's syntax...
The design patterns usually discussed are based mainly on languages like C++/Java/C#; they may not carry over to other languages.

The factory pattern, for instance, has little value in Python: Python's construction is not as restrictive as C++/Java's, and different variants can be distinguished directly inside the constructor by other means.

- Why i cannot be captured directly:


Lambda variable capture

The run method here belongs to Runnable, and it does not execute immediately.
It executes at some point in the future (after the task has waited its turn in the thread pool's queue, a worker thread picks it up and runs it).

The i in the for loop is a local variable of the main thread (on the main thread's stack). It is destroyed when that block of code in the main thread finishes.
The task may not even have left the thread pool's queue by the time i is about to be destroyed.


Because of this difference in scope, i would already be destroyed by the time run executes later.
So Java uses "variable capture": the run method copies the main thread's i onto its own stack.
(When the lambda is defined, the current value of i is quietly recorded; when run executes later, a local variable also named i is created and that recorded value is assigned to it...)

In Java, there are some additional requirements for variable capture.
Before JDK 1.8, only variables marked final could be captured. This proved too cumbersome, so starting with 1.8 the rule was relaxed a little: the final keyword is no longer required, as long as the variable is never modified in the code, it can also be captured (it is "effectively final").

Here, i is modified (by i++), so it cannot be captured,
while n is never modified; even without final, it can be captured.
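A minimal sketch of this rule (class and variable names are illustrative): copying i into a never-reassigned local n makes the value capturable, while capturing i itself would be a compile error because the loop modifies it.

```java
import java.util.ArrayList;
import java.util.List;

public class CaptureDemo {
    static List<String> run() throws InterruptedException {
        List<String> results = new ArrayList<>();
        for (int i = 0; i < 3; i++) {
            int n = i; // effectively final copy; capturing i itself would not compile (i++ modifies it)
            Thread t = new Thread(() -> {
                synchronized (results) {
                    results.add("hello " + n); // n's value was recorded when the lambda was created
                }
            });
            t.start();
            t.join(); // join each thread immediately so the demo output order is deterministic
        }
        return results;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());
    }
}
```

Uncommenting a line like `n = 0;` anywhere after the declaration would make n no longer effectively final, and the lambda would stop compiling.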

C++ and JavaScript have similar variable-capture syntax, but without the restriction above...


4.3. Implement a thread pool

Inside the thread pool we need to:

  1. Describe a task (use Runnable directly)
  2. Organize the tasks (use a BlockingQueue directly)
  3. Describe the worker threads
  4. Organize those threads
  5. Provide a way to add tasks to the thread pool

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// A thread pool with a fixed number of threads
class MyThreadPool {
    // 1. Describe a task. Unlike the timer, no "time" is involved,
    //    so Runnable is used directly; no extra class is needed.
    // 2. Use a data structure (a blocking queue) to organize the tasks.
    private BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();

    // Create the threads in the constructor (n is the number of threads)
    public MyThreadPool(int n) {
        for (int i = 0; i < n; i++) {
            Thread t = new Thread(() -> {
                // Pull tasks from the queue in a loop
                while (true) {
                    try {
                        // If the queue is empty, take() blocks;
                        // otherwise it returns the next task.
                        Runnable runnable = queue.take(); // get a task
                        runnable.run();                   // run the task
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }
            });
            t.start();
        }
    }

    // A method that lets the programmer submit a task to the pool;
    // the task is then executed by one of these 10 threads
    public void submit(Runnable runnable) {
        try {
            queue.put(runnable);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}

public class TestDemo {
    public static void main(String[] args) {
        MyThreadPool pool = new MyThreadPool(10);
        for (int i = 0; i < 1000; i++) {
            int n = i; // effectively final copy, so the anonymous class can capture it
            pool.submit(new Runnable() {
                @Override
                public void run() {
                    System.out.println("hello " + n);
                }
            });
        }
        // Note: the worker threads loop forever and are non-daemon threads,
        // so this demo process will not exit on its own.
    }
}
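For comparison, the standard library offers the same fixed-size behavior through Executors.newFixedThreadPool, which also adds shutdown support that MyThreadPool above lacks. A minimal sketch (the counter is just for demonstration, in place of the println tasks):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class StandardPoolDemo {
    static int runTasks() throws InterruptedException {
        // Fixed-size pool of 10 threads, like MyThreadPool(10)
        ExecutorService pool = Executors.newFixedThreadPool(10);
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < 1000; i++) {
            pool.submit(() -> { done.incrementAndGet(); }); // each task just counts itself
        }
        pool.shutdown();                             // stop accepting new tasks
        pool.awaitTermination(10, TimeUnit.SECONDS); // wait for queued tasks to drain
        return done.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runTasks()); // 1000
    }
}
```

Unlike the hand-written pool, this version exits cleanly: shutdown() lets the worker threads terminate once the queue is empty.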


Origin blog.csdn.net/qq_56884023/article/details/131942901