Java Concurrency Principles, Part 2 (AQS, ReentrantLock, thread pools)

1. AQS:

1.1 What is AQS?

AQS is the AbstractQueuedSynchronizer, the abstract queued synchronizer; it is essentially an abstract class.

AQS has a core attribute, state, plus a doubly linked list and a singly linked list.

First, state is declared volatile and modified via CAS, which together guarantee the three concurrency properties: atomicity, visibility, and ordering.

Second, AQS maintains a doubly linked list (the sync queue) composed of Node objects.

Finally, the Condition inner class maintains a singly linked list (the condition queue), also composed of Node objects.

AQS is the base class for a large number of tools under JUC. Many of them are implemented on top of it: ReentrantLock, CountDownLatch, Semaphore, the thread pool, and so on.


What state is: an int value representing the synchronization state. What that state actually means is up to the subclass to define.

What Condition and its singly linked list are: synchronized provides the wait and notify methods, and a Lock needs an equivalent mechanism. Lock implements the await and signal methods through the Condition inside AQS (the counterparts of synchronized's wait and notify).


When a thread holding a synchronized lock calls wait, it is placed into the WaitSet to wait to be woken up.

When a thread holding a Lock calls await, it is wrapped as a Node object, placed into the Condition's singly linked list, and waits to be woken up.


What Condition does: it wraps the lock-holding thread as a Node, appends it to the Condition's singly linked list, and parks the thread. When the thread is signalled, its Node is transferred from the Condition queue to the AQS doubly linked list, where it waits to reacquire the lock.
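To make the wait/notify parallel concrete, here is a minimal sketch: a one-slot mailbox built on ReentrantLock and Condition. All the names here (MailboxDemo, put, take) are invented for illustration.

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// A one-slot mailbox: Lock + Condition doing what synchronized + wait/notify do.
public class MailboxDemo {
    private final Lock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();
    private String message; // null means empty

    public void put(String msg) {
        lock.lock();
        try {
            message = msg;
            notEmpty.signal(); // move one waiter from the condition queue to the sync queue
        } finally {
            lock.unlock();
        }
    }

    public String take() throws InterruptedException {
        lock.lock();
        try {
            while (message == null) { // re-check the condition after every wakeup
                notEmpty.await();     // enqueue on the condition's singly linked list and park
            }
            String msg = message;
            message = null;
            return msg;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws Exception {
        MailboxDemo box = new MailboxDemo();
        Thread consumer = new Thread(() -> {
            try {
                System.out.println(box.take()); // prints "hello"
            } catch (InterruptedException ignored) { }
        });
        consumer.start();
        Thread.sleep(100); // give the consumer time to block in await first
        box.put("hello");
        consumer.join();
    }
}
```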

1.2 Why does AQS traverse from back to front when waking up a thread?

If a thread fails to acquire the resource, it is wrapped as a Node object, appended to the AQS doubly linked list, and possibly parked.

When waking up a thread, the node right after the head is normally woken first. But if that node has been cancelled, the logic of AQS is to traverse from the tail toward the head to find the valid node closest to the head. Why backwards?

To explain this problem clearly, you need to understand how a Node object is added to the doubly linked list.

In the addWaiter method, enqueuing takes three steps: first point the new Node's prev at the current tail, then CAS tail to the new Node, and finally point the old tail's next at the new Node.

If only the first two steps have executed, the Node is already in the AQS queue, but it cannot yet be reached by following next pointers from the front; only the prev chain is guaranteed to be complete. That is why the wake-up scan walks backwards from the tail.

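The three steps can be sketched with a simplified enqueue. This illustrates the ordering only; it is not the real AQS code, and all the names (EnqueueSketch, firstWaiter) are invented.

```java
import java.util.concurrent.atomic.AtomicReference;

// Simplified sketch of an addWaiter-style enqueue.
// Steps: 1) node.prev = oldTail; 2) CAS tail to node; 3) oldTail.next = node.
// Between steps 2 and 3 the node is reachable via prev but NOT yet via next,
// which is why the scan for the first waiter goes tail-to-head.
public class EnqueueSketch {
    static class Node {
        volatile Node prev, next;
        final String name;
        Node(String name) { this.name = name; }
    }

    final Node head = new Node("head"); // dummy head
    final AtomicReference<Node> tail = new AtomicReference<>(head);

    void enqueue(Node node) {
        for (;;) {
            Node oldTail = tail.get();
            node.prev = oldTail;                     // step 1
            if (tail.compareAndSet(oldTail, node)) { // step 2
                oldTail.next = node;                 // step 3
                return;
            }
        }
    }

    // Tail-to-head scan via prev pointers: always sees a fully linked chain.
    Node firstWaiter() {
        Node candidate = null;
        for (Node n = tail.get(); n != null && n != head; n = n.prev) {
            candidate = n;
        }
        return candidate;
    }

    public static void main(String[] args) {
        EnqueueSketch q = new EnqueueSketch();
        Node a = new Node("a"), b = new Node("b");
        q.enqueue(a);
        q.enqueue(b);
        System.out.println(q.firstWaiter().name); // a
    }
}
```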

1.3 Why does AQS use a two-way linked list (why not use a one-way linked list)?

Because AQS supports cancelling a node, and a cancelled node must be unlinked from the doubly linked list.

Unlinking must preserve the integrity of the list:

  • The prev node's next pointer must point to the next node.
  • The next node's prev pointer must point to the prev node.

With a doubly linked list, both neighbors are at hand and the operation is direct.

With a singly linked list, you would have to traverse the entire list just to find the predecessor, which wastes resources.

1.4 Why does AQS have a virtual head node

One reason is convenience: a sentinel (dummy) head node makes the pointer operations simpler.

The other reason: inside AQS, each Node carries a waitStatus (ws), and that status describes not only the node itself but also its successor:

  • 1 (CANCELLED): the current node has been cancelled.
  • 0: the default state; nothing has happened yet.
  • -1 (SIGNAL): the current node's successor is (or is about to be) parked and needs to be woken up.
  • -2 (CONDITION): the current node is in a Condition queue (await parked the thread).
  • -3 (PROPAGATE): a shared lock is held; on wake-up, subsequent nodes must also be woken.

So a Node's ws carries a lot of information: besides the node's own state, it also maintains state on behalf of its successor.

Without the virtual head node, a single node could not record both its own state and its successor's state at the same time.

Also, when releasing the lock resource, AQS decides whether to wake the successor by checking whether the head node's status is -1.

If it is -1, wake the successor normally.

If it is not -1, no wake-up is needed at all. This skips a potential traversal and improves performance.

1.5 The underlying implementation principle of ReentrantLock

ReentrantLock is implemented based on AQS.

When a thread locks via ReentrantLock, it tries to CAS the state attribute from 0 to 1; if the CAS succeeds, the lock resource has been acquired.

If the CAS fails, the thread is added to the AQS doubly linked list to queue up (and may be parked), waiting to acquire the lock.

If the thread holding the lock calls a Condition's await method, it is wrapped as a Node, added to the Condition's singly linked list, and waits to be signalled before competing for the lock again.

In Java, locks are reentrant almost everywhere; the Worker lock inside the thread pool (covered later) is the notable exception.
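A quick illustration of reentrancy and the state counter (the class name is made up; the API calls are standard ReentrantLock):

```java
import java.util.concurrent.locks.ReentrantLock;

// The same thread can acquire the lock again: state (the hold count)
// goes 0 -> 1 -> 2, and each unlock decrements it back down.
public class ReentrantDemo {
    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();                             // CAS state 0 -> 1
        lock.lock();                             // same owner: state 1 -> 2, no contention
        System.out.println(lock.getHoldCount()); // 2
        lock.unlock();
        lock.unlock();                           // state back to 0: lock released
        System.out.println(lock.getHoldCount()); // 0
    }
}
```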

1.6 The difference between fair lock and unfair lock of ReentrantLock

  • Fair and unfair locks differ in exactly one place inside the lock and tryAcquire methods; everything else is the same.
    • Unfair lock(): immediately try to CAS state from 0 to 1; if it succeeds, take the lock and go; if it fails, fall through to tryAcquire.
    • Fair lock(): go straight to tryAcquire.
    • Unfair tryAcquire: if no thread currently holds the lock, try once more to CAS state from 0 to 1; on success, take the lock and go.
    • Fair tryAcquire: if no thread currently holds the lock, first check whether a queue exists.
      • If there is no queue, try to CAS state from 0 to 1 directly.
      • If there is a queue and I am not first in it, don't rush; keep waiting.
      • If there is a queue and I am first, try to CAS state from 0 to 1 directly.
    • If the lock is not obtained, the follow-up logic of fair and unfair locks is identical: once queued, there is no more queue jumping.

A real-life analogy: an unfair lock gets two chances to grab the lock outright; if it succeeds it happily leaves, and if it fails it quietly joins the queue.

  • Someone arrives at a testing station
    • Fair lock: look first; if there is a queue, join the end of it.
    • Unfair lock: regardless of the situation, walk straight up to the counter and try to get served.
      • Is someone already being served?
        • No one is being served: step up and try! If it works, get served and leave.
        • Someone is being served: give up the shortcut and join the queue.

1.7 How ReentrantReadWriteLock implements the read-write lock

For a read-heavy, write-light workload, a plain mutex performs poorly, because concurrent reads pose no thread-safety problem at all.

The solution is a read-write lock.

ReentrantReadWriteLock is also built on AQS, and both locks are encoded in the single state value.

How can one int represent two locks, a write lock and a read lock?

An int occupies 32 bits.

When the write lock is acquired, the low 16 bits of state are modified via CAS.

When the read lock is acquired, the high 16 bits of state are modified via CAS.

Write-lock reentrancy is tracked directly in the low 16 bits of state, because the write lock is exclusive.

Read-lock reentrancy cannot be tracked in the high 16 bits alone, because the read lock is shared and may be held by many threads at once. So each thread's own reentry count is kept in a ThreadLocal, while the high 16 bits of state count the total read holds.
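The bit arithmetic can be shown directly. The shift and mask below follow the constants in the JDK's ReentrantReadWriteLock source, though the demo class itself is made up; the combined read+write value is only for illustrating the arithmetic (in reality the two locks are not held together, except during lock downgrading).

```java
// How one int encodes two locks (constant names follow the JDK source).
public class RwStateDemo {
    static final int SHARED_SHIFT = 16;
    static final int SHARED_UNIT = 1 << SHARED_SHIFT;         // +1 read hold
    static final int EXCLUSIVE_MASK = (1 << SHARED_SHIFT) - 1;

    static int sharedCount(int c)    { return c >>> SHARED_SHIFT; } // read holds (high 16 bits)
    static int exclusiveCount(int c) { return c & EXCLUSIVE_MASK; } // write holds (low 16 bits)

    public static void main(String[] args) {
        int state = 0;
        state += SHARED_UNIT; // one read acquisition
        state += SHARED_UNIT; // another read acquisition
        state += 1;           // a write hold in the low bits (arithmetic illustration only)
        System.out.println(sharedCount(state));    // 2
        System.out.println(exclusiveCount(state)); // 1
    }
}
```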

2. High-frequency blocking queue questions:

2.1 Blocking queue

ArrayBlockingQueue, LinkedBlockingQueue, PriorityBlockingQueue

ArrayBlockingQueue: implemented on top of an array; remember to specify the capacity when constructing it (it is bounded).

LinkedBlockingQueue: implemented on top of a linked list. It can be treated as an unbounded queue (the default capacity is Integer.MAX_VALUE), though a capacity can be set.

PriorityBlockingQueue: a binary heap backed by an array; it can be treated as unbounded because the array grows as needed.

ArrayBlockingQueue and LinkedBlockingQueue are the two most commonly used blocking queues for the ThreadPoolExecutor thread pool.

The blocking queue used by the ScheduledThreadPoolExecutor scheduled-task thread pool shares the same underlying implementation idea as PriorityBlockingQueue (the actual class is DelayedWorkQueue).

2.2 Spurious wakeup

Spurious wakeups are guarded against in the blocking queue source code.

For example, when consumer 1 consumes data, it will first determine whether there are elements in the queue. If the number of elements is 0, consumer 1 will hang up.

The emptiness check here causes a problem if it is written as an if statement.

If the producer adds a piece of data, it will wake up consumer 1.

But if consumer 1 does not grab the lock first, consumer 2 may acquire it and take the data away.

When consumer 1 finally reacquires the lock, the queue is empty again; with an if check it would proceed anyway, causing a logic error.

The solution is to make the element-count check a while loop, so the condition is re-checked after every wakeup.
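A minimal sketch of the correct pattern, using synchronized/wait/notify (the class name is invented):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Why the check must be a while, not an if: after a consumer is woken,
// another consumer may already have taken the element, so the condition
// has to be re-verified before proceeding.
public class BoundedBuffer {
    private final Deque<Integer> items = new ArrayDeque<>();
    private final int capacity = 10;

    public synchronized void put(int x) throws InterruptedException {
        while (items.size() == capacity) { // while, NOT if
            wait();
        }
        items.addLast(x);
        notifyAll();
    }

    public synchronized int take() throws InterruptedException {
        while (items.isEmpty()) { // re-check after every wakeup
            wait();
        }
        int x = items.pollFirst();
        notifyAll();
        return x;
    }
}
```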

3. Thread pool

3.1 The 7 parameters of the thread pool (miss this one and you can go home and wait for the call)

corePoolSize (core thread count), maximumPoolSize (maximum thread count), keepAliveTime (maximum idle time), TimeUnit (its unit), the blocking queue, the thread factory, and the rejection policy.
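Spelled out in a constructor call (the parameter values here are arbitrary examples):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// The seven parameters, one per line:
public class PoolParamsDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                // corePoolSize: core thread count
                5,                                // maximumPoolSize: max thread count
                60L,                              // keepAliveTime: max idle time for non-core threads
                TimeUnit.SECONDS,                 // unit: time unit for keepAliveTime
                new ArrayBlockingQueue<>(2),      // workQueue: the blocking queue
                Executors.defaultThreadFactory(), // threadFactory
                new ThreadPoolExecutor.AbortPolicy()); // handler: rejection policy
        System.out.println(pool.getCorePoolSize()); // 2
        pool.shutdown();
    }
}
```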

3.2 What is the state of the thread pool and how is it recorded?

The thread pool is not always in the RUNNING state! It has 5 states: RUNNING, SHUTDOWN, STOP, TIDYING, and TERMINATED.

The state of the thread pool is recorded in the ctl attribute, which is essentially an int (an AtomicInteger).

The high 3 bits of ctl record the state of the thread pool.

The low 29 bits record the number of worker threads, so even if you specify Integer.MAX_VALUE as the maximum, the real cap is 2^29 − 1.
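The bit layout can be reproduced exactly; the constants and helper methods below mirror the ones in ThreadPoolExecutor's source (the demo class itself is made up):

```java
// ctl: high 3 bits = runState, low 29 bits = workerCount.
public class CtlDemo {
    static final int COUNT_BITS = Integer.SIZE - 3;     // 29
    static final int CAPACITY = (1 << COUNT_BITS) - 1;  // max worker count: 2^29 - 1
    static final int RUNNING = -1 << COUNT_BITS;        // RUNNING lives in the high 3 bits

    static int runStateOf(int c)     { return c & ~CAPACITY; }
    static int workerCountOf(int c)  { return c & CAPACITY; }
    static int ctlOf(int rs, int wc) { return rs | wc; }

    public static void main(String[] args) {
        int ctl = ctlOf(RUNNING, 5);                    // a RUNNING pool with 5 workers
        System.out.println(runStateOf(ctl) == RUNNING); // true
        System.out.println(workerCountOf(ctl));         // 5
        System.out.println(CAPACITY);                   // 536870911
    }
}
```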

3.3 Common rejection policies of the thread pool (miss this one and you can go home and wait for the call)

AbortPolicy: throws a RejectedExecutionException (the default).


CallerRunsPolicy: whoever submits the task runs it, turning asynchronous execution into synchronous.


DiscardPolicy: silently drops the task.


DiscardOldestPolicy: discards the oldest task in the queue, then retries handing the current task to the thread pool.


Generally, when the built-in policies cannot satisfy the business, define a custom rejection policy for the thread pool by implementing the RejectedExecutionHandler interface.

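A minimal hypothetical example: a handler that counts and logs rejections instead of throwing (the class name and behavior are made up for illustration).

```java
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.atomic.AtomicInteger;

// A custom rejection policy only needs this one method.
public class CountingRejectionHandler implements RejectedExecutionHandler {
    private final AtomicInteger rejected = new AtomicInteger();

    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        rejected.incrementAndGet();
        System.err.println("Task rejected, total so far: " + rejected.get());
        // here you could persist the task, push it to a message queue, retry later, etc.
    }

    public int getRejectedCount() { return rejected.get(); }

    public static void main(String[] args) {
        CountingRejectionHandler handler = new CountingRejectionHandler();
        handler.rejectedExecution(() -> {}, null); // simulate a rejection directly
        System.out.println(handler.getRejectedCount()); // 1
    }
}
```

Pass an instance of it as the seventh constructor argument of ThreadPoolExecutor.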

3.4 Execution flow of the thread pool (miss this one and you can go home and wait for the call)

Core threads are not created when the pool is constructed; creation is lazy, and core threads are built only as tasks are submitted.

Example: corePoolSize = 2, maximumPoolSize = 5, queue capacity = 2.

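The routing logic of execute can be sketched as a pure decision function. This is a simplification for illustration, not the real implementation (which also handles races and state changes):

```java
// 1) below core -> new core worker; 2) queue not full -> enqueue;
// 3) below max -> new non-core worker; 4) otherwise -> reject.
public class ExecuteFlowSketch {
    static String route(int workers, int queued, int core, int max, int queueCap) {
        if (workers < core)    return "create core worker";
        if (queued < queueCap) return "enqueue task";
        if (workers < max)     return "create non-core worker";
        return "reject";
    }

    public static void main(String[] args) {
        // core = 2, max = 5, queue capacity = 2 (the example above)
        System.out.println(route(0, 0, 2, 5, 2)); // create core worker
        System.out.println(route(2, 0, 2, 5, 2)); // enqueue task
        System.out.println(route(2, 2, 2, 5, 2)); // create non-core worker
        System.out.println(route(5, 2, 2, 5, 2)); // reject
    }
}
```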

3.5 Why does the thread pool add a non-core thread with an empty task?


To avoid a situation where tasks sit in the work queue but no worker thread is alive to process them.

The pool's core thread count may be set to 0; a submitted task would then go into the blocking queue with no worker thread to run it — hardly ideal.

Nor are core threads guaranteed to survive: the thread pool has a property which, when set to true, lets core threads be reclaimed as well.

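The property in question is allowCoreThreadTimeOut; a short demonstration (pool parameters are arbitrary):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Once allowCoreThreadTimeOut is enabled, idle core threads are also
// reclaimed after keepAliveTime, just like non-core threads.
public class CoreTimeoutDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 30, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        System.out.println(pool.allowsCoreThreadTimeOut()); // false by default
        pool.allowCoreThreadTimeOut(true);                  // now core threads can die too
        System.out.println(pool.allowsCoreThreadTimeOut()); // true
        pool.shutdown();
    }
}
```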

3.6 When there is no task, what are the worker threads in the thread pool doing?

The threads park. By default, core threads sit in the WAITING state and non-core threads in TIMED_WAITING.

A core thread, by default, calls take() on the blocking queue and blocks there until a task arrives.

A non-core thread, by default, calls poll() on the blocking queue with the maximum idle time as the timeout. If no task arrives in time, the thread exits; if one does, it goes to work normally.

3.7 What problems will be caused by the exception of the worker thread?

Will it throw the exception? Does it affect other threads? Does the worker thread die?

If the task is executed by the execute method, the worker thread will throw an exception.

If the task is a FutureTask submitted via submit, the worker thread captures the exception and stores it inside the FutureTask; the exception can then be retrieved through the FutureTask's get method.

An abnormal worker thread will not affect other worker threads.

An exception escaping the task in runWorker propagates out of the run method, ending it abnormally; and when run ends, the thread is dead!

If the task was submitted via submit and get is never called, the exception is silently swallowed and nothing visible happens.
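A small demonstration of the submit behavior (the pool setup is arbitrary; the class name is invented):

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// With submit, the exception is captured inside the FutureTask and only
// surfaces when get() is called, wrapped in an ExecutionException.
public class SubmitExceptionDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        Future<Integer> future = pool.submit(() -> {
            if (true) throw new IllegalStateException("boom");
            return 1;
        });
        try {
            future.get(); // nothing was printed until now
        } catch (ExecutionException e) {
            System.out.println(e.getCause().getMessage()); // boom
        }
        pool.shutdown();
    }
}
```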

3.8 What is the purpose of worker threads inheriting AQS?

The essence of the worker thread is the Worker object

Inheriting AQS is related to shutdown and shutdownNow.

With shutdown, only idle worker threads are interrupted, and whether a worker may be interrupted is judged from the state value of the AQS that Worker itself implements.

If the state of the worker thread is 0, it means it is idle and can be interrupted. If it is 1, it means it is working.

With shutdownNow, all worker threads are forcibly interrupted outright.

3.9 How to set the core parameters?

The purpose of the thread pool is to make full use of CPU resources and improve overall system performance.

Different businesses within a system call for different thread pool settings.

For CPU-intensive tasks, a core thread count of CPU cores + 1 is generally enough to exploit the CPU fully.

For IO-intensive tasks the picture varies: the IO may take 1 ms, 1 s, or even a minute. So the core thread count must be determined by load testing while observing CPU utilization; driving the CPU to roughly 70–80% is generally sufficient. Thread pool parameters should be settled through pressure testing and repeated adjustment.
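A common starting-point heuristic, sketched below. The IO formula (cores × (1 + wait/compute)) is a widely used rule of thumb, not something stated in this article, and the numbers must still be validated by pressure testing:

```java
// Starting points only — pressure-test and adjust:
// CPU-bound: threads = cores + 1
// IO-bound:  threads = cores * (1 + waitTime / computeTime)
public class PoolSizing {
    static int cpuBound(int cores) {
        return cores + 1;
    }

    static int ioBound(int cores, double waitTime, double computeTime) {
        return (int) (cores * (1 + waitTime / computeTime));
    }

    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println(cpuBound(cores));
        // e.g. each task waits 90 ms on IO and computes for 10 ms:
        System.out.println(ioBound(cores, 90, 10));
    }
}
```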

For example, a business needs to query three services


Origin blog.csdn.net/lx9876lx/article/details/129112863