Java Interview Classics 02 - Concurrency: the six thread states, thread pools, wait & sleep, Lock & synchronized, volatile, pessimistic & optimistic locks, Hashtable, ThreadLocal

Concurrency

1. Thread state

Requirements

  • Master the six states of a Java thread
  • Master Java thread state transitions
  • Understand the difference between the five-state and six-state models

Six states and transitions


The six states are:

  • NEW
    • The thread object has been created, but start() has not yet been called; the thread is in the NEW state
    • At this point it is not yet associated with an underlying operating-system thread
  • RUNNABLE
    • After start() is called, the thread moves from NEW to RUNNABLE
    • It is now associated with an underlying OS thread and is scheduled for execution by the operating system
  • TERMINATED
    • The thread's code has finished executing; the thread moves from RUNNABLE to TERMINATED
    • At this point the association with the underlying OS thread is released
  • BLOCKED
    • When lock acquisition fails, the thread moves from RUNNABLE into the Monitor's blocked queue and consumes no CPU time
    • When the lock-holding thread releases the lock, it wakes blocked threads in that queue according to certain rules; an awakened thread returns to RUNNABLE
  • WAITING
    • When the thread acquires the lock but finds its condition unsatisfied and calls wait(), it releases the lock, moves from RUNNABLE into the Monitor's wait set, and likewise consumes no CPU time
    • When another lock-holding thread calls notify() or notifyAll(), waiting threads in the wait set are woken according to certain rules and return to RUNNABLE
  • TIMED_WAITING
    • When the thread acquires the lock but finds its condition unsatisfied and calls wait(long), it releases the lock, moves from RUNNABLE into the Monitor's wait set for a time-limited wait, and likewise consumes no CPU time
    • When another lock-holding thread calls notify() or notifyAll(), timed-waiting threads in the wait set are woken according to certain rules, return to RUNNABLE, and compete for the lock again
    • If the wait times out, the thread likewise returns from TIMED_WAITING to RUNNABLE and competes for the lock again
    • Separately, calling sleep(long) also moves a thread from RUNNABLE to TIMED_WAITING, but this has nothing to do with any Monitor and needs no explicit wake-up: when the timeout expires, the thread naturally returns to RUNNABLE

Other cases (awareness is enough)

  • The interrupt() method can interrupt WAITING and TIMED_WAITING threads and restore them to RUNNABLE
  • park, unpark, and similar methods (LockSupport) can also make threads wait and wake them up
  • Code demo:
public class TestThreadState {

    // the single lock object
    static final Object LOCK = new Object();

    public static void main(String[] args) {
        testNewRunnableTerminated();
    }

    private static void testNewRunnableTerminated() {
        // create the thread with a Runnable lambda
        Thread t1 = new Thread(() -> {
            System.out.println("running..."); // 3
        }, "t1");

        System.out.println("state: " + t1.getState()); // 1
        t1.start();
        System.out.println("state: " + t1.getState()); // 2

        System.out.println("state: " + t1.getState()); // 4
    }
}

Code demo: the effect is best observed in debug mode, because the debugger can control the execution order of the threads. Note that for multithreaded code the breakpoints should use Thread (suspend-thread) mode.

Click to switch between threads and see exactly where each one is executing.

Switch to t1 and resume it so that t1 finishes before the main thread; the main thread then prints the final TERMINATED state.

In fact, the variant below makes it very likely that t1 completes before the main thread (the main thread sleeps for one second first, so there is no need to go through all that debugging):

public class TestThreadState {

    // the single lock object
    static final Object LOCK = new Object();

    public static void main(String[] args) {
        testNewRunnableTerminated();
    }

    private static void testNewRunnableTerminated() {
        // create the thread with a Runnable lambda
        Thread t1 = new Thread(() -> {
            System.out.println("running..."); // 3
        }, "t1");

        System.out.println("state: " + t1.getState()); // 1
        t1.start();
        System.out.println("state: " + t1.getState()); // 2

        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        System.out.println("state: " + t1.getState()); // 4
    }
}


  • Code demo 2: BLOCKED
private static void testBlocked() throws InterruptedException {
    Thread t2 = new Thread(() -> {
        System.out.println("before sync");
        synchronized (LOCK) { // 3 blocked here: the main thread holds LOCK
            System.out.println("in sync"); // 6
        }
    }, "t2");
    t2.start();
    System.out.println("state: " + t2.getState()); // 1  RUNNABLE
    synchronized (LOCK) { // 2 the main thread takes the lock first
        System.out.println("state: " + t2.getState()); // 4  BLOCKED
    } // 5 exits the block and releases the lock
    System.out.println("state: " + t2.getState()); // 7  RUNNABLE
}

In debug mode, drive the execution order as marked by the numbered comments; you can see t2 go from RUNNABLE to BLOCKED, and then from BLOCKED back to RUNNABLE.

  • Code demo 3: WAITING
private static void testWaiting() {
    Thread t2 = new Thread(() -> {
        synchronized (LOCK) { // note: wait() requires holding the lock too
            System.out.println("before waiting"); // 2
            try {
                LOCK.wait(); // 3 wait() releases the lock at once, so the main thread can enter its synchronized block below
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }, "t2");

    t2.start();
    System.out.println("state: " + t2.getState()); // 1  RUNNABLE
    synchronized (LOCK) {
        System.out.println("state: " + t2.getState()); // 4  WAITING
        LOCK.notify(); // 5
        System.out.println("state: " + t2.getState()); // 6  BLOCKED (the main thread still holds the lock)
    } // as soon as this block ends and releases the lock, t2 goes BLOCKED -> RUNNABLE
    System.out.println("state: " + t2.getState()); // 7  RUNNABLE
}
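
The original demos stop at WAITING. A hedged sketch in the same style (the method name testTimedWaiting and thread name t3 are my own) shows TIMED_WAITING via sleep(long), which involves no Monitor at all:

private static void testTimedWaiting() throws InterruptedException {
    Thread t3 = new Thread(() -> {
        try {
            Thread.sleep(2000); // no Monitor involved; wakes by itself when the timeout expires
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }, "t3");
    t3.start();
    Thread.sleep(100); // give t3 time to actually enter sleep
    System.out.println("state: " + t3.getState()); // TIMED_WAITING
    t3.join();
    System.out.println("state: " + t3.getState()); // TERMINATED
}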


Five states

The five-state model comes from the operating-system level of description.

  • Running: has been allocated CPU time and is actually executing the thread's code
  • Ready: eligible for CPU time, but not yet its turn
  • Blocked: not eligible for CPU time
    • Covers the blocked, waiting, and timed-waiting states of the Java model
    • It additionally covers blocking I/O: when a thread performs a blocking I/O call, the actual work is done by the I/O device, and the thread can do nothing but wait
  • New and terminated: similar to the Java states of the same name; no further detail needed

Java's RUNNABLE state covers the OS ready, running, and blocked-on-I/O states (Java code cannot distinguish a thread blocked on I/O from one that is executing, so it treats it as running).

2. Thread pool

Requirements

  • Master the seven core parameters of the thread pool

Seven parameters

  1. corePoolSize - core thread count: the maximum number of threads kept in the pool (may be 0, in which case no thread is retained after its task finishes)
  2. maximumPoolSize - maximum thread count = core threads + maximum number of emergency (non-core) threads
  3. keepAliveTime - survival time: how long an idle emergency thread survives; if no new task arrives within this time, the thread's resources are released
  4. unit - time unit for the emergency threads' survival time, e.g. seconds or milliseconds
  5. workQueue - when no core thread is idle, new tasks queue here; when the queue is full, emergency threads are created to run tasks
  6. threadFactory - thread factory: customizes thread creation, e.g. setting the thread name or daemon status
  7. handler - rejection policy: triggered when all threads (including emergency threads) are busy and the workQueue is full
    1. java.util.concurrent.ThreadPoolExecutor.AbortPolicy: throw an exception
    2. java.util.concurrent.ThreadPoolExecutor.CallerRunsPolicy: the submitting thread runs the task itself, equivalent to calling t.run() instead of t.start(); the pool is full, so rather than handing the task off, e.g. a task submitted from main simply runs on the main thread
    3. java.util.concurrent.ThreadPoolExecutor.DiscardPolicy: discard the task (no exception thrown, never executed, silently dropped)
    4. java.util.concurrent.ThreadPoolExecutor.DiscardOldestPolicy: discard the oldest queued task (drop the task at the head of the queue, then enqueue the new one at the tail)

Core thread: stays in the pool after finishing its task.
Emergency thread: does not stay in the pool after finishing its task.
More on emergency threads: core threads have an upper limit, and so does the task queue. When things are busy and the blocking queue (also called the task queue, workQueue) is also full, emergency threads are created. After finishing its assigned task, an emergency thread keeps taking tasks from the queue until the queue is drained (of course, the emergency thread's own task may itself be slow, in which case it can barely take care of itself, let alone help drain the queue).
Rejection policy: core threads full, blocking queue full, emergency threads also full - even the emergency crew cannot save you, so there is nothing left to do but reject.
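
A minimal sketch (class name, sizes, and thread name are my own) showing how the seven parameters map onto the ThreadPoolExecutor constructor:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class TestPoolParams {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                  // 1. corePoolSize: threads kept in the pool
                4,                                  // 2. maximumPoolSize: core + at most 2 emergency threads
                60, TimeUnit.SECONDS,               // 3+4. keepAliveTime + unit: idle emergency threads die after 60 s
                new ArrayBlockingQueue<>(3),        // 5. workQueue: holds up to 3 waiting tasks
                r -> new Thread(r, "my-worker"),    // 6. threadFactory: customize name, daemon status, etc.
                new ThreadPoolExecutor.CallerRunsPolicy()); // 7. handler: the submitter runs the task itself when saturated

        // with slow tasks, submissions 1-2 start core threads, 3-5 queue up,
        // 6-7 start emergency threads, and an 8th would trigger the rejection policy
        pool.execute(() -> System.out.println(Thread.currentThread().getName() + " runs a task"));
        pool.shutdown();
    }
}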


Code description

day02.TestThreadPoolExecutor demonstrates the core structure of the thread pool in a more visual way

3. wait vs sleep

Requirements

  • Be able to state the differences

One commonality, three differences

Common ground

  • wait(), wait(long), and sleep(long) all cause the current thread to temporarily give up the CPU and enter a blocked state

Differences

  • They belong to different classes

    • sleep(long) is a static method of Thread
    • wait() and wait(long) are instance methods of Object, available on every object
  • They wake up at different times

    • Threads executing sleep(long) or wait(long) wake up after the given number of milliseconds
    • wait(long) and wait() can also be woken by notify; a wait() that is never notified waits forever
    • All of them can be woken by interruption (obtain the thread object and call its interrupt() method, making it throw an exception and wake up)
  • They interact with locks differently (the key point)

    • wait must be called while holding the lock of the object being waited on; sleep has no such restriction
    • The wait method (inside a synchronized block) releases the object lock, letting other threads acquire it ("I give up the CPU, but you may still take the lock")
    • sleep executed inside a synchronized block does not release the object lock ("I give up the CPU, and you can't have the lock either")

Calling wait requires first holding the waited object's lock: wait may only be called inside a block synchronized on that object. Calling wait in ordinary code throws IllegalMonitorStateException (an illegal monitor state exception).

All of them can be woken by interruption: t1.interrupt() forces an InterruptedException to be thrown, and the code in the catch { ... } block then runs immediately - this too is a forced wake-up.
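
A minimal sketch (class name is my own) of the lock difference: swap the wait(2000) for the commented-out sleep(2000) and the main thread is locked out for the full two seconds:

public class WaitVsSleep {
    static final Object LOCK = new Object();

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> {
            synchronized (LOCK) {
                try {
                    LOCK.wait(2000);        // releases LOCK while waiting
                    // Thread.sleep(2000); // would keep holding LOCK for 2 s instead
                } catch (InterruptedException e) {
                    e.printStackTrace();    // t1.interrupt() would land here: a forced wake-up
                }
            }
        }, "t1");
        t1.start();
        Thread.sleep(100); // let t1 take the lock first
        synchronized (LOCK) { // enters immediately, because wait released the lock
            System.out.println("main got the lock while t1 is waiting");
        }
    }
}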

4. lock vs synchronized

Requirements

  • Master the differences between Lock and synchronized
  • Understand ReentrantLock's fair and unfair modes
  • Understand condition variables in ReentrantLock

Three levels

Differences

  • Syntax level
    • synchronized is a keyword; its source lives in the JVM and is implemented in C++
    • Lock is an interface; its source is provided by the JDK and implemented in Java
    • With synchronized, the lock is released automatically on exiting the synchronized block; with Lock, you must call the unlock method manually
  • Functional level
    • Both are pessimistic locks, and both provide the basics: mutual exclusion, synchronization, and lock reentrancy
    • Lock offers features synchronized lacks, such as querying the wait state, fair locking, interruptible acquisition, timed acquisition, and multiple condition variables
    • Lock has implementations suited to different scenarios, such as ReentrantLock and ReentrantReadWriteLock
  • Performance level
    • With no contention, synchronized has many optimizations (biased locking, lightweight locking) and performs quite well
    • Lock implementations generally perform better when contention is heavy

  • synchronized uses wait and notify for synchronization; Lock uses condition variables (await and signal)
  • Lock reentrancy: the same thread may lock the same object several times, and must unlock it the same number of times later
  • Lock is implemented in Java, which makes it easier to inspect - for example, to see which threads are blocked on it; synchronized is implemented in C++, so that is not visible
  • synchronized supports only unfair locking (queue jumping is allowed, so execution is not strictly first-come-first-served; jumping the queue is generally more efficient)
  • synchronized supports neither interruption nor timeout
  • synchronized effectively has a single condition variable and a single wait queue; Lock can have multiple condition variables and thus multiple wait queues
  • Reentrant means re-enterable
  • ReentrantReadWriteLock suits scenarios with many reads and few writes

Fair lock

  • What "fair" means for a fair lock
    • Threads already in the blocked queue are always treated fairly, first in, first out (regardless of the fairness setting)
    • A fair lock means a thread not yet in the queue, when competing for the lock, must go obediently to the tail of the queue if the queue is not empty
    • An unfair lock means a thread not yet in the queue competes directly with the thread just woken at the queue head; whoever grabs the lock wins
  • Fair locks reduce throughput and are rarely used

Condition variables

  • Condition variables in ReentrantLock serve the same purpose as plain synchronized wait and notify: when a thread holding the lock finds its condition unsatisfied, it waits temporarily in a linked-list structure
  • The difference from the synchronized wait set is that a ReentrantLock can have multiple condition variables, enabling finer-grained waiting and wake-up control

Code description

  • day02.TestReentrantLock demonstrates the internal structure of ReentrantLock in a more visual way

A Lock involves two kinds of queues:
1. The blocked queue: threads that failed to grab the lock enter this queue one by one.
2. Wait queues: a thread that did grab the lock but finds, while executing the locked code, that its condition is unsatisfied (e.g. it must wait for another thread's result) calls condition1.await() to block itself, entering condition1's wait queue. There can be several condition variables, used to park threads that must block for different reasons.

condition1.await(): go and wait in this condition variable's wait queue
condition1.signal(): wake one thread (the queue head) in this wait queue; after waking, it goes to the tail of the blocked queue (synchronized would put it at the head of the blocked queue)
condition1.signalAll(): wake all threads in this wait queue; after waking, they go to the tail of the blocked queue

synchronized is implemented in C++ underneath, so it cannot be stepped through with a Java debugger; its implementation also differs from Lock's - for example, a thread woken from its wait set goes to the head of the blocked queue (higher priority).
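
A minimal sketch (a hypothetical bounded counter) of multiple condition variables: producers and consumers wait in separate queues, something a single synchronized wait set cannot express:

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class TwoConditions {
    private final ReentrantLock lock = new ReentrantLock(); // new ReentrantLock(true) would make it fair
    private final Condition notFull  = lock.newCondition(); // producers wait in this queue
    private final Condition notEmpty = lock.newCondition(); // consumers wait in this queue
    private int items = 0;
    private static final int CAPACITY = 10;

    public void put() throws InterruptedException {
        lock.lock();
        try {
            while (items == CAPACITY) {
                notFull.await();   // condition unsatisfied: release the lock and wait in notFull's queue
            }
            items++;
            notEmpty.signal();     // wake only a waiting consumer, not the producers
        } finally {
            lock.unlock();         // unlike synchronized, Lock must be released manually
        }
    }

    public void take() throws InterruptedException {
        lock.lock();
        try {
            while (items == 0) {
                notEmpty.await();
            }
            items--;
            notFull.signal();      // wake only a waiting producer
        } finally {
            lock.unlock();
        }
    }
}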

5. volatile

Requirements

  • Master the three aspects of thread safety
  • Master which of these problems volatile can solve

Atomicity

  • Cause: under multithreading, the instructions of different threads interleave, corrupting reads and writes of shared variables
  • Solution: use a pessimistic or an optimistic lock; volatile cannot solve atomicity

An example of the atomicity problem: a balance of 10 undergoes a +5 from one thread and a -5 from another; the result should still be 10, but because the instructions interleave, the final balance can end up as 5.
Fixes: add a lock so the operation becomes atomic, or use CAS (CAS => compareAndSet: before modifying, first compare the variable against the expected old value to see whether another thread has quietly changed it). CAS also guarantees atomicity.
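
A minimal sketch (class name is my own) of the +5/-5 example fixed with CAS via AtomicInteger:

import java.util.concurrent.atomic.AtomicInteger;

public class BalanceDemo {
    static AtomicInteger balance = new AtomicInteger(10);

    public static void main(String[] args) throws InterruptedException {
        Thread add = new Thread(() -> balance.addAndGet(5));  // a CAS retry loop runs inside
        Thread sub = new Thread(() -> balance.addAndGet(-5));
        add.start();
        sub.start();
        add.join();
        sub.join();
        System.out.println(balance.get()); // always 10, however the threads interleave
    }
}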

Visibility

  • Cause: compiler optimizations, cache optimizations, or CPU instruction reordering make one thread's modification of a shared variable invisible to other threads
  • Solution: marking the shared variable volatile prevents such compiler optimizations and makes one thread's modification of the shared variable visible to other threads

Example of the visibility problem: the Thread-0 thread's modification of the shared stop flag is never seen by the main thread (optimizations made to improve speed also bring problems).

That simple explanation is incomplete, though. More precisely, the cause is JIT (just-in-time compiler) optimization. The JIT optimizes hot methods and loops, i.e. code executed very frequently. The body of the while(!stop){...} loop below is so trivial that it runs ten million times in 0.1 s, and re-reading stop from memory on every iteration looks comparatively slow. The JIT cannot sit still watching that, so it optimizes: it caches the compiled machine code as, in effect, while(!false) and stops re-reading stop from memory. From then on it assumes stop == false, so a later modification of stop is never noticed.
This optimization only happens because the code executes so many times that it crosses the JIT's optimization threshold (on the order of hundreds of thousands of executions). Code executed only a few times is never optimized.

The Thread.sleep(100) above is enough for while(!stop) to loop ten million times, far beyond the threshold. Change it to Thread.sleep(1): the loop then runs only on the order of ten thousand times within 1 ms, below the threshold, so it is not optimized; it keeps reading memory, sees the modified stop, and the hang does not occur.

Solution: mark the variable volatile. A volatile variable is not cached by JIT compilation optimization; every read genuinely fetches the value from memory, so the modification is seen immediately, which solves the problem. (JIT compilation can speed a program up 10-100x and cannot simply be turned off.)
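
A hedged reconstruction of the loop discussed above, in the spirit of the course's day02.threadsafe.ForeverLoop demo (details assumed):

public class ForeverLoop {
    static volatile boolean stop = false; // remove volatile and the loop below may never end

    public static void main(String[] args) {
        new Thread(() -> {
            try {
                Thread.sleep(100);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            stop = true;
            System.out.println("modified stop to true");
        }).start();

        while (!stop) {
            // a hot, trivial loop: without volatile, the JIT may compile it as while(!false)
        }
        System.out.println("loop exited");
    }
}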

Ordering

  • Cause: compiler optimizations, cache optimizations, or CPU instruction reordering make the actual execution order of instructions differ from the order they were written in
  • Solution: marking the shared variable volatile adds barriers to its reads and writes, preventing other reads and writes from crossing those barriers, thereby preventing reordering
  • Note:
    • The barrier added to a volatile write prevents other writes above it from being reordered below the volatile write (the earlier writes are honestly written first, although later operations may still move up ahead of them) => write the volatile variable last
    • The barrier added to a volatile read prevents other reads below it from crossing the barrier and being reordered above the volatile read (the later reads honestly wait until the volatile read is done, although earlier reads may still move down after it) => read the volatile variable first
    • The barriers added by volatile reads and writes only prevent instruction reordering within the same thread

Therefore, volatile must go on the variable that is read first (on the reading side) and written last (on the writing side).
Beginners are not advised to rely on volatile for ordering.
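
A minimal sketch of the read-first / write-last rule, in the spirit of the course's day02.threadsafe.Reordering demo (details assumed):

public class Reordering {
    int x;
    volatile boolean ready; // written last by the writer, read first by the reader

    void writer() {
        x = 42;       // the write barrier keeps this write above the volatile write
        ready = true; // the volatile write goes last
    }

    void reader() {
        if (ready) {               // the volatile read goes first; the read barrier keeps the read of x below it
            System.out.println(x); // guaranteed to print 42
        }
    }
}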

Code description

  • day02.threadsafe.AddAndSubtract demonstrates atomicity
  • day02.threadsafe.ForeverLoop demonstrates visibility
    • Note: this example has been shown to be a visibility problem caused by compiler (JIT) optimization
  • day02.threadsafe.Reordering demonstrates ordering
    • It needs to be packaged into a jar to test
  • See the accompanying video for the full explanation

6. Pessimistic locking vs optimistic locking

Requirements

  • Master the difference between pessimistic and optimistic locking

Comparing pessimistic and optimistic locking

  • Pessimistic locking is represented by synchronized and Lock

    • Its core idea: a thread may operate on shared variables only while holding the lock; only one thread at a time succeeds in taking the lock, and threads that fail to take it must stop and wait
    • Going from running to blocked and from blocked back to awake involves thread context switches; if this happens frequently, performance suffers
    • In practice, when acquiring synchronized or a Lock, a thread whose lock is already taken will retry a few times before blocking, to reduce the chance of a context switch
  • Optimistic locking is represented by AtomicInteger, which uses CAS to guarantee atomicity

    • Its core idea: no locking at all; only one thread at a time succeeds in modifying the shared variable, and the threads that fail do not stop - they keep retrying until they succeed
    • Since the threads keep running and never block, no thread context switching is involved
    • It requires multi-core CPU support, and the thread count should not exceed the number of CPU cores

The typical optimistic lock is AtomicInteger, whose bottom layer is Unsafe retrying over and over, similar in principle to the sketch below.
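
(A hedged reconstruction of the retry idea - the real loop lives in Unsafe and AtomicInteger's own methods, not in this illustrative class:)

import java.util.concurrent.atomic.AtomicInteger;

public class CasAdd {
    static AtomicInteger balance = new AtomicInteger(10);

    static void add(int amount) {
        while (true) {
            int old = balance.get();  // read the current value
            int next = old + amount;  // compute the new value
            // succeed only if nobody changed balance since we read it; otherwise loop and retry
            if (balance.compareAndSet(old, next)) {
                break;
            }
        }
    }
}
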
compareAndSetXXX => abbreviated CAS
Note that CAS only guarantees atomicity, not visibility (the JIT may still cache the bytecode-to-machine-code compilation); visibility is still the job of volatile. So even when shared variables are operated on via CAS, they must still be declared volatile.

How do pessimistic and optimistic locks keep shared variables thread-safe?
Pessimistic lock: synchronized ensures that only one thread at a time can enter the shared code block, i.e. the whole block is atomic, so access to the shared variables is naturally thread-safe. (Simply put: atomicity prevents instruction interleaving, sacrificing efficiency for safety.)
Optimistic lock: the CAS old/new-value comparison ensures that only one thread at a time succeeds in modifying the value; a thread whose modification fails keeps retrying, re-reading the latest value before operating, which keeps things safe. CAS involves no mutual exclusion and no blocking - instructions may interleave arbitrarily - yet the compare-before-modify step still guarantees the correctness of the shared variable.

Code description

  • day02.SyncVsCas demonstrates solving atomic assignment with both optimistic and pessimistic locks
  • See the accompanying video for the full explanation

Summary:
Atomicity can only be solved by locking - either a pessimistic or an optimistic lock will do.
Visibility can only be solved by volatile, which prevents compiler optimizations from hiding a modification of the variable from other threads.
Ordering can only be solved by volatile, which prevents compiler and CPU instruction reordering from making the actual execution order differ from the written order and producing unexpected results; remember that volatile must go on the shared variable that is read first and written last.

7. Hashtable vs ConcurrentHashMap

Requirements

  • Master the difference between Hashtable and ConcurrentHashMap
  • Master the implementation differences of ConcurrentHashMap across versions

For a more visual demonstration, see hash-demo.jar in the course materials. It requires JDK 14 or later. Change into the jar's directory and run the following command:

java -jar --add-exports java.base/jdk.internal.misc=ALL-UNNAMED hash-demo.jar

Hashtable vs ConcurrentHashMap

  • Hashtable and ConcurrentHashMap are both thread-safe Map collections (neither allows null keys or values)
  • Hashtable has low concurrency: the whole Hashtable is guarded by a single lock, so only one thread can operate on it at a time (each resize: n -> 2n+1)
  • ConcurrentHashMap has high concurrency: the map is guarded by many locks, so threads accessing different locks do not conflict (the number of locks determines the concurrency level)

ConcurrentHashMap 1.7

  • Data structure: Segment (outer array) + HashEntry (inner arrays) + linked lists; each Segment has its own lock, so threads accessing different Segments do not conflict

(Each Segment holds one inner array. E.g. with capacity = 32 and concurrency level clevel = 8, each Segment gets an inner array of size capacity / clevel = 32 / 8 = 4.) (Growth driven by the load factor increases capacity, while clevel cannot change, so in the end it is the inner arrays that grow.) (The inner array is the actual storage array, as in HashMap; collisions there are resolved by chaining into a linked list.)
Note: capacity may be smaller than clevel; in that case the minimum inner-array size is 2.

  • Concurrency: the size of the Segment array is the concurrency level; it determines how many threads can access the map concurrently. The Segment array cannot grow, so the concurrency level is fixed when the ConcurrentHashMap is created
  • Index computation
    • Suppose the outer array's length is 2^m; a key's index in the outer array is the high m bits of the key's secondary hash (compute hash(key), take the high m bits, and place the key under the corresponding outer slot)
    • Suppose the inner array's length is 2^n; the key's index in the inner array is the low n bits of the key's secondary hash (compute hash(key), take the low n bits, and place the key at the corresponding inner index)
  • Expansion: each inner array grows independently (each of the clevel inner arrays resizes on its own). When an inner array exceeds its load factor, a resize is triggered that doubles its size (each resize: n -> 2n)
  • Segment[0] prototype: when the other Segments create their inner arrays for the first time, they copy this prototype's parameters - the array length and load factor follow the prototype
    Initially the other Segments have no inner array at all; they are empty
    Segment[0]'s inner array also stores values normally and may itself have grown large; at that point, even a brand-new Segment holding a single value gets an array the same size as Segment[0]'s (this is, in fact, the Prototype design pattern)

ConcurrentHashMap 1.8
(There is no Segment outer array; it is directly an array + linked lists / red-black trees)

  • Data structure: Node array + linked lists or red-black trees; each head node of the array serves as a lock, so threads accessing different head nodes do not conflict. If competition occurs when a head node is first created, CAS is used instead of synchronized, further improving performance
  • Concurrency: the size of the Node array equals the concurrency level; unlike 1.7, the Node array can grow (as long as threads operate on different list heads, i.e. on elements that do not collide, they can proceed concurrently)
  • Expansion condition: the array grows when it is 3/4 full (1.7 grows on reaching n * loadFactor). Expansion first allocates a new array of capacity 2n, then rehashes and copies the elements over one by one
  • Expansion unit: migration proceeds one linked list at a time, from the back of the array toward the front; once a list is migrated, its old head node is replaced with a ForwardingNode
  • Concurrent get during expansion
    • Whether the head is a ForwardingNode (a migrated head) decides whether to search the new array or the old one; gets never block
    • If the list is longer than 1, nodes must be copied (new Node objects created), for fear that next pointers change during migration
    • If the last few elements of the list keep the same index after expansion, those nodes need not be copied (their next relationships are unchanged)
  • Concurrent put during expansion
    • If the put thread targets the same linked list the expansion thread is currently migrating, the put blocks
    • If the target list has not been migrated yet, i.e. its head is not a ForwardingNode, the put proceeds concurrently
    • If the target list has already been migrated, i.e. its head is a ForwardingNode, the put thread helps with the expansion (its own put is blocked, but rather than idling, it pitches in and migrates other lists)
  • Compared with 1.7, initialization is lazy (1.7 allocates the Segment array and Segment[0]'s inner array at construction; 1.8 allocates no array at construction at all)
  • capacity is the estimated number of elements; capacity / loadFactor gives the initial array size, rounded up to the nearest power of two, 2^n
  • loadFactor is used only to compute that initial array size; afterwards the growth threshold is fixed at 3/4
  • When the treeify threshold is exceeded: if the array size is already 64, treeify directly; otherwise grow the array first, up to 3 rounds of doubling from the original size
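
A usage note that follows from the bucket-level locking: the map itself is thread-safe, but a compound get-then-put is not. A minimal sketch (class name is my own):

import java.util.concurrent.ConcurrentHashMap;

public class WordCount {
    static final ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();

    static void count(String word) {
        // NOT safe even on a thread-safe map: get + put is two separate operations,
        // and another thread can slip in between them
        // Integer old = counts.get(word);
        // counts.put(word, old == null ? 1 : old + 1);

        // safe: merge performs the whole read-modify-write while holding the bucket's lock
        counts.merge(word, 1, Integer::sum);
    }

    public static void main(String[] args) {
        count("hello");
        count("hello");
        System.out.println(counts); // {hello=2}
    }
}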

8. ThreadLocal

Requirements

  • Master the purpose and principle of ThreadLocal
  • Master when ThreadLocal memory is released

Purpose

  • ThreadLocal provides thread isolation for a resource object: each thread uses its own copy, avoiding the thread-safety problems caused by contention (this solves thread safety the exact opposite way - don't share; let each thread use its own resource, at the cost of using more resources)
  • ThreadLocal also provides resource sharing within a thread (many methods may run on one thread, and the value can be shared across them like a local variable that spans methods)

Whichever thread calls tl.get() obtains that thread's own local value, which naturally isolates threads from each other.
And within a single thread, any piece of code that calls tl.get() on the same ThreadLocal object tl obtains the same value - that thread's one local copy - which realizes resource sharing within the thread.
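
A minimal sketch (class and method names are my own) of both properties at once - isolation between threads, sharing within a thread:

public class ThreadLocalDemo {
    static final ThreadLocal<StringBuilder> TL = ThreadLocal.withInitial(StringBuilder::new);

    public static void main(String[] args) {
        Runnable task = () -> {
            TL.get().append(Thread.currentThread().getName()); // each thread works on its own builder
            step2();     // a different method on the same thread sees the same builder
            TL.remove(); // release explicitly rather than relying on GC
        };
        new Thread(task, "t1").start();
        new Thread(task, "t2").start();
    }

    static void step2() {
        System.out.println(TL.get()); // prints t1 or t2: isolation between threads, sharing within one
    }
}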

Principle

Each Thread has a member variable of type ThreadLocalMap, used to store the resource objects.

  • Calling set uses the ThreadLocal object itself as the key and the resource object as the value, putting them into the current thread's ThreadLocalMap

(The ThreadLocal here merely provides the association, acting as a common key, so the same ThreadLocal object can represent the same kind of resource across all threads. Accordingly, several new ThreadLocals can serve as several common keys, letting a thread store several kinds of values in its ThreadLocalMap (initial capacity 16).)
As for the hash value: the first ThreadLocal's hash is 0, and each newly created ThreadLocal's hash is the previous one plus 1640531527; taking the remainder then maps it to a slot in the ThreadLocalMap.

  • Calling get uses the ThreadLocal itself as the key to look up the associated value in the current thread
  • Calling remove uses the ThreadLocal itself as the key to remove the value associated with the current thread
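
A minimal sketch (names are my own) of several ThreadLocals acting as several keys in one thread's ThreadLocalMap:

public class TwoKeys {
    static final ThreadLocal<String> NAME = new ThreadLocal<>();
    static final ThreadLocal<Integer> AGE = new ThreadLocal<>();

    public static void main(String[] args) {
        NAME.set("alice"); // entry (key = NAME, value = "alice") in main's ThreadLocalMap
        AGE.set(30);       // a second entry (key = AGE) in the same per-thread map
        System.out.println(NAME.get() + " " + AGE.get());
        NAME.remove();     // removes only the NAME entry
        AGE.remove();
    }
}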

Some features of ThreadLocalMap

  • The keys' hash values are uniformly distributed
  • The initial capacity is 16, the load factor is 2/3, and the capacity doubles on expansion
  • Key index collisions are resolved with open addressing - probing onward to the next free slot - rather than the chaining ThreadLocalMap does not use

Weak-reference keys

The keys in ThreadLocalMap are designed as weak references, for the following reasons:

  • A Thread may need to run for a long time (e.g. threads in a thread pool); if a key is no longer used, the memory it occupies should be reclaimable when memory runs short (by GC)
    Those ThreadLocal keys may no longer be needed, but since the map really does reference them as keys, strong references would keep the GC from ever releasing them, and they would occupy JVM memory forever.
    (Garbage collection can reclaim weakly referenced objects but not strongly referenced ones. Making the key a weak reference guards against forgetting to release the memory yourself: during GC, once a ThreadLocal object is no longer referenced by any variable - and an object the program can no longer reach can never be used again - the key object is reclaimed automatically and stops occupying memory.)
    (The value, however, is held by a strong reference, so GC will not release its memory.)

When memory is released

  • Passively: GC releases the key
    • Only the key's memory is released; the memory of the associated value is not
  • Lazily and passively: the value is released (no need to scan the whole ThreadLocalMap; cleanup happens during the next get or set)
    • On get, if the probed slot's key turns out to be null, that slot's value memory is released (unlike an ordinary map, the null-key slot is reused for the new key, and the stale value is set to null)
    • On set, a heuristic scan clears the value memory of nearby null-key slots; the number of probes depends on the element count and on whether a null key is found (when set finds a null key in its path, it installs the new key there - the old value is definitely cleaned up - and also cleans adjacent null-key, non-null-value slots, so leaked memory is reclaimed without scanning the whole ThreadLocalMap)
  • Actively: remove releases both key and value
    • The key's and value's memory are released together, and the value memory of adjacent null-key slots is cleared as well
    • This is the recommended way, because ThreadLocal is usually used as a static variable (i.e. a strong reference), so you cannot passively rely on GC to reclaim it

Memory leak: memory the garbage collector is unable to reclaim is said to have leaked.
The value object is referenced only by its Entry. If the value were a weak reference too, GC might run before method stack frame 2 finished executing and reclaim the value, causing a NullPointerException. (To be filled in later.)

9. The difference between a thread and a process (asked in state-owned enterprise interviews; to be supplemented in the new version of the course)


Source: blog.csdn.net/hza419763578/article/details/130556607