[Java interview "stereotyped essay" notes] Concurrency

1. Thread state

Six states and transitions


They are:

  • New (NEW)
    • The thread object has been created, but the start() method has not been called yet
    • At this point the thread is not yet associated with an underlying operating-system thread
  • Runnable (RUNNABLE)
    • After start() is called, the thread moves from New to Runnable
    • It is now associated with an underlying OS thread and is scheduled for execution by the operating system
  • Terminated (TERMINATED)
    • The thread's code has finished executing; the thread moves from Runnable to Terminated
    • Its association with the underlying OS thread is released
  • Blocked (BLOCKED)
    • When acquiring a lock fails, a runnable thread enters the Monitor's blocked queue and stops consuming CPU time
    • When the lock-holding thread releases the lock, blocked threads in the queue are woken according to certain rules, and a woken thread returns to Runnable
  • Waiting (WAITING)
    • When a thread has acquired the lock but its condition is not satisfied, it calls wait(): it releases the lock, moves from Runnable into the Monitor's wait set, and likewise consumes no CPU time
    • When another lock-holding thread calls notify() or notifyAll(), waiting threads in the wait set are woken according to certain rules and return to Runnable
  • Timed waiting (TIMED_WAITING)
    • When a thread has acquired the lock but its condition is not satisfied, it calls wait(long): it releases the lock, moves from Runnable into the Monitor's wait set for a limited time, and likewise consumes no CPU time
    • When another lock-holding thread calls notify() or notifyAll(), timed-waiting threads in the wait set are woken according to certain rules, return to Runnable, and compete for the lock again
    • If the wait times out, the thread likewise returns from Timed waiting to Runnable and competes for the lock again
    • Calling sleep(long) also moves a thread from Runnable to Timed waiting, but this has nothing to do with any Monitor: no wake-up call is needed, and the thread returns to Runnable on its own when the timeout expires
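The states above can be observed directly through Thread.getState(). A minimal sketch (the class and method names are my own, for illustration) that catches a thread in the NEW, TIMED_WAITING, and TERMINATED states:

```java
public class ThreadStateDemo {
    // Observes one thread's state at three points in its life:
    // before start(), while sleeping, and after it finishes
    public static Thread.State[] observeStates() throws InterruptedException {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(500); // TIMED_WAITING while asleep
            } catch (InterruptedException ignored) {
            }
        });
        Thread.State before = t.getState();   // NEW: start() not called yet
        t.start();
        Thread.sleep(100);                    // give t time to enter sleep
        Thread.State during = t.getState();   // TIMED_WAITING: inside sleep(long)
        t.join();                             // wait for t to finish
        Thread.State after = t.getState();    // TERMINATED: code has run to completion
        return new Thread.State[] { before, during, after };
    }

    public static void main(String[] args) throws InterruptedException {
        for (Thread.State s : observeStates()) {
            System.out.println(s);
        }
    }
}
```

BLOCKED and WAITING can be provoked the same way with a contended synchronized block or a wait() call.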

Other cases (awareness is enough)

  • The interrupt() method can interrupt threads in the Waiting or Timed waiting state and restore them to Runnable
  • LockSupport's park() and unpark() methods can also make a thread wait and wake it up
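A small sketch of park/unpark from java.util.concurrent.locks.LockSupport (the class name is my own; the sleep gives the parked thread time to actually reach WAITING):

```java
import java.util.concurrent.locks.LockSupport;

public class ParkDemo {
    // Parks a thread, checks that it reached WAITING, then unparks it
    public static boolean parkThenUnpark() throws InterruptedException {
        Thread t = new Thread(LockSupport::park); // thread suspends itself
        t.start();
        Thread.sleep(100);                        // let t reach park()
        boolean wasWaiting = (t.getState() == Thread.State.WAITING);
        LockSupport.unpark(t);                    // wake it up again
        t.join(1000);
        return wasWaiting && !t.isAlive();        // it waited, then terminated
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("parked then woke: " + parkThenUnpark());
    }
}
```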

Five states

The five-state model comes from the operating-system level of description


  • Running: the thread has been allocated CPU time and is actually executing its code
  • Ready: the thread is eligible for CPU time, but it is not its turn yet
  • Blocked: the thread is not eligible for CPU time
    • Covers the Blocked, Waiting, and Timed waiting states of the Java model
    • Also includes blocking I/O: when a thread makes a blocking I/O call, the actual work is done by the I/O device, and the thread has nothing to do but wait
  • New and Terminated: similar to the Java states of the same name; no further detail needed

2. Thread pool

Seven parameters

  1. corePoolSize, the core pool size - the number of threads that are kept in the pool
  2. maximumPoolSize, the maximum pool size - the core threads plus the emergency (non-core) threads
  3. keepAliveTime, the keep-alive time - how long an emergency thread survives; if no new task arrives within this time, the thread is released
  4. unit, the time unit - the unit for the emergency threads' keep-alive time, e.g. seconds or milliseconds
  5. workQueue, the work queue - when no core thread is idle, new tasks queue up here; when the queue is full, emergency threads are created to run tasks
  6. threadFactory, the thread factory - lets you customize thread creation, e.g. set the thread name or the daemon flag
  7. handler, the rejection policy - triggered when all threads are busy and the workQueue is full
    1. Throw an exception: java.util.concurrent.ThreadPoolExecutor.AbortPolicy
    2. Run the task in the caller's thread: java.util.concurrent.ThreadPoolExecutor.CallerRunsPolicy
    3. Discard the task: java.util.concurrent.ThreadPoolExecutor.DiscardPolicy
    4. Discard the oldest queued task: java.util.concurrent.ThreadPoolExecutor.DiscardOldestPolicy
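The seven parameters map one-to-one onto the ThreadPoolExecutor constructor. A sketch with illustrative values (the numbers and the thread name are examples, not recommendations):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Future;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    // Builds a pool using all seven parameters described above
    public static ThreadPoolExecutor newPool() {
        return new ThreadPoolExecutor(
                2,                                  // 1. corePoolSize
                4,                                  // 2. maximumPoolSize
                60, TimeUnit.SECONDS,               // 3. keepAliveTime + 4. unit
                new ArrayBlockingQueue<>(10),       // 5. workQueue
                r -> {                              // 6. threadFactory
                    Thread t = new Thread(r, "pool-worker");
                    t.setDaemon(false);
                    return t;
                },
                new ThreadPoolExecutor.CallerRunsPolicy()); // 7. handler
    }

    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor pool = newPool();
        Future<Integer> f = pool.submit(() -> 21 * 2);
        System.out.println(f.get()); // 42
        pool.shutdown();
    }
}
```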


3. wait vs sleep

One commonality, three differences

Common ground

  • wait(), wait(long), and sleep(long) all make the current thread temporarily give up the CPU and enter a waiting (blocked) state

Differences

  • Different owners

    • sleep(long) is a static method of Thread
    • wait() and wait(long) are instance methods of Object, available on every object
  • Different wake-up timing

    • Threads executing sleep(long) or wait(long) wake up once the given number of milliseconds has elapsed
    • wait(long) and wait() can also be woken by notify; a wait() with no timeout waits forever
    • All of them can be woken early by interruption
  • Different lock behavior (the key point)

    • The wait methods must be called while holding the lock of the object being waited on; sleep has no such restriction
    • While waiting, wait releases the object's lock, allowing other threads to acquire it ("I give up the CPU, but you can still use the lock")
    • sleep inside a synchronized block does not release the object's lock ("I give up the CPU, and you can't use the lock either")
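The lock difference can be demonstrated: while one thread sits inside wait(long), another thread can take the lock; while it sits inside sleep(long) in a synchronized block, it cannot. A sketch (class and method names are my own; the timing constants are generous but still assume a normally scheduled JVM):

```java
public class WaitVsSleepDemo {
    // Returns true if a second thread could take the lock while the worker
    // was inside wait(long) (useWait = true) or sleep(long) (useWait = false)
    public static boolean lockAvailableDuring(boolean useWait) throws InterruptedException {
        Object lock = new Object();
        Thread worker = new Thread(() -> {
            synchronized (lock) {
                try {
                    if (useWait) {
                        lock.wait(5000);      // releases the lock while waiting
                    } else {
                        Thread.sleep(5000);   // keeps holding the lock
                    }
                } catch (InterruptedException ignored) {
                }
            }
        });
        worker.start();
        Thread.sleep(200);                    // let the worker enter wait/sleep
        boolean[] acquired = {false};
        Thread prober = new Thread(() -> {
            synchronized (lock) {             // succeeds only once the lock is free
                acquired[0] = true;
            }
        });
        prober.start();
        prober.join(500);                     // give the prober a short window
        boolean result = acquired[0];
        worker.interrupt();                   // cut the wait/sleep short
        worker.join();
        prober.join();
        return result;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("lock free during wait:  " + lockAvailableDuring(true));
        System.out.println("lock free during sleep: " + lockAvailableDuring(false));
    }
}
```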

4. Lock vs synchronized

Three levels

Differences

  • Syntactic level
    • synchronized is a keyword; its source lives in the JVM and is implemented in C++
    • Lock is an interface; its source ships with the JDK and is implemented in Java
    • With synchronized, the lock is released automatically when the synchronized block exits; with Lock, you must call unlock() manually
  • Functional level
    • Both are pessimistic locks, and both provide the basics: mutual exclusion, synchronization, and lock reentrancy
    • Lock offers features synchronized lacks, such as querying the waiting state, fair locking, interruptible acquisition, timed acquisition, and multiple condition variables
    • Lock has implementations tailored to different scenarios, such as ReentrantLock and ReentrantReadWriteLock
  • Performance level
    • Under no contention, synchronized has many optimizations (biased locking, lightweight locking) and its performance is respectable
    • Lock implementations generally perform better when contention is heavy
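One of the Lock-only features, timed acquisition, is easy to show: tryLock(timeout) gives up gracefully when the lock stays held, something synchronized cannot do. A sketch (the class and method names are my own):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    // The calling thread holds the lock; a second thread attempts a
    // timed acquisition, which fails politely instead of blocking forever
    public static boolean acquireWhileHeld() throws Exception {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();                               // this thread owns the lock
        try {
            boolean[] got = {true};
            Thread t = new Thread(() -> {
                try {
                    // wait at most 100 ms for the lock, then give up
                    got[0] = lock.tryLock(100, TimeUnit.MILLISECONDS);
                    if (got[0]) {
                        lock.unlock();
                    }
                } catch (InterruptedException ignored) {
                }
            });
            t.start();
            t.join();
            return got[0];                         // false: the attempt timed out
        } finally {
            lock.unlock();                         // Lock requires manual release
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("acquired while held: " + acquireWhileHeld());
    }
}
```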

Fair locks

  • How fairness shows up
    • Threads already in the blocked queue are always treated fairly, first in first out (whether the lock is fair or not)
    • A fair lock means a thread not yet in the queue must also compete fairly: if the queue is not empty, it dutifully joins the tail and waits its turn
    • An unfair lock means a thread not yet in the queue competes directly with the thread just woken from the head of the queue, and whoever grabs the lock wins
  • Fair locks reduce throughput and are generally not used

Condition variables

  • The condition variables in ReentrantLock play the same role as wait and notify in synchronized: a linked-list structure where a thread waits temporarily when it holds the lock but finds its condition unsatisfied
  • The difference from synchronized's single wait set is that one ReentrantLock can have multiple condition variables, enabling finer-grained waiting and wake-up control
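The classic use of multiple condition variables is a bounded buffer: producers wait only on "not full", consumers only on "not empty", so a signal wakes exactly the right kind of thread. A sketch (names and the one-slot buffer are illustrative):

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class ConditionDemo {
    private final Lock lock = new ReentrantLock();
    // Two separate waiting "rooms" on the same lock; synchronized has only one
    private final Condition notEmpty = lock.newCondition();
    private final Condition notFull = lock.newCondition();
    private final int[] buffer = new int[1];
    private int count = 0;

    public void put(int value) throws InterruptedException {
        lock.lock();
        try {
            while (count == buffer.length) {
                notFull.await();      // producers wait only on notFull
            }
            buffer[count++] = value;
            notEmpty.signal();        // wakes only consumers
        } finally {
            lock.unlock();
        }
    }

    public int take() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0) {
                notEmpty.await();     // consumers wait only on notEmpty
            }
            int value = buffer[--count];
            notFull.signal();         // wakes only producers
            return value;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ConditionDemo q = new ConditionDemo();
        new Thread(() -> {
            try { q.put(42); } catch (InterruptedException ignored) { }
        }).start();
        System.out.println(q.take()); // 42
    }
}
```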

5. volatile

Atomicity

  • Cause: under multithreading, the instructions of different threads interleave, corrupting reads and writes of shared variables
  • Solution: use pessimistic or optimistic locking; volatile cannot provide atomicity

Visibility

  • Cause: due to compiler optimizations, cache optimizations, or CPU instruction reordering, one thread's modification of a shared variable is not visible to other threads
  • Solution: marking the shared variable volatile prevents such optimizations, making one thread's modification visible to the others
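The standard visibility demonstration is a stop flag: without volatile, the reader's loop may never observe the write; with volatile, it stops promptly. A sketch (class and method names are my own; the join timeout assumes a normally scheduled JVM):

```java
public class VisibilityDemo {
    // volatile guarantees the reader sees the main thread's write to stop
    private static volatile boolean stop = false;

    public static boolean stopsPromptly() throws InterruptedException {
        stop = false;
        Thread reader = new Thread(() -> {
            while (!stop) {
                // busy-wait; the volatile read fetches a fresh value each pass
            }
        });
        reader.start();
        Thread.sleep(50);        // let the reader start spinning
        stop = true;             // this write is published to the reader
        reader.join(1000);       // the loop should exit almost immediately
        return !reader.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("reader stopped: " + stopsPromptly());
    }
}
```

Removing the volatile keyword makes it legal for the JIT to hoist the read out of the loop, and the reader may then spin forever.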

Ordering

  • Cause: due to compiler optimizations, cache optimizations, or CPU instruction reordering, the actual execution order of instructions differs from the order they were written in
  • Solution: marking the shared variable volatile adds memory barriers around its reads and writes, preventing other reads and writes from crossing those barriers, and thereby preventing reordering
  • Note:
    • The barrier on a volatile write prevents writes above it from being reordered below the volatile write
    • The barrier on a volatile read prevents reads below it from being reordered above the volatile read
    • The barriers added by volatile reads and writes only prevent instruction reordering within the same thread
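A classic place where the volatile write barrier matters is double-checked locking: without volatile, the write publishing the reference could be reordered before the constructor's writes, letting another thread see a half-built object. A sketch (the class and its field are illustrative):

```java
public class Singleton {
    // volatile prevents "instance = new Singleton()" from being reordered
    // so that the reference is published before the constructor finishes
    private static volatile Singleton instance;

    private final int value;

    private Singleton() {
        this.value = 42; // must be visible before the reference is published
    }

    public static Singleton getInstance() {
        if (instance == null) {                 // first check, no lock
            synchronized (Singleton.class) {
                if (instance == null) {         // second check, under the lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }

    public int getValue() {
        return value;
    }

    public static void main(String[] args) {
        System.out.println(Singleton.getInstance().getValue()); // 42
    }
}
```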

6. Pessimistic locking vs optimistic locking

Comparing pessimistic locking and optimistic locking

  • Pessimistic locking is represented by synchronized and Lock

    • The core idea: a thread may only operate on the shared variable once it owns the lock; only one thread at a time succeeds in taking the lock, and a thread that fails must stop and wait
    • Going from running to blocked, and from blocked back to awake, involves thread context switches, which hurt performance if they happen frequently
    • In practice, when a thread tries to acquire a synchronized or Lock lock that is already held, it retries (spins) a few times first to reduce the chance of blocking
  • Optimistic locking is represented by AtomicInteger, which uses CAS to guarantee atomicity

    • The core idea: no lock is needed; only one thread at a time succeeds in modifying the shared variable, and the threads that fail do not stop, they simply retry until they succeed
    • Because the threads keep running and never block, no thread context switching is involved
    • It requires multi-core CPU support, and the number of threads should not exceed the number of CPU cores
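The CAS retry loop described above can be sketched with AtomicInteger (the class and method names are my own; AtomicInteger's own incrementAndGet does the same thing internally):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    private static final AtomicInteger counter = new AtomicInteger(0);

    // Optimistic increment: read, compute, compareAndSet; retry on failure
    public static int increment() {
        while (true) {
            int current = counter.get();              // read the current value
            int next = current + 1;                   // compute the new value
            if (counter.compareAndSet(current, next)) {
                return next;                          // nobody interfered: done
            }
            // another thread changed counter in between: loop and retry
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    increment();
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        System.out.println(counter.get()); // 4000: no lost updates
    }
}
```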

7. Hashtable vs ConcurrentHashMap

  • Both Hashtable and ConcurrentHashMap are thread-safe Map implementations
  • Hashtable has low concurrency: the entire table is guarded by a single lock, so only one thread can operate on it at a time
  • ConcurrentHashMap has high concurrency: the map is guarded by multiple locks, so threads accessing different locks do not conflict
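In practice, the per-bin locking pays off when many threads update different keys: per-key atomic operations such as merge() run without conflicting. A word-counting sketch (the class and method names are my own):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ChmDemo {
    // Two threads each count every word once; merge() is an atomic
    // read-modify-write per key, so no increments are lost
    public static Map<String, Integer> countWords(String[] words) throws InterruptedException {
        ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();
        Thread[] workers = new Thread[2];
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Thread(() -> {
                for (String w : words) {
                    counts.merge(w, 1, Integer::sum); // atomic per key
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            t.join();
        }
        return counts;
    }

    public static void main(String[] args) throws InterruptedException {
        Map<String, Integer> c = countWords(new String[] { "a", "b", "a" });
        // "a" appears twice per worker, "b" once per worker: a -> 4, b -> 2
        System.out.println(c);
    }
}
```

Note that a plain get-then-put sequence would still race even on a thread-safe map; the atomicity of the compound operation comes from merge(), not from the map alone.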

ConcurrentHashMap 1.7

  • Data structure: Segment (large array) + HashEntry (small array) + linked lists; each Segment has its own lock, so threads accessing different Segments do not conflict
  • Concurrency level: the size of the Segment array is the concurrency level, i.e. how many threads can operate concurrently at once; the Segment array cannot grow, so the concurrency level is fixed when the ConcurrentHashMap is created
  • Index calculation
    • If the large array's length is 2^m, a key's index in the large array is the high m bits of the key's secondary hash
    • If the small array's length is 2^n, a key's index in the small array is the low n bits of the key's secondary hash
  • Resizing: each small array resizes independently; when a small array exceeds its load factor, it doubles in size
  • The Segment[0] prototype: when the other small arrays are first created, Segment[0] serves as the prototype; their array length and load factor are taken from it

ConcurrentHashMap 1.8

  • Data structure: a Node array plus linked lists or red-black trees; each head node of the array acts as a lock, so threads accessing different head nodes do not conflict. If contention occurs while a head node is being created for the first time, CAS is used instead of synchronized to further improve performance
  • Concurrency level: equals the size of the Node array; unlike in 1.7, the Node array can grow
  • Resize trigger: when the Node array is 3/4 full, a resize starts
  • Resize unit: migration proceeds one linked list at a time, from the back of the array to the front; once a list has been migrated, its old head node is replaced with a ForwardingNode
  • Concurrent get during a resize
    • Whether to search the new array or the old one is decided by whether a ForwardingNode is encountered; get never blocks
    • If a list's length exceeds 1, nodes may need to be copied (new nodes created), since their next pointers can change during migration
    • Nodes at the tail of a list whose indices are unchanged after the resize do not need to be copied
  • Concurrent put during a resize
    • If the put thread targets the same linked list the resize thread is currently migrating, the put thread blocks
    • If the target list has not been migrated yet, i.e. its head node is not a ForwardingNode, the put proceeds concurrently
    • If the target list has already been migrated, i.e. its head node is a ForwardingNode, the put thread assists with the resize
  • Initialization is lazy, unlike in 1.7
  • capacity is the estimated number of elements; capacity / loadFactor determines the initial array size, which is rounded to a nearby power of two (2^n)
  • loadFactor is only used to compute the initial array size; after that, the resize threshold is fixed at 3/4
  • When the treeify threshold is exceeded: if the capacity is already 64, the list is turned into a tree directly; otherwise the array is resized (up to 3 rounds) on top of the original capacity first

8. ThreadLocal

Purpose

  • ThreadLocal provides per-thread isolation of a resource object: each thread uses its own copy, avoiding the thread-safety problems caused by contention
  • ThreadLocal also enables sharing of that resource within a single thread

Principle

Each thread has a member variable of type ThreadLocalMap that stores its resource objects

  • Calling set() puts the ThreadLocal itself as the key and the resource object as the value into the current thread's ThreadLocalMap
  • Calling get() looks up the value associated with the ThreadLocal key in the current thread's map
  • Calling remove() removes the entry keyed by the ThreadLocal from the current thread's map
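The set/get/remove flow can be sketched with a per-thread buffer: each thread sees only its own copy, and remove() releases the entry explicitly, as recommended later in this section. (Class, field, and method names are my own, for illustration.)

```java
public class ThreadLocalDemo {
    // Each thread gets its own StringBuilder; withInitial supplies the default
    private static final ThreadLocal<StringBuilder> BUFFER =
            ThreadLocal.withInitial(StringBuilder::new);

    // Appends to the buffer inside a fresh thread and returns what it saw
    public static String buildIn(String name) throws InterruptedException {
        String[] result = new String[1];
        Thread t = new Thread(() -> {
            BUFFER.get().append(name);   // touches only this thread's copy
            result[0] = BUFFER.get().toString();
            BUFFER.remove();             // release the entry explicitly
        }, name);
        t.start();
        t.join();
        return result[0];
    }

    public static void main(String[] args) throws InterruptedException {
        BUFFER.get().append("main");
        String other = buildIn("worker");
        // The worker's appends never leak into the main thread's buffer
        System.out.println(BUFFER.get() + " / " + other); // main / worker
        BUFFER.remove();
    }
}
```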

Some features of ThreadLocalMap

  • The hash values of the keys are uniformly distributed
  • The initial capacity is 16, the load factor is 2/3, and the capacity doubles on resize
  • Index collisions between keys are resolved with open addressing

Weak-reference keys

The keys in ThreadLocalMap are designed as weak references for the following reason

  • A thread may need to run for a long time (e.g. a thread-pool thread); if a ThreadLocal key is no longer used, the memory it occupies should be reclaimable when memory runs low (via GC)

When memory is released

  • Passive release of keys by GC
    • Only the key's memory is released; the memory of the associated value is not
  • Lazy, passive release of values
    • On get(), if a null key is encountered, its value's memory is released
    • On set(), a heuristic scan clears the values of nearby null keys; the number of probes depends on the element count and on whether a null key was found
  • Active remove() releases both key and value
    • The key's and value's memory are released together, and the values of adjacent null keys are cleared as well
    • This is the recommended approach, because a ThreadLocal is usually held in a static variable (a strong reference), so you cannot passively rely on GC to reclaim it

If anything here falls short, corrections and suggestions are welcome.
To be continued and kept up to date!
Let's make progress together!

Origin blog.csdn.net/qq_40440961/article/details/129047640