JUC Concurrent Programming Interview Questions (Personal Use)

Thread Pool

1 The role of the thread pool: improve thread utilization through thread reuse; frequently creating and destroying threads wastes resources.

The seven parameters of the thread pool:

  1. corePoolSize (number of core threads): the number of threads always kept in the pool; they are not reclaimed even when idle.

  2. maximumPoolSize (maximum number of threads): The maximum number of threads allowed in the thread pool. When the work queue is full and the core threads are already working, the thread pool will create new threads, but not more than this value.

  3. keepAliveTime (thread idle time): When the number of threads in the thread pool exceeds the number of core threads, the excess idle threads will be terminated after the idle time reaches a certain value.

  4. unit (unit of thread idle time): used to specify the time unit of keepAliveTime, such as seconds, milliseconds, etc.

  5. workQueue (work queue): a queue that holds tasks waiting to be executed; it can be bounded or unbounded, e.g. LinkedBlockingQueue, ArrayBlockingQueue, etc.

  6. threadFactory (thread factory): Factory used to create threads, usually used to customize the name, priority, etc. of the thread.

  7. handler (rejection strategy): When the work queue is full and the number of threads in the thread pool reaches the maximum number of threads, there are four common strategies for processing new tasks. (More details below).
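The seven parameters map one-to-one onto the ThreadPoolExecutor constructor. A minimal sketch (the sizes, queue choice, and class name are illustrative):

```java
import java.util.concurrent.*;

public class PoolDemo {
    public static ThreadPoolExecutor newPool() {
        return new ThreadPoolExecutor(
                2,                                   // corePoolSize
                4,                                   // maximumPoolSize
                60L,                                 // keepAliveTime
                TimeUnit.SECONDS,                    // unit
                new ArrayBlockingQueue<>(100),       // workQueue (bounded)
                Executors.defaultThreadFactory(),    // threadFactory
                new ThreadPoolExecutor.AbortPolicy() // handler (rejection policy)
        );
    }

    // Small helper so the configuration is easy to check.
    public static int coreSize() {
        ThreadPoolExecutor p = newPool();
        int n = p.getCorePoolSize();
        p.shutdown();
        return n;
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = newPool();
        pool.execute(() -> System.out.println("task running"));
        pool.shutdown();
    }
}
```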

Four common rejection strategies:

  1. ThreadPoolExecutor.AbortPolicy (default policy): when the work queue is full and the thread count has reached the maximum, throws a RejectedExecutionException and rejects the new task.

  2. ThreadPoolExecutor.CallerRunsPolicy: when the pool is saturated, this policy hands the task back to the caller (i.e., the submitting thread), so the submitter runs the task itself and nothing is discarded.

  3. ThreadPoolExecutor.DiscardOldestPolicy: when the work queue is full, this policy discards the oldest task in the queue (the one at the head) and then retries submitting the current task.

  4. ThreadPoolExecutor.DiscardPolicy: when the work queue is full, this policy silently discards new tasks without throwing an exception or executing them.
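A sketch of the default AbortPolicy in action, using an intentionally tiny pool (all sizes illustrative): one busy worker plus a one-slot queue means a third task must be rejected.

```java
import java.util.concurrent.*;

public class RejectionDemo {
    // Saturate a pool of 1 thread with a 1-slot queue, then submit a third task.
    public static boolean abortPolicyRejects() {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(1),
                new ThreadPoolExecutor.AbortPolicy());
        CountDownLatch hold = new CountDownLatch(1);
        pool.execute(() -> {                 // occupies the single worker thread
            try { hold.await(); } catch (InterruptedException ignored) {}
        });
        pool.execute(() -> {});              // fills the one-slot queue
        boolean rejected = false;
        try {
            pool.execute(() -> {});          // queue full and max threads reached
        } catch (RejectedExecutionException e) {
            rejected = true;                 // AbortPolicy threw; task was refused
        }
        hold.countDown();
        pool.shutdown();
        return rejected;
    }

    public static void main(String[] args) {
        System.out.println("rejected = " + abortPolicyRejects());
    }
}
```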

Process: when a task is submitted to the pool, first check whether the core thread count has been reached; if it has, add the task to the work queue. If the queue is full, create non-core threads up to the maximum. Once the maximum number of threads is reached, the rejection policy kicks in.

Closing the thread pool: the difference between shutdown and shutdownNow. The former stops accepting new task submissions and finishes the tasks already in the queue before terminating. The latter additionally attempts to interrupt the running tasks and returns the tasks still waiting in the queue without running them.
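A sketch of the shutdownNow side of that difference (class and method names are illustrative): the tasks still waiting in the queue are handed back to the caller instead of being executed.

```java
import java.util.List;
import java.util.concurrent.*;

public class ShutdownDemo {
    // shutdownNow() interrupts workers and returns the tasks still in the queue.
    public static int drainedByShutdownNow() {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        CountDownLatch hold = new CountDownLatch(1);
        pool.execute(() -> {                    // keeps the single worker busy
            try { hold.await(); } catch (InterruptedException ignored) {}
        });
        pool.execute(() -> {});                 // waits in the queue
        pool.execute(() -> {});                 // waits in the queue
        List<Runnable> pending = pool.shutdownNow(); // never-run tasks come back
        hold.countDown();
        return pending.size();
    }

    public static void main(String[] args) {
        System.out.println("tasks returned by shutdownNow: " + drainedByShutdownNow());
    }
}
```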

How to set the maximum number of threads: for CPU-intensive workloads, set it to the number of CPU cores; for IO-intensive workloads, about twice the number of CPU cores.
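This rule of thumb can be written directly against Runtime.availableProcessors(); the 2x multiplier for IO-bound work is the heuristic from the text above, not a universal constant:

```java
public class PoolSizing {
    // CPU-bound: one thread per core keeps every core busy without oversubscription.
    public static int cpuBoundThreads() {
        return Runtime.getRuntime().availableProcessors();
    }

    // IO-bound: threads spend time blocked on IO, so roughly 2x cores is a common heuristic.
    public static int ioBoundThreads() {
        return 2 * Runtime.getRuntime().availableProcessors();
    }

    public static void main(String[] args) {
        System.out.println("cpu-bound: " + cpuBoundThreads()
                + ", io-bound: " + ioBoundThreads());
    }
}
```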

Why do you need to customize thread pool parameters?

Looking at the source, the newFixedThreadPool and newSingleThreadExecutor factory methods both use a LinkedBlockingQueue as the task queue, and the default capacity of LinkedBlockingQueue is Integer.MAX_VALUE. The pool size defined in newCachedThreadPool is Integer.MAX_VALUE.

Therefore, the reason why Alibaba prohibits the use of Executors to create thread pools is that the request queue length of FixedThreadPool and SingleThreadPool is Integer.MAX_VALUE, which may accumulate a large number of requests, resulting in OOM.

The number of threads allowed by CachedThreadPool is Integer.MAX_VALUE, which may create a large number of threads, causing OOM.
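A hedged sketch of the alternative this guideline implies: construct the pool yourself with bounded, explicit values (all numbers and names here are illustrative):

```java
import java.util.concurrent.*;

public class BoundedPool {
    // Explicit sizes, a bounded queue, and an explicit rejection policy,
    // instead of the unbounded defaults behind the Executors factory methods.
    public static ThreadPoolExecutor create(int queueCapacity) {
        return new ThreadPoolExecutor(
                2, 4, 60L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<>(queueCapacity),    // bounded: no unbounded pile-up
                new ThreadPoolExecutor.CallerRunsPolicy());  // back-pressure instead of OOM
    }

    // Helper exposing the configured queue bound for checking.
    public static int capacity(int queueCapacity) {
        ThreadPoolExecutor p = create(queueCapacity);
        int free = p.getQueue().remainingCapacity();
        p.shutdown();
        return free;
    }

    public static void main(String[] args) {
        System.out.println("bounded queue capacity: " + capacity(128));
    }
}
```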

synchronized 

principle:

synchronized is the keyword used to achieve synchronization in Java. Its principle involves the concepts of Java object headers and object monitors. Here's how synchronized works:

  1. Java object header: every Java object has an object header in memory containing the object's metadata, such as its hash code and GC generational information. This metadata is stored in the object header and occupies a certain amount of memory space.

  2. Object Monitor (Monitor): A part of the Java object header is used to achieve synchronization and is called an object monitor or lock. Each object has an associated object monitor, which is used to control concurrent access to the object. Object monitors can be locked and unlocked.

The underlying semantics of synchronized are implemented through a monitor (monitor lock) object. Every object has a monitor; a synchronized block is locked while its monitor is held, and an entering thread must try to obtain ownership of the monitor. The process is:

  1. If the monitor's entry count is 0, the thread enters the monitor and sets the count to 1; that thread is now the monitor's owner.

  2. If the thread already owns the monitor and re-enters, the entry count is incremented by 1.

  3. If another thread owns the monitor, the entering thread blocks until the entry count drops to 0, then tries again to obtain ownership of the monitor.
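The mutual exclusion the monitor provides can be seen with a plain synchronized counter; without the monitor, the concurrent increments below would race:

```java
public class SyncCounter {
    private int count = 0;

    public synchronized void increment() { count++; } // acquires this object's monitor
    public synchronized int get() { return count; }

    // `threads` threads each perform `perThread` increments on a shared counter.
    public static int run(int threads, int perThread) {
        SyncCounter c = new SyncCounter();
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) c.increment();
            });
            ts[i].start();
        }
        for (Thread t : ts) {
            try { t.join(); } catch (InterruptedException ignored) {}
        }
        return c.get();
    }

    public static void main(String[] args) {
        // The monitor makes this reliably threads * perThread.
        System.out.println(run(4, 10_000));
    }
}
```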


Characteristics

Reentrancy: a thread that holds the lock can enter synchronized methods of the same lock object multiple times, for example by calling another synchronized method of the lock object from inside a synchronized method. This is implemented with the monitor's entry counter: the first entry sets the count to 1, each re-entry by the owner increments it, each exit decrements it, and the monitor is released only when the count drops back to 0; other threads block until then.
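A minimal reentrancy sketch (method names are illustrative): the outer synchronized method calls another synchronized method on the same object without deadlocking, because the owner just bumps the entry count.

```java
public class ReentrantDemo {
    public synchronized int outer() {
        // This thread already holds the monitor of `this`;
        // calling inner() re-enters it instead of blocking.
        return inner() + 1;
    }

    public synchronized int inner() {
        return 1;
    }

    public static void main(String[] args) {
        System.out.println(new ReentrantDemo().outer()); // no deadlock
    }
}
```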



Non-interruptible: after a thread enters a synchronized method, other threads can only block and wait; a thread blocked waiting for the monitor cannot be interrupted out of that wait, and waiters cannot interrupt the thread that holds the lock.

Upgrade of four locks in synchronized

In Java, the synchronized keyword supports several lock upgrade (escalation) strategies. These let locks protect shared resources effectively in a multi-threaded environment while minimizing inter-thread contention and lock overhead.

      0 Lock-free state: no thread has entered the synchronized block.

  1. Biased Locking: Biased locking is introduced to optimize single-threaded scenarios. In a single-threaded environment, there is no need for complex competition, so the first thread to obtain the lock will obtain the biased lock. When it enters the synchronized block again, it will obtain the lock directly without competition. The biased lock will be upgraded to a lightweight lock only if other threads compete for the lock.

  2. Lightweight Locking: a lock upgrade strategy optimized for mildly contended multi-threaded scenarios. When other threads try to compete for the lock, the JVM upgrades the biased lock to a lightweight lock. A lightweight lock uses CAS (compare-and-swap) operations to try to acquire the lock instead of blocking the thread. If the attempt succeeds, the thread enters the critical section quickly; if CAS keeps failing past a certain spin threshold, the lightweight lock is inflated to a heavyweight lock.

  3. Heavyweight Locking: the most traditional strategy. When multiple threads compete for a heavyweight lock, the losers block until the owner releases it. This involves thread blocking and wake-up at the operating-system level, so the performance overhead is relatively high.

The difference between synchronized and Lock lock

1 synchronized is a keyword; Lock is an interface

2 synchronized is a non-fair lock; Lock can be configured as a fair lock

3 synchronized releases the lock automatically; with Lock you must lock and unlock manually

4 With synchronized, inter-thread communication cannot wake a specific thread precisely; Lock's Condition objects allow precise signalling

5 Lock is more flexible about what code is locked (tryLock, timed and interruptible acquisition)
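Points 2-5 show up together in a small ReentrantLock sketch (class and field names are illustrative): manual lock/unlock in a finally block, a fairness flag, and a Condition for precise signalling.

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class LockDemo {
    private final ReentrantLock lock = new ReentrantLock(true); // fair lock (point 2)
    private final Condition changed = lock.newCondition();      // precise wake-up (point 4)
    private int value = 0;

    public void set(int v) {
        lock.lock();               // must acquire manually (point 3)
        try {
            value = v;
            changed.signal();      // wakes only threads waiting on this condition
        } finally {
            lock.unlock();         // always release in finally
        }
    }

    public int get() {
        lock.lock();
        try { return value; } finally { lock.unlock(); }
    }

    public static void main(String[] args) {
        LockDemo d = new LockDemo();
        d.set(42);
        System.out.println(d.get());
    }
}
```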

What is spin
Much of the code inside synchronized blocks is very simple and executes quickly, so blocking all the waiting threads may not be worthwhile: thread blocking involves switching between user mode and kernel mode. Since the code inside synchronized finishes fast, it can be better not to block a thread waiting for the lock, but to let it busy-loop at the boundary of the synchronized block. This is spinning. If the loop runs many times without obtaining the lock, blocking after all may be the better strategy.
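A minimal spin lock sketch built on an AtomicBoolean. This illustrates the busy-loop idea, not what the JVM itself does internally; Thread.onSpinWait requires Java 9+.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // Busy-loop (spin) instead of blocking; cheap when critical sections are short.
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait(); // hint to the CPU that we are spinning (Java 9+)
        }
    }

    public void unlock() {
        locked.set(false);
    }

    // Sanity check: concurrent increments under the spin lock do not race.
    public static int run(int threads, int perThread) {
        SpinLock lock = new SpinLock();
        int[] count = {0};
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) {
                    lock.lock();
                    try { count[0]++; } finally { lock.unlock(); }
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) {
            try { t.join(); } catch (InterruptedException ignored) {}
        }
        return count[0];
    }

    public static void main(String[] args) {
        System.out.println(run(4, 10_000));
    }
}
```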

 

CAS

CAS is exposed through methods of the Unsafe class and is an implementation mechanism based on optimistic locking. The principle involves three values: the memory value, the expected value, and the new value. If the memory value equals the expected value, the new value is written to memory; otherwise the operation fails. CAS is what lightweight locks use.

The non-atomic i++ is really three steps: 1 read the current memory value of i; 2 compute A = i + 1; 3 write A back to i. CAS is what makes step 3 safe: the write-back only succeeds if i still equals the value read in step 1.
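Step 3 as a CAS retry loop over AtomicInteger (class and method names are illustrative); the loop retries whenever another thread changed i between the read and the write-back:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    // i++ done the CAS way: retry until memory still holds what we read in step 1.
    public static int increment(AtomicInteger i) {
        int expected, next;
        do {
            expected = i.get();       // step 1: read memory value
            next = expected + 1;      // step 2: compute new value
        } while (!i.compareAndSet(expected, next)); // step 3: CAS write-back
        return next;
    }

    // Convenience wrapper for checking the result.
    public static int incrementFrom(int start) {
        return increment(new AtomicInteger(start));
    }

    public static void main(String[] args) {
        System.out.println(incrementFrom(5));
    }
}
```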

Problems caused by cas:

1. The ABA problem:
Thread one reads value A from memory location V. Meanwhile thread two also reads A, performs some operation that changes it to B, and then changes the data at V back to A. When thread one performs its CAS it finds A still in memory, so the operation succeeds, even though the value changed twice underneath it. Starting from Java 1.5, the JDK atomic package provides the class AtomicStampedReference to solve the ABA problem. A concrete example: withdrawing money, you press "withdraw 50" twice against a balance of 100. The first press succeeds and the balance becomes 50. Someone then transfers 50 to you, so the card holds 100 again; now the second, stale withdrawal also succeeds and the balance becomes 50, causing a data inconsistency.
2. Long spin time and high overhead:
Under serious resource competition (serious thread conflict), the probability of CAS spinning is relatively high, which wastes CPU resources and can be less efficient than synchronized.
3. Only atomic operations on a single shared variable are guaranteed:
For one shared variable, a CAS loop can make the operation atomic; but for operations across multiple shared variables, a CAS loop cannot guarantee atomicity, and a lock should be used instead.
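The AtomicStampedReference fix for the ABA problem above, replayed single-threaded for determinism (names are illustrative): the value returns to 100, but the version stamp exposes the intermediate change, so the stale CAS fails.

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static boolean staleCasFails() {
        AtomicStampedReference<Integer> balance =
                new AtomicStampedReference<>(100, 0); // value plus a version stamp
        int[] stampHolder = new int[1];
        Integer seen = balance.get(stampHolder);      // "thread one" reads (100, stamp 0)
        int seenStamp = stampHolder[0];

        // "Thread two": 100 -> 50 -> 100, bumping the stamp on every change.
        balance.compareAndSet(100, 50, 0, 1);
        balance.compareAndSet(50, 100, 1, 2);

        // "Thread one" sees value 100 again, but its stale stamp exposes the ABA change.
        return !balance.compareAndSet(seen, 50, seenStamp, seenStamp + 1);
    }

    public static void main(String[] args) {
        System.out.println("stale CAS rejected: " + staleCasFails());
    }
}
```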

Volatile:

Guarantees visibility and ordering (it forbids reordering of instructions around the volatile access), but not atomicity: incrementing a volatile int from multiple threads is still unsafe.
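A small visibility sketch (names and timings are illustrative): the reader thread spins on a volatile flag and is guaranteed to see the writer's store; without volatile, the reader could spin forever on a cached value.

```java
public class VolatileDemo {
    private static volatile boolean ready = false; // volatile: write is visible to all threads

    public static boolean waitForFlag() {
        Thread reader = new Thread(() -> {
            while (!ready) { /* spin until the writer's store becomes visible */ }
        });
        reader.start();
        try {
            Thread.sleep(50);
            ready = true;        // volatile write: published to the reader thread
            reader.join(2_000);  // generous timeout; the reader should exit almost at once
        } catch (InterruptedException ignored) {}
        return !reader.isAlive(); // true iff the reader observed the flag and finished
    }

    public static void main(String[] args) {
        System.out.println("reader saw the flag: " + waitForFlag());
    }
}
```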

ThreadLocal

See: "Use of ThreadLocal - How to use ThreadLocal" (CSDN blog)


 


Origin blog.csdn.net/qq_52135683/article/details/133885980