[Interview] Java Concurrency (2)

0. Question outline

Two, JUC tools and thread pools

2.1 JUC package

1. What concurrency utility classes does the Java concurrency package provide? [Lecture 19]

2. How can we ensure a collection is thread-safe? How does ConcurrentHashMap achieve efficient thread safety? [Lecture 10]
 - Follow-up 1: HashMap/Hashtable/ConcurrentHashMap structure and underlying implementation (*4); how thread safety is guaranteed and how it is implemented (*3)
 - Follow-up 2: the differences between ConcurrentHashMap in 1.7 and 1.8 (*2)

3. What is the difference between ConcurrentLinkedQueue and LinkedBlockingQueue in the concurrent package? [Lecture 20]

2.2 Thread pool

1. Why use a thread pool, and what are the benefits? (*2)

2. What kinds of thread pools does the Java concurrency class library provide, and what are the characteristics of each? [Lecture 21]
 - Follow-up 1: the several queue types (*3) and their respective roles (*2)
 - Follow-up 2: what queuing strategies are there?
 - Follow-up 3: the (core) parameters of the thread pool (*4)
 - Follow-up 4: what happens if the number of tasks exceeds the number of core threads?

Two, JUC tools and thread pools

2.1 JUC package

1. What concurrency utility classes does the Java concurrency package provide? [Lecture 19]

What we usually call the concurrency package is java.util.concurrent and its sub-packages, which collect the basic utility classes for Java concurrency. These fall into several areas:

Various synchronization structures more advanced than synchronized, including CountDownLatch, CyclicBarrier, Semaphore, etc., which enable richer multi-threaded coordination, such as using a Semaphore as a resource controller to limit the number of threads that can work at the same time.
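As a minimal sketch of that Semaphore use case (class and field names here are illustrative, not from any particular codebase), a semaphore initialized with 3 permits lets at most 3 threads hold a slot at once:

```java
import java.util.concurrent.Semaphore;

// Illustrative sketch: a Semaphore as a resource controller that allows
// at most 3 threads into the guarded section at the same time.
public class SemaphoreDemo {
    private static final Semaphore slots = new Semaphore(3); // 3 permits

    public static void main(String[] args) throws InterruptedException {
        slots.acquire();          // take a permit; blocks when none are left
        try {
            // at most 3 threads can execute here concurrently
            System.out.println("permits left: " + slots.availablePermits());
        } finally {
            slots.release();      // always return the permit
        }
    }
}
```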

Various thread-safe containers, such as the common ConcurrentHashMap, the ordered ConcurrentSkipListMap, and CopyOnWriteArrayList, a thread-safe dynamic array based on a snapshot-like copy-on-write mechanism.

Various concurrent queue implementations, such as the BlockingQueue family: the typical ArrayBlockingQueue and SynchronousQueue, or PriorityBlockingQueue for specific scenarios, etc.

The powerful Executor framework, which can create various types of thread pools and schedule tasks; in most cases it is no longer necessary to implement thread pools and task schedulers from scratch.
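For instance, a small sketch of the Executor framework replacing hand-rolled thread management (the pool size and task here are arbitrary examples):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrative sketch: submitting work to an Executors-created pool
// instead of creating and joining threads by hand.
public class ExecutorDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Future<Integer> f = pool.submit(() -> 1 + 1); // Callable<Integer>
        System.out.println(f.get());                  // prints 2
        pool.shutdown();
    }
}
```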

2. How to ensure that the collection is thread-safe? How does ConcurrentHashMap achieve efficient thread-safety? [Lecture 10]

Java provides different levels of thread-safety support. In the traditional collection framework, besides synchronized containers such as Hashtable, there are also so-called synchronized wrappers: we can call the wrapper methods provided by the Collections utility class (such as Collections.synchronizedMap) to obtain a synchronized wrapper container. But these all use very coarse-grained synchronization and perform poorly under high concurrency.
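A small sketch of the wrapper approach (map contents illustrative); note that even with the wrapper, iteration must still be synchronized manually, as the Collections javadoc requires:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: Collections.synchronizedMap wraps every call in a
// lock on a single mutex, i.e. coarse-grained, whole-map synchronization.
public class SyncWrapperDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = Collections.synchronizedMap(new HashMap<>());
        map.put("a", 1);   // each call locks the shared mutex

        // Iteration is NOT atomic; the javadoc requires locking it manually:
        synchronized (map) {
            for (Map.Entry<String, Integer> e : map.entrySet()) {
                System.out.println(e.getKey() + "=" + e.getValue());
            }
        }
    }
}
```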

In addition, a more common choice is to use the thread-safe containers provided by the concurrent package, which offers:

  • Various concurrent containers, such as ConcurrentHashMap and CopyOnWriteArrayList.
  • Various thread-safe queues (Queue/Deque), such as ArrayBlockingQueue and SynchronousQueue.
  • Thread-safe versions of various ordered containers, etc.

Specific ways of ensuring thread safety range from simple synchronized wrappers to more refined designs, such as ConcurrentHashMap's implementation based on lock striping (segmented locks in Java 7; CAS plus per-bin synchronization in Java 8). The choice depends on the scenario; in general, for typical use cases the containers provided by the concurrent package far outperform the early, simple synchronized implementations.
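As one concrete illustration of the finer-grained approach (class and key names illustrative), ConcurrentHashMap exposes atomic compound operations such as merge, so a read-modify-write like incrementing a counter needs no external locking:

```java
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch: ConcurrentHashMap provides atomic compound operations
// (merge, putIfAbsent, compute), avoiding the check-then-act race that a
// plain synchronized map would need external locking to prevent.
public class ChmDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();
        counts.merge("hits", 1, Integer::sum);  // atomic increment
        counts.merge("hits", 1, Integer::sum);
        System.out.println(counts.get("hits")); // prints 2
    }
}
```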

Follow-up 1: HashMap/Hashtable/ConcurrentHashMap structure and underlying implementation (*4); how thread safety is guaranteed and how it is implemented (*3)

……

Follow-up 2: the differences between ConcurrentHashMap in 1.7 and 1.8 (*2)

……

3. What is the difference between ConcurrentLinkedQueue and LinkedBlockingQueue in the concurrent package? [Lecture 20]

Sometimes we loosely call all the containers under the concurrent package "concurrent containers", but strictly speaking, only the "Concurrent*" containers such as ConcurrentLinkedQueue are truly concurrent in design.

Regarding the difference between them in the question:

  • The Concurrent* types are based on lock-free techniques (such as CAS) and can generally provide higher throughput in common multi-threaded access scenarios.
  • LinkedBlockingQueue is internally based on locks and provides the blocking (waiting) methods of the BlockingQueue interface.
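A minimal sketch of that behavioral difference (queue contents illustrative): the lock-free queue never waits, while the blocking queue offers waiting variants:

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Illustrative sketch of the behavioral difference between the two queues.
public class QueueDemo {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentLinkedQueue<String> clq = new ConcurrentLinkedQueue<>();
        System.out.println(clq.poll());   // prints null: lock-free, never waits

        LinkedBlockingQueue<String> lbq = new LinkedBlockingQueue<>();
        // take() would block here until an element arrives; the timed poll
        // waits up to 10 ms and then gives up:
        System.out.println(lbq.poll(10, TimeUnit.MILLISECONDS)); // prints null
    }
}
```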

You may have noticed that the containers (Queue, List, Set) and Maps provided by java.util.concurrent can be roughly divided into three categories: Concurrent*, CopyOnWrite*, and Blocking*. All of them are thread-safe, but roughly speaking:

  • Concurrent* types avoid the relatively heavy modification overhead of containers like CopyOnWrite*.
  • Everything has a price, however: Concurrent* containers usually offer weaker traversal consistency. This so-called weak consistency means, for example, that an iterator can keep traversing even if the container is modified during iteration.
  • In contrast stands the fail-fast behavior of the synchronized containers introduced earlier: if the container is modified during traversal, a ConcurrentModificationException is thrown and traversal stops.
  • Another manifestation of weak consistency is that the accuracy of operations such as size() is limited; the result may not be 100% accurate.
  • Likewise, read performance has a certain degree of nondeterminism.
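The weak-consistency versus fail-fast contrast can be sketched like this (keys and values illustrative; the `k < 100` guard just keeps the modification finite):

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch: a ConcurrentHashMap iterator is weakly consistent and
// tolerates concurrent modification; a HashMap iterator fails fast instead.
public class IterDemo {
    public static void main(String[] args) {
        Map<Integer, Integer> chm = new ConcurrentHashMap<>();
        chm.put(1, 1);
        chm.put(2, 2);
        for (Integer k : chm.keySet()) {
            if (k < 100) chm.put(100 + k, k);   // no exception is thrown
        }

        Map<Integer, Integer> hm = new HashMap<>();
        hm.put(1, 1);
        hm.put(2, 2);
        try {
            for (Integer k : hm.keySet()) {
                hm.put(100 + k, k);             // structural change mid-iteration
            }
        } catch (ConcurrentModificationException e) {
            System.out.println("HashMap iterator failed fast"); // this path runs
        }
    }
}
```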

2.2 Thread pool

1. Why use a thread pool, and what are the benefits? (*2)

A thread cannot be started twice, and creating or destroying a thread carries real overhead. A thread pool keeps a number of idle threads alive: when a task arrives, an idle thread is chosen to handle it, and after finishing it does not exit but waits for the next task. When most threads are blocked or idle for a long time, the pool can automatically destroy some of them and reclaim system resources. In short, thread pooling improves the efficiency of system resource utilization and simplifies thread management.

2. What kinds of thread pools are provided by the Java Concurrent Class Library? What are the characteristics of each? [Lecture 21]

Developers use the general thread pool creation methods provided by Executors to create pools with different configurations; the main differences lie in the ExecutorService type returned or in the initial parameters.

Executors currently provides 5 different thread pool creation configurations:

  • newCachedThreadPool(): a cacheable thread pool. If no cached thread is available, a new one is created; idle threads are reclaimed. The pool size is not bounded by the pool itself, only by how many threads the operating system can create. (It uses a SynchronousQueue internally.)
  • newFixedThreadPool(int nThreads): a fixed number of worker threads backed by an unbounded queue; excess tasks wait in the queue, and if a worker thread exits unexpectedly, a new one is created to keep the count.
  • newSingleThreadExecutor(): an unbounded queue with the worker count limited to 1, so at most one task is active at a time, guaranteeing sequential execution of all tasks.
  • newSingleThreadScheduledExecutor() and newScheduledThreadPool(int corePoolSize): create a pool of fixed size (1 or more) whose tasks can be run after a delay or periodically.
  • newWorkStealingPool(int parallelism): internally builds a ForkJoinPool that processes tasks in parallel using the work-stealing algorithm; processing order is not guaranteed. (Often overlooked; added in Java 8.)
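Most of these factories are thin wrappers over ThreadPoolExecutor, so their configuration can be inspected directly; a quick sketch based on the JDK's documented behavior:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;

// Sketch: Executors factories return pre-configured ThreadPoolExecutor
// instances whose parameters can be inspected with the usual getters.
public class FactoryDemo {
    public static void main(String[] args) {
        // newFixedThreadPool(3): core == max == 3, unbounded LinkedBlockingQueue
        ThreadPoolExecutor fixed =
                (ThreadPoolExecutor) Executors.newFixedThreadPool(3);
        System.out.println(fixed.getCorePoolSize());    // prints 3
        System.out.println(fixed.getMaximumPoolSize()); // prints 3
        fixed.shutdown();

        // newCachedThreadPool(): core 0, max Integer.MAX_VALUE, SynchronousQueue
        ThreadPoolExecutor cached =
                (ThreadPoolExecutor) Executors.newCachedThreadPool();
        System.out.println(cached.getMaximumPoolSize()); // prints 2147483647
        cached.shutdown();
    }
}
```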

Follow-up 1: the several queue types (*3) and their respective roles (*2)
Queue interface
    |———— BlockingQueue interface (blocking queue)
        |———— ArrayBlockingQueue class
        |———— DelayQueue class
        |———— LinkedBlockingQueue class
        |———— PriorityBlockingQueue class
        |———— SynchronousQueue class

ArrayBlockingQueue: a BlockingQueue of fixed, specified size, implemented internally as an array. The size must be given at construction. Elements are ordered FIFO.

LinkedBlockingQueue: a BlockingQueue with an optional size. If a size is specified at construction, the queue is bounded by it; otherwise the bound is Integer.MAX_VALUE. Elements are ordered FIFO.

DelayQueue: an unbounded BlockingQueue of Delayed elements; an element can only be taken once its delay has expired.

PriorityBlockingQueue: similar to LinkedBlockingQueue, but elements are ordered not FIFO but according to their natural ordering or a Comparator supplied to the constructor.

SynchronousQueue: a queue with no internal capacity; each insert operation must wait for a corresponding remove by another thread, so the operations complete in matched pairs.
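A short sketch of ArrayBlockingQueue's bounded, FIFO behavior (the capacity and element values are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;

// Sketch: ArrayBlockingQueue is bounded (size fixed at construction) and FIFO;
// offer() fails on a full queue where put() would block.
public class BqDemo {
    public static void main(String[] args) throws InterruptedException {
        ArrayBlockingQueue<String> q = new ArrayBlockingQueue<>(2);
        q.put("first");
        q.put("second");
        System.out.println(q.offer("third")); // prints false: queue is full
        System.out.println(q.take());         // prints first: FIFO order
    }
}
```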

Follow-up 2: What are the queuing strategies?

Direct handoff. A good default choice for a work queue is SynchronousQueue, which hands tasks off to threads without otherwise holding them. Here, if no thread is immediately available to run a task, the attempt to queue it fails, so a new thread is constructed. This policy avoids lockups when handling sets of requests that might have internal dependencies. Direct handoffs generally require an unbounded maximumPoolSize to avoid rejecting newly submitted tasks; this in turn admits the possibility of unbounded thread growth when commands continue to arrive faster, on average, than they can be processed.

Unbounded queue. Using an unbounded queue (for example a LinkedBlockingQueue without a predefined capacity) causes new tasks to wait in the queue whenever all corePoolSize threads are busy, so no more than corePoolSize threads are ever created (and the value of maximumPoolSize has no effect). This can be appropriate when each task is completely independent of the others, so tasks cannot affect each other's execution, as in a web page server. This style of queuing smooths out transient bursts of requests, but it admits the possibility of unbounded work-queue growth when commands continue to arrive faster, on average, than they can be processed.

Bounded queue. A bounded queue (for example an ArrayBlockingQueue) helps prevent resource exhaustion when used with a finite maximumPoolSize, but can be more difficult to tune and control. Queue size and maximum pool size may be traded off against each other: using large queues and small pools minimizes CPU usage, operating-system resources, and context-switching overhead, but can lead to artificially low throughput. If tasks frequently block (for example, if they are I/O bound), the system may be able to schedule time for more threads than you would otherwise allow. Using small queues generally requires larger pool sizes, which keeps the CPUs busier but may incur unacceptable scheduling overhead, which also decreases throughput.
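The bounded-queue case can be sketched as follows (the sizes and sleep duration are illustrative): with one worker, maximumPoolSize 1, and a queue of capacity 1, a third concurrent task is rejected by the default AbortPolicy:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Sketch: a saturated pool with a bounded queue rejects further tasks
// (the default rejection policy, AbortPolicy, throws).
public class RejectDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.SECONDS, new ArrayBlockingQueue<>(1));
        Runnable sleepy = () -> {
            try { Thread.sleep(200); } catch (InterruptedException ignored) {}
        };
        pool.execute(sleepy);      // runs on the single worker thread
        pool.execute(sleepy);      // waits in the queue (capacity 1)
        try {
            pool.execute(sleepy);  // no free worker, queue full -> rejected
        } catch (RejectedExecutionException e) {
            System.out.println("third task rejected");
        }
        pool.shutdown();
    }
}
```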

Follow-up 3: The (core) parameter of the thread pool (*4)

corePoolSize: the number of core threads
maximumPoolSize: the maximum number of threads
keepAliveTime: how long a thread beyond corePoolSize may stay idle, without receiving a new task, before being reclaimed
unit: the time unit of keepAliveTime
workQueue: the queue holding tasks waiting for a worker
threadFactory: the factory used to create new threads
handler: the task rejection policy, invoked when the queue is full and a new task arrives; several default implementations exist, and it is usually recommended to implement one suited to the specific business

// The five-argument ThreadPoolExecutor constructor delegates to the full
// seven-argument one, filling in the default thread factory and handler.
public ThreadPoolExecutor(int corePoolSize,
                          int maximumPoolSize,
                          long keepAliveTime,
                          TimeUnit unit,
                          BlockingQueue<Runnable> workQueue) {
    this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
         Executors.defaultThreadFactory(), defaultHandler);
}
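For comparison, a sketch that spells out all seven parameters explicitly (the concrete values are illustrative, not recommendations):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Sketch: constructing a ThreadPoolExecutor with every parameter spelled out.
public class PoolConfigDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                   // corePoolSize
                4,                                   // maximumPoolSize
                60L, TimeUnit.SECONDS,               // keepAliveTime + unit
                new ArrayBlockingQueue<>(100),       // workQueue (bounded)
                Executors.defaultThreadFactory(),    // threadFactory
                new ThreadPoolExecutor.CallerRunsPolicy()); // rejection handler
        System.out.println(pool.getCorePoolSize()); // prints 2
        pool.shutdown();
    }
}
```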
Follow-up 4: If the number of submitted tasks exceeds the number of core threads, what will happen?

When the number of submitted tasks exceeds the number of core threads, the extra tasks are placed in the workQueue. If the queue fills up, additional threads are created up to maximumPoolSize; once that limit is also reached, new tasks are handed to the rejection handler.

Three, references

1. Lecture 10 | How to ensure that the collection is thread-safe? How does ConcurrentHashMap achieve efficient thread-safety?
2. Lecture 20 | What is the difference between ConcurrentLinkedQueue and LinkedBlockingQueue in the concurrent package?
3. How to set thread pool parameters? Meituan gave an answer that shocked the interviewer.
4. Java thread pool implementation principle and its practice in Meituan business
5. JAVA thread pool parameter explanation
6. Thread pool
7. [Concurrent programming] blocking queue and thread pool
8. ConcurrentHashMap explanation
9. Detailed explanation of the principle of thread pool and effect

Origin blog.csdn.net/HeavenDan/article/details/112907989