50 Java multithreading and concurrency interview questions

1. Why use multithreading

The main reason for choosing multithreading is speed. For example:

If you need to move 1,000 bricks to the roof of a building and there are several elevators to the roof, is it faster to use one elevator or several at the same time? Each elevator can be understood as a thread.

Therefore, we use multithreading because, in the right scenario, setting an appropriate number of threads can speed the program up. More precisely, it makes fuller use of the CPU and I/O to increase the program's throughput.

Of course, there are trade-offs. In a multi-threaded scenario, ensuring thread safety usually means locking, and inappropriate locking can cost a lot of performance.

2. How many ways are there to create threads?

There are mainly the following ways to create threads in Java:

  • Define a subclass of the Thread class and override its run() method

  • Define an implementation class of the Runnable interface and override the interface's run() method

  • Define an implementation class of the Callable interface and override the interface's call() method; it is generally used together with Future

  • Use a thread pool

2.1 Define a subclass of the Thread class and override its run() method

public class ThreadTest {
    public static void main(String[] args) {
        Thread thread = new MyThread();
        thread.start();
    }
}

class MyThread extends Thread {
    @Override
    public void run() {
        System.out.println("Running in: " + Thread.currentThread().getName());
    }
}

2.2 Define an implementation class of the Runnable interface and override the interface's run() method

public class ThreadTest {
    public static void main(String[] args) {
        MyRunnable myRunnable = new MyRunnable();
        Thread thread = new Thread(myRunnable);
        thread.start();
    }
}

class MyRunnable implements Runnable {
    @Override
    public void run() {
        System.out.println("Running in: " + Thread.currentThread().getName());
    }
}

2.3 Define an implementation class of the Callable interface and override the interface's call() method

If the task you want to execute needs to return a result, you can use Callable.

public class ThreadTest {
    public static void main(String[] args) throws ExecutionException, InterruptedException {
        MyThreadCallable mc = new MyThreadCallable();
        FutureTask<Integer> ft = new FutureTask<>(mc);
        Thread thread = new Thread(ft);
        thread.start();
        System.out.println(ft.get());
    }
}

class MyThreadCallable implements Callable<Integer> {
    @Override
    public Integer call() throws Exception {
        return 1024;
    }
}

2.4 Use a thread pool

In daily development, we generally use the thread pool to execute asynchronous tasks.

public class ThreadTest {
    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor executorOne = new ThreadPoolExecutor(5, 5, 1,
                TimeUnit.MINUTES, new ArrayBlockingQueue<Runnable>(20),
                new CustomizableThreadFactory("Tianluo-Thread-pool"));
        executorOne.execute(() -> {
            System.out.println("Running in: " + Thread.currentThread().getName());
        });
        // Shut down the thread pool
        executorOne.shutdown();
    }
}

3. The difference between the start() method and the run() method

In fact, the main differences between start() and run() are as follows:

  • The start() method starts a new thread, while run() is just an ordinary method of the class. If you call run() directly, there is still only the main thread in the program.

  • start() achieves multithreading; calling run() alone does not.

  • start() cannot be called repeatedly on the same thread, whereas run() can.

  • After start() is called, the code following it can continue to execute without waiting for the run() method body to finish; in other words, a thread switch occurs. If you call run() directly, you must wait for its body to finish before the following code continues.

You can take a look at the code example~

public class ThreadTest {
    public static void main(String[] args) {
        Thread t = new Thread() {
            public void run() {
                pong();
            }
        };
        t.start();
        t.run();
        t.run();
        System.out.println("ping, current thread: " + Thread.currentThread().getName());
    }
    static void pong() {
        System.out.println("pong, current thread: " + Thread.currentThread().getName());
    }
}

4. The difference between thread and process

  • A process is a running application, and a thread is an execution sequence inside a process

  • A process is the smallest unit of resource allocation, and a thread is the smallest unit of CPU scheduling.

  • A process can have multiple threads. Threads are also called lightweight processes, and multiple threads share the resources of the process

  • The cost of switching between processes is high, and the cost of switching between threads is small

  • The process has more resources, and the thread has fewer resources.

  • The process has an address space, but the thread itself has no address space, and the address space of the thread is contained in the process

for example:

You open QQ, and that starts one process; you open Thunder, and that starts another process.

Within the QQ process, one thread handles text transmission, another handles voice, and another pops up dialog boxes.

So running a piece of software opens a process, and while that software runs (inside that process), multiple jobs cooperate to keep QQ working; each of those "jobs" is a thread.

So a process manages multiple threads.

In layman's terms: "the process is the parent, in charge of many thread children"...

5. What is the difference between Runnable and Callable?

  • The run() method of the Runnable interface has no return value; its return type is void, and all it does is execute the code inside run();

  • The call() method of the Callable interface has a return value and is generic. It is generally used together with Future and FutureTask to obtain the result of asynchronous execution.

  • The call() method of the Callable interface is allowed to throw exceptions; the run() method of the Runnable interface cannot throw checked exceptions;

You can compare their APIs:

@FunctionalInterface
public interface Callable<V> {
    /**
     * Generic type V; has a return value and may throw an exception.
     */
    V call() throws Exception;
}

@FunctionalInterface
public interface Runnable {
    /**
     * No return value; cannot throw checked exceptions.
     */
    public abstract void run();
}

To make it easier to understand, I wrote a demo; take a look:

public class CallableRunnableTest {
    public static void main(String[] args) {
        ExecutorService executorService = Executors.newFixedThreadPool(5);
        Callable<String> callable = new Callable<String>() {
            @Override
            public String call() throws Exception {
                return "Hello from Callable";
            }
        };
        // submit(Callable) is generic and returns the task's result
        Future<String> futureCallable = executorService.submit(callable);
        try {
            System.out.println("Callable result: " + futureCallable.get());
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (ExecutionException e) {
            e.printStackTrace();
        }
        Runnable runnable = new Runnable() {
            @Override
            public void run() {
                System.out.println("Hello from Runnable");
            }
        };
        Future<?> futureRunnable = executorService.submit(runnable);
        try {
            // a Runnable produces no result, so get() returns null
            System.out.println("Runnable result: " + futureRunnable.get());
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (ExecutionException e) {
            e.printStackTrace();
        }
        executorService.shutdown();
    }
}

6. Talk about the function and principle of volatile

The volatile keyword is the lightest synchronization mechanism provided by the Java virtual machine. Used as a modifier on variables, it guarantees the variable's visibility to all threads and prohibits instruction reordering, but it does not guarantee atomicity.

Let's first recall the Java Memory Model (JMM):

  • The Java virtual machine specification attempts to define a Java memory model to shield the memory access differences of various hardware and operating systems, so that Java programs can achieve consistent memory access effects on various platforms.

  • The Java memory model stipulates that all variables are stored in main memory, and each thread has its own working memory. The variables here include instance variables and static variables, but do not include local variables, because local variables are thread-private.

  • The thread's working memory stores the main memory copy of the variables used by the thread, and all operations on the variables by the thread must be performed in the working memory instead of directly operating the main memory. And each thread cannot access the working memory of other threads.

The volatile variable ensures that the new value can be immediately synchronized back to the main memory, and refreshed from the main memory immediately before each use, so we say that volatile guarantees the visibility of multi-threaded operation variables.

volatile's visibility guarantee and its prohibition of instruction reordering are both related to memory barriers. Let's look at a typical use of volatile:

public class Singleton {
    private volatile static Singleton instance;
    private Singleton() {}
    public static Singleton getInstance() {
        if (instance == null) {
            synchronized (Singleton.class) {
                if (instance == null) {
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}

If you compile this class and compare the assembly generated with and without the volatile keyword, you will find that with volatile there is one extra instruction, lock addl $0x0,(%esp), i.e. an extra lock-prefixed instruction. The lock instruction acts as a memory barrier.

The lock instruction is equivalent to a memory barrier, which guarantees the following:

  1. When reordering, the following instructions cannot be reordered to the position before the memory barrier

  2. Write this processor's cache to memory

  3. If it is a write action, it will invalidate the corresponding cache in other processors.

Points 2 and 3 embody volatile's visibility guarantee, and point 1 embodies the prohibition of instruction reordering.

Four categories of memory barriers: (Load stands for read instructions, Store stands for write instructions)

  • Insert a StoreStore barrier in front of every volatile write operation.

  • Insert a StoreLoad barrier after each volatile write operation.

  • Insert a LoadLoad barrier after each volatile read operation.

  • Insert a LoadStore barrier after each volatile read operation.

Some friends may still be a little confused about this; the memory barrier is quite abstract.
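To make the visibility guarantee concrete, here is a minimal sketch of my own (not from the original article): a reader thread spins on a flag until the writer's update becomes visible. Because the flag is volatile, the write is flushed to main memory and the reader is guaranteed to observe it; without volatile, the reader could in principle spin forever on its stale cached copy.

```java
public class VolatileVisibilityDemo {
    // volatile guarantees the writer's update becomes visible to the reader
    private static volatile boolean ready = false;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!ready) {
                // busy-wait until the volatile write is observed
            }
            System.out.println("reader saw ready = true");
        });
        reader.start();
        Thread.sleep(100);   // give the reader time to start spinning
        ready = true;        // volatile write: made visible to the reader
        reader.join();       // terminates because the reader observed the write
    }
}
```

Try removing volatile from the field: depending on the JVM and JIT, the reader may never terminate.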

The memory barrier ensures that the instructions before it execute first, which is how instruction reordering is prohibited. At the same time, it ensures that the cache is written back to memory and that the corresponding caches in other processors are invalidated, which guarantees visibility. That is roughly how volatile is implemented under the hood.

7. Tell me the difference between concurrency and parallelism?

Concurrency and parallelism were originally concepts in the operating system , representing the way the CPU executes multiple tasks.

  • Sequential: the current task cannot start until the previously started task has completed

  • Concurrency: The current task can start executing regardless of whether the previous task started to execute is completed or not

(That is, if AB is executed sequentially, A must be completed before B, but concurrent execution is not necessarily.)

  • Serial: there is one task execution unit, which can physically execute only one task at a time

  • Parallel: There are multiple task execution units, and multiple tasks can be physically executed together

(That is, at any point in time, only one task must be executed in serial execution, but not necessarily in parallel.)

Zhihu has a very interesting answer , you can read it:

  • You eat halfway through the meal, the phone comes, and you don't pick it up until after you finish eating, which means that you don't support concurrency or parallelism.

  • You are halfway through your meal, and the phone call comes, you stop to answer the phone, and continue to eat after answering, which means that you support concurrency.

  • You are eating halfway through your meal, and the phone call comes, and you are eating while talking on the phone, which shows that you support parallelism.

The key to concurrency is that you have the ability to handle multiple tasks, though not necessarily at the same instant. The key to parallelism is the ability to handle multiple tasks at literally the same instant. So the most critical difference between them is: whether things happen at the same time.

Source: Zhihu

8. What is the implementation principle of synchronized and lock optimization?

Synchronized is a keyword in Java and is a kind of synchronization lock. The synchronized keyword can act on methods or code blocks.

In a typical interview, it can be answered like this:

8.1 monitorenter, monitorexit, ACC_SYNCHRONIZED

If synchronized is applied to a code block, decompilation shows two instructions, monitorenter and monitorexit; the JVM uses these two instructions to achieve synchronization. If synchronized is applied to a method, decompilation shows the ACC_SYNCHRONIZED flag; the JVM adds ACC_SYNCHRONIZED to the method's access flags to achieve synchronization.

  • A synchronized block is implemented via monitorenter and monitorexit. When a thread reaches monitorenter, it must obtain the monitor lock before executing the code that follows; when it reaches monitorexit, it releases the lock.

  • A synchronized method is implemented by setting the ACC_SYNCHRONIZED flag. When a thread executes a method carrying the ACC_SYNCHRONIZED flag, it needs to obtain the monitor lock. Each object is associated with a monitor, which a thread can acquire or release.
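As a quick sketch of both forms (my own illustration, not from the original): the block form below compiles to monitorenter/monitorexit and the method form gets the ACC_SYNCHRONIZED flag — you can confirm both with `javap -c` on the compiled class. Because both forms lock the same object's monitor, two threads incrementing 10,000 times each always finish at 20,000.

```java
public class SyncDemo {
    private int count = 0;

    // synchronized method: the JVM sets ACC_SYNCHRONIZED in the method's access flags
    public synchronized void incrMethod() {
        count++;
    }

    // synchronized block: compiles to monitorenter / monitorexit (see javap -c)
    public void incrBlock() {
        synchronized (this) {
            count++;
        }
    }

    public int getCount() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        SyncDemo demo = new SyncDemo();
        Thread t1 = new Thread(() -> { for (int i = 0; i < 10000; i++) demo.incrMethod(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 10000; i++) demo.incrBlock(); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(demo.getCount()); // always 20000: both forms use the same monitor
    }
}
```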

8.2 The monitor

What is a monitor? The operating-system monitor is the conceptual principle, and ObjectMonitor is the implementation of that principle.

In the Java virtual machine (HotSpot), Monitor (monitor) is implemented by ObjectMonitor, and its main data structure is as follows:

ObjectMonitor() {
    _header       = NULL;
    _count        = 0;     // record count
    _waiters      = 0,
    _recursions   = 0;
    _object       = NULL;
    _owner        = NULL;
    _WaitSet      = NULL;  // threads in the wait state are added to _WaitSet
    _WaitSetLock  = 0 ;
    _Responsible  = NULL ;
    _succ         = NULL ;
    _cxq          = NULL ;
    FreeNext      = NULL ;
    _EntryList    = NULL ;  // threads blocked waiting for the lock are added to this list
    _SpinFreq     = 0 ;
    _SpinClock    = 0 ;
    OwnerIsThread = 0 ;
}

The key fields of ObjectMonitor (_owner, _WaitSet, _EntryList) come together in the working mechanism below:

8.3 Working mechanism of Java Monitor

  • The thread that wants to get the monitor will first enter the _EntryList queue.

  • When a thread obtains the monitor of the object, it enters the Owner area, sets it as the current thread, and adds 1 to the counter at the same time.

  • If the thread calls the wait() method, it will enter the WaitSet queue. It will release the monitor lock, that is, assign null to the owner, decrement the count by 1, and enter the WaitSet queue to block and wait.

  • If other threads call notify() / notifyAll(), a thread in the WaitSet will be woken up, and the thread will try to acquire the monitor lock again, and if it succeeds, it will enter the Owner area.

  • After the execution of the synchronization method is completed, the thread exits the critical section, sets the owner of the monitor to null, and releases the monitor lock.

8.4 Objects are associated with monitors

  • In the HotSpot virtual machine, the layout of objects stored in memory can be divided into three areas: object header (Header), instance data (Instance Data) and object padding (Padding) .

  • The object header mainly includes two parts of data: Mark Word (mark field), Class Pointer (type pointer) .

Mark Word is used to store the runtime data of the object itself, such as hash code (HashCode), GC generation age, lock status flag, lock held by thread, bias thread ID, bias timestamp, etc.

When the lock state is a heavyweight lock, Mark Word stores a pointer to the monitor (mutex). synchronized is such a heavyweight lock: for a synchronized object lock, the Mark Word lock flag is 10 and the pointer points to the start address of the Monitor object.

9. What are the states of a thread?

A thread has 6 states: NEW, RUNNABLE, BLOCKED, WAITING, TIMED_WAITING, TERMINATED.

The conversion relationship diagram is as follows:

  • New: the state after the thread object has been created but before the start() method has been called.

public class ThreadTest {
    public static void main(String[] args) {
        Thread thread = new Thread();
        System.out.println(thread.getState());
    }
}
// Output: NEW

  • Runnable: includes two sub-states: ready and running. Once the start() method is called, the thread enters the Runnable state. It means the thread is eligible to execute (the ready sub-state), and once the scheduler allocates CPU time to it, it actually executes (the running sub-state).

public class ThreadTest {
    public static void main(String[] args) {
        Thread thread = new Thread();
        thread.start();
        System.out.println(thread.getState());
    }
}
// Output: RUNNABLE

  • Blocked: blocked on a lock. The thread is blocked when it tries to enter a synchronized method or block whose lock is held elsewhere (waiting to acquire the lock). For example, if the code in a critical section is being executed by another thread, this thread must wait and enters this state. It generally transitions from RUNNABLE; once the thread acquires the lock, it becomes RUNNABLE again.

Thread t = new Thread(new Runnable() {
    public void run() {
        synchronized (lock) { // blocks here: Blocked state
            // do things
        }
    }
});
t.getState(); // before start() is called: New
t.start();    // after start(): Runnable

  • Waiting: an indefinite waiting state. A thread in this state is waiting for another thread to take a specific action (such as a notification). It is allocated no CPU time and must be explicitly woken up, otherwise it waits indefinitely. Typically entered via Object.wait().

Thread t = new Thread(new Runnable() {
    public void run() {
        synchronized (lock) { // Blocked
            // do things
            while (!condition) {
                try {
                    lock.wait(); // into Waiting
                } catch (InterruptedException e) {
                }
            }
        }
    }
});
t.getState(); // New
t.start();    // Runnable

  • Timed_Waiting: waiting for a specified time before waking up. A timer counts down internally; the most common trigger is Thread.sleep(long). When triggered, the thread enters Timed_Waiting, and when the timer fires it returns to Runnable.

Thread t = new Thread(new Runnable() {
    public void run() {
        try {
            Thread.sleep(1000); // Timed_Waiting
        } catch (InterruptedException e) {
        }
    }
});
t.getState(); // New
t.start();    // Runnable

  • Terminated: the thread has finished executing.

Let's take a look at the code demo:

public class ThreadTest {
    private static Object object = new Object();
    public static void main(String[] args) throws Exception {
        Thread thread = new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    for (int i = 0; i < 1000; i++) {
                        System.out.print("");
                    }
                    Thread.sleep(500);
                    synchronized (object) {
                        object.wait();
                    }
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        });
        Thread thread1 = new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    synchronized (object) {
                        Thread.sleep(1000);
                    }
                    Thread.sleep(1000);
                    synchronized (object) {
                        object.notify();
                    }
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        });

        System.out.println("1" + thread.getState());
        thread.start();
        thread1.start();
        System.out.println("2" + thread.getState());
        while (thread.isAlive()) {
            System.out.println("---" + thread.getState());
            Thread.sleep(100);
        }
        System.out.println("3" + thread.getState());
    }
}

Output:
1NEW
2RUNNABLE
---RUNNABLE
---TIMED_WAITING
---TIMED_WAITING
---TIMED_WAITING
---TIMED_WAITING
---BLOCKED
---BLOCKED
---BLOCKED
---BLOCKED
---BLOCKED
---WAITING
---WAITING
---WAITING
---WAITING
---WAITING
---WAITING
---WAITING
---WAITING
---WAITING

10. The difference between synchronized and ReentrantLock?

  • synchronized is implemented by the JVM, while ReentrantLock is implemented at the API (JDK) level.

  • Before synchronized was optimized, its performance was much worse than ReentrantLock's, but since biased locks and lightweight (spin) locks were introduced, the performance of the two is almost the same.

  • synchronized is more convenient and concise to use: the compiler ensures the lock is acquired and released. With ReentrantLock you must lock and unlock manually, and the unlock is best placed in a finally block.

  • ReentrantLock can be specified as a fair or unfair lock, while synchronized can only be an unfair lock.

  • ReentrantLock can respond to interruption and supports timed lock attempts, while synchronized cannot respond to interrupts.
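The manual lock/unlock-in-finally pattern and the fairness constructor mentioned above can be sketched like this (my own minimal example, not from the original):

```java
import java.util.concurrent.locks.ReentrantLock;

public class ReentrantLockDemo {
    // true = fair lock; the no-arg constructor gives an unfair lock
    private static final ReentrantLock lock = new ReentrantLock(true);
    private static int count = 0;

    static void increment() {
        lock.lock();            // must acquire manually
        try {
            count++;
        } finally {
            lock.unlock();      // always release in finally
        }
    }

    static int getCount() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> { for (int i = 0; i < 10000; i++) increment(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 10000; i++) increment(); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(count); // 20000
    }
}
```

Without the finally block, an exception thrown while holding the lock would leave it locked forever — which is exactly the bookkeeping synchronized does for you.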

11. The difference between wait(), notify() and suspend(), resume()

  • The wait() method makes the thread enter a blocked waiting state and releases the lock it holds

  • notify() wakes up one thread in the waiting state; it is generally used together with wait().

  • suspend() puts the thread into a suspended state from which it does not recover automatically; the corresponding resume() must be called to make the thread runnable again. suspend() can easily cause deadlock.

  • resume() is used together with suspend().

suspend() is not recommended: after it is called, the thread does not release the resources (such as locks) it already holds; it goes to sleep while still occupying them, which easily leads to deadlock.
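Here is a small sketch of my own showing the safe alternative, the wait()/notify() handshake: unlike suspend(), wait() releases the monitor while waiting, which is exactly what lets the notifier acquire the same lock and wake the waiter up.

```java
public class WaitNotifyDemo {
    private static final Object lock = new Object();
    private static boolean done = false;

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            synchronized (lock) {
                while (!done) {             // loop guards against spurious wakeups
                    try {
                        lock.wait();        // releases the lock while waiting
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
                System.out.println("waiter resumed");
            }
        });
        waiter.start();
        Thread.sleep(100);
        synchronized (lock) {               // acquirable only because wait() released it
            done = true;
            lock.notify();                  // wake the waiting thread
        }
        waiter.join();
    }
}
```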

12. CAS? What's wrong with CAS and how can it be fixed?

CAS, short for Compare and Swap: compare and exchange.

CAS involves three operands: the memory value V, the expected old value A, and the new value B. If the value at memory location V matches the expected value A, it is updated to the new value B; otherwise nothing is updated.

What problems does CAS have?

  • The ABA problem

In a concurrent environment, suppose the initial value is A; when modifying the data, we check that it is still A and then apply the modification. But although we see A, the value may have changed from A to B and back to A in between. That A is no longer "the same" A, so even if the CAS succeeds, there may still be a problem.

The ABA problem can be solved with AtomicStampedReference, a stamped atomic reference class that guarantees the correctness of CAS by also checking a version stamp of the value.

  • Long spin times are expensive

With spinning CAS, if the loop keeps failing, it imposes a very large execution overhead on the CPU. For this reason, many CAS-based designs cap the number of spins to avoid the cost~

  • Atomic operations can only be guaranteed for one variable.

CAS guarantees atomicity only for operations on a single variable. If you operate on several variables, CAS cannot directly guarantee atomicity. There are two ways around this: 1. use a mutex to guarantee atomicity; 2. encapsulate the variables in one object and use AtomicReference to guarantee atomicity.
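The AtomicStampedReference fix for the ABA problem can be sketched like this (my own example, not from the original): a stale CAS that ignores the intermediate A→B→A change fails because its stamp is out of date.

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static void main(String[] args) {
        AtomicStampedReference<String> ref = new AtomicStampedReference<>("A", 0);

        int oldStamp = ref.getStamp();          // a "slow" thread records stamp 0

        // meanwhile another thread performs A -> B -> A, bumping the stamp each time
        ref.compareAndSet("A", "B", ref.getStamp(), ref.getStamp() + 1);
        ref.compareAndSet("B", "A", ref.getStamp(), ref.getStamp() + 1);

        // the slow thread's CAS fails: the value is "A" again, but the stamp moved on
        boolean success = ref.compareAndSet("A", "C", oldStamp, oldStamp + 1);
        System.out.println(success);            // false: ABA detected
        System.out.println(ref.getReference()); // still "A"
    }
}
```

A plain AtomicReference would have accepted the final CAS, silently hiding the intermediate changes.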

Friends who are interested can read my earlier hands-on article: A practice of CAS optimistic locking to solve concurrency problems~

13. Talk about the difference between CountDownLatch and CyclicBarrier

Both CountDownLatch and CyclicBarrier are used to make threads wait and resume running when a certain condition is met. The main differences are:

  • CountDownLatch: One or more threads, wait for other threads to complete something before executing;

  • CyclicBarrier: Multiple threads wait for each other until they reach the same synchronization point, and then continue to execute together.

For example:

  • CountDownLatch: suppose the teacher arranges for the students to gather at the park gate on the weekend, and tickets are issued once everyone is present. Issuing the tickets (the main thread) must wait until all the students have arrived (the other threads have all completed) before it can proceed.

  • CyclicBarrier: several sprinters are about to start a track race. Only when all the athletes are ready does the referee fire the starting gun, and then all the athletes sprint off together.
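The "students at the gate" analogy above can be sketched as follows (my own demo): the main thread blocks on await() until every "student" thread has counted the latch down to zero.

```java
import java.util.concurrent.CountDownLatch;

public class LatchDemo {
    public static void main(String[] args) throws InterruptedException {
        int students = 3;
        CountDownLatch latch = new CountDownLatch(students);

        for (int i = 1; i <= students; i++) {
            final int id = i;
            new Thread(() -> {
                System.out.println("student " + id + " arrived");
                latch.countDown();          // one more student present
            }).start();
        }

        latch.await();                      // the teacher waits until the count hits 0
        System.out.println("everyone is here, issuing tickets");
    }
}
```

Note that a CountDownLatch cannot be reset, whereas a CyclicBarrier (as the name says) can be reused for the next round.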

14. What is false sharing in a multi-threaded environment

14.1 What is false sharing?

The CPU caches data in units of cache lines. When multiple threads modify variables that are independent of each other but happen to sit in the same cache line, they hurt each other's performance. This is false sharing.

Modern computer calculation model:

  • The execution speed of the CPU is several orders of magnitude faster than that of the memory. In order to improve the execution efficiency, the modern computer model has evolved into a model of CPU, cache (L1, L2, L3), and memory.

  • When the CPU performs calculations, it first queries the data from the L1 cache, and then goes to the L2 cache if it cannot find it, and so on, until the data is obtained in the memory.

  • To avoid frequently fetching data from memory, the cache works in units of cache lines, typically 64 bytes in size.

It is precisely because of cache lines that the false sharing problem arises:

Suppose variables a and b are loaded into the same cache line.

  • When thread 1 modifies the value of a, CPU1 notifies the other CPU cores that the current cache line has become invalid.

  • If thread 2 then modifies b, the cache line has expired, so core 2 must re-read the cache line from main memory. After reading it, because it wants to modify b, CPU2 in turn notifies the other CPU cores that the cache line is invalid again.

  • In this way, if the same cache line is read and written by multiple threads, they contend with each other, and the frequent write-backs to main memory greatly reduce performance.

14.2 How to solve the false sharing problem

Since false sharing comes from independent variables being stored in the same 64-byte cache line, we can trade space for time: use data padding to spread the independent variables across different cache lines~

Let's look at an example:

public class FalseShareTest {
    public static void main(String[] args) throws InterruptedException {
        Rectangle rectangle = new Rectangle();
        long beginTime = System.currentTimeMillis();
        Thread thread1 = new Thread(() -> {
            for (int i = 0; i < 100000000; i++) {
                rectangle.a = rectangle.a + 1;
            }
        });
        Thread thread2 = new Thread(() -> {
            for (int i = 0; i < 100000000; i++) {
                rectangle.b = rectangle.b + 1;
            }
        });
        thread1.start();
        thread2.start();
        thread1.join();
        thread2.join();
        System.out.println("elapsed time: " + (System.currentTimeMillis() - beginTime));
    }
}

class Rectangle {
    volatile long a;
    volatile long b;
}
// Output: elapsed time: 2815

A long is 8 bytes. If we now pad seven long fields between variables a and b, what is the output? As follows:

class Rectangle {
    volatile long a;
    long a1, a2, a3, a4, a5, a6, a7;
    volatile long b;
}
// Output: elapsed time: 1113

As you can see, padding the data so that the hot read/write variables fall into different cache lines gives much better performance~

15. Understanding of Fork/Join framework

The Fork/Join framework is a framework provided by Java7 for executing tasks in parallel. It is a framework for dividing a large task into several small tasks, and finally summarizing the results of each small task to obtain the result of the large task.

The Fork/Join framework needs to understand two points, "divide and conquer" and "work stealing algorithm".

divide and conquer

The definition of the Fork/Join framework above is itself an embodiment of the divide-and-conquer idea.

work stealing algorithm

Split a large task into small tasks, put them in different queues, and hand them to different threads for execution. Some threads finish their own tasks first while others are still slowly working through theirs. At this point, to make full use of the idle threads, a work-stealing algorithm is needed~

The work-stealing algorithm is the process by which a thread steals tasks from other threads' queues to execute; generally the fast (stealing) thread grabs tasks from a slow thread's queue. To reduce lock contention, a double-ended queue (deque) is usually used: the owner thread takes tasks from one end while the stealing thread takes from the other.
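The divide-and-conquer part can be sketched with a classic RecursiveTask example (my own demo, not from the original): summing a range by forking it into halves until a threshold, then joining the partial results. The forked subtasks are exactly what idle worker threads can steal.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1000;
    private final long from, to;               // inclusive range [from, to]

    public SumTask(long from, long to) {
        this.from = from;
        this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {          // small enough: compute directly
            long sum = 0;
            for (long i = from; i <= to; i++) sum += i;
            return sum;
        }
        long mid = (from + to) / 2;            // otherwise split in half
        SumTask left = new SumTask(from, mid);
        SumTask right = new SumTask(mid + 1, to);
        left.fork();                           // run left half asynchronously (stealable)
        return right.compute() + left.join();  // compute right here, then join left
    }

    public static void main(String[] args) {
        long result = new ForkJoinPool().invoke(new SumTask(1, 1_000_000));
        System.out.println(result);            // 500000500000
    }
}
```

Computing one half in the current thread (right.compute()) instead of forking both halves avoids an unnecessary task handoff.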

16. Talk about the principle of ThreadLocal?

17. Why can ThreadLocal cause memory leaks?

17.1 What about memory leaks caused by weak references?

17.2 The key is a weak reference, will GC recycling affect the normal work of ThreadLocal?

17.3 ThreadLocal memory leak demo

18 Why is the key of ThreadLocalMap a weak reference, what is the design concept?

19. How to ensure shared ThreadLocal data between parent and child threads

20. How to ensure that the result of i++ under multithreading is correct?

21. How to detect deadlock? How to prevent deadlock? Four necessary conditions for deadlock

22. What happens if there are too many threads?

23. Talk about the happens-before principle

24. How to share data between two threads

25. What is the function of LockSupport?

26 How to tune the thread pool and how to confirm the optimal number of threads?

27. Why use a thread pool?

28. Java's thread pool execution principle

29. Talk about the core parameters of the thread pool

30. When submitting a new task, how to deal with exceptions?

31. AQS components, implementation principles

31.1 State maintenance

31.2 CLH Queue

31.3 ConditionObject notification

31.4 The template method design pattern. Condition can implement the wait/notify pattern; what about Lock?

31.5 Exclusive and Shared Modes.

31.6 Custom Synchronizers

32 Semaphore principle

32.1 Semaphore use demo

32.2 Principle of Semaphore

33 What optimizations did synchronized get? What is a biased lock? What is a spin lock? What is lock inflation?

34 What is context switching?

35. Why are wait(), notify(), notifyAll() defined in Object rather than in the Thread class?

36. What is the difference between the submit() and execute() methods in the thread pool?

37 The principle of AtomicInteger?

38 What is the thread scheduling algorithm used in Java?

39. The difference between shutdown() and shutdownNow()

40 Talk about several common thread pools and usage scenarios?

40.1 newFixedThreadPool

40.2 newCachedThreadPool

40.3 newSingleThreadExecutor single-threaded thread pool

40.4 newScheduledThreadPool

41 What is FutureTask

42 The difference between interrupt(), interrupted() and isInterrupted() in java

43 There are three threads T1, T2, T3, how to ensure that they are executed in order

44 What are the blocking queues

45 What is the concurrency of ConcurrentHashMap in Java?

46 What are the commonly used scheduling methods for Java threads?

46.1 Thread sleep

46.2 Thread Interruption

46.3 Thread Waiting

46.4 Thread Yields

46.5 Thread Notification

47. The locking principle of ReentrantLock

47.1 Templates used by ReentrantLock

47.2 What is an unfair lock and what is a fair lock?

47.3 lock() lock process

48. Communication between threads

48.1 The volatile and synchronized keywords

48.2 Wait/Notify Mechanism

48.3 Piped I/O streams

48.4 The join() Method

48.5 ThreadLocal

49 Write 3 multithreading best practices you follow

50. Why does the Java development manual issued by Ali enforce that the thread pool does not allow the use of Executors to create?


Origin blog.csdn.net/l688899886/article/details/127079247