[Java Interview Questions] Multithreading

Original: https://www.yuque.com/unityalvin/baguwen/xbg8ds

1. What is a process

A process is a running instance of a program; every running program corresponds to at least one process, and the process is the operating system's basic unit of resource allocation.

2. What is a thread

A thread is the smallest unit of execution that an operating system can schedule. It is contained within a process and is the actual unit of execution inside the process.

3. What is concurrency and what is parallelism

Parallelism means that two or more events happen at the same instant; concurrency means that two or more events happen within the same time interval (their execution may be interleaved on a single core).

4. Why use multithreading

From the perspective of the computer itself: a thread can be compared to a lightweight process and is the smallest unit of program execution. Switching and scheduling between threads costs far less than between processes. Moreover, in the multi-core CPU era multiple threads can run truly simultaneously, which reduces the overhead of thread context switching.

From the perspective of current Internet development trends: today's systems routinely need to handle millions or even tens of millions of concurrent requests, and multi-threaded concurrent programming is the foundation for building such high-concurrency systems. Making good use of multithreading can greatly improve a system's overall concurrency capability and performance.

5. What problems may be caused by using multithreading?

The purpose of using multithreading is to improve a program's execution efficiency and running speed, but that does not make multithreading a cure-all. Many problems may arise when using it, such as memory leaks, deadlocks, and thread-safety issues.

6. Talk about the life cycle and status of threads

  1. After a thread is created, it is in the NEW state.
  2. After the start() method is called, the thread enters the READY state.
  3. A thread in the ready state enters the RUNNING state once it obtains a CPU time slice.
  4. When a thread executes the wait() method, it enters the WAITING state; a thread in the waiting state relies on a notification from another thread to return to the running state.
  5. Through the sleep(long millis) or wait(long millis) method, a thread can be put into the TIMED_WAITING state, which is equivalent to the waiting state with a timeout attached; once the timeout expires, the thread returns to the ready state.
  6. When a thread calls a synchronized method (or block) and fails to acquire the lock, it enters the BLOCKED state.
  7. A thread enters the TERMINATED state after its run() method finishes executing.

These states map onto the Thread.State enumeration: NEW, RUNNABLE (which covers both ready and running), BLOCKED, WAITING, TIMED_WAITING, and TERMINATED.
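The transitions above can be observed directly with Thread.getState(). A minimal sketch (the class name is illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class ThreadStateDemo {
    // Returns the states observed at three points in a thread's life.
    public static List<Thread.State> observeStates() throws InterruptedException {
        List<Thread.State> states = new ArrayList<>();
        Thread t = new Thread(() -> {
            try { Thread.sleep(200); } catch (InterruptedException ignored) { }
        });
        states.add(t.getState());   // NEW: created but not started
        t.start();
        Thread.sleep(50);           // give it time to reach sleep() -> TIMED_WAITING
        states.add(t.getState());
        t.join();                   // wait for run() to finish
        states.add(t.getState());   // TERMINATED
        return states;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(observeStates());
    }
}
```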

7. Why is the run() method executed when the start() method is called? Why can't we call the run() method directly?

start() performs the thread's preparatory work, and the JVM then automatically executes the body of run() on the new thread; this is real multithreaded execution.

Calling run() directly executes it as an ordinary method on the current thread (for example, the main thread); no new thread is involved, so this is not multithreaded execution.
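The difference is easy to see by checking which thread actually runs the body. A small sketch (class and thread names are illustrative):

```java
public class StartVsRunDemo {
    // Calling run() directly: the body executes on the caller's own thread.
    public static String runDirectly() {
        final String[] name = new String[1];
        Thread t = new Thread(() -> name[0] = Thread.currentThread().getName());
        t.run();                    // ordinary method call, no new thread
        return name[0];
    }

    // Calling start(): the JVM does the setup, then calls run() on the new thread.
    public static String startThread() throws InterruptedException {
        final String[] name = new String[1];
        Thread t = new Thread(() -> name[0] = Thread.currentThread().getName(), "worker");
        t.start();
        t.join();
        return name[0];             // "worker"
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runDirectly());
        System.out.println(startThread());
    }
}
```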

8. What is context switching

When the current task's CPU time slice runs out, it saves its own state before switching to another task, so that the state can be loaded again when execution switches back to it later. The process of saving a task's state and later reloading it is a context switch.

9. What is thread deadlock

Deadlock is a situation in which two or more threads, competing for resources during execution, end up waiting for each other; the threads involved are all blocked at the same time and the program cannot terminate normally.

10. Why deadlock occurs (4)

A deadlock must meet the following four conditions:

  1. Mutual exclusion: a resource can be held by only one thread at any moment.
  2. Hold and wait: a thread blocked while requesting resources keeps holding the resources it has already acquired.
  3. No preemption: resources a thread has acquired cannot be forcibly taken away by other threads; they are released only when the holder is finished with them.
  4. Circular wait: the threads form a head-to-tail circular chain, each waiting for a resource held by the next.

11. How to avoid thread deadlock (3)

  1. Break the hold-and-wait condition: request all needed resources at once.
  2. Break the no-preemption condition: when a thread holding some resources fails to acquire another, have it actively release the resources it already holds.
  3. Break the circular-wait condition: acquire resources in a fixed order and release them in the reverse order.
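The third technique, fixed lock ordering, can be sketched as follows (lock names are illustrative). Because both threads take LOCK_A before LOCK_B, no circular wait can form:

```java
public class LockOrderingDemo {
    private static final Object LOCK_A = new Object();
    private static final Object LOCK_B = new Object();

    // Every thread acquires LOCK_A before LOCK_B, breaking circular wait.
    static void transfer() {
        synchronized (LOCK_A) {
            synchronized (LOCK_B) {
                // critical section touching both resources
            }
        }
    }

    public static boolean runWithoutDeadlock() throws InterruptedException {
        Thread t1 = new Thread(LockOrderingDemo::transfer);
        Thread t2 = new Thread(LockOrderingDemo::transfer);
        t1.start(); t2.start();
        t1.join(1000); t2.join(1000);   // joins would time out if the threads deadlocked
        return !t1.isAlive() && !t2.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runWithoutDeadlock());
    }
}
```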

12. How to solve deadlock

  1. First use the jps command to find the ID of the Java process.
  2. Then use jstack <pid> to dump the thread stacks and locate the deadlock.

13. What are fair locks and unfair locks

● A fair lock means multiple threads acquire the lock in the order in which they requested it, like a queue: first come, first served. Example: new ReentrantLock(true).
● An unfair lock means the order of acquisition does not follow the order of requests; under high concurrency a later thread may acquire the lock before earlier ones, which can cause priority inversion or starvation. Examples: synchronized, and ReentrantLock with its default (non-fair) constructor.
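A quick sketch of how fairness is chosen at construction time:

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    public static void main(String[] args) {
        ReentrantLock fair = new ReentrantLock(true);   // fair: waiting threads queue FIFO
        ReentrantLock unfair = new ReentrantLock();     // default constructor is non-fair
        System.out.println(fair.isFair());              // true
        System.out.println(unfair.isFair());            // false
    }
}
```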

14. What is a reentrant lock

● A reentrant lock (also called a recursive lock) allows the same thread that acquired the lock in an outer method to acquire it again in an inner method without blocking.
● ReentrantLock and synchronized are both typical reentrant locks; the biggest benefit of reentrancy is that it prevents a thread from deadlocking on a lock it already holds.
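Reentrancy can be demonstrated with ReentrantLock's hold count (the class and method names are illustrative); a non-reentrant lock would deadlock at the inner lock() call:

```java
import java.util.concurrent.locks.ReentrantLock;

public class ReentrancyDemo {
    private static final ReentrantLock LOCK = new ReentrantLock();

    static int outer() {
        LOCK.lock();                      // hold count becomes 1
        try {
            return inner() + 1;
        } finally {
            LOCK.unlock();
        }
    }

    static int inner() {
        LOCK.lock();                      // same thread re-acquires: hold count 2
        try {
            return LOCK.getHoldCount();   // 2
        } finally {
            LOCK.unlock();
        }
    }

    public static void main(String[] args) {
        System.out.println(outer());      // 3
    }
}
```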

15. What are pessimistic locks and optimistic locks?

  • Optimistic locking: assumes that others will not modify the data concurrently, so it does not lock; it only checks at update time whether someone else has modified the data. If so, it gives up (or retries) the operation; otherwise it performs it.
  • Pessimistic locking: assumes that others will modify the data concurrently, so it locks the data before operating on it and does not release the lock until the operation completes; while locked, no one else can modify the data.

16. How do you choose pessimistic locks and optimistic locks?

In today's high-concurrency, high-performance, high-availability environments, optimistic locking is generally used more: because it does not actually take a lock, it is more efficient. Its granularity is harder to control and updates may fail and need retrying, but it still usually compares favorably with pessimistic locking, which relies on database locks and is less efficient.

17. What is a spin lock

A thread trying to acquire a spin lock is not blocked immediately; instead it attempts to acquire the lock in a loop. The advantage is that it avoids the cost of a thread context switch; the disadvantage is that the looping consumes CPU.
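A minimal hand-rolled spin lock can be sketched on top of CAS (this is an illustrative sketch, not production code):

```java
import java.util.concurrent.atomic.AtomicReference;

public class SpinLock {
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public void lock() {
        Thread current = Thread.currentThread();
        // Spin: loop until the CAS from null -> current succeeds.
        while (!owner.compareAndSet(null, current)) {
            Thread.onSpinWait();   // CPU hint while busy-waiting (Java 9+)
        }
    }

    public void unlock() {
        owner.compareAndSet(Thread.currentThread(), null);
    }

    public static int demo() throws InterruptedException {
        SpinLock lock = new SpinLock();
        int[] counter = {0};
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                lock.lock();
                try { counter[0]++; } finally { lock.unlock(); }
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return counter[0];   // 2000: the spin lock serialized the increments
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo());
    }
}
```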

18. What is an exclusive lock (write) and a shared lock (read)

Exclusive lock (write lock): a lock that can be held by only one thread at a time; both ReentrantLock and synchronized are exclusive locks.

Shared lock (read lock): a lock that can be held by multiple threads at the same time.

19. Why use a read-write lock

To meet concurrency requirements, multiple threads should be able to read a shared resource at the same time; but while one thread is writing to the shared resource, no other thread should be able to read or write it.

20. Tell me about the differences and similarities between the sleep() method and the wait() method?

● The main difference between the two: the sleep() method does not release the lock, while the wait() method does.
● Both can pause the execution of a thread.
● wait() is usually used for interaction/communication between threads; sleep() is usually used simply to pause execution.
● After wait() is called, the thread does not wake up on its own; another thread must call notify() or notifyAll() on the same object (or wait(long timeout) can be used so the thread wakes automatically after the timeout). After sleep() finishes, the thread wakes up automatically.
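The wait()/notify() handshake above can be sketched as follows (class name illustrative). Note the while loop guarding against spurious wake-ups, and that wait() releases LOCK while waiting, which is what lets the notifier enter the synchronized block:

```java
public class WaitNotifyDemo {
    private static final Object LOCK = new Object();
    private static boolean ready = false;

    public static String waitForSignal() throws InterruptedException {
        Thread notifier = new Thread(() -> {
            synchronized (LOCK) {
                ready = true;
                LOCK.notify();        // wakes the waiter; LOCK is released on block exit
            }
        });
        synchronized (LOCK) {
            notifier.start();
            while (!ready) {          // guard against spurious wake-ups
                LOCK.wait();          // releases LOCK while waiting, unlike sleep()
            }
        }
        return "signalled";
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(waitForSignal());
    }
}
```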

21. Talk about the understanding of the synchronized keyword

The synchronized keyword solves the synchronization of accessing resources between multiple threads, and it can ensure that only one thread can execute the method or code block modified by it at any time.

In earlier versions of Java, synchronized was a heavyweight lock, which was inefficient.

22. Why was the previous synchronized inefficient?

In the Java virtual machine, synchronized is implemented by entering and exiting a Monitor object (also called the monitor, or monitor lock), and the monitor relies on the underlying operating system's mutex lock. Java threads are mapped to the operating system's native threads, so suspending or waking a thread requires help from the operating system, and switching between threads requires the OS to transition from user mode to kernel mode. These transitions take a relatively long time, so the cost is relatively high.

However, starting from JDK 1.6, synchronized has been heavily optimized at the JVM level with techniques such as spin locks, adaptive spin locks, lock elimination, lock coarsening, biased locks, and lightweight locks, all of which reduce the overhead of lock operations.

23. What has been optimized by synchronized after JDK1.6?

  • First, lock states were stratified; from lowest to highest they are: lock-free -> biased lock -> lightweight lock -> heavyweight lock.

  • Biased lock
    ● A biased lock is biased toward a single thread. After that thread acquires the lock, no unlock operations occur on later entries, which saves a lot of overhead.
    ● Biased locking is enabled by default in JDK 1.6 and later, but it only activates a few seconds after the program starts. You can use -XX:BiasedLockingStartupDelay=0 to remove the startup delay, or -XX:-UseBiasedLocking to disable biased locking entirely; with it disabled, the program goes straight to the lightweight-lock state.
    ● Suitable for synchronization that only one thread ever accesses.

  • Lightweight lock
    ● When a second thread competes for the lock, the biased lock is revoked and the lock inflates, upgrading to a lightweight lock. A thread competing for a lightweight lock does not block, which improves response time; but if it keeps failing to acquire the lock, it spins and consumes CPU.
    ● Suitable for low-latency scenarios where synchronized sections execute very quickly.

  • Heavyweight lock
    ● During lock competition, a thread that fails to acquire the lock does not spin; it blocks directly.
    ● Suitable for throughput-oriented scenarios where synchronized sections execute slowly.

24. How do you use the synchronized keyword? (3)

  1. Modifying an instance method
    ○ Locks on the current object instance; the lock on that instance must be acquired before entering the synchronized code.
    ○ synchronized void method() { // business code }
  2. Modifying a static method
    ○ Locks on the current class, which affects all instances of the class; the lock on the Class object must be acquired before entering the synchronized code.
    ○ synchronized static void method() { // business code }
  3. Modifying a code block with a specified lock object
    ○ Locks on a given object/class; the lock on that object must be acquired before entering the synchronized block.
    ○ synchronized(this) { // business code }
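All three forms can be shown in one class (names are illustrative); the race() method also demonstrates why the lock matters, since two threads incrementing a plain int could otherwise lose updates:

```java
public class SyncForms {
    private int count = 0;
    private static int staticCount = 0;

    // 1. Instance method: locks on `this`
    public synchronized void incr() { count++; }

    // 2. Static method: locks on SyncForms.class
    public static synchronized void incrStatic() { staticCount++; }

    // 3. Block: locks on an explicitly chosen object
    public void incrBlock() {
        synchronized (this) { count++; }
    }

    public static int race() throws InterruptedException {
        SyncForms s = new SyncForms();
        Runnable task = () -> { for (int i = 0; i < 1000; i++) s.incr(); };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return s.count;   // always 2000 with synchronized protecting count++
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(race());
    }
}
```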

25. Can the construction method be modified with synchronized?

No. The constructor itself is already safe in this respect (each thread constructing an object works on its own new instance), so there is no such thing as a synchronized constructor.

26. Talk about the underlying principle of synchronized

The implementation of the synchronized synchronization statement block uses the monitorenter and monitorexit instructions, where the monitorenter instruction points to the beginning of the synchronization code block, and the monitorexit instruction points to the end of the synchronization code block.

The method modified by synchronized does not have the monitorenter instruction and the monitorexit instruction, but instead has the ACC_SYNCHRONIZED flag, which indicates that the method is a synchronous method.

But in essence, both work by acquiring the object's monitor.

27. Talk about the difference between synchronized and ReentrantLock (5)

Composition

● synchronized is a keyword and belongs to the JVM level.
● ReentrantLock is a concrete class (java.util.concurrent.locks.ReentrantLock) and is a lock at the API level.

Usage

● synchronized does not require the user to release the lock manually; after the synchronized code finishes executing, the system automatically releases the lock.
● ReentrantLock requires the user to release the lock manually; if the lock is never released, a deadlock can occur. The lock() and unlock() calls should be paired with a try/finally block.

Interruptibility

● A thread waiting on synchronized cannot be interrupted; it waits until the holder finishes normally or throws an exception.
● ReentrantLock waits can be interrupted: use the timeout method tryLock(long timeout, TimeUnit unit), or acquire with lockInterruptibly() in the code block and then call interrupt() on the waiting thread.

Fairness

● synchronized is an unfair lock.
● ReentrantLock supports both; the default is unfair, and the constructor accepts true/false to choose.

Binding multiple conditions (Condition)

● synchronized does not support this.
● ReentrantLock can bind multiple Condition objects to wake waiting threads in groups, precisely, instead of either waking one thread at random or waking all of them as synchronized does.
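A minimal sketch of precise wake-up with one Condition (names illustrative; a real use would bind several Conditions to the same lock, one per group):

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class ConditionDemo {
    private static final ReentrantLock LOCK = new ReentrantLock();
    private static final Condition GROUP_A = LOCK.newCondition();
    private static boolean aTurn = false;

    public static String wakeGroupA() throws InterruptedException {
        StringBuilder log = new StringBuilder();
        Thread a = new Thread(() -> {
            LOCK.lock();
            try {
                while (!aTurn) GROUP_A.await();   // waits only on its own condition
                log.append("A-woken");
            } catch (InterruptedException ignored) {
            } finally {
                LOCK.unlock();
            }
        });
        a.start();
        Thread.sleep(50);            // let the waiter park first
        LOCK.lock();
        try {
            aTurn = true;
            GROUP_A.signal();        // wakes only threads waiting on GROUP_A
        } finally {
            LOCK.unlock();
        }
        a.join();
        return log.toString();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(wakeGroupA());
    }
}
```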

28. Talk about JMM (Java Memory Model)

Before JDK 1.2, Java's memory model always read variables from shared (main) memory, and nothing special was required. Under the current Java memory model, threads may keep variables in local working memory (such as machine registers) instead of reading and writing shared memory directly. As a result, one thread may modify a variable's value in shared memory while another thread keeps using its stale copy in local memory, causing data inconsistency.

To solve this problem, you need to declare the variable as volatile, which instructs the JVM that this variable is shared and unstable, and it will be read in shared memory every time it is used.

Therefore, in addition to preventing JVM instruction rearrangement, the volatile keyword also has an important role in ensuring the visibility of variables.

More detailed: https://blog.csdn.net/m0_55155505/article/details/126134031#JMM_10

29. What does volatile mean?

Volatile is a lightweight synchronization mechanism provided by the Java virtual machine, which can guarantee visibility and order, but cannot guarantee atomicity.

Three important properties of concurrent programming

  1. Atomicity: It means that an operation is uninterruptible, and it must be executed completely or not executed at all.
  2. Visibility: When multiple threads access the same variable, and one thread modifies the value of the variable, other threads can immediately see the modified value.
  3. Orderliness: the program executes in the order in which the code is written.

30. Where have you used volatile

In the singleton pattern. Originally it used only double-checked locking (checking the instance before and after acquiring the lock), but because of instruction reordering, a thread might find the instance non-null on the first check while the instance has not actually finished initializing. So volatile is used on the instance field to forbid that reordering.
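The classic double-checked-locking singleton looks like this (a standard sketch of the pattern):

```java
public class Singleton {
    // volatile forbids the reordering that could publish a half-constructed instance
    private static volatile Singleton instance;

    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {                  // first check, without the lock
            synchronized (Singleton.class) {
                if (instance == null) {          // second check, under the lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
```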

31. The difference between synchronized and volatile

The synchronized keyword and the volatile keyword are complementary, not alternatives to each other!

● The volatile keyword is a lightweight implementation of thread synchronization, so the performance of volatile is definitely better than the synchronized keyword. But the volatile keyword can only be used for variables and the synchronized keyword can modify methods and code blocks.
● The volatile keyword can guarantee the visibility of the data, but cannot guarantee the atomicity of the data. The synchronized keyword guarantees both.
● The volatile keyword is mainly used to solve the visibility of variables among multiple threads, while the synchronized keyword solves the synchronization of resource access among multiple threads.

32. How to solve the problem of not guaranteeing atomicity

  • synchronized
  • lock
  • AtomicInteger

33. Why adding AtomicInteger can solve the problem of not guaranteeing atomicity?

Because it has methods such as getAndIncrement(), which atomically adds 1 to the value.

Concurrent updates cannot interleave: a thread whose update loses the race simply retries against the latest value, so no increment is lost, which solves the atomicity problem.

The bottom layer of AtomicInteger is CAS.

CAS stands for Compare And Swap. It is a CPU concurrency primitive: it checks whether the value at a memory location equals an expected value and, if so, replaces it with a new value; otherwise it keeps retrying until the value in working memory agrees with the value in main memory. The whole compare-and-swap is atomic, so it does not produce inconsistent data.

for example:

  1. Main physical memory holds the value 5, and two threads want to operate on it.
  2. Threads A and B each make a copy of that value from main physical memory.
  3. Thread A read 5; when it is about to write back, it finds the value in main memory is still 5, meaning no one has touched it, so it successfully changes the value to 2019.
  4. Thread B also wants to write back; when it checks whether the value it took away still matches main memory, it finds that main memory has already been modified, so thread B's modification fails and the value it tried to write is rejected.
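Both the racing counter and the example above can be reproduced with AtomicInteger (class name illustrative); compareAndSet is the direct CAS call, and getAndIncrement runs a CAS retry loop underneath:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicDemo {
    // Two threads increment 10,000 times each; no increment is ever lost.
    public static int raceFreeCount() throws InterruptedException {
        AtomicInteger counter = new AtomicInteger(0);
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                counter.getAndIncrement();   // CAS loop underneath, no lock
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return counter.get();                // 20000
    }

    // Mirrors the 5 -> 2019 story: the second CAS sees a stale expected value.
    public static boolean[] casExample() {
        AtomicInteger v = new AtomicInteger(5);
        boolean aWins = v.compareAndSet(5, 2019);   // expected 5, found 5: succeeds
        boolean bWins = v.compareAndSet(5, 1024);   // expected 5, value is 2019: fails
        return new boolean[] { aWins, bWins };
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(raceFreeCount());
    }
}
```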

34. The underlying principle of CAS

The Unsafe class plus spinning.

Unsafe lives in the sun.misc package and is the core class behind CAS.

Since Java methods cannot access the underlying system directly, they go through native methods, and all the relevant methods of the Unsafe class are native. Its methods can manipulate memory directly, like a C pointer, and call the operating system's underlying resources to perform the corresponding tasks. Unsafe is effectively the intermediary through which Java calls operating-system resources, so the execution of CAS in Java depends on the methods of the Unsafe class.

35. What are the disadvantages of CAS (3)

  1. Long spin times bring high overhead.
    getAndAddInt() contains a do-while loop; if the CAS fails it keeps retrying, and a long run of failures costs the CPU a lot.
  2. It can only guarantee atomicity for a single shared variable.
    When operating on one shared variable, CAS guarantees atomicity; but a CAS loop cannot guarantee atomicity across operations on multiple shared variables. In that case a lock is needed.
  3. It suffers from the ABA problem.

36. The ABA problem

An important premise of the CAS algorithm is that it reads the data from memory at one moment and compares-and-swaps at a later moment; during that gap, the data may have been changed.

Example:

  • Thread one reads A from memory location V; meanwhile thread two also reads A from memory and, through some operations, changes the value to B,

  • then thread two changes the value back to A. When thread one now performs its CAS, it still finds A in memory, so its operation succeeds.

  • During that period the value in memory did change, but because it was changed back to the original value in the end, CAS did not notice.

How to solve ABA?

  • Use a version-number mechanism: compare the version number together with the value when modifying. Only if both the version number and the value match is the modification applied; otherwise it is rejected.
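In the JDK, AtomicStampedReference implements this version-number (stamp) mechanism. A sketch of the scenario above, with illustrative values:

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static boolean staleCasFails() {
        // Value 100 with version (stamp) 1.
        AtomicStampedReference<Integer> ref = new AtomicStampedReference<>(100, 1);
        int staleStamp = ref.getStamp();   // thread one remembers stamp 1

        // Meanwhile the value goes 100 -> 101 -> 100: same value, stamp is now 3.
        ref.compareAndSet(100, 101, ref.getStamp(), ref.getStamp() + 1);
        ref.compareAndSet(101, 100, ref.getStamp(), ref.getStamp() + 1);

        // Thread one's CAS sees the expected value 100 but an outdated stamp: it fails.
        return ref.compareAndSet(100, 200, staleStamp, staleStamp + 1);
    }

    public static void main(String[] args) {
        System.out.println(staleCasFails());   // false: the ABA change was detected
    }
}
```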

37. ThreadLocal

ThreadLocal gives each thread its own local variable: every thread that accesses the variable holds its own copy. A thread can use get() to obtain the value (or the initial value) and set() to change the value of its own copy, thereby avoiding thread-safety issues.

The underlying implementation of ThreadLocal

Internally, each thread maintains a ThreadLocalMap, a data structure similar to a Map.

The key of a ThreadLocalMap entry is the ThreadLocal object itself, and the value is whatever was passed to that ThreadLocal's set() method. The value we put into a ThreadLocal is thus really stored in the ThreadLocalMap; the ThreadLocal is just a thin wrapper that passes the value through.
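The per-thread-copy behavior can be shown in a few lines (class and value names are illustrative):

```java
public class ThreadLocalDemo {
    // Each thread sees its own copy; withInitial supplies the default value.
    private static final ThreadLocal<String> HOLDER =
            ThreadLocal.withInitial(() -> "default");

    public static String[] perThreadValues() throws InterruptedException {
        String[] seen = new String[2];
        HOLDER.set("main-value");               // only changes the current thread's copy
        Thread worker = new Thread(() -> {
            seen[1] = HOLDER.get();             // "default": unaffected by main's set()
            HOLDER.remove();                    // clean up the worker's entry
        });
        worker.start();
        worker.join();
        seen[0] = HOLDER.get();                 // "main-value"
        HOLDER.remove();                        // clean up the main thread's entry
        return seen;
    }

    public static void main(String[] args) throws InterruptedException {
        String[] v = perThreadValues();
        System.out.println(v[0] + " / " + v[1]);
    }
}
```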

38. Do you understand ThreadLocal memory leaks?

The keys in a ThreadLocalMap are weak references to the ThreadLocal objects, while the values are strong references. Therefore, if a ThreadLocal is no longer strongly referenced from outside, its key will be cleaned up during garbage collection but its value will not, leaving entries with a null key in the ThreadLocalMap. If nothing is done, those values are never reclaimed by the GC, and a memory leak can occur. The implementation of ThreadLocalMap anticipates this: calls to set(), get(), and remove() clean up entries whose key is null. Even so, it is best to call remove() manually after you are done using a ThreadLocal.

39. The benefits of using thread pool

Thread reuse, reducing resource consumption: reusing already-created threads lowers the cost of thread creation and destruction.

Control over the maximum concurrency, improving response speed: when a task arrives it can run immediately, without waiting for a thread to be created.

Thread management, improving manageability: threads are a scarce resource; creating them without limit consumes system resources and reduces system stability. A thread pool allows unified allocation, tuning, and monitoring.

40. The difference between implementing the Runnable interface and the Callable interface

  1. The Callable interface emerged from the need for concurrency with asynchronous results: unlike Runnable's run(), Callable's call() returns a value and can throw a checked exception.
  2. The main use of Callable is that when multiple tasks run and one takes a long time, it can execute in the background while the main thread completes other work first; at the end, the main thread waits for the background task to finish and combines the results.
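This pattern can be sketched with Callable wrapped in a FutureTask (names and numbers are illustrative):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.FutureTask;

public class CallableDemo {
    public static int sumInBackground() throws InterruptedException, ExecutionException {
        // Callable returns a result and may throw; Runnable can do neither.
        Callable<Integer> slowTask = () -> {
            Thread.sleep(100);          // pretend this is the long-running task
            return 1 + 2 + 3;
        };
        FutureTask<Integer> future = new FutureTask<>(slowTask);
        new Thread(future).start();     // runs in the background

        int other = 10;                 // the main thread does its own work meanwhile
        return other + future.get();    // get() blocks until the background result is ready
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sumInBackground());   // 16
    }
}
```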

41. Thread pool

Summarized in another article: https://blog.csdn.net/m0_55155505/article/details/125191350

42. Have you understood AQS?

AQS (full name: AbstractQueuedSynchronizer) is a class under the JUC (java.util.concurrent) package. It is a framework for building locks and synchronizers; with AQS you can easily and efficiently construct a wide range of commonly used synchronizers.

43. Components of AQS

Semaphore: a semaphore. It has two main uses: mutual exclusion over multiple shared resources, and controlling the number of concurrent threads. Its two main methods are:

● acquire(): when a thread calls acquire(), it either successfully obtains a permit (the semaphore is decremented by 1) or waits until another thread releases a permit or a timeout occurs.
● release(): increments the semaphore by 1, then wakes a waiting thread.

CountDownLatch: a countdown latch that lets one or more threads wait until a countdown finishes before proceeding. Its two main methods are:

  1. await(): threads that call await() block.
  2. countDown(): each call by another thread decrements the counter by 1; when the counter reaches 0, the threads blocked in await() are woken and continue executing.

CyclicBarrier: its main function is to block a group of threads as they arrive at a barrier (also called a synchronization point). The barrier opens only when the last thread arrives, at which point all the threads blocked at the barrier continue working. Threads enter the barrier by calling CyclicBarrier.await().
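The CountDownLatch behavior described above can be sketched as follows (class name and worker count are illustrative); countDown()/await() also establishes a happens-before edge, so the main thread reliably sees the workers' writes:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class LatchDemo {
    public static int waitForWorkers() throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(2);
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < 2; i++) {
            new Thread(() -> {
                done.incrementAndGet();   // simulate a unit of work
                latch.countDown();        // counter: 2 -> 1 -> 0
            }).start();
        }
        latch.await();                    // blocks until the counter reaches 0
        return done.get();                // 2: both workers finished before we got here
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(waitForWorkers());
    }
}
```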

Origin blog.csdn.net/m0_55155505/article/details/126937184