Multithreading and Concurrency Interview Questions

1. What are the three elements of concurrent programming?

1) Atomicity
Atomicity means that one or more operations either all execute, without being interrupted by any other operation along the way, or none of them execute at all.
2) Visibility
Visibility means that when multiple threads operate on a shared variable, once one thread modifies it, the other threads can immediately see the modified value.
3) Ordering
Ordering means the program executes in the order the code is written.

2. What are the ways to achieve visibility?

synchronized or Lock: ensure that only one thread at a time acquires the lock and executes the code, and flush the latest value to main memory before the lock is released, thereby achieving visibility.

volatile: as discussed later, a write to a volatile variable is flushed to main memory immediately, so other threads read the fresh value.

3. What is the value of multithreading?

1) Exploit multi-core CPUs

Multithreading can genuinely exploit the advantage of a multi-core CPU and make full use of it, accomplishing several things at the same time without them interfering with each other.

2) Prevent blocking

From the perspective of program efficiency, a single-core CPU gains nothing from multithreading; on the contrary, running multiple threads on a single core causes thread context switching, which lowers the program's overall efficiency. But on single-core CPUs we still use multithreading, precisely to prevent blocking. Imagine a single-threaded program on a single-core CPU: if that one thread blocks, for example on a remote read where the peer never responds and no timeout is set, the entire program stops running until the data returns. Multithreading prevents this problem: multiple threads run concurrently, so even if one thread blocks while reading data, the other tasks keep executing.

3) Ease of modeling

This is a less obvious advantage. Suppose there is a large task A. Programming it single-threaded means considering everything at once, and building the whole program model is troublesome. But if you decompose task A into several small tasks B, C, and D, model each one separately, and run them on separate threads, the job becomes much simpler.

4. What are the ways to create threads?

1) Inherit the Thread class to create a thread class 

2) Create a thread class through the Runnable interface 

3) Create a thread through Callable and Future 

4) Create a thread through the thread pool
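The four approaches above can be sketched in one small program. This is a minimal illustration (the class and method names are my own, not from the original text):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;

public class CreateThreadDemo {

    // 1) Subclass Thread and override run()
    static class MyThread extends Thread {
        @Override public void run() { System.out.println("from Thread subclass"); }
    }

    // 3) A Callable wrapped in a FutureTask returns a value; get() blocks for it
    static int callableResult() throws Exception {
        FutureTask<Integer> task = new FutureTask<>(() -> 1 + 1);
        new Thread(task).start();
        return task.get();
    }

    public static void main(String[] args) throws Exception {
        new MyThread().start();                                         // way 1
        new Thread(() -> System.out.println("from Runnable")).start();  // way 2
        System.out.println("from Callable: " + callableResult());       // way 3

        // 4) Submit work to a thread pool instead of creating threads by hand
        ExecutorService pool = Executors.newFixedThreadPool(1);
        Future<?> f = pool.submit(() -> System.out.println("from pool"));
        f.get();           // wait for the pooled task to finish
        pool.shutdown();
    }
}
```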

5. Comparison of the three ways of creating threads

1) Creating threads by implementing the Runnable or Callable interface.

Advantages: the thread class only implements the Runnable or Callable interface, so it can still inherit from another class. Multiple threads can also share the same target object, which makes this approach well suited to several threads processing the same resource; CPU, code, and data are separated, forming a clear model that better reflects object-oriented design.

Disadvantages: programming is slightly more complicated, and if you want to access the current thread you must call Thread.currentThread().

2) Creating threads by inheriting the Thread class.

Advantages: simple to write; if you need the current thread, you can use this directly instead of Thread.currentThread().

Disadvantages: because the thread class already extends Thread, it cannot inherit from any other parent class.

3) The difference between Runnable and Callable

The method Callable specifies (and you override) is call(); for Runnable it is run().

A Callable task can return a value after execution; a Runnable task cannot.

The call() method can throw checked exceptions; the run() method cannot.

Running a Callable task yields a Future object, which represents the result of the asynchronous computation. It provides methods to check whether the computation is complete, to wait for completion, and to retrieve the result. Through the Future object you can inspect the task's progress, cancel it, and obtain its execution result.
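The Callable-plus-Future pattern described above can be sketched as follows (the method name `compute` is illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FutureDemo {
    static int compute() throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        // A Callable's call() can return a value and throw checked exceptions;
        // a Runnable's run() can do neither.
        Future<Integer> future = pool.submit(() -> 21 * 2);
        int result = future.get();     // blocks until the computation finishes
        pool.shutdown();
        return result;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(compute()); // 42
        // Future also offers isDone() to poll and cancel(true) to interrupt the task.
    }
}
```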

6. The state flow of a thread

1. New
A thread class can be obtained by implementing Runnable or inheriting Thread. Once an instance is created with new, the thread is in the new (initial) state.

2.1. Ready
The ready state only means the thread is eligible to run; if the scheduler never picks it, it stays ready forever.
Calling the thread's start() method moves it into the ready state.
A thread also returns to the ready state when its sleep() ends, when a join() on another thread returns, when awaited user input completes, or when it acquires an object lock.
When the current thread's time slice runs out, or its yield() method is called, the current thread goes back to the ready state.
A thread in the lock pool enters the ready state once it obtains the object lock.

2.2. Running
The state of the thread the scheduler has selected from the runnable pool as the current thread. This is also the only way a thread can enter the running state.

3. Blocked
The state of a thread that blocks while entering a method or code block guarded by the synchronized keyword (i.e., while acquiring the lock).

4. Waiting
Threads in this state are not allocated CPU time. They wait to be explicitly awakened; otherwise they wait indefinitely. A thread enters the waiting queue when it calls Object.wait(). Before calling obj's wait() and notify() methods, the obj lock must be held, i.e., the calls must sit inside a synchronized(obj) block.

5. Timed waiting
Threads in this state are not allocated CPU time either, but they need not wait indefinitely to be awakened by another thread: they wake automatically after the given period.

6. Terminated
When the thread's run() method completes, or the main thread's main() method completes, we consider the thread terminated. The Thread object may still be alive, but it is no longer a separately executing thread. Once terminated, a thread cannot be restarted; calling start() on a terminated thread throws java.lang.IllegalThreadStateException.

7. Java threads have five basic states

1) New: when the thread object is created, it enters the new state, e.g. Thread t = new MyThread();

2) Runnable (ready): when the thread object's start() method is called (t.start();), the thread enters the ready state. A thread in the ready state is merely prepared, waiting for the CPU to schedule it at any time; it does not mean the thread executes immediately after t.start() returns.

3) Running: when the CPU schedules a thread that is in the ready state, that thread can actually execute; it has entered the running state. Note: the ready state is the only entrance to the running state; a thread that wants to run must first be in the ready state.

4) Blocked: a thread in the running state temporarily gives up its use of the CPU for some reason and stops executing; it has entered the blocked state. Only after returning to the ready state does it get another chance to be scheduled into the running state.

Depending on the cause, blocking can be divided into three kinds:

a. Waiting: a running thread executes the wait() method, putting it into the waiting state;

b. Synchronization blocking: the thread fails to acquire a synchronized lock (because another thread holds it) and enters the synchronization-blocked state;

c. Other blocking: by calling the thread's sleep() or join(), or by issuing an I/O request, the thread enters the blocked state. When sleep() times out, join() returns because the target thread terminated or the wait timed out, or the I/O completes, the thread returns to the ready state.

5) Dead: the thread finishes its run() method or exits it due to an exception, and the thread's life cycle ends.

8. What is a thread pool? What are the ways to create it?

A thread pool creates several threads in advance. When there are tasks to process, the pool's threads handle them; after processing, a thread is not destroyed but waits for the next task. Because creating and destroying threads consumes system resources, a thread pool can improve system performance when threads would otherwise be created and destroyed frequently.

Java provides implementations of the java.util.concurrent.Executor interface for creating thread pools.

9. The four ways to create thread pools:

1) newCachedThreadPool creates a cacheable thread pool 

2) newFixedThreadPool creates a fixed-length thread pool to control the maximum concurrent number of threads. 

3) newScheduledThreadPool creates a fixed-length thread pool to support timing and periodic task execution. 

4) newSingleThreadExecutor creates a single-threaded thread pool, which will only use a single worker thread to perform tasks.
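The four factory methods can be exercised in a small sketch (the helper method `runOnSingle` is illustrative, not part of the JDK):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PoolFactoryDemo {
    // Submit one task to a single-thread pool and return its result
    static String runOnSingle() throws Exception {
        ExecutorService single = Executors.newSingleThreadExecutor(); // one worker, FIFO order
        String result = single.submit(() -> "ok").get();
        single.shutdown();
        return result;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService cached = Executors.newCachedThreadPool();  // grows on demand, reuses idle threads
        ExecutorService fixed = Executors.newFixedThreadPool(4);   // caps concurrency at 4 threads
        ScheduledExecutorService sched = Executors.newScheduledThreadPool(2); // delayed/periodic tasks

        cached.submit(() -> System.out.println("cached: " + Thread.currentThread().getName()));
        fixed.submit(() -> System.out.println("fixed: " + Thread.currentThread().getName()));
        sched.schedule(() -> System.out.println("scheduled after 100 ms"), 100, TimeUnit.MILLISECONDS);
        System.out.println("single: " + runOnSingle());

        cached.shutdown();
        fixed.shutdown();
        sched.shutdown();
        sched.awaitTermination(1, TimeUnit.SECONDS);
    }
}
```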

10. What are the advantages of thread pools?

1) Reuse existing threads, reducing the overhead of object creation and destruction.

2) Effectively control the maximum number of concurrent threads, improving system resource utilization and avoiding excessive resource contention and blocking.

3) Provide features such as scheduled execution, periodic execution, single-threaded execution, and concurrency control.

11. What are the commonly used concurrency tools?

CountDownLatch

CyclicBarrier

Semaphore

Exchanger

12. The difference between CyclicBarrier and CountDownLatch

1) With CountDownLatch, put simply, one thread waits until the other threads it depends on have finished and signaled by calling countDown(); then the waiting thread continues executing.

2) With CyclicBarrier, all threads wait until every one of them has reached await(); then all of them proceed at the same time.

3) A CountDownLatch's counter can only be used once, while a CyclicBarrier's counter can be reset with the reset() method, so CyclicBarrier can handle more complex scenarios. For example, if a computation goes wrong, you can reset the counter and have the threads run it again.

4) CyclicBarrier also provides other useful methods, such as getNumberWaiting() to get the number of threads blocked at the barrier, and isBroken() to learn whether a blocked thread was interrupted; it returns true if so, otherwise false.
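The contrast described above can be sketched with two small helper methods (the names `latchDemo` and `barrierDemo` are illustrative):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.atomic.AtomicInteger;

public class LatchVsBarrier {
    // CountDownLatch: the main thread waits until N workers have called countDown()
    static int latchDemo(int workers) throws Exception {
        CountDownLatch latch = new CountDownLatch(workers);
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < workers; i++) {
            new Thread(() -> { done.incrementAndGet(); latch.countDown(); }).start();
        }
        latch.await();               // blocks until the count reaches zero
        return done.get();
    }

    // CyclicBarrier: all parties block in await() until the last arrives, then proceed together
    static int barrierDemo(int parties) throws Exception {
        AtomicInteger released = new AtomicInteger();
        CyclicBarrier barrier = new CyclicBarrier(parties);
        Thread[] ts = new Thread[parties];
        for (int i = 0; i < parties; i++) {
            ts[i] = new Thread(() -> {
                try { barrier.await(); released.incrementAndGet(); } catch (Exception ignored) {}
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        return released.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(latchDemo(3));   // 3
        System.out.println(barrierDemo(3)); // 3
    }
}
```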

13. What is the role of synchronized?

In Java, the synchronized keyword is used to control thread synchronization, that is, to ensure that in a multithreaded environment a synchronized section of code is not executed by multiple threads at the same time.

synchronized can be applied to a block of code or to a method.
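Both forms can be sketched together. This minimal example (the class and method names are my own) guards a counter so that concurrent increments are never lost:

```java
public class SyncCounter {
    private int count = 0;
    private final Object lock = new Object();

    // synchronized method: implicitly locks on `this`
    public synchronized void incMethod() { count++; }

    // synchronized block: locks on an explicit monitor object.
    // Note the two forms guard DIFFERENT monitors here, so they should
    // not be mixed for the same field; this demo uses only the block form.
    public void incBlock() { synchronized (lock) { count++; } }

    public int count() { synchronized (lock) { return count; } }

    static int raceFree(int threads, int perThread) throws InterruptedException {
        SyncCounter c = new SyncCounter();
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> { for (int j = 0; j < perThread; j++) c.incBlock(); });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        return c.count();   // without the lock this would usually be < threads * perThread
    }

    public static void main(String[] args) throws Exception {
        System.out.println(raceFree(4, 1000)); // 4000
    }
}
```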

14. The role of the volatile keyword

For visibility, Java provides the volatile keyword.

When a shared variable is declared volatile, any modification to it is guaranteed to be flushed to main memory immediately, and other threads read the new value from main memory when they need it.

From a practical point of view, an important use of volatile is in combination with CAS to guarantee atomicity; for details, see the classes under the java.util.concurrent.atomic package, such as AtomicInteger.

15. What is CAS

CAS is the abbreviation of compare-and-swap, i.e., what we call compare-and-exchange.

CAS is a lock-free operation, a form of optimistic locking. In Java, locks fall into optimistic and pessimistic kinds. A pessimistic lock locks the resource: the next thread can access it only after the thread that previously acquired the lock releases it. An optimistic approach manipulates the resource in some way without locking, for example by attaching a version number to the record; its performance is much better than pessimistic locking.

A CAS operation consists of three operands: the memory location (V), the expected old value (A), and the new value (B). If the value at the memory address equals A, the value in memory is updated to B. CAS retries in a loop: if in one round the value thread a read has meanwhile been modified by thread b, thread a spins, and it may get its chance in the next iteration.

Most classes under the java.util.concurrent.atomic package (AtomicInteger, AtomicBoolean, AtomicLong) are implemented with CAS operations.
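The read-compute-compareAndSet retry loop described above can be written out explicitly (the method name `addTenFrom` is illustrative; AtomicInteger's own incrementAndGet uses the same idea internally):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    // Classic CAS retry loop: read the current value, compute the new one,
    // and retry if another thread changed the value in between.
    static int addTenFrom(int start) {
        AtomicInteger counter = new AtomicInteger(start);
        int oldVal, newVal;
        do {
            oldVal = counter.get();
            newVal = oldVal + 10;
        } while (!counter.compareAndSet(oldVal, newVal)); // failure => spin and retry
        return counter.get();
    }

    public static void main(String[] args) {
        System.out.println(addTenFrom(5)); // 15
    }
}
```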

16. The CAS problem

1) CAS can cause the ABA problem.

A thread changes the value from a to b, then back to a. CAS then believes nothing changed, although it has. This can be solved by adding a version number that is incremented by 1 on every operation. Since Java 5, AtomicStampedReference has been provided to solve this problem.

2) It cannot guarantee the atomicity of a code block.

The CAS mechanism guarantees the atomic operation of a single variable, but not the atomicity of an entire code block. If, for example, three variables must be updated together atomically, you have to use synchronized.

3) CAS can increase CPU usage.

As mentioned before, CAS is a looping, retrying process; if the thread keeps failing to succeed, it keeps occupying CPU resources.
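The AtomicStampedReference fix for the ABA problem mentioned above can be sketched as follows (the method name `staleCasSucceeds` is illustrative; note the deliberate use of small Integer values, since AtomicStampedReference compares references with == and Integers in [-128, 127] come from the autoboxing cache):

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    // Returns whether a CAS holding a stale stamp succeeds after an A -> B -> A sequence.
    static boolean staleCasSucceeds() {
        AtomicStampedReference<Integer> ref = new AtomicStampedReference<>(1, 0);
        int stamp = ref.getStamp();                     // 0

        ref.compareAndSet(1, 2, stamp, stamp + 1);      // A -> B, stamp 0 -> 1
        ref.compareAndSet(2, 1, stamp + 1, stamp + 2);  // B -> A, stamp 1 -> 2

        // The value is 1 again, but the stamp moved on, so a CAS with the old stamp fails
        return ref.compareAndSet(1, 3, stamp, stamp + 1);
    }

    public static void main(String[] args) {
        System.out.println(staleCasSucceeds()); // false: the stamp exposed the A -> B -> A change
    }
}
```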

17. What is Future?

In concurrent programming we often use a non-blocking model. Of the thread-creation approaches above, neither inheriting the Thread class nor implementing the Runnable interface guarantees access to a previous execution result. By implementing the Callable interface and using Future, you can receive the result of multithreaded execution.

Future represents the result of an asynchronous task that may not have completed yet; a callback can be attached to this result so that the corresponding action runs after the task succeeds or fails.

18. What is AQS

AQS is the abbreviation of AbstractQueuedSynchronizer. It is a low-level synchronization utility class in Java that uses a variable of type int to represent the synchronization state and provides a series of CAS operations to manage that state.

AQS is a framework for building locks and synchronizers. Using AQS, a wide range of synchronizers can be constructed easily and efficiently: ReentrantLock, Semaphore, and others such as ReentrantReadWriteLock, SynchronousQueue, and FutureTask are all based on AQS.

19. AQS supports two synchronization methods:

1) Exclusive type 

2) Shared type 

This makes it convenient to implement different types of synchronization components: exclusive ones such as ReentrantLock, shared ones such as Semaphore and CountDownLatch, and combined ones such as ReentrantReadWriteLock. In short, AQS provides the underlying support; how to assemble and realize a synchronizer is up to the user.

20. What is ReadWriteLock

First, to be clear: it is not that ReentrantLock is bad, but it has limitations. ReentrantLock may be used to prevent data inconsistency between thread A writing and thread B reading, but if thread C and thread D are both only reading, reading does not change the data and locking is unnecessary; locking anyway reduces the program's performance.

Because of this, ReadWriteLock was born. ReadWriteLock is a read-write lock interface, and ReentrantReadWriteLock is a concrete implementation of it. It separates reading and writing: the read lock is shared and the write lock is exclusive. Reads do not exclude each other, but read and write, and write and write, are mutually exclusive, which improves read-write performance.
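The shared-read, exclusive-write split can be sketched with a tiny cache (the class name `RwCache` is illustrative):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RwCache {
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    private int value;

    // Many readers may hold the read lock at the same time
    public int read() {
        rw.readLock().lock();
        try { return value; } finally { rw.readLock().unlock(); }
    }

    // The write lock is exclusive: it excludes both readers and other writers
    public void write(int v) {
        rw.writeLock().lock();
        try { value = v; } finally { rw.writeLock().unlock(); }
    }

    public static void main(String[] args) {
        RwCache cache = new RwCache();
        cache.write(42);
        System.out.println(cache.read()); // 42
    }
}
```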

21. What is FutureTask

As mentioned earlier, FutureTask represents an asynchronous computation task. A concrete Callable implementation can be passed into a FutureTask, which lets you wait for the task's result, check whether it has completed, and cancel it. Since FutureTask is also an implementation of the Runnable interface, it can be submitted to a thread pool as well.

22. The difference between synchronized and ReentrantLock

synchronized is a keyword like if, else, for, and while, whereas ReentrantLock is a class; that is the essential difference between the two. Being a class, ReentrantLock provides more numerous and more flexible features than synchronized: it can be inherited, can have methods, and can have various class variables. ReentrantLock is more extensible than synchronized in several respects:

1) ReentrantLock can set a waiting time for acquiring the lock, thus avoiding deadlock;

2) ReentrantLock can obtain various kinds of lock information;

3) ReentrantLock can flexibly implement multiple condition notifications.

Besides, the locking mechanisms of the two differ. The bottom layer of ReentrantLock calls Unsafe's park method to block, while synchronized should operate on the mark word in the object header, though I am not certain of this.

23. What is an optimistic lock and a pessimistic lock

1) Optimistic lock: as its name suggests, it is optimistic about the thread-safety problems caused by concurrent operations. An optimistic lock assumes contention does not always occur, so it does not hold a lock; instead it attempts to modify the variable in memory using compare-and-replace as a single atomic operation. Failure means a conflict occurred, and there should be corresponding retry logic.

2) Pessimistic lock: again as its name suggests, it is pessimistic about the thread-safety problems caused by concurrent operations. A pessimistic lock assumes contention will always occur, so it holds an exclusive lock every time it operates on a resource, just like synchronized: with the lock held, the resource can be operated on directly regardless of the situation.

24. How does thread B know that thread A has modified the variable?

volatile-modified variables

synchronized-modified methods and code blocks

wait/notify

while polling

25. Comparison of synchronized, volatile, and CAS

synchronized is a pessimistic lock; it is preemptive and causes other threads to block.

volatile provides visibility of shared variables across threads and forbids instruction-reordering optimizations.

CAS is an optimistic (non-blocking) lock based on conflict detection.

26. What is the difference between sleep method and wait method?

This question is asked often. Both sleep() and wait() can be used to give up the CPU for some time. The difference: if the thread holds an object's monitor, sleep() does not release that monitor, whereas wait() does.

27. What is ThreadLocal? What is the use?

ThreadLocal is a utility class for thread-local copies of variables. It maps each thread to its own private copy of an object, so variables do not interfere across threads. In high-concurrency scenarios it enables stateless calls and is especially suitable when each thread depends on its own variable values to complete its work.

Simply put, ThreadLocal trades space for time. Each Thread maintains a ThreadLocal.ThreadLocalMap implemented with open addressing. The data is isolated and not shared, so there is naturally no thread-safety problem.
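The per-thread isolation described above can be demonstrated in a few lines (the helper method `runInThread` is illustrative):

```java
public class ThreadLocalDemo {
    // Each thread sees its own copy; withInitial supplies the per-thread default
    private static final ThreadLocal<StringBuilder> BUF =
            ThreadLocal.withInitial(StringBuilder::new);

    static String runInThread(String name) throws InterruptedException {
        StringBuilder[] result = new StringBuilder[1];
        Thread t = new Thread(() -> {
            BUF.get().append(name);   // mutates only this thread's copy
            result[0] = BUF.get();
        });
        t.start();
        t.join();
        return result[0].toString();
    }

    public static void main(String[] args) throws Exception {
        BUF.get().append("main");
        System.out.println(runInThread("worker")); // worker: main's data is not visible there
        System.out.println(BUF.get());             // main: untouched by the worker thread
    }
}
```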

28. Why wait() method and notify()/notifyAll() method should be called in synchronized block

This is mandated by the JDK: both the wait() method and the notify()/notifyAll() methods must be called while holding the object's lock.

29. What are the methods for multi-thread synchronization?

The synchronized keyword, Lock implementations, distributed locks, etc.

30. Thread scheduling strategy

The thread scheduler selects the highest-priority thread to run, but the thread will leave the running state if:

1) the thread body calls the yield method, giving up its claim on the CPU;

2) the thread body calls the sleep method, putting the thread to sleep;

3) the thread blocks on an I/O operation;

4) another thread with higher priority appears;

5) in a system that supports time slices, the thread's time slice runs out.

31. What is the concurrency level of ConcurrentHashMap

The concurrency level of ConcurrentHashMap is the number of segments, 16 by default, meaning up to 16 threads can operate on a ConcurrentHashMap at the same time. (This describes the segmented, pre-Java-8 implementation.) This is also ConcurrentHashMap's biggest advantage over Hashtable: Hashtable can never have two threads fetching its data at the same time.

32. How to find which thread uses the most CPU in a Linux environment

1) Obtain the project's pid with jps or ps -ef | grep java, as mentioned earlier;

2) Run top -H -p pid; the order of the options cannot be changed.

33. Java deadlock and how to avoid it?

A deadlock in Java is a programming situation in which two or more threads are blocked forever; a Java deadlock involves at least two threads and two or more resources.

The root cause of deadlock in Java is a circular chain of dependencies formed while applying for locks.

34. Causes of deadlock

1) Multiple threads involve multiple locks, and the locks are acquired crosswise, which can produce a closed loop of lock dependencies.

For example, one thread applies for lock B while it has acquired lock A and not yet released it; meanwhile another thread has acquired lock B and must acquire lock A before it will release B. A closed loop arises and both threads fall into deadlock.

2) The default lock-request operation blocks.

Therefore, to avoid deadlock, whenever multiple object locks intersect, carefully review all methods of the classes involved for loops that could create lock dependencies. In short, try to avoid calling delaying methods and synchronized methods of other objects inside a synchronized method.

35. How to wake up a blocked thread

If the thread is blocked in wait(), sleep(), or join(), you can interrupt it; it wakes by throwing InterruptedException. If the thread is blocked on I/O, there is little you can do, because I/O is implemented by the operating system, and Java code has no way to touch the operating system directly.

36. How do immutable objects help multithreading?

As mentioned earlier, immutable objects guarantee memory visibility, and reading an immutable object requires no additional synchronization, which improves the efficiency of code execution.

37. What is multi-threaded context switching

Multi-threaded context switching refers to the process in which CPU control is switched from an already running thread to another thread that is ready and waiting to obtain CPU execution rights.

38. What happens if the thread pool queue is full when you submit a task?

It depends on the queue:

1) If you use an unbounded queue such as LinkedBlockingQueue, nothing special happens: tasks keep being added to the blocking queue to wait for execution, because LinkedBlockingQueue can be regarded as nearly infinite and can hold tasks without limit.

2) If you use a bounded queue such as ArrayBlockingQueue, tasks are first added to the ArrayBlockingQueue. When it is full, the number of threads is increased up to the value of maximumPoolSize. If even the extra threads cannot keep up and the ArrayBlockingQueue stays full, the rejection policy RejectedExecutionHandler handles the overflow; the default is AbortPolicy.
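The bounded-queue behavior can be triggered deliberately with a tiny pool. A sketch (the pool sizes, sleep, and method name are my own choices, not from the original text): with core size 1, maximum size 2, and queue capacity 1, the fourth simultaneous task must be rejected.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPoolDemo {
    static boolean submitUntilRejected(int tasks) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2, 0L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(1));   // default handler is AbortPolicy

        boolean rejected = false;
        try {
            for (int i = 0; i < tasks; i++) {
                // Each task sleeps so the first three stay busy or queued
                pool.execute(() -> {
                    try { Thread.sleep(200); } catch (InterruptedException ignored) {}
                });
            }
        } catch (RejectedExecutionException e) {
            rejected = true;  // queue full AND maximumPoolSize already reached
        } finally {
            pool.shutdownNow();
        }
        return rejected;
    }

    public static void main(String[] args) {
        System.out.println(submitUntilRejected(4)); // true
    }
}
```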

39. What is the thread scheduling algorithm used in Java

Preemptive. After a thread exhausts its CPU time, the operating system computes an overall priority from data such as thread priority and thread starvation, and allocates the next time slice to some thread for execution.

40. What is Thread Scheduler and Time Slicing?

The thread scheduler is an operating system service responsible for allocating CPU time to threads in the Runnable state. Once we create and start a thread, its execution depends on the thread scheduler's implementation. Time slicing is the process of dividing available CPU time among the runnable threads; allocation can be based on thread priority or on how long a thread has been waiting. Thread scheduling is not under the Java virtual machine's control, so it is a better choice for the application not to rely on it (that is, do not let your program depend on thread priorities).

41. What is spinning

Much of the code inside synchronized blocks is simple and executes very quickly. Blocking all the waiting threads in that case may not be a worthwhile operation, because thread blocking involves switching between user mode and kernel mode. Since the synchronized code executes quickly, it can be better to let the threads waiting for the lock not block but instead busy-loop at the boundary of the synchronized block; this is spinning. If, after several busy loops, the lock still has not been obtained, then block. This may be the better strategy.

42. What is the Lock interface in the Java Concurrency API? What are its advantages over synchronized?

The Lock interface provides more extensible locking operations than synchronized methods and synchronized blocks. It allows more flexible structures, can have quite different properties, and can support multiple associated Condition objects.

Its advantages:

it can make the lock fair;

it lets a thread respond to interruption while waiting for the lock;

it lets a thread try to acquire the lock and return immediately, or wait a bounded period, when the lock cannot be acquired;

it can acquire and release locks in different scopes and in different orders.
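The "try and give up" advantage can be sketched with ReentrantLock.tryLock (the method name `acquireWhileHeld` is illustrative): a second thread attempts the lock with a timeout while the main thread holds it, something synchronized cannot express.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    // Returns whether another thread could acquire the lock while we hold it
    static boolean acquireWhileHeld() throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();                          // main thread holds the lock
        try {
            final boolean[] got = new boolean[1];
            Thread t = new Thread(() -> {
                try {
                    // Wait at most 50 ms instead of blocking forever
                    got[0] = lock.tryLock(50, TimeUnit.MILLISECONDS);
                    if (got[0]) lock.unlock();
                } catch (InterruptedException ignored) {}
            });
            t.start();
            t.join();
            return got[0];                    // false: the lock was never released
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(acquireWhileHeld()); // false
        // new ReentrantLock(true) would additionally make the lock fair.
    }
}
```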

 

43. Thread safety of singleton mode

It is an old question. First: the thread safety of the singleton pattern means that an instance of the class is created only once, even in a multithreaded environment. There are many ways to write a singleton; to summarize:

1) Eager (hungry-style) singleton: thread-safe

2) Lazy singleton: not thread-safe

3) Double-checked-locking singleton: thread-safe
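The thread-safe double-checked-locking form can be sketched as follows; the volatile on the field is essential (without it, another thread could observe a half-constructed instance due to reordering):

```java
public class Singleton {
    // volatile prevents other threads from seeing a half-constructed instance
    private static volatile Singleton instance;

    private Singleton() {}

    public static Singleton getInstance() {
        if (instance == null) {                 // first check, without the lock
            synchronized (Singleton.class) {
                if (instance == null) {         // second check, under the lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }

    public static void main(String[] args) {
        System.out.println(Singleton.getInstance() == Singleton.getInstance()); // true
    }
}
```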

44. What is the role of Semaphore?

A Semaphore is a counting semaphore; its role is to limit the concurrency of a code block. Semaphore has a constructor taking an int n, meaning a certain piece of code may be accessed by at most n threads at once. Beyond n, a thread must wait until some thread finishes executing the block before the next one can enter. It follows that if n = 1 is passed to the Semaphore constructor, it becomes equivalent to synchronized.
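The "at most n threads inside" guarantee can be observed directly (the method name `maxObservedConcurrency` and the sleep are my own scaffolding):

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

public class SemaphoreDemo {
    // Runs `threads` workers through a section guarded by `permits` permits
    // and reports the highest concurrency ever observed inside the section.
    static int maxObservedConcurrency(int permits, int threads) throws InterruptedException {
        Semaphore sem = new Semaphore(permits);
        AtomicInteger inside = new AtomicInteger();
        AtomicInteger maxSeen = new AtomicInteger();
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                try {
                    sem.acquire();                 // blocks when no permit is free
                    try {
                        int now = inside.incrementAndGet();
                        maxSeen.accumulateAndGet(now, Math::max);
                        Thread.sleep(20);          // hold the permit briefly
                        inside.decrementAndGet();
                    } finally {
                        sem.release();
                    }
                } catch (InterruptedException ignored) {}
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        return maxSeen.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(maxObservedConcurrency(2, 6)); // never exceeds 2
    }
}
```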

45. What is the Executors class?

Executors provides utility methods for the Executor, ExecutorService, ScheduledExecutorService, ThreadFactory, and Callable types.

Executors can be used to create thread pools conveniently.

46. Which thread calls a Thread subclass's constructor and static block?

This is a tricky question. Remember: a thread class's constructor and static block are called by the thread in which the thread object is new-ed, while the code in the run() method is executed by the thread itself.

If that statement confuses you, an example: suppose Thread1 is new-ed inside Thread2, and Thread2 is new-ed in the main function. Then:

1) Thread2's constructor and static block are called by the main thread, and Thread2's run() method is executed by Thread2 itself;

2) Thread1's constructor and static block are called by Thread2, and Thread1's run() method is executed by Thread1 itself.

47. Which is a better choice, a synchronized method or a synchronized block?

A synchronized block, in general. A synchronized method locks for the entire method, while a synchronized block can guard only the code that actually needs mutual exclusion, keeping the critical section, and thus contention, as small as possible.

48. What problems are caused by too many Java threads?

1) The life-cycle overhead of threads is very high.

2) Excessive consumption of CPU resources.

If there are more runnable threads than available processors, some threads sit idle. A large number of idle threads occupies a lot of memory, putting pressure on the garbage collector, and many threads competing for CPU resources also produce other performance overhead.

3) Reduced stability.

The JVM limits the number of threads that can be created. The limit varies by platform and is subject to several factors, including the JVM's startup parameters, the requested stack size in the Thread constructor, and the underlying operating system's limits on threads. Violating these limits may cause an OutOfMemoryError to be thrown.

 

 

Origin blog.csdn.net/Tom_sensen/article/details/109839106