
Concurrent programming in Java

Java concurrent programming interview questions (60 questions)

Basics

1. What is the difference between parallelism and concurrency?

From the perspective of the operating system, a thread is the smallest unit of CPU allocation.

  • Parallelism means two threads are executing at the same moment. This requires at least two CPU cores, each running one thread.
  • Concurrency means only one thread executes at any given moment, but over a period of time both threads get executed. Concurrency relies on the CPU switching between threads; because each switch is very short, the user basically cannot perceive it.


It is just like getting food in the canteen: parallelism is lining up at multiple windows, with several servers dishing out food at the same time; concurrency is everyone crowding around one window, while a single server gives one person a spoonful, then turns to give another person a spoonful.


2. What are processes and threads?

To talk about threads, we must first talk about processes.

  • Process: A process is a single run of a program over a data set. It is the basic unit of resource allocation and scheduling in the system.
  • Thread: A thread is one execution path within a process. A process has at least one thread, and the threads within a process share the process's resources.

When the operating system allocates resources, it allocates them to processes; but the CPU is special: it is allocated to threads, because it is threads that actually occupy the CPU and run. That is why the thread is also called the basic unit of CPU allocation.

For example, in Java, when we start the main function, we actually start a JVM process, and the thread in which the main function is located is a thread in this process, also called the main thread.


There are multiple threads in a process. Multiple threads share the heap and method area resources of the process, but each thread has its own program counter and stack.

3. Tell me how many ways to create a thread?

There are three main ways to create threads in Java: inheriting the Thread class, implementing the Runnable interface, and implementing the Callable interface.


  • Inherit the Thread class, override the run() method, and call the start() method to start the thread.
public class ThreadTest {

    /**
     * Inherit the Thread class
     */
    public static class MyThread extends Thread {
        @Override
        public void run() {
            System.out.println("This is child thread");
        }
    }

    public static void main(String[] args) {
        MyThread thread = new MyThread();
        thread.start();
    }
}
  • Implement the Runnable interface and override the run() method
public class RunnableTask implements Runnable {

    @Override
    public void run() {
        System.out.println("Runnable!");
    }

    public static void main(String[] args) {
        RunnableTask task = new RunnableTask();
        new Thread(task).start();
    }
}

Both of the above have no return value, but what if we need to get the execution result of the thread?

  • Implement the Callable interface and override the call() method. This way, you can obtain the task's return value through a FutureTask.
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.FutureTask;

public class CallerTask implements Callable<String> {

    @Override
    public String call() throws Exception {
        return "Hello,i am running!";
    }

    public static void main(String[] args) {
        //Create the asynchronous task
        FutureTask<String> task = new FutureTask<>(new CallerTask());
        //Start the thread
        new Thread(task).start();
        try {
            //Wait for completion and get the result
            String result = task.get();
            System.out.println(result);
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (ExecutionException e) {
            e.printStackTrace();
        }
    }
}

4. Why is the run() method executed when the start() method is called? Why not call the run() method directly?

When the JVM executes the start() method, it first creates a new thread, and it is that new thread that executes the run() method; this is what produces the multi-threading effect.


Why can't we call run() directly? If you call Thread's run() method directly, it runs in the current (main) thread like an ordinary method call: that is just sequential execution, and no multi-threading effect is achieved.
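A minimal sketch that makes the difference observable (the thread names are the JVM's defaults):

public class StartVsRun {
    public static void main(String[] args) {
        Thread t = new Thread(() ->
                System.out.println("run() executed in: " + Thread.currentThread().getName()));

        t.run();   // ordinary method call: prints "run() executed in: main"
        t.start(); // new thread: typically prints "run() executed in: Thread-0"
    }
}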

5. What are the commonly used scheduling methods for threads?


Thread waiting and notification

There are some functions in the Object class that can be used for thread waiting and notification.

  • wait(): When thread A calls the wait() method of a shared object, thread A is blocked and suspended, and returns only when one of the following happens:
    (1) another thread calls the shared object's notify() or notifyAll() method;
    (2) another thread calls thread A's interrupt() method, in which case thread A throws InterruptedException and returns.
  • wait(long timeout): This method has one extra timeout parameter compared with wait(). The difference is that if thread A calls wait(long timeout) on the shared object and is not woken by another thread within timeout milliseconds, the method returns anyway because of the timeout.
  • wait(long timeout, int nanos): internally delegates to wait(long timeout).

The above are the methods for thread waiting, and the following two methods are mainly used to wake up the thread:

  • notify(): After thread A calls notify() on a shared object, it wakes up one thread that was suspended after calling a wait-series method on that shared object. Multiple threads may be waiting on the same shared variable, and which waiting thread gets woken is arbitrary.
  • notifyAll(): Unlike notify(), which wakes a single thread blocked on the shared variable, notifyAll() wakes all the threads suspended on that shared variable due to wait-series calls. A minimal sketch follows.
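A minimal wait/notify sketch (the ready flag and the 100 ms sleep are only for illustration); note that wait()/notify() must be called while holding the object's lock, and the while loop guards against spurious wakeups:

public class WaitNotifyDemo {

    private static final Object lock = new Object();
    private static boolean ready = false;

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            synchronized (lock) {
                while (!ready) {          // guard against spurious wakeups
                    try {
                        lock.wait();      // releases the lock and suspends
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
                System.out.println("woken up, ready=" + ready);
            }
        });
        waiter.start();

        Thread.sleep(100);                // give the waiter time to block first
        synchronized (lock) {
            ready = true;
            lock.notify();                // wakes one thread waiting on lock
        }
    }
}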

The Thread class also provides a method for waiting:

  • join(): If thread A executes the statement thread.join(), it means: the current thread A waits until the thread thread terminates before returning from thread.join().

Thread sleep

  • sleep(long millis): A static method of the Thread class. When a running thread A calls Thread's sleep method, thread A temporarily gives up the right to execute for the specified time, but any monitor resources thread A holds, such as locks, are kept and not released. When the sleep time is up, the method returns normally, the thread takes part in CPU scheduling again, and resumes running once it is given CPU time.

Thread yield

  • yield(): A static method of the Thread class. When a thread calls yield, it hints to the thread scheduler that the current thread is willing to give up the CPU, but the scheduler is free to ignore this hint unconditionally.

Thread interrupt

Thread interruption in Java is a cooperation mode between threads. Setting the interruption flag of a thread does not directly terminate the execution of the thread. Instead, the interrupted thread handles it by itself according to the interruption status.

  • void interrupt(): Interrupts the thread. For example, while thread A is running, thread B can call A's interrupt() method to set A's interrupt flag to true and return immediately. Setting the flag is all it does: thread A is not actually stopped and keeps executing.
  • boolean isInterrupted(): Checks whether the thread it is called on has been interrupted.
  • boolean interrupted(): A static method that checks whether the current thread has been interrupted. Unlike isInterrupted(), this method clears the interrupt flag if it finds the current thread interrupted.
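A minimal sketch of this cooperative style (the 100 ms sleep is only to let the worker start first):

public class InterruptDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            // The flag does not stop the thread; the thread checks it itself.
            while (!Thread.currentThread().isInterrupted()) {
                // ... do work ...
            }
            System.out.println("interrupt flag seen, exiting cooperatively");
        });
        worker.start();

        Thread.sleep(100);
        worker.interrupt(); // only sets the flag; the worker decides when to stop
    }
}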

6. How many states does a thread have?

In Java, a thread has six states: NEW, RUNNABLE, BLOCKED, WAITING, TIMED_WAITING, and TERMINATED.


A thread is not fixed in one state throughout its life cycle; it switches between these states as the code executes.


7. What is thread context switching?

The purpose of using multi-threading is to make full use of the CPU, but as we know, concurrency actually means one CPU coping with multiple threads.


To make users feel that multiple threads execute simultaneously, CPU time is allocated by time-slice round-robin: each thread is assigned a time slice and occupies the CPU to run its task within that slice. When a thread uses up its slice, it goes back to the ready state and yields the CPU to other threads. That switch is a context switch.


8. Do you understand daemon threads?

Threads in Java fall into two categories: daemon threads and user threads.

User threads are ordinary threads among the threads started by the virtual machine. When all user threads finish running, the virtual machine will stop running, even if some daemon threads are still running.

A daemon thread is a thread created in a program and its role is to provide services for other threads. When all user threads finish running, the daemon thread will also end, regardless of whether it has completed execution. Daemon threads are usually used to perform some auxiliary tasks, such as garbage collection, cache cleaning, etc. They do not need to wait for all tasks to be completed before exiting.

The essential difference between them lies in when the virtual machine exits the process:

  • When the last non-daemon thread ends, the JVM exits normally regardless of whether any daemon threads are still running; that is, whether daemon threads have finished has no bearing on JVM exit.

  • In other words, as long as a single user thread has not ended, the JVM will not exit under normal circumstances.
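A minimal sketch (the endless housekeeping loop is only for illustration):

Thread daemon = new Thread(() -> {
    while (true) {
        // background housekeeping, e.g. cache cleanup
    }
});
daemon.setDaemon(true); // must be called before start(), else IllegalThreadStateException
daemon.start();
// Once the last user thread ends, the JVM exits and this daemon is killed mid-loop.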

9. What are the communication methods between threads?


volatile and synchronized keywords:

  • The keyword volatile can modify fields (member variables). It tells the program that every read of the variable must come from shared (main) memory and every write must be flushed back to shared memory synchronously; it guarantees the visibility of the variable to all threads.

  • The keyword synchronized can modify methods or take the form of synchronized blocks. It mainly guarantees that only one thread at a time can be inside a method or synchronized block, which ensures both the visibility and the exclusivity of thread access to variables.

Wait/notify mechanism:

Through Java's built-in wait/notify mechanism (wait()/notify()), one thread can modify the value of an object while another thread senses the change and then performs the corresponding operation.

Pipeline input/output streams:

  • Piped input/output streams differ from ordinary file or network input/output streams in that they are mainly used to transfer data between threads, with memory as the transfer medium.
  • There are four concrete implementations: PipedOutputStream, PipedInputStream, PipedReader, and PipedWriter. The first two are byte-oriented, the latter two character-oriented. A minimal sketch follows.
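A minimal sketch of two threads communicating over a pipe (character-oriented, using PipedWriter/PipedReader):

import java.io.IOException;
import java.io.PipedReader;
import java.io.PipedWriter;

public class PipedDemo {
    public static void main(String[] args) throws IOException {
        PipedWriter out = new PipedWriter();
        PipedReader in = new PipedReader();
        in.connect(out); // the two ends must be connected before use

        Thread printer = new Thread(() -> {
            try {
                int ch;
                while ((ch = in.read()) != -1) { // blocks until data arrives
                    System.out.print((char) ch);
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        printer.start();

        out.write("hello from the main thread\n");
        out.close(); // signals end-of-stream to the reader
    }
}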

Use Thread.join():

Thread.join(): join() means "wait for this thread to terminate": after the calling thread invokes join() on another thread, the code following the call does not run until that thread ends. Typically, one thread's input depends on the output of one or more other threads, in which case it has to wait for the threads it depends on to finish before it can continue.

  • If a thread A executes the thread.join() statement, the meaning is: the current thread A waits for the thread thread to terminate before returning from thread.join().

  • In addition to the join() method, thread Thread also provides two methods with timeout characteristics: join(long millis) and join(long millis, int nanos).
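A minimal sketch of join():

public static void main(String[] args) throws InterruptedException {
    Thread worker = new Thread(() -> System.out.println("worker done"));
    worker.start();
    worker.join();        // main blocks here until worker terminates
    System.out.println("main continues only after worker ends");
}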

Use ThreadLocal:

  • ThreadLocal, or thread variable, is a storage structure with a ThreadLocal object as a key and any object as a value. This structure is attached to the thread, which means that a thread can query a value bound to this thread based on a ThreadLocal object.
  • You can set a value through the set(T) method, and then obtain the originally set value through the get() method in the current thread.

Multi-threading also tends to come with hands-on written questions, such as alternating printing, bank transfers, and producer-consumer models.

ThreadLocal

In truth, ThreadLocal does not have that many application scenarios, but it is an interview evergreen that keeps coming up. It touches multi-threading, data structures, and the JVM, so there is plenty to ask about, and you must have it down cold.

10.What is ThreadLocal?

ThreadLocal is a thread-local variable. If you create a ThreadLocal variable, every thread that accesses it gets its own local copy. When multiple threads operate on the variable, each actually operates on the copy in its own local memory, which achieves thread isolation and avoids thread-safety problems.


  • Create
    Create a ThreadLocal variable localVariable; any thread can access localVariable concurrently.
//Create a ThreadLocal variable
public static ThreadLocal<String> localVariable = new ThreadLocal<>();
  • Write
    A thread can write to the variable anywhere via localVariable.
localVariable.set("鄙人某某");
  • Read
    Wherever a thread reads, it gets the value it wrote itself.
localVariable.get();

11.Have you ever used ThreadLocal in your work?

Yes. It is useful for storing per-request user context.

Our application is a typical MVC architecture. Every time a logged-in user calls an interface, the request header carries a token, and the controller layer can parse the user's basic information from that token. The question is: what do we do when the user information is also needed in the service layer and the persistence layer, for example in RPC calls or user-related reads and updates?

One option is to pass user-related parameters explicitly (account ID, user name, and so on), but that may mean changing code in many places, which gets messy. So what should we do?

This is where ThreadLocal comes in: intercept the request at the controller layer and store the user information in a ThreadLocal, and then the user data can be retrieved from the ThreadLocal anywhere.
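A minimal sketch of such a holder; the names here (UserContextHolder, the String user info) are illustrative, not from any particular framework:

public class UserContextHolder {

    private static final ThreadLocal<String> CURRENT_USER = new ThreadLocal<>();

    public static void set(String userInfo) {
        CURRENT_USER.set(userInfo);
    }

    public static String get() {
        return CURRENT_USER.get();
    }

    public static void clear() {
        CURRENT_USER.remove(); // avoid stale data and leaks on pooled threads
    }
}

An interceptor parses the token and calls UserContextHolder.set(...); the service and persistence layers call get(); and a finally block in the interceptor calls clear() so pooled threads do not carry stale user data.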


Data isolation of cookies, sessions, etc. in many other scenarios can also be implemented through ThreadLocal.

Our commonly used database connection pool also uses ThreadLocal:

  • Connections in the pool are managed with ThreadLocal to ensure that the current thread's operations all use the same Connection.

12.How is ThreadLocal implemented?

Let's look at ThreadLocal's set(T) method: it first gets the current thread, then fetches the thread's ThreadLocalMap, and stores the element into that map.

public void set(T value) {
    //Get the current thread
    Thread t = Thread.currentThread();
    //Get the thread's ThreadLocalMap
    ThreadLocalMap map = getMap(t);
    //Store the element into the map
    if (map != null)
        map.set(this, value);
    else
        createMap(t, value);
}

The secret of the ThreadLocal implementation lies in this ThreadLocalMap. The Thread class defines a member variable threadLocals of type ThreadLocal.ThreadLocalMap.

public class Thread implements Runnable {
    //ThreadLocal.ThreadLocalMap is a field of Thread
    ThreadLocal.ThreadLocalMap threadLocals = null;
}

Since ThreadLocalMap is called a map, it is unsurprisingly a <key, value> data structure, and we know a map is essentially an array of <key, value> nodes. So what does a ThreadLocalMap node look like?

static class Entry extends WeakReference<ThreadLocal<?>> {
    /** The value associated with this ThreadLocal. */
    Object value;

    //Node class
    Entry(ThreadLocal<?> k, Object v) {
        //The key is stored via the WeakReference constructor
        super(k);
        //Assign the value
        value = v;
    }
}

Here the key can loosely be thought of as the ThreadLocal itself, and the value is whatever was put in. Strictly speaking, though, the key is not the ThreadLocal object but a weak reference to it: as you can see, Entry inherits from WeakReference. Let's look at how the key is assigned:

public WeakReference(T referent) {
    super(referent);
}

The key is assigned through the WeakReference constructor.


So how should you answer a question about the ThreadLocal principle? Cover these points:

  • The Thread class has an instance variable threadLocals of type ThreadLocal.ThreadLocalMap; each thread has its own ThreadLocalMap.
  • ThreadLocalMap internally maintains an Entry array. Each Entry is a key-value pair whose key is a weak reference to a ThreadLocal and whose value is the value stored for that ThreadLocal.
  • When a thread sets a value on a ThreadLocal, it stores it in its own ThreadLocalMap; when reading, it likewise uses the ThreadLocal as the key to look up the value in its own map, which achieves thread isolation.
  • ThreadLocal itself stores no value; it only acts as the key that lets a thread fetch the value from its ThreadLocalMap.

13.What is the ThreadLocal memory leak problem?

Let's first analyze the memory involved when using ThreadLocal. In the JVM, stack memory is thread-private and stores references to objects, while heap memory is shared among threads and stores object instances.

So the references to the ThreadLocal and the Thread live on the stack, while their instances live on the heap.


The key used in ThreadLocalMap is a weak reference to ThreadLocal.

“Weak reference: once the garbage collector runs, the memory occupied by an object that is only weakly reachable is reclaimed, regardless of whether the JVM has enough memory.”

Here is the problem: weak references are easily reclaimed. If the ThreadLocal (the key of the ThreadLocalMap) is collected by the garbage collector while the ThreadLocalMap, whose life cycle matches the Thread's, is not, we end up in a situation where the entry's key is gone but its value remains, which causes a memory leak.

So how do we solve the memory leak? It is simple: after using a ThreadLocal, call its remove() method promptly to free the slot.

ThreadLocal<String> localVariable = new ThreadLocal<>();
try {
    localVariable.set("鄙人某某");
    // ...
} finally {
    localVariable.remove();
}

So why is the key designed as a weak reference in the first place?

The key is made a weak reference precisely to reduce memory leaks.

If the key were a strong reference, then even after the external reference to the ThreadLocal is gone, the key inside ThreadLocalMap would still strongly reference the ThreadLocal object; the ThreadLocal could then never be collected, which itself is a memory leak.

14.Do you understand the structure of ThreadLocalMap?

Although ThreadLocalMap is called Map, it actually does not implement the Map interface, but its structure is still similar to HashMap. It mainly focuses on two elements: element array and hash method .


  • Element array
    A table array storing elements of type Entry. An Entry is a structure whose key is a weak reference to a ThreadLocal and whose value is an Object.
private Entry[] table;
  • Hash method
    The hash method maps a key to an index in the table array. ThreadLocalMap takes the key's threadLocalHashCode and ANDs it with the table length minus one, which is equivalent to taking the remainder because the table length is a power of two.
int i = key.threadLocalHashCode & (table.length - 1);

There is a subtlety in how threadLocalHashCode is computed: each time a ThreadLocal object is created, the hash code is incremented by 0x61c88647. This value is special: it is derived from the golden ratio (it is sometimes called the Fibonacci hashing increment), and incrementing hash codes by it makes the hash distribution very even.

private static final int HASH_INCREMENT = 0x61c88647;

private static int nextHashCode() {
    return nextHashCode.getAndAdd(HASH_INCREMENT);
}

15.How does ThreadLocalMap resolve Hash conflicts?

We may all know that HashMap uses a linked list to resolve conflicts, which is the so-called chain address method.

ThreadLocalMap does not use a linked list, so it naturally does not use the chain address method to resolve conflicts. It uses another method - the open addressing method .

What does open addressing mean? Simply put: if this slot is already taken, keep going until you find an empty one.


For example, suppose we insert an entry with value = 27 whose hash maps to slot 4, but slot 4 already holds an Entry whose key is not equal to ours. The search then probes linearly forward until it reaches a slot whose Entry is null, and places the element in that empty slot.

On get, the position in the table is likewise located from the ThreadLocal object's hash value; if the key of the Entry in that slot does not match the key being looked up, the next position is checked, and so on.
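A simplified sketch of that linear-probing lookup (illustrative only, not the actual JDK source):

// Probe forward from the hashed slot until the key matches or an empty slot appears.
private Entry getEntry(ThreadLocal<?> key) {
    int i = key.threadLocalHashCode & (table.length - 1);
    Entry e = table[i];
    while (e != null) {
        if (e.get() == key)
            return e;                     // hit
        i = (i + 1) & (table.length - 1); // probe the next slot, wrapping around
        e = table[i];
    }
    return null;                          // empty slot reached: key not present
}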

16.Do you understand the expansion mechanism of ThreadLocalMap?

At the end of the ThreadLocalMap.set() method, if the heuristic cleanup pass removed no stale data and the number of Entry elements in the hash array has reached the expansion threshold (len * 2 / 3), the rehash() logic runs:

if (!cleanSomeSlots(i, sz) && sz >= threshold)
	rehash();

Looking at the implementation of rehash(): it first cleans up expired entries, then checks whether size >= threshold - threshold / 4, i.e. size >= threshold * 3 / 4, to decide whether a resize() is needed.

Now the resize() method itself: the new array newTab is twice the size of the old one. The old table array is traversed, each entry's position is recomputed with the hash method (resolving conflicts by open addressing), and the entry is placed into newTab. Once the traversal finishes, all of oldTab's entries have been moved into newTab, and the table reference is pointed at newTab.


17. How do parent and child threads share data?

Can a parent thread pass a value to a child thread with ThreadLocal? No, it cannot. So what do we do?

Another class can be used at this time - InheritableThreadLocal.

It is very easy to use: set a value on an InheritableThreadLocal instance in the main thread, and read it in the child thread.

public class InheritableThreadLocalTest {

    public static void main(String[] args) {
        final ThreadLocal<String> threadLocal = new InheritableThreadLocal<>();
        //Main (parent) thread sets the value
        threadLocal.set("不擅技术");
        //Child thread reads it
        Thread t = new Thread() {
            @Override
            public void run() {
                super.run();
                System.out.println("鄙人某某 ," + threadLocal.get());
            }
        };
        t.start();
    }
}

What's the principle?

The principle is very simple: the Thread class has another field:

ThreadLocal.ThreadLocalMap inheritableThreadLocals = null;

During Thread.init(), if the parent thread's inheritableThreadLocals is not null, it is copied into the current (child) thread's inheritableThreadLocals:

if (inheritThreadLocals && parent.inheritableThreadLocals != null)
    this.inheritableThreadLocals =
    	ThreadLocal.createInheritedMap(parent.inheritableThreadLocals);

Java memory model

18. Tell me about your understanding of Java Memory Model (JMM)?

The Java Memory Model (JMM) is an abstract model defined to shield programmers from the memory-access differences among various kinds of hardware and operating systems.

JMM defines the abstract relationship between threads and main memory: shared variables between threads are stored in main memory (Main Memory), each thread has a private local memory (Local Memory), and a thread's local memory holds its copies of the shared variables it reads and writes.


Local memory is an abstract concept of JMM and does not really exist. It actually covers caches, write buffers, registers, and other hardware and compiler optimizations.


Consider a dual-core CPU system architecture: each core has its own controller and arithmetic unit, where the controller contains a set of registers and an operation controller, and the arithmetic unit performs arithmetic and logic operations.

Each core has its own L1 cache, and in some architectures there is also an L2 cache shared by all cores. The working memory in the Java memory model corresponds to the L1 cache, the L2 cache, or the CPU registers here.

19. Tell me about your understanding of atomicity, visibility, and order?

Atomicity, orderliness, and visibility are very important basic concepts in concurrent programming. Many JMM technologies revolve around these three characteristics.

  • Atomicity : Atomicity means that an operation is indivisible and uninterruptible. Either it is fully executed and the execution process will not be interrupted by any factors, or it is not executed at all.
  • Visibility : Visibility means that when a thread modifies the value of a shared variable, other threads can immediately know the modification.
  • Orderliness : Orderliness means that the execution code of a thread is executed sequentially from front to back. Under a single thread, the program can be considered in order, but instructions may be rearranged during concurrency.

Let's analyze the atomicity of the following lines of code:

int i = 2;
int j = i;
i++;
i = i + 1;
  • The first sentence is a basic type assignment, which is an atomic operation.
  • The second sentence reads the value of i first, and then assigns it to j. This two-step operation cannot guarantee atomicity.
  • Sentences 3 and 4 are actually equivalent. First read the value of i, then +1, and finally assign the value to i. This is a three-step operation, and atomicity cannot be guaranteed.

How to ensure atomicity, visibility, and orderliness?

  • Atomicity: JMM only guarantees the atomicity of basic reads and writes. To make a whole code block atomic, use synchronized.
  • Visibility: Java uses the volatile keyword to guarantee visibility; in addition, final and synchronized can also guarantee visibility.
  • Orderliness: Both synchronized and volatile can guarantee the ordering of operations between multiple threads.

20. So what is instruction rearrangement?

When executing a program, compilers and processors often reorder instructions to improve performance. There are 3 types of reordering:

  1. Compiler-optimized reordering. The compiler can rearrange the execution order of statements without changing the semantics of a single-threaded program.
  2. Instruction-level parallel reordering. Modern processors use instruction-level parallelism (ILP) to execute multiple instructions in an overlapping manner. If there are no data dependencies, the processor can change the order in which statements correspond to machine instructions.
  3. Memory system reordering. Because the processor uses cache and read/write buffers, this can make load and store operations appear to be executed out of order.

From Java source code to the instruction sequence that finally executes, the code goes through these three kinds of reordering in turn: compiler-optimized reordering, then instruction-level parallel reordering, then memory-system reordering.


The double-checked-locking singleton that we all know is a classic example of instruction reordering. The statement Singleton instance = new Singleton(); compiles to three JVM steps: allocate memory -> initialize the object -> point the reference at the allocated memory. After compiler/processor reordering, steps two and three may be swapped, so another thread can observe a non-null reference to a not-yet-initialized object.
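The standard fix is the volatile-based double-checked-locking pattern; a minimal sketch:

public class Singleton {

    // volatile forbids reordering "initialize object" with "publish reference"
    private static volatile Singleton instance;

    private Singleton() {}

    public static Singleton getInstance() {
        if (instance == null) {                   // first check, without locking
            synchronized (Singleton.class) {
                if (instance == null) {           // second check, with the lock held
                    instance = new Singleton();   // the three steps described above
                }
            }
        }
        return instance;
    }
}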


JMM is a language-level memory model that ensures consistent memory visibility guarantees for programmers by prohibiting specific types of compiler reordering and processor reordering on different compilers and different processor platforms.

21. Are there any restrictions on instruction reordering? Do you understand happens-before?

Instruction reordering is constrained by two rules: happens-before and as-if-serial.

Definition of happens-before:

  • If one operation happens-before another operation, then the result of the first operation is visible to the second, and the first is ordered before the second.
  • A happens-before relationship between two operations does not mean the Java platform must execute them in exactly that order: if the result of a reordered execution is consistent with the result of executing in happens-before order, the reordering is not illegal.

Six happens-before rules are the ones most relevant to us:

  1. Program sequence rules : Each operation in a thread happens-before any subsequent operation in that thread.
  2. Monitor lock rules : The unlocking of a lock happens-before the subsequent locking of the lock.
  3. Volatile variable rules : A write to a volatile field happens-before any subsequent read of the volatile field.
  4. Transitivity : If A happens-before B, and B happens-before C, then A happens-before C
  5. start() rule : If thread A performs the operation ThreadB.start() (starts thread B), then the ThreadB.start() operation of thread A happens-before any operation in thread B.
  6. join() rule : If thread A executes the operation ThreadB.join() and returns successfully, then any operation in thread B happens-before thread A returns successfully from the ThreadB.join() operation.

22.What is as-if-serial? Are single-threaded programs necessarily sequential?

The meaning of as-if-serial semantics is: no matter how reordering (compilers and processors in order to improve parallelism), the execution results of single-threaded programs cannot be changed . The compiler, runtime, and processor must all adhere to as-if-serial semantics.

To comply with as-if-serial semantics, compilers and processors do not reorder operations that have data dependencies, because such reordering would change the execution result. Operations with no data dependency between them, however, may be reordered by the compiler and processor. To illustrate, look at the code example below, which calculates the area of a circle.

double pi = 3.14; // A
double r = 1.0; // B
double area = pi * r * r; // C

Consider the data dependencies among these three operations.


There is a data dependency between A and C, and also between B and C. Therefore, in the instruction sequence that finally executes, C cannot be reordered before A or B (putting C ahead of A and B would change the program's result). But there is no data dependency between A and B, so the compiler and processor may reorder the execution order of A and B.

So in the end the program may execute in either of two orders: A -> B -> C or B -> A -> C.


as-if-serial semantics protect single-threaded programs. Compilers, runtimes, and processors that comply with as-if-serial semantics jointly stage a "Truman Show" for the programmer: a single-threaded program appears to execute in program order. as-if-serial semantics spare us from worrying about reordering and visibility problems in the single-threaded case.

23.Do you understand the implementation principle of volatile?

Volatile has two functions, ensuring visibility and orderliness.

How does volatile ensure visibility?

Compared with solving the memory visibility of shared variables by locking with synchronized, volatile is a lighter-weight choice, and it avoids the extra overhead of context switching.

volatile guarantees that an update to a variable is immediately visible to other threads. When a variable is declared volatile, a writing thread does not cache the value in a register or elsewhere; it flushes the value straight back to main memory. When other threads read the shared variable, they fetch the latest value from main memory instead of using the copy in their own local memory.

For example, declare a volatile variable volatile int x = 0. When thread A modifies x = 1, the new value is flushed back to main memory; when thread B then reads x, its local copy is invalidated and the latest value is fetched from main memory.
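A minimal sketch of this visibility guarantee (without volatile, the reader thread could spin forever on a stale copy):

public class VolatileVisibility {

    static volatile int x = 0;

    public static void main(String[] args) {
        new Thread(() -> x = 1).start();     // thread A: write is flushed to main memory

        new Thread(() -> {
            while (x == 0) { /* spin */ }    // thread B: sees the latest value
            System.out.println("saw x = " + x);
        }).start();
    }
}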


How does volatile ensure orderliness?

Reordering comes in two kinds: compiler reordering and processor reordering. volatile guarantees orderliness by restricting both kinds of reordering.


In order to achieve volatile memory semantics, the compiler inserts memory barriers into the instruction sequence when generating bytecode to prohibit specific types of processor reordering .

  1. Insert a StoreStore barrier before each volatile write.
  2. Insert a StoreLoad barrier after each volatile write.
  3. Insert a LoadLoad barrier after each volatile read.
  4. Insert a LoadStore barrier after each volatile read.



Lock

24.Have you ever used synchronized? How is it used?

synchronized is often used to guarantee the atomicity of code.

There are three main uses of synchronized:

  • Modify instance method : Acts on locking the current object instance. Before entering the synchronization code, you must obtain the lock of the current object instance.
synchronized void method() {
    //business code
}
  • Modify static methods: this locks the class itself, affecting all instances of the class. Before entering the synchronized code, the lock of the class must be acquired. Static members do not belong to any instance object; they are class members (static marks the resource as belonging to the class, so no matter how many objects are created there is only one copy).

    If thread A calls a non-static synchronized method of an instance object while thread B calls a static synchronized method of the class that object belongs to, that is allowed and no mutual exclusion occurs, because the static synchronized method takes the lock of the class while the non-static synchronized method takes the lock of the instance object.

static synchronized void method() {
    //business code
}
  • Modify a code block: specify the lock object, locking the given object/class. synchronized(this|object) means the lock of the given object must be acquired before entering the synchronized block; synchronized(SomeClass.class) means the lock of the class must be acquired before entering it.
synchronized (this) {
    //business code
}

25. What is the implementation principle of synchronized?

How does synchronized lock?

When we use synchronized, we find that we don't have to lock and unlock ourselves because the JVM does it for us.

  1. When synchronized modifies a code block, the JVM uses a pair of instructions, monitorenter and monitorexit, to implement synchronization: monitorenter marks the start of the synchronized block and monitorexit marks its end.

    Decompile a piece of code that uses a synchronized block with javap -c -s -v -l SynchronizedDemo.class and you can see the corresponding bytecode instructions.


  2. When synchronized modifies a method, the JVM uses the ACC_SYNCHRONIZED flag to implement synchronization; this flag marks the method as a synchronized method.

    You can also write a piece of code to decompile and take a look.


What does synchronized lock?

monitorenter, monitorexit, and ACC_SYNCHRONIZED are all implemented on top of the Monitor.

Every object instance has an object header, which contains a structure called the Mark Word, and the Mark Word holds a pointer to the monitor.

The so-called Monitor is actually a synchronization tool , or a synchronization mechanism . In the Java virtual machine (HotSpot), Monitor is implemented by ObjectMonitor , which can be called internal lock or Monitor lock.

How ObjectMonitor works:

  • ObjectMonitor has two queues, _WaitSet and _EntryList, which hold lists of ObjectWaiter objects.
  • _owner points to the thread that holds the monitor. When a thread acquires the monitor, _owner is set to that thread and _count is incremented by 1. If the thread calls wait(), it releases the monitor, _owner reverts to null and _count is decremented by 1, and the thread enters _WaitSet to wait to be woken up.
ObjectMonitor() {
    _header = NULL;
    _count = 0;         // number of times threads have acquired the lock
    _waiters = 0,
    _recursions = 0;    // lock reentrancy count
    _object = NULL;
    _owner = NULL;      // points to the thread holding the ObjectMonitor
    _WaitSet = NULL;    // threads in the wait state are added to _WaitSet
    _WaitSetLock = 0 ;
    _Responsible = NULL ;
    _succ = NULL ;
    _cxq = NULL ;
    FreeNext = NULL ;
    _EntryList = NULL ; // threads blocked waiting for the lock are added to this list
    _SpinFreq = 0 ;
    _SpinClock = 0 ;
    OwnerIsThread = 0 ;
}

It can be compared to an example of going to the hospital:

  • First, patients register at the front desk of the outpatient hall or at the self-service registration machine ;
  • Subsequently, after the registration is completed, the patient finds the corresponding clinic for treatment :
    • Only one patient can visit the clinic at a time;
    • If the clinic is free at this time, go directly to the clinic;
    • If there are other patients in the clinic at this time, the current patient will enter the waiting room and wait for his number to be called;
  • After the consultation, walk out of the consultation room and the next patient waiting in the waiting room enters the consultation room.


This process is similar to the Monitor mechanism:

  • Outpatient lobby: every thread that wants to enter must first register in the Entry Set to become eligible;
  • Consulting room: only one thread, the _owner, can be in the consulting room at a time, and the thread leaves on its own when its consultation is done;
  • Waiting room: when the consulting room is busy, a thread enters the Wait Set; when the room frees up, a new thread is called in from the Wait Set.


So now we know what synchronized actually locks:

  • monitorenter: after the check (or, for methods, the ACC_SYNCHRONIZED flag), the first thread to enter becomes the owner of the Monitor, and the counter is incremented to 1.
  • monitorexit: when execution finishes and the thread exits, the counter is decremented by 1; once it returns to 0, the monitor can be acquired by other entering threads.

26. Besides atomicity, how does synchronized achieve visibility, orderliness, and reentrancy?

How does synchronized ensure visibility?

  • When a thread acquires the lock, the working-memory copies of shared variables are cleared, so shared variables must be re-read from main memory when used.
  • While a thread holds the lock, other threads cannot obtain the shared variables in main memory.
  • Before the thread releases the lock, the latest values of the shared variables must be flushed back to main memory.

How does synchronized ensure orderliness?

The synchronized code block is exclusive and can only be owned by one thread at a time, so synchronized guarantees that the code is executed by a single thread at the same time.

Because of the existence of as-if-serial semantics, a single-threaded program can guarantee that the final result is in order , but there is no guarantee that instructions will not be rearranged .

Therefore, the ordering guaranteed by synchronized is the ordering of execution results, not the ordering to prevent instruction reordering .

How does synchronized achieve reentrancy?

synchronized is a reentrant lock: a thread is allowed to acquire again the object lock it already holds, i.e. to request the lock's critical resource a second time. That is what is meant by a reentrant lock.

The reason it is reentrant is that the synchronized lock object keeps a counter that records how many times the thread has acquired the lock: +1 each time the thread acquires it, -1 each time a corresponding code block finishes, and the lock is released when the counter drops back to 0.
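A minimal sketch; if synchronized were not reentrant, the nested call below would deadlock on the same object lock:

public class ReentrantDemo {

    public synchronized void outer() {
        System.out.println("outer: lock held, count = 1");
        inner(); // the same thread re-acquires the same lock; count goes to 2
    }

    public synchronized void inner() {
        System.out.println("inner: re-entered without blocking");
    }

    public static void main(String[] args) {
        new ReentrantDemo().outer();
    }
}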

27. Lock upgrade? Do you understand synchronized optimization?

In order to unlock and upgrade, you must first know what the status of different locks is. What does this status refer to?

In the Java object header, there is a structure called Mark wordthe mark field. This structure will change as the lock status changes.

The 64-bit virtual machine Mark Word is 64bit. Let’s take a look at its status changes:


Mark Word stores the running data of the object itself, such as hash code, GC generation age, lock status flag, bias timestamp (Epoch), etc.

What optimizations has synchronized done?

Before JDK 1.6, synchronized was implemented by directly calling ObjectMonitor's enter and exit, and this kind of lock was called a heavyweight lock. Starting with JDK 6, the HotSpot virtual machine team optimized locks in Java, adding strategies such as adaptive spinning, lock elimination, lock coarsening, lightweight locks, and biased locks to improve the performance of synchronized.

  • Biased lock: In the absence of competition , the current thread pointer is only stored in Mark Word, and no CAS operation is performed.
  • Lightweight locks: When there is no multi-thread competition, compared to heavyweight locks, the performance consumption caused by operating system mutexes is reduced. However, if there is lock competition, in addition to the overhead of the mutex itself, there is also the additional overhead of the CAS operation.
  • Spin lock: Reduce unnecessary CPU context switching. When a lightweight lock is upgraded to a heavyweight lock , the spin locking method is used.
  • Lock coarsening: Connect multiple consecutive locking and unlocking operations together and expand them into a lock with a larger scope .
  • Lock elimination: When the virtual machine just-in-time compiler is running, it eliminates locks that require synchronization on some codes but are detected as unlikely to have shared data competition .

What is the process of lock upgrade?

Lock upgrade direction: no lock -> biased lock -> lightweight lock -> heavyweight lock. This direction is basically irreversible.


Biased lock

Acquisition of biased lock:

  1. Check whether the object is in a biasable state: whether the lock flag in the Mark Word is '01' and the biased-lock bit is '1'.
  2. If it is biasable, check whether the thread ID is the current thread; if so, go to step '5', otherwise go to step '3'.
  3. Compete for the lock with a CAS operation. If the competition succeeds, set the thread ID in the Mark Word to the current thread ID, then execute '5'; if it fails, execute '4'.
  4. A failed CAS for the biased lock indicates competition. At the next safepoint, the thread that held the biased lock is suspended and the biased lock is upgraded to a lightweight lock, after which the thread blocked at the safepoint continues executing the synchronized block.
  5. Execute the synchronized code.
  5. Execute synchronous code

Cancellation of biased lock:

  1. A biased lock is not released (revoked) proactively; revocation happens only when another thread competes for it. Since revocation needs to inspect the stack state of the thread currently holding the biased lock, it has to wait for a safepoint. At that point the thread (T) holding the biased lock is in one of two situations, '2' or '3':
  2. Revocation: thread T has already exited the synchronized block, or is no longer alive. The biased lock is then simply revoked and reverts to the lock-free state. When revocations reach the threshold of 20, batch re-biasing is performed.
  3. Upgrade: thread T is still inside the synchronized block. T's biased lock is then upgraded to a lightweight lock, and the current thread performs the lock-acquisition steps of the lightweight-lock state. When revocations reach the threshold of 40, batch revocation is performed.

Lightweight lock

Acquisition of lightweight lock:

  1. When locking, the JVM first checks whether a heavyweight lock is already held. If not, it carves out a space in the current thread's stack frame as the lock record for this lock, and copies the lock object's Mark Word into the lock record.
  2. After the copy succeeds, the JVM uses a CAS operation to update the object header's Mark Word to a pointer to the lock record, and points the owner pointer in the lock record at the object header's Mark Word. If that succeeds, execute '3', otherwise execute '4'.
  3. If the update succeeds, the current thread holds the object lock, and the object's Mark Word lock flag is set to '00', meaning the object is in the lightweight-lock state.
  4. If the update fails, the JVM first checks whether the object's Mark Word points to a lock record in the current thread's stack frame. If so, execute '5', otherwise execute '6'.
  5. This indicates lock reentry: an extra lock record whose Displaced Mark Word is null is pushed onto the current thread's stack frame, pointing at the lock object's Mark Word; it serves as a reentrancy counter.
  6. This means the lock object has been taken by another thread. The thread spin-waits (by default 10 attempts); if the spin count reaches the threshold without acquiring the lock, the lock is inflated to a heavyweight lock.


28. Talk about the difference between synchronized and ReentrantLock?

This question can be answered from several dimensions such as lock implementation, functional characteristics, and performance:

  • Lock implementation: synchronized is a Java language keyword implemented inside the JVM, while ReentrantLock is implemented at the JDK API level (typically used with the lock()/unlock() methods and a try/finally block).
  • Performance: before the lock optimizations of JDK 1.6, synchronized performed much worse than ReentrantLock; since JDK 6, with adaptive spinning, lock elimination, and so on, the performance of the two is roughly the same.
  • Features: ReentrantLock adds some advanced features over synchronized, such as interruptible waiting, fair locking, and selective notification.
    • ReentrantLock provides a mechanism for interrupting a thread that is waiting for the lock, implemented via lock.lockInterruptibly().
    • ReentrantLock can be created as a fair or an unfair lock, whereas synchronized can only be an unfair lock. A fair lock means the thread that has waited longest acquires the lock first.
    • synchronized implements the wait/notify mechanism with the wait() and notify()/notifyAll() methods, while ReentrantLock does so with the Condition interface and the newCondition() method.
    • ReentrantLock requires explicitly acquiring and releasing the lock, generally releasing it in a finally block; synchronized releases the lock automatically.

The following table lists the differences between the two types of locks:

Difference | synchronized | ReentrantLock
--- | --- | ---
Lock implementation | Monitor in the object header | Based on AQS
Flexibility | Not flexible | Supports interruptible, timed, and try-acquire locking
Lock release | Released automatically | Requires an explicit unlock() call
Supported lock types | Unfair lock only | Fair and unfair locks
Condition queues | Single condition queue | Multiple condition queues
Reentrancy | Supported | Supported

29.How much do you know about AQS?

AbstractQueuedSynchronizer, the abstract queued synchronizer (AQS for short), is the foundation of the Java concurrency package: the locks in java.util.concurrent are built on AQS.

  • AQS is built around a FIFO doubly linked queue and internally defines a node class, Node. A Node's SHARED marker means the thread was blocked and queued while acquiring a shared resource; EXCLUSIVE means it was blocked and queued while acquiring an exclusive resource.
  • AQS uses a volatile int member variable, state, to represent the synchronization state; successfully modifying the state means acquiring the lock. volatile guarantees the variable's visibility across threads, and the state is modified with the CAS mechanism to guarantee the modification is atomic.
  • state can be acquired in two modes, exclusive and shared. In exclusive mode, once one thread holds the resource, other threads block after failing to acquire it; in shared mode, several threads can acquire the resource, each via CAS.
  • If the shared resource is occupied, a blocking/wake-up mechanism is needed to manage lock assignment. In AQS, threads that fail to compete for the shared resource are added to a variant of the CLH queue.



The queue in AQS is a virtual doubly linked queue (a CLH variant); lock allocation is realized by wrapping each thread that requests the shared resource in a node.


The CLH variant waiting queue in AQS has the following features:

  • The queue in AQS is a doubly linked list with FIFO (first-in-first-out) behavior.
  • The queue structure hangs off two nodes, Head and Tail, whose visibility is guaranteed by volatile.
  • The node Head points to is the node that has acquired the lock; it is a virtual node that does not itself hold a specific thread.
  • A node that cannot acquire the synchronization state spins to acquire the lock; after a certain number of failed spins, the thread blocks. Compared with the plain CLH queue this performs better. An AQS-based sketch follows.
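To see how the template methods fit together, here is a minimal non-reentrant mutex built on AQS; a sketch under simplifying assumptions, not production code:

import java.util.concurrent.locks.AbstractQueuedSynchronizer;

public class SimpleMutex {

    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int acquires) {
            // CAS state 0 -> 1 means we acquired the lock
            if (compareAndSetState(0, 1)) {
                setExclusiveOwnerThread(Thread.currentThread());
                return true;
            }
            return false;
        }

        @Override
        protected boolean tryRelease(int releases) {
            setExclusiveOwnerThread(null);
            setState(0); // volatile write: visible to other threads
            return true;
        }
    }

    private final Sync sync = new Sync();

    public void lock()   { sync.acquire(1); }   // on failure, AQS enqueues the thread
    public void unlock() { sync.release(1); }   // wakes a successor in the queue
}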

30.ReentrantLock implementation principle?

ReentrantLock is a reentrant exclusive lock: only one thread can hold it, and other threads that try to acquire it are blocked and placed in the lock's blocking queue.

Take a look at the locking operation of ReentrantLock:

// Create an unfair lock
ReentrantLock lock = new ReentrantLock();
// Acquire the lock
lock.lock();
try {
    // business logic
} catch (Exception ex) {
    // ...
} finally {
    // Release the lock
    lock.unlock();
}

The no-argument constructor new ReentrantLock() creates an unfair lock, NonfairSync, by default.

Fair lock (FairSync)

  1. A fair lock means multiple threads acquire the lock in the order in which they asked for it: threads go straight into a queue, and only the thread at the head of the queue can obtain the lock.
  2. The advantage of a fair lock is that waiting threads do not starve; the disadvantage is that overall throughput is lower than with an unfair lock, since every thread in the wait queue except the first is blocked, and waking blocked threads costs the CPU more.

Unfair lock (NonfairSync)

  1. Unfair lock means that when multiple threads lock, they directly try to acquire the lock. If they cannot acquire the lock, they will wait at the end of the waiting queue. But if the lock happens to be available at this time, then this thread can obtain the lock directly without blocking.
  2. The advantage of unfair lock is that it can reduce the cost of waking up threads , and the overall throughput efficiency is high, because threads have a chance to directly obtain the lock without blocking, and the CPU does not have to wake up all threads. The disadvantage is that threads in the waiting queue may starve to death or wait for a long time to obtain the lock.

When lock() is called on the default (unfair) lock object:

  • If the lock is not currently held by another thread and the current thread has not acquired it before, the current thread acquires the lock, the lock's owner is set to the current thread, the AQS state value is set to 1, and the method returns. If the current thread already held the lock, the AQS state value is simply incremented by 1 before returning.
  • If the lock is held by another thread, the unfair lock tries to acquire it; if that fails, the thread calling the method is put into the AQS queue, blocked, and suspended.


31.How does ReentrantLock implement fair locking?

new ReentrantLock() creates an unfair lock, NonfairSync, by default.

public ReentrantLock() {
    sync = new NonfairSync();
}

You can also pass an argument to the constructor to create a fair lock, FairSync.

ReentrantLock lock = new ReentrantLock(true);

// In ReentrantLock: true creates a fair lock, false an unfair lock
public ReentrantLock(boolean fair) {
    sync = fair ? new FairSync() : new NonfairSync();
}

FairSync and NonfairSync represent fair locks and unfair locks. Both are static internal classes of ReentrantLock, but implement different lock semantics.

There are two differences between unfair locks and fair locks:

  1. When lock is called on an unfair lock, it first tries to grab the lock with CAS; if the lock happens to be free at that moment, it acquires the lock directly and returns.
  2. If the CAS fails, the unfair lock enters the tryAcquire method just like the fair lock. Inside tryAcquire, if the lock turns out to have been released (state == 0), the unfair lock grabs it directly with CAS, whereas the fair lock first checks whether any thread is waiting in the queue; if so, it does not grab the lock but queues up at the tail.


Relatively speaking, the unfair lock performs better because its throughput is higher. Of course, it also makes the time to acquire the lock less predictable and may leave threads in the blocking queue starved for a long time.

32.What about CAS? How much do you know about CAS?

CAS stands for Compare-And-Swap; it compares and swaps, relying on processor instructions to guarantee the atomicity of the operation.

A CAS instruction takes 3 operands: the memory address A of the shared variable, the expected value B, and the new value C for the shared variable.

The value at address A is updated to the new value C only if the current value at address A equals B. Being a single CPU instruction, CAS is itself atomic.
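A minimal sketch of the classic CAS retry loop, using AtomicInteger (whose compareAndSet is backed by the processor's CAS instruction):

import java.util.concurrent.atomic.AtomicInteger;

AtomicInteger count = new AtomicInteger(0);

// Read the old value, compute the new one, and swap only if nothing changed in between.
int oldVal, newVal;
do {
    oldVal = count.get();   // expected value B
    newVal = oldVal + 1;    // new value C
} while (!count.compareAndSet(oldVal, newVal)); // the field itself plays the role of address A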

33.What’s wrong with CAS? How to solve?

ABA problem

In a concurrent environment, suppose the initial value is A; when you modify the data, you check that it is still A and then perform the modification. But although you see A, the value may have changed to B and then back to A in the meantime. This A is no longer the A you first read, so even if the modification succeeds, there may be a problem.

How to solve ABA problems?

  • Add version number

Each time the variable is modified, its version number is incremented by 1. Then, for A->B->A, although the value is A again, the version number has changed, and checking the version number reveals that the variable has in fact been modified. This mirrors the version-number approach of optimistic locking and makes the change detectable.

Java provides the AtomicStampedReference class. Its compareAndSet method first checks whether the current reference equals the expected reference and whether the current stamp equals the expected stamp; only if both match does it atomically update the reference and the stamp to the given new values.
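A minimal sketch of how the stamp defeats ABA (the values and stamps are illustrative, with the A->B->A sequence performed inline for demonstration):

import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static void main(String[] args) {
        // initial reference "A" with stamp 0
        AtomicStampedReference<String> ref = new AtomicStampedReference<>("A", 0);

        int stamp = ref.getStamp();
        // An A -> B -> A sequence by another thread would bump the stamp twice
        ref.compareAndSet("A", "B", stamp, stamp + 1);
        ref.compareAndSet("B", "A", stamp + 1, stamp + 2);

        // This CAS fails: the value is "A" again, but the stamp no longer matches
        boolean ok = ref.compareAndSet("A", "C", stamp, stamp + 1);
        System.out.println(ok); // false
    }
}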

Loop performance overhead

If a spin CAS loops for a long time without succeeding, it imposes a large execution overhead on the CPU.

How to solve loop performance overhead problem?

In Java, many places where spin CAS is used will have a limit on the number of spins. If it exceeds a certain number, the spin will stop.

Only atomic operations on one variable are guaranteed

CAS guarantees atomicity for an operation on a single variable; it cannot directly guarantee atomicity across operations on multiple variables.

How to solve the problem of atomic operation that can only guarantee one variable?

  • You can use locks to ensure the atomicity of the operations
  • You can merge the variables by encapsulating them into one object and then ensure atomicity through AtomicReference, as in the sketch below
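A sketch of the second approach: wrap the variables in an immutable holder and swap the whole object with one CAS (the Range class and its fields are invented for illustration):

import java.util.concurrent.atomic.AtomicReference;

public class MultiVarCasDemo {
    // Immutable holder: both fields change together or not at all
    static class Range {
        final int lower, upper;
        Range(int lower, int upper) { this.lower = lower; this.upper = upper; }
    }

    private static final AtomicReference<Range> RANGE =
            new AtomicReference<>(new Range(0, 10));

    // Replaces both bounds in a single CAS
    static boolean setRange(int lower, int upper) {
        Range old = RANGE.get();
        return RANGE.compareAndSet(old, new Range(lower, upper));
    }

    public static void main(String[] args) {
        System.out.println(setRange(1, 9)); // true (assuming no concurrent update)
    }
}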

34.What methods does Java have to ensure atomicity? How to ensure that the i++ results under multi-threading are correct?

image-20230817211530982

  • Use atomic classes such as AtomicInteger (CAS plus spin) to make i++ atomic
  • Use locks from the JUC package, such as ReentrantLock, wrapping the i++ operation between lock.lock() and lock.unlock() to achieve atomicity
  • Use synchronized to lock the i++ operation (all three approaches are sketched below)
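A compact sketch of the three approaches side by side (field and method names are illustrative):

import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;

public class AtomicIncrementDemo {
    private final AtomicInteger atomicCount = new AtomicInteger();
    private int lockedCount;
    private final ReentrantLock lock = new ReentrantLock();
    private int syncCount;

    // 1. Atomic class: CAS + spin inside incrementAndGet()
    void incrementAtomic() { atomicCount.incrementAndGet(); }

    // 2. Explicit lock: always unlock in finally
    void incrementWithLock() {
        lock.lock();
        try {
            lockedCount++;
        } finally {
            lock.unlock();
        }
    }

    // 3. synchronized: the JVM handles lock and unlock
    synchronized void incrementSynchronized() { syncCount++; }
}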

35. How much do you know about the atomic operation class?

When a program updates a variable, if multiple threads update it at the same time, an unexpected value may result. For example, with i=1, thread A adds 1 to i and thread B also adds 1; after both operations, i may equal 2 rather than the expected 3, because both threads read 1 when updating. This is a thread-unsafe update. Usually we solve it with synchronized, which ensures that multiple threads do not update i at the same time.

In fact, there are more lightweight options. Since JDK 1.5, Java has provided the java.util.concurrent.atomic package, whose atomic operation classes offer a simple, efficient, and thread-safe way to update a variable.

Because variables come in many types, the Atomic package provides 13 classes in total, covering 4 kinds of atomic updates: atomic updates of basic types, of arrays, of references, and of fields (attributes).

image-20230817211855322

The classes in the Atomic package are basically wrapper classes implemented using Unsafe.

To update basic types atomically, the Atomic package provides the following three classes:

  • AtomicBoolean: atomically updates a boolean.
  • AtomicInteger: atomically updates an int.
  • AtomicLong: atomically updates a long.

To update an element in an array atomically, the Atomic package provides the following classes:

  • AtomicIntegerArray: atomically updates elements of an int array.
  • AtomicLongArray: atomically updates elements of a long array.
  • AtomicReferenceArray: atomically updates elements of a reference-type array.
  • Of these, the AtomicIntegerArray class mainly provides an atomic way to update the integers in an array.

The basic-type class AtomicInteger can only update one variable. If you want to update multiple variables atomically, you need the atomic-reference classes. The Atomic package provides the following 3:

  • AtomicReference: atomically updates a reference type.
  • AtomicReferenceFieldUpdater: atomically updates a field in a reference type.
  • AtomicMarkableReference: atomically updates a reference type together with a boolean mark bit, i.e. a boolean flag and a reference can be updated atomically as a pair. The constructor is AtomicMarkableReference(V initialRef, boolean initialMark).

If you need to atomically update a field of a class, you need an atomic field-updater class. The Atomic package provides the following three:

  • AtomicIntegerFieldUpdater: updater for atomically updating int fields.
  • AtomicLongFieldUpdater: updater for atomically updating long fields.
  • AtomicStampedReference: atomically updates a reference type together with a version number (stamp). It associates an integer stamp with a reference and can atomically update both the data and its version, which solves the ABA problem that may occur with CAS.

36.What is the principle of AtomicInteger?

In one sentence: use CAS to implement.

Take AtomicInteger's increment method as an example:

public final int getAndIncrement() {
    return unsafe.getAndAddInt(this, valueOffset, 1);
}

The add operation is delegated to an Unsafe instance. Let's look at the concrete CAS operation:

public final int getAndAddInt(Object var1, long var2, int var4) {
    int var5;
    do {
        var5 = this.getIntVolatile(var1, var2);
    } while (!this.compareAndSwapInt(var1, var2, var5, var5 + var4));

    return var5;
}

compareAndSwapInt is a native method that performs CAS on an int variable. The other atomic operation classes work in essentially the same way.

37. Do you understand thread deadlock? How to avoid it?

Deadlock refers to the phenomenon of two or more threads waiting for each other due to competition for resources during execution. Without external force, these threads will keep waiting for each other and cannot continue to run.

image-20230817212534730

So why does a deadlock occur? The following four conditions must be met for deadlock to occur:

image-20230817212633546

  • Mutual exclusion condition: a resource can be occupied by only one thread at a time. If another thread requests the resource, the requester can only wait until the occupying thread releases it.
  • Request-and-hold condition: a thread already holds at least one resource but requests a new one that is occupied by another thread; the requesting thread blocks without releasing the resources it already holds.
  • Non-preemption condition: a resource a thread has obtained cannot be taken away by other threads before it is used up; only the holding thread can release it.
  • Circular wait condition: when deadlock occurs, there must be a circular thread-resource chain: in the thread set {T0, T1, T2, ..., Tn}, T0 waits for a resource held by T1, T1 waits for a resource held by T2, ..., and Tn waits for a resource held by T0.

How to avoid deadlock? The answer is to break at least one of the conditions for deadlock.

  • We cannot break the mutual exclusion condition, because mutual exclusion is the whole point of locking. But the other three conditions can all be broken. How?
  • For the "request and hold" condition, request all resources at once.
  • For the "non-preemption" condition, when a thread holding some resources fails to acquire further resources, it can actively release the resources it holds, thereby breaking this condition.
  • For the "circular wait" condition, acquire resources in a fixed order: give resources a linear ordering and always request lower-numbered resources before higher-numbered ones, so no cycle can form (see the sketch below).
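A sketch of breaking circular wait via a fixed lock order (the lock names and the transfer() scenario are invented for illustration):

import java.util.concurrent.locks.ReentrantLock;

public class OrderedLockDemo {
    // Give every lock a fixed global order and always acquire in that order
    private static final ReentrantLock LOCK_1 = new ReentrantLock(); // order 1
    private static final ReentrantLock LOCK_2 = new ReentrantLock(); // order 2

    // Every thread acquires LOCK_1 before LOCK_2, so no circular wait can form
    static void transfer() {
        LOCK_1.lock();
        try {
            LOCK_2.lock();
            try {
                // ... critical section touching both resources ...
            } finally {
                LOCK_2.unlock();
            }
        } finally {
            LOCK_1.unlock();
        }
    }
}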

38. How to troubleshoot the deadlock problem?

You can use the command line tool that comes with jdk to troubleshoot:

  1. Use jps to find running Java processes: jps -l
  2. Use jstack to view thread stack information: jstack -l <pid>

Basically you can see the deadlock information.

You can also use graphical tools such as JConsole. After a thread deadlock occurs, click the "Detect Deadlock" button on JConsole's Threads panel to see the deadlock information.

Concurrency tools

39.Do you understand CountDownLatch (countdown counter)?

CountDownLatch, countdown counter, has two common application scenarios:

Scenario 1: Coordinate the end action of sub-threads: wait for all sub-threads to finish running

CountDownLatch allows one or more threads to wait for other threads to complete operations.

For example, many of us like to play Honor of Kings. When playing in a party, we have to wait until everyone is online before the game can start.

image-20230817213215553

CountDownLatch mimics this scenario:

Create five players: Da Qiao, King Lanling, Ahn'Qiraj, Nezha, and Kai. The main thread must wait for all of them to confirm before it can continue.

In this code, new CountDownLatch(5) creates a latch with an initial count of 5, each player calls countDownLatch.countDown() to confirm completion, and the main thread waits via countDownLatch.await().
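Since the original code appears only as a screenshot, here is a minimal runnable sketch of scenario 1 (the player names come from the text; the rest is assumed):

import java.util.concurrent.CountDownLatch;

public class WaitPlayersDemo {
    public static void main(String[] args) throws InterruptedException {
        String[] players = {"Da Qiao", "King Lanling", "Ahn'Qiraj", "Nezha", "Kai"};
        CountDownLatch countDownLatch = new CountDownLatch(5);

        for (String player : players) {
            new Thread(() -> {
                System.out.println(player + " is online");
                countDownLatch.countDown(); // one confirmation done
            }).start();
        }

        countDownLatch.await(); // main thread blocks until the count reaches 0
        System.out.println("All players confirmed, game starts");
    }
}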

Scenario 2. Coordinate the start of actions of sub-threads: Unify the timing of the start of actions of each thread

There is a similar scene in the game: when a match starts, every player's initial state must be consistent, and no player can spawn until all players have finished loading their outfits.

So everyone has to spawn together. In this scene, five threads again represent the five players Da Qiao, King Lanling, Ahn'Qiraj, Nezha, and Kai. Note that although each player thread has been started with start(), at runtime they all wait for the countDownLatch signal and only continue executing once it arrives.
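A minimal sketch of scenario 2, using a one-count latch as the start signal (the structure is assumed; the original code is a screenshot):

import java.util.concurrent.CountDownLatch;

public class StartTogetherDemo {
    public static void main(String[] args) {
        CountDownLatch startSignal = new CountDownLatch(1);

        for (int i = 1; i <= 5; i++) {
            int playerNo = i;
            new Thread(() -> {
                try {
                    startSignal.await(); // started, but blocked until the signal
                    System.out.println("Player " + playerNo + " spawns");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }

        System.out.println("All outfits ready...");
        startSignal.countDown(); // releases all five players at once
    }
}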

There are not many core methods in CountDownLatch:

  • await(): waits until the latch count reaches 0;
  • boolean await(long timeout, TimeUnit unit): waits for the count to reach 0 with a timeout; for example, if a player fails to confirm in time, re-match instead of waiting forever;
  • countDown(): decrements the latch count by 1;
  • getCount(): gets the current latch count.

40.Do you understand CyclicBarrier (synchronization barrier)?

CyclicBarrier literally means cyclic barrier. What it does is to block a group of threads when they reach a barrier (also called a synchronization point). The barrier will not open until the last thread reaches the barrier, and all threads intercepted by the barrier will continue to run.

It is similar to CountDownLatch in that it can coordinate the end actions of multiple threads and perform a specific action after they end. But why does CyclicBarrier exist? Naturally, because it differs from CountDownLatch.

image-20230817213905415

If we simulate this scenario in code, we find that CountDownLatch is powerless, because a CountDownLatch is one-shot and cannot be reused, while here we need to wait twice. This is where CyclicBarrier comes in: it can be reused, as sketched below.
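A minimal runnable sketch of the double-wait scenario, assuming five players and two synchronization points (the details in the original screenshots may differ):

import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

public class CyclicBarrierDemo {
    public static void main(String[] args) {
        // 5 parties; the barrier action runs once per trip, then the barrier resets
        CyclicBarrier barrier = new CyclicBarrier(5,
                () -> System.out.println("--- all arrived, next stage ---"));

        for (int i = 1; i <= 5; i++) {
            int playerNo = i;
            new Thread(() -> {
                try {
                    System.out.println("Player " + playerNo + " reached point 1");
                    barrier.await(); // first wait
                    System.out.println("Player " + playerNo + " reached point 2");
                    barrier.await(); // second wait reuses the same barrier
                } catch (InterruptedException | BrokenBarrierException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}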

image-20230817214007788

operation result:

image-20230817214026741

The core method of CyclicBarrier is still await():

  • If the current thread is not the last to arrive at the barrier, it waits until the other threads arrive, unless it is interrupted or the barrier is broken or reset;

The above example is abstracted. In essence, the process is like this:

image-20230817214149621

41.What is the difference between CyclicBarrier and CountDownLatch?

The core difference between the two:

  • CountDownLatch is one-time use, while CyclicBarrier can set the barrier multiple times for reuse;
  • With CountDownLatch, each sub-thread cannot wait for the others and only completes its own task; with CyclicBarrier, every thread can wait for the other threads

Their differences are organized in a table:

image-20230818090358827

42.Do you understand Semaphore?

Semaphore is used to control the number of threads that access a specific resource at the same time. It coordinates the threads so that the shared resource is used reasonably.

It may sound abstract, so consider parking. The spaces in a parking lot are limited and can hold only so many vehicles. If there are vacancies, the sign shows a green light and the number of remaining spaces, and cars may drive in; if the lot is full, the sign shows a red light and the number 0, and cars must wait. When a car leaves a full lot, the sign turns green again with the number of free spaces, and waiting cars can enter.

Mapping the analogy: a vehicle is a thread, entering the lot means the thread is executing, and leaving the lot means the thread has finished. Seeing the red light means the thread is blocked and cannot execute. In essence, Semaphore coordinates how multiple threads acquire a shared resource.

image-20230818090644422

Let's look at another use of Semaphore: it can be used for flow control , especially in application scenarios with limited public resources, such as database connections.

Suppose we need to read data from tens of thousands of files. Since these are IO-intensive tasks, we can start dozens of threads to read concurrently. But the data read into memory must then be stored in a database, and the database allows only 10 connections, so we must ensure that at most 10 threads obtain a database connection to save data at the same time; otherwise "cannot obtain database connection" errors will occur. This is where Semaphore can do flow control, as follows:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

public class SemaphoreTest {
    private static final int THREAD_COUNT = 30;
    private static ExecutorService threadPool = Executors.newFixedThreadPool(THREAD_COUNT);
    private static Semaphore s = new Semaphore(10);

    public static void main(String[] args) {
        for (int i = 0; i < THREAD_COUNT; i++) {
            threadPool.execute(new Runnable() {
                @Override
                public void run() {
                    try {
                        s.acquire();   // obtain a permit, blocking if none is available
                        System.out.println("save data");
                        s.release();   // return the permit
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
        }
        threadPool.shutdown();
    }
}

In the code, although 30 threads are executing, only 10 are allowed to run concurrently. Semaphore's constructor Semaphore(int permits) accepts an integer: the number of available permits. Semaphore(10) means 10 threads can obtain permits, i.e. the maximum concurrency is 10. Usage is simple: a thread first obtains a permit with acquire() and returns it with release() when done; there is also tryAcquire() to attempt acquisition without blocking.

43.Do you understand Exchanger?

Exchanger is a tool class for collaboration between threads. Exchanger is used to exchange data between threads. It provides a synchronization point where two threads can exchange each other's data.

image-20230818091331017

Two threads exchange data through the exchange() method. If the first thread executes exchange() first, it waits until the second thread also executes exchange(); when both threads reach this synchronization point, they swap data, each passing what it produced to the other.

Exchanger can be used in genetic algorithms, where two individuals are selected for crossover: their data is exchanged and crossover rules produce two offspring. Exchanger can also be used for proofreading. For example, paper bank statements must be typed into electronic form manually; to avoid errors, two people at posts A and B both enter the data into Excel, and the system then loads the two spreadsheets and compares them to see whether the entries match.

import java.util.concurrent.Exchanger;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ExchangerTest {
    private static final Exchanger<String> exgr = new Exchanger<String>();
    private static ExecutorService threadPool = Executors.newFixedThreadPool(2);

    public static void main(String[] args) {
        threadPool.execute(new Runnable() {
            @Override
            public void run() {
                try {
                    String A = "bank statement A"; // A enters the bank statement data
                    exgr.exchange(A);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        threadPool.execute(new Runnable() {
            @Override
            public void run() {
                try {
                    String B = "bank statement B"; // B enters the bank statement data
                    String A = exgr.exchange(B);
                    System.out.println("Do A and B match: " + A.equals(B)
                            + ", A entered: " + A + ", B entered: " + B);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        threadPool.shutdown();
    }
}

If one of the two threads never executes exchange(), the other waits forever. If you are worried about that, you can use exchange(V x, long timeout, TimeUnit unit) to set a maximum waiting time.

Thread Pool

44.What is a thread pool?

Thread pool: Simply understood, it is a pool that manages threads.

image-20230818112338751

  • It helps us manage threads and avoids the resource cost of repeatedly creating and destroying them. A thread is backed by an object: creating it involves class loading and object creation, and destroying it involves GC, all of which carry overhead.
  • It improves response speed. When a task arrives, taking a thread from the pool is much faster than creating a new thread to execute it.
  • It enables reuse. When a thread finishes, it returns to the pool, so threads are reused and resources are saved.

45. Can you talk about the application of thread pool in work?

Previously, we had a need to connect with a third party and push data to the third party. Multi-threading was introduced to improve the efficiency of data push, and a thread pool was used to manage threads.

image-20230818112511195

Complete runnable code address: https://gitee.com/fighter3/thread-demo.git

The parameters of the thread pool are as follows:

  • corePoolSize: number of core threads, chosen as CPU count × 2
  • maximumPoolSize: maximum number of threads, chosen equal to the core thread count
  • keepAliveTime: idle survival time of non-core threads, set directly to 0
  • unit: time unit for keepAliveTime; TimeUnit.SECONDS was chosen
  • workQueue: thread pool waiting queue, a LinkedBlockingQueue blocking queue
  • At the same time, synchronized is used to lock so that data will not be pushed repeatedly:
synchronized (PushProcessServiceImpl.class) {
    // ... push logic elided ...
}

ps: This example is just a simple data push. The same pattern can be combined with other business logic, such as data cleaning and data statistics.

46. ​​Can you briefly talk about the workflow of the thread pool?

To use a popular metaphor:

There is a business hall with six windows in total. Three windows are currently open, and three clerks are sitting at them handling business.

What might happen if I go to do business?

  1. I find that a window is free, so I go straight to the clerk and handle my business.

image-20230818113400319

  2. I find no free window, so I wait in line in the queuing area.

image-20230818113442427

  3. I find no free window and the waiting area is also full; I am getting frustrated. Seeing this, the manager calls the resting clerks back to work and moves the people at the front of the queue to the newly opened windows, while I go queue up in the waiting area. The clerks work harder for a while; if the extra windows then stay idle for some time, the manager lets those clerks go back to rest.

image-20230818113553370

  4. Then I see all six windows busy and no room left in the waiting area. I get anxious and want to make a scene. The manager rushes out. What can the manager do?

image-20230818113635891

  1. Tell me the banking system is down and turn me away outright
  2. Ask who sent me here and send me back to them to handle it
  3. Seeing that I am in a hurry, squeeze me into the queue by removing whoever is at the front
  4. Tell me there is nothing to be done today and to try again another day

The above process is quite similar to the general flow of the JDK thread pool.

  1. The three windows in operation correspond to the number of core thread pools: corePoolSize
  2. The total number of business windows 6 corresponds to: maximumPoolSize
  3. The temporarily opened windows are closed if no one uses them for a certain time; this corresponds to: keepAliveTime and unit
  4. The queuing area is the waiting queue: workQueue
  5. When the application cannot be processed, the solution provided by the bank corresponds to: RejectedExecutionHandler
  6. threadFactory This parameter is a thread factory in JDK, which is used to create thread objects and generally does not move.

So the workflow of our thread pool is easier to understand:

  1. When the thread pool is first created, there are no threads in it; the task queue is passed in as a parameter. However, even if the queue already contains tasks, the thread pool does not execute them immediately.
  2. When calling the execute() method to add a task, the thread pool will make the following judgment:
  • If the number of running threads is less than corePoolSize, create a thread immediately to run the task;
  • If the number of running threads is greater than or equal to corePoolSize, then put this task into the queue;
  • If the queue is full at this time and the number of running threads is less than maximumPoolSize, then a non-core thread must be created to run the task immediately;
  • If the queue is full and the number of running threads is greater than or equal to maximumPoolSize, the thread pool will handle it accordingly according to the rejection policy.

image-20230818142359814

  3. When a thread completes a task, it takes the next task from the queue and executes it.
  4. When a thread has been idle for longer than keepAliveTime, the thread pool judges: if the number of currently running threads is greater than corePoolSize, this thread is stopped. So after all tasks are done, the pool eventually shrinks back to corePoolSize threads.

47.What are the main parameters of the thread pool?

image-20230818142516873

The thread pool has seven major parameters; we need to focus on four of them: corePoolSize, maximumPoolSize, workQueue, and handler.

  1. corePoolSize

This value initializes the number of core threads in the pool. When the number of threads in the pool is < corePoolSize, a new thread is created for each newly submitted task.

When the number of threads = corePoolSize, new tasks are appended to the workQueue instead.

  2. maximumPoolSize

maximumPoolSize is the maximum number of threads allowed (core threads + non-core threads). When the BlockingQueue is full but the total number of threads in the pool is < maximumPoolSize, a new (non-core) thread is created.

  3. keepAliveTime

Non-core threads (up to maximumPoolSize - corePoolSize of them) may survive at most keepAliveTime while idle and not working.

  4. unit

The unit of time that non-core threads in the thread pool remain alive.

  • TimeUnit.DAYS: days
  • TimeUnit.HOURS: hours
  • TimeUnit.MINUTES: minutes
  • TimeUnit.SECONDS: seconds
  • TimeUnit.MILLISECONDS: milliseconds
  • TimeUnit.MICROSECONDS: microseconds
  • TimeUnit.NANOSECONDS: nanoseconds
  5. workQueue

The waiting queue maintains the Runnable objects waiting to be executed. When the number of running threads = corePoolSize, new tasks are added to the workQueue; if the workQueue is also full, the pool tries to run tasks on non-core threads. The waiting queue should be bounded whenever possible.

  6. threadFactory

The factory used when creating a new thread can be used to set the thread name, whether it is a daemon thread, etc.

  7. handler

The saturation (rejection) strategy, executed when the workQueue is full and the number of threads has reached maximumPoolSize.
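To tie the seven parameters together, here is a hedged sketch constructing a pool by hand; the concrete numbers (4 core threads, 8 max, a bounded queue of 100) are illustrative, not a recommendation:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolParamsDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4,                                    // corePoolSize
                8,                                    // maximumPoolSize
                60L,                                  // keepAliveTime
                TimeUnit.SECONDS,                     // unit
                new ArrayBlockingQueue<>(100),        // workQueue (bounded, as recommended)
                Executors.defaultThreadFactory(),     // threadFactory
                new ThreadPoolExecutor.AbortPolicy()  // handler
        );
        pool.execute(() -> System.out.println("task running"));
        pool.shutdown();
    }
}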

48.What are the rejection strategies of the thread pool?

Analogy to the previous example, how to handle business when it is impossible to handle, to help memory:

image-20230818143514749

  • AbortPolicy: Throw an exception directly, this policy is used by default
  • CallerRunsPolicy: Use the thread where the caller is located to execute tasks
  • DiscardOldestPolicy: Discard the oldest task in the blocking queue, that is, the task at the front of the queue
  • DiscardPolicy: The current task is discarded directly

If you want to implement your own rejection policy, just implement the RejectedExecutionHandler interface.
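For example, a minimal custom policy that logs the rejection and drops the task (the class name and logging choice are illustrative):

import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;

// Log the rejection instead of throwing, then drop the task
public class LogAndDiscardPolicy implements RejectedExecutionHandler {
    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        System.err.println("Task rejected, pool state: " + executor);
        // optionally: persist the task, push it to an MQ, or retry later
    }
}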

49.What kinds of work queues are there in the thread pool?

Commonly used blocking queues mainly include the following:

image-20230818143718439

  • ArrayBlockingQueue: a bounded blocking queue backed by an array, FIFO ordered.
  • LinkedBlockingQueue: a blocking queue based on a linked-list structure, FIFO ordered, with an optional capacity; if no capacity is set, it is effectively unbounded with a maximum length of Integer.MAX_VALUE. Throughput is usually higher than ArrayBlockingQueue. The newFixedThreadPool thread pool uses this queue.
  • DelayQueue: a queue that delays execution of a task until its scheduled time, ordered by the specified execution time from smallest to largest (otherwise by insertion order). The newScheduledThreadPool thread pool uses this queue.
  • PriorityBlockingQueue: an unbounded blocking queue with priorities.
  • SynchronousQueue: a blocking queue that stores no elements; each insert must wait for another thread's remove, otherwise the insert blocks. Throughput is usually higher than LinkedBlockingQueue. The newCachedThreadPool thread pool uses this queue.

50. What is the difference between execute and submit in thread pool submission?

  1. execute is used to submit tasks that do not require a return value:

threadsPool.execute(new Runnable() {
    @Override
    public void run() {
        // do some work without a result
    }
});
  2. The submit() method is used to submit tasks that require a return value. The thread pool returns a Future object, through which you can check whether the task executed successfully and obtain the return value via the Future's get() method:

Future<Object> future = executor.submit(hasReturnValueTask);
try {
    Object s = future.get();
} catch (InterruptedException e) {
    // handle interruption
} catch (ExecutionException e) {
    // handle failure of task execution
} finally {
    // shut down the thread pool
    executor.shutdown();
}

51.Do you know how to close the thread pool?

The thread pool can be shut down by calling its shutdown or shutdownNow method. Both work by traversing the worker threads in the pool and calling each thread's interrupt method one by one; tasks that cannot respond to interruption may therefore never terminate.

shutdown() sets the thread pool status to SHUTDOWN and does not stop the pool immediately:

  1. Stop accepting tasks from external submissions
  2. Tasks running internally and tasks waiting in the queue will be executed.
  3. Wait until the second step is completed before actually stopping

shutdownNow() sets the thread pool status to STOP. It usually stops immediately, but in fact may not:

  1. Like shutdown(), first stop receiving externally submitted tasks.
  2. Ignore tasks waiting in the queue
  3. Try to interrupt the running task
  4. Returns a list of unexecuted tasks

In simple terms, the differences between shutdown and shutdownNow are as follows:

  • shutdownNow() stops the thread pool right away: both running and queued tasks are stopped. It takes effect immediately but is riskier.
  • shutdown() only closes the submission channel, so further submit() calls are rejected; tasks already running or queued finish normally, and the pool fully stops afterwards.

52. How should the number of threads in the thread pool be configured?

Threads are a scarce resource in Java; a thread pool is neither "the bigger the better" nor "the smaller the better". Tasks fall into compute-intensive, IO-intensive, and mixed types.

  1. Compute-intensive: mostly CPU and memory work, e.g. encryption and logic-heavy business processing.
  2. IO-intensive: e.g. database connections and network communication.

image-20230818145550809

General experience, parameter configuration of different types of thread pools:

  1. For compute-intensive applications, the thread pool should generally not be too large: usually CPU count + 1. The +1 accounts for page faults (some data may still be on disk, and the extra thread keeps the CPU busy while data is read into memory). Too many threads cause frequent context switching and task scheduling. The current CPU core count can be obtained as follows:

Runtime.getRuntime().availableProcessors();

  2. IO-intensive: the thread count should be larger, around 2 × the machine's CPU core count.
  3. Mixed: consider splitting the work into CPU-intensive and IO-intensive tasks. If their execution times are similar, splitting improves throughput; if they differ greatly, splitting is unnecessary.

53.What are the common thread pools?

Four kinds commonly come up in interviews, all created via the utility class Executors. Note that Alibaba's "Java Development Manual" forbids creating thread pools this way.

image-20230818145815716

54. Can you talk about the principles of four common thread pools?

The first three thread pools are built by directly calling the ThreadPoolExecutor constructor.

newSingleThreadExecutor

public static ExecutorService newSingleThreadExecutor(ThreadFactory threadFactory) {
    return new FinalizableDelegatedExecutorService
        (new ThreadPoolExecutor(1, 1,
                                0L, TimeUnit.MILLISECONDS,
                                new LinkedBlockingQueue<Runnable>(),
                                threadFactory));
}

Thread pool features:

  • The number of core threads is 1
  • The maximum number of threads is also 1
  • The blocking queue is an unbounded queue LinkedBlockingQueue, which may cause OOM
  • keepAliveTime is 0

image-20230818150707668

Work process:

  • Submit a task
  • If there is no thread in the pool yet, create one to execute the task
  • If there is, add the task to the blocking queue
  • The single thread repeatedly takes a task from the queue, executes it, then takes the next

Applicable scenarios:

Suitable for executing tasks serially, one task at a time.

newFixedThreadPool

public static ExecutorService newFixedThreadPool(int nThreads, ThreadFactory threadFactory) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>(),
                                  threadFactory);
}

Thread pool features:

  • The number of core threads is the same as the maximum number of threads
  • There is no so-called non-idle time, that is, keepAliveTime is 0
  • The blocking queue is an unbounded queue LinkedBlockingQueue, which may cause OOM

image-20230818150952708

Work process:

  • Submit a task
  • If the number of threads is less than the core count, create a core thread to execute the task
  • If the number of threads equals the core count, add the task to the LinkedBlockingQueue
  • When a thread finishes a task, it blocks on the queue to fetch the next task and continues executing

Applicable scenarios:

FixedThreadPool is suitable for CPU-intensive tasks: it ensures that when the CPU is occupied by worker threads for long periods, as few threads as possible are allocated, i.e. it suits long-running tasks.

newCachedThreadPool

public static ExecutorService newCachedThreadPool(ThreadFactory threadFactory) {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                  60L, TimeUnit.SECONDS,
                                  new SynchronousQueue<Runnable>(),
                                  threadFactory);
}

Thread pool features:

  • The number of core threads is 0
  • The maximum number of threads is Integer.MAX_VALUE, which is infinite. It may cause OOM due to unlimited thread creation.
  • The blocking queue is SynchronousQueue
  • The idle survival time of non-core threads is 60 seconds

If tasks are submitted faster than they are processed, each submission inevitably creates a thread; in the extreme case, too many threads are created and CPU and memory are exhausted. On the other hand, since threads idle for 60 seconds are terminated, a CachedThreadPool that stays idle for a long time holds no resources.

image-20230818151257285

Work process:

  • Submit task
  • Because there are no core threads, tasks are added directly to the SynchronousQueue queue.
  • Determine whether there is an idle thread, and if so, take out the task and execute it.
  • If there is no idle thread, create a new thread for execution.
  • The thread that has completed the task can survive for 60 seconds. If it receives the task during this period, it can continue to live; otherwise, it will be destroyed.

Applicable scenarios:

Used to execute a large number of short-term small tasks concurrently.

newScheduledThreadPool

public ScheduledThreadPoolExecutor(int corePoolSize) {
    super(corePoolSize, Integer.MAX_VALUE, 0, NANOSECONDS,
          new DelayedWorkQueue());
}

Thread pool features:

  • The maximum number of threads is Integer.MAX_VALUE, and there is also the risk of OOM

  • The blocking queue is DelayedWorkQueue

  • keepAliveTime is 0

  • scheduleAtFixedRate(): executes periodically at a fixed rate

  • scheduleWithFixedDelay(): executes with a fixed delay after each run completes (both are sketched below)
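A small sketch contrasting the two methods (the periods and delays are chosen arbitrarily; the scheduler keeps running since its threads are non-daemon):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ScheduleDemo {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);

        // fires every 3 seconds, measured from each start time
        scheduler.scheduleAtFixedRate(
                () -> System.out.println("fixed rate: " + System.currentTimeMillis()),
                1, 3, TimeUnit.SECONDS);

        // waits 3 seconds after each completion before the next run
        scheduler.scheduleWithFixedDelay(
                () -> System.out.println("fixed delay: " + System.currentTimeMillis()),
                1, 3, TimeUnit.SECONDS);
    }
}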

image-20230818151636793

Working Mechanism

  • The thread takes an expired ScheduledFutureTask from the DelayQueue (DelayQueue.take()); a task is expired when its scheduled time has been reached, i.e. the current time is greater than or equal to the task's time.
  • The thread executes this ScheduledFutureTask.
  • The thread modifies the time variable of ScheduledFutureTask to the time when it will be executed next time.
  • The thread puts the ScheduledFutureTask after the modified time back into the DelayQueue (DelayQueue.add()).

image-20230818151846343

Applicable scenarios:

Scenarios where tasks are executed periodically and the number of threads needs to be limited.

Will using a thread pool with an unbounded queue cause any problems?

For example, newFixedThreadPool uses the unbounded blocking queue LinkedBlockingQueue. If the thread obtains a task and the execution time of the task is relatively long, more tasks will accumulate in the queue, causing the machine memory usage to continue to soar, eventually leading to OOM.

55. Do you know how to handle thread pool exceptions?

When a task runs in a thread pool, its code may throw a RuntimeException. Depending on how the task was submitted, the pool may swallow the exception or create a new thread to replace the failed one, so we may never notice that the task failed. Thread pool exception handling therefore needs to be considered.

Common exception handling methods:

image-20230818152050388
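Two of the common patterns in a hedged sketch: an UncaughtExceptionHandler installed by the thread factory catches exceptions from tasks passed to execute(), while submit() wraps the exception in the returned Future and only surfaces it on get():

import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PoolExceptionDemo {
    public static void main(String[] args) throws InterruptedException {
        // Pattern 1: handler set by the thread factory catches execute() exceptions
        ExecutorService pool = Executors.newFixedThreadPool(1, r -> {
            Thread t = new Thread(r);
            t.setUncaughtExceptionHandler(
                    (thread, e) -> System.err.println(thread.getName() + " failed: " + e));
            return t;
        });
        pool.execute(() -> { throw new RuntimeException("boom via execute"); });

        // Pattern 2: submit() captures the exception in the Future
        Future<?> future = pool.submit(
                (Runnable) () -> { throw new RuntimeException("boom via submit"); });
        try {
            future.get();
        } catch (ExecutionException e) {
            System.err.println("caught from future: " + e.getCause());
        }
        pool.shutdown();
    }
}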

56. Can you tell me how many states the thread pool has?

The thread pool has these states: RUNNING, SHUTDOWN, STOP, TIDYING, TERMINATED .

// thread pool states
private static final int RUNNING    = -1 << COUNT_BITS;
private static final int SHUTDOWN   =  0 << COUNT_BITS;
private static final int STOP       =  1 << COUNT_BITS;
private static final int TIDYING    =  2 << COUNT_BITS;
private static final int TERMINATED =  3 << COUNT_BITS;

image-20230818152413535

RUNNING

  • The thread pool in this state will receive new tasks and process tasks in the blocking queue;
  • Call the shutdown() method of the thread pool to switch to the SHUTDOWN state;
  • Call the shutdownNow() method of the thread pool to switch to the STOP state;

SHUTDOWN

  • In this state the pool does not accept new tasks but continues processing tasks in the blocking queue;
  • When the queue is empty and no tasks are running in the pool, it enters the TIDYING state;

STOP

  • In this state the pool accepts no new tasks, does not process queued tasks, and interrupts running tasks;
  • When no tasks are running in the pool, it enters the TIDYING state;

TIDYING

  • This state indicates that all tasks have terminated and the recorded task count is 0.
  • After terminated() executes, the pool enters the TERMINATED state

TERMINATED

  • This state indicates that the thread pool has been completely terminated.

57. How does the thread pool implement dynamic modification of parameters?

The thread pool provides several setter methods to set the parameters of the thread pool.

image-20230818152708705

There are two main ideas here:

image-20230818152804000

  • Under a microservice architecture, you can use a configuration center such as Nacos or Apollo (or build your own). The business service reads the thread pool configuration and modifies the parameters of the corresponding thread pool instance.
  • If a configuration center is not an option, you can extend ThreadPoolExecutor yourself, override methods, listen for parameter changes, and modify the parameters dynamically (see the sketch below).
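A minimal sketch of the setter-based approach; the trigger (e.g. a config-center callback) and the concrete numbers are assumed:

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class DynamicPoolDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>(100));

        // e.g. applied when a new value arrives from the config center
        pool.setCorePoolSize(4);
        pool.setMaximumPoolSize(8);
        pool.setKeepAliveTime(30L, TimeUnit.SECONDS);

        System.out.println("core=" + pool.getCorePoolSize()
                + ", max=" + pool.getMaximumPoolSize());
        pool.shutdown();
    }
}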

Do you know about thread pool tuning?

There is no fixed formula for thread pool configuration. Usually the thread pool will be evaluated to a certain extent in advance. Common evaluation schemes are as follows:

image-20230818152924220

Sufficient testing must be carried out before going online, and a complete thread pool monitoring mechanism must be established after going online.

During operation, combine monitoring and alerting to analyze thread pool problems, and use the pool's dynamic parameter configuration mechanism to adjust the configuration when optimization points are found.

Pay attention to careful observation afterwards and make adjustments at any time.

image-20230818153010671

58. Can you design and implement a thread pool?

This question appears more frequently in Ali’s interviews

The implementation principle of the thread pool was covered above. To implement one ourselves, we only need to grasp the core process of the thread pool:

image-20230818153303199

Our own implementation is to complete this core process:

  • The pool holds N worker threads
  • Tasks are submitted to the pool to run
  • If the pool is busy, the task is put into a queue
  • When a worker becomes free, it takes tasks from the queue to execute, as sketched below
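A teaching sketch under those constraints: a fixed number of workers pulling from a blocking queue. It deliberately omits shutdown, rejection policies, and non-core threads:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class MiniThreadPool {
    private final BlockingQueue<Runnable> taskQueue;

    public MiniThreadPool(int workerCount, int queueSize) {
        taskQueue = new LinkedBlockingQueue<>(queueSize);
        for (int i = 0; i < workerCount; i++) {
            Thread worker = new Thread(() -> {
                try {
                    while (true) {
                        // take() blocks when the queue is empty
                        taskQueue.take().run();
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt(); // worker exits
                }
            }, "mini-pool-worker-" + i);
            worker.start();
        }
    }

    // put() blocks when the queue is full (a crude "wait" policy)
    public void execute(Runnable task) throws InterruptedException {
        taskQueue.put(task);
    }

    public static void main(String[] args) throws InterruptedException {
        MiniThreadPool pool = new MiniThreadPool(3, 10);
        for (int i = 0; i < 5; i++) {
            int n = i;
            pool.execute(() -> System.out.println(
                    Thread.currentThread().getName() + " runs task " + n));
        }
    }
}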

59. What should be done if the single-machine thread pool execution is powered off?

We can apply transaction management to in-flight tasks and persist the tasks in the blocking queue. When a power outage or crash interrupts operation, in-flight tasks can be rolled back by replaying the log, undoing operations that had already been performed, and the entire blocking queue can then be re-executed.

In other words: persist the blocking queue; apply transaction control to in-flight tasks; after a power outage, roll back in-flight tasks and recover via the log; and reload the queued data after the server restarts.

Concurrency containers and frameworks

60.Do you understand the Fork/Join framework?

The Fork/Join framework, introduced in Java 7, executes tasks in parallel: it splits a large task into several small tasks and finally merges the results of the small tasks into the result of the large task.

To master the Fork/Join framework, you first need to understand two points, divide and conquer and work-stealing algorithms .

image-20230818153715902

Work-stealing algorithm

Split a large task into several small tasks, put these small tasks into different queues, and create separate threads to execute the tasks in the queue.

Then the problem arises: some threads finish quickly and others slowly. A thread that has finished its own work should not sit idle; instead, it helps the threads that have not finished by stealing a task from another thread's queue and executing it. This is called work stealing.

When work stealing happens, two threads access the same queue. To reduce contention between the stealing thread and the thread being stolen from, a double-ended queue (deque) is usually used: the owner thread always takes tasks from the head of its deque, while the stealing thread takes tasks from the tail.

image-20230818153908159

The main difference between a ForkJoinTask and an ordinary task is that it must implement the compute() method. In compute(), first determine whether the task is small enough; if so, execute it directly. If it is still large, split it into two subtasks; when each subtask calls fork(), it in turn enters compute() to decide whether to split further or execute directly and return a result. The join() method waits for a subtask to complete and returns its result, as sketched below.
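A classic sketch of this pattern: a RecursiveTask that sums an array (the threshold and array size are arbitrary):

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Split until the range is small enough, then sum directly
public class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000;
    private final long[] array;
    private final int from, to;

    public SumTask(long[] array, int from, int to) {
        this.array = array;
        this.from = from;
        this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {         // small enough: compute directly
            long sum = 0;
            for (int i = from; i < to; i++) sum += array[i];
            return sum;
        }
        int mid = (from + to) / 2;            // otherwise split into two subtasks
        SumTask left = new SumTask(array, from, mid);
        SumTask right = new SumTask(array, mid, to);
        left.fork();                          // run the left half asynchronously
        return right.compute() + left.join(); // compute right, then wait for left
    }

    public static void main(String[] args) {
        long[] data = new long[10_000];
        for (int i = 0; i < data.length; i++) data[i] = i;
        Long sum = new ForkJoinPool().invoke(new SumTask(data, 0, data.length));
        System.out.println(sum); // 49995000
    }
}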

Source address: The counterattack of noodle scum: Sixty questions on Java concurrency, detailed explanations with pictures and texts, come and see how many you know!

Origin: blog.csdn.net/weixin_45483322/article/details/132363273