A thorough guide to high-frequency Java concurrency interview questions

The content is taken from my learning website: topjavaer.cn

Here are 50 high-frequency Java concurrency interview questions.

Thread Pool

Thread pool: A pool that manages threads.

Why do we usually use a thread pool to create threads instead of directly creating a new thread?

Creating threads manually has two disadvantages:

  1. The risk is uncontrolled
  2. Frequent creation is expensive

Why is it out of control?

System resources are limited. If everyone creates threads manually for their own business, there is no uniform standard for thread creation (for example, whether the created threads are named). When the system runs, all these threads grab resources with no rules at all; the resulting chaos is easy to imagine and hard to control.

Why is frequent manual thread creation expensive? How is it different from new Object()?

Although everything in Java is an object, there is still a difference between new Thread() creating a thread and new Object().

The new Object() process is as follows:

  1. The JVM allocates a block of memory M
  2. Initialize the object on memory M
  3. Assign the address of memory M to the reference variable obj

The process of creating a thread is as follows:

  1. The JVM allocates memory for a thread stack, which holds a stack frame for each thread method call
  2. Each stack frame consists of an array of local variables, return value, operand stack and constant pool
  3. Each thread obtains a program counter, which is used to record the thread instruction address currently being executed by the virtual machine
  4. The system creates a native thread corresponding to the Java thread
  5. Add thread-related descriptors to JVM internal data structures
  6. The thread shares the heap and the method area with other threads

Creating a thread requires roughly 1 MB of space (Java 8, on a 2-core 8 GB machine), so frequently creating and destroying threads by hand is clearly expensive.

Why use a thread pool?

  • Reduce resource consumption. Reusing already-created threads lowers the cost of thread creation and destruction.
  • Improve responsiveness. When a task arrives, it can be executed immediately without waiting for a thread to be created.
  • Improve thread manageability. Threads are managed in a unified way, preventing the system from creating a large number of threads of the same type and consuming memory.

Thread pool execution principle?

(Figure: thread pool execution process)

  1. When the number of live threads in the pool is less than corePoolSize, the pool creates a new thread for each newly submitted task. As long as the number of live threads is less than or equal to corePoolSize, those threads stay alive: even if they are idle longer than keepAliveTime they are not destroyed, but keep blocking and waiting for tasks from the task queue.
  2. When the number of live threads equals corePoolSize, a newly submitted task is put into the task queue workQueue to wait for execution.
  3. When the number of live threads equals corePoolSize and the task queue is full, and assuming maximumPoolSize > corePoolSize, the pool creates new threads to handle new tasks until the number of threads reaches maximumPoolSize; after that no more threads are created.
  4. If the number of threads has already reached maximumPoolSize and the task queue is full, new tasks are handled by the rejection policy. The default policy throws a RejectedExecutionException.
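
The four steps above can be observed directly. Below is a minimal, illustrative sketch (class and task names are made up): with corePoolSize = 2, maximumPoolSize = 4 and a bounded queue of 2, the first two tasks take the core threads, the next two wait in the queue, two more trigger non-core threads, and a seventh task is rejected.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolFlowDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 60L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(2),
                new ThreadPoolExecutor.AbortPolicy()); // default policy: throw RejectedExecutionException

        for (int i = 1; i <= 7; i++) {
            final int id = i;
            try {
                // tasks 1-2 use core threads, 3-4 queue up, 5-6 add non-core threads, 7 is rejected
                pool.execute(() -> {
                    System.out.println("task " + id + " on " + Thread.currentThread().getName());
                    try { Thread.sleep(1000); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
                });
            } catch (RejectedExecutionException e) {
                System.out.println("task " + id + " rejected");
            }
        }
        pool.shutdown();
    }
}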


What are the thread pool parameters?

Generic constructor for ThreadPoolExecutor:

public ThreadPoolExecutor(int corePoolSize, int maximumPoolSize, long keepAliveTime, TimeUnit unit, BlockingQueue<Runnable> workQueue, ThreadFactory threadFactory, RejectedExecutionHandler handler);

1. corePoolSize: When there is a new task, if the number of threads in the thread pool does not reach the basic size of the thread pool, a new thread will be created to execute the task, otherwise the task will be put into the blocking queue. When the number of surviving threads in the thread pool is always greater than corePoolSize, you should consider increasing corePoolSize.

2. maximumPoolSize: When the blocking queue is full, if the number of threads in the thread pool does not exceed the maximum number of threads, a new thread will be created to run the task. Otherwise the new task is processed according to the rejection policy. Non-core threads are similar to temporarily borrowed resources. These threads should exit after the idle time exceeds keepAliveTime to avoid resource waste.

3. BlockingQueue: stores the tasks waiting to run.

4. keepAliveTime: how long a non-core thread is kept alive once it becomes idle; this parameter only applies to non-core threads. Setting it to 0 means surplus idle threads are terminated immediately.

5. TimeUnit: Time unit

TimeUnit.DAYS
TimeUnit.HOURS
TimeUnit.MINUTES
TimeUnit.SECONDS
TimeUnit.MILLISECONDS
TimeUnit.MICROSECONDS
TimeUnit.NANOSECONDS

6. ThreadFactory: Whenever the thread pool creates a new thread, it is done through the thread factory method. Only one method newThread is defined in ThreadFactory, which is called whenever the thread pool needs to create a new thread.

public class MyThreadFactory implements ThreadFactory {

    private final String poolName;

    public MyThreadFactory(String poolName) {
        this.poolName = poolName;
    }

    public Thread newThread(Runnable runnable) {
        // pass the pool name to the constructor to distinguish threads of different pools
        return new MyAppThread(runnable, poolName);
    }
}
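
As a rough illustration of how such a factory is wired in (MyAppThread is not shown above, so this self-contained variant simply names the threads; the pool name "order-pool" and the sizes are made-up values), the factory is passed to the ThreadPoolExecutor constructor together with a rejection handler:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class NamedFactoryDemo {

    static class NamedThreadFactory implements ThreadFactory {
        private final String poolName;
        private final AtomicInteger seq = new AtomicInteger(1);

        NamedThreadFactory(String poolName) {
            this.poolName = poolName;
        }

        @Override
        public Thread newThread(Runnable runnable) {
            // "order-pool-1", "order-pool-2", ... makes thread dumps easier to read
            return new Thread(runnable, poolName + "-" + seq.getAndIncrement());
        }
    }

    public static void main(String[] args) {
        ExecutorService pool = new ThreadPoolExecutor(
                2, 4, 60L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<>(100),
                new NamedThreadFactory("order-pool"),
                new ThreadPoolExecutor.CallerRunsPolicy());
        pool.execute(() -> System.out.println(Thread.currentThread().getName()));
        pool.shutdown();
    }
}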

7. RejectedExecutionHandler: When the queue and thread pool are full, process new tasks according to the rejection policy.

AbortPolicy: the default policy; directly throws a RejectedExecutionException
DiscardPolicy: does nothing and silently discards the task
DiscardOldestPolicy: discards the task at the head of the waiting queue and then executes the current task
CallerRunsPolicy: the calling thread executes the task itself

How to set the thread pool size?

If the number of threads in the thread pool is too small, when there are a large number of requests to be processed, the system response will be slow, which will affect the user experience, and even a large number of tasks will accumulate in the task queue, resulting in OOM.

If the number of threads in the thread pool is too large, a large number of threads may seize CPU resources at the same time, which will cause a large number of context switches, thereby increasing the execution time of threads and affecting execution efficiency.

CPU-intensive tasks (N+1) : these tasks mainly consume CPU resources, so the number of threads can be set to N (the number of CPU cores) + 1. The one extra thread exists so that when a thread happens to block (for example because of an I/O operation, a sleep, or waiting for a lock) and releases the CPU, the extra thread can make full use of the idle CPU time.

I/O-intensive tasks (2N) : the system spends most of its time handling I/O operations, during which threads block and release the CPU, so the CPU can be handed over to other threads. Applications dominated by I/O-intensive tasks can therefore be configured with more threads. The concrete formula is: optimal thread count = number of CPU cores * (1 / CPU utilization) = number of CPU cores * (1 + (I/O time / CPU time)), which can generally be set to 2N.
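
A minimal sketch of these two rules of thumb; the I/O and CPU times are assumed measurements, purely for illustration:

public class PoolSizing {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();

        // CPU-intensive: N + 1
        int cpuBound = cores + 1;

        // I/O-intensive: cores * (1 + ioTime / cpuTime); equal times give the 2N rule of thumb
        double ioTime = 50, cpuTime = 50;   // assumed measurements, purely illustrative
        int ioBound = (int) (cores * (1 + ioTime / cpuTime));

        System.out.println("cores=" + cores + ", cpuBound=" + cpuBound + ", ioBound=" + ioBound);
    }
}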


What are the types of thread pools? Applicable scene?

Common thread pools are FixedThreadPool, SingleThreadExecutor, CachedThreadPool and ScheduledThreadPool. They are all ExecutorService instances.

FixedThreadPool

A thread pool with a fixed number of threads. At any point in time, at most nThreads threads are active to perform tasks.

public static ExecutorService newFixedThreadPool(int nThreads) {
	return new ThreadPoolExecutor(nThreads, nThreads, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());
}

Using the unbounded queue LinkedBlockingQueue (the queue capacity is Integer.MAX_VALUE), the running thread pool will not reject tasks, that is, the RejectedExecutionHandler.rejectedExecution() method will not be called.

maxThreadPoolSize is an invalid parameter, so set its value to be consistent with coreThreadPoolSize.

keepAliveTime is also an invalid parameter, set it to 0L, because all threads in this thread pool are core threads, and core threads will not be recycled (unless executor.allowCoreThreadTimeOut(true) is set).

Applicable scenarios: It is suitable for processing CPU-intensive tasks, ensuring that the CPU is allocated as few threads as possible when the CPU is used by worker threads for a long time, that is, it is suitable for performing long-term tasks. It should be noted that FixedThreadPool will not reject tasks, and it will cause OOM when there are many tasks.

SingleThreadExecutor

A thread pool with only one thread.

public static ExecutorService newSingleThreadExecutor() {
	return new ThreadPoolExecutor(1, 1, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());
}

Use unbounded queue LinkedBlockingQueue. There is only one running thread in the thread pool, and new tasks are put into the work queue. After the thread processes the task, it will get the task from the queue for execution in a loop. Ensure that tasks are executed sequentially.

Applicable scenarios: executing tasks serially, one task at a time. When there are many tasks, it can also cause OOM.

CachedThreadPool

A thread pool that creates new threads as needed.

public static ExecutorService newCachedThreadPool() {
	return new ThreadPoolExecutor(0, Integer.MAX_VALUE, 60L, TimeUnit.SECONDS, new SynchronousQueue<Runnable>());
}

If the rate at which the main thread submits tasks is higher than the rate at which the threads process them, CachedThreadPool keeps creating new threads. In extreme cases this exhausts CPU and memory resources.

It uses a SynchronousQueue, which has no capacity, as the work queue. When the pool has an idle thread, the task handed over by SynchronousQueue.offer(Runnable task) is processed by that idle thread; otherwise a new thread is created to process it.

Applicable scenarios: executing a large number of short-lived small tasks concurrently. CachedThreadPool allows up to Integer.MAX_VALUE threads to be created, so a huge number of threads may be created and cause OOM.

ScheduledThreadPoolExecutor

Runs tasks after a given delay, or periodically. It is rarely used in real projects because there are alternatives such as Quartz.

The task queue DelayQueue encapsulates a PriorityQueue, which sorts the tasks in the queue: the ScheduledFutureTask with the smaller time variable is executed first, and if the times are equal, the ScheduledFutureTask with the smaller sequenceNumber (the one submitted earlier) is executed first.

Steps for executing a periodic task:

  1. The thread takes a due ScheduledFutureTask from the DelayQueue (DelayQueue.take()). A due task is a ScheduledFutureTask whose time is greater than or equal to the current system time;
  2. executes this ScheduledFutureTask;
  3. modifies the time variable of the ScheduledFutureTask to the next time it should run;
  4. puts the ScheduledFutureTask with the modified time back into the DelayQueue (DelayQueue.add()).

Applicable scenarios: Scenarios where tasks are executed periodically, and scenarios where the number of threads needs to be limited.
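
A small usage sketch; the delays and printed messages are arbitrary:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ScheduledDemo {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);

        // run once after a 2-second delay
        scheduler.schedule(() -> System.out.println("delayed task"), 2, TimeUnit.SECONDS);

        // run every 5 seconds, starting after an initial 1-second delay
        scheduler.scheduleAtFixedRate(
                () -> System.out.println("periodic task"), 1, 5, TimeUnit.SECONDS);

        // scheduler.shutdown() would be called once the periodic work is no longer needed
    }
}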

Does a project use multiple thread pools or one thread pool?

If there are multiple scenarios in the project that need to use the thread pool, then the best way is: use an independent thread pool for each business scenario. Don't let all scenes share a thread pool.

1) Independent thread pools do not affect each other's tasks, which better guarantees the independence and integrity of each task and fits the low-coupling design principle.

2) If all scenarios share a thread pool, problems may arise. For example, there are three task scenarios of task A, task B, and task C sharing a thread pool. When the amount of requests for task A increases sharply, it will cause task B and task C to have no available threads, and resources may not be obtained for a long time. For example, task A has 3000 thread requests at the same time. At this time, task B and task C may not be allocated resources or be allocated few thread resources.

Note:

1. The classes shipped with the JDK make heavy use of thread pools;
2. Many open-source frameworks make heavy use of thread pools;
3. Your own applications will also create multiple thread pools;
4. How many thread pools to create, and how many threads each pool should provide, must be determined by careful testing.

Processes and threads

A process refers to an application program running in memory, and each process has its own independent memory space.

A thread is an execution unit smaller than a process. It is an independent control flow in a process. A process can start multiple threads, and each thread executes different tasks in parallel.

thread life cycle

Initial (NEW) : The thread is constructed and start() has not been called yet.

Running (RUNNABLE) : Including the ready and running states of the operating system.

Blocking (BLOCKED) : Generally passive, no resources can be obtained during the preemption of resources, passively suspended in memory, waiting for the release of resources to wake it up. A blocked thread releases the CPU, not the memory.

Waiting (WAITING) : The thread entering this state needs to wait for other threads to take some specific actions (notification or interruption).

Timeout waiting (TIMED_WAITING) : This state is different from WAITING, it can return by itself after the specified time.

Terminated (TERMINATED) : Indicates that the thread has been executed.

Image source: The Art of Concurrent Programming in Java
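
These states can be observed with Thread.getState(); a minimal sketch (the 100 ms sleep is only there to give the child thread time to reach wait()):

public class ThreadStateDemo {
    public static void main(String[] args) throws InterruptedException {
        Object lock = new Object();
        Thread t = new Thread(() -> {
            synchronized (lock) {
                try {
                    lock.wait(); // gives up the lock and waits for notify
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });

        System.out.println(t.getState()); // NEW: start() not yet called
        t.start();
        Thread.sleep(100);
        System.out.println(t.getState()); // WAITING: parked in lock.wait()
        synchronized (lock) {
            lock.notify();
        }
        t.join();
        System.out.println(t.getState()); // TERMINATED
    }
}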

Talk about thread interruption?

Thread interruption means one thread is interrupted by another while it is running. The biggest difference from stop is that stop lets the system forcibly terminate the thread, while interruption only sends an interrupt signal to the target thread. If the target thread does not respond to the signal by ending itself, the thread does not terminate; whether it exits or runs other logic is up to the target thread.

There are three important methods of thread interruption:

1、java.lang.Thread#interrupt

Calling the interrupt() method of the target thread sends it an interrupt signal and sets its interrupt flag.

2、java.lang.Thread#isInterrupted()

Determines whether the target thread has been interrupted; it does not clear the interrupt flag.

3、java.lang.Thread#interrupted

Determines whether the current thread has been interrupted and clears the interrupt flag.

private static void test2() {
    Thread thread = new Thread(() -> {
        while (true) {
            Thread.yield();

            // respond to the interrupt
            if (Thread.currentThread().isInterrupted()) {
                System.out.println("Thread was interrupted, exiting.");
                return;
            }
        }
    });
    thread.start();
    thread.interrupt();
}

What are the ways to create threads?

  • Create threads by extending the Thread class
  • Create threads by implementing the Runnable interface
  • Implement the Callable interface and create threads through FutureTask
  • Use the Executor framework to create thread pools

The code for creating a thread by extending Thread is shown below. The run() method is called back by the JVM after it creates an operating-system-level thread; it should not be called manually, because calling it manually is just an ordinary method call.

/**
 * @author: 程序员大彬
 * @time: 2021-09-11 10:15
 */
public class MyThread extends Thread {

    public MyThread() {
    }

    @Override
    public void run() {
        for (int i = 0; i < 10; i++) {
            System.out.println(Thread.currentThread() + ":" + i);
        }
    }

    public static void main(String[] args) {
        MyThread mThread1 = new MyThread();
        MyThread mThread2 = new MyThread();
        MyThread myThread3 = new MyThread();
        mThread1.start();
        mThread2.start();
        myThread3.start();
    }
}

Runnable creates thread code :

/**
 * @author: 程序员大彬
 * @time: 2021-09-11 10:04
 */
public class RunnableTest {

    public static void main(String[] args) {
        Runnable1 r = new Runnable1();
        Thread thread = new Thread(r);
        thread.start();
        System.out.println("Main thread: [" + Thread.currentThread().getName() + "]");
    }
}

class Runnable1 implements Runnable {

    @Override
    public void run() {
        System.out.println("Current thread: " + Thread.currentThread().getName());
    }
}

Advantages of implementing the Runnable interface over extending the Thread class:

  1. It avoids the limitation of single inheritance in Java.
  2. A thread pool only accepts tasks that implement Runnable or Callable; classes that extend Thread cannot be submitted directly.

Callable creates thread code :

/**
 * @author: 程序员大彬
 * @time: 2021-09-11 10:21
 */
public class CallableTest {

    public static void main(String[] args) {
        Callable1 c = new Callable1();

        // the result of the asynchronous computation
        FutureTask<Integer> result = new FutureTask<>(c);

        new Thread(result).start();

        try {
            // wait for the task to finish and return the result
            int sum = result.get();
            System.out.println(sum);
        } catch (InterruptedException | ExecutionException e) {
            e.printStackTrace();
        }
    }
}

class Callable1 implements Callable<Integer> {

    @Override
    public Integer call() throws Exception {
        int sum = 0;
        for (int i = 0; i <= 100; i++) {
            sum += i;
        }
        return sum;
    }
}

Use Executor to create thread code :

/**
 * @author: 程序员大彬
 * @time: 2021-09-11 10:44
 */
public class ExecutorsTest {

    public static void main(String[] args) {
        // obtain an ExecutorService instance (avoid in production; create the pool manually instead)
        ExecutorService executorService = Executors.newCachedThreadPool();
        // submit a task
        executorService.submit(new RunnableDemo());
    }
}

class RunnableDemo implements Runnable {

    @Override
    public void run() {
        System.out.println("大彬");
    }
}

What is thread deadlock?

Thread deadlock refers to a phenomenon in which two or more threads wait for each other due to competition for resources during execution. If there is no external force, they will not be able to advance.

As shown in the figure below, thread A holds resource 2, and thread B holds resource 1. They both want to apply for the resource held by the other party at the same time, so the two threads will wait for each other and enter a deadlock state.

(Figure: deadlock — thread A holds resource 2 while thread B holds resource 1, and each waits for the other's resource)

The following example, adapted from The Beauty of Concurrent Programming, illustrates a thread deadlock.

public class DeadLockDemo {

    private static Object resource1 = new Object(); // resource 1
    private static Object resource2 = new Object(); // resource 2

    public static void main(String[] args) {

        new Thread(() -> {
            synchronized (resource1) {
                System.out.println(Thread.currentThread() + "get resource1");
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                System.out.println(Thread.currentThread() + "waiting get resource2");
                synchronized (resource2) {
                    System.out.println(Thread.currentThread() + "get resource2");
                }
            }
        }, "线程 1").start();

        new Thread(() -> {
            synchronized (resource2) {
                System.out.println(Thread.currentThread() + "get resource2");
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                System.out.println(Thread.currentThread() + "waiting get resource1");
                synchronized (resource1) {
                    System.out.println(Thread.currentThread() + "get resource1");
                }
            }
        }, "线程 2").start();
    }
}

The code output is as follows:

Thread[线程 1,5,main]get resource1
Thread[线程 2,5,main]get resource2
Thread[线程 1,5,main]waiting get resource2
Thread[线程 2,5,main]waiting get resource1

Thread 1 acquires the monitor lock of resource1 via synchronized (resource1) and then calls Thread.sleep(1000). Sleeping for one second lets thread 2 run and acquire the monitor lock of resource2. When both threads finish sleeping, each requests the resource held by the other, so the two threads fall into mutual waiting, which is a deadlock.

How does thread deadlock occur? How to avoid it?

Four necessary conditions for deadlock to occur :

  • Mutual exclusion: a resource can only be used by one process at a time

  • Request and Hold: When a process is blocked due to requesting resources, the obtained resources are not released

  • No preemption: resources a process has obtained cannot be forcibly taken away before it finishes using them

  • Circular waiting: cyclically waiting for resources between processes

Ways to avoid deadlock :

  • Mutual exclusion conditions cannot be destroyed, because locking is to ensure mutual exclusion
  • Apply for all resources at one time, avoiding threads occupying resources and waiting for other resources
  • When a thread that occupies some resources further applies for other resources, if the application cannot be obtained, it will actively release the resources it occupies
  • Apply for resources in sequence

The difference between thread run and start?

  • When a program calls the start() method, a new thread is created to execute the code in run(). run() is just an ordinary method: calling run() directly creates no new thread.
  • start() can only be called once on a thread; calling it again throws a java.lang.IllegalThreadStateException. run() has no such restriction.
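
A minimal sketch of both points:

public class RunVsStart {
    public static void main(String[] args) {
        Thread t = new Thread(() ->
                System.out.println("executed by: " + Thread.currentThread().getName()));

        t.run();   // plain method call, prints "executed by: main"
        t.start(); // new thread, prints "executed by: Thread-0"

        try {
            t.start(); // second start() on the same thread object
        } catch (IllegalThreadStateException e) {
            System.out.println("start() may only be called once");
        }
    }
}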

What methods do threads have?

start

Used to start threads.

getPriority

Gets the thread priority. The default is 5. If the priority is not specified manually it is inherited: for example, if thread A starts thread B, thread B has the same priority as thread A.

setPriority

Set thread priority. The CPU will try to give execution resources to threads with higher priority.

interrupt

Tells the thread that it should be interrupted. Whether it actually stops or keeps running is decided by the notified thread itself.

When interrupt() is called on a thread, there are two cases:

  1. If the thread is in a blocked state (such as sleep, wait, join, etc.), the thread will immediately exit the blocked state and throw an InterruptedException.

  2. If the thread is normally active, the thread's interrupt flag is set to true. However, the thread whose interrupt flag is set can continue to run normally without being affected.

interrupt() can't really interrupt the thread, it needs the cooperation of the called thread itself.

join

Waits for another thread to terminate. If the current thread calls another thread's join() method, the current thread becomes blocked until the other thread finishes running, after which the current thread moves from blocked back to ready.

yield

Suspend the currently executing thread object and give the execution opportunity to the same or higher priority thread.

sleep

Causes the thread to go to the blocked state. The millis parameter sets the time to sleep, in milliseconds. When the sleep is over, the thread automatically turns to the Runnable state.

The underlying principle of volatile

volatile is a lightweight synchronization mechanism. It guarantees the visibility of a variable to all threads but does not guarantee atomicity.

  1. When a volatile variable is written, the JVM sends a LOCK-prefixed instruction to the processor, which writes the cache line containing the variable back to system memory.
  2. Because of the cache coherence protocol, each processor checks whether its cache is stale by sniffing the data propagated on the bus. When a processor finds that the memory address of one of its cache lines has been modified, it marks that cache line invalid; when it needs that data again, it re-reads it from system memory into the processor cache.

Let's see what a cache coherence protocol is.

Cache coherence protocol : when a CPU writes data and finds that the variable is shared (that is, other CPUs hold a copy of it), it signals the other CPUs to invalidate their cache line for that variable, so when those CPUs need to read the variable they re-read it from memory.

The volatile keyword has two effects:

  1. It ensures the visibility of different threads operating on shared variables , that is, a thread modifies the value of a variable, and the new value is immediately visible to other threads.
  2. Instruction reordering is prohibited .

Instruction reordering is how the JVM optimizes instructions to improve execution efficiency and increase parallelism as much as possible without affecting the result of a single-threaded program. The Java compiler inserts memory barrier instructions at appropriate points in the generated instruction sequence to prevent the processor from reordering across them. A memory barrier tells the CPU and the compiler that whatever comes before it must be executed first and whatever comes after it must be executed later. For a write to a volatile field, the Java memory model inserts a write barrier after the write, which flushes the previously written value to main memory.
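
A classic visibility sketch: a volatile flag written by one thread and read in another (the 1-second sleep is arbitrary):

public class VolatileFlagDemo {
    // volatile guarantees that the writer's change becomes visible to the reader thread
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // busy work; without volatile this loop might never see running == false
            }
            System.out.println("worker observed the flag change and exits");
        });
        worker.start();

        Thread.sleep(1000);
        running = false; // the write reaches main memory and the worker's cached copy is invalidated
        worker.join();
    }
}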

What are the uses of synchronized?

  1. On an instance method : it applies to the current object instance; the lock of the current instance is acquired before entering the synchronized code.
  2. On a static method : it applies to the current class; the lock of the Class object is acquired before entering the synchronized code. Both a synchronized static method and a synchronized (X.class) block lock the Class object.
  3. On a code block : it locks the specified object; the lock of that object is acquired before entering the synchronized code.
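
The three forms side by side, as a minimal sketch:

public class SyncForms {
    private final Object lock = new Object();

    // 1. instance method: locks the current instance (this)
    public synchronized void instanceMethod() { /* ... */ }

    // 2. static method: locks the Class object (SyncForms.class)
    public static synchronized void staticMethod() { /* ... */ }

    // 3. code block: locks the given object
    public void blockForm() {
        synchronized (lock) {
            /* ... */
        }
    }
}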

What are the functions of synchronized?

Atomicity : access synchronization code to ensure mutual exclusion of threads;

Visibility : ensure that the modification of shared variables can be seen in time;

Orderliness : Effectively solve the reordering problem.

What is the underlying implementation principle of synchronized?

A synchronized code block is implemented with the monitorenter and monitorexit instructions: monitorenter marks the start of the synchronized block and monitorexit marks its end. When executing monitorenter, the thread tries to acquire the monitor, that is, ownership of the monitor object (a monitor exists in the object header of every Java object, which is how synchronized acquires its lock and why any Java object can be used as a lock).

The monitor contains a counter. When the counter is 0 the lock can be acquired; after acquisition the counter is set to 1. Correspondingly, when monitorexit is executed the counter is set back to 0, indicating the lock is released. If acquiring the object lock fails, the current thread blocks and waits until another thread releases the lock.

A method modified by synchronized has no monitorenter and monitorexit instructions; instead it carries the ACC_SYNCHRONIZED flag, which marks it as a synchronized method. The JVM uses this access flag to recognize that a method is declared synchronized and then performs the corresponding synchronized call.

What is the difference between volatile and synchronized?

  1. volatile can only be used on variables; synchronized can be used on classes, variables, methods and code blocks.
  2. volatile only guarantees visibility; synchronized guarantees both atomicity and visibility.
  3. volatile forbids instruction reordering; synchronized does not.
  4. volatile does not cause blocking; synchronized can.

The difference between ReentrantLock and synchronized

  1. Use the synchronized keyword to achieve synchronization. After the thread executes the synchronization code block, the lock will be released automatically , while ReentrantLock needs to release the lock manually.
  2. Synchronized is an unfair lock , and ReentrantLock can be set as a fair lock.
  3. The thread waiting to acquire the lock on ReentrantLock is interruptible , and the thread can give up waiting for the lock. And synchonized will wait indefinitely.
  4. ReentrantLock can set a timeout to acquire a lock . Acquires the lock before the specified deadline, and returns if the lock has not been acquired by the deadline.
  5. The tryLock() method of ReentrantLock can try to acquire the lock non-blockingly , and return immediately after calling this method. If it can be obtained, it returns true, otherwise it returns false.
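
A small sketch of points 2, 4 and 5 above (the fair flag and the 500 ms timeout are arbitrary choices):

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    // fair = true is optional; shown only to illustrate point 2
    private static final ReentrantLock LOCK = new ReentrantLock(true);

    public static void main(String[] args) throws InterruptedException {
        // non-blocking attempt (point 5): returns immediately
        if (LOCK.tryLock()) {
            try {
                System.out.println("got the lock without waiting");
            } finally {
                LOCK.unlock(); // unlike synchronized, release must be explicit
            }
        }

        // timed attempt (point 4): give up after 500 ms
        if (LOCK.tryLock(500, TimeUnit.MILLISECONDS)) {
            try {
                System.out.println("got the lock within the deadline");
            } finally {
                LOCK.unlock();
            }
        } else {
            System.out.println("deadline passed, doing something else");
        }
    }
}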

The similarities and differences between wait() and sleep()?

Same point :

  1. Both suspend the current thread and give other threads a chance to run.
  2. Both throw an InterruptedException if the thread is interrupted while waiting or sleeping.

Differences :

  1. wait() is a method of the Object class; sleep() is a method of the Thread class.
  2. They handle locks differently: wait() releases the lock, while sleep() does not.
  3. They are woken up differently: wait() relies on notify or notifyAll, interruption, or reaching the specified time; sleep() simply wakes up when the specified time elapses.
  4. wait() must be called while holding the object's lock; Thread.sleep() has no such requirement.

What is the difference between Runnable and Callable?

  • The method of the Callable interface is call(); the method of Runnable is run().
  • The call() method of Callable has a return value and supports generics; the run() method of Runnable has no return value.
  • The call() method of Callable can throw checked exceptions; the run() method of Runnable cannot.

How to control the order of thread execution?

Assuming there are three threads T1, T2, and T3, how do you ensure that T2 is executed after T1 is executed, and T3 is executed after T2 is executed?

You can use the join method to solve this problem. For example, calling thread B's join() method inside thread A means: A waits until thread B finishes running (giving up the CPU), and then continues to execute.

The code is shown below:

public class ThreadTest {

    public static void main(String[] args) {

        Thread spring = new Thread(new SeasonThreadTask("春天"));
        Thread summer = new Thread(new SeasonThreadTask("夏天"));
        Thread autumn = new Thread(new SeasonThreadTask("秋天"));

        try
        {
            // start the spring thread first
            spring.start();
            // the main thread waits for spring to finish before continuing
            spring.join();
            // then start the summer thread
            summer.start();
            // the main thread waits for summer to finish before continuing
            summer.join();
            // finally start the autumn thread
            autumn.start();
            // the main thread waits for autumn to finish before continuing
            autumn.join();
        } catch (InterruptedException e)
        {
            e.printStackTrace();
        }
    }
}

class SeasonThreadTask implements Runnable{

    private String name;

    public SeasonThreadTask(String name){
        this.name = name;
    }

    @Override
    public void run() {
        for (int i = 1; i <4; i++) {
            System.out.println(this.name + "来了: " + i + "次");
            try {
                Thread.sleep(100);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}

Output:

春天来了: 1次
春天来了: 2次
春天来了: 3次
夏天来了: 1次
夏天来了: 2次
夏天来了: 3次
秋天来了: 1次
秋天来了: 2次
秋天来了: 3次

What is a daemon thread?

A daemon thread is a special thread that runs in the background . It is independent of the controlling terminal and periodically performs some task or waits for some event to occur. Garbage collection threads are typical daemon threads in Java.

Inter-thread communication method

1. Use wait()/notify() of the Object class . The Object class provides the inter-thread communication methods wait(), notify() and notifyAll(), which are the basis of multi-thread communication; wait/notify must be used together with synchronized. Calling wait() means a thread that already holds the synchronization lock temporarily gives it up, so that other threads waiting for this lock can acquire it and run; the waiting thread can only be woken up by another thread. notify() does not release the lock; it merely tells a thread that called wait() that it may compete for the lock again, but that thread does not acquire the lock immediately, because the notifier still holds the lock until it leaves its synchronized block. One or more threads that called wait() leave the waiting state and compete for the lock again; whichever re-acquires the lock can continue running. (A minimal wait/notify sketch follows this list.)

2. Use the volatile keyword. Inter-thread communication is realized based on the volatile keyword, and the underlying layer uses shared memory. To put it simply, multiple threads monitor a variable at the same time. When the variable changes, the thread can perceive and execute the corresponding business.

3. Use the JUC tool class CountDownLatch . Since JDK 1.5 the java.util.concurrent package has provided many tools for concurrent programming that simplify concurrent development. CountDownLatch is based on the AQS framework and effectively maintains a shared state variable between threads.

4. Block and wake threads with LockSupport . LockSupport is a very flexible tool for blocking and waking threads: it does not matter whether the waiting thread or the waking thread runs first, but you must know which thread needs to be woken.
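
A minimal wait/notify sketch for point 1 (the boolean flag guards against spurious wakeups):

public class WaitNotifyDemo {
    private static final Object lock = new Object();
    private static boolean ready = false;

    public static void main(String[] args) {
        Thread consumer = new Thread(() -> {
            synchronized (lock) {
                while (!ready) {              // guard against spurious wakeups
                    try {
                        lock.wait();          // releases the lock while waiting
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
                System.out.println("consumer woke up");
            }
        });
        consumer.start();

        synchronized (lock) {
            ready = true;
            lock.notify();                    // the lock is only released when this block exits
        }
    }
}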

ThreadLocal

Thread-local variables. When a variable is maintained with ThreadLocal, each thread that uses the variable gets its own independent copy, so every thread can change its own copy without affecting other threads.

Principle of ThreadLocal

Each thread has a ThreadLocalMap (an inner class of ThreadLocal). The key of an entry in the map is the ThreadLocal object, and the value is that thread's copy of the variable.

Calling threadLocal.set() -> calls getMap(Thread) -> returns the current thread's ThreadLocalMap<ThreadLocal, value> -> map.set(this, value), where this is the threadLocal itself. The source code is as follows:

public void set(T value) {
    Thread t = Thread.currentThread();
    ThreadLocalMap map = getMap(t);
    if (map != null)
        map.set(this, value);
    else
        createMap(t, value);
}

ThreadLocalMap getMap(Thread t) {
    return t.threadLocals;
}

void createMap(Thread t, T firstValue) {
    t.threadLocals = new ThreadLocalMap(this, firstValue);
}

Calling get() -> calls getMap(Thread) -> returns the current thread's ThreadLocalMap<ThreadLocal, value> -> map.getEntry(this), which returns the value. The source code is as follows:

    public T get() {
        Thread t = Thread.currentThread();
        ThreadLocalMap map = getMap(t);
        if (map != null) {
            ThreadLocalMap.Entry e = map.getEntry(this);
            if (e != null) {
                @SuppressWarnings("unchecked")
                T result = (T)e.value;
                return result;
            }
        }
        return setInitialValue();
    }

The type of threadLocals is ThreadLocalMap, and its key is a ThreadLocal object, because each thread can hold several ThreadLocal variables, such as longLocal and stringLocal below.

public class ThreadLocalDemo {
    ThreadLocal<Long> longLocal = new ThreadLocal<>();

    public void set() {
        longLocal.set(Thread.currentThread().getId());
    }
    public Long get() {
        return longLocal.get();
    }

    public static void main(String[] args) throws InterruptedException {
        ThreadLocalDemo threadLocalDemo = new ThreadLocalDemo();
        threadLocalDemo.set();
        System.out.println(threadLocalDemo.get());

        Thread thread = new Thread(() -> {
            threadLocalDemo.set();
            System.out.println(threadLocalDemo.get());
        }
        );

        thread.start();
        thread.join();

        System.out.println(threadLocalDemo.get());
    }
}

ThreadLocal is not meant to solve the problem of multiple threads accessing a shared resource, because each thread's copy is independent and not shared. Instead, ThreadLocal is well suited to serving as a thread-context variable that simplifies parameter passing within a thread.

Cause of ThreadLocal memory leak?

Each thread has an internal ThreadLocalMap. The key of the map is a ThreadLocal defined as a weak reference, while the value is a strong reference. During garbage collection the key is reclaimed automatically, but reclaiming the value depends on the lifetime of the Thread object. Threads are usually reused through a thread pool to save resources, which makes thread objects long-lived, so there is always a strong reference chain Thread --> ThreadLocalMap --> Entry --> Value. As tasks keep executing, more and more values may accumulate without being released, eventually causing a memory leak.

Solution: every time a ThreadLocal is used, call its remove() method afterwards to delete the corresponding key-value pair manually and avoid the memory leak.
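
A small sketch of that habit; the CONTEXT variable and handleRequest() are hypothetical names used only for illustration:

public class ThreadLocalCleanupDemo {
    // hypothetical per-request context held in a ThreadLocal
    private static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

    public static void handleRequest(String user) {
        CONTEXT.set(user);
        try {
            process();
        } finally {
            CONTEXT.remove(); // always clean up, especially when threads come from a pool
        }
    }

    private static void process() {
        System.out.println("processing for " + CONTEXT.get());
    }

    public static void main(String[] args) {
        handleRequest("dabin");
    }
}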

What are the usage scenarios of ThreadLocal?

Scenario 1

ThreadLocal is used to save objects exclusive to each thread, and create a copy for each thread, so that each thread can modify its own copy without affecting the copies of other threads, ensuring thread safety.

This scenario is usually used to save thread-unsafe tool classes, and the typical class used is SimpleDateFormat.

Suppose 500 tasks all need a SimpleDateFormat and a thread pool is used to reuse threads (otherwise too much memory and other resources would be consumed). If we created a simpleDateFormat object for each task, 500 tasks would mean 500 simpleDateFormat objects: creating so many objects has a cost, and keeping them all in memory at the same time is wasteful. The simpleDateFormat could be extracted into a static variable, but that is not thread-safe. What we want is to avoid wasting memory while still being thread-safe. ThreadLocal achieves exactly that: each thread gets its own simpleDateFormat object.
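
A minimal sketch of this pattern using ThreadLocal.withInitial (the date pattern is arbitrary):

import java.text.SimpleDateFormat;
import java.util.Date;

public class DateFormatHolder {
    // one SimpleDateFormat per thread: thread-safe without creating one per task
    private static final ThreadLocal<SimpleDateFormat> FORMATTER =
            ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd HH:mm:ss"));

    public static String format(Date date) {
        return FORMATTER.get().format(date);
    }

    public static void main(String[] args) {
        System.out.println(format(new Date()));
    }
}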

Scenario 2

ThreadLocal is used in scenarios where information needs to be saved independently within each thread so that other methods can obtain the information more conveniently. The information obtained by each thread may be different. After the previously executed method saves the information, the subsequent method can directly obtain it through ThreadLocal, avoiding parameter passing, similar to the concept of global variables.

For example, in a Java web application each thread has its own independent Session instance, which can be implemented with ThreadLocal.

Principle of AQS

AQS, AbstractQueuedSynchronizer, the abstract queue synchronizer, defines a set of synchronizer frameworks for multi-threaded access to shared resources. The implementation of many concurrent tools depends on it, such as commonly used ReentrantLock/Semaphore/CountDownLatch.

AQS uses a volatile int member variable named state to represent the synchronization state and modifies it with CAS. When a thread calls a lock method, if state = 0 no thread holds the lock of the shared resource, so the thread can take the lock and state is incremented by 1. If state is not 0, some thread is currently using the shared resource, and other threads must join the synchronization queue and wait.

private volatile int state; // shared variable; volatile guarantees visibility across threads

The synchronizer relies on an internal synchronization queue (a FIFO doubly linked queue) to manage the synchronization state. When the current thread fails to acquire the synchronization state, the synchronizer wraps the thread and its waiting mode (exclusive or shared) into a node (Node), adds it to the synchronization queue, and the thread spins or parks there. When the synchronization state is released, the thread of the head node's successor is woken up so that it can try to acquire the synchronization state again.
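
To make the template concrete, here is a minimal, non-reentrant exclusive lock built on AQS, close to the example in the AbstractQueuedSynchronizer Javadoc (the class name Mutex is ours):

import java.util.concurrent.locks.AbstractQueuedSynchronizer;

public class Mutex {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int acquires) {
            // state 0 -> 1 via CAS means the lock was free and we now own it
            if (compareAndSetState(0, 1)) {
                setExclusiveOwnerThread(Thread.currentThread());
                return true;
            }
            return false;
        }

        @Override
        protected boolean tryRelease(int releases) {
            if (getState() == 0) {
                throw new IllegalMonitorStateException();
            }
            setExclusiveOwnerThread(null);
            setState(0); // only the owner releases, so a plain write is enough
            return true;
        }
    }

    private final Sync sync = new Sync();

    public void lock()   { sync.acquire(1); }  // on failure, enqueue the thread and park it
    public void unlock() { sync.release(1); }  // wake the successor of the head node
}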

How does ReentrantLock achieve reentrancy?

ReentrantLock internally uses a customized synchronizer Sync. When locking, the CAS algorithm is used and waiting threads are placed in a doubly linked queue; every time the lock is acquired, it checks whether the thread currently holding the lock is the same as the requesting thread. If it is, the synchronization state is incremented by 1, indicating that the current thread has acquired the lock multiple times.

The source code is as follows:

final boolean nonfairTryAcquire(int acquires) {
    final Thread current = Thread.currentThread();
    int c = getState();
    if (c == 0) {
        if (compareAndSetState(0, acquires)) {
            setExclusiveOwnerThread(current);
            return true;
        }
    }
    else if (current == getExclusiveOwnerThread()) {
        int nextc = c + acquires;
        if (nextc < 0) // overflow
            throw new Error("Maximum lock count exceeded");
        setState(nextc);
        return true;
    }
    return false;
}

Classification of locks

Fair locks and unfair locks

A fair lock hands out the object lock in the order threads requested it. synchronized is an unfair lock. Lock is unfair by default but can be configured as a fair lock; a fair lock costs some performance.

public ReentrantLock() {
    sync = new NonfairSync();
}

public ReentrantLock(boolean fair) {
    sync = fair ? new FairSync() : new NonfairSync();
}

Shared and exclusive locks

The main difference between the shared type and the exclusive type is that only one thread can obtain the synchronization state at the same time in the exclusive type, while multiple threads can obtain the synchronization state at the same time in the shared type. For example, read operations can be performed by multiple threads at the same time, while write operations can only be performed by one thread at a time, and other operations will be blocked.

Pessimistic lock and optimistic lock

A pessimistic lock locks the resource every time it is accessed and releases the lock only after the synchronized code finishes; synchronized and ReentrantLock are pessimistic locks.

Optimistic locking does not lock resources. All threads can access and modify the same resource. If there is no conflict, the modification succeeds and exits, otherwise it will continue to try in a loop. The most common implementation of optimistic locking is CAS.

Applicable scene:

  • Pessimistic locking is suitable for scenarios with many write operations .
  • Optimistic locking is suitable for scenarios with many read operations , and no lock can improve the performance of read operations.

What's wrong with optimistic locking?

Optimistic locking avoids the problem of pessimistic locking exclusive objects and improves concurrency performance, but it also has disadvantages:

  • Optimistic locking can only guarantee atomic operations on a single shared variable .
  • Spinning for a long time is expensive . If CAS keeps failing and spinning, it puts a heavy load on the CPU.
  • ABA problems . CAS decides whether the memory value has been changed by comparing it with the expected value, but if the value was originally A, was changed to B by another thread, and then changed back to A, CAS concludes the value never changed. Introducing a version number that is incremented on every update solves this problem.

What is CAS?

CAS stands for Compare And Swap and is the main way optimistic locking is implemented. CAS synchronizes variables between threads without using locks; both the AQS inside ReentrantLock and the atomic classes use CAS internally.

The CAS algorithm involves three operands:

  • The memory value V that needs to be read and written.
  • The value A to compare against.
  • The new value B to write.

Only when the value of V is equal to A, the value of V will be updated atomically with the new value B, otherwise it will continue to retry until the value is successfully updated.

Take AtomicInteger as an example: the bottom layer of its getAndIncrement() method is implemented with CAS. The key call is compareAndSwapInt(obj, offset, expect, update), which means: if the value inside obj equals expect, no other thread has changed the variable, so update it to update; if they are not equal, keep retrying until the value is updated successfully.

Problems with CAS?

CAS three big questions:

  1. ABA problems . CAS checks whether the memory value has changed before updating it. But if the value was originally A, was changed to B, and then changed back to A, CAS finds the value "unchanged" even though it actually changed. The solution is to attach a version number to the variable and increment it on every update, so the change sequence becomes 1A-2B-3A instead of A-B-A.

    Since 1.5 the JDK has provided the AtomicStampedReference class to solve the ABA problem: it atomically updates a reference type together with a version stamp (see the sketch after this list).

  2. Long cycle times are expensive . If the CAS operation is unsuccessful for a long time, it will cause it to spin all the time, which will bring a very large overhead to the CPU.

  3. Atomic operations can only be guaranteed for one shared variable . When performing an operation on a shared variable, CAS can guarantee the atomic operation, but when operating on multiple shared variables, CAS cannot guarantee the atomicity of the operation.

    Since Java 1.5, JDK has provided the AtomicReference class to ensure the atomicity between referenced objects, and multiple variables can be placed in one object for CAS operations.
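
A minimal sketch of the stamped-reference idea from point 1 (the values and stamps are arbitrary):

import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static void main(String[] args) {
        // value 100 with initial stamp (version) 0
        AtomicStampedReference<Integer> ref = new AtomicStampedReference<>(100, 0);

        int stamp = ref.getStamp();

        // another thread changes 100 -> 101 -> 100, bumping the stamp each time
        ref.compareAndSet(100, 101, stamp, stamp + 1);
        ref.compareAndSet(101, 100, stamp + 1, stamp + 2);

        // a plain value comparison would succeed, but the stale stamp makes this CAS fail
        boolean swapped = ref.compareAndSet(100, 102, stamp, stamp + 1);
        System.out.println(swapped); // false: the A-B-A change was detected
    }
}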

Atomic classes

Basic type atomic classes

Updating primitive types atomically

  • AtomicInteger: integer atomic class
  • AtomicLong: long integer atomic class
  • AtomicBoolean: Boolean atomic class

Commonly used methods of the AtomicInteger class:

public final int get() // get the current value
public final int getAndSet(int newValue) // get the current value and set a new value
public final int getAndIncrement() // get the current value and increment it
public final int getAndDecrement() // get the current value and decrement it
public final int getAndAdd(int delta) // get the current value and add the given delta
boolean compareAndSet(int expect, int update) // if the current value equals the expected value, atomically set it to the given value (update)
public final void lazySet(int newValue) // eventually set to newValue; after lazySet, other threads may still read the old value for a short while

The AtomicInteger class mainly uses CAS (compare and swap) to ensure atomic operations, thereby avoiding the high overhead of locking.

Array type atomic classes

Update an element in an array atomically

  • AtomicIntegerArray: integer array atomic class
  • AtomicLongArray: long integer array atomic class
  • AtomicReferenceArray: reference type array atomic class

Common methods of AtomicIntegerArray class:

public final int get(int i) // get the value of the element at index i
public final int getAndSet(int i, int newValue) // return the current value at index i and set it to newValue
public final int getAndIncrement(int i) // get the value at index i and increment it
public final int getAndDecrement(int i) // get the value at index i and decrement it
public final int getAndAdd(int i, int delta) // get the value at index i and add the given delta
boolean compareAndSet(int i, int expect, int update) // if the current value at index i equals the expected value, atomically set it to the given value (update)
public final void lazySet(int i, int newValue) // eventually set the element at index i to newValue; after lazySet, other threads may still read the old value for a short while

Reference type atomic classes

  • AtomicReference: reference type atomic class
  • AtomicStampedReference: A reference type atomic class with a version number. This class associates an integer value with a reference, which can be used to solve the atomic update data and the version number of the data, and can solve the ABA problem that may occur when using CAS for atomic update.
  • AtomicMarkableReference : A reference type that atomically updates with a mark. This class associates boolean tokens with references

Why use the Executor thread pool framework?

  • Every time a task is executed, a thread is created through new Thread(), which consumes performance. Creating a thread is time-consuming and resource-consuming.
  • The thread created by calling new Thread() lacks management and can be created without limit. The competition between threads will cause excessive occupation of system resources and cause system paralysis
  • Threads started directly using new Thread() are not conducive to expansion, such as timing execution, periodic execution, timing and periodic execution, thread interruption, etc. are not easy to implement

How to stop a running thread?

  1. Use a shared variable. Multiple threads executing the same task can use a shared (typically volatile) variable as the signal to stop execution.
  2. Use the interrupt method to terminate the thread. When a thread is blocked and not running, setting the shared variable to true in the main program has no effect, because the blocked thread cannot check the loop flag and therefore cannot stop. In that case you can use Thread's interrupt() method: it does not forcibly interrupt a running thread, but it makes a blocked thread throw an InterruptedException, so the thread leaves the blocked state early.

What is a Daemon thread?

Background (daemon) thread refers to a thread that provides a general service in the background when the program is running, and this thread is not an integral part of the program. Therefore, when all non-background threads end, the program terminates, killing all background threads in the process. Conversely, as long as any non-background threads are still running, the program will not terminate. The setDaemon() method must be called before the thread starts to set it as a background thread.

Note: The background process will terminate its run() method without executing the finally clause.

For example: the garbage collection thread of the JVM is the Daemon thread, and the Finalizer is also the daemon thread.

What is the difference between SynchronizedMap and ConcurrentHashMap?

SynchronizedMap locks the entire table at a time to ensure thread safety, so only one thread can access the map at a time.

In JDK 1.8, ConcurrentHashMap uses CAS plus synchronized to ensure concurrency safety. The data structure is an array plus linked lists/red-black trees, and synchronized only locks the head node of the current linked list or red-black tree, so concurrent access and modification of different buckets is supported.
In addition, ConcurrentHashMap iterates differently. If the collection changes after an iterator has been created, it no longer throws a ConcurrentModificationException; changes are applied to new data without affecting the data the existing iterator is working on, so reading threads can keep using the original data while writing threads complete their changes concurrently.

How to judge whether the task of the thread pool has been executed?

There are several methods:

1. Use the native function isTerminated() of the thread pool ;

The executor provides a native function isTerminated() to determine whether all tasks in the thread pool are completed. Returns true if all are done, false otherwise.

2. Use reentrant locks to maintain a common count .

All ordinary tasks maintain a counter, and when the task is completed, the counter is incremented by one (locking is required here). When the value of the counter is equal to the number of tasks, all tasks have been executed.

3. Use CountDownLatch .

Its principle is similar to the second method. Give the CountDownLatch a count value; each task calls countDown() when it finishes, decrementing the count by one. The waiting thread calls await(), which blocks until the count reaches zero, and then continues to execute.

The disadvantage of this method is that the number of tasks needs to be known in advance.
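
A minimal sketch of method 3 (five tasks and a pool of three threads are arbitrary numbers):

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class LatchDemo {
    public static void main(String[] args) throws InterruptedException {
        int taskCount = 5;                       // must be known up front
        CountDownLatch latch = new CountDownLatch(taskCount);
        ExecutorService pool = Executors.newFixedThreadPool(3);

        for (int i = 0; i < taskCount; i++) {
            pool.execute(() -> {
                try {
                    // do the work here
                } finally {
                    latch.countDown();           // one task finished
                }
            });
        }

        latch.await();                           // blocks until the count reaches zero
        System.out.println("all tasks finished");
        pool.shutdown();
    }
}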

4. submit submits the task to the thread pool, and uses Future to judge the execution status of the task .

Submitting tasks to the thread pool using submit is different from submitting with execute. submit will have a return value of type Future. Through the future.isDone() method, you can know whether the task is completed.

What are Futures?

In concurrent programming, whether you extend the Thread class or implement the Runnable interface, you cannot obtain the result of the execution. By implementing the Callable interface and using Future, you can receive the execution results of multiple threads.

Future represents the result of an asynchronous task that may not have completed yet, and actions can be taken on this result once the task succeeds or fails.

For example, when you buy breakfast you order baozi and a cold dish. The baozi take 3 minutes and the cold dish only 1 minute. While waiting for the baozi, the cold dish can be prepared at the same time, so the total wait is only 3 minutes. Future works in this overlapped fashion.

The Future interface mainly includes 5 methods:

  1. get() returns the result when the task finishes; if the task is not finished when it is called, the calling thread blocks until the task completes.
  2. get(long timeout, TimeUnit unit) waits at most the given timeout for the result.
  3. cancel(boolean mayInterruptIfRunning) can be used to stop a task. If the task can be stopped (as determined by mayInterruptIfRunning), it returns true; if the task has already completed or been cancelled, or cannot be stopped, it returns false.
  4. isDone() determines whether the task has completed.
  5. isCancelled() determines whether the task was cancelled.
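
A small sketch tying submit(), isDone() and the timed get() together (the 500 ms task and the 2-second timeout are arbitrary):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class FutureDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        // submit() (unlike execute()) returns a Future representing the pending result
        Future<Integer> future = pool.submit(() -> {
            Thread.sleep(500);
            return 42;
        });

        System.out.println("done yet? " + future.isDone());               // likely false
        System.out.println("result: " + future.get(2, TimeUnit.SECONDS)); // waits up to 2 seconds
        System.out.println("done yet? " + future.isDone());               // true

        pool.shutdown();
    }
}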
