Brother Jie teaches you one hundred questions in the interview series: Java multi-threading

Java multi-threading is a frequently asked question in Java interviews. How can you stand out in the interview? Just read the hundred Java multi-threading interview questions here.

1. What is a thread? What is a process?

Answer:

  • A thread is the smallest execution unit that the operating system can schedule. It is included in a process and shares the resources of the process.
  • A process is an executing program that contains code, data, and system resources. A process can contain multiple threads.

2. How to create a thread in Java?

Answer: There are two ways to create a thread: inheriting the Thread class or implementing the Runnable interface.

Code example:

// Create a thread by extending the Thread class
class MyThread extends Thread {
    @Override
    public void run() {
        System.out.println("Thread is running");
    }
}
MyThread thread = new MyThread();
thread.start();

// Create a thread by implementing the Runnable interface
class MyRunnable implements Runnable {
    @Override
    public void run() {
        System.out.println("Runnable is running");
    }
}
Thread runnableThread = new Thread(new MyRunnable());
runnableThread.start();

3. What is the difference between sleep() and wait() methods?

Answer:

  • The sleep() method is a static method of the Thread class; it pauses the current thread for a specified time and does not release any object lock it holds.
  • The wait() method is a method of the Object class; it must be called while holding the object's monitor and makes the current thread wait until another thread calls notify() or notifyAll() to wake it up. While waiting, the thread releases the object lock, as sketched below.
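
A minimal sketch of the difference (class and field names are illustrative): sleep() keeps the monitor while paused, whereas wait() releases it so another thread can enter the synchronized block and call notify().

class PauseDemo {
    private final Object lock = new Object();

    // Holds the monitor for the full second; other threads stay blocked out.
    public void pauseWithSleep() throws InterruptedException {
        synchronized (lock) {
            Thread.sleep(1000); // the lock is NOT released while sleeping
        }
    }

    // Releases the monitor while waiting; another thread can call wakeUp().
    public void pauseWithWait() throws InterruptedException {
        synchronized (lock) {
            lock.wait(); // the lock IS released until notified
        }
    }

    public void wakeUp() {
        synchronized (lock) {
            lock.notify(); // wakes one thread waiting on this lock
        }
    }
}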

4. What is thread safety? How to achieve thread safety?

Answer: Thread safety refers to a state that does not cause data inconsistency or errors when multiple threads access shared resources. Ways to achieve thread safety include:

  • Use the synchronized keyword to protect access to shared resources.
  • Use an explicit lock such as ReentrantLock to synchronize access (see the sketch below).
  • Use thread-safe data structures, such as ConcurrentHashMap.
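
As an illustration of the second approach, a counter protected by an explicit ReentrantLock; this is a sketch, and the class name is illustrative:

import java.util.concurrent.locks.ReentrantLock;

class SafeCounter {
    private final ReentrantLock lock = new ReentrantLock();
    private int count = 0;

    public void increment() {
        lock.lock();          // acquire the explicit lock
        try {
            count++;          // critical section
        } finally {
            lock.unlock();    // always release in finally
        }
    }

    public int get() {
        lock.lock();
        try {
            return count;
        } finally {
            lock.unlock();
        }
    }
}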

5. What is a deadlock? How to avoid deadlock?

Answer: A deadlock is a situation in which multiple threads wait for each other's resources, causing all threads to be unable to continue execution. To avoid deadlock, you can adopt the following strategies:

  • Acquire locks in the same global order to avoid circular-wait conditions (see the sketch below).
  • Use tryLock() with a timeout instead of blocking indefinitely on a lock.
  • Use an ExecutorService thread pool to control the number of threads.
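
A minimal sketch of the first strategy, acquiring two locks in a fixed order so no cycle can form (class and lock names are illustrative):

class TransferService {
    private final Object lockA = new Object();
    private final Object lockB = new Object();

    // Every code path takes lockA before lockB, so a circular wait cannot form.
    public void task1() {
        synchronized (lockA) {
            synchronized (lockB) {
                // work with both resources
            }
        }
    }

    public void task2() {
        synchronized (lockA) {   // same order as task1, never lockB first
            synchronized (lockB) {
                // work with both resources
            }
        }
    }
}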

6. What is a thread pool? How to create a thread pool?

Answer: A thread pool is a group of pre-created threads used to perform multiple tasks to reduce the overhead of thread creation and destruction. Thread pools can be created using the java.util.concurrent.Executors class.

Code example:

ExecutorService executor = Executors.newFixedThreadPool(5);

7. What are Callable and Runnable? What's the difference?

Answer: Runnable and Callable are both interfaces for multi-threaded programming. The main differences are:

  • The run() method of the Runnable interface returns no value and cannot throw a checked exception.
  • The call() method of the Callable interface returns a value and can throw a checked exception.

Code example:

import java.util.concurrent.Callable;

// Runnable example: no return value
class MyRunnable implements Runnable {
    @Override
    public void run() {
        System.out.println("Runnable is running");
    }
}

// Callable example: returns a value and may throw an exception
class MyCallable implements Callable<Integer> {
    @Override
    public Integer call() throws Exception {
        return 42;
    }
}

8. What is the volatile keyword? What does it do?

Answer: The volatile keyword is applied to a variable to guarantee visibility across threads: a modification made by one thread is immediately visible to the other threads. It does not provide atomic operations; it only solves the visibility problem.

Code example:

class SharedResource {
    private volatile boolean flag = false;

    public void toggleFlag() {
        flag = !flag;
    }

    public boolean isFlag() {
        return flag;
    }
}

9. What is the synchronization mechanism in Java?

Answer: Synchronization mechanisms are used to protect shared resources from concurrent access by multiple threads. The main synchronization mechanisms in Java are the synchronized keyword and explicit locks such as ReentrantLock.

Code example:

class Counter {
    private int count = 0;

    public synchronized void increment() {
        count++;
    }
}

10. What is a CAS operation? How does it avoid thread contention?

Answer: CAS (Compare and Swap) is a lock-free concurrency algorithm that determines whether to update by comparing whether the value in the memory is equal to the expected value. It avoids the use of locks, thereby reducing thread contention and context switching overhead.

Code example:

import java.util.concurrent.atomic.AtomicInteger;

public class CASExample {
    private AtomicInteger counter = new AtomicInteger(0);

    public void increment() {
        counter.incrementAndGet();
    }

    public int getCount() {
        return counter.get();
    }
}

11. What is thread context switching? What overhead will it bring?

Answer: Thread context switching is the process of the operating system switching from one thread to another thread in a multi-threaded environment. It incurs some overhead because the current thread's state (registers, stack, etc.) needs to be saved and another thread's state needs to be loaded. Excessive thread context switching can reduce system performance.

12. What is thread priority? How to set thread priority?

Answer: Thread priority is an integer value that hints to the scheduler in what order threads should be scheduled. The thread priority range in Java is 1 (lowest priority) to 10 (highest priority). You can use the setPriority(int priority) method to set a thread's priority.

Code example:

Thread thread = new Thread();
thread.setPriority(Thread.MAX_PRIORITY); // set the highest priority

13. What is a daemon thread? How to create a daemon thread?

Answer: A daemon thread is a thread that runs in the background; when all non-daemon threads have finished, daemon threads terminate automatically. You can use the setDaemon(true) method to mark a thread as a daemon thread.

Code example:

Thread daemonThread = new Thread();
daemonThread.setDaemon(true); // mark as a daemon thread (must be set before start())

14. How to stop the execution of a thread? Why is the stop() method not recommended?

Answer: It is generally not recommended to stop a thread forcibly, because doing so may leak resources or leave shared state inconsistent. The recommended way is to set a flag so the thread exits its loop and finishes on its own (or to use interruption). The stop() method has been deprecated because it can terminate a thread without releasing its locks, among other problems.
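
A sketch of the flag-based approach (the class and field names are illustrative): the worker checks a volatile flag and leaves its loop when asked to stop.

class StoppableWorker implements Runnable {
    private volatile boolean running = true;

    @Override
    public void run() {
        while (running) {
            // do a unit of work
        }
        // clean up resources here before the thread exits
    }

    public void stopGracefully() {
        running = false; // visible to the worker thread because the flag is volatile
    }
}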

15. What is a thread group (ThreadGroup)? Why is it not recommended?

Answer: A thread group is a mechanism for organizing threads, but in modern Java multi-threaded programming its use is not recommended: higher-level mechanisms such as thread pools manage threads better, while thread groups offer relatively limited functionality.

16. What is a read-write lock (ReadWrite Lock)? How does it improve performance?

Answer: Read-write locks allow multiple threads to read shared resources at the same time, but only allow one thread to write. This can improve concurrency performance in scenarios with more reads and fewer writes, because multiple read operations can be executed concurrently, while write operations require exclusive access.

Code example:

import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteLockExample {
    private ReadWriteLock lock = new ReentrantReadWriteLock();
    private int data;

    public int readData() {
        lock.readLock().lock();
        try {
            return data;
        } finally {
            lock.readLock().unlock();
        }
    }

    public void writeData(int value) {
        lock.writeLock().lock();
        try {
            data = value;
        } finally {
            lock.writeLock().unlock();
        }
    }
}

17. What is inter-thread communication? How to implement inter-thread communication?

Answer: Inter-thread communication refers to the process of exchanging information or sharing data between multiple threads. You can use the wait(), notify(), and notifyAll() methods to implement inter-thread communication, or you can use concurrent containers or other synchronization mechanisms.

18. What are the concurrent containers in Java?

Answer: Java provides many concurrent containers for safely operating on data in a multi-threaded environment, such as ConcurrentHashMap, CopyOnWriteArrayList, and the BlockingQueue implementations.
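
A brief sketch using two of these containers (the keys, values, and class name are illustrative):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

public class ConcurrentContainersExample {
    public static void main(String[] args) throws InterruptedException {
        // Thread-safe map: atomic per-key update without external locking
        ConcurrentHashMap<String, Integer> hits = new ConcurrentHashMap<>();
        hits.merge("page", 1, Integer::sum);

        // Thread-safe queue: put() blocks when full, take() blocks when empty
        BlockingQueue<String> queue = new LinkedBlockingQueue<>(100);
        queue.put("task-1");
        System.out.println(hits + " / " + queue.take());
    }
}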

19. What is a thread local variable (ThreadLocal)? what's the effect?

Answer: Thread local variables are a special kind of variables. Each thread has its own independent copy, and different threads do not affect each other. It is suitable for situations where data needs to be isolated across multiple threads.

Code example:

ThreadLocal<Integer> threadLocal = ThreadLocal.withInitial(() -> 0);
threadLocal.set(42); // set the value in the current thread
int value = threadLocal.get(); // get the value in the current thread

20. What are thread synchronization and thread asynchrony?

Answer:

  • Thread synchronization refers to the execution of multiple threads in a certain order to ensure the consistency and correctness of data.
  • Thread asynchrony means that multiple threads can execute independently without being restricted to a specific order.

21. What is a race condition between threads? How to avoid it?

Answer: A race condition between threads means that multiple threads access shared resources concurrently, causing the order or value of the results to not meet expectations. Synchronization mechanisms (such as synchronized, ReentrantLock) can be used to avoid race conditions and ensure that only one thread accesses the resource.

22. What is the thread liveness problem? What are the main types?

Answer: Thread liveness issues refer to situations that prevent a thread from executing normally. The main types include deadlock, livelock, and starvation. Deadlock is when multiple threads are waiting for each other for resources, and livelock is when threads constantly change states to avoid deadlock, but still cannot execute normally. Starvation is when some threads are never able to get the resources they need.

23. What is a thread-safe immutable object? Why are they suitable for multi-threaded environments?

Answer: An immutable object is an object that cannot be modified once created. Because the state of an immutable object does not change, multiple threads can access it simultaneously without the need for additional synchronization mechanisms, thus providing thread safety.
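
A sketch of an immutable value class (field names are illustrative): all fields are final, there are no setters, and the class is final so it cannot be subclassed.

public final class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public int getX() { return x; }
    public int getY() { return y; }

    // "Modification" returns a new object instead of mutating this one
    public Point translate(int dx, int dy) {
        return new Point(x + dx, y + dy);
    }
}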

24. What are atomic operations in Java? Why are they important?

Answer: Atomic operations are operations that cannot be interrupted in a multi-threaded environment: they either execute completely or not at all. Java provides atomic classes (such as AtomicInteger and AtomicLong) whose methods implement thread-safe operations such as atomic increment and decrement.

Code example:

import java.util.concurrent.atomic.AtomicInteger;

public class AtomicExample {
    private AtomicInteger counter = new AtomicInteger(0);

    public void increment() {
        counter.incrementAndGet();
    }

    public int getCount() {
        return counter.get();
    }
}

25. What is thread context data sharing (Thread-Local Storage)?

Answer: Thread-local storage is a mechanism for keeping data inside a thread so that each thread has its own copy of the data. This avoids data conflicts between threads and can improve performance. The ThreadLocal class in Java is used to implement thread-local storage.

26. How to handle exceptions in the thread pool?

Answer: In a thread pool, if a thread throws an exception without catching it, the thread will be terminated, but other threads in the thread pool will continue to run. You can prevent exceptions in the thread pool from affecting other threads by catching exceptions in tasks.

Code example:

ExecutorService executor = Executors.newFixedThreadPool(5);
executor.execute(() -> {
    try {
        // task code
    } catch (Exception e) {
        // handle the exception so it does not terminate the worker thread
    }
});

27. How to debug and analyze threads?

Answer: When debugging and analyzing threads, you can use tools such as VisualVM, jconsole, jstack, etc. These tools can help you view thread status, stack information, memory usage, etc. to locate and solve thread-related issues.

28. What are concurrency and parallelism? What's the difference?

answer:

  • Concurrency refers to the alternate execution of multiple tasks, and each task may only be allocated a short time slice, thereby creating the illusion that multiple tasks are running simultaneously.
  • Parallelism refers to the true simultaneous execution of multiple tasks, usually implemented on multi-core processors.

29. What is thread context data switching? What overhead will it bring?

Answer: Thread context switching refers to the process in which the operating system saves the state of the current thread and then switches to the state of another thread. This will bring certain overhead, including saving and restoring registers, stacks, etc., which may affect system performance.

30. What is the execution order guarantee of threads?

Answer: Execution order guarantees mean that the program ensures a specific ordering of operations in a multi-threaded environment. Mechanisms such as volatile and synchronized (through the happens-before rules) can guarantee such ordering.

31. What is the thread stack and heap of a thread? What's the difference?

answer:

  • The thread stack is a memory area exclusive to each thread and is used to store information such as local variables, method calls, and method parameters.
  • The heap is a memory area shared by all threads and is used to store object instances, arrays, etc.

32. How to achieve cooperation between threads?

Answer: Cooperation between threads can be achieved using the wait(), notify(), and notifyAll() methods, which allow threads to wait for and notify one another.

Code example:

class SharedResource {
    private boolean flag = false;

    public synchronized void waitForFlag() throws InterruptedException {
        while (!flag) {
            wait();
        }
    }

    public synchronized void setFlag() {
        flag = true;
        notifyAll();
    }
}

33. What is the context of a thread?

Answer: The context environment of a thread refers to the status and data of a thread when it is running, including register contents, stack information, thread local variables, etc. Context switching refers to the process of switching the context environment of one thread to another thread.

34. What is thread optimization and tuning?

Answer: Thread optimization and tuning refers to improving the performance and stability of multi-threaded programs through reasonable design, synchronization mechanism, thread pool configuration, etc. Optimizations include reducing thread context switching, reducing lock contention, avoiding deadlocks, etc.

35. Why use thread pool? What are its benefits?

Answer: Using a thread pool can avoid the overhead of frequently creating and destroying threads, and improve system performance and resource utilization. The thread pool can manage the number of threads, reuse threads, and control the execution order of threads. It can also avoid the problem of system resource exhaustion caused by too many threads.

36. What is the lock granularity in Java? How to choose appropriate lock granularity?

Answer: Lock granularity refers to the scope of locking shared resources. Choosing appropriate lock granularity is to minimize lock contention while ensuring thread safety. Generally, the smaller the granularity of the lock, the more efficient it is, but the maintenance cost may increase.
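
A sketch of narrowing lock granularity (names are illustrative): instead of locking the whole method, only the update of shared state is synchronized.

class LogProcessor {
    private final Object lock = new Object();
    private long processedCount = 0;

    // Coarse-grained: the parsing work is done while holding the lock
    public synchronized void processCoarse(String line) {
        String parsed = line.trim().toLowerCase(); // does not touch shared state
        processedCount++;
    }

    // Fine-grained: only the shared counter update is protected
    public void processFine(String line) {
        String parsed = line.trim().toLowerCase(); // done outside the lock
        synchronized (lock) {
            processedCount++;
        }
    }
}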

37. What is the ABA problem? How to avoid it?

Answer: The ABA problem occurs when, in a multi-threaded environment, a value is changed from A to another value and then back to A, so a comparison-based check cannot detect that it was ever modified and may misjudge the situation. The ABA problem can be avoided by attaching a version number to the value or by using AtomicStampedReference.

38. What are optimistic locks and pessimistic locks?

answer:

  • Optimistic locking is a lock that assumes there will be no conflicts in most cases and only checks for conflicts during actual write operations.
  • Pessimistic locking assumes that a conflict may occur at any time, so the lock is acquired before accessing the shared resource.

39. What is reentrancy in Java? Why are reentrant locks reentrant?

Answer: Reentrancy means that when a thread holds a lock, it can continue to acquire the same lock without being blocked. A reentrant lock is reentrant because it records the thread holding the lock and the number of acquisitions. A thread can acquire the lock multiple times while holding the lock.
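
A sketch of reentrancy: a synchronized method calls another synchronized method on the same object, and the thread re-acquires the monitor it already holds instead of deadlocking.

class ReentrantDemo {
    public synchronized void outer() {
        System.out.println("outer: lock acquired");
        inner(); // re-enters the same monitor; would deadlock if intrinsic locks were not reentrant
    }

    public synchronized void inner() {
        System.out.println("inner: same lock acquired again");
    }

    public static void main(String[] args) {
        new ReentrantDemo().outer();
    }
}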

40. How to handle exception propagation between threads?

Answer: In a multi-threaded environment, thread exceptions cannot be directly passed to other threads. Exceptions can be caught in the thread's task, and then the exception information can be passed to other threads for processing through callbacks, shared variables, etc.
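
One common way to surface a worker's exception in another thread is to submit the task as a Callable and inspect the Future: get() rethrows the task's exception wrapped in an ExecutionException. A minimal sketch (the exception message is illustrative):

import java.util.concurrent.*;

public class ExceptionPropagationExample {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Callable<Integer> task = () -> {
            throw new IllegalStateException("task failed");
        };
        Future<Integer> future = executor.submit(task);
        try {
            future.get(); // blocks, then rethrows the task's exception wrapped in ExecutionException
        } catch (ExecutionException e) {
            System.out.println("Caught in caller: " + e.getCause());
        } finally {
            executor.shutdown();
        }
    }
}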

41. What is Active Object Pattern?

Answer: The active object pattern is a concurrent design pattern that is used to decouple method calls and method execution, making method calls asynchronous. It encapsulates method calls into tasks and is executed by a dedicated thread, thereby avoiding blocking of the caller thread.

42. What is latching (CountDownLatch)? How to use it?

Answer: A CountDownLatch is a synchronization helper class used to wait for a set of threads to finish before continuing. It works through an initial count, the countDown() method, and await().

Code example:

import java.util.concurrent.CountDownLatch;

public class CountDownLatchExample {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(3);

        Runnable task = () -> {
            // do the work
            latch.countDown();
        };

        Thread thread1 = new Thread(task);
        Thread thread2 = new Thread(task);
        Thread thread3 = new Thread(task);

        thread1.start();
        thread2.start();
        thread3.start();

        latch.await(); // wait until all three threads have finished
        System.out.println("All threads have finished.");
    }
}

43. What is a semaphore? How to use it?

Answer: A semaphore is a synchronization tool used to control the number of threads accessing a resource at the same time. It does this by maintaining a number of licenses.

Code example:

import java.util.concurrent.Semaphore;

public class SemaphoreExample {
    public static void main(String[] args) throws InterruptedException {
        Semaphore semaphore = new Semaphore(2); // allow 2 threads to access at the same time

        Runnable task = () -> {
            try {
                semaphore.acquire(); // obtain a permit (blocks if none is available)
                try {
                    // do the work
                } finally {
                    semaphore.release(); // return the permit
                }
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        };

        Thread thread1 = new Thread(task);
        Thread thread2 = new Thread(task);
        Thread thread3 = new Thread(task);

        thread1.start();
        thread2.start();
        thread3.start();
    }
}

44. What is a CyclicBarrier? How to use it?

Answer: A CyclicBarrier is a synchronization helper class that makes a group of threads wait until they have all reached a common barrier point before continuing execution. It is configured with the number of participating threads and used through the await() method.

Code example:

import java.util.concurrent.CyclicBarrier;

public class CyclicBarrierExample {
    public static void main(String[] args) {
        CyclicBarrier barrier = new CyclicBarrier(3, () -> {
            System.out.println("All threads have reached the barrier.");
        });

        Runnable task = () -> {
            try {
                // do the work
                barrier.await(); // wait for the other threads
            } catch (Exception e) {
                e.printStackTrace();
            }
        };

        Thread thread1 = new Thread(task);
        Thread thread2 = new Thread(task);
        Thread thread3 = new Thread(task);

        thread1.start();
        thread2.start();
        thread3.start();
    }
}

45. How to achieve orderly output of data among multiple threads?

Answer: You can use CountDownLatch, CyclicBarrier, or other synchronization mechanisms to ensure that threads execute and produce output in the required order.

Code example:

import java.util.concurrent.CountDownLatch;

public class OrderedOutputExample {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(2);

        Runnable task = () -> {
            // do the work
            latch.countDown();
        };

        Thread thread1 = new Thread(task);
        Thread thread2 = new Thread(task);

        thread1.start();
        thread2.start();

        latch.await(); // wait for thread 1 and thread 2 to finish
        System.out.println("Thread 1 and Thread 2 have finished.");

        // run the next task here
    }
}

46. What is graceful termination of a thread?

Answer: Graceful termination of a thread refers to terminating the execution of the thread in an appropriate way to ensure the release of resources and cleanup of the state when the thread needs to end.

47. How to implement singleton mode in a multi-threaded environment?

Answer: You can use double-checked locking, static inner classes, etc. to implement thread-safe singleton mode.

Code example:

public class Singleton {
    private volatile static Singleton instance;

    private Singleton() {
    }

    public static Singleton getInstance() {
        if (instance == null) {
            synchronized (Singleton.class) {
                if (instance == null) {
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}

48. How to deal with resource competition issues in a multi-threaded environment?

Answer: Synchronization mechanisms (such as synchronized or ReentrantLock) can be used to protect access to shared resources and avoid the contention problems caused by multiple threads modifying a resource at the same time.

49. What is the task decomposition pattern (Fork-Join Pattern)?

Answer: The task decomposition pattern is a concurrency design pattern in which a large task is split into multiple small tasks, the small tasks are executed concurrently on multiple threads, and the results are finally merged.

50. What are thread-safe inner classes? How to use it to implement thread-safe singleton pattern?

Answer: The thread-safe inner-class approach defines a private static inner class that holds the singleton instance of the outer class; the instance is created when the inner class is initialized. Because class initialization is guaranteed to be thread-safe by the JVM, this achieves lazy loading and thread safety at the same time.

Code example:

public class Singleton {
    private Singleton() {
    }

    private static class Holder {
        private static final Singleton INSTANCE = new Singleton();
    }

    public static Singleton getInstance() {
        return Holder.INSTANCE;
    }
}

51. What is Work Stealing Algorithm?

Answer: The work-stealing algorithm is an algorithm for task scheduling, often used in task-based parallel programming. It allows idle threads to steal tasks from other threads' task queues for execution to fully utilize multi-core processors.

52. What is ThreadLocalRandom? How can I use it to generate random numbers?

Answer: ThreadLocalRandom is a class introduced in Java 7 for generating random numbers in a multi-threaded environment. It is better suited to highly concurrent environments than a shared Random instance, because each thread gets its own generator and avoids contention.

Code example:

import java.util.concurrent.ThreadLocalRandom;

public class RandomExample {
    public static void main(String[] args) {
        ThreadLocalRandom random = ThreadLocalRandom.current();
        int randomNumber = random.nextInt(1, 101); // random integer between 1 and 100
        System.out.println(randomNumber);
    }
}

53. What is Amdahl’s Law? What implications does it have for parallelism?

Answer: Amdahl’s Law is a formula used to measure the effects of parallelism. It expresses the upper limit of the speedup after introducing parallelism in the system. It tells us that if a certain part of the program is serial, then no matter how many processors are added, the overall speedup is still limited by the impact of the serial part.
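
In formula form (the standard statement of the law, added here for reference): if a fraction P of the work can be parallelized and N processors are used, the speedup is at most 1 / ((1 − P) + P / N). For example, if 90% of the work is parallelizable (P = 0.9) and N = 4, the speedup is bounded by 1 / (0.1 + 0.9 / 4) ≈ 3.08, and even with unlimited processors it can never exceed 1 / 0.1 = 10.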

54. What is the visibility issue of threads? How to solve visibility issues?

Answer: The thread visibility problem means that when one thread modifies the value of a shared variable, other threads may not see the change immediately. Visibility issues can be solved using the volatile keyword, the synchronized keyword, Atomic classes, and so on.

55. What is ForkJoinPool? How to use it to perform tasks?

Answer: ForkJoinPool is a thread pool introduced in Java 7 that is designed for the task decomposition (fork-join) pattern. You can use ForkJoinTask and its subclass RecursiveTask to implement task decomposition and execution.

Code example:

import java.util.concurrent.RecursiveTask;
import java.util.concurrent.ForkJoinPool;

public class ForkJoinExample extends RecursiveTask<Integer> {
    private final int threshold = 10;
    private int[] array;
    private int start;
    private int end;

    public ForkJoinExample(int[] array, int start, int end) {
        this.array = array;
        this.start = start;
        this.end = end;
    }

    @Override
    protected Integer compute() {
        if (end - start <= threshold) {
            // small enough: sum directly
            int sum = 0;
            for (int i = start; i < end; i++) {
                sum += array[i];
            }
            return sum;
        } else {
            // otherwise split into two subtasks and combine the results
            int middle = (start + end) / 2;
            ForkJoinExample leftTask = new ForkJoinExample(array, start, middle);
            ForkJoinExample rightTask = new ForkJoinExample(array, middle, end);
            leftTask.fork();
            rightTask.fork();
            return leftTask.join() + rightTask.join();
        }
    }

    public static void main(String[] args) {
        int[] array = new int[1000];
        for (int i = 0; i < array.length; i++) {
            array[i] = i + 1;
        }
        ForkJoinPool pool = ForkJoinPool.commonPool();
        int result = pool.invoke(new ForkJoinExample(array, 0, array.length));
        System.out.println("Sum: " + result);
    }
}

56. What is a blocking queue (Blocking Queue)? How to use it to implement the producer-consumer pattern?

Answer: A blocking queue is a thread-safe queue that provides blocking operations, such as waiting for an element to become available when the queue is empty, or waiting for space when the queue is full. A blocking queue can be used to implement the producer-consumer pattern.

Code example:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumerExample {
    public static void main(String[] args) {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(10);

        Runnable producer = () -> {
            try {
                for (int i = 1; i <= 20; i++) {
                    queue.put(i); // blocks if the queue is full
                    System.out.println("Produced: " + i);
                    Thread.sleep(200);
                }
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        };

        Runnable consumer = () -> {
            try {
                for (int i = 1; i <= 20; i++) {
                    int value = queue.take(); // blocks if the queue is empty
                    System.out.println("Consumed: " + value);
                    Thread.sleep(400);
                }
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        };

        Thread producerThread = new Thread(producer);
        Thread consumerThread = new Thread(consumer);

        producerThread.start();
        consumerThread.start();
    }
}

57. What is the Thread.interrupt() method? How can I use it to interrupt a thread?

Answer: The Thread.interrupt() method requests interruption of a thread. Call it where the thread needs to be interrupted, then check the interruption status inside the thread's task with Thread.isInterrupted() and take appropriate action.

Code example:

Thread thread = new Thread(() -> {
    while (!Thread.currentThread().isInterrupted()) {
        // do the work
    }
});
thread.start();

// call interrupt() wherever the thread needs to be stopped
thread.interrupt();

58. What is StampedLock in Java concurrency package? How to use it to implement optimistic read locking?

Answer: StampedLock is a lock introduced in the Java concurrency package that supports write locks, read locks, and optimistic reads. You can use the tryOptimisticRead() method to obtain an optimistic read stamp, and then use the validate() method to check whether a write occurred in the meantime (i.e. whether the optimistic read is still valid).

Code example:

import java.util.concurrent.locks.StampedLock;

public class StampedLockExample {
    private double x, y;
    private final StampedLock lock = new StampedLock();

    void move(double deltaX, double deltaY) {
        long stamp = lock.writeLock();
        try {
            x += deltaX;
            y += deltaY;
        } finally {
            lock.unlockWrite(stamp);
        }
    }

    double distanceFromOrigin() {
        long stamp = lock.tryOptimisticRead();
        double currentX = x;
        double currentY = y;
        if (!lock.validate(stamp)) {
            stamp = lock.readLock();
            try {
                currentX = x;
                currentY = y;
            } finally {
                lock.unlockRead(stamp);
            }
        }
        return Math.sqrt(currentX * currentX + currentY * currentY);
    }
}

59. How to use Exchanger in Java to exchange data between two threads?

Answer: Exchanger is a synchronization tool in the Java concurrency package, used to implement data exchange between two threads. It exchanges data through the exchange() method and continues execution after the exchange is completed.

Code example:

import java.util.concurrent.Exchanger;

public class ExchangerExample {
    public static void main(String[] args) {
        Exchanger<String> exchanger = new Exchanger<>();

        Runnable task1 = () -> {
            try {
                String data = "Hello from Thread 1";
                System.out.println("Thread 1 sending: " + data);
                String receivedData = exchanger.exchange(data);
                System.out.println("Thread 1 received: " + receivedData);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        };

        Runnable task2 = () -> {
            try {
                String data = "Hello from Thread 2";
                System.out.println("Thread 2 sending: " + data);
                String receivedData = exchanger.exchange(data);
                System.out.println("Thread 2 received: " + receivedData);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        };

        Thread thread1 = new Thread(task1);
        Thread thread2 = new Thread(task2);

        thread1.start();
        thread2.start();
    }
}

60. What is the priority of a thread? How to set the priority of a thread?

Answer: The priority of a thread is an integer used to specify the priority order of threads when scheduling. You can use the setPriority() method to set the thread's priority.

Code example:

Thread thread1 = new Thread(() -> {
    // task code
});
thread1.setPriority(Thread.MAX_PRIORITY); // highest priority

Thread thread2 = new Thread(() -> {
    // task code
});
thread2.setPriority(Thread.MIN_PRIORITY); // lowest priority

61. What is the CopyOnWrite container? In what situations is it more applicable?

Answer: A CopyOnWrite container (such as CopyOnWriteArrayList) is a thread-safe container in the Java concurrency package. Every modification creates a new copy of the underlying data, so reads never contend with writes. It is best suited to read-heavy, write-light scenarios, because each write copies the entire container, which is expensive.
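
A brief sketch with CopyOnWriteArrayList (the values are illustrative): iteration works on a snapshot and never throws ConcurrentModificationException, while each add() copies the backing array.

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class CopyOnWriteExample {
    public static void main(String[] args) {
        List<String> listeners = new CopyOnWriteArrayList<>();
        listeners.add("listenerA");
        listeners.add("listenerB");

        // The loop iterates over a stable snapshot of two elements
        for (String listener : listeners) {
            listeners.add("addedDuringIteration"); // safe, but copies the array each time
            System.out.println("Notifying " + listener);
        }
        System.out.println("Final size: " + listeners.size());
    }
}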

62. What is thread stack overflow? How to avoid it?

Answer: Thread stack overflow means the thread's call stack has no room left for the information required by further method calls, resulting in a StackOverflowError. It can be avoided by adjusting the JVM thread stack size (the -Xss option), optimizing recursive methods, or reducing method call depth.

63. What is memory consistency problem? How to use volatile to solve memory consistency issues?

Answer: The memory consistency problem means that, in a multi-threaded environment, because memory reads and writes go through caches and may be reordered, different threads can observe inconsistent values of shared variables. Using the volatile keyword ensures that a write to a volatile variable is flushed to main memory and that a read of a volatile variable fetches the latest value from main memory.

64. What is ThreadGroup? What does it do?

Answer: ThreadGroup is a thread group, used to organize multiple threads together for easy management. It can be used to set the priority of the thread group, set the uncaught exception handler of the thread group, etc.

65. What is the rejection policy of the thread pool? How to customize the thread pool's denial policy?

Answer: The rejection policy of a thread pool determines how newly submitted tasks are handled when the pool cannot accept any more tasks. Common rejection policies are: AbortPolicy (the default, which throws an exception), CallerRunsPolicy (runs the task on the calling thread), DiscardPolicy (silently discards the task), and DiscardOldestPolicy (discards the oldest task in the queue).

You can customize the rejection policy by implementing the RejectedExecutionHandler interface.

Code example:

import java.util.concurrent.*;

public class CustomThreadPoolExample {
    public static void main(String[] args) {
        RejectedExecutionHandler customHandler = (r, exec) -> {
            System.out.println("Custom rejected: " + r.toString());
        };

        ThreadPoolExecutor executor = new ThreadPoolExecutor(
            2, // corePoolSize
            5, // maximumPoolSize
            1, TimeUnit.SECONDS, // keepAliveTime and unit
            new LinkedBlockingQueue<>(10), // workQueue
            customHandler // rejectedExecutionHandler
        );

        for (int i = 1; i <= 10; i++) {
            final int taskNum = i;
            executor.execute(() -> {
                System.out.println("Executing task " + taskNum);
            });
        }

        executor.shutdown();
    }
}

66. How to implement scheduled tasks in a multi-threaded environment?

Answer: You can use the ScheduledExecutorService interface to implement scheduled tasks in a multi-threaded environment. Through the schedule() method, tasks can be scheduled to be executed at a fixed delay or a fixed period.

Code example:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ScheduledTaskExample {
    public static void main(String[] args) {
        ScheduledExecutorService executor = Executors.newScheduledThreadPool(1);

        Runnable task = () -> {
            System.out.println("Task executed at: " + System.currentTimeMillis());
        };

        // run once after a 3-second delay
        executor.schedule(task, 3, TimeUnit.SECONDS);

        // initial delay of 1 second, then run every 2 seconds
        executor.scheduleAtFixedRate(task, 1, 2, TimeUnit.SECONDS);

        // initial delay of 1 second, then wait 2 seconds after each run finishes before running again
        executor.scheduleWithFixedDelay(task, 1, 2, TimeUnit.SECONDS);
    }
}

67. How to handle uninterruptible tasks in a multi-threaded environment?

Answer: You can make a task effectively uninterruptible by catching InterruptedException and continuing the work in the exception handler instead of exiting.

Code example:

Thread thread = new Thread(() -> {
    while (true) {
        try {
            // perform a step of the task that must not be cut short
            Thread.sleep(100); // stands in for blocking work
        } catch (InterruptedException e) {
            // swallow the interrupt and keep working: the task is effectively uninterruptible
        }
    }
});
thread.start();

// interrupt() is requested here, but the task above keeps running
thread.interrupt();

68. How to use Phaser in Java to implement multi-stage parallel tasks?

Answer: Phaser is a synchronization tool in the Java concurrency package that can be used to coordinate multi-stage parallel tasks. It synchronizes threads phase by phase: once all parties have completed the current phase, the threads can continue to the next phase.

Code example:

import java.util.concurrent.Phaser;

public class PhaserExample {
    public static void main(String[] args) {
        Phaser phaser = new Phaser(4); // parties to synchronize: 3 worker threads + the main thread

        Runnable task = () -> {
            // do the work for this phase
            phaser.arriveAndAwaitAdvance(); // wait for the other parties to arrive
        };

        Thread thread1 = new Thread(task);
        Thread thread2 = new Thread(task);
        Thread thread3 = new Thread(task);

        thread1.start();
        thread2.start();
        thread3.start();

        phaser.arriveAndAwaitAdvance(); // the main thread arrives too; everyone moves to the next phase

        // run the next phase here
    }
}

69. What is thread safety? How to evaluate whether a class is thread-safe?

Answer: Thread safety means that in a multi-threaded environment, access and modification of shared resources will not cause data inconsistency or race conditions. There are several criteria to evaluate whether a class is thread-safe:

  • Atomicity: the execution of a method must be atomic: it either completes entirely or has no effect.

  • Visibility: The modified value must be visible to other threads, that is, the latest value must be read.

  • Ordering: The order of program execution must be consistent with the order of the code.

If a class meets the above three conditions, it can be considered thread-safe.

70. What is a non-blocking algorithm? How to use non-blocking algorithms in multi-threaded environment?

Answer: Non-blocking algorithm means that in a multi-threaded environment, the traditional lock mechanism is not used, but atomic operations and other methods are used to achieve access to shared resources. It can avoid thread blocking and competition, thereby improving concurrency performance.

When using non-blocking algorithms, atomic variables, CAS operations, optimistic locking and other technologies are usually used to achieve thread-safe access. However, non-blocking algorithms are also complex and suitable for specific scenarios, requiring careful design and testing.

71. What are lock elimination and lock expansion? How to avoid them?

Answer: Lock elimination means that during the compiler optimization phase, locks that cannot be accessed by other threads are eliminated, thereby reducing lock competition. Lock expansion refers to upgrading lightweight locks to heavyweight locks to provide stronger synchronization protection when competition for locks is fierce in a multi-threaded environment.

Lock expansion can be mitigated by reducing the scope of locks, keeping data in local variables so that locks can be eliminated, and optimizing lock granularity.

72. What is thread context switching? How to reduce context switching overhead?

Answer: Thread context switching refers to the process of switching from one thread to another. The operating system needs to save the context of the current thread and load the context of the next thread. Context switching consumes time and resources and affects system performance.

The cost of context switching can be reduced by reducing the number of threads, rationally allocating CPU time slices, using lock-free programming, and using coroutines.

73. What is a thread leak? How to avoid thread leaks?

Answer: A thread leak means that, in a multi-threaded program, a thread is not shut down correctly after being created, so its resources are never released, which may eventually degrade system performance. Thread leaks can be avoided by using thread pools properly, shutting threads down in time, and using try-with-resources where applicable.

74. What are the usage scenarios of ThreadLocal? What are the pros and cons?

Answer: ThreadLocal is a thread-local variable that provides a way to store data in each thread. Common usage scenarios include:

  • In a multi-threaded environment, each thread needs its own independent copy of some state, such as a database connection or a Session.

  • It is necessary to avoid using parameters to pass data, thereby reducing the coupling of the code.

Advantages include:

  • Thread safety: each thread has its own copy, no race conditions will occur.

  • Simplified parameter passing: Avoid passing a large number of parameters between methods.

Disadvantages include:

  • Memory leak: if the data in a ThreadLocal is not cleaned up in time (by calling remove()), a memory leak may occur; see the sketch below.

  • May increase context switching: When the number of threads is too large, ThreadLocal may increase context switching overhead.
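
To guard against the memory-leak disadvantage, especially when threads are pooled and reused, a common pattern is to clear the value in a finally block. A sketch (the class, field, and method names are illustrative):

public class ThreadLocalCleanupExample {
    private static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

    public static void handleRequest(String user) {
        CONTEXT.set(user);           // bind data to the current thread
        try {
            process();               // code anywhere on this thread can read it
        } finally {
            CONTEXT.remove();        // avoid leaking the value into the next pooled task
        }
    }

    private static void process() {
        System.out.println("Processing for " + CONTEXT.get());
    }

    public static void main(String[] args) {
        handleRequest("alice");
    }
}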

75. What is Daemon Thread? How to create a daemon thread?

Answer: A daemon thread is a thread running in the background. When all non-daemon threads end, the daemon thread will end with the exit of the JVM. A thread can be set as a daemon thread by calling the setDaemon(true) method.

Code example:

Thread daemonThread = new Thread(() -> {
    while (true) {
        // perform background work
    }
});
daemonThread.setDaemon(true);
daemonThread.start();

76. What is CAS (Compare and Swap) operation? How does it achieve lock-free synchronization?

Answer: The CAS (Compare and Swap) operation is an atomic operation used to achieve lock-free synchronization. It is used to solve the problem of concurrent access to shared resources in a multi-threaded environment. It ensures atomicity by comparing the value in the memory with the expected value and writing the new value into the memory if they are equal.

CAS operations are implemented on top of atomic CPU instructions and are exposed in Java through classes such as AtomicInteger and AtomicLong.

Code example:

import java.util.concurrent.atomic.AtomicInteger;

public class CASExample {
    private static AtomicInteger count = new AtomicInteger(0);

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    count.incrementAndGet();
                }
            }).start();
        }

        // crude wait for all threads to finish
        try {
            Thread.sleep(2000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

        System.out.println("Count: " + count);
    }
}

77. What is a deadlock? How to avoid deadlock?

Answer: Deadlock means that multiple threads fall into an infinite waiting state because they are waiting for each other to release the lock. Deadlocks often involve multiple resources and multiple threads.

Deadlocks can be avoided in several ways:

  • Acquire locks in a fixed order: Threads acquire locks in the same order, reducing the probability of deadlock.

  • Set the timeout: If the thread cannot acquire the lock, you can set a timeout and release the acquired lock after the timeout.

  • Use the tryLock() method: try to acquire the lock without blocking; if it cannot be acquired, release any locks already held and back off (a sketch follows below).

  • Use the Lock interface's timed tryLock() to acquire multiple locks: if not all of them can be acquired, release the ones already obtained and retry later.
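
A sketch of the timeout-based strategy with ReentrantLock.tryLock(timeout, unit) (class and lock names are illustrative): if the second lock cannot be obtained in time, the first one is released so the other thread can make progress.

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockExample {
    private final ReentrantLock lockA = new ReentrantLock();
    private final ReentrantLock lockB = new ReentrantLock();

    public boolean transfer() throws InterruptedException {
        if (lockA.tryLock(1, TimeUnit.SECONDS)) {
            try {
                if (lockB.tryLock(1, TimeUnit.SECONDS)) {
                    try {
                        // work with both resources
                        return true;
                    } finally {
                        lockB.unlock();
                    }
                }
            } finally {
                lockA.unlock();
            }
        }
        return false; // could not get both locks; the caller may back off and retry
    }
}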

78. What is a thread scheduling algorithm? What are the common thread scheduling algorithms?

Answer: The thread scheduling algorithm is the strategy used by the operating system to decide which thread to run at a certain time. Common thread scheduling algorithms include:

  • First come, first served (FCFS): Scheduling is performed in the order in which threads arrive.

  • Short Job First (SJF): Prioritizes the thread with the shortest execution time.

  • Priority scheduling: Schedule according to the priority of the thread, and the thread with higher priority will be executed first.

  • Time slice rotation (Round Robin): Each thread is assigned a time slice, executes within the time slice, and then switches to the next thread.

  • Multilevel Feedback Queue: Adjust the priority based on the historical behavior of the thread to improve response time.

79. What are the risks and challenges in concurrent programming?

Answer: The following risks and challenges exist in concurrent programming:

  • Race Condition: Multiple threads compete for shared resources, resulting in inconsistent data.

  • Deadlock: Multiple threads wait for each other to release the lock and fall into infinite waiting.

  • Thread safety issues: Multiple threads access shared resources at the same time, resulting in data inconsistency.

  • Memory consistency problem: Multiple threads read and write shared variables in different CPU caches, resulting in data inconsistency.

  • Context switching overhead: Frequent switching of threads leads to performance degradation.

  • Increased complexity: Concurrent programming increases the complexity of the code and the difficulty of debugging.

In order to deal with these risks and challenges, it is necessary to rationally design concurrency solutions, use appropriate synchronization mechanisms, and conduct sufficient testing and tuning.

80. What is the thread liveness problem? What types of liveness issues are there?

Answer: The thread liveness problem refers to situations where, in a multi-threaded environment, a thread cannot execute normally or cannot make progress. Common thread liveness issues include:

  • Deadlock: Multiple threads are waiting for each other to release the lock.

  • Livelock: Multiple threads repeatedly try an operation but are unable to continue.

  • Starvation: some threads never obtain the resources they need and are never able to execute.

  • Infinite loop: The thread is stuck in an infinite loop and cannot exit.

In order to avoid thread activity problems, it is necessary to design the synchronization mechanism reasonably, avoid occupying locks for a long time, and conduct sufficient testing and debugging.

81. What is the ABA problem? How to solve ABA problem using AtomicStampedReference?

Answer: The ABA problem is a problem that arises in lock-free programming: in a multi-threaded environment a value first changes from A to B and then back to A, and a thread comparing only the value cannot detect that it ever changed. This may cause some operations to misjudge the situation when values appear equal.

AtomicStampedReference is a tool provided in the Java concurrency package to solve the ABA problem. It introduces a version number (stamp): in addition to comparing the reference value, the stamp must also match for an update to succeed.

Code example:

import java.util.concurrent.atomic.AtomicStampedReference;

public class ABAProblemSolution {
    public static void main(String[] args) {
        AtomicStampedReference<Integer> atomicStampedRef = new AtomicStampedReference<>(1, 0);

        int stamp = atomicStampedRef.getStamp(); // read the initial stamp (version)

        Thread thread1 = new Thread(() -> {
            atomicStampedRef.compareAndSet(1, 2, stamp, stamp + 1); // A -> B
            atomicStampedRef.compareAndSet(2, 1, stamp + 1, stamp + 2); // B -> A
        });

        Thread thread2 = new Thread(() -> {
            int expectedStamp = atomicStampedRef.getStamp();
            int expectedValue = atomicStampedRef.getReference();

            try {
                Thread.sleep(1000); // give thread 1 time to finish
            } catch (InterruptedException e) {
                e.printStackTrace();
            }

            boolean success = atomicStampedRef.compareAndSet(expectedValue, 3, expectedStamp, expectedStamp + 1);
            System.out.println("Thread 2 update: " + success); // fails if the stamp has changed
        });

        thread1.start();
        thread2.start();
    }
}

82. How to use the Fork-Join framework to implement parallel processing of tasks?

Answer: The Fork-Join framework is a tool in the Java concurrency package used to implement parallel processing of tasks. It is based on the idea of "divide and conquer": it divides large tasks into small tasks, processes the small tasks in parallel, and finally merges the results.

To use the Fork-Join framework, you extend RecursiveTask (which returns a result) or RecursiveAction (which returns no result) and implement the compute() method to handle the task.

Code example:

import java.util.concurrent.RecursiveTask;
import java.util.concurrent.ForkJoinPool;

public class ForkJoinExample {
    static class SumTask extends RecursiveTask<Long> {
        private final int[] array;
        private final int start;
        private final int end;

        SumTask(int[] array, int start, int end) {
            this.array = array;
            this.start = start;
            this.end = end;
        }

        @Override
        protected Long compute() {
            if (end - start <= 100) {
                // threshold: 100 elements or fewer are summed directly
                long sum = 0;
                for (int i = start; i < end; i++) {
                    sum += array[i];
                }
                return sum;
            } else {
                // more than 100 elements: split the task in half
                int middle = (start + end) / 2;
                SumTask leftTask = new SumTask(array, start, middle);
                SumTask rightTask = new SumTask(array, middle, end);
                leftTask.fork();
                rightTask.fork();
                return leftTask.join() + rightTask.join();
            }
        }
    }

    public static void main(String[] args) {
        ForkJoinPool forkJoinPool = new ForkJoinPool();
        int[] array = new int[1000];
        for (int i = 0; i < array.length; i++) {
            array[i] = i + 1;
        }
        long result = forkJoinPool.invoke(new SumTask(array, 0, array.length));
        System.out.println("Sum: " + result);
    }
}

83. What are parallel streams and parallel computing? How to use Stream in Java for parallel computing?

Answer: Parallel streams are a feature introduced in Java 8 that can process data in streams in parallel on multi-core processors. Parallel streams increase processing speed by dividing data into multiple parts and processing them on multiple threads.

To use parallel streams, simply call parallelStream() on a collection (or parallel() on an existing stream) and then perform the stream operations.

Code example:

import java.util.Arrays;
import java.util.List;

public class ParallelStreamExample {
    public static void main(String[] args) {
        List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8, 9, 10);

        int sum = numbers.parallelStream()
                .filter(n -> n % 2 == 0) // keep even numbers
                .mapToInt(Integer::intValue) // convert to int
                .sum();

        System.out.println("Sum of even numbers: " + sum);
    }
}

84. What is a thread group (ThreadGroup) in Java? What does it do?

Answer: ThreadGroup is a mechanism in Java for organizing and managing threads. Thread groups allow threads to be divided into multiple groups for easy management and control. Thread groups can be nested to form a tree structure.

The main functions of thread groups include:

  • Set the priority of the thread group.

  • Sets the thread group's uncaught exception handler.

  • Batch interrupts all threads in the thread group.

  • Convenient for counting and monitoring threads.

85. How to achieve collaboration and communication between threads?

Answer: Cooperation and communication between threads can be achieved in the following ways:

  • Shared variables: Multiple threads share a variable and control access through synchronization mechanisms such as locks and semaphores.

  • Pipe: One thread writes data to the pipe, and another thread reads data from the pipe to achieve inter-thread communication.

  • Blocking queue: Use blocking queue as a shared data structure. The producer thread puts data into the queue and the consumer thread takes data from the queue.

  • Condition variable: Use Condition object to implement waiting and notification between threads.

  • Semaphore: Use semaphores to control access to shared resources.

  • Inter-thread signals: use wait() together with notify() or notifyAll() to implement waiting and notification between threads.

86. What is a thread pool? How to create and use thread pool?

Answer: A thread pool is a mechanism for managing and reusing threads; it avoids frequently creating and destroying threads, thereby improving program performance and resource utilization. The thread pool in Java is provided by the Executor framework and is mainly implemented by ThreadPoolExecutor.

You can create different types of thread pools through the factory methods provided by the Executors class, such as newFixedThreadPool(), newCachedThreadPool(), and newScheduledThreadPool().

Code example:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadPoolExample {
    public static void main(String[] args) {
        ExecutorService executor = Executors.newFixedThreadPool(5);

        for (int i = 0; i < 10; i++) {
            final int taskNum = i;
            executor.execute(() -> {
                System.out.println("Executing task " + taskNum);
            });
        }

        executor.shutdown();
    }
}

87. What are the number of core threads, the maximum number of threads and the work queue of the thread pool? How to adjust these parameters?

Answer: The number of core threads in the thread pool is the number of threads that remain active in the thread pool, and the maximum number of threads is the maximum number of threads allowed by the thread pool. A work queue is a queue used to store tasks waiting to be executed.

You can create a custom thread pool by calling the ThreadPoolExecutor constructor, and tune the pool's performance and behavior by adjusting the core pool size, the maximum pool size, and the capacity of the work queue.

88. What is the rejection policy of the thread pool? How to choose the appropriate rejection strategy?

Answer: The rejection policy of the thread pool is to decide how to handle newly submitted tasks when the thread pool cannot continue to accept new tasks. Common rejection strategies include:

  • AbortPolicy (default): throws a RejectedExecutionException.

  • CallerRunsPolicy: runs the rejected task in the calling (submitting) thread.

  • DiscardPolicy: Discard newly submitted tasks directly.

  • DiscardOldestPolicy: Discards the oldest task in the queue.

You can choose an appropriate rejection strategy based on actual needs, or implement a customized rejection strategy.
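
As a minimal sketch, a custom rejection strategy is just an implementation of RejectedExecutionHandler; it can be passed as the last constructor argument of ThreadPoolExecutor, as in the example above (the logging behavior here is only an illustration):

import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;

public class LoggingRejectedHandler implements RejectedExecutionHandler {
    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        // Log the rejection instead of throwing or silently discarding
        System.err.println("Task rejected, current queue size = " + executor.getQueue().size());
    }
}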

89. What is the pre-start strategy of the thread pool? How to use pre-launch strategy?

Answer: The pre-start strategy of the thread pool means that, right after the pool is created, a certain number of core threads are created in advance so that tasks do not have to wait for threads to be created, shortening task start-up time.

You can use the pre-start strategy by calling the prestartAllCoreThreads() method, which creates and starts all core threads so they are idle and ready to take tasks from the work queue.

Code example:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;

public class PrestartCoreThreadsExample {

    public static void main(String[] args) {
        ExecutorService executor = Executors.newFixedThreadPool(5);
        // Pre-start all core threads so they are ready before any task is submitted
        ((ThreadPoolExecutor) executor).prestartAllCoreThreads();

        for (int i = 0; i < 10; i++) {
            final int taskNum = i;
            executor.execute(() -> {
                System.out.println("Executing task " + taskNum);
            });
        }

        executor.shutdown();
    }
}

90. What is Work Stealing in the Fork-Join framework? How to make work stealing more effective?

Answer: In the Fork-Join framework, work stealing means that an idle thread steals tasks from the queues of other threads and executes them. When a thread's own queue is empty, it can take tasks from the tail of another thread's queue, which improves thread utilization and balances task distribution.

To improve the efficiency of work stealing, tasks can be divided into smaller subtasks so that more threads can participate in work stealing. At the same time, you can avoid creating too many threads to reduce the overhead of context switching.
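
A small Fork-Join sketch that sums an array with RecursiveTask; the THRESHOLD value is illustrative, and keeping subtasks reasonably small is what gives work stealing room to balance the load:

import java.util.Arrays;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000;   // illustrative cutoff for splitting
    private final long[] data;
    private final int from, to;

    public SumTask(long[] data, int from, int to) {
        this.data = data;
        this.from = from;
        this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {             // small enough: compute directly
            long sum = 0;
            for (int i = from; i < to; i++) sum += data[i];
            return sum;
        }
        int mid = (from + to) / 2;                // otherwise split into two subtasks
        SumTask left = new SumTask(data, from, mid);
        SumTask right = new SumTask(data, mid, to);
        left.fork();                              // the forked half may be stolen by an idle worker
        return right.compute() + left.join();
    }

    public static void main(String[] args) {
        long[] data = new long[1_000_000];
        Arrays.fill(data, 1L);
        long sum = ForkJoinPool.commonPool().invoke(new SumTask(data, 0, data.length));
        System.out.println("Sum = " + sum);       // expected: 1000000
    }
}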

91. What are optimistic locks and pessimistic locks? What's the difference?

Answer: Optimistic locking and pessimistic locking are two different concurrency control strategies.

  • Optimistic locking: Assuming that there will be no conflicts between multiple threads, each thread can perform operations directly, but it needs to check whether the data has been modified by other threads when updating. If it has been modified, try the operation again.

  • Pessimistic lock: Assuming that conflicts will occur between multiple threads, each thread will acquire a lock before operating to prevent other threads from modifying data at the same time. Once a thread acquires the lock, other threads must wait.

Optimistic locks are usually implemented with mechanisms such as version numbers or timestamps, while pessimistic locks rely on locking mechanisms such as synchronized and ReentrantLock in Java.
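
A minimal sketch of the optimistic approach using a CAS retry loop on AtomicInteger (the class name is illustrative); a pessimistic version would simply wrap the increment in a synchronized block:

import java.util.concurrent.atomic.AtomicInteger;

public class OptimisticCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    public void increment() {
        int current;
        do {
            current = value.get();                                // read the current value
        } while (!value.compareAndSet(current, current + 1));     // retry if another thread changed it
    }

    public int get() {
        return value.get();
    }
}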

92. What is the ABA problem of CAS operation? How to use version number to solve ABA problem?

Answer: The ABA problem of CAS (Compare and Swap) operations means that a value first changes from A to B and then back to A; a plain CAS that only compares the value cannot tell that other threads modified it in the meantime.

Using version numbers can solve the ABA problem of CAS operations. On each update, not only are the values compared for equality, but the version numbers are also compared for a match. This way, even if the value has changed back to A, the changed version number lets other threads correctly detect the situation.

AtomicStampedReference in Java can be used to solve the ABA problem, which introduces a version number mechanism.
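
A minimal sketch of AtomicStampedReference, which pairs the value with a stamp acting as a version number:

import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaExample {
    public static void main(String[] args) {
        AtomicStampedReference<Integer> ref = new AtomicStampedReference<>(100, 0);

        int stamp = ref.getStamp();         // remember the current version
        Integer value = ref.getReference();

        // Succeeds only if BOTH the expected value and the expected stamp still match
        boolean updated = ref.compareAndSet(value, 200, stamp, stamp + 1);
        System.out.println("updated = " + updated + ", value = " + ref.getReference());
    }
}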

93. What is a thread’s context class loader (Context Class Loader)? What does it do?

Answer: A thread's context class loader is the class loader used by the thread when loading a class. Class loaders in Java have a parent-child relationship, and a tree-like structure can be formed between class loaders. However, the thread context class loader does not necessarily follow the parent-child relationship and can be set according to the actual situation.

Context class loaders are useful in multi-threaded environments, especially in some frameworks where threads in a thread pool may not have access to the correct classpath. By setting the context class loader, you can ensure that the thread loads the correct class.
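
A minimal sketch of reading and setting a thread's context class loader (the loader chosen here is only illustrative):

public class ContextClassLoaderExample {
    public static void main(String[] args) {
        Thread t = new Thread(() -> {
            ClassLoader cl = Thread.currentThread().getContextClassLoader();
            System.out.println("Context class loader: " + cl);
        });
        // Illustrative: make the new thread use this class's loader as its context class loader
        t.setContextClassLoader(ContextClassLoaderExample.class.getClassLoader());
        t.start();
    }
}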

94. What is Java Memory Model (JMM)? How is it thread-safe?

Answer: The Java Memory Model (JMM) is a specification that defines how shared memory is accessed between threads in a multi-threaded program. JMM defines the order and visibility of various operations and how to prevent incorrect reordering.

JMM ensures thread safety through synchronization locks, the volatile keyword, the final keyword, and related mechanisms. Synchronization locks guarantee mutually exclusive access between threads, the volatile keyword guarantees the visibility of a variable and forbids harmful reordering, and the final keyword guarantees that a field is not modified after initialization.
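
A small sketch of the visibility guarantee that volatile provides under the JMM (the field name is illustrative); without volatile, the worker thread might never observe the write made by main:

public class VolatileFlagExample {
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // busy-wait until the flag flips
            }
            System.out.println("Worker observed running = false and stopped");
        });
        worker.start();

        Thread.sleep(100);
        running = false;   // the volatile write becomes visible to the worker thread
    }
}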

95. What is thread safety? How to evaluate whether a class is thread-safe?

Answer: Thread safety means that in a multi-threaded environment, access and modification of shared resources will not cause data inconsistency or race conditions. There are several criteria to evaluate whether a class is thread-safe:

  • Atomicity: A method's execution must be atomic; it either completes entirely or has no effect.

  • Visibility: The modified value must be visible to other threads, that is, the latest value must be read.

  • Ordering: The order of program execution must be consistent with the order of the code.

If a class meets the above three conditions, it can be considered thread-safe.

96. How to implement a thread-safe singleton mode?

Answer: The following methods can be used to implement thread-safe singleton mode:

  • Lazy mode (Double-Checked Locking): Use double-checked locking to synchronize when the instance is first obtained to avoid creating instances multiple times.
public class Singleton {

    private static volatile Singleton instance;

    private Singleton() {
    }

    public static Singleton getInstance() {
        if (instance == null) {
            synchronized (Singleton.class) {
                if (instance == null) {
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
  • Static inner class: Using the class-loading mechanism, the inner class is only loaded when the getInstance() method is first called, thus achieving lazy loading.
public class Singleton {

    private Singleton() {
    }

    private static class SingletonHolder {
        private static final Singleton INSTANCE = new Singleton();
    }

    public static Singleton getInstance() {
        return SingletonHolder.INSTANCE;
    }
}
  • Enumeration singleton: Use the characteristics of the enumeration type to ensure that there is only one instance.
public enum Singleton {

    INSTANCE;

    // Other methods and fields can be added here
}

These methods can all implement thread-safe singleton mode, and you can choose the appropriate method according to actual needs.

97. What are thread-safe collections in Java? List some common thread-safe collection classes.

Answer: A thread-safe collection is a data structure that can be safely operated in a multi-threaded environment, ensuring that no data inconsistency or race conditions will occur during concurrent access. Some common thread-safe collection classes include:

  • ConcurrentHashMap: Thread-safe hash table, used instead of HashMap.

  • CopyOnWriteArrayList: Thread-safe dynamic array, suitable for scenarios where there is more reading and less writing.

  • CopyOnWriteArraySet: A thread-safe collection implemented based on CopyOnWriteArrayList.

  • ConcurrentLinkedQueue: Thread-safe unbounded non-blocking queue.

  • BlockingQueue: A family of blocking queues, such as ArrayBlockingQueue, LinkedBlockingQueue, etc.

  • ConcurrentSkipListMap: A thread-safe, ordered map implemented with a skip list.

These thread-safe collection classes can operate safely in a multi-threaded environment without requiring additional synchronization measures.
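
A minimal sketch that uses ConcurrentHashMap's atomic merge() to count events from multiple threads without any external lock:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ConcurrentMapExample {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();
        ExecutorService pool = Executors.newFixedThreadPool(4);

        for (int i = 0; i < 1_000; i++) {
            pool.execute(() -> counts.merge("hits", 1, Integer::sum)); // atomic per-key update
        }

        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println(counts); // expected: {hits=1000}
    }
}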

98. What is a thread safety checking tool? Please give an example.

Answer: Thread safety checking tools are a type of tool used to check thread safety issues in concurrent programs, which can help discover and fix potential concurrency bugs. Common thread safety checking tools include:

  • FindBugs/SpotBugs: Static code analysis tool that can check for concurrency issues in the code.

  • CheckThread: can be used to check whether there are thread safety issues in multi-threaded programs.

  • ThreadSanitizer (TSan): A memory error detection tool that can detect data races and deadlock problems in multi-threaded programs.

  • Java Concurrency Stress Test (jcstress): A testing tool officially provided by Java, used to detect uncertain behavior in concurrent code.

These tools can help identify concurrency issues during the development and testing phases, thereby improving the quality of concurrent programs.

99. What are Thread Dump and Heap Dump in Java? How is this information generated and analyzed?

Answer: Thread Dump is a status snapshot of all threads in the current JVM, and Heap Dump is a snapshot of the current JVM heap memory. They can help developers analyze the running status and memory usage of the program, and are especially useful when problems such as deadlocks and memory leaks occur.

There are many ways to generate thread dumps and heap dumps, including the JVM's own jstack and jmap commands, or obtaining them programmatically via ThreadMXBean and MemoryMXBean. To analyze this information, you can use tools such as Eclipse Memory Analyzer (MAT).
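
A small sketch of taking a thread dump programmatically with ThreadMXBean (the exact output format depends on the JVM):

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadDumpExample {
    public static void main(String[] args) {
        ThreadMXBean threadMXBean = ManagementFactory.getThreadMXBean();
        // true, true -> include locked monitors and ownable synchronizers in the dump
        for (ThreadInfo info : threadMXBean.dumpAllThreads(true, true)) {
            System.out.println(info);
        }
    }
}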

100. How to deal with concurrency performance issues in Java?

Answer: Dealing with concurrency performance issues requires comprehensive consideration of multiple aspects, including code design, synchronization mechanism, concurrency control, etc. Some common treatments include:

  • Avoid excessive lock competition: Reduce the granularity of locks and try to use lock-free data structures.

  • Reduce context switching: Use mechanisms such as thread pools and coroutines to reduce the frequent creation and destruction of threads.

  • Reasonable task division: Use technologies such as the Fork-Join framework to split large tasks into small tasks to improve parallelism.

  • Use high-performance data structures: Choose appropriate data structures, such as ConcurrentHashMap, ConcurrentSkipListMap, etc.

  • Reasonably adjust the thread pool parameters: Adjust the number of core threads, the maximum number of threads and the work queue size of the thread pool according to actual needs.

  • Perform performance testing and tuning: Use performance testing tools to conduct stress testing and perform performance tuning based on the test results.

Dealing with concurrent performance issues requires comprehensive consideration of multiple factors and optimization and adjustment based on specific circumstances.

Origin blog.csdn.net/superfjj/article/details/132617702