Java Interview - Concurrent Programming

1. What is parallelism? What is concurrency?

From the perspective of the operating system, a thread is the smallest unit of CPU scheduling

  • Parallelism: two threads are executing at the same instant, which requires at least two CPU cores. With only one CPU core, one thread has to give up the processor before another can run.
  • Concurrency: at any single instant only one thread is executing, but several threads make progress within the same period of time. Concurrency relies on the CPU switching between threads; each switch is so short that users basically cannot perceive it.

2. What is a process? What are threads?

  • Process: a process is a running activity of a program on a data set; it is the basic unit by which the operating system allocates and schedules resources.
  • Thread: a thread is one execution path within a process. A process contains at least one thread, and the threads of a process share its resources. The thread is the basic unit of CPU scheduling.

For example, in Java, starting the main method starts a JVM process, and main runs as a thread (the main thread) of that process. A process can contain multiple threads, which share the process's heap and method area, while each thread has its own program counter and stack.

3. What are the thread life cycles? The process of state switching?

  • New (NEW)
  • Runnable (RUNNABLE)
  • Blocked (BLOCKED)
  • Waiting (WAITING)
  • Timed waiting (TIMED_WAITING)
  • Terminated (TERMINATED)

A thread that has just been created is in NEW; after start() is called it becomes RUNNABLE. A runnable thread becomes BLOCKED when it fails to acquire a monitor lock, WAITING after wait() or join(), and TIMED_WAITING after sleep(timeout) or wait(timeout). It returns to RUNNABLE when it acquires the lock, is notified, or the timeout expires, and it becomes TERMINATED when run() finishes.

4. What is deadlock? What are the conditions for deadlock? How to avoid deadlock?

       For example, thread A holds resource 1 and thread B holds resource 2. Neither releases its own resource while each tries to acquire the other's, so the two threads wait for each other forever. This state is called a deadlock. The following code illustrates it.

public class DeadLockDemo {

    private static Object resource1 = new Object(); // resource 1
    private static Object resource2 = new Object(); // resource 2

    public static void main(String[] args) {

        new Thread(() -> {
            synchronized (resource1) {
                System.out.println(Thread.currentThread() + "get resource1");
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                System.out.println(Thread.currentThread() + "waiting get resource2");
                synchronized (resource2) {
                    System.out.println(Thread.currentThread() + "get resource2");
                }
            }
        }, "Thread 1").start();

        new Thread(() -> {
            synchronized (resource2) {
                System.out.println(Thread.currentThread() + "get resource2");
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                System.out.println(Thread.currentThread() + "waiting get resource1");
                synchronized (resource1) {
                    System.out.println(Thread.currentThread() + "get resource1");
                }
            }
        }, "Thread 2").start();
    }
}

## Output
Thread[Thread 1,5,main]get resource1
Thread[Thread 2,5,main]get resource2
Thread[Thread 1,5,main]waiting get resource2
Thread[Thread 2,5,main]waiting get resource1

Conditions for deadlock to occur:

  • Mutual exclusion: a resource is held by only one thread at any given time
  • Request-and-hold: while blocked waiting for additional resources, a thread keeps holding the resources it has already acquired
  • No preemption: resources a thread has acquired cannot be forcibly taken away by other threads; they are released only after the thread is done with them
  • Circular wait: several threads form a head-to-tail circular chain, each waiting for a resource held by the next

How to avoid deadlock:

At least one condition for deadlock to occur must be broken

  • Break the request-and-hold condition: request all needed resources at once
  • Break the no-preemption condition: if a thread holding some resources cannot obtain the others it needs, it actively releases the resources it already holds
  • Break the circular-wait condition: acquire resources in a fixed order, requesting the one with the smaller sequence number first and the larger one later (see the sketch below)
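
As an illustration, a minimal sketch (resource and thread names follow the demo above) that breaks the circular wait by having both threads acquire resource1 before resource2:

public class FixedOrderDemo {

    private static final Object resource1 = new Object();
    private static final Object resource2 = new Object();

    public static void main(String[] args) {
        Runnable task = () -> {
            // both threads take the locks in the same order: resource1, then resource2
            synchronized (resource1) {
                System.out.println(Thread.currentThread().getName() + " got resource1");
                synchronized (resource2) {
                    System.out.println(Thread.currentThread().getName() + " got resource2");
                }
            }
        };
        new Thread(task, "Thread 1").start();
        new Thread(task, "Thread 2").start();
    }
}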

5. What does synchronized actually lock?

      synchronized itself is not a lock; the lock is always an object, and synchronized is essentially a "lock this object" operation, so it does not lock a block of code as such. When used on an instance method, the lock is the object the method is called on. When used on a static method, the lock is the Class object of the current class, which affects all instances. When used on a code block, the lock is the object given in parentheses; if a Class object is given, the lock is shared by all instances of that class.
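
A minimal sketch of the three forms (class and method names are illustrative):

public class SyncForms {

    public synchronized void instanceMethod() {
        // lock object: the instance the method is called on (this)
    }

    public static synchronized void staticMethod() {
        // lock object: SyncForms.class, the Class object of the current class
    }

    public void blockForms() {
        synchronized (this) {
            // lock object: the instance given in parentheses
        }
        synchronized (SyncForms.class) {
            // lock object: the Class object, shared by all instances
        }
    }
}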

6. What is the underlying implementation of synchronized?

The underlying principle of the synchronized keyword belongs to the JVM level.

  1. synchronized on a code block

      A synchronized block is implemented with the monitorenter and monitorexit bytecode instructions. monitorenter marks the start of the block: when it executes, if the monitor's lock counter is 0 the lock can be acquired and the counter is incremented to 1; other threads that try to acquire the lock see a non-zero counter and have to wait. monitorexit marks the end of the block: executing it releases the lock just acquired and decrements the counter by 1.

  2. synchronized on a method

      When synchronized modifies a method, the method is marked with the ACC_SYNCHRONIZED access flag. The JVM uses this flag to recognize that the method is synchronized and performs the corresponding monitor acquisition and release around the call.
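
To see both cases, one can compile a small class like the sketch below (names are illustrative) and inspect it with javap -v: the block form produces monitorenter/monitorexit instructions, while the method form carries the ACC_SYNCHRONIZED flag.

public class SyncBytecode {

    private final Object lock = new Object();

    public void blockForm() {
        synchronized (lock) { // compiled to monitorenter ... monitorexit
            System.out.println("in block");
        }
    }

    public synchronized void methodForm() {
        // no monitorenter/monitorexit here; the method is flagged ACC_SYNCHRONIZED
        System.out.println("in method");
    }
}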

7. Synchronized lock upgrade?

      In the Java object header there is a structure called the Mark Word, and this structure changes as the state of the lock changes. There are four lock states to understand: lock-free, biased lock, lightweight lock, and heavyweight lock. As contention becomes more intense the lock is gradually upgraded; a lock can be upgraded but not downgraded. This strategy improves the efficiency of acquiring and releasing locks.

Before JDK 1.6, synchronized was implemented by directly calling the enter and exit operations of ObjectMonitor; this kind of lock is called a heavyweight lock. To optimize it, JDK 1.6 introduced many improvements to the lock implementation, such as biased locks, lightweight locks, spin locks, adaptive spinning, lock elimination, and lock coarsening, to reduce the overhead of lock operations.

8. What is the role of the volatile keyword? What is memory visibility? What is instruction reordering? How does volatile prevent instruction reordering?

  1. A shared variable declared volatile has visibility and ordering guarantees, but its operations are not guaranteed to be atomic.
  2. Memory visibility comes from the Java Memory Model (JMM): each thread has its own working memory, and when a thread modifies a variable the new value is synchronized back to main memory. Visibility means that when one thread modifies a shared variable, other threads can immediately see the new value (a small visibility sketch follows after this list).
  3. What is instruction reordering? See the following example:
public class test1 {

    static int x = 0;
    static int b = 0;

    public static void main(String[] args) {

        new Thread(() -> {
            x = b;
            b = 1;
            System.out.println("x = " + x + " b = " + b);
        }).start();

        new Thread(() -> {
            x = b;
            b = 1;
            System.out.println("x = " + x + " b = " + b);
        }).start();
    }
}

# Result (one possible multithreaded run)
x = 0 b = 1
x = 1 b = 1

      A more rigorous definition: to improve execution efficiency, the CPU and the compiler are allowed to reorder instructions according to certain rules. Within a single thread the reordered code still behaves as written, but because the statements have an intended order, different interleavings under concurrent execution can produce different results.

  4. volatile prevents instruction reordering by inserting memory barriers around reads and writes of the volatile variable, which forbids the compiler and CPU from moving instructions across the barrier.
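
A minimal sketch of the visibility guarantee mentioned above (class and field names are illustrative); without volatile on the flag, the worker thread may keep reading a stale cached value and never stop:

public class VolatileFlagDemo {

    // volatile makes the write to stop immediately visible to the worker thread
    private static volatile boolean stop = false;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!stop) {
                // busy-wait until another thread sets stop = true
            }
            System.out.println("worker observed stop = true");
        });
        worker.start();

        Thread.sleep(1000);
        stop = true; // visible to the worker because stop is volatile
        worker.join();
    }
}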


9. What is the difference between volatile and synchronized?

  1. The volatile keyword is a lightweight form of thread synchronization, so its performance is better than that of synchronized, but volatile can only be applied to variables, while synchronized can be applied to methods and code blocks.
  2. The volatile keyword guarantees the visibility of data but not its atomicity, while synchronized guarantees both.
  3. The volatile keyword mainly solves the visibility of a variable across multiple threads, while synchronized solves the synchronization of multiple threads accessing a shared resource.

10. What is ReentrantLock?

  1. ReentrantLock implements the Lock interface. It is a reentrant, exclusive lock, similar to the synchronized keyword, but more flexible and powerful: it adds advanced features such as polling, timed and interruptible lock acquisition, and both fair and non-fair modes.
  2. Inside ReentrantLock there is an inner class Sync that extends AQS (AbstractQueuedSynchronizer); locking and unlocking are actually implemented in Sync. Sync has two subclasses, FairSync and NonfairSync, for the fair and non-fair lock respectively.
  3. The non-fair lock is used by default; a fair lock can be requested through the constructor (see the usage sketch below).
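
A minimal usage sketch (the timeout value is illustrative):

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class ReentrantLockDemo {

    // passing true requests the fair lock; the no-arg constructor creates a non-fair lock
    private static final ReentrantLock lock = new ReentrantLock(true);

    public static void main(String[] args) throws InterruptedException {
        lock.lock();                 // blocks until the lock is acquired
        try {
            System.out.println("holding the lock");
        } finally {
            lock.unlock();           // always release in finally
        }

        // tryLock supports a timeout and responds to interruption
        if (lock.tryLock(500, TimeUnit.MILLISECONDS)) {
            try {
                System.out.println("acquired within the timeout");
            } finally {
                lock.unlock();
            }
        }
    }
}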

11. What is the difference between ReentrantLock and synchronized?

| Difference | synchronized | ReentrantLock |
| --- | --- | --- |
| Lock implementation | Monitor in the object header (JVM level) | Based on AQS |
| Flexibility | Not flexible | Supports interruptible, timed, and try-style lock acquisition |
| Releasing the lock | Released automatically | unlock() must be called explicitly |
| Lock fairness | Non-fair only | Fair and non-fair |
| Condition queue | Single condition queue | Multiple condition queues |
| Reentrancy | Supported | Supported |

12. What is a thread pool? What are its application scenarios?

       A thread pool is a pool that manages threads, applying the idea of pooling: it reduces resource consumption, improves response speed, allows threads to be reused, and makes them easier to manage. The main things to understand are the three factory methods, the seven constructor parameters, and the four rejection policies.
       Typical application scenarios include running parallel tasks, scheduled tasks, and so on.

13. What are the three methods, seven parameters, and four rejection strategies of the thread pool? Five queues?

Three methods:

| Method | Effect |
| --- | --- |
| Executors.newSingleThreadExecutor() | A thread pool with a single thread |
| Executors.newFixedThreadPool(5) | A thread pool with a fixed number of threads |
| Executors.newCachedThreadPool() | A thread pool that grows when load is heavy and shrinks when it is light |

Note: these three factory methods are rarely appropriate in practice; just understand them. The Alibaba Java Development Manual forbids creating pools with Executors and requires using ThreadPoolExecutor directly, so that the pool's behavior is defined explicitly and the risk of resource exhaustion is avoided.

Seven parameters:

public ThreadPoolExecutor(int corePoolSize,                  // core pool size
                          int maximumPoolSize,               // maximum pool size
                          long keepAliveTime,                // how long idle non-core threads survive
                          TimeUnit unit,                     // unit of keepAliveTime
                          BlockingQueue<Runnable> workQueue, // blocking work queue
                          ThreadFactory threadFactory,       // factory used to create threads
                          RejectedExecutionHandler handler) { // rejection policy
    if (corePoolSize < 0 ||
        maximumPoolSize <= 0 ||
        maximumPoolSize < corePoolSize ||
        keepAliveTime < 0)
        throw new IllegalArgumentException();
    if (workQueue == null || threadFactory == null || handler == null)
        throw new NullPointerException();
    this.corePoolSize = corePoolSize;
    this.maximumPoolSize = maximumPoolSize;
    this.workQueue = workQueue;
    this.keepAliveTime = unit.toNanos(keepAliveTime);
    this.threadFactory = threadFactory;
    this.handler = handler;
}
| Parameter | Effect |
| --- | --- |
| corePoolSize | Number of core threads in the pool |
| maximumPoolSize | Maximum number of threads allowed |
| keepAliveTime | How long an idle non-core thread is kept alive before being released |
| unit | Time unit of keepAliveTime |
| workQueue | Queue holding the Runnable tasks waiting to be executed |
| threadFactory | Factory used to create new threads |
| handler | Rejection policy (see below) |

Four rejection strategies:

| Strategy | Effect |
| --- | --- |
| AbortPolicy | Throws an exception when a task is rejected; this is the default |
| CallerRunsPolicy | Runs the task in the caller's own thread |
| DiscardOldestPolicy | Discards the oldest task in the blocking queue, i.e. the one at the head |
| DiscardPolicy | Silently discards the current task |

Five queues:

| Queue | Effect |
| --- | --- |
| ArrayBlockingQueue (bounded) | A bounded blocking queue backed by an array, ordered FIFO |
| LinkedBlockingQueue (optional capacity) | A blocking queue based on a linked list; a capacity can be set, otherwise it is effectively unbounded |
| DelayQueue (delay queue) | A queue for delayed or periodic task execution, ordered by the specified execution time from earliest to latest, otherwise by insertion order |
| PriorityBlockingQueue (priority queue) | An unbounded blocking queue with priority ordering |
| SynchronousQueue (synchronous queue) | A blocking queue that stores no elements; every insert must wait for another thread to perform a remove, otherwise it blocks |
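
Putting the parameters together, a minimal sketch of creating a pool directly with ThreadPoolExecutor, as the Alibaba manual recommends (the sizes and queue capacity are illustrative):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                     // corePoolSize
                4,                                     // maximumPoolSize
                60L, TimeUnit.SECONDS,                 // keepAliveTime + unit
                new ArrayBlockingQueue<>(10),          // bounded work queue
                Executors.defaultThreadFactory(),      // thread factory
                new ThreadPoolExecutor.AbortPolicy()); // rejection policy

        for (int i = 0; i < 5; i++) {
            final int taskId = i;
            pool.execute(() -> System.out.println(
                    Thread.currentThread().getName() + " runs task " + taskId));
        }
        pool.shutdown(); // stop accepting new tasks and finish the queued ones
    }
}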

14. Briefly describe the workflow of the thread pool?

When a task is submitted to the pool:

  • If fewer than corePoolSize threads are running, a new core thread is created to run the task.
  • Otherwise, the task is placed into the work queue.
  • If the queue is full and fewer than maximumPoolSize threads are running, a new non-core thread is created to run the task.
  • If the queue is full and the pool has already reached maximumPoolSize, the rejection policy (handler) is applied.

15. How to configure the number of core threads in the thread pool?

Using a thread pool under high concurrency effectively reduces the time and resource cost of creating and destroying threads. Without a pool, the system may end up with a very large number of threads, which can exhaust memory and cause excessive context switching. We want to run as many tasks as possible, but resource constraints keep us from creating too many threads, so how do we choose the optimal number of threads under high concurrency?

CPU-intensive: number of core threads = number of CPU cores + 1
IO-intensive: number of core threads = number of CPU cores * 2

Number of CPU cores: Runtime.getRuntime().availableProcessors()
CPU-intensive: when threads spend most of their time on the CPU, fewer threads are needed
IO-intensive: when threads spend most of their time waiting, more threads are needed so that other threads can keep using the CPU, which raises CPU utilization
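
A tiny sketch applying these rules of thumb (the class name is illustrative):

public class PoolSizing {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        int cpuBoundThreads = cores + 1; // CPU-intensive rule of thumb
        int ioBoundThreads  = cores * 2; // IO-intensive rule of thumb
        System.out.println("cores = " + cores
                + ", cpu-bound pool = " + cpuBoundThreads
                + ", io-bound pool = " + ioBoundThreads);
    }
}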

16. What is ThreadLocal? What are its use cases? How can it cause a memory leak? Why is the key designed as a weak reference?


  1. It can be called a thread-local variable: the ThreadLocal class lets each thread bind its own value, so each thread has its own private copy of the data.
  2. Typical use cases: each thread needs its own exclusive object, or the current user's information needs to be shared by all methods running in the same thread, and so on.
  3. In the underlying ThreadLocalMap the key (the ThreadLocal object) is a weak reference, while the value is a strong reference. A weakly referenced object is reclaimed at garbage collection regardless of whether JVM memory is sufficient. If the key is reclaimed while the value is still strongly referenced, the value can leak. To prevent memory leaks, call remove() promptly once the ThreadLocal is no longer needed (see the sketch below).
  4. If the key were a strong reference: once the external ThreadLocal reference is gone, the ThreadLocal object ought to be reclaimable, but the map entry's key would still strongly reference it, so it could never be collected and a memory leak would occur anyway. Compared with that, the weak-reference design at least leaves a way to clean up stale entries and remedy the leak, whereas a strong reference would not.
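
A minimal usage sketch (names are illustrative); the remove() call in finally is what prevents the leak described above, which matters especially for pooled threads:

public class ThreadLocalDemo {

    // each thread gets its own StringBuilder, created lazily on first get()
    private static final ThreadLocal<StringBuilder> CONTEXT =
            ThreadLocal.withInitial(StringBuilder::new);

    public static void main(String[] args) {
        Runnable task = () -> {
            try {
                CONTEXT.get().append(Thread.currentThread().getName());
                System.out.println(CONTEXT.get());
            } finally {
                CONTEXT.remove(); // release the entry to avoid a memory leak
            }
        };
        new Thread(task, "t1").start();
        new Thread(task, "t2").start();
    }
}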

17. What is CAS?

       CAS stands for Compare-And-Swap: it compares and then swaps, relying on a processor instruction to guarantee the atomicity of the operation. A CAS operation involves three operands: the memory address A of the shared variable, the expected value B, and the new value C. Only when the value at memory address A equals B is the value at A updated to C. As a single CPU instruction, CAS is atomic by itself.
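
A minimal sketch using AtomicInteger, whose compareAndSet is built on a CAS instruction:

import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger value = new AtomicInteger(1);
        // succeeds: the current value (1) matches the expected value
        System.out.println(value.compareAndSet(1, 2)); // true
        // fails: the current value is now 2, not 1
        System.out.println(value.compareAndSet(1, 3)); // false
        System.out.println(value.get());               // 2
    }
}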

18. What problems does CAS have? How are they solved?

The ABA problem:
       Under concurrency, suppose the expected value is A. When the modification is about to happen, the value is found to be A, so the CAS succeeds; but the value may have changed from A to B and back to A in the meantime, which can cause subtle problems.
       Solution: add a version number (stamp). Every time the variable is modified, its version number is incremented by 1; when executing the CAS, both the value and the version number must match the expected ones. The AtomicStampedReference class does this: its compareAndSet method first checks that the current reference equals the expected reference and that the current stamp equals the expected stamp, and only if both match does it update the reference and the stamp to the given new values.

public class CASDemo {

    // Note for AtomicStampedReference: if the generic type is a wrapper class such as Integer,
    // be careful about object reference (caching) issues when comparing values.
    static AtomicStampedReference<Integer> atomicStampedReference = new AtomicStampedReference<>(1, 1);

    public static void main(String[] args) {

        new Thread(() -> {
            int stamp = atomicStampedReference.getStamp();
            System.out.println("a1 => " + stamp);

            try {
                TimeUnit.SECONDS.sleep(1);
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }

            System.out.println(atomicStampedReference.compareAndSet(1, 2, atomicStampedReference.getStamp(), atomicStampedReference.getStamp() + 1));

            System.out.println("a2 => " + atomicStampedReference.getStamp());

            System.out.println(atomicStampedReference.compareAndSet(2, 1, atomicStampedReference.getStamp(), atomicStampedReference.getStamp() + 1));

            System.out.println("a3 => " + atomicStampedReference.getStamp());
        }, "a").start();

        new Thread(() -> {
            int stamp = atomicStampedReference.getStamp();
            System.out.println("b1 => " + stamp);

            try {
                TimeUnit.SECONDS.sleep(2);
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }

            System.out.println(atomicStampedReference.compareAndSet(1, 3, stamp, stamp + 1));
            System.out.println("b2 => " + atomicStampedReference.getStamp());
        }, "b").start();
    }
}
# Result
a1 => 1
b1 => 1
true
a2 => 2
true
a3 => 3
false
b2 => 3

Spinning overhead:
       If a spinning CAS keeps failing in a loop, it imposes a very large execution cost on the CPU.
       Solution: in many places where Java uses spinning CAS, the number of spins is limited; after a certain number of failed attempts the spinning stops.

Only one variable per atomic operation:
       CAS guarantees atomicity only for an operation on a single variable; it cannot make an operation across multiple variables atomic.
       Solution: wrap the multiple variables in a single object and use AtomicReference to update that object atomically (see the sketch below).
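
A minimal sketch of the AtomicReference approach (the Range class and field names are illustrative):

import java.util.concurrent.atomic.AtomicReference;

public class AtomicReferenceDemo {

    // wrap the two related values in one immutable object so a single CAS covers both
    static final class Range {
        final int low;
        final int high;
        Range(int low, int high) { this.low = low; this.high = high; }
    }

    private static final AtomicReference<Range> RANGE =
            new AtomicReference<>(new Range(0, 10));

    public static void main(String[] args) {
        Range old = RANGE.get();
        // both bounds are replaced atomically, or not at all
        boolean updated = RANGE.compareAndSet(old, new Range(old.low + 1, old.high + 1));
        System.out.println(updated + " -> [" + RANGE.get().low + ", " + RANGE.get().high + "]");
    }
}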

19. What are the similarities and differences between wait() and sleep()?

Similarity: both suspend the current thread and give up the CPU.

Differences:

| Difference | wait() | sleep() |
| --- | --- | --- |
| Defined in | A method of Object | A method of Thread |
| Where it can be called | Only inside a synchronized block or synchronized method | Anywhere |
| Lock release | Temporarily releases the lock; the thread leaves the waiting state only after notify()/notifyAll() is called, then competes for the lock again before it continues | Yields the CPU but does not release any lock it holds |
| How it resumes | After giving up the lock and entering the waiting state, it must be woken by notify()/notifyAll() and re-acquire the lock before it can run again | Keeps holding the lock while sleeping and resumes automatically when the timeout expires |
| Exception handling | InterruptedException must be caught or declared | InterruptedException must be caught or declared |

20. How many ways are there to create a thread?

There are three ways to create a thread: extend the Thread class, implement the Runnable interface, or implement the Callable interface.

  1. Extend Thread: override run() and call start() to start the thread.
  2. Implement Runnable: override run(), hand the Runnable to a Thread, and call start() (see the sketch below).
  3. Implement Callable: unlike the first two, this approach returns a value and can throw checked exceptions; the method to override is call(), and it is typically run through a FutureTask, as in the example further below.
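
A minimal sketch of the first two approaches (class names are illustrative); the FutureTask example below covers the Callable approach:

public class CreateThreadDemo {

    // approach 1: extend Thread and override run()
    static class MyThreadA extends Thread {
        @Override
        public void run() {
            System.out.println("run() in a Thread subclass");
        }
    }

    public static void main(String[] args) {
        new MyThreadA().start();

        // approach 2: implement Runnable (here as a lambda) and hand it to a Thread
        Runnable task = () -> System.out.println("run() in a Runnable");
        new Thread(task).start();
    }
}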


public class test1 {

    public static void main(String[] args) {

        FutureTask<String> stringFutureTask = new FutureTask<>(new MyThread());
        new Thread(stringFutureTask).start();
        try {
            String s = stringFutureTask.get();
            System.out.println("call = " + s);
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        } catch (ExecutionException e) {
            throw new RuntimeException(e);
        }
    }
}

class MyThread implements Callable<String> {

    @Override
    public String call() throws Exception {
        System.out.println("call()");
        return "1024";
    }
}
# Result
call()
call = 1024

21. What are the ways for threads to communicate with each other, and how do they differ?

| Mechanism | Explanation |
| --- | --- |
| volatile and synchronized | A variable declared volatile guarantees that all threads see its latest value; synchronized ensures that only one thread at a time can be inside a method or synchronized block, guaranteeing both visibility and exclusiveness of access to shared variables. |
| Wait/notify mechanism | Using wait() and notify(), one thread modifies the value of an object while another thread detects the change and reacts to it (see the sketch below). |
| Piped input/output streams | Unlike ordinary file or network streams, piped streams are used to transfer data between threads, with memory as the medium. Byte-oriented: PipedOutputStream and PipedInputStream; character-oriented: PipedReader and PipedWriter. |
| Thread.join() | If thread A calls thread.join(), it means A waits until the thread terminates before returning from join(). join(long millis) and join(long millis, int nanos) provide timeout variants. |
| ThreadLocal | A thread-local variable is a structure keyed by a ThreadLocal object with an arbitrary value, attached to the thread; the value bound to the current thread is read and written through the ThreadLocal object's set() and get() methods. |
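
A minimal sketch of the wait/notify mechanism (names are illustrative); the consumer waits in a loop until the producer flips the flag and calls notifyAll():

public class WaitNotifyDemo {

    private static final Object LOCK = new Object();
    private static boolean ready = false;

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (LOCK) {
                while (!ready) {          // loop to guard against spurious wakeups
                    try {
                        LOCK.wait();      // releases LOCK while waiting
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
                System.out.println("consumer saw ready = true");
            }
        }, "consumer").start();

        new Thread(() -> {
            synchronized (LOCK) {
                ready = true;
                LOCK.notifyAll();         // wake up the waiting consumer
            }
        }, "producer").start();
    }
}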
