[Java Interview - Concurrency Basics, Concurrency Keywords]



3.1 Concurrency basics

 What problem does the emergence of multi-threading solve? What is its essence?

The speeds of CPUs, memory, and I/O devices differ enormously. To make full use of CPU performance and balance the speed differences among the three, computer architecture, the operating system, and the compiler have all contributed, mainly in the following ways:

  • The CPU adds caches to bridge the speed gap with memory; // causing visibility issues
  • The operating system adds processes and threads to time-share the CPU and bridge the speed gap between the CPU and I/O devices; // causing atomicity issues
  • The compiler reorders instruction execution so that caches can be used more effectively. // causing ordering issues
 How does Java solve concurrency problems?

The Java memory model is a very complex specification. For details, see the Java Memory Model detailed explanation.

The first dimension of understanding: core knowledge points

In essence, the JMM (Java Memory Model) specifies how the JVM provides developers with methods for disabling caching and compiler optimization on demand. Specifically, these methods include:

  • The three keywords volatile, synchronized and final
  • Happens-Before Rule

The second dimension of understanding: visibility, order, atomicity

  • atomicity

In Java, reads and assignments of variables of primitive data types are atomic operations (with the exception of long and double, which are only guaranteed atomic when declared volatile); that is, these operations cannot be interrupted: they either execute completely or not at all. Analyze which of the following operations are atomic:

x = 10;        // Statement 1: assigns the literal 10 directly to x; the executing thread writes 10 straight into working memory
y = x;         // Statement 2: two operations, reading the value of x and then writing that value to y; each is atomic on its own, but the combination is not
x++;           // Statement 3: three operations, reading x, adding 1, and writing the new value back
x = x + 1;     // Statement 4: same as statement 3

Of the four statements above, only the operation of statement 1 is atomic.

In other words, only simple reads and assignments (and the assigned value must be a literal; assigning one variable to another is not atomic) are atomic operations.

As can be seen from the above, the Java memory model only guarantees that basic reads and assignments are atomic. To achieve atomicity over a larger scope of operations, use synchronized or Lock: since both ensure that only one thread executes the code block at any time, atomicity is guaranteed.
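The point above can be sketched with a minimal counter example (the class name and counts are illustrative): `x++` on its own is not atomic, but wrapping it in a synchronized block makes the read-add-write sequence atomic.

```java
// A minimal sketch: x++ is not atomic, but wrapping it in a synchronized
// block makes the read-add-write sequence atomic across threads.
public class SynchronizedCounter {
    private int count = 0;
    private final Object lock = new Object();

    public void increment() {
        synchronized (lock) {   // only one thread at a time executes count++
            count++;
        }
    }

    public int get() {
        synchronized (lock) {   // also needed for a visible, consistent read
            return count;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        SynchronizedCounter counter = new SynchronizedCounter();
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) {
                    counter.increment();
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        // With synchronization the result is always 40000; with a bare
        // count++ it would usually be less, due to lost updates.
        System.out.println("count = " + counter.get());
    }
}
```

Removing the synchronized blocks typically (though not deterministically) yields a total below 40000, which is exactly the lost-update problem of statement 3.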

  • visibility

Java provides the volatile keyword to ensure visibility.

When a shared variable is declared volatile, any modified value is flushed to main memory immediately, and when other threads need to read it, they fetch the new value from main memory.

Ordinary shared variables cannot guarantee visibility, because after an ordinary shared variable is modified it is uncertain when the change will be written back to main memory; when another thread reads it, main memory may still hold the old value, so visibility is not guaranteed.
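A minimal sketch of volatile visibility (the class name and timings are illustrative): a worker thread spins on a volatile stop flag, so the main thread's write is guaranteed to become visible and the loop terminates.

```java
// A minimal sketch of using volatile as a stop flag: the writer thread's
// update to `running` is guaranteed to become visible to the reader.
public class VolatileFlagDemo {
    // Without volatile, the worker could in principle loop forever on a
    // stale cached value of `running`.
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // busy-wait until the main thread clears the flag
            }
            System.out.println("worker saw running = false");
        });
        worker.start();

        Thread.sleep(100);      // let the worker enter its loop
        running = false;        // volatile write: guaranteed visible to the worker

        worker.join(5000);      // should return almost immediately
        System.out.println("worker alive = " + worker.isAlive());
    }
}
```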

In addition, visibility can also be guaranteed through synchronized and Lock. Synchronized and Lock can ensure that only one thread acquires the lock at the same time and then executes the synchronization code, and the modifications to the variables are flushed to the main memory before releasing the lock. Visibility is therefore guaranteed.

  • Orderliness

In Java, a certain "orderliness" can be guaranteed through the volatile keyword. In addition, orderliness can be ensured through synchronized and Lock. Obviously, synchronized and Lock ensure that one thread executes synchronization code at each moment, which is equivalent to letting threads execute synchronization code sequentially, which naturally ensures orderliness. Of course, JMM ensures orderliness through Happens-Before rules.

 What are the implementation ideas for thread safety?
  1. Mutually exclusive synchronization

synchronized and ReentrantLock.

  2. Non-blocking synchronization

The main problem of mutually exclusive synchronization is the performance problem caused by thread blocking and waking up, so this kind of synchronization is also called blocking synchronization.

Mutually exclusive synchronization is a pessimistic concurrency strategy: it assumes that without correct synchronization measures, problems will definitely occur. Regardless of whether there is actual contention for the shared data, it must lock (this is a conceptual model; in practice the virtual machine optimizes away a large part of unnecessary locking), perform user-mode/kernel-mode transitions, maintain lock counters, and check whether there are blocked threads that need to be woken up.

  • CAS

With the development of hardware instruction sets, we can use an optimistic concurrency strategy based on conflict detection: perform the operation first; if no other thread contends for the shared data, the operation succeeds, otherwise compensatory measures are taken (typically retrying continuously until successful). Many implementations of this optimistic strategy do not require threads to block, so this kind of synchronization is called non-blocking synchronization.

Optimistic locking requires the operation and the conflict detection to be atomic as a pair. Mutual-exclusion synchronization cannot be used for this, so it can only be accomplished by hardware. The most typical hardware-supported atomic operation is Compare-and-Swap (CAS). The CAS instruction takes 3 operands: the memory address V, the expected old value A, and the new value B. When executed, the value at V is updated to B only if the value at V equals A.

  • AtomicInteger

The integer atomic class AtomicInteger in the J.U.C package uses the CAS operation of the Unsafe class in its methods such as compareAndSet() and getAndIncrement().
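A minimal sketch of both methods (the class name is illustrative): getAndIncrement() retries a CAS loop internally so no increments are lost, and compareAndSet() exposes the raw expected-value/new-value semantics of CAS.

```java
import java.util.concurrent.atomic.AtomicInteger;

// A minimal sketch of CAS-based non-blocking synchronization via AtomicInteger.
public class AtomicCounterDemo {
    public static void main(String[] args) throws InterruptedException {
        AtomicInteger counter = new AtomicInteger(0);

        // getAndIncrement() retries a CAS loop internally, so concurrent
        // increments are never lost.
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) {
                    counter.getAndIncrement();
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        System.out.println("counter = " + counter.get());

        // compareAndSet(expected, newValue) succeeds only if the current
        // value equals the expected value (the "A" operand of CAS).
        AtomicInteger v = new AtomicInteger(5);
        System.out.println("cas 5->6: " + v.compareAndSet(5, 6));
        System.out.println("cas 5->7: " + v.compareAndSet(5, 7)); // value is now 6
        System.out.println("value = " + v.get());
    }
}
```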

  3. No-synchronization solutions

To ensure thread safety, synchronization is not necessarily necessary. If a method does not involve sharing data, then it naturally does not require any synchronization measures to ensure correctness.

  • Stack confinement

When multiple threads invoke the same method, no thread-safety issues arise from its local variables, because local variables are stored in the virtual machine stack, which is thread-private.

  • Thread Local Storage

If the data required in a piece of code must be shared with other code, then see if the code that shares the data can be guaranteed to execute in the same thread. If it can be guaranteed, we can limit the visible range of shared data to the same thread. In this way, we can ensure that there is no data contention problem between threads without synchronization.
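A minimal sketch of thread-local storage via java.lang.ThreadLocal (the class name and values are illustrative): each thread reads and writes only its own copy, so no synchronization is needed.

```java
// A minimal sketch of thread-local storage: each thread sees its own copy
// of the variable, so there is no data contention and no locking.
public class ThreadLocalDemo {
    private static final ThreadLocal<String> CONTEXT =
            ThreadLocal.withInitial(() -> "default");

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            // Each thread writes and reads only its own copy.
            CONTEXT.set("value of " + Thread.currentThread().getName());
            System.out.println(CONTEXT.get());
        };

        Thread t1 = new Thread(task, "t1");
        Thread t2 = new Thread(task, "t2");
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        // The main thread never called set(), so it still sees the initial value.
        System.out.println("main sees: " + CONTEXT.get());
    }
}
```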

 How to understand the difference between concurrency and parallelism?

Concurrency means a single processor handles multiple tasks by interleaving them over time, so that they appear to run simultaneously.

Parallelism means multiple processors or processor cores process multiple different tasks literally at the same time.

 What are the states of a thread? Explain the ways to transition from one state to another?
  • New

The thread has been created but not yet started.

  • Runnable

May be running or waiting for a CPU time slice.

This covers both the Running and Ready states of operating-system threads.

  • Blocking

The thread is waiting to acquire an exclusive lock; this state ends when the thread holding the lock releases it.

  • Waiting indefinitely

The thread waits to be explicitly woken up by another thread; until then it will not be allocated CPU time slices.

| Entry method | Exit method |
| ------------ | ----------- |
| Object.wait() without a timeout parameter | Object.notify() / Object.notifyAll() |
| Thread.join() without a timeout parameter | The joined thread completes execution |
| LockSupport.park() | - |
  • Timed Waiting

There is no need to wait for other threads to wake up explicitly, it will be automatically woken up by the system after a certain period of time.

When calling the Thread.sleep() method to put a thread into a time-limited waiting state, it is often described as "putting a thread to sleep".

When calling the Object.wait() method to cause a thread to wait for a limited time or wait indefinitely, it is often described as "suspending a thread".

Sleep and suspend are used to describe behavior, while blocking and waiting are used to describe state.

The difference between blocking and waiting is that blocking is passive, waiting to acquire an exclusive lock. Waiting is active and is entered by calling methods such as Thread.sleep() and Object.wait().

| Entry method | Exit method |
| ------------ | ----------- |
| Thread.sleep() | The time expires |
| Object.wait() with a timeout parameter | The time expires / Object.notify() / Object.notifyAll() |
| Thread.join() with a timeout parameter | The time expires / the joined thread completes execution |
| LockSupport.parkNanos() | - |
| LockSupport.parkUntil() | - |
  • Terminated

The thread has ended, either because it completed its task or because it terminated early due to an exception.
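These states can be observed directly via Thread.getState(). A minimal sketch (the class name and sleep durations are illustrative):

```java
// A minimal sketch of observing thread states via Thread.getState().
public class ThreadStateDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(200);   // typically TIMED_WAITING while sleeping
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        System.out.println("before start: " + t.getState());   // NEW

        t.start();
        Thread.sleep(100);           // give it time to enter sleep()
        System.out.println("while sleeping: " + t.getState()); // usually TIMED_WAITING

        t.join();
        System.out.println("after join: " + t.getState());     // TERMINATED
    }
}
```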

 What are the common ways to use threads?

There are three ways to use threads:

  • Implement the Runnable interface;
  • Implement the Callable interface;
  • Inherit the Thread class.

A class that implements the Runnable or Callable interface is only a task that can run in a thread, not a thread in the true sense, so it ultimately needs to be run through a Thread. In other words, tasks are executed by being driven through threads.
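The three ways can be sketched together (class name illustrative). Note how the Callable is adapted to a Thread via FutureTask, which implements Runnable:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.FutureTask;

// A minimal sketch of the three ways to run code in a thread. A Runnable
// or Callable is only a task; it still needs a Thread to drive it.
public class ThreadCreationDemo {

    static class MyThread extends Thread {          // 1. inherit Thread
        @Override
        public void run() {
            System.out.println("from Thread subclass");
        }
    }

    public static void main(String[] args) throws Exception {
        // 1. Inherit Thread
        Thread t1 = new MyThread();
        t1.start();
        t1.join();

        // 2. Implement Runnable (here as a lambda) and hand it to a Thread
        Thread t2 = new Thread(() -> System.out.println("from Runnable"));
        t2.start();
        t2.join();

        // 3. Implement Callable: it can return a value; FutureTask adapts
        //    it to Runnable so a Thread can drive it.
        Callable<Integer> callable = () -> 40 + 2;
        FutureTask<Integer> future = new FutureTask<>(callable);
        new Thread(future).start();
        System.out.println("from Callable: " + future.get());  // blocks for the result
    }
}
```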

 What are the basic threading mechanisms?
  • Executor

Executors manage the execution of multiple asynchronous tasks without requiring the programmer to explicitly manage thread lifecycles. Asynchronous here means that the execution of multiple tasks does not interfere with each other and does not require synchronous operations.

There are three main types of Executors:

  1. CachedThreadPool: creates a thread per task, reusing idle threads when available;
  2. FixedThreadPool: all tasks share a fixed number of threads;
  3. SingleThreadExecutor: Equivalent to FixedThreadPool of size 1.
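A minimal sketch of a FixedThreadPool (class name and pool size illustrative): submit() returns a Future, and shutdown() lets already-submitted tasks finish.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// A minimal sketch of executing an asynchronous task through an Executor.
public class ExecutorDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(3);

        // submit() accepts a Callable and returns a Future for the result.
        Future<Integer> sum = pool.submit(() -> {
            int s = 0;
            for (int i = 1; i <= 100; i++) {
                s += i;
            }
            return s;
        });

        System.out.println("sum = " + sum.get());   // blocks until the task finishes

        pool.shutdown();                            // no new tasks; running tasks finish
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("terminated = " + pool.isTerminated());
    }
}
```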
  • Daemon

Daemon threads are threads that provide services in the background when the program is running and are not an integral part of the program.

When all non-daemon threads end, the program terminates and all daemon threads are killed.

main() runs in a non-daemon thread. Use the setDaemon() method (before start()) to mark a thread as a daemon thread.

  • sleep()

The Thread.sleep(millisec) method sleeps the currently executing thread. The unit of millisec is milliseconds.

sleep() may throw InterruptedException. Because exceptions cannot propagate across threads back to main(), they must be handled locally. Other exceptions thrown inside a thread likewise need to be handled locally.

  • yield()

Calling the static method Thread.yield() declares that the current thread has completed the most important part of its work and that execution may be switched to another thread. This method is only a hint to the thread scheduler, and it only suggests that other threads of the same priority be allowed to run; there is no guarantee it has any effect.

 What are the ways to interrupt threads?

A thread will automatically end after it is executed. If an exception occurs during operation, it will also end early.

  • InterruptedException

Interrupt a thread by calling its interrupt() method. If the thread is blocked, in timed waiting, or waiting indefinitely, an InterruptedException is thrown in it, which can end the thread early. However, I/O blocking and waiting on a synchronized lock cannot be interrupted this way.

For the following code, start a thread in main() and then interrupt it. Since the Thread.sleep() method is called in the thread, an InterruptedException will be thrown, thus ending the thread early and not executing subsequent statements.

public class InterruptExample {

    private static class MyThread1 extends Thread {
        @Override
        public void run() {
            try {
                Thread.sleep(2000);
                System.out.println("Thread run");
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread thread1 = new MyThread1();
        thread1.start();
        thread1.interrupt();
        System.out.println("Main run");
    }
}
Main run
java.lang.InterruptedException: sleep interrupted
    at java.lang.Thread.sleep(Native Method)
    at InterruptExample$MyThread1.run(InterruptExample.java)
  • interrupted()

If a thread's run() method executes an infinite loop and does not perform operations such as sleep() that will throw InterruptedException, then calling the thread's interrupt() method cannot cause the thread to end early.

However, calling the interrupt() method will set the thread's interrupt flag, and calling the interrupted() method will return true. Therefore, you can use the interrupted() method in the loop body to determine whether the thread is in an interrupted state, thereby ending the thread early.
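A minimal sketch of this flag-checking pattern (class name illustrative). Note that isInterrupted() reads the flag without clearing it, whereas the static Thread.interrupted() clears it as a side effect:

```java
// A minimal sketch of cooperatively stopping a thread via the interrupt
// flag, for a loop body that never blocks.
public class InterruptFlagDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            long iterations = 0;
            // isInterrupted() reads the flag without clearing it;
            // Thread.interrupted() would clear it as a side effect.
            while (!Thread.currentThread().isInterrupted()) {
                iterations++;   // simulate work that never calls sleep()/wait()
            }
            System.out.println("worker stopped");
        });

        worker.start();
        Thread.sleep(100);
        worker.interrupt();     // sets the flag; nothing blocks, so no exception
        worker.join(5000);
        System.out.println("worker alive = " + worker.isAlive());
    }
}
```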

  • Executor interrupt operation

Calling shutdown() on an Executor lets already-submitted tasks finish executing before shutting down (no new tasks are accepted). Calling shutdownNow() instead is equivalent to calling interrupt() on each of its worker threads.

 What are the mutually exclusive synchronization methods of threads? How to compare and choose?

Java provides two lock mechanisms to control mutually exclusive access to shared resources by multiple threads. The first is synchronized implemented by the JVM, and the other is ReentrantLock implemented by the JDK.

1. Implementation of lock

synchronized is implemented by JVM, while ReentrantLock is implemented by JDK.

2. Performance

Newer versions of Java have heavily optimized synchronized (spin locks, etc.), so its performance is now roughly on par with ReentrantLock.

3. Waiting can be interrupted

When the thread holding the lock does not release the lock for a long time, the waiting thread can choose to give up waiting and deal with other things instead.

ReentrantLock can be interrupted, but synchronized cannot.

4. Fair lock

Fair lock means that when multiple threads are waiting for the same lock, they must obtain the lock in sequence according to the time order of applying for the lock.

The lock in synchronized is unfair. ReentrantLock is also unfair by default, but it can be made fair via its constructor.

5. The lock is bound to multiple conditions

A ReentrantLock can bind multiple Condition objects at the same time.

 What are the ways of cooperation between threads?

When multiple threads can work together to solve a problem, if some parts must be completed before other parts, then the threads need to be coordinated.

  • join()

Calling the join() method of another thread in a thread will suspend the current thread instead of busy waiting until the target thread ends.

In the following code, although thread b starts first, the join() method of thread a is called inside thread b, so thread b waits for thread a to finish before continuing. In the end, the output of thread a is guaranteed to precede the output of thread b.

public class JoinExample {

    private class A extends Thread {
        @Override
        public void run() {
            System.out.println("A");
        }
    }

    private class B extends Thread {

        private A a;

        B(A a) {
            this.a = a;
        }

        @Override
        public void run() {
            try {
                a.join();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println("B");
        }
    }

    public void test() {
        A a = new A();
        B b = new B(a);
        b.start();
        a.start();
    }
    public static void main(String[] args) {
        JoinExample example = new JoinExample();
        example.test();
    }
}
A
B
  • wait() notify() notifyAll()

Calling wait() causes the thread to wait for a certain condition to be met. The thread will be suspended while waiting. When other threads run and the condition is met, other threads will call notify() or notifyAll() to wake up the suspended thread.

They are all part of Object, not Thread.

They can only be used inside a synchronized method or synchronized block; otherwise an IllegalMonitorStateException is thrown at runtime.

While suspended using wait(), the thread releases the lock. This is because if the lock is not released, other threads cannot enter the object's synchronization method or synchronization control block, and then cannot execute notify() or notifyAll() to wake up the suspended thread, causing a deadlock.

The difference between wait() and sleep()

  • wait() is a method of Object, and sleep() is a static method of Thread;
  • wait() will release the lock, sleep() will not.
  • await() signal() signalAll()

The java.util.concurrent class library provides the Condition interface to coordinate threads. You can call await() on a Condition to make a thread wait, and other threads call signal() or signalAll() to wake the waiting threads. Compared with wait(), a Lock can have multiple Condition objects, each representing a separate wait-set, so this approach is more flexible.
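A minimal sketch of await()/signal() (class name illustrative): a one-slot mailbox where the consumer waits on a Condition until the producer has put a value. The while loop guards against spurious wakeups, and await() releases the lock while waiting.

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// A minimal sketch of Condition-based coordination: a one-slot mailbox.
public class ConditionDemo {
    private final Lock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();
    private Integer slot = null;

    public void put(int value) {
        lock.lock();
        try {
            slot = value;
            notEmpty.signal();          // wake a thread waiting on this Condition
        } finally {
            lock.unlock();
        }
    }

    public int take() throws InterruptedException {
        lock.lock();
        try {
            while (slot == null) {      // loop guards against spurious wakeups
                notEmpty.await();       // releases the lock while waiting
            }
            int value = slot;
            slot = null;
            return value;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ConditionDemo mailbox = new ConditionDemo();
        Thread producer = new Thread(() -> mailbox.put(42));
        producer.start();
        System.out.println("took " + mailbox.take());
        producer.join();
    }
}
```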

 3.2 Concurrency keyword

 Where can Synchronized be used?
  • object lock
  • method lock
  • class lock
 How does Synchronized essentially ensure thread safety?
  • The principle of locking and releasing locks

To dig into the bytecode at the JVM level, create the following code:

public class SynchronizedDemo2 {
    Object object = new Object();
    public void method1() {
        synchronized (object) {

        }
    }
}

Use javac command to compile and generate .class file

>javac SynchronizedDemo2.java

Use the javap command to decompile and view the information of the .class file

>javap -verbose SynchronizedDemo2.class

The monitorenter and monitorexit instructions cause the object's lock counter to be incremented or decremented by 1 during execution. Each object is associated with only one monitor (lock) at a time, and a monitor can be held by only one thread at a time. When a thread executes monitorenter to attempt to acquire ownership of the monitor associated with an object, one of the following 3 situations occurs:

  • The monitor counter is 0, meaning the lock has not yet been acquired. The thread acquires it immediately and increments the counter to 1; once it is 1, other threads that want the lock must wait.
  • The thread already owns the monitor and is re-entering it, so the counter accumulates (to 2, and further with each re-entry).
  • The lock is already held by another thread, so this thread waits for the lock to be released.

The monitorexit instruction releases ownership of the monitor. The release process is simple: the counter is decremented by 1. If the counter is still non-zero afterwards, the thread had re-entered the lock and continues to hold it. If the counter reaches 0, the thread no longer owns the monitor, i.e., the lock is released.

When any thread accesses the object, it must first obtain the object's monitor. If acquisition fails, the thread enters the synchronization queue and its state changes to BLOCKED. When the monitor's owner releases it, the threads in the synchronization queue get a chance to reacquire the monitor.

  • Reentrant principle: locking times counter

Look at the following example:

public class SynchronizedDemo {

    public static void main(String[] args) {
        synchronized (SynchronizedDemo.class) {

        }
        method2();
    }

    private synchronized static void method2() {

    }
}

Corresponding bytecode

  public static void main(java.lang.String[]);
    descriptor: ([Ljava/lang/String;)V
    flags: (0x0009) ACC_PUBLIC, ACC_STATIC
    Code:
      stack=2, locals=3, args_size=1
         0: ldc           #2                  // class tech/pdai/test/synchronized/SynchronizedDemo
         2: dup
         3: astore_1
         4: monitorenter
         5: aload_1
         6: monitorexit
         7: goto          15
        10: astore_2
        11: aload_1
        12: monitorexit
        13: aload_2
        14: athrow
        15: invokestatic  #3                  // Method method2:()V
      Exception table:
         from    to  target type
             5     7    10   any
            10    13    10   any

In SynchronizedDemo above, the synchronized block compiles into explicit monitorenter/monitorexit instructions, while the static synchronized method method2() carries no such instructions: it is marked with the ACC_SYNCHRONIZED flag instead, and the JVM acquires the class-object monitor implicitly when it is invoked. If method2() is called while the thread already holds that same monitor (for example, from inside the synchronized block), the thread does not need to acquire the lock again; the monitor's counter is simply incremented. This is lock reentrancy: within the same lock scope, a thread never has to contend for a lock it already holds.

Synchronized is inherently reentrant. Each object has a counter. When the thread acquires the object lock, the counter is incremented by one, and when the lock is released, the counter is decremented.
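Reentrancy can be demonstrated directly (class name illustrative): outer() already holds the monitor on `this`, yet it can call inner(), which requires the same monitor, without deadlocking.

```java
// A minimal sketch of reentrancy: outer() already holds the lock on `this`,
// yet it can call inner(), which needs the same lock, without deadlocking.
public class ReentrancyDemo {

    public synchronized void outer() {
        System.out.println("entered outer");
        inner();    // re-enters the same monitor; the counter goes 1 -> 2
    }

    public synchronized void inner() {
        System.out.println("entered inner");
    }               // counter 2 -> 1 on exit; lock fully released when outer() returns

    public static void main(String[] args) {
        new ReentrancyDemo().outer();
        System.out.println("done");
    }
}
```

If synchronized were not reentrant, the call to inner() would block forever waiting for a lock the thread itself holds.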

  • Principles of ensuring visibility: memory model and happens-before rules

Synchronized's happens-before rule is the monitor lock rule: unlocking a monitor happens-before every subsequent lock of that same monitor. Consider the code:

public class MonitorDemo {
    private int a = 0;

    public synchronized void writer() {     // 1
        a++;                                // 2
    }                                       // 3

    public synchronized void reader() {    // 4
        int i = a;                         // 5
    }                                      // 6
}

According to the definition of happens-before: if A happens-before B, then the result of A is visible to B and A is ordered before B. Thread A first increments the shared variable a; from the relationship 2 happens-before 5, thread A's result is visible to thread B, i.e., thread B reads the value of a as 1.

 Synchronized allows only one thread to execute at the same time, and the performance is relatively poor. Is there any way to improve it?

In short, the monitorenter and monitorexit bytecodes in the JVM rely on the underlying operating system's Mutex Lock. Using a mutex requires suspending the current thread and switching from user mode to kernel mode, which is very expensive. In reality, however, synchronized methods mostly run in single-threaded (contention-free) environments, and calling the mutex every time would seriously hurt performance. JDK 1.6 therefore introduced a large number of optimizations to the lock implementation, such as Lock Coarsening, Lock Elimination, Lightweight Locking, Biased Locking, and Adaptive Spinning, to reduce the cost of lock operations.

  • Lock Coarsening: reduces back-to-back unlock/lock operations by merging multiple consecutive locks into a single lock with a larger scope.

  • Lock Elimination: using the runtime JIT compiler's escape analysis, removes locking on data that can never be shared with other threads outside the current synchronized block. Escape analysis can also allocate objects on the thread-local stack, reducing garbage-collection pressure on the heap.

  • Lightweight Locking: based on the assumption that in practice most synchronized code executes in a contention-free (effectively single-threaded) state. Without contention, the operating-system-level heavyweight mutex can be avoided entirely: a single CAS atomic instruction in monitorenter and monitorexit suffices to acquire and release the lock. When contention does occur, the thread whose CAS fails falls back to the operating-system mutex, blocks, and is woken up when the lock is released.

  • Biased Locking: avoids even the CAS atomic instruction during uncontended lock acquisition, because although CAS is cheap compared with a heavyweight lock, it still incurs considerable local latency.

  • Adaptive Spinning: when a thread's CAS operation fails while acquiring a lightweight lock, before falling back to the operating-system heavyweight lock (mutex semaphore) associated with the monitor, it busy-waits (spins) and retries. If it still fails after a certain number of attempts, it calls the semaphore (i.e., the mutex) associated with the monitor and enters the blocked state.

 What kind of defects does Synchronized have? How does Java Lock make up for these defects?
  • Defects of synchronized
  1. Low efficiency: opportunities to release the lock are few; the lock is released only when the code block completes or exits via an exception. When acquiring the lock, you cannot set a timeout and cannot interrupt a thread that is waiting for the lock. By contrast, Lock supports both interruption and timeouts.
  2. Not flexible enough: the timing of locking and release is fixed, and each lock has only a single condition (a particular object). By contrast, read-write locks are more flexible.
  3. No way to know whether the lock was acquired successfully. By contrast, Lock can report this status.
  • Lock solves the corresponding problem

The Lock interface will not be covered in depth here; focus mainly on four of its methods:

  1. lock(): acquire the lock
  2. unlock(): release the lock
  3. tryLock(): try to acquire the lock, immediately returning a boolean
  4. tryLock(long, TimeUnit): try to acquire the lock with a timeout
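A minimal sketch of tryLock() (class name and latch coordination are illustrative): unlike synchronized, a thread can give up instead of blocking when the lock is unavailable, and the timed variant bounds the wait.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// A minimal sketch of tryLock(): a thread can decline to wait for a lock.
public class TryLockDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        CountDownLatch locked = new CountDownLatch(1);
        CountDownLatch release = new CountDownLatch(1);

        Thread holder = new Thread(() -> {
            lock.lock();
            try {
                locked.countDown();          // tell main the lock is held
                release.await();             // hold it until main says to let go
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                lock.unlock();               // always unlock in finally
            }
        });
        holder.start();

        locked.await();
        System.out.println("tryLock while held: " + lock.tryLock());

        release.countDown();
        holder.join();
        boolean acquired = lock.tryLock(1, TimeUnit.SECONDS);  // timed variant
        System.out.println("tryLock after release: " + acquired);
        if (acquired) {
            lock.unlock();
        }
    }
}
```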

Synchronized associates only one condition with each lock (whether the lock is held), which is inflexible. Later, the combination of Condition and Lock solved this problem.

When multiple threads compete for a lock, the threads that fail to acquire it can normally only keep waiting, without interruption; under high concurrency this degrades performance. ReentrantLock's lockInterruptibly() method can respond to interrupts with priority: if a thread has waited too long, it can be interrupted, and ReentrantLock responds by no longer making it wait. With this mechanism, a thread stuck waiting for a ReentrantLock does not have to wait forever, unlike with synchronized.

 Comparison and choice between Synchronized and Lock?
  • on existential level

synchronized: Java keyword, at the jvm level

Lock: is an interface

  • lock release

synchronized: 1. After the thread that acquired the lock finishes executing the synchronized code, the lock is released. 2. If an exception occurs during thread execution, the JVM will let the thread release the lock.

Lock: The lock must be released in finally, otherwise it will easily cause thread deadlock.

  • Lock acquisition

synchronized: Assume that thread A obtains the lock and thread B waits. If thread A is blocked, thread B will wait forever

Lock: It depends on the situation. Lock has multiple ways to obtain the lock. Basically, you can try to obtain the lock, and the thread does not have to wait all the time (you can use tryLock to determine whether there is a lock).

  • Lock release (deadlock generation)

synchronized: When an exception occurs, the held lock will be automatically released, so no deadlock will occur.

Lock: When an exception occurs, the occupied lock will not be actively released. You must manually unlock to release the lock, which may cause deadlock.

  • lock status

synchronized: Unable to determine

Lock: can be determined (e.g., via tryLock())

  • lock type

synchronized: reentrant, non-interruptible, unfair

Lock: reentrant, interruptible, can query the lock status, and can be either fair or unfair

  • performance

synchronized: suited to small amounts of synchronization with little contention

Lock: suited to heavy synchronization under high contention

Lock can improve the efficiency of multi-threaded read operations (read/write separation can be achieved with ReadWriteLock). ReentrantLock also provides a variety of synchronization modes, such as timed lock acquisition and interruptible lock acquisition (acquiring a synchronized lock cannot be interrupted). When resource contention is mild, synchronized performs slightly better than ReentrantLock; but when contention is fierce, synchronized's performance can drop dramatically (by dozens of times), while ReentrantLock's performance remains stable.

  • Scheduling

synchronized: Use the wait, notify, notifyAll scheduling mechanism of the Object object itself

Lock: Condition can be used for scheduling between threads

  • usage

synchronized: Add this control to the object that needs to be synchronized. Synchronized can be added to the method or in a specific code block. The objects that need to be locked are represented in parentheses.

Lock: Generally, the ReentrantLock class is used as the lock. Locking and unlocking must be done explicitly via lock() and unlock(); therefore unlock() is generally placed in a finally block to prevent deadlock.

  • underlying implementation

synchronized: The bottom layer uses instruction code to control the lock. Mapping it into bytecode instructions means adding two instructions: monitorenter and monitorexit. When the thread execution encounters the monitorenter instruction, it will try to acquire the built-in lock. If the lock is acquired, the lock counter will be +1, if the lock is not acquired, it will be blocked; when the monitorexit instruction is encountered, the lock counter will be -1, and if the counter is 0, the lock will be released.

Lock: The bottom layer is CAS optimistic locking, which relies on the AbstractQueuedSynchronizer class to form a CLH queue for all request threads. All operations on the queue are performed through Lock-Free (CAS) operations.

 What should I pay attention to when using Synchronized?
  • The lock object must not be null, because the lock information is stored in the object header.
  • The scope should not be too large, which affects the speed of program execution. If the control scope is too large, it is easy to make errors when writing code.
  • avoid deadlock
  • When you have a choice, prefer the classes in the java.util.concurrent package over raw Lock or synchronized. If those classes do not fit and the requirements can be met either way, prefer synchronized over Lock: the smaller amount of code leaves fewer chances for error.
 Will the lock be released when the Synchronized modified method throws an exception?

Yes, it will.

 When multiple threads are waiting for the same Synchronized lock, how does the JVM choose the next thread to acquire the lock?

It is an unfair lock; that is, acquisition is preemptive.

 Is synchronized a fair lock?

Synchronized is actually unfair. New threads may get the monitor immediately, while threads that have been waiting in the waiting area for a long time may wait again. This will help improve performance, but it may also lead to starvation.

 What is the role of volatile keyword?
  • Preventing reordering. Let's analyze the reordering problem with the most classic example. Everyone should be familiar with implementations of the singleton pattern. To implement a singleton in a concurrent environment, we usually use double-checked locking (DCL). The source code is as follows:
public class Singleton {
    private static volatile Singleton singleton;

    /**
     * Private constructor: no external instantiation.
     */
    private Singleton() {
    }

    public static Singleton getInstance() {
        if (singleton == null) {
            synchronized (Singleton.class) {
                if (singleton == null) {
                    singleton = new Singleton();
                }
            }
        }
        return singleton;
    }
}

Now let's analyze why the singleton field needs the volatile keyword. To understand this, you must first understand the object construction process. Instantiating an object actually involves three steps:

  1. Allocate memory space.
  2. Initialize the object.
  3. Assign the address of the memory space to the corresponding reference.

But since the compiler and processor can reorder instructions, the above process may become the following:

  1. Allocate memory space.
  2. Assign the address of the memory space to the corresponding reference.
  3. Initialize the object.

If this is the process, an uninitialized object reference may be exposed in a multi-threaded environment, leading to unpredictable results. Therefore, in order to prevent reordering of this process, we need to set the variable to a volatile type variable.

  • achieve visibility

The visibility problem means that one thread modifies the value of a shared variable but another thread cannot see the change. The main cause is that each thread has its own cache area, its working memory. The volatile keyword effectively solves this problem. The following example shows the effect:

import java.util.concurrent.TimeUnit;

public class TestVolatile {
    private static boolean stop = false;

    public static void main(String[] args) {
        // Thread-A
        new Thread("Thread A") {
            @Override
            public void run() {
                while (!stop) {
                }
                System.out.println(Thread.currentThread() + " stopped");
            }
        }.start();

        // Thread-main
        try {
            TimeUnit.SECONDS.sleep(1);
            System.out.println(Thread.currentThread() + " after 1 seconds");
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        stop = true;
    }
}

The execution output is as follows

Thread[main,5,main] after 1 seconds

// Thread A keeps looping: due to the visibility problem it never sees that the main thread has already set stop to true

You can see that after Thread-main sleeps for 1 second, stop = true is set, but Thread A does not stop at all. This is a visibility problem. If you add the volatile keyword in front of the stop variable, it will actually stop:

Thread[main,5,main] after 1 seconds
Thread[Thread A,5,main] stopped

Process finished with exit code 0
  • Guaranteed atomicity: single read/write

Volatile cannot guarantee complete atomicity, but can only guarantee that a single read/write operation is atomic.

 Can volatile guarantee atomicity?

No, not completely. It only guarantees that a single read or write of the volatile variable is atomic; compound operations such as count++ are not.
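This is why a volatile counter still loses updates under contention: count++ is a read-modify-write of three steps, and another thread can interleave between them. A sketch of the standard fix, using AtomicInteger (the class name AtomicityDemo and the method run are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: volatile cannot make count++ atomic; AtomicInteger makes the
// whole increment a single CAS-based atomic operation.
public class AtomicityDemo {
    // Run nThreads threads, each incrementing the counter perThread times.
    public static int run(int nThreads, int perThread) throws InterruptedException {
        AtomicInteger count = new AtomicInteger();
        Thread[] threads = new Thread[nThreads];
        for (int i = 0; i < nThreads; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) {
                    count.incrementAndGet(); // atomic read-modify-write via CAS
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        return count.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // With a plain volatile int and count++ some increments could be lost;
        // AtomicInteger always yields the full total.
        System.out.println(run(4, 10_000)); // prints 40000
    }
}
```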

 Why should volatile be used for shared long and double variables on 32-bit machines?

Because the operations of the two data types long and double can be divided into high 32-bit and low 32-bit parts, ordinary long or double type reading/writing may not be atomic. Therefore, everyone is encouraged to set shared long and double variables to volatile types, which can ensure that a single read/write operation on long and double is atomic under any circumstances.

The following is the explanation in JLS:

17.7 Non-Atomic Treatment of double and long

  • For the purposes of the Java programming language memory model, a single write to a non-volatile long or double value is treated as two separate writes: one to each 32-bit half. This can result in a situation where a thread sees the first 32 bits of a 64-bit value from one write, and the second 32 bits from another write.
  • Writes and reads of volatile long and double values are always atomic.
  • Writes to and reads of references are always atomic, regardless of whether they are implemented as 32-bit or 64-bit values.
  • Some implementations may find it convenient to divide a single write action on a 64-bit long or double value into two write actions on adjacent 32-bit values. For efficiency’s sake, this behavior is implementation-specific; an implementation of the Java Virtual Machine is free to perform writes to long and double values atomically or in two parts.
  • Implementations of the Java Virtual Machine are encouraged to avoid splitting 64-bit values where possible. Programmers are encouraged to declare shared 64-bit values as volatile or synchronize their programs correctly to avoid possible complications.

Currently, commercial virtual machines on all major platforms choose to treat reads and writes of 64-bit data as atomic operations. Therefore, when writing code we generally do not need to declare long and double variables volatile for this reason; in most cases it will not cause problems.
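The JLS guarantee above can be sketched in one line of declaration (the class name VolatileLong is illustrative):

```java
// Sketch: declaring a shared 64-bit field volatile guarantees that its
// reads and writes are atomic even on 32-bit JVMs (JLS 17.7) --
// a reader can never observe a torn half-write.
public class VolatileLong {
    private volatile long value;

    public void set(long v) { value = v; }
    public long get() { return value; }

    public static void main(String[] args) {
        VolatileLong v = new VolatileLong();
        v.set(1_000_000_000_000L);
        System.out.println(v.get()); // prints 1000000000000
    }
}
```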

 How does volatile achieve visibility?

Memory barriers. The JVM inserts a store barrier after each volatile write, forcing the new value to be flushed from working memory to main memory, and a load barrier before each volatile read, forcing the working-memory copy to be invalidated and re-read from main memory.

 How does volatile achieve orderliness?

Through the happens-before rules, among others: a write to a volatile variable happens-before every subsequent read of that variable, and the memory barriers around volatile accesses prevent the compiler and processor from reordering across them.
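The volatile happens-before rule means that everything a writer thread did before a volatile write is visible to a reader after the corresponding volatile read. A sketch of this "safe publication" pattern (the class name HappensBefore and method publishAndRead are illustrative; join() is used so the test is deterministic):

```java
// Sketch: the write to plain field `data` is published by the volatile
// write to `ready`; a reader that sees ready == true is guaranteed to
// also see data == 42 (volatile happens-before rule).
public class HappensBefore {
    private static int data = 0;                   // plain field
    private static volatile boolean ready = false; // publication flag

    public static int publishAndRead() throws InterruptedException {
        Thread writer = new Thread(() -> {
            data = 42;      // (1) ordinary write, ordered before...
            ready = true;   // (2) ...the volatile write that publishes it
        });
        writer.start();
        writer.join();      // join() also establishes happens-before, so this is deterministic
        return ready ? data : -1;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(publishAndRead()); // prints 42
    }
}
```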

 Tell me about the application scenarios of volatile?

Conditions required to use volatile

  1. Writing to a variable does not depend on the current value.
  2. This variable is not included in an invariant with other variables.
  3. Use volatile only when the state is truly independent of other content within the program.
  • Example 1: Singleton pattern

One way to implement the singleton pattern. Many people omit the volatile keyword because the program runs fine without it, but the code is then not 100% stable: a hidden bug may surface at any time.

class Singleton {
    private volatile static Singleton instance;

    private Singleton() {
    }

    public static Singleton getInstance() {
        if (instance == null) {
            synchronized (Singleton.class) {
                if (instance == null) {
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
  • Example 2: volatile beans

In the volatile bean pattern, all data members of the JavaBean are volatile, and the getter and setter methods must be plain: they may not contain any logic beyond getting or setting the corresponding property. Additionally, any data member that is an object reference must refer to an effectively immutable object. (This disallows array-valued properties, because when an array reference is declared volatile, only the reference, not the array elements, has volatile semantics.) As with any volatile variable, no invariant or constraint may involve the JavaBean properties.

@ThreadSafe
public class Person {
    private volatile String firstName;
    private volatile String lastName;
    private volatile int age;
 
    public String getFirstName() { return firstName; }
    public String getLastName() { return lastName; }
    public int getAge() { return age; }
 
    public void setFirstName(String firstName) { 
        this.firstName = firstName;
    }
 
    public void setLastName(String lastName) { 
        this.lastName = lastName;
    }
 
    public void setAge(int age) { 
        this.age = age;
    }
}
 Are all final-modified fields compile-time constants?

No. Only final fields of primitive types or String that are initialized with compile-time constant expressions are compile-time constants; a final field assigned a runtime value (for example in a constructor) is merely immutable after assignment.
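A sketch of the distinction (the class name FinalFields is illustrative):

```java
// Sketch: not every final field is a compile-time constant.
public class FinalFields {
    static final int CONSTANT = 60 * 60;                // compile-time constant: 3600, inlined at use sites
    static final long NOW = System.currentTimeMillis(); // final, but NOT a constant: runtime value
    final int blank;                                    // "blank final": assigned in the constructor

    FinalFields(int value) {
        this.blank = value;                             // runtime assignment, not a constant
    }

    public static void main(String[] args) {
        System.out.println(CONSTANT + " " + new FinalFields(7).blank); // prints "3600 7"
    }
}
```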

 How to understand that the method modified by private is implicitly final?

All private methods in a class are implicitly designated as final. Since the private method cannot be accessed, it cannot be overridden. You can add the final keyword to the private method, but there is no benefit in doing so. Take a look at the following example:

public class Base {
    private void test() {
    }
}

public class Son extends Base{
    public void test() {
    }
    public static void main(String[] args) {
        Son son = new Son();
        Base father = son;
        //father.test();
    }
}

Both Base and Son have a test() method, but this is not overriding: a private method is implicitly final and is not visible to subclasses, so it cannot be overridden at all. The test() method in Son is simply a new member of Son. When Son is upcast to a Base reference father, father.test() does not compile, because test() in Base is private and inaccessible.

 Let’s talk about how to extend final type classes?

For example, String is a final type. We want to write a MyString that reuses all the methods in String and adds a new toMyString() method. How should we do this?

Use the wrapper (facade) pattern, i.e. composition and delegation:


class MyString {

    private String innerString;

    public MyString(String innerString) {
        this.innerString = innerString;
    }

    // Support the old methods by delegating to the wrapped String
    public int length() {
        return innerString.length();
    }

    // Add the new method (illustrative implementation)
    public String toMyString() {
        return "my:" + innerString;
    }
}
 Can final methods be overloaded?

We know that the final method of the parent class cannot be overridden by the subclass, so can the final method be overloaded? The answer is yes, the following code is correct.

public class FinalExampleParent {
    public final void test() {
    }

    public final void test(String str) {
    }
}
 Can the final method of a parent class be overridden by a subclass?

No, it cannot.

 Talk about the reordering rules for final fields of primitive types?

Let’s look at an example code first:

public class FinalDemo {
    private int a;                // ordinary field
    private final int b;          // final field
    private static FinalDemo finalDemo;

    public FinalDemo() {
        a = 1; // 1. write ordinary field
        b = 2; // 2. write final field
    }

    public static void writer() {
        finalDemo = new FinalDemo();
    }

    public static void reader() {
        FinalDemo demo = finalDemo; // 3. read the object reference
        int a = demo.a;             // 4. read ordinary field
        int b = demo.b;             // 5. read final field
    }
}

Assume that thread A is executing the writer() method and thread B is executing the reader() method.

  • Write final field reordering rules

The reordering rule for writing final fields prohibits the reordering of writing to final fields outside the constructor. The implementation of this rule mainly includes two aspects:

  • JMM prohibits the compiler from reordering writes to final fields outside of constructors;
  • The compiler will insert a store barrier after the final field is written and before the constructor returns. This barrier prevents the processor from reordering writes to final fields outside the constructor.

Let's analyze the writer method again. Although there is only one line of code, it actually does two things:

  • Constructed a FinalDemo object;
  • Assign this object to the member variable finalDemo.

Since there is no data dependency between a and b, the write to the ordinary field a may be reordered to outside the constructor, so thread B may read the value of a before it is initialized (its zero value), which is an error. The final field b, by contrast, is protected by the reordering rule: its write cannot be moved outside the constructor, so b is correctly assigned and thread B is guaranteed to read the initialized value of the final field.

Therefore, the write reordering rule for final fields guarantees that an object's final fields are correctly initialized before the object reference becomes visible to any other thread; ordinary fields have no such guarantee. In the example above, thread B may see a finalDemo object whose ordinary field a is not yet initialized.

  • Read final field reordering rules

The reordering rule for reading final fields is: within a thread, JMM prohibits reordering the first read of an object reference with the first read of a final field contained in that object (note that this rule targets the processor); the processor inserts a LoadLoad barrier before the read of the final field. In fact, there is an indirect dependency between reading the object reference and reading its final field, and most processors will not reorder these two operations anyway; the rule exists for the few processors that do.

The reader() method mainly performs three operations:

  • First read reference variable finalDemo;
  • First read the ordinary field a that refers to the variable finalDemo;
  • Read the final field b of the reference variable finalDemo for the first time;

If the read of the object's ordinary field were reordered before the read of the object reference, thread B would be reading a field of an object whose reference it has not yet read, which is clearly wrong. The read rule for final fields "pins" the read of the object reference before the read of the final field, avoiding this situation.

The reordering rules for reading final fields ensure that before reading the final field of an object, the reference to the object containing the final field must be read first.

 Talk about the principle of final?
  • Writing a final field requires the compiler to insert a StoreStore barrier after the final field is written and before the constructor returns.
  • The reordering rules for reading final fields require the compiler to insert a LoadLoad barrier before reading the final field.



Origin blog.csdn.net/abclyq/article/details/134739745