Multithreading - Lock Mechanisms - synchronized

The main purpose of multithreading is to improve the utilization of system resources. However, because multiple threads execute concurrently and the system schedules them unpredictably, business code in a multithreaded environment usually has several threads executing the same code. If that code touches shared variables or performs compound operations, it can easily cause safety issues such as data corruption between threads.

1. Thread Safety

If a class (object or method) always behaves correctly when accessed by multiple threads, then that class (object or method) is thread-safe.

To achieve this, various lock mechanisms are needed to ensure thread safety so that the program executes as we intend;
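A classic illustration of the problem these locks solve is the lost update: `count++` is a compound read-modify-write, so two unsynchronized threads can each read the same old value and one increment is lost. A minimal sketch (class and method names are illustrative, not from the original article):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class Counter {
    private int count = 0;

    // synchronized makes the read-modify-write of count++ atomic;
    // without it, concurrent increments could be lost
    public synchronized void increment() {
        count++;
    }

    public synchronized int get() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        Counter c = new Counter();
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 1000; i++) {
            pool.submit(c::increment);
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        // prints 1000; without synchronized the result could be smaller
        System.out.println(c.get());
    }
}
```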

There are two main kinds of locks in Java multithreading:

1. synchronized;

2. Explicit locks (the java.util.concurrent.locks.Lock interface, e.g. ReentrantLock);

two synchronized

synchronized is a Java keyword; the code it modifies is guarded in a way comparable to a pessimistic lock in a database. It can lock a code block, guarantee that only one thread is inside the method or synchronized block at a time, and ensure the visibility and exclusivity of the thread's access to variables. The thread that acquires the lock releases it when the guarded code ends.

In short, synchronized is a mutual exclusion lock: it allows only one thread at a time into the locked method or block;

2.1 Function

It provides three main guarantees:

    1. Atomicity: only one thread can execute the code block at a time, so the operations inside appear atomic to other threads;

    2. Ordering: for the same lock, an unlock operation happens-before every subsequent lock operation;

    3. Visibility: modifications to a shared variable made inside the lock are promptly visible to other threads;
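The visibility guarantee can be demonstrated with a stop flag: if the flag were read and written without synchronization (or volatile), the reader thread might never observe the update. A minimal sketch (class and method names are illustrative):

```java
public class VisibilityDemo {
    private boolean running = true;

    // Reads and writes of running go through the same lock, so the
    // unlock in stop() happens-before the next lock in isRunning(),
    // making the update visible to the spinning thread.
    public synchronized void stop() {
        running = false;
    }

    public synchronized boolean isRunning() {
        return running;
    }

    public static void main(String[] args) throws InterruptedException {
        VisibilityDemo d = new VisibilityDemo();
        Thread worker = new Thread(() -> {
            while (d.isRunning()) {
                // spin until stop() becomes visible
            }
            System.out.println("worker stopped");
        });
        worker.start();
        Thread.sleep(100);
        d.stop();
        worker.join(); // terminates because the write is visible
    }
}
```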

2.2 Usage Scenarios

public class Synchronized {

    // Applied to a code block: locks the object named in the parentheses,
    // here the current instance. (Never lock a local variable or a boxed
    // type such as Long -- each call may see a different object.)
    public void test(int i) {
        synchronized (this) {
            i++;
        }
    }

    // Applied to an ordinary (instance) method
    public synchronized void test1() {
        System.out.println();
    }

    // Applied to a static method
    public synchronized static void test2() {
        System.out.println();
    }

    // Synchronizing on a static variable shared by all instances
    // (the original used an uninitialized Long, which would throw
    // a NullPointerException; a dedicated Object is used instead)
    private static final Object i = new Object();
    public void test3() {
        synchronized (i) {
            System.out.println();
        }
    }
}

1. Applied to an ordinary method or to a code block synchronized on this: this locks the object on which the method is called (the current instance of the class), known as an "object lock" or "method lock";

2. Applied to a static method, or synchronizing on a static variable: this locks the Class itself (the class's bytecode object, not an instance), known as a "class lock";

Note that a thread holding the class lock does not conflict with a thread holding an object lock, for example:

public class Synchronized {

    public synchronized static void staticLock() {
        for (int i = 0; i < 10; i++) {
            System.out.println("staticLock execute ----- " + i);
        }
    }

    public void normalMethod() {
        for (int i = 0; i < 10; i++) {
            System.out.println("normalMethod execute ----- " + i);
        }
    }

    public static void main(String[] args) {
        Synchronized demo = new Synchronized();

        Thread t1 = new Thread(() -> {
            demo.normalMethod();
        });

        Thread t2 = new Thread(() -> {
            staticLock();
        });

        t1.start();
        t2.start();
    }
}

Output: the two loops' lines interleave, because t1 holds the object lock while t2 holds the class lock, and the two locks do not block each other.

2.3 synchronized principle

Every object in Java has a built-in lock (a monitor), and synchronized uses this built-in monitor to lock the object.

2.3.1 Synchronized code blocks

Again, let's start with an example:

public class Synchronized {
    public void test() {
        synchronized (this) {
            System.out.println();
        }
    }
}

Decompiling the class file above with the javap -c command shows:
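For reference, a typical javap -c excerpt for a method containing a synchronized block looks roughly like this (exact offsets and constant-pool indices vary by JDK version; this is an illustrative sketch, not the article's original screenshot):

```
public void test();
  Code:
     0: aload_0
     1: dup
     2: astore_1
     3: monitorenter                     // acquire the object's monitor
     4: getstatic     #2  // Field java/lang/System.out:Ljava/io/PrintStream;
     7: invokevirtual #3  // Method java/io/PrintStream.println:()V
    10: aload_1
    11: monitorexit                      // release on the normal path
    12: goto          20
    15: astore_2
    16: aload_1
    17: monitorexit                      // release if an exception is thrown
    18: aload_2
    19: athrow
    20: return
```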

As the bytecode shows, when synchronized is applied to a code block, the compiler emits a monitorenter instruction before the locked logic and monitorexit instructions after it (one for the normal exit path and one for the exception path),

The general flow of monitorenter:

    1. If the monitor's entry count is 0, the lock is acquired successfully, the entry count is set to 1, and the business code runs;

    2. If a thread re-enters a block guarded by the same lock object, the JVM checks whether it is the thread already holding the monitor; if so, the entry count is incremented by 1 (reentrant lock);

    3. If another thread A tries to acquire the lock object while the entry count is not 0 and the monitor is held by a different thread, thread A blocks (possibly spinning briefly first) and waits; once the entry count drops back to 0, thread A can compete for the lock again;

The monitorexit flow:

    The thread currently holding the monitor executes the monitorexit instruction, which decrements the entry count by 1. If the count reaches 0, the current thread releases the lock, and other threads waiting for it may then compete to acquire it;
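The reentrancy in step 2 above can be demonstrated directly: a synchronized method can call another synchronized method guarded by the same lock without deadlocking, because the entry count simply goes from 1 to 2 and back. A small sketch (names are illustrative):

```java
public class ReentrantDemo {

    // Both methods lock the ReentrantDemo.class monitor. Because the
    // monitor is reentrant, outer() can call inner() while already
    // holding the lock: the entry count goes 1 -> 2 -> 1 -> 0.
    static synchronized int outer() {
        return inner() + 1; // re-enters the same monitor, no deadlock
    }

    static synchronized int inner() {
        return 1;
    }

    public static void main(String[] args) {
        System.out.println("depth = " + outer()); // prints "depth = 2"
    }
}
```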

2.3.2 Synchronization method

Unlike the synchronized code block, the synchronized method is implemented with the ACC_SYNCHRONIZED flag. When a synchronized method is invoked, the JVM first checks whether the method's ACC_SYNCHRONIZED flag is set. If so, the calling thread must first acquire the monitor, may execute the method body only after acquiring it, and releases the monitor when the method completes;
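For comparison, running javap -v on a synchronized method shows no monitorenter/monitorexit instructions in the bytecode, only the access flag (an illustrative excerpt; exact formatting varies by JDK version):

```
public synchronized void test1();
  descriptor: ()V
  flags: (0x0021) ACC_PUBLIC, ACC_SYNCHRONIZED
```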

For an instance method, the JVM tries to acquire the lock of the instance object; for a static method, it tries to acquire the class lock. When the synchronized method completes, whether by normal return or by exception, the lock is released.

I won't expand on this here; interested readers can try it for themselves;

2.4 Lock Upgrade

As mentioned above, synchronized is implemented through the built-in monitor lock, and the monitor is essentially implemented on top of the operating system's Mutex Lock. Blocking and waking a thread requires the operating system to switch from user mode to kernel mode, and this transition is expensive, which is why synchronized used to be considered inefficient. A lock that relies on the OS Mutex Lock in this way is called a "heavyweight lock".

Before JDK 1.6 there was no concept of lock levels: synchronized was always a heavyweight lock.

Starting with JDK 1.6, optimizations such as CAS spinning, lock elimination, and lock coarsening were added, along with four lock states: no lock, biased lock, lightweight lock, and heavyweight lock. As lock contention increases, the lock state is upgraded step by step and performance drops accordingly. In most business scenarios, however, concurrency is not high, and biased locks and lightweight locks are sufficient;

In JDK 1.6 and later, biased locking and lightweight locking are enabled by default; biased locking can be disabled with -XX:-UseBiasedLocking.

Locks can be upgraded, but not downgraded;

2.4.1 Biased Lock

Core idea: assume the locked code is only ever executed by a single thread from start to finish; if more than one thread calls it, the lock is upgraded to a lightweight lock;

In other words, a biased lock only pays off when the code block is effectively single-threaded; under multithreaded access it must be upgraded at least to a lightweight lock. Whether to disable biased locking with -XX:-UseBiasedLocking should therefore be decided based on the actual workload;

Biased lock upgrade:

1. Thread A acquires the lock and enters the synchronized block. The JVM records the biased thread ID in the lock object's header and in the stack frame, i.e. the lock enters biased mode; at the same time, a CAS operation writes the acquiring thread's ID into the object's Mark Word;

2. When thread A acquires the lock again, the JVM checks whether the current thread's ID matches the thread ID in the lock object's header. If they match, it is the same thread, and no CAS locking or unlocking is needed (avoiding the cost of the CAS operation);

3. When thread B then tries to acquire the lock and the current thread's ID does not match the thread ID in the lock object's header, the JVM checks whether the thread recorded in the lock object is still alive:

  1.     If it is alive: check whether thread A still needs to hold the lock object. If it does, suspend thread A, revoke the biased lock, and upgrade to a lightweight lock; if thread A no longer needs the lock object, reset it to the lock-free state;
  2.     If it is not alive: reset the lock object to the lock-free state, and thread B can compete to bias the lock toward itself;

2.4.2 Lightweight locks

Core idea: assume the synchronized code will usually not actually be contended; in the absence of multithreaded competition, avoid the performance cost of the user/kernel mode transitions that heavyweight locks require;

When biased locking is disabled, or multithreaded contention causes a biased lock to be revoked and upgraded, the JVM tries to acquire a lightweight lock. The steps are as follows:

1. When thread A enters the synchronized block, the JVM checks the lock object's state. If the object is unlocked (the lock flag is "01" and the biased bit is "0"), the JVM first creates a space called a Lock Record in the current thread's stack frame.

2. A copy of the object header's Mark Word (the Displaced Mark Word) is stored in the Lock Record created in step 1; the JVM then uses CAS to try to update the object's Mark Word to a pointer to the Lock Record.

3. If the update is successful, thread A owns the current object lock, and sets the lock flag of the object Mark Word to "00" (lightweight lock);

4. If the update fails, the JVM checks whether the object's Mark Word points to a Lock Record in the current thread's stack frame. If it does, the current thread already owns the lock and can enter the synchronized block directly. Otherwise there is lock competition (e.g. thread B acquired the lock first): thread A spins, waiting for thread B to release the lock. If thread A's spinning runs out, or a third thread C also competes for the lock while A is spinning, the lightweight lock is inflated to a heavyweight lock. A heavyweight lock blocks all threads other than the owner, preventing the CPU from spinning idly.
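The CAS-and-spin acquisition in steps 2-4 is conceptually similar to a simple spin lock, sketched here with AtomicReference. This is only an illustration of the idea, not the JVM's actual Mark Word implementation:

```java
import java.util.concurrent.atomic.AtomicReference;

public class SpinLockSketch {
    // null means unlocked; otherwise holds the owning thread,
    // analogous to the Mark Word pointing at a Lock Record
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public void lock() {
        Thread current = Thread.currentThread();
        // CAS the owner field, spinning on failure -- analogous to the
        // JVM CAS-ing the Mark Word to point at the thread's Lock Record
        while (!owner.compareAndSet(null, current)) {
            Thread.onSpinWait();
        }
    }

    public void unlock() {
        // only the owning thread can release the lock
        owner.compareAndSet(Thread.currentThread(), null);
    }

    public boolean isLocked() {
        return owner.get() != null;
    }

    public static void main(String[] args) throws InterruptedException {
        SpinLockSketch lock = new SpinLockSketch();
        int[] count = {0};
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                lock.lock();
                try { count[0]++; } finally { lock.unlock(); }
            }
        };
        Thread a = new Thread(task), b = new Thread(task);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(count[0]); // prints 2000
    }
}
```

A real lightweight lock inflates to a heavyweight (blocking) lock instead of spinning forever, which this sketch does not model.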

2.4.3 Comparison

| Lock | Advantages | Disadvantages | Applicable scenarios |
| --- | --- | --- | --- |
| Biased lock | Locking and unlocking add no extra cost; performance is close to that of an unsynchronized method | Revoking the bias adds extra cost once lock contention appears | A single thread accesses the synchronized block most of the time |
| Lightweight lock | Competing threads are not blocked, improving response speed | Threads that fail to get the lock spin, burning CPU | Little lock contention; response speed matters |
| Heavyweight lock | No spinning, so no idle CPU consumption | Threads block, so response time is long | Heavy lock contention; throughput matters |


2.4.4 Lock coarsening

Lock coarsening merges multiple consecutive lock/unlock operations on the same object into a single lock with a larger scope, avoiding the performance cost of repeatedly acquiring and releasing the lock;
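A typical candidate for coarsening is a loop of back-to-back synchronized calls on the same object, such as repeated StringBuffer.append calls. A small sketch (names are illustrative; whether the JIT actually coarsens depends on the VM):

```java
public class CoarseningDemo {

    public static String build() {
        StringBuffer sb = new StringBuffer();
        // Each append() acquires and releases sb's monitor. Because the
        // calls are consecutive on the same object, the JIT may coarsen
        // them into a single lock/unlock pair around the whole loop.
        for (int i = 0; i < 5; i++) {
            sb.append(i);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(build()); // prints "01234"
    }
}
```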

2.4.5 Lock Elimination

During just-in-time compilation, the JVM scans the running context and performs escape analysis to remove locks on objects that cannot possibly be contended. Eliminating such unnecessary locks saves the cost of redundant lock acquisitions. Example:

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            // sb is local to this iteration and never escapes,
            // so the JIT can eliminate the lock inside append()
            StringBuffer sb = new StringBuffer();
            sb.append("concat1").append("concat2");
        }
    }

Let's look at the source code of the StringBuffer.append method:

    @Override
    public synchronized StringBuffer append(String str) {
        toStringCache = null;
        super.append(str);
        return this;
    }

In the example above, the StringBuffer's scope is confined to the main method and never escapes it. When the JVM detects such a scenario during compilation, it eliminates the synchronization that the append calls would otherwise require, avoiding unnecessary overhead;

Enabling lock elimination

For locks to be eliminated, the program must run in server mode (which applies more optimizations than client mode), with escape analysis enabled via -XX:+DoEscapeAnalysis and lock elimination enabled via -XX:+EliminateLocks (both are on by default in modern HotSpot server VMs);

eg: -server -XX:+DoEscapeAnalysis -XX:+EliminateLocks

Origin blog.csdn.net/sxg0205/article/details/108399176