JUC Lecture 5: Detailed explanation of keyword synchronized

In C programs, we can use the mutex provided by the operating system to implement mutually exclusive access to critical sections and to block and wake up threads. In addition to the Lock API, Java also provides the synchronized keyword at the language level as a mutual-exclusion synchronization primitive. This article is the fifth JUC lecture and analyzes the synchronized keyword in detail.

1. Understanding synchronized through interview questions from major companies (BAT)

Work through these questions; they will greatly help you understand synchronized.

  • Where can synchronized be used? For example, object locks and class locks.
  • How does synchronized essentially ensure thread safety? The answer covers three aspects: the principle of acquiring and releasing locks, the principle of reentrancy, and the principle of guaranteeing visibility.
  • What defects does synchronized have? How does Java's Lock make up for them?
  • How do synchronized and Lock compare, and how do you choose between them?
  • What should you pay attention to when using synchronized?
  • Does a synchronized method release the lock when it throws an exception?
  • When multiple threads are waiting for the same synchronized lock, how does the JVM choose the next thread to acquire it?
  • synchronized allows only one thread to execute at a time and performs relatively poorly. Is there any way to improve this?
  • What if I want to control lock release and acquisition more flexibly (the timing of releasing and acquiring is currently fixed)?
  • What are lock upgrading and downgrading? What are biased locks, lightweight locks, and heavyweight locks in the JVM?
  • What synchronized optimizations exist in different JDK versions?

2. Use of Synchronized

When using the synchronized keyword, pay attention to the following points:

  • A lock can be held by only one thread at a time; threads that have not acquired the lock must wait;
  • Each instance has its own lock (this), and different instances do not affect each other. Exception: when the lock object is *.class, or synchronized modifies a static method, all instances share the same lock;
  • A synchronized method releases the lock whether the method completes normally or throws an exception.

Use cases

  • Scenario 1: double-checked locking for the singleton pattern, locking on the current class
  • Scenario 2: adding a synchronization lock around business logic to prevent thread-safety issues
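Scenario 1 above can be sketched as follows. This is a minimal illustration (the class name DclSingleton is made up for the example); note that the volatile modifier on the field is essential for the pattern to be safe:

```java
// Double-checked locking singleton, locking on the current class.
// volatile prevents a partially constructed instance from becoming
// visible to other threads due to instruction reordering.
public class DclSingleton {
    private static volatile DclSingleton instance;

    private DclSingleton() {}

    public static DclSingleton getInstance() {
        if (instance == null) {                      // first check, no lock
            synchronized (DclSingleton.class) {      // class lock
                if (instance == null) {              // second check, under the lock
                    instance = new DclSingleton();
                }
            }
        }
        return instance;
    }
}
```

The first check skips locking entirely once the instance exists; the second check, performed under the lock, prevents two threads from both constructing an instance.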

2.1. Object lock

These include method locks (the default lock object is this, the current instance) and synchronized code-block locks (you specify the lock object yourself).

1. Code-block form: manually specify the lock object, which can be this or a custom lock object.
  • Example 1
public class SynchronizedObjectLock implements Runnable {
    static SynchronizedObjectLock instance = new SynchronizedObjectLock();

    @Override
    public void run() {
        // Synchronized code block: the lock is this, so both threads use the same lock.
        // Thread 1 must wait until thread 0 releases the lock before it can proceed.
        synchronized (this) {
            System.out.println("I am thread " + Thread.currentThread().getName());
            try {
                Thread.sleep(3000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println(Thread.currentThread().getName() + " finished");
        }
    }

    public static void main(String[] args) {
        Thread t1 = new Thread(instance);
        Thread t2 = new Thread(instance);
        t1.start();
        t2.start();
    }
}

Output result:

I am thread Thread-0
Thread-0 finished
I am thread Thread-1
Thread-1 finished
  • Example 2
public class SynchronizedObjectLock implements Runnable {
    static SynchronizedObjectLock instance = new SynchronizedObjectLock();
    // create two locks
    Object block1 = new Object();
    Object block2 = new Object();

    @Override
    public void run() {
        // This block uses the first lock; once it is released, the next block
        // can run immediately because it uses the second lock.
        synchronized (block1) {
            System.out.println("block1 lock, I am thread " + Thread.currentThread().getName());
            try {
                Thread.sleep(3000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println("block1 lock, " + Thread.currentThread().getName() + " finished");
        }

        synchronized (block2) {
            System.out.println("block2 lock, I am thread " + Thread.currentThread().getName());
            try {
                Thread.sleep(3000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println("block2 lock, " + Thread.currentThread().getName() + " finished");
        }
    }

    public static void main(String[] args) {
        Thread t1 = new Thread(instance);
        Thread t2 = new Thread(instance);
        t1.start();
        t2.start();
    }
}

Output result:

block1 lock, I am thread Thread-0
block1 lock, Thread-0 finished
block2 lock, I am thread Thread-0  // once thread 0 finishes the first block, the second block runs immediately, because the two blocks use different locks
block1 lock, I am thread Thread-1
block2 lock, Thread-0 finished
block1 lock, Thread-1 finished
block2 lock, I am thread Thread-1
block2 lock, Thread-1 finished
2. Method lock form: synchronized modifies an ordinary instance method, and the lock object defaults to this.
public class SynchronizedObjectLock implements Runnable {
    static SynchronizedObjectLock instance = new SynchronizedObjectLock();

    @Override
    public void run() {
        method();
    }

    public synchronized void method() {
        System.out.println("I am thread " + Thread.currentThread().getName());
        try {
            Thread.sleep(3000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println(Thread.currentThread().getName() + " finished");
    }

    public static void main(String[] args) {
        Thread t1 = new Thread(instance);
        Thread t2 = new Thread(instance);
        t1.start();
        t2.start();
    }
}

Output result:

I am thread Thread-0
Thread-0 finished
I am thread Thread-1
Thread-1 finished

2.2. Class lock

This refers to a synchronized-modified static method, or to specifying the lock object as a Class object.

1. synchronized-modified static method
  • Example 1
public class SynchronizedObjectLock implements Runnable {
    static SynchronizedObjectLock instance1 = new SynchronizedObjectLock();
    static SynchronizedObjectLock instance2 = new SynchronizedObjectLock();

    @Override
    public void run() {
        method();
    }

    // synchronized on an instance method: the default lock is this, the current instance
    public synchronized void method() {
        System.out.println("I am thread " + Thread.currentThread().getName());
        try {
            Thread.sleep(3000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println(Thread.currentThread().getName() + " finished");
    }

    public static void main(String[] args) {
        // t1 and t2 run on two different instances (different this), so the code does not serialize
        Thread t1 = new Thread(instance1);
        Thread t2 = new Thread(instance2);
        t1.start();
        t2.start();
    }
}

Output result:

I am thread Thread-0
I am thread Thread-1
Thread-1 finished
Thread-0 finished
  • Example 2
public class SynchronizedObjectLock implements Runnable {
    static SynchronizedObjectLock instance1 = new SynchronizedObjectLock();
    static SynchronizedObjectLock instance2 = new SynchronizedObjectLock();

    @Override
    public void run() {
        method();
    }

    // synchronized on a static method: the default lock is the Class object,
    // so no matter which thread calls it, only one lock is involved
    public static synchronized void method() {
        System.out.println("I am thread " + Thread.currentThread().getName());
        try {
            Thread.sleep(3000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println(Thread.currentThread().getName() + " finished");
    }

    public static void main(String[] args) {
        Thread t1 = new Thread(instance1);
        Thread t2 = new Thread(instance2);
        t1.start();
        t2.start();
    }
}

Output result:

I am thread Thread-0
Thread-0 finished
I am thread Thread-1
Thread-1 finished
2. synchronized specifying the lock object as a Class object
public class SynchronizedObjectLock implements Runnable {
    static SynchronizedObjectLock instance1 = new SynchronizedObjectLock();
    static SynchronizedObjectLock instance2 = new SynchronizedObjectLock();

    @Override
    public void run() {
        // all threads need the same lock
        synchronized (SynchronizedObjectLock.class) {
            System.out.println("I am thread " + Thread.currentThread().getName());
            try {
                Thread.sleep(3000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println(Thread.currentThread().getName() + " finished");
        }
    }

    public static void main(String[] args) {
        Thread t1 = new Thread(instance1);
        Thread t2 = new Thread(instance2);
        t1.start();
        t2.start();
    }
}

Output result:

I am thread Thread-0
Thread-0 finished
I am thread Thread-1
Thread-1 finished

3. Analysis of Synchronized principle

3.1. Principles of locking and releasing locks

We first observe the behavior and its timing (the built-in lock is this), then dive into the JVM and inspect the bytecode (decompile it and look at the monitor instructions).

Create the following code so we can examine its bytecode:

public class SynchronizedDemo2 {

    Object object = new Object();

    public void method1() {
        synchronized (object) {
        }
        method2();
    }

    private static void method2() {
    }
}

Compile with the javac command to generate the .class file:

>javac SynchronizedDemo2.java

Decompile with the javap command to view the .class file's contents:

>javap -verbose SynchronizedDemo2.class

This yields the following output:

(figure omitted: javap -verbose output for SynchronizedDemo2)

Pay attention to the monitorenter and monitorexit instructions in the output.

The monitorenter and monitorexit instructions cause the object's lock counter to be incremented or decremented by 1 as they execute. Each object is associated with only one monitor (lock) at a time, and a monitor can be held by only one thread at a time. When a thread executes monitorenter to try to acquire ownership of the monitor associated with an object, one of the following three situations occurs:

  • The monitor counter is 0, meaning the lock has not been acquired yet. The thread acquires it immediately and increments the lock counter to 1; once it is 1, other threads must wait to acquire it.
  • The current thread already owns the monitor and is re-entering the lock; the counter accumulates to 2 and keeps increasing with each re-entry.
  • The lock is already held by another thread; the current thread waits for it to be released.

The monitorexit instruction releases ownership of the monitor. The release process is simple: the monitor counter is decremented by 1. If the counter is not 0 afterward, the lock was re-entered and the current thread still holds it. If the counter becomes 0, the current thread no longer owns the monitor, i.e. the lock is released.

The following figure shows the relationship between objects, object monitors, synchronization queues, and thread execution states:

(figure omitted: object, monitor, synchronization queue, and thread states)

As the figure shows, any thread that wants to access an Object must first obtain the Object's monitor. If acquisition fails, the thread enters the synchronization queue and its state changes to BLOCKED. When the monitor's owner releases it, the threads in the synchronization queue get a chance to acquire the monitor again.
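This state transition can be observed from plain Java. The following sketch (class and method names are made up; the sleeps are rough timing assumptions, so the result is typical rather than guaranteed) starts one thread that holds the monitor and a second that tries to enter it, then reports the second thread's state:

```java
// Sketch: a thread that fails to acquire an object's monitor enters the
// synchronization queue and is reported as BLOCKED.
public class BlockedStateDemo {
    static final Object lock = new Object();

    // Starts a holder thread that keeps the monitor, then a waiter thread,
    // and returns the waiter's state while it is stuck on monitorenter.
    static Thread.State observeWaiterState() throws InterruptedException {
        Thread holder = new Thread(() -> {
            synchronized (lock) {
                try { Thread.sleep(500); } catch (InterruptedException ignored) { }
            }
        });
        Thread waiter = new Thread(() -> {
            synchronized (lock) { /* must wait for holder to release the monitor */ }
        });
        holder.start();
        Thread.sleep(100);                 // let holder grab the monitor first
        waiter.start();
        Thread.sleep(100);                 // let waiter block on monitorenter
        Thread.State state = waiter.getState();
        holder.join();
        waiter.join();
        return state;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(observeWaiterState());   // typically prints BLOCKED
    }
}
```

Note that BLOCKED is specific to waiting on a monitor; a thread inside Object.wait() would instead be WAITING.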

3.2. Reentrancy principle: locking times counter

  • What are reentrancy and reentrant locks?

Reentrancy (from Wikipedia): a program or subroutine is called reentrant if it can be "interrupted at any time, after which the operating system schedules other code that calls the same subroutine without error". That is, while the subroutine is running, the executing thread can enter it again and still obtain the results expected at design time. Unlike thread safety under concurrent execution by multiple threads, reentrancy emphasizes that re-entering the same subroutine within a single thread is still safe.

A reentrant lock, also known as a recursive lock, means that when a thread that acquired the lock in an outer method enters an inner method, it automatically acquires the lock again (provided the lock object is the same object or class); it does not block just because the lock was acquired earlier and has not yet been released.

  • Look at the following example
public class SynchronizedDemo {

    public static void main(String[] args) {
        SynchronizedDemo demo = new SynchronizedDemo();
        demo.method1();
    }

    private synchronized void method1() {
        System.out.println(Thread.currentThread().getId() + ": method1()");
        method2();
    }

    private synchronized void method2() {
        System.out.println(Thread.currentThread().getId() + ": method2()");
        method3();
    }

    private synchronized void method3() {
        System.out.println(Thread.currentThread().getId() + ": method3()");
    }
}

Combined with the locking and unlocking principles above, this is easy to follow:

  • Executing monitorenter to acquire the lock
    • (monitor counter = 0, the lock can be acquired)
    • method1() executes, monitor counter +1 -> 1 (lock acquired)
    • method2() executes, monitor counter +1 -> 2
    • method3() executes, monitor counter +1 -> 3
  • Executing monitorexit to release the lock
    • method3() completes, monitor counter -1 -> 2
    • method2() completes, monitor counter -1 -> 1
    • method1() completes, monitor counter -1 -> 0 (the lock is released)
    • (monitor counter = 0, the lock has been released)

This is synchronized's reentrancy: within the same lock session, each object has a monitor counter. Each time the thread acquires the object's lock the counter is incremented by one, and each release decrements it by one, so a thread that already holds the lock does not need to acquire it again.

3.3. Principles of ensuring visibility: memory model and happens-before rules

synchronized's happens-before rule is the monitor lock rule: unlocking a monitor happens-before a subsequent lock of the same monitor. Consider the following code:

public class MonitorDemo {
    private int a = 0;

    public synchronized void writer() {     // 1
        a++;                                // 2
    }                                       // 3

    public synchronized void reader() {     // 4
        int i = a;                          // 5
    }                                       // 6
}

The happens-before relationship of this code is shown in the figure:

(figure omitted: happens-before relationships in MonitorDemo)

Each arrow in the figure connects two nodes with a happens-before relationship between them. The black arrows are derived from the program order rule; the red one comes from the monitor lock rule: thread A's unlock happens-before thread B's lock; and the blue ones are inferred from the program order rule and the monitor lock rule combined, via the transitivity rule. Now focus on 2 happens-before 5. What can we conclude from this relationship?

By one of the definitions of happens-before: if A happens-before B, then the result of A is visible to B, and A is ordered before B. Thread A first increments the shared variable a. From the relationship 2 happens-before 5, thread A's result is visible to thread B, i.e. the value of a read by thread B is 1.
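A minimal runnable sketch of this guarantee (reader() is given a return value here for illustration; join() is used only to sequence the two steps, while it is the monitor lock rule that orders the unlock in writer() before the lock in reader()):

```java
// Sketch: once the monitor released by thread A is acquired again, the
// monitor lock rule guarantees the write in writer() is visible, so
// reader() returns 1.
public class MonitorVisibilityDemo {
    private int a = 0;

    public synchronized void writer() { a++; }

    public synchronized int reader() { return a; }

    public static void main(String[] args) throws InterruptedException {
        MonitorVisibilityDemo demo = new MonitorVisibilityDemo();
        Thread tA = new Thread(demo::writer);
        tA.start();
        tA.join();                          // writer() has run and released the monitor
        System.out.println(demo.reader());  // prints 1: the write is visible
    }
}
```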

4. Optimization of locks in JVM

Background:

In short, the monitorenter and monitorexit bytecodes in the JVM rely on the underlying operating system's mutex lock (Mutex Lock). Using a mutex requires suspending the current thread and switching from user mode to kernel mode, and this switch is very expensive. In reality, however, most synchronized methods run in a single-threaded (contention-free) environment, so calling a mutex every time would seriously hurt performance. For this reason, JDK 1.6 introduced a large number of optimizations to the lock implementation, such as Lock Coarsening, Lock Elimination, Lightweight Locking, Biased Locking, and Adaptive Spinning, to reduce the cost of lock operations.

  • Lock Coarsening: reduce unnecessary back-to-back unlock/lock operations by expanding several consecutive locked regions into one lock with a larger scope.
  • Lock Elimination: using the runtime JIT compiler's escape analysis, remove lock protection from data that is never shared with other threads outside the current synchronized block. Escape analysis can also allocate object space on the thread-local stack (which also reduces heap garbage-collection overhead).
  • Lightweight Locking: this lock is based on the assumption that in real programs most synchronized code is usually contention-free (i.e. executes single-threaded). In the absence of contention, it completely avoids the operating-system-level heavyweight mutex: monitorenter and monitorexit need only a single CAS atomic instruction to acquire and release the lock. When contention does occur, the thread whose CAS instruction fails falls back to the OS mutex and blocks, to be woken when the lock is released (the detailed steps are discussed below).
  • Biased Locking: avoids executing even the CAS atomic instructions during contention-free lock acquisition, because although a CAS is much cheaper than a heavyweight lock, it still has considerable local latency.
  • Adaptive Spinning: when a thread's CAS operation fails while acquiring a lightweight lock, it busy-waits (spins) and retries before falling back to the OS heavyweight lock (mutex) associated with the monitor. If it still fails after a certain number of attempts, it calls the mutex associated with the monitor and enters the blocked state.

Let's explain each of these in detail below, starting with the synchronized lock itself:

4.1. Types of locks

In Java SE 1.6, the synchronized lock has four states in total: no lock, biased lock, lightweight lock, and heavyweight lock, which escalate gradually under contention. Locks can be upgraded but not downgraded; the goal is to improve the efficiency of acquiring and releasing locks.

Lock inflation direction: no lock → biased lock → lightweight lock → heavyweight lock (this process is irreversible)

4.2. Spin lock and adaptive spin lock

1. Spin lock

Background:

  • As we all know, without lock optimization synchronized is a heavyweight mechanism: when multiple threads compete for a lock and one thread acquires it, all the other competing threads block, which hurts performance badly. Suspending and resuming threads requires transitions into kernel mode, and these operations put great pressure on the system's concurrency performance. At the same time, the HotSpot team observed that in many cases shared data stays locked for only a very short period, and suspending and resuming blocked threads for such a short interval is not worthwhile. On today's multi-processor machines, it is entirely possible to let a thread that failed to acquire the lock wait a while "outside the door" without giving up its CPU time, to see whether the lock holder releases the lock soon. To make the thread wait, we simply have it execute a busy loop (spin). This is the origin of the spin lock.

Spin locks were introduced as early as JDK 1.4 but were disabled by default; they have been enabled by default since JDK 1.6. Spinning is essentially different from blocking. Leaving aside the multi-processor requirement, if the lock is held for a very short time, spin locks perform very well; conversely, they bring extra performance overhead, because a spinning thread keeps occupying its CPU time slice, and if the lock is held too long the spinning thread wastes CPU for nothing. Therefore spin waiting must be bounded: if the spin exceeds the limit without successfully acquiring the lock, the thread should be suspended in the traditional way. The default spin count is 10, and it can be changed with the -XX:PreBlockSpin parameter.

But now another problem arises: what if the lock is released just after a thread's spin ends? That would make the spin a waste. So we need smarter locks that spin more flexibly to improve concurrency performance. (This is where the adaptive spin lock comes in!)
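To make the busy-loop idea concrete, here is a user-level spin lock built from a single CAS on an AtomicBoolean. This is only a sketch of the technique, not the JVM's internal implementation, and it deliberately has none of the spin limits or fall-back-to-blocking behavior described above:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// A minimal user-level spin lock: lock() busy-waits instead of parking the thread.
public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // Spin until the CAS flips the flag from false to true.
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait();   // spin-wait hint to the CPU (Java 9+)
        }
    }

    public void unlock() {
        locked.set(false);
    }
}
```

As the text explains, this is only a good trade when the lock is held very briefly; a real implementation would bound the spin and fall back to blocking.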

2. Adaptive spin lock

Adaptive spin locks were introduced in JDK 1.6. "Adaptive" means the spin time is no longer fixed, but is determined by the previous spin times on the same lock and the state of the lock's owner. If, on the same lock object, a spin wait recently succeeded in acquiring the lock, and the thread holding the lock is running, the JVM assumes this spin is also likely to succeed and allows it to wait longer, for example 100 loop iterations. Conversely, if spinning rarely succeeds for a certain lock, future acquisitions of that lock may skip the spin entirely to avoid wasting processor resources. With adaptive spinning, the JVM's prediction of a program's lock behavior becomes more and more accurate, and the JVM becomes smarter.

4.3. Lock elimination

Lock elimination means that, when the just-in-time compiler runs, it removes locks from code that requests synchronization but is detected to have no possible shared-data contention. The main basis for lock elimination is the data from escape analysis: if the JVM determines that synchronized data in a program cannot escape and be accessed by other threads, it treats that data as stack-local, considers it private to the thread, and removes the lock, since no synchronization is needed.

Of course, in real development we usually know which data is thread-private and needs no lock, but many methods in the Java API are synchronized, and in those cases the JVM decides whether the lock is needed: if the data cannot escape, the lock is eliminated. Consider string operations, for example. Since String is an immutable class, string concatenation always goes through newly created String objects, so the javac compiler optimizes string concatenation automatically: before JDK 1.5 it was converted into consecutive append() calls on a StringBuffer object, and in JDK 1.5 and later into consecutive append() calls on a StringBuilder object.

public static String test03(String s1, String s2, String s3) {
    String s = s1 + s2 + s3;
    return s;
}

Decompile the code above with javap:

(figure omitted: javap output of test03)

The synchronized counterpart of this pattern is StringBuffer, whose append() is synchronized. In code like the above, the JVM determines that the buffer object cannot escape, treats it as a thread-private resource that needs no synchronization, and performs lock elimination. (Various operations on Vector can likewise have their locks eliminated when the object does not escape.)
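For reference, the decompiled concatenation shown in the figure corresponds roughly to the following hand-written equivalent (a sketch: on JDK 5-8, javac emits StringBuilder appends like this, while JDK 9+ compiles concatenation to an invokedynamic instead):

```java
public class ConcatEquivalent {
    // Roughly what javac (JDK 5-8) generates for s1 + s2 + s3.
    public static String test03Equivalent(String s1, String s2, String s3) {
        StringBuilder sb = new StringBuilder();
        sb.append(s1);
        sb.append(s2);
        sb.append(s3);
        return sb.toString();
    }
}
```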

4.4. Lock coarsening

As a rule, when adding a synchronization lock we try to keep the scope of the synchronized block as small as possible (synchronize only over the actual scope of the shared data), so as to minimize the number of operations that must be synchronized; when there is contention, threads waiting for the lock can then acquire it as early as possible.

In most cases this is exactly right. However, if a series of operations repeatedly locks and unlocks the same object, or the locking even occurs inside a loop body, then even without any thread contention, the frequent mutual-exclusion operations cause unnecessary performance loss.

Here is an example Java class corresponding to the javap output above:

public static String test04(String s1, String s2, String s3) {
    StringBuffer sb = new StringBuffer();
    sb.append(s1);
    sb.append(s2);
    sb.append(s3);
    return sb.toString();
}

This is the case for the consecutive append() calls above. The JVM detects that this series of operations all lock the same object, so it expands (coarsens) the scope of lock synchronization to cover the whole series, so that the entire sequence of append() calls needs only a single lock acquisition.
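Conceptually, the coarsened result behaves as if the whole sequence were guarded by one lock acquisition. The following hand-written sketch illustrates the effect (it is not the literal code the JIT generates):

```java
public class CoarsenedDemo {
    // Conceptual effect of lock coarsening: one lock acquisition around the
    // whole sequence instead of one per append() call.
    public static String test04Coarsened(String s1, String s2, String s3) {
        StringBuffer sb = new StringBuffer();
        synchronized (sb) {   // StringBuffer's methods also lock sb; reentrancy makes this safe
            sb.append(s1);
            sb.append(s2);
            sb.append(s3);
            return sb.toString();
        }
    }
}
```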

4.5. Lightweight lock

Background:

  • Lightweight locks were introduced in JDK 1.6. Note that lightweight locks do not replace heavyweight locks; they are an optimization based on the observation that in most cases there is no contention on the synchronized block. They reduce the thread overhead caused by heavyweight locks blocking threads, thereby improving concurrency performance.

To understand lightweight locks, you must first understand the memory layout of the object header in the HotSpot virtual machine. An object header (Object Header) has two parts. The first part stores the object's own runtime data: HashCode, GC age, the lock flag bits, whether the lock is biased, and so on. It is generally 32 or 64 bits (depending on the platform), is officially called the Mark Word, and is the key to implementing lightweight and biased locks. The other part stores a class pointer (Class Pointer) to the object's type metadata in the method area. If the object is an array, there is an additional part that stores the array length. (This comes up in interviews.)

1. Lightweight locking

Before a thread executes a synchronized block, the JVM first creates a space called the lock record (Lock Record) in the current thread's stack frame, used to store a copy of the lock object's current Mark Word (the JVM copies the object header's Mark Word into the lock record; this copy is officially called the Displaced Mark Word). At this point the state of the thread stack and object header is as shown in the figure:

(figure omitted: thread stack and object header before locking)

As shown above: if the object is not locked, the lock flag bits are in the 01 state. When the JVM executes the current thread, it first creates the lock record (Lock Record) space in the current stack frame to hold the copy of the lock object's Mark Word.

Then the virtual machine uses a CAS operation to try to update the object's Mark Word to a pointer to the Lock Record. If the update succeeds, this thread owns the object's lock, and the lock flag in the Mark Word (its last 2 bits) becomes 00, indicating that the object is in the lightweight-lock state, as shown:

(figure omitted: thread stack and object header in the lightweight-lock state)

If this update fails, the JVM checks whether the object's Mark Word already points into the current thread's stack frame. If it does, the current thread already holds the lock and can simply proceed. If not, the lock has been taken by another thread. Once two or more threads compete for the same lock, the lightweight lock stops being effective and inflates directly into a heavyweight lock, and threads that fail to acquire the lock are blocked. At that point the lock flag becomes 10 and the Mark Word stores a pointer to the heavyweight lock (monitor).

When a lightweight lock is released, an atomic CAS operation is used to write the Displaced Mark Word back into the object header. If it succeeds, no contention occurred. If it fails, the lock is contended and inflates into a heavyweight lock. The flow when two threads compete for the lock at the same time, causing lock inflation, is as follows:

(figure omitted: lock inflation flow when two threads compete)

4.6. Biased lock

Background:

  • In most real environments, not only is there no multi-thread contention on a lock, but the lock is often acquired repeatedly by the same thread. When the same thread repeatedly acquires and releases the lock without any contention, all that locking and unlocking brings a lot of unnecessary performance overhead and context switching.

To solve this problem, the HotSpot authors optimized synchronized in Java SE 1.6 and introduced biased locking. When a thread accesses a synchronized block and acquires the lock, the thread ID of the bias is stored in the Mark Word of the object header and in the lock record in the stack frame. From then on, the thread does not need any CAS operation to lock or unlock when entering and exiting the synchronized block; it simply tests whether the object header's Mark Word stores a bias toward the current thread. If it does, the thread has acquired the lock.

(figure omitted: biased-lock acquisition flow)

1. Revoking the biased lock

A biased lock uses a mechanism that only releases the lock once contention appears. So when another thread tries to acquire the biased lock, the thread holding the bias releases it. Revoking the bias, however, must wait for a global safepoint (a point at which no bytecode is being executed). The JVM first suspends the thread holding the biased lock, then checks whether that thread is still alive. If the thread is not active, the object header is set directly to the lock-free state. If the thread is alive, the JVM walks the lock records in its stack frame; the lock records and the object header are then either re-biased to another thread, restored to the lock-free state, or the object is marked as unsuitable for biased locking.

(figure omitted: biased-lock revocation flow)

4.7. Comparison of advantages and disadvantages of locks

| Lock | Advantages | Disadvantages | Applicable scenario |
| --- | --- | --- | --- |
| Biased lock | Locking and unlocking need no CAS operations and add no extra overhead; the gap versus an unsynchronized method is on the order of nanoseconds | If threads contend for the lock, revoking the bias brings extra cost | Only a single thread ever accesses the synchronized block |
| Lightweight lock | Competing threads do not block, which improves response time | A thread that never wins the lock keeps spinning, which consumes CPU | Response time matters and synchronized blocks execute very quickly |
| Heavyweight lock | Contending threads do not spin, so no CPU is wasted on spinning | Threads block and response time is slow; under multi-threading, frequent lock acquisition and release causes heavy performance consumption | Throughput matters and synchronized blocks execute for a long time |

5. Synchronized vs. Lock

5.1. Defects of synchronized

  • Low efficiency: there are few ways to release the lock; it is released only when the code finishes executing or exits with an exception. When trying to acquire the lock you cannot set a timeout, and you cannot interrupt a thread that is waiting for the lock. By contrast, Lock supports both interruption and timeouts.
  • Not flexible enough: the points at which the lock is acquired and released are fixed, and each lock has only a single condition (a certain object). Read-write locks, by contrast, are more flexible.
  • No way to know whether the lock was acquired successfully. Lock, by contrast, exposes this state: if the lock is acquired successfully, ..., and if the acquisition fails, ...

5.2. Lock to solve corresponding problems

I won’t explain the Lock interface in depth here; just look at four of its methods:

  • lock(): acquire the lock
  • unlock(): release the lock
  • tryLock(): try to acquire the lock and immediately return a boolean
  • tryLock(long, TimeUnit): try to acquire the lock with a timeout
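A minimal sketch of the timed variant (the class and method names here are illustrative, not from the original article): `tryLock(long, TimeUnit)` waits at most the given time instead of blocking forever the way synchronized does.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Sketch: timed lock acquisition with a fallback path on timeout.
public class TryLockDemo {
    static boolean attempt(ReentrantLock lock) throws InterruptedException {
        // Wait at most 100 ms for the lock instead of blocking indefinitely.
        if (lock.tryLock(100, TimeUnit.MILLISECONDS)) {
            try {
                return true;       // got the lock; do the critical work here
            } finally {
                lock.unlock();     // always release in finally
            }
        }
        return false;              // timed out; take a fallback path
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(attempt(new ReentrantLock())); // true: lock was free
    }
}
```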

Synchronized locking is tied to only a single condition (whether the lock has been acquired), which is inflexible. The combination of Condition and Lock later solved this problem.
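The multiple-condition advantage can be sketched with a one-slot buffer (an illustrative class, not from the original article): synchronized gives only one implicit wait set per monitor, while a single Lock can hand out separate Conditions for producers and consumers.

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// Sketch: two Conditions on one Lock, so producers and consumers wait on
// separate queues instead of sharing a monitor's single wait set.
public class OneSlotBuffer {
    private final Lock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();
    private Integer slot = null;

    public void put(int value) throws InterruptedException {
        lock.lock();
        try {
            while (slot != null) notFull.await();   // wait until the slot is empty
            slot = value;
            notEmpty.signal();                      // wake only waiting consumers
        } finally {
            lock.unlock();
        }
    }

    public int take() throws InterruptedException {
        lock.lock();
        try {
            while (slot == null) notEmpty.await();  // wait until the slot is full
            int value = slot;
            slot = null;
            notFull.signal();                       // wake only waiting producers
            return value;
        } finally {
            lock.unlock();
        }
    }
}
```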

When multiple threads compete for a lock, the threads that fail to obtain it can only keep waiting, with no way to be interrupted, and under high concurrency this degrades performance. ReentrantLock's lockInterruptibly() method responds to interruption: if a thread has waited too long, it can be interrupted, and ReentrantLock will then stop it from waiting any further. This mechanism gives ReentrantLock a way to break out of waits that, with synchronized, could leave threads stuck forever in a deadlock.
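A minimal sketch of that mechanism (the class name is illustrative): a thread blocked in `lockInterruptibly()` can be woken by `interrupt()`, whereas a thread blocked on a synchronized monitor cannot.

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch: cancelling a lock wait via interrupt(). The main thread holds the
// lock, the waiter blocks in lockInterruptibly(), and the interrupt makes it
// give up instead of waiting forever.
public class InterruptibleLockDemo {
    static String run() throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();                           // main thread holds the lock
        StringBuilder result = new StringBuilder();
        Thread waiter = new Thread(() -> {
            try {
                lock.lockInterruptibly();      // blocks: lock is held by main
                try { result.append("acquired"); } finally { lock.unlock(); }
            } catch (InterruptedException e) {
                result.append("interrupted");  // gave up waiting
            }
        });
        waiter.start();
        Thread.sleep(200);                     // let the waiter block
        waiter.interrupt();                    // cancel its wait
        waiter.join();
        lock.unlock();
        return result.toString();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // interrupted
    }
}
```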

ReentrantLock is the commonly used implementation: a reentrant mutual-exclusion Lock with the same basic behavior and semantics as the implicit monitor lock accessed via synchronized methods and statements, but more powerful. For a detailed analysis, see: JUC Lecture 10: JUC Lock: Detailed Explanation of ReentrantLock

6. Deeper understanding

Synchronized is implemented in software (by the JVM) and is simple and easy to use. Even with Lock available since JDK 5, it is still widely used.

  • What should you pay attention to when using Synchronized?
    • The lock object must not be null, because the lock information is stored in the object header;
    • The scope should not be too large, or it will slow program execution; and a synchronized region that covers too much code is easy to get wrong;
    • Avoid deadlock;
    • If you have the choice, use neither Lock nor the synchronized keyword, but the various tool classes in the java.util.concurrent package. If you do not use the classes in that package and the business allows it, use the synchronized keyword, because less code means fewer chances for error;
      • Note: when multiple threads access local variables of the same method, no thread-safety problem arises, because local variables live in the virtual machine stack and are private to each thread.
  • Is Synchronized a fair lock?
    • Synchronized is actually unfair: a newly arriving thread may grab the monitor immediately, while a thread that has been waiting for a long time may have to wait again. This improves performance, but it can also lead to thread starvation.

7. Reference articles

Origin blog.csdn.net/qq_28959087/article/details/133074787