An in-depth analysis of the toughest synchronized interview questions

foreword

I haven't posted for a while. A classmate in my study group has been nudging me about my progress, which, to be honest, makes me quite happy.

To be honest, I have wanted to write about synchronized for a long time, because in today's interviews its status is much like that of HashMap: collections and concurrency are both essential knowledge areas, and HashMap and synchronized are the core of the core.

Compared with HashMap, synchronized is a bit more complicated, because its main logic lives in the JVM source code. This time I spent a lot of time digging through the JVM source, and honestly I gained a lot, because on quite a few points the current mainstream explanations are somewhat off.

text

1. A small example of the use of synchronized?

import java.util.concurrent.CountDownLatch;

public class SynchronizedTest {

    public static volatile int race = 0;

    private static CountDownLatch countDownLatch = new CountDownLatch(2);

    public static void main(String[] args) throws InterruptedException {
        // start 2 threads to count in a loop
        for (int i = 0; i < 2; i++) {
            new Thread(() -> {
                // each thread increments 10,000 times
                for (int j = 0; j < 10000; j++) {
                    race++;
                }
                countDownLatch.countDown();
            }).start();
        }
        // wait until all threads have finished
        countDownLatch.await();
        // expected output: 20,000 (2 * 10,000)
        System.out.println(race);
    }
}

This is the familiar two-thread counting example: each thread increments the counter 10,000 times, so the expected result is 20,000, but the actual result is always a number less than or equal to 20,000. Why?

race++ looks like a single operation to us, but underneath it is composed of multiple operations (read, add, write back), so under concurrency interleavings like the following can occur:
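Spelled out in plain Java, the "single" operation race++ is really three steps (a sketch; tmp is an illustrative local variable, not something javac emits by that name):

```java
public class RacePlusPlus {

    static volatile int race = 0;

    public static void main(String[] args) {
        // race++ is not atomic; it boils down to roughly these three steps:
        int tmp = race;   // 1. read the current value
        tmp = tmp + 1;    // 2. add one
        race = tmp;       // 3. write the result back
        System.out.println(race);  // 1
        // if two threads both run step 1 before either runs step 3,
        // one of the two increments is lost
    }
}
```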

To get the correct result, we can guard race++ with synchronized, as follows:

synchronized (SynchronizedTest.class) {
    race++;
}

After adding synchronized, race can only be modified by the thread that currently holds the lock. The process then becomes the following:
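Putting it together, a runnable version of the corrected counter (the class name SynchronizedFixedTest is illustrative) always prints 20000:

```java
import java.util.concurrent.CountDownLatch;

public class SynchronizedFixedTest {

    public static volatile int race = 0;

    private static final CountDownLatch countDownLatch = new CountDownLatch(2);

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 2; i++) {
            new Thread(() -> {
                for (int j = 0; j < 10000; j++) {
                    // the class lock makes the read-add-write sequence atomic
                    synchronized (SynchronizedFixedTest.class) {
                        race++;
                    }
                }
                countDownLatch.countDown();
            }).start();
        }
        countDownLatch.await();
        System.out.println(race);  // 20000
    }
}
```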

2. Synchronized various locking scenarios?

1) On a non-static method, the lock is the object instance (this); each instance has its own lock.

public synchronized void method() {}

2) On a static method, the lock is the class's Class object. There is only one Class object globally, so a static synchronized method acts as a class-wide lock: all threads calling the method contend for the same lock.

public static synchronized void method() {}

3) On Lock.class, the lock is the Class object of Lock, of which there is only one globally.

synchronized (Lock.class) {}

4) On this, the lock is the object instance; each instance has its own lock.

synchronized (this) {}

5) On a static member variable, the lock is the object the static variable refers to; since the variable is static, that object is globally unique.

public static Object monitor = new Object(); 
synchronized (monitor) {}

Some students may find this confusing, but it is easy to remember if you keep two points in mind:

1) There must be an "object" to act as a "lock".

2) For a given class, usually only two kinds of objects serve as locks: instance objects and the Class object (each class has exactly one Class object globally).

Class object: everything static locks on the Class object, which can also be named directly as Lock.class.

Instance objects: everything non-static locks on an instance object.
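A quick sketch to verify point 2: the instance lock and the Class lock are independent, so a thread holding one does not block a thread taking the other (class name and sleep durations are illustrative):

```java
public class TwoLocksDemo {

    public synchronized void instanceMethod() throws InterruptedException {
        // holds the lock on 'this'
        Thread.sleep(200);
        System.out.println("instance lock released");
    }

    public static synchronized void staticMethod() {
        // holds the lock on TwoLocksDemo.class, independent of 'this'
        System.out.println("class lock acquired");
    }

    public static void main(String[] args) throws InterruptedException {
        TwoLocksDemo demo = new TwoLocksDemo();
        Thread t = new Thread(() -> {
            try { demo.instanceMethod(); } catch (InterruptedException ignored) { }
        });
        t.start();
        Thread.sleep(50);   // let t grab the instance lock first
        staticMethod();     // not blocked: a different lock
        t.join();
    }
}
```

If both methods locked the same object, "class lock acquired" could not print before the instance method finished.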

3. Why must the synchronized lock be held when calling Object's wait/notify/notifyAll methods?

This question is both hard and simple. It's simple because everyone remembers the classic question "the difference between sleep and wait", and a key item in that answer is: "wait releases the object lock, sleep does not". If the lock is to be released, it must have been acquired first.

It's hard because, if you have never thought it through or looked at the underlying principles, you may be completely clueless.

The root cause is that these three methods all operate on the lock object's monitor, so the caller must hold that monitor first, and entering a synchronized block on the lock object is how it is acquired.

Let's see an example:

public class SynchronizedTest {

    private static final Object lock = new Object();

    public static void testWait() throws InterruptedException {
        lock.wait();   // must be called while holding 'lock', otherwise IllegalMonitorStateException
    }

    public static void testNotify() throws InterruptedException {
        lock.notify(); // likewise requires holding 'lock'
    }
}

In this example, wait releases the lock object, while notify/notifyAll wakes up other threads waiting on the lock object so they can contend for it.

Since these methods operate on the lock object, the caller must first acquire that lock object. It is like handing an apple to another student: you have to be holding the apple first.
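A minimal correct-usage sketch (names are illustrative): both sides enter synchronized (lock) first, and the waiter re-checks its condition in a loop, the standard guard against spurious wakeups:

```java
public class WaitNotifyDemo {

    private static final Object lock = new Object();
    private static boolean ready = false;

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            synchronized (lock) {             // acquire the monitor first
                while (!ready) {              // guard against spurious wakeups
                    try {
                        lock.wait();          // releases 'lock' while waiting
                    } catch (InterruptedException e) {
                        return;
                    }
                }
                System.out.println("woken");
            }
        });
        waiter.start();

        synchronized (lock) {                 // the notifier also holds the monitor
            ready = true;
            lock.notify();
        }
        waiter.join();
    }
}
```

Note that the while (!ready) loop also makes the program correct if notify() happens to run before wait().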

Let's look at another counter example:

public class SynchronizedTest {

    private static final Object lock = new Object();

    public static synchronized void getLock() throws InterruptedException {
        lock.wait();
    }
}

Running this method throws an IllegalMonitorStateException. Why? Didn't we add synchronized to acquire the lock?

Because marking the static getLock method synchronized acquires the lock on SynchronizedTest.class, while our wait() call operates on the monitor of lock.

This is like wanting to hand another student an apple (lock) while all you are holding is a pear (SynchronizedTest.class).
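The mismatch can be reproduced directly (a sketch): hold the Class lock, then call wait() on a different object:

```java
public class WrongMonitorDemo {

    private static final Object lock = new Object();

    public static void main(String[] args) {
        synchronized (WrongMonitorDemo.class) {   // we hold the Class lock (the pear)...
            try {
                lock.wait(1);                     // ...but wait() needs the monitor of 'lock' (the apple)
            } catch (IllegalMonitorStateException e) {
                System.out.println("IllegalMonitorStateException");
            } catch (InterruptedException ignored) {
            }
        }
    }
}
```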

4. How many lists does synchronized maintain at the bottom to store blocked threads?

This question follows on from the previous one; clearly the interviewer wants to see whether you really understand the underlying principles of synchronized.

The JVM structure backing synchronized is ObjectMonitor, which uses three linked lists to hold threads: _cxq (ContentionList), _EntryList, and _WaitSet.

When a thread fails to acquire the lock and blocks, it is first added to the _cxq list; at certain points, nodes from _cxq are transferred into _EntryList.

As for exactly when that transfer happens, see question 30.

When the lock holder releases the lock, the thread at the head of _EntryList is woken up. That thread is called the presumed successor, and it then tries to acquire the lock.

When we call wait(), the thread is placed into _WaitSet. When notify()/notifyAll() is called, the thread is moved back into _cxq or _EntryList; by default it is placed at the head of the _cxq list.

The overall flow of ObjectMonitor is as follows:

5. Why is the thread woken on lock release called the "presumed successor"? Will the woken thread definitely acquire the lock?

Because the woken thread does not necessarily get the lock: it still has to compete for it and may fail. So it does not necessarily become the lock's "successor"; it merely gets the chance to, which is why it is only "presumed".

This is one of the reasons synchronized is an unfair lock.

6. Is synchronized a fair lock or an unfair lock?

It is an unfair lock.

7. Why is synchronized an unfair lock? Where is the inequity reflected?

In fact, synchronized is unfair in many places in the source code, simply because its designers did not design it as a fair lock. The core points are:

1) When the thread holding the lock releases it, it does two important things:

  1. First set the lock's owner property to null.
  2. Wake up one thread on the waiting list (the presumed successor).

Between steps 1 and 2, if another thread happens to be trying to acquire the lock (for example, by spinning), it can grab the lock immediately.

2) When threads fail to acquire the lock and block, the order in which they enter the linked lists is not the order in which they are eventually woken. Entering the list first does not mean being woken first.

8. Since the synchronized lock is held, a thread that calls wait is obviously still inside the synchronized block. How can another thread enter the synchronized block to execute notify?

Consider the following example: when lock.wait() is called, the thread blocks there, and code execution appears to still be inside the synchronized block. Why can another thread enter a synchronized block on the same lock and execute notify()?

public class SynchronizedTest {

    private static final Object lock = new Object();

    public static void testWait() throws InterruptedException {
        synchronized (lock) {
            // blocks here; "aa" is not printed until the thread is woken, i.e. it has not left the synchronized block
            lock.wait();
            System.out.println("aa");
        }
    }

    public static void testNotify() throws InterruptedException {
        synchronized (lock) {
            lock.notify();
            System.out.println("bb");
        }
    }
}

Looking only at the code does give the illusion described in the question, which is also why many people (myself included) have struggled to use Object's wait() and notify() well.

The answer lies at the bottom layer. Entering synchronized requires acquiring the lock, but when lock.wait() is called, although the thread is still inside the synchronized block, the lock has in fact already been released.

Therefore other threads can acquire the lock at that point, enter the synchronized block, and execute lock.notify().

9. If multiple threads have entered the wait state, does notify wake them in the order in which they called wait?

The answer is no. As mentioned when explaining why synchronized is an unfair lock, threads are not woken in order.

When wait is called, the node is appended to the tail of the _WaitSet list.

When notify is called, depending on the policy in effect, the node may be moved to the head of _cxq, the tail of _cxq, the head of _EntryList, the tail of _EntryList, and so on.

So the wake-up order is not necessarily the order in which wait was called.

10. How does notifyAll achieve full arousal?

notify takes the head node of the _WaitSet and performs the wake-up operation on it.

notifyAll can be understood simply as looping over all nodes of the _WaitSet and performing the notify operation on each of them.

11. What lock optimizations does the JVM do?

Biased locking, lightweight locking, spinning, adaptive spinning, lock elimination, and lock coarsening.

12. Why introduce biased locks and lightweight locks? Why are heavyweight locks expensive?

At the bottom, heavyweight locks are implemented with the operating system's synchronization facilities; on Linux that means pthread_mutex_t (a mutex).

These low-level synchronization calls involve switching between user mode and kernel mode as well as context switches, and those operations are time-consuming, so heavyweight lock operations carry a relatively high overhead.

In many cases only one thread ever acquires a given lock, or several threads acquire it alternately. Using a heavyweight lock there is not cost-effective, which is why biased locks and lightweight locks were introduced: to reduce the locking overhead when there is no actual contention.

13. Biased locking comes with revocation and inflation. Why use it at all, given such a performance cost?

The advantage of a biased lock is that when only one thread acquires the lock, only one CAS on the markword is needed the first time, after which each entry requires just a simple check. This avoids the CAS that a lightweight lock performs on every acquire and release.

If you already know the synchronized block will be accessed by many threads or heavily contended, you can disable biased locking with the -XX:-UseBiasedLocking flag.

14. What usage scenarios do bias locks, lightweight locks, and heavyweight locks correspond to?

1) Bias lock

Works when only one thread acquires the lock. Once a second thread tries to acquire it, even if the first thread has already released the lock, it is upgraded to a lightweight lock.

There is one special case: if a batch re-bias has occurred for the lock's class, a second thread may still try to acquire the biased lock at that point.

2) Lightweight lock

Suitable for threads acquiring the lock alternately. The difference from a biased lock is that multiple threads may acquire the lock, but there must be no contention; if contention appears, the lock inflates to a heavyweight lock. Some students will ask "isn't there spinning first?": please read on.

3) Heavyweight Lock

Suitable for multiple threads to acquire locks at the same time.

15. At what stage does spin occur?

Spinning happens in the heavyweight lock phase.

99.99% of the explanations online say spinning happens in the lightweight lock phase, but after actually reading the source code (JDK 8), that is not the case.

There is no spinning in the lightweight lock phase: as soon as there is contention, the lock inflates directly into a heavyweight lock.

In the heavyweight lock phase, if acquiring the lock fails, the thread tries spinning to acquire it.

16. Why is the spin operation designed?

Because the suspension overhead of heavyweight locks is too high.

Generally, the code inside a synchronized block executes quickly, so a thread contending for the lock can usually obtain it by spinning for a short while, saving the cost of suspending and resuming under the heavyweight lock.

17. How does adaptive spin reflect adaptation?

Adaptive spinning bounds the spin count between 1,000 and 5,000.

If the current spin acquires the lock, the spin count is rewarded with +100; if it fails, it is penalized with -200.

So if spinning keeps succeeding, the JVM concludes that spinning has a high success rate and is worth a few more iterations, and increases the allowed spin count.

Conversely, if spinning keeps failing, the JVM concludes it is just wasting time and reduces the spin count as much as possible.
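The reward/penalty bookkeeping can be modeled in a few lines. This is a toy model of the numbers quoted above, not HotSpot's actual code:

```java
public class AdaptiveSpinModel {

    static final int MIN = 1000, MAX = 5000;   // bounds quoted above
    static int spinLimit = MIN;

    // toy model: adjust the allowed spin count after each acquisition attempt
    static void recordSpinResult(boolean success) {
        spinLimit = success
                ? Math.min(MAX, spinLimit + 100)   // reward a successful spin
                : Math.max(MIN, spinLimit - 200);  // penalize a failed spin
    }

    public static void main(String[] args) {
        recordSpinResult(true);
        recordSpinResult(true);
        System.out.println(spinLimit);   // 1200: two rewards
        recordSpinResult(false);
        System.out.println(spinLimit);   // 1000: one penalty, clamped at the minimum
    }
}
```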

18. Can synchronized locks be downgraded?

The answer is yes.

The specific trigger: at a global safepoint, lock downgrading is attempted while cleanup tasks are performed.

When a lock is downgraded, two main operations are performed:

1) Restore the lock object's markword (object header);

2) Reset the ObjectMonitor and put it back onto the global free list for later reuse.

19. The difference between synchronized and ReentrantLock

1) Underlying implementation: synchronized is a Java keyword and a JVM-level lock; ReentrantLock is a lock implemented at the JDK level.

2) Manual release: synchronized needs no explicit acquire or release, and the lock is released automatically even when an exception is thrown, so it cannot be leaked this way. ReentrantLock does not release itself when an exception occurs; if unlock() is skipped, deadlock is likely, so with ReentrantLock the lock must be released in a finally block.

3) Fairness: synchronized is an unfair lock; ReentrantLock is unfair by default, but a constructor parameter can make it fair.

4) Interruptibility: a thread blocked on synchronized cannot be interrupted out of the wait; ReentrantLock supports interruptible acquisition (lockInterruptibly).

5) Flexibility: with synchronized, a waiting thread waits until it gets the lock; ReentrantLock is more flexible: it can return immediately with success or failure (tryLock), respond to interruption, and support timeouts.

6) Performance: after years of continuous optimization of synchronized, there is no longer an obvious performance gap between the two, so performance should not be the deciding factor. The official recommendation is to use synchronized where possible, and reach for Lock only when synchronized cannot meet your needs.
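A sketch of the patterns from points 2, 3, and 5: release in finally, and use tryLock for a bounded wait (names are illustrative):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class ReentrantLockDemo {

    // pass 'true' to the constructor for a fair lock (point 3)
    private static final ReentrantLock lock = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        lock.lock();
        try {
            System.out.println("held: " + lock.isHeldByCurrentThread());
        } finally {
            lock.unlock();   // always release in finally, or an exception leaks the lock (point 2)
        }

        // tryLock returns instead of blocking forever (point 5)
        if (lock.tryLock(100, TimeUnit.MILLISECONDS)) {
            try {
                System.out.println("acquired with timeout");
            } finally {
                lock.unlock();
            }
        }
    }
}
```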

20. What is the synchronized lock upgrade process?

The core process is shown in the figure below; save it and zoom in to view. It is normal if some concepts are unclear at this point; they are explained later in the article, so please read on.

If the image is blurry, you can download the original from the Baidu cloud drive where I share the interview questions.

synchronized lock flow chart

PS: From here on, the content covers the underlying principles: a detailed walkthrough of the lock-upgrade flow chart. Most interviewers may not ask about this directly, but when discussing lock upgrading, the following is all good material.

21. The underlying implementation of synchronized

The underlying implementation of synchronized differs between methods and code blocks, as the following example shows.

/**
 * @author joonwhee
 * @date 2019/7/6
 */
public class SynchronizedDemo {

    private static final Object lock = new Object();

    public static void main(String[] args) {
        // lock applied to a code block
        synchronized (lock) {
            System.out.println("hello word");
        }
    }

    // lock applied to a method
    public synchronized void test() {
        System.out.println("test");
    }
}

Compile the code and inspect its bytecode; the core parts are as follows:

{
  public com.joonwhee.SynchronizedDemo();
    descriptor: ()V
    flags: ACC_PUBLIC
    Code:
      stack=1, locals=1, args_size=1
         0: aload_0
         1: invokespecial #1                  // Method java/lang/Object."<init>":()V
         4: return
      LineNumberTable:
        line 9: 0

  public static void main(java.lang.String[]);
    descriptor: ([Ljava/lang/String;)V
    flags: ACC_PUBLIC, ACC_STATIC
    Code:
      stack=2, locals=3, args_size=1
         0: getstatic     #2                  // Field lock:Ljava/lang/Object;
         3: dup
         4: astore_1
         5: monitorenter   // enter the synchronized block
         6: getstatic     #3                  // Field java/lang/System.out:Ljava/io/PrintStream;
         9: ldc           #4                  // String hello word
        11: invokevirtual #5                  // Method java/io/PrintStream.println:(Ljava/lang/String;)V
        14: aload_1
        15: monitorexit   // exit the synchronized block
        16: goto          24
        19: astore_2
        20: aload_1
        21: monitorexit  // exit the synchronized block
        22: aload_2
        23: athrow
        24: return
      Exception table:
         from    to  target type
             6    16    19   any
            19    22    19   any


  public synchronized void test();
    descriptor: ()V
    flags: ACC_PUBLIC, ACC_SYNCHRONIZED  // ACC_SYNCHRONIZED flag
    Code:
      stack=2, locals=1, args_size=1
         0: getstatic     #3                  // Field java/lang/System.out:Ljava/io/PrintStream;
         3: ldc           #6                  // String test
         5: invokevirtual #5                  // Method java/io/PrintStream.println:(Ljava/lang/String;)V
         8: return
      LineNumberTable:
        line 20: 0
        line 21: 8
}

When synchronized modifies a code block, compilation generates monitorenter and monitorexit instructions, corresponding to entering and exiting the synchronized block. Notice there are two monitorexits: at compile time the JVM wraps the block in an implicit try-finally and releases the lock in the finally, which is why synchronized never needs a manual release.

When synchronized modifies a method, compilation sets the ACC_SYNCHRONIZED flag. On invocation, the calling instruction checks whether the method's ACC_SYNCHRONIZED access flag is set; if so, it acquires the lock before executing the method.

There is essentially no difference between the two implementations; method synchronization is simply done implicitly, without explicit monitorenter/monitorexit instructions in the bytecode.

22. Introduce Mark Word?

Before introducing the Mark Word, you need to understand object memory layout. In HotSpot, an object's storage layout in heap memory has three parts: the object header (Header), instance data (Instance Data), and alignment padding (Padding).

1) Object header (Header)

It mainly contains two types of information: Mark Word and Type Pointer.

The Mark Word records the object's runtime data, such as its hash code, GC generational age, biased flag, lock flag, biased thread ID, and biased epoch. The 32-bit markword layout is shown in the figure below.

The type pointer points to the object's type metadata; the Java virtual machine uses it to determine which class the object is an instance of. If the object is an array, the header must also record the array length.

2) Instance Data

This is the object's real payload: the contents of the fields we define in code.

3) Align padding (Padding)

HotSpot requires that an object's size be a multiple of 8 bytes, so if the instance data does not bring the total to a multiple of 8 bytes, alignment padding fills the gap.
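The padding rule is plain arithmetic: round the size up to the next multiple of 8 (a sketch):

```java
public class AlignPadding {

    // round an object size up to a multiple of 8 bytes
    static int align8(int size) {
        return (size + 7) & ~7;
    }

    public static void main(String[] args) {
        System.out.println(align8(12));  // 16: 4 bytes of padding added
        System.out.println(align8(16));  // 16: already aligned, no padding needed
    }
}
```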

23. Introduce Lock Record?

The Lock Record, which you have probably heard of, is used during lightweight locking to temporarily store the lock object's markword.

In the HotSpot source, a Lock Record is a BasicObjectLock:

class BasicObjectLock VALUE_OBJ_CLASS_SPEC {
 private:
  BasicLock _lock;
  oop       _obj;
};
class BasicLock VALUE_OBJ_CLASS_SPEC {
 private:
  volatile markOop _displaced_header; 
};

It has just two fields:

1) _displaced_header: temporarily stores the lock object's mark word during lightweight locking; hence also called the displaced mark word.

2) _obj: points to the lock object.

Besides stashing the markword, the Lock Record has another important job: implementing the reentrancy counter. Each reentrant acquisition records one more Lock Record, but with its _displaced_header set to null.

On unlock, one Lock Record is removed per exit. If its _displaced_header is null, the exit was from a reentrant acquisition and no real unlock happens; otherwise it is the last Lock Record, and the real unlock is performed.
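That bookkeeping can be modeled with a stack of displaced headers, where null marks a reentrant acquisition. A toy model, not the HotSpot code:

```java
import java.util.Deque;
import java.util.LinkedList;

public class LockRecordModel {

    private static final Object MARKWORD = "original markword";

    // each entry models one Lock Record's _displaced_header;
    // null means "reentrant acquisition, nothing to restore"
    private final Deque<Object> records = new LinkedList<>();

    void lock() {
        // the first acquisition stashes the real markword, reentries stash null
        records.push(records.isEmpty() ? MARKWORD : null);
    }

    boolean unlock() {
        // returns true only on the real (outermost) unlock
        return records.pop() != null;
    }

    public static void main(String[] args) {
        LockRecordModel m = new LockRecordModel();
        m.lock(); m.lock(); m.lock();       // one lock + two reentries
        System.out.println(m.unlock());     // false: reentrant, no real unlock
        System.out.println(m.unlock());     // false: reentrant, no real unlock
        System.out.println(m.unlock());     // true: last record, really unlock
    }
}
```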

24. What is anonymity bias?

"Anonymously biased" means the lock has never been acquired and is waiting for its first bias. Its signature is that the thread ID in the lock object's markword is 0.

Once the first thread acquires the biased lock, the thread ID changes from 0 to that thread's ID, and from then on it is never 0 again, because releasing a biased lock does not reset the thread ID.

This is why biased locking suits scenarios where only one thread ever acquires the lock.

25. Where is the hashCode stored under biased locking?

In the biased-lock markword layout there is nowhere to store the hashCode.

Therefore, once an object's identity hashCode has been computed, it can no longer enter the biased state.

And if an object currently in the biased state receives a request to compute its hashCode (a call to Object::hashCode() or System::identityHashCode(Object)), its bias is revoked immediately.

26. Bias lock process?

First: with biased locking enabled, an object's biased flag bit is 1 after creation; with biased locking disabled, the flag bit is 0 after creation.

Locking process:

1) Find a free Lock Record in the current thread's stack frame and point its obj field at the lock object.

2) Before acquiring the biased lock, various checks are made. As the locking flow chart shows, there are only two scenarios in which acquisition is attempted: anonymous bias and batch re-bias.

3) Use CAS to install the current thread's ID into the lock object's markword; if the CAS succeeds, the lock is acquired.

4) If neither scenario in step 2 applies, or the CAS fails, the bias is revoked and the lock is upgraded to a lightweight lock.

5) If the thread acquires the biased lock successfully, then on each subsequent entry into the synchronized block it only needs to check that the thread ID in the markword is its own; if so, it enters directly with almost no extra overhead.

Unlocking process:

Unlocking a biased lock is very simple: just set the Lock Record's obj field to null. The key point is that the thread ID in the lock object's markword is not reset to 0.

The markword state transitions during biased locking are shown in the figure below:

27. Batch re-bias and batch cancellation? heuristic algorithm?

We mentioned batch re-biasing above; batch revocation was introduced alongside it, and officially the two are collectively called "heuristics".

Why introduce heuristics?

From the introduction above we know that when only one thread acquires the lock, a biased lock costs one CAS on first entry into the synchronized block, and every later entry needs only a trivial check whose cost is negligible. So in single-thread scenarios the performance gain from biased locking is considerable.

However, if other threads try to acquire the lock, the bias must be revoked to the unlocked state or upgraded to a lightweight lock, and revocation has a definite cost. If the usage pattern is actually contended, causing mass bias revocations, biased locking ends up degrading performance.

The JVM developers made the following two observations from the analysis:

Observation 1: For some objects, biased locking is clearly unhelpful, for example a producer-consumer queue shared by two or more threads. Such objects inevitably see lock contention, and many such objects may be allocated while the program runs.

This observation describes a scenario with heavy lock contention. A simple, crude fix would be to disable biased locking outright, but that is not optimal.

In a whole service, such scenarios may be only a small part, and abandoning the biased-lock optimization everywhere because of that small part is clearly not cost-effective. The ideal is to identify exactly those objects and disable biased locking only for them.

Batch revocation is the optimization for this scenario.

Observation 2: In some cases it is beneficial to re-bias a group of objects to another thread, especially when one thread allocates many objects and performs an initial synchronization operation on each of them, while another thread does the subsequent work on them.

We know biased locks are designed for scenarios where only one thread acquires the lock. The second half of this observation fits that pattern, but because of the first half the benefit of biasing is lost. So what the JVM developers had to do was recognize this pattern and optimize for it.

For this scenario, batch re-biasing was introduced.

Batch re-biasing

The JVM works at class granularity, maintaining a bias-revocation counter per class. Each time an object of the class has its bias revoked, the counter is incremented.

When the counter exceeds the batch re-bias threshold (default 20), the JVM decides that scenario 2 above has been hit and batch re-biases the entire class.

Each class also has a markword; in biased mode it carries an epoch field. When an instance of the class is created, the instance's epoch is copied from the class's epoch, so normally the two are equal.

When batch re-biasing occurs, the epoch comes into play.

During batch re-biasing, the class's epoch is first incremented; then the stacks of all live threads are scanned to find every lock object of the class currently in the biased state, and their epoch values are updated to the new value.

Lock objects not currently held by any thread keep their old epoch, which is now one behind the class's epoch. The next time another thread tries to acquire such a lock object, it will not go straight to a lightweight lock just because the markword's thread ID is non-zero (i.e. the lock was once biased to another thread); instead it uses CAS to try to acquire the biased lock. That is the batch re-bias optimization in action.

PS: This corresponds to the "Is the lock object's epoch equal to the class's epoch?" decision box in the locking flow chart.

Batch revocation

Batch revocation is the follow-up to batch re-biasing. It also works at class granularity and uses the same bias-revocation counter.

After a batch re-bias, each subsequent bias revocation computes the interval since the previous revocation. If the interval is large, the bias is judged to be effective (revocations are now rare), and the revocation counter is reset to 0.

If instead the counter keeps climbing rapidly after the batch re-bias and exceeds the batch revocation threshold (default 40), the JVM concludes that instances of this class see real lock contention and are unsuitable for biased locking, and a batch revocation is triggered.

Batch revocation: set the class's markword to the non-biasable, unlocked state (biased bit 0, lock bits 01); then scan the stacks of all live threads, find all lock objects of the class still in the biased state, and revoke their biases.

From then on, when a thread tries to lock an instance of this class, it sees that the class's markword is no longer in biased mode, knows biased locking has been disabled for the class, and goes straight into the lightweight lock path.

PS: This corresponds to the "Is the lock object's class in biased mode?" decision box in the locking flow chart.

28. Lightweight lock process?

Locking process:

If biased locking is disabled, or a biased lock has been upgraded, the lightweight locking process is entered.

1) Find a free Lock Record in the current thread's stack frame and point its obj field at the lock object.

2) Copy the lock object's markword, set to its unlocked form, into the Lock Record's displaced_header field.

3) Use CAS to change the lock object's markword into a pointer to the Lock Record.

The relationship between the thread stack and the lock object at this point is shown below; note that the displaced_header of the two reentrant Lock Records is null.

Unlocking process:

1) Set the obj attribute of the Lock Record to null.

2) Use CAS to restore the displaced markword temporarily stored in the displaced_header attribute back into the markword of the lock object.
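The CAS steps above can be imitated in plain Java, with an AtomicReference standing in for the lock object's markword. This is only an illustrative sketch of the protocol (the ThinLock class is made up for this example, and reentry and inflation are omitted), not how HotSpot is implemented:

```java
import java.util.concurrent.atomic.AtomicReference;

// Illustrative sketch of the lightweight-lock CAS protocol.
// The AtomicReference plays the role of the lock object's markword:
//   null               = unlocked (lock-free markword)
//   a Thread reference = "pointer to the owner's Lock Record" (simplified)
public class ThinLock {
    private final AtomicReference<Thread> markWord = new AtomicReference<>(null);

    /** Locking step 3: CAS the markword from unlocked to "our Lock Record". */
    public boolean tryLock() {
        return markWord.compareAndSet(null, Thread.currentThread());
    }

    /** Unlocking step 2: CAS the displaced markword back into the object. */
    public boolean unlock() {
        return markWord.compareAndSet(Thread.currentThread(), null);
    }

    public static void main(String[] args) {
        ThinLock lock = new ThinLock();
        System.out.println(lock.tryLock()); // true  - CAS succeeds on an unlocked markword
        System.out.println(lock.tryLock()); // false - in HotSpot this is where reentry
                                            //         (null displaced_header) or inflation kicks in
        System.out.println(lock.unlock());  // true
    }
}
```

The failed second CAS is exactly the signal HotSpot uses: the same thread means reentry (push another Lock Record with a null displaced_header), a different thread means contention and inflation to a heavyweight lock.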

29. Heavyweight lock process?

Locking process:

When a lightweight lock is contended, it inflates into a heavyweight lock.

1) Allocate an ObjectMonitor and populate its attributes.

2) Modify the markword of the lock object to the address of the ObjectMonitor plus the heavyweight-lock flag bits (10).

3) Attempt to acquire the lock; if that fails, spin a few times trying to acquire it.

4) If it still fails after several attempts, wrap the thread in an ObjectWaiter node, insert it into the _cxq linked list, and block the current thread.

5) When the lock holder releases the lock, a node in the list is woken up. The awakened node tries to acquire the lock again; once it succeeds, it removes itself from the _cxq (or _EntryList) linked list.

The relationship between the thread stack, lock object, and ObjectMonitor at this time is shown in the following figure:

The core properties of ObjectMonitor are as follows:

ObjectMonitor() {
    _header       = NULL; // original markword (object header) of the lock object
    _count        = 0;    // number of threads contending for this lock; roughly _WaitSet count + _EntryList count
    _waiters      = 0,    // number of threads waiting after calling wait()
    _recursions   = 0;    // lock reentry count
    _object       = NULL; // pointer to the lock object
    _owner        = NULL; // thread currently holding the lock
    _WaitSet      = NULL; // threads that have called wait() on the lock
    _WaitSetLock  = 0 ;   // lock protecting the _WaitSet linked list
    _Responsible  = NULL ;
    _succ         = NULL ;  // heir presumptive (the likely next owner)
    _cxq          = NULL ;  // list of threads waiting for the lock; a thread that fails to acquire the lock is first pushed onto _cxq, and later moved into _EntryList
    FreeNext      = NULL ;  // pointer to the next free ObjectMonitor
    _EntryList    = NULL ;  // list of threads waiting for the lock; the head of this list is the first candidate to acquire it
    _SpinFreq     = 0 ;
    _SpinClock    = 0 ;
    OwnerIsThread = 0 ; // whether _owner points to the owning thread (1) or to a BasicLock (0); the latter occurs while a lightweight lock is being inflated
    _previous_owner_tid = 0;  // thread id of the monitor's previous owner
  }

Unlocking process:

1) Decrement the reentry counter, i.e. the _recursions attribute of the ObjectMonitor.

2) Release the lock first by setting the _owner attribute to null. From this moment, other threads, such as a spinning thread, can already acquire the lock.

3) Wake up the next thread node from the _EntryList or _cxq linked list.
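The monitor queues described here are observable from plain Java: wait() parks the calling thread in the monitor's _WaitSet, and notify() moves it back toward the entry queue (_cxq/_EntryList), from which it re-competes for the lock. The demo below is a runnable illustration (the class name and the boolean handshake that makes the interleaving deterministic are this example's own invention):

```java
// wait() releases the monitor and parks the thread in _WaitSet;
// notify() moves it toward the entry queue, where it must reacquire the lock
// before wait() returns. The log records the resulting order of events.
public class MonitorDemo {
    static String run() throws InterruptedException {
        final Object lock = new Object();
        final StringBuilder log = new StringBuilder(); // only touched while holding `lock`
        final boolean[] waiting = {false};
        final boolean[] go = {false};

        Thread waiter = new Thread(() -> {
            synchronized (lock) {                  // acquire the monitor
                log.append("waiting;");
                waiting[0] = true;
                lock.notifyAll();                  // tell main we are about to park
                while (!go[0]) {
                    try { lock.wait(); }           // release monitor, park in _WaitSet
                    catch (InterruptedException ignored) {}
                }
                log.append("woken;");              // monitor reacquired after notify
            }
        });
        waiter.start();

        synchronized (lock) {
            while (!waiting[0]) lock.wait();       // wait until the waiter has parked
            log.append("notifying;");
            go[0] = true;
            lock.notifyAll();                      // move the waiter toward the entry queue
        }
        waiter.join();
        return log.toString();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // waiting;notifying;woken;
    }
}
```

Note that "woken;" can only be appended after the notifying thread has exited its synchronized block: being notified does not grant the lock, it only makes the thread a candidate in the entry queue again.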

30. What is the queuing strategy of _cxq linked list and _EntryList linked list?

As mentioned above, "the nodes of the _cxq linked list will at some point be transferred to the _EntryList linked list". When exactly does that happen?

Generally speaking, when the thread holding the lock releases it, it needs to wake up the next thread node in the linked list. At that point, if _EntryList is empty and _cxq is not, the nodes of the _cxq linked list are transferred to _EntryList.

However, this is not all the case. The queuing strategy (QMode) and execution order of the queuing strategy of the _cxq linked list and the _EntryList linked list are as follows:

1) When QMode = 2, _cxq has a higher priority than _EntryList: if _cxq is not empty, the head node of the _cxq linked list is woken up directly. In every mode other than QMode = 2, it is the head node of _EntryList that ends up being woken.

2) When QMode = 3, regardless of whether _EntryList is empty, the nodes in the _cxq linked list will be directly transferred to the end of the _EntryList linked list.

3) When QMode = 4, regardless of whether _EntryList is empty, the nodes in the _cxq linked list will be directly transferred to the head of the _EntryList linked list.

4) Reaching this point, if _EntryList is not empty, directly wake up the head node of _EntryList and return; if _EntryList is empty, continue.

5) Reaching this point means that _EntryList is empty.

6) When QMode = 1, transfer the nodes of the _cxq linked list to _EntryList in reverse order: the original head of _cxq becomes the tail of _EntryList.

7) In the remaining cases (including the default, QMode = 0), transfer the nodes of the _cxq linked list to _EntryList with their order unchanged.

8) If _EntryList is not empty at this time, wake up the head node of _EntryList.
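Steps 1) through 8) above can be condensed into a small simulation. Threads are plain strings here, the real lists hold ObjectWaiter nodes, and the ExitPolicy class below models only the decision logic, not HotSpot's actual ObjectMonitor::exit code:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Simplified, illustrative simulation of the _cxq/_EntryList wake-up policy.
public class ExitPolicy {

    /** Picks the thread to wake, transferring _cxq into _EntryList as needed. */
    public static String pickSuccessor(int qMode, Deque<String> cxq, Deque<String> entryList) {
        // 1) QMode = 2: the head of _cxq is woken directly, ahead of _EntryList.
        if (qMode == 2 && !cxq.isEmpty()) return cxq.pollFirst();
        // 2) QMode = 3: splice _cxq onto the TAIL of _EntryList.
        if (qMode == 3) while (!cxq.isEmpty()) entryList.addLast(cxq.pollFirst());
        // 3) QMode = 4: splice _cxq onto the HEAD of _EntryList (order preserved).
        if (qMode == 4) while (!cxq.isEmpty()) entryList.addFirst(cxq.pollLast());
        // 4) If _EntryList is non-empty now, wake its head and return.
        if (!entryList.isEmpty()) return entryList.pollFirst();
        // 5) Reaching here means _EntryList is empty: drain _cxq into it.
        if (qMode == 1) {
            // 6) QMode = 1: reversed, the old head of _cxq becomes the tail of _EntryList.
            while (!cxq.isEmpty()) entryList.addLast(cxq.pollLast());
        } else {
            // 7) Default (QMode = 0): same order.
            while (!cxq.isEmpty()) entryList.addLast(cxq.pollFirst());
        }
        // 8) Wake the head of _EntryList, if any.
        return entryList.pollFirst(); // null if both lists were empty
    }

    public static void main(String[] args) {
        Deque<String> cxq = new ArrayDeque<>(java.util.List.of("t1", "t2"));
        Deque<String> entryList = new ArrayDeque<>(java.util.List.of("t3"));
        // Default mode: the _EntryList head wins while _EntryList is non-empty.
        System.out.println(pickSuccessor(0, cxq, entryList)); // t3
        // _EntryList is now empty, so _cxq is drained in order and its head is woken.
        System.out.println(pickSuccessor(0, cxq, entryList)); // t1
    }
}
```

Running the default mode twice shows the two-phase behavior: _EntryList is served first, and _cxq is only drained into it once _EntryList runs dry.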

finally

I'm Jon Hui, a programmer who insists on sharing original technical content. My goal is to help you land an offer from your favorite big company. See you in the next issue.


Origin blog.csdn.net/v123411739/article/details/117401299