An in-depth look at the underlying implementation of the synchronized lock

Preface

When it comes to multithreading, synchronized is unavoidable. Many developers only know how to use it without understanding its underlying implementation, which is also a frequent interview topic, for example:

  1. The underlying implementation principle of synchronized
  2. How the synchronized lock is implemented in the JVM
  3. The synchronized lock upgrade sequence
  4. Advantages, disadvantages, and application scenarios of synchronized locks

Synchronized

The word synchronized means "synchronization"; it is also called the "synchronized lock".

The function of synchronized is to ensure that, at any given moment, the code block or method it modifies is executed by only one thread, thereby guaranteeing concurrency safety.

Use of Synchronized

1. Three ways to use synchronized

  •  Modifying an instance method: locks on the current instance
  •  Modifying a static method: locks on the current class object
  •  Modifying a code block: locks on the specified lock object

2. Synchronized code example
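A minimal sketch of the three usage forms (class and member names here are illustrative, not from the original article):

```java
public class SyncDemo {
    private static int staticCount = 0;
    private int count = 0;
    private final Object mutex = new Object();

    // 1. Synchronized instance method: locks on the current instance (this)
    public synchronized void incInstance() {
        count++;
    }

    // 2. Synchronized static method: locks on the class object (SyncDemo.class)
    public static synchronized void incStatic() {
        staticCount++;
    }

    // 3. Synchronized block: locks on the specified object (here, mutex)
    public void incBlock() {
        synchronized (mutex) {
            count++;
        }
    }

    public int getCount() { return count; }
    public static int getStaticCount() { return staticCount; }
}
```

Note that the instance method and the static method lock on different objects, so they do not block each other.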

The underlying implementation of Synchronized

The underlying implementation of synchronized depends entirely on the JVM.

So to discuss the underlying implementation of synchronized, we first have to talk about how data is stored in JVM memory: the Java object header, and the Monitor object monitor.

1. Java object header

In the JVM, the memory layout of an object can be divided into three areas:

  •  Object header (Header)
  •  Instance Data
  •  Padding

The Java object header mainly includes two parts of data:

  •  Type pointer ( Klass Pointer ): a pointer from the object to its class metadata; the JVM uses this pointer to determine which class the object is an instance of.
  •  Mark Word: stores the object's own runtime data, such as its hash code (HashCode), GC generational age, lock state flags, the thread holding the lock, the biased thread ID, the biased timestamp, and so on. It is the key to implementing lightweight locks and biased locks.

2. Java lock object storage location

Therefore, it is clear that the lock state used by synchronized is stored in the Mark Word of the Java object header.

3. Monitor

The synchronized object lock holds a pointer to the starting address of a monitor object (implemented in C++ inside the JVM). In fact, every Java object is associated with a monitor.

Monitor is described as an object monitor and can be compared to a special room holding protected data. The monitor guarantees that only one thread can enter the room at a time to access that data: entering the room means acquiring the monitor, and exiting the room means releasing it.

When the bytecode engine executes a code block locked by synchronized, it does so mainly through the acquisition and release of the lock object's monitor.
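This is visible in the bytecode: compiling a synchronized block and disassembling it with `javap -c` shows paired monitorenter/monitorexit instructions. A minimal sketch (class name is illustrative):

```java
public class MonitorDemo {
    private final Object lock = new Object();
    private int count = 0;

    public void increment() {
        synchronized (lock) { // compiles to monitorenter: acquire the monitor of lock
            count++;
        }                     // compiles to monitorexit: release the monitor
                              // (javac also emits a second monitorexit in an
                              // implicit exception handler, so the monitor is
                              // released even if the block throws)
    }

    public int getCount() { return count; }
}
```

A synchronized method, by contrast, carries the ACC_SYNCHRONIZED flag instead of explicit monitorenter/monitorexit instructions, but the effect is the same monitor acquisition and release.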

4. Thread state flow is reflected on Monitor

When multiple threads request the same object's monitor at the same time, the monitor uses several states to distinguish the requesting threads:

  •  Contention List: all threads requesting the lock are first placed in this contention queue
  •  Entry List: threads in the Contention List that qualify as candidates are moved to the Entry List
  •  Wait Set: threads blocked by calling the wait method are placed in the Wait Set
  •  OnDeck: at most one thread is competing for the lock at any time; this thread is called OnDeck
  •  Owner: the thread that has acquired the lock is called the Owner
  •  !Owner: the thread that has released the lock

The following figure reflects a state transition relationship
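The Wait Set transition can be exercised with a minimal wait/notify sketch (class name is illustrative): a thread calling wait() inside a synchronized block releases the monitor and parks in the Wait Set; notify/notifyAll moves it back to compete for the monitor.

```java
public class WaitSetDemo {
    private final Object lock = new Object();
    private boolean ready = false;

    // Blocks the caller in the Wait Set until signal() is called.
    public void await() throws InterruptedException {
        synchronized (lock) {
            while (!ready) {      // loop guards against spurious wakeups
                lock.wait();      // releases the monitor, enters the Wait Set
            }
        }
    }

    public void signal() {
        synchronized (lock) {
            ready = true;
            lock.notifyAll();     // wakes waiters; they re-compete for the monitor
        }
    }
}
```

Note that wait() and notify() may only be called while holding the object's monitor; otherwise an IllegalMonitorStateException is thrown.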

Synchronized lock upgrade sequence

Locking solves data safety, but it also brings performance degradation. The HotSpot authors found through investigation that in most cases the locked code is not contended by multiple threads at all, and is often acquired repeatedly by the same thread. Based on this observation, synchronized was optimized after JDK 1.6: to reduce the overhead of acquiring and releasing locks, biased locks, lightweight locks, spin locks, and heavyweight locks were introduced. The lock state upgrades continuously from low to high as contention intensifies.

1. Biased lock

Biased locking is a lock optimization introduced in JDK 6. In most cases a lock is not only free of multi-thread contention but is always acquired repeatedly by the same thread; biased locking was introduced to make lock acquisition cheaper for that thread.

A biased lock is biased toward the first thread that acquires it. If the lock is never acquired by another thread during subsequent execution, the thread holding the biased lock never needs to perform synchronization again.
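In HotSpot, biased locking can be toggled with JVM flags (it was enabled by default from JDK 6 and was deprecated in JDK 15; `MyApp` below is a placeholder for your main class):

```shell
# Enable or disable biased locking explicitly (HotSpot)
java -XX:+UseBiasedLocking MyApp
java -XX:-UseBiasedLocking MyApp

# HotSpot applies biasing only after a startup delay by default;
# setting the delay to 0 is handy when experimenting
java -XX:BiasedLockingStartupDelay=0 MyApp
```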

2. Lightweight lock

Once another thread visibly starts requesting the lock, the biased lock is quickly upgraded to a lightweight lock.

3. Spin lock

The principle of a spin lock is simple: if the thread holding the lock can release it within a short time, the threads waiting for it do not need to switch between user mode and kernel mode to enter a blocked, suspended state. They simply wait in a loop (spin) and can acquire the lock immediately after the holder releases it, thereby avoiding the cost of user/kernel mode switches.
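The spinning idea can be sketched with a minimal user-level spin lock built on compare-and-swap (an illustration of the concept, not the JVM's internal implementation):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Minimal spin lock sketch: waiters busy-wait on a CAS instead of blocking,
// avoiding the user/kernel mode switch of suspending a thread.
public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // Spin until the CAS from false -> true succeeds.
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait(); // CPU hint (JDK 9+); a plain busy loop also works
        }
    }

    public void unlock() {
        locked.set(false);
    }
}
```

This is only appropriate when critical sections are short; otherwise the spinning burns CPU, which is exactly the trade-off described above.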

4. Heavyweight lock

This refers to the original synchronized implementation. The characteristic of a heavyweight lock is that other threads trying to acquire it are blocked, and they are awakened only after the thread holding the lock releases it.

Comparison of the advantages and disadvantages of biased locks, lightweight locks, and heavyweight locks

  •  Biased lock: locking and unlocking need no extra overhead, nearly as fast as an unsynchronized method; if threads contend for the lock, revoking the bias adds extra cost; suited to scenarios where only one thread ever accesses the synchronized block.
  •  Lightweight lock: competing threads do not block, which improves responsiveness; a thread that never wins the lock keeps spinning and consumes CPU; suited to scenarios that pursue response time, where the synchronized block executes quickly.
  •  Heavyweight lock: waiting threads do not spin, so no extra CPU is consumed; threads block and response time is slow; suited to scenarios that pursue throughput, where the synchronized block executes for a long time.

Origin blog.csdn.net/sinat_37903468/article/details/108939825