From shallow to deep: a thorough understanding of Synchronized and the principles behind its underlying implementation

This article records the common usage scenarios of Synchronized and the principle of its underlying implementation. Although we often use the Synchronized keyword in multithreaded code, we are usually not too concerned with how this familiar keyword is actually implemented at the lower level. As developers, since we use it anyway, we may as well lift the veil on its internals step by step.

Why use Synchronized?

First, let's look at some code:


public class Demo {
    private static int count=0;
    public /*synchronized*/ static void inc(){
        try {
            Thread.sleep(1);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        count++;
    }
    public static void main(String[] args) throws InterruptedException {
        for(int i=0;i<1000;i++){
            new Thread(()-> Demo.inc()).start();
        }
        Thread.sleep(3000);
        System.out.println("Result: " + count);
    }
}

Running this code prints something like "Result: 970". In this version the synchronized keyword is still commented out: we start 1000 threads in a loop, each of which increments the shared count variable. The result tells us that access to this shared variable is not thread safe (we expect the 1000 increments to produce a result of 1000, but some updates are lost). Adding the Synchronized keyword to inc() solves the problem, as sketched below.
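The fix is simply to re-enable the keyword that was commented out above; a minimal sketch (the Thread.sleep(1) inside inc() is dropped here so that the 3-second wait in main is comfortably long enough once the calls are serialized):

public class Demo {
    private static int count = 0;

    // synchronized on a static method locks the Demo.class object,
    // so only one thread at a time can execute the increment
    public synchronized static void inc() {
        count++;
    }

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 1000; i++) {
            new Thread(Demo::inc).start();
        }
        Thread.sleep(3000);
        System.out.println("Result: " + count); // now reliably prints 1000
    }
}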

An overview of synchronized

In multithreaded programming synchronized has long played the role of a veteran, and many people call it a heavyweight lock. However, after Java SE 1.6 applied various optimizations to synchronized, in some situations it is not that heavy at all: to reduce the performance overhead of acquiring and releasing locks, Java SE 1.6 introduced biased locking and lightweight locking. These will be introduced gradually later on.

The basic syntax of synchronized

synchronized can be used to lock in three ways (a short sketch follows the list below):

  1. Modifying an instance method: the lock applies to the current instance, and a thread must acquire the lock on that instance before entering the synchronized code.
  2. Modifying a static method: the lock applies to the current class object (the Class instance), and a thread must acquire the lock on that class object before entering the synchronized code.
  3. Modifying a code block: the lock object is specified explicitly, and a thread must acquire the lock on that given object before entering the synchronized block.
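A minimal sketch of the three forms (the class and field names here are only illustrative):

public class SyncForms {
    private final Object lock = new Object();
    private int value;

    // 1. instance method: locks on "this"
    public synchronized void incInstance() {
        value++;
    }

    // 2. static method: locks on SyncForms.class
    public static synchronized void staticWork() {
        // ...
    }

    // 3. code block: locks on the object named in parentheses
    public void incBlock() {
        synchronized (lock) {
            value++;
        }
    }
}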

(Figure: a diagram found online that roughly summarizes the Synchronized usage scenarios listed above.)

synchronized Principle Analysis

The Java object header and the monitor are the foundation of synchronized! These two concepts are explained in detail below.

Regarding the monitor, let's look at a small demo first:

package com.thread;

public class Demo1{

    private static int count = 0;

    public static void main(String[] args) {
        synchronized (Demo1.class) {
            inc();
        }

    }
    private static void inc() {
        count++;
    }
}

The demo above uses the synchronized keyword to lock the class object. After compiling, switch to the directory containing Demo1.class and view the bytecode with javap -v Demo1.class:

public static void main(java.lang.String[]);
    descriptor: ([Ljava/lang/String;)V
    flags: ACC_PUBLIC, ACC_STATIC
    Code:
      stack=2, locals=3, args_size=1
         0: ldc           #2                  // class com/thread/Demo1
         2: dup
         3: astore_1
         4: monitorenter       // note: acquire the object's monitor
         5: invokestatic  #3                  // Method inc:()V
         8: aload_1
         9: monitorexit    // note: release the monitor (normal exit path)
        10: goto          18
        13: astore_2
        14: aload_1
        15: monitorexit   // note: release the monitor (exception path)
        16: aload_2
        17: athrow
        18: return


When a thread acquires the lock, what it actually acquires is a monitor object (monitor). The monitor can be regarded as a synchronization object; every Java object is born carrying a monitor, but the monitor only comes into play once the Synchronized keyword is used. A synchronized block is compiled into the monitorenter and monitorexit instructions; both instructions essentially operate on the object's monitor, and acquisition is exclusive, which means that at any one moment only one thread can hold the monitor of the object protected by synchronized. When a thread executes the monitorenter instruction, it tries to obtain ownership of the corresponding object's monitor, that is, it tries to obtain the object's lock; when it executes monitorexit, it releases that ownership. Note that the bytecode above contains two monitorexit instructions: the second one (at offset 15) lives in the implicit exception handler, so the monitor is released even if the block exits by throwing.
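Since every object carries a monitor, any object can serve as the lock. A small sketch with illustrative names, using the common idiom of synchronizing on a dedicated lock object rather than on this:

public class Counter {
    // any Java object can act as a monitor; here a dedicated lock object is used
    private final Object lock = new Object();
    private int count;

    public void inc() {
        synchronized (lock) {   // compiled to monitorenter on lock's monitor
            count++;
        }                       // monitorexit, also emitted on the exception path
    }
}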

Object layout in memory

In the HotSpot virtual machine, the in-memory layout of an object can be divided into three regions: the object header (Header), the instance data (Instance Data), and the alignment padding (Padding). Generally speaking, the lock used by synchronized is stored in the Java object's header; this is the key to biased locking and lightweight locking.


Java object header

The object header mainly consists of two parts: the Mark Word (mark field) and the Klass Pointer (type pointer). Klass Pointer: a pointer to the object's class metadata; the virtual machine uses this pointer to determine which class the object is an instance of. Mark Word: stores the object's own runtime data, such as the hash code (HashCode), GC generational age, lock status flags, the thread holding the lock, the biased thread ID, the biased timestamp, and so on; it is the key to implementing lightweight locking and biased locking.
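To peek at the object header at runtime, one option is the OpenJDK JOL library (org.openjdk.jol:jol-core). A minimal sketch, assuming that dependency is on the classpath:

import org.openjdk.jol.info.ClassLayout;

public class HeaderDemo {
    public static void main(String[] args) {
        Object obj = new Object();
        // prints the object's layout, including the mark word and the klass pointer
        System.out.println(ClassLayout.parseInstance(obj).toPrintable());

        synchronized (obj) {
            // inside the synchronized block the mark word reflects the lock state
            System.out.println(ClassLayout.parseInstance(obj).toPrintable());
        }
    }
}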

(Figure: the four states the Mark Word can store when locked.)

synchronized lock escalation

The discussion of the Mark Word mentioned biased locks, lightweight locks and heavyweight locks. Before analyzing the differences between these kinds of locks, let's think about a problem: using a lock gives us data safety but brings a performance cost, while not using a lock lets threads run fully in parallel for better performance but cannot guarantee thread safety. There seems to be no way to satisfy both the performance requirement and the safety requirement at the same time.

After investigating the HotSpot virtual machine, its authors found that in most cases the code protected by a lock is not only free of multi-threaded contention, but is also always acquired by the same thread multiple times. Based on this probability, synchronized was optimized after JDK 1.6: to reduce the performance overhead of acquiring and releasing locks, the concepts of biased locking and lightweight locking were introduced. So you will find that synchronized has four lock states: no lock, biased lock, lightweight lock and heavyweight lock, and the lock escalates from low to high according to how fierce the contention is.

The basic principle of biased locking

Acquiring a biased lock. As mentioned above, in most cases a lock is not contended by multiple threads and is always acquired by the same thread several times; the concept of a biased lock was introduced so that this thread can acquire the lock at a lower cost. How should we understand a biased lock? When a thread accesses a code block protected by a synchronization lock, the current thread's ID is stored in the object header; when that thread subsequently enters and exits the same synchronized block, it does not need to lock and unlock again, it simply compares the biased thread ID stored in the object header with its own. If they are equal, the biased lock is biased towards the current thread and there is no need to try to acquire the lock again. A biased lock uses a mechanism that only revokes the bias when contention appears, so only when another thread tries to compete for the biased lock does the thread holding it release it, and the lock is upgraded to the lightweight lock state. When the bias of the thread originally holding the biased lock is revoked, there are two cases:

  1. If the thread that originally obtained the biased lock has already left the critical section, i.e. its synchronized block has finished executing, the object header is set back to the lock-free state and the competing thread can re-bias the lock to itself via CAS.
  2. If the thread that originally obtained the biased lock has not yet finished its synchronized block and is still inside the critical section, the biased lock is upgraded to a lightweight lock and that thread continues to execute the synchronized block holding the lightweight lock. In real application development there will certainly be more than two threads contending for the lock in the vast majority of cases, so if biased locking is enabled it can actually increase the cost of acquiring the lock. For this reason, biased locking can be switched on or off with the JVM parameter UseBiasedLocking (see the example after this list).
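For reference, a sketch of the relevant HotSpot command-line flags (availability and defaults depend on the JDK version, and the class name Demo is just the example from earlier):

# enable biased locking explicitly and remove the startup delay
java -XX:+UseBiasedLocking -XX:BiasedLockingStartupDelay=0 Demo

# disable biased locking entirely
java -XX:-UseBiasedLocking Demo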

(Figure: a classic biased locking flowchart found online.)

The basic principle of lightweight locking

Locking

After this upgrade the lock is a lightweight lock, and the Mark Word of the lock object changes accordingly. The process of upgrading to a lightweight lock is as follows (a CAS sketch follows the list):

  1. The thread creates a lock record (LockRecord) in its own stack frame.
  2. The Mark Word in the lock object's header is copied into the lock record the thread just created (this copy is known as the Displaced Mark Word).
  3. The Owner pointer of the lock record is made to point to the lock object.
  4. The Mark Word in the lock object's header is replaced, via CAS, with a pointer to the lock record.
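The swap in step 4 is a compare-and-swap (CAS). The JVM performs it on the object header natively; purely as an analogy (with illustrative names, not the JVM's actual implementation), the idea can be sketched at the Java level with an AtomicReference:

import java.util.concurrent.atomic.AtomicReference;

public class CasSketch {
    // stand-in for the object's mark word: either its original value
    // or a pointer-like reference to a thread's lock record
    static final AtomicReference<Object> markWord = new AtomicReference<>("unlocked-mark");

    public static void main(String[] args) {
        Object lockRecord = new Object();   // stand-in for a LockRecord in a stack frame
        Object displaced = markWord.get();  // step 2: copy the current mark word

        // step 4: atomically swap in a pointer to the lock record,
        // but only if nobody changed the mark word in the meantime
        boolean locked = markWord.compareAndSet(displaced, lockRecord);
        System.out.println(locked ? "lightweight lock acquired" : "CAS failed: contention");
    }
}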

Spin locks

During the locking process, a lightweight lock uses so-called spinning, i.e. a spin lock. This means that when another thread is competing for the lock, the current thread waits in a loop on the spot rather than being blocked; once the thread holding the lock releases it, the waiting thread can acquire the lock immediately. Note that looping on the spot consumes CPU; it is roughly equivalent to running a for loop that does nothing. So the lightweight lock is suited to scenarios where the synchronized block executes quickly, so that a waiting thread only needs a very short time to obtain the lock.

The use of spin locks also rests on a probabilistic assumption: most of the time the synchronized block finishes very quickly, so a seemingly pointless loop can actually improve the performance of the lock. However, the spinning must be bounded; otherwise, if a thread executes its synchronized block for a long time, the spinning thread just keeps consuming CPU resources. By default the number of spins is 10, which can be changed via the parameter preBlockSpin. After JDK 1.6, adaptive spin locks were introduced: adaptive means the number of spins is no longer fixed, but is decided by the duration of the previous spin on the same lock and the state of the lock's owner. If, for the same lock object, a spin wait has just succeeded in acquiring the lock and the thread holding the lock is running, the virtual machine assumes that this spin is also very likely to succeed and allows the spin to last relatively longer. If spinning has rarely succeeded for a given lock, then future attempts to acquire that lock may skip spinning entirely and block the thread directly, to avoid wasting processor resources.
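To illustrate the idea of spinning (this is a user-level sketch, not the JVM's internal implementation), here is a minimal spin lock built on an AtomicBoolean:

import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // busy-wait: keep retrying the CAS instead of blocking the thread
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait(); // hint that we are spinning (available since JDK 9)
        }
    }

    public void unlock() {
        locked.set(false);
    }
}

A thread calls lock(), runs its short critical section, then calls unlock(); this only pays off when the critical section is short, which is exactly the assumption behind lightweight locking.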

Unlock

When a lightweight lock is unlocked, an atomic CAS operation is used to replace the Displaced Mark Word back into the object header. If it succeeds, it means no contention occurred. If it fails, it means there is contention on the current lock, and the lock inflates into a heavyweight lock.

(Figure: lightweight lock flowchart.)

Heavyweight lock

When a lightweight lock inflates into a heavyweight lock, threads that fail to acquire the lock can only be blocked and suspended, waiting to be woken up.

Comparison of the locks

(Figure: a comparison of the different lock types.)

To sum up

At runtime the JVM will, depending on the actual situation, automatically upgrade the locks added by the Synchronized keyword in order to optimize itself. The above covers the implementation principle of Synchronized, the optimizations made after Java 1.6, and the lock escalation that may be encountered in practice. Although we all know how to use the synchronized keyword, digging step by step into the principle behind its implementation is a pleasure in itself.


Origin juejin.im/post/5d00de5bf265da1b6c5f6f43