[Multithreading] Synchronized keyword

Synchronized is a very important keyword in Java.

1. Origins

  Everything has an origin; the following code serves as a lead-in to why synchronized is needed.

public class SynchronizedDemo implements Runnable {
    private static int count = 0;

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            Thread thread = new Thread(new SynchronizedDemo());
            thread.start();
        }
        try {
            // crude wait for the worker threads to finish
            Thread.sleep(500);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("result: " + count);
    }

    @Override
    public void run() {
        for (int i = 0; i < 1000000; i++)
            count++;    // read-modify-write, not an atomic operation
    }
}

Ten threads operate on count concurrently, each performing 1,000,000 increments, so the expected result is 10,000,000. The actual result is usually smaller and differs from run to run, because count++ is not an atomic operation.
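One way to fix the demo is to turn the increment into a critical section. Note that every thread gets its own SynchronizedDemo instance, so synchronizing on this would not help; the sketch below (with the worker renamed SynchronizedCounterDemo for illustration) locks on the shared class object and joins the threads instead of sleeping:

public class SynchronizedCounterDemo implements Runnable {
    private static int count = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[10];
        for (int i = 0; i < 10; i++) {
            threads[i] = new Thread(new SynchronizedCounterDemo());
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();                                   // wait for all workers to finish
        }
        System.out.println("result: " + count);          // always prints 10000000
    }

    @Override
    public void run() {
        for (int i = 0; i < 1000000; i++) {
            // all threads must lock the same object; 'this' is different for each thread here
            synchronized (SynchronizedCounterDemo.class) {
                count++;
            }
        }
    }
}

Because all ten threads now contend for the same monitor, only one of them can execute count++ at a time, and the result is always 10,000,000.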

2. Synchronized implementation principle

The principle: the JVM synchronizes code blocks and methods by making threads enter and exit an object's monitor (Monitor).

Look at the following piece of code

public class SynchronizedDemo {
    public static void main(String[] args) {
        synchronized (SynchronizedDemo.class) {
            // synchronized block, locked on the SynchronizedDemo.class object
        }
        method();
    }

    private static synchronized void method() {
        // static synchronized method, also locked on SynchronizedDemo.class
    }
}

After compiling, inspect the bytecode with javap -v SynchronizedDemo.class:
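The relevant part of the output looks roughly like the following (a simplified excerpt; the exact constant-pool indexes, offsets and flag formatting depend on the JDK version):

public static void main(java.lang.String[]);
    ...
     0: ldc           #2       // class SynchronizedDemo
     2: dup
     3: astore_1
     4: monitorenter             // acquire the monitor of SynchronizedDemo.class
     5: aload_1
     6: monitorexit              // release on the normal exit path
     7: goto          15
    10: astore_2
    11: aload_1
    12: monitorexit              // release on the exception exit path
    13: aload_2
    14: athrow
    15: invokestatic  #3       // Method method:()V
    18: return

private static synchronized void method();
    flags: ACC_PRIVATE, ACC_STATIC, ACC_SYNCHRONIZED
    ...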

As the output shows, the compiler emits a monitorenter instruction at the entry of the synchronized block and monitorexit instructions at its exits (one for the normal path and one for the exception path, which is why two monitorexit instructions appear in main). In other words, the key to synchronized is acquiring the object's monitor: a thread may only continue into the block once it has obtained the monitor, otherwise it has to wait, and this acquisition is mutually exclusive, so only one thread can hold the monitor at a time. The static synchronized method, by contrast, contains no explicit monitor instructions; it is marked with the ACC_SYNCHRONIZED flag instead, and the JVM acquires and releases the monitor of the class object (SynchronizedDemo.class) around the call. Does a thread that already holds a monitor have to compete for it again when it enters another synchronized region guarded by the same lock? No, it does not: synchronized is inherently reentrant. Each object's monitor keeps a counter; when the owning thread acquires the lock again the counter is incremented, each release decrements it, and the lock is only freed when the counter drops back to zero.
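Reentrancy is easy to observe with a small example: a thread that already holds a lock can enter another synchronized region guarded by the same lock without blocking itself (a minimal sketch):

public class ReentrantDemo {
    public static void main(String[] args) {
        new ReentrantDemo().outer();
    }

    public synchronized void outer() {
        System.out.println("in outer, lock on 'this' held");
        inner();    // re-acquires the same monitor; its counter goes from 1 to 2
    }

    public synchronized void inner() {
        System.out.println("in inner, same lock acquired again without blocking");
    }   // the counter drops on each exit; the lock is released when it reaches 0
}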
Any object has its own monitor. When a synchronized block or synchronized method of that object is invoked, the executing thread must first obtain the object's monitor before it can enter; a thread that fails to get the monitor is blocked at the entrance of the synchronized block or method and moves to the BLOCKED state.
The figure below shows the relationship between the object, the object's monitor, the synchronization queue, and the states of the executing threads:

 

As can be seen, any thread that wants to access the Object must first obtain its monitor. If the acquisition fails, the thread enters the synchronization queue and its state changes to BLOCKED; when the current owner of the monitor releases it, the threads waiting in the synchronization queue get a chance to acquire the monitor again.
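The BLOCKED state can be observed directly (a sketch; the sleeps are only there to make the timing deterministic enough for illustration):

public class BlockedStateDemo {
    private static final Object LOCK = new Object();

    public static void main(String[] args) throws InterruptedException {
        Thread holder = new Thread(() -> {
            synchronized (LOCK) {
                try { Thread.sleep(1000); } catch (InterruptedException ignored) { }
            }
        });
        Thread waiter = new Thread(() -> {
            synchronized (LOCK) { }    // must wait for holder to release the monitor
        });

        holder.start();
        Thread.sleep(100);             // give holder time to grab the monitor first
        waiter.start();
        Thread.sleep(100);             // give waiter time to reach the monitor entry

        System.out.println(waiter.getState());   // typically prints BLOCKED
        holder.join();
        waiter.join();
    }
}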

3. Memory semantics of lock acquisition and lock release
public class MonitorDemo {
    private int a = 0;

    public synchronized void writer() {     // 1
        a++;                                // 2
    }                                       // 3

    public synchronized void reader() {    // 4
        int i = a;                         // 5
    }                                      // 6
}

 

As the figure shows, thread A first reads the shared variable a = 0 from main memory and copies it into its own local (working) memory, performs the increment there, and then flushes the new value back to main memory when it releases the lock. The whole sequence, thread A acquires the lock -> executes the critical section -> releases the lock, carries the corresponding memory semantics.

 

 

When thread B acquires the lock, it likewise reads the value of the shared variable a from main memory; by now this is the latest value, 1. That value is copied into thread B's working memory, and when thread B releases the lock the value is again written back to main memory.

Overall, the result of thread A's execution (a = 1) is visible to thread B. The underlying principle: when the lock is released the working copy is flushed to main memory, and when another thread acquires the lock it is forced to reload the latest value from main memory.

Viewed from a distance, it is as if thread A and thread B communicate through the shared variable in main memory: A tells B "our shared data has changed". This inter-thread communication mechanism is exactly the shared-memory model of concurrency described by the Java memory model.
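In terms of the MonitorDemo above: if thread A calls writer() and thread B then calls reader() on the same instance, the unlock at 3 happens-before the lock at 4, so the read at 5 is guaranteed to see a = 1. A usage sketch (the join before starting B is only there to force the ordering for illustration):

public class MonitorDemoUsage {
    public static void main(String[] args) throws InterruptedException {
        MonitorDemo demo = new MonitorDemo();
        Thread a = new Thread(demo::writer);  // lock -> a++ -> unlock, value flushed to main memory
        Thread b = new Thread(demo::reader);  // lock -> reads the latest value of a from main memory
        a.start();
        a.join();                             // make sure A has released the lock first
        b.start();
        b.join();
    }
}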

4. Synchronized optimization
From the discussion above we should now have a clear impression of synchronized: its defining feature is that only one thread at a time can obtain the object's monitor and enter the synchronized block or method, i.e. it is mutually exclusive (exclusive). That is certainly not an efficient way to get work done, and since the "only one thread passes at a time" rule cannot be changed, the only option is to make each pass a little faster. Figuratively: paying at a supermarket used to mean queuing up, digging the wallet out of your bag at the till, handing over cash and waiting for change, which all takes time. With Alipay nobody has to hunt for a wallet any more; each person just scans a code, and the cashier no longer spends time making change. There is still a queue, but the total payment time drops sharply, so overall efficiency improves. The same optimization applies to locks: shorten the time it takes to acquire and release the lock.
4.1 CAS operations

4.1.1 What is CAS?

With a lock, acquiring it is a pessimistic strategy: it assumes that every execution of the critical section may conflict, so while the current thread holds the lock, all other threads that want it are blocked. A CAS operation (also called lock-free operation) is an optimistic strategy: it assumes that threads accessing the shared resource will not conflict, and since no conflict is expected, other threads are neither blocked nor suspended. What if a conflict does occur? Lock-free operation uses CAS (compare and swap) to detect the conflict, and simply retries the current operation until there is no conflict.

4.1.2 The CAS operation process

The CAS process can be understood informally as CAS(V, O, N), with three values: V, the actual value currently stored at the memory address; O, the expected old value; and N, the new value to write. If V and O are equal, the value in memory has not been changed by another thread, so O is still the latest value and N can safely be written into V. If V and O differ, another thread has already changed the value, O is stale, and N must not be written; instead the current value V is returned and the caller retries. When multiple threads CAS the same variable concurrently, only one of them succeeds in updating it; the rest fail. A failed thread retries, or it may choose to be suspended.
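This retry-on-conflict pattern is what the java.util.concurrent.atomic classes do internally; the sketch below spells it out with AtomicInteger.compareAndSet (O is the expected old value, N the new value):

import java.util.concurrent.atomic.AtomicInteger;

public class CasLoopDemo {
    private static final AtomicInteger COUNT = new AtomicInteger(0);

    // increment without any lock: read O, compute N, CAS, retry on conflict
    static void increment() {
        for (;;) {
            int oldValue = COUNT.get();           // O: the expected old value
            int newValue = oldValue + 1;          // N: the new value
            if (COUNT.compareAndSet(oldValue, newValue)) {
                return;                           // no conflict, update succeeded
            }
            // another thread changed the value in between; loop and try again
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[10];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000000; j++) increment();
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println(COUNT.get());          // always 10000000
    }
}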

CAS requires support from the hardware instruction set; since JDK 1.5 the JVM implements it using the processor's CMPXCHG instruction (on x86).

The main problem with the old (pre-optimization) synchronized is that, when threads contend, blocking and waking up threads is expensive, because synchronized is mutually exclusive, blocking synchronization. CAS does not arbitrarily suspend threads; when a CAS fails the thread simply retries the operation instead of going through the costly suspend-and-wake cycle, which is why it is also called non-blocking synchronization. This is the main difference between the two.

4.1.3 Problems with CAS

1. The ABA problem. Because CAS only checks whether the old value has changed, there is an interesting corner case: if a value goes from A to B and then back to A, the CAS check finds it unchanged, still A, even though it has in fact been modified. The usual solution is the same as optimistic locking in databases: add a version number. The change path A -> B -> A then becomes 1A -> 2B -> 3A. Java provides AtomicStampedReference in the atomic package (since 1.5) to solve the ABA problem along exactly these lines.
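A sketch of how AtomicStampedReference guards against ABA: an update only succeeds if both the reference and the stamp (version) match the expected values:

import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static void main(String[] args) {
        AtomicStampedReference<String> ref =
                new AtomicStampedReference<>("A", 1);        // value "A", stamp 1

        // simulate another thread doing A -> B -> A, bumping the stamp each time
        ref.compareAndSet("A", "B", 1, 2);
        ref.compareAndSet("B", "A", 2, 3);

        // a thread that still expects stamp 1 fails, even though the value is "A" again
        boolean swapped = ref.compareAndSet("A", "C", 1, 2);
        System.out.println(swapped);                                       // false
        System.out.println(ref.getReference() + " / " + ref.getStamp());   // A / 3
    }
}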

2. Spinning for too long

With CAS, synchronization is non-blocking: the thread is not suspended, it spins (essentially loops) and tries again. If the spinning goes on for too long, it costs a lot of CPU. If the JVM can use the pause instruction provided by the processor, efficiency improves somewhat.

3. CAS can only guarantee atomicity for a single shared variable

CAS guarantees atomicity when operating on a single shared variable; when several shared variables are involved, CAS cannot make the combined operation atomic. One solution is to fold the multiple shared variables into a single object, i.e. make them fields of one class, and then perform CAS on that object, which restores atomicity. The atomic package provides AtomicReference to guarantee atomicity between object references.
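A sketch of the "wrap several variables in one object" approach with AtomicReference (the Range class here is purely illustrative):

import java.util.concurrent.atomic.AtomicReference;

public class MultiVariableCasDemo {
    // several related shared variables folded into one immutable object
    static final class Range {
        final int lower;
        final int upper;
        Range(int lower, int upper) { this.lower = lower; this.upper = upper; }
    }

    private static final AtomicReference<Range> RANGE =
            new AtomicReference<>(new Range(0, 10));

    // both bounds change atomically: either the whole new Range is installed or nothing is
    static boolean setUpper(int newUpper) {
        Range old = RANGE.get();
        if (newUpper < old.lower) return false;
        return RANGE.compareAndSet(old, new Range(old.lower, newUpper));
    }

    public static void main(String[] args) {
        System.out.println(setUpper(20));      // true (unless another thread raced in between)
        Range r = RANGE.get();
        System.out.println(r.lower + ".." + r.upper);
    }
}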

4.2 Java object header

Synchronization means acquiring the object's monitor, that is, acquiring the object's lock. So what exactly is the object's lock? It is essentially a flag attached to the object, and that flag is stored in the Java object header. By default the Mark Word of a Java object stores the hashCode, the generational GC age and the lock flag bits. The default storage structure of the Mark Word on a 32-bit JVM is shown below. (Note: the material on the Java object header and the lock-state transitions below is taken from "The Art of Java Concurrency Programming"; the book explains it well enough that I have not reorganized it in my own words.)
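For reference, the 32-bit layouts described there can be summarized roughly as follows (a sketch; exact bit assignments vary between JVM versions and are different on 64-bit JVMs):

- No lock: 25-bit object hashCode | 4-bit generational age | 1-bit "is biased" = 0 | 2-bit lock flag = 01
- Biased lock: 23-bit thread ID | 2-bit epoch | 4-bit generational age | 1-bit "is biased" = 1 | 2-bit lock flag = 01
- Lightweight lock: 30-bit pointer to the lock record in the thread's stack | 2-bit lock flag = 00
- Heavyweight lock: 30-bit pointer to the monitor (mutex) | 2-bit lock flag = 10
- GC mark: 2-bit lock flag = 11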

 

 

As shown, by default the Mark Word stores information such as the hashCode, the generational age and the lock flag bits.

In Java SE 1.6 there are four lock states, from lowest to highest: no lock, biased lock, lightweight lock and heavyweight lock. These states are upgraded gradually as contention grows. A lock can be upgraded but never downgraded, meaning a biased lock can be upgraded to a lightweight lock, but a lightweight lock cannot be downgraded back to a biased lock. This upgrade-only policy is designed to make acquiring and releasing locks more efficient. The Mark Word of the object changes as follows:

4.3 Biased locking

The HotSpot authors found that in most cases locks are not only free of multi-thread contention, but are always acquired repeatedly by the same thread. Biased locking was introduced to make lock acquisition cheaper for that thread.
 
Acquiring a biased lock
      When a thread acquires the lock and enters the synchronized block, the thread ID of the biased owner is stored in the object header and in the lock record in the stack frame. From then on the thread does not need a CAS operation to lock and unlock on every entry and exit of the synchronized block; it simply tests whether the Mark Word in the object header still stores a biased lock pointing to the current thread. If the test succeeds, the thread has already acquired the lock. If the test fails, it further checks whether the biased-lock flag in the Mark Word is set to 1 (meaning the object is currently biased): if it is not set, CAS is used to compete for the lock; if it is set, the thread tries to use CAS to point the biased lock in the object header at itself.
 
Revoking a biased lock
      Biased locking uses a mechanism that only releases the lock when contention appears, so the thread holding the biased lock releases it only when another thread tries to compete for it.

 

 

As the figure shows, revoking a biased lock has to wait for a global safepoint (a point at which no bytecode is being executed). The thread holding the biased lock is paused first, and then the JVM checks whether that thread is still alive. If it is not active, the object header is set to the lock-free state. If it is still alive, the stack holding the biased lock is examined and the lock records of the biased object are traversed; the lock records in the stack and the Mark Word in the object header are then either re-biased towards another thread, or restored to the lock-free state, or marked as unsuitable for biased locking. Finally, the paused thread is resumed.

In the figure, thread 1 illustrates the process of acquiring a biased lock, and thread 2 illustrates the process of revoking it.

 

 How to turn off biased locking

Biased locking is enabled by default in Java 6 and Java 7, but it is only activated a few seconds after the application starts. If necessary, this delay can be removed with the JVM parameter -XX:BiasedLockingStartupDelay=0. If you are sure that all the locks in your application are normally contended, biased locking can be turned off with the JVM parameter -XX:-UseBiasedLocking, in which case the program goes straight to the lightweight-lock state by default.
 
4.4 Lightweight lock

Locking
Before a thread executes a synchronized block, the JVM first creates space for a lock record in the current thread's stack frame and copies the Mark Word of the object header into it; this copy is officially called the Displaced Mark Word. The thread then tries to use CAS to replace the Mark Word in the object header with a pointer to the lock record. If this succeeds, the current thread has acquired the lock; if it fails, other threads are competing for the lock, and the current thread tries to acquire it by spinning.
Unlocking
When a lightweight lock is released, an atomic CAS operation is used to write the Displaced Mark Word back into the object header. If that succeeds, no contention occurred. If it fails, the lock is being contended and it inflates into a heavyweight lock. The figure below shows two threads competing for the lock and the resulting lock inflation.

 

 

Because spinning consumes CPU, once a lock has been upgraded to a heavyweight lock it never returns to the lightweight state, in order to avoid useless spinning (for example spinning while the thread that holds the lock is itself blocked). While the lock is in this state, any other thread that tries to acquire it is blocked; when the thread holding the lock releases it, those threads are woken up, and the awakened threads start a new round of competition for the lock.

4.5 Comparison of the three lock states
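The comparison from the book can be summarized roughly as follows (paraphrased):

- Biased lock: locking and unlocking need no extra CAS, so the cost is only nanoseconds compared with unsynchronized code; but if threads do contend, revoking the bias adds overhead; suited to synchronized blocks that are only ever accessed by one thread.
- Lightweight lock: competing threads do not block, which keeps response time low; but a thread that keeps failing to get the lock spins and consumes CPU; suited to short synchronized blocks where response time matters.
- Heavyweight lock: waiting threads do not spin and consume no CPU; but threads block and response time suffers; suited to long synchronized blocks where throughput matters.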

 

 

Author: Listen ___
Link: https://juejin.im/post/5ae6dc04f265da0ba351d3ff


Origin: www.cnblogs.com/zhengwangzw/p/11546579.html