The principle of synchronized and the JVM's lock optimizations

Copyright notice: when reproducing, please indicate the source: https://blog.csdn.net/Hollake/article/details/90711538

Foreword

To help consolidate my own knowledge of lock optimization, I am writing down some conclusions today.

synchronized is a veteran of multi-threaded programming. Before JDK 1.6, its implementation of synchronization carried excessive performance overhead, which is why it was called a heavyweight lock. JDK 1.6 applied various optimizations to synchronized, producing biased locks, lightweight locks, adaptive spinning, and so on, so synchronized is no longer always a heavyweight lock.

The principle

How does synchronized achieve synchronization? It locks in the following three ways:

  1. For a normal synchronized method, the lock is the current instance object.
  2. For a static synchronized method, the lock is the Class object of the current class.
  3. For a synchronized block, the lock is the object configured inside the parentheses of synchronized () {}.
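The three cases can be verified directly with Thread.holdsLock. A minimal sketch (the class and method names below are mine, not from the original post):

```java
public class LockTargets {
    // 1. Normal synchronized method: the lock is the current instance (this).
    public synchronized boolean locksInstance() {
        return Thread.holdsLock(this);
    }

    // 2. Static synchronized method: the lock is the Class object of the class.
    public static synchronized boolean locksClass() {
        return Thread.holdsLock(LockTargets.class);
    }

    // 3. Synchronized block: the lock is whatever object appears in the parentheses.
    public boolean locksExplicit(Object monitor) {
        synchronized (monitor) {
            return Thread.holdsLock(monitor);
        }
    }
}
```

All three methods return true, confirming which monitor each form actually takes.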

How, specifically, does it guarantee that at most one thread at a time can access a synchronized method or synchronized block? Every object has a monitor lock (Monitor); in fact, the JVM implements synchronized by having methods and code blocks enter and exit the object's Monitor.

After compilation, the JVM inserts two instructions, monitorenter and monitorexit, around any method or code block modified by synchronized: monitorenter is inserted before the synchronized method or block, and monitorexit is inserted at every exit. This ensures that only one thread at a time can execute the synchronized block or method. The following example illustrates this:

public class SynchronizedTest {
    public void test(){
        synchronized (SynchronizedTest.class) {
            System.out.println("code ");
        }
    }
}

Compile with javac SynchronizedTest.java, then decompile with javap -c SynchronizedTest.class; the bytecode shows the inserted monitorenter and monitorexit instructions.
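A trimmed excerpt of the javap output looks roughly like this (annotations are mine; exact bytecode offsets and constant-pool indices depend on the compiler version):

```text
public void test();
  Code:
     0: ldc           #2    // class SynchronizedTest
     2: dup
     3: astore_1
     4: monitorenter        // enter the monitor of SynchronizedTest.class
     5: getstatic     #3    // Field java/lang/System.out
     8: ldc           #4    // String code
    10: invokevirtual #5    // Method java/io/PrintStream.println
    13: aload_1
    14: monitorexit         // normal exit path
    15: goto          23
    18: astore_2
    19: aload_1
    20: monitorexit         // exception exit path
    21: aload_2
    22: athrow
    23: return
```

Note that there are two monitorexit instructions: one on the normal path and one on the exception path, which guarantees the monitor is released even if the block throws.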

synchronized is a reentrant lock. When a thread executes the monitorenter instruction, it tries to acquire ownership of the monitor by checking whether the monitor's entry count is 0. If it is 0, the lock can be acquired: the entry count is set to 1 and the thread owns the lock. If the entry count is greater than 0 and the thread that owns the lock is the current thread itself, the entry count is incremented by 1. Each execution of monitorexit decrements the entry count by 1.
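A small sketch of reentrancy (the class and field names are mine): the call to inner() below would deadlock if synchronized were not reentrant, because the thread already holds the monitor of the same instance.

```java
public class ReentrantDemo {
    int depth = 0; // package-private so the result is easy to observe

    public synchronized void outer() {
        depth++;   // monitor entry count is now 1
        inner();   // same thread re-enters the same monitor: count goes to 2
    }

    public synchronized void inner() {
        depth++;   // no deadlock, because synchronized is reentrant
    }

    public static void main(String[] args) {
        ReentrantDemo d = new ReentrantDemo();
        d.outer();
        System.out.println(d.depth); // prints 2
    }
}
```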

Lock optimization

To reduce the performance overhead of acquiring and releasing locks, JDK 1.6 introduced biased locks and lightweight locks. Since JDK 1.6 there are four lock states, from lowest to highest: lock-free, biased lock, lightweight lock, and heavyweight lock. A lock can be upgraded from a lower state to a higher one, but cannot be downgraded: a biased lock can be upgraded to a lightweight lock, but a lightweight lock cannot be downgraded back to a biased lock. Before reading this part, it is recommended to first understand the layout of the Java object header (the Mark Word).

Biased locking

The authors of biased locking found that in most cases a lock is not contended at all, but is instead acquired many times by the same thread. To lower the cost of acquiring a lock in that case, they introduced the biased lock.

Acquiring the lock. When a thread enters a synchronized block to acquire the lock, it checks whether the object header already stores the current thread's ID. If not, it checks whether the biased-lock flag is set. If the flag is not set, the lock has not yet been acquired by any thread, so the thread replaces the Mark Word using CAS: on success, the thread ID in the object's Mark Word points to the thread itself and the biased flag is set to 1. If the biased-lock flag is found to already be 1, the thread tries to use CAS to point the biased lock in the object header at the current thread. The next time the same thread acquires the lock, it only checks whether the object header stores its own thread ID; if so, it enters directly, without re-acquiring the lock.

Releasing the lock. A thread holding a biased lock only releases it when another thread tries to compete for it. Revoking a biased lock must wait for a global safepoint (a point at which no bytecode is being executed). The JVM first suspends the thread holding the biased lock, then checks whether that thread is still alive. If it is not alive, the object header is set to the lock-free state; if it is still alive, the revocation continues, and in the end the object header is set either to the lock-free state or to mark the object as unsuitable for biased locking, as the virtual machine decides. Finally, the suspended thread is woken up.

The advantage of the biased lock shows when a thread acquires the same lock many times: while the biased lock is held, the thread only needs to check whether the object header stores the current thread's ID, leaving a gap of only nanoseconds compared with uncontended lock-free code.
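Whether biased locking is used at all can be controlled with HotSpot flags (these flags exist in JDK 6 through 14; biased locking was deprecated and disabled by default in JDK 15 by JEP 374). The MyApp class name below is a placeholder:

```shell
# Enable biased locking and remove the startup delay HotSpot applies
# before biasing begins (about 4 s by default, useful when experimenting):
java -XX:+UseBiasedLocking -XX:BiasedLockingStartupDelay=0 MyApp

# Disable biased locking entirely:
java -XX:-UseBiasedLocking MyApp
```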

Lightweight lock

Acquiring the lock. When the code enters a synchronized block, if the synchronization object is in the lock-free state, the current thread creates a lock record area in its stack frame, copies the Mark Word from the object header into the lock record, and then tries to use CAS to update the object's Mark Word to a pointer to the lock record.

If the update succeeds, the lock has been acquired; if it fails, another thread is currently competing for the lock, and the current thread tries to acquire it by spinning.

Releasing the lock. A CAS operation is used to copy the Mark Word stored in the lock record area back into the object header. If it succeeds, no competition occurred; if it fails, the lock inflates into a heavyweight lock.

Because CAS spinning consumes CPU resources, a lightweight lock performs much better than a mutex when there is no thread contention; but under fierce contention it pays not only the mutex overhead but also the CAS overhead, and can even be slower than a heavyweight lock.

Spin locks and adaptive spin locks

When a physical host has two or more CPUs, two or more threads can execute in parallel. When threads contend for the same lock, a thread that fails to acquire the lock need not give up its CPU time slice; it can instead watch whether the thread holding the lock will release it soon. To make the thread wait, we simply have it execute a busy loop (spin); this technique is called a spin lock. Spinning wastes CPU resources while waiting, so a spinning thread cannot be allowed to wait forever; the default number of spins is 10.
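The spin-on-CAS idea can be sketched in plain Java with AtomicBoolean. This is a toy illustration of the technique, not the JVM's internal implementation, and the class name is mine:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Toy spinlock: threads busy-wait on a CAS instead of blocking,
// which is cheap when the lock is only held for a very short time.
public class SpinLock {
    private final AtomicBoolean held = new AtomicBoolean(false);

    public void lock() {
        // Spin until the CAS from false -> true succeeds.
        while (!held.compareAndSet(false, true)) {
            Thread.onSpinWait(); // JDK 9+: hint to the CPU that we are busy-waiting
        }
    }

    public void unlock() {
        held.set(false);
    }

    // Demo: two threads increment a shared counter under the spinlock.
    static int counter = 0;

    public static void main(String[] args) throws InterruptedException {
        SpinLock lock = new SpinLock();
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                lock.lock();
                try { counter++; } finally { lock.unlock(); }
            }
        };
        Thread a = new Thread(task), b = new Thread(task);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(counter); // prints 200000
    }
}
```

Without the lock, the two threads' unsynchronized counter++ operations would interleave and the final count would usually fall short of 200000.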

JDK 1.6 introduced the adaptive spin lock: its wait time is decided by the previous spin time on the same lock and the state of the lock's owner. For example, if a spin on a given lock object has just succeeded in acquiring the lock, the JVM assumes the next spin is also very likely to succeed and lets it run longer; conversely, if spinning rarely succeeds for a given lock, the spin may be omitted entirely to avoid wasting CPU resources.


references:

"The Art of Java Concurrency Programming"

"In-depth Understanding of the Java Virtual Machine"
