JVM Virtual Machine Notes (8): Thread Safety and Lock Optimization

1. Thread Safety

1.1 What is thread safety?

If an object can be used safely and correctly by multiple threads at the same time, that object is thread-safe.

 

1.2 Levels of thread safety in the Java language

The thread safety discussed here is limited to the case where multiple threads share and access the same data.

By their degree of thread safety, operations on shared data in the Java language can be divided into five categories:

(1) Immutable

Immutable objects are inherently thread-safe (as long as the this reference does not escape during construction, no other thread can ever operate on them; what escapes stays thread-private).

If the shared data is a primitive type, declaring it with the final keyword is enough to guarantee that it is immutable.

If the shared data is an object, you must ensure that none of the object's methods can affect its state. The String class is a good example: calling substring(), replace(), or concat() returns a new String and never changes the original value.

The simplest way to keep an object's state immutable is to declare all of the fields that make up its state as final.
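As a concrete illustration (a minimal sketch of my own, not from the original notes), an immutable value class whose state is fixed at construction:

// A minimal immutable class: all state is final and set once in the constructor,
// and no method mutates it, so instances can be shared freely between threads.
public final class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public int getX() { return x; }
    public int getY() { return y; }

    // "Modification" returns a new object, just as String.concat() does.
    public Point translate(int dx, int dy) {
        return new Point(x + dx, y + dy);
    }
}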

(2) Absolute thread safety

Many classes in the Java API that describe themselves as thread-safe are not absolutely thread-safe: in a multithreaded environment, if the caller does not add extra synchronization (for example a synchronized block) around certain sequences of calls, using them is still unsafe.

Achieving absolute thread safety, meaning "the caller never needs any additional synchronization measures, regardless of the runtime environment", usually costs a great deal and is sometimes simply unrealistic.

(3) Relative thread safety

This is what we usually mean by "thread-safe": each individual operation on the object is thread-safe, but particular sequences of consecutive calls may still need extra synchronization on the caller's side. In the Java language, most thread-safe classes are of this type, for example Vector and Hashtable.
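For instance (an illustrative sketch of my own, not taken from these notes), even though every single method of Vector is synchronized, a compound check-then-act sequence still needs caller-side locking:

import java.util.Vector;

public class VectorCompoundOp {
    private static final Vector<Integer> vector = new Vector<>();

    // Unsafe: another thread may remove elements between isEmpty()/size() and get(),
    // so get() can throw ArrayIndexOutOfBoundsException even though each individual
    // Vector method is synchronized.
    static Integer lastUnsafe() {
        if (!vector.isEmpty()) {
            return vector.get(vector.size() - 1);
        }
        return null;
    }

    // Safe: the caller locks the whole compound operation on the Vector itself,
    // the same monitor Vector's own methods use.
    static Integer lastSafe() {
        synchronized (vector) {
            if (!vector.isEmpty()) {
                return vector.get(vector.size() - 1);
            }
            return null;
        }
    }
}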

(4) Thread-compatible

A thread-compatible object is not thread-safe by itself, but it can be used safely in a concurrent environment as long as the caller applies synchronization correctly. HashMap and ArrayList are typical examples; a caller-side sketch follows below.
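A minimal sketch (my own illustration, using only the standard java.util APIs) of making a thread-compatible ArrayList usable from several threads:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ThreadCompatibleExample {
    // Option 1: wrap the list so every method call is synchronized for us.
    private final List<String> syncList =
            Collections.synchronizedList(new ArrayList<>());

    // Option 2: keep a plain ArrayList and have every caller lock explicitly.
    private final List<String> plainList = new ArrayList<>();
    private final Object lock = new Object();

    public void addBoth(String value) {
        syncList.add(value);          // already synchronized by the wrapper
        synchronized (lock) {         // caller-provided synchronization
            plainList.add(value);
        }
    }
}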

(5) Thread-hostile

Thread-hostile code cannot be used safely in a multithreaded environment no matter what synchronization measures the caller takes.

 

1.3 How thread safety is implemented

1.3.1 Mutual exclusion synchronization (blocking synchronization)

Synchronization means that when multiple threads access shared data concurrently, the shared data is used by only one thread (or by a limited number of threads, when a semaphore is used) at any given time.

Mutual exclusion is a means of achieving synchronization: mutual exclusion is the cause, synchronization is the effect.

(1) synchronized keyword

In Java, the most basic means of mutual exclusion synchronization is the synchronized keyword.

Principle: after compilation, the synchronized keyword produces two bytecode instructions, monitorenter and monitorexit, placed before and after the synchronized block. Both instructions take a reference-type parameter that specifies the object to be locked and unlocked.

The locked object: if synchronized explicitly specifies an object argument, the lock is taken on that object. If not, then depending on whether synchronized modifies an instance method or a static method, the lock is taken on the corresponding object instance or on the class's Class object, respectively.
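A small sketch of the three forms (my own example; compiling it and inspecting it with javap -c would show the monitorenter/monitorexit pair only around the explicit block, while the synchronized methods are marked with the ACC_SYNCHRONIZED flag instead):

public class SyncForms {
    private static final Object EXPLICIT_LOCK = new Object();
    private int counter;

    // Locks the explicitly named object; compiles to monitorenter/monitorexit.
    public void incrementWithBlock() {
        synchronized (EXPLICIT_LOCK) {
            counter++;
        }
    }

    // Instance method: locks "this" (the object instance).
    public synchronized void incrementInstance() {
        counter++;
    }

    // Static method: locks SyncForms.class (the Class object).
    public static synchronized void doStaticWork() {
        // ... work on static state ...
    }
}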

Process: when a monitorenter instruction is executed, the thread first tries to acquire the object's lock. If the object is not locked, or the current thread already owns that object's lock, the lock counter is incremented by 1; correspondingly, executing a monitorexit instruction decrements the counter by 1, and when the counter reaches 0 the lock is released. If acquiring the object's lock fails, the current thread blocks until the lock is released by the thread that holds it.

Heavyweight lock: Java threads are mapped onto native operating-system threads, so blocking or waking a thread requires help from the operating system, which in turn requires a transition from user mode to kernel mode. These transitions consume a lot of processor time; for simple synchronized blocks, the time spent on the transition can even exceed the time spent running the user's code, so synchronized is a heavyweight operation in the Java language and should only be used when truly necessary. The virtual machine itself also applies some optimizations, such as spinning for a while before asking the operating system to block the thread, to avoid frequent switches into kernel mode.

(2) ReentrantLock

Synchronization can also be achieved with the reentrant lock ReentrantLock from the java.util.concurrent package (its lock() and unlock() methods are used together with try/finally).

Differences: compared with synchronized, ReentrantLock adds some advanced features, mainly these three: the wait for the lock can be interrupted, the lock can be made fair (via a constructor that takes a boolean; the default is non-fair), and the lock can be bound to multiple conditions.

PS: fair locks and non-fair locks

Fair lock: when multiple threads are waiting for the same lock, they must obtain it in the chronological order in which they requested it.

Non-fair lock: when the lock is released, any thread waiting for it may acquire it.

synchronized is a non-fair lock. ReentrantLock is non-fair by default, but it can be constructed as a fair lock.
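A minimal usage sketch (my own example) of the lock()/unlock() with try/finally pattern and the fairness constructor mentioned above:

import java.util.concurrent.locks.ReentrantLock;

public class ReentrantLockExample {
    // Passing true requests a fair lock; the no-argument constructor is non-fair.
    private final ReentrantLock lock = new ReentrantLock(true);
    private int counter;

    public void increment() {
        lock.lock();            // block until the lock is acquired
        try {
            counter++;          // critical section
        } finally {
            lock.unlock();      // always release, even if the body throws
        }
    }

    public void incrementInterruptibly() throws InterruptedException {
        // Unlike synchronized, the wait for the lock can be interrupted.
        lock.lockInterruptibly();
        try {
            counter++;
        } finally {
            lock.unlock();
        }
    }
}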

 

1.3.2 Non-blocking synchronization

The main problems with mutual exclusion synchronization are the performance costs of blocking and waking threads, which is why it is also called blocking synchronization.

(1) Optimistic concurrency strategy based on conflict detection

The problem with mutual exclusion synchronization: synchronized is a pessimistic lock. Its obvious drawback is that it locks whether or not there is any real contention for the data, and as concurrency increases, and especially if locks are held for a long time, the performance overhead becomes very large.

The solution: an optimistic concurrency strategy based on conflict detection. In this model there is no notion of a lock; every thread simply performs its operation directly, and only after the computation is complete does it check whether any other thread contended for the shared data. If not, the operation succeeds; if there was contention, the operation is retried (compute and check again) until it succeeds. Because many implementations of this strategy do not need to suspend threads, it is called non-blocking synchronization.

Hardware guarantees: thanks to the development of processor instruction sets, the "perform the operation and detect conflicts" pair can be made atomic by the hardware. Such instructions include CAS (Compare-and-Swap) and LL/SC (Load-Linked/Store-Conditional).

The CAS instruction has three operands: the memory location (V), the expected old value (A), and the new value (B). If the value at V equals A, then V is updated to B; otherwise nothing is changed.

How Java exposes it: Java programs can use the CAS operation through the compareAndSwapInt() and compareAndSwapLong() methods (among others) of the sun.misc.Unsafe class. The virtual machine treats these methods specially: they compile directly into the corresponding platform CAS instruction, with no ordinary method call involved.
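Application code normally reaches these CAS operations through the java.util.concurrent.atomic classes rather than Unsafe itself. A small sketch (my own example) of the typical optimistic retry loop built on AtomicInteger.compareAndSet():

import java.util.concurrent.atomic.AtomicInteger;

public class CasCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    // Optimistic, lock-free increment: read the current value, compute the new
    // value, then CAS it in; if another thread changed the value in the
    // meantime, the CAS fails and we simply retry.
    public int increment() {
        int current;
        int next;
        do {
            current = value.get();
            next = current + 1;
        } while (!value.compareAndSet(current, next));
        return next;
    }
}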

A logical loophole in CAS, the "ABA" problem: if a variable V held the value A when it was first read, and still holds A when we are about to perform the assignment, can we conclude that no other thread modified it in the meantime? No: its value may have been changed to B and then back to A, in which case the CAS operation will wrongly believe it was never changed. In most cases the ABA problem does not affect the correctness of a concurrent program.
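When the ABA problem does matter, one common remedy (not discussed in the original notes) is to pair the value with a version stamp, as java.util.concurrent.atomic.AtomicStampedReference does. A minimal sketch:

import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaSafeUpdate {
    // The reference carries an integer stamp that is bumped on every update,
    // so an A -> B -> A sequence is still detected because the stamp changed.
    private final AtomicStampedReference<String> ref =
            new AtomicStampedReference<>("A", 0);

    public boolean replace(String expected, String update) {
        int[] stampHolder = new int[1];
        String current = ref.get(stampHolder);       // read value and stamp together
        if (!current.equals(expected)) {
            return false;
        }
        return ref.compareAndSet(current, update,
                stampHolder[0], stampHolder[0] + 1); // CAS on value AND stamp
    }
}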

PS: pessimistic locking and optimistic locking

Pessimistic locking: assumes that whenever it holds data, some other thread will try to modify it, so without synchronization something will always go wrong. It therefore locks regardless of whether there is any real contention, paying for user-to-kernel mode transitions, lock counter maintenance, and checks for blocked threads that need to be woken up.

Optimistic locking: assumes that nobody else will modify the data while it is being used, so it does not lock at all; only at update time does it check whether anyone else changed the data in the meantime. If that check fails, the operation and the check are simply repeated.

 

1.3.3 No-synchronization schemes

If a method does not involve shared data at all, it naturally needs no synchronization measures to guarantee correctness; some code is inherently thread-safe.
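As an illustration (a minimal sketch of my own), a reentrant, stateless method: it reads only its parameters and local variables, shares no data with other threads, and is therefore thread-safe without any synchronization:

public class NoSharedState {
    // Pure function: no fields, no static state, the result depends only on the inputs.
    // Any number of threads can call it concurrently without synchronization.
    public static int sumOfSquares(int a, int b) {
        int aa = a * a;   // locals live on each thread's own stack
        int bb = b * b;
        return aa + bb;
    }
}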

 

2. Lock Optimization

2.1 Spin locks and adaptive spinning

2.2 Lock elimination

2.3 Lock coarsening

2.4 Lightweight locks

2.5 Biased locking

Biased locking and lightweight locking are optimistic locking strategies; the heavyweight lock is a pessimistic locking strategy.

When an object has just been instantiated and no thread has accessed it yet, it is biasable: the JVM assumes that, most likely, only one thread will ever access it. When the first thread does access it, the object becomes biased towards that thread, i.e. it holds a biased lock. To bias the object, that first thread uses a CAS operation to modify the object header, writing its own thread ID into the header's ThreadID field. From then on, when the same thread accesses the object again, it only needs to compare the thread ID; no further CAS operation is required.

As soon as a second thread accesses the object (a biased lock is never released proactively, so the second thread still sees the object in the biased state), contention on the object evidently exists. The JVM then checks whether the thread that originally held the biased lock is still alive. If that thread has terminated, the object is reset to the lock-free state and can be biased towards the new thread. If the original thread is still alive, its stack is examined to see how it is using the object: if it still needs the lock, the biased lock is upgraded to a lightweight lock (this is the moment at which a biased lock becomes a lightweight lock); if it no longer uses the object, the object is reverted to the lock-free state and can then be re-biased.
A lightweight lock indicates that contention exists but is very light: typically two threads' operations on the same lock are staggered in time, or one thread only needs to wait a little (spin) for the other to release the lock. But when a thread spinning for the lock exceeds a certain number of attempts, or when one thread holds the lock, a second is spinning, and a third thread arrives as well, the lightweight lock inflates into a heavyweight lock. Under the heavyweight lock, all threads other than the owner are blocked, which keeps the CPU from spinning idly.

First, heavyweight locks

  The previous article introduced the usage and implementation principle of synchronized. We now know that synchronized is implemented through an internal object called a monitor lock, and that the monitor lock in turn relies on the operating system's underlying Mutex Lock. The operating system must switch the thread from user mode to kernel mode, and this transition is very expensive and takes a relatively long time, which is why synchronized has traditionally been inefficient. A lock implemented on top of the operating system's Mutex Lock in this way is what we call a "heavyweight lock". The various optimizations the JDK has made to synchronized all revolve around reducing the use of this heavyweight lock. Starting with JDK 1.6, "lightweight locks" and "biased locks" were introduced to reduce the performance cost of acquiring and releasing locks.

Second, lightweight locks

  A lock has four states: lock-free, biased, lightweight, and heavyweight. As contention for the lock increases, it can be upgraded from a biased lock to a lightweight lock, and then to a heavyweight lock (the upgrade is one-directional: a lock can only be upgraded, never downgraded). Biased locks and lightweight locks are enabled by default in JDK 1.6; biased locking can be disabled with -XX:-UseBiasedLocking. The lock state is stored in the object header; taking a 32-bit JDK as an example:

| Lock state       | 25 bit (= 23 bit + 2 bit)                         | 4 bit                   | 1 bit (biased?) | 2 bit (lock flag) |
|------------------|---------------------------------------------------|-------------------------|-----------------|-------------------|
| Lightweight lock | pointer to the lock record in the thread's stack (spans the first 30 bits)                             ||| 00 |
| Heavyweight lock | pointer to the mutex (heavyweight lock) (spans the first 30 bits)                                      ||| 10 |
| GC mark          | empty (spans the first 30 bits)                                                                         ||| 11 |
| Biased lock      | thread ID (23 bit), epoch (2 bit)                 | object generational age | 1               | 01                |
| No lock          | object's hashCode (25 bit)                        | object generational age | 0               | 01                |

  "Lightweight" is relative to the use of the operating system mutex to achieve the traditional lock. However, you first need to stress that it is not intended to replace the lock lightweight heavyweight lock, it was intended under the premise of no multithreading competition, reduce the use of traditional heavyweight lock on performance of consumption. Prior to the implementation explain lightweight lock, first understand that the lightweight lock the adaptation scenario is the case thread alternately perform synchronization blocks, the situation if there is access to the same lock at the same time, it will lead to the lightweight lock inflation heavyweight lock.

1. Lightweight lock locking procedure

  (1) When execution enters the synchronized block, if the synchronization object is in the lock-free state (lock flag "01", biased flag "0"), the virtual machine first creates a space in the current thread's stack frame called a lock record (Lock Record), used to store a copy of the lock object's current Mark Word, officially called the Displaced Mark Word. The state of the object header and the thread stack at this point is shown in Figure 2.1.

  (2) The Mark Word in the object header is copied into the lock record.

  (3) If the copy succeeds, the virtual machine uses a CAS operation to try to update the object's Mark Word to a pointer to the Lock Record, and sets the owner pointer in the Lock Record to the object's Mark Word. If the update succeeds, go to step (4); otherwise go to step (5).

  (4) If the update succeeded, the thread now owns the lock on this object, and the lock flag in the object's Mark Word is set to "00", meaning the object is in the lightweight-locked state. The state of the thread stack and object header at this point is shown in Figure 2.2.

  (5) If the update failed, the virtual machine first checks whether the object's Mark Word points into the current thread's stack frame. If it does, the current thread already owns this object's lock and can simply continue into the synchronized block. Otherwise, several threads are competing for the lock, so the lightweight lock must inflate into a heavyweight lock: the lock flag becomes "10", the Mark Word now stores a pointer to the heavyweight lock (mutex), and threads that wait for the lock later will block. The current thread, meanwhile, tries to acquire the lock by spinning, i.e. by looping on the acquisition attempt instead of blocking. (A simplified code model of the CAS step follows Figure 2.2 below.)

 

                     Figure 2.1: Thread stack and object header before the lightweight-lock CAS operation

   

                      Figure 2.2: Thread stack and object header after the lightweight-lock CAS operation
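To make the CAS step above concrete, here is a deliberately simplified model (my own sketch, not HotSpot code): the "mark word" is modeled as an AtomicReference that is either null (unlocked) or a pointer to the owning thread; displaced mark words, spinning, and inflation to a heavyweight lock are all omitted.

import java.util.concurrent.atomic.AtomicReference;

// Toy model of lightweight-lock acquisition: a single CAS on the "mark word".
class ToyLightweightLock {
    private final AtomicReference<Thread> markWord = new AtomicReference<>(null);

    // Step (3): CAS the mark word from "unlocked" to a pointer to this thread.
    boolean tryLock() {
        Thread me = Thread.currentThread();
        if (markWord.compareAndSet(null, me)) {
            return true;                  // step (4): lock acquired
        }
        // Step (5): CAS failed; if the mark word already points at us, we hold
        // the lock (re-entry). A real JVM would otherwise spin and eventually
        // inflate the lock to a heavyweight lock.
        return markWord.get() == me;
    }

    void unlock() {
        markWord.compareAndSet(Thread.currentThread(), null);
    }
}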

2. Lightweight lock unlocking procedure

  (1) The thread uses a CAS operation to try to replace the object's current Mark Word with the Displaced Mark Word it copied earlier.

  (2) If the replacement succeeds, the whole synchronization process is complete.

  (3) If the replacement fails, some other thread has tried to acquire the lock in the meantime (the lock has already inflated), so in addition to releasing the lock, the suspended threads must be woken up.

Third, biased locking

  Biased locking was introduced to minimize the unnecessary lightweight-lock execution path when there is no multithreaded contention at all: acquiring and releasing a lightweight lock depends on several CAS atomic instructions, whereas a biased lock needs a CAS atomic instruction only once, when the ThreadID is first installed (once multithreaded contention appears, the biased lock must be revoked, so biasing only pays off when the cost of the revocation is smaller than the savings from the avoided CAS instructions). As noted above, the lightweight lock improves performance when threads execute the synchronized block alternately, while the biased lock further improves performance when only one thread ever executes the synchronized block.

1. Biased lock acquisition procedure

  (1) Check the Mark Word: is the biased flag set to 1 and the lock flag equal to 01, i.e. is the object in the biasable state?

  (2) If it is biasable, test whether the thread ID in the Mark Word points to the current thread. If it does, go to step (5); otherwise go to step (3).

  (3) If the thread ID does not point to the current thread, compete for the lock with a CAS operation. If the CAS succeeds, the thread ID in the Mark Word is set to the current thread's ID and step (5) is executed; if it fails, step (4) is executed.

  (4) If the CAS to acquire the biased lock fails, there is contention. When the global safepoint is reached, the thread that currently holds the biased lock is suspended and the biased lock is upgraded to a lightweight lock; the thread that was blocked at the safepoint then continues executing the synchronization code.

  (5) Execute the synchronization code.

2. Biased lock release

  Revocation of a biased lock was mentioned in step (4) above. A thread holding a biased lock releases it only when some other thread tries to compete for it; the holder never releases the biased lock on its own initiative. Revocation has to wait for a global safepoint (a point in time at which no bytecode is being executed). It first suspends the thread that holds the biased lock and checks whether the lock object is currently locked; after revocation the object is returned either to the lock-free state (flag "01") or to the lightweight-locked state (flag "00").

3. Conversion between biased, lightweight, and heavyweight locks

 

                                        Figure 2.3: Conversion among the three lock states

  The figure is mainly a summary of the content above; if the explanations above are clear, it should be easy to follow.

Fourth, other optimizations

1. Adaptive spinning: from the lightweight-lock acquisition flow we know that when a thread's CAS operation fails while acquiring a lightweight lock, it spins while trying to acquire the lock. The problem is that spinning consumes CPU: if the lock is never acquired, the thread just keeps spinning and wastes CPU for nothing. The simplest fix is to cap the number of spins, for example loop 10 times and then block if the lock still has not been acquired. But the JDK uses a smarter approach, adaptive spinning: roughly, if spinning on a lock succeeded recently, the next spin is allowed to run longer; if it failed, the number of spins is reduced.

2. Lock coarsening: the idea is easy to grasp: a series of back-to-back lock and unlock operations are merged into a single one, expanding several consecutive locks into one lock with a larger scope. For example:

package com.paddx.test.string;

public class StringBufferTest {
    StringBuffer stringBuffer = new StringBuffer();

    public void append(){
        stringBuffer.append("a");
        stringBuffer.append("b");
        stringBuffer.append("c");
    }
}

  Every call to stringBuffer.append here has to lock and unlock. If the virtual machine detects a series of consecutive lock and unlock operations on the same object, it merges them into a single lock and unlock with a larger scope: locking before the first append call and unlocking after the last one.

3. Lock elimination: lock elimination means removing lock operations that are not actually needed. Based on escape analysis, if the JVM determines that in a piece of code no data on the heap can escape the current thread, that code can be treated as thread-safe and its locks can be removed. Consider the following program:

package com.paddx.test.concurrent;

public class SynchronizedTest02 {

    public static void main(String[] args) {
        SynchronizedTest02 test02 = new SynchronizedTest02();
        // warm-up
        for (int i = 0; i < 10000; i++) {
            i++;
        }
        long start = System.currentTimeMillis();
        for (int i = 0; i < 100000000; i++) {
            test02.append("abc", "def");
        }
        System.out.println("Time=" + (System.currentTimeMillis() - start));
    }

    public void append(String str1, String str2) {
        StringBuffer sb = new StringBuffer();
        sb.append(str1).append(str2);
    }
}

Although StringBuffer's append() is a synchronized method, in this program the StringBuffer is a local variable that never escapes the method, so the method is in fact thread-safe and the lock can be eliminated. Below are the results I measured locally:

  To minimize the influence of other factors, biased locking was disabled here (-XX:-UseBiasedLocking). From these runs it can be seen that lock elimination still brings a fairly large performance improvement.

  Note: results may differ between JDK versions; the JDK version used here was 1.6.


Source: www.cnblogs.com/lvoooop/p/12142467.html