A walk through synchronized and lock escalation

Foreword

Hello everyone, I am Jack Xu. Today is the Qingming holiday, so let's chat about synchronized. This is the first article in a series on concurrent programming. Why the first? Because concurrent programming covers so many topics, many of them obscure, that almost any single knowledge point is enough material for an article; at this rate the series will run to at least ten posts. I will summarize the knowledge points, sort them into categories, and explain them in a reader-friendly way so that everything is clear.

Why Synchronized

This problem is very simple, we first look at the following codes
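Here is a minimal sketch of that example (the class and variable names are my own, not the article's original code): 10,000 threads each increment a shared count once.

public class CountDemo {
    static int count = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[10000];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> count++);   // count++ is not atomic
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();                                  // wait for every thread to finish
        }
        System.out.println(count);                     // frequently prints less than 10000
    }
}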

We start 10,000 threads that each increment the variable count, and the result is 9998 instead of 10,000 — clearly a thread-safety problem. Why does this happen? The answer is simple.

Here is a brief explanation of why we cannot get the expected 10,000 (skip ahead if you already know this). The i++ operation takes the computer three steps to execute:
1. Read the value of i.
2. Add 1 to i.
3. Write the final value of i back into memory.
So: (1) suppose thread A reads i = 0; (2) at this moment thread B also reads i = 0; (3) then A adds 1 to i and writes it to memory, so i = 1; (4) right after that, B also adds 1 to its own copy, so inside thread B i = 1, and when B writes it back, i in memory is still 1. In other words, both thread A and thread B incremented i, yet the final result is 1, not 2.

In one sentence: the i++ operation is not atomic. So how do we solve this problem? We can add synchronized.
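As a sketch (again with my own names), the simplest fix is to move the increment into a synchronized static method so that only one thread at a time can run it:

public class SafeCountDemo {
    static int count = 0;

    static synchronized void increment() {       // locks on SafeCountDemo.class
        count++;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[10000];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(SafeCountDemo::increment);
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        System.out.println(count);                // now always prints 10000
    }
}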

Three characteristics

The example above demonstrates atomicity. synchronized also guarantees visibility: by the happens-before rule for monitor locks, once a thread finishes executing a synchronized block, all of its changes to variables are immediately visible to other threads that acquire the same lock. As for ordering, the code inside the block appears to other threads to execute in order from top to bottom; reordering cannot be observed across the lock boundary. In a word, the three properties of synchronized take care of the classic concurrency problems, which is why synchronized is such a cure-all — be sure to make good use of it!

Usage

Syntactically, synchronized has three usages:

  • Modifying an instance method
public synchronized void eat(){
    // ...
}
  • Modifying a static method
public static synchronized void eat(){
    // ...
}
  • Modifying a code block
public void eat(){
    synchronized(this){
        // ...
    }
}
public void eat(){
    synchronized(Eat.class){
        // ...
    }
}

The first and third forms are equivalent (both lock the current instance), and the second and fourth are equivalent (both lock the Class object) — see the sketch after this list. This is all quite simple; here is a summary of how to use synchronized:

  • The lock object can be any object;
  • What synchronized locks is the lock object, not the code block itself;
  • If several threads running different pieces of synchronized code are supposed to exclude one another, they must all hold the same lock object;
  • Put only the code that actually needs synchronization inside the braces — that is, only the code whose atomicity, visibility, or ordering must be guaranteed. Do not put code that needs no synchronization inside, or you will hurt performance.
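Here is a small sketch of those equivalences, reusing the Eat class from the examples above (the method names are made up). The first pair locks the current instance, the second pair locks the Class object, so the two pairs do not block each other:

public class Eat {
    public synchronized void eatA() {            // locks this
        // ...
    }

    public void eatB() {
        synchronized (this) {                    // same lock as eatA
            // ...
        }
    }

    public static synchronized void eatC() {     // locks Eat.class
        // ...
    }

    public void eatD() {
        synchronized (Eat.class) {               // same lock as eatC
            // ...
        }
    }
}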

Lock escalation

Now for the climax — listen carefully. In early JDKs, synchronized was a heavyweight lock: acquiring the lock meant asking the operating system kernel for resources, and the switch from user mode to kernel mode made it rather inefficient. JDK 1.6 introduced some optimizations — the concepts of biased locks and lightweight locks — to reduce the cost of acquiring and releasing locks. That is why synchronized now has four lock states: no lock, biased lock, lightweight lock, and heavyweight lock.

We know that synchronized locks an object, and every object is an Object. The layout of an object on the heap is shown in the figure below.

At the front is the mark word, which is 8 bytes. The next 4 bytes are the class pointer, which records which class the object belongs to: a People instance points to People.class, a Cat instance points to Cat.class. After the class pointer comes the instance data, which depends on your fields: an int age takes 4 bytes; for a String name, an English character takes 1 byte and a Chinese character takes 2 to 3 bytes depending on the encoding (with UTF-8 a Chinese character takes 2 to 3 bytes, with GBK it takes 2 bytes). Finally, if the total of the previous parts is not divisible by 8, padding is added until it is. The part circled in the figure is the mark word (8 × 8 = 64 bits), and lock escalation works by changing the flag bits inside the mark word.

Most diagrams online are for 32-bit JVMs; the one I drew here is for 64-bit. Notice that there are five states in total, which two bits cannot encode, so when the lock bits are 01 an extra bit in front is borrowed to tell the states apart.
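If you want to see this layout for yourself, the OpenJDK JOL tool can print it. The sketch below assumes the org.openjdk.jol:jol-core dependency is on the classpath and uses a made-up People class with the fields mentioned above:

import org.openjdk.jol.info.ClassLayout;

class People {
    int age;        // 4 bytes of instance data
    String name;    // a reference; the character bytes live inside the String object
}

public class LayoutDemo {
    public static void main(String[] args) {
        // Prints the mark word, class pointer, field layout and padding of one instance.
        System.out.println(ClassLayout.parseInstance(new People()).toPrintable());
    }
}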

Biased locking

Investigation of the HotSpot virtual machine found that in most cases synchronized code is not actually contended by multiple threads; it is usually acquired repeatedly by the same thread. Based on this probability, the first lock taken is a biased lock: when a thread acquires the lock for a synchronized block, it first tries to store the current thread ID into the object header (the mark word) with a CAS operation.

(1) If the current thread's ID is stored into the mark word successfully, the thread executes the synchronized block.

(2) If the same thread locks again, there is no contention; it only has to check that the thread pointer in the mark word is still its own, and it can execute the synchronized block directly.

(3) If another thread tries to acquire a lock that is already biased, the lock is contended: the bias must be revoked and the lock held by the original thread upgraded to a lightweight lock (this revocation has to wait for a global safepoint, that is, a moment when no thread is executing bytecode).

In real application development there will usually be more than two threads competing, and in that case keeping biased locking enabled only adds the cost of revoking the bias when acquiring the lock. Biased locking can therefore be turned on or off with the JVM flag -XX:+UseBiasedLocking / -XX:-UseBiasedLocking.
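As a rough experiment of my own (not from the article), the same JOL tool can show the mark word changing as the lock is taken. What you see depends on the JDK version and flags; on a HotSpot JDK that still supports biased locking, running with -XX:BiasedLockingStartupDelay=0 makes new objects start out biasable.

import org.openjdk.jol.info.ClassLayout;

public class LockStateDemo {
    public static void main(String[] args) {
        Object lock = new Object();
        System.out.println("before:\n" + ClassLayout.parseInstance(lock).toPrintable());
        synchronized (lock) {
            // Inside the block the mark word shows the lock state (biased or lightweight).
            System.out.println("inside:\n" + ClassLayout.parseInstance(lock).toPrintable());
        }
        System.out.println("after:\n" + ClassLayout.parseInstance(lock).toPrintable());
    }
}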

Lightweight lock

Once the biased lock is revoked, the lock is upgraded to a lightweight lock. Each thread creates a LockRecord (LR) in its own thread stack and uses CAS to make the mark word point to its own LR; whichever thread succeeds has acquired the lock. While acquiring a lightweight lock, a thread uses a spin lock — but spinning is only reasonable under certain conditions: if one thread holds the synchronized block for a long time, the spinning threads just keep looping and burn CPU.
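To make the idea of spinning concrete, here is a toy spin lock of my own — this is not how the JVM implements lightweight locks, only an illustration of "try a CAS, and retry in a loop instead of blocking":

import java.util.concurrent.atomic.AtomicReference;

public class SpinLock {
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public void lock() {
        Thread current = Thread.currentThread();
        // Busy-wait until the CAS from null to the current thread succeeds.
        while (!owner.compareAndSet(null, current)) {
            // Spinning burns CPU: cheap for short critical sections, wasteful for long ones.
        }
    }

    public void unlock() {
        owner.compareAndSet(Thread.currentThread(), null);
    }
}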

(1) By default the spin count is 10, which can be changed with -XX:PreBlockSpin; the lock is also inflated if the number of spinning threads exceeds half the number of CPU cores.

(2) After JDK 1.6, adaptive spinning was introduced. Adaptive means the spin count is no longer fixed; it is decided by the outcome of previous spins on the same lock and by the state of the lock owner. If spin-waiting recently succeeded in acquiring a given lock and the thread holding the lock is running, the virtual machine assumes the spin is likely to succeed again and allows a relatively longer spin. If spinning rarely succeeds for a lock, future attempts to acquire it may skip the spin entirely and block the thread directly, to avoid wasting processor resources.

If spinning fails under either of these schemes, the lock is upgraded to a heavyweight lock.

Heavyweight lock

At this point we have to disturb the "big boss" — the operating system — and apply to it for resources: a Linux mutex, a CPU switch from ring 3 to ring 0, the thread is suspended and placed into a wait queue to wait for the OS scheduler, and it is then mapped back into user space.

Let's write a simple piece of code with the synchronized keyword, compile it into a .class file, and disassemble it with javap -c xxx.class. This gives us the bytecode instructions corresponding to the Java code, and among them we can find the following two instructions.
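For example (the class below is my own; the exact offsets in the output vary by compiler and JDK), compiling the class with javac and then running javap -c Demo shows monitorenter and monitorexit around the synchronized block:

public class Demo {
    public void doWork() {
        synchronized (this) {
            System.out.println("in critical section");
        }
    }
}

// Illustrative excerpt of `javap -c Demo` for doWork() (offsets omitted):
//   monitorenter        // acquire the monitor of the locked object
//   ...                 // body of the synchronized block
//   monitorexit         // release on the normal path
//   ...
//   monitorexit         // release again on the exception path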

The two key bytecode instructions are monitorenter and monitorexit. (Note: a synchronized method uses the ACC_SYNCHRONIZED flag instead of these two instructions, but the underlying principle is the same.)

Every Java object is associated with a monitor (a monitor lock), and while the monitor is held the object is in a locked state. A thread tries to acquire ownership of the monitor when it executes the monitorenter instruction, as follows:

  • If the monitor's entry count is 0, the thread enters the monitor, sets the entry count to 1, and becomes the monitor's owner.
  • If the thread already owns the monitor and re-enters it, the entry count is incremented by one.
  • If another thread owns the monitor, the thread blocks until the entry count drops to 0, and then tries again to acquire ownership of the monitor.

Two things follow from the process above: first, the monitor is reentrant (it has a counter); second, the monitor lock is non-fair.
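Here is a tiny sketch of the reentrancy point (method names are mine): outer() already holds this object's monitor, so the nested call to inner() simply bumps the entry count instead of deadlocking.

public class ReentrantDemo {
    public synchronized void outer() {
        inner();                         // entry count goes from 1 to 2
    }

    public synchronized void inner() {
        System.out.println("re-entered the same monitor");
    }

    public static void main(String[] args) {
        new ReentrantDemo().outer();     // prints without blocking
    }
}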

The monitor is implemented on top of the operating system's mutex lock (mutex). Once a thread is blocked it enters the kernel's (Linux) scheduling state, which causes the system to switch back and forth between user mode and kernel mode and seriously hurts the performance of the lock.

Lock elimination

We all know that StringBuffer is thread-safe because its key methods are marked synchronized. But look at the following code: the sb reference is only ever used inside the add method and cannot be reached by other threads (it is a local variable, private to the stack), so sb can never be a shared resource, and the JVM automatically eliminates the locks inside the StringBuffer object.

public void add(String str1,String str2){
         StringBuffer sb = new StringBuffer();
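         // sb never escapes this method (it is stack-confined), so the JIT can eliminate StringBuffer's internal locks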
         sb.append(str1).append(str2);
}

Summary

I hope the knowledge points around synchronized have been explained clearly in this article. synchronized is the most common way to guarantee thread safety in Java concurrent programming, and it is fairly simple to use. Before it was optimized, synchronized performed much worse than ReentrantLock, but since biased locks and lightweight locks (spin locks) were introduced, the performance of the two is roughly the same. When both approaches are available, the official recommendation is even to use synchronized. Personally, I feel the optimization of synchronized borrows the CAS idea from ReentrantLock: both try to handle locking in user mode and avoid blocking threads into kernel mode.


Origin juejin.im/post/5e898b8fe51d4546cd2fda30