Biased locking, bias thread id, spin locks

Understanding the basics of locks

To thoroughly understand the ins and outs of Java locks, you first need a few basics.

Basics, part one: types of locks

At the broadest level, locks divide into pessimistic locks and optimistic locks.

Optimistic locking

Optimistic locking takes the optimistic view that reads outnumber writes and concurrent write conflicts are rare. A reader assumes nobody is modifying the data, so it takes no lock; a writer first reads the current version, performs its update, and only when writing back checks whether anyone else updated the data in the meantime (for example, by comparing the version number as part of the write). If that check fails, it repeats the read-compare-write cycle.

In Java, optimistic locking is mostly realized with CAS (compare-and-swap) operations. CAS is an atomic update: it compares the current value with the expected value and performs the update only if they match; otherwise the update fails.
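As a sketch of this read-compare-write retry loop, here is a minimal counter built on `AtomicInteger.compareAndSet` (the `CasCounter` class is a made-up illustration, not from the original post):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    // Optimistic update: read the current value, compute the new one, and
    // retry the whole read-compare-write cycle if another thread changed
    // the value in between. No lock is ever taken.
    public int increment() {
        while (true) {
            int current = value.get();                // read
            int next = current + 1;                   // compute
            if (value.compareAndSet(current, next)) { // compare-and-swap
                return next;                          // success
            }
            // CAS failed: another thread updated first; loop and retry
        }
    }

    public int get() {
        return value.get();
    }
}
```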

Pessimistic locking

Pessimistic locking takes the pessimistic view that writes are frequent and concurrent write conflicts are likely: every access assumes someone else may modify the data, so both reads and writes take the lock, blocking everyone else until the lock holder is done. In Java, `synchronized` is a pessimistic lock. Locks built on the AQS framework, such as ReentrantLock, first attempt an optimistic CAS acquisition and fall back to pessimistic blocking if the CAS fails.
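A minimal sketch of pessimistic locking with `ReentrantLock` (the `Counter` class is a made-up example): `lock()` internally tries a CAS first and queues/parks the thread only if that fails, which is the optimistic-then-pessimistic behavior described above.

```java
import java.util.concurrent.locks.ReentrantLock;

public class Counter {
    private final ReentrantLock lock = new ReentrantLock(); // non-fair by default
    private int count = 0;

    public void increment() {
        lock.lock();   // one CAS attempt first; on failure the thread is
                       // queued in AQS and parked (pessimistic fallback)
        try {
            count++;   // critical section
        } finally {
            lock.unlock(); // always release in finally
        }
    }

    public int get() {
        return count;
    }
}
```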

Basics, part two: the cost of blocking a Java thread

Java threads are mapped onto native operating-system threads, so blocking or waking a thread requires the operating system to intervene and switch between user mode and kernel mode. That switch consumes a lot of system resources: user mode and kernel mode each have their own private memory space and dedicated registers, so switching into the kernel means passing a number of variables and parameters across, saving the user-mode register values and other state, and restoring them when the kernel-mode call finishes and execution returns to user mode.

  1. If thread states are switched at high frequency, a great deal of CPU time is spent on the switching itself;
  2. If the code block being synchronized is trivial, acquiring the lock and suspending threads can take far longer than executing the user code, which makes this synchronization strategy clearly a bad one.

synchronized puts threads that lose the race for the lock into the blocked state, which makes it the Java language's heavyweight synchronization mechanism, known as the heavyweight lock. To mitigate these performance problems, the JVM introduced the lightweight lock and the biased lock (around JDK 1.6) and enables spin locks by default; all of these are optimistic locks.

Being clear about the cost of Java thread switching is the key to understanding the pros and cons of the various Java locks.

Basics, part three: the mark word

Before introducing Java's locks, let's first look at the mark word. The mark word is part of every Java object's data structure (its header); we cover it in detail here because its contents are closely tied to each kind of Java lock.

The mark word is 32 bits long on a 32-bit VM and 64 bits long on a 64-bit VM (without compressed pointers). Its last 2 bits are the lock status flag, which records the current state of the object; that state determines what the rest of the mark word stores, as the following table shows:

State                         Flag bits   Mark word contents
Unlocked                      01          Object hash code, object GC generational age
Lightweight lock              00          Pointer to the lock record
Heavyweight lock (inflated)   10          Pointer to the heavyweight lock (monitor)
GC mark                       11          Empty (no information recorded)
Biased                        01          Bias thread ID, bias timestamp (epoch), object GC generational age

On a 32-bit VM, the mark word takes a different layout in each of these states, as the table above summarizes.


With the mark word structure understood, it will be much easier to follow the locking and unlocking processes of the Java locks described below.

Summary

Java has the four locks mentioned above: the heavyweight lock, the spin lock, the lightweight lock, and the biased lock.
Each lock has different characteristics, and each performs well only in its particular scenario; no Java lock is efficient in all situations. The reason so many locks were introduced is precisely to handle different situations.

We said earlier that the heavyweight lock is a pessimistic lock, while spin locks, lightweight locks, and biased locks are optimistic locks, so you can already roughly guess where each applies. But to see how to use these locks, we need to go back and analyze their specific characteristics.

Heavyweight lock Synchronized


A monitor has multiple queues: when multiple threads access the same object's monitor, the monitor stores the threads in different containers.

  1. Contention List: the contention queue; every thread requesting the lock is first placed in this queue;

  2. Entry List: threads in the Contention List that qualify as candidates are moved into the Entry List;

  3. Wait Set: threads blocked by calling the wait method are placed here;

  4. OnDeck: at any moment, at most one thread is actively competing for the lock; that thread is called OnDeck;

  5. Owner: the thread that currently holds the lock is called the Owner;

  6. !Owner: a thread that has just released the lock.

The JVM picks one lock-contention candidate (OnDeck) from the tail of the queue at a time. Under high concurrency, however, the ContentionList is accessed via CAS by a large number of concurrent threads, so to reduce contention on its tail element, the JVM moves some of the threads into the EntryList to serve as candidates. When the Owner thread unlocks, it migrates part of the ContentionList into the EntryList and designates one EntryList thread (usually the first one that went in) as the OnDeck thread. The Owner does not hand the lock directly to the OnDeck thread; it hands over only the right to compete for it, and OnDeck must contend for the lock again. This sacrifices some fairness but greatly improves throughput; in the JVM, this choice is called "competitive switching".

Once the OnDeck thread acquires the lock, it becomes the Owner; a thread that fails to acquire the lock stays in the EntryList. If the Owner is blocked by the wait method, it is moved into the WaitSet queue until it is woken at some point by notify or notifyAll, after which it re-enters the EntryList.

Threads in the ContentionList, EntryList, and WaitSet are all in the blocked state, and the blocking is implemented by the operating system (on Linux, via the kernel's pthread_mutex_lock function).

Synchronized is a non-fair lock. Before a thread enters the ContentionList, it first attempts to acquire the lock by spinning; only if it cannot does it enter the ContentionList, which is clearly unfair to threads already in the queue. Another unfair aspect is that a spinning thread may seize the lock directly, ahead of the OnDeck thread.
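The Wait Set behavior described above can be seen with `wait`/`notifyAll`. Below is a minimal one-slot buffer sketch (a made-up example, not from the original post): a thread that calls `wait()` inside a synchronized method enters the monitor's Wait Set and releases the lock; a later `notifyAll()` moves waiters back to compete for the lock (the EntryList in the description above).

```java
// One-slot hand-off buffer using the intrinsic monitor of "this".
public class OneSlotBuffer {
    private Integer slot = null;

    public synchronized void put(int v) throws InterruptedException {
        while (slot != null) {
            wait();      // buffer full: enter the Wait Set, releasing the lock
        }
        slot = v;
        notifyAll();     // wake waiters so they re-enter lock contention
    }

    public synchronized int take() throws InterruptedException {
        while (slot == null) {
            wait();      // buffer empty: enter the Wait Set
        }
        int v = slot;
        slot = null;
        notifyAll();
        return v;
    }
}
```

Note the `while` (not `if`) around `wait()`: a woken thread must re-check the condition, because it only re-acquired the lock, not a guarantee that the condition still holds.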

You may have heard of the synchronized keyword before as a heavyweight lock with high overhead, to be used sparingly. But you may also have heard that from JDK 1.6 on, a lot of optimization went into this keyword, so it is no longer as bad as before and is fine to use.

So what optimizations won synchronized back our hearts? And why is the heavyweight lock's overhead so large in the first place?

The heavyweight lock causes threads to block under multithreaded contention;

but blocking or waking a thread needs help from the operating system, which requires switching from user mode to kernel mode, and that transition consumes a great deal of time, possibly far more than executing the user's code.
That is why the heavyweight lock is so expensive.

Now let's explain, step by step, how synchronized is optimized, from the biased lock up to the heavyweight lock.

Biased locking

  The biased lock is a lock optimization introduced in JDK 1.6; the "biased" means the lock favors the first thread that acquires it. If, during subsequent execution, no other thread acquires the lock and no other thread competes for it, the thread holding the biased lock never needs to perform synchronization again.
In other words:
if that thread later re-enters or exits the same synchronized block, it no longer needs to perform any locking or unlocking at all.

When we create an object, say LockObject, the key fields of its mark word start out in the unlocked, biasable state.

When a thread reaches the critical section, it uses a CAS (compare-and-swap) operation to install its thread ID into the mark word, setting the biased-lock flag at the same time.

At that point the mark word's structure looks like this:

Bit fields          Biased flag   Lock flag bits
threadId, epoch     1             01

The biased flag is now "1", meaning the object's biased lock is in effect.

To summarize the biased-lock steps:

  1. Load-and-test: simply check whether the current thread's ID matches the thread ID in the mark word.
  2. If it matches, this thread has already acquired the lock; continue executing the code below.
  3. If it does not match, check whether the object is still biasable, i.e. the value of the biased flag.
  4. If it is not yet biased, use a CAS operation to compete for the lock, which is the same operation as on first acquisition.

If the object is already biased, and not toward us, there is contention. What happens next depends on the other thread: the lock may be re-biased, or the bias may be revoked, but in most cases it is upgraded to a lightweight lock.
As you can see, the biased lock is aimed at a single thread: once that thread has the lock, there are no further unlock operations, which saves a lot of overhead. If two threads compete for the lock, the bias fails and the lock is upgraded to a lightweight lock.
Why do it this way? Because experience shows that in most cases it really is the same thread that keeps entering the same synchronized block. That is why the biased lock exists.
In JDK 1.6 the biased lock is enabled by default, and it suits scenarios where only one thread ever accesses the synchronized block.
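A sketch of the exact pattern biased locking targets: one thread repeatedly entering the same uncontended monitor. (Class and field names are illustrative; whether the JVM actually biases the lock depends on flags and JDK version.)

```java
// One thread repeatedly entering the same monitor with no contention.
// After the first CAS stores this thread's ID in the mark word, each
// subsequent entry is just a thread-ID comparison (load-and-test).
public class SingleThreadedSync {
    private final Object lock = new Object();
    private int counter = 0;

    public int run(int iterations) {
        for (int i = 0; i < iterations; i++) {
            synchronized (lock) {   // biased toward this thread after first entry
                counter++;
            }
        }
        return counter;
    }
}
```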


The biased lock assumes only one thread ever executes the synchronized block. When there is lock contention, it does a lot of extra work; in particular, revoking the bias forces entry into a safepoint, a safepoint causes a stop-the-world pause, and performance drops. In that situation the biased lock should be disabled.

Viewing pauses: the safepoint pause log

To see safepoint pauses, turn on the safepoint log. Setting the JVM parameter -XX:+PrintGCApplicationStoppedTime prints the time the system was stopped; adding the two parameters -XX:+PrintSafepointStatistics -XX:PrintSafepointStatisticsCount=1 prints the details, so you can see the pauses caused by biased locking. Each pause is extremely short, but under heavy contention the number of pauses can be very large.

Note: the safepoint log should not be left on permanently:
1. The safepoint log goes to stdout by default, which both clutters stdout and, if the file stdout is redirected to is not in /dev/shm, may block on I/O.
2. For very short pauses, such as revoking a biased lock, the cost of printing exceeds the pause itself.
3. The safepoint log is printed inside the safepoint, so it itself lengthens the safepoint pause.

So the safepoint log should be opened only while troubleshooting.
If you must enable it on a production system, add these four parameters as well:
-XX:+UnlockDiagnosticVMOptions -XX:-DisplayVMOutput -XX:+LogVMOutput -XX:LogFile=/dev/shm/vm.log
They unlock the diagnostic flags (which only makes more flags available, without activating any of them), turn off VM log output to stdout, and write it to a separate file in the /dev/shm directory (a memory-backed file system).
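Putting the flags above together, a possible invocation might look like this (`app.jar` is a placeholder for your application; these safepoint-statistics flags belong to JDK 8-era HotSpot and were removed in later JDKs):

```shell
# Print safepoint pause times and per-safepoint statistics,
# redirected to a memory-backed log file instead of stdout.
java -XX:+PrintGCApplicationStoppedTime \
     -XX:+PrintSafepointStatistics \
     -XX:PrintSafepointStatisticsCount=1 \
     -XX:+UnlockDiagnosticVMOptions \
     -XX:-DisplayVMOutput \
     -XX:+LogVMOutput \
     -XX:LogFile=/dev/shm/vm.log \
     -jar app.jar
```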


The log has three parts:
The first part is the timestamp and the type of the VM Operation.
The second part, enclosed in square brackets, is the thread summary:
total: the total number of threads at the safepoint
initially_running: the number of threads still in the running state when the safepoint began
wait_to_block: the number of threads the VM Operation had to wait for to pause before it could begin

The third part lists the phases of reaching the safepoint and the time each operation took; the most important is vmop:

  • spin: time spent waiting for threads to respond to the safepoint call;
  • block: time spent pausing all threads;
  • sync: equals spin + block; the total time from start to entering the safepoint, useful for judging how long entering a safepoint takes;
  • cleanup: time spent on cleanup;
  • vmop: time spent actually executing the VM Operation.

You will find that the numerous but very short safepoints are all RevokeBias; highly concurrent applications will disable biased locking.

Enabling/disabling biased locking in the JVM

    • Enable biased locking: -XX:+UseBiasedLocking -XX:BiasedLockingStartupDelay=0
    • Disable biased locking: -XX:-UseBiasedLocking

Lock inflation

As just described, when two threads come to compete for the lock, the biased lock fails; the lock then inflates and is upgraded to a lightweight lock. This is what we usually call lock inflation.

Bias revocation

Since the bias has failed, the lock's bias must be revoked next, and revocation is fairly expensive. The rough process is:

  1. Stop the thread that owns the lock at a safepoint.
  2. Walk its thread stack; if lock records exist, fix up the lock records and the mark word, putting the object into the lock-free state.
  3. Wake the thread and upgrade the lock to a lightweight lock.
    So if a given synchronized block is usually contended by two or more threads, the biased lock becomes a burden; in that case we can simply turn this default feature off from the start.

Lightweight lock

When code is about to enter a synchronized block, if the synchronization object is not locked (lock flag in the "01" state), the VM first creates a space called the lock record (Lock Record) in the current thread's stack frame, used to store a copy of the lock object's current mark word (officially this copy carries a Displaced prefix, i.e. the Displaced Mark Word). The state of the thread stack and object header at this point is shown in Figure 13-3.


The VM then uses a CAS operation to try to update the object's mark word to a pointer to the Lock Record. If the update succeeds, the thread owns the object's lock, and the object's mark word lock flag (the last 2 bits of the mark word) changes to "00", meaning the object is in the lightweight-locked state; the state of the thread stack and object header at this point is shown in Figure 13-4.

If the update fails, the VM first checks whether the object's mark word already points into the current thread's stack frame. If it does, the current thread already owns this object's lock and can simply enter the synchronized block and continue; otherwise, the lock object has been claimed by another thread. Once two or more threads contend for the same lock, the lightweight lock is no longer effective and must inflate into a heavyweight lock: the lock flag changes to "10", the mark word stores a pointer to the heavyweight lock (mutex), and threads waiting for the lock thereafter enter the blocked state.

The above is the lightweight lock's locking process; its unlocking is also done via CAS. If the object's mark word still points to the thread's lock record, a CAS swaps the object's current mark word back with the Displaced Mark Word copied into the thread. If the swap succeeds, the whole synchronization is complete. If it fails, another thread has tried to acquire the lock, so along with releasing the lock, the suspended threads must be woken.

The lightweight lock improves synchronization performance based on the rule of thumb that "for the great majority of locks, there is no contention during the whole synchronization period". Without contention, the lightweight lock uses CAS and avoids the overhead of a mutex; with contention, however, the CAS happens on top of the mutex overhead, so under contention the lightweight lock is actually slower than the traditional heavyweight lock.

To summarize the process of upgrading to a lightweight lock after bias revocation:

  1. The thread creates a lock record (LockRecord) in its own stack frame.
  2. The mark word in the lock object's header is copied into the thread's newly created lock record.
  3. The Owner pointer in the lock record is pointed at the lock object.
  4. The mark word in the lock object's header is replaced with a pointer to the lock record. The corresponding figure is shown below (from Zhou Zhiming's Understanding the Java Virtual Machine).
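As a loose model (not HotSpot's actual implementation), the CAS handshake in the steps above can be illustrated with an `AtomicReference` standing in for the mark word; the class and method names are made up for illustration:

```java
import java.util.concurrent.atomic.AtomicReference;

// A loose model of the lightweight-lock handshake: the "mark word" is an
// AtomicReference that is null when unlocked, or points at the owning
// thread (standing in for its lock record) when thin-locked. Only the CAS
// steps are modeled, not HotSpot's real object-header layout.
public class ThinLockModel {
    private final AtomicReference<Thread> markWord = new AtomicReference<>(null);

    public boolean tryLock() {
        // Locking: CAS the mark word from "unlocked" to a pointer at our record.
        return markWord.compareAndSet(null, Thread.currentThread());
    }

    public boolean unlock() {
        // Unlocking: CAS the displaced (original) value back in; failure would
        // mean another thread contended and the lock must be inflated instead.
        return markWord.compareAndSet(Thread.currentThread(), null);
    }
}
```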



There are two main kinds of lightweight lock:

  1. Spin locks
  2. Adaptive spin locks

Spin locks

Spinning means that when another thread comes to compete for the lock, this thread loops in place waiting instead of being blocked; as soon as the lock-holding thread releases the lock, this thread can acquire it immediately.
Note that while looping in place, the thread consumes CPU, as if it were executing an empty for loop.
So the lightweight lock suits scenarios where the synchronized block executes very quickly; a thread then only has to wait in place a very short time to acquire the lock.
Experience shows that most synchronized blocks execute in a very short time, and it is precisely for this reason that the lightweight lock exists.

Some problems with spin locks:
  1. If the synchronized block executes slowly and takes a lot of time, other threads waiting in place burn CPU for nothing, which is painful.
  2. Even though the current thread should be able to acquire the lock once its holder releases it, if several threads are competing for the lock at that moment, the current thread may lose out and have to keep waiting in place, burning CPU in empty loops; it may even never acquire the lock at all.

Because of this, we must cap the number of empty loops: once a thread exceeds the cap, we conclude that continuing to spin is no longer appropriate, and the lock inflates again, upgrading to a heavyweight lock.
By default the spin count is 10; users can change it with -XX:PreBlockSpin.

Spin locks were introduced in JDK 1.4.2.
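The bounded-spin-then-inflate idea can be sketched as follows (a made-up illustration; the real JVM does this inside the object monitor, not in Java code, and the cap of 10 mirrors the default mentioned above):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// A bounded spin lock: spin up to maxSpins CAS attempts, then report
// failure so the caller could fall back to a heavyweight (blocking) lock.
public class BoundedSpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);
    private final int maxSpins;

    public BoundedSpinLock(int maxSpins) {
        this.maxSpins = maxSpins;
    }

    /** Returns true if the lock was acquired within maxSpins attempts. */
    public boolean tryLockSpinning() {
        for (int i = 0; i < maxSpins; i++) {
            if (locked.compareAndSet(false, true)) {
                return true;  // got the lock while spinning
            }
            // busy-wait iteration: this is the CPU cost the text describes
        }
        return false;         // spin budget exhausted: caller should block/inflate
    }

    public void unlock() {
        locked.set(false);
    }
}
```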

Adaptive spin locks

With an adaptive spin lock, the number of empty loop iterations a thread spends waiting is not fixed; it changes dynamically according to the actual situation.
The rough principle is this:
suppose thread 1 has just successfully acquired the lock; after it releases the lock, thread 2 acquires it. While thread 2 is running, thread 1 wants the lock again, but thread 2 has not yet released it, so thread 1 can only spin-wait. The VM, however, reasons that since thread 1 recently succeeded in acquiring this lock, spinning is likely to succeed again, so it extends thread 1's spin count.
Conversely, if for a certain lock a thread rarely manages to acquire it by spinning, then the next time that thread wants the lock, the VM may skip the spin phase entirely and upgrade directly to a heavyweight lock, to avoid wasting resources in empty-loop waiting.

The lightweight lock is also known as non-blocking synchronization, or optimistic locking, because the process neither suspends nor blocks threads; instead, threads loop emptily, waiting for serialized execution.

Comparing the locks

Summarizing the characteristics described above:

Lock              Advantages                                            Disadvantages                                        Applicable scenario
Biased lock       Locking/unlocking adds almost no extra cost           Revoking the bias under contention costs a safepoint  Only one thread accesses the synchronized block
Lightweight lock  Competing threads spin instead of blocking            Threads that never win the lock burn CPU spinning     Threads alternate; synchronized blocks execute quickly
Heavyweight lock  Waiting threads do not spin, so no CPU is wasted      Threads block; response time suffers                  Long synchronized blocks; heavy contention

Origin www.cnblogs.com/yiyepiaolingruqiu/p/11583820.html