The underlying principles of synchronized and Lock in Java

1. Optimization of locks in the JVM:

In short, the monitorenter and monitorexit bytecodes are implemented in the JVM by relying on the Mutex Lock of the underlying operating system. But using a Mutex Lock requires suspending the current thread and switching from user mode to kernel mode, and this switch is very expensive. In reality, however, most synchronized methods run in a lock-free, effectively single-threaded environment; calling into the Mutex Lock every time would seriously hurt the performance of the program. JDK 1.6 therefore introduced a large number of optimizations to the lock implementation, such as Lock Coarsening, Lock Elimination, Lightweight Locking, Biased Locking, and Adaptive Spinning, to reduce the overhead of lock operations.

Lock Coarsening: Reduce unnecessary unlock/lock pairs that sit closely together by merging multiple consecutive locked regions into a single lock with a larger scope.

Lock Elimination: Through escape analysis in the JIT compiler at runtime, remove lock protection from data that can never be shared with other threads outside the current synchronized block. Escape analysis can also allocate object space on the thread-local stack, which additionally reduces garbage collection overhead on the heap.

Lightweight Locking: This lock implementation is based on the assumption that, in practice, most synchronized code runs without lock contention (i.e., in an effectively single-threaded environment). When there is no contention, calling the heavyweight operating-system mutex can be avoided entirely: a single CAS atomic instruction is enough to acquire and release the lock in monitorenter and monitorexit. When there is contention, the thread that fails the CAS instruction falls back to the operating-system mutex, blocks, and is woken up when the lock is released.
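The core idea can be sketched as a lock whose uncontended acquire and release are each a single CAS. This is a minimal illustration, not the JVM's real lightweight-lock implementation (the real one CASes a mark word onto the stack and inflates to a full monitor under contention); the class and field names here are invented:

```java
import java.util.concurrent.atomic.AtomicReference;

// Sketch of the lightweight-lock idea: in the uncontended case,
// acquiring and releasing the lock is one CAS each, with no OS mutex.
public class CasLock {
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public void lock() {
        Thread current = Thread.currentThread();
        // Uncontended case: this CAS succeeds immediately (monitorenter).
        while (!owner.compareAndSet(null, current)) {
            // Contended case: a real JVM would inflate to the OS-level
            // heavyweight monitor here; this sketch just spins.
            Thread.onSpinWait();
        }
    }

    public void unlock() {
        // Release with a single atomic operation (monitorexit).
        owner.compareAndSet(Thread.currentThread(), null);
    }
}
```

Note how the happy path never touches the kernel: that is exactly the saving the lightweight lock aims for.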

Biased Locking: Avoids executing even the CAS atomic instruction when a lock is acquired without contention. Although a CAS instruction is cheap compared to a heavyweight lock, it still incurs a considerable local latency.

Adaptive Spinning: When a thread fails a CAS operation while acquiring a lightweight lock, it busy-waits (spins) and retries before entering the heavyweight operating-system lock (mutex/semaphore) associated with the monitor. If it is still unsuccessful after a certain number of attempts, it calls into the semaphore (i.e., the mutex) associated with the monitor and blocks.
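The spin-then-block pattern can be sketched as follows. This is an illustrative toy, not HotSpot's implementation: the spin limit here is a fixed constant, whereas HotSpot's adaptive spinning adjusts the limit based on how often spinning recently succeeded on the same lock, and the blocking fallback here uses a timed wait instead of a real OS mutex queue:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of spin-then-block: try the CAS a bounded number of times,
// and only fall back to blocking when spinning does not pay off.
public class SpinThenBlockLock {
    private static final int SPIN_LIMIT = 100; // hypothetical fixed bound
    private final AtomicBoolean held = new AtomicBoolean(false);

    public void lock() throws InterruptedException {
        for (int i = 0; i < SPIN_LIMIT; i++) {
            if (held.compareAndSet(false, true)) {
                return; // spinning succeeded: no context switch needed
            }
            Thread.onSpinWait();
        }
        // Spinning failed: block instead of burning CPU. A real monitor
        // would park on the OS mutex/semaphore; we use a timed wait.
        synchronized (this) {
            while (!held.compareAndSet(false, true)) {
                wait(10);
            }
        }
    }

    public void unlock() {
        held.set(false);
        synchronized (this) {
            notify(); // wake one blocked waiter, if any
        }
    }
}
```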

2. The principle of synchronized:

There are two built-in synchronized forms in the Java language: 1. the synchronized statement; 2. the synchronized method. For the synchronized statement, when Java source code is compiled to bytecode by javac, monitorenter and monitorexit bytecode instructions are inserted at the entry and exit of the synchronized block, respectively. A synchronized method, in contrast, is translated into ordinary method invocation and return instructions such as invokevirtual and areturn; there is no special instruction at the bytecode level for a method modified by synchronized. Instead, the synchronized flag in the method's access_flags entry in the Class file's method table is set to 1, indicating that the method is a synchronized method. The lock object is the instance on which the method is called or, for a static method, the Class object of the class to which the method belongs.
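The two built-in forms look like this. Compiling the class and running `javap -c -v SyncForms` shows monitorenter/monitorexit instructions only for the block form; the method form simply carries the synchronized flag in its access_flags (shown by javap as ACC_SYNCHRONIZED). The class name is illustrative:

```java
// Both methods are mutually exclusive on the same lock object (this).
public class SyncForms {
    private int count;

    // Form 1: synchronized statement.
    // Compiles to monitorenter/monitorexit around the block body.
    public void incWithBlock() {
        synchronized (this) {
            count++;
        }
    }

    // Form 2: synchronized method.
    // No special bytecode; the method's access_flags carry the
    // synchronized flag, and the lock is `this` (or the Class object
    // if the method were static).
    public synchronized void incWithMethod() {
        count++;
    }

    public int get() {
        return count;
    }
}
```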

3. synchronized in detail:

1. Thread state and state transition

When multiple threads request an object monitor at the same time, the monitor distinguishes the requesting threads using several states:

  • Contention List: all threads requesting the lock are first placed in this contention queue
  • Entry List: threads in the Contention List that qualify as candidates are moved to the Entry List
  • Wait Set: threads blocked by a call to the wait method are placed in the Wait Set
  • OnDeck: at any moment, at most one thread is actively competing for the lock; that thread is called OnDeck
  • Owner: the thread that holds the lock is called the Owner
  • !Owner: the thread that has released the lock
The diagram below reflects these state transitions:

A thread newly requesting the lock is first added to the ContentionList. When a thread holding the lock (in the Owner state) calls unlock and finds the EntryList empty, it moves threads from the ContentionList into the EntryList. The implementations of the ContentionList and EntryList are described below.

1.1 The Contention List virtual queue

The Contention List is not a real Queue but only a virtual one: it is formed logically from Nodes and their next pointers, with no separate Queue data structure. The ContentionList behaves as a last-in, first-out (LIFO) structure: each new Node is added at the head by using CAS to swing the pointer to the first node onto the new node, with the new node's next pointing at the rest of the list, while removal happens at the tail. This structure is, in effect, a lock-free queue.

Because only the Owner thread can take elements from the tail, dequeue operations are uncontended, which also avoids the ABA problem of CAS.

PS: however you look at it, this reads like FIFO. I searched for a long time and could not find a detailed description of the Contention List, so I will not dwell on it here.

1.2 EntryList

The EntryList and the ContentionList are logically both wait queues. The ContentionList is accessed concurrently by many threads, so the EntryList exists to reduce contention on the ContentionList's tail. When the Owner thread unlocks, it migrates threads from the ContentionList into the EntryList and designates one thread in the EntryList (usually the head) as the Ready (OnDeck) thread. The Owner does not hand the lock to the OnDeck thread; it only grants it the right to compete for the lock, and the OnDeck thread must re-contend for it. Although this sacrifices some fairness, it greatly improves overall throughput; in HotSpot, this way of choosing the OnDeck thread is called "competitive switching".
 
Once the OnDeck thread obtains the lock, it becomes the Owner thread; if it fails to obtain the lock, it remains in the EntryList and, for fairness, its position there does not change (it stays at the head). If the Owner thread is blocked by the wait method, it is moved to the WaitSet queue; if it is later woken by notify/notifyAll, it is moved back to the EntryList.
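The Owner → Wait Set → EntryList round trip described above is exactly what wait() and notifyAll() do. A minimal sketch (class and field names are illustrative):

```java
// A thread calling wait() releases the monitor and parks in the
// Wait Set; notifyAll() moves waiters back to re-contend for the lock.
public class WaitSetDemo {
    private final Object monitor = new Object();
    private boolean ready = false;

    public void await() throws InterruptedException {
        synchronized (monitor) {
            while (!ready) {
                monitor.wait(); // Owner -> Wait Set; monitor is released
            }
        }
    }

    public void signal() {
        synchronized (monitor) {
            ready = true;
            monitor.notifyAll(); // Wait Set -> EntryList; recontend for lock
        }
    }
}
```

The while loop around wait() is the standard guard against spurious wakeups: being moved back to the EntryList does not by itself mean the condition now holds.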

   

4. The principle of Lock:

Lock is a thread-synchronization tool introduced in Java 1.5, used mainly to control access to shared resources under multithreading. Essentially, Lock is just an interface (located in java.util.concurrent.locks) containing the following methods:

// Acquires the lock, returning once it is obtained; otherwise blocks the current thread
void lock(); 

// Acquires the lock unless the thread is interrupted while waiting, in which case it gives up and throws an exception
void lockInterruptibly() throws InterruptedException; 

// Attempts to acquire the lock; returns true on success, false otherwise
boolean tryLock(); 

// Attempts to acquire the lock within the given time; returns true if acquired, false otherwise; throws an exception if interrupted before the lock is acquired
boolean tryLock(long time, TimeUnit unit) 
                                   throws InterruptedException; 

// Releases the lock
void unlock(); 

// Returns a condition variable bound to this lock; condition variables provide functionality similar to notify and wait, and one lock can have multiple condition variables
Condition newCondition();

Lock has three implementation classes: ReentrantLock, plus the two static inner classes ReadLock and WriteLock of ReentrantReadWriteLock.

Usage: when accessing a (mutually exclusive) shared resource under multithreading, acquire the lock before the access and release it after the access is done; the unlock call should go in a finally block.

Lock l = ...; // obtain a lock object from the constructor of some class implementing the Lock interface 
l.lock(); // the lock acquisition sits outside the try block 
try { 
      // access the resource protected by this lock 
} finally { 
     l.unlock(); 
}

Note that the lock acquisition sits outside the try block that accesses the resource. This matters in particular when locking with lockInterruptibly: if the thread is interrupted while acquiring the lock, there is no need (and no way) to release it.

try {
     l.lockInterruptibly(); // if acquisition fails, the unlock in the finally block is never executed
     try {
          // access the resource protected by this lock
     } finally {
          l.unlock();
     }
} catch (InterruptedException e) {
     // handle the interruption
     e.printStackTrace();
}
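Besides lockInterruptibly, the timed tryLock is the other way to avoid waiting forever. A small sketch wrapping the same lock-outside-try pattern (the class and method names are illustrative, and the 100 ms timeout is an arbitrary choice):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// Try to acquire the lock within a deadline; give up instead of blocking.
public class TryLockDemo {
    private final Lock lock = new ReentrantLock();

    public boolean updateWithTimeout() throws InterruptedException {
        if (lock.tryLock(100, TimeUnit.MILLISECONDS)) {
            try {
                // access the resource protected by this lock
                return true;
            } finally {
                lock.unlock(); // only release what we actually acquired
            }
        }
        return false; // could not get the lock in time; do something else
    }
}
```

Note that unlock() lives inside the if branch: when tryLock returns false, the thread never held the lock and must not release it.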

Performance comparison of synchronized and Lock:

In JDK 1.5, synchronized performed poorly. It was a heavyweight operation, and its biggest cost was its blocking implementation: suspending and resuming a thread both require a transition into kernel mode, which puts great pressure on the concurrency of the system. By comparison, the Lock objects Java provides performed better. Under multithreading, the throughput of synchronized degraded severely, while ReentrantLock stayed at a fairly stable level.

This changed in JDK 1.6, which added many optimizations to synchronized: adaptive spinning, lock elimination, lock coarsening, lightweight locks, biased locks, and so on. As a result, on JDK 1.6 synchronized performs no worse than Lock. Officially, synchronized is also the preferred option, with room for further optimization in future versions, so the recommendation remains: when synchronized can meet the requirement, prefer synchronized for synchronization.


5. A brief analysis of the underlying implementation strategies of the two locking mechanisms:

The main cost of mutual-exclusion synchronization is the performance overhead of blocking and waking threads, which is why it is also called blocking synchronization. It is a pessimistic concurrency strategy: a thread acquires an exclusive lock, so other threads can only block while waiting for the lock to be released. Blocking a thread causes a thread context switch, and when many threads compete for the lock, the frequent context switches make the CPU very inefficient. This is the concurrency strategy synchronized uses.

With the evolution of instruction sets, another option became available: an optimistic concurrency strategy based on conflict detection. Put simply, the operation is performed first; if no other thread contends for the shared data, the operation succeeds, and if the shared data is contended and a conflict occurs, some compensating measure is taken (most commonly, retrying until success). Many implementations of this optimistic strategy never need to suspend threads, so this kind of synchronization is called non-blocking synchronization. This is the concurrency strategy ReentrantLock uses.

In the optimistic strategy, the operation and the conflict detection must together be atomic, which is guaranteed by a hardware instruction: the CAS operation (Compare and Swap). Java programs have been able to use CAS since JDK 1.5. Digging into the ReentrantLock source, one of the key methods on the lock-acquisition path is compareAndSetState, which ultimately invokes this special CPU instruction. Modern CPUs provide instructions that can atomically update shared data while detecting interference from other threads, and compareAndSet() uses them in place of locking. Such algorithms are called non-blocking: the failure or suspension of one thread should not cause the failure or suspension of another.

Java 5 introduced special atomic variable classes such as AtomicInteger, AtomicLong, and AtomicReference. The methods they provide, such as compareAndSet(), incrementAndGet(), and getAndIncrement(), all use CAS operations, so they are atomic methods guaranteed by hardware instructions.
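A small example of CAS in action through AtomicInteger (the wrapper class name is illustrative): incrementAndGet() is internally a read–compute–compareAndSet retry loop, and compareAndSet() exposes the conflict detection directly by failing when another thread changed the value first.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Lock-free counter built on the hardware CAS instruction.
public class CasCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    public int increment() {
        // Retries the CAS internally until it succeeds; never blocks.
        return value.incrementAndGet();
    }

    public boolean casTo(int expected, int update) {
        // Succeeds only if no other thread changed the value meanwhile.
        return value.compareAndSet(expected, update);
    }

    public int get() {
        return value.get();
    }
}
```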


6. Usage comparison:

In basic syntax, ReentrantLock is very similar to synchronized: both are reentrant, and they differ only slightly in how the code is written. One is a mutex at the API level (Lock); the other is a mutex at the native-syntax level (synchronized). Compared with synchronized, ReentrantLock does add some advanced features, chiefly the following three:

1. Interruptible waiting: when the thread holding the lock does not release it for a long time, a waiting thread can choose to give up waiting and do something else instead, which is helpful for synchronized blocks with very long execution times. A thread waiting for the mutex produced by synchronized, by contrast, blocks indefinitely and cannot be interrupted.

2. Fair locks: with a fair lock, multiple threads waiting for the same lock must acquire it in the order in which they requested it; a non-fair lock makes no such guarantee, and when the lock is released any waiting thread may obtain it. The lock in synchronized is non-fair. ReentrantLock is non-fair by default as well, but a fair lock can be requested through the constructor ReentrantLock(true).

3. A lock can be bound to multiple conditions: a ReentrantLock object can be bound to several Condition objects (also called condition variables or condition queues) at the same time. In synchronized, the lock object's wait() and notify()/notifyAll() methods implement a single implicit condition; to associate the lock with more than one condition, an extra lock would have to be added. ReentrantLock needs none of that: just call newCondition() multiple times. We can also use the bound Condition object to choose which threads the current thread notifies (namely, the other threads bound to that Condition object).
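The classic illustration of multiple conditions on one lock is a bounded buffer: producers wait on a "not full" condition, consumers on a "not empty" one, so each signal wakes only threads that can actually make progress. A sketch (the class name and capacity are illustrative):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// One ReentrantLock bound to two Conditions. With synchronized, the
// single implicit condition would force notifyAll on every change.
public class BoundedBuffer<T> {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();
    private final Deque<T> items = new ArrayDeque<>();
    private final int capacity;

    public BoundedBuffer(int capacity) {
        this.capacity = capacity;
    }

    public void put(T item) throws InterruptedException {
        lock.lock();
        try {
            while (items.size() == capacity) {
                notFull.await();   // wait only in the "buffer full" queue
            }
            items.addLast(item);
            notEmpty.signal();     // wake exactly one consumer
        } finally {
            lock.unlock();
        }
    }

    public T take() throws InterruptedException {
        lock.lock();
        try {
            while (items.isEmpty()) {
                notEmpty.await();  // wait only in the "buffer empty" queue
            }
            T item = items.removeFirst();
            notFull.signal();      // wake exactly one producer
            return item;
        } finally {
            lock.unlock();
        }
    }
}
```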
