The Art of Java Concurrency Programming, Chapter 2: The Underlying Implementation Principles of Java's Concurrency Mechanisms

 

  Java source code is compiled into Java bytecode; a class loader loads the bytecode into the JVM, and the JVM executes it. Ultimately the bytecode must be turned into assembly instructions that execute on the CPU, so the concurrency mechanisms Java relies on depend on the JVM's implementation of those instructions and on the CPU.

2.1 Applications of volatile

  In multi-threaded programming, synchronized and volatile both play important roles. volatile is a lightweight synchronized: it guarantees the "visibility" of shared variables in multiprocessor development. Visibility means that when one thread modifies a shared variable, another thread can read the modified value. Used properly, the volatile qualifier costs less to use and implement than synchronized, because it causes no thread context switching or scheduling.

  2.1.1 The definition and implementation principle of volatile

    The Java Language Specification, third edition, defines volatile as follows: the Java programming language allows threads to access shared variables; to ensure that a shared variable is updated accurately and consistently, a thread should ensure that it obtains the variable exclusively, through an exclusive lock.

    The Java language provides volatile, which in some cases is more convenient than a lock: if a field is declared volatile, the Java thread memory model ensures that all threads see the same value for that variable.

    CPU terminology definitions

    

    How does volatile ensure memory visibility?

    instance = new Singleton(); // instance is a volatile variable

    Compiled into assembly code:

    …

    When a write is performed on a shared variable declared volatile, an extra Lock-prefixed instruction is emitted, which causes two things:

    1. The data in the current processor's cache line is written back to system memory.

    2. The write-back invalidates the copies of that memory address cached by other CPUs.

     To increase processing speed, the processor does not communicate with memory directly; it first reads data from system memory into its internal cache and then operates on it, but it is not known when the result will be written back to memory. If a write is performed on a variable declared volatile, the JVM sends a Lock-prefixed instruction to the processor, which writes the cache line containing the variable's data back to system memory. On a multiprocessor machine, to keep every processor's cache consistent, a cache coherency protocol is implemented: each processor sniffs the data propagated on the bus to check whether its own cached values have gone stale. When a processor finds that the memory address corresponding to one of its cache lines has been modified, it sets that cache line to the invalid state; the next time it operates on that data, it re-reads the data from system memory into its cache.

    volatile thus rests on two implementation principles:

      1. A Lock-prefixed instruction causes the processor's cache line to be written back to memory (cache locking).

      2. Writing one processor's cache back to memory invalidates the corresponding cache line in other processors' caches.
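The visibility guarantee above can be demonstrated with a classic stop-flag pattern. This is a minimal sketch (the class and field names are my own, not from the book): the writer thread sets a volatile flag, and the volatile write guarantees the spinning reader observes it; with a plain boolean the reader could keep seeing a stale cached value and spin forever.

```java
// Minimal sketch of volatile visibility. Without "volatile" on "stop",
// the reader thread might never observe the writer's update.
public class VolatileVisibility {
    private static volatile boolean stop = false; // the shared flag
    private static long iterations = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            // Every read of "stop" sees the latest value written by any
            // thread, because the volatile write invalidates stale cache lines.
            while (!stop) {
                iterations++;
            }
            System.out.println("reader stopped after " + iterations + " iterations");
        });
        reader.start();

        Thread.sleep(100); // let the reader spin for a moment
        stop = true;       // volatile write: propagated to all processors
        reader.join();
        System.out.println("done");
    }
}
```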

  2.1.2 Optimizing the use of volatile

    The JDK's concurrency library contains a new queue class, LinkedTransferQueue, which, when using volatile variables, appends padding bytes to optimize enqueue and dequeue performance.
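The append-padding idea can be sketched as follows. This is an illustration, not the actual LinkedTransferQueue source: the extra unused long fields pad a hot variable out toward its own 64-byte cache line, so that two frequently written variables (such as a queue's head and tail) do not share a line and cause false sharing.

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch of "append padding": the subclass's unused long
// fields (48 bytes) plus the object header and the inherited value push
// the object past a typical 64-byte cache line.
class PaddedAtomicLong extends AtomicLong {
    public volatile long p1, p2, p3, p4, p5, p6; // padding, never read
    PaddedAtomicLong(long initial) { super(initial); }
}

public class FalseSharingSketch {
    // With padding, writes to head by one thread do not invalidate the
    // cache line that holds tail for another thread, and vice versa.
    static final PaddedAtomicLong head = new PaddedAtomicLong(0);
    static final PaddedAtomicLong tail = new PaddedAtomicLong(0);

    public static void main(String[] args) {
        head.incrementAndGet();
        tail.addAndGet(5);
        System.out.println(head.get() + " " + tail.get()); // 1 5
    }
}
```

Since JDK 8 the `@Contended` annotation (enabled with `-XX:-RestrictContended`) achieves the same effect more reliably, since the JIT is otherwise free to reorder fields.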

2.2 The implementation principle and applications of synchronized

   Every object in Java can be used as a lock. When a thread accesses a synchronized code block, it must first acquire the object's lock, and the lock is not released until the thread exits the block or an exception is thrown.

   Three common usages:

     Instance (ordinary) methods

     Static methods

       Synchronized blocks

  Static and instance methods are implemented with the ACC_SYNCHRONIZED flag on the method; synchronized blocks are implemented with the monitorenter and monitorexit instructions. During compilation, a monitorenter instruction is inserted at the start of the block's code, and monitorexit instructions are inserted at the end of the block and at the points where exceptions are thrown.
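The three usages above can be sketched in one class (names here are illustrative). For an instance method the lock is the current instance (this); for a static method it is the Class object; for a block it is the object named in parentheses.

```java
public class SyncForms {
    private static int staticCount = 0;
    private int count = 0;
    private final Object lock = new Object();

    // 1. Instance method: locks on "this" (ACC_SYNCHRONIZED flag).
    public synchronized void incInstance() { count++; }

    // 2. Static method: locks on SyncForms.class (ACC_SYNCHRONIZED flag).
    public static synchronized void incStatic() { staticCount++; }

    // 3. Block: compiled to monitorenter/monitorexit around the body.
    public void incBlock() {
        synchronized (lock) {
            count++;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        SyncForms s = new SyncForms();
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                s.incInstance();
                s.incBlock();
                incStatic();
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(s.count + " " + staticCount); // 4000 2000
    }
}
```

Note that incInstance and incBlock use different locks (this vs. lock), so they only exclude callers of the same form; they still produce the correct total here because count++ is always performed under one of the two monitors by the same pair of threads in mutual exclusion per monitor.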

  2.2.1 The Java object header

  2.2.2 Lock upgrading and comparison

    In Java SE 1.6, to reduce the performance cost of acquiring and releasing locks, "biased locks" and "lightweight locks" were introduced. A lock has four states which, from lowest to highest, are: lock-free, biased, lightweight, and heavyweight. These states escalate as contention increases. A lock can be upgraded but not downgraded; this upgrade-only strategy is designed to improve the efficiency of acquiring and releasing locks.

    1. Biased locking

      In most cases a lock is not only uncontended, but is acquired many times by the same thread, so biased locking was introduced to make acquisition cheaper for that thread. When a thread accesses a synchronized block and acquires the lock, it stores the ID of the biased thread in the lock record in the object header and in the stack frame. Afterwards, when that thread enters or exits the synchronized block, no CAS operation is needed to lock or unlock: it simply tests whether the object header's Mark Word stores a biased lock pointing to the current thread. If the test succeeds, the thread has acquired the lock. If it fails, it then tests whether the biased-lock flag in the Mark Word is set to 1 (indicating the lock is currently biased): if not set, it competes for the lock with CAS; if set, it tries to use CAS to point the object header's biased lock at the current thread.
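The fast path described above can be illustrated with a user-level analogy (this is NOT the JVM's implementation, which lives in the object header's Mark Word): install the owning thread once with CAS; re-entry by the same thread then needs only a plain read to confirm "owner == me", with no further atomic instruction.

```java
import java.util.concurrent.atomic.AtomicReference;

// Analogy for the biased-lock fast path: one CAS to bias the lock
// toward a thread, then cheap re-entry for that same thread.
public class BiasedStyleLock {
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public boolean tryAcquire() {
        Thread me = Thread.currentThread();
        if (owner.get() == me) {
            return true;                       // fast path: already biased to us
        }
        return owner.compareAndSet(null, me);  // slow path: bias with CAS
    }

    public static void main(String[] args) {
        BiasedStyleLock lock = new BiasedStyleLock();
        System.out.println(lock.tryAcquire()); // true: CAS installs the owner
        System.out.println(lock.tryAcquire()); // true: plain read, no CAS
    }
}
```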

    (1) Revoking a biased lock

      A biased lock uses a mechanism that releases the bias only when contention appears: the thread holding the biased lock releases it only when another thread tries to compete for it. Revoking a biased lock requires waiting for a global safepoint (a point at which no bytecode is executing). The JVM first suspends the thread that holds the biased lock, then checks whether that thread is still alive. If it is not active, the lock state in the object header is set to lock-free. If it is still alive, the stack holding the biased lock is walked, traversing the lock records of the biased object; the Mark Word in the stack's lock records and in the object header is then either re-biased toward the other thread, reverted to the lock-free state, or marked to indicate that the object is not suitable for biased locking. Finally the suspended thread is woken up.

 

  (2) Disabling biased locking

      Biased locking is enabled by default in Java 1.6 and 1.7, but it is only activated a few seconds after the program starts; if necessary, the delay can be removed with the JVM flag -XX:BiasedLockingStartupDelay=0. If you are sure that all the locks in your application are normally contended, you can disable biased locking with the JVM flag -XX:-UseBiasedLocking, and the program then enters the lightweight-lock state by default.

  2. Lightweight locking

   (1) Lightweight locking: acquiring

      Before a thread executes a synchronized block, the JVM creates a space in the current thread's stack frame for storing the lock record, and copies the Mark Word from the object header into that lock record (officially called the Displaced Mark Word). The thread then tries to use CAS to replace the Mark Word in the object header with a pointer to the lock record. If this succeeds, the current thread acquires the lock; if it fails, another thread is competing for the lock, and the current thread tries to acquire it by spinning.

   (2) Lightweight locking: releasing

      When a lightweight lock is released, an atomic CAS operation replaces the Displaced Mark Word back into the object header. If this succeeds, no contention occurred; if it fails, the lock is contended and it inflates into a heavyweight lock.

 

    Because spinning consumes CPU, to avoid useless spinning (for example when the thread holding the lock is itself blocked), once a lock has been upgraded to a heavyweight lock it never reverts to the lightweight state. While the lock is in this state, any other thread that tries to acquire it is blocked; when the thread holding the lock releases it, these threads are woken up, and the awakened threads begin a new round of competition for the lock.
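The spin-then-inflate behavior can be sketched at the user level (an analogy, not the JVM implementation; the class name and spin limit are my own): try CAS a bounded number of times, and if the lock stays contended, stop burning CPU and fall back to blocking, which corresponds to inflating to a heavyweight monitor.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SpinThenBlockLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);
    private static final int MAX_SPINS = 1000; // illustrative bound
    private static int counter = 0;

    public void lock() throws InterruptedException {
        // Fast path: spin with CAS, like a lightweight lock.
        for (int spins = 0; spins < MAX_SPINS; spins++) {
            if (locked.compareAndSet(false, true)) {
                return;
            }
        }
        // Slow path: "inflate" -- stop spinning and block until free.
        synchronized (this) {
            while (!locked.compareAndSet(false, true)) {
                wait(10); // timed wait guards against a missed notify
            }
        }
    }

    public void unlock() {
        locked.set(false);
        synchronized (this) {
            notifyAll(); // wake any blocked ("inflated") waiters
        }
    }

    public static void main(String[] args) throws InterruptedException {
        SpinThenBlockLock lock = new SpinThenBlockLock();
        Runnable task = () -> {
            try {
                for (int i = 0; i < 10000; i++) {
                    lock.lock();
                    counter++;     // guarded by the lock
                    lock.unlock();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter); // 20000: mutual exclusion held
    }
}
```

Unlike the JVM, this sketch can revert to spinning on the next acquisition; the JVM keeps a once-inflated monitor heavyweight, as the text explains.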

  3. Comparing the advantages and disadvantages of the locks

  

2.2.3 The implementation principle of atomic operations

   Atomic originally means "the smallest particle that cannot be further divided", and an atomic operation means "an operation, or series of operations, that cannot be interrupted". Implementing atomic operations on multiprocessors is somewhat more complicated.

  1. Terminology definitions

 

 

  2. How processors implement atomic operations

    1. Using bus locking to guarantee atomicity

    2. Using cache locking to guarantee atomicity

    There are two situations in which the processor will not use cache locking:

      When the data being operated on cannot be cached inside the processor, or when it spans multiple cache lines, the processor falls back to bus locking.

      Some processors do not support cache locking.
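At the Java level, these processor primitives surface as the CAS operations of the java.util.concurrent.atomic classes: on x86, compareAndSet compiles down to a LOCK-prefixed CMPXCHG, which relies on exactly the cache/bus locking described above to make the read-compare-write a single atomic step. A minimal sketch of the classic CAS retry loop:

```java
import java.util.concurrent.atomic.AtomicInteger;

// CAS retry loop: read, compute, attempt to swap; retry if another
// thread changed the value in between. The LOCK-prefixed instruction
// behind compareAndSet makes each attempt atomic.
public class CasCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    public int increment() {
        int current;
        do {
            current = value.get();
        } while (!value.compareAndSet(current, current + 1));
        return current + 1;
    }

    public int get() { return value.get(); }

    public static void main(String[] args) throws InterruptedException {
        CasCounter c = new CasCounter();
        Runnable task = () -> {
            for (int i = 0; i < 10000; i++) c.increment();
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.get()); // 20000: no lost updates
    }
}
```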

   

 


Origin www.cnblogs.com/panda777/p/11298333.html