Java Multithreading and High Concurrency Basics (2) - Concurrency Levels

Because access to a critical section must be restricted, we need a strategy for controlling concurrent access, and how strict that strategy is defines the concurrency level. Classified by concurrency level, concurrent code can be roughly divided into blocking, starvation-free, obstruction-free, lock-free, and wait-free, grouped as follows:

Blocking

  1. Blocking

  2. Starvation-Free

Obstruction-Free

  3. Obstruction-Free

Lock-Free

  4. Lock-Free (LF)

Wait-Free

  5. Wait-Free (WF)

  6. Wait-Free Bounded (WFB)

  7. Wait-Free Population Oblivious (WFPO)

1. Blocking

In the basic concepts post (1) we already introduced blocking and the reasons it occurs. Under multithreaded high concurrency, using an intrinsic lock (synchronized) or a reentrant lock (ReentrantLock) makes the threads that cannot enter the critical section block until the lock is released; a minimal sketch follows.
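To make the blocking level concrete, here is a minimal sketch (class and field names are illustrative, not taken from this series) of the same critical section guarded both with synchronized and with ReentrantLock. In either variant, a thread that arrives while another thread holds the lock is suspended until the owner releases it.

```java
import java.util.concurrent.locks.ReentrantLock;

public class BlockingCounter {

    private final ReentrantLock lock = new ReentrantLock();
    private int count = 0;

    // Blocking via the intrinsic lock: a thread calling this method while another
    // thread holds the monitor is parked outside the critical section.
    public synchronized void incrementSynchronized() {
        count++;
    }

    // Blocking via an explicit reentrant lock: lock() blocks until the lock is free.
    public void incrementWithLock() {
        lock.lock();
        try {
            count++;              // critical section
        } finally {
            lock.unlock();        // always release, even if an exception is thrown
        }
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingCounter counter = new BlockingCounter();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                counter.incrementWithLock();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Always 20000, because only one thread at a time is inside the critical section.
        System.out.println(counter.count);
    }
}
```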

 

2. Starvation-Free

In Basic Concepts (1) we also talked about starvation. Starvation happens when a thread cannot obtain the resources it needs (for example CPU time) for a long period and therefore cannot make progress. For instance, a higher-priority thread may acquire a lock easily, while a lower-priority thread may not get a chance to run for a long time; the resource is being used unfairly (what fair and unfair locks are will be introduced later). Starvation-free means that every thread eventually receives a time slice from the system and gets to execute its own task; a small sketch of a fair lock follows.
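As one concrete illustration, a ReentrantLock constructed as fair hands the lock to waiting threads roughly in arrival order, which keeps any one thread from being starved by threads that keep barging in. This is a minimal sketch with illustrative names, not code from the series; note that fairness trades away some throughput.

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairLockDemo {

    // true = fair ordering; the default (false) allows barging and can starve a thread
    private static final ReentrantLock FAIR_LOCK = new ReentrantLock(true);

    public static void main(String[] args) {
        Runnable task = () -> {
            for (int i = 0; i < 3; i++) {
                FAIR_LOCK.lock();
                try {
                    System.out.println(Thread.currentThread().getName() + " got the lock");
                } finally {
                    FAIR_LOCK.unlock();
                }
            }
        };
        for (int i = 0; i < 3; i++) {
            new Thread(task, "worker-" + i).start();
        }
        // With a fair lock the output tends to alternate between the workers
        // instead of one thread monopolizing the critical section.
    }
}
```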

 

3. Obstruction-Free

Obstruction-free means that a thread's execution is not obstructed: no thread is blocked because of the critical section. It is a form of non-blocking scheduling.

In concurrent access control, blocking scheduling is a pessimistic strategy, because the system assumes that access by multiple threads will leave the shared data inconsistent (shared data here means data stored in main memory; in the JMM, the Java Memory Model, this is how threads communicate with one another).

Non-blocking scheduling, by contrast, is an optimistic strategy: the system assumes that concurrent access will not conflict, or that the probability of a conflict is small, so multiple threads are allowed into the critical section at the same time. This can easily produce inconsistent data, so obstruction-freedom requires that when a conflict does occur, the thread rolls back its changes and retries. In other words, obstruction-free execution is not guaranteed to run smoothly; in the worst case, no thread may manage to complete at all. A sketch of the optimistic, retry-on-conflict idea follows.
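The following is a minimal sketch of that optimistic "access first, detect conflicts, then discard and retry" idea, using StampedLock from java.util.concurrent.locks (available since Java 8). It illustrates optimistic reads rather than a complete obstruction-free algorithm (the write path and the fallback read still take a lock); the class and field names are illustrative.

```java
import java.util.concurrent.locks.StampedLock;

public class OptimisticPoint {

    private final StampedLock sl = new StampedLock();
    private double x, y;

    public void move(double dx, double dy) {
        long stamp = sl.writeLock();        // writers still use a lock in this sketch
        try {
            x += dx;
            y += dy;
        } finally {
            sl.unlockWrite(stamp);
        }
    }

    public double distanceFromOrigin() {
        long stamp = sl.tryOptimisticRead(); // optimistic: no blocking, just a stamp
        double curX = x, curY = y;           // read the shared data without any lock
        if (!sl.validate(stamp)) {           // conflict detected: a writer intervened
            stamp = sl.readLock();           // discard the result and fall back to a read lock
            try {
                curX = x;
                curY = y;
            } finally {
                sl.unlockRead(stamp);
            }
        }
        return Math.sqrt(curX * curX + curY * curY);
    }
}
```

On the fast path the reader never blocks: it reads the fields, then uses validate(stamp) to check whether a writer got in between and, if so, throws the values away and reads again.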

 

4. Lock-Free

Lock-free parallelism is one kind of obstruction-free execution, with the additional guarantee that at least one thread completes its operation within a finite number of steps. In the lock-free case, all threads can attempt to access the critical section at the same time. In high-concurrency multithreading, CAS (Compare And Swap) is a typical lock-free technique. A CAS-based implementation uses a retry loop: the modification is applied only when the current content matches the expected content, otherwise the loop tries again. Because no lock is ever held, CAS is immune to deadlock. The atomic classes in the java.util.concurrent.atomic package (in the JDK's rt.jar) are all implemented with CAS, for example the getAndSet(int newValue) method of AtomicInteger. A minimal sketch of such a retry loop follows.
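For illustration, here is a minimal sketch of a CAS retry loop built on AtomicInteger.compareAndSet. The class name and structure are illustrative of the pattern, not the JDK's own implementation of getAndSet.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasCounter {

    private final AtomicInteger value = new AtomicInteger(0);

    public int increment() {
        for (;;) {                               // retry until the CAS succeeds
            int expected = value.get();          // the value we believe is current
            int next = expected + 1;             // the value we want to install
            if (value.compareAndSet(expected, next)) {
                return next;                     // compare-and-swap succeeded
            }
            // another thread changed the value in between: loop and try again
        }
    }

    public static void main(String[] args) throws InterruptedException {
        CasCounter counter = new CasCounter();
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) {
                    counter.increment();
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        System.out.println(counter.value.get()); // 40000: no updates are lost
    }
}
```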

I will cover CAS in detail in a dedicated article, so I will not say more about it here.

In addition, the lock-free approach removes the performance cost of threads competing for the lock on the critical section, and it also avoids the overhead of frequently scheduling (context switching) between threads.

 

5. Wait-Free

Wait-free means that every thread can finish within a finite number of steps, regardless of how the other threads are progressing. It can be further divided into Wait-Free Bounded (WFB) and Wait-Free Population Oblivious (WFPO; quite a mouthful of a name).

Wait-free bounded: taking the English term literally, every execution of the method completes within a bounded number of steps, but that bound may depend on the number of threads.

Wait-free population oblivious (it can also be called thread-count-independent wait-free): the English literature puts it this way: a wait-free method whose performance does not depend on the number of active threads is said to be wait-free population oblivious.

More formally, let F be a method, let L be the number of threads calling F concurrently, let N be a variable independent of L, let OpsF() be the number of steps a given thread needs to complete F, and let C(N, L) be a function that depends on N and L.

Wait-free bounded:

Every one of the L threads completes the operation in C(N, L) steps or fewer: OpsF() ≤ C(N, L)

Wait-free population oblivious (rendered as thread-count-independent wait-free in the Chinese edition of Java Concurrency in Practice, which is also accurate):

Every one of the L threads completes F within a bound that does not depend on L: OpsF() ≤ C(N)

 

A typical wait-free structure is Read-Copy-Update (RCU). Its basic idea is that reads of the data need no control at all, so all reader threads are wait-free. When writing, the writer first takes a copy of the original data, modifies the copy, and then writes the new version back at an appropriate time. A Java sketch of this copy-on-write style follows.
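Here is a minimal Java sketch of that idea, using an AtomicReference to publish immutable snapshots. The names are illustrative and this is a simplified copy-on-write structure, not a full RCU implementation with grace periods: readers only dereference the current snapshot, so reads are wait-free, while writers copy, modify the copy, and publish the new version.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

public class RcuStyleList {

    private final AtomicReference<List<String>> snapshot =
            new AtomicReference<>(Collections.<String>emptyList());

    // Read path: no locks, no retries; a single atomic read of the reference.
    public List<String> read() {
        return snapshot.get();
    }

    // Write path: copy the original data, modify the copy, then write it back.
    public void add(String element) {
        for (;;) {
            List<String> current = snapshot.get();
            List<String> copy = new ArrayList<>(current);   // take a copy of the original data
            copy.add(element);                               // modify the copy
            List<String> published = Collections.unmodifiableList(copy);
            if (snapshot.compareAndSet(current, published)) {
                return;                                      // publish the new version
            }
            // another writer published first: retry against the latest snapshot
        }
    }
}
```

The JDK's CopyOnWriteArrayList follows the same copy-on-write idea, although its writers serialize on a lock instead of retrying with CAS.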
