Java Concurrency (2) - Concurrency Levels

Because critical sections exist, concurrent access by multiple threads must be controlled. Based on the concurrency-control strategy, we can distinguish several levels of concurrency: blocking, starvation-free, obstruction-free, lock-free, and wait-free.

1. Blocking

A thread is said to be "blocked" when it waits for a resource held by another thread and cannot continue until that resource is released. When we use the synchronized keyword or a reentrant lock (both will be introduced in later articles), the resulting threads are blocking.
Before executing the code that follows, a blocked thread tries to acquire the lock protecting the critical section; if the attempt fails, the thread is suspended in a waiting state until it manages to acquire the resource it needs.
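For example, with synchronized, a thread that fails to acquire the lock is suspended until the holder releases it; serializing the critical section keeps the shared counter correct. A minimal sketch (class and method names are illustrative):

```java
public class BlockingExample {
    private static final Object lock = new Object();
    private static int counter = 0;

    // Two threads each increment the counter 10,000 times inside a critical section.
    static int runTwoThreads() throws InterruptedException {
        counter = 0;
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                synchronized (lock) { // losers of the race block here until the lock is free
                    counter++;
                }
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join();  t2.join();
        return counter;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runTwoThreads()); // prints 20000: the lock serializes the updates
    }
}
```

Without the synchronized block, the two unsynchronized increments could interleave and the final count could be less than 20000.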

2. Starvation-Free

When threads have different priorities, the scheduler tends to allocate resources to high-priority threads first. In other words, resource allocation between threads is unfair. The following figure shows the unfair and fair cases.

With a non-fair lock, the system allows a high-priority thread to cut in ahead of a low-priority one, which can leave the low-priority thread with no chance to run, producing starvation. With a fair lock, the system lets threads execute in first-come, first-served order; no matter how high a thread's priority is, it must wait its turn, so every thread eventually gets a chance to run and no thread starves.
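In Java, ReentrantLock lets you choose between the two policies through its constructor: passing true requests a fair (FIFO, starvation-free) lock, while the default is non-fair. A small sketch (names are illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairLockExample {
    // Fair lock: waiting threads acquire it in FIFO order, so none starves.
    private static final ReentrantLock fairLock = new ReentrantLock(true);
    // Non-fair lock (the default): an arriving thread may "barge" ahead of waiters.
    private static final ReentrantLock unfairLock = new ReentrantLock();

    public static void main(String[] args) {
        System.out.println(fairLock.isFair());   // true
        System.out.println(unfairLock.isFair()); // false

        fairLock.lock();
        try {
            // critical section
        } finally {
            fairLock.unlock(); // always release in a finally block
        }
    }
}
```

The fair variant trades some throughput for the guarantee that every waiting thread eventually acquires the lock.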

3. Obstruction-Free

Obstruction freedom is the weakest form of non-blocking scheduling. Two threads execute obstruction-free if neither is suspended because of problems in the critical section. To keep the shared data in the critical section from being corrupted, a thread checks when it finishes whether the shared data has been modified by another thread; if it detects a conflict, it rolls back its own changes, ensuring data safety.
Blocking control is a pessimistic strategy: the system assumes contention between threads is likely, so protecting the shared data comes first. Non-blocking scheduling, by contrast, is an optimistic strategy: the system assumes conflicts between threads are rare, so threads should execute without obstruction and roll back only when a conflict is detected.
When contention in the critical section is heavy, however, all threads may keep rolling back their operations and none of them ever leaves the critical section, which hurts the normal progress of the system.
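Java's StampedLock offers an optimistic read mode that illustrates this detect-and-retry pattern: a reader proceeds without blocking, then validates that no writer intervened, and retries if one did. (The fallback here re-reads under a lock rather than rolling back a write, so this is only an illustration of optimistic conflict detection; the class and method names are illustrative.)

```java
import java.util.concurrent.locks.StampedLock;

public class OptimisticRead {
    private final StampedLock sl = new StampedLock();
    private int x, y;

    void write(int newX, int newY) {
        long stamp = sl.writeLock();
        try { x = newX; y = newY; } finally { sl.unlockWrite(stamp); }
    }

    int readSum() {
        long stamp = sl.tryOptimisticRead(); // read without blocking writers
        int curX = x, curY = y;
        if (!sl.validate(stamp)) {           // a writer intervened: discard and retry
            stamp = sl.readLock();
            try { curX = x; curY = y; } finally { sl.unlockRead(stamp); }
        }
        return curX + curY;
    }

    public static void main(String[] args) {
        OptimisticRead o = new OptimisticRead();
        o.write(1, 2);
        System.out.println(o.readSum()); // 3
    }
}
```

In the common uncontended case the read completes with no locking at all; only on a detected conflict does the reader pay the cost of retrying.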

4. Lock-Free

In the obstruction-free scenario above, it is possible that no thread ever exits the critical section. Lock freedom strengthens obstruction freedom with one extra condition: there must always be some thread guaranteed to complete its operation in the critical section within a finite number of steps.
A lock-free call typically contains a loop in which the thread keeps trying to modify a shared variable. If there is no conflict, the modification succeeds and the thread leaves the critical section; if it hits a conflict, it retries the modification. Either way, a lock-free algorithm guarantees that there is always some thread that succeeds.
Java supports a lock-free primitive: Compare-And-Swap (CAS). Because CAS is an atomic operation supported at the CPU level, when multiple threads race to modify a shared resource, one of them is guaranteed to succeed. Under heavy contention, however, an "unlucky" thread may keep losing the race, so starvation can still occur.

```java
int localVar = atomicVar.get();           // read the current value first
// Retry until the CAS succeeds; some thread always makes progress
while (!atomicVar.compareAndSet(localVar, localVar + 1)) {
    localVar = atomicVar.get();           // conflict: reread and try again
}
```

5. Wait-Free

Lock freedom only requires that some one thread complete its operation in a finite number of steps. Wait freedom extends lock freedom further: it requires that all threads finish in a finite number of steps, so no thread can starve.
A typical wait-free structure is RCU (Read-Copy-Update). The basic idea is that reads need no control at all, so no reading thread ever waits. To write, a thread first takes a copy of the original data, modifies only the copy, and then, at a suitable moment, writes the copy back over the original data.
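Java's CopyOnWriteArrayList embodies the same read-copy-update idea (it is not RCU itself): readers iterate an immutable snapshot and never wait, while each write copies the backing array, modifies the copy, and swaps it in. A sketch (the class and method names of the example are illustrative):

```java
import java.util.Iterator;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class CopyOnWriteExample {
    // Returns how many elements an iterator taken *before* a write observes.
    static int snapshotSize() {
        List<String> list = new CopyOnWriteArrayList<>();
        list.add("a");
        list.add("b");

        // The iterator captures an immutable snapshot; it never waits for writers.
        Iterator<String> it = list.iterator();

        // The writer copies the backing array, modifies the copy, and swaps it in.
        list.add("c");

        int seen = 0;
        while (it.hasNext()) { it.next(); seen++; }
        return seen; // the old snapshot predates add("c"); new readers would see 3 elements
    }

    public static void main(String[] args) {
        System.out.println(snapshotSize()); // prints 2
    }
}
```

This trades write cost (a full array copy per mutation) for reads that never block, which fits read-mostly workloads.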


Origin juejin.im/post/5dc92ed7f265da4d0c175b3d