Into High Concurrency (I): A Summary of Essential Foundational Concepts

To learn any language, you first master its basic concepts; only once those are solid should you dig into the internal implementation principles. Learning concurrent programming is the same: we first grasp the basic concepts, then study how they are implemented, and finally apply them in the appropriate scenarios.

I. Essential Basic Concepts

1. Synchronous and Asynchronous

Synchronous: once a synchronous method is called, the calling thread must wait until the method finishes before it can execute the code that follows; that subsequent code can therefore rely on the value the synchronous method returns.

Asynchronous: an asynchronous call is more like message passing. When the main thread invokes an asynchronous method, the work is handed off to another thread, where it actually executes; the main thread is not blocked and immediately continues with its own subsequent code rather than waiting for the asynchronous method to finish. If the asynchronous method produces a return value, the main thread is notified when the method completes.
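The contrast can be sketched in Java. This is a minimal illustration of my own (the class name `AsyncDemo` and the use of `CompletableFuture` are not from the original post): the synchronous call makes the caller wait, while the asynchronous call returns a future immediately and the caller collects the result later.

```java
import java.util.concurrent.CompletableFuture;

public class AsyncDemo {
    // Synchronous: the caller blocks until the result is ready.
    static int computeSync() {
        return 21 + 21;
    }

    // Asynchronous: the work runs on another thread; the caller is
    // handed a future and can continue with other work immediately.
    static CompletableFuture<Integer> computeAsync() {
        return CompletableFuture.supplyAsync(() -> 21 + 21);
    }

    public static void main(String[] args) {
        int sync = computeSync();                      // main thread waits here
        CompletableFuture<Integer> f = computeAsync(); // returns at once
        // ... the main thread could do other work here ...
        int async = f.join();                          // only now do we wait
        System.out.println(sync + " " + async);        // 42 42
    }
}
```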

2. Concurrency and Parallelism

Concurrency: concurrency emphasizes multiple tasks being executed "alternately". The system executes part of task A, then part of task B, switching back and forth between the two.

Parallelism: parallelism emphasizes multiple tasks executing at the same instant; it is "simultaneous" in the true sense.

On a single-core CPU, the processor executes one instruction at a time and cannot execute multiple instructions simultaneously, so true parallelism is impossible; multi-process or multi-threaded tasks can still run concurrently, because the system keeps switching between tasks. The basic requirement for parallelism is a multi-core CPU.

3. Critical Section

Critical section: a critical section is a region shared by multiple threads; the data stored in it is a shared resource. However, only one thread at a time may access or modify the data in this region; other threads must wait until the thread currently inside finishes its access or modification before they can enter the critical section themselves.
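A minimal sketch of a critical section in Java (class and field names here are my own illustration): several threads increment a shared counter, and the `synchronized` block guarantees that only one thread at a time executes the increment.

```java
public class CriticalSectionDemo {
    private static int counter = 0;
    private static final Object LOCK = new Object();

    static int incrementConcurrently(int threads, int perThread)
            throws InterruptedException {
        counter = 0;
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) {
                    synchronized (LOCK) {  // only one thread at a time in here
                        counter++;
                    }
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        return counter;
    }

    public static void main(String[] args) throws InterruptedException {
        // Without the synchronized block, lost updates would make the
        // result come out below 40000 on most runs.
        System.out.println(incrementConcurrently(4, 10_000)); // 40000
    }
}
```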

4. Blocking and Non-Blocking

Blocking: blocking and non-blocking describe how threads interact. A thread is blocked when another thread is occupying the critical-section resource it needs: it must wait for the occupying thread to finish its access before it can use the resource itself. Blocking suspends the waiting threads; if the current thread holds the resource for a long time, the waiting threads may time out, and in severe cases the pile-up can even trigger an avalanche.

Non-blocking: non-blocking means threads do not hold each other up; every thread keeps making forward progress and no thread waits on another.
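The difference is visible in `ReentrantLock`'s two acquisition styles: `lock()` blocks the caller until the resource is free, while `tryLock()` returns immediately with a boolean, letting the thread do something else instead of hanging. A small sketch (class and method names are my own illustration):

```java
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    // Non-blocking attempt: tryLock returns immediately with true/false
    // instead of suspending the calling thread.
    static String attempt(ReentrantLock lock) {
        if (lock.tryLock()) {
            try {
                return "acquired";
            } finally {
                lock.unlock();
            }
        }
        return "busy, doing something else";
    }

    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();                    // main thread holds the resource
        Thread worker = new Thread(() ->
                System.out.println("worker: " + attempt(lock)));
        worker.start();
        worker.join();                  // worker reports "busy" without hanging
        lock.unlock();
        System.out.println("main: " + attempt(lock)); // main: acquired
    }
}
```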

5. Deadlock, Starvation and Livelock

Deadlock, livelock and starvation all describe the liveness of threads. When multiple threads fall into any of these three states, the program may no longer be able to make normal progress in a multithreaded environment.

Deadlock: deadlock is a very serious problem in a multithreaded environment. Multiple threads each hold a resource that another thread needs in order to proceed, and none of them releases it, so all of them hang waiting for each other. Once this happens, it usually brings the program down.
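The classic deadlock scenario is two threads acquiring two locks in opposite orders: thread 1 takes A then B while thread 2 takes B then A, and each ends up holding the lock the other needs. The sketch below (my own illustration, not from the original post) shows the standard fix rather than the hang: impose a global lock order so every thread acquires A before B, and the cycle can never form.

```java
public class LockOrderingDemo {
    static final Object LOCK_A = new Object();
    static final Object LOCK_B = new Object();
    static volatile int completed = 0;

    // Deadlock-prone version (do NOT do this): one thread synchronizes on
    // A then B, another on B then A. Fixed version: everyone takes A first.
    static void doWork() {
        synchronized (LOCK_A) {       // global order: A before B, always
            synchronized (LOCK_B) {
                completed++;
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(LockOrderingDemo::doWork);
        Thread t2 = new Thread(LockOrderingDemo::doWork);
        t1.start(); t2.start();
        t1.join();  t2.join();        // both finish; no circular wait possible
        System.out.println("completed = " + completed); // completed = 2
    }
}
```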

Starvation: starvation means one or more threads cannot, for some reason, obtain the resources they need to run normally, so they are unable to make progress. Causes include the thread having a low priority while higher-priority threads keep preempting the resources.

Livelock: livelock occurs when threads are too "polite" to each other. Each thread actively yields its resources so that the others can run normally, but because they all keep yielding, the resources bounce back and forth between threads, and no thread ever acquires everything it needs to proceed.

II. Concurrency Levels

In a multithreaded environment, the central concern is access to critical-section resources, and concurrent access to them must be controlled. Based on the control strategy, concurrency levels are generally divided into blocking, starvation-free, obstruction-free, lock-free and wait-free.

1. Blocking

A thread is at the blocking level when a critical-section resource is protected by the synchronized keyword or a reentrant lock: until it acquires the resource, that is, until the thread currently holding it releases it, the current thread cannot continue executing and remains a blocked thread.

2. Starvation-Free

If some threads in a multithreaded environment have higher priority, low-priority threads are at a disadvantage in scheduling: the critical-section time they are allocated often cannot meet their needs, so they cannot execute properly. This is clearly unfair. A non-fair lock allows higher-priority threads to jump the queue, which starves the low-priority threads. If the scheduler instead treats all threads equally, so that even high-priority threads follow the "first come, first served" principle, the risk that low-priority threads never get to run is eliminated; such a scheme is starvation-free.
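In Java, "first come, first served" is exactly what a fair `ReentrantLock` provides: constructing it with `true` makes waiting threads acquire the lock roughly in arrival order, so no thread is starved indefinitely. A minimal sketch (the class name is my own illustration):

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairLockDemo {
    public static void main(String[] args) {
        // fair = true: threads waiting for this lock acquire it roughly
        // first-come, first-served, preventing indefinite starvation.
        // (Fairness costs throughput, so the default is non-fair.)
        ReentrantLock fairLock = new ReentrantLock(true);
        System.out.println("isFair = " + fairLock.isFair()); // isFair = true

        fairLock.lock();
        try {
            // critical section
        } finally {
            fairLock.unlock();
        }
    }
}
```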

3. Obstruction-Free

Obstruction freedom is the weakest non-blocking level. If two threads execute obstruction-free, neither will be suspended because of the critical section's shared data. That is, multiple threads may enter the critical section at the same time and modify private copies of the shared data; when a thread commits its changes, if it detects a data conflict it rolls back, and if there is no conflict it completes its modification of the shared data and continues executing normally.

Comparing the blocking and obstruction-free levels: blocking can be understood as a pessimistic control strategy, because it forcibly suspends other threads to guarantee data safety; obstruction-free control is an optimistic strategy, which assumes that conflicts between threads operating on the critical section's shared data will not occur, or occur with very low probability, so multiple threads can usually pass through the critical section successfully and roll back if a conflict does occur.

Obstruction-free control can be implemented with a "consistency marker". The basic principle: before a thread operates on the critical-section data, it reads the current marker; when it has finished modifying the data, it reads the marker again. If the marker has not changed, no other thread modified the data in the meantime, so the thread can save its result and update the marker. If the thread finds that the marker was changed by another thread during its operation, the accesses have conflicted, and the current thread rolls its data back.
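The JDK ships a ready-made version of this marker scheme: `AtomicStampedReference` pairs a value with an integer stamp, and `compareAndSet` succeeds only if both the value and the stamp are unchanged since they were read. A minimal sketch of the read-marker / write / re-check cycle (class name and values are my own illustration):

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class StampDemo {
    public static void main(String[] args) {
        // Value 100 with marker (stamp) 0.
        AtomicStampedReference<Integer> ref =
                new AtomicStampedReference<>(100, 0);

        int[] stampHolder = new int[1];
        Integer current = ref.get(stampHolder); // read value + marker together
        int stamp = stampHolder[0];

        Integer updated = current + 1;          // compute on a private copy

        // Commit succeeds only if neither the value nor the marker changed
        // in the meantime; otherwise the thread would retry (roll back).
        boolean ok = ref.compareAndSet(current, updated, stamp, stamp + 1);
        System.out.println(ok + " " + ref.getReference()); // true 101
    }
}
```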

4. Lock-Free

Lock-free can be understood as obstruction-free with an added progress guarantee: in the absence of locks, the critical-section data can still be accessed by multiple threads simultaneously, but it is always guaranteed that some thread completes its access to the critical-section data within a finite number of steps and leaves the critical section safely.
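The textbook lock-free pattern is the compare-and-swap retry loop: each thread reads the current value, computes the new one, and attempts a CAS; if another thread won the race, it simply retries. On every retry, some thread must have succeeded, which is exactly the "some thread always makes progress" guarantee. A minimal sketch (the class name is my own illustration):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class LockFreeCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    // Lock-free increment: retry the CAS until this thread wins.
    // A failed CAS means some other thread succeeded, so the system
    // as a whole always makes progress even though no lock is held.
    int increment() {
        int current;
        do {
            current = value.get();
        } while (!value.compareAndSet(current, current + 1));
        return current + 1;
    }

    int get() {
        return value.get();
    }

    public static void main(String[] args) throws InterruptedException {
        LockFreeCounter c = new LockFreeCounter();
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) c.increment();
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        System.out.println(c.get()); // 40000
    }
}
```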

5. Wait-Free

Relative to the lock-free strategy, the wait-free concurrency level is an extension of it. Lock-free only requires that some thread succeeds in accessing the critical section and leaving it safely; wait-free requires that all threads complete their access to the critical section within a finite number of steps, and therefore causes no starvation.

A typical application of the wait-free strategy is RCU (Read-Copy-Update). Its basic idea: reads are not controlled at all, and all threads may read the data simultaneously; a write is performed on a copy of the original data, and once the write completes, the reference to the original data is swapped to the copy at an appropriate time. Java's CopyOnWriteArrayList is a typical example of this strategy: multithreaded reads of the collection are wait-free, while writes are performed on a copy of the underlying array, after which the collection's reference is updated to the new array.
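The snapshot behavior is easy to observe: a CopyOnWriteArrayList iterator captures the underlying array at creation time, so a concurrent write neither blocks the reader nor throws ConcurrentModificationException. A minimal sketch (the class name is my own illustration):

```java
import java.util.Iterator;
import java.util.concurrent.CopyOnWriteArrayList;

public class CowDemo {
    public static void main(String[] args) {
        CopyOnWriteArrayList<String> list = new CopyOnWriteArrayList<>();
        list.add("a");
        list.add("b");

        // The iterator is a snapshot of the array as of this moment.
        Iterator<String> it = list.iterator();

        list.add("c"); // write goes to a fresh copy; the reference is swapped

        int seen = 0;
        while (it.hasNext()) { it.next(); seen++; } // reader is undisturbed
        System.out.println(seen + " " + list.size()); // 2 3
    }
}
```

This is why CopyOnWriteArrayList suits read-heavy, write-light workloads: every write copies the whole array, but readers never wait and never see a half-applied modification.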

For more content like this, follow my WeChat public account: Java Mountain (WeChat ID: itlemon)

Origin: blog.csdn.net/Lammonpeter/article/details/103134809