[Concurrent Programming Foundations] (1) Concurrent Programming Fundamentals

One, Concurrency and Parallelism

1.1 Concurrency

       A program has two or more threads. If the program runs on a single-core processor, the threads are swapped in and out alternately: they all exist at the same time, but at any instant only one of them is executing. If the program runs on a multi-core processor, each thread can be assigned to its own core, so multiple threads can truly run at the same time.
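A minimal sketch of the idea above: two threads whose work is interleaved by the scheduler on one core, or runs truly in parallel on two cores. The class and thread names are illustrative, not from the article.

```java
// Two threads making progress "at the same time": on a single core their
// steps interleave; on a multi-core machine they may run in parallel.
public class ConcurrencyDemo {
    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 3; i++) {
                System.out.println(Thread.currentThread().getName() + " step " + i);
            }
        };
        Thread t1 = new Thread(task, "worker-1");
        Thread t2 = new Thread(task, "worker-2");
        t1.start();          // both threads now exist at the same time
        t2.start();
        t1.join();           // wait for both to finish
        t2.join();
        System.out.println("done");
    }
}
```

Note that the interleaving of the two workers' output is not deterministic; only the final "done" line is guaranteed to come last.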

1.2 Parallelism

       Two or more events occurring at the same instant.

1.3 High Concurrency

       High concurrency is one of the factors that must be considered when designing the architecture of a distributed Internet system. It usually means designing the system so that it is guaranteed to handle many requests at the same time.

Two, CPU Multi-Level Cache

2.1 Why is a CPU cache needed?

       The CPU clock is far faster than main memory can keep up with, so within a processor clock cycle the CPU often has to wait for main memory, wasting resources. Caches appeared in order to ease this speed mismatch between the CPU and memory (structure: CPU -> cache -> main memory).

2.2 Why does the CPU cache work?

       The cache is much smaller than main memory but much faster; thanks to the locality principle below, even a small cache can serve most memory accesses.

2.3 The Principle of Locality

       (1) Temporal locality: if a piece of data is accessed, it is likely to be accessed again in the near future.

       (2) Spatial locality: if a piece of data is accessed, the data adjacent to it is likely to be accessed soon as well.
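Spatial locality can be felt directly in code. The sketch below (illustrative, not from the article) sums the same 2-D array twice: row by row, touching adjacent memory, and column by column, jumping a whole row between accesses. Both loops compute the same result, but the row-major one typically runs much faster on large arrays because consecutive elements share cache lines.

```java
// Spatial locality demo: same computation, different memory access patterns.
public class LocalityDemo {
    static final int N = 1024;

    static long sumRowMajor(int[][] a) {
        long s = 0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                s += a[i][j];   // a[i][j] and a[i][j+1] are adjacent: good locality
        return s;
    }

    static long sumColumnMajor(int[][] a) {
        long s = 0;
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                s += a[i][j];   // strides one whole row per access: poor locality
        return s;
    }

    public static void main(String[] args) {
        int[][] a = new int[N][N];
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                a[i][j] = 1;
        System.out.println(sumRowMajor(a));     // same sum either way,
        System.out.println(sumColumnMajor(a));  // but the row-major loop is cache-friendly
    }
}
```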

Three, CPU Cache Coherency (MESI: Modified | Exclusive | Shared | Invalid)

3.1 What cache coherency covers

       To keep the data shared among multiple CPU caches consistent, MESI defines four states for a cache line. Because the four cache operations can produce inconsistent states, the cache controller listens for local and remote operations and, when necessary, changes the state of the affected cache line to keep the data consistent across the caches.

To understand cache coherency, you need to understand the four states below and the 4 × 4 state transitions between them.

3.2 The four states: MESI is an acronym for the four states.
  • M (Modified): the cache line is cached only in this CPU's cache and has been modified, so it is inconsistent with the data in main memory. The line must be written back to main memory at some future point, before any other CPU is allowed to read the corresponding main-memory location. After the value is written back, the line's state becomes E (Exclusive).
  • E (Exclusive): the cache line is cached only in this CPU's cache and has not been modified, so it is consistent with the data in main memory. It becomes S (Shared) whenever another CPU reads the same memory, and becomes M (Modified) when this CPU writes to the line.
  • S (Shared): the cache line may be cached by multiple CPUs, and every cached copy is consistent with main memory. When one CPU modifies the line, the copies in the other CPUs' caches are invalidated (they become I).
  • I (Invalid): the cache line is invalid; another CPU may be modifying, or may have modified, that line.
3.3 The four operations
  • local read: this CPU reads data from its own cache
  • local write: this CPU writes data into its own cache
  • remote read: another CPU reads the data from main memory
  • remote write: another CPU writes data back to main memory
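The four states and four operations above can be sketched as a toy transition table. This is a simplified model (an assumption of this sketch: it ignores write-back timing and bus arbitration, and the `othersHaveCopy` flag is a stand-in for the bus snooping that a real cache controller performs); all names are illustrative.

```java
// Toy model of how one cache line's MESI state reacts to the four operations.
public class MesiModel {
    enum State { MODIFIED, EXCLUSIVE, SHARED, INVALID }
    enum Op { LOCAL_READ, LOCAL_WRITE, REMOTE_READ, REMOTE_WRITE }

    // othersHaveCopy matters only when an INVALID line is read locally:
    // the line is loaded as SHARED if another cache holds it, else EXCLUSIVE.
    static State next(State s, Op op, boolean othersHaveCopy) {
        switch (op) {
            case LOCAL_READ:
                return s == State.INVALID
                        ? (othersHaveCopy ? State.SHARED : State.EXCLUSIVE)
                        : s;                          // M, E, S: read without changing state
            case LOCAL_WRITE:
                return State.MODIFIED;                // writing always dirties the line
            case REMOTE_READ:
                return s == State.INVALID
                        ? State.INVALID
                        : State.SHARED;               // M/E/S downgrade to S (M after write-back)
            case REMOTE_WRITE:
                return State.INVALID;                 // another core's write invalidates our copy
            default:
                throw new AssertionError();
        }
    }

    public static void main(String[] args) {
        State s = State.INVALID;
        s = next(s, Op.LOCAL_READ, false);   // I -> E: nobody else has the line
        s = next(s, Op.LOCAL_WRITE, false);  // E -> M: we modified it
        s = next(s, Op.REMOTE_READ, false);  // M -> S: written back, now shared
        s = next(s, Op.REMOTE_WRITE, false); // S -> I: another core wrote
        System.out.println(s);               // INVALID
    }
}
```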
3.4 Cache-line states and transitions

       In a typical multi-core system, each core has its own cache connected to a shared memory bus, on which each CPU issues read and write requests. The purpose of caching is to reduce the number of CPU accesses to shared memory: a cache can satisfy a read request in any state except I.

       M and E are always accurate: they match the true state of the cache line. S may be conservative: if another cache discards a line that is in the S state, this cache may in fact hold the line exclusively, but its copy will not be promoted to E, because caches do not broadcast a notice when they discard a line, and a cache keeps no count of how many copies of a line exist, so it has no way to determine that it now holds the line exclusively. In this sense, E is a kind of speculative optimization.

3.5 CPU multi-level cache: out-of-order execution optimization


(1) On a single core, out-of-order execution is not allowed to change the observable result. In the multi-core era, multiple cores execute instructions at the same time, and each core's instructions may be reordered.
(2) The processor also introduces caching: each core has its own cache, so the logical order in which data is written to memory may not be the order in which it actually reaches memory.
(3) The upshot is that if we take no protective measures, the result the processor produces can differ materially from the logically expected result.
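One standard "protective measure" in Java is `volatile`, which forbids the dangerous reordering described above. In the sketch below (class and field names are illustrative), without `volatile` the reader could in principle observe `flag == true` while still seeing `data == 0`; marking `flag` volatile establishes a happens-before edge, so the reader is guaranteed to see `data == 42`.

```java
// Writer publishes data via a volatile flag; reader spins until it sees it.
public class VisibilityDemo {
    static int data = 0;
    static volatile boolean flag = false;  // volatile orders the two writes below

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            data = 42;     // (1) ordinary write
            flag = true;   // (2) volatile write: publishes (1) as well
        });
        Thread reader = new Thread(() -> {
            while (!flag) { /* spin until the volatile write becomes visible */ }
            // The JMM guarantees data == 42 here, because the volatile read of
            // flag happens-after the volatile write that followed data = 42.
            System.out.println("reader saw data = " + data);
        });
        reader.start();
        writer.start();
        writer.join();
        reader.join();
    }
}
```

Without the `volatile` keyword this program might print `data = 0` (or spin forever on some platforms), which is exactly the hazard point (3) describes.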

Four, Java Memory Model

4.1 Abstract structure of the Java memory model

(figure omitted: abstract structure of the Java memory model)

4.2 Java memory model: synchronization operations and rules

Synchronization rules:

  1. To copy a variable from main memory into working memory, read and load must be performed in order; to synchronize it back, store and write must be performed in order. The Java memory model only requires that these operations be performed in order, not that they be performed consecutively.
  2. Neither read and load nor store and write is allowed to appear alone.
  3. A thread is not allowed to discard its most recent assign operation; a variable changed in working memory must be synchronized back to main memory.
  4. A thread is not allowed to synchronize data from working memory back to main memory for no reason (i.e., without any assign having occurred).
  5. A new variable can only be born in main memory; working memory may not use a variable that has not been initialized (by load or assign). In other words, an assign or load must be performed on a variable before use or store is performed on it.
  6. A variable allows only one thread to lock it at any moment, but the same thread may perform lock on it repeatedly; after locking it several times, the variable is unlocked only when unlock has been performed the same number of times. lock and unlock must be paired.
  7. Performing lock on a variable clears that variable's value from working memory; before the execution engine uses the variable, load or assign must be executed again to initialize its value.
  8. A variable that has not been locked by a lock operation may not be unlocked, and a thread is not allowed to unlock a variable locked by another thread.
  9. Before performing unlock on a variable, the variable must first be synchronized back to main memory (by performing store and write).
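The lock/unlock rules above are exactly what Java's `synchronized` keyword implements: entering the block performs the lock (discarding stale working-memory copies, rule 7) and leaving it performs the unlock (flushing writes back to main memory, rule 9). A minimal sketch, with illustrative class and field names:

```java
// Counter whose updates are made atomic and visible by lock/unlock pairing.
public class SynchronizedCounter {
    private int count = 0;
    private final Object lock = new Object();

    public void increment() {
        synchronized (lock) {   // lock: re-read count from main memory (rule 7)
            count++;
        }                       // unlock: write count back before releasing (rule 9)
    }

    public int get() {
        synchronized (lock) { return count; }
    }

    public static void main(String[] args) throws InterruptedException {
        SynchronizedCounter c = new SynchronizedCounter();
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) c.increment();
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        System.out.println(c.get());   // always 40000, never a lost update
    }
}
```

If `increment()` dropped the `synchronized` block, two threads could read the same stale value and one update would be lost, which is the safety risk discussed in the next section.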

Five, Advantages and Risks of Concurrency

5.1 Risks of concurrency
  • Safety: when multiple threads share data, the results produced may not match what is expected
  • Liveness: a liveness problem occurs when an operation cannot make progress, e.g. deadlock or starvation
  • Performance: too many threads make the CPU switch contexts frequently, increasing scheduling time; synchronization mechanisms add overhead; and threads consume extra memory
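The classic liveness hazard is two threads locking the same pair of resources in opposite orders. A common remedy, sketched below with illustrative names (the `id`-ordering rule is an assumption of this sketch, not something from the article), is to always acquire locks in one global order so no cycle of waiting can form:

```java
// Deadlock avoidance by lock ordering: always lock the lower id first.
public class OrderedLocking {
    static class Account {
        final int id;
        long balance;
        Account(int id, long balance) { this.id = id; this.balance = balance; }
    }

    static void transfer(Account from, Account to, long amount) {
        Account first  = from.id < to.id ? from : to;   // global lock order by id
        Account second = from.id < to.id ? to : from;
        synchronized (first) {
            synchronized (second) {
                from.balance -= amount;
                to.balance += amount;
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Account a = new Account(1, 1000), b = new Account(2, 1000);
        // Two threads transfer in opposite directions; without the ordering
        // above, locking (a then b) vs (b then a) could deadlock.
        Thread t1 = new Thread(() -> { for (int i = 0; i < 1000; i++) transfer(a, b, 1); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 1000; i++) transfer(b, a, 1); });
        t1.start(); t2.start();
        t1.join();  t2.join();
        // Money is conserved and the program terminates: no deadlock, no lost update.
        System.out.println(a.balance + b.balance);   // 2000
    }
}
```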
5.2 Advantages of concurrency
  • Speed: multiple requests can be processed at the same time for faster response; a complex operation can be split into parts that run simultaneously
  • Design: programming can be easier, and in some cases there are more design choices
  • Resource use: the CPU can do other work while waiting for I/O

Origin blog.csdn.net/weixin_42295814/article/details/103765953