Java multi-threading: atomicity, visibility, ordering

The computer memory model:

  When a program runs, its instructions are executed by the CPU while its data lives in the computer's physical memory, so the CPU inevitably has to talk to memory whenever it executes an instruction that touches data. At first the two coexisted peacefully, but as CPU technology advanced, CPU execution speed grew much faster than memory technology did. The gap between how fast the CPU executes and how fast it can read and write memory kept widening, so the CPU spent more and more time waiting on every memory operation. To keep memory from becoming the bottleneck of the whole machine, a good solution was found: insert a cache between the CPU and main memory, that is, keep a copy of the data close to the CPU. A cache is characterized by being fast, small, and expensive.

  With a cache in place, program execution changes: the data a computation needs is first copied from main memory into the CPU's cache, the CPU then reads and writes directly against the cache while it computes, and when the operation finishes, the cached data is flushed back to main memory.

  As CPU power kept improving, a single cache layer gradually could not keep up, so it grew into a multi-level cache. Ordered by data-access speed and how tightly each level is bound to the CPU, the cache is divided into a level-one cache (L1), a level-two cache (L2), and, on some high-end CPUs, a level-three cache (L3); the data stored at each level is part of the data stored at the level below it. The manufacturing difficulty and cost of the three levels decrease from L1 to L3, so their capacities increase in the same direction. With a multi-level cache, program execution becomes: when the CPU reads a piece of data, it looks in the L1 cache first; if the data is not found, it looks in the L2 cache, and if it is still not found, in the L3 cache or main memory.

  A single-core CPU contains a single set of L1, L2, and L3 caches; if the CPU contains multiple cores, i.e. a multi-core CPU, each core has its own L1 (and often L2) cache, while the L2 (or L3) cache is shared among the cores. As computers' capabilities continued to improve, they began to support multi-threading. Let us analyze the influence of a single thread, multiple threads on a single-core CPU, and multiple threads on a multi-core CPU:

  Single thread: the core's cache is accessed by only one thread. The cache is exclusive to that thread, and no access conflicts or related problems occur.

  Single-core CPU, multiple threads: multiple threads of a process access shared data, and the CPU loads a block of memory into its cache. When different threads access the same physical address, they map to the same cache location, so the cache remains valid even when a thread switch occurs. And because only one thread can be executing at any moment, no conflicting cache accesses occur.

  Multi-core CPU, multiple threads: each core has at least one L1 cache of its own. When multiple threads of a process access a shared piece of memory and those threads execute on different cores, each core keeps a copy of the shared data in its own cache. Since the cores run in parallel, multiple threads may write to their own caches at the same time, and the data in the respective caches may then differ.
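The stale-cache scenario above is easy to reproduce with a shared stop flag. The sketch below (class and method names are illustrative, not from the original) marks the flag `volatile`: if it were a plain `boolean`, the worker's core could keep serving a stale cached value and the loop might never observe the update on a multi-core machine.

```java
public class StopFlagDemo {
    // volatile: every read of the flag sees the most recent write by any thread
    private static volatile boolean running = true;

    // Returns true if the worker thread observed the flag change and exited.
    static boolean demo() throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {        // without volatile, this read could be served
                Thread.onSpinWait(); // forever from the core's own cache
            }
        });
        worker.start();
        Thread.sleep(50);            // let the worker spin for a moment
        running = false;             // volatile write: propagated to main memory
        worker.join(2000);           // returns long before the timeout
        return !worker.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("worker stopped cleanly: " + demo());
    }
}
```

Removing `volatile` from `running` makes the program's termination depend on whether the JIT and hardware happen to re-read the flag, which is exactly the cache-inconsistency problem described above.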

Processor optimization and instruction reordering

  Besides the cache-coherence problem introduced by adding caches between the CPU and memory, there is another hardware issue: to make the fullest possible use of its internal execution units, the processor may execute the input code out of order. This is processor optimization. And it is not only processors that do out-of-order optimization of code: many programming-language compilers perform similar optimizations, for example the Java virtual machine's just-in-time compiler (JIT) also reorders instructions. One can imagine that if processor and compiler instruction reordering were left completely unconstrained, it could lead to all kinds of problems.
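A classic illustration of why unconstrained reordering is dangerous is the writer/reader pattern from the JMM literature, sketched here (the class is illustrative). If the two writes in `writer()` were reordered, a concurrent `reader()` could see `ready == true` while `data` is still 0; declaring `ready` volatile forbids that reordering.

```java
public class ReorderingDemo {
    int data = 0;
    volatile boolean ready = false;  // the volatile write/read pair acts as a fence

    void writer() {
        data = 42;    // (1) ordinary write: must not move below (2)
        ready = true; // (2) volatile write: publishes data to other threads
    }

    int reader() {
        while (!ready) {             // volatile read: pairs with (2)
            Thread.onSpinWait();
        }
        return data;                 // guaranteed to see 42, never a stale 0
    }
}
```

Run `writer()` on one thread and `reader()` on another: the happens-before edge from the volatile write to the volatile read guarantees the ordinary write to `data` is visible too.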

Memory model

  The cache-coherence problem and the processor instruction-reordering problem both escalate as the hardware does. Is there any mechanism that can solve these problems? The most straightforward approach would be to abolish processor optimization and the CPU cache altogether and have the CPU interact with main memory directly. Although that would indeed guarantee away the concurrency problems of multi-threading, the cost is far too high. Therefore, to ensure that concurrent programs satisfy atomicity, visibility, and ordering, an important concept was introduced: the memory model.

  The atomicity, visibility, and ordering problems are abstract definitions that people arrived at, and what they abstract over are the underlying problems mentioned earlier: cache coherence and processor/compiler instruction reordering. Atomicity means an operation is not suspended in the middle by CPU scheduling and then interrupted: it either executes completely or does not execute at all. Visibility means that when multiple threads access the same variable and one thread modifies its value, the other threads can immediately see the modified value. Ordering means the program executes in the order the code was written.
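Atomicity is the easiest of the three to see broken: `i++` is really three steps (read, add one, write back). The sketch below (names are illustrative) deterministically simulates an unlucky interleaving of two threads: both read the same initial value, so one increment is lost.

```java
public class LostUpdateDemo {
    static int counter = 0;

    // Simulates the worst-case interleaving of two threads each doing counter++.
    static int simulateRace() {
        int seenByT1 = counter;  // "thread 1" reads 0
        int seenByT2 = counter;  // "thread 2" reads 0 before T1 writes back
        counter = seenByT1 + 1;  // thread 1 writes 1
        counter = seenByT2 + 1;  // thread 2 also writes 1 -- T1's update is lost
        return counter;          // 1, not the expected 2
    }

    public static void main(String[] args) {
        System.out.println("two increments, final value: " + simulateRace());
    }
}
```

With real threads the loss happens only on some interleavings, which is what makes such bugs hard to reproduce; the simulation just forces the bad schedule.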

  To guarantee correctness over shared memory (visibility, ordering, atomicity), the memory model defines a specification for the read and write behavior of multi-threaded programs against the memory system. These rules regulate reads and writes of memory so as to ensure the correctness of execution. The model is tied to the processor and the cache, to concurrency, and also to the compiler. It solves the memory-access problems caused by multi-level CPU caches, processor optimization, and instruction reordering, guaranteeing atomicity, visibility, and ordering in concurrent scenarios. Memory models solve the concurrency problem mainly in two ways: limiting processor optimization, and using memory barriers.

Java Memory Model

  The memory model is an important specification for solving concurrency problems in multi-threaded scenarios, but how it is realized concretely may differ between programming languages. As we know, Java programs run on top of a Java virtual machine, and the Java Memory Model (JMM) is a memory-model specification of this kind: it shields the differences in memory access across various hardware and operating systems, ensuring that Java programs get consistent memory-access behavior on every platform. When people mention the Java memory model, they generally mean the new memory model introduced in JDK 5.

  The Java memory model specifies that all variables are stored in main memory, and that each thread also has its own working memory, which holds copies of the main-memory variables that the thread uses. All of a thread's operations on variables must be carried out in its working memory; it cannot read or write main memory directly. Different threads cannot access each other's working memory either, so passing a variable's value between threads requires synchronizing the data between their working memories via main memory. The JMM governs this data-synchronization process between working memory and main memory: it specifies how and when the synchronization is done. In summary, the JMM is a specification whose purpose is to solve the problems that arise when multiple threads communicate through shared memory: inconsistent local-memory copies, compilers reordering code instructions, processors executing code out of order, and so on.

Implementation of the Java memory model

  Java provides a series of concurrency-related keywords and libraries, such as volatile, synchronized, final, and the java.util.concurrent package. These are in fact the conveniences the Java memory model provides to programmers after encapsulating the underlying implementation: when developing multi-threaded code, we can directly use keywords such as synchronized to control concurrency, without ever having to be concerned with underlying issues like compiler optimization and cache coherence. So the Java memory model, in addition to defining a specification, also offers a set of primitives that encapsulate the underlying implementation for developers to use directly.

Atomicity

In Java, atomicity is guaranteed by two high-level bytecode instructions, monitorenter and monitorexit. The keyword corresponding to these two bytecodes is synchronized. Thus, synchronized can be used in Java to guarantee that the operations inside a method or a code block are atomic.
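A minimal sketch of synchronized guaranteeing atomicity (the class is illustrative): two threads each perform 10,000 increments, and because the read-modify-write runs inside a synchronized block, the final count is always 20,000.

```java
public class SyncCounter {
    private int count = 0;

    public void increment() {
        synchronized (this) {        // compiles to monitorenter / monitorexit
            count++;                 // the whole read-add-write is now atomic
        }
    }

    public synchronized int get() { return count; }

    static int run() throws InterruptedException {
        SyncCounter c = new SyncCounter();
        Runnable task = () -> { for (int i = 0; i < 10_000; i++) c.increment(); };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return c.get();              // always 20000; without the lock, often less
    }
}
```

Replacing the synchronized block with a bare `count++` would reintroduce the lost-update problem, since two threads could interleave inside the read-modify-write.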

Visibility

The Java memory model achieves visibility by using main memory as the transfer medium: after a variable's value is modified, the new value is synchronized back to main memory, and before the variable is read, it is refreshed from main memory. Java's volatile keyword provides exactly this feature: a variable it modifies is synchronized to main memory immediately after being modified, and is refreshed from main memory before each use. Therefore, volatile can be used to guarantee the visibility of variables operated on by multiple threads. Besides volatile, the Java keywords synchronized and final can also achieve visibility, just with different implementations.
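The three mechanisms the paragraph names can be contrasted in one small sketch (class and field names are illustrative): volatile makes every write immediately visible, synchronized flushes on monitor exit and refreshes on monitor entry, and final fields are safely visible to any thread once the constructor completes.

```java
public class VisibilityDemo {
    volatile int vol = 0;      // each write visible to all threads immediately
    final int fin;             // visible after construction (safe publication)
    private int guarded = 0;   // visible to any thread that locks the same monitor

    VisibilityDemo() {
        fin = 42;              // final field write completes before publication
    }

    synchronized void setGuarded(int v) { guarded = v; }  // flush on monitorexit
    synchronized int getGuarded() { return guarded; }     // refresh on monitorenter
}
```

Note the different guarantees: volatile and synchronized cover ongoing mutation, while final only covers the value set during construction.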

Orderliness

In Java, synchronized and volatile can be used to guarantee ordering of operations among multiple threads, and their implementations differ: the volatile keyword forbids instruction reordering, while the synchronized keyword guarantees that only one thread operates at any given time.
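Double-checked locking is the classic case where both mechanisms are needed together, sketched below (the class is illustrative): synchronized ensures only one thread runs the initialization, while volatile stops the reference from being published before the object's fields are written.

```java
public class LazySingleton {
    // volatile forbids reordering the field writes in the constructor
    // past the write of the reference itself
    private static volatile LazySingleton instance;
    private final int value;

    private LazySingleton() { value = 42; }

    public static LazySingleton getInstance() {
        if (instance == null) {                      // first check, lock-free
            synchronized (LazySingleton.class) {     // one thread at a time
                if (instance == null) {              // second check under the lock
                    instance = new LazySingleton();  // safe: volatile write last
                }
            }
        }
        return instance;
    }

    public int value() { return value; }
}
```

Without volatile, another thread could observe a non-null `instance` whose constructor writes had not yet become visible, which is an ordering failure, not an atomicity one.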

The above introduced the keywords that can be used to solve atomicity, visibility, and ordering in Java concurrent programming. Notice that synchronized looks almost omnipotent, since it can satisfy all three properties at once; that is also why many people abuse it. But synchronized has a significant performance impact, and although compilers provide many lock-optimization techniques, excessive use is not recommended.
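When the only shared state is a counter, one lighter alternative to synchronized is the java.util.concurrent.atomic package (the class below is illustrative): AtomicInteger uses hardware compare-and-swap rather than a monitor lock, yet still gives an atomic, visible increment.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounter {
    private final AtomicInteger count = new AtomicInteger(0);

    public void increment() { count.incrementAndGet(); }  // lock-free atomic ++
    public int get() { return count.get(); }

    static int run() throws InterruptedException {
        AtomicCounter c = new AtomicCounter();
        Runnable task = () -> { for (int i = 0; i < 10_000; i++) c.increment(); };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return c.get();   // always 20000, with no monitor contention
    }
}
```

This is the kind of targeted primitive that avoids reaching for synchronized when only one of the three properties (here, atomicity of a single variable's update) is actually needed.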


Origin www.cnblogs.com/ding-dang/p/11072017.html