[Concurrent Programming] One article to take you through an in-depth understanding of the Java Memory Model (interview essential)

Concurrent programming is required knowledge for any senior engineer; if you don't understand it, a 25K salary is basically your ceiling. But concurrent programming is complex material, so how do you learn it systematically? This series will explain concurrent programming systematically, including but not limited to:

Thread communication mechanisms, an in-depth look at the JMM memory model, the principles behind synchronized, the principles behind volatile, DCL, a detailed look at AQS, CAS, reentrant locks, read-write lock principles, the concurrency utility classes, an in-depth understanding of ThreadLocal, Fork/Join, a detailed look at the atomic classes, the Java concurrent collections (ConcurrentHashMap, ConcurrentLinkedQueue, ConcurrentSkipListMap, etc.), an exploration of blocking queues, and an in-depth look at thread pool principles and design ideas. This article focuses on thoroughly understanding the Java memory model.

Zero, full mind map

The main line follows the red arrows in the mind map above; use it to get a sense of what this article covers overall. The Java memory model lays the groundwork for everything that follows.

One, leading into the Java memory model (not explained in depth here)

Two, what does the Java memory model apply to?

Shared variables (instance fields, static fields, and array elements) are affected. Local variables, method parameters, and the like are not shared between threads, so they have no memory visibility problems and are not affected by the memory model.
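A minimal sketch of the difference (the class and field names are made up for illustration):

```java
public class SharedVsLocal {
    // Shared: stored in main memory, visible to all threads, governed by the JMM
    static int sharedCounter = 0;

    public static void main(String[] args) {
        Runnable task = () -> {
            // Local: lives on each thread's own stack, never shared between threads,
            // so it has no visibility problem and is outside the memory model's concern
            int local = 0;
            for (int i = 0; i < 1_000; i++) {
                local++;          // always safe
                sharedCounter++;  // visibility and atomicity problems without synchronization
            }
        };
        new Thread(task).start();
        new Thread(task).start();
    }
}
```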

Three, an abstract diagram of the Java memory model

The Java memory model, abbreviated JMM (Java Memory Model), is an abstract specification defined by the Java Virtual Machine. Its purpose is to mask the differences in memory access across different hardware and operating systems, so that Java programs achieve consistent memory access behavior on every platform.

3.1 Main Memory (Main Memory)

Main memory can be loosely understood as the computer's RAM, though they are not exactly the same. Main memory is shared by all threads; for a shared variable (such as a static variable, or an instance stored on the heap), main memory holds its "master copy."

3.2 Working memory (Working Memory)

Working memory can be loosely understood as the CPU cache, though they are not exactly the same. Each thread has its own working memory; for a shared variable, working memory holds its "copy." Why have the concept of working memory at all? Because operating directly on main memory is too slow.

Through a series of memory read and write operations (the JVM memory model defines eight memory access operations in total, covered in detail later), thread A reads the static variable s = 0 from main memory into its working memory, then synchronizes the updated result s = 3 back to main memory. From the perspective of a single thread, this process has no problems at all.
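A minimal sketch of that single-threaded flow (the class name and structure are mine, just to make the steps concrete):

```java
public class WorkingMemoryDemo {
    // Shared variable: its "master copy" lives in main memory
    static int s = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(() -> {
            // Conceptually: read s from main memory into this thread's working memory,
            // update the copy there, then flush the result back to main memory.
            s = 3;
        });
        a.start();
        a.join();
        System.out.println(s);  // prints 3 -- no problem from a single thread's point of view
    }
}
```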

Four, instruction reordering

Before digging into the concept of reordering, let's switch scenes for a moment: step out of the Java memory model and down to the dimension of CPU hardware.

4.1 Basic concepts:

To improve performance when executing a program, compilers and processors often reorder instructions (put simply: the execution order we wrote in source code should be A → B → C, but because modern CPUs are multi-core and want to increase parallelism and improve performance, the actual order may become B → A → C or some other sequence).

Of course, the CPU cannot reorder arbitrarily; reordering must satisfy the following two conditions (rules to follow), illustrated in the sketch after this list:

  1. The result of running the program in a single-threaded environment must not change;
  2. Reordering is not allowed between operations that have a data dependency.
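A minimal sketch of those two rules (the field names are made up for illustration): the first pair of writes has no data dependency and may legally be reordered, while the second pair must keep its order.

```java
public class ReorderingDemo {
    static int a, b, x, y;

    static void noDependency() {
        a = 1;   // (A)
        b = 2;   // (B) no data dependency on (A): the compiler/CPU may execute (B) first
    }

    static void withDependency() {
        x = 1;       // (A)
        y = x + 1;   // (B) reads the x written by (A): reordering here is forbidden
    }
}
```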

4.2 Reordering falls into three categories:

  1. Compiler optimization reordering. The compiler may rearrange the execution order of statements, provided the semantics of the single-threaded program do not change.
  2. Instruction-level parallelism (ILP) reordering. Modern processors use instruction-level parallelism techniques to overlap the execution of multiple instructions. If there is no data dependency, the processor may change the execution order of the machine instructions corresponding to the statements.
  3. Memory system reordering. Because processors use caches and read/write buffers, load and store operations may appear to execute out of order.

From Java source code to the instruction sequence that is finally executed, a program passes through these three kinds of reordering in turn.

So what rules must reordering follow?

Five, as-if-serial

5.1 The meaning of as-if-serial semantics:

No matter how instructions are reordered, the result of a (single-threaded) program must not change. Compilers, the runtime, and processors must all comply with as-if-serial semantics. In effect, this sets a rule for the CPU: you may not reorder freely; only reorderings that satisfy the as-if-serial precondition are allowed.

5.2

as-if-serial semantics protect single-threaded programs. Compilers, runtimes, and processors that comply with as-if-serial semantics together create an illusion for programmers writing single-threaded code: the program executes in program order. as-if-serial semantics mean programmers do not need to worry about reordering interfering with a single-threaded program, nor about memory visibility problems there.
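A classic way to picture as-if-serial (this tiny example is mine, not taken from the original article):

```java
public class AsIfSerialDemo {
    public static void main(String[] args) {
        int a = 1;       // (A)
        int b = 2;       // (B) no dependency on (A): may be reordered before it
        int c = a * b;   // (C) depends on (A) and (B): must come last

        // Whether the actual order is A->B->C or B->A->C, the single-threaded
        // result is identical -- exactly what as-if-serial guarantees.
        System.out.println(c);  // always prints 2
    }
}
```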

Note: as-if-serial only makes guarantees in a single-threaded environment; it is of no help in a multi-threaded one. So with multiple threads, how do we do concurrent programming?

Six, problems caused by multi-threading and their solutions

The reorderings described above can cause memory visibility problems in multi-threaded programs. So how does the JMM solve this? First a sketch of the problem, then the two measures the JMM takes:
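A sketch of the kind of problem meant here (the class and field names are mine, and the exact behavior depends on the JIT compiler and hardware): without any synchronization, the reader thread may never observe ready becoming true, or may observe it before it observes data = 42.

```java
public class VisibilityProblem {
    static int data = 0;
    static boolean ready = false;   // note: NOT volatile, no synchronization

    public static void main(String[] args) {
        new Thread(() -> {
            data = 42;      // (1)
            ready = true;   // (2) may be reordered before (1), or sit in the writer's
                            //     working memory without being flushed to main memory
        }).start();

        new Thread(() -> {
            while (!ready) { /* may spin forever */ }
            System.out.println(data);  // may print 0 instead of 42
        }).start();
    }
}
```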

  • For compiler reordering, the JMM's compiler reordering rules prohibit specific kinds of compiler reordering (not all compiler reordering is prohibited).
  • For processor reordering, the JMM's processor reordering rules require the Java compiler, when generating the instruction sequence, to insert specific kinds of memory barrier instructions, which disable specific kinds of processor reordering (again, not all processor reordering is disabled by memory barrier instructions).

The JMM is a language-level memory model. By prohibiting specific kinds of compiler reordering and processor reordering, it guarantees programmers consistent memory visibility across different compilers and different processor platforms.

Seven, what is a memory barrier?

7.1 A memory barrier (Memory Barrier) is a CPU instruction.

Also called a memory fence or fence instruction, it is a barrier instruction that forces the CPU or compiler to impose an ordering constraint on the memory operations issued before and after the barrier.

7.2 A practical application scenario:

volatile is implemented on top of memory barriers. **If you compare the assembly code generated with and without the volatile keyword, you will find that adding volatile produces one extra lock-prefixed instruction.** That instruction acts as a memory barrier. Concretely:

  • When writing a volatile variable, the JMM immediately flushes the value of the shared variable in that thread's working memory to main memory.
  • When reading a volatile variable, the JMM invalidates that thread's working-memory copy, so the shared variable is read directly from main memory.

This guarantees that if one thread updates a volatile shared variable, other threads can see the update immediately; this is what we call visibility between threads.
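Revisiting the visibility sketch from section six with volatile added (again, the class and field names are mine, not from the original article):

```java
public class VisibilityFixed {
    static int data = 0;
    static volatile boolean ready = false;  // volatile write flushes working memory to main memory;
                                            // volatile read invalidates the local copy

    public static void main(String[] args) {
        new Thread(() -> {
            data = 42;
            ready = true;   // volatile write: immediately visible to other threads
        }).start();

        new Thread(() -> {
            while (!ready) { /* busy-wait; the volatile read eventually sees true */ }
            System.out.println(data);  // guaranteed to print 42
        }).start();
    }
}
```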

(Note: volatile will be explained in a dedicated chapter later; it only makes a brief appearance here.)

Eight, the happens-before principle

Starting from JDK 5, Java uses the new JSR-133 memory model, which uses the concept of happens-before to describe memory visibility between operations. In other words, in the JMM, if the result of one operation needs to be visible to another operation, a happens-before relationship must exist between those two operations. The happens-before principle is a very important principle in the JMM: it is the main basis for judging whether a data race exists and whether code is thread-safe, and it guarantees visibility in a multi-threaded environment. The two operations may be in the same thread or in two different threads.

Some of the happens-before rules are excerpted below:

  1. Program order rule: each operation in a thread happens-before every subsequent operation in that thread.
  2. Monitor lock rule: unlocking a lock happens-before every subsequent locking of that same lock.
  3. volatile rule: a write to a volatile field happens-before every subsequent read of that volatile field, in any thread.
  4. Transitivity rule: if A happens-before B, and B happens-before C, then A happens-before C.

Note: A happens-before B does not mean that operation A must finish executing before operation B starts! It only requires that the result of A be visible to B, and that A be ordered before B.
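A small sketch of how these rules combine (the class and field names here are my own, for illustration): the program order rule orders the write to x before the volatile write, the volatile rule orders the volatile write before the volatile read, and transitivity then guarantees the reader sees x = 1.

```java
public class HappensBeforeDemo {
    static int x = 0;
    static volatile boolean flag = false;

    static void writer() {
        x = 1;          // (1)
        flag = true;    // (2) program order rule: (1) happens-before (2)
    }

    static void reader() {
        if (flag) {                 // (3) volatile rule: (2) happens-before (3)
            System.out.println(x);  // (4) transitivity: (1) happens-before (4), so x == 1 here
        }
    }
}
```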

After listing so many rules, let's look at the relationship between happens-before and the JMM.

Nine, summary of as-if-serial and happens-before

  • as-if-serial semantics guarantee that the result of a single-threaded program is not changed.
  • happens-before relationships guarantee that the result of a correctly synchronized multi-threaded program is not changed.
  • In fact, both exist to raise the degree of parallelism as much as possible without changing the program's execution result.

Ten, after going on for so long, how should all of this fit together? In conclusion:

  • Multi-core CPUs and the like reorder operations to optimize performance, but this can cause visibility problems. To solve those problems, the JMM has to define rules rather than allow arbitrary reordering.
  • as-if-serial only guarantees that reordering cannot change results in a single-threaded environment; so what about multiple threads?
  • Hence the happens-before principle, which is part of the JMM (JSR-133 memory model) specification.
  • A memory barrier is a CPU instruction.
  • So: happens-before is the ultimate goal the JMM sets, and memory barriers are the concrete means of realizing happens-before.

END


Origin juejin.im/post/5d036627e51d457761476133