What the memory model in concurrent programming really is - Java

Memory model

In a computer there are large speed differences among the CPU, memory, and I/O devices. To improve overall system performance, these three speeds need to be balanced:

  • The CPU adds caches to balance the speed difference between the CPU and memory;
  • The operating system adds processes and threads to time-share the CPU, balancing the speed difference between the CPU and I/O devices;
  • The compiler optimizes the instruction execution order so that the caches can be used more effectively.

These three optimizations greatly improve hardware efficiency, but they also introduce problems of visibility, atomicity, and ordering. Cache-based interaction between the CPU and memory resolves the speed conflict between the CPU and memory well, but it also increases the complexity of the computer system and introduces a new problem: cache coherence (Cache Coherence).

Each processor has its own exclusive cache, while all processors share the same main memory. When the computation tasks of multiple processors involve the same region of main memory, their cached data can become inconsistent, and the question becomes whose data should be taken as authoritative. To solve this consistency problem, each processor must follow certain protocols when reading and writing the cache. A memory model can therefore be understood as an abstraction for solving the cache coherence problem: the process of reading and writing specific memory or caches under a given operating protocol.

Java Memory Model

JMM role

The Java Virtual Machine specification defines the Java Memory Model (Java Memory Model, JMM) to shield the differences in memory access across hardware and operating systems, so that a Java program achieves the same memory-access behavior on every platform. Java programmers can therefore ignore the memory models of different processor platforms and only need to care about the JMM.

JMM abstract structure

(Figure: schematic view of the JMM's abstract structure)

The JMM borrows ideas from processor memory models. From an abstract point of view, the JMM defines the relationship between threads and main memory; the "local memory" in this abstraction covers caches, write buffers, registers, and other hardware and compiler optimizations. The figure above is a schematic view of the JMM's abstract structure.

Inter-thread communication under the JMM

Concurrent programming has two core issues to consider: how threads communicate (visibility and ordering) and how threads synchronize (atomicity). Communication refers to the mechanism by which threads exchange information; synchronization refers to the mechanism the program uses to control the relative order of operations in different threads.

The JMM specifies that all variables in a program (instance fields, static fields, elements of arrays, and so on) are stored in main memory. Its main goal is to define the access rules for these variables, that is, the low-level details of how the virtual machine stores variables into memory and reads them back out. Each thread also has its own local memory, and inter-thread communication goes through main memory under the control of the JMM. Suppose there are two threads A and B, and thread A wants to send a "hello" message to thread B; the figure shows the communication process between the two threads:

As the figure shows, for thread A to pass a message to thread B, two steps are required (sketched in the code after this list):

  1. Thread A flushes the updated message from the copy of the shared variable in its local memory to main memory.
  2. Thread B reads the shared variable that thread A updated from main memory.
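
The following is a minimal sketch of those two steps (my own illustration; the class and field names are made up). Thread A updates a shared variable, and thread B later tries to read it; without a synchronization action such as volatile or a lock, the JMM does not guarantee that B ever sees A's write.

public class MessagePassing {
  private static String message = "";   // shared variable, lives in main memory
  private static boolean ready = false; // plain flag: no visibility guarantee!

  public static void main(String[] args) {
    Thread a = new Thread(() -> {
      message = "hello";                // step 1: update the local copy ...
      ready = true;                     // ... which must be flushed to main memory
    });
    Thread b = new Thread(() -> {
      while (!ready) { }                // may spin forever: B might never see the flush
      System.out.println(message);      // step 2: read the updated value from main memory
    });
    a.start();
    b.start();
  }
}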

Design and implementation of JMM

The protocols involved in the JMM are fairly complex and can be studied from the perspective of compiler or JVM engineers as well as Java application engineers. This article only discusses, from the Java engineer's perspective, which rules the JMM uses to guarantee the consistency of data in memory.

The JMM implementation can be divided into two parts: the happens-before rules and a set of keywords. Its core goal is to ensure that compilers and processors on different platforms exhibit consistent memory behavior and produce consistent results. Concretely, the happens-before rules and the volatile, synchronized, and final keywords solve the visibility, ordering, and atomicity problems, thereby guaranteeing the consistency of data in memory.

Happens-Before rules

happens-before is the core concept of the JMM. It specifies the execution order between two operations, which may be in the same thread or in different threads; through happens-before relationships the JMM provides cross-thread memory visibility guarantees to the programmer. The JMM defines it as follows:

  1. If one operation happens-before another, the result of the first operation is visible to the second, and the first is ordered before the second.
  2. The existence of a happens-before relationship between two operations does not mean that a concrete Java platform implementation must execute them in exactly that order. If the result of a reordering is consistent with the result of executing in happens-before order, the reordering is not illegal (that is, the JMM allows such reordering).

In the sample code below, suppose thread A executes the writer() method and thread B executes the reader() method. If thread B sees "v == true", what value of x will thread B see?

class VolatileExample {
  int x = 0;
  volatile boolean v = false;
  public void writer() {
    x = 42;                 // 1
    v = true;               // 2
  }
  public void reader() {
    if (v == true) {        // 3
      // what will x be here?  // 4
    }
  }
}
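
For completeness, here is a hypothetical driver (my own addition, not part of the original example) that runs writer() on thread A and has thread B wait for the volatile flag before reading x.

public class VolatileExampleDemo {
  public static void main(String[] args) throws InterruptedException {
    VolatileExample example = new VolatileExample();

    Thread a = new Thread(example::writer, "A");
    Thread b = new Thread(() -> {
      while (!example.v) { }            // spin until the volatile write becomes visible
      // the happens-before chain 1 -> 2 -> 3 guarantees the value below is 42
      System.out.println("x = " + example.x);
    }, "B");

    b.start();
    a.start();
    a.join();
    b.join();
  }
}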

1. The program order rule

Program order rule (Program Order Rule): within a single thread, according to the order of the program code, an operation written earlier happens-before an operation written later.

2. The volatile variable rule

Volatile variable rule (Volatile Variable Rule): a write to a volatile variable happens-before any subsequent read of that variable, where "subsequent" refers to order in time.

3. The transitivity rule

Transitivity rule (Transitivity): if operation A happens-before operation B, and operation B happens-before operation C, then operation A happens-before operation C.

Putting rules 1, 2, and 3 together for the VolatileExample above: operation 1 happens-before operation 2 (program order), operation 2 happens-before operation 3 (volatile rule), so by transitivity operation 1 happens-before operation 3. If thread B sees v == true, it is therefore guaranteed to see x == 42. The diagram below shows the happens-before relationships built on the volatile write and read.

4. The monitor lock rule

Monitor lock rule (Monitor Lock Rule): an unlock of a lock happens-before every subsequent lock of that same lock, where "subsequent" refers to order in time.

In the earlier article on the sources of concurrency problems, we mentioned the count++ counting problem caused by thread switching (an atomicity problem). Here we can try to use the happens-before rules to solve it.

public class SafeCounter {
  private long count = 0L;
  public long get() {
    return count;
  }
  public synchronized void addOne() {
    count++;
  }
}

Can this code really solve the problem?
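
One commonly given answer, sketched below under my own reading of the question: addOne() is synchronized, but get() is not, so a reading thread has no happens-before edge with the writer and may see a stale count. A variant that also synchronizes the read:

public class SafeCounterFixed {
  private long count = 0L;
  public synchronized long get() {      // read under the same intrinsic lock
    return count;
  }
  public synchronized void addOne() {   // read-modify-write under the lock
    count++;
  }
}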

5. The thread start rule

Thread start rule (Thread Start Rule): a call to the start() method of a Thread object happens-before every action in the started thread.

6. The thread termination rule

Thread termination rule (Thread Termination Rule): all operations in a thread happen-before the detection that the thread has terminated; we can detect that a thread has finished executing by means such as Thread.join() returning or Thread.isAlive() returning false.
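
A small sketch (my own, with made-up names) of the start and termination rules together: a write made before t.start() is visible inside t, and all of t's writes are visible after t.join() returns.

public class StartJoinRules {
  private static int beforeStart = 0;   // written before t.start()
  private static int beforeExit = 0;    // written before t terminates

  public static void main(String[] args) throws InterruptedException {
    Thread t = new Thread(() -> {
      // thread start rule: the write of 1 happens-before anything this thread does
      System.out.println("beforeStart = " + beforeStart); // guaranteed to print 1
      beforeExit = 2;                    // written before the thread terminates
    });

    beforeStart = 1;                     // happens-before t.start()
    t.start();
    t.join();                            // termination rule: all of t's writes are
                                         // visible once join() returns
    System.out.println("beforeExit = " + beforeExit);     // guaranteed to print 2
  }
}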

7. The thread interruption rule

Thread interruption rule (Thread Interruption Rule): a call to a thread's interrupt() method happens-before the interrupted thread's code detecting that the interrupt occurred; whether an interrupt has occurred can be detected with methods such as Thread.interrupted().
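
A sketch of the interruption rule (my own, with made-up names): a write made before calling interrupt() is visible to the interrupted thread once it detects the interrupt.

public class InterruptRule {
  private static String reason = "";     // plain field written before interrupt()

  public static void main(String[] args) {
    Thread worker = new Thread(() -> {
      while (!Thread.currentThread().isInterrupted()) {
        // busy-wait until interrupted
      }
      // interrupt() happens-before the detection above, so the write to
      // 'reason' made before interrupt() is guaranteed to be visible here
      System.out.println("interrupted, reason = " + reason);
    });

    worker.start();
    reason = "shutting down";            // written before the interrupt
    worker.interrupt();                  // happens-before the detection in worker
  }
}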

8. The object finalization rule

Object finalization rule (Finalizer Rule): the completion of an object's initialization (the end of its constructor) happens-before the start of its finalize() method.

There are eight happens-before rules in total; I have described in detail only the ones commonly encountered in concurrent programming. For the full details, see http://gee.cs.oswego.edu/dl/jmm/cookbook.html. Of the concepts in the JMM, I find these rules the hardest to understand. To summarize, the happens-before rules express a visibility relationship: if event A happens-before event B, then event A is visible to event B, regardless of whether A and B occur in the same thread.

volatile keyword

Characteristics of volatile

  1. Visibility: a read of a volatile variable always sees the last write (by any thread) to that volatile variable.
  2. Atomicity: a single read or write of a volatile variable is atomic; note, however, that a compound operation such as volatile++ is not atomic, because it is really several operations (read, modify, write) in sequence (see the sketch after this list).
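
A sketch (my own) showing why volatile++ is not atomic: each increment is really read, add one, write back, so concurrent threads can interleave between the steps and lose updates.

import java.util.ArrayList;
import java.util.List;

public class VolatileNotAtomic {
  private static volatile int counter = 0;

  public static void main(String[] args) throws InterruptedException {
    List<Thread> threads = new ArrayList<>();
    for (int i = 0; i < 4; i++) {
      Thread t = new Thread(() -> {
        for (int j = 0; j < 10_000; j++) {
          counter++;                     // NOT atomic, despite volatile
        }
      });
      threads.add(t);
      t.start();
    }
    for (Thread t : threads) {
      t.join();
    }
    // usually prints less than 40000 because some increments are lost
    System.out.println("counter = " + counter);
  }
}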

The memory semantics of volatile in the JMM

  1. When a volatile variable is written, the JMM flushes the shared variables in the thread's corresponding local memory to main memory.
  2. When a volatile variable is read, the JMM invalidates the thread's corresponding local memory, and the shared variable is then read from main memory.

volatile is the keyword Java provides to solve the visibility problem. It can be understood like this: when the JVM sees a variable modified by the volatile keyword, it "disables caching" of that variable in thread-local memory. Every read of such a variable re-reads it from main memory into local memory, and every write is immediately synchronized back to main memory. This further explains the volatile variable rule described above: a write to a volatile variable happens-before any subsequent read of that variable. In addition, for shared variables modified by volatile, certain kinds of instruction reordering are disabled, which guarantees ordering.

synchronized - the universal lock

According to the monitor lock rule, an unlock of a lock happens-before every subsequent lock of that same lock. Java solves the atomicity problem through monitors (Monitor), concretely expressed by the synchronized keyword. A code block modified by synchronized has monitorenter and monitorexit instructions inserted at its start and end positions at compile time; the JVM guarantees that monitorenter and monitorexit come in pairs, and the code between them gains atomicity. With synchronized, the lock and unlock operations are performed implicitly. In Java we are not limited to the synchronized keyword: we can also use the various lock implementations of the Lock interface, as shown below.
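
A sketch (my own) of the two forms mentioned above: an intrinsic lock via a synchronized block, where monitorenter/monitorexit are emitted by the compiler, and an explicit java.util.concurrent.locks.Lock, where locking and unlocking are written out by hand.

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class Counters {
  private final Object monitor = new Object();
  private final Lock lock = new ReentrantLock();
  private long intrinsicCount = 0L;
  private long explicitCount = 0L;

  // lock/unlock are implicit: monitorenter at the start of the block,
  // monitorexit on every exit path
  public void incrementWithSynchronized() {
    synchronized (monitor) {
      intrinsicCount++;
    }
  }

  // with the Lock interface, locking and unlocking are explicit; the
  // finally block plays the role of the guaranteed monitorexit
  public void incrementWithLock() {
    lock.lock();
    try {
      explicitCount++;
    } finally {
      lock.unlock();
    }
  }
}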

The memory semantics of synchronized

  1. When a thread acquires the lock, the shared variables in its local memory are invalidated.
  2. When a thread releases the lock, the shared variables in its local memory are flushed to main memory.

final - a little-known optimization

The atomicity, visibility, and ordering problems in concurrent programming are, simply put, caused by mutating shared variables. The final keyword solves the concurrency problem at its source by making the variable immutable: a variable modified by final does not change, so the compiler can safely optimize it.
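
A sketch (my own) of solving the problem at the source with final: if the shared state never changes after construction, there is nothing for threads to race on, and the compiler and runtime can freely cache reads of it.

public final class ImmutablePoint {
  private final int x;                  // never changes after the constructor finishes
  private final int y;

  public ImmutablePoint(int x, int y) {
    this.x = x;
    this.y = y;
  }

  public int getX() { return x; }
  public int getY() { return y; }
}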

Summary

  1. The JMM shields the differences in memory access across hardware and operating systems, so that a Java program achieves the same memory-access behavior on every platform.
  2. From the programmer's point of view, the JMM is a set of rules (the happens-before rules) plus a few keywords: synchronized, volatile, and final.
  3. volatile guarantees visibility and ordering by disabling caching and certain compiler optimizations.
  4. synchronized guarantees the atomicity of program execution, as well as visibility and ordering; it can be regarded as a universal tool.
  5. A variable modified by the final keyword is immutable.

Q&A

Above we tried to solve the count++ problem with synchronized; to make it easier to read, the code is copied here. Is there anything wrong with this code? Share your thoughts in the comments section, and let's learn together!

public class SafeCounter {
  private long count = 0L;
  public long get() {
    return count;
  }
  public synchronized void addOne() {
    count++;
  }
}


Origin www.cnblogs.com/liqiangchn/p/11735930.html