JVM virtual machine notes (7): the memory model and threads

Java Memory Model

Main memory and working memory
The main goal of the Java Memory Model is to define the access rules for the variables in a program, that is, the low-level details of how the virtual machine stores variables into memory and reads them back out of memory. "Variable" (Variable) here means something slightly different from a variable in Java programming: it covers instance fields, static fields and the elements that make up array objects, but not local variables and method parameters, which are private to the thread. To obtain better execution performance, the Java Memory Model does not restrict the execution engine to using specific processor registers or caches to exchange data with main memory, nor does it restrict the just-in-time compiler from adjusting the order of code execution.

The Java Memory Model specifies that all variables are stored in main memory (Main Memory, here meaning part of the virtual machine's memory). Each thread also has its own working memory (Working Memory), which holds the thread's copy of the main-memory copy of the variables it uses. All operations a thread performs on variables (reads, assignments and so on) must be carried out in working memory; a thread must not read or write variables in main memory directly. Different threads cannot directly access the variables in each other's working memory; passing variable values between threads must go through main memory.

The main memory / working memory division here is not at the same level as the division of Java memory areas into heap, stack and method area. If the two must be forcibly associated, then, viewed from the definition of variables and of main and working memory: main memory mainly corresponds to the object instance data portion of the Java heap, and working memory corresponds to part of the virtual machine stack. At a lower level, main memory corresponds to the hardware's physical memory, while, to obtain better speed, the virtual machine and the hardware may prefer to keep working memory in registers and caches.

Interaction between main memory and working memory
The Java Memory Model defines the following eight operations to implement the specific interaction protocol between main memory and working memory, that is, the details of how a variable is copied from main memory into working memory and how it is synchronized from working memory back into main memory:
lock: acts on a variable in main memory; it marks the variable as exclusively owned by the current thread, so that other threads cannot access it.
unlock: acts on a variable in main memory; it releases a variable that is in the locked state so that it can be locked by other threads.
read: acts on a variable in main memory; it transfers the value of the variable from main memory into the thread's working memory, for use by the subsequent load operation.
load: acts on a variable in working memory; it puts the value obtained by the read operation from main memory into the variable copy in working memory.
use: acts on a variable in working memory; it passes the value of the variable in working memory to the execution engine. This operation is performed whenever the virtual machine encounters a bytecode instruction that needs to use the value of the variable.
assign: acts on a variable in working memory; it assigns a value received from the execution engine to the variable in working memory. This operation is performed whenever the virtual machine encounters a bytecode instruction that assigns a value to the variable.
store: acts on a variable in working memory; it transfers the value of the variable in working memory to main memory, for use by the subsequent write operation.
write: acts on a variable in main memory; it puts the value obtained by the store operation from working memory into the variable in main memory.
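As a rough illustration, the comments below sketch how an ordinary read and write of a shared field conceptually map onto these abstract operations (the class and field names are only examples, not taken from the original notes):

public class SharedCounter {
    private static int value = 0;          // the master copy lives in main memory

    static void readAndIncrement() {
        // Reading "value" conceptually performs: read (main memory) -> load (working memory) -> use (execution engine)
        int local = value;

        // Writing "value" conceptually performs: assign (execution engine -> working memory) -> store -> write (main memory)
        value = local + 1;
    }
}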

If a variable is to be copied from main memory into working memory, the read and load operations must be executed in sequence; if a variable is to be synchronized from working memory back into main memory, the store and write operations must be executed in sequence. The Java Memory Model also specifies the following rules that must be satisfied when performing the eight basic operations above:
One of read and load, or of store and write, is not allowed to appear alone; that is, a variable may not be read from main memory and then not accepted by working memory, and working memory may not initiate a write-back that main memory does not accept.
A thread is not allowed to discard its most recent assign operation; that is, after a variable has been changed in working memory, the change must be synchronized back to main memory.
A thread is not allowed to synchronize data from its working memory back to main memory without a reason (without any assign operation having occurred).
A new variable can only be "born" in main memory; working memory is not allowed to use a variable that has not been initialized (by load or assign) directly. In other words, assign or load must have been executed on a variable before use or store may be performed on it.
A variable may be locked by only one thread at a time, but the same thread may perform the lock operation on it repeatedly; after locking multiple times, the variable is unlocked only after the same number of unlock operations have been performed.
Performing the lock operation on a variable clears the value of this variable in working memory; before the execution engine uses this variable, a load or assign operation must be executed again to initialize its value.
If a variable has not previously been locked by a lock operation, it is not allowed to perform the unlock operation on it; nor is it allowed to unlock a variable that is locked by another thread.
Before performing the unlock operation on a variable, the variable must first be synchronized back to main memory (by executing the store and write operations).

Special rules for variables of the volatile type

The volatile keyword can be said to be the most lightweight synchronization mechanism provided by the Java virtual machine.
When a variable is defined as volatile, it has two characteristics:
The first is that it guarantees the visibility of the variable to all threads; "visibility" means that when one thread modifies the value of this variable, the new value is immediately known to the other threads.

A common misconception about the visibility of volatile variables: "A volatile variable is immediately visible to all threads; all writes to a volatile variable are immediately reflected in other threads. In other words, the value of a volatile variable is consistent in every thread, so computations based on volatile variables are safe under concurrency." The premise of this statement is not wrong, but it does not support the conclusion that "operations based on volatile variables are safe under concurrency".
Volatile variables do not in fact have a consistency problem in each thread's working memory (a volatile variable can also be inconsistent across working memories, but because it must be refreshed before each use, the execution engine never sees the inconsistent value, so it can be treated as if no inconsistency existed). However, compound operations on it in Java are not atomic, which is what makes operations on volatile variables unsafe under concurrency.
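The listing that the next paragraph analyzes is missing here; the following is a minimal reconstruction of that kind of test, assuming a volatile static field named race and an increase() method containing race++ (names taken from the paragraph below; the thread and iteration counts are illustrative):

public class VolatileTest {
    public static volatile int race = 0;

    public static void increase() {
        race++;   // not atomic: getstatic, iconst_1, iadd, putstatic
    }

    private static final int THREADS_COUNT = 20;

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[THREADS_COUNT];
        for (int i = 0; i < THREADS_COUNT; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 10000; j++) {
                    increase();
                }
            });
            threads[i].start();
        }
        // wait for all worker threads to finish
        for (Thread t : threads) {
            t.join();
        }
        // if race++ were atomic this would print 200000; in practice it is usually smaller
        System.out.println(race);
    }
}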

 

The problem arises in the increment operation race++. Decompiling the code with javap shows that the increase() method, only one line of code in the source, consists of four bytecode instructions in the Class file (the return instruction is not produced by race++, so it is not counted). From the bytecode level it is easy to see why the concurrent operation fails: when the getstatic instruction pushes the value of race onto the operand stack, the volatile keyword guarantees that the value of race is correct at that moment; but while the iconst_1 and iadd instructions are being executed, other threads may already have increased race, so the value on the operand stack becomes stale data. As a result, the putstatic instruction may synchronize a smaller value of race back into main memory after it executes.

Strictly speaking, using bytecode to analyze concurrency problems is still not rigorous, because even if a statement compiles to only one bytecode instruction, this does not mean that executing that instruction is an atomic operation. When a bytecode instruction is interpreted, the interpreter runs many lines of code to implement its semantics; when it is compiled, a bytecode instruction may be translated into several native machine-code instructions. Using the -XX:+PrintAssembly parameter to output and analyze the disassembly would be more rigorous here.

Because volatile variables only guarantee visibility, in computation scenarios that do not satisfy the following two conditions we still have to guarantee atomicity through locking (either the atomic classes in java.util.concurrent or synchronized):

1) The result of the operation does not depend on the current value of the variable, or it can be ensured that only a single thread modifies the variable's value.

2) The variable does not need to participate in invariant constraints together with other state variables.
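A typical scenario that does satisfy these conditions (an illustrative sketch, not a listing from the original notes) is a boolean status flag that is only ever written from false to true and whose new value does not depend on its old value:

public class Worker {
    // Only ever set from false to true; the write does not depend on the previous
    // value, so volatile's visibility guarantee alone is sufficient here.
    private volatile boolean shutdownRequested = false;

    public void shutdown() {
        shutdownRequested = true;
    }

    public void doWork() {
        while (!shutdownRequested) {
            // ... perform one unit of work ...
        }
    }
}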

The second semantic of volatile variables is that they prohibit instruction reordering optimization. For ordinary variables, the only guarantee is that all places in the method that depend on an assignment's result will obtain the correct result during the method's execution; there is no guarantee that the order of variable assignments matches the execution order written in the program code. Because this cannot be perceived from within a single thread during the method's execution, this is what the Java Memory Model describes as "within-thread as-if-serial semantics" (Within-Thread As-If-Serial Semantics).
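A sketch of the classic situation where such reordering matters (the class, field and method names here are illustrative assumptions, not from the original notes):

import java.util.HashMap;
import java.util.Map;

public class ConfigLoader {
    private Map<String, String> config;
    // Without volatile, the write to "initialized" could be reordered before the
    // write to "config", so a reader thread might see initialized == true while
    // config is still null; volatile prohibits that reordering.
    private volatile boolean initialized = false;

    // called by thread A
    public void load() {
        config = readConfigFile();   // step 1
        initialized = true;          // step 2: must not be moved ahead of step 1
    }

    // called by thread B
    public void use() {
        if (initialized) {
            doSomethingWith(config); // safe: config is fully assigned before the flag is set
        }
    }

    private Map<String, String> readConfigFile() {
        return new HashMap<>();      // placeholder for real configuration loading
    }

    private void doSomethingWith(Map<String, String> cfg) {
        // placeholder for real work
    }
}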

On the significance of choosing volatile among the many tools that guarantee concurrency safety: in some cases the synchronization performance of volatile is indeed better than that of locks (synchronized or the locks in the java.util.concurrent package), but because the virtual machine applies many eliminations and optimizations to locks, it is hard to quantify how much faster volatile is than synchronized. The read cost of a volatile variable is almost the same as that of an ordinary variable, but writes may be somewhat slower, because memory barrier instructions must be inserted into the native code to guarantee that the processor does not execute them out of order. Even so, in most scenarios the total cost of volatile is still lower than that of a lock; the only basis for choosing between volatile and locking is whether the semantics of volatile can meet the needs of the scenario.

The Java Memory Model defines special rules for volatile variables. Suppose T denotes a thread and V and W denote volatile variables; then the following rules must be satisfied when performing the read, load, use, assign, store and write operations:

1) T may perform the use operation on V only if its previous action on V was load; and T may perform the load operation on V only if its next action on V is use. T's use action on V can be considered to be associated with T's load and read actions on V, and they must appear together consecutively (this rule requires that, in working memory, V must be refreshed with the latest value from main memory before each use, to guarantee that the thread can see the modifications other threads have made to V).

2) T may perform the store operation on V only if its previous action on V was assign; and T may perform the assign operation on V only if its next action on V is store. T's assign action on V can be considered to be associated with T's store and write actions on V, and they must appear together consecutively (this rule requires that, in working memory, every modification of V must be synchronized back to main memory immediately, to guarantee that other threads can see the thread's own modifications to V).

3) Suppose action A is a use or assign action performed by T on V, action F is the load or store action associated with A, and action P is the read or write action on V corresponding to F; likewise, suppose action B is a use or assign action performed by T on W, action G is the load or store action associated with B, and action Q is the read or write action on W corresponding to G. If A precedes B, then P precedes Q (this rule requires that variables modified with volatile are not subjected to instruction reordering optimization, so that the code is guaranteed to execute in the same order as the program).

Special rules for variables of the long and double types

The Java Memory Model allows the virtual machine to split the read and write operations on 64-bit data types (long and double) that are not modified by volatile into two 32-bit operations; that is, it allows a virtual machine implementation to choose not to guarantee atomicity for the load, store, read and write operations on 64-bit data types. This is the so-called nonatomic treatment of double and long variables.

If multiple threads share a long or double variable that is not declared volatile, and read and modify it at the same time, some threads might read a value that is neither the original value nor a value written by another thread, but a "half variable" value.

However, reading such a "half variable" is extremely rare in practice (it does not occur in current commercial virtual machines), because although the Java Memory Model allows virtual machines not to implement reads and writes of long and double variables as atomic operations, it also allows implementations to choose to make these operations atomic, and it "strongly recommends" that virtual machines do so.
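If a shared 64-bit field did need to be protected against such torn reads and writes, declaring it volatile is sufficient for that narrow purpose (an illustrative sketch):

public class Ticker {
    // Without volatile, a write of this 64-bit value could in principle be split
    // into two 32-bit halves on some (mostly older 32-bit) JVMs; volatile makes
    // the read and write of the value itself atomic and also provides visibility.
    private volatile long lastTimestamp;

    public void update(long now) {
        lastTimestamp = now;
    }

    public long read() {
        return lastTimestamp;
    }
}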

Atomicity, visibility and ordering

Atomicity (Atomicity): the variable operations whose atomicity is guaranteed directly by the Java Memory Model are read, load, assign, use, store and write; we can generally assume that accesses to primitive data types are atomic (with the long and double exceptions noted above).

If an application scenario needs an atomicity guarantee over a larger scope, the Java Memory Model also provides the lock and unlock operations to meet this need. Although the virtual machine does not expose the lock and unlock operations to users directly, it provides the higher-level bytecode instructions monitorenter and monitorexit, which use these two operations implicitly. These two bytecode instructions are exactly what the synchronized keyword's synchronized blocks compile to, so operations within synchronized blocks also have atomicity.
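A small sketch of using a synchronized block to obtain atomicity for a compound operation; running javap -c on the compiled class would show monitorenter and monitorexit surrounding the block body (the class itself is only an example):

public class SafeCounter {
    private int count = 0;
    private final Object lock = new Object();

    public void increment() {
        synchronized (lock) {   // compiles to monitorenter ... monitorexit
            count++;            // the read-modify-write is now atomic with respect to
        }                       // other threads that synchronize on the same lock object
    }

    public int get() {
        synchronized (lock) {
            return count;
        }
    }
}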

Visibility (Visibility): means that when one thread modifies the value of a shared variable, other threads are immediately aware of the modification.

Besides volatile, Java has two other keywords that provide visibility: synchronized and final. The visibility of synchronized blocks is obtained from the rule "before performing the unlock operation on a variable, the variable must be synchronized back to main memory (by executing the store and write operations)". The visibility of the final keyword means that once a final field has been initialized in the constructor, and the constructor has not passed the "this" reference out (this-escape is very dangerous, because other threads might access the "half-initialized" object through that reference), other threads can see the value of the final field.
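A sketch of safe publication through a final field (the class name is illustrative): the constructor initializes the field and does not let "this" escape:

public class FinalVisibility {
    private final int safeValue;

    public FinalVisibility(int value) {
        // The final field is written inside the constructor, and no "this"
        // reference is handed to another thread before the constructor returns,
        // so any thread that later obtains a reference to this object is
        // guaranteed to see safeValue correctly initialized.
        this.safeValue = value;
        // Dangerous anti-pattern (this-escape), shown only as a comment:
        // someGlobalRegistry.register(this);   // hypothetical registry
    }

    public int getSafeValue() {
        return safeValue;
    }
}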

Ordering (Ordering): the natural ordering in Java programs can be summarized as: if observed within this thread, all operations are ordered; if one thread observes another thread, all operations are unordered. The first half of the sentence refers to "within-thread as-if-serial semantics" (Within-Thread As-If-Serial Semantics); the second half refers to the phenomena of "instruction reordering" and of "synchronization delay between working memory and main memory".

The Java language provides the keywords volatile and synchronized to guarantee the ordering of operations between threads. The volatile keyword itself contains the semantics of prohibiting instruction reordering, while synchronized obtains its ordering from the rule "a variable may be locked by only one thread at a time", which determines that two synchronized blocks holding the same lock can only be entered serially.

The happens-before principle

Happens-before is a partial order relation between two operations defined in the Java Memory Model. If operation A happens-before operation B, it means that before operation B occurs, the effects produced by operation A can be observed by operation B; "effects" include modifying the values of shared variables in memory, sending messages, calling methods, and so on.

Below are some "natural" happens-before relationships that exist under the Java Memory Model. They require no help from any synchronizer and can be used directly in code. If the relationship between two operations is not in this list and cannot be derived from the following rules, they have no ordering guarantee and the virtual machine is free to reorder them.

1) Program order rule (Program Order Rule): within a thread, operations written earlier in program order happen-before operations written later. More precisely, this should be control-flow order rather than the textual order of the program code, since branches and loops must be taken into account.

2) Monitor lock rule (Monitor Lock Rule): an unlock operation happens-before a later lock operation on the same lock. It must be emphasized that this applies to the same lock, and "later" refers to order in time.

3) Volatile variable rule (Volatile Variable Rule): a write to a volatile variable happens-before a later read of this variable, where "later" again refers to order in time.

4) Thread start rule (Thread Start Rule): the start() method of a Thread object happens-before every action of that thread.

5) Thread termination rule (Thread Termination Rule): all operations in a thread happen-before the detection of that thread's termination; termination can be detected by the return of the Thread.join() method, by the return value of Thread.isAlive(), and so on.

6) Thread interruption rule (Thread Interruption Rule): a call to a thread's interrupt() method happens-before the point where the interrupted thread's code detects the interrupt; whether an interrupt has occurred can be detected with the Thread.interrupted() method.

7) Finalizer rule (Finalizer Rule): the completion of an object's initialization (the end of its constructor) happens-before the start of its finalize() method.

8) Transitivity (Transitivity): if operation A happens-before operation B, and operation B happens-before operation C, then operation A happens-before operation C.

Happens-before ordering and time ordering have basically no necessary relationship with each other, so when reasoning about concurrency safety we must not be misled by the order in time; everything must be judged by the happens-before principle.
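A minimal sketch of why time order alone is not enough (a commonly used illustration, not a listing from the original notes): if thread A calls setValue(1) and, later in wall-clock time, thread B calls getValue(), none of the rules above applies, so B may still read 0:

public class ValueHolder {
    private int value;   // plain field: no volatile, no synchronization

    // If setValue() and getValue() are called from different threads, no
    // happens-before relationship exists between the write and the read,
    // even if the write occurs earlier in time, so the reader may legally
    // observe the old value.
    public void setValue(int value) {
        this.value = value;
    }

    public int getValue() {
        return value;
    }

    // Either declaring the field volatile or making both methods synchronized
    // would establish a happens-before edge (volatile variable rule or
    // monitor lock rule) and remove the problem.
}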

Java threads

Concurrency does not necessarily depend on multithreading, but when talking about concurrency in Java, it is mostly inseparable from threads.

Implementing threads

Mainstream operating systems all provide their own thread implementations, and the Java language provides unified handling of threads across different hardware and operating system platforms; each instance of the java.lang.Thread class represents a thread. The Thread class differs significantly from most of the Java API in that all of its key methods are declared native. In the Java API, a native method often means that the method is not, or cannot be, implemented using platform-independent means. For this reason, the topic here is "implementing threads" rather than "implementing Java threads".

There are three main ways of implementing threads:

1. Implementation using kernel threads

A kernel thread (Kernel-Level Thread, KLT) is a thread supported directly by the operating system kernel (Kernel, hereafter "the kernel"). Switching between these threads is completed by the kernel, which schedules them through the scheduler (Scheduler) and is responsible for mapping their tasks onto the processors. Each kernel thread can be seen as a clone of the kernel, which is what enables the operating system to handle more than one thing at the same time; a kernel that supports multithreading is called a multithreaded kernel (Multi-Threads Kernel).

Programs generally do not use kernel threads directly, but instead use a high-level interface to kernel threads, the lightweight process (Light Weight Process, LWP); lightweight processes are what we usually mean by threads. Because each lightweight process is supported by one kernel thread, lightweight processes can exist only when kernel threads are supported first. This 1:1 relationship between lightweight processes and kernel threads is called the one-to-one threading model.

Limitations of lightweight processes: because they are implemented on top of kernel threads, the various thread operations, such as creation, destruction and synchronization, require system calls. System calls are relatively expensive and require switching back and forth between user mode (User Mode) and kernel mode (Kernel Mode). Furthermore, each lightweight process needs a kernel thread to support it, so lightweight processes consume a certain amount of kernel resources (such as the kernel thread's stack space), and therefore the number of lightweight processes a system can support is limited.

2. Implementation using user threads

In the narrow sense, a user thread is a thread implemented entirely in a user-space thread library, whose existence the kernel cannot perceive. The creation, synchronization, destruction and scheduling of user threads are completed entirely in user mode without the kernel's help. If the program is implemented properly, these threads do not need to switch into kernel mode, so operations are fast and cheap, and a much larger number of threads can be supported; some of the multithreading in high-performance databases is implemented with user threads. This 1:N relationship between a process and its user threads is called the one-to-many threading model.

3. Hybrid implementation using user threads together with lightweight processes

In this implementation there are both user threads and lightweight processes.

 

Java thread scheduling

Thread scheduling is the process by which the system allocates processor usage rights to threads. There are two main scheduling approaches:

Cooperative thread scheduling: in a system using cooperative scheduling, a thread's execution time is controlled by the thread itself; after finishing its own work, the thread actively notifies the system to switch to another thread. Advantage: simple implementation. Disadvantage: execution time cannot be controlled.

Preemptive thread scheduling: in a system using preemptive scheduling, each thread's execution time is allocated by the system, and thread switching is not decided by the thread itself. Java uses this kind of thread scheduling.

Java provides 10 levels of thread priority; however, thread priority is not very dependable, because Java threads are implemented by mapping onto the system's native threads, so thread scheduling is ultimately determined by the operating system.
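A small sketch of setting a priority hint; as noted above, the operating system is free to ignore or coarsen it:

public class PriorityDemo {
    public static void main(String[] args) {
        Thread background = new Thread(() -> {
            // low-priority housekeeping work
        });
        // Priorities range from Thread.MIN_PRIORITY (1) to Thread.MAX_PRIORITY (10),
        // with Thread.NORM_PRIORITY (5) as the default; they are only a hint to the
        // underlying operating system scheduler.
        background.setPriority(Thread.MIN_PRIORITY);
        background.start();
    }
}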

State transition

The Java language defines six kinds of thread state; at any point in time, a thread can be in one and only one of them:

New (New): a thread that has been created but not yet started is in this state.

Runnable (Runnable): includes the Running and Ready states of the operating system; a thread in this state may be running, or it may be waiting for the CPU to allocate execution time to it.

Waiting indefinitely (Waiting): threads in this state are not allocated CPU execution time; they wait to be explicitly woken up by another thread. The following methods put a thread into the indefinite waiting state:

Object.wait() with no Timeout parameter set.

Thread.join() with no Timeout parameter set.

LockSupport.park().

Timed waiting (Timed Waiting): threads in this state are not allocated CPU execution time either, but they do not need to be explicitly woken up by another thread; the system wakes them up automatically after a certain time. The following methods put a thread into the timed waiting state:

Thread.sleep().

Object.wait() with a Timeout parameter set.

Thread.join() with a Timeout parameter set.

LockSupport.parkNanos().

LockSupport.parkUntil().

Blocked (Blocked): the thread is blocked. The difference between the "blocked state" and the "waiting state" is that the blocked state is waiting to acquire an exclusive lock, an event that occurs when another thread gives up the lock, whereas the waiting state is waiting for a period of time to pass or for a wake-up action to occur. A thread enters the blocked state while the program is waiting to enter a synchronized region.

Terminated (Terminated): the thread state of a terminated thread; the thread has finished execution.
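A small sketch that observes some of these states via Thread.getState() (the timings are illustrative and the exact states printed can vary slightly):

public class ThreadStateDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            try {
                Thread.sleep(500);                // TIMED_WAITING while sleeping
            } catch (InterruptedException ignored) {
            }
        });

        System.out.println(worker.getState());   // NEW: created but not started
        worker.start();
        Thread.sleep(100);                        // give the worker time to reach sleep()
        System.out.println(worker.getState());   // usually TIMED_WAITING
        worker.join();
        System.out.println(worker.getState());   // TERMINATED after completion
    }
}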


Source: www.cnblogs.com/lvoooop/p/12132984.html