In-depth understanding of the Java virtual machine: the Java memory model

Why concurrency?

  1. One important reason is that the gap between the processor's computing speed and the speed of its storage and communication subsystems is huge: a great deal of time is spent on disk I/O, network communication, or database access. Letting the computer handle several tasks at the same time is the most obvious, and a demonstrably effective, way to "squeeze" more out of the processor's computing power.
  2. Besides making full use of the processor, a more concrete concurrency scenario is a single server providing services to many clients simultaneously. Transactions Per Second (TPS), the average number of requests a server can respond to in one second, is one of the most important indicators of service performance, and the TPS value is closely related to the program's concurrency.

Hardware efficiency and consistency

Most computing tasks cannot be completed by the processor's "calculation" alone; the processor must at least interact with memory, for example to read operand data and to store results. This I/O is difficult to eliminate (we cannot rely on registers alone to complete all computation).

Since the computing speed of the computer's storage devices and that of the processor differ by several orders of magnitude, modern computers insert a layer of cache, whose read and write speed is as close as possible to the processor's, as a buffer between the processor and main memory: the data an operation needs is copied into the cache so the operation can run quickly, and when the operation completes the result is synchronized from the cache back to main memory, so the processor does not have to wait for slow memory reads and writes.

Cache coherence : Because each processor has its own cache while all of them share the same main memory, the caches can disagree about a variable's value. To solve this cache-coherence problem, each processor must follow certain protocols when accessing the cache and operate according to them when reading and writing. Such protocols include MSI, MESI, MOSI, Synapse, and others.

Out-of-order execution optimization : To keep the computing units inside the processor as busy as possible, the processor may execute the input code out of order and then reorganize the results. This guarantees that the final result is the same as that of sequential execution, but it does not guarantee that the order in which individual statements are computed matches the order in the input code.
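A minimal sketch (hypothetical class and field names) of how such reordering, combined with the absence of synchronization, can become visible to another thread:

```java
// Hypothetical demonstration: because neither field is volatile and no lock is used,
// the compiler or processor may reorder the two writes in the writer thread, so the
// reader may observe ready == true while data is still 0 (or may never see the flag
// change at all and loop forever).
public class ReorderingSketch {
    static int data = 0;
    static boolean ready = false;   // deliberately NOT volatile

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            data = 42;      // (1) may be reordered with (2)
            ready = true;   // (2)
        });
        Thread reader = new Thread(() -> {
            while (!ready) { /* busy-wait */ }
            System.out.println(data);   // not guaranteed to print 42
        });
        reader.start();
        writer.start();
        writer.join();
        reader.join();
    }
}
```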

Java memory model

Main memory and working memory

The Java memory model stipulates that all variables are stored in main memory and that each thread has its own working memory. A thread's working memory holds copies of the main-memory values of the variables that thread uses; all of the thread's operations on variables (reads, assignments, and so on) must be performed in working memory, and the thread cannot directly read or write variables in main memory. Different threads cannot access the variables in each other's working memory either; the transfer of variable values between threads must go through main memory.

At a lower level, main memory directly corresponds to the physical hardware's memory. To obtain better running speed, the virtual machine (or even optimizations in the hardware itself) may keep working memory preferentially in registers and caches, because what a running program mainly reads and writes is its working memory.

Inter-memory operation

Regarding the specific protocol for interaction between main memory and working memory, the Java memory model defines the following eight operations. A virtual-machine implementation must ensure that each of these operations is atomic and indivisible.

  • lock: acts on a variable in main memory, marking it as exclusively owned by one thread
  • unlock: acts on a variable in main memory, releasing a variable that is in the locked state so that it can be locked by other threads
  • read: acts on a variable in main memory, transferring its value from main memory into the thread's working memory for the subsequent load action
  • load: acts on a variable in working memory, putting the value obtained by the read operation from main memory into the working-memory copy of the variable
  • use: acts on a variable in working memory, passing its value to the execution engine; this operation is performed whenever the virtual machine encounters a bytecode instruction that needs the variable's value
  • assign: acts on a variable in working memory, assigning a value received from the execution engine to the working-memory copy of the variable
  • store: acts on a variable in working memory, transferring its value to main memory for the subsequent write operation
  • write: acts on a variable in main memory, putting the value obtained by the store operation from working memory into the main-memory variable

Note : The Java memory model requires that these operations occur in order and in pairs: a read must eventually be followed by a load, and a store by a write. However, only sequential execution is required; consecutive execution is not guaranteed, so other operations may be inserted between them.
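As an illustration only (not actual bytecode), the comments in the following sketch show roughly which inter-memory operations are involved when a thread increments a shared field:

```java
// Illustrative sketch: how "count = count + 1" on a shared field roughly maps
// onto the inter-memory operations defined by the Java memory model.
public class InterMemoryOpsSketch {
    static int count = 0;   // the variable lives in main memory

    static void increment() {
        // read  : the value of count is transferred from main memory
        // load  : that value is placed into this thread's working-memory copy
        // use   : the copy's value is handed to the execution engine for the addition
        int tmp = count + 1;
        // assign: the result from the execution engine is put back into the working-memory copy
        // store : the copy's value is transferred toward main memory
        // write : the transferred value is placed into the main-memory variable
        count = tmp;
    }
}
```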

Special rules for volatile variables

When a variable is defined as volatile, it will have two characteristics:

Ensure the visibility of this variable to all threads

"Visibility" here means that when a thread modifies the value of this variable, the new value is immediately known to other threads.

Volatile variables only guarantee visibility. In operations that do not satisfy the following two rules, we still have to guarantee atomicity by locking (using synchronized or the atomic classes in java.util.concurrent); see the sketch after the list below.

  • The result of the operation does not depend on the current value of the variable, or only a single thread ever modifies the variable's value
  • The variable does not need to participate in invariants together with other state variables
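A minimal sketch (hypothetical names) of an operation that violates the first rule: volatile cannot make `count++` atomic, so an atomic class or a lock is still needed:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical example: unsafeCount++ is a read-modify-write that volatile alone
// cannot make atomic, so its final value is usually less than 20000.
// AtomicInteger (or a synchronized block) restores atomicity.
public class VolatileNotAtomicSketch {
    static volatile int unsafeCount = 0;
    static final AtomicInteger safeCount = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                unsafeCount++;               // not atomic: lost updates are possible
                safeCount.incrementAndGet(); // atomic
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("volatile counter: " + unsafeCount);      // often < 20000
        System.out.println("atomic counter:   " + safeCount.get());  // always 20000
    }
}
```
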
Prohibit instruction reordering optimization

Ordinary variables only guarantee that correct results are obtained at every point during the method's execution that depends on the result of an assignment; they do not guarantee that the order of variable assignments matches their order in the program code. Declaring a variable volatile additionally forbids reordering optimizations across reads and writes of that variable.

Memory barrier: https://www.jianshu.com/p/64240319ed60
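A classic case where this matters is double-checked locking; a sketch (hypothetical class name) of why the instance field should be volatile:

```java
// Hypothetical example: without volatile, the write "instance = new Singleton()"
// could be reordered so that another thread sees a non-null reference to an object
// that is not yet fully constructed; volatile forbids that reordering.
public class Singleton {
    private static volatile Singleton instance;

    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {                 // first check, no locking
            synchronized (Singleton.class) {
                if (instance == null) {         // second check, under the lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
```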

Special rules for long and double variables

Non-atomic agreement

The Java memory model requires that the eight operations lock, unlock, read, load, assign, use, store, and write all be atomic. However, for the 64-bit data types (long and double) the model specifically defines a looser rule: the virtual machine is allowed to split reads and writes of 64-bit values not declared volatile into two 32-bit operations; that is, it may choose not to guarantee the atomicity of the load, store, read, and write operations for 64-bit data types.
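A minimal sketch (hypothetical names) of the problem and the usual remedy: declaring a shared long as volatile guarantees atomic reads and writes (in practice, mainstream 64-bit virtual machines already treat plain long/double accesses atomically, so torn reads are rare):

```java
// Hypothetical stress sketch: on a virtual machine that splits 64-bit accesses,
// the reader could observe a "torn" value mixing the halves of two writes.
// Declaring the field volatile guarantees atomic reads and writes of the long.
// The loops run until the process is terminated manually.
public class TornLongSketch {
    static volatile long shared = 0L;   // without volatile, tearing would be permitted

    public static void main(String[] args) {
        new Thread(() -> {
            for (;;) shared = 0x0000000000000000L;
        }).start();
        new Thread(() -> {
            for (;;) shared = 0xFFFFFFFFFFFFFFFFL;   // == -1L
        }).start();
        new Thread(() -> {
            for (;;) {
                long v = shared;
                if (v != 0L && v != -1L) {   // a mixed value would indicate tearing
                    System.out.println("torn read: " + Long.toHexString(v));
                }
            }
        }).start();
    }
}
```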

Happens-before principle

Happens-before is a partial-order relation between two operations defined in the Java memory model. If operation A happens-before operation B, it means that before operation B occurs, the effects of operation A can be observed by operation B; "effects" include modifications of shared variables in memory, messages sent, and methods called.

Some "natural" pre-existing relationships under the java memory model
  • Program Order Rule : Within a thread, the program controls the flow order according to the code
  • Monitor Lock Rule : An unlock operation occurs before the lock operation on the same lock
  • Thread Start Rule
  • Volatile Variable Rule : A write operation to a volatile variable occurs first after a read operation to this variable
  • Thread Termination Rule
  • Thread Interruption Rule (Thread Interruption Rule) : The call to the thread interrupt () method occurs before the code of the interrupted thread detects the occurrence of an interrupt event, and whether an interruption occurs can be detected by the Thread.interrupted () method
  • Object finalization rule (Finalizer Rule) : The initialization of an object (the end of the execution of the constructor) first occurs at the beginning of his finalize () method
  • Transitivity : If operation A occurs before operation B and operation B occurs before operation C, then it can be concluded that operation A occurred before operation C
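A minimal sketch (hypothetical names) combining the program order rule, the volatile variable rule, and transitivity to establish visibility between two threads:

```java
// Hypothetical example: (1) happens-before (2) by program order, (2) happens-before
// (3) by the volatile variable rule, and (3) happens-before (4) by program order;
// by transitivity, (1) happens-before (4), so the reader is guaranteed to see 42.
public class HappensBeforeSketch {
    static int payload = 0;
    static volatile boolean published = false;

    static void writer() {
        payload = 42;        // (1) ordinary write
        published = true;    // (2) volatile write
    }

    static void reader() {
        if (published) {     // (3) volatile read
            System.out.println(payload);   // (4) guaranteed to print 42
        }
    }
}
```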

Java and threads

Thread implementation
  • Implementation using kernel threads (1:1)
  • Implementation using user threads (1:N)
  • Hybrid implementation using user threads together with lightweight processes (N:M)
Java thread implementation

For the Sun JDK, both its Windows and Linux versions use the one-to-one thread model: each Java thread is mapped to one lightweight process.

Java thread scheduling

Thread scheduling is the process by which the system assigns processor usage rights to threads. There are two main kinds of scheduling: cooperative thread scheduling (Cooperative Threads-Scheduling) and preemptive thread scheduling (Preemptive Threads-Scheduling).

Cooperative thread scheduling

In a multi-threaded system with cooperative scheduling, the execution time of a thread is controlled by the thread itself. After a thread finishes its work, it must actively notify the system to switch to another thread.

Advantages : simple to implement, and because a thread only switches after finishing its own work, the switching points are known to the thread itself, so there are essentially no thread-synchronization problems

Disadvantages : thread execution time is uncontrollable; if a thread never notifies the system to switch, the program can stay blocked on it

Preemptive thread scheduling

In a multi-threaded system with preemptive scheduling, each thread is allocated execution time by the system, and thread switching is not decided by the threads themselves. Java's thread scheduling is preemptive.
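Under Java's preemptive scheduling, thread priority is only a hint to the system scheduler; a minimal sketch (hypothetical names) of setting priorities:

```java
// Hypothetical example: priorities merely suggest to the preemptive scheduler
// that one thread should get more CPU time; the operating system is free to
// ignore the hint, so program correctness must never depend on priority.
public class PrioritySketch {
    public static void main(String[] args) {
        Thread background = new Thread(() -> {
            // low-priority housekeeping work
        });
        Thread worker = new Thread(() -> {
            // latency-sensitive work
        });
        background.setPriority(Thread.MIN_PRIORITY);  // 1
        worker.setPriority(Thread.MAX_PRIORITY);      // 10
        background.start();
        worker.start();
    }
}
```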

Origin blog.csdn.net/qq_40635011/article/details/105429062