Java JVM: Java Memory Model and Threads (7)

  To measure the performance of a service, transactions per second (TPS) is one of the key indicators, and a program's TPS is closely tied to its concurrency capability.

These are my reading notes; the previous article in the series is: Java JMM: Memory Model

1. Efficiency and consistency of hardware

  • Memory model: an abstraction of the process of read/write access to a particular memory or cache under a specific operating protocol
  • To make full use of its execution units, the processor may execute the input code out of order, then reorganize the out-of-order results after computation so that the final outcome matches the program

2. Java memory model

  • The main purpose of the Java memory model is to define the access rules for the various variables in a program. It is modeled on the CPU cache architecture, but it is standardized: it shields the differences in memory access between different underlying hardware and operating systems
  • main memory and working memory
    • The Java memory model stipulates that all variables (meaning shared data: instance fields, static fields and array elements, not local variables) are stored in main memory, and each thread also has its own working memory
    • Threads cannot directly access the variables in each other's working memory; variable values are passed between threads through main memory
  • inter-memory interaction
    • The Java virtual machine requires each of the following eight operations to be atomic and indivisible
      • lock: marks a variable as exclusively owned by one thread; acts on main memory
      • unlock: releases a variable from the locked state; acts on main memory
      • read: transfers the value of a variable from main memory to the thread's working memory; acts on main memory
      • load: puts the value obtained by the read operation into the working-memory copy of the variable; acts on working memory
      • use: passes the value of a working-memory variable to the execution engine; acts on working memory
      • assign: assigns a value received from the execution engine to a working-memory variable; acts on working memory
      • store: transfers the value of a working-memory variable toward main memory; acts on working memory
      • write: puts the value obtained by the store operation into the main-memory variable; acts on main memory
    • To copy a variable from main memory into working memory, read and load must execute in order; to synchronize it from working memory back to main memory, store and write must execute in order
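The rule that values travel between threads only through main memory can be sketched as follows. This is a minimal, hypothetical example (the class and field names are mine): a volatile write forces the working-memory copy back to main memory (store/write), and the volatile read re-fetches it (read/load), so the consumer is guaranteed to see the plain write that preceded it.

```java
// Sketch: two threads exchange a value only through main memory.
public class MainMemoryHandshake {
    private static int data = 0;                    // plain shared variable
    private static volatile boolean ready = false;  // publication flag
    private static volatile int observed = -1;      // what the consumer saw

    static int run() throws InterruptedException {
        Thread consumer = new Thread(() -> {
            while (!ready) { /* spin until the flag becomes visible */ }
            observed = data; // guaranteed 42: the volatile read ordered after the write
        });
        consumer.start();

        data = 42;     // ordinary write, ordered before the volatile write below
        ready = true;  // volatile write: forces store/write back to main memory

        consumer.join();
        return observed;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("consumer saw data = " + run());
    }
}
```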
  • Special rules for volatile variables
    • volatile is arguably the lightest-weight synchronization mechanism the Java virtual machine provides
    • volatile defines two characteristics
      • Guarantees the visibility of the variable to all threads
        • Visibility: when one thread changes the value of the variable, other threads can see the new value immediately
        • Reads and writes of a volatile variable are themselves safe under concurrency
        • Locks are still required for the atomicity of compound operations (such as i++), unless:
          • The result of the operation does not depend on the current value of the variable, or it can be ensured that only a single thread modifies the variable
          • The variable does not participate in invariant constraints together with other state variables
      • Disable instruction reordering optimizations
        • At the hardware level, multiple instructions may be dispatched to different circuit units and processed out of the order specified by the program
    • The cost of reading a volatile variable is almost the same as for an ordinary variable; writes are somewhat slower because they insert memory barriers
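The classic case where volatile alone suffices is a shutdown flag: the new value does not depend on the old one, and only one thread writes it. A minimal sketch (class and field names are mine):

```java
// Sketch: a volatile shutdown flag. Without volatile, the worker could in
// principle keep reading a stale false from its working memory forever.
public class VolatileShutdown {
    private static volatile boolean shutdownRequested = false;

    static boolean runWorkerAndStop() throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!shutdownRequested) {
                // do work; the volatile read re-checks main memory each pass
            }
        });
        worker.start();
        Thread.sleep(50);          // let the worker spin briefly
        shutdownRequested = true;  // immediately visible to the worker
        worker.join(5000);
        return !worker.isAlive();  // true: the worker observed the flag and exited
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("worker stopped: " + runWorkerAndStop());
    }
}
```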
  • Special rules for long and double variables
    • The virtual machine is allowed to treat the load, store, read and write operations on a 64-bit value (long or double) not declared volatile as non-atomic, i.e. to split them into two 32-bit halves; this is the "non-atomic treatment of long and double"
    • For the double type, since modern central processing units generally have a dedicated floating-point unit (FPU) that processes single- and double-precision values as a whole, non-atomic access does not occur in practice
  • Atomicity, Visibility, and Order
    • atomicity
      • The atomic variable operations directly guaranteed by the Java memory model are read, load, assign, use, store, and write
        • So reads and writes of basic data types can be considered atomic
      • Atomicity over a larger scope is provided by the lock and unlock operations
      • Operations within a synchronized block are therefore also atomic
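The lock/unlock route to larger-scope atomicity can be sketched with a shared counter. This is an illustrative example of mine, not from the source: count++ is a read-modify-write and is not atomic on its own (even with volatile), but wrapping it in a synchronized block makes the whole sequence atomic.

```java
// Sketch: synchronized gives atomicity to the compound operation count++.
public class SynchronizedCounter {
    private static int count = 0;
    private static final Object LOCK = new Object();

    private static void increment() {
        synchronized (LOCK) { // lock -> read/load/use/assign/store/write -> unlock
            count++;
        }
    }

    static int runTwoThreads(int perThread) throws InterruptedException {
        count = 0;
        Runnable task = () -> {
            for (int i = 0; i < perThread; i++) increment();
        };
        Thread a = new Thread(task);
        Thread b = new Thread(task);
        a.start(); b.start();
        a.join(); b.join();
        return count; // always exactly 2 * perThread; a bare count++ could lose updates
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runTwoThreads(100_000));
    }
}
```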
    • visibility
      • When a thread modifies a variable, other threads can know it immediately
      • The volatile special rule ensures that new values are synchronized to main memory immediately, and that the variable is refreshed from main memory immediately before each use
      • Synchronized and final can also achieve visibility
        • Before a synchronized block performs unlock on a variable, it must synchronize the variable back to main memory
        • Once a final field's initialization completes, and the constructor has not leaked the "this" reference, other threads can see the final field's value
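The final-field rule above can be sketched as an idiom. This is a minimal example of mine; the interesting guarantee only shows up on a racy publication path, but the shape of safe construction is the point here:

```java
// Sketch of the final-field visibility rule: if the constructor finishes
// without leaking "this", any thread that later obtains the reference sees
// the final field's initialized value without extra synchronization.
public class FinalFieldHolder {
    private final int value;

    public FinalFieldHolder(int value) {
        this.value = value;
        // Do NOT publish "this" here (e.g. registering it in a global map):
        // another thread could then see the object before value is initialized.
    }

    public int getValue() { return value; }

    public static void main(String[] args) {
        FinalFieldHolder holder = new FinalFieldHolder(7);
        System.out.println(holder.getValue());
    }
}
```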
    • orderliness
      • Observed from within a thread, all of its operations appear ordered ("as-if-serial" semantics); observed from one thread, the operations of another thread appear out of order
      • volatile and synchronized guarantee ordering between threads
        • volatile carries semantics that prohibit instruction reordering
        • synchronized allows only one thread at a time to lock a variable
    • Most concurrency-control needs can be met with synchronized
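A well-known place where the reordering-prohibition semantics of volatile matter is double-checked locking. The sketch below is a standard illustration, not taken from the source: without volatile, `instance = new Singleton()` may be reordered so another thread sees a non-null but not-yet-initialized object.

```java
// Sketch: double-checked locking requires volatile to be correct.
public class Singleton {
    private static volatile Singleton instance; // volatile forbids the reordering
    private final int payload;

    private Singleton() { payload = 1; }

    public static Singleton getInstance() {
        if (instance == null) {                 // first check, no lock taken
            synchronized (Singleton.class) {
                if (instance == null) {         // second check, under the lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }

    public int getPayload() { return payload; }

    public static void main(String[] args) {
        System.out.println(getInstance() == getInstance()); // same instance
    }
}
```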
  • Happens-before principle
    • If all ordering in the Java memory model had to be established with volatile and synchronized alone, many operations would become very cumbersome; the happens-before principle lets most ordering be established "for free"
    • Happens-before is a partial-order relation between two operations defined in the Java memory model
      • If A happens-before B, then the effects produced by A are observable by B
    • The rules: program order rule, monitor lock rule, volatile variable rule, thread start rule, thread termination rule, thread interruption rule, object finalization rule, and transitivity
    • Chronological (wall-clock) order and the happens-before relation are largely independent of each other, so when judging concurrency safety we should not be misled by the order in which operations occur in time; everything should be based on the happens-before principle
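Two of the rules listed above can be demonstrated directly. In this sketch (names are mine), no volatile is needed: the thread start rule makes the pre-start write visible inside the thread, and the thread termination rule makes the thread's write visible after join() returns.

```java
// Sketch of the thread start rule and thread termination rule.
public class HappensBeforeDemo {
    private static int input;   // plain fields: start/join provide the ordering
    private static int result;

    static int run() throws InterruptedException {
        input = 21;                          // happens-before t.start()
        Thread t = new Thread(() -> result = input * 2);
        t.start();
        t.join();                            // t's writes happen-before join() returning
        return result;                       // guaranteed 42
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());
    }
}
```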

3. Java and threads

  • Implementation of threads
    • A thread is a lighter-weight scheduling unit than a process; in Java, the thread is the most basic unit of processor resource scheduling
    • All the key methods of the Thread class are native
      • A native method usually means the method is not, or cannot be, implemented by platform-independent means
    • Kernel thread (KLT) implementation: threads directly supported by the operating system kernel; a 1:1 implementation
      • Programs generally do not use kernel threads directly, but a higher-level interface to them: the lightweight process (LWP)
      • Limitations of lightweight processes
        • Because they are built on kernel threads, the various thread operations require system calls, which are relatively expensive
    • User thread implementation: 1:N implementation
      • In a broad sense: as long as it is not a kernel thread, it can be considered as a kind of user thread
      • In a narrow sense: completely built on the thread library of user space
    • Hybrid implementation: N:M implementation
      • Kernel threads are used together with user threads
    • Implementation of Java threads
      • HotSpot: each Java thread is mapped directly to one operating system native thread
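The 1:1 mapping can be observed indirectly: code in run() executes under a different thread identity than the caller. A small sketch (class name is mine):

```java
// Sketch: each java.lang.Thread runs with its own thread identity, backed
// on HotSpot by a separate OS native thread.
public class NativeThreadDemo {
    static boolean ranOnDifferentThread() throws InterruptedException {
        long callerId = Thread.currentThread().getId();
        long[] workerId = new long[1]; // holder so the lambda can write to it
        Thread t = new Thread(() -> workerId[0] = Thread.currentThread().getId());
        t.start();
        t.join();
        return workerId[0] != callerId; // run() executed on a different thread
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(ranOnDifferentThread());
    }
}
```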
  • Java thread scheduling
    • cooperative thread scheduling
      • The execution time of the thread is controlled by the thread itself, and the system must be actively notified to switch to another thread
      • Advantages: easy to implement
      • Disadvantages: thread execution time is uncontrollable; a single badly written thread that never notifies the system to switch can block the whole process or even the system
    • preemptive thread scheduling
      • The execution time of each thread is allocated by the system, and the execution time of the thread is controlled by the system. There will be no problem that one thread will cause the entire system to block
      • The thread scheduling method used by Java is preemptive scheduling
  • state transition
    • New: a thread that has been created but not yet started
    • Runnable: may be currently executing, or waiting for the operating system to allocate execution time
    • Waiting: waiting indefinitely to be explicitly woken up by another thread
    • Timed Waiting: automatically woken by the system after a certain period of time
    • Blocked: waiting to acquire an exclusive lock; a thread enters this state while waiting to enter a synchronized region, and leaves it when another thread releases the lock
    • Terminated: the state of a thread that has finished execution
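Several of these states can be walked through with `Thread.getState()`. A sketch of mine: the thread is NEW before start(), observed in TIMED_WAITING while it sleeps, and TERMINATED after join() returns.

```java
// Sketch: observing NEW -> TIMED_WAITING -> TERMINATED transitions.
public class ThreadStates {
    static Thread.State[] observe() throws InterruptedException {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(10_000);            // long sleep: TIMED_WAITING
            } catch (InterruptedException ignored) {
                // interrupted deliberately below so the demo ends quickly
            }
        });
        Thread.State before = t.getState();      // NEW: created, not started
        t.start();
        // Poll until the sleeping thread is observed in TIMED_WAITING.
        while (t.getState() != Thread.State.TIMED_WAITING) {
            Thread.sleep(1);
        }
        Thread.State during = Thread.State.TIMED_WAITING; // just observed
        t.interrupt();                           // wake it from the sleep
        t.join();
        Thread.State after = t.getState();       // TERMINATED
        return new Thread.State[] { before, during, after };
    }

    public static void main(String[] args) throws InterruptedException {
        for (Thread.State s : observe()) System.out.println(s);
    }
}
```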


Origin blog.csdn.net/baidu_40468340/article/details/128836512