Java Virtual Machine Study Notes (Five): Efficient Concurrency

Disclaimer: This article is the blogger's original work and may not be reproduced without permission. https://blog.csdn.net/weixin_36904568/article/details/90301235

I: Memory and threads

1. The Java Memory Model concept

The Java Memory Model defines the access rules for the variables used in a program (instance fields, static fields, and array elements):

  • All variables are stored in main memory
  • Each thread also has its own working memory, which holds copies of the main-memory variables the thread uses. (Threads cannot access each other's working memory, and a thread cannot operate on main-memory variables directly)

2. Interactions in the Java Memory Model

(1) Main operations

  • lock (main memory): marks a main-memory variable as exclusively owned by one thread
  • unlock (main memory): releases a locked main-memory variable so that other threads can lock it
  • read (main memory): transfers the value of a variable from main memory to the thread's working memory
  • load (working memory): puts the value obtained by read into the working-memory copy of the variable
  • use (working memory): passes the value of a working-memory variable to the execution engine
  • assign (working memory): assigns a value received from the execution engine to a working-memory variable
  • store (working memory): transfers the value of a variable from working memory to main memory
  • write (main memory): puts the value obtained by store into the main-memory variable

(2) Process

  • Main memory → working memory: read + load
  • Working memory → main memory: store + write
  • Exclusive access to a variable: lock + unlock

(3) General rules

  1. Paired transfers: a variable read from main memory must be accepted by working memory (read must be followed by load), and a store initiated from working memory must be accepted by main memory (store must be followed by write)
  2. Synchronized changes: after a working-memory variable is assigned, it must eventually be synchronized back to main memory; a thread may not synchronize data back to main memory without a preceding assign
  3. Initialized first: a variable must be initialized in working memory via load or assign before it can be used or stored
  4. Exclusive lock: a variable may be locked by only one thread at a time, though that thread may lock it multiple times (reentrancy)
  5. Fresh after lock: locking a variable clears its working-memory copy, so it must be re-initialized via load or assign before use
  6. Own unlock only: a thread may only unlock variables that it locked itself
  7. Synchronize before unlock: before unlocking a variable, its value must be synchronized back to main memory

(4) Rules for volatile

Feature 1: visibility — a write to a volatile variable by one thread becomes visible to all other threads immediately, without requiring explicit synchronization through locks

Cases where no lock is needed:

  • The result of the operation does not depend on the variable's current value, or only a single thread ever modifies the value
  • The variable does not participate in invariants together with other state variables

Cases where a lock is still needed:

  • A volatile variable is consistent across the working memories of all threads, but compound operations on it (such as increment) are still not atomic under concurrency

Feature 2: prohibits instruction reordering

Prevents the optimizer from reordering accesses to the variable so that they would execute out of program order

Rules
  1. Refresh before use (load + use): every use of a volatile variable in working memory must first be refreshed from main memory, guaranteeing the latest value is seen
  2. Synchronize after assign (assign + store): every modification of a volatile variable in working memory must be immediately synchronized back to main memory, so main memory always holds the latest value
  3. Program order preserved: operations on volatile variables are not reordered; they execute in the same order as the program code
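The two sides of the volatile rules above can be sketched in a small program: a volatile flag gives the visibility guarantee, while a volatile counter still shows why compound operations need a lock. This is a minimal sketch; the class and field names (`VolatileDemo`, `stop`, `count`) are my own, not from the original.

```java
// Sketch: a volatile flag guarantees visibility across threads, but volatile
// does NOT make compound operations like count++ atomic.
public class VolatileDemo {
    static volatile boolean stop = false; // write by main thread is seen at once
    static volatile int count = 0;        // visibility only, not atomicity

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!stop) {   // always reads the latest value from main memory
                count++;      // read-modify-write: would race with other writers
            }
        });
        worker.start();
        Thread.sleep(100);
        stop = true;          // flushed to main memory immediately; loop exits
        worker.join();
        System.out.println("worker stopped, count = " + count);
    }
}
```

With several threads doing `count++`, the final value would typically be less than the sum of the increments; that is the "still need a lock" case.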

(5) Non-atomic treatment of long and double

The specification allows a virtual machine to split reads and writes of 64-bit data not modified by volatile into two 32-bit operations; that is, it does not guarantee the atomicity of read, load, store, and write for such variables

3. Characteristics of the Java Memory Model

(1) Atomicity

  • The Java Memory Model guarantees that read, load, assign, use, store, and write are atomic operations
  • Coarser-grained atomicity is obtained through lock and unlock (the synchronized block)

(2) Visibility

After a thread modifies a variable, the new value is synchronized back to main memory, and other threads refresh the value from main memory before reading it.

  • volatile variables guarantee immediate synchronization and immediate refresh
  • synchronized blocks
  • the final keyword

(3) Ordering

  • Within a thread, operations appear ordered: execution looks serial from inside the thread
  • Observed from another thread, operations appear disordered: instruction reordering occurs, and synchronization between working memory and main memory is delayed
    • volatile variables prohibit reordering
    • synchronized blocks guarantee that only one thread holds the lock at a time
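The synchronized route to all three properties can be sketched with a counter: the lock makes `count++` atomic, and lock release/acquire provides the visibility and ordering. A minimal sketch; the class name `SyncCounter` is my own.

```java
// Sketch: synchronized provides atomicity, visibility, and ordering for the
// guarded operations, so two threads x 10_000 increments always total 20_000.
public class SyncCounter {
    private int count = 0;

    public synchronized void inc() { count++; } // lock + unlock around the compound op
    public synchronized int get() { return count; }

    public static void main(String[] args) throws InterruptedException {
        SyncCounter c = new SyncCounter();
        Runnable task = () -> { for (int i = 0; i < 10_000; i++) c.inc(); };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.get()); // always 20000
    }
}
```

Without the synchronized keyword on `inc()`, the same program could print any value up to 20000.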

4. The happens-before principle

If operation A happens-before operation B, then the effects produced by A are observable by B. Happens-before is not necessarily tied to temporal order.

  • Within the same thread
    • Program order rule: within a thread, each operation in control-flow order happens-before the operations that follow it
  • synchronized blocks
    • Monitor lock rule: an unlock of a lock happens-before a subsequent lock of the same lock
  • volatile variables
    • Volatile variable rule: a write to a volatile variable happens-before a subsequent read of it
  • Threads
    • Thread start rule: a call to the Thread object's start() method happens-before every operation inside the started thread
    • Thread termination rule: every operation inside a thread happens-before another thread detects that it has terminated
    • Thread interruption rule: a call to interrupt() on a thread happens-before the interrupted thread's code detects the interrupt event
  • Objects
    • Finalizer rule: the completion of an object's initialization happens-before the start of its finalize() method
  • Transitivity rule: if A happens-before B and B happens-before C, then A happens-before C
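Two of the thread rules above can be demonstrated with a plain (non-volatile, non-synchronized) field: start() and join() alone create the happens-before edges that make the writes visible. A minimal sketch; the class name `HappensBefore` is my own.

```java
// Sketch: Thread.start() happens-before any action in the started thread, and
// all actions in a thread happen-before another thread returns from join() on
// it — so this plain field is safely published with no volatile/synchronized.
public class HappensBefore {
    static int data = 0; // plain field: safe here only because of start/join edges

    public static void main(String[] args) throws InterruptedException {
        data = 42;                                    // before start(): visible in t
        Thread t = new Thread(() -> data = data + 1); // sees 42, writes 43
        t.start();
        t.join();                                     // after join(): t's write is visible
        System.out.println(data);                     // always 43
    }
}
```

Remove the `join()` and the final read would race with the worker's write; the happens-before chain is what makes the result deterministic.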

5. Java threads

(1) Thread implementations

  • Kernel threads (1:1): threads directly supported by the operating system kernel, which performs switching and scheduling. Programs typically use lightweight processes (LWPs, each backed by one kernel thread)
    • Each LWP is an independent scheduling unit
    • System calls are costly
    • Consumes kernel resources
  • User threads (1:N): thread creation, switching, scheduling, and synchronization are all done in user space
    • Fast operations
    • Small resource consumption
    • Programs become very complex
  • User threads plus lightweight processes (N:M)
    • Thread creation and switching are still done in user space
    • Lightweight processes act as a bridge, using the kernel's thread-scheduling and processor-mapping facilities

Before JDK 1.2, Java implemented threads with user-level "green threads"; since then it has used a native threading model based on the operating system.

(2) Thread scheduling

Cooperative

The execution time of a thread is controlled by the thread itself; after finishing its work, the thread proactively notifies the system to switch to another thread.

  • Simple to implement
  • Thread execution time cannot be controlled

Preemptive (used by Java)

Execution time is allocated to threads by the system, which schedules them.

  • Time allocation can be influenced by setting priorities
  • Priorities are inconsistent across platforms, and the system may change them on its own
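The priority caveat above is visible in the API itself: Java exposes priorities only as hints. A minimal sketch; the class name `PriorityDemo` is my own.

```java
// Sketch: thread priorities are scheduling *hints*; the mapping to OS
// priorities is platform-dependent, so correctness must never rely on them.
public class PriorityDemo {
    public static void main(String[] args) {
        Thread t = new Thread(() -> {});
        t.setPriority(Thread.MAX_PRIORITY);  // 10; MIN_PRIORITY = 1, NORM_PRIORITY = 5
        System.out.println(t.getPriority()); // prints 10
    }
}
```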

(3) Thread states

  • New: created but not yet started
  • Runnable: ready to run or currently running
  • Waiting (indefinite): waiting to be woken by another thread
    • Object.wait()
    • Thread.join()
    • LockSupport.park()
  • Timed Waiting: wakes up automatically after a period of time
    • Thread.sleep()
    • Thread.join(x)
    • LockSupport.parkNanos()
    • LockSupport.parkUntil()
  • Blocked: waiting to acquire an exclusive lock
  • Terminated: thread execution has ended
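Several of the states listed above can be observed directly through Thread.getState(). A minimal sketch; the class name `StateDemo` and the `done` flag are my own.

```java
import java.util.concurrent.locks.LockSupport;

// Sketch: observing three of the Thread.State values (NEW, WAITING,
// TERMINATED) using LockSupport.park()/unpark().
public class StateDemo {
    static volatile boolean done = false;

    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            while (!done) LockSupport.park(); // parks -> WAITING; loop guards spurious wakeups
        });
        System.out.println(t.getState());     // NEW: created, not started
        t.start();
        while (t.getState() != Thread.State.WAITING) Thread.sleep(10); // wait until parked
        System.out.println(t.getState());     // WAITING (parked indefinitely)
        done = true;
        LockSupport.unpark(t);                // wake it; the worker loop exits
        t.join();
        System.out.println(t.getState());     // TERMINATED
    }
}
```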

II: Thread safety

1. The concept of thread safety

(1) Definition

An object is thread-safe if, when multiple threads access it, the correct result is obtained without considering how those threads are scheduled or interleaved by the runtime environment, and without any additional synchronization or coordination on the caller's side.

(2) Levels of thread safety in Java

  • Immutable objects: always thread-safe; neither the object's implementation nor its callers need any additional thread-safety measures
    • Primitive types: declared final
    • Object types: no behavior of the object changes its state
  • Absolutely thread-safe objects: fully satisfy the definition of thread safety; callers never need additional synchronization
  • Relatively thread-safe objects: individual operations on the object are thread-safe, but sequences of consecutive calls need additional synchronization
  • Thread-compatible objects: the object itself is not thread-safe, but callers can make its use safe by applying synchronization on the calling side
  • Thread-hostile objects: cannot be made thread-safe regardless of what synchronization the caller uses
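Vector is the classic relatively thread-safe object: each method is synchronized on its own, but a check-then-act sequence needs client-side locking. A minimal sketch; the class name `VectorDemo` is my own.

```java
import java.util.Vector;

// Sketch: Vector is only *relatively* thread-safe — each method is
// synchronized, but a check-then-act sequence still needs client-side
// locking on the vector's own monitor.
public class VectorDemo {
    public static void main(String[] args) {
        Vector<Integer> v = new Vector<>();
        v.add(1);
        // Unsafe pattern: another thread could remove an element between
        //   if (v.size() > 0) ... and ... v.get(v.size() - 1);
        // Safe pattern: hold the vector's monitor across both calls.
        synchronized (v) {
            if (!v.isEmpty()) {
                System.out.println(v.get(v.size() - 1)); // prints 1
            }
        }
    }
}
```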

2. Implementing thread safety

(1) Mutual exclusion and synchronization

Definitions

Synchronization: when multiple threads access shared data concurrently, the shared data is used by only one thread (or a fixed number of threads) at a time
Mutual exclusion: a means of achieving synchronization

Features
  • A pessimistic concurrency strategy
  • Heavyweight; system calls carry a large overhead

Methods

The synchronized keyword

Compiles to the monitorenter and monitorexit bytecodes, each of which takes a reference-typed parameter indicating the object to lock or unlock

  • synchronized with a specified object: that object is the lock reference
  • synchronized without a specified object: the object instance (for an instance method) or the Class object (for a static method) is the lock reference
Process:
  • When the monitorenter instruction executes, the thread attempts to acquire the object's lock
    • If the object is not locked, or is already locked by the current thread (reentrancy), the lock counter is incremented by 1
    • Otherwise, the thread blocks and waits
  • When the monitorexit instruction executes, the lock counter is decremented by 1
  • When the lock counter reaches 0, the object's lock is released
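The reentrancy step of that process can be sketched with nested synchronized methods on one object: the lock counter lets the same thread re-enter without deadlocking itself. A minimal sketch; the class name `ReentrantMonitor` is my own.

```java
// Sketch: synchronized is reentrant — the monitor's lock counter described
// above lets one thread enter nested synchronized methods on the same object.
public class ReentrantMonitor {
    public synchronized int outer() { // monitorenter: counter 0 -> 1
        return inner() + 1;           // same thread re-enters: counter 1 -> 2
    }

    public synchronized int inner() { // counter back to 1 on return
        return 1;
    }

    public static void main(String[] args) {
        // Completes without blocking itself; counter returns to 0 at the end.
        System.out.println(new ReentrantMonitor().outer()); // prints 2
    }
}
```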
The reentrant lock ReentrantLock in java.util.concurrent
  • Interruptible waiting: if the lock holder does not release the lock for a long time, a waiting thread can give up waiting and do other work instead
  • Optional fair locking: among multiple waiting threads, the lock is acquired first-come, first-served
  • A lock can be bound to multiple conditions
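The three ReentrantLock extras listed above map directly onto its API. A minimal sketch; the class name `ReentrantLockDemo` and the condition names are my own.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the three ReentrantLock extras: interruptible/timed waiting,
// an optional fair mode, and multiple Condition objects per lock.
public class ReentrantLockDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock(true); // true = fair: FIFO hand-off
        Condition notEmpty = lock.newCondition();     // one lock can serve
        Condition notFull  = lock.newCondition();     // many wait conditions

        // Timed acquisition: give up instead of blocking forever.
        if (lock.tryLock(1, TimeUnit.SECONDS)) {
            try {
                System.out.println("fair = " + lock.isFair()); // prints fair = true
                notEmpty.signalAll(); // must hold the lock to signal a condition
            } finally {
                lock.unlock();        // always release in finally
            }
        }
    }
}
```

`lockInterruptibly()` is the interruptible-wait variant; a plain `synchronized` block offers none of these three options.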

(2) Non-blocking synchronization

First perform the operation; if no other thread is using the shared data, the operation succeeds. If there is contention, take a compensating measure (typically, retry).

Features
  • Based on an optimistic concurrency strategy with conflict detection
  • The operation and the conflict detection must together be atomic (multiple steps completed as one instruction, guaranteed by hardware)
    • Test-and-Set
    • Fetch-and-Increment
    • Swap
    • Compare-and-Swap (CAS)
    • Load-Linked
    • Store-Conditional

Methods

Use the atomic classes in the java.util.concurrent.atomic package, or obtain Unsafe via reflection (Unsafe.getUnsafe()) and use it directly
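The CAS primitive named above is exposed directly by the atomic classes. A minimal sketch; the class name `CasDemo` is my own.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: the java.util.concurrent.atomic classes expose CAS directly;
// a failed compareAndSet is the point where optimistic code would retry.
public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger n = new AtomicInteger(5);
        System.out.println(n.compareAndSet(5, 6)); // true: expected 5, swapped to 6
        System.out.println(n.compareAndSet(5, 7)); // false: value is now 6, no change
        System.out.println(n.incrementAndGet());   // 7: atomic increment (CAS loop inside)
    }
}
```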

(3) No-synchronization schemes

If a method does not involve shared data, it needs no synchronization measures at all

  • Reentrant code: does not rely on data stored on the heap or on common system resources, does not call non-reentrant methods, and receives all the state it uses through its parameters
  • Thread-local storage: each thread keeps its own copy of the data instead of sharing it
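Thread-local storage is what the ThreadLocal class provides: each thread sees only its own copy, so no synchronization is needed. A minimal sketch; the class name `ThreadLocalDemo` and field `local` are my own.

```java
// Sketch: ThreadLocal gives each thread its own copy of a variable — the
// "no shared data" strategy above — so no synchronization is required.
public class ThreadLocalDemo {
    private static final ThreadLocal<Integer> local =
            ThreadLocal.withInitial(() -> 0); // each thread starts from its own 0

    public static void main(String[] args) throws InterruptedException {
        local.set(100);                       // the main thread's private copy
        Thread t = new Thread(() -> {
            local.set(local.get() + 1);       // touches only this thread's copy
            System.out.println("worker: " + local.get()); // worker: 1
        });
        t.start();
        t.join();
        System.out.println("main: " + local.get()); // main: 100 (unaffected)
    }
}
```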

3. Lock optimization

(1) Spin locks

On machines with multiple processors, threads can execute in parallel, so a thread that fails to acquire a lock is made to wait in a busy loop (spin) rather than being suspended immediately; if it still cannot obtain the lock after a bounded wait, the thread is then suspended

  • Avoids the overhead of thread switching
  • Consumes processor resources while spinning

Adaptive spinning

The spin time is determined by the spin time of previous acquisitions of the same lock and by the state of the lock's owner

  • Lock has recently been easy to acquire: spin and wait
  • Lock is hard to acquire: suspend the thread directly

(2) Lock elimination

At run time, the just-in-time compiler eliminates locks on code that requests synchronization but where escape analysis detects that no data race on shared data can actually occur.
If it is determined that none of the heap data used by a piece of code escapes to be accessed by other threads, the data can be treated as stack-local, and no lock is needed

(3) Lock coarsening

  • In general, the scope of a synchronized block should be as small as possible, synchronizing only over the actual shared data, so that the lock is released promptly
  • But if a series of operations repeatedly locks and unlocks the same object, the synchronization scope should be expanded (coarsened) to cover the whole series
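The textbook example of a coarsening candidate is consecutive StringBuffer appends, since every `append()` locks the same object. A minimal sketch of code the JIT may coarsen; whether it actually does is a JVM implementation detail, and the class name `CoarseningDemo` is my own.

```java
// Sketch: each StringBuffer.append() synchronizes on the same object; the JIT
// may coarsen the three lock/unlock pairs below into one lock spanning all
// three calls.
public class CoarseningDemo {
    public static String concat(String a, String b, String c) {
        StringBuffer sb = new StringBuffer(); // every append() locks sb
        sb.append(a);                         // lock 1 \
        sb.append(b);                         // lock 2  > candidates for one coarsened lock
        sb.append(c);                         // lock 3 /
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(concat("a", "b", "c")); // prints abc
    }
}
```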

(4) Lightweight locks

Object memory layout in the HotSpot virtual machine

Object header

  • Mark Word: stores the object's own runtime data, such as its hash code and GC generational age
  • A type pointer to the object's class metadata in the method area (plus the array length, for arrays)

Lightweight lock characteristics
  • For most locks, there is no contention during the entire synchronization period
  • If there is contention, a lightweight lock is slower than a heavyweight lock (it requires additional CAS operations)

Lightweight lock implementation

Locking:
  • If the synchronization object is not locked (flag = 01), the virtual machine first creates a space called the Lock Record in the current thread's stack frame, to store a copy of the lock object's current Mark Word (the Displaced Mark Word)
  • The virtual machine then uses a CAS operation to try to update the object's Mark Word to a pointer to the Lock Record
    • If the update succeeds, the thread owns the object's lock (flag = 00)
    • If the update fails, the virtual machine checks whether the object's Mark Word points into the current thread's stack frame
      • If it does: the thread already owns the lock (reentrancy)
      • If it does not: the lock has been taken by another thread; if multiple threads contend for it, the lightweight lock inflates to a heavyweight lock (flag = 10), the Mark Word stores a pointer to the heavyweight lock, and waiting threads block

Unlocking:
  • If the object's Mark Word still points to the thread's Lock Record, a CAS operation replaces the object's current Mark Word with the thread's saved Displaced Mark Word
    • Replacement succeeds: the synchronization is complete
    • Replacement fails: another thread has contended; release the lock and wake the suspended threads

(5) Biased locking

Eliminates the synchronization primitives themselves in the absence of contention, further improving program performance

Locking
  • When a thread acquires the lock for the first time, the object header enters biased mode (flag = 01)
  • Using a CAS operation, the thread's ID is recorded in the object's Mark Word
    • If the operation succeeds: every time the owning thread subsequently enters a synchronized block on this lock, no synchronization operation is performed at all
  • When the owning thread ends, the lock can be re-biased

Revoking the bias

When another thread attempts to acquire the lock, biased mode ends

  • If the lock object is currently locked: revert to the lightweight-locked state (flag = 00)
  • If the lock object is not locked: revert to the unlocked state (flag = 01)

Features
  • Improves the performance of programs where synchronization exists but contention does not
  • If the same lock is contended by multiple threads, biased locking is pure overhead
