Concurrent programming - synchronized

Atomicity, ordering, and visibility

Atomicity

Atomicity in database transactions: a transaction is the smallest unit of execution; its operations either all succeed or all fail.

Atomicity in concurrent programming: one or more instructions that cannot be interrupted while the CPU executes them.

Is the operation i++ atomic?

Definitely not: i++ compiles to three instructions:


getfield: load the value from main memory into a CPU register

iadd: add 1 to the value in the register

putfield: write the result from the CPU register back to main memory

How can i++ be made atomic?

Use synchronized, Lock, or the Atomic classes (CAS).
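As a minimal sketch (class and field names are illustrative), the synchronized and Atomic approaches look like this; a Lock would be used the same way as synchronized here:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Two ways to make i++ atomic.
class Counters {
    private int plain;                                    // guarded by synchronized
    private final AtomicInteger atomic = new AtomicInteger();

    // mutual exclusion: only one thread runs the three instructions at a time
    public synchronized void incSync() { plain++; }

    // CAS: incrementAndGet retries internally until its compare-and-swap succeeds
    public void incCas() { atomic.incrementAndGet(); }

    public int plainValue()  { return plain; }
    public int atomicValue() { return atomic.get(); }
}
```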


Lock works on a similar principle: before a thread may execute the three instructions of i++, it must first successfully modify the AQS state; only then can it proceed.

Both synchronized and Lock may suspend a thread that fails to acquire the lock, and suspension triggers a switch between user mode and kernel mode, which costs resources.

CAS is more efficient than synchronized and Lock because CAS never triggers thread suspension.

CAS: compare and swap

How a thread modifies data with CAS: it first reads the value from main memory; before writing, it compares the current main-memory value with the value it read. If they match, it writes the new value to main memory; if not, it abandons the modification.

The comparison and the swap together form a single atomic operation at the CPU level.


At the Java level, CAS is exposed through native methods in the Unsafe class. These methods only return true on success and false on failure; if you need a retry strategy, you must implement it yourself.
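A retry strategy is usually a simple loop around the CAS. The sketch below (illustrative names; this mirrors what AtomicInteger.incrementAndGet() does internally) shows the pattern:

```java
import java.util.concurrent.atomic.AtomicInteger;

// A CAS retry loop: keep re-reading and re-trying until the swap succeeds.
class CasIncrement {
    static int incrementAndGet(AtomicInteger value) {
        while (true) {
            int current = value.get();      // read the current value
            int next = current + 1;         // compute the new value
            // compareAndSet returns false if another thread changed it first
            if (value.compareAndSet(current, next)) {
                return next;
            }
            // CAS failed: loop and retry with a fresh read
        }
    }
}
```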

Problems with CAS:

  • CAS guarantees atomicity only for modifications to a single variable.
  • CAS has the ABA problem:
    • Thread A wants to change the value from 1 to 2; it reads 1 and is then suspended before its CAS.
    • Thread B changes the value from 1 to 2 and completes.
    • Thread C changes the value from 2 back to 1 and completes.
    • Thread A resumes; its CAS finds the main-memory value is still 1, so the modification succeeds even though the value changed in between.
    • Solution: add a version number (in Java, AtomicStampedReference).
  • If CAS fails too many times, the thread keeps spinning without making progress while the CPU keeps scheduling it, wasting CPU time.
    • How synchronized handles this: after spinning with CAS a certain number of times without success, the thread is suspended.
    • How LongAdder handles this: when CAS on the base value fails, the increment is written to a separate Cell and all cells are summed later.
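The version-number fix for the ABA problem can be sketched with AtomicStampedReference, which pairs the value with a stamp so an A → B → A change is still detected (the scenario below replays the four steps above in one thread):

```java
import java.util.concurrent.atomic.AtomicStampedReference;

// ABA demo: the value returns to 1, but the stamp has moved on,
// so thread A's stale CAS fails.
class AbaDemo {
    static boolean staleCasSucceeds() {
        AtomicStampedReference<Integer> ref = new AtomicStampedReference<>(1, 0);
        int stampSeenByA = ref.getStamp();   // thread A records stamp 0, then "hangs"

        // threads B and C: 1 -> 2 -> 1, bumping the stamp each time
        ref.compareAndSet(1, 2, ref.getStamp(), ref.getStamp() + 1);
        ref.compareAndSet(2, 1, ref.getStamp(), ref.getStamp() + 1);

        // thread A resumes: the value is 1 again, but its stamp (0) is stale
        return ref.compareAndSet(1, 2, stampSeenByA, stampSeenByA + 1);
    }
}
```

With a plain AtomicInteger the final CAS would have succeeded; here it returns false.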

Ordering

When the CPU schedules instructions for execution, it may reorder them to improve efficiency, as long as the single-threaded result is unaffected. Across threads, however, reordering can cause data inconsistency.

What if you don't want the CPU to reorder an assignment?

Declare the field volatile; instructions that operate on that field will then not be reordered across it.
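The classic place this matters is double-checked locking, sketched below: without volatile, the steps inside "new Singleton()" (allocate → initialize → assign reference) could be reordered, letting another thread observe a non-null but not-yet-initialized instance.

```java
// Double-checked locking: volatile on the field forbids the harmful reordering.
class Singleton {
    private static volatile Singleton instance;

    private Singleton() {}

    static Singleton getInstance() {
        if (instance == null) {                 // first check, without the lock
            synchronized (Singleton.class) {
                if (instance == null) {         // second check, under the lock
                    instance = new Singleton(); // safe publication thanks to volatile
                }
            }
        }
        return instance;
    }
}
```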

Visibility

Before executing an instruction, the CPU loads data from main memory into its registers, and after executing it, it writes the result back. The synchronization of CPU cache data to main memory follows the MESI cache-coherence protocol; simply put, cache data is not flushed to main memory after every single operation, which can cause different threads to see different values for the same data.

Therefore, synchronized and volatile are usually used to solve this problem:

  • volatile: after every write, the data is immediately synchronized to main memory.
  • synchronized: only one thread at a time operates on the data.
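A minimal visibility sketch (illustrative names; the bounded loop keeps it runnable single-threaded): without volatile on the flag, a worker thread might never observe another thread's write to it and loop forever.

```java
// A stop flag shared between threads: volatile guarantees the writer's
// update becomes visible to the reader.
class StopFlag {
    private volatile boolean running = true;

    void stop() { running = false; }          // write is published to main memory

    // Counts iterations while running, up to a bound so the demo terminates.
    int countWhileRunning(int limit) {
        int i = 0;
        while (running && i < limit) i++;     // each volatile read sees the latest value
        return i;
    }
}
```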

Using synchronized

Usage: mark a method as synchronized when declaring it, or use a synchronized code block.
Lock types:

  • Class lock: locks on the Class object of the current class
  • Object lock: locks on the this object (or another specified object)

synchronized is a mutual-exclusion lock: a thread that acquires it acquires the lock on the object that synchronized is bound to.
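The forms above can be sketched side by side (illustrative class; which object serves as the lock is the only difference):

```java
// The common forms of synchronized and the object each one locks on.
class SyncForms {
    private int n;

    // object lock: the lock is "this"
    public synchronized void incInstance() { n++; }

    // object lock via a code block, also "this"
    public void incBlock() {
        synchronized (this) { n++; }
    }

    // class lock: the lock is SyncForms.class, shared by all instances
    public static synchronized void staticWork() { }

    public int get() { return n; }
}
```

Two threads calling incInstance() on different instances do not block each other; two threads calling staticWork() always do.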

How does synchronized implement a mutex based on an object? First, understand how an object is stored in memory: an object header (mark word plus class pointer), instance data, and alignment padding.


View the storage of objects in Java:

Import dependencies:

<dependency>
    <groupId>org.openjdk.jol</groupId>
    <artifactId>jol-core</artifactId>
    <version>0.9</version>
</dependency>

View object information
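With the jol-core dependency above on the classpath, the object's layout can be printed like this:

```java
import org.openjdk.jol.info.ClassLayout;

// Prints the memory layout of an object: mark word, class pointer,
// instance fields, and padding.
public class JolDemo {
    public static void main(String[] args) {
        Object o = new Object();
        System.out.println(ClassLayout.parseInstance(o).toPrintable());
    }
}
```

The mark word shown in the output is where the lock state discussed below is stored.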


synchronized lock upgrade

Before JDK 1.6, synchronized was always a heavyweight lock: as soon as a thread failed to acquire the lock resource, it was suspended.

As a result, synchronized was very inefficient before JDK 1.6. In addition, Doug Lea's ReentrantLock was much faster than synchronized, which pushed the JDK team to optimize synchronized in JDK 1.6.

Lock upgrade:

  • Lock-free state, anonymously biased state: no thread holds the lock.
  • Biased lock state: there is no competition; only one thread is acquiring the lock resource.
    When a thread competes for the lock and finds that no thread currently holds it and the lock is biasable, it uses CAS to set the thread ID in the mark word to itself and acquires the lock. The next time the same thread acquires the lock, it only needs to check that the lock is still biased and that the thread ID is its own, and it takes the lock directly.
  • Lightweight lock: when a biased lock is contended, it is upgraded to a lightweight lock.
    In the lightweight state, threads try to acquire the lock with CAS. The number of CAS attempts is governed by adaptive spinning: the JVM decides how many spins to allow based on whether the previous acquisition of this lock succeeded.
  • Heavyweight lock: if the lightweight lock's CAS keeps failing for a while, the lock is upgraded to a heavyweight lock (CAS is in fact still attempted in the heavyweight state). Under a heavyweight lock, a thread that fails to get the lock is suspended.

Biased locking is enabled with a delay. Once biased locking is on, there is no lock-free state by default, only anonymous bias, because locks are never downgraded from heavyweight back to lightweight or biased.

Upgrading a biased lock to a lightweight lock involves revoking the bias, which can only happen at a safepoint, during a stop-the-world pause, so revoking many biased locks concurrently is expensive. This is why biased locking is enabled with a delay: when a program starts, the ClassLoader loads .class files, which involves many synchronized operations, and revoking biases during startup would slow it down, so biased locking is off by default while the program starts up.

Compiler (JIT) optimizations produce the following effects:

  • Lock elimination: when the JIT finds that the code in a synchronized block operates on no shared data (the locked object cannot escape the thread), it removes the synchronization automatically.

  • Lock coarsening: when a loop frequently acquires and releases the same lock, the compiler may move the synchronized region outside the loop.
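Both effects can be sketched as follows (a sketch only; whether and when the JIT applies either optimization depends on the JVM, and the behavior of the code is unchanged either way):

```java
// Code shapes that invite lock elimination and lock coarsening.
class LockOptimizations {
    // Lock elimination candidate: sb never escapes this method, so the JIT
    // can prove no other thread can lock it and may strip StringBuffer's
    // internal synchronization.
    static String concat(String a, String b) {
        StringBuffer sb = new StringBuffer();   // StringBuffer methods are synchronized
        sb.append(a);
        sb.append(b);
        return sb.toString();
    }

    // Lock coarsening candidate: the repeated lock/unlock inside the loop
    // may be merged into one lock covering the whole loop.
    static int sum(int[] data, Object lock) {
        int total = 0;
        for (int x : data) {
            synchronized (lock) { total += x; }
        }
        return total;
    }
}
```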

synchronized-ObjectMonitor

ObjectMonitor generally comes into play only with heavyweight locks: once the lock is heavyweight, the pointer in the mark word points to an ObjectMonitor object.

  ObjectMonitor() {
    _header       = NULL;
    _count        = 0;     // number of threads contending for the lock
    _waiters      = 0,     // number of threads that have called wait()
    _recursions   = 0;     // reentrancy count
    _object       = NULL; 
    _owner        = NULL;  // the thread that holds the lock
    _WaitSet      = NULL;  // threads that called wait() (doubly linked list)
    _WaitSetLock  = 0 ;
    _Responsible  = NULL ;
    _succ         = NULL ;  // presumed heir: the woken thread that may acquire the lock after release
    _cxq          = NULL ;  // where newly suspended threads are placed (singly linked list)
    FreeNext      = NULL ;
    _EntryList    = NULL ;  // under certain conditions, waiting threads in _cxq are moved here (doubly linked list)
    _SpinFreq     = 0 ;
    _SpinClock    = 0 ;
    OwnerIsThread = 0 ;
    _previous_owner_tid = 0;
  }


Source: blog.csdn.net/qq_28314431/article/details/133099433