Java locks: biased locks, lightweight locks, spin locks, heavyweight locks

I once ran a benchmark many times over, and the results were always the same (a rough harness for reproducing it is sketched after this list):
1. Under a single thread, synchronized is the most efficient (at the time I had expected it to be the worst);
2. AtomicInteger is the least stable: its performance differs across concurrency levels. Under brief, low contention it is more efficient than synchronized and sometimes even edges out LongAdder, but under high contention it falls behind synchronized, and its performance varies widely from case to case;
3. LongAdder is stable: it performs well at every concurrency level, with the best overall results. It is slightly slower than AtomicInteger under brief, low contention, and the fastest under sustained high contention (where it leaves AtomicInteger far behind);
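For reference, a rough harness along these lines reproduces the comparison (a sketch with names of my own choosing, not the original benchmark; for trustworthy numbers use a proper tool such as JMH):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.LongAdder;

// Rough timing harness for the comparison above (all names are mine).
public class CounterBench {
    static int plain;                                  // guarded by LOCK
    static final Object LOCK = new Object();
    static final AtomicInteger atomic = new AtomicInteger();
    static final LongAdder adder = new LongAdder();

    static long timeMs(int threads, int perThread, Runnable op) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        long start = System.nanoTime();
        for (int t = 0; t < threads; t++) {
            pool.execute(() -> { for (int i = 0; i < perThread; i++) op.run(); });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException {
        int threads = 8, n = 1_000_000;                // vary these to see the crossover
        System.out.println("synchronized:  " + timeMs(threads, n, () -> { synchronized (LOCK) { plain++; } }) + " ms");
        System.out.println("AtomicInteger: " + timeMs(threads, n, atomic::incrementAndGet) + " ms");
        System.out.println("LongAdder:     " + timeMs(threads, n, adder::increment) + " ms");
    }
}
```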

Understand the basics of locks

If you want to thoroughly understand the ins and outs of Java locks, you first need the following background.

Basics 1: types of locks

At the macro level, locks are classified into pessimistic locks and optimistic locks.

Optimistic locking

Optimistic locking embodies an optimistic assumption: reads outnumber writes, so the chance of hitting a concurrent write is low. Instead of locking, it reads the data along with its current version number, and when writing back it checks whether anyone else has updated the data in the meantime (compare the version with the one read earlier; if they match, perform the update). If the check fails, it repeats the read-compare-write cycle.

Optimistic locks in Java are mostly implemented with CAS (compare-and-swap) operations. CAS is an atomic update operation: it compares the current value with an expected value and updates it only if they match; otherwise it fails.
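A minimal sketch of that read-compare-write retry loop, using AtomicInteger (class and method names are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

// A minimal sketch of the optimistic read-compare-write loop described above.
public class OptimisticCounter {
    private final AtomicInteger value = new AtomicInteger();

    public int increment() {
        for (;;) {
            int current = value.get();                  // read the current value
            int next = current + 1;                     // compute the new value
            if (value.compareAndSet(current, next)) {   // CAS: update only if unchanged
                return next;                            // success
            }                                           // another thread raced us: retry
        }
    }
}
```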

Pessimistic locking

Pessimistic locking embodies the opposite assumption: writes are frequent and concurrent writes are likely. Every access assumes someone else may modify the data, so every read and write takes a lock, and other threads that want to read or write the data block until they acquire the lock. The pessimistic lock in Java is synchronized. Locks built on the AQS framework first try to acquire the lock optimistically with CAS and fall back to pessimistic (blocking) behavior if that fails, e.g. ReentrantLock.
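A minimal sketch of that pattern with ReentrantLock (the class name is illustrative); lock() internally attempts a CAS fast path and parks the thread only if that fails:

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the AQS pattern described above.
public class PessimisticUpdate {
    private final ReentrantLock lock = new ReentrantLock();
    private int shared;

    public void update() {
        lock.lock();        // CAS attempt first; blocks (parks) the thread on failure
        try {
            shared++;       // exclusive access while the lock is held
        } finally {
            lock.unlock();  // always release in finally
        }
    }
}
```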

Basics 2: the cost of Java thread blocking

Java threads are mapped to native operating-system threads. Blocking or waking a thread requires the operating system to intervene, which means switching between user mode and kernel mode. This switch consumes a lot of system resources: user mode and kernel mode each have their own memory space and dedicated registers, switching from user mode to kernel mode requires passing many variables and parameters to the kernel, and the kernel must also save register values and variables from user mode so that execution can switch back to user mode and continue when the kernel-mode call finishes.

  1. If thread state switching is a high-frequency operation, it consumes a great deal of CPU time;
  2. For simple code blocks that need synchronization, if the suspend-and-wake cycle of acquiring the lock takes longer than the user code itself, this synchronization strategy is clearly very bad.

synchronized puts threads that lose the lock competition into the blocked state, making it a heavyweight synchronization operation in the Java language; this is called the heavyweight lock. To alleviate the performance problems above, the JVM introduced lightweight locks and biased locks (Java 6), with spinning enabled by default; all of these are optimistic locks.

Understanding the cost of Java thread switching is one of the foundations for weighing the advantages and disadvantages of the various locks in Java.

Basics 3: the mark word

Before introducing Java's locks, let's first look at what the mark word is. The mark word is part of the data structure of every Java object, and the lock types discussed below are closely tied to it;

The mark word is 32 bits long on a 32-bit virtual machine and 64 bits on a 64-bit virtual machine (without compressed pointers enabled). Its last 2 bits are the lock flag, which marks the current state of the object; the object's state determines what the rest of the mark word stores, as the following table shows:

State                       | Lock flag | Stored content
Unlocked                    | 01        | object hash code, object generational age
Lightweight lock            | 00        | pointer to the lock record
Inflated (heavyweight lock) | 10        | pointer to the heavyweight lock (monitor)
GC mark                     | 11        | empty (no information recorded)
Biasable                    | 01        | biasing thread ID, bias timestamp (epoch), object generational age

The mark word layout of a 32-bit virtual machine in each state is shown in the figure below:

[figure: 32-bit mark word layout in each lock state]

Understanding the mark-word layout will help you follow the locking and unlocking processes of Java's locks later;

Summary

The four kinds of locks in Java were mentioned above: heavyweight locks, spin locks, lightweight locks, and biased locks.
Each lock has different characteristics and performs well only in its own scenario; no lock in Java is efficient in every situation. The reason so many kinds were introduced is to cope with different situations;

As mentioned earlier, the heavyweight lock is a pessimistic lock, while spin locks, lightweight locks, and biased locks are optimistic locks, so you can now roughly see their areas of application. How each of these locks is used depends on the detailed analysis of their characteristics below;

Locks in Java

Spin locks

The principle of the spin lock is very simple. If the thread holding the lock can release it within a very short time, the threads waiting to compete for the lock do not need to switch between kernel mode and user mode to enter the blocked state. They only need to wait (spin), and can acquire the lock immediately after the holder releases it, avoiding the cost of switching between user threads and the kernel.

But spinning consumes CPU; to put it bluntly, the CPU is doing useless work. A thread cannot occupy the CPU spinning forever if the lock cannot be obtained, so a maximum spin-wait time must be set.

If the thread holding the lock runs past the maximum spin-wait time without releasing the lock, the contending threads cannot acquire it within that window; they then stop spinning and enter the blocked state.
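The JVM's internal spinning is not exposed as an API, but the principle can be sketched as a user-level spin lock (an illustration of the idea, not the JVM's implementation):

```java
import java.util.concurrent.atomic.AtomicReference;

// A minimal user-level spin lock: threads busy-wait with CAS instead of blocking.
public class SimpleSpinLock {
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public void lock() {
        Thread current = Thread.currentThread();
        // Busy-wait with CAS instead of blocking: no user/kernel mode switch.
        while (!owner.compareAndSet(null, current)) {
            Thread.onSpinWait();   // JDK 9+ CPU hint; an empty loop body also works
        }
    }

    public void unlock() {
        // Only the owner may release; the CAS guards against misuse.
        owner.compareAndSet(Thread.currentThread(), null);
    }
}
```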

Advantages and disadvantages of spin locks

Spin locks reduce thread blocking as much as possible. For code blocks where lock contention is mild and lock hold times are very short, this greatly improves performance, because the cost of spinning is lower than the cost of blocking and suspending the thread and then waking it, which causes two context switches.

However, if lock contention is fierce, or the holder needs to occupy the lock for a long time to execute its synchronized block, a spin lock is not suitable: the spinning threads burn CPU doing useless work before acquiring the lock, and with a large number of threads contending for one lock, acquisition can take so long that the cost of spinning exceeds the cost of blocking and suspending, while other threads that need the CPU cannot get it, wasting CPU. In this situation the spin lock should be turned off;

Spin time threshold

The point of spinning is to hold on to the CPU so the thread can proceed immediately once the lock is acquired. But how long should a thread spin? If the spin time is too long, a large number of threads sit in the spinning state occupying CPU resources, which hurts overall system performance, so the choice of spin duration is especially important!

For the spin duration, JDK 1.5 used a fixed limit; JDK 1.6 introduced the adaptive spin lock. With adaptive spinning, the spin time is no longer fixed but is determined by the previous spin times on the same lock and by the state of the lock's owner; roughly, the duration of one thread context switch is considered the best spin time. The JVM also optimizes further based on the current CPU load:

  1. Spin continuously if the load average is lower than the number of CPUs;

  2. If more than (CPUs / 2) threads are already spinning, later arrivals block directly;

  3. If a spinning thread notices that the Owner has changed, it extends the spin time (spin count) or enters the blocked state;

  4. Stop spinning if the CPU is in power-save mode;

  5. The worst-case spin time is the CPU's store latency (the time between CPU A storing a value and CPU B learning of it directly);

  6. Thread-priority differences are appropriately set aside while spinning.

Enabling spin locks

In JDK 1.6, -XX:+UseSpinning enables spinning;
-XX:PreBlockSpin=10 sets the spin count;
since JDK 1.7, these parameters have been removed and spinning is controlled by the JVM itself;

The heavyweight lock: synchronized

The role of synchronized

Before JDK 1.5, synchronization was ensured with the synchronized keyword; its role should be familiar to everyone.

It can use any non-null object as a lock.

  1. When applied to an instance method, the lock is the object instance (this);
  2. When applied to a static method, the lock is the Class instance. Because the class's metadata lives in the permanent generation (metaspace in JDK 1.8), which is globally shared, a static-method lock is effectively a global lock for the class and blocks every thread calling that method;
  3. When synchronized is applied to an object instance (a synchronized block), it locks all code blocks that use that object as their lock. The three forms are sketched below.
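A sketch of the three forms and the lock each one takes (the class name is illustrative):

```java
// The three forms of synchronized and the lock object each one uses.
public class SyncForms {
    private final Object guard = new Object();

    public synchronized void instanceMethod() {
        // lock: the instance itself (this)
    }

    public static synchronized void staticMethod() {
        // lock: SyncForms.class, one lock shared by every caller of the class
    }

    public void block() {
        synchronized (guard) {
            // lock: the specific object "guard"; any non-null object will do
        }
    }
}
```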

Implementation of synchronized

The implementation is shown in the figure below:

[figure: the object monitor and its internal queues]

The monitor has multiple queues; when multiple threads access the same object monitor, the monitor stores them in different containers.

  1. Contention List: the contention queue; all threads requesting the lock are first placed here;

  2. Entry List: threads from the Contention List that qualify as candidates are moved here;

  3. Wait Set: threads blocked by a call to wait() are placed here;

  4. OnDeck: at any moment, at most one thread is actively competing for the lock; that thread is called OnDeck;

  5. Owner: the thread currently holding the lock;

  6. !Owner: the thread that has just released the lock.

Each time, the JVM picks one thread from the tail of the queue as the lock-contention candidate (OnDeck), but under concurrency the ContentionList is hit by a large number of concurrent CAS operations. To reduce contention on the tail element, the JVM moves some threads into the EntryList as candidate contenders. When the Owner thread unlocks, it migrates some ContentionList threads into the EntryList and designates one EntryList thread as the OnDeck thread (usually the one that arrived first). The Owner does not hand the lock directly to the OnDeck thread; it hands over the right to compete for the lock, and OnDeck must compete for it again. This sacrifices some fairness but greatly improves throughput; in the JVM, this choice is called "competitive switching".

After the OnDeck thread acquires the lock, it becomes the Owner; threads that fail to get the lock remain in the EntryList. If the Owner is blocked by a wait() call, it moves to the WaitSet queue until it is woken at some point by notify() or notifyAll(), after which it re-enters the EntryList.

Threads in the ContentionList, EntryList, and WaitSet are all in the blocked state; the blocking is carried out by the operating system (on Linux, via pthread_mutex_lock).

synchronized is an unfair lock. Before a thread enters the ContentionList, the waiting thread first tries to acquire the lock by spinning and joins the ContentionList only if that fails, which is clearly unfair to the threads already queued. A second unfairness is that a spinning thread may directly preempt the lock resource from the OnDeck thread.

Biased locks

Biased locking is a multithreading optimization introduced in Java 6.
As the name implies, a biased lock is biased toward the first thread that acquires it. If, while the program runs, the lock is only ever accessed by a single thread and there is no multi-thread contention, the thread does not need to trigger synchronization; in that case, a biased lock is applied for the thread.
If another thread preempts the lock while the program runs, the thread holding the biased lock is suspended, the JVM revokes the bias on it, and the lock reverts to a standard lightweight lock.

It further improves program performance by eliminating synchronization primitives when there is no resource contention.

Implementation of biased locking

Biased-lock acquisition process:
  1. Check the mark word: if the biased flag is 1 and the lock flag is 01, the object is confirmed to be in the biasable state.

  2. If it is biasable, test whether the thread ID in the mark word points to the current thread; if so, go to step 5, otherwise go to step 3.

  3. If the thread ID does not point to the current thread, compete for the lock with a CAS operation. If the CAS succeeds, set the thread ID in the mark word to the current thread's ID, then execute step 5; if it fails, execute step 4.

  4. A failed CAS indicates contention. When the global safepoint is reached, the thread that holds the biased lock is suspended, the biased lock is upgraded to a lightweight lock, and the thread blocked at the safepoint then continues executing the synchronized code. (Revoking the biased lock causes a stop-the-world pause.)

  5. Execute the synchronized code.

Note: reaching the safepoint in step 4 causes a stop-the-world pause, though a very short one.

Biased lock release:

The revocation of the biased lock was mentioned in step 4 above. A biased lock is only released when another thread tries to compete for it; the holding thread never releases the bias on its own. Revoking the bias requires waiting for the global safepoint (a point in time when no bytecode is executing): the JVM first suspends the thread that owns the biased lock, checks whether the lock object is still locked, revokes the bias, and restores the object to the unlocked state (flag "01") or the lightweight-locked state (flag "00").

Applicable scenarios for biased locks

A biased lock suits the case where only one thread ever executes the synchronized block: it finishes and releases the lock before any other thread executes the block, i.e. there is no lock contention. Once there is contention, the biased lock is upgraded to a lightweight lock (and possibly further to a heavyweight lock), and upgrading requires revoking the bias, which triggers a stop-the-world operation.
When there is lock contention, biased locking adds a lot of extra work; in particular, revoking the bias forces entry into a safepoint, the safepoint causes an STW pause, and performance degrades. In that case biased locking should be disabled;

Viewing pauses: the safepoint log

To see safepoint pauses, enable the safepoint log. Setting the JVM parameter -XX:+PrintGCApplicationStoppedTime prints how long the system was stopped; adding the two parameters -XX:+PrintSafepointStatistics -XX:PrintSafepointStatisticsCount=1 prints the details. There you can see pauses caused by biased locking; each one is very short, but under serious contention the number of pauses can be very large;

Note: the safepoint log cannot be left on all the time:
1. The safepoint log goes to stdout by default, which both pollutes stdout and risks blocking: the file stdout is redirected to may stall on I/O if it is not on /dev/shm.
2. For very short pauses, such as revoking a biased lock, the cost of printing exceeds the pause itself.
3. The safepoint log is printed inside the safepoint, which lengthens the safepoint pause.

So the safepoint log should be enabled only while troubleshooting.
If you want to enable it on a production system, add the following four parameters:
-XX:+UnlockDiagnosticVMOptions -XX:-DisplayVMOutput -XX:+LogVMOutput -XX:LogFile=/dev/shm/vm.log
These unlock the diagnostic options (this only makes more flags available; it does not activate any flag by itself), turn off VM log output to stdout, and write the log to a separate file in the /dev/shm directory (a memory-backed file system).

[figure: sample safepoint log output]

The log has three parts:
The first part is the timestamp and the type of the VM operation.
The second part is a thread overview, enclosed in square brackets:
  total: the total number of threads stopped in the safepoint
  initially_running: the number of threads that were still running when the safepoint began
  wait_to_block: the number of threads the VM operation had to wait for to pause before it could start

The third part lists the phases of reaching the safepoint and the time spent executing the operation; the most important entry is vmop:

  • spin: the time spent waiting for threads to respond to the safepoint call;
  • block: the time spent suspending all threads;
  • sync: spin + block, i.e. the time from the start until the safepoint is entered; useful for judging how long entering the safepoint takes;
  • cleanup: the time spent on cleanup;
  • vmop: the time spent actually executing the VM operation.

You can see that the many short safepoints are all RevokeBias; highly concurrent applications should disable biased locking.

Enabling and disabling biased locking in the JVM

  • Enable biased locking: -XX:+UseBiasedLocking -XX:BiasedLockingStartupDelay=0
  • Disable biased locking: -XX:-UseBiasedLocking

Lightweight locks

A lightweight lock is the result of upgrading a biased lock: the biased lock operates while a single thread enters the synchronized block, and when a second thread joins the lock contention, the biased lock is upgraded to a lightweight lock. The locking process of the lightweight lock:

  1. When the code enters the synchronized block, if the synchronization object is in the lock-free state (lock flag "01", biased flag "0"), the virtual machine first creates a space called the Lock Record in the current thread's stack frame, used to store a copy of the lock object's current mark word, officially called the Displaced Mark Word. The thread stack and object header at this point are shown in the figure:
      [figure: thread stack and object header before the CAS]

  2. Copy the mark word of the object header into the lock record;

  3. After the copy succeeds, the virtual machine uses a CAS operation to try to update the object's mark word to a pointer to the Lock Record, and points the owner pointer in the lock record at the object's mark word. If the update succeeds, go to step 4; otherwise go to step 5.

  4. If the update succeeds, the thread owns the lock of this object, and the lock flag of the object's mark word is set to "00", meaning the object is in the lightweight-locked state. The thread stack and object header at this point are shown in the figure:
      [figure: thread stack and object header after a successful CAS]

  5. If the update fails, the virtual machine first checks whether the object's mark word points to the current thread's stack frame. If it does, the current thread already owns this object's lock and can enter the synchronized block directly. Otherwise, multiple threads are competing for the lock, and the lightweight lock inflates into a heavyweight lock: the lock flag becomes "10", the mark word stores a pointer to the heavyweight lock (mutex), and the threads waiting for the lock are blocked. Meanwhile, the current thread tries to acquire the lock by spinning, i.e. looping over the acquisition attempt so as not to block.

Lightweight lock release

From the releasing thread's perspective: the switch from a lightweight lock to a heavyweight lock happens while the lightweight lock is being released. When the thread acquired the lock, it copied the lock object's mark word; while it held the lock, another thread trying to acquire the lock may have modified the mark word. If the copy and the current mark word are inconsistent, the release switches to the heavyweight path.

That is, because the mark word has been modified to the heavyweight state, the Displaced Mark Word no longer matches the current mark word.

The remedy is to compare the object's mark word before entering the mutex, to confirm whether the mark word is still held by another thread. If by then the other thread has already released it, the operation can complete with a CAS without entering the mutex at all; that is the point of the check.

From the acquiring thread's perspective: if a thread tries to acquire the lock while the lightweight lock is occupied by another thread, it modifies the mark word to the heavyweight state, indicating that the release should take the heavyweight path.

One more note: a thread waiting for a lightweight lock does not block; it keeps spinning for the lock, modifying the mark word as described above.

This is the spin lock: a thread that fails to acquire the lock is not suspended; instead it runs an empty loop, i.e. it spins. If after several spins the lock still has not been acquired, the thread is suspended; if the lock is acquired, it executes the code.

Summary

[figure: synchronized lock-upgrade flow]

The execution process of synchronized:
1. Check whether the current thread's ID is in the mark word. If it is, the current thread holds the biased lock.
2. If not, use CAS to install the current thread's ID into the mark word. If that succeeds, the current thread acquires the biased lock, and the biased flag is set to 1.
3. If it fails, there is contention: the bias is revoked and the lock is upgraded to a lightweight lock.
4. The current thread uses CAS to replace the object header's mark word with a pointer to its lock record. If that succeeds, the current thread acquires the lock.
5. If it fails, other threads are competing for the lock, and the current thread tries to acquire it by spinning.
6. If the spin succeeds, the lock remains in the lightweight state.
7. If the spin fails, the lock is upgraded to a heavyweight lock.

The locks above are all implemented inside the JVM. When we execute a synchronized block, the JVM decides how to perform the synchronization according to which locks are enabled and how contended the current lock is;

With all locks enabled, a thread entering the critical section first acquires the biased lock. If a biased lock already exists for another thread, it tries to acquire the lightweight lock, spinning if spin locks are enabled; if spinning fails to obtain the lock, the heavyweight lock is used, and threads that fail to acquire the lock are blocked and suspended until the lock holder finishes the synchronized block and wakes them;

The biased lock applies when there is no lock contention: the current thread finishes the synchronized block before any other thread executes it. As soon as a second thread contends, the biased lock is upgraded to a lightweight lock; if the lightweight lock spins up to the threshold without acquiring the lock, it is upgraded to a heavyweight lock;

Biased locking should be disabled if thread contention is high.

Lock optimization

The locks described above are implemented by the JVM and are not directly controllable from our code, but we can borrow their ideas to optimize our own thread synchronization;

Reduce lock time

Code that does not need to run under synchronization should not be placed inside the synchronized block if it can run outside it, so that the lock is released as soon as possible; a sketch follows.
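A small example of the idea (names are mine): the slow preparation runs outside the lock, so the lock is held only for the shared mutation:

```java
// Sketch: keep slow work outside the synchronized block.
public class ShortCriticalSection {
    private final StringBuilder log = new StringBuilder();

    public void append(String raw) {
        String line = raw.trim() + System.lineSeparator(); // no shared state touched
        synchronized (log) {
            log.append(line);                              // only this needs the lock
        }
    }
}
```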

Reduce lock granularity

The idea is to split one physical lock into several logical locks to increase parallelism and thereby reduce lock contention; it trades space for time;

Many data structures in Java use this approach to improve the efficiency of concurrent operations:

ConcurrentHashMap

Before JDK 1.8, ConcurrentHashMap in Java used an array of Segments:

Segment<K,V>[] segments

Segment extends ReentrantLock, so each segment is a reentrant lock, and each segment holds a HashEntry<K,V> array for its data. A put operation first determines which segment the data belongs to and locks only that segment while performing the put; the other segments stay unlocked. So as many threads as there are segments can store data simultaneously, which increases concurrency.
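A toy striped counter in the same spirit (my own sketch, not ConcurrentHashMap's code): the key selects one of N locks, so threads hitting different stripes never contend with each other:

```java
import java.util.concurrent.locks.ReentrantLock;

// Lock striping: split one logical lock into STRIPES independent locks.
public class StripedCounter {
    private static final int STRIPES = 16;
    private final ReentrantLock[] locks = new ReentrantLock[STRIPES];
    private final long[] counts = new long[STRIPES];

    public StripedCounter() {
        for (int i = 0; i < STRIPES; i++) locks[i] = new ReentrantLock();
    }

    public void increment(Object key) {
        int s = (key.hashCode() & 0x7fffffff) % STRIPES; // pick the stripe
        locks[s].lock();                                 // lock only that stripe
        try {
            counts[s]++;
        } finally {
            locks[s].unlock();
        }
    }
}
```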

LongAdder

LongAdder's implementation follows a similar idea to ConcurrentHashMap. LongAdder keeps a Cell array that grows dynamically with the current contention; each Cell object holds a long value.
Initially, when there is no contention (or while the cells array is being initialized), it accumulates via CAS into the member field base. Under contention, LongAdder initializes the cells array, and each thread picks one Cell in the array to update, so as many threads as there are cells can modify it at the same time; the final value is the sum of every cell's value plus base. The cell array can also expand with contention: the initial length is 2, each expansion doubles it, and it stops expanding once the length reaches or exceeds the number of CPUs. This is why LongAdder is more efficient than plain CAS and AtomicInteger: the latter two are implemented with volatile + CAS and their contention dimension is 1, while LongAdder's contention dimension is "number of cells + 1". Why + 1? Because it also has base: when there is no contention, it tries to CAS the value into base;
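A usage sketch (the class name is mine); note that sum() reads base plus all cells, so it is a snapshot rather than an atomic read:

```java
import java.util.concurrent.atomic.LongAdder;

// Writers are spread across base and the cells; sum() adds them all up.
public class HitCounter {
    private final LongAdder hits = new LongAdder();

    public void record() { hits.increment(); } // contended writers land in different cells

    public long total()  { return hits.sum(); } // base + all cells, a snapshot
}
```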

LinkedBlockingQueue

LinkedBlockingQueue embodies the same idea: elements are dequeued at the head and enqueued at the tail, with separate locks for enqueueing and dequeueing, which is more efficient than ArrayBlockingQueue, which has only one lock;

Lock splitting cannot go on indefinitely: at most, one lock can be split into as many locks as the machine has CPUs;

Lock coarsening

In most cases we want to minimize lock granularity, but lock coarsening goes the other way: it increases the granularity.
The granularity needs to be coarsened in scenarios such as this:
if the operations inside a loop need the lock, we should move the lock outside the loop; otherwise every iteration enters and exits the critical section once, which is very inefficient (see the sketch below);
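A sketch contrasting the two forms (names are mine):

```java
// Coarsening: one lock/unlock pair around the loop instead of one per iteration.
public class Coarsening {
    private final Object lock = new Object();
    private long sum;

    public void addAllFine(int[] values) {    // fine-grained: N enter/exit cycles
        for (int v : values) {
            synchronized (lock) { sum += v; }
        }
    }

    public void addAllCoarse(int[] values) {  // coarsened: a single enter/exit
        synchronized (lock) {
            for (int v : values) { sum += v; }
        }
    }
}
```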

Use read-write locks

ReentrantReadWriteLock is a read-write lock: read operations take the read lock and can run concurrently, while write operations take the write lock, so only a single thread writes at a time; a usage sketch follows.
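A usage sketch (the class name is mine):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Many readers in parallel, one exclusive writer.
public class GuardedValue {
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    private int value;

    public int get() {
        rw.readLock().lock();            // shared mode: readers do not block each other
        try { return value; } finally { rw.readLock().unlock(); }
    }

    public void set(int v) {
        rw.writeLock().lock();           // exclusive mode: blocks readers and writers
        try { value = v; } finally { rw.writeLock().unlock(); }
    }
}
```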

Read-write separation

CopyOnWriteArrayList, CopyOnWriteArraySet
A CopyOnWrite container is a copy-on-write container. Put simply, when we add an element we do not add it to the current container directly: we first copy the current container into a new one, add the element to the new container, and then point the original reference at the new container. The benefit is that the CopyOnWrite container can be read concurrently without locking, because the container being read never gains elements; CopyOnWrite is thus also a form of read-write separation, with reads and writes working on different containers.
CopyOnWrite containers suit read-heavy, write-light concurrent scenarios: reads take no lock, but writes do, otherwise multiple threads would each copy the array at the same time and each modify its own copy; a usage sketch follows.
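A usage sketch (the class name is mine):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Iteration sees a stable snapshot and takes no lock;
// each add() copies the backing array under an internal lock.
public class Listeners {
    private final List<Runnable> listeners = new CopyOnWriteArrayList<>();

    public void register(Runnable l) { listeners.add(l); }  // copy-on-write

    public void fire() {
        for (Runnable l : listeners) {  // snapshot iteration, lock-free
            l.run();
        }
    }
}
```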

Use CAS

If the operation that needs synchronization executes very quickly and thread contention is mild, using CAS is more efficient, because locking causes thread context switches. If a context switch costs more than the synchronized operation itself and contention for the resource is not fierce, a volatile + CAS approach is a very efficient choice;

Eliminate false sharing of cache lines

Besides the synchronization locks we use in code and the JVM's own built-in locks, there is a hidden "lock": the cache line, also known as a performance killer.
In a multi-core processor, each CPU core has its own exclusive L1 and L2 caches, and possibly a shared L3 cache. To improve performance, the CPU reads and writes data in units of whole cache lines; cache lines are typically 32 or 64 bytes depending on the processor, and this causes a problem.
For example, suppose several variables that need no synchronization with each other happen to sit in the same contiguous 32 or 64 bytes. When one of them is needed, they are all loaded together as one cache line into CPU 1's private cache (even though only one variable is needed, the CPU reads in whole cache lines, so the variable's neighbours come along). The variables read into the CPU cache are copies of the main-memory data, and this effectively places a hidden lock on all the variables sharing the cache line: if any variable in the line changes, then before CPU 2 can read that line, the entire modified line must first be written back to main memory (even though the other variables did not change); only then can CPU 2 read it, and the variable CPU 2 wants to change may well be a different one from the variable CPU 1 changed. So this amounts to putting a synchronization lock on several unrelated variables;
To prevent false sharing, different JDK versions use different techniques:
1. Before JDK 1.7: place a group of long fields before and after the variable that needs an exclusive cache line, relying on this meaningless padding to give the variable a cache line of its own;
2. In JDK 1.7 the JVM started optimizing away such unused fields, so padding is instead achieved by inheriting from a class that declares many long fields;
3. In JDK 1.8 the problem is solved with the sun.misc.Contended annotation; for the annotation to take effect, the following parameter must be passed to the JVM:
-XX:-RestrictContended

The sun.misc.Contended annotation adds 128 bytes of padding in front of the variable to isolate the current variable from the others; a sketch follows.
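A sketch under JDK 8 (class and field names are mine; in JDK 9+ the annotation moved to jdk.internal.vm.annotation.Contended):

```java
// Run with -XX:-RestrictContended, or the annotation is ignored
// for classes outside the JDK.
import sun.misc.Contended;

public class PaddedCounters {
    @Contended
    volatile long writerA;  // padded onto its own cache line

    @Contended
    volatile long writerB;  // updates to writerA no longer invalidate writerB's line
}
```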
