Key knowledge of thread safety and lock optimization

Thread safety

An object is thread-safe when multiple threads can access it and, regardless of how the runtime environment schedules or interleaves those threads, and without any additional synchronization or coordination on the caller's side, calling the object still produces the correct result.

Thread safety in Java

To understand thread safety more deeply, we should not treat it as a binary property that is either true or false. Instead, the shared data operated on in the Java language can be sorted by "strength" of thread safety, from strong to weak, into the following 5 categories:

1. Immutable

In the Java language (specifically, after JDK 1.5, when the Java memory model was revised), an immutable object is always thread-safe: neither the object's method implementations nor the callers of those methods need to take any thread-safety measures.

If the shared data is of a primitive type, declaring it with the final keyword is enough to guarantee immutability. If the shared data is an object, you must ensure that none of the object's behavior affects its state. There are many ways to achieve this; the simplest is to declare all the fields holding the object's state as final. Examples: the java.lang.String and java.lang.Integer classes.
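
As an illustration, here is a minimal sketch of an immutable class in the style of String and Integer (the Point class and its field names are hypothetical, not from the original text):

```java
// A minimal immutable value class: all state is held in final fields set
// once in the constructor, so instances can be shared freely across threads
// without any synchronization.
final class Point {
    private final int x;
    private final int y;

    Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    int getX() { return x; }
    int getY() { return y; }

    // "Mutating" operations return a new instance instead of changing state,
    // just as String.concat() does.
    Point translate(int dx, int dy) {
        return new Point(x + dx, y + dy);
    }
}
```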

2. Absolute thread safety

Absolute thread safety requires that the caller never needs any additional synchronization, no matter how the object is used. Most of the classes that mark themselves as thread-safe in the Java API do not meet this strict definition.

3. Relative thread safety

Relative thread safety is thread safety in the usual sense. It guarantees that each individual operation on the object is thread-safe, so no extra safeguards are needed for single calls; however, for consecutive calls in a specific order, additional synchronization on the caller's side may be required to guarantee correctness. Examples: Vector, Hashtable, and the collections wrapped by Collections.synchronizedCollection().
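
The classic illustration is a compound operation on a Vector. Each individual Vector method is synchronized, but a check-then-act sequence spanning two calls is not atomic; the caller must lock the vector itself. A sketch (the class and method names are illustrative):

```java
import java.util.Vector;

class VectorCompound {
    // Not atomic: another thread may remove the last element between the
    // isEmpty() check and the get() call, even though each call by itself
    // is synchronized inside Vector.
    static Object lastUnsafe(Vector<Object> v) {
        return v.isEmpty() ? null : v.get(v.size() - 1);
    }

    // Client-side locking on the Vector itself makes the whole sequence
    // atomic, because Vector's own methods synchronize on `this`.
    static Object lastSafe(Vector<Object> v) {
        synchronized (v) {
            return v.isEmpty() ? null : v.get(v.size() - 1);
        }
    }
}
```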

4. Thread compatible

Thread compatibility means that the object itself is not thread-safe, but it can be used safely in a concurrent environment if the caller applies synchronization. When we say a class is "not thread-safe", this is usually the case we mean.

5. Thread opposition

Thread opposition refers to code that cannot be used concurrently in a multi-threaded environment, no matter what synchronization measures the caller takes. A typical example is the suspend() and resume() methods of the Thread class, which were deprecated by the JDK for exactly this reason. Other common thread-opposed operations include System.setIn(), System.setOut(), and System.runFinalizersOnExit().

How thread safety is implemented

1. Mutually exclusive synchronization

Synchronization means that when multiple threads access shared data concurrently, we guarantee that at any given moment the data is used by only one thread (or, when semaphores are used, by a limited number of threads). Mutual exclusion is a means of achieving synchronization; critical sections, mutexes, and semaphores are the main ways to implement mutual exclusion.

synchronized keyword

After compilation, the synchronized keyword produces two bytecode instructions, monitorenter and monitorexit, emitted before and after the synchronized block. Both instructions take a reference-type operand indicating the object to lock or unlock. If an object is specified explicitly in the source code, its reference is used; if not, the lock object is the current instance or the Class object, depending on whether synchronized modifies an instance method or a static method.
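
The three forms and their implicit lock objects can be sketched as follows (the class and field names are illustrative):

```java
class SyncForms {
    private static int staticCount = 0;
    private int count = 0;

    // Instance method: the implicit lock is `this`.
    synchronized void increment() { count++; }

    // Static method: the implicit lock is the Class object, SyncForms.class.
    static synchronized void incrementStatic() { staticCount++; }

    // Explicit block: locks whatever reference is given; the compiler emits
    // monitorenter before the block and monitorexit after it.
    int get() {
        synchronized (this) {
            return count;
        }
    }
}
```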

The virtual machine specification's description of the behavior of monitorenter and monitorexit makes two points worth noting:

  • A synchronized block is reentrant for the thread that holds the lock, so a thread cannot deadlock on a lock it already owns.
  • A synchronized block blocks other threads from entering until the thread inside it has finished executing.
    • Java threads are mapped to the operating system's native threads, so blocking or waking a thread requires the operating system's help. This means switching from user mode to kernel mode, and these mode transitions consume processor time.
    • For a simple synchronized block, the time consumed by the mode transitions may exceed the time spent executing the user code itself.
    • The virtual machine therefore applies optimizations of its own, such as adding a spin-waiting phase before asking the operating system to block the thread, to avoid frequent switches into kernel mode.
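
The first point, reentrancy, can be seen in a small sketch: a synchronized method may call another synchronized method on the same object without blocking itself (the class and method names are illustrative):

```java
class Reentrancy {
    // Both methods lock `this`. The thread entering outer() already holds
    // the monitor when it calls inner(), and because synchronized is
    // reentrant it simply enters again instead of deadlocking on itself.
    synchronized int outer() {
        return inner() + 1;
    }

    synchronized int inner() {
        return 41;
    }
}
```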

ReentrantLock

Compared with synchronized, ReentrantLock adds some advanced features, mainly the following three:

  • Interruptible waiting: if the thread currently holding the lock does not release it for a long time, a waiting thread can choose to give up waiting and do something else instead
  • Fair locking: when multiple threads wait for the same lock, they must acquire it in the order in which they requested it; the lock used by synchronized is unfair, and ReentrantLock is also unfair by default, but a fair lock can be requested through its boolean constructor parameter
  • Binding multiple conditions: one ReentrantLock object can be bound to several Condition objects at the same time, whereas with synchronized the lock object's wait(), notify(), and notifyAll() methods implement a single implicit condition; if more than one condition needs to be associated with the lock, an extra lock has to be added
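
A sketch showing these features together, in the style of the bounded buffer from the ReentrantLock/Condition Javadoc (the BoundedBuffer class itself is illustrative):

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

class BoundedBuffer {
    private final ReentrantLock lock = new ReentrantLock(true); // fair lock
    private final Condition notFull  = lock.newCondition(); // two conditions
    private final Condition notEmpty = lock.newCondition(); // on one lock
    private final Object[] items = new Object[16];
    private int putIdx, takeIdx, count;

    // lockInterruptibly() lets a waiting thread give up if it is interrupted.
    void put(Object x) throws InterruptedException {
        lock.lockInterruptibly();
        try {
            while (count == items.length) notFull.await();
            items[putIdx] = x;
            putIdx = (putIdx + 1) % items.length;
            count++;
            notEmpty.signal();
        } finally {
            lock.unlock();
        }
    }

    Object take() throws InterruptedException {
        lock.lockInterruptibly();
        try {
            while (count == 0) notEmpty.await();
            Object x = items[takeIdx];
            takeIdx = (takeIdx + 1) % items.length;
            count--;
            notFull.signal();
            return x;
        } finally {
            lock.unlock();
        }
    }
}
```

With synchronized, the two wait conditions (not full, not empty) would have to share the single implicit condition of the lock object and use notifyAll().
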
2. Non-blocking synchronization

With the development of hardware instruction sets, we gained an alternative to the "pessimistic" strategy of mutual exclusion and synchronization: an optimistic concurrency strategy based on conflict detection. In plain terms, the operation is performed first; if no other thread contends for the shared data, the operation succeeds, and if contention causes a conflict, a compensating measure is taken (most commonly, retrying until success). Because many implementations of this optimistic strategy do not need to suspend threads, this kind of synchronization is called non-blocking synchronization (Non-Blocking Synchronization).

CAS atomic operations: https://blog.csdn.net/cringkong/article/details/80533917
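
A minimal sketch of the optimistic retry loop, built on java.util.concurrent.atomic.AtomicInteger's compareAndSet (the CasCounter class is an illustrative name):

```java
import java.util.concurrent.atomic.AtomicInteger;

class CasCounter {
    private final AtomicInteger value = new AtomicInteger();

    // Optimistic retry loop: read the current value, compute the new one,
    // and retry if another thread changed the value in between (i.e. the
    // CAS failed). No thread is ever suspended.
    int increment() {
        int current, next;
        do {
            current = value.get();
            next = current + 1;
        } while (!value.compareAndSet(current, next));
        return next;
    }

    int get() { return value.get(); }
}
```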

3. No synchronization scheme

Guaranteeing thread safety does not necessarily require synchronization; there is no inherent causal relationship between the two.

Reentrant code

This kind of code is also called pure code. It can be interrupted at any point during execution so that another piece of code runs instead (including a recursive call to itself), and when control returns, the original program continues without any error.

Reentrant code has some common characteristics: it does not rely on shared state stored on the heap or on common system resources, all the state it uses is passed in as parameters, and it does not call non-reentrant methods. A simple test: if a method's result is predictable, that is, it returns the same result whenever it is given the same input, then it satisfies the requirement of reentrancy.
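
A minimal sketch of reentrant (pure) code: no heap state, all inputs passed as parameters, the same input always producing the same result:

```java
class Pure {
    // Reentrant: relies on no shared heap state or system resources, takes
    // all of its state as parameters, and calls only itself. It can be
    // interrupted and re-entered at any point without error.
    static int gcd(int a, int b) {
        return b == 0 ? a : gcd(b, a % b);
    }
}
```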

Thread local storage

If we can guarantee that all the code that shares a piece of data executes in the same thread, then there is no data contention between threads and no synchronization is needed.

Applications that fit this characteristic are not rare. Most architectural patterns built around consumption queues (such as the producer-consumer pattern) try to complete the consumption of a product within a single thread. One of the most important examples is the "one request per server thread" (Thread-per-Request) approach of the classic Web interaction model; the wide adoption of this approach lets many Web server applications use thread-local storage to solve thread-safety problems.

If a variable in Java is to be used exclusively by one thread, thread-local storage can be implemented through the java.lang.ThreadLocal class.
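
A minimal sketch of ThreadLocal usage (the RequestContext class is a hypothetical example): each thread sees its own copy of the variable, so no synchronization is needed:

```java
class RequestContext {
    // Every thread gets its own independent counter; threads never observe
    // each other's value, so there is no data contention to synchronize.
    private static final ThreadLocal<Integer> requestId =
            ThreadLocal.withInitial(() -> 0);

    static int nextId() {
        int id = requestId.get() + 1;
        requestId.set(id);
        return id;
    }
}
```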


Lock optimization

1. Spin lock and adaptive lock

Spin lock

If the physical machine has more than one processor, so that two or more threads can execute in parallel, we can ask the thread requesting the lock to "wait a moment" without giving up its processor time, to see whether the thread holding the lock releases it soon. To make the thread wait, we simply let it execute a busy loop (spin); this technique is called a spin lock.

Spin locks were introduced in JDK 1.4.2 but were turned off by default; they could be enabled with the -XX:+UseSpinning parameter. Since JDK 1.6 they are turned on by default.

Note: spin waiting must have a limit. If spinning exceeds the allowed number of iterations without acquiring the lock, the thread should be suspended in the traditional way. The default spin count is 10, and it can be changed with the -XX:PreBlockSpin parameter.

Adaptive lock

Adaptive spinning was introduced in JDK 1.6. Adaptive means the spin time is no longer fixed; instead, it is determined by the previous spin times on the same lock and by the state of the lock's owner.

If, on a given lock object, a spin wait has just succeeded in acquiring the lock and the thread holding the lock is running, the virtual machine assumes that spinning is likely to succeed again and allows the spin wait to last relatively longer, for example 100 busy-loop iterations. Conversely, if spinning rarely succeeds for a given lock, the virtual machine may skip the spin entirely when acquiring that lock in the future, to avoid wasting processor resources.

2. Lock elimination

Lock elimination means that the virtual machine's just-in-time compiler removes locks that the code requests but that, as it detects at runtime, cannot possibly be contended for shared data.

The main basis for deciding on lock elimination is the data from escape analysis. If the compiler can determine that, in a piece of code, none of the data on the heap escapes to be accessed by other threads, then that data can be treated as if it were on the stack: it is thread-private, and synchronization locks on it are naturally unnecessary.
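
The textbook example is string concatenation through StringBuffer, whose append() method is synchronized. A sketch (the class and method names are illustrative):

```java
class LockElision {
    // sb is a local object that never escapes this method. Escape analysis
    // can prove that no other thread can ever lock it, so the JIT compiler
    // may eliminate the synchronization inside each StringBuffer.append()
    // call, even though append() is declared synchronized.
    static String concat(String a, String b, String c) {
        StringBuffer sb = new StringBuffer();
        sb.append(a).append(b).append(c);
        return sb.toString();
    }
}
```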

3. Lock coarsening

In principle, when writing code we always recommend keeping the scope of a synchronized block as small as possible, synchronizing only over the actual scope of the shared data. This keeps the number of operations that need to be synchronized as small as possible, so that if there is lock contention, waiting threads can obtain the lock as soon as possible. However, if a series of consecutive operations repeatedly locks and unlocks the same object, the frequent mutex operations cause unnecessary performance loss even without any contention; in that case the virtual machine coarsens (extends) the scope of the lock to cover the whole sequence of operations.
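
A sketch of code the virtual machine can coarsen (names illustrative): each append() acquires and releases the same monitor, so the JIT may merge them into one lock/unlock pair surrounding the loop:

```java
class Coarsening {
    // Every append() locks sb's monitor. Because the calls happen back to
    // back on the same object, the virtual machine may coarsen the locking:
    // acquire once before the loop and release once after it.
    static String repeat(String s, int n) {
        StringBuffer sb = new StringBuffer();
        for (int i = 0; i < n; i++) {
            sb.append(s);
        }
        return sb.toString();
    }
}
```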

4. Lightweight lock

Lightweight locking is a locking mechanism added in JDK 1.6. The "lightweight" in the name is relative to traditional locks implemented with operating system mutexes. The first thing to emphasize is that lightweight locks are not meant to replace heavyweight locks; their purpose is to reduce the performance cost that traditional heavyweight locks incur by using operating system mutexes, in scenarios without multi-thread contention.


5. Biased locking

Biased locking, also introduced in JDK 1.6, goes one step further than lightweight locking: if a lock is only ever acquired by one thread, the lock biases toward that thread, and subsequent acquisitions by the same thread require no synchronization operation at all.

For a detailed explanation, refer to: https://blog.csdn.net/zq1994520/article/details/84175573


Origin blog.csdn.net/qq_40635011/article/details/105495705