Thread Safety and Lock Optimization

This article introduces thread safety and lock optimization in two parts:

1. Thread Safety

2. Lock optimization

 

1. Thread Safety

 

1. Definition of thread safety

     Brian Goetz's definition of thread safety:

     When multiple threads access an object, if the object behaves correctly without any additional synchronization or coordination on the caller's side, regardless of how the runtime environment schedules or interleaves those threads, then the object is thread-safe.

 

    Characteristics of thread-safe code:

    Thread-safe code encapsulates its own correctness guarantees (such as mutual-exclusion synchronization), so the caller does not need to be aware of multithreading issues or take any measures of its own to ensure correct concurrent invocation.

 

2. Thread safety in Java language

      Ordered from strongest to weakest thread safety, the shared data operated on in the Java language can be divided into five categories: immutable, absolutely thread-safe, relatively thread-safe, thread-compatible, and thread-hostile.

 

     (1) Immutable

              Immutable objects are always thread-safe; neither the object's method implementations nor its callers need any thread-safety measures.

              In the Java language, if the shared data is of a primitive type, declaring it final is enough to guarantee immutability;

              If the shared data is an object, the object is immutable as long as its behavior (methods) has no effect on its state (fields). For example, String: its substring() method does not change the original string but returns a new String;

              There are several ways to ensure that an object's behavior does not affect its own state, for example declaring its stateful fields final, as Integer does with its value field in the constructor:

              private final int value;

              public Integer(int value) {
                  this.value = value;
              }

             

       (2) Absolute thread safety

                Absolute thread safety fully satisfies Brian Goetz's definition above. Most of the classes marked as thread-safe in the Java API are not absolutely thread-safe; Vector, for example:

              

package net.oschina.tkj.jvmstu.thread;

import java.util.Vector;

public class TestVector {

	private static final Vector<Integer> v = new Vector<Integer>();

	public static void main(String[] args) {
		while (true) {
			for (int i = 0; i < 10; i++) {
				v.add(i);
			}

			// One thread removes elements by index ...
			Thread removeThread = new Thread(new Runnable() {
				@Override
				public void run() {
					for (int i = 0; i < v.size(); i++) {
						v.remove(i);
					}
				}
			});

			// ... while another thread reads them by index.
			Thread printThread = new Thread(new Runnable() {
				@Override
				public void run() {
					for (int i = 0; i < v.size(); i++) {
						System.out.println(v.get(i));
					}
				}
			});

			removeThread.start();
			printThread.start();

			// Throttle thread creation so the OS is not overwhelmed.
			while (Thread.activeCount() > 20)
				;
		}
	}
}

  Running this program long enough eventually produces an unusual error, typically java.lang.ArrayIndexOutOfBoundsException: the remove thread shrinks the Vector between another thread's size() check and its get(i) or remove(i) call, so i ends up pointing past the end of the Vector.

 

The solution is to wrap the loops in the run() methods in synchronized (v) blocks.

This shows that APIs declared thread-safe in Java are not necessarily absolutely thread-safe.
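A sketch of that fix (the class and helper names below are illustrative, not from the original): each compound traversal holds the Vector's own monitor, so the size() check and the get()/remove() call execute as one atomic unit.

```java
import java.util.Vector;

public class SafeVectorAccess {
    static final Vector<Integer> v = new Vector<Integer>();

    static void add(int value) {
        v.add(value);
    }

    // Locks the Vector itself, so no other thread can shrink it
    // between the size() check and the get(i) call.
    static int sumAll() {
        synchronized (v) {
            int sum = 0;
            for (int i = 0; i < v.size(); i++) {
                sum += v.get(i);
            }
            return sum;
        }
    }

    // Removes elements under the same monitor for the same reason.
    static void removeAll() {
        synchronized (v) {
            for (int i = v.size() - 1; i >= 0; i--) {
                v.remove(i);
            }
        }
    }
}
```

Synchronizing on v works because Vector's own methods also lock v, so the block and the built-in synchronization use the same monitor.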

 

        (3) Relative thread safety

                  Relative thread safety means that an individual operation on the object is thread-safe, but a sequence of operations invoked from multiple threads may require additional synchronization by the caller to be correct. Examples: the Vector discussed above, Hashtable, etc.

 

        (4) Thread compatibility

                 Thread-compatible objects are not thread-safe by themselves, but they can be used safely in a concurrent environment as long as the caller applies synchronization correctly. Most classes in the Java API belong to this category, such as ArrayList and HashMap.

        (5) Thread opposition

                Thread-hostile code cannot be used safely in a multithreaded environment, regardless of whether the caller applies synchronization.

 

3. Implementation of thread safety

      The root cause of thread-safety problems: multiple threads operating on shared data.

      Solutions to thread safety issues:

      (1) Mutual exclusion and synchronization

        The primary means of mutually exclusive synchronization in Java is the synchronized keyword.

 

1> The synchronized keyword

   

①After compilation, a synchronized block is bracketed by two bytecode instructions, monitorenter and monitorexit. Both instructions take a reference-type parameter that specifies the object to be locked or unlocked;

   

    ②The JVM specification states that when executing monitorenter, the thread first tries to acquire the lock of the object. If the object is not locked, or the current thread already owns the object's lock, the lock counter is incremented by 1. Correspondingly, executing monitorexit decrements the counter by 1, and when the counter reaches 0 the lock is released. If acquiring the object's lock fails, the current thread blocks and waits until the thread occupying the lock releases it;

   

    ③Two notes on the monitorenter and monitorexit instructions in the JVM specification:

        《1》synchronized is reentrant for the same thread, so a thread will not deadlock on a lock it already holds;

        《2》a synchronized block prevents other threads from entering until the thread already inside has finished executing it.
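The first note (reentrancy) can be sketched with a minimal, hypothetical class: a synchronized method calling another synchronized method on the same object does not block itself, because the thread already owns the monitor and the lock counter is simply incremented.

```java
public class ReentrantDemo {
    private int depth = 0;

    // outer() acquires the monitor on `this` (counter: 0 -> 1),
    // then calls inner(), which re-enters the same monitor
    // (counter: 1 -> 2) instead of blocking forever.
    public synchronized int outer() {
        depth++;
        return inner();
    }

    public synchronized int inner() {
        depth++;
        return depth;
    }
}
```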

 

    ④Java threads are mapped to native operating-system threads, so blocking and waking a thread requires the operating system's participation and a transition from user mode to kernel mode. Because these state transitions consume a great deal of processor time, synchronized is a heavyweight operation in the Java language;

 

      (2) Synchronization with java.util.concurrent.locks.ReentrantLock

        ReentrantLock behaves much like synchronized, but locking and unlocking are explicit: the lock() and unlock() methods are used together with a try/finally block.

 

        ReentrantLock adds the following features over synchronized:

        《1》Interruptible waiting: if the thread holding the lock does not release it for a long time, a waiting thread can give up waiting and do other work instead; this is useful when synchronized regions take a long time to execute;

        《2》Fair locking: when multiple threads wait for the same lock, a fair lock grants it in the order in which the threads requested it; an unfair lock makes no such guarantee, and any waiting thread may acquire it. synchronized is an unfair lock; ReentrantLock is unfair by default but can be made fair via a boolean argument to its constructor;

        《3》The lock can be bound to multiple conditions: a single ReentrantLock object can be bound to several Condition objects.
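The three features can be sketched in one place (class and method names here are illustrative):

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class ReentrantLockDemo {
    // Passing true requests a fair lock (《2》); the default is unfair.
    private final ReentrantLock lock = new ReentrantLock(true);

    // One lock, two independent wait sets (《3》).
    private final Condition notEmpty = lock.newCondition();
    private final Condition notFull = lock.newCondition();

    private int value;

    public int increment() {
        lock.lock();          // explicit lock ...
        try {
            return ++value;
        } finally {
            lock.unlock();    // ... always released in finally
        }
    }

    // 《1》: lockInterruptibly() lets a waiting thread respond to
    // interruption instead of blocking indefinitely.
    public void incrementInterruptibly() throws InterruptedException {
        lock.lockInterruptibly();
        try {
            value++;
        } finally {
            lock.unlock();
        }
    }
}
```

The try/finally pairing is essential: unlike synchronized, ReentrantLock is not released automatically when the block exits.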

 

       (3) Non-blocking synchronization

        The main cost of mutually exclusive synchronization lies in blocking and waking threads, so this kind of synchronization is also called blocking synchronization.

         Pessimistic concurrency strategy: this strategy assumes that without proper synchronization there will be problems, so it performs locking operations regardless of whether threads actually contend for the shared data.

        Optimistic concurrency strategy: a strategy based on conflict detection. Simply put, the operation is performed first; if no other thread contended for the shared data, the operation succeeds; if there was contention and a conflict occurred, a compensating measure is applied (most commonly, retrying until success). Because this strategy does not require suspending threads, it is also called non-blocking synchronization.
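In Java, the most common concrete form of this optimistic strategy is the compare-and-swap (CAS) loop used by the java.util.concurrent.atomic classes: read the current value, compute the new one, and retry if another thread changed the value in between. A minimal sketch (class name illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    private final AtomicInteger counter = new AtomicInteger(0);

    // Optimistic update: no lock is taken. If compareAndSet() fails,
    // another thread won the race and we simply retry; the retry is
    // the "compensating measure".
    public int addTen() {
        for (;;) {
            int current = counter.get();
            int next = current + 10;
            if (counter.compareAndSet(current, next)) {
                return next;
            }
        }
    }
}
```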

 

2. Lock optimization

       Efficient concurrency is an important topic in JDK 1.6, which introduced a number of lock-optimization techniques: adaptive spinning, lock elimination, lock coarsening, lightweight locks, and biased locks. These techniques aim to share data between threads more efficiently, resolve contention, and improve program execution efficiency.

 

1. Spin locks and adaptive spinning

     The biggest performance cost of mutual exclusion is its blocking implementation: suspending and resuming threads must be done in kernel mode, which puts great pressure on the system's concurrent performance.

     Spin lock: on a machine with more than one processor, multiple threads can execute in parallel, so a thread requesting a lock can wait without giving up its processor time: it simply executes a busy loop (spins). This technique is called a spin lock.

     Advantage of spin locks: avoids the overhead of thread state transitions.

     Disadvantage of spin locks: spinning consumes processor time. If the thread holding the lock holds it for too long, the spinning thread occupies the processor for too long and wastes resources. For this reason the number of spins is limited, with a default of 10; if the lock is still not acquired after that many spins, the thread is suspended by the conventional means.

     Adaptive spinning: the spin time is not fixed; it is determined by the previous spin time on the same lock and the state of the lock's owner.
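The idea of spinning can be illustrated at the user level with a simple lock built on AtomicBoolean (the JVM's own spin locks live inside HotSpot; this class is only an illustration and omits the spin-count limit described above):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    // Busy-wait (spin) until the CAS from false to true succeeds:
    // the waiting thread keeps the processor instead of blocking.
    public void lock() {
        while (!locked.compareAndSet(false, true)) {
            // spin
        }
    }

    // Non-blocking attempt: succeeds only if the lock is free.
    public boolean tryLock() {
        return locked.compareAndSet(false, true);
    }

    public void unlock() {
        locked.set(false);
    }
}
```

This trade-off only pays off when critical sections are short; a real implementation would fall back to suspending the thread after a bounded number of spins.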

 

2. Lock elimination

The JVM's just-in-time compiler eliminates locks on code that requires synchronization at the source level but that, at runtime, it can prove has no possible contention for shared data.
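A common illustration, assuming HotSpot's escape analysis is enabled: the StringBuffer below is a local object that never escapes the method, so the JIT can prove that no other thread will ever lock it and may strip the synchronization inside append(). (Whether elimination actually happens is a JIT decision; the result is the same either way.)

```java
public class LockEliminationDemo {
    // sb never escapes this method, so every append() lock here
    // is provably uncontended and can be eliminated by the JIT.
    public static String concatString(String s1, String s2, String s3) {
        StringBuffer sb = new StringBuffer();
        sb.append(s1);
        sb.append(s2);
        sb.append(s3);
        return sb.toString();
    }
}
```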

 

3. Lock coarsening

      Under normal circumstances, synchronized blocks should cover the smallest possible scope, so that threads waiting for the lock can acquire it as soon as possible.

      In one special case, however, a series of operations repeatedly locks and unlocks the same object, sometimes even inside a loop, so that even without any contention the repeated locking wastes performance. In that case the JIT compiler expands (coarsens) the scope of the lock to cover the whole series, so that the lock is acquired only once. For example, consecutive StringBuffer.append() calls are coarsened in this way: the lock is acquired before the first append() and released after the last one.

 

4. Lightweight lock

      Lightweight locks are "lightweight" relative to traditional locks implemented with operating-system mutexes.

       The premise under which lightweight locks improve synchronization performance: for most locks, there is no contention during the entire synchronization period. When there is no contention, a lightweight lock uses CAS operations and avoids the overhead of a mutex; when there is contention, the CAS operations are performed in addition to the mutex overhead, so lightweight locks are actually slower under contention.

 

5. Biased locks

      Biased locking eliminates synchronization entirely in the absence of data contention, improving program performance.

      A lightweight lock uses CAS to eliminate the mutex when there is no contention; a biased lock goes one step further and eliminates the whole synchronization, so that not even the CAS operations are performed: the lock is biased toward the first thread that acquires it, and as long as no other thread ever acquires it, the owning thread never needs to synchronize on it again.

 
