Interview (2)---synchronized

I. Introduction 

      I originally planned to compare ConcurrentHashMap and HashMap, but the source code turned out to be a bit confusing, and I'll come back to it once I've thought it through properly. So let's talk about synchronized first, mainly from two angles: how it is used, and how the JVM implements it;

II. Usage

      To talk about usage, we first need to understand when synchronized is needed. Thread-safety problems in concurrent programming come from two things: 1. shared resources; 2. simultaneous operations on them. In that case we must ensure that only one thread at a time is allowed to access or operate on the shared resource. Java provides a lock for exactly this: while one thread operates on the shared resource, any other thread that tries to access the same resource at the same time has to wait. Such a lock is also called a mutex (mutual-exclusion) lock; it guarantees that only one thread accesses the resource at a time, and it also guarantees memory visibility;

     Next, let's get to the focus of this article, synchronized:

     1. Modifying static methods

      When synchronized modifies a static method, the lock is the Class object of the current class. See the following two listings (the second modifies a non-static method, for contrast):

public class SyncClass implements Runnable {
    static int i = 0;

    // The lock is the Class object of the current class
    public static synchronized void test() {
        i++;
    }

    public void run() {
        for (int j = 0; j < 1000000; j++) {
            test();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread thread1 = new Thread(new SyncClass());
        Thread thread2 = new Thread(new SyncClass());

        thread1.start();
        thread2.start();

        thread1.join();
        thread2.join();

        System.out.println(i);
    }
}

public class SyncClass implements Runnable {
    static int i = 0;

    // The lock is the current instance
    public synchronized void test() {
        i++;
    }

    public void run() {
        for (int j = 0; j < 1000000; j++) {
            test();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread thread1 = new Thread(new SyncClass());
        Thread thread2 = new Thread(new SyncClass());

        thread1.start();
        thread2.start();

        thread1.join();
        thread2.join();

        System.out.println(i);
    }
}

      If you run the two versions above, you will see a big difference. In SyncClass, the variable i is a shared variable. In the first version, where test() is a static synchronized method, no updates to i are lost and the output is always 2000000, even though each thread runs with its own SyncClass instance. In the second version, where test() is a non-static synchronized method, each thread locks its own instance, so the two threads race on the shared i and the output differs from what we expect, usually coming out less than 2000000. From this we can see that when synchronized modifies a static method, the lock is the current class, and when it modifies a non-static method, the lock is the current instance. The instance case is confirmed below.

    2. Modifying non-static methods

       When synchronized modifies a non-static method, the lock is the current instance of the object. See the following code:

public class SyncClass implements Runnable {
    static int i=0;
    public synchronized void test(){
        i++;
    }


    public void run() {
        for (int j=0;j<1000000;j++){
            test();
        }
    }
    public static void main(String[] args) throws InterruptedException {
        SyncClass instance = new SyncClass();
        Thread thread1 = new Thread(instance);
        Thread thread2 = new Thread(instance);

        thread1.start();
        thread2.start();

        thread1.join();
        thread2.join();

        System.out.println(i);
    }
}

      In this version a single instance is passed to both threads, and the output is never less than 2000000; no updates to the shared i are lost. Comparing this with the earlier listing, where the same non-static synchronized method was called on two different instances, we can conclude that when synchronized modifies a non-static method, the locked object is the current instance of the class;

   3. Modifying code blocks

       Synchronized blocks exist to make locking more efficient: there is no need to synchronize the whole method every time, only the critical section, and you can choose which object to lock. See the following code:

public class SyncClass implements Runnable {
    static SyncClass instance = new SyncClass();
    static int i = 0;

    public void test() {
        i++;
    }

    public void run() {
        // Lock a specific given object
        synchronized (instance) {
            for (int j = 0; j < 1000000; j++) {
                test();
            }
        }
        // Lock the current instance
//        synchronized (this) {
//            for (int j = 0; j < 1000000; j++) {
//                test();
//            }
//        }
        // Lock the current class
//        synchronized (SyncClass.class) {
//            for (int j = 0; j < 1000000; j++) {
//                test();
//            }
//        }
    }

    public static void main(String[] args) throws InterruptedException {
        SyncClass runner = new SyncClass();
        Thread thread1 = new Thread(runner);
        Thread thread2 = new Thread(runner);

        thread1.start();
        thread2.start();

        thread1.join();
        thread2.join();

        System.out.println(i);
    }
}

      That covers the usage. There are three main cases, all shown above: 1. locking a specific given object; 2. locking the current instance (this); 3. locking the current class (SyncClass.class).

III. Thinking through the principle

     Synchronized guarantees concurrent correctness through mutual exclusion. When a synchronized block is compiled, two bytecode instructions, monitorenter and monitorexit, are emitted around it: monitorenter marks the start of the synchronized block and monitorexit marks its end. When the monitorenter instruction executes, the thread first tries to acquire the object's lock. If the object is not locked, or the current thread already owns the object's lock, the lock's counter is incremented by 1; when the corresponding monitorexit executes, the counter is decremented by 1, and when the counter reaches 0 the lock is released. If acquiring the object's lock fails, the current thread blocks and waits until the object's lock is released by the thread that holds it. (The above is taken from the book Understanding the Java Virtual Machine.)
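     You can see this for yourself by compiling a small class with a synchronized block and decompiling it with javap -c. The class below is my own minimal example (the names are illustrative, not from the article), and the trailing comments describe roughly what the decompiled output contains:

// Minimal class to inspect with "javap -c"
public class MonitorDemo {
    private final Object lock = new Object();
    private int counter = 0;

    public void increment() {
        synchronized (lock) {
            counter++;
        }
    }
}

// "javap -c MonitorDemo" shows, roughly, for increment():
//   monitorenter        // acquire the monitor of lock; its counter goes up by 1
//   ...                 // the bytecode for counter++
//   monitorexit         // normal path: counter goes down by 1, released when it reaches 0
//   ...
//   monitorexit         // a second monitorexit on the exception-handler path,
//                       // so the monitor is released even if the block throws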

     Now let's think about how we would build such a thing ourselves. To realize the scenario above, we need to work out 3 questions:

     1. Counter problem; 2. Object state problem

      These two are relatively simple to deal with: give the object a count attribute that is incremented on lock and decremented on unlock, and the same counter also settles the state question, because a count of 0 means unlocked and a count greater than 0 means locked;

     3. Thread problem

     A thread interacting with the lock can be in one of two states: blocked waiting to acquire the lock, or in the waiting state. The analysis is simple: we can handle this with two queues. As long as the queues record which threads they hold, we always know which thread to wake up. When the entry queue is empty, no thread is blocked on the lock, and once the counter drops back to 0 the object is unlocked;
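     To make these three pieces concrete, here is a toy sketch built from nothing but a counter, an owner reference and an entry queue. It is purely illustrative (it even skips the wait/notify queue) and is nothing like HotSpot's real monitor:

import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicReference;
import java.util.concurrent.locks.LockSupport;

// Toy re-entrant monitor: a counter, an owner, and an entry queue of blocked threads
public class ToyMonitor {
    private final AtomicReference<Thread> owner = new AtomicReference<>(); // null means unlocked
    private int count = 0;                                                 // re-entrancy counter
    private final ConcurrentLinkedQueue<Thread> entryQueue = new ConcurrentLinkedQueue<>();

    public void enter() {
        Thread current = Thread.currentThread();
        if (owner.get() == current) {          // we already own the lock: just bump the counter
            count++;
            return;
        }
        entryQueue.add(current);               // record that we are waiting for the lock
        // only the head of the queue is allowed to try to become the owner
        while (!(entryQueue.peek() == current && owner.compareAndSet(null, current))) {
            LockSupport.park();                // block until exit() wakes us up
        }
        entryQueue.remove(current);
        count = 1;
    }

    public void exit() {
        if (owner.get() != Thread.currentThread()) {
            throw new IllegalMonitorStateException();
        }
        if (--count == 0) {                    // counter back to 0: really release the lock
            owner.set(null);
            Thread next = entryQueue.peek();
            if (next != null) {
                LockSupport.unpark(next);      // wake the next blocked thread
            }
        }
    }
}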

     At this point the picture should be much clearer. I won't turn this into a real implementation; thinking the problem through is what matters, and the idea is king. In the JVM, synchronized is implemented in C++, but the basic idea is the same. What we mainly need to look at next is what the Java object header contains and the lock optimizations built around it; once you understand that, I believe it is easy to grasp synchronized completely;

     Java object header

     The lock used by synchronized is stored in the Java object header. So what is the Java object header? In the HotSpot virtual machine the object header has two parts. The first stores the object's own runtime data, including the hash code, GC generational age and so on, and is officially called the Mark Word; the other part is a pointer to the object's type metadata in the method area, which the virtual machine uses to determine which class the object is an instance of. This was already covered when the virtual machine was introduced earlier;
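     If you want to look at an object header yourself rather than take the book's word for it, OpenJDK's JOL (Java Object Layout) tool can print it. A rough sketch of its use, assuming the org.openjdk.jol:jol-core dependency is on the classpath:

import org.openjdk.jol.info.ClassLayout;

public class HeaderDemo {
    public static void main(String[] args) {
        Object o = new Object();

        // Prints the object's layout, including the Mark Word and the class pointer
        System.out.println(ClassLayout.parseInstance(o).toPrintable());

        synchronized (o) {
            // Printed inside a synchronized block, the Mark Word's lock bits have changed
            System.out.println(ClassLayout.parseInstance(o).toPrintable());
        }
    }
}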

     Mark Word

     On the 32-bit HotSpot virtual machine, the default storage structure of an object's Mark Word in the unlocked state is as follows:

   [Figure: default Mark Word layout in the unlocked state (hash code, GC generational age, biased-lock flag, lock flag)]

    The Mark Word is deliberately designed as a non-fixed data structure so that it can store as much useful data as possible in a tiny space: it reuses its own bits according to the object's state. On a 32-bit JVM, besides the default unlocked layout listed above, the Mark Word can also take the following forms:

    [Figure: Mark Word layouts for the other states, such as biased, lightweight-locked, heavyweight-locked and GC-marked]

    Having analyzed the object header, let's go back to the scenario we considered above. Don't worry about lightweight locks and biased locks yet; those are optimizations made by the JVM and we will talk about them later. Focus on the heavyweight lock. What we were really designing just now is the monitor object pointed to by the heavyweight-lock pointer in the Mark Word. Our design is of course not as comprehensive as the real one, but it covers the important points. When an object is created it is associated with a monitor, and once created, the monitor's life cycle matches the object's: they live and die together. With that I believe you can fully understand how the heavyweight lock is implemented; if not, look at the decompiled bytecode and think again about the 3 questions we considered in the design, and it will come together;

IV. Lock optimization

     JDK 1.6 introduced a large number of optimizations to the lock implementation, such as spin locks, adaptive spin locks, lock elimination, lock coarsening, biased locks and lightweight locks, all intended to reduce the overhead of lock operations.
     A lock has four main states: unlocked, biased, lightweight, and heavyweight. The lock is gradually upgraded through these states as contention becomes fiercer. Note that a lock can be upgraded but not downgraded; this strategy exists to improve the efficiency of acquiring and releasing locks.

     1. Spin lock

     Switching between threads is handled by the CPU and operating system, and switching back and forth puts real pressure on the machine. Often, however, a lock is only held for a very short time, and it is not worth suspending and resuming threads for that. The spin lock was introduced for this situation. What is spinning? The thread waits in a loop, repeatedly checking whether the thread holding the lock releases it within a certain period. Of course this only makes sense on a multi-core machine: one core runs the thread that holds the lock while another runs the spinning thread. Spinning avoids the performance cost of switching between threads, but it occupies processor time, so if the thread holding the lock runs for a long time, spinning is just a waste; the spin wait therefore has to be bounded. The spin lock was introduced in JDK 1.4.2 and was disabled by default, but could be enabled with -XX:+UseSpinning; it is enabled by default in JDK 1.6. The default number of spins is 10, which can be adjusted with the parameter -XX:PreBlockSpin. JDK 1.6 also introduced the adaptive spin lock, which is smarter and decides how long to spin based on how the lock has behaved at runtime;
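     The idea of spinning can be sketched in a few lines of user code with a CAS; this is just an illustration of the concept, not how the JVM's built-in spinning is implemented:

import java.util.concurrent.atomic.AtomicBoolean;

// Minimal spin lock: a waiting thread busy-loops on a CAS instead of being suspended
public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // spin until the CAS from false to true succeeds
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait(); // JDK 9+ hint to the CPU that we are busy-waiting
        }
    }

    public void unlock() {
        locked.set(false);
    }
}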

    2. Lock Elimination

     Lock elimination is also decided by the virtual machine itself; it is an optimization for synchronization on resources that can never be contended. The judgment is backed mainly by the data from escape analysis. Briefly, escape analysis determines the scope in which an object is used; to put it bluntly, it finds objects that are never used outside the method or thread that created them. What the virtual machine does is this: if a variable can never be accessed by any other thread, then there can be no contention on it, and the synchronization guarded by that variable can be eliminated;
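     The commonly cited example of a lock-elimination candidate uses StringBuffer, whose append() is a synchronized method (this sketch is mine, not from the article):

public class LockEliminationDemo {
    // sb never escapes concat(): escape analysis shows no other thread can ever see it,
    // so the JIT can eliminate the synchronization inside StringBuffer.append()
    public String concat(String s1, String s2) {
        StringBuffer sb = new StringBuffer(); // append() is a synchronized method
        sb.append(s1);
        sb.append(s2);
        return sb.toString();
    }
}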

   3. Lock coarsening

     Lock coarsening is also decided by the virtual machine itself: a series of consecutive lock and unlock operations on the same object is merged into a single lock with a larger scope;
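     Continuing with the same StringBuffer illustration (again my own sketch): each append() below locks and unlocks sb, and the JIT may coarsen the sequence into one lock held across all three calls:

public class LockCoarseningDemo {
    // Three back-to-back lock/unlock pairs on the same object; the JIT may merge them
    // into a single lock that covers the whole sequence
    public String repeat(String s) {
        StringBuffer sb = new StringBuffer();
        sb.append(s);
        sb.append(s);
        sb.append(s);
        return sb.toString();
    }
}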

   4. Lightweight lock

     The lightweight lock also exists to avoid the thread-switching cost of the heavyweight lock, and it works mainly through CAS operations. Let's analyze the execution process:

     Locking process:

     1). Check whether the object is currently in the unlocked state. If it is, the JVM creates a space called the Lock Record in the current thread's stack frame and copies the lock object's current Mark Word into it (the official name adds a Displaced prefix to this copy, i.e. the Displaced Mark Word);

     2). The JVM then uses a CAS operation to try to update the object's Mark Word to a pointer pointing at the Lock Record. If it succeeds, the thread has won the lock: the lock flag is changed to 00 (meaning the object is in the lightweight-lock state) and the synchronized code is executed;

     3). If the CAS fails, check whether the object's Mark Word already points into the current thread's stack frame. If it does, the current thread already holds this object's lock and the synchronized block runs directly; otherwise the lock object has been taken by another thread, so the lightweight lock must be inflated into a heavyweight lock, the lock flag becomes 10, and the waiting thread enters the blocked state;

     Release lock process:

     1). If the object's Mark Word still points at the thread's Lock Record, take out the data that was saved in the Displaced Mark Word when the lightweight lock was acquired;

     2). Use a CAS operation to write that data back into the object's Mark Word. If it succeeds, the lock has been released successfully;

     3). If the CAS replacement fails, it means another thread has tried to acquire the lock in the meantime, so the suspended threads have to be woken up while the lock is released;

     Described like this it may still feel abstract, so let's take threads A and B as an example and walk through the process:

     1). Threads A and B reach the lock at the same time while the object is still unlocked, so both copy the Mark Word into their own stack frames;

     2). Only one of the two CAS operations can succeed. Suppose thread A succeeds: A executes the synchronized method body. When thread B performs its CAS, it finds that the Mark Word has already been changed to the lightweight-lock state and points at A's Lock Record, so its CAS fails and B enters the state of spinning to acquire the lock;

     3). If B fails to get the lock by spinning, the Mark Word is inflated to the heavyweight-lock state and B blocks. When A finishes the synchronized method body and performs its unlock CAS, that CAS also fails, because the Mark Word no longer points at A's Lock Record; A therefore releases the lock along the heavyweight path and wakes the blocked thread. From this point on, everyone is back in the heavyweight-lock state;
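     To tie the walkthrough together, here is a very rough model of the CAS handshake, with the Mark Word represented as an AtomicReference. All the names are my own, the real logic lives inside HotSpot in C++, and inflation to the heavyweight lock is only marked with a comment:

import java.util.concurrent.atomic.AtomicReference;

// Rough, purely illustrative model of the lightweight-lock handshake
public class LightweightLockSketch {

    static class LockRecord {           // lives in the locking thread's stack frame
        Object displacedMarkWord;       // the copied ("Displaced") Mark Word
    }

    private static final Object UNLOCKED = new Object();      // stands in for an unlocked Mark Word
    private final AtomicReference<Object> markWord = new AtomicReference<>(UNLOCKED);

    boolean tryLock(LockRecord record) {
        Object current = markWord.get();
        record.displacedMarkWord = current;                    // step 1: copy the Mark Word
        if (markWord.compareAndSet(current, record)) {         // step 2: CAS the Mark Word to point at the Lock Record
            return true;                                       // success: this thread holds the lightweight lock
        }
        if (markWord.get() == record) {                        // step 3: already points at our own record?
            return true;                                       // re-entrant acquisition
        }
        return false;  // another thread won: spin, then inflate to a heavyweight lock (not modeled here)
    }

    boolean tryUnlock(LockRecord record) {
        // CAS the Displaced Mark Word back; failure means the lock was inflated while we held it
        return markWord.compareAndSet(record, record.displacedMarkWord);
    }
}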

    5. Biased lock

     The biased lock targets the case where there is no thread contention at all, so that we do not even have to go to the lightweight lock; it removes the lightweight lock's CAS operations. Let's look at the processing flow:

     Acquiring the lock

     1). Check whether the Mark Word is in a biasable state, i.e. the lock flag is 01 and the biased flag is set;

     2). If it is in the biased state, check whether the thread ID stored in the Mark Word is the current thread's ID; if it is, execute the synchronized code directly;

     3). If it is not, fall back to the lightweight-lock process;

      Releasing the lock

      A biased lock only releases (revokes) its bias when another thread starts to compete for the lock;

V. Conclusion

      This article mainly draws on the book Understanding the Java Virtual Machine. If there is anything you don't understand, you can contact me; QQ group: 438836709. Coming up next: volatile;
