[Java Advanced] In-depth understanding of Synchronized

Using multiple threads in real applications can greatly improve a program's performance, but careless thread usage also introduces hard-to-control problems. The most common of these is the thread safety issue.

That is to say, when multiple threads access a method at the same time and the method does not behave as we expect, the method is considered thread-unsafe.

In fact, there are three main causes of thread safety problems: atomicity, ordering, and visibility. The synchronized lock is above all concerned with atomicity.

Atomicity problem in multi-threaded environment

What is atomicity?

Atomicity is one of the ACID properties of database transactions: the operations contained in a transaction either all succeed or all fail, and partial success is not allowed. Atomicity in multithreading is analogous: one or more instructions must be executed by the CPU without being interrupted partway through.

We can demonstrate with a piece of code:

public class AtomicExample {
    // volatile guarantees visibility of i across threads, but not atomicity
    volatile int i = 0;

    public void incr() {
        i++; // not atomic: read i, add 1, write back
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicExample atomicExample = new AtomicExample();
        Thread[] threads = new Thread[2];
        for (int j = 0; j < 2; j++) {
            threads[j] = new Thread(() -> {
                for (int k = 0; k < 10000; k++) {
                    atomicExample.incr();
                }
            });
            threads[j].start();
        }
        // wait for both threads to finish before reading the result
        threads[0].join();
        threads[1].join();
        System.out.println(atomicExample.i);
    }
}

The code above starts two threads, each of which increments the variable i 10,000 times, and then prints the accumulated result. Although the expected value is 20000, the printed values of i are usually less than 20000, inconsistent with expectations. The cause of this discrepancy is the atomicity problem.

In essence, there are two causes of the atomicity problem: CPU time-slice switching, and whether the instructions a thread executes are themselves atomic.

Let's first look at CPU time-slice switching. When the running thread gives up the CPU for any reason, the CPU allocates its time slice to another thread: this context switching is how the CPU improves resource utilization.

Atomicity of the i++ instruction

In a Java program, the i++ operation looks like a single, indivisible instruction, but it is not. We can inspect the bytecode of the incr() method in the AtomicExample class with the `javap -v` command.
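For reference, on a typical JDK `javap -c AtomicExample` prints something like the following for incr() (the exact constant-pool indices depend on the compiler):

```
public void incr();
  Code:
     0: aload_0
     1: dup
     2: getfield      #2      // Field i:I
     5: iconst_1
     6: iadd
     7: putfield      #2      // Field i:I
    10: return
```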

We can see that the i++ operation actually compiles into three key instructions: getfield, iadd, and putfield.

  • getfield: load the value of i from memory into a CPU register.
  • iadd: add 1 to the value in the register.
  • putfield: write the result back to memory.

However, these three instructions are not executed atomically; the CPU can be interrupted between them, which leads to the atomicity problem.

Assuming two threads are both modifying the variable i, one possible interleaving is as follows:

  • Thread 1 gets the CPU first. Right after it loads i = 0 into a register, a thread switch occurs: the CPU hands execution to thread 2 and saves thread 1's context.
  • Thread 2 also loads i from memory into a register, adds 1, and writes the result back to memory.
  • Thread 2 releases the CPU; when thread 1 regains execution, its context is restored, but the value in its register is still 0, because it is the value thread 1 loaded at the start, not the value thread 2 wrote.
  • Thread 1 then also writes 1 back, one increment is lost, and the final result is smaller than expected.

How to solve the atomicity problem?

From the analysis above, we found that thread parallelism and thread switching in a multi-threaded environment can cause execution results that do not meet expectations. The problem can be attacked from two directions.

  • Do not allow the non-atomic instruction sequence to be interrupted mid-execution; in other words, no context switch may occur while the i++ operation is in progress.
  • Force the threads that would otherwise run in parallel to execute the critical code serially, via a mutual exclusion mechanism.

In Java, the synchronized keyword provides exactly this. After adding synchronized to the incr() method, updates to the variable i can no longer be interleaved by other threads.

public synchronized void incr() {
    i++;
}
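To complete the picture, here is a runnable variant of the earlier example (the class name and the get() accessor are mine, added for illustration) with incr() synchronized; after both threads join, the printed total is reliably 20000:

```java
public class SynchronizedAtomicExample {
    private int i = 0;

    // synchronized makes the read-add-write of i++ mutually exclusive
    public synchronized void incr() {
        i++;
    }

    public int get() {
        return i;
    }

    public static void main(String[] args) throws InterruptedException {
        SynchronizedAtomicExample example = new SynchronizedAtomicExample();
        Thread[] threads = new Thread[2];
        for (int j = 0; j < 2; j++) {
            threads[j] = new Thread(() -> {
                for (int k = 0; k < 10000; k++) {
                    example.incr();
                }
            });
            threads[j].start();
        }
        threads[0].join();
        threads[1].join();
        System.out.println(example.get()); // always 20000
    }
}
```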

Synchronized synchronization lock in Java

Introduction

The root cause of thread safety problems is multiple threads operating on a shared resource at the same time. Solving this requires ensuring exclusive access to the shared resource. For this purpose, Java provides the synchronized keyword, which we call a synchronization lock; it guarantees that only one thread at a time may execute a given method or code block.

A synchronized lock is mutually exclusive, which effectively turns parallel execution into serial execution; because of this, the system loses some performance. The following shows how to use synchronized.

Usage

Applied at the method level, it locks the whole m1() method: when multiple threads call m1() at the same time, only one of them can execute it.

public synchronized void m1(){
}

Applied at the block level, it locks only a specific thread-unsafe section of code: threads compete for the lock only when they reach the synchronized (this) line.

public void m2() {
    synchronized (this) {
    }
}
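A point worth verifying: the method form and the synchronized (this) block form lock the same monitor, the current instance, so either form can guard the same state. The sketch below (class name and log field are illustrative, not from the original article) shows both forms protecting one StringBuilder:

```java
public class SameMonitorExample {
    private final StringBuilder log = new StringBuilder();

    // whole method guarded by the monitor of `this`
    public synchronized void m1() {
        log.append("m1;");
    }

    // only the block is guarded, but by the very same monitor
    public void m2() {
        synchronized (this) {
            log.append("m2;");
        }
    }

    public String log() {
        synchronized (this) {
            return log.toString();
        }
    }
}
```

Because m1() and m2() contend for the same monitor, concurrent appends never interleave mid-append, so the log always contains whole "m1;" / "m2;" entries.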

After adding the synchronized lock, the execution process changes: when multiple threads call the synchronized-modified method at the same time, they must first compete for a lock mark, and only the thread that acquires it is eligible to enter the incr() method. As a result, only one thread can perform the i++ operation at any moment, which solves the atomicity problem.

Scope of action

After we add the synchronized keyword to a method, calls to that method from multiple threads become fully serial. As mentioned above, this execution mode significantly hurts performance. So how can we balance safety and performance?

In fact, synchronized only needs to protect the code that is actually thread-unsafe, so we can achieve this balance by controlling the scope of the synchronization lock. synchronized provides two kinds of locks: the class lock and the object lock.

Class lock

A class lock is effectively a global lock: when multiple threads call the synchronized methods of different object instances, they still mutually exclude each other. It can be applied in two ways.

  • Modified static method:
public static synchronized void m1() {
}
  • Modified code block; the lock object in synchronized is the Class object, i.e. Lock.class.
public class Lock {
    public void m2() {
        synchronized (Lock.class) {
        }
    }
}

The following shows how a class lock makes different object instances mutually exclusive.

public class SynchronizedExample {
    public void m1() {
        synchronized (SynchronizedExample.class) {
            while (true) {
                System.out.println("Current thread: " + Thread.currentThread().getName());
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }
    }

    public static void main(String[] args) {
        SynchronizedExample set1 = new SynchronizedExample();
        SynchronizedExample set2 = new SynchronizedExample();
        new Thread(() -> set1.m1(), "t1").start();
        new Thread(() -> set2.m1(), "t2").start();
    }
}
  • The program defines an m1() method that loops forever, printing the name of the current thread; this loop is protected by a class lock.
  • In main(), two SynchronizedExample instances, set1 and set2, are created, and two threads call m1() on these two different instances.

From the scope of the class lock, we know that even calls on different object instances are mutually exclusive. So in the final output, whichever thread acquires the lock first keeps printing its own thread name indefinitely.

Object lock

An object lock is an instance-level lock: mutual exclusion occurs when multiple threads call the synchronized methods of the same object instance. It can be applied in two ways.

  • Modified instance method:
public synchronized void m1() {
}
  • Modified code block; the lock object in synchronized is an ordinary object instance.
public class Lock {
    Object lock = new Object();
    public void m2() {
        synchronized (lock) {
        }
    }
}

        The following program demonstrates the use of object locks:

public class SynchronizedForObjectExample {
    Object lock = new Object();

    public void m1() {
        synchronized (lock) {
            while (true) {
                System.out.println("Thread holding the lock: " + Thread.currentThread().getName());
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }
    }

    public static void main(String[] args) {
        SynchronizedForObjectExample set1 = new SynchronizedForObjectExample();
        SynchronizedForObjectExample set2 = new SynchronizedForObjectExample();

        new Thread(() -> set1.m1(), "t1").start();
        new Thread(() -> set2.m1(), "t2").start();
    }
}

Sample output:

Thread holding the lock: t1
Thread holding the lock: t2
Thread holding the lock: t1
Thread holding the lock: t2
Thread holding the lock: t1
Thread holding the lock: t2
Thread holding the lock: t1
Thread holding the lock: t2
Thread holding the lock: t1
Thread holding the lock: t2
Thread holding the lock: t2
Thread holding the lock: t1

Comparing the two nearly identical programs above, we find that with the object lock, when the two threads call the m1() method of different object instances, mutual exclusion is not achieved and the lock seems not to take effect. In fact, the lock does take effect; the root of the problem is that the lock object in synchronized (lock) is scoped to a single instance.

A class is loaded by the JVM (at startup or on first use); after each .class file is loaded, a Class object is created that is globally unique within the JVM process. Members and methods modified with static belong to the class level: they are allocated when the class is loaded into memory and reclaimed when the class is unloaded.

Therefore, the biggest difference between a class lock and an object lock is the life cycle of the lock object. For multiple threads to be mutually exclusive, they must compete for the same lock object.

In the code above, the life cycle of the lock object created by Object lock = new Object(); is bound to its SynchronizedForObjectExample instance. Different SynchronizedForObjectExample instances hold different lock objects, so there is no competition between them and no mutual exclusion. If we want the program above to synchronize across instances, we can add the static keyword to the lock object.

static Object lock = new Object();
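As a sketch of that fix (the class name and counter field are illustrative, not from the original article), making the lock static means every instance shares one lock object, so threads calling m1() on different instances now execute serially:

```java
public class StaticLockExample {
    // one lock object shared by all instances of the class
    static final Object lock = new Object();
    static int counter = 0;

    public void m1() {
        synchronized (lock) { // same monitor regardless of instance
            counter++;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        StaticLockExample set1 = new StaticLockExample();
        StaticLockExample set2 = new StaticLockExample();
        Thread t1 = new Thread(() -> { for (int k = 0; k < 10000; k++) set1.m1(); }, "t1");
        Thread t2 = new Thread(() -> { for (int k = 0; k < 10000; k++) set2.m1(); }, "t2");
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(StaticLockExample.counter); // reliably 20000
    }
}
```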

Thoughts on the synchronized lock

After the preceding analysis, we have a basic understanding of synchronization locks. The essence of a synchronization lock is mutual exclusion among threads: at any moment only one thread may execute the code protected by the lock, which guarantees thread safety. Now let's think about what is needed to achieve this.

  • The core characteristic of a synchronization lock is exclusivity (it is also called an exclusive lock). To achieve it, multiple threads must compete for one and the same resource.
  • Only one thread at a time may execute the locked code, which means only one thread at a time may successfully claim the shared resource (the lock); the threads that fail to claim it can only wait.
  • If many threads are blocked, we need a container to hold them. When the lock-holding thread finishes its task and releases the lock, one thread is woken from this container, and the awakened thread tries to acquire the lock again.
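The three points above are essentially a recipe for a mutual-exclusion lock: an exclusive flag, a queue of blocked threads, and a wake-up on release. As a rough illustration only, and emphatically not how the JVM implements synchronized internally, a toy lock along these lines could look like:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.LockSupport;

// Toy lock sketch: `held` is the free/busy flag, `waiters` is the
// container of blocked threads, and unlock() wakes one of them.
public class ToyLock {
    private final AtomicBoolean held = new AtomicBoolean(false);
    private final Queue<Thread> waiters = new ConcurrentLinkedQueue<>();

    public void lock() {
        // keep trying to flip the flag from free (false) to busy (true)
        while (!held.compareAndSet(false, true)) {
            Thread current = Thread.currentThread();
            waiters.add(current);
            // re-check before parking so we don't miss an unlock
            // that happened between the failed CAS and the add
            if (held.get()) {
                LockSupport.park(this);
            }
            waiters.remove(current);
        }
    }

    public void unlock() {
        held.set(false);          // mark the lock free
        Thread next = waiters.peek();
        if (next != null) {
            LockSupport.unpark(next); // wake one waiter to retry
        }
    }
}
```

A real monitor implementation is far more involved (spinning, fairness, reentrancy, condition queues), but the skeleton of flag plus wait queue plus wake-up is the same.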

Where the synchronized lock mark is stored

For synchronized to achieve mutual exclusion among threads, it must ensure that multiple threads compete for the same resource. This resource is somewhat like the indicator light above a parking space: green means the space is free and you may park; red means the opposite. In synchronized, this shared resource is the lock object in synchronized (lock).

This is also why object locks and class locks affect the scope of locking: if multiple threads access different lock resources, there is no competition and no mutual exclusion.

Therefore, at this level, lock-based mutual exclusion must satisfy the following two conditions:

  • The threads must compete for the same shared resource.
  • There must be a flag that indicates whether the lock is currently free or held.

The first condition is satisfied by the lock object itself. The second condition requires somewhere to store the lock flag; otherwise, a thread arriving to compete for the resource cannot tell whether it should proceed normally or queue up. In fact, this lock flag is stored in the object header. Let's briefly analyze the object header below.

Storage structure of Mark Word

Reference books:

"Java Concurrent Programming: In-Depth Analysis and Practice"

"Thinking in Java"

"Java Concurrency in Practice"

Origin blog.csdn.net/weixin_43918614/article/details/123815348