Thread safety issues (2) --- the synchronized and volatile keywords

Table of contents

1. Characteristics of synchronized

1.1 Mutual exclusivity

1.2 Reentrancy

2. Deadlock

2.1 Causes of deadlock

3. volatile keyword

3.1 Can guarantee memory visibility

3.2 Cannot guarantee atomicity

3.3 Disables instruction reordering


1. Characteristics of synchronized

1.1 Mutual exclusivity

When two threads lock the same object, the thread that tries to acquire the lock later will "block and wait" until the first thread releases the lock.
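This blocking behavior can be observed directly. In the sketch below (the class and method names are mine, for illustration), t1 grabs the lock first and holds it for 200 ms; t2 then has to block and wait, so its log entry always comes after t1's:

```java
public class MutexDemo {
    // Records the order in which the threads enter and leave the lock.
    static String run() {
        Object locker = new Object();
        StringBuilder log = new StringBuilder();
        Thread t1 = new Thread(() -> {
            synchronized (locker) {
                log.append("t1-in;");
                try { Thread.sleep(200); } catch (InterruptedException e) { }
                log.append("t1-out;");
            }
        });
        Thread t2 = new Thread(() -> {
            synchronized (locker) {   // blocks until t1 releases locker
                log.append("t2-in;");
            }
        });
        try {
            t1.start();
            Thread.sleep(50);         // head start so t1 grabs the lock first
            t2.start();
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return log.toString();
    }
    public static void main(String[] args) {
        // t2 was blocked until t1 released the lock, so t1-out precedes t2-in
        System.out.println(run());
    }
}
```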

1.2 Reentrancy

We all know that when two different threads lock the same object, lock contention occurs. But what happens when one and the same thread locks the same object twice in a row? For example:

public class Demo2 {
    public static void main(String[] args) throws InterruptedException {
        Object locker = new Object();
        Thread t1 = new Thread(() -> {
            synchronized (locker){
                synchronized (locker){
                    System.out.println("t1 is running");
                }
            }
        });
        t1.start();
        t1.join();
    }
}

Based on what was said earlier, after thread t1 locks the object for the first time, locker is in the "locked" state. When t1 tries to lock it a second time, it should in principle block and wait until locker is released before it can acquire it again. But in the code above, the first lock is not released until the second lock succeeds, and the second lock cannot succeed until the first lock is released. By this logic, thread t1 would deadlock against itself and hang forever. Obviously, that would be a bug.

In day-to-day development, this kind of bug is hard to avoid entirely. Some might object that the problem in the code above is visible at a glance and therefore easy to avoid. Here is a less obvious example:

class Test1{
    Object locker = new Object();
    public void fun1(){
        synchronized (locker){
            fun2();
        }
    }
    public void fun2(){
        fun3();
    }
    public void fun3(){
        fun4();
    }
    public void fun4(){
        synchronized (locker){
            System.out.println("4444");
        }
    }
}
//With code like this, when we call the fun1 method, the problem is not apparent at all!!!

To solve this bug, the designers of Java made synchronized a "reentrant lock": the lock records which thread currently holds it, so when a thread that already holds the lock tries to lock the same object again, the acquisition succeeds immediately instead of blocking.

Two questions are worth asking here. First, synchronized being reentrant avoids this self-deadlock, but if a thread takes N nested locks on the same object, when is the lock actually released? Second, how does the JVM determine that moment? The first answer is simple: the lock is released only when execution leaves the outermost synchronized {} block. As for the second: the lock object records not only which thread holds it but also how many times it has been acquired. Every lock operation increments a counter; every unlock decrements it. When execution leaves the last {} and the counter reaches exactly 0, the lock is released.
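The owner-plus-counter bookkeeping described above can be sketched as a toy lock class. This is a simplified illustrative model, not the real JVM implementation (which lives in the object header and the JVM's monitor code); all names here are mine:

```java
// Simplified model of how a reentrant lock tracks its owner and hold count.
public class ReentrantModel {
    private Thread owner = null;   // which thread currently holds the lock
    private int holdCount = 0;     // how many times the owner has locked it

    public synchronized void lock() {
        Thread current = Thread.currentThread();
        if (owner == current) {    // re-entry by the owner: just bump the counter
            holdCount++;
            return;
        }
        while (owner != null) {    // held by someone else: block and wait
            try { wait(); } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        owner = current;
        holdCount = 1;
    }

    public synchronized void unlock() {
        if (owner != Thread.currentThread()) {
            throw new IllegalMonitorStateException();
        }
        holdCount--;
        if (holdCount == 0) {      // outermost unlock: actually release
            owner = null;
            notifyAll();
        }
    }

    public synchronized int getHoldCount() { return holdCount; }
}
```

Locking twice from the same thread simply raises the counter to 2; the lock is truly released only when the counter falls back to 0.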

2. Deadlock

Deadlock can occur in the following situations:

1. If synchronized were not reentrant: one thread locking the same object repeatedly (the case above).

2. Two threads, two locks, with nested synchronized blocks (deadlock is possible, not guaranteed!). For example:

public class Demo2 {
    public static void main(String[] args) throws InterruptedException {
        Object locker1 = new Object();
        Object locker2 = new Object();
        Thread t1 = new Thread(() -> {
            synchronized (locker1){
                try {
                    Thread.sleep(1);//so that t1 grabs locker1 while t2 grabs locker2
                } catch (InterruptedException e) {
                    throw new RuntimeException(e);
                }
                synchronized (locker2){
                    System.out.println("t1 finished");
                }
            }
        });
        Thread t2 = new Thread(() -> {
            synchronized (locker2){
                synchronized (locker1){
                    System.out.println("t2 finished");
                }
            }
        });
        t1.start();
        t2.start();
        t1.join();
        t2.join();
    }
}

3. M threads and N locks (a generalization of case 2).

2.1 Causes of deadlock

Four necessary conditions (if any one of the four fails to hold, a deadlock cannot form):

  1. Mutual exclusion (a basic property of locks): when two threads lock the same object, the later one blocks and waits until the first thread releases the lock.
  2. No preemption (also a basic property of locks): a lock cannot be forcibly taken away from the thread that holds it; it must be released voluntarily.
  3. Hold and wait: a thread holding one lock can request another, i.e. locks can be nested.
  4. Circular wait: the waiting dependencies form a cycle. For example: the car key is locked in the house, and the house key is locked in the car. This is exactly the situation in the code above.

So how do we solve the deadlock? 

Of the four conditions, the first two are intrinsic properties of the lock and cannot be changed, so we can only attack the last two. For condition 3: avoid writing nested locking logic (though in some cases this is unavoidable). For condition 4: number the locks and agree on a fixed acquisition order, e.g. always acquire the lower-numbered lock first and the higher-numbered one second. As long as every thread follows the same order, a wait cycle cannot form.
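Applying the lock-ordering rule to the earlier Demo2: if both threads agree to take locker1 before locker2, the circular wait can never form. A minimal sketch (the class name is mine):

```java
public class OrderedLockDemo {
    // Both threads acquire the locks in the same agreed order
    // (locker1 first, then locker2), which breaks circular wait.
    static String run() {
        Object locker1 = new Object();
        Object locker2 = new Object();
        StringBuilder log = new StringBuilder();
        Thread t1 = new Thread(() -> {
            synchronized (locker1) {
                synchronized (locker2) {
                    log.append("t1 done;");
                }
            }
        });
        Thread t2 = new Thread(() -> {
            synchronized (locker1) {      // same order as t1, so no deadlock
                synchronized (locker2) {
                    log.append("t2 done;");
                }
            }
        });
        try {
            t1.start();
            t2.start();
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return log.toString();
    }
    public static void main(String[] args) {
        System.out.println(run());        // both threads always finish
    }
}
```

Because both threads contend for locker1 first, whichever loses simply waits; neither can hold one lock while waiting for the other in the opposite order.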

3. volatile keyword

3.1 Can guarantee memory visibility

What is memory visibility?

When a program runs, it frequently accesses data, which is usually stored in memory (a defined variable lives in memory). For the CPU, reading memory is orders of magnitude slower than reading a register. The result is that the CPU handles most work very quickly, but slows down sharply whenever it has to go to memory.

To address this and improve efficiency, the compiler may optimize the code, turning some operations that would read memory into register reads, thereby reducing the number of memory accesses and improving overall performance. For example:

public class Demo3 {
    static boolean flag = true;
    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> {
            while(flag){

            }
            System.out.println("Loop ended!");
        });
        t1.start();
        Thread.sleep(10);
        flag = false;
        t1.join();
    }
}

Clearly, even after we set flag to false, the thread does not exit the loop. This is the "memory visibility" problem. Because the loop body does nothing else, it spins extremely fast, performing a huge number of load operations (reading flag from memory into a register) followed by cmp operations. The compiler notices that across many loads the value of flag never changes, and that the repeated loads are a waste of time, so it reads flag from memory once on the first iteration, keeps it in a register, and from then on takes flag directly from the register without touching memory. Thread t1 therefore never sees the update, which is exactly the "memory visibility" problem.

The volatile keyword solves this problem. Once a variable is declared volatile, the compiler is not allowed to apply this optimization; the variable must be re-read from memory every time. For example:

public class Demo3 {
    volatile static boolean flag = true;
    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> {
            while(flag){

            }
            System.out.println("Loop ended!");
        });
        t1.start();
        Thread.sleep(10);
        flag = false;
        t1.join();
    }
}

One more thing to note: whether this optimization kicks in is not deterministic; we cannot know in advance when it will or will not be triggered. So for a variable shared between threads like this, it is safer to just use the volatile keyword!

3.2 Cannot guarantee atomicity

Unlike synchronized, the volatile keyword cannot make a compound operation like count++ atomic! For example, the following program will usually print a count less than 20000:

public class Test {
    static volatile int count = 0;
    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> {
            for (int i = 0; i < 10000; i++) {
                count++;
            }
        });
        Thread t2 = new Thread(() -> {
            for (int i = 0; i < 10000; i++) {
                count++;
            }
        });
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println("count = " + count);
    }
}
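One common fix is java.util.concurrent.atomic.AtomicInteger, whose incrementAndGet performs the read-modify-write as a single atomic operation (wrapping count++ in a synchronized block would also work). A sketch of the corrected program (the class name is mine):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicDemo {
    // count++ is really three steps (load, add, store); AtomicInteger
    // collapses the increment into one atomic hardware operation.
    static int run() {
        AtomicInteger count = new AtomicInteger(0);
        Runnable task = () -> {
            for (int i = 0; i < 10000; i++) {
                count.incrementAndGet();   // atomic, unlike count++
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        try {
            t1.start();
            t2.start();
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return count.get();
    }
    public static void main(String[] args) {
        System.out.println("count = " + run());   // always 20000
    }
}
```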

3.3 Disables instruction reordering

For example, the new operation we use all the time can be roughly divided into three steps: 1. allocate a chunk of memory; 2. initialize the object in that memory; 3. assign the reference to the variable. The actual order may be 1 -> 2 -> 3, or it may be 1 -> 3 -> 2. In some cases (e.g. when another thread may read the reference concurrently) we must enforce the order 1 -> 2 -> 3, and declaring the variable volatile does exactly that.
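The textbook case is the double-checked locking singleton: without volatile, a thread could observe a non-null reference to a not-yet-initialized object because steps 2 and 3 were reordered. A minimal sketch of the pattern (the class and its value field are mine, for illustration):

```java
// Double-checked locking. `volatile` on `instance` forbids reordering
// steps 2 and 3 of `new`, so no thread can see a half-built object.
public class Singleton {
    private static volatile Singleton instance;
    private final int value;

    private Singleton() { this.value = 42; }

    public static Singleton getInstance() {
        if (instance == null) {                 // first check, without locking
            synchronized (Singleton.class) {
                if (instance == null) {         // second check, under the lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }

    public int getValue() { return value; }
}
```

Every call returns the same fully initialized object; the lock is taken only on the first, contended initialization.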

Origin blog.csdn.net/m0_74859835/article/details/132779308