Java Concurrency Basics Learning

This article was originally published, with authorization, on the WeChat public account "Hongyang". Please indicate the source when reprinting.

Three basic concepts

  1. Atomicity. An operation, or a series of operations, either executes completely or not at all. A "transaction" in a database is a typical atomic operation.
  2. Visibility. When a thread modifies the value of a shared property, other threads can immediately see the change. For example: the JMM (Java Memory Model) is divided into main memory and working memory, so modifying a shared property means reading it from main memory, copying it to working memory, modifying it there, and then flushing the new value back to main memory. If thread A has completed the modification in its working memory but has not yet flushed the value back to main memory, the value seen by thread B is still the old one. Visibility is therefore not guaranteed.
  3. Ordering. Programs appear to run in the order we wrote the logic, but the computer does not necessarily execute them that way. To improve performance, both the compiler and the processor reorder instructions. There is one premise: the reordered result must be the same as the result of single-threaded execution.
int a = 0;      // statement A
int b = 1;      // statement B
int c = a + b;  // statement C

Statements A and B have no data dependency on each other, so after reordering the computer may execute them as AB or BA. But statement C has a data dependency on both A and B, so A and B are always executed before C.

Several ways to control concurrency in Java

  1. volatile
  2. synchronized
  3. CAS/AQS
  4. the java.util.concurrent package

volatile

volatile guarantees visibility and ordering, but not atomicity.

volatile guarantees visibility

A volatile-modified property guarantees that every read sees the latest value, but it does not (and cannot) update a value that has already been read. When thread A modifies the shared property in its working memory, the change is immediately flushed to main memory, and read/write fences inserted around every access by threads B/C/D produce behavior similar to reading the property value directly from main memory. Note that it is only similar: some articles online claim that reads and writes of volatile variables operate directly on main memory, which is incorrect; they merely exhibit similar behavior. A read/write fence is a CPU instruction. Inserting one tells the CPU and the compiler that the instructions before the fence must complete before it, and the instructions after it must execute after it (ordering). Another role of the fence is to force cache updates across CPUs; for example, a write fence flushes data written before the fence out of the cache, which guarantees visibility.
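The visibility guarantee described above can be sketched with a small demo. This is a minimal illustration, not from the original article; the class and method names (VisibilityDemo, runDemo) are hypothetical. A reader thread spins on a volatile flag, and the writer's update is guaranteed to become visible to it:

```java
public class VisibilityDemo {
    // volatile guarantees the reader always sees the latest value of the flag;
    // without volatile, the reader could cache `running` and spin forever.
    static volatile boolean running = true;

    // Returns true if the reader thread observed running == false and terminated.
    static boolean runDemo() throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (running) {
                // busy-wait; every iteration re-reads the volatile flag
            }
        });
        reader.start();
        Thread.sleep(100);   // let the reader enter its loop
        running = false;     // volatile write: flushed and visible to the reader
        reader.join(2000);   // the reader should exit its loop promptly
        return !reader.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("reader terminated: " + runDemo());
    }
}
```

If `running` were a plain (non-volatile) field, the JIT could hoist the read out of the loop and the reader might never terminate.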

volatile guarantees ordering

When a read/write of a volatile-modified property is performed, all preceding code must already have executed and its results must be visible to subsequent operations. During reordering, a read/write of a volatile property acts as a boundary: code before it cannot be moved after it, and code after it cannot be moved before it. This is how ordering is guaranteed.

{   // thread A
    bean = new Bean();     // statement A
    inited = true;         // statement B
}

{   // thread B
    if (inited) {          // statement C
        bean.getAge();     // statement D
    }
}

Statements A and B have no data dependency in thread A, so they may be reordered so that B executes before A. Suppose thread A is suspended right after executing statement B (statement A has not yet run), and the CPU switches to thread B. Since the bean object has not been initialized, statement D fails. This erroneous reordering cannot occur if the inited property is declared volatile.
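The snippet above is schematic. A fuller version might look like the following; the Bean and InitDemo names are hypothetical, and the key point is that `inited` is declared volatile so statement A can never be reordered after statement B:

```java
public class InitDemo {
    static class Bean {
        int getAge() { return 18; }
    }

    static Bean bean;
    // volatile write to `inited` forbids reordering statement A after statement B,
    // so a thread that sees inited == true is guaranteed to see a constructed bean.
    static volatile boolean inited = false;

    static void writer() {            // thread A
        bean = new Bean();            // statement A
        inited = true;                // statement B: volatile write
    }

    static void reader() {            // thread B
        if (inited) {                 // statement C: volatile read
            bean.getAge();            // statement D: safe, bean is initialized
        }
    }
}
```

This is the same idea that makes the `volatile` field necessary in double-checked locking.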

volatile does not guarantee atomicity

Since volatile guarantees visibility and ordering, concurrent reads/writes of a volatile-modified shared property are generally fine, and it can be regarded as a lightweight alternative to synchronized. But some cases are special, such as the i++ auto-increment. Here is an example.

volatile int a = 0; // statement A
a++;                // statement B

a++ actually involves multiple steps: read a, compute a + 1, and assign the result back to a. Suppose thread A is suspended after the first step, having read 0. Thread B then executes a++, so the value of a in main memory becomes 1. But thread A has already read 0 into its working memory; when it resumes, it computes 0 + 1 and flushes 1 back to main memory. In other words, a++ executed twice, but both executions changed a from 0 to 1, so a ends up as 1 instead of 2. There is a catch here: I said a volatile property always reads the latest value, so after thread B executes a++, why does thread A still use 0? Shouldn't it be 1? The point is that volatile guarantees freshness only at the moment of the read; once a value has been copied into working memory, a later modification by another thread cannot retroactively change the copy. I think this is a rather awkward limitation of volatile. Feel it~
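The standard fix for the lost-update problem above is AtomicInteger, which performs the read-modify-write as one atomic CAS operation. A minimal sketch (the class and method names AtomicIncrement/runDemo are my own):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicIncrement {
    // incrementAndGet() performs read, add, and write as a single atomic CAS,
    // so concurrent increments are never lost, unlike `volatile int a; a++`.
    static int runDemo() throws InterruptedException {
        AtomicInteger counter = new AtomicInteger(0);
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                counter.incrementAndGet(); // atomic read-modify-write
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join();  t2.join();
        return counter.get(); // always 20000
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runDemo());
    }
}
```

With a plain `volatile int`, the same two threads could finish with any total between 10000 and 20000.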

synchronized

synchronized guarantees atomicity, visibility and ordering. Used to decorate methods or code blocks. Here are some rules for synchronized.

  1. Depending on the lock object, a lock can only be held by at most one thread at a time.
  2. If the target lock is already held by some thread, other threads can only block and wait for that thread to release it.
  3. If the current thread already holds the target lock, other threads can still call methods in the target class that are not modified by synchronized.

The above rules also apply to the Lock below.

Example of lock object

synchronized modified method or synchronized(this)


public class Test {

    static SyncTest test1 = new SyncTest();
    static SyncTest test2 = new SyncTest();

    public static void main(String[] args) {

        new Thread(new Runnable() {
            @Override
            public void run() {
                test1.syncTwo();
            }
        }).start();

        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

        test1.syncOne();
    }
}

public class SyncTest {

    public synchronized void syncOne(){
        System.out.println("ThreadId : " + Thread.currentThread().getId());
        System.out.println("one");
    }

    public void syncTwo(){
        synchronized (this) {
            int a =0;
            while(true){
                a++;
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                System.out.println("ThreadId : " + Thread.currentThread().getId());
                if(a == 5) break;
            }
            System.out.println("two");
        }
    }
}

Console output:
ThreadId : 10
ThreadId : 10
ThreadId : 10
ThreadId : 10
ThreadId : 10
two
ThreadId : 1
one

To make sure test1.syncTwo() is called first, the main thread sleeps for 1s. The console output shows that the main thread is indeed blocked until the child thread releases the lock, which indicates that a synchronized method and synchronized(this) both acquire the lock of the instance object.

If test1.syncOne() is replaced with test2.syncOne(), the main thread does not block. The console output is:

ThreadId : 1
one
ThreadId : 10
ThreadId : 10
ThreadId : 10
ThreadId : 10
ThreadId : 10
two

synchronized(xxx.class)/synchronized static

public class SyncTest {

    public synchronized static void syncOne() {
        System.out.println("ThreadId : " + Thread.currentThread().getId());
        System.out.println("one");
    }

    public void syncTwo() {
        synchronized (SyncTest.class) {
            int a = 0;
            while (true) {
                a++;
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                System.out.println("ThreadId : " + Thread.currentThread().getId());
                if (a == 5)
                    break;
            }
            System.out.println("two");
        }
    }
}

If synchronized(this) is replaced with synchronized(SyncTest.class), or the method is declared synchronized static, then the main thread blocks regardless of whether test1.syncOne() or test2.syncOne() is called. Writing synchronized(SyncTest.class)/synchronized static ensures that synchronized access to the SyncTest class as a whole is held by at most one thread at a time.

synchronized implements concurrency by blocking. The larger the scope a synchronized block covers, the bigger the bottleneck it becomes. To mitigate this, the usual techniques are to reduce the scope of locks, reduce lock granularity, and split locks (lock striping). For reasons of space, please look up the details yourself.
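The steps above can be sketched with a toy example of reducing lock granularity. This is my own illustration, not from the original article: instead of synchronizing every method on `this` (one big lock), two independent counters are guarded by two separate lock objects, so updating one never blocks updates to the other:

```java
public class FineGrainedCounter {
    // Separate lock objects: a thread recording a read never contends
    // with a thread recording a write.
    private final Object readLock  = new Object();
    private final Object writeLock = new Object();
    private int reads;
    private int writes;

    public void recordRead() {
        synchronized (readLock) { reads++; }   // contends only with other read-recorders
    }

    public void recordWrite() {
        synchronized (writeLock) { writes++; } // contends only with other write-recorders
    }

    public int getReads()  { synchronized (readLock)  { return reads;  } }
    public int getWrites() { synchronized (writeLock) { return writes; } }
}
```

Lock striping (as in the old segmented ConcurrentHashMap) generalizes this idea: many locks, each guarding a slice of the data.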

CAS

CAS stands for compare-and-swap. A code block locked by synchronized can be accessed by only one thread at a time, which makes it a pessimistic lock. In contrast to this pessimistic approach, which suspends threads, CAS implements an optimistic lock. A CAS operation consists of three parts:

  1. memory address A
  2. Expected old value B
  3. Expected new value C

When performing a CAS operation, the current value at address A is compared with B; if they are equal, C is written to A and true is returned. Otherwise false is returned. CAS is usually wrapped in a loop that keeps retrying the update, and this retrying is how concurrent updates succeed. The pseudocode is as follows:

// pseudocode
public boolean compareAndSwap(Address memoryA, int oldB, int newC) {
    if (memoryA.get() == oldB) {   // current value matches the expected old value
        memoryA.set(newC);         // install the new value
        return true;
    }
    return false;                  // another thread changed the value first
}

Compared with synchronized, CAS saves the overhead of suspending and resuming threads, but if updates keep failing under contention, the retry loop wastes CPU resources.
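The "loop around CAS" pattern described above can be written directly against AtomicInteger's compareAndSet. A minimal sketch (the CasSpinIncrement class name is my own):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasSpinIncrement {
    // Spin-loop CAS increment: re-read the current value and retry
    // until compareAndSet succeeds.
    static int incrementAndGet(AtomicInteger value) {
        while (true) {
            int current = value.get();        // read the expected old value
            int next = current + 1;           // compute the new value
            if (value.compareAndSet(current, next)) {
                return next;                  // CAS succeeded: no interference
            }
            // CAS failed: another thread updated the value first; retry
        }
    }

    public static void main(String[] args) {
        AtomicInteger a = new AtomicInteger(0);
        System.out.println(incrementAndGet(a)); // prints 1
    }
}
```

This is essentially how AtomicInteger.incrementAndGet itself is built on top of the low-level CAS primitive.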

CAS involves a "check then act" operation, which cannot be done safely in pure Java, so CAS is actually implemented in C++ via CPU instructions. In Java, CAS surfaces through the Unsafe class, which obtains the memory address of the field directly; the CAS itself is implemented by the C++ method Atomic::cmpxchg. That method emits a CPU instruction with a lock prefix, which locks the bus during execution so that other processors temporarily cannot access memory through the bus. For details, please refer to this article. In my view, CAS is still essentially a blocking implementation relative to synchronized; the blocking just happens at a much finer granularity (the CPU instruction level).

AQS

AQS (AbstractQueuedSynchronizer) maintains a volatile int field named "state" and a doubly linked queue of waiting threads. By applying a series of CAS operations (via Unsafe) to "state", effectively using it as a flag, it implements exclusive locks and shared locks, each of which comes in fair and unfair variants.

  1. Exclusive lock: Only one thread holds the same lock at the same time, and the remaining threads are queued in the linked list.
  2. Shared lock: Multiple threads can hold the same lock at the same time.
  3. Fair lock: after the lock is taken by a thread, the remaining threads queue for it; waiting threads are placed in the linked list in FIFO order.
  4. Unfair lock: queuing works the same way, but when the lock has just been released, a newly arriving thread may compete for it directly with the queued thread that is about to be woken up.
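The mechanics above can be made concrete with the classic minimal exclusive lock built on AQS: state 0 means free, 1 means held, and acquisition is a single CAS on state. This is a sketch under those assumptions (the SimpleMutex name is my own), not a production lock (for instance, it does not track the owner or support reentrancy):

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

public class SimpleMutex {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int arg) {
            // One CAS on the volatile "state" field: 0 (free) -> 1 (held).
            return compareAndSetState(0, 1);
        }

        @Override
        protected boolean tryRelease(int arg) {
            setState(0);       // mark the lock free
            return true;       // tell AQS to wake the next queued thread
        }

        @Override
        protected boolean isHeldExclusively() {
            return getState() == 1;
        }
    }

    private final Sync sync = new Sync();

    public void lock()   { sync.acquire(1); }  // enqueues and parks the thread if the CAS fails
    public void unlock() { sync.release(1); }  // unparks the successor in the queue
}
```

AQS supplies the queueing, parking, and waking; the subclass only defines what "acquired" means in terms of state.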

Regarding AQS, I found two relatively good articles, so I won't go into details here. If you want to know more, you can look at the source code.

Deep Analysis of Java 8: Implementation Analysis of JDK1.8 AbstractQueuedSynchronizer (Part 1)

Deep Analysis of Java 8: Implementation Analysis of AbstractQueuedSynchronizer (Part 2)

concurrent

The JDK provides many common concurrency classes and concurrent containers in java/util/concurrent. The concurrency classes are basically implemented with locks (CAS/AQS), and the concurrent containers with synchronized plus locks (CAS/AQS). This article covers the foundations; if there is a chance, I will gradually fill these in. The author's level is limited, so corrections are welcome if there are any errors~
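As a small taste of the package, here is a sketch combining one concurrent container (ConcurrentHashMap) and one concurrency class (CountDownLatch); the ConcurrentDemo/countHits names are my own:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;

public class ConcurrentDemo {
    // Four worker threads each add 1000 hits to the same key.
    // merge() is atomic per key, and the latch lets the caller wait for all workers.
    static int countHits() throws InterruptedException {
        ConcurrentHashMap<String, Integer> hits = new ConcurrentHashMap<>();
        CountDownLatch done = new CountDownLatch(4);
        for (int i = 0; i < 4; i++) {
            new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    hits.merge("page", 1, Integer::sum); // atomic per-key update
                }
                done.countDown(); // signal this worker is finished
            }).start();
        }
        done.await();            // block until the count reaches zero
        return hits.get("page"); // always 4000
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(countHits());
    }
}
```

With a plain HashMap (and no latch), this code would both corrupt the map and read it before the workers finished.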
