Three problems of concurrency in Java

 Reprinted from: https://javadoop.com/post/java-memory-model#toc10

1.  Reordering

First, run the following code:

import java.util.concurrent.CountDownLatch;

public class Test {

    private static int x = 0, y = 0;
    private static int a = 0, b = 0;

    public static void main(String[] args) throws InterruptedException {
        int i = 0;
        for(;;) {
            i++;
            x = 0; y = 0;
            a = 0; b = 0;
            CountDownLatch latch = new CountDownLatch(1);

            Thread one = new Thread(() -> {
                try {
                    latch.await();
                } catch (InterruptedException e) {
                }
                a = 1;
                x = b;
            });

            Thread other = new Thread(() -> {
                try {
                    latch.await();
                } catch (InterruptedException e) {
                }
                b = 1;
                y = a;
            });
            one.start(); other.start();
            latch.countDown();
            one.join(); other.join();

            String result = "Round " + i + ": (" + x + "," + y + ")";
            if(x == 0 && y == 0) {
                System.err.println(result);
                break;
            } else {
                System.out.println(result);
            }
        }
    }
}

After running for a few seconds, the loop terminates with x == 0 && y == 0. If you look closely at the code, this result is impossible when each thread executes its two statements in program order; it can only be explained by reordering.

Reordering is caused by several mechanisms:

  1. Compiler optimization: for operations without data dependencies, the compiler may reorder them to some degree during compilation.

    Take a closer look at the code in thread 1. The compiler may swap a = 1 and x = b, because there is no data dependency between them. The same holds for thread 2, so if both threads run their second statement first, the result x == y == 0 follows naturally.

  2. Instruction reordering: the CPU itself may also reorder instructions that have no data dependencies.

    This is similar in effect to compiler optimization: even if the compiler emits the instructions in program order, the CPU can still execute them out of order.

  3. Memory system reordering: the memory system does not literally reorder instructions, but because of caches, the program as a whole can exhibit out-of-order behavior.

    Assume there is no compiler or instruction reordering. Thread 1 modifies a, but the new value may sit in its cache without being written back to main memory, so thread 2 can naturally still read a == 0. Likewise, thread 2's assignment to b may not be flushed to main memory in time.
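All three mechanisms above are defeated by establishing a happens-before edge. As a minimal sketch (not from the original article; the class and field names data and ready are illustrative), a volatile write/read pair forbids these reorderings:

```java
// Sketch: a volatile write/read pair forbids the reorderings described
// above. Class and field names (data, ready) are illustrative.
public class SafePublication {
    static int data = 0;
    static volatile boolean ready = false;

    // Returns the value the reader observed after the volatile handoff.
    static int publishAndRead() {
        final int[] seen = new int[1];
        Thread writer = new Thread(() -> {
            data = 42;        // ordinary write...
            ready = true;     // ...may not be reordered after this volatile write
        });
        Thread reader = new Thread(() -> {
            while (!ready) { }    // spin on the volatile read
            seen[0] = data;       // happens-before guarantees data == 42 here
        });
        reader.start();
        writer.start();
        try {
            reader.join();
            writer.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return seen[0];
    }

    public static void main(String[] args) {
        System.out.println("reader saw data = " + publishAndRead());
    }
}
```

Because ready is volatile, the write data = 42 happens-before any read that sees ready == true; neither the compiler, the CPU, nor the cache may let the reader observe ready == true while data is still 0.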

2. Memory Visibility

The discussion of reordering above already touched on memory visibility; let's now cover it explicitly.

The visibility problem for shared variables between threads is not directly caused by multiple cores, but by multiple caches. If all cores shared a single cache, there would be no memory visibility problem.

In modern multi-core CPUs, each core has its own L1 cache (and often a private L2 cache, and so on). The problem arises in these per-core caches: each core reads the data it needs into its own cache, and after modifying the data it writes the result back to that cache first, where it waits to be flushed to main memory. As a result, other cores may read a stale value.

As a high-level language, Java shields us from these low-level details and uses the JMM (Java Memory Model) to define a specification for reading and writing memory. Although we no longer need to care about L1 and L2 caches, the JMM abstracts them into the concepts of main memory and local memory.

All shared variables live in main memory, each thread has its own local memory, and threads read and write shared data through their local memory, so the visibility problem still exists. Local memory here is not a real piece of memory allocated to each thread; it is a JMM abstraction covering registers, L1 caches, L2 caches, and so on.
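To make the local-memory abstraction concrete, here is a minimal sketch (mine, not from the article; the class name and timings are illustrative) of the classic visibility fix: a volatile stop flag. Without volatile, the JIT may hoist the flag read out of the loop and the worker can spin forever on its stale cached value.

```java
// Sketch: a volatile stop flag guarantees the worker sees the update.
// The class name StopFlag and the timings are illustrative.
public class StopFlag {
    private volatile boolean stopped = false;   // drop volatile -> may hang

    // Starts a spinning worker, stops it, and reports whether it terminated.
    boolean runWorker() {
        Thread worker = new Thread(() -> {
            while (!stopped) {
                // busy-wait until the main thread's write becomes visible
            }
        });
        worker.start();
        try {
            Thread.sleep(100);   // let the worker enter its loop
            stopped = true;      // volatile write: guaranteed visible
            worker.join(2000);   // should return almost immediately
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return !worker.isAlive();
    }

    public static void main(String[] args) {
        System.out.println("worker stopped: " + new StopFlag().runWorker());
    }
}
```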

3. Atomicity

Atomicity is not the focus of this article; it is introduced here only as one more aspect of concurrent programming to keep in mind.

When it comes to atomicity, long and double should come to mind. Their values occupy 64 bits of memory, and as the Java Language Specification mentions, a write of a 64-bit value may be split into two 32-bit writes. What should be a single assignment becomes two operations: writing the low 32 bits and writing the high 32 bits. If another thread reads the value in between, it will inevitably see a strange half-written value.

At this point we need the volatile keyword: the JMM stipulates that for volatile long and volatile double, the JVM must guarantee that reads and writes are atomic.
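A hedged sketch of that rule (class, field, and constant names are mine): with volatile, a reader can only ever observe a complete value, never one mixing the 32-bit halves of two different writes. Note that on common 64-bit JVMs plain long access is atomic anyway, so this check passes trivially there; tearing is only permitted on 32-bit implementations.

```java
// Sketch (per JLS §17.7): volatile makes 64-bit reads/writes atomic,
// so only the complete values A and B (or the initial 0) can be observed.
public class VolatileLong {
    private volatile long value = 0L;   // without volatile, tearing is allowed

    static final long A = 0x0000_0001_0000_0001L;  // both 32-bit halves nonzero
    static final long B = 0xFFFF_FFFE_FFFF_FFFEL;  // bitwise complement of A

    // Writes A and B alternately while the caller reads; reports tearing.
    static boolean observedTearing() {
        VolatileLong v = new VolatileLong();
        Thread writer = new Thread(() -> {
            for (int i = 0; i < 1_000_000; i++) {
                v.value = (i & 1) == 0 ? A : B;
            }
        });
        writer.start();
        boolean torn = false;
        while (writer.isAlive()) {
            long seen = v.value;   // atomic volatile read
            if (seen != 0L && seen != A && seen != B) {
                torn = true;       // a mixed high/low half was observed
            }
        }
        try {
            writer.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return torn;
    }

    public static void main(String[] args) {
        System.out.println("torn read observed: " + observedTearing());
    }
}
```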

In addition, reads and writes of references are always atomic, whether on a 32-bit or a 64-bit machine.
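As a small sketch of this guarantee (class and field names are mine), readers sampling a reference concurrently with writes only ever see one of the complete objects that was assigned, never a torn reference; the field is volatile here only so each sample is a fresh read, not because reference atomicity requires it.

```java
// Sketch: reference assignment is atomic, so a concurrent reader
// always sees one of the whole Strings, never a torn reference.
public class RefSwap {
    private volatile String current = "initial";  // volatile only for fresh reads

    // Swaps between two Strings while the caller samples the reference.
    static boolean sawOnlyWholeRefs() {
        RefSwap r = new RefSwap();
        Thread writer = new Thread(() -> {
            for (int i = 0; i < 500_000; i++) {
                r.current = (i & 1) == 0 ? "alpha" : "beta";
            }
        });
        writer.start();
        boolean ok = true;
        while (writer.isAlive()) {
            String s = r.current;   // atomic reference read
            ok &= s.equals("initial") || s.equals("alpha") || s.equals("beta");
        }
        try {
            writer.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return ok;
    }

    public static void main(String[] args) {
        System.out.println("only whole references seen: " + sawOnlyWholeRefs());
    }
}
```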

The Java Language Specification also says that JVM implementers are encouraged to guarantee atomicity for all operations on 64-bit values, and that users are encouraged to use volatile or proper synchronization. The key word is "encouraged".

