Java concurrent programming: volatile implementation and principles

The previous blog post covered the Java memory model and laid out its foundations. This article discusses the volatile keyword, which plays an important role in concurrent programming.

Before Java 5.0, volatile was a controversial keyword; since Java 5 it has had well-defined semantics and a new lease on life. The role of the volatile keyword is to ensure the visibility of variables across threads, and it sits at the core of the java.util.concurrent (JUC) package.

The memory-model basics already mentioned that the JVM divides memory into heap and stack: heap memory is shared between threads, while stack memory is private to each thread and invisible to others. To guarantee the visibility of a variable, it can be declared volatile. Why does the volatile modifier guarantee visibility? The following sections analyze this.

Volatile can be seen as a lightweight lock. That description is not entirely accurate, but volatile does share some characteristics with locks. The difference is that a lock guarantees atomicity, whether for a single variable or a block of code, while a volatile variable only guarantees that its value is propagated between threads, i.e. visibility, not atomicity.

So the statement above carries two layers of semantics:

  • Guarantees visibility, not atomicity (see the sketch after this list)
  • Disallow reordering of instructions (reordering would break the memory semantics of volatile)
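As a quick illustration of the first point, here is a minimal sketch (the class and counter names are invented for this example) showing that volatile guarantees visibility but not atomicity: count++ is a read-modify-write sequence, so concurrent increments can still be lost even though the field is volatile.

public class VolatileCounterDemo {
    // volatile makes each write visible to other threads,
    // but count++ is read-modify-write and therefore not atomic
    private static volatile int count = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 10000; j++) {
                    count++; // two threads can interleave here and lose updates
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        // Expected 40000, but the printed value is usually smaller
        System.out.println("count = " + count);
    }
}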

Reads and writes of volatile variables enable communication between threads.

volatile memory semantics

  • Write memory semantics: When writing a volatile variable, JMM will flush the shared variable in the local memory corresponding to the thread to the main memory.
  • Read memory semantics: When reading a volatile variable, JMM invalidates the thread's corresponding local memory, and the thread then reads the shared variable from main memory.

Initially, flag and a in the local memory of both threads are in their initial state. After thread A writes the flag variable, the updated values of the two shared variables in thread A's local memory are flushed to main memory. After thread B reads the flag variable, the values held in its local memory are invalidated, so thread B must read the shared variables from main memory.
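The scenario described above is the classic flag/data hand-off pattern. A minimal sketch of it (class and method names are invented for illustration):

public class VolatileFlagExample {
    private int a = 0;
    private volatile boolean flag = false;

    // executed by thread A
    public void writer() {
        a = 1;          // 1. ordinary write to the shared variable
        flag = true;    // 2. volatile write: flushes a and flag from local memory to main memory
    }

    // executed by thread B
    public void reader() {
        if (flag) {     // 3. volatile read: invalidates local memory and reloads from main memory
            int i = a;  // 4. guaranteed to observe a == 1 here
            System.out.println(i);
        }
    }
}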

volatile memory semantics implementation

In order to implement the memory semantics of volatile, the compiler inserts memory barriers into the instruction sequence when generating bytecode, which prohibits specific types of processor reordering. The JMM adopts a conservative strategy; the rules are as follows (the sketch after the list shows where each barrier lands):

  • Insert a StoreStore barrier in front of each volatile write
  • Insert a StoreLoad barrier after each volatile write
  • Insert a LoadLoad barrier after each volatile read
  • Insert a LoadStore barrier after each volatile read
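To make the conservative strategy concrete, the sketch below marks as comments where the four barriers from the list above would conceptually sit around a volatile write and a volatile read (the field names are invented for illustration):

public class BarrierPlacementSketch {
    int x;              // ordinary field
    volatile int v;     // volatile field

    void write() {
        x = 1;          // ordinary write
        // StoreStore barrier: earlier ordinary writes cannot be reordered below the volatile write
        v = 2;          // volatile write
        // StoreLoad barrier: the volatile write cannot be reordered with subsequent volatile reads/writes
    }

    void read() {
        int r1 = v;     // volatile read
        // LoadLoad barrier: later ordinary reads cannot be reordered above the volatile read
        // LoadStore barrier: later ordinary writes cannot be reordered above the volatile read
        int r2 = x;     // ordinary read
    }
}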

Looking at the volatile keyword from the assembly level

I remember reading an article about variables modified by the volatile keyword: after the code is compiled down to assembly, an extra LOCK-prefixed instruction is emitted right after the write to the variable.

Java code: instance = new Singleton(); // instance is a volatile variable

Assembly code: 0x01a3de1d: movb $0x0,0x1104800(%esi); 0x01a3de24: lock addl $0x0,(%esp);

From the code above, we can see that a volatile-modified variable produces an extra lock instruction, and the LOCK prefix works at the hardware level: it causes the processor to assert a LOCK# signal while executing the instruction, which locks the bus.

Let's see what the LOCK instruction does:

  • Lock the bus: read and write requests from other CPUs to memory are blocked until the lock is released; because locking the entire bus is too costly, later processors lock the cache line instead of the bus
  • The write performed under the lock writes the modified data back and invalidates the corresponding cache lines on other CPUs, so those CPUs reload the latest data from main memory
  • It is not a memory barrier itself, but it performs a similar function: instructions on either side of it cannot be reordered across it

An example

First, a thread named thread is started. Because isOver == false, the while loop in its run method spins forever. The main thread then sets the flag isOver = true, intending to terminate the spin loop so that the thread can finish and exit. However, things do not go as expected: in practice the program never stops, i.e. the isOver seen by the thread is still false and the loop keeps spinning.

public class VolatileDemo {
    private static boolean isOver = false;

    public static void main(String[] args) {
        Thread thread = new Thread(new Runnable() {
            @Override
            public void run() {
                while (!isOver) ; // busy-wait: without volatile, this may keep reading a stale false forever
            }
        });
        thread.start();
        try {
            Thread.sleep(500);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        isOver = true; // without volatile, this write may never become visible to the spinning thread
    }
}

So why does this happen, and how can it be fixed?

The corrected code is:

public class VolatileDemo {
    private static volatile boolean isOver = false; // now declared volatile

    public static void main(String[] args) {
        Thread thread = new Thread(new Runnable() {
            @Override
            public void run() {
                while (!isOver) ;
            }
        });
        thread.start();
        try {
            Thread.sleep(500);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        isOver = true; // volatile write: becomes visible to the spinning thread, which then exits
    }
}

Note the difference: isOver is now declared volatile. After the main thread sets isOver to true, the copy of the variable in the other thread's working memory is invalidated, so the value must be re-read from main memory. The thread now sees the latest value of isOver, true, which ends the spin loop and lets the thread stop cleanly. Problem solved, knowledge gained :).

Summary

Compared with a lock, volatile is a more lightweight mechanism for communication between threads. Volatile only guarantees atomicity for reads and writes of a single volatile variable, whereas the mutual exclusion of a lock guarantees that an entire critical section executes atomically. In terms of functionality a lock is more powerful than volatile; in terms of scalability and execution performance volatile has the advantage, but volatile cannot replace a lock.

Application scenarios

  • Status flag variables (as in the example above)
  • Double-checked locking (see the sketch below)
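The instance = new Singleton() line in the assembly section above is exactly the double-checked locking case. A minimal sketch (this Singleton class is illustrative), where the volatile on instance is what keeps other threads from observing a partially constructed object:

public class Singleton {
    // volatile forbids reordering between constructing the object and publishing the reference,
    // so other threads never see a half-initialized instance
    private static volatile Singleton instance;

    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {                 // first check, without locking
            synchronized (Singleton.class) {
                if (instance == null) {         // second check, while holding the lock
                    instance = new Singleton(); // volatile write
                }
            }
        }
        return instance;
    }
}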

References:

The Art of Java Concurrency Programming

http://blog.csdn.net/eff666/article/details/67640648
