How volatile achieves visibility

For example, take the following piece of code: one thread waits for another thread to finish loading data and then prints a success message, but when the program runs, the waiting thread gets stuck in the while loop and never moves on.

public class VolatileDemo {
    private static boolean flag = false;
    //private static volatile boolean flag = false;

    public static void main(String[] args) throws Exception{
        new Thread(()->{
            System.out.println ( "waiting for loading data ...." );
             the while (! In Flag) {
            }
            System.out.println("====== SUCCESS =====");
        }).start();
        Thread.sleep(2000);
        new Thread(()->{
            System.out.println ( "start loading" );
            flag = true;
            System.out.println ( "Load Done" );
        }).start();
    }
}
/* Console output
        waiting for loading data....
        start loading
        Load Done
 */

The cause of this problem lies in the JMM's atomic operations. JMM stands for the Java Memory Model, or more precisely the Java thread memory model. It is modeled on the CPU cache model, i.e. it is built on top of how CPU caches work.
The JMM defines 8 atomic operations in total:
read: read the variable's value from main memory
load: place the value read from main memory into the thread's working memory
use: pass the value in working memory to the execution engine for computation
assign: put the computed value back into the thread's working memory
store: transfer the value in working memory to main memory
write: write the stored value into the variable in main memory
lock: mark a variable in main memory as exclusively owned by one thread
unlock: release the lock on a main-memory variable so that other threads can lock it
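Roughly speaking (this mapping is only a rough sketch for illustration, not a precise trace from the original post), the demo's two key statements correspond to these operations as follows:

// Sketch (own annotation, for illustration only): mapping the demo's two key
// statements onto the JMM atomic operations listed above.
public class JmmTraceSketch {
    static boolean flag = false;   // the shared variable lives in main memory

    public static void main(String[] args) {
        // The writer thread's `flag = true;` roughly corresponds to:
        //   assign -> put the value true into the thread's working-memory copy
        //   store  -> transfer that working-memory value towards main memory
        //   write  -> overwrite flag in main memory with true
        flag = true;

        // The reader thread's `while (!flag)` roughly corresponds to:
        //   read  -> fetch flag's value from main memory
        //   load  -> place it into the thread's working memory
        //   use   -> hand the working-memory copy to the execution engine for the test;
        //            without volatile the loop may keep re-using this stale copy.
        while (!flag) {
        }
    }
}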
  
We can see that thread 1 loads a copy of the variable into its working memory, and thread 2 later computes the new value and stores it back to main memory, but there is no way to notify thread 1 of the change, so a visibility (thread-safety) problem arises. In fact, all interaction between the CPUs and main memory goes through the bus, and there are two options for solving this kind of data inconsistency between CPUs:
Bus locking (poor performance)
  Early CPUs simply locked the bus: the data being operated on was locked so that other threads could neither read nor write it until the thread using it finished and released the lock. In other words, the lock is held from the start of the read all the way to the end of the write.
MESI cache coherency protocol
  Multiple threads read the same data into their own caches; when one CPU modifies its cached copy, the change is immediately synchronized back to main memory (this is implemented at the assembly/instruction level). Through a bus sniffing mechanism (which can be thought of as a listener), the other CPUs notice that the data has changed, invalidate their own cached copies, and go back to main memory to read the new value. So the MESI protocol only locks around the store, which makes the lock granularity smaller and the locking time shorter. This is in fact how volatile achieves visibility. Because there are still a few steps between store and write, and invalidating the other CPUs' cached copies also takes time, the data may be changed by another thread in the middle of this process, so volatile is not atomic.
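Tying this back to the demo: declaring flag as volatile (the commented-out line in the code above) is enough to make the spinning thread see the update. Below is a minimal sketch of the fixed version; the exact console output is an assumption based on the behaviour described above, and the order of the last two lines may vary between runs.

public class VolatileDemo {
    // volatile: every write to flag is flushed to main memory and the other
    // CPUs' cached copies are invalidated, so the spinning thread re-reads
    // the fresh value instead of its stale working-memory copy.
    private static volatile boolean flag = false;

    public static void main(String[] args) throws Exception {
        new Thread(() -> {
            System.out.println("waiting for loading data....");
            while (!flag) {
            }
            System.out.println("====== SUCCESS =====");
        }).start();
        Thread.sleep(2000);
        new Thread(() -> {
            System.out.println("start loading");
            flag = true;
            System.out.println("Load Done");
        }).start();
    }
}
/* Console output (assumed; order of the last two lines may vary)
        waiting for loading data....
        start loading
        Load Done
        ====== SUCCESS =====
 */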

Origin www.cnblogs.com/wlwl/p/11920689.html