The Java Way (Multithreading): JMM Architecture and the volatile Keyword (II)

  It has been more than two months since my last blog post; I'm not even sure what kept me so busy before the New Year.

  A while back I wrote a post about volatile, but I felt it didn't go deep enough, so this time we continue with the volatile keyword.

Review:

  First, let's quickly go over what we covered before. Last time we talked about the MESI cache coherence protocol and its four states, M (Modified), E (Exclusive), S (Shared), and I (Invalid), as well as the three major characteristics of concurrent programming: atomicity, visibility, and ordering. We also briefly mentioned the volatile keyword: it guarantees visibility, meaning that when a volatile-modified variable changes, the change is immediately flushed back to main memory. Now let's get into the content of this post.

Thread:

  So what is a thread? This is a question we often get asked in interviews. The official description goes roughly like this: when a modern operating system runs a program, it creates a process. For example, starting a Java program makes the operating system create a Java process. In a modern operating system, the thread is the smallest unit of CPU scheduling. For example, starting QQ starts a process, while starting a QQ voice call inside it is work carried out by a thread.

  Threads can further be divided into kernel-level threads and user-level threads. The threads we use in the Java virtual machine are generally treated as user-level threads, meaning the JVM applies for CPU time slices to carry out our thread operations. Kernel-level threads, by contrast, are scheduled onto the CPU by the operating system itself; to keep things safe, our threads are generally managed through the virtual machine.

  User-level threads: threads implemented in the user program without kernel support, independent of the operating system kernel. The application process uses a thread library to create, synchronize, schedule, and manage its threads. Because user threads are created and managed by the application process through the thread library rather than by the kernel, no user-mode/kernel-mode switch is needed, which makes them fast. However, the operating system kernel is unaware of the multiple threads, so if one thread blocks, the whole process (including all of its threads) blocks. And since the processor allocates time slices with the process as the basic unit, the execution time each individual thread gets is relatively reduced.
  Kernel-level threads: all thread management operations are done by the operating system kernel. The kernel keeps each thread's state and context information, so when one thread makes a blocking system call, the kernel can schedule other threads of the same process. On a multiprocessor system, the kernel can assign multiple threads of the same process to run on multiple processors, improving the parallelism of the process. Because creating, scheduling, and managing threads is done by the kernel, these operations are much slower than for user-level threads, though still faster than creating and managing processes. Most operating systems on the market, such as Windows and Linux, support kernel-level threads.
  User-level threads are what we often call ULT, and kernel-level threads are what we call KLT. Switching a thread from user mode to kernel mode costs a lot of performance and time; we will come back to this later when we talk about the synchronized lock-upgrade process.

Context switching:

  As we said above, the virtual machine applies for CPU time slices on behalf of our threads to carry out their work, but a thread is not necessarily executed immediately, and that is where context switching comes in. Roughly, it works like this:

  Thread A is running but has not finished when its time slice runs out, so we need to suspend thread A and let the CPU execute thread B; when thread B's time slice is used up, thread A has to continue running, and that involves a context switch. Taking a temporarily suspended thread and running it again can be understood as a context switch (that is the simplest way to think about it).

Visibility:

   A variable modified with the volatile keyword is guaranteed to be visible: when a volatile variable is modified, the new value is immediately flushed back to main memory, so other threads can perceive that the variable has changed. Let's look at an example.

public class VolatileVisibilitySample {
    private volatile boolean initFlag = false;

    public void refresh(){
        this.initFlag = true;
        String threadname = Thread.currentThread().getName();
        System.out.println("线程:"+threadname+":修改共享变量initFlag");
    }

    public void load(){
        String threadname = Thread.currentThread().getName();
        int i = 0;
        while (!initFlag){

        }
        System.out.println("线程:"+threadname+"当前线程嗅探到initFlag的状态的改变"+i);
    }

    public static void main(String[] args){
        VolatileVisibilitySample sample = new VolatileVisibilitySample();
        Thread threadA = new Thread(()->{
            sample.refresh();
        },"threadA");

        Thread threadB = new Thread(()->{
            sample.load();
        },"threadB");

        threadB.start();
        try {
             Thread.sleep(2000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        threadA.start();
    }

}

We create a shared boolean variable modified with volatile. The refresh method modifies this shared variable, and the load method loops endlessly checking the volatile-modified variable. We start the two threads, run the program, and see the following result.

 In other words, after our variable is modified, the other thread perceives that the variable has changed; this is exactly the visibility we talked about: the change is immediately flushed back to main memory.
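As a quick counter-check (a variation I wrote myself, not part of the original sample): if we drop the volatile modifier, nothing forces the reading thread to re-read initFlag from main memory, so its loop may spin forever.

// Minimal sketch, assuming volatile is removed from the flag: the reader may never
// observe the writer's update and the program may never terminate.
public class NoVolatileVisibilitySample {
    private static boolean initFlag = false;   // note: NOT volatile

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!initFlag) {
                // busy-wait; without volatile (or any synchronization) the JIT may hoist
                // the read out of the loop, so this loop might never exit
            }
            System.out.println("reader finally saw initFlag = true");
        }, "reader");
        reader.start();

        Thread.sleep(1000);
        initFlag = true;                       // this write may never become visible to the reader
        System.out.println("writer set initFlag = true");
    }
}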

Ordering:

  When talking about ordering, we have to mention a few related concepts: instruction reordering, as-if-serial semantics, and the happens-before principle.

  Instruction reordering: the Java Language Specification requires the JVM to maintain sequential semantics within a thread. That is, as long as the final result of the program is the same as the result of executing it strictly in order, the actual execution order of instructions is allowed to differ from the code order; this is called instruction reordering. What is the point of it? The JVM can reorder machine instructions according to the characteristics of the processor (multi-level CPU caches, multi-core processors, and so on), so that the instructions better match the CPU's execution characteristics and squeeze the maximum performance out of the machine.

  Instruction reordering generally happens at two stages: when the class is compiled into bytecode, and when the bytecode is executed by the CPU.

  The as-if-serial semantics mean: no matter how the compiler and processor reorder instructions (to improve parallelism), the execution result of a (single-threaded) program must not change. The compiler, the runtime, and the processor must all obey as-if-serial semantics. To comply with it, the compiler and processor will not reorder operations that have data dependencies on each other, because such reordering would change the execution result. Operations with no data dependency between them, however, may be reordered by the compiler and the processor.
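  A tiny single-threaded illustration of as-if-serial (my own example, not from the original post): the two independent assignments below may be reordered with each other, but neither may be moved past the computation that depends on them, so the observable result never changes.

// as-if-serial sketch: statements with no data dependency may be reordered,
// but the single-threaded result must stay the same.
public class AsIfSerialSample {
    public static void main(String[] args) {
        int width  = 3;                  // (1) independent of (2): the two may be reordered
        int height = 4;                  // (2) independent of (1)
        int area   = width * height;     // (3) depends on (1) and (2): can never run before them
        System.out.println(area);        // always prints 12, whatever order (1) and (2) ran in
    }
}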

  The happens-before principle consists of the following rules:

  1. Program order rule: within a single thread, semantic serialization must be preserved, which is to say the code behaves as if it executes in program order.

  2. Monitor lock rule: an unlock operation necessarily happens before a subsequent lock operation on the same lock. In other words, if a lock is released and then acquired again, the later acquisition must come after the earlier release (for the same lock).

  3. volatile rule: a write to a volatile variable happens before a subsequent read of it, which guarantees the visibility of volatile variables. Put simply, every time a thread accesses a volatile variable it is forced to read its value from main memory, and whenever the variable changes the latest value is forced back to main memory, so at any moment different threads always see the variable's latest value.

  4. Thread start rule: a thread's start() method happens before every action of that thread. That is, if thread A modifies a shared variable before starting thread B, the modification is visible to thread B once it runs (see the sketch after this list).

  5. Transitivity: if A happens before B and B happens before C, then A necessarily happens before C.

  6. Thread termination rule: every operation of a thread happens before the termination of that thread. Thread.join() waits for the target thread to terminate, so if thread B modifies a shared variable before it terminates, the modification is visible to thread A after A successfully returns from B's join() method.

  7. Thread interruption rule: a call to a thread's interrupt() method happens before the interrupted thread's code detects the interruption; Thread.interrupted() can be used to check whether the thread has been interrupted.

  8. Finalizer rule: the completion of an object's constructor happens before the start of its finalize() method.
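  A small sketch of rules 4 and 6 (my own example): the write made before start() is visible inside the started thread, and the started thread's write is visible after join() returns, even though nothing here is volatile or synchronized.

// happens-before sketch for the thread start rule (4) and the termination/join rule (6).
public class HappensBeforeStartJoinSample {
    private static int before = 0;   // deliberately not volatile
    private static int after  = 0;

    public static void main(String[] args) throws InterruptedException {
        before = 42;                              // written before start()
        Thread worker = new Thread(() -> {
            System.out.println("worker sees before = " + before); // guaranteed 42 by rule 4
            after = 99;                           // written before the worker terminates
        });
        worker.start();                           // start() happens-before every action in worker
        worker.join();                            // worker's actions happen-before join() returning
        System.out.println("main sees after = " + after);         // guaranteed 99 by rule 6
    }
}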

  Let's look at a piece of code to see the instruction reordering problem.

public class VolatileReOrderSample {
    private static int x = 0, y = 0;
    private static int a = 0, b = 0;

    public static void main(String[] args) throws InterruptedException {
        int i = 0;

        for (; ; ) {
            i++;
            x = 0;
            y = 0;
            a = 0;
            b = 0;
            Thread t1 = new Thread(new Runnable() {
                public void run() {
                    a = 1;
                    x = b;
                }
            });
            Thread t2 = new Thread(new Runnable() {
                public void run() {
                    b = 1;
                    y = a;
                }
            });
            t1.start();
            t2.start();
            t1.join();
            t2.join();

            String result = "第" + i + "次 (" + x + "," + y + ")";
            if (x == 0 && y == 0) {
                System.err.println(result);
                break;
            } else {
                System.out.println(result);
            }
        }

    }
}

Let's analyze the code above.

Case 1: suppose thread 1 runs first and thread 2 has not started yet. Then a = 1 and x = b = 0, because b's initial value is 0; afterwards thread 2 runs, so b = 1 and y = a = 1. The result is x = 0, y = 1.

Case 2: suppose thread 1 starts and assigns a = 1, then thread 2 runs, assigning b = 1 and y = a = 1, and then thread 1 continues with x = b = 1. The result is x = 1, y = 1.

Case 3: thread 2 runs first, so b = 1 and y = a = 0; then thread 1 runs, so a = 1 and x = b = 1. The result is x = 1, y = 0.

No matter which thread runs first, these are the only possible answers; the case where x = 0 and y = 0 should never occur. That is why the code above breaks out of the loop when x == 0 && y == 0. Let's test it.

After running it, at iteration 72,874 the (0,0) result appeared. In other words, the statements a = 1; x = b; in t1 and b = 1; y = a; in t2 were reordered; only if they are effectively rewritten as

Thread t1 = new Thread(new Runnable() {
    public void run() {
        
        x = b;
        a = 1;
    }
});
Thread t2 = new Thread(new Runnable() {
    public void run() {
        
        y = a;
        b = 1;
    }
});

can the (0,0) result be produced. We can now change the declaration to

private static volatile int a = 0, b = 0;

and test again. We find that no matter how long the program runs, the instruction reordering phenomenon no longer occurs; in other words, the volatile keyword guarantees ordering for us.

At least on my machine, after 5.7 million iterations the (0,0) case still has not appeared.

Here is the table I gave in my last post:

Required barriers                        2nd operation
1st operation      Normal Load   Normal Store   Volatile Load   Volatile Store
Normal Load        -             -              -               LoadStore
Normal Store       -             -              -               StoreStore
Volatile Load      LoadLoad      LoadStore      LoadLoad        LoadStore
Volatile Store     -             -              StoreLoad       StoreStore

Let's analyze the code.

Thread 1's run method:

public void run() {
    a = 1;
    x = b;
}

  a = 1; assigns the value 1 to the variable a. Because a is now modified by volatile, this is called a volatile write, corresponding to Volatile Store in the table. Next look at the second step, x = b, which literally assigns b's value to x; this is not one atomic action but two steps: first the variable b is read, and since b is volatile this is a Volatile Load, and then b's value is written into x, and since x is not volatile this is a normal write (Normal Store). In other words, these two lines of code perform three actions: a Volatile Store, a Volatile Load, and a Normal Store. Looking at the table, a StoreLoad barrier is placed between a Volatile Store and a following Volatile Load, and instructions are not allowed to be reordered across it, so the goal of ordering is achieved.
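  To make that concrete, here is a rough, hand-annotated sketch (my own annotation, based on the JSR-133 cookbook; the actual instructions a JIT emits are CPU-specific) of where the barriers conceptually go in thread 1's run() once a and b are volatile:

public void run() {
    // StoreStore barrier: earlier normal stores may not sink below the volatile store
    a = 1;            // Volatile Store to a
    // StoreLoad barrier: the volatile store may not be reordered with the volatile load below
    int tmp = b;      // Volatile Load of b (x = b is really a load of b plus a store to x)
    // LoadLoad / LoadStore barriers: later loads and stores may not float above the volatile load
    x = tmp;          // Normal Store to x
}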

Extension:

  Let's look at another approach that prevents instruction reordering without volatile. As we said above, volatile guarantees ordering by adding memory barriers that prevent reordering, so we can also insert barriers by hand to stop the reordering ourselves. Let's look at an example.

public class VolatileReOrderSample {
    private static int x = 0, y = 0;
    private static int a = 0, b =0;

    public static void main(String[] args) throws InterruptedException {
        int i = 0;

        for (;;){
            i++;
            x = 0; y = 0;
            a = 0; b = 0;
            Thread t1 = new Thread(new Runnable() {
                public void run() {
                    a = 1;
                    UnsafeInstance.reflectGetUnsafe().storeFence();
                    x = b;
                }
            });
            Thread t2 = new Thread(new Runnable() {
                public void run() {
                    b = 1;
                    UnsafeInstance.reflectGetUnsafe().storeFence();
                    y = a;
                }
            });
            t1.start();
            t2.start();
            t1.join();
            t2.join();

            String result = "第" + i + "次 (" + x + "," + y + ")";
            if(x == 0 && y == 0) {
                System.err.println(result);
                break;
            } else {
                System.out.println(result);
            }
        }

    }

}

  storeFence is a memory barrier provided by Java's low-level Unsafe class; if you are interested, go look at the Unsafe class yourself. There are three fences in total:

UnsafeInstance.reflectGetUnsafe().storeFence(); // store fence
UnsafeInstance.reflectGetUnsafe().loadFence();  // load fence
UnsafeInstance.reflectGetUnsafe().fullFence();  // full fence (both loads and stores)
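The UnsafeInstance class used in these examples is not part of the JDK; it is a small helper assumed by the code above (the name reflectGetUnsafe comes from those examples). A minimal sketch might look like this:

import sun.misc.Unsafe;
import java.lang.reflect.Field;

// Helper assumed by the examples above: grabs the sun.misc.Unsafe singleton via reflection.
public class UnsafeInstance {
    public static Unsafe reflectGetUnsafe() {
        try {
            Field field = Unsafe.class.getDeclaredField("theUnsafe");
            field.setAccessible(true);
            return (Unsafe) field.get(null);
        } catch (Exception e) {
            throw new RuntimeException("failed to obtain Unsafe via reflection", e);
        }
    }
}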

We have to go through reflection on Unsafe because, for security reasons, the JVM does not allow application code to call it directly. Also, when hand-writing a singleton for very high concurrency, remember to add volatile to the instance field; otherwise instruction reordering can expose a half-constructed (seemingly empty) object. I will explain this properly in a later post.
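A minimal sketch of the singleton being referred to (my own illustration of double-checked locking): without volatile on instance, the write that publishes the reference could be reordered before the constructor finishes, so another thread could see a non-null but half-initialized object.

// Double-checked locking: volatile forbids reordering of allocate -> construct -> publish,
// so no thread can observe a published but half-initialized Singleton.
public class Singleton {
    private static volatile Singleton instance;   // remove volatile and the reordering becomes possible

    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {                   // first check, without locking
            synchronized (Singleton.class) {
                if (instance == null) {           // second check, under the lock
                    instance = new Singleton();   // volatile write publishes a fully built object
                }
            }
        }
        return instance;
    }
}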

I recently started a WeChat public account, 小菜技术; everyone is welcome to join.
