Java Multithreading Basics (Part 3): Foundations of Thread Safety

Let's first look at why code is not thread-safe.

I. volatile guarantees visibility

In the following code, thread1 keeps spinning because it never sees the updated value of flag. Once flag is declared volatile, the thread reads the latest value and the loop exits.

public class VolatileDemo {
    // Without volatile, thread1 may never see the main thread's update to flag;
    // uncomment volatile to make the write visible and let the loop exit.
    private /*volatile*/ static boolean flag = false;

    public static void main(String[] args) throws InterruptedException {
        Thread thread1 = new Thread(() -> {
            int i = 0;
            while (!flag) {
                i++;
            }
            System.out.println("i:" + i);
        });
        thread1.start();
        System.out.println("begin start thread");
        Thread.sleep(100);
        flag = true; // updated by the main thread after 100 ms
    }
}
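Run as-is on a typical HotSpot JVM, the program usually prints "begin start thread" and then spins forever, because the JIT may hoist the read of flag out of the loop. With volatile uncommented, the loop observes the update, prints the final value of i, and exits. (Exact behavior can vary with the JVM and JIT settings.)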

Dump the JIT-compiled assembly and you will find that writes to a volatile member variable are emitted with an extra lock-prefixed instruction. On a multiprocessor, this lock prefix achieves the visibility effect by locking either the bus or the cache line.

II. Understanding the nature of thread visibility from the hardware level

A computer consists of a CPU, memory, and disk (I/O devices). Their processing speeds differ enormously, and overall throughput is limited by the slowest component (the "short plank of the barrel"). With the arrival of multi-core CPUs, computer designers introduced several optimizations:

  • Adding CPU caches
  • Operating systems added processes and threads, time-slicing the CPU to maximize its utilization
  • Compilers optimize instructions to make better use of the CPU cache

Each optimization introduces new problems of its own; for now we only need the basic logic:

1. CPU cache

  • Modern computer systems add a cache, whose read/write speed is as close as possible to the processor's, as a buffer between the processor and main memory: the data an operation needs is copied into the cache so the operation can run fast, and when it finishes the result is synchronized from the cache back to main memory.

  • The cache nicely resolves the speed mismatch between the processor and memory, but it also makes the computer system more complex, because it introduces a new problem: cache coherence.

2. Cache coherence

  • With caches added, each CPU first writes data into its own cache and writes the result back after the computation completes. But threads may run on different CPUs, so copies of the same data can exist in the working memory (cache) of several CPUs at once; if the CPU a thread runs on modifies its copy, the caches of the different CPUs become inconsistent. At the CPU level, two solutions emerged: bus locking and cache locking.

  • Bus lock: when one processor operates on shared memory, it asserts a LOCK# signal on the bus. This signal prevents the other processors from reaching the shared data over the bus, locking the communication between the CPUs and memory. While the lock is held, the other processors cannot operate on any other memory address either, so bus locking is very expensive. Such a coarse-grained mechanism is clearly unsuitable; the lock granularity had to be reduced to improve utilization, which led to cache coherence protocols.

  • Cache coherence protocol: every processor must follow certain protocols when accessing its cache and operate according to them when reading and writing. Common protocols include MSI, MESI, MOSI, and so on; the most common is the MESI protocol.

    1. M (Modified): the shared data is cached only in the current CPU's cache and has been modified, i.e. the cached data and the main memory data are inconsistent.

    2. E (Exclusive): the cache line is exclusive; the data is cached only in the current CPU's cache and has not been modified.

    3. S (Shared): the data may be cached by multiple CPUs, and each cache is consistent with main memory.

    4. I (Invalid): the cache line is invalid. Under the MESI protocol, each cache controller not only knows about its own reads and writes but also snoops the reads and writes of the other caches.

When a CPU's cached copy and main memory disagree because another CPU wrote the data, the copy becomes Invalid.

When the data exists in only one CPU's cache, it is in the Exclusive state and can be modified directly.

When a CPU propagates its modification to the other CPUs, the copies held by multiple CPUs enter the Shared state.

Under the MESI protocol, CPU reads and writes follow these rules:

  • CPU read request: a cache line in the M, E, or S state can be read directly; in the I state, the CPU can only read the data from main memory.
  • CPU write request: a cache line can be written only in the M or E state. To write a line in the S state, the CPU must first set the corresponding line in the other CPUs' caches to Invalid, which relies on the bus locking mechanism. The sketch below makes these transitions concrete.
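A toy Java sketch of a single cache line's MESI transitions, under the simplifying assumption that bus messages arrive as method calls (this is an illustration of the rules above, not how real hardware is programmed):

enum MesiState { MODIFIED, EXCLUSIVE, SHARED, INVALID }

class CacheLine {
    MesiState state = MesiState.INVALID;

    // Local read: M/E/S serve it from the cache; I must fetch from main memory.
    void onLocalRead(boolean otherCpusHoldCopy) {
        if (state == MesiState.INVALID) {
            state = otherCpusHoldCopy ? MesiState.SHARED : MesiState.EXCLUSIVE;
        }
    }

    // Local write: only M/E may write directly; from S (or I), a real CPU first
    // broadcasts an invalidate message on the bus, then performs the write.
    void onLocalWrite() {
        state = MesiState.MODIFIED;
    }

    // Snooped bus events from other CPUs:
    void onRemoteRead() {             // another CPU reads the same line
        if (state == MesiState.MODIFIED || state == MesiState.EXCLUSIVE) {
            state = MesiState.SHARED; // a Modified line is written back first
        }
    }

    void onRemoteWrite() {            // another CPU writes the line
        state = MesiState.INVALID;    // our copy is now stale
    }
}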

At a high level, a CPU's memory operations can be abstracted into the layered structure CPU → cache → (coherence protocol) → main memory, which is how the cache coherence effect is achieved.

The appearance of CPU caches means that if multiple CPUs cache the same shared data at the same time, visibility problems can arise: a value CPU0 modifies in its local cache is invisible to CPU1. The consequence is that when CPU1 later writes to that data, it works with a stale value, making the final result unpredictable.

3. Visibility problems introduced by the MESI optimization

  • The states of the CPUs' cache lines are maintained by message passing. If CPU0 wants to write to a variable that is shared across caches, it must first send an invalidate message to every other CPU that caches the data and wait for their acknowledgements; CPU0 is blocked for that entire period. To avoid wasting resources on this blocking, CPUs introduced store buffers.

With store buffers, CPU0 only needs to write the shared data into the store buffer and send the invalidate message, then it can go on processing other instructions. Once it has received invalidate acknowledge messages from all the other CPUs, it moves the data from the store buffer into the cache line, and finally synchronizes it from the cache line to main memory.

This optimization, however, has two problems:

  1. When the data is committed is uncertain, because synchronization only happens after the other CPUs reply; it is effectively an asynchronous operation.
  2. With store buffers, the processor first tries to read a value from the store buffer; if the store buffer holds the data it reads it from there, otherwise it reads from the cache line.
int value = 3;
boolean isFinish = false;

void cpu0() {
    value = 10;       // may sit in CPU0's store buffer for a while
    isFinish = true;  // may become visible to other CPUs first
}

void cpu1() {
    if (isFinish) {
        assert value == 10;  // can fail: value may still be 3
    }
}
  • A possible outcome: isFinish is true, but value is not 10.

    Suppose isFinish is in state E while value may be in state S. When executing value = 10, CPU0 first writes the value = 10 instruction into its store buffer and notifies the other CPUs that cache value. While waiting for their acknowledgements, CPU0 continues and executes isFinish = true, which hits its exclusive cache line directly and is synchronized to main memory. CPU1 can then see isFinish == true and enter the if branch while value is still not 10. This inconsistency can be viewed as out-of-order execution by the CPU, i.e. a kind of reordering, and this reordering causes visibility problems.

  • The hardware cannot judge the intended ordering of code, so it provides its own solution, the memory barrier, and lets software developers (and compilers) use the barriers the hardware provides to decide how the compiled result should be ordered.

4. Memory barriers

  • What is a memory barrier? From the preceding content we can already form an initial guess: a memory barrier writes the pending stores in the store buffers out to memory, making them visible to other threads that access the same shared memory.

  • The x86 memory barrier instructions are lfence (load barrier), sfence (store barrier), and mfence (full barrier).

    • Store Memory Barrier (write barrier): tells the processor to synchronize all data already sitting in the store buffers to main memory before proceeding. Put simply, the results of instructions before the write barrier are visible to any read or write after it.
    • Load Memory Barrier (read barrier): forces read operations after the barrier to execute after the barrier. Combined with a write barrier, memory updates before the write barrier become visible to reads after the read barrier.
    • Full Memory Barrier (full barrier): ensures the results of all reads and writes before the barrier are committed to memory before any read or write after the barrier executes.
    int value = 3;
    boolean isFinish = false;

    void cpu0() {
        value = 10;
        storeMemoryBarrier();  // flush the store buffer: value = 10 reaches memory first
        isFinish = true;
    }

    void cpu1() {
        if (isFinish) {
            loadMemoryBarrier();  // force subsequent reads to see the latest values
            assert value == 10;   // now guaranteed
        }
    }

    By placing a write barrier after the assignment to value, that write is made visible to the code that follows; the read barrier forces the latest value to be read from main memory when it is used. Together they make the code execute in the intended order. volatile is also a form of memory barrier, implemented via the lock-prefixed assembly instruction it causes to be emitted.
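Java code cannot invoke these barrier instructions directly; declaring the flag volatile makes the JIT emit the equivalent barriers. A minimal sketch of the same scenario, with two threads standing in for the two CPUs (run with -ea so the assert is checked; the class name is illustrative):

public class BarrierDemo {
    static int value = 3;
    static volatile boolean isFinish = false; // volatile supplies the write/read barriers

    public static void main(String[] args) {
        Thread writer = new Thread(() -> {
            value = 10;      // ordinary write...
            isFinish = true; // ...made visible before this volatile write completes
        });
        Thread reader = new Thread(() -> {
            if (isFinish) {            // volatile read
                assert value == 10;    // guaranteed by the barriers volatile implies
            }
        });
        writer.start();
        reader.start();
    }
}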

III. JMM (Java Memory Model)

1. JMM

  • The JMM abstracts these low-level problems up to the JVM level and solves concurrency on top of the memory barrier instructions the CPU provides, plus restrictions on compiler reordering. The JMM is a language-level abstract memory model; you can think of it as an abstraction over the hardware model. It defines the rules for how multithreaded programs read and write shared memory: the low-level details of how the virtual machine stores shared variables to memory and reads them back are governed by these rules, which constrain memory reads and writes so that instructions behave correctly. It solves the memory-access problems caused by multi-level CPU caches, processor optimizations, and instruction reordering, guaranteeing visibility in concurrent scenarios.
  • The JMM's abstract model is divided into main memory and working memory.
    • Main memory is shared by all threads; it generally holds variables stored on the heap, such as instance objects, static fields, and array objects.
    • Working memory is private to each thread. All of a thread's operations on variables must happen in its working memory; a thread cannot read or write main-memory variables directly, and shared-variable values are passed between threads via main memory.
  • How the JMM solves visibility and ordering problems

    Simply put, the JMM provides mechanisms that disable caching and prohibit reordering, and they are ones everyone knows: volatile, synchronized, and final. A sketch of all three in use follows.
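A minimal sketch of these three keywords together (the SharedState class is illustrative, not from the JMM specification):

class SharedState {
    private final int config;                 // final: safely published once construction completes
    private volatile boolean ready = false;   // volatile: visibility and ordering for this flag
    private int counter = 0;

    SharedState(int config) { this.config = config; }

    synchronized void increment() {           // synchronized: mutual exclusion, plus the
        counter++;                            // unlock/lock visibility guarantee
    }

    synchronized int count() { return counter; }

    void publish() { ready = true; }          // volatile write
    boolean isReady() { return ready; }       // volatile read
    int config() { return config; }
}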

  • How the JMM addresses sequential consistency

    To improve execution performance, both the compiler and the processor reorder instructions; processor reordering was analyzed above. Reordering simply means changing the order in which instructions execute. Compiler reordering means that after the program is compiled, its instructions may be rearranged to optimize execution performance. From source code to the instructions that finally execute, there can be three kinds of reordering:

    source code --> 1: compiler optimization reordering --> 2: instruction-level parallelism reordering --> 3: memory system reordering --> final instruction sequence

    Kinds 2 and 3 are processor reorderings, and all of these reorderings can cause visibility problems. For compiler reordering, the JMM prohibits specific types of compiler reordering. For processor reordering, the JMM requires the compiler to insert memory barriers when generating instructions, to stop the processor from reordering across them.

  • as-if-serial

    Not every program suffers from reordering problems. Compiler reordering follows the same principle as CPU reordering: it respects data dependencies. Neither the compiler nor the processor will change the execution order of two operations that have a data dependency between them, for example:

    a = 1; b = a;

    a = 1; a = 2;

    a = b; b = 1;

    In all three cases, changing the execution order within a single thread would change the result, so reordering never touches such instruction pairs. This rule is known as as-if-serial: no matter how instructions are reordered, the result observed by a single thread must not change. For example:

    int a = 2;      // 1
    int b = 3;      // 2
    int rs = a * b; // 3

    Statements 1 and 3, and 2 and 3, have data dependencies, so in the final instruction sequence statement 3 cannot be reordered before 1 or 2; otherwise the program's result would be wrong. Since 1 and 2 have no data dependency, their relative order may be swapped.

  • The JMM's four kinds of memory barriers: to prohibit specific types of processor reordering, the JMM has the compiler insert memory barriers when generating instructions, and it classifies these barriers into four categories, summarized below.
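The standard classification, as given in JSR-133, is:

  • LoadLoad barrier (Load1; LoadLoad; Load2): ensures Load1's data is loaded before Load2 and all subsequent loads.
  • StoreStore barrier (Store1; StoreStore; Store2): ensures Store1's data is flushed to main memory and visible to other processors before Store2 and all subsequent stores.
  • LoadStore barrier (Load1; LoadStore; Store2): ensures Load1's data is loaded before Store2 and all subsequent stores are flushed to main memory.
  • StoreLoad barrier (Store1; StoreLoad; Load2): ensures Store1's data is visible to all processors before Load2 and all subsequent loads execute; it is the most expensive of the four and subsumes the effects of the other three.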

2. Happens-before rules

Happens-before means that the result of one operation is visible to a subsequent operation, so it is a way of expressing memory visibility across multiple threads. In the JMM, if the result of one operation needs to be visible to another operation, there must be a happens-before relationship between the two. The two operations may be in the same thread or in different threads. The JMM defines six happens-before rules, which reference the numbered operations in the sketch below.
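A minimal sketch of the classic example the numbering assumes (the class is illustrative):

class VolatileExample {
    int a = 0;
    volatile boolean flag = false;

    public void writer() {
        a = 1;          // 1
        flag = true;    // 2: volatile write
    }

    public void reader() {
        if (flag) {     // 3: volatile read
            int i = a;  // 4: guaranteed to see a == 1
        }
    }
}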

    1. Program order rule: each operation in a thread happens-before any subsequent operation in that thread. You can think of this as as-if-serial: however the code within a single thread is rearranged, the result does not change. In the sketch above, this rule gives 1 happens-before 2 and 3 happens-before 4.
    2. Volatile variable rule: a write to a volatile variable happens-before any subsequent read of that volatile variable. By this rule, 2 happens-before 3.
    3. Transitivity rule: since 1 happens-before 2, 2 happens-before 3, and 3 happens-before 4, transitivity gives 1 happens-before 4.
    4. Start rule: if thread A executes ThreadB.start(), then A's ThreadB.start() call happens-before every operation in thread B.
    5. Join rule: if thread A executes ThreadB.join() and it returns successfully, then every operation in thread B happens-before A's successful return from ThreadB.join().
    6. Monitor lock rule: an unlock of a lock happens-before every subsequent lock of that same lock.
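A small sketch of the start and join rules (the class name is illustrative):

public class StartJoinDemo {
    static int x = 0;

    public static void main(String[] args) throws InterruptedException {
        x = 1;                        // start rule: visible to everything in thread b
        Thread b = new Thread(() -> {
            System.out.println(x);    // guaranteed to print 1
            x = 2;
        });
        b.start();
        b.join();                     // join rule: all of b's writes visible after this
        System.out.println(x);        // guaranteed to print 2
    }
}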
