Java Concurrent Programming: Preparatory Knowledge (Part 2)

In the previous article, "Java Concurrent Programming: Preparatory Knowledge (Part 1)," we learned what visibility means in Java concurrency, the definition of volatile, and the definition of the JMM. Now let's look at a few real interview questions from major companies:

(Screenshots of real interview questions from the original post.)

These real interview questions show that interviewers do ask about concurrency. And so:

Java concurrency comes up everywhere, whether in interviews or at work. What should you know about the JUC package (java.util.concurrent)? What are the key principles behind its implementation, and do you know them? What is visibility in the JMM? How does the volatile keyword achieve visibility? If you want to learn concurrency and understand it thoroughly, Kaige feels you need at least some computer-architecture background. This "Java Concurrent Programming: Preparatory Knowledge" series uses two articles to cover the following: a brief look at what memory visibility is; how the Java Language Specification defines the volatile keyword; you know the JVM, but do you know the JMM? How does the CPU actually process data? Understanding that leads to a deeper understanding of visibility between threads. And how does volatile guarantee visibility? What are the two principles that implement it?

CPU-related knowledge

Let's look at Kaige's computer configuration:

(Screenshots of the computer's CPU and memory configuration from the original post.)

From the screenshots, you can see that Kaige's computer has:

A 4-core CPU that handles 8 threads,

Three levels of cache:

The L1 instruction cache and L1 data cache are 32K each,

The L2 cache is 256K,

The L3 cache is 6M.

The computer's memory is 24G.

Why mention all this?

The entities that actually execute a program inside the JVM are threads, and when each thread is created, the JVM also creates a working memory for it (sometimes called its stack space). Working memory is each thread's private data area. The Java Memory Model specifies that all variables are stored in main memory (that is, in Kaige's 24G of RAM). Main memory is a shared region that all threads can access (in other words, any thread can access the data in main memory).

However, a thread's operations on a variable (reads, modifications, assignments) take place in its working memory. So for a thread to operate on a variable, it must first copy the variable from main memory into its own working memory, then operate on the copy in its own working space, and finally write the variable back to main memory. A thread cannot operate directly on a variable in main memory. What each thread's working memory stores is really a copy of a main-memory variable. As a result, threads cannot see each other's working memory; communication between threads (passing values) must go through main memory.

That long passage maps onto Kaige's computer configuration like this:

Threads: the threads among the 4 cores / 8 threads of Kaige's CPU

Main memory: the 24G physical memory stick in Kaige's laptop

A thread's working memory: the caches (the L1/L2/L3 cache areas)

How threads work is shown in the diagram below:

(Diagram of the thread working model from the original post.)

Explanation:

Main memory holds the variable int i = 0;

When thread 1 on cpu1 reads variable i, it copies i from main memory into its own working area, the cpu1 cache, and then updates i's value to 10.

Thread 2 on cpu2 also reads variable i, copies it from main memory, and changes i to 15 in its own working area, the cpu2 cache. This situation is the multi-core, multi-thread problem.
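The copy / modify / write-back steps above can be sketched as a deterministic, single-threaded simulation. This is only an illustration of the idea; the class and variable names (WorkingMemoryDemo, mainMemoryI, copy1, copy2) are invented, not part of the JMM:

```java
// Deterministic simulation of two threads' working-memory copies of i.
public class WorkingMemoryDemo {
    static int mainMemoryI = 0; // the shared variable in "main memory"

    static int simulate() {
        int copy1 = mainMemoryI; // thread 1 copies i into its working memory
        int copy2 = mainMemoryI; // thread 2 copies i into its working memory
        copy1 = 10;              // thread 1 updates only its private copy
        copy2 = 15;              // thread 2 updates only its private copy
        mainMemoryI = copy1;     // thread 1 writes back: main memory i = 10
        mainMemoryI = copy2;     // thread 2 writes back last: thread 1's update is lost
        return mainMemoryI;
    }

    public static void main(String[] args) {
        System.out.println("final i = " + simulate()); // prints "final i = 15"
    }
}
```

Thread 1's write of 10 is silently overwritten because thread 2 never learned that its own copy had gone stale; that is exactly the problem visibility addresses.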

A deeper understanding of visibility between threads

In this situation, i in main memory is a variable shared between the two threads. So how can we make sure that after thread 1 on cpu1 modifies i, thread 2 on cpu2 is notified that the copy in its working-area cache is invalid? That operation is exactly what visibility between threads means.

Here is another example from everyday life:

For example, Kaige is sharing knowledge with everyone right now. After I publish today, all of you can see what Kaige shared on your phones or PCs. Then reader A, looking at what Kaige shared, feels that part of it could be better, or that a different example would be easier to understand. So he revises the article and leaves Kaige a message. Kaige reads it and thinks it is very good. The next day, Kaige publishes a notice telling everyone that using the xxx example makes things easier to understand. So all of you realize: oh, yesterday's example is no longer the latest. Discard yesterday's and read today's latest example.

If we view the example above as multithreading, we can analyze it like this:

Main memory: Kaige

Shared variable: the knowledge points Kaige shares

Multiple threads: the readers following Kaige's posts

Reader A modified the content of a knowledge point (the copy on his own phone, that is, in his working area) and then notified Kaige, and Kaige in turn notified everyone. Everyone realized that the copy in their hands was no longer the latest, discarded it, and fetched the latest version.

Understood this way, thread visibility becomes much easier to grasp.

How does volatile guarantee visibility?

We can examine the assembly instructions generated by the JIT compiler to see what the CPU does when a volatile variable is written.

Consider the following code:

volatile Singleton instance = new Singleton();

Here, instance is declared volatile.

When the JIT compiler generates assembly for this, one important instruction appears:

0x01a2de1d: xxxx: lock addl $0x0,(%esp);

We can see that when a shared variable is declared volatile, the write operation produces an extra Lock-prefixed assembly instruction. According to the IA-32 Architecture Software Developer's Manual, on a multi-core processor a Lock-prefixed instruction triggers two things:

1: It writes the data of the current processor's cache line back to main memory (the system's physical memory);

2: At the same time, this write-back invalidates the data cached for that memory address in other CPUs.
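The visibility guarantee can be seen in a minimal sketch: a reader thread spins on a volatile flag, and a volatile write by the main thread becomes visible to it, letting the loop exit. The class and method names (VolatileVisibilityDemo, runDemo) are invented for illustration; note that if you remove volatile, the reader may spin forever on some JVMs, since it is then free to keep using its stale cached copy of the flag.

```java
import java.util.concurrent.TimeUnit;

public class VolatileVisibilityDemo {
    // volatile: the writer's update is flushed to main memory and the
    // reader's cached copy is invalidated, so the reader sees it.
    static volatile boolean ready = false;

    static boolean runDemo() {
        Thread reader = new Thread(() -> {
            while (!ready) {
                // spin until the main thread's write becomes visible
            }
        });
        reader.start();
        try {
            TimeUnit.MILLISECONDS.sleep(50);
            ready = true;       // volatile write: emitted with a Lock prefix on x86
            reader.join(2000);  // with volatile this returns almost immediately
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return !reader.isAlive(); // true if the reader saw the flag and exited
    }

    public static void main(String[] args) {
        System.out.println("reader terminated: " + runDemo());
    }
}
```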

How the CPU processes data:

To speed up processing, the CPU does not fetch data directly from memory to operate on.

Here it helps to rank the components by the speed at which they handle data: disk (HDD) < memory < cache < CPU. The CPU runs much faster than memory, so if the CPU accessed memory directly, memory's speed would drag down its processing speed. The cache exists to solve this problem.

Therefore, when processing data, the CPU first loads the data it needs from memory into the caches (that is, the L1, L2, and L3 caches) and then operates on the cached copy.

On a multi-core processor, to ensure the cached variables of each processor stay consistent, a cache coherence protocol is needed. It works like this: each CPU sniffs the data propagated on the bus to check, in real time, whether the values in its own cache have expired. If it finds that the memory address corresponding to its cached data has been modified, it sets that cache line's state to invalid; when the processor next needs to operate on that data, it reads the latest value from main memory into its own cache again.
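The sniff-and-invalidate behavior just described can be modeled as a toy, single-threaded sketch. This is an illustration of the idea only, not a real MESI or cache-coherence implementation, and all names (CacheCoherenceSketch, CpuCache, demo) are invented:

```java
import java.util.HashMap;
import java.util.Map;

public class CacheCoherenceSketch {
    static final Map<String, Integer> mainMemory = new HashMap<>();

    static class CpuCache {
        final Map<String, Integer> lines = new HashMap<>();
        boolean valid = true; // set to false when another CPU writes the line

        int read(String addr) {
            if (!valid || !lines.containsKey(addr)) {
                // miss or invalidated: reload the latest value from main memory
                lines.put(addr, mainMemory.get(addr));
                valid = true;
            }
            return lines.get(addr);
        }

        void write(String addr, int value, CpuCache... others) {
            lines.put(addr, value);
            mainMemory.put(addr, value);       // write back to main memory
            for (CpuCache other : others) {
                other.valid = false;           // invalidation "sniffed" on the bus
            }
        }
    }

    static int demo() {
        mainMemory.put("i", 0);
        CpuCache cpu1 = new CpuCache();
        CpuCache cpu2 = new CpuCache();
        cpu2.read("i");            // cpu2 now caches i = 0
        cpu1.write("i", 10, cpu2); // cpu1 writes i; cpu2's copy becomes invalid
        return cpu2.read("i");     // cpu2 misses, reloads, and sees 10
    }

    public static void main(String[] args) {
        System.out.println("cpu2 reads i = " + demo()); // prints "cpu2 reads i = 10"
    }
}
```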

The two principles behind volatile's implementation

1: The Lock-prefixed instruction in the generated assembly causes the processor's cache line to be written back to main memory.

While the cache lock is in effect for this instruction, the processor can claim exclusive ownership of the shared memory for the duration of the operation. Cache coherence also prevents two or more processors from simultaneously modifying data in the same cached memory region.

2: Writing one processor's cache back to main memory invalidates the corresponding cache lines in the other processors.

This is maintained through the coherence protocol that processors commonly use to keep their internal caches consistent.

To sum up:

Through these two "Java Concurrent Programming: Preparatory Knowledge" articles, we have come to understand the JMM and how threads share data, so the upcoming lessons on Java concurrent programming will be that much easier. See you in the next article on Java concurrency!



Origin blog.51cto.com/kaigejava/2480263