[Reprint] What is the difference between volatile and synchronized? A graphic explanation


https://segmentfault.com/a/1190000021928013

 

  • You have an idea and I have an idea; if we exchange them, each of us ends up with two ideas
  • If you can NOT explain it simply, you do NOT understand it well enough

The demo code is gradually being extracted and organized, together with the technical articles, into the Github repository [selection practice] for convenient reading; this article is included there as well. If you find it helpful, please give it a Star.

I previously wrote several articles in the Java concurrent programming series. A friend in my WeChat group told me he still could not understand the difference between volatile and synchronized. His questions can be summarized as:

  • For what kind of problem are volatile and synchronized roughly equivalent?
  • Why is volatile called a weaker form of synchronization than synchronized?
  • Besides visibility, what other problems does volatile solve?
  • How do I choose between them?

If you can not answer the questions above, your understanding of the two is still a bit fuzzy. In this article, let's talk through their subtle relationship with the help of diagrams.

You may have heard the saying [one day in heaven is a year on earth]. Suppose the CPU executes an ordinary instruction in one day; then to read or write main memory, it has to wait a year.

By the [wooden barrel] principle (a barrel holds only as much water as its shortest stave allows), in the eyes of the CPU, slow memory drags down the overall performance of the program. To fix this short stave, the hardware folks used the same common trick we use in software to speed things up: a cache (in effect, the hardware folks dug a pit for the software folks to fall into).

Java Memory Model (JMM)

Adding a CPU cache balances out the speed difference between the CPU and memory, and there may even be several layers of cache.

At this point, memory is no longer such an obvious short stave, and the CPU is happy. But the cache brings plenty of problems of its own.

As the fancy figure shows, each core has its own cache (L1 Cache), and on some architectures all cores also share a second-level cache (L2 Cache). With caches in place, when a thread accesses a shared variable, if the variable already exists in L1, it does not have to go step by step all the way down to main memory. In this way, the short stave of slow memory access is patched.

Specifically, a thread reads/writes a shared variable in these steps:

  1. Copy the shared variable from main memory into its own working memory
  2. Operate on the variable in working memory
  3. After the operation, write the updated value back to main memory
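As a minimal sketch (the names sharedX and workingCopy are illustrative, not JMM terminology), the three steps can be written out with a local variable standing in for the thread's working memory:

```java
// Illustrative sketch: a local variable plays the role of a thread's
// "working memory" copy of a shared variable held in "main memory".
public class WorkingMemorySketch {
    public static int sharedX = 0; // the shared variable in "main memory"

    public static void incrementViaWorkingCopy() {
        int workingCopy = sharedX;     // 1. copy from main memory into working memory
        workingCopy = workingCopy + 1; // 2. operate on the working-memory copy
        sharedX = workingCopy;         // 3. write the updated value back to main memory
    }

    public static void main(String[] args) {
        incrementViaWorkingCopy();
        System.out.println("X = " + sharedX); // X = 1
    }
}
```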

Suppose main memory holds a shared variable X whose initial value is 0.

Thread 1 accesses variable X; applying the steps above, it goes like this:

  1. X is not found in L1 or L2, so the search continues down to main memory, where it is found
  2. X is copied into L2 and then L1
  3. X is modified to 1 in L1, and the new value is written back layer by layer to main memory

At this moment, in thread 1's eyes, the value of X looks like this:

Next, thread 2 accesses the same variable X following the same steps:

  1. X is not found in L1
  2. X is found in L2
  3. X is copied from L2 into L1
  4. X is modified to 2 in L1, and the new value is written back layer by layer to main memory

At this moment, in thread 2's eyes, the value of X looks like this:

Combining the two operations, when thread 1 then accesses variable X again, can you see the problem?

At this moment, if thread 1 writes x = 1 back again, it overwrites thread 2's result x = 2. The same shared variable yields different results in different threads (in thread 1's eyes x = 1; in thread 2's eyes x = 2). This is the visibility problem of shared variables in memory.
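The lost update above can be replayed deterministically in a single thread by holding each thread's stale copy in a local variable; this is a simulation of the interleaving described in the figures, not real concurrency:

```java
public class LostUpdateSimulation {
    // Replays the interleaving from the text: both threads copy X = 0,
    // then each writes back its own result, so one update is lost.
    public static int simulate() {
        int mainMemoryX = 0;
        int thread1Copy = mainMemoryX; // thread 1 copies X (sees 0)
        int thread2Copy = mainMemoryX; // thread 2 copies X (also sees 0)
        mainMemoryX = thread2Copy + 2; // thread 2 writes back X = 2
        mainMemoryX = thread1Copy + 1; // thread 1 writes back X = 1, overwriting thread 2's result
        return mainMemoryX;            // thread 2's update is lost
    }

    public static void main(String[] args) {
        System.out.println("final X = " + simulate()); // final X = 1
    }
}
```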

So how do we fix this? Time for today's two protagonists to take the stage. But before explaining the volatile keyword, let's start with the one you are most familiar with: synchronized.

synchronized

When we run into thread-safety problems, we habitually reach for the synchronized keyword. Leaving aside whether that reflex is reasonable, let's first look at how synchronized solves the memory visibility problem of shared variables described above:

  • [Entering] a synchronized block has the memory semantics of clearing the variables used inside the block from the thread's working memory, so they are read afresh from main memory
  • [Exiting] a synchronized block has the memory semantics of flushing any modifications to shared variables made inside the block back to main memory
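A minimal sketch of those two memory semantics in code (the lock object and the shared field are illustrative names of my own, not from the article; the comments mark where the guarantees take effect):

```java
public class SyncMemorySemantics {
    private final Object lock = new Object();
    private int shared = 0;

    public void increment() {
        synchronized (lock) {
            // entering the block: cached copies of shared variables are
            // invalidated, so this read comes from main memory
            shared = shared + 1;
        } // exiting the block: the modified value of `shared` is flushed to main memory
    }

    public int get() {
        synchronized (lock) {
            return shared;
        }
    }

    public static void main(String[] args) {
        SyncMemorySemantics s = new SyncMemorySemantics();
        s.increment();
        System.out.println(s.get()); // 1
    }
}
```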

Without further ado, let's press relentlessly on to volatile.

volatile

When a variable is declared as volatile:

  • When a thread [reads] the shared variable, it first clears its local copy of the variable's value and then fetches the latest value from main memory
  • When a thread [writes] the shared variable, the value is not cached in a register or anywhere else (that is, the so-called "working memory" we just mentioned) but is flushed straight back to main memory

Feels like the same thing wearing a new name, doesn't it? Look closely and it really is.

Therefore, when synchronized or volatile is used, the multithreaded steps for accessing a shared variable become:

It is no longer a simple matter of consulting the shared variable's value in L1 or L2; instead, the thread accesses main memory directly.
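As a runnable illustration of the visibility guarantee (my own example, not from the original article): a reader thread spins on a volatile flag, and because the write goes straight to main memory and every read fetches the latest value, the loop is guaranteed to terminate once the writer sets the flag:

```java
public class VolatileFlagDemo {
    // volatile guarantees the reader observes the writer's update
    public static volatile boolean ready = false;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!ready) { /* spin until the write becomes visible */ }
            System.out.println("reader saw ready = true");
        });
        reader.start();
        Thread.sleep(100); // give the reader time to start spinning
        ready = true;      // the write is flushed to main memory
        reader.join();     // returns because the reader's next read sees the update
    }
}
```

Without the volatile modifier, the JIT compiler is allowed to hoist the read of ready out of the loop, and the reader might spin forever.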

To make this concrete, here are some examples.

public class ThreadNotSafeInteger {
    /** shared variable value */
    private int value;

    public int getValue() {
        return value;
    }

    public void setValue(int value) {
        this.value = value;
    }
}

With the groundwork laid above, it is clear that the shared variable value in the code above carries a big risk. Let's try making some changes to it.

First, a renovation using the volatile keyword:

public class ThreadSafeInteger {
    /** shared variable value */
    private volatile int value;

    public int getValue() {
        return value;
    }

    public void setValue(int value) {
        this.value = value;
    }
}

Then a renovation using the synchronized keyword:

public class ThreadSafeInteger {
    /** shared variable value */
    private int value;

    public synchronized int getValue() {
        return value;
    }

    public synchronized void setValue(int value) {
        this.value = value;
    }
}

The results of these two versions are completely identical. For the [current] problem, the visibility of shared variable data, the two solutions can be regarded as equivalent.

But if synchronized and volatile were exactly the same, there would be no need to design two keywords. Let's continue with another example.

@Slf4j
public class VisibilityIssue {
    private static final int TOTAL = 10000;

    // Even with the volatile modifier added as below, the problem is not solved,
    // because volatile does not address atomicity
    private volatile int count;

    public static void main(String[] args) {
        VisibilityIssue visibilityIssue = new VisibilityIssue();
        Thread thread1 = new Thread(() -> visibilityIssue.add10KCount());
        Thread thread2 = new Thread(() -> visibilityIssue.add10KCount());
        thread1.start();
        thread2.start();
        try {
            thread1.join();
            thread2.join();
        } catch (InterruptedException e) {
            log.error(e.getMessage());
        }
        log.info("count value: {}", visibilityIssue.count);
    }

    private void add10KCount() {
        int start = 0;
        while (start++ < TOTAL) {
            this.count++;
        }
    }
}

In effect, the simple assignment from setValue above (this.value = value;) has become an increment (this.count++;). If you run this code, you will find that the final value of count almost always lands somewhere between 1w (10,000) and 2w (20,000).

Now modify the same method using synchronized:

@Slf4j
public class VisibilityIssue {
    private static final int TOTAL = 10000;

    private int count;

    // ... same as above

    private synchronized void add10KCount() {
        int start = 0;
        while (start++ < TOTAL) {
            this.count++;
        }
    }
}

Run the code again, and count always comes out as 2w (20,000).

The two versions modify the same code with the volatile and synchronized keywords respectively, so why can one deliver the correct result while the other cannot?

This brings us to the difference between the two.

count++ is a single line of source code, but translated down to instructions it is really three steps (if you don't believe it, try the javap -c command and look at the bytecode).

synchronized is an exclusive lock (that is, what's mine is not yours): only one thread at a time can call the add10KCount method, and other calling threads are blocked. So the three instructions are all executed by one thread before another thread can continue. This is the atomicity we usually talk about (a thread's multiple instructions are not interrupted partway through).

volatile, by contrast, is non-blocking (that is, not exclusive). Faced with those three instructions, it naturally cannot guarantee that another thread will not cut in. This is the commonly cited rule: volatile guarantees memory visibility, but cannot guarantee atomicity.
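Roughly speaking (a sketch with Java standing in for what javap -c reveals), count++ expands to three separate steps, and another thread can cut in between any two of them, which volatile does nothing to prevent:

```java
public class CountIncrement {
    public static int count = 0;

    // count++ is not one instruction; it is a read-modify-write sequence.
    // A second thread can interleave between any two of these steps.
    public static void incrementInThreeSteps() {
        int tmp = count; // 1. read the current value
        tmp = tmp + 1;   // 2. add one to the copy
        count = tmp;     // 3. write the result back
    }

    public static void main(String[] args) {
        incrementInThreeSteps();
        System.out.println("count = " + count); // count = 1
    }
}
```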

So, in short, when can the volatile keyword be used? (Important things are said three times; I realize that line is a bit dated.)

If the value being written to the variable does not depend on the variable's current value, volatile can be used.
If the value being written to the variable does not depend on the variable's current value, volatile can be used.
If the value being written to the variable does not depend on the variable's current value, volatile can be used.

For example, count++ above is a get-compute-write three-step operation that depends on the current value, so the problem cannot be solved by volatile.
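For completeness, a common lock-free alternative not covered in this article: when the write does depend on the current value, java.util.concurrent.atomic.AtomicInteger performs the whole read-modify-write atomically, so the counter example can be fixed without synchronized:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounter {
    private static final int TOTAL = 10_000;
    private final AtomicInteger count = new AtomicInteger(0);

    public void add10KCount() {
        for (int i = 0; i < TOTAL; i++) {
            count.incrementAndGet(); // atomic read-modify-write, no lock needed
        }
    }

    public int get() {
        return count.get();
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicCounter c = new AtomicCounter();
        Thread t1 = new Thread(c::add10KCount);
        Thread t2 = new Thread(c::add10KCount);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println("count = " + c.get()); // count = 20000
    }
}
```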

At this point, the answer to the first question from the beginning of the article [For what kind of problem are volatile and synchronized roughly equivalent?] has been unveiled.

Picture it yourself: in the same stretch of time you have to [write a few lines of code], then go [count some money], then go [sing a song], then come back and write code again, rotating through these tasks over and over. Each time you must also pick up exactly where you left off (continue the code, keep the running total of the money, resume the song) without mistakes. Wouldn't you be exhausted?

synchronized is exclusive, so queued threads have to be switched in and out. Each switch is just like the example above: to complete it, the state of the previously running thread has to be remembered. The CPU gets just as tired, and this is the often-mentioned large overhead that context switches bring.

volatile is different: it is non-blocking, so when it solves the visibility problem of shared variables, there is no such switching cost. This is what it means to say volatile is a weaker form of synchronization than synchronized.

With this, you should also have gotten the point of the second question [Why is volatile called a weaker form of synchronization than synchronized?].

Besides visibility, volatile also solves the problem of instruction reordering caused by compiler optimizations. Earlier articles have covered this; please follow the links and read them yourself (the frequently asked interview question of why double-checked locking in a singleton can be non-thread-safe is answered in there too):

After reading those two articles, I believe the third question will be solved as well.

Knowing all this, I believe you now know how to choose between them.

I have carefully selected and finally finished organizing the first edition of hard-core Java tech stack materials; reply [information] / [666] to the public account in a private message to get it.

Soul-searching questions

  1. Do you understand the life cycle of a thread? How do the different states transition between one another?
  2. Why do threads have a notification/wake-up mechanism?

In the next article, we will talk about [Why is notifyAll recommended over notify for waking up threads?]
Personal blog: https://dayarch.top

You are welcome to follow my public account "arch day a soldier" for fun, original analyses of Java tech stack problems; I simplify complex problems and turn abstract problems into diagrams.
If you are interested in my topics, or want more behind-the-scenes content, welcome to visit my blog dayarch.top


Origin www.cnblogs.com/jinanxiaolaohu/p/12491731.html