Understanding atomicity, visibility and reordering of synchronized blocks vs volatile variables in Java

anir :

I am trying to understand the volatile keyword from the book Java Concurrency in Practice. It compares the synchronized keyword with volatile variables in three aspects: atomicity, visibility and reordering. I have some doubts about these, and I have discussed them one by one below:

1) Visibility: `synchronized` vs `volatile`

The book says the following with respect to visibility of synchronized:

Everything thread A did in or prior to a synchronized block is visible to B when it executes a synchronized block guarded by the same lock.

It says the following with respect to visibility of volatile variables:

Volatile variables are not cached in registers or in caches where they are hidden from other processors, so a read of a volatile variable always returns the most recent write by any thread.
The visibility effects of volatile variables extend beyond the value of the volatile variable itself. When thread A writes to a volatile variable and subsequently thread B reads that same variable, the values of all variables that were visible to A prior to writing to the volatile variable become visible to B after reading the volatile variable. So from a memory visibility perspective, writing a volatile variable is like exiting a synchronized block and reading a volatile variable is like entering a synchronized block.

Q1. I feel the second paragraph above (about volatile) corresponds to what the book said about synchronized. But is there a synchronized equivalent to volatile's first paragraph? In other words, does using synchronized ensure that any / some variables are not cached in processor caches and registers?
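For reference, this is the kind of pattern I understand the second paragraph to describe (the class and field names are my own, not from the book):

```java
class VolatileVisibilityExample {
    private int data;               // plain field, written before the volatile write
    private volatile boolean ready; // volatile flag

    void writer() {                 // runs on thread A
        data = 42;                  // (1) ordinary write
        ready = true;               // (2) volatile write also publishes (1)
    }

    void reader() {                 // runs on thread B
        if (ready) {                  // (3) volatile read
            System.out.println(data); // (4) guaranteed to see 42, not a stale value
        }
    }
}
```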

Note that the book also says the following about visibility for synchronized:

Locking is not just about mutual exclusion; it is also about memory visibility.

2) Reordering: `synchronized` vs `volatile`

The book says the following about volatile in the context of reordering:

When a field is declared volatile, the compiler and runtime are put on notice that this variable is shared and that operations on it should not be reordered with other memory operations.

Q2. The book does not say anything about reordering in the context of synchronized. Can someone explain what can be said about reordering in the context of synchronized?

3) Atomicity

The book says the following about the atomicity of synchronized and volatile:

the semantics of volatile are not strong enough to make the increment operation (count++) atomic, unless you can guarantee that the variable is written only from a single thread.

Locking can guarantee both visibility and atomicity; volatile variables can only guarantee visibility.

Q3. I guess this means two threads can read a volatile int a at the same time, both will increment it and then save it. But only the last write will take effect, thus making the whole "read-increment-save" sequence non-atomic. Am I correct with this interpretation of the non-atomicity of volatile?
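To illustrate my interpretation (my own example, not from the book):

```java
class Counter {
    volatile int count = 0; // volatile makes the value visible, but...

    void increment() {
        count++; // ...this is still read-increment-write: thread A reads 0,
                 // thread B reads 0, both write 1 back, so one increment is lost
    }
}
```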

Q4. Are all the locking equivalents comparable, i.e. do synchronized blocks, atomic variables and locks all have the same visibility, ordering and atomicity properties?

PS: This question is related to, and is a completely revamped version of, a question I asked some days back. Since it is a full revamp, I haven't deleted the older one. I wrote this question in a more focused and structured way, and I will delete the older one once I get an answer to this one.

rzwitserloot :

The key difference between 'synchronized' and 'volatile' is that 'synchronized' can make threads pause, whereas volatile can't.

'caches and registers' isn't a thing. The book says that because in practice that's usually how things are implemented, and it makes it easier (or perhaps not, given these questions) to understand the how & why of the JMM (Java Memory Model).

The JMM doesn't name them, however. All it says is that the VM is free to give each thread its own local copy of any variable, or not, to be synchronized at some arbitrary time with some or all other threads, or not... unless there is a happens-before relationship anywhere, in which case the VM must ensure that, at the point of execution where a happens-before relationship has been established between the two threads, they observe all variables in the same state.

In practice that would presumably mean to flush caches. Or not; it might mean the other thread overwrites its local copy.

The VM is free to implement this stuff however it wants, and differently on every architecture out there. As long as the VM sticks to the guarantees that the JMM makes, it's a good implementation, and as a consequence, your software must work given solely those guarantees and no other assumptions; because what works on your machine might not work on another if you rely on assumptions that aren't guaranteed by the JMM.

Reordering

Reordering is also not in the VM spec, at all. What IS in the VM spec are the following two concepts:

  1. Within the confines of a single thread, all you can observe from inside it is consistent with an ordered view. That is, if you write 'x = 5; y = 10;' it is not possible to observe, from within the same thread, y being 10 but x being its old value. Regardless of synchronized or volatile. So, anytime it can reorder things without that being observable, then the VM is free to. Will it? Up to the VM. Some do, some don't.

  2. When observing effects caused by other threads, and you have not established a happens-before relationship, you may see some, all, or none of these effects, in any order. Really, anything can happen here. In practice, then: Do NOT attempt to observe effects caused by other threads without establishing a happens-before, because the results are arbitrary and untestable. (See the sketch after this list.)
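A minimal sketch of those two points (the class and field names are mine, purely for illustration; without a happens-before relationship the reader may observe any combination of old and new values):

```java
class NoHappensBefore {
    int x = 1;
    int y = 2;

    void writer() {   // thread A
        x = 5;
        y = 10;       // A itself can never observe y == 10 together with the old x
    }

    void reader() {   // thread B: no lock, no volatile, so no happens-before
        int ry = y;   // may see 2 or 10
        int rx = x;   // may see 1 or 5, in any combination with ry,
                      // including ry == 10 while rx == 1
        System.out.println(rx + ", " + ry);
    }
}
```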

Happens-before relationships are established by all sorts of things. Synchronized blocks obviously do it: if your thread is frozen trying to acquire a lock, and it then runs, any synchronized blocks on that object that finished 'happened before', and anything they did you can now observe, with the guarantee that what you observe is consistent with those things having run in order, and that you can see all the data they wrote (as in, you won't get an older 'cache' or whatnot). Volatile accesses do so as well.
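A rough sketch of that guarantee (the names are mine, not from the book or the question):

```java
class SynchronizedVisibilityExample {
    private final Object lock = new Object();
    private int data;

    void writer() {                   // thread A
        synchronized (lock) {
            data = 42;                // everything written in or before this block...
        }                             // ...is published when the lock is released
    }

    void reader() {                   // thread B
        synchronized (lock) {         // acquiring the same lock establishes happens-before
            System.out.println(data); // sees 42 if A's block completed first
        }
    }
}
```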

Atomicity

Yes, your interpretation of why x++ is not atomic, even if x is volatile, is correct.

I'm not sure what your Q4 is trying to ask.

In general, if you want to atomically increment an integer, or do any of many other concurrent-esque operations, look at the java.util.concurrent package. It contains efficient and useful implementations of various concepts. AtomicInteger, for example, can be used to atomically increment something, in a way that is visible from other threads, while still being quite efficient (for example, if your CPU supports Compare-And-Set (CAS) operations, AtomicInteger will use it; not something you can do from general Java without resorting to Unsafe).
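For example, a minimal sketch of such a counter:

```java
import java.util.concurrent.atomic.AtomicInteger;

class AtomicCounter {
    private final AtomicInteger count = new AtomicInteger();

    void increment() {
        count.incrementAndGet(); // atomic read-modify-write, typically a CAS loop
    }

    int current() {
        return count.get();      // has volatile-read semantics: sees the latest value
    }
}
```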
