Java Memory Model (JMM) and volatile principles

Table of contents

1. Java memory model

2. Visibility

3. Orderliness

4. The principle of volatile

   1. Visibility guarantee

   2. Ordering guarantee

5. Thread-safe singleton


1. Java memory model

JMM is the Java Memory Model. It defines the abstract concepts of main memory (shared data) and working memory (thread-private data). At the hardware level these correspond to CPU registers, CPU caches, main memory, CPU instruction optimizations, and so on.

JMM covers the following aspects:

  • Atomicity - guarantees that an operation is not affected by thread context switches
  • Visibility - guarantees that reads are not affected by the CPU cache
  • Ordering - guarantees that instructions are not affected by CPU instruction-level parallel optimization

2. Visibility

Suppose there is a shared variable run that is initially true: the main thread sleeps while a child thread spins on while (run). When the main thread wakes up it sets run to false, yet we find the child thread keeps looping.

This happens because Java divides memory into main memory and working memory:

In the initial state, thread t1 has just read the value of run from main memory into its working memory.

Because t1 reads run frequently, the JIT compiler caches the value of run in the thread's working memory (which at the hardware level is backed by the CPU cache), reducing accesses to main memory and improving efficiency.

After 1 second, the main thread sets run to false, but t1 keeps taking the value from its own cache, so it never sees the change.
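A minimal sketch of this scenario (class and field names are mine; whether the worker actually keeps spinning depends on the JIT, so no outcome is guaranteed):

```java
// Without volatile, the worker may keep reading its cached copy of run.
class VisibilityBug {
    static boolean run = true;    // shared flag, NOT volatile

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> {
            while (run) { }       // the JIT may hoist this read out of the loop
        });
        t1.setDaemon(true);       // let the JVM exit even if t1 never stops
        t1.start();
        Thread.sleep(1000);
        run = false;              // t1 may never observe this write
        t1.join(1000);
        System.out.println("worker stopped: " + !t1.isAlive());
    }
}
```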

Solution:

Use the volatile keyword.

volatile can modify member variables and static member variables (it cannot be applied to local variables). It means the variable must be fetched from main memory: reads cannot be satisfied from the thread's working-memory cache, and a thread's operations on a volatile variable access main memory directly.
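A minimal sketch of the fix (class and method names are mine): with run declared volatile, the worker is guaranteed to observe the write and exit:

```java
// With volatile, the write to run is visible to the worker thread.
class VolatileRunDemo {
    static volatile boolean run = true;

    // returns true if the worker thread stopped within the timeout
    static boolean demo() {
        Thread t = new Thread(() -> { while (run) { } });
        t.start();
        try {
            Thread.sleep(100);
            run = false;          // flushed to main memory by the write barrier
            t.join(2000);         // with volatile this returns almost immediately
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return !t.isAlive();
    }

    public static void main(String[] args) {
        System.out.println("worker stopped: " + demo());
    }
}
```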

synchronized can also solve this, but it requires the operating system's monitor lock, so its performance is worse.

In fact, adding a System.out.println inside the infinite loop also makes it stop:

    private void newLine() {
        try {
            synchronized (this) {
                ensureOpen();
                textOut.newLine();
                textOut.flushBuffer();
                charOut.flushBuffer();
                if (autoFlush)
                    out.flush();
            }
        }
        catch (InterruptedIOException x) {
            Thread.currentThread().interrupt();
        }
        catch (IOException x) {
            trouble = true;
        }
    }

Because the print method is marked synchronized (println is too, and the newline operation also holds the lock),
and synchronized also guarantees the visibility of variables used inside the synchronized block, the next read of run goes to main memory; when it reads false, the thread exits the while loop.
For the record, the operations performed by synchronized are:
1. Acquire the synchronization lock;
2. Clear the working memory;
3. Copy the latest copy of the object from main memory into working memory;
4. Execute the code (computation, output, etc.);
5. Flush the data back to main memory;
6. Release the synchronization lock.
To sum up in one sentence: synchronized can also guarantee the visibility of variables.

Visibility and Atomicity

The previous example is about visibility: it ensures that one thread's modification of a volatile variable is visible to other threads. Atomicity is not guaranteed. The pattern of one writer and many readers is exactly where volatile fits well.

synchronized guarantees both atomicity and visibility. Its disadvantage is that, as a heavyweight operation, its performance is relatively low.
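A small demo of the atomicity gap (class and method names are mine): even with volatile, two threads doing count++ can lose updates, because ++ is a read-modify-write of several steps, and volatile only makes each individual read and write visible:

```java
// volatile gives visibility but not atomicity: count++ can lose updates.
class VolatileNotAtomic {
    static volatile int count = 0;

    static int race() {
        Runnable inc = () -> {
            for (int i = 0; i < 10_000; i++) {
                count++;          // read, add 1, write back: three steps, not atomic
            }
        };
        Thread a = new Thread(inc), b = new Thread(inc);
        a.start(); b.start();
        try {
            a.join(); b.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return count;             // often less than 20000
    }

    public static void main(String[] args) {
        System.out.println("count = " + race() + " (20000 expected if ++ were atomic)");
    }
}
```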

3. Orderliness

The JVM may adjust the execution order of statements as long as single-threaded correctness is unaffected. This self-adjustment of execution order by the JVM is called instruction reordering. Under multi-threading, however, instruction reordering can break correctness.

Instruction reordering can cause problems under multi-threading.

For example: thread 2 assigns a value to num and then sets ready to true to signal thread 1. If the instructions are reordered so that the flag change runs first and the assignment to num happens last, thread 1 can wake up between the two and directly use the stale value 0.
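A sketch of the num/ready scenario just described (class and method names are mine):

```java
// Sketch: without volatile, (2) may be reordered before (1),
// so the reader can observe ready == true while num is still 0.
class ReorderDemo {
    static int num = 0;
    static boolean ready = false;   // declaring this volatile prevents the reorder

    static void writer() {          // run by thread 2
        num = 42;                   // (1)
        ready = true;               // (2)
    }

    static int reader() {           // run by thread 1
        while (!ready) { }          // spin until signaled
        return num;                 // may be 0 if (2) ran before (1)
    }
}
```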

 

To guarantee there is no such reordering, declare the ready variable volatile; the write barrier it adds ensures that the instructions before it are not moved past it.

4. The principle of volatile

The underlying implementation of volatile is the memory barrier (Memory Barrier):

  • A write barrier is added after each write to a volatile variable
  • A read barrier is added before each read of a volatile variable

1. Visibility guarantee

The write barrier ensures that changes to shared variables made before the barrier are synchronized to main memory, so that other threads can see the up-to-date data.

The read barrier guarantees that reads of shared variables after the barrier load the latest data from main memory: the thread does not reuse its own cached copy but goes to main memory to read.

2. Ordering guarantee

The write barrier ensures that, during instruction reordering, code before the write barrier is not moved after it; the writes before it must reach main memory first.

The read barrier ensures that, during instruction reordering, code after the read barrier is not moved before it.

Note: this ordering guarantee only prevents reordering of the relevant code within the current thread.

5. Thread-safe singleton

Checking whether the instance is null and then creating it is not atomic, so it can have thread-safety issues; we therefore add synchronized, and inside the lock the thread checks whether the instance already exists before creating it. This is also called lazy initialization.

But there is a problem with this: every call must acquire the lock just to check for null, and locking has overhead. Once the first creation has succeeded, every later call still pays the locking cost for a check that always finds the instance, wasting performance.
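A sketch of the synchronized lazy singleton just described (class name is mine):

```java
// Lazy initialization guarded by synchronized.
final class LazySingleton {
    private static LazySingleton INSTANCE;

    private LazySingleton() { }

    // every caller pays the locking cost, even after INSTANCE exists
    static synchronized LazySingleton getInstance() {
        if (INSTANCE == null) {
            INSTANCE = new LazySingleton();
        }
        return INSTANCE;
    }
}
```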

At this point we can use a double check: an outer if tests for null first, and if the instance is not null there is no need to lock at all.

But this can still go wrong because of instruction reordering. In bytecode, new breaks into three steps: allocate the object, call the constructor, and assign the reference to INSTANCE. If the assignment is reordered before the constructor call, another thread's outer check sees that INSTANCE is not null and returns the not-yet-constructed object directly.

Although synchronized can guarantee ordering, that only holds when the variable is managed entirely inside the synchronized block; here INSTANCE is also read outside the block, and synchronized only covers the assignment.

Solution:

In fact, it is enough to declare the INSTANCE variable volatile.

Because volatile adds a write barrier, it guarantees that the instructions before the write are executed first, so the construction must complete before the assignment; by the time INSTANCE is assigned, the object is fully constructed.
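A sketch of double-checked locking with INSTANCE declared volatile (class name is mine):

```java
// Double-checked locking: correct only because INSTANCE is volatile.
final class DclSingleton {
    // volatile forbids reordering the reference assignment before the constructor
    private static volatile DclSingleton INSTANCE;

    private DclSingleton() { }

    static DclSingleton getInstance() {
        if (INSTANCE == null) {                    // outer check: skip the lock once created
            synchronized (DclSingleton.class) {
                if (INSTANCE == null) {            // inner check: only one thread creates
                    INSTANCE = new DclSingleton();
                }
            }
        }
        return INSTANCE;
    }
}
```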

Question:

Why should a singleton class be final?

Because a final class cannot be inherited. If a subclass extended it and overrode its methods, the override could break the singleton guarantee.

If the class implements the serialization interface, what else is needed to prevent deserialization from breaking the singleton?

Objects are not necessarily created through new: if the class implements Serializable, deserialization also creates a new object. The solution is to add a readResolve method, which deserialization calls; by returning the existing instance directly, the freshly deserialized object is discarded.
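A sketch of the readResolve fix (class name is mine):

```java
import java.io.Serializable;

// Eagerly created singleton that survives serialization round-trips.
final class SerializableSingleton implements Serializable {
    private static final long serialVersionUID = 1L;

    static final SerializableSingleton INSTANCE = new SerializableSingleton();

    private SerializableSingleton() { }

    // called by deserialization; returning the existing instance
    // discards the freshly deserialized copy
    private Object readResolve() {
        return INSTANCE;
    }
}
```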

Origin blog.csdn.net/weixin_54232666/article/details/131236307