"The Art of Java Concurrent Programming" Fang Tengfei

Chapter 1 Challenges of Concurrent Programming

1. Context switching: the CPU executes tasks in turn using a time-slice allocation algorithm. When the current task's time slice runs out, the CPU switches to the next task, first saving the state of the current task so that the state can be restored the next time the CPU switches back to it. The process from saving the state to reloading it is a context switch.
2. Deadlock: two or more threads each hold a resource the other needs and wait on each other, leaving all of them blocked and unable to proceed.
3. Resource restrictions: limits imposed by hardware or software (such as bandwidth, disk throughput, or connection-pool size) that cap how fast a program can run regardless of how many threads it uses.
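The deadlock described in item 2 can be sketched in Java. The class and method names below are illustrative; the demo uses ReentrantLock.tryLock with a timeout instead of synchronized so that it terminates and reports the hold-and-wait cycle rather than hanging forever, which is what the same lock ordering would do with plain synchronized blocks.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class DeadlockDemo {
    private static final ReentrantLock lockA = new ReentrantLock();
    private static final ReentrantLock lockB = new ReentrantLock();
    // Latch guarantees both threads hold their first lock before either
    // tries the second, making the hold-and-wait cycle deterministic.
    private static final CountDownLatch bothHoldFirst = new CountDownLatch(2);

    // Take `first`, then try `second`; returns false if `second` could not
    // be acquired in time, i.e. the classic lock-ordering cycle occurred.
    static boolean acquireBoth(ReentrantLock first, ReentrantLock second)
            throws InterruptedException {
        first.lock();
        try {
            bothHoldFirst.countDown();
            bothHoldFirst.await();                       // both first locks are now held
            if (second.tryLock(200, TimeUnit.MILLISECONDS)) {
                second.unlock();
                return true;                              // no deadlock this time
            }
            return false;                                 // timed out: cycle detected
        } finally {
            first.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        boolean[] got = new boolean[2];
        // Opposite lock order in the two threads is the root cause of the deadlock.
        Thread t1 = new Thread(() -> { try { got[0] = acquireBoth(lockA, lockB); } catch (InterruptedException e) { } });
        Thread t2 = new Thread(() -> { try { got[1] = acquireBoth(lockB, lockA); } catch (InterruptedException e) { } });
        t1.start(); t2.start();
        t1.join();  t2.join();
        System.out.println((got[0] && got[1]) ? "no deadlock" : "deadlock detected");
    }
}
```

The standard fix is for every thread to acquire the two locks in the same global order, which makes the wait cycle impossible.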
Summary: the purpose of concurrent programming is to make programs run faster, but it brings many challenges, such as context switching, deadlock, and resource limitations. The concurrent containers and tools provided by the JDK can help address these problems.

Chapter 2 The underlying principle of the Java concurrency mechanism

Java code is compiled into bytecode, which the class loader loads into the JVM for execution; the JVM ultimately turns it into assembly instructions that run on the CPU. Java's concurrency mechanisms depend on the JVM's implementation and on CPU instructions.

2.1 Application of volatile

2.2 The realization principle and application of synchronized

The basis on which synchronized implements synchronization: every object in Java can be used as a lock.
For ordinary synchronization methods, the lock is the current instance object.
For static synchronization methods, the lock is the Class object of the current class.
For synchronized blocks, the lock is the object configured in the synchronized parentheses.
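A minimal sketch of the three forms above (class and method names are illustrative); the comments note which object serves as the lock in each case, and Thread.holdsLock confirms that those objects really are usable as monitor locks:

```java
public class SyncForms {
    private final Object guard = new Object();

    public synchronized void instanceMethod() {
        // lock held here: this (the current instance object)
    }

    public static synchronized void staticMethod() {
        // lock held here: SyncForms.class (the Class object of the class)
    }

    public void blockOnGuard() {
        synchronized (guard) {   // lock held here: the object in the parentheses
            // critical section
        }
    }

    public static void main(String[] args) {
        SyncForms s = new SyncForms();
        s.instanceMethod();
        staticMethod();
        s.blockOnGuard();
        // Thread.holdsLock lets us verify which object is the monitor:
        synchronized (s) {
            System.out.println("holds instance lock: " + Thread.holdsLock(s));
        }
        synchronized (SyncForms.class) {
            System.out.println("holds class lock: " + Thread.holdsLock(SyncForms.class));
        }
    }
}
```

Because the three forms lock different objects, an instance-synchronized method and a static-synchronized method of the same class do not exclude each other.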

Chapter 3 Java Memory Model

3.1 The basis of the Java memory model

3.1.1 Two key issues of concurrent programming model

Two key issues must be addressed in concurrent programming: one is how threads communicate with each other, and the other is how threads synchronize with each other.

Communication refers to the mechanism by which threads exchange information. In imperative programming there are two communication mechanisms between threads: shared memory and message passing. In the shared-memory concurrency model, threads share the program's public state and communicate implicitly by writing and reading that public state in memory. In the message-passing concurrency model there is no public state, and threads must communicate explicitly by sending messages to one another.

Synchronization refers to the mechanism a program uses to control the relative order of operations between different threads. In the shared-memory concurrency model, synchronization is explicit: the programmer must explicitly specify that a certain piece of code or a method needs to execute mutually exclusively between threads. In the message-passing concurrency model, synchronization is implicit, because a message must be sent before it can be received.

Java concurrency uses a shared memory model. The communication between Java threads is always implicit, and the entire communication process is completely transparent to the programmer.

3.1.2 Abstract structure of Java memory model

In Java, all instance fields, static fields, and array elements are stored in heap memory, which is shared between threads. This chapter uses "shared variables" to refer to instance fields, static fields, and array elements. Local variables are not shared between threads, so they have no memory visibility issues and are not affected by the memory model.

From an abstract point of view, the JMM defines the relationship between threads and main memory: shared variables between threads are stored in main memory, and each thread has a private local memory (Local Memory) that holds the thread's copy of the shared variables it reads/writes. Local memory is an abstraction of the JMM and does not really exist; it covers caches, write buffers, registers, and other hardware and compiler optimizations.
[Figure 3-1: abstract structure of the Java memory model]
As figure 3-1 shows, communication between thread A and thread B must go through these two steps:
1) Thread A flushes the updated copy of the shared variable in the local memory to the main memory.
2) Thread B reads the shared variable that thread A has updated from the main memory.

Overall, the JMM controls the interaction between main memory and each thread's local memory in order to provide memory visibility guarantees to Java programmers.
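A minimal sketch of the two-step communication above (names are my own choosing): the shared variable is declared volatile so that thread A's write is flushed to main memory and the subsequent read sees the updated value; joining thread A also orders the read after the write.

```java
public class SharedMemoryDemo {
    // volatile forces writes to be flushed to main memory (step 1) and
    // reads to go to main memory (step 2) rather than a stale local copy.
    private static volatile int sharedVar = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(() -> { sharedVar = 42; });  // step 1: A writes and flushes
        a.start();
        a.join();                                          // wait for A to finish
        // step 2: this thread (playing the role of thread B) reads from main memory
        System.out.println("thread B sees: " + sharedVar);
    }
}
```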

3.1.3 Reordering from source code to instruction sequence

When a program runs, the compiler and processor often reorder instructions to improve performance. There are three types of reordering:
1) Compiler-optimized reordering
2) Instruction-level parallel reordering
3) Memory-system reordering
[Figure: from source code to the finally executed instruction sequence]
Type 1 is compiler reordering; types 2 and 3 are processor reordering. These reorderings may cause memory visibility problems in multithreaded programs.
For compiler reordering, the JMM's compiler reordering rules prohibit specific types of compiler reordering (not all compiler reordering is prohibited).
For processor reordering, the JMM's processor reordering rules require the Java compiler, when generating instruction sequences, to insert specific types of memory barrier (Memory Barriers) instructions; these memory barriers prohibit specific types of processor reordering.

3.1.4 Classification of concurrent programming models

3.1.5 Introduction to happens-before

Starting with JDK 5, Java adopted the new JSR-133 memory model, which uses the concept of happens-before to describe memory visibility between operations.
In the JMM, if the result of one operation needs to be visible to another operation, there must be a happens-before relationship between the two operations. The two operations can be within a single thread or in different threads.
Each happens-before rule corresponds to one or more compiler and processor reordering rules.
The happens-before rules most relevant to programmers:
Program order rule: every operation in a thread happens-before any subsequent operation in that thread.
Monitor lock rule: unlocking a lock happens-before every subsequent locking of that same lock.
Volatile variable rule: a write to a volatile field happens-before any subsequent read of that volatile field.
Transitivity: if A happens-before B and B happens-before C, then A happens-before C.
Note: a happens-before relationship between two operations does not mean the first operation must execute before the second! Happens-before only requires that the result of the first operation be visible to the second operation, and that the first operation be ordered before the second.
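The program order, volatile, and transitivity rules combine into the common publish pattern sketched below (class and field names are illustrative): the write to data happens-before the volatile write to ready, which happens-before the volatile read of ready, which happens-before the read of data, so once the reader observes ready == true it is guaranteed to see data == 42.

```java
public class HappensBeforeDemo {
    static int data = 0;                    // plain shared variable
    static volatile boolean ready = false;  // volatile publication flag

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            data = 42;      // 1. ordinary write      (program order: 1 hb 2)
            ready = true;   // 2. volatile write      (volatile rule: 2 hb 3)
        });
        Thread reader = new Thread(() -> {
            while (!ready) { }  // 3. volatile read: spin until published
            // 4. by transitivity, 1 happens-before 4, so 42 is visible here
            System.out.println("data = " + data);
        });
        reader.start();
        writer.start();
        reader.join();
        writer.join();
    }
}
```

Without volatile on ready, the chain breaks and the reader could spin forever or read a stale data value.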

3.2 Reordering

Reordering is a means by which compilers and processors rearrange the instruction sequence in order to optimize program performance.

3.2.1 Data dependence

If two operations access the same variable and one of them is a write, there is a data dependence between the two operations. Data dependence falls into three types:
Write-then-read: a = 1; b = a; — after writing a variable, read it.
Write-then-write: a = 1; a = 2; — after writing a variable, write it again.
Read-then-write: a = b; b = 1; — after reading a variable, write it.
In all three cases, reordering the two operations changes the execution result of the program. The compiler and processor respect data dependences when reordering: they will not change the execution order of two operations that have a data dependence on each other. Note that this refers only to operations within a single thread on a single processor; dependences between different processors or different threads are not considered by the compiler and processor.

3.2.2 as-if-serial

as-if-serial semantics: no matter how instructions are reordered, the execution result of a (single-threaded) program must not change.
To comply with as-if-serial semantics, the compiler and processor do not reorder two operations that have a data dependence on each other.
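A minimal circle-area style sketch of as-if-serial (names illustrative): A and B have no data dependence on each other and may be reordered, while C depends on both and must stay last; either order yields the same single-threaded result, which is exactly what as-if-serial guarantees.

```java
public class AsIfSerial {
    public static void main(String[] args) {
        double pi = 3.14;          // A
        double r  = 1.0;           // B  (A and B may be reordered freely)
        double area = pi * r * r;  // C  (depends on A and B: cannot move before them)
        System.out.println(area);
    }
}
```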

3.2.3 Procedure sequence rules

3.2.4 The impact of reordering on multithreading

3.3 Sequential consistency

3.4 volatile memory semantics

Once a shared variable is declared volatile, reads and writes of that variable become special.

3.4.1 Features of volatile

Visibility: a read of a volatile variable always sees the last write to that volatile variable (by any thread).
Atomicity: a read or write of any single volatile variable is atomic, but compound operations such as volatile++ are not atomic.
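The non-atomicity of volatile++ can be demonstrated directly (class name illustrative): four threads race on a volatile counter and an AtomicInteger. The atomic counter always reaches the full total, while the volatile counter can lose updates because ++ is a read-modify-write sequence of three steps.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class VolatileNotAtomic {
    static volatile int volatileCount = 0;
    static final AtomicInteger atomicCount = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 100_000; j++) {
                    volatileCount++;               // read + add + write: updates can be lost
                    atomicCount.incrementAndGet(); // single atomic CAS-based update
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        // The volatile counter can only lose increments, never gain them:
        System.out.println("volatile <= atomic: " + (volatileCount <= atomicCount.get()));
        System.out.println("atomic = " + atomicCount.get());
    }
}
```

Volatile guarantees visibility of each individual read and write here, but not the atomicity of the three-step increment; AtomicInteger (or synchronized) is required for that.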

Source: blog.csdn.net/shiquan202101/article/details/112297598