Concurrency series: "The synchronized keyword in Java"

This article is a set of study notes on "The Beauty of Java Concurrent Programming".

Memory visibility problem of shared variables in Java

Before talking about synchronized, let's first look at the memory visibility of shared variables in Java.

Let's take a look at how the Java memory model handles shared variables in a multithreaded program:

The Java memory model stipulates that all variables are stored in main memory. When a thread uses a variable, it copies the variable from main memory into its own working space, also called its working memory, and the thread's reads and writes then operate on that copy in working memory. The Java memory model is an abstract concept, so what does a thread's working memory correspond to in an actual implementation?

The actual memory model:

The figure shows a dual-core CPU system architecture. Each core has its own controller and arithmetic logic unit (ALU): the controller contains a set of registers and an operation controller, while the ALU performs arithmetic and logic operations. Each core also has its own level one (L1) cache, and in some architectures there is a level two (L2) cache shared by all cores. The working memory in the Java memory model corresponds to the L1/L2 caches or the CPU registers here.

When a thread operates on a shared variable, it first copies the shared variable from main memory into its own working memory, processes it there, and then updates the new value back to main memory. So what happens if thread A and thread B process a shared variable at the same time? Using the CPU architecture shown above, assume that thread A and thread B run on different CPUs and that both cache levels are initially empty. Because of the caches, a memory visibility problem arises, as the following analysis shows.

  • Thread A first reads the value of the shared variable X. Neither cache level hits, so X is loaded from main memory; suppose its value is 0. X=0 is then cached in thread A's L1 cache and in the shared L2 cache. Thread A modifies X to 1, writes the new value into both cache levels, and flushes it to main memory. After this operation, the value of X in both cache levels of thread A's CPU and in main memory is 1.
  • Thread B then reads the value of X. Its L1 cache misses, but the shared L2 cache hits and returns X=1; everything is still consistent, because X is also 1 in main memory at this point. Thread B then modifies X to 2, stores the new value in its own L1 cache and the shared L2 cache, and finally updates X in main memory to 2; so far, still fine.
  • Thread A now needs to modify X again. When it reads X, its own L1 cache hits and returns X=1. Here lies the problem: thread B has already changed X to 2, so why does thread A still read 1? This is the memory visibility problem of shared variables: the value written by thread B is not visible to thread A.
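As a hedged sketch of the scenario above (the class and variable names are illustrative, not from the original article), the following code shares an ordinary, unsynchronized field x between two threads. Under the Java memory model there is no happens-before relationship between the two threads, so thread A is allowed to read a stale value; whether it actually does depends on the JVM, the hardware caches, and timing.

    // Conceptual illustration of the A/B scenario above; not a guaranteed reproduction.
    public class StaleReadDemo {

        private static int x = 0; // shared variable, initially 0

        public static void main(String[] args) throws InterruptedException {
            Thread threadA = new Thread(() -> {
                x = 1; // thread A writes X = 1
                sleepQuietly(100);
                // Without synchronization, this read may legally return the stale value 1
                // even after thread B has written 2.
                System.out.println("Thread A reads x = " + x);
            });

            Thread threadB = new Thread(() -> {
                sleepQuietly(50);
                x = 2; // thread B writes X = 2
            });

            threadA.start();
            threadB.start();
            threadA.join();
            threadB.join();
        }

        private static void sleepQuietly(long millis) {
            try {
                Thread.sleep(millis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }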

So how do we solve the memory visibility problem of shared variables? In Java, it can be solved with the synchronized keyword.

Introduction to the synchronized keyword

synchronized is an atomic built-in lock provided by Java, and every object in Java can be used as a synchronization lock. These built-in locks, invisible to the user, are called intrinsic locks, also known as monitor locks.

A thread automatically acquires the intrinsic lock before entering a synchronized code block; while it holds the lock, other threads that try to enter the same synchronized block are blocked and suspended.
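A minimal sketch of this behavior, using a simple counter class of my own (not from the original article): while one thread is inside the synchronized block of increment(), any other thread synchronizing on the same object blocks until the lock is released.

    public class Counter {

        private int count = 0;

        public void increment() {
            // The calling thread acquires the intrinsic lock on `this` before entering
            // the block; other threads synchronizing on the same object block here.
            synchronized (this) {
                count++;
            }
        }

        // Equivalent form: a synchronized instance method also locks on `this`.
        public synchronized int get() {
            return count;
        }
    }

Both forms use the same intrinsic lock (the Counter instance), so a thread inside increment() also excludes other threads from get() on that object.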

A thread that has acquired the intrinsic lock releases it in one of the following situations:

  • The thread exits the synchronized block normally;
  • The thread throws an exception that propagates out of the synchronized block;
  • The thread calls one of the wait() methods of the lock object inside the synchronized block (see the sketch after this list).
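To illustrate the third case, here is a hedged sketch (class and field names are my own, not from the original article): calling wait() inside a synchronized block releases the intrinsic lock, which is what allows another thread to enter a block guarded by the same lock and call notifyAll().

    public class WaitReleaseDemo {

        private final Object lock = new Object();
        private boolean ready = false;

        public void awaitReady() throws InterruptedException {
            synchronized (lock) {
                while (!ready) {
                    // wait() releases the intrinsic lock on `lock` while waiting,
                    // allowing another thread to enter signalReady().
                    lock.wait();
                }
            }
        }

        public void signalReady() {
            synchronized (lock) {
                ready = true;
                lock.notifyAll(); // waiters re-acquire the lock after this block exits
            }
        }
    }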

The built-in lock is an exclusive lock: when one thread holds the lock, other threads must wait for it to release the lock before they can acquire it.

In addition, because Java threads map one-to-one to native operating system threads, blocking a thread requires switching from user mode to kernel mode, which is a very expensive operation; using synchronized can therefore cause context switches.

Memory semantics of synchronized

The memory visibility problem of shared variables was introduced above; it is mainly caused by each thread's working memory. Next, let's explain the memory semantics of synchronized, which solve this visibility problem.

The memory semantics of entering a synchronized block are to clear, from the thread's working memory, the variables that will be used inside the block, so that when such a variable is used inside the block it is not read from working memory but fetched directly from main memory. The memory semantics of exiting a synchronized block are to flush any modifications made to shared variables inside the block back to main memory.

These are in fact the semantics of acquiring and releasing a lock: acquiring the lock clears from working memory the shared variables that will be used inside the locked block, so they are loaded from main memory when used, and releasing the lock flushes the shared variables modified in working memory back to main memory. Besides solving the memory visibility problem of shared variables, synchronized is also often used to implement atomic operations.
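A hedged sketch of how these semantics are commonly relied on (the class below is illustrative, not from the original article): because both the write and the read go through the same lock, a reader is guaranteed to see the value written before the lock was last released, and the read-modify-write in increment() is also atomic.

    public class SharedState {

        private int value; // shared variable

        // Exiting the synchronized method releases the lock and flushes the
        // modified value to main memory.
        public synchronized void setValue(int newValue) {
            this.value = newValue;
        }

        // Entering the synchronized method acquires the lock and clears the cached
        // copy in working memory, so the value is re-read from main memory.
        public synchronized int getValue() {
            return value;
        }

        // The read-modify-write sequence is protected by the same lock, so it is atomic.
        public synchronized void increment() {
            value++;
        }
    }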

Please also note that the synchronized keyword can cause thread context switches and bring thread scheduling overhead.

Source: blog.csdn.net/weixin_44471490/article/details/109061303