JVM Series: The Java Virtual Machine and Threads

Introduction

"Reading for teenagers is like looking at the moon through the cracks; reading for middle-aged people is like looking at the moon in the garden; reading for the elderly is like playing with the moon on the stage. The depth of experience is the depth of ears." Reference book: "In-depth understanding of Java Virtual Machine
insert image description here
"

Personal java knowledge sharing project - gitee address

Personal java knowledge sharing project - github address

The Java virtual machine and threads

Threads generally appear where data is accessed concurrently, and knowing how to keep concurrent access to data safe is a must-have skill for Java programmers. In this article we will look at the underlying principles of threads from the perspective of the Java virtual machine. Let's go!

Implementation of threads

We know that a thread is a more lightweight scheduling and execution unit than a process. Introducing threads separates a process's resource allocation from its execution scheduling: each thread can share the process's resources (memory address space, file I/O, etc.) while being scheduled independently (the thread is the basic unit of CPU scheduling).

Mainstream operating systems all provide a thread implementation, and the Java language offers a unified abstraction for thread operations across different hardware and operating system platforms. Each instance of the java.lang.Thread class on which start() has been called and which has not yet terminated represents a thread.
We notice that the Thread class differs significantly from most Java APIs: all of its key methods are declared native. In the Java API, a native method often means that the method is not, or cannot be, implemented by platform-independent means (of course, a method may also be made native for execution efficiency, but the most efficient means are usually platform-specific ones).
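To make this concrete, here is a minimal sketch of creating and starting a thread; in the JDK's Thread implementation, start() ultimately delegates to the private native method start0():

public class StartDemo {
    public static void main(String[] args) throws InterruptedException {
        // Each started, not-yet-terminated Thread instance represents a live thread
        Thread worker = new Thread(() ->
                System.out.println("hello from " + Thread.currentThread().getName()));
        worker.start();   // delegates to the native start0() method under the hood
        worker.join();    // wait for the worker to finish
    }
}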

There are three main ways to implement threads: using kernel threads, using user threads, and using user threads plus lightweight processes (a hybrid).

  1. Implemented using kernel threads

    A Kernel-Level Thread (KLT) is a thread directly supported by the operating system kernel (Kernel, hereinafter "the kernel"). Such threads are switched by the kernel, which schedules them through the scheduler and is responsible for mapping their work onto the available processors. Each kernel thread can be regarded as a clone of the kernel, which is what gives the operating system the ability to handle several things at once. A kernel that supports multithreading is called a multi-threaded kernel (Multi-Threads Kernel).

    Programs generally do not use kernel threads directly, but instead use a high-level interface over them, the lightweight process (Light Weight Process, LWP). A lightweight process is what we usually call a thread. Since each lightweight process is backed by a kernel thread, lightweight processes can exist only where kernel threads are supported first. This 1:1 relationship between lightweight processes and kernel threads is called the one-to-one threading model.

    (Figure: the one-to-one threading model)

    Thanks to kernel-thread support, each lightweight process becomes an independent scheduling unit; even if one lightweight process blocks in a system call, it does not stop the whole process from working. However, lightweight processes have their limitations. First, because they are implemented on top of kernel threads, thread operations such as creation, destruction, and synchronization require system calls, which are relatively expensive and require switching back and forth between user mode (User Mode) and kernel mode (Kernel Mode). Second, each lightweight process needs a kernel thread to back it, so lightweight processes consume a certain amount of kernel resources (such as the kernel thread's stack space), and the number of lightweight processes a system can support is therefore limited.

  2. Implemented using user threads

    Broadly speaking, any thread that is not a kernel thread can be considered a user thread (User Thread, UT). By this definition, lightweight processes also count as user threads, but since their implementation is always based on the kernel and many operations require system calls, their efficiency is limited.

    In the narrow sense, a user thread is one implemented entirely by a thread library in user space; the system kernel is not aware of the threads' existence. Creation, synchronization, destruction, and scheduling of user threads are done entirely in user mode without help from the kernel. If implemented properly, such threads never need to switch into kernel mode, so operations on them can be extremely fast and cheap, and a much larger number of threads can be supported. Multithreading in some high-performance databases is implemented with user threads. This 1:N relationship between processes and user threads is called the one-to-many threading model.

    (Figure: the one-to-many threading model)

    The advantage of user threads is that they need no kernel support; the disadvantage is that, without kernel support, every thread operation must be handled by the user program itself. Thread creation, switching, and scheduling all become the program's problem, and since the operating system allocates processor resources only to processes, questions such as "how do we handle blocking?" and "how do we map threads onto multiple processors?" become extremely difficult, sometimes impossible, to solve. Programs implemented with user threads therefore tend to be complicated ("complicated" and "the program handles thread operations itself" do not mean every program must write complex thread-management code: most programs that use user threads rely on a particular thread library for the basic thread operations, with the complexity encapsulated inside the library). Apart from multi-threaded programs on operating systems that do not support multithreading (such as DOS) and a few programs with special requirements, fewer and fewer programs use user threads; languages such as Java and Ruby once used user threads and eventually abandoned them.

  3. Mixed implementation using user threads plus lightweight processes

    Besides implementations that rely on kernel threads and implementations written entirely in user programs, there is a third approach that uses kernel threads and user threads together. In this hybrid implementation both user threads and lightweight processes exist. User threads are still built entirely in user space, so creating, switching, and destroying them remains cheap, and large-scale user-thread concurrency is supported. Meanwhile, the lightweight processes supported by the operating system act as a bridge between the user threads and the kernel threads, so the thread-scheduling functionality and processor mapping provided by the kernel can be used, and user threads' system calls are made through the lightweight processes, greatly reducing the risk of the entire process being blocked. In this hybrid mode, the ratio of user threads to lightweight processes is not fixed, i.e. it is an N:M relationship.

    (Figure: the many-to-many threading model)

For the Sun JDK, both the Windows and Linux versions use the one-to-one threading model: one Java thread maps to one lightweight process, because the threading model provided by both Windows and Linux is itself one-to-one.

Thread scheduling

Thread scheduling refers to the process by which the system allocates processor time to threads. There are two main scheduling approaches: cooperative thread scheduling (Cooperative Threads-Scheduling) and preemptive thread scheduling (Preemptive Threads-Scheduling).

In a multi-threaded system with cooperative scheduling, a thread's execution time is controlled by the thread itself: after finishing its work, the thread must actively notify the system to switch to another thread. The biggest advantage of cooperative multithreading is that it is simple to implement, and because a thread switches only after finishing its own work, the switch points are known to the thread itself, so there are no thread synchronization problems; the "coroutine" in the Lua language is such an implementation. Its drawback is also obvious: thread execution time is uncontrollable. If a thread is written badly and never notifies the system to switch, the program will hang there forever. Long ago, Windows 3.x implemented multitasking between processes cooperatively, and it was quite unstable: one process that refused to yield CPU time could bring down the whole system.

In a multi-threaded system with preemptive scheduling, each thread is allocated execution time by the system, and thread switching is not decided by the threads themselves (in Java, Thread.yield() can give up execution time, but a thread has no way of obtaining more of it). Since the system controls thread execution time under this approach, no single thread can cause the entire process to block. The thread scheduling method Java uses is preemptive scheduling. In contrast to the Windows 3.x example above, the Windows 9x/NT kernels implement multitasking preemptively; when a process misbehaves, we can use the Task Manager to kill it without crashing the system.

Although Java thread scheduling is done automatically by the system, we can still "suggest" that the system give some threads more execution time and others less, by setting thread priorities. The Java language defines 10 thread priority levels (Thread.MIN_PRIORITY through Thread.MAX_PRIORITY). When two threads are in the Ready state at the same time, the one with the higher priority is more likely to be selected for execution.
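A minimal sketch of setting thread priorities (the actual effect is platform-dependent, as discussed next):

public class PriorityDemo {
    public static void main(String[] args) {
        Runnable spin = () -> { /* some busy work */ };
        Thread low = new Thread(spin, "low");
        Thread high = new Thread(spin, "high");
        low.setPriority(Thread.MIN_PRIORITY);   // 1
        high.setPriority(Thread.MAX_PRIORITY);  // 10
        low.start();
        high.start();
        // The priority is only a hint to the OS scheduler, never a guarantee
    }
}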

However, thread priority is not very reliable. Because Java threads are implemented by mapping onto the system's native threads, thread scheduling ultimately depends on the operating system. Although many operating systems provide a notion of thread priority, it does not necessarily correspond one-to-one with Java thread priorities: for example, Solaris has 2147483648 (2^31) priority levels, while Windows has only 7. A system with more priorities than Java simply leaves some levels unused, but on a system with fewer priorities than Java, several Java priorities inevitably map to the same native priority. Table 12-1 shows the correspondence between Java thread priorities and Windows thread priorities; the JDK on the Windows platform uses the 6 Windows thread priorities other than THREAD_PRIORITY_IDLE.

(Table 12-1: correspondence between Java thread priorities and Windows thread priorities)
We said above that "thread priority is not very reliable". It is not only that different priorities may collapse into the same one on some platforms; other factors also keep us from relying too heavily on priority: the system itself may change a thread's priority. For example, Windows has a feature called Priority Boosting (which can be turned off): roughly, when the system finds a thread working especially "hard", it may boost the thread beyond its set priority to allocate it more execution time. Therefore we cannot use priorities in a program to determine completely and accurately which of a group of Ready threads will execute first.

Thread states

The Java language defines 6 thread states. At any point in time, a thread can be in one and only one of these states. The 6 states are as follows:

  • New: a thread that has been created but not yet started is in this state.
  • Runnable: Runnable covers both Running and Ready from the operating system's thread states; a thread in this state may be executing, or may be waiting for the CPU to allocate execution time to it.
  • Waiting (indefinitely): threads in this state are not allocated CPU execution time; they wait to be explicitly woken up by another thread. The following methods put a thread into the indefinite waiting state:
    • Object.wait() with no timeout parameter set
    • Thread.join() with no timeout parameter set
    • LockSupport.park()
  • Timed Waiting: threads in this state are not allocated CPU execution time either, but they do not need to be explicitly woken by another thread: the system wakes them automatically after a certain time. The following methods put a thread into the timed waiting state:
    • Thread.sleep()
    • Object.wait() with a timeout parameter set
    • Thread.join() with a timeout parameter set
    • LockSupport.parkNanos()
    • LockSupport.parkUntil()
  • Blocked: the thread is blocked. The difference between the "blocked" state and the "waiting" state is that a blocked thread is waiting to acquire an exclusive lock, an event that occurs when another thread gives up that lock, whereas a waiting thread is waiting for a period of time to elapse or for a wake-up action to occur. A thread enters the blocked state while the program is waiting to enter a synchronized region.
  • Terminated: the state of a terminated thread; the thread has finished execution.

The above five states will be converted to each other when a specific event occurs, and their conversion relationship is shown in the figure:
(Figure: thread state transition relationships)
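These states correspond to the java.lang.Thread.State enum; a minimal sketch that observes them via Thread.getState():

public class StateDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(500);            // the thread is TIMED_WAITING while sleeping
            } catch (InterruptedException ignored) { }
        });
        System.out.println(t.getState());    // NEW
        t.start();
        Thread.sleep(100);
        System.out.println(t.getState());    // TIMED_WAITING (inside sleep)
        t.join();
        System.out.println(t.getState());    // TERMINATED
    }
}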

Thread safety

Thread safety: "An object is thread-safe if, when multiple threads access it, its behavior yields correct results without the caller having to consider how those threads are scheduled or interleaved by the runtime environment, and without any additional synchronization or other coordination on the caller's side."

Ordering thread safety from strongest to weakest, the data shared by operations in the Java language can be divided into the following five categories: immutable, absolutely thread-safe, relatively thread-safe, thread-compatible, and thread-hostile.

1. Immutable

In the Java language (especially after JDK 1.5, i.e. after the Java memory model was revised), immutable (Immutable) objects are always thread-safe; neither the object's method implementations nor their callers need to take any thread-safety measures. As mentioned in an earlier article, as long as an immutable object is constructed correctly (no this reference escapes during construction), its externally visible state never changes, and it can never be seen in an inconsistent state across multiple threads. The safety brought by immutability is the simplest and purest kind. For example, java.lang.String objects are typical immutable objects: calling methods such as substring(), replace(), and concat() does not affect the original value but only returns a newly constructed string object.
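A quick illustration:

public class ImmutableDemo {
    public static void main(String[] args) {
        String s = "hello";
        String t = s.concat(" world"); // returns a brand-new String
        System.out.println(s);         // "hello" -- the original value is untouched
        System.out.println(t);         // "hello world"
    }
}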

2. Absolute thread safety

satisfy"When multiple threads access an object, if you do not need to consider the scheduling and alternate execution of these threads in the runtime environment, and do not need to perform additional synchronization, or perform any other coordination operations on the caller, the behavior of calling this object is The correct result can be obtained, then this object is thread-safe"The thread in this sentence is absolutely thread-safe, but this definition is actually very strict. It usually takes a lot of effort for a class to achieve "no matter what the runtime environment is, the caller does not need any additional synchronization measures." , and sometimes even at an unrealistic cost.

Most classes that label themselves thread-safe in the Java API are not absolutely thread-safe. We can see what "absolute" means here through a Java API class that is thread-safe but not "absolutely thread-safe".

import java.util.Vector;

public class VectorTest {

    private static Vector<Integer> vector = new Vector<>();

    public static void main(String[] args) {
        while (true) {
            for (int i = 0; i < 10; i++) {
                vector.add(i);
            }
            Thread removeThread = new Thread(() -> {
                for (int i = 0; i < vector.size(); i++) {
                    vector.remove(i);
                }
            });

            Thread printThread = new Thread(() -> {
                for (int i = 0; i < vector.size(); i++) {
                    System.out.println(vector.get(i));
                }
            });
            removeThread.start();
            printThread.start();
            // throttle thread creation so we don't exhaust resources
            while (Thread.activeCount() > 30);
        }
    }
}

If you run this case, ArrayIndexOutOfBoundsException stack traces will be printed on the terminal, but the program will not stop.

This happens because, although the get(), remove(), and size() methods of Vector are all synchronized, this code is still unsafe in a multi-threaded environment unless additional synchronization measures are taken at the call site: if another thread removes an element at just the wrong moment, so that index i is no longer valid, accessing the element at i will throw an ArrayIndexOutOfBoundsException. If we want to guarantee that this code executes correctly, we have to change the definitions of removeThread and printThread as shown in the following code:

import java.util.Vector;

public class VectorTest {

    private static Vector<Integer> vector = new Vector<>();

    public static void main(String[] args) {
        while (true) {
            for (int i = 0; i < 10; i++) {
                vector.add(i);
            }
            Thread removeThread = new Thread(() -> {
                // hold the Vector's own monitor across the whole compound action
                synchronized (vector) {
                    for (int i = 0; i < vector.size(); i++) {
                        vector.remove(i);
                    }
                }
            });

            Thread printThread = new Thread(() -> {
                synchronized (vector) {
                    for (int i = 0; i < vector.size(); i++) {
                        System.out.println(vector.get(i));
                    }
                }
            });
            removeThread.start();
            printThread.start();
            // throttle thread creation so we don't exhaust resources
            while (Thread.activeCount() > 30);
        }
    }
}

3. Relative thread safety

Relative thread safety is what we usually mean by thread safety: it guarantees that individual operations on the object are thread-safe, so no additional safeguards are needed for a single call, but for particular sequences of consecutive calls, additional synchronization on the calling side may be needed to guarantee correctness.

In the Java language, most thread-safe classes fall into this category, for example Vector, Hashtable, and the collections wrapped by Collections.synchronizedCollection() and friends.
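For example, even with Collections.synchronizedList(), a compound action such as iteration still needs client-side locking on the wrapper itself (a requirement documented in the Collections javadoc); a minimal sketch:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SyncListDemo {
    public static void main(String[] args) {
        List<Integer> list = Collections.synchronizedList(new ArrayList<>());
        for (int i = 0; i < 10; i++) {
            list.add(i);
        }
        // Each add()/get() alone is safe, but iteration is a sequence of calls,
        // so we must synchronize on the wrapper for the whole traversal
        synchronized (list) {
            for (Integer value : list) {
                System.out.println(value);
            }
        }
    }
}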

4. Thread compatibility

Thread compatibility means the object itself is not thread-safe, but it can be used safely in a concurrent environment by using synchronization correctly on the calling side. When we say a class is not thread-safe, this is usually what we mean. Most classes in the Java API are thread-compatible, such as ArrayList and HashMap, the counterparts of the Vector and Hashtable mentioned above.

5. Thread hostility

Thread hostility refers to code that cannot be used concurrently in a multi-threaded environment, regardless of whether the calling side takes synchronization measures. Since the Java language is inherently multi-threaded, thread-hostile code rarely appears; it is usually harmful and should be avoided as much as possible.

An example of thread hostility is the suspend() and resume() methods of the Thread class. If two threads hold the same Thread object at the same time, one trying to interrupt the thread and the other trying to resume it, then regardless of any synchronization, the target thread is at risk of deadlock: if the thread suspended by suspend() is the very thread that would execute resume(), deadlock is certain. It is for this reason that suspend() and resume() were declared deprecated by the JDK (@Deprecated). Other common thread-hostile operations include System.setIn(), System.setOut(), and System.runFinalizersOnExit().

Methods of implementing thread safety

Mutual-exclusion synchronization

Mutual exclusion and synchronization (Mutual Exclusion & Synchronization) are a common means of guaranteeing the correctness of concurrency. Synchronization means ensuring that when multiple threads access shared data concurrently, the shared data is used by only one thread (or, when semaphores are used, a fixed number of threads) at a time. Mutual exclusion is a means of achieving synchronization; critical sections (Critical Section), mutexes (Mutex), and semaphores (Semaphore) are the main ways of implementing mutual exclusion. Among these four terms, mutual exclusion is the cause and synchronization is the effect; mutual exclusion is the method and synchronization is the goal.

In Java, the most basic means of mutual-exclusion synchronization is the synchronized keyword. After compilation, a synchronized block produces the two bytecode instructions monitorenter and monitorexit before and after the block. Both instructions take a parameter of reference type that specifies the object to lock and unlock. If the synchronized in the Java program explicitly specifies an object parameter, that object's reference is used; if not, then depending on whether synchronized modifies an instance method or a class method, the corresponding object instance or Class object is used as the lock object.
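A minimal sketch you can verify yourself with javap -c (the class and field names here are just placeholders):

public class MonitorDemo {
    private static final Object LOCK = new Object();
    private static int counter = 0;

    public static void increment() {
        synchronized (LOCK) {   // compiles to monitorenter on LOCK
            counter++;
        }                       // compiles to monitorexit on LOCK
    }
}

Compiling this class and running javap -c MonitorDemo shows the monitorenter/monitorexit pair around the increment of counter (plus an extra monitorexit on the exception-handling path).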

According to the virtual machine specification, when the monitorenter instruction is executed, the thread first tries to acquire the object's lock. If the object is not locked, or the current thread already owns the object's lock, the lock counter is incremented by 1; correspondingly, executing monitorexit decrements the counter by 1, and when the counter reaches 0 the lock is released. If acquiring the object's lock fails, the current thread blocks and waits until the lock is released by the other thread.

Two points in the specification's description of monitorenter and monitorexit deserve special attention. First, a synchronized block is reentrant for the same thread, so a thread cannot lock itself out. Second, the synchronized block prevents other threads from entering until the thread that has entered it finishes executing.
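A small sketch of reentrancy: the same thread can re-acquire a monitor it already holds, so the nested call below does not deadlock:

public class ReentrantDemo {
    public synchronized void outer() {
        System.out.println("in outer, re-entering...");
        inner(); // re-acquires this object's monitor -- allowed for the owning thread
    }

    public synchronized void inner() {
        System.out.println("in inner");
    }

    public static void main(String[] args) {
        new ReentrantDemo().outer();
    }
}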

Besides synchronized, we can also use the reentrant lock (ReentrantLock) in the java.util.concurrent package (hereinafter JUC) to achieve synchronization. In basic usage, ReentrantLock is very similar to synchronized: both are reentrant by the same thread, and they differ only in how the code is written, one being a mutex at the API level (lock() and unlock() paired with a try/finally block) and the other a mutex at the native-syntax level. However, compared with synchronized, ReentrantLock adds some advanced features, chiefly the following three: interruptible waiting, fair locks, and binding a lock to multiple conditions (see the sketch after this list).

  1. Interruptible waiting means that when the thread holding the lock does not release it for a long time, a waiting thread may choose to give up waiting and do something else instead. This interruptibility is very helpful when dealing with synchronized blocks that take very long to execute.
  2. A fair lock means that when multiple threads wait for the same lock, they must acquire it in the order in which they requested it; an unfair lock makes no such guarantee, and when the lock is released any waiting thread has a chance to acquire it. The lock in synchronized is unfair, and ReentrantLock is unfair by default, but a fair lock can be requested through the constructor that takes a boolean.
  3. Binding multiple conditions means that one ReentrantLock object can be bound to several Condition objects at once. With synchronized, the lock object's wait() and notify()/notifyAll() methods implement a single implicit condition; to associate more than one condition you would have to add an extra lock. ReentrantLock needs none of this: simply call newCondition() as many times as needed.
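A minimal sketch of the three features together (a fair lock, an interruptible acquire, and two conditions on one lock); the producer method is just an illustrative placeholder:

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class ReentrantLockDemo {
    private final ReentrantLock lock = new ReentrantLock(true); // true = fair lock
    private final Condition notFull = lock.newCondition();      // one lock,
    private final Condition notEmpty = lock.newCondition();     // several conditions

    public void produce() throws InterruptedException {
        lock.lockInterruptibly(); // waiting here can be interrupted
        try {
            // ... add an item to a buffer, then wake up consumers ...
            notEmpty.signal();
        } finally {
            lock.unlock(); // always release in finally
        }
    }
}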

Both synchronized and ReentrantLock can achieve mutual-exclusion synchronization. If your JDK version is earlier than 1.6, ReentrantLock is recommended; from 1.6 onward, synchronized is recommended. The reason for this choice is that before JDK 1.6 synchronized had worse throughput than ReentrantLock in multi-threaded environments, but JDK 1.6 introduced a series of lock optimizations (to be explained in a later article) that made the performance of the two basically equal, so performance is no longer a reason to choose ReentrantLock. Since the virtual machine's future performance improvements will certainly favor the built-in synchronized, the advice remains: when synchronized can meet the requirements, prefer synchronized.

Non-blocking synchronization

The main problem with mutual-exclusion synchronization is the performance cost of blocking and waking threads, so it is also called blocking synchronization (Blocking Synchronization). In its approach to the problem, mutual-exclusion synchronization is a pessimistic concurrency strategy: it assumes that unless correct synchronization measures (such as locking) are taken, problems are bound to occur, so it must lock regardless of whether the shared data is actually contended (conceptually speaking; in practice the virtual machine optimizes away a large share of unnecessary locking), perform user-mode/kernel-mode transitions, maintain lock counters, and check whether blocked threads need to be woken. With the development of hardware instruction sets, we have another option: an optimistic concurrency strategy based on conflict detection. In plain terms, the operation is performed first; if no other thread contends for the shared data, the operation succeeds, and if there is contention and a conflict occurs, some other compensating measure is taken (most commonly, retrying until success). Many implementations of this optimistic strategy do not need to suspend threads, so this kind of synchronization is called non-blocking synchronization (Non-Blocking Synchronization).

Why does an optimistic concurrency strategy require "the development of the hardware instruction set"? Because the operation and the conflict detection must together be atomic. How can that be guaranteed? Using mutual-exclusion synchronization to guarantee it would defeat the purpose, so we can only rely on hardware: the hardware guarantees that a behavior that semantically requires multiple operations can be completed by a single processor instruction. Commonly used instructions of this kind are:

  1. Test-and-set (Test-and-Set)
  2. Fetch-and-increment (Fetch-and-Increment)
  3. Swap (Swap)
  4. Compare-and-swap (Compare-and-Swap, hereinafter CAS) - a high-frequency interview topic
  5. Load-linked/store-conditional (Load-Linked/Store-Conditional, hereinafter LL/SC)

Of these, the first three are processor instructions that already existed in most instruction sets in the 20th century; the latter two were added by modern processors, and their purpose and function are similar. In the IA64 and x86 instruction sets, the cmpxchg instruction provides the CAS function, and SPARC-TSO has the casa instruction; on the ARM and PowerPC architectures, a pair of ldrex/strex instructions is needed to accomplish the equivalent LL/SC function.

The CAS instruction takes 3 operands: the memory location (in Java, simply think of it as a variable's memory address, denoted V), the old expected value (denoted A), and the new value (denoted B). When CAS executes, the processor updates V with the new value B if and only if the value at V equals the expected value A; otherwise no update is performed. In either case the old value of V is returned, and the whole of this processing is one atomic operation.

Only after JDK 1.5 can Java programs use the CAS operation, which is wrapped by several methods of the sun.misc.Unsafe class such as compareAndSwapInt() and compareAndSwapLong(). The virtual machine treats these methods specially: the JIT-compiled result is a platform-specific processor CAS instruction with no method-call overhead, or you can think of them as unconditionally inlined.

Since Unsafe is not a class intended for user programs to call (the code of Unsafe.getUnsafe() restricts access to classes loaded by the Bootstrap ClassLoader), unless we use reflection we can only use it indirectly through other Java APIs, for example the integer atomic classes in the JUC package, whose methods such as compareAndSet() and getAndIncrement() use the CAS operations of the Unsafe class.

To deepen the impression, let's look at how AtomicInteger#incrementAndGet is implemented:
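A sketch of the relevant JDK 8 sources (simplified excerpts; the real code lives in java.util.concurrent.atomic.AtomicInteger and sun.misc.Unsafe):

// AtomicInteger (JDK 8, simplified)
public final int incrementAndGet() {
    return unsafe.getAndAddInt(this, valueOffset, 1) + 1;
}

// sun.misc.Unsafe (JDK 8, simplified): the classic CAS retry loop
public final int getAndAddInt(Object o, long offset, int delta) {
    int v;
    do {
        v = getIntVolatile(o, offset);                       // read the current value
    } while (!compareAndSwapInt(o, offset, v, v + delta));   // retry on contention
    return v;
}

If compareAndSwapInt fails because another thread changed the value concurrently, the loop simply re-reads and retries until the CAS succeeds.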

Although CAS looks elegant, this operation obviously cannot cover every scenario of mutual-exclusion synchronization, and CAS is not semantically perfect. It has a logical loophole: if a variable V was A when first read and is still A when we are about to assign to it, can we conclude that no other thread has changed it in the meantime? If its value was changed to B during that period and later changed back to A, the CAS operation will mistakenly believe it was never changed. This loophole is known as the "ABA" problem of CAS operations. To solve it, the JUC package provides a stamped atomic reference class, AtomicStampedReference, which guarantees the correctness of CAS by versioning the variable's value. At present, however, this class is rather marginal: in most cases the ABA problem does not affect the correctness of concurrent programs, and if you do need to solve it, switching to traditional mutual-exclusion synchronization may be more efficient than using atomic classes.
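A minimal sketch of AtomicStampedReference: every update must also present the expected stamp, so an A→B→A change is still detected:

import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static void main(String[] args) {
        AtomicStampedReference<Integer> ref =
                new AtomicStampedReference<>(100, 0); // initial value 100, stamp 0

        int[] stampHolder = new int[1];
        Integer current = ref.get(stampHolder);      // reads value and stamp together

        // Succeeds only if BOTH the value and the stamp still match
        boolean ok = ref.compareAndSet(current, 101, stampHolder[0], stampHolder[0] + 1);
        System.out.println("updated: " + ok + ", value: " + ref.getReference());
    }
}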

No-synchronization schemes

Thread safety does not have to be guaranteed by synchronization; there is no causal relationship between the two. Synchronization is only a means of ensuring correctness when shared data is contended. If a method does not involve shared data at all, it naturally needs no synchronization measures to be correct, so some code is inherently thread-safe. Two such categories are briefly introduced here:

  1. Reentrant code (Reentrant Code): this kind of code, also called pure code (Pure Code), can be interrupted at any point of its execution to run another piece of code (including a recursive call to itself), and after control returns, the original program contains no errors. Reentrancy is a more fundamental property than thread safety: all reentrant code is thread-safe, but not all thread-safe code is reentrant. Reentrant code has some common characteristics: it does not depend on data stored on the heap or on shared system resources, all the state it uses is passed in as parameters, and it does not call non-reentrant methods. There is a simple principle for judging reentrancy: if a method's result is predictable, i.e. the same inputs always produce the same result, then it is reentrant and, of course, thread-safe.
  2. Thread-local storage (Thread Local Storage): if the data needed by a piece of code must be shared with other code, check whether the code that shares the data is guaranteed to execute in the same thread. If so, we can limit the visibility of the shared data to that one thread, so that no synchronization is needed to avoid data contention between threads (a classic example is using ThreadLocal to guarantee thread safety in web servers; see the sketch below).
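A minimal sketch of thread-local storage with ThreadLocal (the per-thread context here is just an illustrative example):

public class ThreadLocalDemo {
    // Each thread gets its own independent copy of the value
    private static final ThreadLocal<StringBuilder> CONTEXT =
            ThreadLocal.withInitial(StringBuilder::new);

    public static void main(String[] args) {
        Runnable task = () -> {
            CONTEXT.get().append(Thread.currentThread().getName());
            System.out.println(CONTEXT.get()); // only this thread's data is visible
            CONTEXT.remove(); // good hygiene, especially in thread pools
        };
        new Thread(task, "thread-A").start();
        new Thread(task, "thread-B").start();
    }
}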

I originally wanted to explain these ideas in my own words, but after several attempts I found no way to do so concisely, so the content of this article is excerpted from the book "In-depth Understanding of the Java Virtual Machine", with what I personally consider trivial and irrelevant content removed and the important places highlighted. I hope you can learn something from this article.


Origin blog.csdn.net/a_ittle_pan/article/details/126211064