Java High-Concurrency Programming in Practice, Part 1: The Locks We Learned Over the Years

1. Processes and threads

A program itself is static: it is the product of many lines of code, stored in files. For a program to run, it must be loaded into memory and translated into a form the computer can execute.
If you want to start a Java program, you must first create a JVM process.
A process is the smallest unit of resource allocation by the operating system, and multiple threads can be created within a process. Each thread has its own local variables, thread stack, and program counter, while all of them can access the process's shared resources.

  1. A process is the smallest unit of resource allocation by the operating system, and a thread is the smallest unit of CPU scheduling;
  2. A process can contain multiple threads;
  3. Processes are relatively independent of each other, while threads within a process are not completely independent: they share the process's heap memory, method-area memory, system resources, and so on;
  4. Process context switching is much slower than thread context switching;
  5. An exception in one process does not affect other processes, but an exception in one thread may affect other threads in the same process;


2. Thread group and thread pool

1. Thread group

A thread group can manage multiple threads. As the name suggests, a thread group puts threads with similar functions into one group for easy management.

package com.guor.test;

public class ThreadGroupTest {

    public static void main(String[] args) {
        // Create a thread group
        ThreadGroup threadGroup = new ThreadGroup("nezha");

        Thread thread = new Thread(threadGroup, () -> {
            // Name of the thread group
            String groupName = Thread.currentThread().getThreadGroup().getName();
            // Name of the thread
            String threadName = Thread.currentThread().getName();
            System.out.println("groupName -- " + groupName); // groupName -- nezha
            System.out.println("threadName -- " + threadName); // threadName -- thread
        }, "thread");

        thread.start();
    }
}

2. What is the difference between a thread group and a thread pool?

  1. Threads in a thread group can modify each other's data across threads, but threads in different thread groups cannot;
  2. A thread pool creates a fixed number of threads and processes tasks in batches; after a thread finishes its current task it can take on another, and reusing existing threads reduces the overhead of thread creation and destruction;
  3. A thread pool can effectively manage the number of threads and avoid unlimited thread creation. Threads consume a lot of system resources, so unbounded creation risks OOM and excessive CPU context switching. Thread pools also offer powerful extensions, such as scheduled (delayed) thread pools.
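The thread-reuse idea above can be sketched with the standard `java.util.concurrent` API (the class name `ThreadPoolTest` is just for illustration): a fixed pool of two threads is reused to run ten tasks.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ThreadPoolTest {
    public static void main(String[] args) throws InterruptedException {
        // A fixed pool of 2 threads is reused to process 10 tasks
        ExecutorService pool = Executors.newFixedThreadPool(2);
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < 10; i++) {
            pool.execute(completed::incrementAndGet);
        }
        // Stop accepting new tasks and wait for the queued ones to finish
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("completed = " + completed.get()); // completed = 10
    }
}
```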

3. User thread and daemon thread

There are two types of threads in Java: user threads (User Thread) and daemon threads (Daemon Thread).

User threads are the most common kind; for example, the thread started for the main method is a user thread.

The role of a daemon thread is to provide services that support the running of other threads. The most typical daemon thread is the GC (garbage collector), a very dedicated guardian.

The garbage collection and JIT compiler threads in the JVM are the most common daemon threads.

Daemon threads keep running as long as at least one user thread is still running; they exit only when all user threads have finished.

When writing code, you can also call thread.setDaemon(true) to mark a thread as a daemon thread.

Thread daemonThread = new Thread();

// Mark daemonThread as a daemon thread (the default is false)
daemonThread.setDaemon(true);

// Check whether the thread is a daemon thread; returns true if it is
daemonThread.isDaemon();

Notes on daemon threads:

  1. thread.setDaemon(true) must be called before thread.start(), otherwise an IllegalThreadStateException is thrown; you cannot turn a thread that is already running into a daemon thread;
  2. A new thread spawned inside a daemon thread is also a daemon thread;
  3. Do not put read/write operations or computation logic into daemon threads: they can be terminated abruptly as soon as the last user thread exits.

4. Parallelism and Concurrency

Parallelism means that while one core of a multi-core CPU executes one thread, the other cores can execute other threads at the same moment; the threads do not preempt each other's CPU resources and genuinely run simultaneously.

Concurrency means that a CPU handles multiple threads within a period of time. The threads preempt CPU resources, which are switched back and forth among them in time slices: multiple threads run "at the same time" over a period, but not at the same instant.

What is the difference between parallel and concurrent?

  1. Parallelism means multiple threads are running at every instant within a period of time; concurrency means multiple threads run within the same period but not at the same instant, executing interleaved;
  2. Parallel threads do not preempt system resources from each other, while concurrent threads do;
  3. Parallelism is a product of multiple CPUs (or cores); a single-core CPU has only concurrency, never parallelism.


5. Pessimistic lock and optimistic lock

1. Pessimistic lock

With a pessimistic lock, once a thread locks an object, the object becomes exclusive to that thread, and all other threads are blocked by the lock and cannot operate on the object.

Disadvantages of pessimistic locking:

  1. After a thread acquires a pessimistic lock, all other threads must block;
  2. When threads are switched, locks must be released and re-acquired repeatedly, which is expensive;
  3. When a low-priority thread holds a pessimistic lock, a high-priority thread must wait for it, causing priority inversion. The synchronized lock is a typical pessimistic lock.

2. Optimistic locking

Optimistic locking assumes that operations on an object will not conflict, so it does not lock on each operation; instead it checks for a conflict only when the final change is committed. If there is a conflict, it retries until it succeeds; this retry process is called spinning. Optimistic locking does not actually take a lock, but it introduces problems of its own, such as ABA and excessive spinning.

Optimistic locking generally uses a version-number mechanism: read the data's version number first, then compare it when writing. If the version is unchanged, update the data; otherwise re-read and retry until the versions match.

Optimistic locking in Java is implemented based on CAS spin.

6. CAS

1. What is CAS?

Compare And Swap.

CAS(V, A, B): memory value V, expected value A, new value B. If V equals A, B is written into V; otherwise nothing is written (and the operation can be retried).
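The check-then-write semantics can be seen with `AtomicInteger.compareAndSet`, which is backed by a hardware CAS instruction (the class name `CasTest` is illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasTest {
    public static void main(String[] args) {
        AtomicInteger v = new AtomicInteger(5);
        // Expected value A = 5 matches memory value V, so B = 10 is written
        boolean swapped = v.compareAndSet(5, 10);
        System.out.println(swapped + ", " + v.get()); // true, 10
        // Expected value 5 no longer matches (V is now 10), so nothing is written
        swapped = v.compareAndSet(5, 20);
        System.out.println(swapped + ", " + v.get()); // false, 10
    }
}
```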


2. Problems brought by CAS

(1) ABA problem

The flow of a CAS operation is:

  1. Read the original value.
  2. Compare and swap via an atomic instruction.

Although the compare-and-swap itself is atomic, the pair of steps (reading the original value, then comparing and swapping) is not: between them another thread may change the value from A to B and back to A. The later CAS then succeeds even though the value was modified in between; this is the ABA problem.

Sometimes the ABA problem is harmless to a system, but sometimes it can be fatal.

The solution to the ABA problem is to attach a version number to the variable and update it on every modification. The JUC package provides the class AtomicStampedReference, which maintains a stamp (version number) that changes each time the value is modified.
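A small sketch of how `AtomicStampedReference` defeats ABA (the values and stamps here are arbitrary):

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaTest {
    public static void main(String[] args) {
        AtomicStampedReference<Integer> ref = new AtomicStampedReference<>(100, 0);
        int stamp = ref.getStamp(); // remember stamp 0

        // Another thread performs A -> B -> A: the value is back to 100,
        // but the stamp has advanced to 2
        ref.compareAndSet(100, 200, 0, 1);
        ref.compareAndSet(200, 100, 1, 2);

        // CAS with the stale stamp fails even though the value matches
        boolean ok = ref.compareAndSet(100, 300, stamp, stamp + 1);
        System.out.println(ok); // false
    }
}
```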

(2) Too many spins

When a CAS operation fails, it re-reads the memory value and retries (spins). Under very high concurrency, the value may have changed again after every fresh read, so the CAS keeps failing and spinning. In that case CAS does not improve efficiency; because of the excessive spinning it can be even less efficient than simply locking.

(3) Only the atomicity of one variable can be guaranteed

When operating on a single variable, CAS can guarantee atomicity, but when operating on multiple variables at once, CAS is powerless.

The variables can be wrapped into a single object and the CAS performed on that object, or you can simply fall back to locking.
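A sketch of the wrap-into-one-object approach, using `AtomicReference` over a hypothetical immutable `Point` holder (both class names are illustrative):

```java
import java.util.concurrent.atomic.AtomicReference;

public class MultiVarCasTest {
    // Hypothetical immutable holder: both fields are replaced together in one CAS
    static final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    public static void main(String[] args) {
        AtomicReference<Point> ref = new AtomicReference<>(new Point(1, 2));
        Point old = ref.get();
        // One CAS swaps the whole object, so x and y change atomically
        boolean ok = ref.compareAndSet(old, new Point(old.x + 1, old.y + 1));
        System.out.println(ok + ", x=" + ref.get().x + ", y=" + ref.get().y); // true, x=2, y=3
    }
}
```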

7. The locks learned in those years

1. Fair lock and unfair lock

  • Fair lock: threads acquire the lock in their queueing order; first come, first served.
  • Unfair lock: a thread that wants the lock simply tries to grab it, regardless of queue order; whoever grabs it gets it.
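For illustration, `ReentrantLock` supports both policies: it is unfair by default, and its constructor takes a fairness flag.

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessTest {
    public static void main(String[] args) {
        // ReentrantLock is unfair by default; pass true for a fair lock
        ReentrantLock unfair = new ReentrantLock();
        ReentrantLock fair = new ReentrantLock(true);
        System.out.println(unfair.isFair()); // false
        System.out.println(fair.isFair());   // true
    }
}
```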

2. Exclusive locks and shared locks

  • Exclusive lock: when multiple threads compete for the lock, only one thread can hold it at a time, whether for reading or writing; the other threads block and wait.
  • Shared lock: allows multiple threads to acquire the shared resource at the same time, in the spirit of optimistic locking. A shared (read) lock excludes writers but not other readers: read-read can proceed concurrently, while read-write and write-write are mutually exclusive.
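The read lock of `ReentrantReadWriteLock` is a typical shared lock; a minimal sketch showing that a held read lock blocks the write lock:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteLockTest {
    public static void main(String[] args) {
        ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
        rw.readLock().lock();
        // The read (shared) lock may be held by many readers at once,
        // but while any reader holds it, the write lock cannot be acquired
        boolean gotWrite = rw.writeLock().tryLock();
        System.out.println(gotWrite); // false
        rw.readLock().unlock();
        System.out.println(rw.writeLock().tryLock()); // true
    }
}
```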

3. Reentrant locks and non-reentrant locks

  • Reentrant lock: A thread can occupy the same lock multiple times, but when unlocking, it needs to perform the same number of unlocking operations;
  • Non-reentrant locks: a thread cannot occupy the same lock multiple times;
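Both `synchronized` and `ReentrantLock` are reentrant; a sketch using `ReentrantLock.getHoldCount` to show the same thread locking several times and unlocking the same number of times (class and method names are illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

public class ReentrantTest {
    static final ReentrantLock lock = new ReentrantLock();

    // The same thread may lock again while already holding the lock;
    // each lock() must be paired with an unlock()
    static int countdown(int n) {
        lock.lock();
        try {
            return n == 0 ? lock.getHoldCount() : countdown(n - 1);
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        System.out.println(countdown(2)); // 3 (the lock is held 3 times at the deepest call)
    }
}
```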

8. Deadlock, livelock, and starvation

1. Deadlock

Multiple threads each hold a resource the others need, so the threads wait for each other indefinitely and none can continue with its subsequent work.

2. Four necessary conditions for deadlock

  1. Mutual exclusion: an allocated resource is used exclusively, so a resource is occupied by only one process at a time; any other process that requests it must wait until the occupying process releases it;
  2. Hold and wait: a process already holds at least one resource but requests a new one that is occupied by another process; the requester blocks while keeping the resources it already holds;
  3. No preemption: resources a process has obtained cannot be taken away before it finishes using them; the process releases them itself when done;
  4. Circular wait: each process waits for a resource held by the next, forming a closed loop.
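One common way to break the circular-wait condition is to acquire locks in a fixed global order; a minimal sketch (class and lock names are illustrative):

```java
public class LockOrderingTest {
    static final Object lockA = new Object();
    static final Object lockB = new Object();

    // Every thread takes lockA before lockB, so no circular wait can form
    static void transfer() {
        synchronized (lockA) {
            synchronized (lockB) {
                // critical section touching both resources
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(LockOrderingTest::transfer);
        Thread t2 = new Thread(LockOrderingTest::transfer);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("done"); // completes: circular wait is impossible
    }
}
```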

3. Starvation

Starvation refers to the inability of a thread to continue executing because it cannot acquire the resources it needs.

4. The main causes of starvation

  1. High-priority threads keep grabbing resources, so low-priority threads never get any;
  2. A thread holds a resource without releasing it, so other threads can never obtain it;

5. How to avoid starvation

  1. Allocate resources using fair locks;
  2. Allocate sufficient system resources for the program;
  3. Avoid the thread holding the lock from occupying the lock for a long time;

6. Livelock

Livelock is the phenomenon in which multiple threads contending for the same resource each actively yield it to the others, so the resource bounces back and forth among the threads and none of them can actually obtain it and continue executing.

7. How to avoid livelock

Each thread can wait a random period of time before trying to grab the resource again. This greatly reduces the number of collisions between threads and effectively avoids livelock.
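A sketch of random backoff with `ReentrantLock.tryLock` (the iteration counts and sleep bound are arbitrary): on a failed acquisition, each thread sleeps for a random interval before retrying, so competing threads stop colliding in lockstep.

```java
import java.util.Random;
import java.util.concurrent.locks.ReentrantLock;

public class BackoffTest {
    static final ReentrantLock lock = new ReentrantLock();
    static int counter = 0;

    // On a failed tryLock, sleep for a random interval before retrying
    static void incrementWithBackoff(Random random) throws InterruptedException {
        while (true) {
            if (lock.tryLock()) {
                try {
                    counter++;
                    return;
                } finally {
                    lock.unlock();
                }
            }
            Thread.sleep(random.nextInt(10));
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            Random random = new Random();
            try {
                for (int i = 0; i < 100; i++) {
                    incrementWithBackoff(random);
                }
            } catch (InterruptedException ignored) {
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("counter = " + counter); // counter = 200
    }
}
```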

9. What is the lock upgrade mechanism in multi-threading?

There are four lock states in total: no lock, biased lock, lightweight lock, and heavyweight lock.

As lock contention intensifies, a lock can be upgraded from a biased lock to a lightweight lock to a heavyweight lock. The upgrade is one-way: a lock can only be upgraded, never downgraded.

1. No lock

Without locking the resource, all threads can access and modify the same resource, but only one thread can successfully modify the resource at the same time, and other threads that fail to modify will continue to retry until the modification is successful.

Lock-free always assumes that there is no conflict in access to shared resources, and threads can execute continuously without locking or waiting. Once a conflict is found, the lock-free strategy uses a technology called CAS to ensure the security of thread execution. CAS is the key to lock-free technology.

2. Bias lock

A biased lock applies when the synchronized code is always executed by the same thread and there is no multi-thread contention: the owning thread automatically reacquires the lock on subsequent executions, reducing the overhead of lock acquisition. A biased lock is biased toward the first thread that locks it; that thread does not actively release the bias, which is revoked only when another thread tries to compete for the lock.

Revoking a biased lock must wait for a moment when no bytecode is executing (a safepoint): the thread holding the biased lock is suspended first, and the JVM then checks whether the lock object is still locked. If the owning thread is no longer active, the object header is reset to the lock-free state and the biased lock is revoked.

If the thread is still active, the lock is upgraded to a lightweight lock.

3. Lightweight lock

A lightweight lock arises when a lock that is currently a biased lock is accessed by a second thread B: the biased lock is upgraded to a lightweight lock. Thread B then tries to acquire the lock by spinning instead of blocking, which improves performance.

While there is only one waiting thread, it waits by spinning. But when the spinning exceeds a certain number of attempts, or when one thread holds the lock, a second is already spinning, and a third thread arrives, the lightweight lock is upgraded to a heavyweight lock.

Note: What is spin?

A spinlock means that when a thread tries to acquire a lock that is already held by another thread, it waits in a loop, repeatedly checking whether the lock can now be acquired, and leaves the loop only once it succeeds.
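A minimal spinlock sketch built on `AtomicBoolean` CAS (for illustration only; real spinlocks add backoff and are rarely hand-rolled in Java):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLockTest {
    static final AtomicBoolean locked = new AtomicBoolean(false);
    static int counter = 0;

    static void lock() {
        // Loop (spin) until the CAS from false to true succeeds
        while (!locked.compareAndSet(false, true)) { }
    }

    static void unlock() {
        locked.set(false);
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                lock();
                try {
                    counter++; // protected by the spinlock
                } finally {
                    unlock();
                }
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter); // 2000
    }
}
```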

4. Heavyweight lock

A heavyweight lock means that while one thread holds the lock, all other threads waiting to acquire it are blocked.

The heavyweight lock is implemented by the monitor inside the object, and the monitor in turn relies on the Mutex Lock of the underlying operating system. Switching between threads then requires the OS to transition from user mode to kernel mode, which is very expensive.

5. Comparison of lock states

| | Biased lock | Lightweight lock | Heavyweight lock |
|---|---|---|---|
| Applicable scenario | Only one thread enters the synchronized block | Multiple threads, but no contention: they enter the block at staggered times and do not compete for the lock | Lock contention occurs: multiple threads enter the synchronized block and compete for the lock |
| Essence | Cancels the synchronization operation | CAS operations instead of mutual exclusion | Mutual exclusion (blocking) synchronization |
| Advantages | No blocking, high efficiency (a CAS operation is needed only on first acquisition; afterwards only the ThreadId is compared) | Does not block threads | Does not consume CPU while waiting |
| Disadvantages | Very limited applicability; if contention occurs, biased-lock revocation adds extra cost | Consumes CPU if the lock cannot be acquired for a long time | Blocking and context switching; heavyweight operations that consume OS resources |

6. Lock elimination

Lock elimination is another lock optimization in the JVM, and a more thorough one. During JIT compilation (which can be roughly understood as compiling a piece of code just before it is executed for the first time, hence "just-in-time" compilation), the JVM scans the running context and removes locks that cannot possibly contend for a shared resource. Eliminating such unnecessary locks saves the time spent on meaningless lock requests. For example, StringBuffer's append is a synchronized method, but a StringBuffer that is a local variable of a method cannot be used by other threads, so no contention on it is possible, and the JVM automatically eliminates its lock.
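A sketch of the StringBuffer pattern from the paragraph above (class and method names are illustrative); whether the JIT actually elides the lock depends on escape analysis at runtime, but the result is the same either way:

```java
public class LockElisionTest {
    // sb is a local variable that never escapes this method, so the JIT
    // can elide the synchronization inside StringBuffer.append
    static String concat(String a, String b) {
        StringBuffer sb = new StringBuffer();
        sb.append(a);
        sb.append(b);
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(concat("hello, ", "world")); // hello, world
    }
}
```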

10. Java multi-threading mind map


Nezha boutique series of articles:

Summary of Java learning routes, brick movers counterattack Java architects

Summary of 100,000 words and 208 Java classic interview questions (with answers)

21 Tips for SQL Performance Optimization

Java Basic Tutorial Series

Spring Boot advanced practice

Origin blog.csdn.net/guorui_java/article/details/126727119