Concurrency in Java (1): The Right Way to Approach Concurrent Programming

Why use concurrent programming

CPUs have developed rapidly: 4-core, 8-core, and even 16-core processors are now common. In the single-core era, every thread had to compete for the one CPU to get a chance to run. On a multi-core CPU, a single thread can no longer make full use of all the cores. At the same time, the digital era has raised users' performance expectations, and traditional single-threaded applications are gradually being phased out. Running work on multiple threads concurrently is how an application squeezes the most out of the CPU's computing power, and that is an important reason to learn concurrent programming.

Concurrent programming also has disadvantages

Frequent context switches

Even a single-core processor can run multi-threaded code: the CPU achieves concurrent execution by allocating a time slice to each thread. When task A's time slice runs out, the CPU switches to the next task. At that moment task A's state must be saved, and it must be restored the next time task A runs. One such save-and-restore cycle is called a context switch. Frequent context switching consumes a lot of system resources!

Ways to reduce context switching

  1. Lock-free concurrent programming: borrow the lock-segmentation idea of ConcurrentHashMap, where each bucket corresponds to a segment of the data and different threads process different segments. Under multi-threaded contention, this reduces the time spent on context switching.
  2. CAS algorithm: the classes in Java's java.util.concurrent.atomic package use the CAS algorithm to update data without locking.
  3. Use as few threads as possible: avoid creating unnecessary threads, so you don't end up with a large number of threads stuck in a waiting state.
  4. Coroutines: schedule multiple tasks within a single thread, switching between them without involving the OS scheduler.
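Point 2 can be sketched with java.util.concurrent.atomic.AtomicInteger, whose incrementAndGet() is implemented as a CAS retry loop rather than a lock. The CasCounter class below is illustrative, not from the article:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative counter showing lock-free updates via CAS.
public class CasCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    // incrementAndGet() loops internally: read the current value, attempt
    // compareAndSet(old, old + 1), and retry on failure -- no lock is taken.
    public int increment() {
        return value.incrementAndGet();
    }

    public int get() {
        return value.get();
    }

    public static void main(String[] args) throws InterruptedException {
        CasCounter counter = new CasCounter();
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) counter.increment();
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        // Always 4000: CAS never loses an update, even under contention.
        System.out.println(counter.get());
    }
}
```

Because failed CAS attempts simply retry instead of parking the thread, no context switch is forced by contention on the counter.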

Thread safety issues

In a concurrent environment, multithreading can introduce safety problems such as:

  1. Thread deadlock: threads wait for each other to release the resources they hold.
  2. Thread starvation: a thread never gets a CPU time slice and remains in a waiting state forever.

public class DeadLockDemo {
    private static String resource_a = "A";
    private static String resource_b = "B";

    public static void main(String[] args) {
        deadLock();
    }

    public static void deadLock() {
        Thread threadA = new Thread(() -> {
            synchronized (resource_a) {
                System.out.println("get resource a");
                try {
                    Thread.sleep(3000); // give threadB time to lock resource_b
                    synchronized (resource_b) {
                        System.out.println("get resource b");
                    }
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        });
        Thread threadB = new Thread(() -> {
            synchronized (resource_b) {
                System.out.println("get resource b");
                synchronized (resource_a) { // blocks forever: threadA holds resource_a
                    System.out.println("get resource a");
                }
            }
        });
        threadA.start();
        threadB.start();
    }
}

The code above demonstrates a deadlock. Using jps to find the process ID and jstack to dump the application's thread states, you can see that each thread is waiting for the resource the other holds while refusing to release its own, so they wait on each other forever.

Several common methods to avoid deadlock:

  1. Avoid having one thread acquire multiple locks at the same time.
  2. Number the resources, and require threads to acquire locks only in numeric order.
  3. Put a timeout on lock acquisition, so that a thread never waits for a lock forever.
  4. Banker's algorithm: before granting a lock, determine whether doing so could lead to deadlock; if it could, reject the request.
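Method 3 can be sketched with ReentrantLock.tryLock(timeout, unit): the thread backs off after a bounded wait instead of blocking forever. The class, lock names, and transfer method below are illustrative assumptions, not part of the article:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of deadlock avoidance via timed lock acquisition.
public class TryLockDemo {
    private static final ReentrantLock lockA = new ReentrantLock();
    private static final ReentrantLock lockB = new ReentrantLock();

    // Returns true if both locks were acquired and the work completed.
    static boolean transfer() throws InterruptedException {
        if (lockA.tryLock(100, TimeUnit.MILLISECONDS)) {
            try {
                if (lockB.tryLock(100, TimeUnit.MILLISECONDS)) {
                    try {
                        return true; // both locks held: safe to touch both resources
                    } finally {
                        lockB.unlock();
                    }
                }
            } finally {
                lockA.unlock();
            }
        }
        // Timed out: we released whatever we held, so no deadlock is possible;
        // the caller can retry (ideally with a random backoff).
        return false;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(transfer()); // prints "true" when uncontended
    }
}
```

Unlike the DeadLockDemo above, a thread that fails to get the second lock releases the first one, so neither thread can hold one resource while waiting forever on the other.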

Concepts to understand before learning concurrent programming

Synchronous and asynchronous

Synchronous: after method A calls method B, A must wait for B to finish executing before it can continue.

Asynchronous: after method A calls method B, A can continue with its own business logic without waiting for B to finish.

For example, when shopping in a physical store, if an item is out of stock you have to wait while the staff fetch it from the warehouse; only after they hand you the goods can you go on to pay at the cashier. An asynchronous call is more like online shopping: after you place the order, you don't have to wait around at all and can get on with whatever you want; when the goods arrive, you receive a notification to pick them up.
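A minimal sketch of the two call styles, using CompletableFuture for the asynchronous case. The class name and fetchGoods method are made up for illustration:

```java
import java.util.concurrent.CompletableFuture;

public class SyncAsyncDemo {
    // Stands in for a slow operation, e.g. fetching goods from a warehouse.
    static String fetchGoods() {
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "goods";
    }

    public static void main(String[] args) {
        // Synchronous: the caller blocks until fetchGoods() returns.
        String goods = fetchGoods();
        System.out.println("sync got " + goods);

        // Asynchronous: fetchGoods() runs on another thread; the caller
        // registers a callback and continues immediately.
        CompletableFuture<String> future =
                CompletableFuture.supplyAsync(SyncAsyncDemo::fetchGoods);
        System.out.println("order placed, doing other work...");
        future.thenAccept(g -> System.out.println("notified: " + g));
        future.join(); // only so this demo JVM waits before exiting
    }
}
```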

Concurrency and parallelism

Concurrency: multiple threads take turns executing; at any single instant, only one thread is actually running.

Parallelism: multiple threads truly execute at the same time, each assigned to its own CPU core.

For example, on a single-core CPU there is no parallel thread execution, only concurrent execution: because all threads must share one CPU, the CPU keeps switching thread contexts so that different threads get to run (exhausting work). On a multi-core CPU, multiple threads can be assigned to different cores and genuinely execute at the same time.

Blocking and non-blocking

Blocking: if thread A holds resource X, then when thread B requests to operate on X it must wait for A to release it. That waiting is called blocking.

Non-blocking: in the same situation, thread B does not need to wait for thread A to release X; multiple threads can attempt to access X freely.
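The difference can be sketched with ReentrantLock: lock() blocks until the lock is free, while tryLock() returns immediately with a result. The class and helper names below are illustrative:

```java
import java.util.concurrent.locks.ReentrantLock;

public class NonBlockingDemo {
    // Reports what another thread sees when it tries the lock without waiting.
    static boolean tryFromOtherThread(ReentrantLock lock) throws InterruptedException {
        final boolean[] acquired = new boolean[1];
        Thread t = new Thread(() -> {
            acquired[0] = lock.tryLock(); // non-blocking: returns at once
            if (acquired[0]) lock.unlock();
        });
        t.start();
        t.join();
        return acquired[0];
    }

    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        lock.lock(); // main thread holds the lock
        // A blocking call (lock.lock()) in another thread would wait here;
        // the non-blocking tryLock() just reports failure instead.
        System.out.println("acquired? " + tryFromOtherThread(lock)); // acquired? false
        lock.unlock();
        System.out.println("acquired? " + tryFromOtherThread(lock)); // acquired? true
    }
}
```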

Critical region

A critical section is a shared resource or piece of shared data that multiple threads can access. Once one thread occupies the critical section, the other threads that want to modify it must wait.

For example, Java's CopyOnWriteArrayList does not lock for reads; when data is added, removed, or updated, it takes a lock and copies the entire underlying array. Other writers must wait for the writing thread to finish, while readers continue to see the old snapshot without blocking.
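A minimal sketch of this copy-on-write behavior (the class and method names are my own):

```java
import java.util.Iterator;
import java.util.concurrent.CopyOnWriteArrayList;

public class CowDemo {
    // Returns {elements seen by a pre-write iterator, final list size}.
    static int[] snapshotVsCurrent() {
        CopyOnWriteArrayList<String> list = new CopyOnWriteArrayList<>();
        list.add("a");
        list.add("b");

        Iterator<String> it = list.iterator(); // snapshot of ["a", "b"]
        list.add("c"); // write copies the array; the iterator is unaffected

        int seen = 0;
        while (it.hasNext()) {
            it.next();
            seen++;
        }
        return new int[] { seen, list.size() };
    }

    public static void main(String[] args) {
        int[] r = snapshotVsCurrent();
        // The iterator saw 2 elements even though the list now has 3:
        // readers are never blocked by (or exposed to) in-progress writes.
        System.out.println(r[0] + " vs " + r[1]); // prints "2 vs 3"
    }
}
```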

Summary

These are important concepts you must understand before diving into concurrent programming, and mastering them will help a lot with what comes next! I will keep publishing articles in this concurrency series and hope to hear from you. If this article helped you even a little, your like will make my day! Thank you for reading!

Standing on the shoulders of giants:

"The Art of Concurrent Programming in Java"

github.com/CL0610/Java…

Origin juejin.im/post/5e9f8994e51d4546fc797628