JUC Lecture 4: Locks in Java

Java provides a wide variety of locks, each with different characteristics and each highly efficient in the right scenario. This article, the fourth lecture in the JUC series, walks through lock-related source code (the source in this article comes from JDK 8 and Netty 3.10.6) and usage scenarios, and introduces the main lock types and where each applies.

Preface

In Java, locks are often categorized by whether they have a certain property. We group locks by these properties and then introduce them through comparison, which makes the material quicker to absorb. The following is the overall organization of this article:

(figure: overall classification of the locks covered in this article)

Classification of locks

  • Locks can be divided into optimistic locks and pessimistic locks from the perspective of optimism and pessimism.

    • Optimistic locking

      • Optimistic locking takes an optimistic view of data access: every read assumes no other thread will modify the data, so no lock is taken. Only at update time does it check whether another thread changed the data in the meantime, typically via a version number: read the current version along with the data, and at write time compare the version read earlier with the current one. If they match, perform the update; if not, repeat the read-compare-write cycle. Optimistic locking in Java is mostly implemented with CAS (Compare And Swap), an atomic operation that first compares the current value with the expected value: if they match it performs the update, otherwise it does nothing and returns a failure status.
      • In Java, the atomic classes under the java.util.concurrent.atomic package are implemented with CAS, one realization of optimistic locking.
    • Pessimistic locking

      • A pessimistic lock assumes the worst about the concurrent environment: any conflict will break consistency, so conflicts must be ruled out entirely with exclusive locks.
      • An optimistic lock (with finer lock granularity) assumes the best: conflicts may occur, but they can be detected and do no damage, so no protection is added up front; when a conflict is detected, the operation gives up and retries.
      • Row locks, table locks, read locks, write locks, and the like are all taken before the operation. Exclusive locks such as synchronized and ReentrantLock in Java implement the pessimistic idea.
    • Applicable scenarios

      • Optimistic locking suits read-heavy scenarios with relatively few writes, i.e. when conflicts are rare: it saves the cost of locking and raises overall system throughput.
      • In write-heavy scenarios conflicts are frequent, forcing the upper layer to retry over and over, which actually degrades performance; pessimistic locking is generally the better fit there.
  • From the perspective of fairness in acquiring resources, it can be divided into fair locks and unfair locks.

    • Fair lock - refers to multiple threads acquiring locks in the order in which they apply for locks, similar to queuing up for dinner, first come, first served.

      • In a concurrent environment, each thread acquiring the lock first checks the wait queue the lock maintains. If the queue is empty, or the current thread is at its head, the thread takes the lock; otherwise it joins the queue and is later dequeued in FIFO order.
    • An unfair lock does not hand out the lock in request order: a thread that asks later may acquire the lock before an earlier requester. Under high concurrency this can cause priority inversion or starvation.

      • An unfair lock is blunt: it simply tries to seize the lock outright, and only falls back to fair-lock-style queuing if the grab fails.

      • The advantage of unfair locks is that their throughput is greater than that of fair locks.

    • ReentrantLock supports fair locking; synchronized does not.

    • When creating a ReentrantLock from the concurrent package, you can pass a boolean to the constructor to choose a fair or unfair lock; the default is unfair.

  • From the perspective of whether resources are shared, they can be divided into shared locks and exclusive locks.

    • Exclusive lock : the lock can be held by only one thread at a time. Both ReentrantLock and synchronized are exclusive locks.
    • Shared lock : the lock can be held by multiple threads at once.
      • Multiple threads can safely read a resource at the same time, so to maximize concurrency, concurrent reads of a shared resource should be allowed. But while one thread writes to the shared resource, no other thread should be able to read or write it.
      • For ReentrantReadWriteLock, the read lock is a shared lock and the write lock is an exclusive lock.
      • The shared read lock makes concurrent reads very efficient, while read-write, write-read, and write-write accesses are mutually exclusive.
  • From the perspective of lock state, locks can be divided into biased locks, lightweight locks, and heavyweight locks. The JVM also cleverly uses spinning to make better use of the CPU. These locks are described in detail below.

    • Biased lock → lightweight lock → spin lock, adaptive spinning, lock elimination, lock coarsening.
    • Spin lock – JVM
      • It means that the thread trying to acquire the lock will not block immediately, but will use a loop to try to acquire the lock. The advantage of this is to reduce the consumption of thread context switching, but the disadvantage is that the loop will consume the CPU.
      • A spin lock bets that the thread holding the lock will release it within a short time, so the threads contending for the lock need not switch between kernel mode and user mode to block or suspend; they simply wait (spin) and take the lock the moment the holder releases it, avoiding the cost of kernel-mode switches. A spinning thread does occupy the CPU, so a long unsuccessful spin wastes CPU time, and a thread that never gets the lock would tie up the CPU permanently. A maximum spin wait must therefore be set: once a thread spins past that limit, it stops spinning and is suspended.
        • "In-depth Understanding of Java Virtual Machine" Second Edition

Action 1: Talk about some locks you know.

  • Fair locks and unfair locks, reentrant locks (recursive) and non-reentrant locks, exclusive locks and shared locks, mutual exclusion locks/read-write locks, optimistic locks/pessimistic locks, spin locks...

  • Then, for each type of lock, discuss the principles of and differences between synchronized and ReentrantLock.

  • From the hardware level up to the Java level, cite the differences and application scenarios, and which frameworks and source code use them.

1. Optimistic locking VS pessimistic locking

Optimistic locking and pessimistic locking are broad concepts that reflect different perspectives on thread synchronization. There are practical applications of this concept in both Java and databases.

Let’s talk about the concept first. For concurrent operations on the same data, pessimistic locks believe that other threads must modify the data when using the data, so they will first lock the data when acquiring the data to ensure that the data will not be modified by other threads. In Java, the synchronized keyword and the implementation class of Lock are both pessimistic locks.

Optimistic locks believe that no other thread will modify the data while it is in use, so they take no lock. They only check, just before updating, whether another thread has updated the data in the meantime. If not, the current thread writes its modified value successfully; if the data has been updated, a different action (such as reporting an error or automatically retrying) is taken depending on the implementation.

Optimistic locking is implemented in Java by using lock-free programming. The most commonly used is the CAS algorithm. The increment operation in the Java atomic class is implemented through CAS spin.


According to the above conceptual description we can find:

  • Pessimistic locking suits write-heavy scenarios: taking the lock first guarantees the data stays correct during writes.
  • Optimistic locking suits read-heavy scenarios: doing without the lock greatly improves read performance.

The concept is a bit abstract. Let’s take a look at examples of how to call optimistic locks and pessimistic locks:

// ------------------------- How pessimistic locks are invoked -------------------------
// synchronized
public synchronized void testMethod() {
    // operate on the shared resource
}

// ReentrantLock
private ReentrantLock lock = new ReentrantLock(); // all threads must share the same lock instance
public void modifyPublicResources() {
    lock.lock();
    try {
        // operate on the shared resource
    } finally {
        lock.unlock(); // always release in finally
    }
}

// ------------------------- How optimistic locks are invoked -------------------------
private AtomicInteger atomicInteger = new AtomicInteger(); // all threads must share the same AtomicInteger
atomicInteger.incrementAndGet(); // atomic increment by 1

Through these calling examples we can see that a pessimistic lock must be explicitly taken before the synchronized resource is touched, while an optimistic lock operates on the resource directly. So why can optimistic locking achieve correct thread synchronization without locking the resource? For details, see JUC Lecture 8: JUC Atomic Class: CAS, Unsafe and Atomic Class Detailed Explanation.
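To make the CAS retry idea concrete, here is a minimal sketch of version-number optimistic locking built on the JDK's AtomicStampedReference, which pairs a value with an integer stamp. The class and method names (VersionedCounter, add) are invented for illustration:

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class VersionedCounter {

    // value plus a version stamp; a CAS succeeds only if both still
    // match what this thread observed when it read them
    private final AtomicStampedReference<Integer> ref =
            new AtomicStampedReference<>(0, 0);

    // Optimistic add: read value + version, compute, CAS, retry on conflict.
    public int add(int delta) {
        while (true) {
            int[] stamp = new int[1];
            Integer current = ref.get(stamp);   // snapshot of value and version
            // compareAndSet checks the exact reference we read and the stamp,
            // then installs the new value and bumps the version by one
            if (ref.compareAndSet(current, current + delta, stamp[0], stamp[0] + 1)) {
                return current + delta;
            }
            // another thread updated in between: loop, re-read, retry
        }
    }

    public int version() {
        return ref.getStamp();
    }

    public static void main(String[] args) {
        VersionedCounter c = new VersionedCounter();
        System.out.println(c.add(5));      // 5
        System.out.println(c.add(3));      // 8
        System.out.println(c.version());   // 2: one version bump per successful write
    }
}
```

Note that no lock is ever held: a writer that loses the race simply observes the stamp change and retries, which is exactly the read-compare-write cycle described above.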

2. Spin lock VS adaptive spin lock

Before introducing spin locks, we need to introduce some prerequisite knowledge to help everyone understand the concept of spin locks.

Blocking or waking up a Java thread requires the operating system to switch CPU state, and this state transition costs processor time. If the content of the synchronized block is too simple, the state transition may take longer than the user code itself.

In many scenarios, synchronized resources are locked only briefly. For that short time, the cost of switching threads and of suspending and resuming them may exceed what it saves. If the physical machine has multiple processors and two or more threads can run in parallel, we can let the thread requesting the lock keep its CPU time slice rather than give it up, and simply wait to see whether the holder releases the lock soon.

To make the current thread "wait a moment", we let it spin. If the thread that locked the synchronized resource has released the lock by the time the spin completes, the current thread acquires the resource directly without blocking, avoiding the overhead of a thread switch. This is a spin lock.


Spin locks have drawbacks of their own and cannot replace blocking. Spin waiting avoids the overhead of thread switching but occupies processor time. If the lock is held briefly, spin waiting works very well; if it is held for a long time, the spinning thread only wastes processor resources. Spin waiting must therefore be bounded: if the spin exceeds the limit (10 attempts by default, adjustable with -XX:PreBlockSpin) without acquiring the lock, the thread should be suspended.

Spin locks are themselves implemented with CAS. In AtomicInteger's increment path, the do-while loop that calls Unsafe is a spin: if the CAS fails to modify the value, the loop retries until the modification succeeds.
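As a rough sketch of what that spin looks like, the following hand-rolled incrementAndGet uses only get and compareAndSet, retrying until the update lands (the class name CasSpinIncrement is ours; the real AtomicInteger delegates to Unsafe, but the loop shape is the same):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasSpinIncrement {

    // Spin with CAS until our +1 is applied on top of the latest value.
    public static int incrementAndGet(AtomicInteger counter) {
        int current;
        do {
            current = counter.get();   // re-read the latest value each attempt
        } while (!counter.compareAndSet(current, current + 1));
        return current + 1;
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicInteger counter = new AtomicInteger();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                incrementAndGet(counter);
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter.get());   // 20000: no increment is lost
    }
}
```

Each failed compareAndSet means another thread updated the value between our read and our write; the loop simply re-reads and tries again.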

For related information about spin locks, you can read JUC Lecture 4: Keywords: Detailed explanation of synchronized Section 4: Spin locks and adaptive spin locks

3. No lock VS biased lock VS lightweight lock VS heavyweight lock

These four locks refer to the status of the lock, specifically for synchronized. Before introducing these four lock states, some additional knowledge needs to be introduced.

In summary: a biased lock avoids even the CAS by simply comparing against the Mark Word; a lightweight lock uses CAS operations and spinning to avoid the performance cost of blocking and waking threads; a heavyweight lock blocks every thread except the one that owns the lock.


For related information, please see JUC Lecture 4: Keyword: Detailed explanation of synchronized

4. Fair lock VS unfair lock

Fair lock means that multiple threads acquire locks in the order in which they apply for locks. Threads directly enter the queue and queue up. Only the first thread in the queue can obtain the lock. The advantage of fair lock is that threads waiting for the lock will not starve to death. The disadvantage is that the overall throughput efficiency is lower than that of unfair locks. All threads in the waiting queue except the first thread will be blocked. The cost of CPU waking up blocked threads is greater than that of unfair locks.

An unfair lock lets threads try to grab the lock directly when they arrive; only if the grab fails do they join the tail of the wait queue. If the lock happens to be free at that moment, the arriving thread takes it without ever blocking, so a later requester can acquire the lock before earlier ones. The advantages are a lower thread wake-up cost and higher overall throughput, since a thread has a chance to take the lock without blocking and the CPU need not wake every waiter. The disadvantage is that threads in the wait queue may starve or wait a long time for the lock.
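In code, the choice between the two modes is just a constructor flag on ReentrantLock; a small sketch (the class name FairnessDemo is ours):

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {

    public static void main(String[] args) {
        ReentrantLock unfair = new ReentrantLock();      // no-arg constructor: unfair by default
        ReentrantLock fair = new ReentrantLock(true);    // true requests the fair policy

        System.out.println(unfair.isFair());   // false
        System.out.println(fair.isFair());     // true

        // Usage is identical either way; only the hand-off order differs.
        fair.lock();
        try {
            // critical section: with a fair lock, waiters are served FIFO
        } finally {
            fair.unlock();
        }
    }
}
```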

Describing this purely in words is a bit abstract, so here is an example borrowed from elsewhere to illustrate fair and unfair locks.

(figure: fair lock, queuing to draw water from a well)

As shown in the picture above, suppose there is a well guarded by an administrator who holds a lock. Only the person holding the lock may draw water, and afterward the lock must be returned to the administrator. Everyone who comes to draw water must get the administrator's permission and receive the lock first. If someone is already drawing water, newcomers must queue. When handing out the lock, the administrator checks whether the requester is at the head of the queue: if so, they get the lock and draw water; if not, they must join the end of the queue. This is a fair lock.

With an unfair lock, the administrator imposes no such requirement. Even with people waiting in line, if the previous person has just returned the lock and the administrator has not yet handed it to the next person in the queue, a queue-jumper who happens to arrive at that moment can take the lock directly from the administrator and draw water without queuing, while those already in line keep waiting. As shown below:

(figure: unfair lock, a queue-jumper takes the lock ahead of the queue)

For more information, please refer to JUC Lecture 10: JUC Lock: Detailed explanation of ReentrantLock .

5. Reentrant lock VS non-reentrant lock

A reentrant lock, also known as a recursive lock, lets a thread that has acquired the lock in an outer method enter an inner method that takes the same lock (provided the lock object is the same object or class) without blocking on the lock it already holds but has not yet released. ReentrantLock and synchronized in Java are both reentrant locks. One advantage of reentrancy is that it avoids a class of deadlocks. Let's analyze with sample code:

public class Widget {

    public synchronized void doSomething() {
        System.out.println("method 1 executing...");
        doOthers();
    }

    public synchronized void doOthers() {
        System.out.println("method 2 executing...");
    }
}

In the above code, both methods in the class are modified by the built-in lock synchronized, and the doOthers() method is called in the doSomething() method. Because the built-in lock is reentrant, the same thread can directly obtain the lock of the current object when calling doOthers() and enter doOthers() for operation.

With a non-reentrant lock, the current thread would need to release the object lock taken in doSomething() before calling doOthers(). But that lock is already held by the current thread and cannot be released and re-acquired this way, so a deadlock results.

And why can reentrant locks automatically acquire locks during nested calls? Let’s analyze them separately through diagrams and source code.

Still using the water-drawing example: several people are queuing, and the administrator now allows the lock to be bound to several buckets belonging to the same person. When that person fills multiple buckets, the first bucket binds the lock and is filled, and the second bucket can then bind the same lock directly and be filled too. Only after every bucket has been filled does the person return the lock to the administrator. All of this person's water-drawing steps complete successfully, and those waiting behind can then draw water. This is a reentrant lock.


But with a non-reentrant lock, the administrator allows the lock to be bound to only one of a person's buckets. The first bucket binds the lock, draws its water, and does not release the lock, so the second bucket cannot bind it and cannot draw water: the current thread deadlocks, and no thread in the wait queue can ever be woken.


We said before that ReentrantLock and synchronized are both reentrant locks, so let's compare and analyze the source code of the reentrant lock ReentrantLock and the non-reentrant lock NonReentrantLock to see why non-reentrant locks can deadlock when they repeatedly call synchronization resources.

First, both ReentrantLock and NonReentrantLock extend the parent class AQS, which maintains a synchronization state, status, used to count reentries; the initial value of status is 0.

When a thread tries to acquire the lock, the reentrant lock first tries to acquire and update the status value. If status == 0 means that no other thread is executing the synchronization code, the status is set to 1 and the current thread starts executing. If status != 0, determine whether the current thread is the thread that acquired the lock. If so, execute status+1, and the current thread can acquire the lock again. Non-reentrant locks directly acquire and try to update the value of the current status. If status != 0, it will fail to acquire the lock and the current thread will be blocked.

When releasing the lock, the reentrant lock also first obtains the value of the current status, provided that the current thread is the thread holding the lock. If status-1 == 0, it means that all operations of the current thread to repeatedly acquire the lock have been completed, and then the thread will actually release the lock. For non-reentrant locks, after determining that the current thread is the thread holding the lock, the status is directly set to 0 and the lock is released.
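The status bookkeeping described above can be sketched in plain Java. This is a simplified illustration, not the real AQS code: the class ReentrantSketch and its tryAcquire/tryRelease methods are invented, and the non-reentrant case reports failure instead of blocking, so the self-deadlock shows up as a failed acquire:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ReentrantSketch {

    private final AtomicInteger status = new AtomicInteger(0); // reentry count
    private volatile Thread owner;                             // thread holding the lock
    private final boolean reentrant;

    public ReentrantSketch(boolean reentrant) {
        this.reentrant = reentrant;
    }

    // Returns true if the lock was acquired (first acquisition or re-entry).
    public boolean tryAcquire() {
        Thread current = Thread.currentThread();
        if (status.get() == 0 && status.compareAndSet(0, 1)) {
            owner = current;              // fresh acquisition: status 0 -> 1
            return true;
        }
        if (reentrant && owner == current) {
            status.incrementAndGet();     // same thread re-enters: just bump the count
            return true;
        }
        // held by another thread, or a non-reentrant lock re-entered by its owner
        return false;
    }

    // Returns true if this thread held the lock and one level was released.
    public boolean tryRelease() {
        if (owner != Thread.currentThread()) {
            return false;                 // only the holder may release
        }
        if (status.get() == 1 || !reentrant) {
            owner = null;                 // last level: really give the lock up
            status.set(0);
        } else {
            status.decrementAndGet();     // unwind one re-entry level
        }
        return true;
    }

    public static void main(String[] args) {
        ReentrantSketch r = new ReentrantSketch(true);
        System.out.println(r.tryAcquire());  // true: first acquisition
        System.out.println(r.tryAcquire());  // true: re-entry is allowed

        ReentrantSketch n = new ReentrantSketch(false);
        System.out.println(n.tryAcquire());  // true: first acquisition
        System.out.println(n.tryAcquire());  // false: re-entry refused (would deadlock)
    }
}
```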


For more information please see:

6. Exclusive lock (mutex) VS shared lock

Exclusive locks and shared locks are also broad concepts. Let's first introduce the concepts, then illustrate them through the source code of ReentrantLock and ReentrantReadWriteLock.

An exclusive lock, also called a mutex, means the lock can be held by only one thread at a time. If thread T places an exclusive lock on data A, no other thread may place any kind of lock on A. The thread holding an exclusive lock can both read and modify the data. The synchronized keyword in the JDK and the Lock implementation classes in JUC are mutexes.

A shared lock means that the lock can be held by multiple threads. If thread T adds a shared lock to data A, other threads can only add shared locks to A and cannot add exclusive locks. The thread that obtains the shared lock can only read data and cannot modify the data.

Exclusive and shared locks are likewise built on AQS: implementing different AQS methods produces exclusive or shared behavior.

The excerpt below shows part of the ReentrantReadWriteLock source:

(figure: ReentrantReadWriteLock source excerpt showing the ReadLock, WriteLock, and Sync fields)

We see that ReentrantReadWriteLock has two locks: ReadLock and WriteLock. From the word meaning, one is a read lock and the other is a write lock, collectively called "read-write lock". Further observation reveals that ReadLock and WriteLock are locks implemented by the internal class Sync. Sync is a subclass of AQS. This structure also exists in CountDownLatch, ReentrantLock, and Semaphore.

In ReentrantReadWriteLock, both the read lock and the write lock delegate to the same Sync, but they lock differently: the read lock is shared and the write lock is exclusive. The shared read lock makes concurrent reads very efficient, while read-write, write-read, and write-write accesses are mutually exclusive. Because reads and writes are separated, ReentrantReadWriteLock's concurrency is a big improvement over an ordinary mutex.
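A small experiment demonstrates this sharing and exclusion with the real ReentrantReadWriteLock API (the class name SharedReadDemo and the probe helper are ours): while one thread holds the read lock, a second reader is admitted but a writer is refused.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SharedReadDemo {

    // While this thread holds the read lock, ask another thread whether
    // it can (a) take the read lock and (b) take the write lock.
    static boolean[] probe() {
        ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
        boolean[] admitted = new boolean[2];

        rw.readLock().lock();   // this thread is now a reader
        Thread other = new Thread(() -> {
            admitted[0] = rw.readLock().tryLock();    // second reader: shared, allowed
            if (admitted[0]) {
                rw.readLock().unlock();
            }
            admitted[1] = rw.writeLock().tryLock();   // writer: exclusive, refused
            if (admitted[1]) {
                rw.writeLock().unlock();
            }
        });
        other.start();
        try {
            other.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        rw.readLock().unlock();
        return admitted;
    }

    public static void main(String[] args) {
        boolean[] admitted = probe();
        System.out.println("second reader admitted: " + admitted[0]);  // true
        System.out.println("writer admitted: " + admitted[1]);         // false
    }
}
```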

For more information, please refer to JUC Lecture 10: JUC Lock: Detailed explanation of ReentrantReadWriteLock

7. Locks in Java

7.1. Why are biased locks and lightweight locks introduced? Why are heavyweight locks expensive?

  • The bottom layer of heavyweight locks relies on the system's synchronization function to implement, which is implemented in Linux using pthread_mutex_t (mutex lock).

  • These underlying synchronization function operations will involve: switching between operating system user mode and kernel mode, and process context switching. These operations are time-consuming, so the overhead of heavyweight lock operations is relatively high .

  • In many cases only one thread acquires the lock, or several threads acquire it alternately. Using a heavyweight lock there is not cost-effective, so biased locks and lightweight locks were introduced to reduce the locking overhead when there is no real contention.

7.2. Biased locks can be revoked and inflated. Why use them if the performance loss is so large?

  • The advantage of a biased lock is that when only one thread acquires the lock, the Mark Word is modified with a single CAS, and every later acquisition is just a simple check, avoiding the CAS a lightweight lock performs on every acquire and release.

  • If you are sure the synchronized block will be accessed by multiple threads or contention is high, you can turn off biased locking with the -XX:-UseBiasedLocking flag.

7.3. What usage scenarios do bias locks, lightweight locks, and heavyweight locks correspond to?

  • 1) Bias lock

    • Applies to only one thread acquiring the lock. When the second thread tries to acquire the lock, it will be upgraded to a lightweight lock even if the first thread has released the lock at this time.
    • But there is a special case: if the biased lock has been rebiased, the second thread can then try to acquire the bias for itself.
    • Action: what is rebiasing?
  • 2) Lightweight lock

    • Suitable for multiple threads acquiring the lock alternately. The difference from a biased lock is that several threads may acquire it, but there must be no contention; if contention occurs, it inflates to a heavyweight lock. Some readers may object that there is no spin here; keep reading.
  • 3) Heavyweight lock

    • Suitable for multiple threads to acquire locks at the same time.
    • Action: Only heavyweight locks will have spin operations

7.4. In which stage does spin occur?

Spin lock code verification

import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

public class SpinLockDemo {

    // atomic reference to the thread currently holding the lock
    AtomicReference<Thread> atomicReference = new AtomicReference<>();

    public void myLock() {
        // the thread attempting to lock
        Thread thread = Thread.currentThread();
        System.out.println(Thread.currentThread().getName() + "\t come in ");
        // spin: expect null and install the current thread;
        // if the reference is not null, someone holds the lock, so keep spinning
        while (!atomicReference.compareAndSet(null, thread)) {
            // busy-wait
        }
    }

    public void myUnLock() {
        // the thread releasing the lock
        Thread thread = Thread.currentThread();
        // reset atomicReference to null when done
        atomicReference.compareAndSet(thread, null);
        System.out.println(Thread.currentThread().getName() + "\t invoked myUnlock()");
    }

    public static void main(String[] args) {
        SpinLockDemo spinLockDemo = new SpinLockDemo();
        // start thread t1 first
        new Thread(() -> {
            // take the lock
            spinLockDemo.myLock();
            try {
                TimeUnit.SECONDS.sleep(5);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            // release the lock
            spinLockDemo.myUnLock();
        }, "t1").start();

        // pause main for 1 second so t1 runs first
        try {
            TimeUnit.SECONDS.sleep(1);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

        // after 1 second, start t2, which spins until t1 releases the lock
        new Thread(() -> {
            // take the lock (spins here until t1 releases it)
            spinLockDemo.myLock();
            // release the lock
            spinLockDemo.myUnLock();
        }, "t2").start();
    }
}

Output results

t1	 come in 
t2	 come in 
t1	 invoked myUnlock()
t2	 invoked myUnlock()

Spin occurs during the heavyweight lock phase.

99.99% of the accounts on the Internet say that spin occurs in the lightweight lock stage, but after actually looking at the source code (JDK8), this is not the case.

  • There is no spin operation in the lightweight lock phase . In the lightweight lock phase, as long as competition occurs, it will directly expand into a heavyweight lock.

  • In the heavyweight lock stage, if the lock acquisition fails, it will try to spin to acquire the lock.

7.5. Why should we design spin operation?

Because the suspension overhead of heavyweight locks is too high.

  • Generally the code inside a synchronized block finishes quickly, so a thread contending for the lock can usually get it by spinning for a short while, saving the suspension overhead of a heavyweight lock.

7.6. How does adaptive spin reflect adaptation?

The adaptive spin lock has a limit on the number of spins, ranging from 1000 to 5000.

  • If the current spin succeeds in acquiring the lock, the spin limit is rewarded with 100 more attempts; if it fails, the limit is penalized by 200 attempts.
  • So if the spin is always successful, the JVM thinks that the success rate of the spin is very high and it is worth spinning a few more times, so it increases the number of spin attempts.
  • On the contrary, if spin always fails, the JVM thinks that spin is just a waste of time and tries to reduce spin as much as possible.

7.7. What is Java read-write lock? What problems does the read-write lock design mainly solve?

From the perspective of whether resources are shared, they can be divided into shared locks and exclusive locks.

  • Exclusive lock : means that the lock can only be held by one thread at a time. Both ReentrantLock and Synchronized are exclusive locks.

  • Shared lock : means that the lock can be held by multiple threads.

    • There is no problem with multiple threads reading a resource class at the same time, so in order to meet the concurrency, reading shared resources should be possible at the same time. However, if one thread wants to write to a shared resource, no other thread should be able to read or write to the resource.
    • For ReentrantReadWriteLock, its read lock is a shared lock and its write lock is an exclusive lock .
    • The shared read lock makes concurrent reads very efficient, while read-write, write-read, and write-write accesses are mutually exclusive.
  • Let's implement a read-write cache. What happens if we start without any lock?

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.TimeUnit;

class MyCache {

    private volatile Map<String, Object> map = new HashMap<>();

    public void put(String key, Object value) {
        System.out.println(Thread.currentThread().getName() + "\t writing: " + key);
        try {
            // simulate network congestion: 0.3s delay
            TimeUnit.MILLISECONDS.sleep(300);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        map.put(key, value);
        System.out.println(Thread.currentThread().getName() + "\t write done");
    }

    public void get(String key) {
        System.out.println(Thread.currentThread().getName() + "\t reading:");
        try {
            // simulate network congestion: 0.3s delay
            TimeUnit.MILLISECONDS.sleep(300);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        Object value = map.get(key);
        System.out.println(Thread.currentThread().getName() + "\t read done: " + value);
    }
}

public class ReadWriteWithoutLockDemo {

    public static void main(String[] args) {
        MyCache myCache = new MyCache();
        // 5 writer threads operate on the resource class
        for (int i = 0; i < 5; i++) {
            final int tempInt = i;
            new Thread(() -> {
                myCache.put(tempInt + "", tempInt + "");
            }, String.valueOf(i)).start();
        }

        // 5 reader threads operate on the resource class
        for (int i = 0; i < 5; i++) {
            final int tempInt = i;
            new Thread(() -> {
                myCache.get(tempInt + "");
            }, String.valueOf(i)).start();
        }
    }
}

Output result:

0	 正在写入:0
1	 正在写入:1
3	 正在写入:3
2	 正在写入:2
4	 正在写入:4
0	 正在读取:
1	 正在读取:
2	 正在读取:
4	 正在读取:
3	 正在读取:
1	 写入完成
4	 写入完成
0	 写入完成
2	 写入完成
3	 写入完成
3	 读取完成:3
0	 读取完成:0
2	 读取完成:2
1	 读取完成:null
4	 读取完成:null

Since some threads read null, this can be fixed with ReentrantReadWriteLock:

package com.lun.concurrency;

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class MyCache2 {

    private volatile Map<String, Object> map = new HashMap<>();
    private ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();

    public void put(String key, Object value) {
        // Acquire the write lock
        rwLock.writeLock().lock();
        try {
            System.out.println(Thread.currentThread().getName() + "\t writing: " + key);
            try {
                // Simulate network congestion with a 0.3-second delay
                TimeUnit.MILLISECONDS.sleep(300);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            map.put(key, value);
            System.out.println(Thread.currentThread().getName() + "\t write done");
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            // Release the write lock
            rwLock.writeLock().unlock();
        }
    }

    public void get(String key) {
        // Acquire the read lock
        rwLock.readLock().lock();
        try {
            System.out.println(Thread.currentThread().getName() + "\t reading:");
            try {
                // Simulate network congestion with a 0.3-second delay
                TimeUnit.MILLISECONDS.sleep(300);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            Object value = map.get(key);
            System.out.println(Thread.currentThread().getName() + "\t read done: " + value);
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            // Release the read lock
            rwLock.readLock().unlock();
        }
    }

    public void clean() {
        map.clear();
    }
}

public class ReadWriteWithLockDemo {

    public static void main(String[] args) {
        MyCache2 myCache = new MyCache2();
        // Threads operating on the resource class: 5 writer threads
        for (int i = 1; i <= 5; i++) {
            // Variables captured by a lambda must be effectively final
            final int tempInt = i;
            new Thread(() -> {
                myCache.put(tempInt + "", tempInt + "");
            }, String.valueOf(i)).start();
        }

        // Threads operating on the resource class: 5 reader threads
        for (int i = 1; i <= 5; i++) {
            // Variables captured by a lambda must be effectively final
            final int tempInt = i;
            new Thread(() -> {
                myCache.get(tempInt + "");
            }, String.valueOf(i)).start();
        }
    }
}

Output result:

1	 writing: 1
1	 write done
2	 writing: 2
2	 write done
3	 writing: 3
3	 write done
5	 writing: 5
5	 write done
4	 writing: 4
4	 write done
2	 reading:
3	 reading:
1	 reading:
5	 reading:
4	 reading:
3	 read done: 3
2	 read done: 2
1	 read done: 1
5	 read done: 5
4	 read done: 4
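One more ReentrantReadWriteLock feature worth knowing is lock downgrading: a thread holding the write lock may acquire the read lock before releasing the write lock, so it can keep reading the state it just wrote without letting another writer in. Upgrading (read lock to write lock) is not supported and would deadlock. Below is a minimal sketch; the class and method names are ours, not from the demo above:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LockDowngradeDemo {
    private static final ReentrantReadWriteLock RW = new ReentrantReadWriteLock();

    // Write, then downgrade to the read lock; returns the number of
    // read-lock holders observed after the downgrade (expected: 1)
    static int writeThenDowngrade() {
        RW.writeLock().lock();              // 1. acquire the write lock
        try {
            // ... update shared state here ...
            RW.readLock().lock();           // 2. downgrade: take the read lock while still writing
        } finally {
            RW.writeLock().unlock();        // 3. release the write lock; the read lock is still held
        }
        try {
            return RW.getReadLockCount();   // 4. safely read the state we just wrote
        } finally {
            RW.readLock().unlock();         // 5. finally release the read lock
        }
    }

    public static void main(String[] args) {
        System.out.println("readers during downgrade: " + writeThenDowngrade());
    }
}
```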

7.8. What are the Java synchronization mechanisms?

Blocking synchronization

  • 1. The synchronized keyword. Everyone is familiar with it; it is best to understand its underlying principles.

  • 2. The Lock interface and its implementation classes, such as ReentrantLock, ReentrantReadWriteLock.ReadLock, and ReentrantReadWriteLock.WriteLock.
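One advantage of the Lock interface over synchronized is timed, non-blocking acquisition via tryLock. A minimal sketch (the class and method names are ours):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private static final ReentrantLock LOCK = new ReentrantLock();

    // Try to do some work, giving up if the lock cannot be acquired within 1 second.
    // synchronized has no equivalent: it always blocks until the monitor is free.
    static boolean doWork() {
        boolean acquired = false;
        try {
            acquired = LOCK.tryLock(1, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        if (acquired) {
            try {
                return true;            // got the lock; real work would go here
            } finally {
                LOCK.unlock();          // always release in finally
            }
        }
        return false;                   // timed out instead of blocking forever
    }

    public static void main(String[] args) {
        System.out.println("work done: " + doWork());
    }
}
```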

Non-blocking synchronization

  • 1. CAS (Compare And Swap) operations
  • 2. Atomic classes built on CAS, such as AtomicInteger
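The relationship between the two can be shown in a short sketch (the class name is ours): AtomicInteger is built on CAS, and a hand-written CAS retry loop keeps a counter correct under contention without ever blocking a thread:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    // Classic CAS loop: read the current value, compute the new value,
    // attempt the swap, and retry if another thread got there first
    public int increment() {
        int current;
        do {
            current = value.get();
        } while (!value.compareAndSet(current, current + 1));
        return current + 1;
    }

    public int get() {
        return value.get();
    }

    public static void main(String[] args) throws InterruptedException {
        CasCounter counter = new CasCounter();
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) counter.increment();
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        // 4 threads x 1000 increments: no updates are lost, and no locks were used
        System.out.println(counter.get());
    }
}
```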

No-synchronization approaches

  • Thread safety does not always require synchronization. If a method does not involve shared data, it naturally needs no synchronization measures to be correct.

  • 1. Various tools

    • Semaphore: a counter used to control access to one or more shared resources. It is a basic tool of concurrent programming; most programming languages provide this mechanism, and it is also frequently discussed in operating systems courses.

    • CountDownLatch: a synchronization helper class provided by Java. It allows one or more threads to wait until a set of operations being performed in other threads completes.

    • CyclicBarrier: also a synchronization helper class provided by Java. It allows multiple threads to wait for each other at a common rendezvous point.

    • Phaser: also a synchronization helper class provided by Java. It divides a concurrent task into multiple phases; before the next phase starts, every thread must finish the current phase. It was introduced in Java 7.

    • Exchanger: provides a data exchange point between two threads.

  • 2. Thread-local storage (ThreadLocal)
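As an illustration of the helper classes above, here is a minimal CountDownLatch sketch (the class and method names are ours): the main thread blocks in await() until every worker has counted down:

```java
import java.util.concurrent.CountDownLatch;

public class LatchDemo {
    // Start n workers and wait for all of them to finish;
    // returns the latch count after await() (expected: 0)
    static long runWorkers(int n) {
        CountDownLatch done = new CountDownLatch(n);
        for (int i = 0; i < n; i++) {
            final int id = i;
            new Thread(() -> {
                System.out.println("worker " + id + " finished");
                done.countDown();       // each worker decrements the latch once
            }).start();
        }
        try {
            done.await();               // blocks until the count reaches 0
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return done.getCount();
    }

    public static void main(String[] args) {
        System.out.println("latch count after await: " + runWorkers(3));
        System.out.println("all workers done");
    }
}
```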

For details, please read JUC Lecture 2: Java Concurrency Theory Basics: Java Memory Model (JMM) and Threads, Section 7.
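Thread-local storage can also be sketched briefly (the class and method names are ours): each thread gets its own copy of the variable, so no synchronization is needed at all:

```java
public class ThreadLocalDemo {
    // Each thread sees its own independent StringBuilder; no locking required
    private static final ThreadLocal<StringBuilder> BUF =
            ThreadLocal.withInitial(StringBuilder::new);

    // Append to this thread's buffer and return its contents;
    // the result contains only the current thread's appends
    static String appendAndRead(String tag) {
        BUF.get().append(tag);
        return BUF.get().toString();
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> System.out.println(
                Thread.currentThread().getName() + " sees: "
                        + appendAndRead(Thread.currentThread().getName()));
        Thread a = new Thread(task, "A");
        Thread b = new Thread(task, "B");
        a.start(); b.start();
        a.join(); b.join();
        // Each thread prints only its own name: the buffers are never shared
    }
}
```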

Conclusion

This article has given a basic introduction to the commonly used locks and lock concepts in Java, with comparative analysis from the perspectives of source code and practical application. Due to space limitations and the author's own limitations, not every topic is covered in depth.

In fact, Java encapsulates locks well, making them easy to use in daily development. Still, developers should be familiar with the underlying principles of locks so they can choose the most suitable one for each scenario. Moreover, the ideas in the source code are excellent and well worth studying.


Origin blog.csdn.net/qq_28959087/article/details/133044819