ReentrantLock vs. synchronized

Multithreading and concurrency are nothing new, but one of the innovations of Java's design was that it was the first mainstream language to integrate a cross-platform threading model and a formal memory model directly into the language. The core class library contains a Thread class for constructing, starting, and manipulating threads, and the language itself includes constructs for communicating concurrency constraints across threads: synchronized and volatile. While this simplified the development of platform-independent concurrent classes, it by no means made writing concurrent classes trivial, just easier.

A quick review of synchronized

Declaring a block of code as synchronized has two important consequences, usually referred to as the atomicity and visibility of the code. Atomicity means that only one thread at a time can execute code protected by a given monitor object (lock), which prevents multiple threads from conflicting with each other when updating shared state. Visibility is more subtle; it deals with the various anomalous behaviors of memory caches and compiler optimizations. Ordinarily, threads are under no obligation to make cached variable values immediately visible to other threads (whether those values are held in registers, in processor-specific caches, or rearranged by instruction reordering or other compiler optimizations). But if the developer uses synchronization, as in the code below, the runtime guarantees that the updates one thread makes to variables before exiting a synchronized block are immediately visible to another thread when it enters a synchronized block protected by the same monitor (lock). Similar rules exist for volatile variables.

  synchronized (lockObject) {
      // update object state
  }


Synchronization, then, takes care of everything needed to safely update multiple shared variables without race conditions or data corruption (assuming the synchronization boundaries are in the right place), and guarantees that other threads that synchronize correctly will see the latest values of those variables. By defining a clear, cross-platform memory model (modified in JDK 5.0 to correct some errors in the original definition), it becomes possible to build "Write Once, Run Anywhere" concurrent classes by following this simple rule:

You must use synchronization whenever a variable you are about to write may next be read by another thread, or a variable you are about to read may have last been written by another thread.

Though this may sound cumbersome, in recent JVMs the performance cost of uncontended synchronization (where one thread owns the lock and no other thread is trying to acquire it) is actually quite low. (This was not always true; synchronization in early JVMs was not yet optimized, which is how so many people came to believe, now mistakenly, that synchronization carries a high performance cost whether contended or not.)

Improvements to synchronized

So synchronization looks pretty good, doesn't it? Then why did the JSR 166 team spend so much time developing the java.util.concurrent.locks framework? The answer is simple: synchronization is good, but not perfect. It has some functional limitations: it is not possible to interrupt a thread that is waiting to acquire a lock, it is not possible to poll for a lock, and there is no way to attempt to acquire a lock without being willing to wait for it. Synchronization also requires that a lock be released in the same stack frame in which it was acquired, which is fine most of the time (and interacts nicely with exception handling), but there are situations where non-block-structured locking is more appropriate.

ReentrantLock class

The Lock framework in java.util.concurrent.locks is an abstraction for locking that allows a lock to be implemented as a Java class rather than as a language feature. This leaves room for multiple implementations of Lock, which may have different scheduling algorithms, performance characteristics, or locking semantics. The ReentrantLock class implements Lock and has the same concurrency and memory semantics as synchronized, but adds features such as lock polling, timed lock waits, and interruptible lock waits. In addition, it offers better performance under heavy contention. (In other words, when many threads want to access a shared resource, the JVM can spend less time scheduling threads and more time executing them.)

What does a reentrant lock mean? Simply put, an acquisition count is associated with the lock: if a thread that already owns the lock acquires it again, the count is incremented, and the lock must then be released twice before it is truly released. This mirrors the semantics of synchronized: if a thread enters a synchronized block protected by a monitor the thread already owns, the thread is allowed to proceed, and the lock is not released when the thread exits the second (or a subsequent) synchronized block; it is released only when the thread exits the first synchronized block it entered under that monitor.
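The acquisition count described above can be observed directly through ReentrantLock's real getHoldCount() and isHeldByCurrentThread() methods; a minimal sketch:

```java
import java.util.concurrent.locks.ReentrantLock;

ReentrantLock lock = new ReentrantLock();
lock.lock();                               // hold count goes to 1
lock.lock();                               // same thread reacquires: count goes to 2
System.out.println(lock.getHoldCount());   // 2
lock.unlock();                             // count back to 1; lock is still held
System.out.println(lock.isHeldByCurrentThread());  // true
lock.unlock();                             // count 0: lock actually released
System.out.println(lock.isHeldByCurrentThread());  // false
```

Note that each lock() must be balanced by an unlock(); releasing once after acquiring twice leaves the lock held.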

Looking at the code sample in Listing 1, you can see one noticeable difference between Lock and synchronized: the lock must be released in a finally block. Otherwise, if the protected code throws an exception, the lock might never be released! This distinction may sound trivial, but in reality it is extremely important. Forgetting to release the lock in a finally block plants a ticking time bomb in the program, and when the bomb eventually goes off, tracking down the source takes a lot of effort. With synchronization, the JVM ensures that locks are released automatically.


Listing 1. Securing a block of code with ReentrantLock.

  Lock lock = new ReentrantLock();
  lock.lock();
  try {
      // update object state
  }
  finally {
      lock.unlock();
  }


In addition, ReentrantLock's implementation is far more scalable under contention than the current implementation of synchronized. (It is likely that contended synchronized performance will improve in a future JVM version.) This means that when many threads are all contending for the same lock, the total overhead with ReentrantLock is usually much lower than with synchronized.

Comparing the scalability of ReentrantLock and synchronized

Tim Peierls built a simple benchmark using a simple linear congruential pseudorandom number generator (PRNG) to measure the relative scalability of synchronized and Lock. This example is good because the PRNG does real work each time nextRandom() is called, so the benchmark measures a reasonable, realistic application of synchronized and Lock, not code that is purely on paper or does nothing (like many so-called benchmarks).

In this benchmark, there is a PseudoRandom interface with a single method, nextRandom(int bound). The interface is very similar in function to the java.util.Random class. Because a PRNG uses the most recently generated number as input when producing the next one, it maintains the last generated number as an instance variable, and it is important that the code segment updating this state not be preempted by other threads, so some form of locking is needed to ensure this. (The java.util.Random class does this too.) We built two implementations of PseudoRandom: one using synchronized and one using java.util.concurrent.locks.ReentrantLock. The driver spawns a large number of threads, each of which frantically contends for time slices, and then measures how many rounds per second each version can execute. Figures 1 and 2 summarize the results for different numbers of threads. This benchmark isn't perfect, and it was only run on two systems (a dual-Xeon system running hyperthreaded Linux and a single-processor Windows system), but it should be enough to show the scalability advantage of ReentrantLock over synchronized.
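The two implementations described above can be sketched as follows. This is illustrative, not Tim Peierls' actual benchmark code: the LCG constants and the calculateNext() helper are assumptions, but the locking structure matches the description.

```java
import java.util.concurrent.locks.ReentrantLock;

interface PseudoRandom {
    int nextRandom(int bound);
}

// Implementation 1: guard the seed update with synchronized.
class SynchronizedPseudoRandom implements PseudoRandom {
    private int seed = 1;
    public synchronized int nextRandom(int bound) {
        seed = calculateNext(seed);
        return Math.abs(seed % bound);
    }
    // Hypothetical linear congruential step (illustrative constants).
    static int calculateNext(int s) {
        return s * 1103515245 + 12345;
    }
}

// Implementation 2: guard the same update with a ReentrantLock.
class LockPseudoRandom implements PseudoRandom {
    private final ReentrantLock lock = new ReentrantLock();
    private int seed = 1;
    public int nextRandom(int bound) {
        lock.lock();
        try {
            seed = SynchronizedPseudoRandom.calculateNext(seed);
            return Math.abs(seed % bound);
        } finally {
            lock.unlock();   // always release in finally
        }
    }
}
```

The driver would then hammer one shared PseudoRandom instance from many threads and count completed calls per second for each implementation.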


The graphs in Figures 1 and 2 show throughput in calls per second, normalized so that the 1-thread synchronized case equals 1. Each implementation converges relatively quickly on a steady-state throughput, which typically requires the processor to be fully utilized, spending most of its time doing the actual work (computing random numbers) and only a small fraction on thread scheduling overhead. You'll notice that the synchronized version copes quite poorly with any kind of contention, while the Lock version spends considerably less time on scheduling overhead, leaving room for higher throughput and more efficient use of the CPU.

Condition variables

The root class Object contains some special methods for communicating between threads: wait(), notify(), and notifyAll(). These are advanced concurrency features that many developers never use, which is probably a good thing, since they are fairly subtle and easy to misuse. Fortunately, with the introduction of java.util.concurrent in JDK 5.0, there are very few places left where developers need to use these methods.

There is an interaction between notification and locking: in order to wait or notify on an object, you must hold that object's lock. Just as Lock is a generalization of synchronization, the Lock framework includes a generalization of wait and notify, called Condition. A Lock object acts as a factory for condition variables bound to that lock, and unlike the standard wait and notify methods, a given Lock can have more than one condition variable associated with it. This simplifies the development of many concurrent algorithms. For example, the Javadoc for Condition shows an example of a bounded buffer implementation that uses two condition variables, "not full" and "not empty", which is more readable (and more efficient) than an implementation with only a single wait set per lock. The Condition methods are similar to wait, notify, and notifyAll, but are named await, signal, and signalAll respectively, because they cannot override the corresponding methods on Object.
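A minimal bounded buffer along the lines of the Condition Javadoc example, with one lock and the two condition variables mentioned above ("not full" and "not empty"):

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

class BoundedBuffer<T> {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull  = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();
    private final Object[] items;
    private int putIndex, takeIndex, count;

    BoundedBuffer(int capacity) { items = new Object[capacity]; }

    void put(T x) throws InterruptedException {
        lock.lock();
        try {
            while (count == items.length)
                notFull.await();             // wait until there is room
            items[putIndex] = x;
            putIndex = (putIndex + 1) % items.length;
            count++;
            notEmpty.signal();               // wake one waiting consumer
        } finally {
            lock.unlock();
        }
    }

    @SuppressWarnings("unchecked")
    T take() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0)
                notEmpty.await();            // wait until an item arrives
            T x = (T) items[takeIndex];
            items[takeIndex] = null;
            takeIndex = (takeIndex + 1) % items.length;
            count--;
            notFull.signal();                // wake one waiting producer
            return x;
        } finally {
            lock.unlock();
        }
    }
}
```

Because producers wait only on notFull and consumers only on notEmpty, a signal wakes a thread that can actually make progress, which is the readability and efficiency gain described above.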

It's not fair

If you look at the Javadoc, you'll see that one of ReentrantLock's constructor parameters is a boolean that lets you choose between a fair and an unfair lock. Fair locks let threads acquire the lock in the order it was requested; unfair locks permit barging, in which a thread can sometimes acquire the lock before another thread that requested it first.
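The choice is made in the constructor; these calls are the standard ReentrantLock API:

```java
import java.util.concurrent.locks.ReentrantLock;

ReentrantLock unfair = new ReentrantLock();      // default: barging permitted
ReentrantLock fair   = new ReentrantLock(true);  // grant in request order
System.out.println(unfair.isFair());             // false
System.out.println(fair.isFair());               // true
```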

Why not make all locks fair? After all, fairness is good and unfairness is bad, isn't it? (When kids want a decision overturned, they always yell "that's not fair." We think fairness is pretty important, and kids know it.) In reality, the fairness guarantee offered by fair locks comes at a substantial performance cost. The bookkeeping and synchronization required to ensure fairness mean that contended fair locks have much lower throughput than unfair locks. By default, fairness should be set to false, unless fairness is critical to your algorithm and threads must be serviced strictly in the order they queued.

So what about synchronization? Are built-in monitor locks fair? The answer surprises many people: they are unfair, and always have been. Yet no one complains about thread starvation, because the JVM guarantees that all threads will eventually get the lock they are waiting for. Ensuring statistical fairness, which is sufficient in most cases, costs far less than an absolute fairness guarantee. So the fact that ReentrantLock is "unfair" by default merely makes explicit something that has always been the case with synchronization. If it doesn't bother you with synchronization, don't let it bother you with ReentrantLock.

Figures 3 and 4 contain the same data as Figures 1 and 2, with one additional data set for the random number benchmark, this time using fair locks instead of the default barging locks. As you can see, fairness has a price. If you need it, you must pay for it, but don't make it your default choice.


Better in every way?

It seems that ReentrantLock is better than synchronized in every way: it can do everything synchronized can, it has the same memory and concurrency semantics, it has features synchronized lacks, and it performs better under load. So should we just forget about synchronized, stop treating it as a good idea that has long since been optimized, or even rewrite our existing synchronized code with ReentrantLock? In fact, several introductory books on Java programming take this approach in their chapters on multithreading, using Lock for all their examples and treating synchronized as history. But I think that is taking a good thing too far.

Don't abandon synchronized

While ReentrantLock is a very compelling implementation with some important advantages over synchronized, I think it would be a serious mistake to rush to treat synchronized as obsolete. The locking classes in java.util.concurrent.locks are tools for advanced users and advanced situations. In general, you should keep using synchronized unless you have a concrete need for one of Lock's advanced features.

Why am I advocating conservatism in adopting an apparently "better" implementation? Because compared with the locking classes in java.util.concurrent.locks, synchronized still has some advantages. For example, with synchronized you cannot forget to release the lock; the JVM does it for you when you exit the synchronized block. With Lock, it is easy to forget the finally block that releases the lock, which is very bad for your program: it will pass its tests but deadlock in production, and at that point it will be hard to pinpoint the cause. (This alone is a good reason to keep Lock away from junior developers entirely.)

Another reason is that when the JVM manages lock acquisition and release through synchronized, it can include locking information when it generates thread dumps. This information is extremely valuable for debugging, because it can identify the source of deadlocks or other abnormal behavior. A Lock class is just an ordinary class, and the JVM does not know which thread owns a Lock object. Moreover, almost every developer is familiar with synchronized, and it works in every version of the JVM. Until JDK 5.0 becomes the standard (which may be two years away), using the Lock classes means relying on features not available in every JVM and not familiar to every developer.

When to choose ReentrantLock instead of synchronized

So when should we use ReentrantLock? The answer is simple: use it when you actually need something synchronized doesn't provide, such as timed lock waits, interruptible lock waits, non-block-structured locking, multiple condition variables, or lock polling. ReentrantLock also has scalability benefits and should be used in highly contended situations, but remember that the vast majority of synchronized blocks are hardly ever contended, so high contention can be set aside. I recommend developing with synchronized until it is proven inadequate, rather than simply assuming that ReentrantLock will "perform better." Remember, these are advanced tools for advanced users. (And truly advanced users prefer the simplest tool they can find until they are convinced the simple tool is inadequate.) As always, make it right first, and then worry about whether you have to make it faster.
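To make the feature list concrete, here is a sketch of two things synchronized cannot do: lock polling with tryLock() and a timed lock wait with tryLock(timeout, unit). The Account class and the transfer scenario are hypothetical illustrations, but the two tryLock overloads are the real ReentrantLock API.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

class Account {
    final ReentrantLock lock = new ReentrantLock();
    int balance;
    Account(int balance) { this.balance = balance; }
}

class Transfers {
    // Acquire both locks by polling: back off instead of blocking forever,
    // avoiding the lock-ordering deadlock that two blocking lock() calls
    // from threads transferring in opposite directions could cause.
    static boolean tryTransfer(Account from, Account to, int amount)
            throws InterruptedException {
        if (from.lock.tryLock(50, TimeUnit.MILLISECONDS)) {  // timed wait
            try {
                if (to.lock.tryLock()) {                     // pure poll
                    try {
                        if (from.balance < amount) return false;
                        from.balance -= amount;
                        to.balance += amount;
                        return true;
                    } finally {
                        to.lock.unlock();
                    }
                }
            } finally {
                from.lock.unlock();
            }
        }
        return false;   // could not get both locks; caller may retry
    }
}
```

A caller that gets false back can retry, give up, or do other work in the meantime, none of which is possible when blocked inside a synchronized block.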

The Lock framework is a compatible alternative to synchronization that offers many features synchronized does not, and its implementations provide better performance under contention. But these obvious advantages are not sufficient reason to always replace synchronized with ReentrantLock. Instead, choose based on whether you need ReentrantLock's capabilities. Most of the time you should not: synchronized works well, works on all JVMs, is understood by more developers, and is less error prone. Use Lock only when you really need it. In those cases, you'll be glad you had this tool.
