Java Locks Explained: Exclusive / Shared, Fair / Unfair, Optimistic / Pessimistic, and Thread Locks

Java concurrency involves many kinds of locks, such as fair locks, optimistic locks, and pessimistic locks. This article covers the following classifications:

Fair / unfair locks

Reentrant locks

Exclusive / shared locks

Optimistic / pessimistic locks

Segmented locks

Spin locks

Thread locks

Optimistic lock vs. pessimistic lock

Optimistic and pessimistic locking are broad concepts that reflect different attitudes toward thread synchronization, and both have concrete counterparts in Java and in databases.

1. Optimistic locking

As the name suggests, an optimistic lock is optimistic: every time it reads data it assumes nobody else will modify it, so it does not lock. Only at update time does it check whether anyone changed the data in the meantime, typically using a mechanism such as a version number.

Optimistic locking suits read-heavy applications. In Java it is realized through lock-free programming; the most common algorithm is CAS, and Java's atomic classes implement their increment operations by spinning on CAS.

CAS stands for Compare And Swap, a lock-free algorithm. It synchronizes a variable between multiple threads without using locks, so no thread is ever blocked. The atomic classes in the java.util.concurrent package implement optimistic locking via CAS.

In simple terms, the CAS algorithm has three operands:

  • V: the memory value to read and write.
  • A: the expected value to compare against.
  • B: the new value to write.

If and only if the memory value V equals the expected value A is V set to B; otherwise V is left unchanged. This is the optimistic way of thinking: CAS believes that no other thread will have modified the value before it does. By contrast, synchronized is a pessimistic lock: it assumes other threads will modify the value before it does, so it always locks first, which is less efficient when contention is rare.
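As a concrete illustration of V, A, and B, here is a minimal sketch (the class name CasDemo is ours, not from the article) using AtomicInteger.compareAndSet, which is backed by CAS:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger value = new AtomicInteger(5);   // memory value V = 5

        // CAS succeeds only when V matches the expected value A (5):
        boolean swapped = value.compareAndSet(5, 10); // V == A, so V becomes B (10)
        System.out.println(swapped + " " + value.get()); // true 10

        // A second attempt with a stale expected value fails and leaves V unchanged:
        swapped = value.compareAndSet(5, 20);
        System.out.println(swapped + " " + value.get()); // false 10
    }
}
```

A failed compareAndSet is how a spinning thread learns that someone else got there first; the atomic classes simply re-read the value and retry in a loop.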

2. Pessimistic locking

A pessimistic lock always assumes the worst case: every time it reads data it assumes someone else will modify it, so it takes a lock on every access, and anyone else who wants the data blocks until the lock is released.

Traditional relational databases such as MySQL use this kind of locking mechanism heavily: row locks, table locks, read locks, and write locks all lock before operating.

Compared with other databases, MySQL's locking mechanism is relatively simple; its most notable feature is that different storage engines support different locking mechanisms.

For example:

  1. The MyISAM and MEMORY storage engines use table-level locking;
  2. The InnoDB storage engine supports both row-level and table-level locking, and uses row-level locking by default.

The characteristics of MySQL's lock granularities can be roughly summarized as follows:


  • Table-level locks: low overhead, fast to acquire; no deadlock (because MyISAM acquires all the locks a SQL statement needs up front); coarsest granularity, so the highest probability of lock conflict and the lowest concurrency.
  • Row-level locks: high overhead, slow to acquire; deadlock is possible; finest granularity, so the lowest probability of lock conflict and the highest concurrency.
  • Page-level locks: overhead and acquisition speed between table and row locks; deadlock is possible; granularity between table and row locks, with moderate concurrency.

Row locks and table locks

1. Classified by granularity, locks are generally divided into row locks, table locks, and database locks.

(1) Row lock: when accessing the database, lock a whole row of data to prevent concurrency errors.

(2) Table lock: when accessing the database, lock the whole table to prevent concurrency errors.

2. Differences between row locks and table locks:

  • Table lock: low overhead, fast to acquire, no deadlock; coarse granularity, high probability of lock conflict, lowest concurrency.
  • Row lock: high overhead, slow to acquire, deadlock possible; fine granularity, low probability of lock conflict, high concurrency.

Optimistic and pessimistic locks

(1) Pessimistic lock: as the name suggests, very pessimistic — every read assumes others will modify the data, so every access takes a lock, and anyone else wanting the data blocks until the lock is released.

Traditional relational databases use this locking mechanism heavily: row locks, table locks, read locks, and write locks all lock before operating.

(2) Optimistic lock: as the name suggests, very optimistic — every read assumes nobody will modify the data, so no lock is taken; only at update time is a check made (for example via a version number) that nobody else updated the data in the meantime.

Optimistic locking suits read-heavy applications and can improve throughput; some databases provide optimistic-locking-style mechanisms such as write_condition.

(3) The difference between pessimistic and optimistic locking:

Each kind of lock has its own strengths and weaknesses; neither is simply better than the other. Optimistic locking fits workloads with relatively few writes, i.e. where conflicts are genuinely rare, because it saves the overhead of locking and raises overall throughput. If conflicts are frequent, however, the upper-layer application keeps retrying, which actually hurts performance, so pessimistic locking is more appropriate in that case.
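The version-number mechanism mentioned above can be sketched in Java as follows. VersionedRecord is a hypothetical class of ours, not a database API; in a real database the same check-and-bump is typically one atomic statement in the style of "UPDATE t SET data = ?, version = version + 1 WHERE id = ? AND version = ?" (the synchronized methods here just stand in for that atomicity):

```java
class VersionedRecord {
    private long version = 0;
    private String data = "initial";

    synchronized long currentVersion() { return version; }
    synchronized String read() { return data; }

    // The update is applied only if the version is unchanged since the caller
    // read it; otherwise the caller must re-read and retry.
    synchronized boolean tryUpdate(long expectedVersion, String newData) {
        if (version != expectedVersion) return false; // conflict detected
        data = newData;
        version++;
        return true;
    }
}

public class OptimisticDemo {
    public static void main(String[] args) {
        VersionedRecord r = new VersionedRecord();
        long v = r.currentVersion();             // read: version 0
        System.out.println(r.tryUpdate(v, "a")); // true  — no conflict
        System.out.println(r.tryUpdate(v, "b")); // false — stale version, must retry
    }
}
```

Note that no lock is held between the read and the update; conflicts are detected rather than prevented, which is exactly the optimistic trade-off.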

Shared locks

A shared lock means that several different transactions share the same lock on the same resource. It is like one door with several keys: your house has a single door but multiple keys to it — you hold one and your girlfriend holds another, and either of you can enter with your own key. That is a shared lock.

As noted above, pessimistic locking is generally implemented by the database itself, and a shared lock is a kind of pessimistic lock. So how is a shared lock requested in MySQL? By appending lock in share mode to a statement, you place a shared lock on the resources it reads.

When to use table locks

For InnoDB tables, row-level locking should be used in most cases, since transactions and row locks are usually the very reason InnoDB was chosen. But for a few special transactions, table-level locking is worth considering.

  • First: the transaction needs to update most or all of a relatively large table. With the default row locks, the transaction not only runs inefficiently but may also make other transactions wait a long time for locks and cause lock conflicts; in this situation, table locks can speed up the transaction.
  • Second: the transaction spans several tables and is complex enough that deadlock, and hence mass transaction rollback, is likely. Locking all the involved tables up front avoids deadlock and reduces the overhead the database incurs from rollbacks.

Of course, such transactions should not make up too much of the application; otherwise, a MyISAM table should be considered instead.

Typical scenarios for table and row locks:

  • Table-level locking suits applications with low concurrency that are mostly queries with a small amount of updates, such as small web applications;
  • Row-level locking suits highly concurrent systems with strong transaction-integrity requirements, such as online transaction processing (OLTP) systems.

Back in Java, the synchronized keyword mentioned above is a typical pessimistic lock.


3. In short:

  • Pessimistic locks suit write-heavy scenarios: locking first guarantees that write operations are correct.
  • Optimistic locks suit read-heavy scenarios: being lock-free lets them greatly improve read performance.

Fair lock vs. unfair lock

1. Fair lock

A fair lock is, as the name says, fair: in a concurrent environment, each thread acquiring the lock first looks at the wait queue the lock maintains. If the queue is empty, or the thread is first in the queue, it takes the lock; otherwise it joins the queue and later receives the lock in FIFO order.

The advantage of a fair lock is that waiting threads never starve. The disadvantage is lower overall throughput than an unfair lock: every thread in the queue except the first is blocked, and waking blocked threads costs the CPU more than with unfair locks.

2. Unfair lock

An unfair lock tries to grab the lock directly on arrival; only if that attempt fails does it fall back to queuing, fair-lock style.

The advantage of an unfair lock is reduced thread wake-up overhead and higher overall throughput, because an arriving thread has a chance to take the lock without blocking and the CPU need not wake every waiting thread. The disadvantage is that threads in the wait queue may starve, or wait a long time before acquiring the lock.


3. Typical application:

ReentrantLock in the JDK's java.util.concurrent package has a constructor taking a boolean that selects a fair or (by default) unfair lock; for example, a fair lock is created with new ReentrantLock(true).
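A minimal sketch of the two constructors (the class name FairnessDemo is ours); isFair() reports which mode a ReentrantLock was built with:

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    public static void main(String[] args) {
        ReentrantLock unfair = new ReentrantLock();      // default: unfair (barging allowed)
        ReentrantLock fair   = new ReentrantLock(true);  // FIFO hand-off to waiting threads

        System.out.println(unfair.isFair()); // false
        System.out.println(fair.isFair());   // true
    }
}
```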

Exclusive lock vs. shared lock

1. Exclusive lock

An exclusive lock can be held by only one thread at a time.

2. Shared lock

A shared lock can be held by multiple threads at once.

3. Comparison

Java's ReentrantLock is an exclusive lock. But ReentrantReadWriteLock, which implements the ReadWriteLock interface, has a read lock that is shared and a write lock that is exclusive.

The shared read lock makes concurrent reading very efficient, while read/write, write/read, and write/write accesses remain mutually exclusive.

Both exclusive and shared locks are built on AQS; a synchronizer implements different AQS methods to obtain exclusive or shared behavior.
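The shared/exclusive split is easy to observe with ReentrantReadWriteLock (the class name ReadWriteDemo is ours). While a read lock is held, another read acquisition succeeds, but a write acquisition fails:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteDemo {
    public static void main(String[] args) {
        ReentrantReadWriteLock rw = new ReentrantReadWriteLock();

        rw.readLock().lock();                    // shared: readers may hold it together
        try {
            // A second read-lock acquisition succeeds even while the first is held:
            boolean secondReader = rw.readLock().tryLock();
            System.out.println(secondReader);    // true
            if (secondReader) rw.readLock().unlock();

            // The write lock is exclusive and cannot be taken while a reader holds
            // the lock (ReentrantReadWriteLock does not support lock upgrade):
            System.out.println(rw.writeLock().tryLock()); // false
        } finally {
            rw.readLock().unlock();
        }
    }
}
```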

4. AQS

AbstractQueuedSynchronizer (AQS) is a base framework for building locks and other synchronizer components. It maintains synchronization state with a volatile int member (named state) and queues worker threads waiting to acquire the resource in a built-in FIFO queue.


In the layered structure of the concurrent package, AQS, the non-blocking data structures, and the atomic variable classes form the base layer, implemented on top of volatile reads/writes and CAS; higher-level classes such as the Lock implementations, synchronizers, blocking queues, Executor, and the concurrent containers are built on that base layer.

Segmented locks

A segmented lock is actually a lock design rather than one specific lock. For ConcurrentHashMap, segmented locking is how it achieves efficient concurrent operations.

Take ConcurrentHashMap as an illustration of what segmented-lock design means. In the JDK 7 implementation, its segment lock is called Segment. A Segment is structured much like a HashMap (as implemented in JDK 7): internally it holds an Entry array, each element of which heads a linked list. At the same time, a Segment is also a ReentrantLock (Segment extends ReentrantLock).

When an element is put, the whole map is not locked; instead the element's hash code determines which segment it belongs to, and only that segment is locked. So when multiple threads put at the same time, as long as they hit different segments, the insertions proceed truly in parallel.

However, when computing the size, the map needs global information, so it has to acquire the locks of all segments before counting.

The point of segmented locking is to refine lock granularity: when an operation does not need to touch the whole array, it locks only the one segment of the array it operates on.
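In the same spirit, here is a toy lock-striping sketch. StripedCounter is a hypothetical class of ours, far simpler than the real ConcurrentHashMap, but it shows the two behaviors described above: per-stripe locking for writes, and taking every stripe's lock for a global size():

```java
import java.util.concurrent.locks.ReentrantLock;

public class StripedCounter {
    private final ReentrantLock[] locks = new ReentrantLock[16];
    private final long[] counts = new long[16];

    public StripedCounter() {
        for (int i = 0; i < locks.length; i++) locks[i] = new ReentrantLock();
    }

    private int stripe(Object key) {
        return (key.hashCode() & 0x7fffffff) % locks.length; // non-negative index
    }

    public void increment(Object key) {
        int i = stripe(key);
        locks[i].lock();                 // lock only the stripe this key maps to
        try { counts[i]++; } finally { locks[i].unlock(); }
    }

    public long size() {                 // global view: must lock every stripe
        for (ReentrantLock l : locks) l.lock();
        try {
            long sum = 0;
            for (long c : counts) sum += c;
            return sum;
        } finally {
            for (ReentrantLock l : locks) l.unlock();
        }
    }

    public static void main(String[] args) {
        StripedCounter c = new StripedCounter();
        c.increment("a");
        c.increment("b");
        System.out.println(c.size()); // 2
    }
}
```

Writers that map to different stripes never contend with each other, which is exactly why the global size() is the expensive operation here, just as in the segmented map.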


Java thread locks

Resource sharing between threads

Thread deadlock

Choosing a lock

Because the threads of a process share its address space and resources, a question arises:

If multiple threads access the same resource at the same time, how should that be handled?

In Java concurrent programming, multiple threads frequently access the same shared resource, and the developer must consider how to keep the data consistent. This is the origin of Java's lock (synchronization) mechanisms.

Java provides a variety of locking mechanisms for multi-threaded code; common ones are:

  1. synchronized
  2. ReentrantLock
  3. Semaphore
  4. AtomicInteger, etc.

Each mechanism has its own strengths, weaknesses, and use cases; only by knowing their characteristics can you use them fluently in multi-threaded Java development.

Four kinds of Java thread locks (thread synchronization)

1. synchronized

In Java, the synchronized keyword is used to keep data consistent.

The synchronized mechanism locks a shared resource so that only the thread holding the lock can access it, forcing accesses to the shared resource to happen sequentially.

Java developers know synchronized well: synchronizing multi-threaded code with it is simple — just add the keyword to the methods or code blocks that need synchronization. It guarantees that at any moment at most one thread executes the synchronized code on the same object, so the code runs without interference from other threads. Code guarded by synchronized gains atomicity and visibility; it is used very frequently in practice and meets ordinary thread-synchronization needs.

synchronized (obj) {

    // code that at most one thread may execute at a time, per obj

}

The synchronized mechanism is implemented in software by the JVM, so its performance has improved as Java versions have advanced.

In Java 1.6, synchronized received many optimizations — adaptive spinning, lock elimination, lock coarsening, lightweight locks, and biased locks — which substantially improved its efficiency. Java 1.7 and 1.8 further optimized the keyword's implementation.

Note that a thread waiting for a synchronized lock cannot be interrupted by Thread.interrupt(), so you must check your design carefully when coding, or you risk the embarrassment of thread deadlock.

Finally, although Java has many locking mechanisms and some of them outperform synchronized, the keyword is still strongly recommended in multi-threaded applications: it is easy to use, the JVM handles the follow-up work, and it is highly reliable. Only when locking is confirmed to be the performance bottleneck of a multi-threaded program should other mechanisms, such as ReentrantLock, be considered.

2. ReentrantLock

A reentrant lock, as the name implies, can be acquired repeatedly by the thread that already holds it.

ReentrantLock implements the Lock interface and the methods it defines. Besides doing everything synchronized can, it also provides interruptible lock acquisition, polled (try) acquisition, and timed acquisition, which help multi-threaded code avoid deadlock.

The Lock implementations rely on specific CPU instructions and are not bound to the JVM; they could be implemented by other languages using the same low-level primitives. In multi-threaded applications with little contention, ReentrantLock and synchronized perform about the same, but under high contention synchronized's performance can suddenly drop several-fold while ReentrantLock holds steady.

Therefore, ReentrantLock is recommended in high-concurrency situations.
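The interruptible and timed acquisition modes mentioned above look roughly like this (the class name TryLockDemo is ours; the lock is uncontended here, so both acquisitions succeed):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();

        // Timed acquisition: give up after 100 ms instead of blocking forever —
        // one way to avoid deadlock between threads that lock in different orders.
        if (lock.tryLock(100, TimeUnit.MILLISECONDS)) {
            try {
                System.out.println("got the lock"); // uncontended, so this runs
            } finally {
                lock.unlock();
            }
        }

        // Interruptible acquisition: unlike synchronized, a thread blocked here
        // can be woken early by Thread.interrupt().
        lock.lockInterruptibly();
        try {
            System.out.println(lock.isHeldByCurrentThread()); // true
        } finally {
            lock.unlock();
        }
    }
}
```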

ReentrantLock introduces two concepts: fair locks and unfair locks.

A fair lock allocates the lock fairly: normally the thread that requested the lock first is the first to receive it. Conversely, a mechanism in which the JVM hands out the lock randomly or by proximity is called an unfair lock.

ReentrantLock lets you choose the fairness in its constructor, and it defaults to unfair. This is because in practice an unfair lock is far more efficient than a fair one, so unless the program has a special need, the unfair allocation is used most often.

ReentrantLock locks and unlocks via the lock() and unlock() methods. Unlike synchronized, which the JVM unlocks automatically, ReentrantLock must be unlocked manually. To ensure the lock is released even when an exception occurs, the unlock must be placed in a finally block. Typical usage looks like this:

Lock lock = new ReentrantLock();

lock.lock();

try {

    // ... do the guarded work ...

} finally {

    lock.unlock();

}

3. Semaphore

Both of the mechanisms above are "mutexes". From operating systems we know that mutual exclusion is a special case of process synchronization: there is effectively a single critical resource, so at most one thread can be served at a time. In real, complex multi-threaded applications there may be multiple instances of a critical resource, and then a Semaphore (counting semaphore) can manage access to them.

Semaphore can do essentially everything ReentrantLock does, and its usage is similar: the critical resource is acquired and released with the acquire() and release() methods.

In testing, Semaphore.acquire() responds to interrupts by default, matching the effect of ReentrantLock.lockInterruptibly(): a thread waiting for the critical resource can be interrupted with Thread.interrupt().

Moreover, Semaphore also supports polled and timed acquisition; apart from the method being named tryAcquire instead of tryLock, usage is almost identical to ReentrantLock. Semaphore likewise provides fair and unfair modes, which can also be set in the constructor.

A Semaphore must be released manually. As with ReentrantLock, to ensure the permit is released even if an exception is thrown, the release must be performed in a finally block.
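A minimal sketch of permit counting (the class name SemaphoreDemo is ours; for brevity the release is not wrapped in finally here, though real code should do so as stated above):

```java
import java.util.concurrent.Semaphore;

public class SemaphoreDemo {
    public static void main(String[] args) throws InterruptedException {
        // Three identical "critical resources": up to three threads may hold permits.
        Semaphore permits = new Semaphore(3);

        permits.acquire();                              // take one permit
        permits.acquire(2);                             // take the remaining two
        System.out.println(permits.availablePermits()); // 0

        // With no permits left, a non-blocking attempt fails immediately:
        System.out.println(permits.tryAcquire());       // false

        permits.release(3);                             // hand all permits back
        System.out.println(permits.availablePermits()); // 3
    }
}
```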

4. AtomicInteger

First, AtomicInteger stands in for a whole family of classes with the same implementation principle — AtomicLong, AtomicBoolean, and others — which differ only in the type of object they operate on.

We know that in a multi-threaded program, operations such as ++i and i++ are not atomic and are therefore thread-unsafe. We would normally use synchronized to make them atomic, but the JDK deliberately provides these synchronization classes instead, which are both more convenient to use and more efficient: according to published measurements, AtomicInteger is usually several times faster than ReentrantLock.

Java thread lock summary

1. synchronized:

When resource contention is mild (synchronization is only occasionally contended), synchronized is very appropriate: the compiler optimizes it wherever possible, and it is also very readable.

2. ReentrantLock:

When contention is mild, it performs slightly worse than synchronized. But when contention becomes fierce, synchronized's performance can suddenly drop several-fold, while ReentrantLock indeed holds steady.

Use ReentrantLock high concurrency situations.

3. Atomic:

Similar to the above: under mild contention it performs slightly worse than synchronized, and under fierce contention it holds steady — in fact, under fierce contention Atomic performs roughly twice as well as ReentrantLock. Its drawback is that it can only synchronize a single value: only one Atomic variable can appear in a piece of synchronized logic, because multiple Atomic variables cannot be synchronized with one another.

So when writing synchronization, prefer synchronized first, and optimize further only if you have special needs. Used improperly, ReentrantLock and Atomic not only fail to improve performance but can even lead to disaster.

That concludes this explanation of Java thread locks. Beyond the programming level, handling high concurrency also requires architecture-level measures, such as Redis caching, CDNs, and asynchronous messaging.



Origin juejin.im/post/5cf5eb2df265da1bca51c77c