Locking mechanisms - some problems caused by the use of locks

First, an important point: every lock (built-in locks and high-level locks alike) costs performance. Under high concurrency, the overhead of locking can exceed the cost of the thread's actual work, because the lock mechanism brings context switches, synchronization of resources, and so on. Locks should therefore be used as sparingly as possible; where they are unavoidable, non-blocking algorithms are a good alternative.

Intrinsic locks

The Java language guarantees atomicity through the synchronized keyword. Every object has an implicit lock, also known as its monitor. The lock is acquired automatically on entering a synchronized block or method and released automatically on leaving it, no matter how it is left (normal return or exception). This is clearly an exclusive lock: all requests for the same monitor are mutually exclusive.

Compared with the high-level locks described earlier (Lock, ReadWriteLock, and so on), synchronized is more expensive. Its syntax, however, is simpler, it is easier to use and understand, and it is hard to get wrong. With Lock, on the other hand, once lock() has been called, failing to release the lock correctly can easily lead to deadlock, so the unlock call must always sit in a finally block, which adds redundancy to the code structure. Moreover, as discussed earlier, the Lock implementations already exploit the hardware primitives to the limit, so there is little room left to optimize them further unless the hardware itself improves. synchronized, by contrast, is only a specification: its implementation differs across hardware platforms and still has plenty of headroom, so future optimization of locking in Java will focus mainly on it.
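A minimal sketch of the two styles (class and method names are my own): synchronized releases the monitor automatically on every exit path, while an explicit Lock must be released in a finally block.

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class Counters {
    // Intrinsic lock: the monitor is acquired on entry and released
    // automatically on every exit path, including exceptions.
    static class SyncCounter {
        private int value;
        synchronized void increment() { value++; }
        synchronized int get() { return value; }
    }

    // Explicit lock: unlock() must sit in a finally block, or an
    // exception between lock() and unlock() leaves the lock held forever.
    static class LockCounter {
        private final Lock lock = new ReentrantLock();
        private int value;
        void increment() {
            lock.lock();
            try {
                value++;
            } finally {
                lock.unlock();
            }
        }
        int get() {
            lock.lock();
            try {
                return value;
            } finally {
                lock.unlock();
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        SyncCounter c = new SyncCounter();
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> { for (int j = 0; j < 1000; j++) c.increment(); });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        System.out.println(c.get()); // prints 4000: no increments are lost
    }
}
```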

Performance

Since locks always affect performance, deciding whether and where to use a lock is particularly important. If an exclusive lock is placed on a highly concurrent request path of a web application, its throughput will drop sharply.

The starting point for using concurrency to improve performance is this: use existing resources more effectively, and let the program exploit as many available resources as possible. That means keeping the machine as busy as possible, in the usual sense of the CPU being busy with useful computation rather than waiting, and doing useful work rather than spinning pointlessly. In practice, some resources are usually held in reserve for emergencies; many of the thread-pool examples later illustrate this.
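As a small illustration of bounding resource use (the pool size and the trivial task are illustrative), a fixed-size thread pool keeps a capped number of threads busy instead of spawning one thread per task:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolDemo {
    // Run taskCount trivial tasks on a pool of poolSize threads and
    // return how many completed.
    public static int runTasks(int poolSize, int taskCount) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < taskCount; i++) {
            pool.execute(completed::incrementAndGet); // cheap stand-in for real work
        }
        pool.shutdown();                              // stop accepting new tasks
        pool.awaitTermination(10, TimeUnit.SECONDS);  // wait for the backlog to drain
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // 4 threads stay busy on 100 tasks; no thread-per-task explosion.
        System.out.println(runTasks(4, 100)); // prints 100
    }
}
```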

Thread blocking

Implementing a locking mechanism generally requires operating-system support, which obviously adds overhead. When threads compete for a lock, the losers inevitably block. The JVM can either spin-wait (keep retrying, as in the many CAS-based loops, until it succeeds) or have the operating system suspend the blocked thread until it times out or is woken up. Which is better usually depends on how the context-switch overhead compares with the expected wait for the lock: spin-waiting suits short waits, while suspending the thread suits longer ones.
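A sketch of the spin-wait style using a CAS loop (the counter is illustrative): instead of being suspended by the operating system, the thread keeps retrying until its compare-and-set succeeds.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SpinIncrement {
    private static final AtomicInteger counter = new AtomicInteger();

    // Spin-wait in the CAS style: retry until compareAndSet succeeds
    // instead of blocking. Suited to very short critical sections.
    static int incrementAndGet() {
        while (true) {
            int current = counter.get();
            int next = current + 1;
            if (counter.compareAndSet(current, next)) {
                return next; // success: no OS-level suspension ever happened
            }
            // failure: another thread won the race; spin and retry
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> { for (int j = 0; j < 1000; j++) incrementAndGet(); });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        System.out.println(counter.get()); // prints 4000
    }
}
```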

A thread may be suspended because it cannot acquire a lock, because it is waiting for a specific condition, or because of a time-consuming I/O operation. Suspending a thread requires two extra context switches plus operating-system work, and touches many resources such as caches: the thread is swapped out early, and once it obtains the lock or its condition is met, it must be swapped back into the run queue. Those two context switches can cost more than the wait itself.

Lock contention

Two factors determine the impact of lock contention:

  • How long the lock is held
  • How often the lock is requested

Obviously, when both are small, lock contention will not become a major bottleneck. But if locks are used badly and both become large, the CPU may be unable to process tasks effectively, and a large backlog of tasks accumulates.

So there are three ways to reduce lock contention:

  • Reduce the time a lock is held
  • Reduce the frequency of lock requests
  • Use shared (read-write) locks instead of exclusive locks
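As a sketch of the third point (the cache class and its names are hypothetical), a ReentrantReadWriteLock lets many readers proceed concurrently while writers stay exclusive, which cuts contention on read-heavy data:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RwCache {
    private final Map<String, String> map = new HashMap<>();
    private final ReadWriteLock rw = new ReentrantReadWriteLock();

    // Many readers may hold the read lock at once, reducing contention
    // compared to one exclusive lock guarding both paths.
    public String get(String key) {
        rw.readLock().lock();
        try {
            return map.get(key);
        } finally {
            rw.readLock().unlock();
        }
    }

    // Writers are exclusive: no readers or other writers while held.
    public void put(String key, String value) {
        rw.writeLock().lock();
        try {
            map.put(key, value);
        } finally {
            rw.writeLock().unlock();
        }
    }
}
```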

Deadlock

  • A deadlock occurs when one thread never releases a resource that another thread needs. There are two cases:
  1. Thread A never releases a lock, so thread B can never acquire it and B is stuck forever
  2. Thread A needs a lock held by thread B, while thread B needs a lock held by thread A, so A and B wait for each other
  • A related situation arises when a thread can never be scheduled: any thread waiting on its results may then wait forever. This is called starvation. For example, with an unfair lock under high concurrency, very active threads may keep winning the lock, so a less active thread may never acquire it and starves.
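The mutual wait in case 2 is usually broken by making every thread acquire the two locks in one global order. A sketch (the names and the identity-hash ordering are illustrative; a real implementation would need a tie-breaker for equal hashes):

```java
import java.util.concurrent.locks.ReentrantLock;

public class OrderedLocks {
    // Case 2 happens when thread 1 locks A then B while thread 2 locks
    // B then A. It disappears if both locks are always acquired in one
    // global order, fixed here by identity hash.
    public static void runInOrder(ReentrantLock a, ReentrantLock b, Runnable action) {
        ReentrantLock first = System.identityHashCode(a) <= System.identityHashCode(b) ? a : b;
        ReentrantLock second = (first == a) ? b : a;
        first.lock();
        try {
            second.lock();
            try {
                action.run();
            } finally {
                second.unlock();
            }
        } finally {
            first.unlock();
        }
    }

    public static void main(String[] args) {
        ReentrantLock a = new ReentrantLock();
        ReentrantLock b = new ReentrantLock();
        // Both calls acquire in the same global order, so two threads
        // calling runInOrder(a, b, ...) and runInOrder(b, a, ...) in
        // parallel cannot deadlock.
        runInOrder(a, b, () -> System.out.println("first"));
        runInOrder(b, a, () -> System.out.println("second"));
    }
}
```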

Ways to avoid deadlock:

  • Use locks in a disciplined way (for example, always acquire them in a fixed order), and keep lock requests fine-grained
  • Use the tryLock or timed-acquisition mechanism of the high-level Lock API (specify a timeout when acquiring the lock, and give up if it is not obtained in time)
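A sketch of the timed-acquisition approach with tryLock (everything outside the java.util.concurrent API is my own naming): if either lock cannot be obtained within the timeout, whatever was acquired is released and the caller can back off and retry, breaking the hold-and-wait condition.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class TimedAcquire {
    // Acquire both locks with a timeout; on failure, release whatever
    // was obtained and report false instead of waiting forever.
    public static boolean withBoth(Lock first, Lock second, Runnable action)
            throws InterruptedException {
        if (!first.tryLock(100, TimeUnit.MILLISECONDS)) {
            return false; // could not even get the first lock
        }
        try {
            if (!second.tryLock(100, TimeUnit.MILLISECONDS)) {
                return false; // give up; finally releases the first lock
            }
            try {
                action.run();
                return true;
            } finally {
                second.unlock();
            }
        } finally {
            first.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Lock a = new ReentrantLock();
        Lock b = new ReentrantLock();
        boolean ok = withBoth(a, b, () -> { /* critical section */ });
        System.out.println(ok); // prints true when both locks were free
    }
}
```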

Livelock

A livelock occurs when a thread keeps attempting an operation that always fails. The thread is not blocked, but the task can never complete.

  • For example, a thread inside an infinite loop keeps trying to do something that always fails, so it can never leave the loop
  • In a queue, the task at the head is removed and executed, fails every time, and is put back at the head of the queue, so it fails forever
  • Under a collision-handling protocol, if low-priority threads always back off and are never executed, livelock occurs
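The queue case above can be sketched together with its usual fix: requeue the failing task at the tail rather than the head, so one poisoned task cannot starve the rest (the failure rule and all names are invented for illustration):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class LivelockFix {
    // Hypothetical failure rule: a task fails if its id is negative.
    static boolean execute(int id) { return id >= 0; }

    // Drain the queue; on failure, requeue at the TAIL instead of the
    // head, so other tasks still make progress (the anti-livelock fix).
    static List<Integer> drain(Deque<Integer> queue, int maxAttempts) {
        List<Integer> done = new ArrayList<>();
        int attempts = 0;
        while (!queue.isEmpty() && attempts++ < maxAttempts) {
            int task = queue.pollFirst();
            if (execute(task)) {
                done.add(task);
            } else {
                queue.addLast(task); // addFirst here would livelock forever
            }
        }
        return done;
    }

    public static void main(String[] args) {
        Deque<Integer> q = new ArrayDeque<>(List.of(-1, 1, 2, 3));
        System.out.println(drain(q, 10)); // prints [1, 2, 3]
    }
}
```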

Origin www.cnblogs.com/shemlo/p/11604221.html