Java High-Concurrency Learning - Deadlock (Part 1)

The four necessary conditions for a deadlock to occur (a minimal example follows this list):

   1. Mutual exclusion: a resource can be held by only one thread at a time

   2. Hold and wait: a thread holds at least one resource while requesting another

   3. No preemption: a resource cannot be forcibly taken away from the thread holding it

   4. Circular wait: a closed chain of threads exists, each waiting for a resource held by the next
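
As a minimal sketch (the class and lock names are illustrative, not from the original post), two threads that acquire the same two locks in opposite order satisfy all four conditions at once and can deadlock:

```java
public class DeadlockDemo {
    private static final Object LOCK_A = new Object();
    private static final Object LOCK_B = new Object();

    public static void main(String[] args) {
        // Thread 1: holds LOCK_A, then waits for LOCK_B (hold and wait).
        Thread t1 = new Thread(() -> {
            synchronized (LOCK_A) {
                sleep(100);
                synchronized (LOCK_B) {
                    System.out.println("t1 acquired both locks");
                }
            }
        });
        // Thread 2: holds LOCK_B, then waits for LOCK_A -> circular wait.
        Thread t2 = new Thread(() -> {
            synchronized (LOCK_B) {
                sleep(100);
                synchronized (LOCK_A) {
                    System.out.println("t2 acquired both locks");
                }
            }
        });
        t1.start();
        t2.start();
    }

    private static void sleep(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

Acquiring the locks in the same fixed order in both threads (or using tryLock with a timeout) breaks the circular-wait condition and prevents the deadlock.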

 

Best practices for concurrent multi-threaded programming:

   1. Use local variables

   2. Use immutable classes

   3. Minimize the scope of locks: S = 1 / (1 - a + a/n) (Amdahl's law)

Here a is the fraction of the computation that is serial and n is the number of parallel processing nodes. When a = 0, the maximum speedup is S = n; when a = 1, the minimum speedup is S = 1; as n → ∞, the speedup approaches its limit S → 1/a, which is the upper bound on the achievable speedup. For example, if 25% of the code is serial, the overall speedup from parallel processing cannot exceed 1 / 0.25 = 4. This formula is widely accepted and is known as Amdahl's Law.

   4. Use a thread pool (Executor) instead of new Thread()

   5. Prefer higher-level synchronization utilities to raw wait() and notify()

   6. Use a BlockingQueue to implement the producer-consumer pattern (see the sketch after this list)

   7. Use concurrent collections instead of synchronized (locked) collections

   8. Use a Semaphore to bound access to a resource

   9. Prefer synchronized blocks to synchronized methods

   10. Avoid using static variables
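
A minimal sketch combining items 4 and 6 above, assuming an illustrative bounded queue size and a poison-pill stop marker (neither is from the original post): a fixed thread pool runs a producer and a consumer that communicate through an ArrayBlockingQueue.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ProducerConsumerDemo {
    // Bounded queue: the producer blocks on put() when it is full,
    // the consumer blocks on take() when it is empty.
    private static final BlockingQueue<Integer> QUEUE = new ArrayBlockingQueue<>(10);
    private static final int POISON_PILL = -1; // illustrative stop marker

    public static void main(String[] args) {
        // Item 4: thread pool instead of new Thread().
        ExecutorService pool = Executors.newFixedThreadPool(2);

        pool.submit(() -> {                        // producer
            try {
                for (int i = 0; i < 100; i++) {
                    QUEUE.put(i);
                }
                QUEUE.put(POISON_PILL);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        pool.submit(() -> {                        // consumer
            try {
                while (true) {
                    int value = QUEUE.take();
                    if (value == POISON_PILL) {
                        break;
                    }
                    System.out.println("consumed " + value);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        pool.shutdown();
    }
}
```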

 

HashMap and ConcurrentHashMap

   The last two constructor parameters of HashMap are the initial capacity and the load factor.

   When the number of entries exceeds the product of the current capacity and the load factor, the HashMap is resized. This resizing is not thread-safe: when the map is accessed concurrently, the resize process can produce an infinite loop.
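
As a small illustration (the capacity and load-factor values are just example numbers, and the exact resize timing depends on the JDK version), a HashMap created with capacity 16 and load factor 0.75 has a resize threshold of 16 × 0.75 = 12 entries:

```java
import java.util.HashMap;
import java.util.Map;

public class HashMapResizeDemo {
    public static void main(String[] args) {
        // Resize threshold = capacity * load factor = 16 * 0.75 = 12 entries.
        Map<String, Integer> map = new HashMap<>(16, 0.75f);
        for (int i = 0; i < 13; i++) {
            map.put("key-" + i, i); // going past 12 entries triggers a resize
        }
        System.out.println("size = " + map.size());
    }
}
```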

 The difference between ConcurrentHashMap in version 1.7 and version 1.8: version 1.8 introduces red-black trees. When the length of a bucket's linked list exceeds 8, the list is converted to a red-black tree; lookup in a red-black tree is O(log n), while lookup in a linked list is O(n).
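
A minimal sketch (the key name, counts, and pool size are illustrative) of using ConcurrentHashMap for safe concurrent updates where a plain HashMap could be corrupted:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ConcurrentHashMapDemo {
    public static void main(String[] args) throws InterruptedException {
        // Thread-safe map: concurrent puts do not corrupt the internal structure.
        Map<String, Integer> counts = new ConcurrentHashMap<>();
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int t = 0; t < 4; t++) {
            pool.submit(() -> {
                for (int i = 0; i < 10_000; i++) {
                    // merge() is an atomic read-modify-write per key on ConcurrentHashMap.
                    counts.merge("hits", 1, Integer::sum);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println(counts.get("hits")); // 40000
    }
}
```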

 

 

Learning about caching:

 Hit rate = hits / (hits + misses)

 Maximum size: the largest number of elements (or amount of space) the cache may hold

 Cache eviction policies: FIFO, LFU, LRU, expiration time, random (an LRU sketch follows below)
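
As a concrete illustration of one eviction policy (the capacity value is an arbitrary example), a simple LRU cache can be built on java.util.LinkedHashMap with access ordering:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache: an access-ordered LinkedHashMap that evicts the
// least recently used entry once the maximum size is exceeded.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruCache(int maxEntries) {
        super(16, 0.75f, true); // accessOrder = true -> LRU iteration order
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;
    }
}
```

For example, new LruCache<String, String>(100) keeps at most 100 entries and drops the least recently accessed one first.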

 

Business scenarios and requirements: caching is best suited to applications with many reads and few writes

 

Guava Cache: inherits the design ideas of ConcurrentHashMap (a usage sketch follows)
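
A minimal usage sketch, assuming illustrative size, expiry, and loader values (none are from the original post); it ties together the points above: a maximum size, time-based expiration, and hit-rate statistics.

```java
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

import java.util.concurrent.TimeUnit;

public class GuavaCacheDemo {
    public static void main(String[] args) {
        LoadingCache<String, String> cache = CacheBuilder.newBuilder()
                .maximumSize(1_000)                      // bounded size, LRU-style eviction
                .expireAfterWrite(10, TimeUnit.MINUTES)  // expiration-time policy
                .recordStats()                           // enables hit/miss counters
                .build(new CacheLoader<String, String>() {
                    @Override
                    public String load(String key) {
                        // Illustrative loader: in practice this would hit a database or service.
                        return "value-for-" + key;
                    }
                });

        cache.getUnchecked("a");  // miss: loads the value
        cache.getUnchecked("a");  // hit: served from the cache
        System.out.println("hit rate = " + cache.stats().hitRate());
    }
}
```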

 
