Java Concurrency: Concepts and Principles

1. Concept

1. Concurrency control boils down to two primitives: mutual exclusion (some kind of lock) and synchronization (some kind of condition).

2. Mutual exclusion: only one thread is allowed to access the shared resource at a time; the JVM preserves it across time-slice scheduling.

3. Synchronization: coordination between threads, written by the application to implement some ordering logic; it is generally built on top of mutual exclusion.

4. Tools defined in POSIX: memory barriers (to avoid the inconsistencies caused by per-core caches), mutexes, condition variables, read-write locks, semaphores.

5. The monitor abstracts P/V operations and lock/unlock into a data structure, instead of scattering them throughout the thread code.

6. I/O buffering: the CPU uses DMA to fill a kernel buffer, which is then copied to user space. Compared with no buffering: memory does not have to be allocated and released repeatedly; the disk can be read a whole block at a time; and reads need less CPU intervention.

7. Memory mapping: the kernel and user space share the same buffer, saving the copy step.

8. Java NIO direct buffers: bypass the JVM heap and use OS memory directly.

2. Some common sense

1. WAITING vs BLOCKED

WAITING: wait logic written by the programmer, who deliberately gives up the time slice.

BLOCKED: I/O operations, heavily contended synchronized, sleep, and the like suspend the thread; this is implemented by the JVM.

2. yield: the thread remains RUNNABLE, but temporarily lets the JVM schedule others.

3. The benefits of using multithreading in java:

1> Makes full use of multi-core CPUs. 2> Multiple operations run concurrently, improving response time and preventing one message from blocking the rest. 3> For I/O, multiple threads can make full use of the memory cache and DMA, and the logic surrounding the I/O can run in parallel.

4. Disadvantages of using multiple threads: 1> Context switches have a cost (tens to hundreds of clock cycles): CPU registers must be saved and restored, the scheduler code must run, and shared data bounces between per-core caches. 2> Each thread occupies RAM (its stack). 3> When the I/O DMA cannot be parallelized, more threads reduce throughput.

5. The default Linux "time slice" ranges from 0.75 ms to 6 ms.

6. The JVM process exits when no non-daemon thread remains.

7. Safe termination: check a state flag in a while loop instead of calling stop().
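A minimal sketch of this cooperative-shutdown pattern (class and method names here are illustrative, not from the original):

```java
// Cooperative shutdown: the worker polls a volatile flag each
// iteration instead of being killed with the deprecated stop().
public class SafeStop {
    private volatile boolean running = true;
    private long count = 0;

    public long runFor(long millis) throws InterruptedException {
        Thread worker = new Thread(() -> {
            do { count++; } while (running);   // state check, not stop()
        });
        worker.start();
        Thread.sleep(millis);
        running = false;                        // request termination
        worker.join();                          // worker exits on its own
        return count;                           // visible via join()'s happens-before
    }
}
```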

 

3. Tools provided by Java (cache / CAS / mutex / condition)

1. volatile (lock-free): invalidates the CPU cache line, forcing other threads to re-read from memory (suitable for one writer thread and many reader threads).
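A minimal one-writer/many-readers sketch (names illustrative):

```java
// The volatile field guarantees that a value written by one thread
// is immediately visible to every reader thread (no stale cache).
public class VolatileFlag {
    private volatile int value = 0;

    public void publish(int v) { value = v; }    // single writer
    public int read()          { return value; } // readers always see the latest write
}
```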

2. synchronized implements the mutual-exclusion logic, a kind of lock (similar to a monitor), corresponding to mutual exclusion above. The JVM implementation is very efficient; concretely:

1> Biased lock (the default): when only a single thread executes the block, no real lock call is needed at all; a state bit is recorded in the object header, and only a monitorenter instruction appears at the bytecode level. This suits the common case where a synchronized block is only ever executed by one thread, and can be seen as a single-thread optimization.

2> Lightweight lock (reentrant, uninterruptible, unfair): when another thread starts competing, the lock is upgraded to a lightweight lock, which spins (burning CPU) while waiting for the owner to release it, avoiding a time-slice switch. This suits the case where the lock will be released soon (the wait finishes within the current time slice); everything stays inside the JVM.

3> Heavyweight lock: when the spinning lasts too long, the lock is upgraded to a heavyweight lock, which switches the time slice to avoid idling (the thread becomes BLOCKED).
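Whichever of the three states the JVM chooses internally, the programmer-visible contract is plain mutual exclusion; a minimal sketch (names illustrative):

```java
// Two threads increment a shared counter under a synchronized
// monitor; without the lock, increments would be lost.
public class SyncCounter {
    private int count = 0;

    public synchronized void increment() { count++; } // monitorenter/monitorexit
    public synchronized int get()        { return count; }

    public static int race() throws InterruptedException {
        SyncCounter c = new SyncCounter();
        Runnable task = () -> { for (int i = 0; i < 100_000; i++) c.increment(); };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join();  t2.join();
        return c.get();   // always 200_000 thanks to mutual exclusion
    }
}
```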

3. Atomic operations (AtomicXXX) are implemented with a CAS spin (lock-free).
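The CAS retry loop can be written out explicitly; this sketch mirrors the spin that AtomicInteger.incrementAndGet performs internally (names illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Lock-free increment: read the current value, then CAS it to
// current + 1; if another thread raced us, spin and retry.
public class CasCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    public int increment() {
        int current;
        do {
            current = value.get();                               // read
        } while (!value.compareAndSet(current, current + 1));    // CAS spin
        return current + 1;
    }
}
```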

4. wait (releases the lock) / notify correspond to synchronization above.

Inter-thread synchronization is controlled by the programmer, not by the JVM's time-slice scheduling. The condition must be checked while holding the lock. wait releases the corresponding lock, but notify does not. After notify runs, the waiter does not return from wait immediately; it first competes for the lock again, and only after reacquiring it does the code after wait execute. Once awakened, the thread must re-check the condition. In effect, notify moves a thread from waiting on the condition to waiting on the lock:

WaitQueue -> SynchronizationQueue

synchronized (lock) { while (!flag) { lock.wait(); } doWork(); }   // waiting side

synchronized (lock) { changeState(); lock.notifyAll(); }           // notifying side

5. Piped streams transfer data between threads (via a shared in-memory buffer).
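A small sketch with PipedInputStream/PipedOutputStream (class and method names illustrative):

```java
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

// One thread writes bytes into the pipe; another reads them out of
// the shared in-memory buffer the two ends are connected through.
public class PipeDemo {
    public static String transfer(String message) throws Exception {
        PipedOutputStream out = new PipedOutputStream();
        PipedInputStream in = new PipedInputStream(out);  // connect the pair
        Thread writer = new Thread(() -> {
            try {
                out.write(message.getBytes());
                out.close();                              // signals end-of-stream
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        });
        writer.start();
        byte[] buf = in.readAllBytes();                   // blocks until close
        writer.join();
        return new String(buf);
    }
}
```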

6. Thread class (among its blocking methods, only wait releases the lock)

join also takes a lock internally (synchronized), loops on a condition, and waits.
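That internal structure can be sketched as follows (method name illustrative); for platform threads the JVM calls notifyAll() on the Thread object when the thread terminates:

```java
// A sketch of what Thread.join does for a platform thread:
// lock the Thread object, loop while it is alive, and wait.
public class JoinSketch {
    public static void joinLike(Thread t) throws InterruptedException {
        synchronized (t) {              // the internal lock is t itself
            while (t.isAlive()) {       // loop judgment
                t.wait();               // releases the lock; woken on termination
            }
        }
    }
}
```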

sleep/yield give up the CPU time slice but do not release any lock.

7. Lock fairness: a fair lock places a releasing thread at the tail of the queue and grants the lock strictly front to back; an unfair lock may hand the lock to a barging thread, e.g. the one that held it last.
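With ReentrantLock the choice is made in the constructor; a minimal sketch (names illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

// Fair mode queues threads FIFO; the default unfair mode lets a
// newly arriving thread barge in ahead of queued waiters.
public class FairnessDemo {
    public static boolean demo() {
        ReentrantLock fair = new ReentrantLock(true);   // FIFO handoff
        ReentrantLock unfair = new ReentrantLock();     // default: barging allowed
        return fair.isFair() && !unfair.isFair();
    }
}
```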

8. Reasons a thread is suspended: I/O, no time slice, no lock.

4. Extending the tools

1. Lock: serialization in a sense. synchronized hard-wires the usage pattern, which makes it easy to use but inflexible. Beyond the basic need for mutual exclusion, a lock may need to be: reentrant? fair? exclusive or read-write? non-blocking? timed? interruptible while the thread waits for it (so it is not stuck blocking there forever)? finer grained? (all of them are optimistic by default). This flexibility is provided by the java.util.concurrent.locks package introduced in Java 5, at the cost of handling unlocking and deadlock yourself. The Lock interface exposes it through tryLock, lockInterruptibly, read/write variants, and timeouts.
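A sketch of that flexibility using a timed tryLock (class and method names illustrative):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// Unlike synchronized, Lock lets the caller bound the wait and give
// up instead of blocking forever; unlocking is now the caller's job.
public class FlexibleLocking {
    private final Lock lock = new ReentrantLock();

    public boolean tryWork() throws InterruptedException {
        if (lock.tryLock(100, TimeUnit.MILLISECONDS)) { // timed acquisition
            try {
                return true;          // got the lock: do the work
            } finally {
                lock.unlock();        // must release explicitly
            }
        }
        return false;                 // timed out: back off instead of blocking
    }
}
```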

2. synchronized semantics: reentrant, unfair, mutually exclusive, blocking (uninterruptible; other threads stay blocked even while the owner sleeps, since sleep keeps holding the lock), no timeout.

3. Lock granularity, from lightest to heaviest: volatile < AtomicXXX < CAS < the various locks < wait

4. Extensions of Lock

ReentrantLock: reentrant (recursive acquisition), interruptible, exclusive.

ReentrantReadWriteLock: for read-mostly workloads; the granularity is smaller and the performance higher.

StampedLock: avoids the situation where so many reader threads exist that the writer can hardly be scheduled. When a write arrives, let it proceed first and then re-read, which takes fairness into account.
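The optimistic-read idiom can be sketched as follows (modeled on the pattern in the StampedLock javadoc; class name illustrative):

```java
import java.util.concurrent.locks.StampedLock;

// Readers first try an optimistic read (no lock at all); if validate()
// shows a write slipped in, they fall back to a real read lock.
public class Point {
    private final StampedLock sl = new StampedLock();
    private double x, y;

    public void move(double dx, double dy) {
        long stamp = sl.writeLock();
        try { x += dx; y += dy; }
        finally { sl.unlockWrite(stamp); }
    }

    public double distanceFromOrigin() {
        long stamp = sl.tryOptimisticRead();  // no lock taken yet
        double cx = x, cy = y;
        if (!sl.validate(stamp)) {            // a write intervened: re-read
            stamp = sl.readLock();
            try { cx = x; cy = y; }
            finally { sl.unlockRead(stamp); }
        }
        return Math.sqrt(cx * cx + cy * cy);
    }
}
```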

AbstractQueuedSynchronizer (AQS): manages the exclusive/shared synchronization queue of blocked threads and the condition waiting queues. Subclasses override the template methods tryAcquire / tryAcquireShared, tryRelease, and isHeldExclusively; getExclusiveQueuedThreads exposes the queued threads. Whether a blocked thread responds to interrupts: the acquireInterruptibly variant does (an ordinary acquire merely records the interrupt and stays in the synchronization queue), while tryAcquire attempts the acquisition without blocking.
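A minimal non-reentrant mutex built on AQS, following the template-method pattern above (modeled on the Mutex example in the AQS javadoc; names illustrative):

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// The subclass only defines what "acquired" means (state 0 -> 1);
// AQS itself manages the queue of blocked threads.
public class SimpleMutex {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override public boolean tryAcquire(int arg) {
            return compareAndSetState(0, 1);   // CAS the lock state
        }
        @Override public boolean tryRelease(int arg) {
            setState(0);                       // open the lock
            return true;
        }
        @Override public boolean isHeldExclusively() {
            return getState() == 1;
        }
    }

    private final Sync sync = new Sync();

    public void lock()        { sync.acquire(1); }   // blocks via the AQS queue
    public void unlock()      { sync.release(1); }
    public boolean isLocked() { return sync.isHeldExclusively(); }
}
```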

5. Extensions of wait/notify

LockSupport: wait for and release a permit (with no wait/notify ordering requirement).

park: block waiting for the permit; does not block if unpark came first.

unpark: make the permit available (release one waiter).
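The permit semantics remove the ordering hazard of wait/notify; a minimal sketch (names illustrative):

```java
import java.util.concurrent.locks.LockSupport;

// unpark before park is safe: the permit is remembered, so the
// subsequent park consumes it and returns immediately.
public class ParkDemo {
    public static boolean unparkBeforePark() {
        Thread self = Thread.currentThread();
        LockSupport.unpark(self);   // grant the permit first
        LockSupport.park();         // consumes it, does not block
        return true;
    }
}
```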

Condition: supports multiple waiting queues, timeouts, and interrupts; implemented as an inner class of the synchronizer.

await (releases the lock): returns when signaled or interrupted.

signal: moves a thread from the waiting queue to the synchronization queue; the code after await runs only once the lock has been reacquired.
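A minimal sketch of the await/signal protocol with an explicit Lock (names illustrative):

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// await releases the lock and parks the thread on the condition's
// waiting queue; signal moves it to the lock's synchronization queue,
// and await only returns once the lock has been reacquired.
public class ConditionFlag {
    private final Lock lock = new ReentrantLock();
    private final Condition ready = lock.newCondition();
    private boolean flag = false;

    public void waitForFlag() throws InterruptedException {
        lock.lock();
        try {
            while (!flag) {        // re-check the state after every wakeup
                ready.await();     // releases the lock while waiting
            }
        } finally {
            lock.unlock();
        }
    }

    public void setFlag() {
        lock.lock();
        try {
            flag = true;
            ready.signalAll();     // waiting queue -> synchronization queue
        } finally {
            lock.unlock();
        }
    }
}
```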

 

Summary: ReentrantReadWriteLock/StampedLock; Condition/LockSupport

 
