Java multithreading at several levels

1. Basics

1. The JVM's cross-platform nature means it spans both hardware and OS platforms: it presents Java programs with a uniform, machine-like abstraction over the underlying platform.

2. The Java memory model divides memory into main memory (shared by all threads, analogous to physical memory) and each thread's working memory (analogous to a process's user space).

3. By analogy: a Java thread plays the role of an OS process, its working memory the role of that process's user space, and the JVM the role of the hardware.

4. Main-memory operations: lock (binds a variable to one thread), unlock, read, write.

Working-memory operations: load, store, use, assign. All operations inside a method act on the thread's working memory.

5. On Linux, a Java thread maps to a kernel thread (a lightweight process).

6. CAS (compare-and-swap) underlies spin locks and optimistic locking: instead of forcing a context switch on every contention, a thread briefly spins and retries, which amounts to a finer lock granularity. The principle: loop, each time comparing the expected value against the current one and setting the new value only if they match; when there is no conflict the operation succeeds directly, so CAS suits workloads where contention is rare.
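The CAS retry loop described above can be sketched with `AtomicInteger.compareAndSet` (the class and method names `CasCounter`/`addAndGetCas` are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the CAS retry loop: read the current value, compute the new one,
// and attempt an atomic compare-and-set; on conflict (another thread changed
// the value first), spin and retry instead of blocking.
public class CasCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    public int addAndGetCas(int delta) {
        while (true) {
            int current = value.get();           // optimistic read
            int next = current + delta;
            if (value.compareAndSet(current, next)) {
                return next;                     // no conflict: succeed directly
            }
            // conflict: loop and retry
        }
    }

    public static void main(String[] args) {
        CasCounter c = new CasCounter();
        System.out.println(c.addAndGetCas(5)); // prints 5
    }
}
```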

7. Java threads are scheduled preemptively.

8. By default, a synchronized instance method locks the `this` object, and a static synchronized method locks the Class object; both are pessimistic locks.
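A minimal sketch of the two monitor choices (the class `Counter` is a made-up example):

```java
// An instance synchronized method locks `this`, while a static synchronized
// method locks the Class object (Counter.class).
public class Counter {
    private static int total = 0;
    private int count = 0;

    public synchronized void increment() {             // monitor: this
        count++;
    }

    public static synchronized void incrementTotal() { // monitor: Counter.class
        total++;
    }

    // Equivalent explicit forms:
    public void incrementExplicit() {
        synchronized (this) { count++; }
    }

    public static void incrementTotalExplicit() {
        synchronized (Counter.class) { total++; }
    }

    public int get() { return count; }
    public static int getTotal() { return total; }
}
```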

 

2. Java's concurrency abstractions

1. Lock and Condition correspond to the POSIX thread primitives (mutexes and condition variables) and allow fine-grained control of concurrency.

ReadWriteLock: multiple threads may read concurrently, only one may write, and a write lock can be downgraded to a read lock;

ReentrantReadWriteLock: a reentrant implementation, allowing higher throughput.
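The write-to-read downgrade mentioned above can be sketched as follows (the class `CachedValue` is illustrative): acquire the read lock while still holding the write lock, then release the write lock.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Lock downgrading with ReentrantReadWriteLock: take the read lock inside the
// write lock, then release the write lock. The reverse (upgrading a read lock
// to a write lock) is not supported and would deadlock.
public class CachedValue {
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
    private int value;

    public int updateAndRead(int newValue) {
        rwLock.writeLock().lock();
        try {
            value = newValue;
            rwLock.readLock().lock();    // downgrade: acquire read lock first
        } finally {
            rwLock.writeLock().unlock(); // release write; still holding read
        }
        try {
            return value;                // read under the downgraded read lock
        } finally {
            rwLock.readLock().unlock();
        }
    }
}
```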

 

2. volatile: writes by one thread become visible to other threads, and instruction-reordering optimizations across the access are forbidden. (It does not make compound writes such as `i++` atomic.)
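A typical use of the visibility guarantee is a stop flag written by one thread and polled by another (the class `VolatileFlag` is a made-up example):

```java
// A volatile stop flag: the worker loop is guaranteed to see the other
// thread's write to `running`. Note volatile does NOT make compound
// operations like `iterations++` atomic; here only one thread writes it.
public class VolatileFlag {
    private volatile boolean running = true;
    private long iterations = 0;

    public void workLoop() {
        while (running) {   // re-reads the up-to-date value each iteration
            iterations++;
        }
    }

    public void stop() { running = false; }

    public static void main(String[] args) throws InterruptedException {
        VolatileFlag f = new VolatileFlag();
        Thread worker = new Thread(f::workLoop);
        worker.start();
        Thread.sleep(50);
        f.stop();           // without volatile, the worker might spin forever
        worker.join();
    }
}
```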

3. The AtomicXXX classes provide atomic updates of single values, implemented with optimistic locking (CAS).
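A small demonstration that CAS-based atomics lose no updates under contention (the class and method names are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

// AtomicInteger.incrementAndGet() retries internally with CAS, so concurrent
// increments from many threads are never lost, without taking a lock.
public class AtomicDemo {
    public static int countWithThreads(int threads, int perThread)
            throws InterruptedException {
        AtomicInteger counter = new AtomicInteger();
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) counter.incrementAndGet();
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        return counter.get();            // always threads * perThread
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(countWithThreads(4, 10_000)); // prints 40000
    }
}
```

With a plain `int` and unsynchronized `counter++`, some increments would typically be lost.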

4. Thread-safe collection classes: ConcurrentHashMap

CopyOnWriteArrayList

BlockingQueue

ArrayBlockingQueue stores its elements in a fixed-length array (memory-efficient)

DelayQueue releases each element only after the delay returned by that element's getDelay() method has expired

LinkedBlockingQueue stores its elements in linked nodes (a chained structure)

PriorityBlockingQueue dequeues elements in priority order

SynchronousQueue has no internal capacity: each insert must wait for a matching take (a direct hand-off)

BlockingDeque (LinkedBlockingDeque): double-ended, supporting operations at both the first and last ends

ConcurrentNavigableMap
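The blocking queues above are the natural building block for producer-consumer hand-offs; a minimal sketch with `ArrayBlockingQueue` (class name, capacity, and the `-1` end marker are all illustrative choices):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// A bounded hand-off between a producer thread and the consuming main thread:
// put() blocks when the queue is full, take() blocks when it is empty.
public class QueueDemo {
    public static int sumViaQueue(int n) throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(4); // fixed capacity
        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= n; i++) queue.put(i); // blocks if full
                queue.put(-1);                             // end marker ("poison pill")
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        int sum = 0;
        while (true) {
            int v = queue.take();   // blocks if empty
            if (v == -1) break;
            sum += v;
        }
        producer.join();
        return sum;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(sumViaQueue(100)); // prints 5050
    }
}
```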

 

5. Inter-thread coordination

CountDownLatch: wait for other threads to finish

CyclicBarrier: all parties start together once everyone has arrived

Semaphore: limits the number of concurrently running threads

Exchanger: two threads work independently and swap data at a fixed rendezvous point

Phaser: synchronization control for complex, multi-phase processes
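The "wait for others to finish" pattern can be sketched with CountDownLatch (class and method names are illustrative):

```java
import java.util.concurrent.CountDownLatch;

// CountDownLatch lets the coordinating thread block until N workers finish:
// each worker calls countDown() once, and await() returns when the count
// reaches zero (which also makes the workers' writes visible).
public class LatchDemo {
    public static int runWorkers(int n) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(n);
        int[] results = new int[n];
        for (int i = 0; i < n; i++) {
            final int id = i;
            new Thread(() -> {
                results[id] = id * id;  // simulated work
                done.countDown();       // signal this worker is finished
            }).start();
        }
        done.await();                   // block until count reaches zero
        int sum = 0;
        for (int r : results) sum += r;
        return sum;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runWorkers(4)); // prints 0+1+4+9 = 14
    }
}
```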

 

6. Message queues and thread pools

ExecutorService; Callable (returns a result); Future (get() blocks if the result is not ready yet)

Differences between the thread-pool factory methods:

newCachedThreadPool: reuses idle threads, creates new ones when none are idle, with no bound on the total count

newFixedThreadPool: bounds the total thread count

newScheduledThreadPool: supports scheduled and periodic execution

newSingleThreadExecutor: a single thread, guaranteeing FIFO execution order

newWorkStealingPool: a thread whose own queue is empty can steal work from other threads' queues; suited to tasks with widely varying run times (thread switching has its own cost)
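The ExecutorService/Callable/Future trio above can be sketched as follows (class and method names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Submit Callables (which return results) to a fixed-size pool, then collect
// the results via Future.get(), which blocks until each result is ready.
public class PoolDemo {
    public static int sumSquares(int n) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4); // bounded thread count
        List<Future<Integer>> futures = new ArrayList<>();
        for (int i = 1; i <= n; i++) {
            final int x = i;
            futures.add(pool.submit(() -> x * x));   // Callable<Integer>
        }
        int sum = 0;
        for (Future<Integer> f : futures) {
            sum += f.get();                          // blocks if not done yet
        }
        pool.shutdown();
        return sum;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sumSquares(3)); // prints 1+4+9 = 14
    }
}
```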

7. ForkJoinPool: fork and join (a divide-and-conquer strategy), similar in spirit to map-reduce
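The fork/join divide-and-conquer pattern can be sketched with a RecursiveTask (class name and threshold are illustrative):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Divide and conquer with fork/join: split the range in half, fork one half
// as an asynchronous subtask, compute the other directly, then join (merge).
public class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000;
    private final long[] data;
    private final int from, to;

    public SumTask(long[] data, int from, int to) {
        this.data = data;
        this.from = from;
        this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {           // small enough: compute directly
            long sum = 0;
            for (int i = from; i < to; i++) sum += data[i];
            return sum;
        }
        int mid = (from + to) / 2;
        SumTask left = new SumTask(data, from, mid);
        SumTask right = new SumTask(data, mid, to);
        left.fork();                            // run left half asynchronously
        return right.compute() + left.join();   // compute right, then merge
    }

    public static long parallelSum(long[] data) {
        return new ForkJoinPool().invoke(new SumTask(data, 0, data.length));
    }

    public static void main(String[] args) {
        long[] data = new long[10_000];
        for (int i = 0; i < data.length; i++) data[i] = i + 1;
        System.out.println(parallelSum(data)); // prints 50005000
    }
}
```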

 

 

3. Abstract concurrency models

1. Producer-consumer

2. Readers-writers

3. Dining philosophers
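As one sketch of the dining-philosophers model, deadlock can be avoided by imposing a global lock order: every philosopher picks up the lower-numbered fork first, so no cycle of waiting threads can form (the class and method names are illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

// Dining philosophers with deadlock avoidance via lock ordering: each
// philosopher always locks the lower-indexed of its two forks first, which
// breaks the circular-wait condition, so all threads eventually terminate.
public class Philosophers {
    public static int dine(int n, int meals) throws InterruptedException {
        ReentrantLock[] forks = new ReentrantLock[n];
        for (int i = 0; i < n; i++) forks[i] = new ReentrantLock();
        int[] eaten = new int[n];
        Thread[] ts = new Thread[n];
        for (int i = 0; i < n; i++) {
            final int id = i;
            int other = (id + 1) % n;
            // global lock order: lower-indexed fork is always taken first
            ReentrantLock first = forks[Math.min(id, other)];
            ReentrantLock second = forks[Math.max(id, other)];
            ts[i] = new Thread(() -> {
                for (int m = 0; m < meals; m++) {
                    first.lock();
                    second.lock();
                    try {
                        eaten[id]++;            // eat with both forks held
                    } finally {
                        second.unlock();
                        first.unlock();
                    }
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();           // no deadlock: all finish
        int total = 0;
        for (int e : eaten) total += e;
        return total;                           // n * meals
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(dine(5, 1_000)); // prints 5000
    }
}
```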
