The Art of Multithreaded Concurrency (reading notes)

https://www.cnblogs.com/paddix/p/5374810.html

Context switching: even a single-core processor supports multithreaded code execution. The CPU achieves this by allocating CPU time slices to each thread. A time slice is the amount of time a thread is allowed to execute; because a time slice is very short, the CPU constantly switches between threads, which makes it feel as if multiple threads are executing simultaneously.

The CPU executes tasks in a time-slice round-robin fashion: after the current task has run for one time slice, the CPU switches to the next task. Before switching, the state of the current task is saved, so that the next time the CPU switches back to this task, its state can be reloaded. The process of saving a task's state and later reloading it is a context switch.

Two, the underlying implementation of the concurrency mechanisms

Applications of volatile

For a variable modified with the volatile keyword, the Java thread memory model ensures that all threads see a consistent value for that variable.

When a processor operates on a value in memory, it first copies the variable from memory into its cache, operates on the cached value, and finally writes the result back to memory.

With a volatile variable, a modified value is immediately written back to main memory, and this write-back invalidates the corresponding values cached inside other CPUs.

The cache coherence principle: each processor sniffs the data propagated on the bus to detect whether the value in its own cache has expired. When a processor finds that the memory address corresponding to one of its cache lines has been modified, it marks that cache line as invalid. Cache coherence prevents two or more processors from simultaneously modifying the same region of memory through their caches.
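A minimal sketch of the visibility guarantee (class and field names are illustrative): without volatile the reader thread may keep spinning on a stale cached value; with volatile the write is flushed and other caches are invalidated.

    public class VolatileVisibility {
        // volatile guarantees the reader sees the writer's update
        private static volatile boolean running = true;

        public static void main(String[] args) throws InterruptedException {
            Thread reader = new Thread(() -> {
                while (running) {
                    // busy-wait; without volatile this loop could spin forever
                    // on a stale cached value of `running`
                }
                System.out.println("reader saw running == false");
            });
            reader.start();

            Thread.sleep(100);
            running = false; // volatile write: flushed to main memory immediately
            reader.join();
        }
    }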

Applications of synchronized

synchronized stores its lock information in the object header, in the Mark Word. At run time, the content of the Mark Word changes as the lock flag changes.

Lock states: no lock, biased lock, lightweight lock, heavyweight lock. A lock can only be upgraded, never downgraded.

Biased locking

When a thread attempts to acquire a biased lock, it checks whether the lock flag in the object header is 1 | 01; if so, the lock is in the biased state. It then tries CAS to change the Mark Word's thread ID to the current thread. If that does not succeed, it checks whether the thread ID already recorded in the biased lock is the current thread; if it is, the lock is acquired successfully. If not, there is competition for the biased lock: when the global safepoint is reached, the thread holding the biased lock is suspended and the lock is upgraded to a lightweight lock; the thread blocked at the safepoint then continues executing the code. In detail:

(1) Check whether the biased flag in the Mark Word is set to 1 and the lock flag is 01, confirming the biased state.

(2) If it is in the biased state, test whether the thread ID points to the current thread; if so, go to step (5), otherwise go to step (3).

(3) If the thread ID does not point to the current thread, compete for the lock with a CAS operation. If the competition succeeds, set the Mark Word's thread ID to the current thread's ID, then execute (5); if it fails, execute (4).

(4) If the CAS acquisition of the biased lock fails, there is contention. When the global safepoint is reached, the thread that holds the biased lock is suspended and the biased lock is upgraded to a lightweight lock; the thread that was blocked at the safepoint then continues executing the synchronized code.

(5) Execute the synchronized code.

Biased locking uses the strategy of revoking the bias only when competition appears. Revocation must wait for the global safepoint (a point at which no bytecode is being executed). At the safepoint, the thread that owns the biased lock is suspended first, then checked for liveness: if it is not alive, the object header is set to the lock-free state; if it is alive, the stack holding the biased lock is walked and the lock records for the biased object are traversed, and the Mark Words in the stack and the object header are set either to the lock-free state or to be biased toward another thread. Finally the suspended thread is woken up.

Biased locking can be disabled with the JVM parameter: -XX:-UseBiasedLocking

Lightweight lock

Before a thread attempts to acquire a lightweight lock, it checks whether the lock flag is 0 | 01; if it is 1 | 01, the lock is biased. The lock is recorded on the current thread's stack: a space for the lock record is opened on the stack to store the content of the object header, and CAS is then used to try to replace the object header's Mark Word with a pointer to the lock record space. If this succeeds, the lock is acquired and the lock flag is set to 00, indicating a lightweight lock.

When a lightweight lock is unlocked, the content stored in the lock record space is swapped back into the object header. If this succeeds, there was no competition; if it fails, there was competition, the lock inflates into a heavyweight lock, and the waiting threads are woken up.

(1) When the code enters the synchronized block, if the lock state of the synchronization object is lock-free (lock flag "01", biased flag "0"), the virtual machine first creates a space called a lock record (Lock Record) in the current thread's stack frame, used to store a copy of the lock object's current Mark Word, officially called the Displaced Mark Word.

(2) Copy the Mark Word from the object header into the lock record.

(3) If the copy succeeds, the virtual machine uses a CAS operation to try to update the object's Mark Word to a pointer to the lock record, and sets the owner pointer in the lock record to the object's Mark Word. If the update succeeds, proceed to step (4); otherwise, step (5).

(4) If the update succeeded, this thread owns the lock on the object, and the lock flag in the object's Mark Word is set to "00", meaning the object is in the lightweight-locked state.

(5) If the update failed, the virtual machine first checks whether the object's Mark Word points into the current thread's stack frame. If it does, the current thread already owns this object's lock and can enter the synchronized block directly and continue. Otherwise, multiple threads are competing for the lock, and the lightweight lock must inflate into a heavyweight lock; the lock flag becomes "10", the Mark Word stores a pointer to the heavyweight lock (a mutex), and the threads waiting for the lock enter the blocked state. Meanwhile the current thread tries to acquire the lock by spinning, looping on the acquisition attempt so that it is not blocked immediately.

Atomic operations

Bus locking: when one processor operates on shared memory, it asserts a LOCK# signal on the bus; while that processor holds the bus, memory access requests from other processors are blocked.

Cache locking: frequently used memory is cached, and during a LOCK operation the data cached in the processor's cache line is locked; the cache coherence mechanism guarantees the atomicity of the operation. Cache coherence also prevents two or more processors from simultaneously modifying the same region of memory through their caches.

Atomic operations can be implemented with CAS (compare-and-swap).

CAS has the ABA problem, which is solved by using a version number.

A CAS atomic operation can only guarantee atomicity for a single shared variable.
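A sketch of detecting ABA with the JDK's AtomicStampedReference, which pairs the value with a version stamp (the small integer values are illustrative, and deliberately stay inside the Integer cache because the class compares references):

    import java.util.concurrent.atomic.AtomicStampedReference;

    public class AbaDemo {
        public static void main(String[] args) {
            // value 1 paired with initial stamp 0
            AtomicStampedReference<Integer> ref = new AtomicStampedReference<>(1, 0);

            int stamp = ref.getStamp();                    // stamp observed before the changes
            ref.compareAndSet(1, 2, stamp, stamp + 1);     // A -> B
            ref.compareAndSet(2, 1, stamp + 1, stamp + 2); // B -> A again

            // A plain CAS on the value alone would succeed here (1 == 1), but the
            // stamped CAS fails because the version has advanced from 0 to 2.
            boolean swapped = ref.compareAndSet(1, 3, stamp, stamp + 1);
            System.out.println("swapped = " + swapped);    // false: ABA detected
        }
    }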

Three, the memory model

1. In imperative programming, the communication mechanisms between threads are: shared memory and message passing.

Shared memory: threads share the program's common state and communicate implicitly by reading and writing that shared state in memory.

Message passing: threads communicate explicitly by sending messages to each other.

Synchronization refers to the mechanisms a program uses to control the relative order in which operations occur in different threads.

With shared memory, the programmer must explicitly specify that a method or a piece of code executes mutually exclusively between threads; the synchronization is explicit.

With message passing, a message must be sent before it is received, so the synchronization is implicit.

Java concurrency uses the shared-memory model. In Java, all instance fields, static fields, and array elements are stored in the heap, and the heap is shared between threads. Local variables, method parameters, and exception-handler parameters are not shared in the heap; they have no memory-visibility problems and are not affected by the memory model.

JMM: the Java Memory Model.

The JMM decides when a write to a shared variable by one thread becomes visible to another thread. Shared variables live in memory shared between threads, and each thread has a private local memory (an abstraction covering caches, write buffers, and registers) that stores that thread's copies of the shared variables it reads and writes.

Each CPU has registers, and to bridge the mismatch between CPU speed and memory access speed there are L1, L2, and L3 caches; registers and the L1/L2 caches belong to each CPU, while the L3 cache may be shared. When a thread executes on a CPU during its time slice, the values the thread needs are copied from memory into the registers and caches; those copies may or may not be shared with other threads.

Reordering

Compiler reordering: the compiler may rearrange the execution order of statements as long as single-threaded semantics are not changed.

Processor reordering: instruction-level parallelism reordering, where modern processors use instruction-level parallelism to overlap the execution of multiple instructions; and memory-system reordering, where the processor's read and write buffers can make loads and stores appear to execute out of order.

The JMM's compiler reordering rules prohibit particular types of compiler reordering. The JMM's processor reordering rules require the Java compiler, when generating the instruction sequence, to insert particular types of memory barrier instructions that prohibit particular types of processor reordering.

The JMM is a language-level memory model; by prohibiting certain types of compiler and processor reordering, it guarantees consistent behavior across different compilers and different processor platforms.

The goal is to provide programmers with consistent memory-visibility guarantees.

happens-before rules:

Program order rule: every operation in a thread happens-before any subsequent operation in that thread.

Monitor lock rule: an unlock of a lock happens-before every subsequent lock of that same lock.

Volatile variable rule: a write to a volatile field happens-before all subsequent reads of that field.

Transitivity: if A happens-before B, and B happens-before C, then A happens-before C.

Data dependence: it is considered only for instruction sequences executed on a single processor and for operations within a single thread; data dependences between different processors or between different threads are not considered by the processor or the compiler.

as-if-serial semantics: no matter how operations are reordered, the result of a single-threaded program must not change.

The sequentially consistent memory model is a theoretical reference model; processor memory models and programming-language memory models are both designed with sequential consistency as a reference.

Two features of the sequentially consistent memory model:

1) All operations in a thread must execute in program order.

2) All threads can see only a single order in which operations execute. In the sequentially consistent memory model, every operation must execute atomically and be immediately visible to all threads.

Volatile memory semantics

A volatile variable itself has the following characteristics:

Visibility: a read of a volatile variable always sees the last write to that volatile variable.

Atomicity: any single read or write of a volatile variable is atomic, but compound operations such as volatile++ are not atomic, as sketched below.
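A sketch of the atomicity point (thread and iteration counts are illustrative): volatile++ is a read-modify-write and loses updates under contention, while AtomicInteger's CAS-based increment does not.

    import java.util.concurrent.atomic.AtomicInteger;

    public class VolatileIncrement {
        private static volatile int volatileCount = 0;                        // ++ is not atomic
        private static final AtomicInteger atomicCount = new AtomicInteger(); // CAS-based

        public static void main(String[] args) throws InterruptedException {
            Thread[] threads = new Thread[4];
            for (int i = 0; i < threads.length; i++) {
                threads[i] = new Thread(() -> {
                    for (int j = 0; j < 10_000; j++) {
                        volatileCount++;               // may lose updates under contention
                        atomicCount.incrementAndGet(); // never loses updates
                    }
                });
                threads[i].start();
            }
            for (Thread t : threads) t.join();
            // volatileCount is typically below 40000; atomicCount is always 40000
            System.out.println(volatileCount + " vs " + atomicCount.get());
        }
    }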

Volatile writes and reads have the same memory effects as lock release and lock acquisition, respectively.

Memory semantics of a volatile write: when a volatile variable is written, the JMM flushes the shared variables in the thread's corresponding local memory to main memory. This is implemented by adding memory barriers: operations before the volatile write must not be reordered to after it, so that by the time the volatile value reaches main memory, the earlier writes have been flushed as well; without the barrier, they might not be flushed in time. The barriers are added by the compiler.

Memory semantics of a volatile read: when a volatile variable is read, the JMM invalidates all shared variables in the thread's corresponding local memory and reads them directly from main memory. This too is implemented by adding memory barriers: operations on shared variables that come after the volatile read must not be reordered to before it, so that those operations see the latest values of the variables in main memory.

Memory semantics of locks

A mutex lock enables mutually exclusive execution of critical sections.

When a thread releases a lock, the JMM flushes the shared variables in that thread's corresponding local memory to main memory.

When a thread acquires a lock, the JMM invalidates that thread's corresponding local memory, so that the critical section protected by the monitor must read shared variables from main memory.

How the memory semantics of locks are implemented:

A fair lock's locking method first reads the volatile variable state.

A fair lock's release writes the volatile variable state back at the very end.

An unfair lock's release likewise writes back the volatile variable state.

Unfair lock acquisition: the locking method updates the state variable atomically with compareAndSet(), referred to as CAS. The method's documentation reads: if the current value equals the expected value, atomically set the synchronization state to the given updated value. This method has the memory semantics of both a volatile read and a volatile write.

The compiler does not reorder a volatile read with any memory operations that follow it, and does not reorder a volatile write with any memory operations that precede it.

Lock release and acquisition are implemented in at least two ways: 1) using the memory semantics of volatile variable writes and reads;

2) using the volatile read and volatile write memory semantics that come with CAS.
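A minimal spinlock sketch combining the two (the class is illustrative, not a production lock): the lock state lives in an AtomicBoolean, acquisition uses CAS, which carries volatile read and write semantics, and release is a plain volatile write.

    import java.util.concurrent.atomic.AtomicBoolean;

    public class SpinLock {
        private final AtomicBoolean locked = new AtomicBoolean(false);

        public void lock() {
            // CAS: atomically flip false -> true; carries volatile read+write semantics
            while (!locked.compareAndSet(false, true)) {
                // spin until the lock becomes free
            }
        }

        public void unlock() {
            // volatile write: flushes the critical section's writes to main memory
            // before the next acquirer's volatile read observes the lock as free
            locked.set(false);
        }
    }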

final field memory semantics

Compilers and processors follow two reordering rules for final fields:

1) A write to a final field inside the constructor, and the subsequent assignment of the constructed object's reference to a reference variable, must not be reordered with each other.

2) The first read of a reference to an object containing a final field, and the subsequent first read of that final field, must not be reordered with each other.

Reordering rule for writes to final fields: the JMM prohibits the compiler from reordering a write to a final field to outside the constructor; the compiler inserts a StoreStore barrier after the final-field write, before the constructor returns, and this barrier prevents the processor from reordering the final-field write to outside the constructor.

Reordering rule for reads of final fields: within a thread, the JMM prohibits the processor from reordering the first read of an object reference and the first read of the final field that the object contains. The compiler inserts a LoadLoad barrier before the read of the final field. Since there is an indirect dependence between reading the object reference and reading the field the object contains, the compiler would not reorder these two operations anyway.

When the final field is a reference type: the write-reordering rule adds one more constraint for compilers and processors. A write inside the constructor to a member field of the object referenced by the final field, and the subsequent assignment of the constructed object's reference to a reference variable, must not be reordered.
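A sketch of the final-field guarantee (modeled on the well-known JSR-133 example; names are illustrative). Provided the constructor does not leak this, any thread that sees the object reference is guaranteed to see its final fields fully initialized; plain fields carry no such guarantee.

    public class FinalFieldExample {
        private final int x; // constructor write cannot be reordered past publication
        private int y;       // plain field: no such guarantee

        public FinalFieldExample() {
            x = 42;
            y = 42;
            // `this` must not escape here, or the guarantee is lost
        }

        static FinalFieldExample instance; // plain (non-volatile) shared reference

        static void writer() { instance = new FinalFieldExample(); }

        static void reader() {
            FinalFieldExample f = instance;
            if (f != null) {
                int a = f.x; // guaranteed to read 42
                int b = f.y; // may read 0: the plain write can be reordered
            }
        }
    }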

Safe double-checked locking: build the singleton on a volatile object reference; the class's instance field is declared volatile.

    public class Instance {
        private volatile static Instance instance;

        public static Instance getInstance() {
            if (instance == null) {
                synchronized (Instance.class) {
                    if (instance == null) {
                        instance = new Instance();
                    }
                }
            }
            return instance;
        }
    }

Writes before a volatile write cannot be reordered to after it, so the assignment of the heap address to instance cannot happen before the object's initialization completes. This prevents the situation where instance is already non-null (the heap address has been assigned) while construction is not yet finished.

Four, the fundamentals of Java concurrent programming

The thread is the smallest unit of OS scheduling.

When setting thread priorities, threads that block frequently (on sleep or I/O operations) should be given a high priority, while computation-heavy threads (needing more CPU time) should be given a low priority, to ensure they do not monopolize the CPU.

Thread states:

NEW: the initial state; the thread has been constructed, but start() has not yet been called.

RUNNABLE: the runnable state; Java lumps the operating system's "ready" and "running" thread states together under this single name.

BLOCKED: the blocked state; the thread is blocked waiting for a lock.

WAITING: the waiting state; the thread has entered a wait and needs another thread to perform some specific action (a notification or an interrupt).

TIMED_WAITING: the timed waiting state; unlike WAITING, the thread returns on its own after the specified time.

TERMINATED: the terminated state; the thread has finished executing.

These states are points in a thread's life cycle; a thread need not hold any lock, and a lock is only one of the points at which a thread may pause.

After a thread is initialized, calling its start() method starts it. The meaning of start() is: the calling thread synchronously tells the Java virtual machine that, as soon as the thread scheduler is free, it should immediately start the thread on which start() was called.

The interrupt flag can be understood as a property of the thread indicating whether a running thread has been interrupted by another thread. Other threads interrupt a thread by calling its interrupt() method. A thread responds to interruption by checking its own flag, calling isInterrupted() to determine whether it has been interrupted. The static method Thread.interrupted() also checks the flag but resets it for the current thread. If the thread has already terminated (run() has finished), calling isInterrupted() on its Thread object returns false even if interrupt() was called.

Methods that throw InterruptedException (for example Thread.sleep(long millis)) clear the thread's interrupt flag before throwing, so calling isInterrupted() afterwards returns false.

Suspension: suspend(). After the call, the thread goes to sleep without releasing the resources it occupies, which easily leads to deadlock. Not recommended.

Resumption: resume() is likewise not recommended.

Termination: stop(). It ends a thread without guaranteeing that its resources are released properly, and the program may be left in an undefined state. Not recommended.

Safe termination: use interrupt() or a cancellation flag via a cancel() method, so that the thread gets a chance to clean up its resources when it terminates, as sketched below.
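A sketch of safe termination (names are illustrative): the task watches both a volatile cancellation flag and its own interrupt status, so either cancel() or interrupt() stops it at a point where it can clean up.

    public class SafeShutdown implements Runnable {
        private volatile boolean on = true; // cancellation flag, visible to all threads
        private long count = 0;

        @Override
        public void run() {
            // terminate when the flag is cleared or the thread is interrupted
            while (on && !Thread.currentThread().isInterrupted()) {
                count++;
            }
            System.out.println("stopped cleanly, count = " + count);
        }

        public void cancel() { on = false; }

        public static void main(String[] args) throws InterruptedException {
            SafeShutdown task = new SafeShutdown();
            Thread worker = new Thread(task, "worker");
            worker.start();
            Thread.sleep(100);
            task.cancel();   // or: worker.interrupt();
            worker.join();
        }
    }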

Every object has its own monitor. When a synchronized block or synchronized method of the object is invoked, the executing thread must first obtain the object's monitor before it can enter the synchronized block or method; a thread that fails to obtain the monitor blocks at the entry of the synchronized block or method and enters the BLOCKED state.

Waiting / notification mechanism

One thread changes the value of an object and another thread perceives the change; the former is the producer, the latter the consumer.

wait(): the calling thread enters the WAITING state and returns only when another thread notifies it or interrupts it; wait() releases the object's lock.

wait(long): waits for a timeout period, in milliseconds; returns after the timeout if no notification arrives.

wait(long, int): finer-grained timeout control, down to nanoseconds.

notify(): notifies one thread waiting on the object so that it can return from wait(), on the premise that it reacquires the object's lock.

notifyAll(): notifies all threads waiting on the object. Calling this method does not give up the lock immediately; the lock is given up when the notifying thread exits the synchronized region normally.

To call wait(), notify(), or notifyAll(), the object must be locked first. After wait() is called, the thread's state changes from RUNNING to WAITING and the thread is placed in the object's wait queue.

After notify() or notifyAll() is called, the waiting thread still does not return from wait(); only after the thread that called notify()/notifyAll() releases the lock does the waiting thread have a chance to return from wait(). notify()/notifyAll() moves the awakened thread(s) from the wait queue into the synchronization queue, and their state changes from WAITING to BLOCKED.

The precondition for returning from wait() is reacquiring the object's lock.

The waiting party is the consumer; the notifying party is the producer. The classic paradigm is sketched below.
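A sketch of the wait/notify paradigm (the lock object, flag, and method names are illustrative): the waiting party tests the condition in a loop while holding the lock; the notifying party changes the state and notifies while holding the same lock.

    public class WaitNotify {
        private final Object lock = new Object();
        private boolean ready = false;

        // waiting party (consumer)
        public void consume() throws InterruptedException {
            synchronized (lock) {
                while (!ready) {
                    lock.wait(); // releases `lock`; re-acquires it before returning
                }
                // the condition holds here: react to the change
            }
        }

        // notifying party (producer)
        public void produce() {
            synchronized (lock) {
                ready = true;
                lock.notifyAll(); // the lock is actually released when the block exits
            }
        }
    }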

Piped input/output streams are mainly used for data transfer between threads, and the transfer medium is memory.

Thread.join(): if a thread A calls thread.join(), the current thread A waits until thread terminates before continuing.

The essence of a thread pool: a thread-safe work queue connects the worker threads with the client threads. A client thread puts a task into the work queue and then returns, while the worker threads continuously take tasks out of the work queue and execute them. When the work queue is empty, all worker threads wait on the queue; when a client submits a new task, one worker thread is notified.

Five, locks in Java

Release the lock in a finally block; this guarantees that a lock, once acquired, is eventually released. Do not acquire the lock inside the try block: if acquisition fails with an error there, the finally block would release a lock that was never held. A usage sketch follows.
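A sketch of that rule (the class and method names are illustrative):

    import java.util.concurrent.locks.Lock;
    import java.util.concurrent.locks.ReentrantLock;

    public class LockUsage {
        private final Lock lock = new ReentrantLock();

        public void doWork() {
            lock.lock();       // acquire OUTSIDE the try block
            try {
                // critical section
            } finally {
                lock.unlock(); // always released, even if the body throws
            }
        }
    }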

The Lock interface provides features that the synchronized keyword does not:

Non-blocking lock acquisition: the current thread attempts to acquire the lock and succeeds only if no other thread holds it at that moment.

Interruptible lock acquisition: a thread that is waiting to acquire the lock can respond to interruption; the interrupted exception is thrown and the thread stops waiting. (With synchronized this is not possible: a thread stuck in the lock's entry queue cannot withdraw from the queue and must keep waiting. The Lock interface's interruptible acquisition lets a thread stuck in the queue respond to interruption and exit the queue.)

Timed lock acquisition: if the lock is not obtained by the specified deadline, the call returns.

The Lock interface defines the basic operations for acquiring and releasing a lock. Its API methods:

void lock(): acquires the lock; returns once the lock has been obtained.

void lockInterruptibly() throws InterruptedException: acquires the lock while responding to interruption.

boolean tryLock(): non-blocking attempt to acquire the lock; returns a result immediately, success or failure.

boolean tryLock(long time, TimeUnit unit) throws InterruptedException: timed lock acquisition that responds to interruption. Three outcomes: the lock is acquired within the timeout; the thread is interrupted within the timeout; the timeout elapses without the lock being acquired.

void unlock(): releases the lock.

Condition newCondition(): obtains the wait/notify component bound to the current lock. Only after acquiring the lock can the current thread call the component's wait method, and calling it releases the lock.

Implementations of the Lock interface accomplish thread access control essentially by aggregating a subclass of the queue synchronizer.

Queue synchronizer: AbstractQueuedSynchronizer (AQS) is the basic framework used to build locks and other synchronization components. It represents the synchronization state with an int member variable and manages blocked threads through a built-in FIFO queue.

The synchronizer itself implements no synchronization interface; it is used mainly by inheritance and provides the methods for modifying the synchronization state; acquiring a lock is intrinsically a modification of that state. It both simplifies implementing a lock and provides extensibility.

How the synchronization logic is implemented internally:

When a thread fails to acquire the lock, it enters an internal FIFO queue. Each thread is a node that stores a reference to the thread, its waiting status, and references to its predecessor and successor nodes. When the lock is released, the thread of the node at the front of the queue is woken to try to acquire the lock (the synchronization state).

The synchronizer holds references to two nodes: the head node and the tail node of the queue. When a thread is added, CAS is used to append the tail node safely.

Exclusive acquisition and release of the synchronization state: the thread first attempts to acquire the state; on success the lock is acquired; on failure the thread is wrapped into a node and appended to the tail of the queue thread-safely with CAS. A node in the queue spins in a loop attempting to acquire the synchronization state; when an attempt fails, the node's thread is blocked and must be woken by its predecessor node's thread or by an interruption. Only exceptional situations remove a node from the queue. Release writes the state directly, because only the head node can have acquired it.

Shared acquisition and release of the synchronization state: the synchronizer attempts to acquire the state; on success the lock is acquired; on failure a node is created that spins in a loop to acquire the state. Releasing the state requires CAS, because multiple threads may release it at the same time.

Timed exclusive acquisition of the synchronization state: if the predecessor node is the head node, try to acquire the state; otherwise spin and wait. If the remaining timeout is less than or equal to 0, return false; if it is below a small threshold, spin-wait (because parking for very short times is imprecise); otherwise sleep.
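A sketch of a custom exclusive lock built by aggregating an AQS subclass, in the spirit of the Mutex example commonly used to illustrate the synchronizer (state 0 = unlocked, 1 = locked):

    import java.util.concurrent.locks.AbstractQueuedSynchronizer;

    public class Mutex {
        private static final class Sync extends AbstractQueuedSynchronizer {
            @Override
            protected boolean tryAcquire(int acquires) {
                // CAS 0 -> 1: only one thread wins and records itself as owner
                if (compareAndSetState(0, 1)) {
                    setExclusiveOwnerThread(Thread.currentThread());
                    return true;
                }
                return false;
            }

            @Override
            protected boolean tryRelease(int releases) {
                if (getState() == 0) throw new IllegalMonitorStateException();
                setExclusiveOwnerThread(null);
                setState(0); // a plain write suffices: only the owner releases
                return true;
            }

            @Override
            protected boolean isHeldExclusively() { return getState() == 1; }
        }

        private final Sync sync = new Sync();

        public void lock()       { sync.acquire(1); }  // enqueues and blocks on failure
        public void unlock()     { sync.release(1); }  // wakes the head's successor
        public boolean tryLock() { return sync.tryAcquire(1); }
    }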

Reentrant lock: ReentrantLock. When a thread requests the lock, if the synchronization state is 0 it changes the state directly; otherwise it checks whether the thread holding the lock is the current thread, and if so adds 1 to the synchronization state and the acquisition succeeds; otherwise the acquisition fails. On release, it first checks that the current thread is the one that acquired the lock, then decrements the state by 1 each time; the release succeeds (returning true) only when the synchronization state reaches 0, at which point the owner thread is also set to null; otherwise it returns false. (This is the non-fair lock process.)

A fair lock grants the lock in request order, FIFO.

Fair lock acquisition: when the synchronization state is 0, if the current thread has no predecessor node in the queue, change the state with CAS; otherwise the acquisition fails.

For a reentrant acquisition, the result of the state change is returned directly.

Read-write lock: ReentrantReadWriteLock. If the synchronization state has been acquired by a read lock, subsequent read locks can still acquire it, but a subsequent write lock is blocked.

A thread that acquires the write lock blocks other threads' write locks and read locks. A thread that has acquired the write lock's synchronization state can then acquire the read lock and afterwards release the write lock, completing lock downgrading.

Write lock acquisition: the write lock is reentrant. If the synchronization state has already been acquired by a read lock, or has been acquired as a write lock by a thread other than the current one, the acquisition fails.

The read lock is a shared lock.
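A lock-downgrading sketch (modeled on the CachedData example in the ReentrantReadWriteLock documentation): acquire the read lock while still holding the write lock, then release the write lock.

    import java.util.concurrent.locks.ReentrantReadWriteLock;

    public class CachedData {
        private final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
        private Object data;

        public Object loadAndUse() {
            rwl.writeLock().lock();
            try {
                data = "freshly loaded"; // update under the write lock
                rwl.readLock().lock();   // acquire the read lock before giving up the write lock
            } finally {
                rwl.writeLock().unlock(); // downgrade complete: only the read lock is held
            }
            try {
                return data;             // read under the read lock
            } finally {
                rwl.readLock().unlock();
            }
        }
    }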

Condition objects: Condition defines wait/notify-style methods. Before calling them, the current thread must have acquired the lock associated with the Condition object. Condition objects are created by the Lock object, so Condition depends on Lock.

ConditionObject is an inner class of AbstractQueuedSynchronizer. Each Condition object contains a wait queue, which is a FIFO queue. If a thread calls Condition.await(), it releases the lock, its state changes to the waiting state, and a node constructed from the thread is appended to the tail of the wait queue; this corresponds to moving the head node of the synchronization queue to the tail of the wait queue. Since the thread calling await() must already hold the lock associated with the Condition, appending to the tail does not need CAS. By comparison, in the Object monitor model each object has one synchronization queue and one wait queue.

When a thread calls Condition.signal(), the head node of the wait queue is appended, via the CAS infinite loop in the enq(Node node) method, to the tail of the synchronization queue; acquireQueued() is then called (an attempt to obtain the lock), adding the node to the competition for the synchronization state. When its thread acquires the lock, the thread returns from await().
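A bounded-buffer sketch using one Lock with two Conditions (and thus two wait queues); the structure mirrors the idea behind ArrayBlockingQueue, though the names here are illustrative:

    import java.util.concurrent.locks.Condition;
    import java.util.concurrent.locks.Lock;
    import java.util.concurrent.locks.ReentrantLock;

    public class BoundedBuffer<T> {
        private final Lock lock = new ReentrantLock();
        private final Condition notFull  = lock.newCondition();
        private final Condition notEmpty = lock.newCondition();
        private final Object[] items = new Object[16];
        private int putIndex, takeIndex, count;

        public void put(T x) throws InterruptedException {
            lock.lock();
            try {
                while (count == items.length) notFull.await(); // joins notFull's wait queue
                items[putIndex] = x;
                putIndex = (putIndex + 1) % items.length;
                count++;
                notEmpty.signal(); // moves one waiting taker to the synchronization queue
            } finally {
                lock.unlock();
            }
        }

        @SuppressWarnings("unchecked")
        public T take() throws InterruptedException {
            lock.lock();
            try {
                while (count == 0) notEmpty.await();
                T x = (T) items[takeIndex];
                items[takeIndex] = null;
                takeIndex = (takeIndex + 1) % items.length;
                count--;
                notFull.signal(); // moves one waiting putter to the synchronization queue
                return x;
            } finally {
                lock.unlock();
            }
        }
    }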

ConcurrentLinkedQueue

Enqueueing: ConcurrentLinkedQueue is an unbounded linked list; adding an element means inserting it at the tail. Enqueueing is accomplished mainly through the tail node.

Initially the head node holds no element, and when tail equals head the queue is empty; when the first element is inserted, it is linked as the next node, and when the second element is inserted, tail is advanced to point to it. tail does not always point to the last node of the queue: if tail's next is null, the new element is installed as tail's next; if tail's next is not null, tail is advanced to the new element. As the code shows, insertion repeatedly locates the real last node starting from tail and then inserts with CAS. If tail were updated to point at the last node on every insertion, the frequent updates would hurt efficiency; if tail were never updated, locating the last node would take longer and longer. The compromise in the design keeps tail at most one node away from the real last node.

 

Dequeueing (the queue is FIFO): if the head node's element is null, check its next node; if that is also null, the queue is empty; if the next node holds a value, that value is the element removed. Like tail, head is not advanced on every removal: it is updated lazily, so the node that actually holds the first element stays within a fixed maximum distance of head.

Blocking queue

When the queue is full, a thread performing an insert operation is blocked; when the queue is empty, a thread taking a value is blocked.

Insertion operations: add(e) throws an exception when the queue is full; offer(e) returns a boolean indicating whether the insertion succeeded; put(e) blocks the thread when the queue is full; offer(e, time, unit) blocks until a timeout when the queue is full.

Removal operations: remove(e) throws an exception when the queue is empty; poll() returns the value on success or null on failure; take() blocks the thread when the queue is empty; poll(time, unit) blocks until a timeout when the queue is empty.

ArrayBlockingQueue: a bounded blocking queue backed by an array, ordering elements FIFO; whether thread access is fair is configurable.

LinkedBlockingQueue: a bounded blocking queue backed by a linked list.

PriorityBlockingQueue: an unbounded blocking queue supporting priority ordering.

DelayQueue: an unbounded blocking queue implemented with a priority queue.

SynchronousQueue: a blocking queue that stores no elements; each put operation must wait for a take operation.

LinkedTransferQueue: an unbounded blocking queue backed by a linked list.

LinkedBlockingDeque: a double-ended blocking queue backed by a linked list.

The principle behind blocking queues: a reentrant lock (ReentrantLock) combined with Condition wait queues.
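A producer/consumer sketch on ArrayBlockingQueue (the capacity and counts are illustrative):

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class ProducerConsumer {
        public static void main(String[] args) {
            // bounded FIFO queue: put() blocks when full, take() blocks when empty
            BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(4);

            Thread producer = new Thread(() -> {
                try {
                    for (int i = 0; i < 10; i++) queue.put(i); // blocks while the queue is full
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            Thread consumer = new Thread(() -> {
                try {
                    for (int i = 0; i < 10; i++) System.out.println(queue.take()); // blocks while empty
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            producer.start();
            consumer.start();
        }
    }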

Fork / Join framework

Work-stealing algorithm: a thread that has finished the tasks in its own queue takes tasks from the queue of a thread that has not finished; this is called work stealing. The queues are double-ended: the owner takes tasks from the head, while the stealing thread takes tasks from the tail.

Advantages: makes full use of threads for parallel computation and reduces competition between threads.

Disadvantages: competition still exists in some cases, for example when a deque holds only one task; and creating multiple threads and multiple deques consumes system resources.

RecursiveAction: for tasks that do not return a result.

RecursiveTask: for tasks that return a result.
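A RecursiveTask sketch that sums 1..n by splitting the range until each piece is small enough (the threshold is an illustrative choice):

    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.RecursiveTask;

    public class SumTask extends RecursiveTask<Long> {
        private static final long THRESHOLD = 1_000;
        private final long from, to;

        SumTask(long from, long to) { this.from = from; this.to = to; }

        @Override
        protected Long compute() {
            if (to - from <= THRESHOLD) {
                long sum = 0;
                for (long i = from; i <= to; i++) sum += i;
                return sum;
            }
            long mid = (from + to) / 2;
            SumTask left  = new SumTask(from, mid);
            SumTask right = new SumTask(mid + 1, to);
            left.fork();                          // queued; may be stolen by an idle worker
            return right.compute() + left.join(); // compute one half here, join the other
        }

        public static void main(String[] args) {
            long result = new ForkJoinPool().invoke(new SumTask(1, 1_000_000));
            System.out.println(result); // 500000500000
        }
    }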

The basic atomic update classes (AtomicInteger, AtomicLong, and so on) are implemented on top of the Unsafe native methods:

    unsafe.compareAndSwapInt(this, valueOffset, expect, update);
    unsafe.compareAndSwapLong(this, valueOffset, expect, update);

How the thread pool works:

When a new task is submitted to the thread pool, the pool processes it as follows:

  1. If fewer threads than corePoolSize (the core pool size) are running, create a new thread to execute the task. (This step requires acquiring the global lock. Calling the pool's prestartAllCoreThreads() method creates and starts all core threads in advance.)
  2. If the number of running threads is equal to or greater than corePoolSize, add the task to the BlockingQueue. (The blocking queue can be bounded or unbounded; if it is unbounded, the maximumPoolSize parameter is meaningless.)
  3. If the task cannot be added to the BlockingQueue (the queue is full), create a new thread to handle the task. (This step requires acquiring the global lock.)
  4. If creating a new thread would push the number of running threads above maximumPoolSize, the task is rejected and RejectedExecutionHandler.rejectedExecution() is called.

  5. Creating a thread pool: new ThreadPoolExecutor(corePoolSize (core pool size), maximumPoolSize (maximum pool size), keepAliveTime (how long idle worker threads are kept alive), unit (the TimeUnit), runnableTaskQueue (the blocking queue), handler (the RejectedExecutionHandler saturation policy))

  6. Tasks are submitted with the execute and submit methods: execute returns no value, while submit returns a Future.
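A construction-and-submission sketch tying the parameters above together (the pool sizes, queue, and saturation policy are illustrative choices):

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.Future;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class PoolDemo {
        public static void main(String[] args) throws Exception {
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    2,                                     // corePoolSize
                    4,                                     // maximumPoolSize
                    60L, TimeUnit.SECONDS,                 // keepAliveTime for idle non-core threads
                    new ArrayBlockingQueue<>(16),          // bounded work queue
                    new ThreadPoolExecutor.AbortPolicy()); // saturation policy: reject by throwing

            pool.execute(() -> System.out.println("execute: no result")); // fire and forget
            Future<Integer> f = pool.submit(() -> 1 + 1);                 // submit returns a Future
            System.out.println("submit: " + f.get());                     // blocks for the result

            pool.shutdown();
        }
    }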

Executor framework

In the JVM's threading model, Java threads are mapped one-to-one onto native operating system threads: starting a Java thread creates a native OS thread. A Java multithreaded program decomposes its work into tasks; a user-level scheduler (the Executor framework) maps these tasks onto a fixed number of threads, and the underlying operating system maps those threads onto processors.

ThreadPoolExecutor is the core implementation class of the thread pool, used to execute submitted tasks.

Three preconfigured types of ThreadPoolExecutor (created via the Executors factory):

FixedThreadPool, a thread pool with a fixed number of threads: corePoolSize and maximumPoolSize are both set to the fixed parameter, keepAliveTime of 0 means an idle thread terminates immediately, and the blocking queue is a LinkedBlockingQueue (capacity Integer.MAX_VALUE), effectively unbounded. Suitable for heavily loaded servers.

SingleThreadExecutor, suitable when tasks must execute sequentially: corePoolSize and maximumPoolSize are both set to 1, keepAliveTime of 0 means an idle thread terminates immediately, and the blocking queue is a LinkedBlockingQueue (capacity Integer.MAX_VALUE), effectively unbounded.

CachedThreadPool, suited to executing many short-lived asynchronous tasks, or to lightly loaded servers: corePoolSize is 0, maximumPoolSize is Integer.MAX_VALUE, and keepAliveTime is 60 seconds. The blocking queue is a SynchronousQueue (no capacity: each insert operation must wait for a remove operation by another thread). If the pool has no idle thread, or the pool is initially empty, CachedThreadPool creates a new thread directly.

 

 

ScheduledThreadPoolExecutor is an implementation class that can run commands after a given delay or execute tasks periodically.

 

The Future interface and its implementation class FutureTask represent the result of an asynchronous computation. (FutureTask also implements the Runnable interface, so it can be executed directly as well as submitted.)

Implementation classes of the Callable or Runnable interfaces can be executed by ThreadPoolExecutor or ScheduledThreadPoolExecutor.

Runnable returns no result; Callable can return a result.
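A sketch of the difference (the pool size and values are illustrative): a Callable's result comes back through a Future, and a FutureTask can itself be handed to execute() because it implements Runnable.

    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.FutureTask;

    public class CallableDemo {
        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(1);

            Callable<String> task = () -> "result";         // Callable returns a value
            FutureTask<String> futureTask = new FutureTask<>(task);

            pool.execute(futureTask);                       // runs as a Runnable...
            System.out.println(futureTask.get());           // ...result read via the Future API

            pool.shutdown();
        }
    }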
