Common Java concurrent programming interview questions: instruction reordering, the happens-before principle (Happen-Before), read-write locks, the three elements of thread safety, and thread-safe concurrent queues (Queue)

1. The relationship between processes and threads as well as coroutines

   1.1 Process: intuitively, when a program stored on disk is run, it forms an independent block of memory; a process has its own separate address space and its own stack, and it belongs to the operating system.

    The operating system allocates system resources (CPU time slices, memory, and so on) in units of processes; the process is the smallest unit of resource allocation.

  1.2 Thread: sometimes called a lightweight process (Lightweight Process, LWP), it is the smallest unit of execution scheduled by the operating system (the CPU scheduler).

  1.3 Coroutine: a coroutine is a lightweight, user-mode thread whose scheduling is controlled entirely by the user. A coroutine has its own stack and register context.

     When a coroutine is switched out, its register context and stack are saved elsewhere; when it is switched back in, the previously saved register context and stack are restored. Switching is basically just a stack operation with no kernel overhead, and global variables can be accessed without locking, so context switching is very fast.
        A coroutine can suspend a subroutine internally, go and execute another subroutine, and then resume the first one at an appropriate time.

    Reference: https://www.cnblogs.com/starluke/p/11795342.html

 

2. The difference between concurrency and parallelism
                Concurrency: within the same period of time, multiple tasks are handled at the macro level (they take turns on the CPU).
                Parallelism: at the same instant, multiple tasks are truly being executed simultaneously.


3. Ways to create threads in Java

  3.1 Extend the Thread class and override the run method

  3.2 Implement the Runnable interface and override the run method

  3.3 Implement Callable and create the thread through FutureTask

  3.4 Create threads through a thread pool (as shown in the sketch below)
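
To make the four approaches concrete, here is a minimal runnable sketch (class and task names such as CreateThreadDemo are illustrative, not from the original post):

import java.util.concurrent.*;

public class CreateThreadDemo {
    public static void main(String[] args) throws Exception {
        // 3.1 Extend Thread and override run()
        new Thread() {
            @Override
            public void run() { System.out.println("from Thread subclass"); }
        }.start();

        // 3.2 Implement Runnable and pass it to a Thread
        new Thread(() -> System.out.println("from Runnable")).start();

        // 3.3 Wrap a Callable in a FutureTask and run it on a Thread
        FutureTask<String> futureTask = new FutureTask<>(() -> "from Callable");
        new Thread(futureTask).start();
        System.out.println(futureTask.get());

        // 3.4 Submit tasks to a thread pool
        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.execute(() -> System.out.println("from thread pool"));
        pool.shutdown();
    }
}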


4. Callable and the Future pattern
            

Callable

In Java there are two basic ways to create a thread: extend the Thread class or implement the Runnable interface. The drawback of both is that when the thread finishes executing its task, no result can be obtained from it; we usually have to rely on shared variables, shared memory, or other inter-thread communication to get the result of the task.

However, Java also provides Callable and Future, which make it possible to obtain the result of a task: the Callable performs the task and produces the result, and the Future is used to retrieve that result.

The Callable interface is defined as follows:

public interface Callable<V> {
    /**
     * Computes a result, or throws an exception if unable to do so.
     *
     * @return computed result
     * @throws Exception if unable to compute a result
     */
    V call() throws Exception;
}

 

It differs from the Runnable interface in that its call() method is generic and returns a value of type V.

The Future pattern

The core idea of the Future pattern is that the main thread does not have to sit idle while waiting for a result; during the waiting period it can process other business logic.

Future pattern: in a multithreaded program, if thread A needs the result produced by thread B, thread A does not have to block until B finishes. It can immediately obtain a Future from B and, later, once B's result is ready, retrieve the real result from that Future.

A frequently cited example is downloading an image from the network: a blurry placeholder image is shown first, and once the download thread finishes, the placeholder is replaced with the real image. While the download is in progress, the main thread can do other things.

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;


/**
 * @author wn
 * Demonstrates submitting a Callable to an ExecutorService and retrieving
 * its result through a Future.
 */
public class ThreadCallTest {
    public static void main(String[] args) {
        ExecutorService executor = Executors.newCachedThreadPool();
        Task task = new Task();
        // Submit the Callable; the returned Future will hold the result.
        Future<String> result = executor.submit(task);
        executor.shutdown();

        try {
            // get() blocks until the Callable has finished and returns its value.
            System.out.println("call result: " + result.get());
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (ExecutionException e) {
            e.printStackTrace();
        }
        System.out.println("over");
    }
}

class Task implements Callable<String> {
    @Override
    public String call() throws Exception {
        System.out.println("task started ...");
        Thread.sleep(3000);           // simulate three seconds of work
        System.out.println("task finished ...");
        return "xyz";
    }
}

 

Starting the Task does not block the main thread; result.get() waits about 3 seconds and then returns the result "xyz".

Common Future methods

V get(): gets the result of the asynchronous computation; if no result is available yet, this method blocks until the computation completes.

V get(long timeout, TimeUnit unit): gets the result of the asynchronous computation; if no result is available yet, this method blocks, but only up to the given timeout. If the blocking time exceeds the timeout, the method throws a TimeoutException.

boolean isDone(): returns true if the task has ended, whether it completed normally, ended with an exception, or was cancelled. => result.isDone()

boolean isCancelled(): returns true if the task was cancelled before it completed.

boolean cancel(boolean mayInterruptIfRunning): if the task has not started yet, calling cancel(...) returns true and the task will never run. If the task has already started, cancel(true) tries to stop it by interrupting the thread that is executing it; if it is stopped successfully, true is returned.

If the task has already started, cancel(false) has no effect on the executing thread (it is allowed to run to completion) and returns false.

If the task has already completed, cancel(...) returns false. The mayInterruptIfRunning parameter indicates whether the executing thread should be interrupted.

In fact, Future provides three capabilities:

  • (1) the ability to cancel (interrupt) the execution of the task
  • (2) the ability to check whether the task has completed
  • (3) the ability to obtain the result of the completed task
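
A minimal sketch of these Future methods in action (the two-second task and the five-second timeout are illustrative assumptions):

import java.util.concurrent.*;

public class FutureMethodsDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        // A made-up slow task, used only to exercise the Future API.
        Future<Integer> future = executor.submit(() -> {
            Thread.sleep(2000);
            return 42;
        });

        System.out.println("isDone before completion: " + future.isDone());
        try {
            // Wait at most 5 seconds; throws TimeoutException if the task takes longer.
            Integer value = future.get(5, TimeUnit.SECONDS);
            System.out.println("result: " + value);
        } catch (TimeoutException e) {
            // The task took too long; try to interrupt the worker thread.
            future.cancel(true);
        }
        System.out.println("isDone: " + future.isDone() + ", isCancelled: " + future.isCancelled());
        executor.shutdown();
    }
}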


5. Ways to create a thread pool (generally not created via the Executors.newXxx factory methods; ThreadPoolExecutor is usually used directly)
  newCachedThreadPool creates a cached thread pool: if the pool grows larger than the workload needs, idle threads are flexibly reclaimed; if no thread can be reused, a new thread is created.
  newFixedThreadPool creates a fixed-size thread pool, so the maximum number of concurrent threads can be controlled; excess tasks wait in the queue.
  newScheduledThreadPool creates a fixed-size thread pool that supports scheduled and periodic task execution.
  newSingleThreadExecutor creates a single-threaded pool that uses only one worker thread to execute tasks, guaranteeing that all tasks are executed in the specified order (FIFO, LIFO, priority).
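
Since the heading recommends ThreadPoolExecutor over the Executors factory methods, here is a minimal sketch of constructing a pool directly; the pool sizes, queue capacity, and rejection policy are illustrative assumptions:

import java.util.concurrent.*;

public class PoolDemo {
    public static void main(String[] args) {
        // Core size 2, max size 4, idle threads above the core size die after 60s,
        // bounded queue of 100 tasks, caller-runs policy when saturated.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4,
                60L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(100),
                Executors.defaultThreadFactory(),
                new ThreadPoolExecutor.CallerRunsPolicy());

        for (int i = 0; i < 10; i++) {
            final int taskId = i;
            pool.execute(() -> System.out.println(
                    Thread.currentThread().getName() + " runs task " + taskId));
        }
        pool.shutdown();
    }
}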

6. What states does a Java thread have?

  6.1 New state (New): after the thread object is created, it is in the new state, e.g. Thread thread = new Thread().

  6.2 Ready state (Runnable): also called the "executable" state. After the thread object has been created, some other thread calls the object's start() method to start it, e.g. thread.start(). A thread in the ready state may be picked by the CPU scheduler and executed at any time.

  6.3 Running state (Running): the thread has obtained the CPU and is executing. Note that a thread can only enter the running state from the ready state.

  6.4 Blocked state (Blocked): the thread has given up the right to use the CPU for some reason and temporarily stops running. It gets no chance to run again until it re-enters the ready state. Blocking falls into three categories:

  • (01) Wait blocking - the thread calls wait() and waits for some work to complete.
  • (02) Synchronization blocking - the thread fails to acquire a synchronized lock (because the lock is held by another thread) and enters the synchronization-blocked state.
  • (03) Other blocking - the thread calls sleep() or join(), or issues an I/O request. When sleep() times out, join() returns (the joined thread terminates or the join times out), or the I/O completes, the thread re-enters the ready state.

  6.5 Dead state (Dead): the thread has finished executing or has exited the run() method because of an exception; its life cycle is over.

          

 

 

 

7. Commonly used methods in multithreading
  Common methods used in Java multithreading include the following:

   start, run, sleep, wait, notify, notifyAll, join, isAlive, currentThread, interrupt, yield

 

  1) start method

     Starts a thread; the thread enters the ready (waiting) state. Once it is allocated CPU time it runs independently of the thread that started it and has its own life cycle. Note that even after start() is called, the thread will not necessarily execute immediately; the main purpose of calling start() is to put the thread into the ready queue, and it may not get the CPU right away.

 

  2) run method

     Plays the same role in the Thread class and in the Runnable interface: it is the thread's entry point and is called automatically by the system; users should not invoke it directly.

 

  3) sleep and wait methods

     Sleep: a static method of the Thread class. It suspends the current thread and gives up the CPU. The monitor state is kept, however: if the current thread has entered a synchronized block, sleep() does not release the lock, so even though the thread has given up the CPU, other threads blocked on that synchronization lock still cannot acquire it and run. Only after the time specified in sleep() has elapsed does the sleeping thread become eligible for the CPU again and continue where it left off.

     Wait: a method of the Object class. wait() is called by a thread that has already entered the synchronization lock; it makes the thread release that lock so that other threads waiting for the same lock get a chance to execute. The waiting thread can only be woken up when another thread calls notify() or notifyAll() (note that calling notify() or notifyAll() does not itself release the lock; it only tells the threads that called wait() that they may compete for the lock again). Also note that wait() must be called from a method or block guarded by the synchronized keyword.

 

  4) notify and notifyAll

      Both wake up threads that are waiting because they called wait(). The only difference is that notify() wakes up a single waiting thread, whereas notifyAll() wakes up all waiting threads. Note that neither notify() nor notifyAll() releases the corresponding synchronization lock. (A producer-consumer sketch using wait/notifyAll follows after this list.)

 

  5) isAlive

 

      检查线程是否处于执行状态。在当前线程执行完run方法之前调用此方法会返回true。

 

                                             在run方法执行完进入死亡状态后调用此方法会返回false。

 

  6) currentThread

     A static method of the Thread class that returns the thread currently using the CPU (the currently executing thread).

 

  7) interrupt

     Wakes up a thread that went to sleep by calling sleep(); the woken thread then throws an InterruptedException.

 

  8) join

      Thread joining: for example, while thread A is using the CPU, it can let another thread join it by calling that thread's join() method; thread A then waits for the joined thread to finish before continuing.

 9) yield()

  Calling yield() makes the current thread hand over the CPU so that the CPU can run other threads. Like sleep(), it does not release any lock. However, yield() cannot control exactly when the CPU is handed over, and it only gives threads with the same priority a chance to obtain CPU time.

  Note that calling yield() does not put the thread into the blocked state; it simply returns the thread to the ready state, where it waits to regain CPU time. This is where it differs from sleep().
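
A minimal producer-consumer sketch showing wait()/notifyAll() inside synchronized methods (the buffer capacity and class name are illustrative assumptions):

public class WaitNotifyDemo {
    private final java.util.LinkedList<Integer> buffer = new java.util.LinkedList<>();
    private static final int CAPACITY = 5;

    public synchronized void put(int value) throws InterruptedException {
        while (buffer.size() == CAPACITY) {
            wait();                     // releases the lock while waiting
        }
        buffer.add(value);
        notifyAll();                    // wake up waiting consumers (lock is released when the method exits)
    }

    public synchronized int take() throws InterruptedException {
        while (buffer.isEmpty()) {
            wait();
        }
        int value = buffer.removeFirst();
        notifyAll();                    // wake up waiting producers
        return value;
    }

    public static void main(String[] args) {
        WaitNotifyDemo demo = new WaitNotifyDemo();
        new Thread(() -> {
            for (int i = 0; i < 10; i++) {
                try { demo.put(i); } catch (InterruptedException e) { return; }
            }
        }).start();
        new Thread(() -> {
            for (int i = 0; i < 10; i++) {
                try { System.out.println("took " + demo.take()); } catch (InterruptedException e) { return; }
            }
        }).start();
    }
}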


            
8. Thread state transition diagram

 

 

 


9. What is the volatile keyword used for, and how does it differ from synchronized?

The role of the volatile keyword
The volatile keyword guarantees visibility and ordering (but not atomicity). If a shared variable is declared volatile, then as soon as one thread modifies it, other threads see the new value immediately. Why? Suppose thread A modifies its working copy of a shared variable. If the variable is not volatile, the modification is not necessarily flushed to main memory right away, so if thread B reads the variable from main memory at that moment it may still see the old, unmodified value. If the variable is volatile, the modification is forced to be flushed to main memory immediately, so when B reads the variable from main memory it sees the value A just wrote.
volatile also forbids instruction reordering around it: during reordering optimization, instructions before a volatile access cannot be moved after it, and instructions after it cannot be moved before it, which is how it guarantees ordering.

 

The role of the synchronized keyword
synchronized provides a synchronization lock: a code section modified by synchronized cannot be executed by multiple threads at the same time; one thread must finish executing the whole synchronized section before another thread may start executing it.
Because synchronized guarantees that only one thread executes the synchronized block at any moment, executing the block is effectively a single-threaded operation, so synchronized can guarantee visibility, atomicity, and ordering (the execution order between threads).
Differences between the volatile and synchronized keywords

     (1) volatile can only be applied to variables, so its scope of use is narrow; synchronized can be applied to variables, methods, classes, and synchronized code blocks, so its scope is much wider.
     (2) volatile guarantees only visibility and ordering, not atomicity; synchronized can guarantee visibility, ordering, and atomicity.
     (3) volatile never blocks threads; synchronized may block threads.
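
A minimal sketch of volatile visibility, assuming a hypothetical worker thread that polls a stop flag:

public class VolatileFlagDemo {
    // Without volatile, the worker thread might keep reading a stale cached value
    // of running and never notice that the main thread changed it.
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // busy work
            }
            System.out.println("worker saw running = false and stopped");
        });
        worker.start();

        Thread.sleep(1000);
        running = false;   // the write is flushed to main memory and becomes visible to the worker
        worker.join();
    }
}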

            
10. Instruction reordering and the happens-before principle
            Java instruction reordering
   The Java happens-before principle (Happen-Before)
11. The three elements of thread safety in concurrent programming
            The three aspects of guaranteeing thread safety
12. Scheduling algorithms for processes and threads:
  12.1 First-come, first-served (FCFS)
    First-come, first-served (FCFS): the simplest scheduling algorithm; requests are scheduled in the order in which they arrive.
  Characteristics:
    A non-preemptive algorithm; easy to implement, but performance is poor.
    It favors long jobs and penalizes short jobs, because a short job must wait for all earlier long jobs to finish, which makes its waiting time excessive.
  12.2 Shortest job first (SJF)

    Shortest job first (SJF): jobs are scheduled in order of their estimated run time, shortest first.
  Characteristics:
    A non-preemptive algorithm that favors short jobs; it performs well, lowers the average waiting time, and increases throughput.
    It penalizes long jobs: a long job may stay waiting indefinitely and starve.
    It does not take job urgency into account, so it cannot be used in real-time systems.
  12.3 Shortest remaining time next (SRTN)
     Shortest remaining time next (SRTN):
     The preemptive version of shortest job first; jobs are scheduled in order of their remaining run time, shortest first.
     When a new process arrives, its run time is compared with the remaining run time of the current process.
     If the new process needs less time, the current process is suspended and the new process runs; otherwise the new process waits.
  Characteristic: it ensures that once a new short process enters the system, it is handled as soon as possible.
  12.4 Round robin
    Round robin (RR):
    Uses a queue: ready processes are ordered according to FCFS, and newly arriving processes are appended to the tail of the queue.
    The process at the head of the queue is selected and given a CPU time slice. When the time slice expires, the timer raises a clock interrupt, the process is stopped, and it is sent back to the tail of the queue.
    Steps 1 and 2 are repeated until scheduling is complete.
  Characteristics:
    The efficiency of round robin depends heavily on the size of the time slice. If the slice is too small, frequent process switches lower CPU utilization; if it is too large, the system's responsiveness cannot be guaranteed.
    Round robin is mostly used in time-sharing systems.

  12.5 Priority scheduling
    Priority scheduling: each process is assigned a priority; higher-priority processes are scheduled before lower-priority ones.
    Depending on how a running process is treated when a higher-priority process arrives, it is divided into preemptive and non-preemptive priority scheduling.

    Preemptive priority scheduling: if a higher-priority process arrives while a process is running, the current process is interrupted and the higher-priority process is executed.
    Non-preemptive priority scheduling: if a higher-priority process arrives while a process is running, the current process still continues until it finishes.
    To prevent low-priority processes from never being scheduled, the priority of waiting processes can be increased as time passes.
  12.6 Multilevel feedback queue
    Multilevel feedback queue:
    Several queues are set up with different time-slice sizes, for example 1, 2, 4, 8 ...
    When a process becomes ready it first enters the first-level queue; after it has been scheduled and executed, if it still needs more time it moves to the second-level queue, and so on. If it has not finished by the last queue, it stays in that queue.
    Processes within each queue are scheduled FCFS, and a queue is only scheduled when all higher-level queues have no processes waiting.


    The multilevel feedback queue is designed for processes that need several consecutive time slices; by adjusting the time-slice size of each level it reduces the number of times such a process has to be scheduled.

    Reference: Operating systems - processes and threads (process scheduling algorithms, inter-process communication)
                
            Java uses preemptive scheduling.
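
As an illustrative sketch only (not from the original post), here is a tiny round-robin simulation in Java; the process names, burst times, and time slice of 2 are made-up values:

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedHashMap;
import java.util.Map;

public class RoundRobinDemo {
    public static void main(String[] args) {
        // Remaining CPU time needed by each "process" (hypothetical values).
        Map<String, Integer> remaining = new LinkedHashMap<>();
        remaining.put("P1", 5);
        remaining.put("P2", 3);
        remaining.put("P3", 1);

        int timeSlice = 2;
        Deque<String> readyQueue = new ArrayDeque<>(remaining.keySet());

        while (!readyQueue.isEmpty()) {
            String p = readyQueue.pollFirst();
            int run = Math.min(timeSlice, remaining.get(p));
            remaining.put(p, remaining.get(p) - run);
            System.out.println(p + " runs for " + run + " time unit(s)");
            if (remaining.get(p) > 0) {
                readyQueue.addLast(p);   // time slice used up, back to the tail of the queue
            } else {
                System.out.println(p + " finished");
            }
        }
    }
}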
               
13. Which locks have you used in Java development?

  13.1 Optimistic lock

    An optimistic lock, as the name suggests, is optimistic: every time it reads data it assumes nobody else will modify it, so it does not lock. When it updates, it checks whether anyone else has updated the data in the meantime, for example with a version-number mechanism. Optimistic locking suits read-heavy applications and improves throughput. In Java, the atomic classes under java.util.concurrent.atomic are one implementation of optimistic locking, built on CAS (Compare and Swap).

    Optimistic locking suits scenarios with very many reads; avoiding locks brings a large performance gain.

    In Java, optimistic locking means lock-free programming, usually based on the CAS algorithm; the typical example is the atomic classes, which update atomically by spinning on CAS.
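
A minimal sketch of this optimistic, CAS-based style using the atomic classes (the two-thread counter scenario is illustrative):

import java.util.concurrent.atomic.AtomicInteger;

public class CasCounterDemo {
    private static final AtomicInteger counter = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        Runnable increment = () -> {
            for (int i = 0; i < 10_000; i++) {
                // incrementAndGet() spins on CAS internally: read the value,
                // then compareAndSet(old, old + 1) until it succeeds.
                counter.incrementAndGet();
            }
        };
        Thread t1 = new Thread(increment);
        Thread t2 = new Thread(increment);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("counter = " + counter.get());   // always 20000, without any lock
    }
}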

 

  13.2 Pessimistic lock

    A pessimistic lock always assumes the worst case: every time it reads data it assumes someone else will modify it, so it locks on every access, and anyone else who wants the data blocks until they can acquire the lock. For example, Java's built-in synchronization primitive, the synchronized keyword, is implemented as a pessimistic lock.

    Pessimistic locking suits scenarios with very many writes.

    In Java, pessimistic locking means using the various explicit and built-in locks.

 

  13.3 Exclusive lock

    An exclusive lock can be held by only one thread at a time.

    Exclusive locks are implemented on top of AQS; an exclusive lock is obtained by overriding a particular set of its methods.

    synchronized is, of course, an exclusive lock.

 

  13.4 Shared lock

    A shared lock can be held by multiple threads at the same time.

    The shared (read) lock of a read-write lock makes concurrent reads very efficient, while read-write, write-read, and write-write access remain mutually exclusive.

    Shared locks are also implemented on top of AQS, by overriding a different set of methods.

 

  13.5 Mutex lock

    The concrete implementation of a mutex lock in Java is ReentrantLock.

 

  13.6 Read-write lock

    The concrete implementation of a read-write lock in Java is ReadWriteLock.

 

  13.7 Reentrant lock

    A reentrant lock, also called a recursive lock, means that once an outer method of a thread has acquired the lock, inner methods of the same thread automatically hold the lock as well.
    synchronized and ReentrantLock are both implementations of reentrant locks;
    synchronized is the heavyweight lock;
    ReentrantLock is the lightweight lock.

 

  13.8 Fair lock

    A fair lock means that threads acquire the lock in the order in which they requested it.

    For Java's ReentrantLock, a constructor argument specifies whether the lock is fair; the default is an unfair lock. The advantage of an unfair lock is that its throughput is higher than that of a fair lock.

 

  13.9 Unfair lock

    With an unfair lock, threads do not acquire the lock in the order in which they requested it; a later requester may get the lock before an earlier one, which can cause priority inversion or starvation.

    synchronized is also an unfair lock. Because it does not implement thread scheduling through AQS the way ReentrantLock does, there is no way to make it fair.
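
A minimal sketch of the fairness flag on the ReentrantLock constructor mentioned above (the task bodies are illustrative):

import java.util.concurrent.locks.ReentrantLock;

public class FairLockDemo {
    // true  -> fair lock: threads acquire the lock in request order
    // false -> unfair lock (the default): a newly arriving thread may barge in
    private static final ReentrantLock lock = new ReentrantLock(true);

    public static void main(String[] args) {
        Runnable task = () -> {
            for (int i = 0; i < 2; i++) {
                lock.lock();
                try {
                    System.out.println(Thread.currentThread().getName() + " got the lock");
                } finally {
                    lock.unlock();      // always release in finally
                }
            }
        };
        for (int i = 0; i < 3; i++) {
            new Thread(task, "thread-" + i).start();
        }
    }
}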

 

  13.10 Segmented lock

    A segmented lock is really a lock design rather than a specific lock. For ConcurrentHashMap, concurrency is implemented through segmented locking to achieve efficient concurrent operations.

    Take ConcurrentHashMap to explain the meaning and design idea of segmented locking. The segmented lock in ConcurrentHashMap is called a Segment. A Segment is similar in structure to a HashMap (the HashMap implementation of JDK 7 and JDK 8), i.e. it holds an Entry array internally, where each element of the array is a linked list; at the same time a Segment is itself a ReentrantLock (Segment extends ReentrantLock).

    When an element needs to be put, the whole map is not locked; instead the hash code is used first to determine which segment the element belongs to, and only that segment is locked. So when several threads put concurrently, as long as the elements do not fall into the same segment, the inserts are truly parallel.

    However, when computing size(), that is, when global information about the map is needed, all segment locks must be acquired before the count can be taken.

    The purpose of segmented locking is to refine lock granularity: when an operation does not need to update the whole array, only the relevant part of the array is locked.

 

  13.11 Biased lock

    A biased lock means that when a synchronized section is always accessed by a single thread, that thread acquires the lock automatically, lowering the cost of acquiring the lock.

 

  13.12 Lightweight lock

    A lightweight lock means that when a biased lock is accessed by another thread, the biased lock is upgraded to a lightweight lock; other threads then try to acquire the lock by spinning instead of blocking, which improves performance.

 

  13.13 Heavyweight lock

    A heavyweight lock means that when the lock is a lightweight lock and another thread spins but still has not acquired the lock after a certain number of spins, that thread blocks and the lock inflates into a heavyweight lock. A heavyweight lock blocks the threads that request it, which lowers performance.

 

  13.14 Spin lock

    In Java, a spin lock means that a thread trying to acquire the lock does not block immediately; instead it keeps trying to acquire the lock in a loop. The benefit is that it avoids the cost of thread context switches; the drawback is that the loop consumes CPU.
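
A minimal spin-lock sketch built on CAS over an owner-thread AtomicReference (an illustrative toy, not production code):

import java.util.concurrent.atomic.AtomicReference;

public class SpinLockDemo {
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public void lock() {
        Thread current = Thread.currentThread();
        // Spin (busy-wait) until the CAS from null to the current thread succeeds.
        while (!owner.compareAndSet(null, current)) {
            // no blocking, no context switch - just burn CPU cycles while waiting
        }
    }

    public void unlock() {
        Thread current = Thread.currentThread();
        owner.compareAndSet(current, null);   // only the owner can release
    }

    public static void main(String[] args) {
        SpinLockDemo spinLock = new SpinLockDemo();
        Runnable task = () -> {
            spinLock.lock();
            try {
                System.out.println(Thread.currentThread().getName() + " holds the spin lock");
            } finally {
                spinLock.unlock();
            }
        };
        new Thread(task, "t1").start();
        new Thread(task, "t2").start();
    }
}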


14. Understanding the synchronized keyword

Marking a method synchronized means the method is locked: whenever any thread (say thread A) reaches this method, it checks whether some other thread (B, C, D, and so on) is currently executing this method (or another synchronized method of the same object). If so, thread A must wait for the thread currently inside the synchronized method (B, C, or D) to finish before it can run the method; if not, it locks the caller and runs directly. There are two forms of use: synchronized methods and synchronized blocks.

The keyword can lock objects, methods, or code blocks. When it locks a method or a code block, at most one thread executes that code at any one time. When two concurrent threads access the same locked synchronized block of the same object, only one of them can execute at a time; the other must wait for the current thread to finish the block before it can execute it. However, while one thread is inside a locked block of an object, another thread can still access the non-locked code of the same object.

 

In Java, the synchronized keyword is used to control thread synchronization: in a multithreaded environment, it prevents a synchronized code section from being executed by several threads at the same time.

synchronized is a Java keyword that provides a synchronization lock. It can modify the following things:
1. A code block: the modified block is called a synchronized statement block; its scope is the code inside the braces {}, and the lock is the object on which the block is invoked;
2. An instance method: the modified method is called a synchronized method; its scope is the whole method, and the lock is the object on which the method is called;
3. A static method: its scope is the whole static method, and the lock applies to all objects of the class;
4. A class: its scope is the part enclosed in the parentheses after synchronized, and the lock applies to all objects of the class.
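
A minimal sketch of the four usages listed above (the class and method names are illustrative):

public class SyncUsageDemo {
    private int count = 0;

    // 2. Synchronized instance method: locks the calling object (this)
    public synchronized void increment() {
        count++;
    }

    // 1. Synchronized block: locks the object given in parentheses
    public void incrementWithBlock() {
        synchronized (this) {
            count++;
        }
    }

    // 3. Synchronized static method: locks the Class object, shared by all instances
    public static synchronized void staticIncrement() {
        // ...
    }

    // 4. Block synchronized on the class: same lock scope as a static synchronized method
    public void incrementWithClassLock() {
        synchronized (SyncUsageDemo.class) {
            count++;
        }
    }
}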


                
15. The CAS lock-free mechanism
           Introduction to the CAS lock-free mechanism
16. AQS
            Introduction to AQS
17. How ReentrantLock is implemented under the hood

ReentrantLock is implemented mainly with CAS plus a CLH queue. It supports both fair and unfair locks, and the two implementations are similar.

 

  • CAS: Compare and Swap. CAS takes three operands: the memory value V, the expected value A, and the new value B. The memory value V is set to B if and only if the expected value A equals V; otherwise nothing is done. The operation is atomic and is used widely in the lower layers of Java. In Java, CAS is implemented mainly by sun.misc.Unsafe, which invokes CPU-level instructions through JNI.
  • CLH queue: a doubly linked, non-circular list with a head node.

 

 

The basic behaviour of ReentrantLock can be summarized as follows: first try to acquire the lock with CAS. If another thread already holds the lock, join the CLH queue and get parked. When the lock is released, the thread at the head of the CLH queue is woken up and tries CAS again to acquire the lock. At that moment, if:

 

  • Unfair lock: if another thread arrives at the same time and tries to acquire the lock, it may grab the lock first;
  • Fair lock: if another thread arrives at the same time and tries to acquire the lock, it sees that it is not at the head of the queue, so it queues at the tail and the thread at the head of the queue acquires the lock.
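
To make the AQS-plus-CAS idea concrete, here is a minimal sketch of an exclusive lock built on AbstractQueuedSynchronizer; it is a simplified, non-reentrant illustration of the pattern, not ReentrantLock's actual source:

import java.util.concurrent.locks.AbstractQueuedSynchronizer;

public class SimpleMutex {
    // The sync state: 0 = unlocked, 1 = locked. Failed acquirers are parked in AQS's CLH queue.
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int acquires) {
            // CAS the state from 0 to 1; only one thread can win.
            return compareAndSetState(0, 1);
        }

        @Override
        protected boolean tryRelease(int releases) {
            setState(0);   // release; AQS then unparks the thread at the head of the queue
            return true;
        }
    }

    private final Sync sync = new Sync();

    public void lock()   { sync.acquire(1); }
    public void unlock() { sync.release(1); }
}

Calling lock() goes through AQS.acquire(1): tryAcquire is attempted first, and if it fails the thread is enqueued in the CLH queue and parked, exactly as described above.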
        

           
18. Differences between ReentrantLock and synchronized

Similarities:

Both are reentrant locks, both synchronize by acquiring a lock, and both are blocking forms of synchronization: once one thread has obtained the object lock and entered the synchronized block, any other thread that wants to enter that block must block and wait outside it. Blocking and waking threads is relatively expensive (the operating system has to switch back and forth between user mode and kernel mode), although lock optimizations can mitigate the cost.

Differences:

             1. synchronized is a JVM-level keyword and releases the lock automatically when the block or method exits (even on an exception); ReentrantLock is a class in java.util.concurrent.locks and must be released explicitly, normally in a finally block.

             2. ReentrantLock offers features synchronized does not: interruptible lock acquisition (lockInterruptibly), tryLock with a timeout, an optional fairness policy, and multiple Condition objects bound to one lock.

             3. synchronized is implemented with object monitors and benefits from JVM lock optimizations (biased and lightweight locks); ReentrantLock is implemented in Java on top of AQS with CAS.

   Reference: https://blog.csdn.net/qq_40551367/article/details/89414446


19. ReentrantReadWriteLock

  Introduction to read-write locks
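
A minimal sketch of a cache guarded by ReentrantReadWriteLock (the map-based cache is an illustrative example):

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RwCacheDemo {
    private final Map<String, Object> cache = new HashMap<>();
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();

    public Object get(String key) {
        rwLock.readLock().lock();          // many readers may hold the read lock at once
        try {
            return cache.get(key);
        } finally {
            rwLock.readLock().unlock();
        }
    }

    public void put(String key, Object value) {
        rwLock.writeLock().lock();         // the write lock is exclusive
        try {
            cache.put(key, value);
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}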

20. Introduction to BlockingQueue and ConcurrentLinkedQueue

BlockingQueue
  A blocking queue (BlockingQueue) is a queue that supports two additional operations:

    when the queue is empty, a thread trying to take an element waits until the queue becomes non-empty;

    when the queue is full, a thread trying to store an element waits until space becomes available.

  Blocking queues are commonly used in producer-consumer scenarios: producer threads add elements to the queue and consumer threads take elements from it. The blocking queue is the container holding the elements; producers put elements into the container and consumers only take elements out of it.

  In Java, the BlockingQueue interface lives in the java.util.concurrent package and is a thread-safe blocking queue.

  In the concurrent package, BlockingQueue nicely solves the problem of transferring data between multiple threads efficiently and safely; these efficient, thread-safe queue classes make it much easier for us to build high-quality multithreaded programs quickly.

  Common queue orderings include the following two:

    1. First in, first out (FIFO): the element inserted first is also the first to leave the queue, like a real-world queue; to some extent this reflects fairness.

    2. Last in, first out (LIFO): the element inserted last is the first to leave the queue; this ordering prioritizes the most recent events.
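
A minimal producer-consumer sketch using ArrayBlockingQueue to show the two blocking operations (the queue capacity and item count are illustrative):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BlockingQueueDemo {
    public static void main(String[] args) {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(3);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 10; i++) {
                    queue.put(i);              // blocks while the queue is full
                    System.out.println("produced " + i);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 10; i++) {
                    System.out.println("consumed " + queue.take());   // blocks while the queue is empty
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}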


ConcurrentLinkedQueue

ConcurrentLinkedQueue: a queue suitable for high-concurrency scenarios. It achieves high performance under heavy contention in a lock-free way, and it usually performs better than a BlockingQueue.

     It is an unbounded, thread-safe queue based on linked nodes. The elements follow the FIFO principle: the element added earliest is at the head, the most recently added element is at the tail, and null elements are not allowed.


      Important ConcurrentLinkedQueue methods:
      add() and offer() both add an element (in ConcurrentLinkedQueue the two methods behave the same);
      poll() and peek() both look at the head element, the difference being that poll() removes the element while peek() does not.

     When the queue is empty, peek() and poll() return null without blocking.
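
A minimal sketch of the offer/poll/peek behavior described above:

import java.util.concurrent.ConcurrentLinkedQueue;

public class ConcurrentLinkedQueueDemo {
    public static void main(String[] args) {
        ConcurrentLinkedQueue<String> queue = new ConcurrentLinkedQueue<>();

        queue.offer("a");                  // add() and offer() behave the same here
        queue.add("b");

        System.out.println(queue.peek());  // "a" - looks at the head without removing it
        System.out.println(queue.poll());  // "a" - removes and returns the head
        System.out.println(queue.poll());  // "b"
        System.out.println(queue.poll());  // null - empty queue returns null, no blocking
    }
}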

 

 


 


Source: https://www.cnblogs.com/chx9832/p/12592767.html