Java Concurrency Basics Summary

Concurrency is the ability to run multiple programs, or multiple parts of one program, at the same time. If a time-consuming task can be run asynchronously or in parallel, the throughput and responsiveness of the whole program improve significantly. Modern PCs have multiple CPUs, or multiple cores within one CPU, and making good use of those cores is key to scaling a large application.

Basic thread usage

  There are two basic ways to supply the code that a thread executes: one is to create a subclass of Thread and override its run() method, and the other is to implement the Runnable interface and pass the instance to a Thread. Implementing Callable is a third option: combined with Future, a Callable can return a result after the task finishes, whereas Runnable and Thread cannot provide a result of the task.

public class ThreadMain {
    public static void main(String[] args) {
        MyThread myThread = new MyThread();
        new Thread(myThread).start();

        new MyThread2().start();
    }
}

// The first way: implement the Runnable interface
class MyThread implements Runnable {
    @Override
    public void run() {
        System.out.println("MyThread run...");
    }
}

// The second way, inherit the Thread class, rewrite the run() method 
class MyThread2 extends Thread {
    @Override
    public void run() {
        System.out.println("MyThread2 run...");
    }
}

  The start() method returns immediately once the thread has been started; it does not wait for run() to finish. It is as if run() were executing on another CPU in parallel.

Note: a common mistake when creating and running a thread is to call the thread's run() method instead of its start() method, as follows:

Thread newThread = new Thread(new MyRunnable());
newThread.run();  // should be newThread.start();

  At first nothing seems wrong, because run() is called just as intended. In fact, however, run() is not executed by the newly created thread but by the current thread, that is, the thread executing the two lines above. For the new thread to execute run(), its start() method must be called.
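
  To make the difference visible, here is a minimal sketch (the class name StartVsRun and the thread name "worker" are illustrative, not part of the original example): calling run() executes the task on the calling thread, while start() runs it on the newly created thread.

public class StartVsRun {
    public static void main(String[] args) {
        Runnable task = () ->
                System.out.println("executed by: " + Thread.currentThread().getName());

        Thread t = new Thread(task, "worker");
        t.run();    // runs on the calling thread: prints "executed by: main"
        t.start();  // runs on the new thread:     prints "executed by: worker"
    }
}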

Combining Callable with Future makes it possible to obtain a return value after the task has executed:

import java.util.concurrent.*;

public class CallMain {
    public static void main(String[] args) throws Exception {
        ExecutorService exec = Executors.newSingleThreadExecutor();
        Future<String> future = exec.submit(new CallTask());

        System.out.println(future.get());  // get() blocks until the result is available
        exec.shutdown();
    }
}

class CallTask implements Callable<String> {
    @Override
    public String call() {
        return "hello";
    }
}

Setting a name for a thread:

MyTask myTask = new MyTask();
Thread thread = new Thread(myTask, "myTask thread");

thread.start();
System.out.println(thread.getName());

  When creating a thread, you can give it a name. This helps to distinguish different threads.
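
  As a further illustrative sketch (the class name NamedTask is hypothetical), a task can also read the name of the thread executing it via Thread.currentThread(), which is convenient for telling threads apart in log output.

class NamedTask implements Runnable {
    @Override
    public void run() {
        // Thread.currentThread() returns the thread that is executing this code
        System.out.println("running in: " + Thread.currentThread().getName());
    }
}

// new Thread(new NamedTask(), "myTask thread").start();  // prints "running in: myTask thread"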

 

volatile

  In concurrent multithreaded programming both synchronized and volatile play important roles. volatile is a lightweight form of synchronized: in multiprocessor development it guarantees the "visibility" of shared variables. Visibility means that when one thread modifies a shared variable, another thread can read the modified value. In some cases its overhead is lower than that of synchronized, but volatile cannot guarantee that operations on the variable are atomic.

  When a write is performed on a volatile variable, a lock-prefixed instruction appears in the generated assembly. On a multi-core system this lock instruction has two effects:

  • It writes the current CPU's cache line back to system memory.
  • This write-back invalidates the copies of that address cached by other CPUs.

  Multiple CPUs follow a cache-coherence protocol: each CPU snoops the data broadcast on the bus to check whether its own cached value has gone stale. When it detects that the memory address behind one of its cache lines has been modified, it marks that cache line invalid, and the next operation on the data re-reads it from system memory. For more on volatile, see the article 深入分析Volatile的实现原理 (an in-depth analysis of how volatile is implemented).
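
  A minimal sketch of the visibility guarantee (the class name VolatileFlag and the timing are illustrative): without volatile on the flag, the worker thread could keep reading a stale cached value and might never stop.

public class VolatileFlag {
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // spin until the write to 'running' becomes visible to this thread
            }
            System.out.println("worker stopped");
        });
        worker.start();

        Thread.sleep(1000);
        running = false;  // volatile write: other threads will see the new value
        worker.join();
    }
}

  Note that volatile only provides visibility here; a compound operation such as count++ on a volatile variable is still not atomic and would need synchronized or an atomic class.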

 

synchronized

  synchronized has long been a veteran of concurrent programming in Java, and many people call it a heavyweight lock. However, after the various optimizations made to synchronized in Java SE 1.6, in some cases it is not that heavy anymore.

  Every object in Java can act as a lock. When a thread tries to enter a synchronized block, it must first acquire the lock, and it must release the lock when it exits the block or throws an exception. Which object is the lock depends on the form of synchronization (see the sketch after this list):

  • For a synchronized instance method, the lock is the current instance.
  • For a static synchronized method, the lock is the Class object of the current class.
  • For a synchronized block, the lock is the object written in the synchronized(...) parentheses.
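
  A small sketch of the three cases above (the class Counter is illustrative):

class Counter {
    private static int staticCount;
    private int count;

    // Synchronized instance method: the lock is the current instance (this)
    public synchronized void increment() {
        count++;
    }

    // Static synchronized method: the lock is Counter.class
    public static synchronized void incrementStatic() {
        staticCount++;
    }

    // Synchronized block: the lock is the object named in the parentheses
    public void incrementWithBlock() {
        synchronized (this) {
            count++;
        }
    }
}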

  The synchronized keyword is not inherited: a synchronized method in a base class is not automatically synchronized when overridden in a subclass. Since every Java object can act as a lock, where is the lock stored? It lives in the Java object header: for an array object the virtual machine uses 3 words to store the object header, for a non-array object 2 words. For more on synchronized, see Java SE1.6中的Synchronized (Synchronized in Java SE 1.6).

 

Thread pools

  A thread pool manages its worker threads and contains a queue of tasks waiting to be executed. The task queue is a collection of Runnable objects; the worker threads take Runnable objects from the queue and execute them.

ExecutorService executor  = Executors.newCachedThreadPool();
for (int i = 0; i < 5; i++) {
    executor.execute(new MyThread2());
}
executor.shutdown();

Java provides four kinds of thread pools through Executors:

  • newCachedThreadPool: creates a cached thread pool. If no idle thread is available for a new task, a new thread is created; threads that stay idle beyond a certain time are reclaimed.
  • newFixedThreadPool: creates a thread pool with a fixed number of threads.
  • newSingleThreadExecutor: creates a single-threaded pool that uses one thread to execute all tasks, guaranteeing FIFO execution order.
  • newScheduledThreadPool: creates a fixed-size thread pool that supports delayed and periodic task execution.

  Under the hood, all of the thread pools above are created through ThreadPoolExecutor.
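
  Since newScheduledThreadPool is the one factory whose usage differs from the plain execute/submit pattern shown earlier, here is a hedged sketch (the class name ScheduledDemo, the pool size, the delays, and the task body are illustrative):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ScheduledDemo {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);

        // Run the task after an initial 1-second delay and then every 2 seconds
        scheduler.scheduleAtFixedRate(
                () -> System.out.println("tick on " + Thread.currentThread().getName()),
                1, 2, TimeUnit.SECONDS);
        // The scheduler keeps running until shutdown() is called
    }
}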

ThreadPoolExecutor(int corePoolSize, int maximumPoolSize, long keepAliveTime, TimeUnit unit, BlockingQueue<Runnable> workQueue)
  • corePoolSize (basic pool size): when a task is submitted, the pool creates a new thread to run it even if other idle core threads could handle it; this stops once the number of threads reaches the basic pool size. If the pool's prestartAllCoreThreads() method is called, all core threads are created and started up front.
  • maximumPoolSize (maximum pool size): the maximum number of threads the pool may create. If the queue is full and the number of existing threads is below this maximum, the pool creates additional threads to run tasks. Note that this parameter has no effect when an unbounded task queue is used.
  • keepAliveTime (idle keep-alive time): how long an idle worker thread is kept alive. If there are many tasks and each task runs only briefly, increasing this value improves thread reuse.
  • TimeUnit (unit of the keep-alive time): the available units are days (DAYS), hours (HOURS), minutes (MINUTES), milliseconds (MILLISECONDS), microseconds (MICROSECONDS, one thousandth of a millisecond), and nanoseconds (NANOSECONDS, one thousandth of a microsecond).
  • workQueue (task queue): a blocking queue that holds tasks waiting to be executed. The blocking queues that can be chosen include the following (a construction sketch follows the list):

 

    • ArrayBlockingQueue: a bounded blocking queue backed by an array, which orders elements FIFO (first in, first out).
    • LinkedBlockingQueue: a blocking queue backed by a linked list. It also orders elements FIFO, and its throughput is usually higher than that of ArrayBlockingQueue. The static factory method Executors.newFixedThreadPool() uses this queue.
    • SynchronousQueue: a blocking queue that stores no elements. Each insert operation must wait until another thread performs a corresponding remove, otherwise the insert blocks. Its throughput is usually higher than LinkedBlockingQueue's, and it is used by the static factory method Executors.newCachedThreadPool().
    • PriorityBlockingQueue: an unbounded blocking queue that orders elements by priority.
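
  Putting the parameters together, here is a hedged sketch of constructing a ThreadPoolExecutor directly (the class name PoolConfigDemo, the pool sizes, keep-alive time, and queue capacity are illustrative choices, not recommendations):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolConfigDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                              // corePoolSize
                4,                              // maximumPoolSize
                60L, TimeUnit.SECONDS,          // keepAliveTime and its unit
                new ArrayBlockingQueue<>(10));  // bounded FIFO work queue

        for (int i = 0; i < 8; i++) {
            final int id = i;
            pool.execute(() -> System.out.println(
                    "task " + id + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown();
    }
}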

When a new task is submitted to the thread pool, the processing flow is as follows:

  1. First, check whether the basic (core) pool is full, i.e. whether the number of threads has reached corePoolSize. If not, create a worker thread to execute the task; if it is full, go to the next step.
  2. Next, check whether the work queue is full. If not, add the new task to the work queue; if it is full, go to the next step.
  3. Finally, check whether the whole pool is full, i.e. whether the number of threads has reached maximumPoolSize. If not, create a new worker thread to execute the task; if it is full, hand the task over to the saturation policy.
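
  This flow can be observed with a deliberately tiny pool. The sketch below uses illustrative sizes (core 1, maximum 2, queue capacity 1), so the fourth concurrent task overflows both the queue and maximumPoolSize and is rejected by the default saturation policy, AbortPolicy, which throws RejectedExecutionException.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class SaturationDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2, 0L, TimeUnit.SECONDS, new ArrayBlockingQueue<>(1));

        Runnable slow = () -> {
            try { Thread.sleep(2000); } catch (InterruptedException ignored) { }
        };

        for (int i = 1; i <= 4; i++) {
            try {
                pool.execute(slow);
                System.out.println("task " + i + " accepted");
            } catch (RejectedExecutionException e) {
                System.out.println("task " + i + " rejected");  // expected for task 4
            }
        }
        pool.shutdown();
    }
}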

 
