How Java thread pools work, and the common thread pool types

When should you use a thread pool?
1. When individual tasks take a relatively short time to process
2. When the number of tasks to process is large

Benefits of using a thread pool:
1. It reduces the time and system resources spent creating and destroying threads
2. Without a thread pool, the system may create so many threads that it exhausts memory or thrashes from excessive context switching

How thread pools work:
Why use a thread pool?
Many server applications, such as web servers, database servers, file servers, and mail servers, process large numbers of short tasks arriving from remote sources. A request reaches the server in some way, perhaps through a network protocol (such as HTTP, FTP, or POP), through a JMS queue, or by polling a database. However the requests arrive, the common pattern is the same: each task takes very little time to process, and the number of requests is huge.
A simplistic model for building a server application would be to create a new thread whenever a request arrives and service the request in that new thread. This approach actually works fine for prototyping, but its serious drawbacks become obvious if you try to deploy a server application that runs this way. One drawback of the thread-per-request approach is that creating a new thread for every request is expensive; a server that does so spends more time and system resources creating and destroying threads than it spends processing actual user requests.
Besides the overhead of creating and destroying threads, active threads also consume system resources. Creating too many threads in one JVM can cause the system to run out of memory or to thrash due to excessive context switching. To prevent resource exhaustion, a server application needs some way to limit how many requests it processes at any given moment.
A thread pool offers a solution to both the thread life-cycle overhead problem and the resource-exhaustion problem. By reusing threads across multiple tasks, the thread-creation overhead is amortized over many tasks. As a bonus, because a thread already exists when a request arrives, the delay introduced by thread creation is eliminated: the request can be serviced immediately, which makes the application more responsive. Furthermore, by properly tuning the number of threads in the pool, you can prevent resource exhaustion by forcing any request beyond a certain threshold to wait until a thread is available to process it.
Alternatives to the thread pool
The thread pool is far from the only way to use multithreading within a server application. As mentioned above, it is sometimes perfectly sensible to spawn a new thread for each new task. However, if tasks are created very frequently and their average processing time is short, spawning a new thread per task causes performance problems.
Another common threading model is to assign a single background thread and task queue to a particular class of tasks. AWT and Swing use this model: there is a GUI event thread, and all work that changes the user interface must execute on that thread. However, because there is only one AWT thread, a task running on it may take considerable time to complete, which is undesirable. Therefore, Swing applications often need additional worker threads for long-running, UI-related tasks.
Both the thread-per-task and the single-background-thread approach work well in certain situations. The thread-per-task approach works well with a small number of long-running tasks. The single-background-thread approach works well as long as scheduling predictability is not important, as is the case with low-priority background tasks. However, most server applications process large numbers of short-lived tasks or subtasks, so it is often desirable to have a mechanism that handles such tasks efficiently and at low cost, along with measures for resource management and timing predictability. A thread pool offers these advantages.
The work queue
In terms of how thread pools are actually implemented, the term "thread pool" is somewhat misleading, because the "obvious" implementation of a thread pool does not produce the results we want in most cases. The term "thread pool" predates the Java platform, so it may be an artifact of a less object-oriented era. Still, the term remains in wide use.
We could easily implement a thread pool class in which a client waits for an available thread, hands its task to that thread for execution, and returns the thread to the pool when the task completes, but this approach has several potential drawbacks. For example, what happens when the pool is empty? A caller that tries to hand a task to the pool will find it empty and will block while waiting for a thread to become available. One of the reasons we use background threads in the first place is to keep the submitting thread from blocking. Blocking the caller outright, as the "obvious" thread-pool implementation does, defeats the very problem we are trying to solve.
What we usually want instead is a work queue combined with a fixed group of worker threads, using wait() and notify() to signal waiting threads that new work has arrived. The work queue is typically implemented as a linked list with an associated monitor object. Listing 1 shows an example of a simple combined work queue and thread pool. Although the Thread API imposes no special requirement to use the Runnable interface, this pattern of queuing Runnable objects is a common convention between schedulers and work queues.
Listing 1. A work queue combined with a thread pool
import java.util.LinkedList;

public class WorkQueue {

    private final int nThreads;
    private final PoolWorker[] threads;
    private final LinkedList<Runnable> queue;

    public WorkQueue(int nThreads) {
        this.nThreads = nThreads;
        queue = new LinkedList<Runnable>();
        threads = new PoolWorker[nThreads];
        for (int i = 0; i < nThreads; i++) {
            threads[i] = new PoolWorker();
            threads[i].start();
        }
    }

    public void execute(Runnable r) {
        synchronized (queue) {
            queue.addLast(r);
            queue.notify();
        }
    }

    private class PoolWorker extends Thread {
        public void run() {
            Runnable r;
            while (true) {
                synchronized (queue) {
                    while (queue.isEmpty()) {
                        try {
                            queue.wait();
                        } catch (InterruptedException ignored) {
                        }
                    }
                    r = queue.removeFirst();
                }

                // If we don't catch RuntimeException,
                // the pool could leak threads
                try {
                    r.run();
                } catch (RuntimeException e) {
                    // You might want to log something here
                }
            }
        }
    }
}

You may have noticed that Listing 1 uses notify() rather than notifyAll(). Most experts advise using notifyAll() instead of notify(), and with good reason: using notify() carries subtle risks, and the method is appropriate only under certain conditions. On the other hand, when used properly, notify() has more desirable performance characteristics than notifyAll(); in particular, notify() causes far fewer context switches, which matters in a server application.
The work queue in Listing 1 meets the conditions for safely using notify(). So go ahead and use it in your programs, but exercise great care when using notify() in other situations.
Risks of using thread pools
Although the thread pool is a powerful mechanism for building multithreaded applications, using it is not without risk. An application built with a thread pool is subject to all the concurrency hazards of any other multithreaded application, such as synchronization errors and deadlock, and also to a few risks specific to thread pools, such as pool-related deadlock, resource thrashing, and thread leakage.
Deadlock
Any multithreaded application risks deadlock. A set of processes or threads is deadlocked when each of them is waiting for an event that only another process or thread in the set can cause. The simplest case of deadlock: thread A holds an exclusive lock on object X and is waiting for the lock on object Y, while thread B holds an exclusive lock on object Y and is waiting for the lock on object X. Unless there is some way to break out of waiting for a lock (which Java locking does not support), the deadlocked threads will wait forever.
While any multithreaded program risks deadlock, thread pools introduce another possibility: all of the pool threads may be blocked waiting for the results of tasks that are still sitting in the queue, while those queued tasks cannot run because no thread is free. This can happen when a thread pool is used to implement a simulation involving many interacting objects, the simulated objects send one another queries that execute as queued tasks, and the querying object waits synchronously for the response.
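This pool-induced deadlock can be reproduced with a minimal sketch. The class name, pool size, and timeout below are illustrative, and java.util.concurrent (introduced later in this article) is used for brevity; a real synchronous wait would simply block forever, so a timeout is used here only to make the deadlock observable:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class PoolDeadlockDemo {
    public static void main(String[] args) throws Exception {
        // A pool with a single worker thread: the outer task occupies it
        ExecutorService pool = Executors.newFixedThreadPool(1);
        Future<String> outer = pool.submit(() -> {
            // The inner task can never start: no free thread exists
            Future<String> inner = pool.submit(() -> "inner result");
            try {
                // An unbounded inner.get() would block forever; the timeout
                // only serves to make the deadlock visible
                return inner.get(1, TimeUnit.SECONDS);
            } catch (TimeoutException e) {
                return "deadlock detected";
            }
        });
        System.out.println(outer.get());
        pool.shutdownNow();
    }
}
```

With two or more pool threads the inner task would run and the deadlock would disappear, which is exactly why this failure mode tends to show up only under load, when every pool thread is busy.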
Resource thrashing
One advantage of thread pools is that they generally perform well relative to the alternative scheduling mechanisms (some of which we have already discussed). But this is true only if the thread pool size is tuned properly. Threads consume numerous resources, including memory and other system resources. Besides the memory required for the Thread object itself, each thread requires two execution call stacks, which can be large. In addition, the JVM will likely create a native thread for each Java thread, and these native threads consume additional system resources. Finally, while the scheduling overhead of switching between threads is small, with many threads context switching can seriously hinder program performance.
If the thread pool is too large, the resources consumed by all those threads can significantly affect system performance. Time is wasted switching between threads, and having more threads than you actually need can cause resource starvation, because the pool threads consume resources that could be used more effectively by other tasks. Besides the resources used by the threads themselves, the work done while servicing a request may require additional resources, such as JDBC connections, sockets, or files. These are limited resources too, and too many concurrent requests can cause failures, such as being unable to allocate a JDBC connection.
Concurrency errors
Thread pools and other queuing mechanisms rely on the wait() and notify() methods, which are difficult to use. If coded incorrectly, notifications can be lost, leaving threads idle even though there is work in the queue. Great care must be taken when using these methods; even experts make mistakes with them. Better yet, use an existing implementation that is already known to work, such as the util.concurrent package discussed below, rather than writing your own pool.
Thread leakage
A significant risk in all kinds of thread pools is thread leakage, which happens when a thread is removed from the pool to perform a task but is not returned to the pool when the task completes. One way thread leakage occurs is when a task throws a RuntimeException or an Error. If the pool class does not catch these, the thread simply exits and the size of the thread pool is permanently reduced by one. When this happens enough times, the thread pool eventually ends up empty, and the system stalls because no threads are available to process tasks.
Tasks that stall permanently, such as those that wait forever for a resource that is not guaranteed to become available, or for input from a user who may have gone home, cause the same problem as thread leakage. If a thread is permanently consumed by such a task, it has effectively been removed from the pool. Such tasks should either be given their own thread or be made to wait only for a limited time.
Request overload
It is possible for a server simply to be overwhelmed with requests. In that case, we may not want to queue every incoming request to our work queue, because the tasks waiting in the queue may consume too many system resources and cause resource starvation. What to do in this situation is up to you; in some cases you can simply discard the request and rely on a higher-level protocol to retry it later, or you can refuse the request with a response indicating that the server is temporarily busy.
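One way to implement that "server busy" refusal is a bounded work queue with a rejection policy. The sketch below uses java.util.concurrent's ThreadPoolExecutor; the pool size, queue capacity, and sleep time are illustrative only:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPoolDemo {
    public static void main(String[] args) {
        // 2 workers plus a queue of 2: at most 4 requests are accepted at once
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 2, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(2),
                new ThreadPoolExecutor.AbortPolicy());
        int rejected = 0;
        for (int i = 0; i < 10; i++) {
            try {
                pool.execute(() -> {
                    try {
                        Thread.sleep(500); // simulate a slow request
                    } catch (InterruptedException ignored) {
                    }
                });
            } catch (RejectedExecutionException e) {
                rejected++; // here a real server would send a "busy" response
            }
        }
        System.out.println("rejected: " + rejected);
        pool.shutdownNow();
    }
}
```

Swapping AbortPolicy for CallerRunsPolicy instead runs the overflow task on the submitting thread, which throttles the request source rather than refusing it outright.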
Guidelines for effective use of thread pools
A thread pool can be an extremely effective way to build a server application, as long as you follow a few simple guidelines:
Don't queue tasks that wait synchronously for the results of other tasks. This can lead to the form of deadlock described above, in which all threads are occupied by tasks that are waiting for the results of queued tasks that cannot execute because all the threads are busy.
Be careful when using pooled threads for operations that may take a long time. If the program must wait for something such as an I/O completion, specify a maximum wait time, and then fail the task or requeue it for later execution. This guarantees that some progress is eventually made, by freeing the thread for a task that might complete successfully.
Understand your tasks. To tune the thread pool size effectively, you need to understand the tasks being queued and what they are doing. Are they CPU-bound? Are they I/O-bound? Your answers will affect how you tune your application. If you have different classes of tasks with radically different characteristics, it may make sense to maintain multiple work queues for the different task classes, so each pool can be tuned accordingly.
Sizing the pool
Tuning the size of a thread pool is largely a matter of avoiding two mistakes: having too few threads or too many. Fortunately, for most applications the margin between too few and too many is fairly wide.
Recall that there are two primary advantages to using threads in an application: allowing processing to continue while waiting on slow operations such as I/O, and exploiting multiple processors. In a compute-bound application running on an N-processor machine, adding threads may improve total throughput as the thread count approaches N, but adding threads beyond N will do no good. Indeed, too many threads may even degrade performance, because of the additional context-switching overhead.
The optimal size of a thread pool depends on the number of available processors and on the nature of the tasks on the work queue. On a system with N processors and a single work queue whose tasks are all compute-bound, you will generally achieve maximum CPU utilization with a thread pool of N or N + 1 threads.
For tasks that may wait for I/O to complete (for example, tasks that read an HTTP request from a socket), you will want the pool size to exceed the number of available processors, because not all threads will be working at all times. Using profiling, you can estimate the ratio of waiting time (WT) to service time (ST) for a typical request. If we call this ratio WT/ST, then for an N-processor system you would want approximately N * (1 + WT/ST) threads to keep the processors fully utilized.
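The N * (1 + WT/ST) formula can be expressed directly in code. The class name and the processor counts and timings below are purely illustrative:

```java
public class PoolSizer {
    // N * (1 + WT/ST): processor count times one plus the wait-to-service ratio
    static int poolSize(int nProcessors, double waitTime, double serviceTime) {
        return (int) Math.ceil(nProcessors * (1 + waitTime / serviceTime));
    }

    public static void main(String[] args) {
        // Compute-bound tasks (WT near 0): pool size equals processor count
        System.out.println(poolSize(4, 0, 5));  // 4
        // I/O-heavy tasks: 50 ms of waiting per 5 ms of service time
        System.out.println(poolSize(4, 50, 5)); // 44
    }
}
```

In a real application, Runtime.getRuntime().availableProcessors() would supply the processor count, and the WT/ST ratio would come from profiling rather than guesswork.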
Processor utilization is not the only consideration when tuning the thread pool size. As the pool grows, you may run into limits on the scheduler, on available memory, or on other system resources, such as sockets, open file handles, or database connections.
No need to write your own pool
Doug Lea has written an excellent open-source library of concurrency utilities, util.concurrent, which includes mutexes, semaphores, collections such as queues and hash tables that perform well under concurrent access, and several work-queue implementations. The PooledExecutor class in that package is an efficient, correct, and widely used implementation of a work-queue-based thread pool. Rather than attempting to write your own pool, which is error-prone, consider using some of the utilities in util.concurrent. See Resources for links and further information.
The util.concurrent library also inspired JSR 166, a Java Community Process (JCP) working group that is producing a set of concurrency utilities for inclusion in the Java class library under the java.util.concurrent package, which should be ready for the Java Development Kit 1.5 release.
Thread pools are a useful tool for organizing server applications. They are quite simple in concept, but there are several issues to watch for when implementing and using one, such as deadlock, resource thrashing, and the complexities of wait() and notify(). If you find that your application needs a thread pool, consider using one of the Executor classes from util.concurrent, such as PooledExecutor, rather than writing one from scratch. And if you find yourself creating threads to handle short-lived tasks, you should definitely consider using a thread pool instead.
The thread pools that come with the JDK:
1. newFixedThreadPool creates a thread pool with a specified number of worker threads. A worker thread is created each time a task is submitted, until the number of worker threads reaches the pool's fixed maximum; after that, submitted tasks are placed in the pool's queue.
package test;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadPoolExecutorTest {
    public static void main(String[] args) {
        ExecutorService fixedThreadPool = Executors.newFixedThreadPool(3);
        for (int i = 0; i < 10; i++) {
            final int index = i;
            fixedThreadPool.execute(new Runnable() {
                public void run() {
                    try {
                        System.out.println(index);
                        Thread.sleep(2000);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }
            });
        }
    }
}

2. newCachedThreadPool creates a cached thread pool. This type of pool has the following characteristics:
1) There is effectively no upper limit on the number of worker threads it creates (the actual limit is Integer.MAX_VALUE), so threads can be added to the pool flexibly.
2) If no tasks are submitted to the pool for a while, that is, if a worker thread has been idle for a specified time (60 seconds by default), the worker thread terminates automatically. After workers have terminated, if a new task is submitted, the pool creates new worker threads.
package test;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadPoolExecutorTest {
    public static void main(String[] args) {
        ExecutorService cachedThreadPool = Executors.newCachedThreadPool();
        for (int i = 0; i < 10; i++) {
            final int index = i;
            try {
                Thread.sleep(index * 1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            cachedThreadPool.execute(new Runnable() {
                public void run() {
                    System.out.println(index);
                }
            });
        }
    }
}

3. newSingleThreadExecutor creates a single-threaded Executor: it creates one unique worker thread to execute tasks, and if that thread ends abnormally, another one takes its place, preserving the order of execution. The defining feature of the single worker thread is that tasks are guaranteed to execute sequentially, and no more than one thread is active at any given time.
package test;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadPoolExecutorTest {
    public static void main(String[] args) {
        ExecutorService singleThreadExecutor = Executors.newSingleThreadExecutor();
        for (int i = 0; i < 10; i++) {
            final int index = i;
            singleThreadExecutor.execute(new Runnable() {
                public void run() {
                    try {
                        System.out.println(index);
                        Thread.sleep(2000);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }
            });
        }
    }
}

4. newScheduledThreadPool creates a fixed-size thread pool that also supports delayed and periodic task execution, similar to Timer. (I have not yet studied this pool's internals thoroughly.)
package test;

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ThreadPoolExecutorTest {
    public static void main(String[] args) {
        ScheduledExecutorService scheduledThreadPool = Executors.newScheduledThreadPool(5);
        scheduledThreadPool.schedule(new Runnable() {
            public void run() {
                System.out.println("delay 3 seconds");
            }
        }, 3, TimeUnit.SECONDS);
    }
}

This runs the task after a delay of 3 seconds.
The following sample code executes a task periodically:
package test;

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ThreadPoolExecutorTest {
    public static void main(String[] args) {
        ScheduledExecutorService scheduledThreadPool = Executors.newScheduledThreadPool(5);
        scheduledThreadPool.scheduleAtFixedRate(new Runnable() {
            public void run() {
                System.out.println("delay 1 second, and execute every 3 seconds");
            }
        }, 1, 3, TimeUnit.SECONDS);
    }
}

This runs the task after an initial 1-second delay, and then every 3 seconds thereafter.
Summary:
1. FixedThreadPool is a typical, excellent thread pool: it gives you the efficiency gains of a thread pool and saves the overhead of repeatedly creating threads. However, when the pool is idle, that is, when it has no runnable tasks, it does not release its worker threads, so it continues to occupy some system resources.
2. CachedThreadPool, by contrast, releases its worker threads, and the resources they occupy, when the pool is idle. The trade-off is that when a new task arrives after that, a new worker thread must be created, which incurs some overhead. Also, when using CachedThreadPool, you must take care to limit the number of tasks; otherwise, a large number of threads running concurrently can bring the system down.
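When neither factory method fits, both behaviors can be tuned through ThreadPoolExecutor directly, which is what the Executors factory methods construct under the hood. The sizes and timeout below are illustrative, not recommendations:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CustomPoolDemo {
    public static void main(String[] args) throws InterruptedException {
        // A middle ground between Fixed and Cached: a small core that is
        // kept alive, plus extra threads that time out when idle
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                              // core threads, kept when idle
                4,                              // maximum threads under load
                30, TimeUnit.SECONDS,           // idle timeout for extra threads
                new LinkedBlockingQueue<Runnable>(10));
        for (int i = 0; i < 5; i++) {
            pool.execute(() -> System.out.println("task done"));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Extra threads beyond the core are created only when the queue is full, so with a large queue this pool behaves much like FixedThreadPool; with a tiny (or synchronous) queue it behaves more like CachedThreadPool.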

Reproduced from: https://www.jianshu.com/p/8a1357d538ce


Origin: blog.csdn.net/weixin_34309543/article/details/91205997