Thread pool and connection pool

An HTTP request initiated by a user travels over the network to a web server, which accepts the connection after a TCP three-way handshake. When the server receives the request, it assigns a worker thread from its thread pool to handle it. While processing the business logic, the worker thread borrows a JDBC connection from a database connection pool, performs the CRUD operations, returns the connection, and finally returns the result to the client. This is a scene every web developer faces every day, and it involves two very important components: the thread pool and the connection pool.
Both are resource pools, and their concepts and roles are easy to confuse:
The thread pool manages threads; its goal is to make full use of CPU resources that would otherwise sit idle while threads block on IO.
The connection pool manages connections; its goal is to reuse connections because creating them is expensive.
The connection pool is used from inside the worker thread, so the two pools occupy completely different positions, play completely different roles, and are tuned with completely different parameters.

The role of the thread pool

A multithreaded program can exploit the computing power of multicore processors and improve the throughput and performance of the software. But if threads are not managed and controlled, excessive memory usage and context switching will hurt CPU efficiency. The role of the thread pool is therefore to manage threads. Its basic capabilities include:

  • Creating threads (including setting the thread's name, priority, daemon flag and other attributes, and limiting the total number of threads; see the factory sketch after this list)
  • Accepting tasks and scheduling them onto worker threads
  • Monitoring running threads and capturing their execution results or exceptions
  • Destroying threads
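
As a concrete illustration of the thread-creation part, here is a minimal ThreadFactory sketch; the class name and name prefix are purely illustrative:

```java
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal ThreadFactory sketch: names threads and sets priority and the daemon flag.
public class NamedThreadFactory implements ThreadFactory {
    private final AtomicInteger counter = new AtomicInteger(1);
    private final String prefix;

    public NamedThreadFactory(String prefix) {
        this.prefix = prefix;
    }

    @Override
    public Thread newThread(Runnable r) {
        Thread t = new Thread(r, prefix + counter.getAndIncrement()); // e.g. "demo-pool-1"
        t.setDaemon(false);                  // keep worker threads non-daemon
        t.setPriority(Thread.NORM_PRIORITY); // set priority explicitly
        return t;
    }
}
```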

How the thread pool works

The JDK's thread pool implementation class is ThreadPoolExecutor. Its source-code comment is fairly self-explanatory:
an ExecutorService that executes tasks using pooled thread resources; the main problems it addresses are improving the performance of executing large numbers of asynchronous tasks and managing the resources, including threads, consumed while doing so.
Looking at its member variables, the main components of ThreadPoolExecutor are ctl, workQueue, workers, threadFactory and rejectedExecutionHandler.
ctl: short for "control". It is a single field that encodes two properties: the run state of the thread pool and the number of active worker threads. While the pool operates, it checks both to decide how to behave. ThreadPoolExecutor provides bit-operation helpers to compute the run state and the worker count from the value of ctl.
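
Inside ThreadPoolExecutor the bit packing looks roughly like the following simplified (slightly reordered) excerpt from the OpenJDK source: the high 3 bits hold the run state and the low 29 bits hold the worker count.

```java
// 29 bits for the worker count, 3 high bits for the run state
private static final int COUNT_BITS = Integer.SIZE - 3;
private static final int CAPACITY   = (1 << COUNT_BITS) - 1;   // max worker count

// run states occupy the high-order bits
private static final int RUNNING    = -1 << COUNT_BITS;
private static final int SHUTDOWN   =  0 << COUNT_BITS;
private static final int STOP       =  1 << COUNT_BITS;
private static final int TIDYING    =  2 << COUNT_BITS;
private static final int TERMINATED =  3 << COUNT_BITS;

// both properties packed into one AtomicInteger
private final AtomicInteger ctl = new AtomicInteger(ctlOf(RUNNING, 0));

private static int runStateOf(int c)     { return c & ~CAPACITY; } // state = high bits
private static int workerCountOf(int c)  { return c & CAPACITY; }  // count = low bits
private static int ctlOf(int rs, int wc) { return rs | wc; }       // pack both
```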
workers: the set of worker threads. A task (Runnable) submitted to ThreadPoolExecutor is wrapped in a Worker; the Worker encapsulates both the worker thread and the task that needs to be executed.
workQueue: a blocking queue that holds tasks that have been submitted but not yet executed.
threadFactory: the thread-creation factory. It defines a newThread() method, so threads are created by a custom factory that can, for example, set the thread's name, priority and other attributes. Tomcat's thread pool, for instance, uses TaskThreadFactory to create threads and sets their group, namePrefix, daemon flag and priority.
rejectedExecutionHandler: the handler for rejected tasks. When a task is submitted to the thread pool but cannot be accepted, the rejectedExecutionHandler is used to reject it. There are four common built-in rejection policies (a small example follows the list):

  • Abort policy: throws RejectedExecutionException when a new task is submitted after the pool is saturated. This is the default rejection policy.
  • Discard policy: silently discards the task that was just submitted.
  • CallerRuns policy: when the submission cannot be accepted, the task is executed directly by the thread that submitted it.
  • DiscardOldest policy: discards the oldest task waiting in the queue and retries the submission.
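
A minimal sketch of plugging a rejection policy into a pool; the pool sizes and queue capacity are arbitrary illustrative values:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectionDemo {
    public static void main(String[] args) {
        // 2 core threads, 4 max, a bounded queue of 2 (illustrative sizes only)
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 60, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(2),
                new ThreadPoolExecutor.CallerRunsPolicy()); // degrade to the caller when saturated

        // Built-in alternatives:
        //   ThreadPoolExecutor.AbortPolicy()         -> throws RejectedExecutionException (default)
        //   ThreadPoolExecutor.DiscardPolicy()       -> silently drops the submitted task
        //   ThreadPoolExecutor.DiscardOldestPolicy() -> drops the head of the queue and retries
        pool.shutdown();
    }
}
```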
Having described the composition of the thread pool, the following describes its processing flow:

  • When a task is submitted to the pool, the pool first checks whether the current number of running threads has reached coreSize; if not, it creates a core thread to execute the task.
  • If coreSize has been reached, the pool checks whether the task queue is full; if not, the task is added to the queue to wait for a free thread to execute it.
  • If the task queue is full, the pool checks whether the current number of threads has reached maxSize; if not, a new (non-core) thread is created to execute the task.
  • If maxSize has already been reached, the rejection policy is executed.
It is worth noting that throughout this process the pool keeps checking its own state; if the state is no longer RUNNING, it stops accepting tasks and executes the rejection policy instead. A worked trace of these rules follows.
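
To make the rules above concrete, here is an illustrative trace with coreSize = 2, maxSize = 4 and a bounded queue of capacity 2; the sizes and the sleeping task are arbitrary choices for demonstration:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class SubmitFlowDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 60, TimeUnit.SECONDS, new ArrayBlockingQueue<>(2));

        Runnable longTask = () -> {
            try { Thread.sleep(5_000); } catch (InterruptedException ignored) { }
        };

        // Tasks 1-2: below coreSize, each gets a new core thread.
        // Tasks 3-4: core threads busy, queue not yet full, so they wait in the queue.
        // Tasks 5-6: queue full, pool below maxSize, two extra threads are created.
        // Task 7:    queue full and maxSize reached -> rejected (default AbortPolicy).
        for (int i = 1; i <= 7; i++) {
            try {
                pool.execute(longTask);
                System.out.println("task " + i + " accepted, poolSize=" + pool.getPoolSize()
                        + ", queued=" + pool.getQueue().size());
            } catch (RejectedExecutionException e) {
                System.out.println("task " + i + " rejected");
            }
        }
        pool.shutdownNow();
    }
}
```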

The thread pool also has its own life-cycle states: RUNNING, SHUTDOWN, STOP, TIDYING, TERMINATED.

  • RUNNING: accepts tasks and either executes them or puts them in the queue for execution.
  • SHUTDOWN: no longer accepts new tasks, but tasks already in the queue will still be executed.
  • STOP: no longer accepts new tasks, no longer processes the queue, and interrupts all tasks in progress.
  • TIDYING: all tasks have ended and the worker-thread count is 0; terminated() is called back. The default terminated() is an empty implementation; users can override it to be notified when the pool reaches the TIDYING state (see the sketch after this list).
  • TERMINATED: terminated() has finished executing.
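
A small sketch of overriding terminated() to observe the end of the pool's life cycle; the class name and pool sizes are illustrative:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class MonitoredPool extends ThreadPoolExecutor {
    public MonitoredPool() {
        super(2, 4, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
    }

    @Override
    protected void terminated() {
        super.terminated();
        // Called once all tasks are done and the worker count has dropped to 0.
        System.out.println("pool fully terminated");
    }

    public static void main(String[] args) {
        MonitoredPool pool = new MonitoredPool();
        pool.execute(() -> System.out.println("work"));
        pool.shutdown();   // after the task finishes, terminated() fires
    }
}
```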

Thread pools in practice

Having covered the principles, let's look at how thread pools are applied in practice.
In a real project, the thread pool parameters should be matched to the scenario. The Executors utility class, for example, provides factory methods for several kinds of thread pools (a short usage sketch follows the list):

  • newFixedThreadPool: a fixed number of threads and an unbounded queue. Suitable when the task volume is uneven, memory pressure is not a concern, but the system is sensitive to load.
  • newCachedThreadPool: a queue with no capacity and no limit on the maximum number of threads. Suitable for low-latency, short-task scenarios.
  • newSingleThreadExecutor: a single thread and an unbounded queue. Suitable for asynchronous execution where tasks must run in order.
  • newScheduledThreadPool: uses a DelayedWorkQueue as its task queue; suitable for recurring (periodic) tasks.
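
As a quick illustration of these factory methods (the sizes and the one-second period below are arbitrary):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ExecutorsDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService fixed  = Executors.newFixedThreadPool(4);       // fixed threads, unbounded queue
        ExecutorService cached = Executors.newCachedThreadPool();       // no queue capacity, unbounded threads
        ExecutorService single = Executors.newSingleThreadExecutor();   // one thread, tasks run in order
        ScheduledExecutorService scheduled = Executors.newScheduledThreadPool(2); // DelayedWorkQueue inside

        scheduled.scheduleAtFixedRate(() -> System.out.println("tick"), 0, 1, TimeUnit.SECONDS);
        Thread.sleep(3_000);   // let a few ticks fire

        fixed.shutdown();
        cached.shutdown();
        single.shutdown();
        scheduled.shutdown();
    }
}
```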
Now let's analyze how the Tomcat Connector uses its thread pool:

```java
public void createExecutor() {
    internalExecutor = true;
    TaskQueue taskqueue = new TaskQueue();
    TaskThreadFactory tf = new TaskThreadFactory(getName() + "-exec-", daemon, getThreadPriority());
    executor = new ThreadPoolExecutor(getMinSpareThreads(), getMaxThreads(), 60, TimeUnit.SECONDS, taskqueue, tf);
    taskqueue.setParent((ThreadPoolExecutor) executor);
}
```
  • The minimum number of threads (default): 10
  • The maximum number of threads (default): 200
  • Task queue: TaskQueue. TaskQueue extends LinkedBlockingQueue but overrides offer(), poll() and other methods. The overridden offer() is particularly noteworthy: as long as the number of worker threads has not reached maxSize, the task cannot be inserted into the queue. The effect is that tasks submitted to the pool are handed to worker threads immediately, with new worker threads being created until the maximum is reached; only once the worker-thread count has hit the maximum are tasks added to the queue. A simplified sketch of the idea follows.
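
The idea behind the overridden offer() can be sketched roughly as follows; this is a paraphrase of the behaviour described above, not Tomcat's exact code:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;

// Refuse to queue while the pool can still grow, so the executor keeps
// creating threads up to maxThreads before tasks start to wait in the queue.
public class EagerTaskQueue extends LinkedBlockingQueue<Runnable> {
    private static final long serialVersionUID = 1L;
    private transient ThreadPoolExecutor parent;

    public void setParent(ThreadPoolExecutor parent) {
        this.parent = parent;
    }

    @Override
    public boolean offer(Runnable task) {
        if (parent == null) {
            return super.offer(task);
        }
        // Pool already at its maximum: actually queue the task.
        if (parent.getPoolSize() >= parent.getMaximumPoolSize()) {
            return super.offer(task);
        }
        // Pool can still grow: report "queue full" so the executor spawns a new thread.
        return false;
    }
}
```

Tomcat's actual implementation is more careful (it also looks at how many tasks are already in flight and falls back to queuing when a new thread cannot be created), but the early `return false` is the core trick.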

The role of the connection pool

When we use JDBC to access a database, creating a JDBC connection goes through a whole sequence of steps: establishing the connection (including the TCP three-way handshake), authentication, authorization, resource allocation and initialization. This usually takes 100 ms or more, while a typical CRUD operation takes only about 10-50 ms. If connections are created frequently, the time spent on them is therefore very significant, so we optimize our programs by managing connections: instead of destroying a connection once its work is done, we keep it and hand it to the next worker to be reused. A typical usage pattern is sketched below.
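
A sketch of the usage pattern this enables, assuming a DataSource backed by some pool (the DAO, table and column names are illustrative):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

// Borrow a connection, run the statement, and return the connection by closing it:
// close() on a pooled connection hands it back to the pool instead of tearing down the socket.
public class UserDao {
    private final DataSource dataSource;   // backed by a pool (Druid, HikariCP, ...)

    public UserDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public String findNameById(long id) throws SQLException {
        try (Connection conn = dataSource.getConnection();            // borrow from the pool
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT name FROM user WHERE id = ?")) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString("name") : null;
            }
        }   // try-with-resources closes ps and conn, returning conn to the pool
    }
}
```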
Externally, a connection pool provides:

  • Getting a connection
  • Releasing a connection

Its configuration parameters define:

  • The maximum number of connections
  • The initial number of connections
  • The minimum number of idle connections

Internally, it takes care of:

  • Keeping connections alive
  • Recycling idle connections
  • Availability (validation) checks
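
A hedged configuration sketch mapping these knobs onto Alibaba Druid's DruidDataSource; the numeric values are placeholders and the setter names follow Druid's commonly documented API:

```java
import com.alibaba.druid.pool.DruidDataSource;

public class PoolConfig {
    public static DruidDataSource newDataSource() {
        DruidDataSource ds = new DruidDataSource();
        ds.setUrl("jdbc:mysql://localhost:3306/demo");   // illustrative URL and credentials
        ds.setUsername("demo");
        ds.setPassword("demo");

        ds.setInitialSize(5);      // initial number of connections
        ds.setMinIdle(5);          // minimum number of idle connections kept alive
        ds.setMaxActive(20);       // maximum number of connections

        ds.setValidationQuery("SELECT 1");              // availability check
        ds.setTestWhileIdle(true);                      // validate idle connections
        ds.setTimeBetweenEvictionRunsMillis(60_000);    // how often the idle checker runs
        ds.setMinEvictableIdleTimeMillis(300_000);      // idle time before a connection is recycled
        return ds;
    }
}
```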

How the connection pool works

Druid's connection pool is mainly composed of an array, a ReentrantLock and two conditions, empty and notEmpty.
The array is the container that stores connections. Each time a client thread requests a connection, it takes one from the last position of the array. Creating connections is the responsibility of a dedicated creator thread, and destroying connections is the responsibility of a dedicated destroyer thread.
The client threads, the connection-creator thread and the destroyer thread coordinate their work through the lock and the two conditions. A greatly simplified sketch of this pattern follows.
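
A greatly simplified sketch of that coordination pattern, using one ReentrantLock and two Conditions; this illustrates the mechanism only and is not Druid's actual code:

```java
import java.sql.Connection;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class TinyConnectionHolder {
    private final Deque<Connection> connections = new ArrayDeque<>();
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition empty = lock.newCondition();     // creator thread waits here for demand
    private final Condition notEmpty = lock.newCondition();  // borrowers wait here for a connection

    // Called by client threads to borrow a connection.
    public Connection take() throws InterruptedException {
        lock.lock();
        try {
            while (connections.isEmpty()) {
                empty.signal();      // ask the creator thread to produce a connection
                notEmpty.await();    // wait until one is available
            }
            return connections.pollLast();   // take from the tail of the container
        } finally {
            lock.unlock();
        }
    }

    // Called by the creator thread (or when a borrower releases) to hand back a connection.
    public void put(Connection conn) {
        lock.lock();
        try {
            connections.addLast(conn);
            notEmpty.signal();       // wake one waiting borrower
        } finally {
            lock.unlock();
        }
    }

    // Run by the dedicated creator thread: block until someone needs a connection.
    public void awaitDemand() throws InterruptedException {
        lock.lock();
        try {
            empty.await();
        } finally {
            lock.unlock();
        }
    }
}
```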

Connection pools in practice

Connection pool parameters should be set according to the situation: different services have different traffic profiles and run on machines with different configurations, so the right parameters differ. In practice they need to be determined by load testing the service, observing, and tuning.
The main purpose of the connection pool is connection reuse. If the pool is too small, connections run out and worker threads block waiting for them, dragging down throughput. If the pool is too large, both the client and the server pay an excessive maintenance cost.

How to set maxSize:
Deploy the system on a single node and load test it to find its maximum capacity. During the test, keep increasing the number of concurrent clients, for example starting at 10 and stepping up to 20, 30 and so on. Until the performance bottleneck is reached, TPS rises as client concurrency rises, while response time generally increases only slightly. Once client concurrency passes a certain threshold, TPS stops growing and even declines, and response time jumps; at that point the system has reached its maximum capacity. The number of active connections at that bottleneck gives n (with maxSize temporarily set to a value large enough that it is never hit during the test).
To leave enough headroom, we then double this figure, so maxSize = n * 2. For example, if the bottleneck is reached with n = 20 active connections, set maxSize to 40.
In addition, use n as the alarm threshold in the connection pool's monitoring configuration.

How to set initialSize:
The initial size of the connection pool is typically set to a small value. However, if online traffic is heavy, a freshly started service may not have enough connections, and for a short period many threads will wait for connections to be created; in that case initialSize can be set somewhat larger.
If the service is already running in production, initialSize can be set to the number of connections active during ordinary (off-peak) traffic.

How to set minIdle:
Set it to the number of connections active when the system is in its low-traffic (off-peak) period.


Source: www.cnblogs.com/xuerge/p/xian-cheng-chi-yu-lian-jie-chi.html