1. Understanding the thread pool
Under the default configuration, Tomcat creates a dedicated thread pool for each connector (maximum thread count: 200). In most cases you do not need to change this (unless you need to raise the maximum thread count to handle higher load). However, Tomcat uses thread-locals in every worker thread to cache objects such as the PageContext and tag objects. For this reason, you may want Tomcat to be able to shut down idle threads in some situations to reclaim that memory. In addition, because each connector maintains its own thread pool, it becomes difficult to set a sensible upper bound on the total thread count for a given server's capacity. The answer to both problems is a shared executor.
By having all connectors use the same shared executor, you can configure the maximum number of concurrent requests the entire application is expected to carry. An executor also allows the thread pool to grow when busy and shrink when idle. At least in theory.

org.apache.catalina.core.StandardThreadExecutor

The standard executor that Tomcat uses by default is the built-in StandardThreadExecutor.
Configuration documentation: http://tomcat.apache.org/tomcat-6.0-doc/config/executor.html.
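For reference, a shared executor is wired up in server.xml roughly like this. This is a sketch, not the article's own configuration: the name "tomcatThreadPool" and the attribute values are illustrative, though the attributes themselves (maxThreads, minSpareThreads, maxIdleTime, and the connector's executor reference) are the ones described in the documentation above.

```xml
<!-- A shared executor; all values here are illustrative examples. -->
<Executor name="tomcatThreadPool"
          namePrefix="catalina-exec-"
          maxThreads="200"
          minSpareThreads="4"
          maxIdleTime="60000"/>

<!-- A connector that uses the shared executor instead of its own pool. -->
<Connector port="8080" protocol="HTTP/1.1"
           executor="tomcatThreadPool"
           connectionTimeout="20000"/>
```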
Among these configuration options there is one misleadingly named parameter, maxIdleTime. Here are some things you need to know about how the standard executor shuts down idle threads.
Internally, the standard executor uses a java.util.concurrent.ThreadPoolExecutor. It maintains a variable-size pool of worker threads; once a thread finishes a task, it waits on a blocking queue until a new task arrives, or until a configured timeout elapses, at which point it "times out" and is shut down. The key point is that the thread which finished a task first is also the first to be assigned a new task: the thread pool follows a first-in, first-out (FIFO) pattern. Keep this in mind as we examine how it affects Tomcat's executor.
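The grow-and-shrink behavior described above can be observed directly with a plain ThreadPoolExecutor. The sketch below (my own demo, not Tomcat code) runs a burst of 8 concurrent tasks and then lets the pool sit idle past its keep-alive time, which plays the role of maxIdleTime here:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolShrinkDemo {
    // Run a burst of 8 concurrent tasks, then let the pool sit idle.
    // Returns {poolSizeAfterBurst, poolSizeAfterIdle}.
    static int[] burstThenIdle() throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                8, 8,                        // up to 8 worker threads
                200, TimeUnit.MILLISECONDS,  // keep-alive (the "maxIdleTime" analogue)
                new LinkedBlockingQueue<>());
        pool.allowCoreThreadTimeOut(true);   // let even core threads time out when idle

        CountDownLatch done = new CountDownLatch(8);
        for (int i = 0; i < 8; i++) {
            pool.execute(() -> {
                try { Thread.sleep(100); } catch (InterruptedException ignored) { }
                done.countDown();
            });
        }
        done.await();
        int afterBurst = pool.getPoolSize(); // all 8 workers are still alive

        Thread.sleep(1000);                  // sit idle well past the keep-alive time
        int afterIdle = pool.getPoolSize();  // every idle worker has timed out

        pool.shutdown();
        return new int[] { afterBurst, afterIdle };
    }

    public static void main(String[] args) throws InterruptedException {
        int[] sizes = burstThenIdle();
        System.out.println("pool size after burst: " + sizes[0]); // 8
        System.out.println("pool size after idle:  " + sizes[1]); // 0
    }
}
```

With no new tasks arriving, every worker's wait on the queue times out and the pool shrinks to zero. The article's point is that under a trickle of real traffic, the FIFO hand-off keeps resetting each worker's idle clock, so this shrinkage rarely happens in practice.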
maxIdleTime is actually minIdleTime
Because of the Java ThreadPoolExecutor's FIFO behavior, each thread must wait at least maxIdleTime without receiving a new task before it can be shut down. And also because of the FIFO behavior, the thread that has been idle longest is the first to be assigned a new task, so before a thread can be shut down, no request at all may arrive for at least maxIdleTime. The effect is that it is practically impossible to size the executor's pool for the average load (concurrent requests); instead, the pool sizes itself to the rate of incoming requests. That may not sound like much of a difference, but for a web server the impact can be significant.
For example, suppose 40 requests arrive at the same time. The thread pool expands to 40 threads to absorb the load. After a while, requests start arriving only one at a time. Say each request takes 500 ms to execute; then it takes 20 seconds for this single-file stream of requests to cycle through the entire pool once (remember: FIFO). Unless you set maxIdleTime to 20 seconds or less, the pool will hold on to all 40 threads indefinitely, even though concurrency never exceeds 1. But you also don't want to set maxIdleTime too small: that brings the risk of threads being shut down too eagerly.
Conclusions
To get a thread pool whose size tracks the average load rather than the rate of incoming requests, a more suitable executor would follow a last-in, first-out (LIFO) pattern. If the pool preferred to assign new tasks to the most recently idle threads, the server could shut down the longest-idle threads in a far more predictable way during periods of lower load. In the simple example above, where an initial load of 40 settles down to a sustained load of 1, a LIFO thread pool could reasonably shrink to 1 thread roughly maxIdleTime after the load drops. Of course, you will not always need this strategy, but if your goal is to minimize the resources Tomcat holds, the standard executor is unfortunately probably not what you want.