Tomcat and JVM tuning and optimization

1. Tomcat optimization

<Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
        maxThreads="500" minSpareThreads="20" maxSpareThreads="50" maxIdleTime="60000"/>

<Connector executor="tomcatThreadPool"
               port="8080" protocol="HTTP/1.1"
               URIEncoding="UTF-8"
               connectionTimeout="30000"
               enableLookups="false"
               disableUploadTimeout="false"
               connectionUploadTimeout="150000"
               acceptCount="300"
               keepAliveTimeout="120000"
               maxKeepAliveRequests="1"
               compression="on"
               compressionMinSize="2048"
               compressableMimeType="text/html,text/xml,text/javascript,text/css,text/plain,image/gif,image/jpg,image/png" 
               redirectPort="8443" />

maxThreads: Tomcat uses a thread to process each request it receives. This value is the maximum number of threads Tomcat may create; the default is 200.
minSpareThreads: The minimum number of idle threads, i.e. the number of threads initialized when Tomcat starts. That many idle threads sit waiting even when nobody is using the server; the default is 10.
maxSpareThreads: The maximum number of spare threads. Once the number of idle threads exceeds this value, Tomcat closes the socket threads that are no longer needed.
With the configuration above, the maximum number of threads is 500 (generally enough for one server); set it according to your actual situation. The larger the value, the more memory and CPU are consumed, because the CPU gets worn out by thread context switching and has no capacity left to serve requests. The minimum number of idle threads is 20, and the maximum idle time of a thread is 60 seconds. The maximum number of thread connections actually allowed is also limited by the operating system's kernel parameters, so the final value depends on your own needs and environment. The thread settings can be configured either on the "tomcatThreadPool" Executor or directly on the Connector, but not on both.
URIEncoding: Specifies the URL encoding format used by the Tomcat container. Tomcat does not pick up the language encoding as conveniently as other web server software, so it needs to be specified explicitly.
connectionTimeout: Network connection timeout in milliseconds. A value of 0 means never time out, which is a hidden risk. 30000 milliseconds is usually reasonable and can be adjusted based on what you observe in practice.
enableLookups: Whether to perform a reverse DNS lookup to return the remote host's hostname (true or false). If set to false, the IP address is returned directly; to improve processing capacity it should be set to false.
disableUploadTimeout: Whether to apply a separate timeout mechanism while a file is being uploaded.
connectionUploadTimeout: Upload timeout in milliseconds. File uploads may well take more time, so this can be raised according to your business needs to give the servlet longer to finish its work. It only takes effect in combination with the previous parameter.
acceptCount: The maximum queue length for incoming connection requests when all available request-processing threads are in use. Requests beyond this number are not processed. The default is 100.
keepAliveTimeout: How long (in milliseconds) Tomcat keeps a persistent connection open while waiting for the next request. The default is to use the connectionTimeout value; -1 means no timeout.
maxKeepAliveRequests: The maximum number of requests a connection may serve before the server closes it; connections exceeding this number are closed. 1 disables keep-alive, -1 means unlimited; the default is 100, and it is generally set between 100 and 200.
compression: Whether to GZIP-compress response data. off forbids compression, on allows it (text is compressed), and force compresses in all cases. The default is off. Compressing the data can shrink pages noticeably, generally by about one third, saving bandwidth (a quick way to verify this is sketched after this list).
compressionMinSize: The minimum response size eligible for compression; a response body is compressed only when it is larger than this value. With compression enabled, the default is 2048.
compressableMimeType: The compression types, i.e. which kinds of files get compressed.
noCompressionUserAgents="gozilla, traviata": Compression is not enabled for the listed browsers.
If static and dynamic content have already been separated, static pages, images and similar data are not served through Tomcat at all, so there is no need to configure compression in Tomcat.
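A quick way to confirm that the compression and keep-alive settings above actually take effect is to inspect the response headers from the command line. The sketch below assumes a Tomcat listening on localhost:8080 and an /index.html page larger than compressionMinSize; the host, port and path are assumptions to adjust for your environment.

# GET a page while advertising gzip support and dump only the response headers;
# with compression="on" and a body larger than compressionMinSize, Tomcat should
# reply with "Content-Encoding: gzip" for the configured MIME types.
curl -s -H "Accept-Encoding: gzip" -o /dev/null -D - http://localhost:8080/index.html | grep -i "content-encoding"

# With maxKeepAliveRequests="1" keep-alive is effectively disabled, so the
# response headers should announce that the connection will be closed.
curl -s -o /dev/null -D - http://localhost:8080/index.html | grep -i "^connection"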

2. JVM parameter tuning

CATALINA_OPTS="
-server 
-Xms6000M 
-Xmx6000M 
-Xss512k 
-XX:NewSize=2250M 
-XX:MaxNewSize=2250M 
-XX:PermSize=128M
-XX:MaxPermSize=256M  
-XX:+AggressiveOpts 
-XX:+UseBiasedLocking 
-XX:+DisableExplicitGC 
-XX:+UseParNewGC 
-XX:+UseConcMarkSweepGC 
-XX:MaxTenuringThreshold=31 
-XX:+CMSParallelRemarkEnabled 
-XX:+UseCMSCompactAtFullCollection 
-XX:LargePageSizeInBytes=128m 
-XX:+UseFastAccessorMethods 
-XX:+UseCMSInitiatingOccupancyOnly
-Duser.timezone=Asia/Shanghai 
-Djava.awt.headless=true"

On a 32-bit system the JVM heap cannot exceed 2GB, so Tomcat has to be tuned within that limit and a few extra tricks are needed; on a 64-bit operating system neither the system memory nor the JVM is subject to the 2GB limit.
JMX remote monitoring is also configured here (see the setenv.sh sketch above); the configuration shown is for a 64-bit environment.
The above configuration can basically achieve the following: system response time is improved; JVM garbage collection becomes faster without affecting the system's response rate; the available JVM memory is used to the fullest; and thread blocking is minimized.
Detailed explanation of common JVM parameters:
-server: Must be the first parameter; it performs well when there are multiple CPUs. There is also a -client mode, which starts relatively quickly but has lower runtime performance and memory-management efficiency; it is normally used for client applications or for development and debugging, and it is the default when running Java directly in a 32-bit environment. Server mode starts more slowly but has higher runtime performance and memory-management efficiency, which suits production environments; it is the default on a 64-bit-capable JDK, in which case this parameter can be omitted.
-Xms: The initial Java heap size. -Xms and -Xmx are set to the same value so that the JVM does not repeatedly re-request memory, which causes performance to fluctuate. The default is 1/64 of physical memory; by default (adjustable via the MinHeapFreeRatio parameter), when free heap memory falls below 40% the JVM grows the heap up to the -Xmx limit.
-Xmx: The maximum Java heap size. When the application needs more memory than the maximum heap provides, the JVM reports an out-of-memory error and the application service crashes, so the maximum heap is generally recommended to be 80% of the maximum available memory. To find out the largest value your JVM can use, test with java -Xmx512M -version and gradually increase the 512: if the command runs normally the specified size is usable, otherwise an error message is printed. The default is 1/4 of physical memory; by default (adjustable via the MaxHeapFreeRatio parameter), when free heap memory exceeds 70% the JVM shrinks the heap down to the -Xms limit.
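The probing trick described above can be scripted; a minimal sketch, where the step values and upper bound are arbitrary choices:

# Probe how large an -Xmx this machine/JVM combination accepts: the JVM
# prints an error and exits non-zero when it cannot reserve the requested heap.
for size in 512 1024 2048 4096 6000; do
    if java -Xmx${size}M -version >/dev/null 2>&1; then
        echo "-Xmx${size}M: OK"
    else
        echo "-Xmx${size}M: rejected"
    fi
done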
-Xss: The stack size of each Java thread. Since JDK 5.0 each thread's stack is 1M by default (earlier it was 256K); adjust it according to how much memory the application's threads need. With the same physical memory, reducing this value allows more threads to be created, but the operating system still limits the number of threads per process, so they cannot be created without bound; the empirical limit is around 3000 to 5000. For small applications whose stacks are not very deep, 128k is usually enough; for large applications 256k or 512k is recommended. Going above 1M is generally unwise, because it makes running out of memory more likely. This option has a large performance impact and requires rigorous testing.
-XX:NewSize: Sets the initial size of the young generation.
-XX:MaxNewSize: Sets the maximum size of the young generation.
-XX:PermSize: Sets the initial size of the permanent generation.
-XX:MaxPermSize: Sets the maximum size of the permanent generation. The permanent generation is not part of the heap; the heap only includes the young generation and the old generation.
-XX:+AggressiveOpts: As the name suggests (aggressive), enabling this parameter makes the JVM use whatever newly added optimization techniques exist as the JDK version is upgraded.
-XX:+UseBiasedLocking: Enables an optimized thread-locking scheme. In an app server each HTTP request is a thread; some requests are short and some are long, so requests queue up and threads can even block. This optimized locking lets the app server allocate thread processing in an automatically optimized way.
-XX:+DisableExplicitGC: Ignores explicit calls to System.gc() in program code. Manually calling System.gc() at the end of every operation severely degrades system response time; as with the reasoning behind Xms and Xmx, triggering GC this way makes the JVM's performance fluctuate.
-XX:+UseConcMarkSweepGC: Uses concurrent collection (CMS GC) for the old generation. This feature is only available from JDK 1.5 onward; it triggers collection based on GC estimation and heap occupancy. Frequent GC causes the JVM's performance to swing up and down and so hurts system efficiency; with CMS GC, although the number of collections rises, the response time of each collection is very short. For example, observing an application with jprofiler after switching to CMS GC showed GC triggered many times, with each collection taking only a few milliseconds.
-XX:+UseParNewGC: Uses multi-threaded parallel collection for the young generation, so collection is fast. Note that on recent JVM versions, -XX:+UseParNewGC is enabled automatically when -XX:+UseConcMarkSweepGC is used; if parallel young-generation GC is not wanted, it can be turned off with -XX:-UseParNewGC.
-XX:MaxTenuringThreshold: Sets the maximum object age before promotion. If it is set to 0, young-generation objects enter the old generation directly without passing through the Survivor spaces, which can improve efficiency for applications with a large old generation (applications that need a lot of resident memory). Setting it to a larger value makes young-generation objects be copied back and forth in the Survivor spaces more times, increasing how long an object survives in the young generation and the probability that it is collected there, which reduces the frequency of full GC and can improve service stability to some extent. This parameter is only valid for serial GC. The value used here is an ideal figure obtained from local monitoring with jprofiler and cannot simply be generalized and copied.
-XX:+CMSParallelRemarkEnabled: When UseParNewGC is in use, reduces the remark pause time.
-XX:+UseCMSCompactAtFullCollection: When the concurrent (CMS) collector is used, compacts the live objects during a full collection to prevent memory fragmentation.
-XX:LargePageSizeInBytes: Specifies the memory page size used for the Java heap. It should not be set too large, since that affects the size available to Perm.
-XX:+UseFastAccessorMethods: Converts primitive-type get/set accessor methods into native code, a fast-path optimization for primitive types.
-XX:+UseCMSInitiatingOccupancyOnly: The concurrent collector starts collecting only after the old generation has reached the configured occupancy, instead of relying on the JVM's own estimate.
-Duser.timezone=Asia/Shanghai: Sets the user's time zone.
-Djava.awt.headless=true: Generally placed at the end. It matters when a J2EE project uses charting tools such as jfreechart to write GIF/JPG image streams into web pages. In a Windows environment the app server generally has no problem producing such graphics, but in a Linux/Unix environment an exception is often thrown, so images that display fine in the Windows development environment cannot be displayed under Linux/Unix; adding this parameter avoids that situation.
-Xmn: The memory size of the young generation. Note that this size is eden plus the two survivor spaces, which is different from the New gen shown by jmap -heap. The whole heap size = young generation + old generation + permanent generation. With the heap size held constant, enlarging the young generation shrinks the old generation. This value has a large impact on system performance; Sun officially recommends setting it to 3/8 of the whole heap.
-XX:CMSInitiatingOccupancyFraction: The throughput (parallel) collector starts a garbage collection only when the heap is full, for example when there is not enough space for newly allocated or promoted objects. For the CMS collector such a long wait is undesirable, because the application keeps running (and allocating objects) during a concurrent collection; so, to finish the collection cycle before the application runs out of memory, the CMS collector starts earlier than the parallel collector would. Because different applications have different object-allocation patterns, the JVM collects runtime data about actual allocation (and deallocation) and analyzes it to decide when to start a CMS cycle. Setting this parameter is quite tricky. Basically, as long as (Xmx - Xmn) * (100 - CMSInitiatingOccupancyFraction) / 100 >= Xmn, there will be no promotion failed. For example, with Xmx = 6000 and Xmn = 512, the old generation is Xmx - Xmn = 5488M; CMSInitiatingOccupancyFraction=90 means CMS starts when the old generation is 90% full, so the remaining 10% is 5488 * 10% ≈ 548M. Even if all objects in the 512M young generation were moved into the old generation, 548M would still be enough, so as long as the formula above is satisfied there will be no promotion failure during garbage collection. The setting of this parameter must therefore always be chosen together with Xmn.
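As a quick check of the rule of thumb above, the worked example (Xmx=6000M, Xmn=512M, CMSInitiatingOccupancyFraction=90) can be reproduced with shell arithmetic:

Xmx=6000; Xmn=512; Fraction=90
# Old-generation space still free when CMS kicks in, in MB (prints 548):
echo $(( (Xmx - Xmn) * (100 - Fraction) / 100 ))
# Rule of thumb: this value should be >= Xmn (512M here) to avoid "promotion failed".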
-XX:+CMSIncrementalMode: Turns on incremental mode for the CMS collector. In incremental mode the CMS work is repeatedly suspended to yield completely to application threads, so the collector takes longer to finish a complete collection cycle. Incremental mode should therefore only be used when testing shows that normal CMS cycles disturb the application threads too much. Since modern servers have enough processors to run concurrent collections alongside the application, this is rarely needed, except where CPU resources are scarce.
-XX:NewRatio: The ratio of the young generation (including Eden and the two Survivor spaces) to the old generation (excluding the permanent generation). -XX:NewRatio=4 means the young-to-old ratio is 1:4, i.e. the young generation takes up 1/5 of the whole heap. When Xms = Xmx and Xmn is set, this parameter does not need to be set.
-XX:SurvivorRatio: The size ratio of the Eden space to a Survivor space. Setting it to 8 means the two Survivor spaces (by default the JVM heap's young generation has two Survivor spaces of equal size) and the Eden space are in the ratio 2:8, i.e. one Survivor space takes up 1/10 of the whole young generation.
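Defaults such as NewRatio, SurvivorRatio and MaxTenuringThreshold vary between JVM builds; on a HotSpot JDK the effective values can be inspected (read-only) with -XX:+PrintFlagsFinal, as in this sketch:

# Print the effective values of the generation-sizing flags for the local JVM.
java -XX:+PrintFlagsFinal -version 2>/dev/null | grep -E "NewRatio|SurvivorRatio|MaxTenuringThreshold|NewSize"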
-XX:+UseSerialGC: Selects the serial collector.
-XX:+UseParallelGC: Selects the parallel collector. This setting is only valid for the young generation: the young generation is collected in parallel while the old generation is still collected serially.
-XX:+UseParallelOldGC: Configures the old generation to use parallel collection as well; parallel collection of the old generation is supported from JDK 6.0.
-XX:ConcGCThreads: Called -XX:ParallelCMSThreads in earlier JVM versions; defines the number of threads used by the concurrent CMS phases. For example, a value of 4 means all phases of the CMS cycle run with 4 threads. More threads speed up the concurrent CMS work but add synchronization overhead, so for a specific application you should test whether increasing the CMS thread count actually improves performance. If this flag is not set, the JVM computes a default number of concurrent CMS threads from the parallel collector's -XX:ParallelGCThreads value.
-XX:ParallelGCThreads: The number of threads the parallel collector uses, i.e. how many threads perform garbage collection at the same time. It is recommended to set this equal to the number of CPUs.
-XX:OldSize: Sets the old-generation size allocated at JVM startup, analogous to how -XX:NewSize sets the initial size of the young generation.
The above are some commonly used configuration parameters, and some of them can substitute for one another. The tuning approach has to take Java's garbage-collection mechanism into account: the heap size determines how much time the virtual machine spends collecting garbage and how often it collects. What collection rate is acceptable depends on the application and should be adjusted by analyzing the timing and frequency of actual garbage collections. If the heap is large, full collections are slow but infrequent; if the heap size is aligned closely with the memory the application needs, full collections are fast but more frequent. The purpose of heap sizing is to minimize garbage-collection time so that as many client requests as possible are processed within a given period. When benchmarking, to guarantee the best performance, set the heap large enough that no garbage collection occurs during the entire benchmark run.
Conversely, if the system spends a lot of time collecting garbage, reduce the heap size. A full garbage collection should take no more than 3 to 5 seconds. If garbage collection becomes the bottleneck, specify the generation sizes explicitly, examine the detailed garbage-collection output, and study how the garbage-collection parameters affect performance. When adding processors, remember to add memory as well, because allocation can be done in parallel while garbage collection is not.
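To analyze the timing and frequency of actual garbage collection as suggested above, the standard JDK tools can be pointed at the running Tomcat process. A sketch, assuming a single Tomcat (Bootstrap) process on the machine and a JDK that still ships jmap -heap (JDK 8 or earlier):

# Find the Tomcat JVM's pid (Tomcat's main class shows up in jps as Bootstrap).
PID=$(jps | awk '/Bootstrap/ {print $1}')

# Show the configured and current generation sizes (the "New gen" layout
# mentioned for -Xmn above).
jmap -heap "$PID"

# Sample GC utilisation every 5 seconds: YGC/YGCT and FGC/FGCT are the count
# and accumulated time of young and full collections.
jstat -gcutil "$PID" 5000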

3. The three common kinds of Java memory overflow

(1) java.lang.OutOfMemoryError: Java heap space (JVM heap overflow)

The JVM sets the JVM heap automatically at startup: the initial size (-Xms) is 1/64 of physical memory and the maximum size (-Xmx) cannot exceed physical memory; both can be set with JVM options such as -Xmn, -Xms and -Xmx. The heap size is the sum of the Young Generation and the Tenured Generation. The JVM throws this exception when 98% of the time is spent on GC and less than 2% of the heap is available.

Solution: manually set the size of the JVM heap.

(2) java.lang.OutOfMemoryError: PermGen space (PermGen space overflow)

PermGen space is short for Permanent Generation space, the permanent storage area of memory. Why does it overflow? This memory is mainly used by the JVM to store Class and meta information: when a class is loaded it is placed in the PermGen space. Unlike the heap, where instances live, Sun's GC does not clean the PermGen space while the main program is running, so if your application loads a great many classes, a PermGen space overflow is very likely.

Solution: manually set the MaxPermSize value.

(3) java.lang.StackOverflowError (stack overflow)

The stack overflows because the JVM is still a stack-based virtual machine, just like C and Pascal: function calls are reflected in pushing frames onto and popping them off the stack, and when there are too many "layers" of calls (for example through constructors) the stack overflows. Generally the stack area is much smaller than the heap, because call chains rarely go deeper than a thousand frames; even if each function call needed 1K of space (roughly equivalent to declaring 256 int variables in a C function), the stack would only need about 1MB. The stack size is usually 1 to 2MB.
Recursion in particular should not go too many levels deep, or the stack will easily overflow.

Solution: fix the program.
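For reference, the three solutions above correspond to flags already discussed in section 2; a hedged sketch, with purely illustrative values:

# (1) Java heap space: set/raise the heap explicitly.
CATALINA_OPTS="$CATALINA_OPTS -Xms2048M -Xmx2048M"
# (2) PermGen space (JDK 7 and earlier): raise the permanent-generation ceiling.
CATALINA_OPTS="$CATALINA_OPTS -XX:PermSize=128M -XX:MaxPermSize=256M"
# (3) StackOverflowError: fix the code (e.g. bound the recursion); -Xss only buys
#     a little extra headroom per thread.
CATALINA_OPTS="$CATALINA_OPTS -Xss512k"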
