Linux environment tuning for Tomcat

Preface

First you need to know the server's memory and CPU. Then set the Java-related environment options according to the server's configuration. You can also check the resource usage of each process.

 

Linux-related command

The free command - memory (the top command also shows memory usage, but its units cannot be changed)

The free command displays the amount of used and free memory in the system, including physical memory, swap space and the kernel buffers/cache.

Command     Note
free        displays memory in KB
free -m     displays memory in MB
free -g     displays memory in GB

The fields in the free output are as follows:

  • Mem: physical memory statistics. If the remaining memory is very small, typically less than 20% of the total, the system can be considered short of physical memory.
  • Swap: usage of the swap partition on disk. If its remaining space is small, watch the current load and memory usage; once used swap is greater than 0, the operating system is short of physical memory and has started to use the disk as memory.
  • total: total physical memory
  • used: memory already allocated (including buffers and cache); part of the cache may not actually be in use
  • free: unallocated memory
  • shared: shared memory
  • buff/cache: memory the system has allocated to buffers and cache
  • available: memory still available to applications
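If you need this value in a script, for example when deciding on JVM heap sizes, a minimal sketch that pulls the available column out of free -m (the column position assumes a reasonably recent procps free; older versions do not print an available column):

# Available memory in MB (7th column of the Mem: line)
free -m | awk '/^Mem:/ {print $7}'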

 

        To check each process's share of system memory, run the command below. It lists the memory usage of every process; the first column is the percentage of memory the process occupies, so you can see which applications use the most memory, which helps with troubleshooting.

# The following command feels a bit better than top: it shows the full path of each command rather than just the short name.

ps -eo pmem,pcpu,rss,vsize,args | sort -k 1 -r | less


 

The command output includes the following columns:

  • %MEM: percentage of memory occupied by the process
  • %CPU: percentage of CPU occupied by the process

 

The top command - memory usage, CPU, processes

The top command shows the system's real-time load, including processes, CPU load, memory usage and so on.

Run top directly: the upper half of the output shows overall CPU and memory usage, and the lower half shows the usage of individual processes.

 

The upper half - this part is also very important

The upper half is mainly the %Cpu(s) line; the meaning of each field is listed below. The memory lines match the free output explained above.

 

  • us: percentage of CPU used by user space
  • sy: percentage of CPU used by kernel space
  • ni: percentage of CPU used by user processes whose priority (nice value) has been changed
  • id: percentage of idle CPU
  • wa: percentage of CPU time spent waiting for I/O
  • hi: percentage of CPU time spent servicing hardware interrupts
  • si: percentage of CPU time spent servicing soft interrupts
  • st: steal time; if you run in a virtualized environment (for example Amazon EC2), steal time is a performance indicator worth watching - a high value means the host machine is in poor condition

 

The lower half - this part is easier

  • PID: process ID
  • USER: process owner
  • PR: priority
  • NI: nice value; a negative value means higher priority, a positive value means lower priority
  • VIRT: total amount of virtual memory used by the process
  • RES: resident physical memory used by the process (not swapped out)
  • SHR: shared memory size
  • S: process status
  • %CPU: percentage of CPU time used since the last update
  • %MEM: percentage of physical memory used by the process
  • TIME+: total CPU time used by the process
  • COMMAND: command name / command line

Other top usage

       After entering top's real-time view, processes are sorted by CPU usage by default. Press Shift+M to sort by memory usage and see which processes currently have the largest memory footprint.

       Inside top, press the f key to enter the field-selection screen, where you can choose which columns to display and which column to sort by; follow the on-screen instructions - columns marked with * are displayed.
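For scripts or one-off captures, top can also be run non-interactively in batch mode. A small sketch (the -o sort option assumes a fairly recent procps-ng top):

# One snapshot of top, sorted by memory usage
top -b -n 1 -o %MEM | head -n 20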

 

Number of CPUs and number of cores

  Note that Netty's default number of worker threads is twice the number of cores: I/O-intensive workloads typically use 2x the core count, while compute-intensive workloads use 1x, to avoid the overhead of thread context switches.

# Total cores = number of physical CPUs x cores per physical CPU
# Total logical CPUs = number of physical CPUs x cores per physical CPU x hyper-threads per core


# Number of physical CPUs
cat /proc/cpuinfo| grep "physical id"| sort| uniq| wc -l


# Number of cores per physical CPU
cat /proc/cpuinfo| grep "cpu cores"| uniq


# Number of logical CPUs
cat /proc/cpuinfo| grep "processor"| wc -l
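As an alternative, lscpu summarizes the same information in one place (a simple sketch; field names can vary slightly between distributions):

# Sockets, cores per socket, threads per core and total logical CPUs
lscpu | grep -E '^(Socket|Core|Thread|CPU\(s\))'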

 

tomcat parameter optimization

 

Set JAVA_OPTS in bin/catalina.sh

Add the JAVA_OPTS line before the cygwin=false line in catalina.sh.

In JDK 1.8, PermSize and MaxPermSize are no longer valid; the permanent generation has been replaced by Metaspace.

As shown by the following example:

JAVA_OPTS="$JAVA_OPTS -server -Xms2048M -Xmx3072M -Xss1024k
-XX:+AggressiveOpts
-XX:+UseParallelGC -XX:+UseBiasedLocking"

 

  • -server:    Must be the first argument; gives the best performance on multi-CPU machines. The alternative, -client, starts faster but has lower runtime performance and memory-management efficiency; it is usually used for client-side development or debugging and is the default in a 32-bit Java environment. Server mode starts more slowly but has higher runtime performance and memory-management efficiency, suits a production environment, and is the default on JDKs with 64-bit capability, so the flag can be omitted there.

  • -Xms:   Initial heap size. Set it equal to the maximum heap to avoid repeated re-allocation, which reduces efficiency; starting at the maximum means the JVM never has to grow the heap, saving time.

  • -Xmx:   Maximum Java heap size. When the application needs more heap than this, the JVM reports an out-of-memory error and the service may crash. It is generally recommended to set the maximum heap to 80% of available memory (some say half of physical memory). To find out how much the JVM can use, run java -Xmx512M -version and gradually increase the value: if the command runs normally, that amount of memory is available; otherwise it prints an error. The default is 1/4 of physical memory; when the free heap exceeds 70% (adjustable via MinHeapFreeRatio), the JVM shrinks the heap down to the -Xms limit.
  • -Xmn:   Size of the young generation. Note: this size is eden + 2 survivor spaces, which differs from the "New gen" shown by jmap -heap. Whole heap = young generation + old generation + permanent generation. With the heap size fixed, enlarging the young generation shrinks the old generation. (This value has a large impact on system performance; Sun officially recommends 3/8 of the whole heap, and it is generally set to 1/3 to 1/4 of Xmx.)
  • -Xss:    Memory used by each thread, i.e. the per-thread stack size. Too small and it overflows easily; too large and it reduces the number of threads that can be created, because total memory is limited. With roughly 1280M of stack memory you can theoretically create about 1000 threads, supporting roughly 500 concurrent accesses (that is, 500 users clicking at the same moment), which is already quite a lot. Since JDK 5.0 the default per-thread stack size is 1M (256K before that). Adjust it according to how much memory the application's threads need: with the same physical memory, reducing this value allows more threads, but the operating system still limits the number of threads per process (experience suggests 3000-5000). For small applications with shallow stacks 128k is usually enough; 256k or 512k is recommended for large applications. Values above 1M easily lead to out-of-memory errors. This option has a large impact on performance and needs rigorous testing.

  • -XX:+AggressiveOpts:   As the name suggests (aggressive), with this flag each JDK upgrade lets the JVM use the latest optimization techniques added in that release (if any).
  • -XX:+UseParallelGC:   Use the parallel collector. This setting is effective only for the young generation; the old generation still uses the serial collector.
  • -XX:+UseBiasedLocking: Enable the biased-locking thread optimization. In an app server every HTTP request is a thread; some requests are short and some are long, so requests queue up and threads can even block. This lock optimization lets the app server tune thread locking automatically.
  • -XX:NewSize: Size of the young generation.
  • -XX:MaxNewSize: Maximum size of the young generation.
  • -XX:PermSize: Size of the permanent generation.
  • -XX:MaxPermSize: Maximum size of the permanent generation. The permanent generation is not part of the heap; the heap contains only the young and old generations.
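Since PermSize and MaxPermSize are ignored on JDK 1.8, a Metaspace-based variant of the example above might look like the following (a sketch only - the sizes are illustrative, not recommendations):

JAVA_OPTS="$JAVA_OPTS -server -Xms2048M -Xmx2048M -Xss512k
-XX:MetaspaceSize=256M -XX:MaxMetaspaceSize=512M
-XX:+UseParallelGC"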

 

Verify in Tomcat whether the settings take effect

  • 1. In the conf/tomcat-users.xml file, add the following before </tomcat-users>:
<role rolename="manager-gui"/>  

# username is the login account, password is the password

<user password="admin" roles="manager-gui" username="tomcat"/>  
  • 2. Modify the webapps/manager/META-INF/context.xml file as follows:

<Valve className="org.apache.catalina.valves.RemoteAddrValve"  
allow="127\.\d+\.\d+\.\d+|::1|0:0:0:0:0:0:0:1|\d+\.\d+\.\d+\.\d+" />  

  • 3. Check and confirm the assigned memory (for example on the Manager status page; a quick command-line check is sketched below).
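From the command line you can also confirm which JVM options the running Tomcat actually picked up (a sketch; jps ships with the JDK, and the grep pattern assumes Tomcat's standard Bootstrap main class):

# List running JVMs with their full JVM arguments; look for the -Xms/-Xmx values
jps -lvm | grep -i bootstrap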

 

 

 

./conf/server.xml Tuning

 

Thread pool (Executor)

Uncomment the default Executor configuration and change it as follows:

<Executor name="tomcatThreadPool" namePrefix="catalina-exec-"  
         maxThreads="150" 
         minSpareThreads="100"   
         prestartminSpareThreads="true" 
         maxQueueSize="100"/>   
  • name: name of the thread pool
  • namePrefix: prefix for thread names
  • maxThreads: maximum number of concurrent threads; the default is 200; 500 to 800 is generally recommended, depending on your hardware and actual business needs
  • minSpareThreads: number of threads created when Tomcat starts; the default is 25
  • prestartminSpareThreads: whether to create minSpareThreads threads when the Executor is initialized; if this is not set to true, minSpareThreads has no effect
  • maxQueueSize: maximum queue length; requests beyond it are rejected

Connector configuration

Modify the Connector configuration as follows:

<Connector port="8080" protocol="HTTP/1.1"  
        connectionTimeout="20000"  
        redirectPort="8443"    
        executor="tomcatThreadPool"  
        enableLookups="false"   
        maxIdleTime="60000"
        acceptCount="100"   
        maxPostSize="10485760" 
        acceptorThreadCount="2"    
        disableUploadTimeout="true"   
        URIEncoding="utf-8"
        keepAliveTimeout="6000"  
        maxKeepAliveRequests="500" />  
  • port: listening port.  
  • protocol: protocol the connector uses.  
  • executor: name of the thread pool used by the connector
  • enableLookups: set to false to disable DNS lookups
  • maxIdleTime: how long a thread may stay idle before it is destroyed; the default is 60000 (one minute), in milliseconds.
  • acceptCount: the number of requests that may be queued when all processing threads are in use; requests beyond this number are refused; the default is 100.
  • maxPostSize: limits the size of POST form/URL request bodies, in bytes; the default is 2097152 (2 MB); 10485760 is 10 MB. To disable the limit, set it to -1.
  • acceptorThreadCount: number of threads used to accept connections; the default is 1. It usually only needs changing on multi-core CPUs; 2 is common for multi-core machines.
  • disableUploadTimeout: allows the servlet container to use a longer connection timeout while a servlet executes, so the servlet has more time to finish; the default is false
  • keepAliveTimeout - how long Tomcat keeps the connection open waiting for the next request. As long as the client keeps sending requests within this time, the connection stays open.
  • maxKeepAliveRequests - maximum number of requests per keep-alive connection; beyond that the connection is closed (Tomcat returns a Connection: close header to the client). maxKeepAliveRequests="1" disables keep-alive; -1 means unlimited; the default is 100, and it is generally set between 100 and 200.
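To see whether these settings roughly match reality under load, you can count the established connections on the Tomcat port and the thread count of the Tomcat process. A sketch (port 8080 and the pgrep pattern are assumptions for a default installation with a single Tomcat instance):

# Roughly count established connections on port 8080
ss -ant | grep ':8080' | grep -c ESTAB
# Thread count of the Tomcat JVM process
ps -o nlwp= -p "$(pgrep -f org.apache.catalina.startup.Bootstrap | head -n 1)"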

 

Enabling compression in Tomcat

      Compression increases the load on Tomcat; it is best to use Nginx + Tomcat or Apache + Tomcat and hand compression over to Nginx/Apache.

      With Tomcat compression, after the client requests a resource, the server compresses the resource file and sends it to the client, and the client's browser decompresses and renders it. Compared with serving plain HTML, CSS, JavaScript and text, this saves roughly 40% of the traffic. More importantly, dynamically generated output - including CGI, PHP, JSP, ASP, Servlet and SHTML pages - can also be compressed, with high compression efficiency.
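Assuming compression has been enabled on the Connector (for example compression="on" together with a suitable compressionMinSize), you can verify it from the command line. A sketch:

# Request a page with gzip accepted and look for a Content-Encoding: gzip response header
curl -s -o /dev/null -D - -H "Accept-Encoding: gzip" http://localhost:8080/ | grep -i content-encoding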

 

Tomcat's IO optimization

       This means changing the run mode of the Tomcat Connector. The Tomcat Connector has three run modes: bio, nio and apr.

  • BIO:  Synchronous blocking IO; each request is handled by its own thread, which carries a relatively large overhead. This is the default mode on Linux for Tomcat 7 and below. Disadvantage: under high concurrency the number of threads is large and resources are wasted.
  • NIO:  Non-blocking IO; Java NIO can handle a large number of requests with a small number of threads. Tomcat 8 uses it by default on Linux. For Tomcat 7 you must change the protocol attribute of the Connector to enable it:
<Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol" 
         connectionTimeout="20000" redirectPort="8443"/> 

 

  • APR:  The big gun - APR, the Apache Portable Runtime libraries, solves the IO blocking problem at the operating-system level. APR is the preferred mode for running Tomcat in highly concurrent applications. Tomcat 7 and Tomcat 8 use it by default on Windows 7 and above. If APR and the Tomcat native library are installed on Linux, Tomcat will support APR directly at startup. (Detailed usage will be written up later when there is time.)
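To confirm which mode the Connector actually started with, look at the ProtocolHandler lines in the startup log. A sketch (the log path assumes a default CATALINA_HOME layout, and the exact wording can vary between Tomcat versions):

# "http-bio-8080", "http-nio-8080" or "http-apr-8080" indicates the mode in use
grep -i "ProtocolHandler" logs/catalina.out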

 

Linux TCP tuning

 

 Linux system optimization - raising the maximum number of TCP connections

   When Linux acts as a server running a highly concurrent TCP socket program, it often happens that new connections can no longer be established once a certain number is reached. In one production environment, after several tests, no new TCP connections could be established once about 1000 connections were open each time - why?

      This is because on Linux, whether you are writing the client or the server program, the number of concurrent TCP connections a single process can handle is limited by the maximum number of files the user process is allowed to open (the system creates a socket handle for every TCP connection, and every socket handle is also a file handle). To raise the maximum number of TCP connections, you must raise the user process's open-file limit.

 

Check the limit on the number of files the current user's processes are allowed to open

ulimit -n 

This means each process of the current user may open at most 1024 files; the default is 1024.

Modify the /etc/security/limits.conf file and add the following at the end:
 

vim /etc/security/limits.conf

Add:

root soft nofile 65535
root hard nofile 65535
* soft nofile 65535
* hard nofile 65535

 

  • The first field specifies which user's open-file limit is being changed (root here); '*' means the limit applies to all users.
  • The second field, soft or hard, indicates whether the soft or the hard limit is being changed.
  • The third field, 65535, is the new limit value, i.e. the maximum number of open files (note that the soft limit must be less than or equal to the hard limit). nofile has an upper bound and cannot be unlimited; 65535 is used as the cap here.

        Check again and you will see the open-file limit has changed to 65535. (Some Linux distributions, such as Ubuntu, do not allow '*' for the root user; on Ubuntu the root entries must be written out explicitly, while other users can use '*'.) The system needs to be rebooted (or the user must log in again) for the change to take effect.
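After logging back in you can re-check the limit, and you can also inspect the limit that actually applies to an already-running process (a sketch; the pgrep pattern assumes a standard Tomcat start):

# Soft limit for the current shell
ulimit -n
# Hard limit for the current shell
ulimit -Hn
# Limit applied to a running Tomcat process
grep "Max open files" /proc/"$(pgrep -f org.apache.catalina.startup.Bootstrap | head -n 1)"/limits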

 

To view the Linux system-level limit on the maximum number of open files, use the following command.

It may differ between servers; Linux calculates it at boot time based on the system's hardware resources.

cat /proc/sys/fs/file-max   # Mine is 584864; this value is computed from the server's hardware

Note: this is the maximum number of files the whole Linux system allows to be open at the same time (the sum over all users), i.e. the Linux system-level hard limit; the open files of all users together should not exceed it. It is normally calculated at boot time from the system's hardware resources and, unless there is a special need, it should not be modified - unless you want to set a user-level open-file limit that exceeds it.
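To compare the limit with current usage, /proc/sys/fs/file-nr shows three numbers: allocated file handles, allocated-but-unused handles, and the system maximum. A quick sketch:

# Columns: allocated file handles, allocated-but-unused, system maximum
cat /proc/sys/fs/file-nr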

 

 Linux system optimization - TCP/IP kernel parameter tuning

Edit /etc/sysctl.conf
Apply the changes with: sysctl -p

net.ipv4.tcp_mem = 196608  262144  393216  
# (For a machine with 4 GB of RAM; TCP connections use at most about 1.6 GB of memory: 393216*4096/1024/1024 = 1.6 GB)
# Memory the kernel allocates to TCP connections, in pages; 1 page = 4096 bytes

net.ipv4.tcp_mem = 524288  699050  1048576  
# (For a machine with 8 GB of RAM; TCP connections use at most about 4 GB of memory)

# Read and write buffer sizes allocated per TCP connection, in bytes
net.ipv4.tcp_rmem = 4096      8192    4194304
net.ipv4.tcp_wmem = 4096      8192    4194304
#                   minimum   default  maximum
# Allocation usually follows the default value; in the example above read and write are 8 KB each, 16 KB in total
# On a server with 1.6 GB of TCP memory, the number of connections it can hold is roughly 1600MB/16KB = 100K (100,000)
# With 4 GB of TCP memory, roughly 4000MB/16KB = 250K (250,000) connections

net.core.somaxconn = 4000
# (Maximum length of the listen queue for each port)
# Also update the runtime value:
# echo 4000 > /proc/sys/net/core/somaxconn  - defines the maximum listen-queue length for every port in the system; a global parameter, default 128

net.ipv4.tcp_syncookies = 1
# Enable SYN cookies. When the SYN backlog overflows, cookies are used to handle the connections, which guards against small SYN flood attacks. Default 0 (off).

net.ipv4.tcp_tw_reuse = 1
# Enable reuse: allow TIME-WAIT sockets to be reused for new TCP connections. Default 0 (off).

net.ipv4.tcp_tw_recycle = 1
# Enable fast recycling of TIME-WAIT sockets in TCP connections. Default 0 (off). (Note: this option causes problems behind NAT and was removed in Linux 4.12.)

net.ipv4.tcp_fin_timeout = 30
# Change the system's default TIMEOUT time (how long a socket stays in FIN-WAIT-2).

net.ipv4.tcp_keepalive_time = 1200  
# How often TCP sends keepalive messages when keepalive is enabled. The default is 2 hours; changed here to 20 minutes.

net.ipv4.ip_local_port_range = 10000 65000  
# Port range used for outbound connections. The default range is small (32768 to 61000); changed here to 10000 to 65000.
# (Note: do not set the lower bound too low, or it may take over ports used by normal services!)

net.ipv4.tcp_max_syn_backlog = 8192
# Length of the SYN queue. The default is 1024; raising it to 8192 allows more connections waiting to be established.

net.ipv4.tcp_max_tw_buckets = 5000
# Maximum number of TIME_WAIT sockets kept at the same time; beyond this number TIME_WAIT sockets are cleared immediately and a warning is printed. The default is 180000; changed here to 5000.

net.ipv4.tcp_max_orphans = 65536
# When orphans reach 32768, "Out of socket memory" is reported; at that point memory use is 32K*64KB = 2048MB = 2GB
# (each orphan socket can use up to 64 KB of memory); in practice it may be less

net.ipv4.tcp_orphan_retries = 1
# Number of retries before an orphan socket is discarded; on heavily loaded web servers a smaller value effectively reduces the number of orphans

net.ipv4.tcp_retries2
# Retransmission count for active TCP connections; beyond this the connection is considered dead and is dropped. Default: 15; 2 or 3 is recommended.

net.ipv4.tcp_synack_retries
# Number of retries in the SYN/ACK stage of the TCP three-way handshake. Default 5; set it to 2-3.

net.core.netdev_max_backlog = 2048
# Queue size for packets received and sent by network devices
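After running sysctl -p you can verify individual values and watch the distribution of TCP connection states, for example to check that TIME-WAIT stays below the configured bucket limit. A sketch:

# Confirm a value was applied
sysctl net.ipv4.tcp_max_tw_buckets
# Count TCP connections by state (ESTAB, TIME-WAIT, ...)
ss -ant | awk 'NR>1 {++s[$1]} END {for (k in s) print k, s[k]}'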
