Nginx, Tomcat, and Redis configuration notes

redis.conf:
1. logfile: set the log file path so Redis writes its log to a known location.
2. maxclients: maximum number of simultaneous client connections; change to 10000.
3. no-appendfsync-on-rewrite: whether to skip fsync while an AOF rewrite or RDB save is in progress; change to yes so that fsync does not block the main process.
4. cluster-node-timeout: cluster node failure-detection timeout; change to 15000 milliseconds to reduce spurious cluster interruptions.
5. slowlog-max-len: length of the slow-query log; change to 512 to make later slow-query troubleshooting easier.
6. hash-max-ziplist-entries: the maximum number of entries for which a hash uses the compact ziplist encoding in memory; default 512, change to 128.
hash-max-ziplist-value: the maximum size in bytes of a value allowed in a ziplist entry; default 64, 1024 recommended.
Together these tune the storage strategy of the hash data structure.
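Put together, the items above correspond to a redis.conf fragment like the following (a sketch only; the log file path is a placeholder, and the directive names assume a Redis version that still uses the ziplist encoding names):

```conf
# Log file path (placeholder -- adjust for your installation)
logfile /var/log/redis/redis.log

# Maximum number of simultaneous client connections
maxclients 10000

# Skip fsync while an AOF rewrite or RDB save is running,
# so the rewrite does not block the main process on fsync
no-appendfsync-on-rewrite yes

# Cluster node failure-detection timeout, in milliseconds
cluster-node-timeout 15000

# Keep up to 512 entries in the slow-query log
slowlog-max-len 512

# Hash encoding thresholds (compact ziplist vs. hashtable)
hash-max-ziplist-entries 128
hash-max-ziplist-value 1024
```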

server.xml:
1. protocol="org.apache.coyote.http11.Http11NioProtocol" sets the connector to the non-blocking NIO mode. Tomcat 7 uses org.apache.coyote.http11.Http11NioProtocol; for Tomcat 8, org.apache.coyote.http11.Http11Nio2Protocol is recommended.
2. connectionTimeout="5000" is the connection wait timeout. The Tomcat default is 20 seconds; lowering it to 5 seconds makes Tomcat break stalled connections sooner.
3. maxThreads="500" is the maximum number of active threads in the Connector's thread pool, default 200. It can be understood as the number of requests Tomcat can process simultaneously; change to 500.
4. minSpareThreads="50" is the minimum number of standby threads, i.e. the minimum number of threads the Connector's internal pool keeps alive. Maintaining a minimum number of threads while relatively idle reduces the overhead of frequently creating and destroying threads.
5. maxConnections="500" is the maximum number of connections the Connector supports. If it is greater than maxThreads, then even though the server accepts new requests, the resulting tasks can only queue for the thread pool; if it is smaller, some pool threads sit idle and waste system resources. It is therefore reasonable to keep it consistent with maxThreads.
6. acceptCount="200" is the length of the queue for incoming connection requests once all request-processing threads are in use.
7. enableLookups="false": when this attribute is true, a call to request.getRemoteHost() performs a DNS query to return the remote client's actual host name. Setting it to false skips the lookup and returns the IP address directly, improving performance.
8. keepAliveTimeout="300000" is the keep-alive connection timeout on the connector, in milliseconds; set to 300 seconds to match nginx, so a connection unused for 300 seconds is closed.
9. maxKeepAliveRequests="500" is the maximum number of requests each keep-alive connection may handle; after 500 uses the connection is closed.
10. processorCache="700" is the number of Processor objects the protocol handler caches to improve performance; -1 means unlimited, and the default is 200. Without Servlet 3.0 asynchronous processing, the value is best kept the same as maxThreads; with Servlet 3.0 asynchronous processing, use the larger of maxThreads and the expected number of concurrent requests.
11. autoDeploy="false" turns off the automatic deployment feature; otherwise Tomcat periodically scans the appBase and xmlBase directories.
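The Connector settings above can be combined into a single element like this (a sketch, not a complete server.xml; the port and redirectPort values shown are the Tomcat defaults, and autoDeploy belongs on the <Host> element rather than the Connector):

```xml
<!-- HTTP connector tuned as described above (Tomcat 7 NIO;
     use Http11Nio2Protocol on Tomcat 8) -->
<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           connectionTimeout="5000"
           maxThreads="500"
           minSpareThreads="50"
           maxConnections="500"
           acceptCount="200"
           enableLookups="false"
           keepAliveTimeout="300000"
           maxKeepAliveRequests="500"
           processorCache="700"
           redirectPort="8443" />

<!-- autoDeploy is set on the Host element -->
<Host name="localhost" appBase="webapps"
      unpackWARs="true" autoDeploy="false">
</Host>
```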

catalina.sh:
1. JAVA_OPTS="-Xms8192m -Xmx8192m -Xss1024K -XX:+UseG1GC -XX:+UseStringDeduplication"
-Xms: initial heap size, the amount of memory the JVM allocates at startup.
-Xmx: maximum heap size. For server-side deployments, -Xms and -Xmx are often set to the same value, which avoids the cost of resizing the heap while the program is running.
-Xss: thread stack size, the amount of memory allocated for each thread; since JDK 1.5 the default is 1 MB.
-XX:+UseG1GC: tells the JVM to use the G1 garbage collector, a server-class collector aimed at multi-core machines with large memory. It maintains high throughput while meeting GC (garbage collection) pause-time targets with high probability. G1 suits workloads that need a large heap (above 6 GB) and have GC latency requirements (stable, predictable pause times below 0.5 seconds).
-XX:+UseStringDeduplication: enables string deduplication; letting identical strings share the same character data reduces the memory footprint of String objects in most cases.
Note: by default the G1 collector triggers collection when heap occupancy reaches 45%, which means that with an 8 GB maximum heap, garbage collection is triggered when about 3.6 GB of the heap is in use. To adjust the occupancy ratio, set the -XX:InitiatingHeapOccupancyPercent parameter.
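In practice these options are set as a shell variable that the Tomcat startup scripts pick up, for example (a sketch; the heap sizes assume a machine with well over 8 GB of RAM, and Tomcat's own documentation suggests placing such settings in bin/setenv.sh rather than editing catalina.sh directly):

```shell
# JVM options for Tomcat: fixed 8 GB heap, 1 MB thread stacks,
# G1 collector with string deduplication
JAVA_OPTS="-Xms8192m -Xmx8192m -Xss1024K -XX:+UseG1GC -XX:+UseStringDeduplication"

# Optional: trigger G1 concurrent cycles earlier than the 45% default
# JAVA_OPTS="$JAVA_OPTS -XX:InitiatingHeapOccupancyPercent=35"
export JAVA_OPTS
```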

nginx.conf:
1. worker_processes 8; the number of nginx worker processes, generally set to the number of CPU cores or twice the number of cores.
2. worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000; CPU affinity for each worker process, binding each process to a different CPU core.
3. worker_rlimit_nofile 65535; the number of file descriptors an nginx process may open. The theoretical value is the maximum open-file count (ulimit -n) divided by the number of nginx processes, but nginx does not distribute requests that evenly, so it is best to keep the value consistent with ulimit -n.
4. use epoll; use the epoll event model, which is highly efficient.
5. worker_connections 65535; the maximum number of client connections a single worker process allows.
6. multi_accept off; tells nginx whether to accept as many connections as possible after receiving a new-connection notification. When set to on, workers handle connections serially: a connection wakes only one worker while the others stay asleep. When set to off, workers handle connections in parallel: a connection wakes all workers, and those that do not obtain the connection go back to sleep. When the server does not have many connections, turning this on reduces the load somewhat; on high-throughput servers, turn it off for efficiency.
7. sendfile on; enables efficient file transfer mode. The sendfile directive specifies whether nginx calls the sendfile() function to output files; for typical web applications, set it to on.
8. tcp_nopush on; effective only when sendfile is on. It prevents network congestion by reducing the number of network segments (the response header is sent together with the beginning of the body, rather than piece by piece).
9. keepalive_requests 500; the maximum number of requests each keep-alive connection may handle; after 500 uses the connection is closed.
10. keepalive_timeout 300s 300s; the keep-alive timeout between nginx and the client, set to 300 seconds to match the upstream Tomcat. The second parameter is the keep-alive time returned to the client in the response header.
11. tcp_nodelay on; prevents data from being delayed in the network, but the parameter is only effective for keep-alive connections.
12. client_header_buffer_size 4k; the buffer size for client request headers, which can be set according to your system's page size. A request header normally does not exceed 1k, but since system paging is generally larger than 1k, the page size is used here. The page size can be obtained with the command getconf PAGESIZE.
13. open_file_cache max=65535 inactive=20s; enables the open-file cache, which is off by default. max specifies the number of cache entries, recommended to match the number of open files; inactive specifies how long after a file is last requested its cache entry is removed.
14. open_file_cache_valid 30s; specifies how often to check the validity of the cached information.
15. open_file_cache_min_uses 1; the minimum number of times a file must be used within the inactive period of the open_file_cache directive for its descriptor to remain cached. In the example above, a file that is not used at least once within the inactive time is removed.
16. client_header_timeout 15; timeout for the request header. We set this somewhat low; if no data is sent within this time, nginx returns a "request time out" error.
17. client_body_timeout 15; timeout for the request body. Likewise set low; if no data is sent within this time, the same error as above is returned.
18. reset_timedout_connection on; tells nginx to close connections from unresponsive clients, which frees the memory those clients occupy.
19. send_timeout 15; timeout for responding to the client, limited to the time between two write operations; if the client performs no activity within this time, nginx closes the connection.
20. server_tokens off; does not make nginx run faster, but it hides the nginx version number on error pages, which is good for security.
21. keepalive 200; the maximum number of idle keep-alive connections to the upstream that each worker process maintains. A rule of thumb: (keepalive count per Tomcat x number of Tomcats) / (number of nginx worker processes x number of nginx instances).
22. proxy_http_version 1.1; makes nginx use HTTP/1.1 when connecting to the Tomcat service; HTTP/1.1 uses keep-alive connections by default.
23. proxy_set_header Connection ""; the Connection request header should be set to empty in keep-alive mode.
24. proxy_read_timeout 65; the timeout for nginx waiting on Tomcat, set to 65 seconds: if Tomcat does not respond within 65 seconds, nginx returns a request-timeout error to the client.
25. proxy_send_timeout 65; the timeout for nginx sending a request to Tomcat, set to 65 seconds: if nginx cannot finish sending the request within 65 seconds, it returns a request-timeout error to the client.
26. proxy_buffer_size 4k; sets the size of the buffer for the first part of the response from the upstream Tomcat, which contains the response header.
27. proxy_buffers 16 32k; specifies the number and size of the buffers for the upstream server's response.
28. proxy_busy_buffers_size 64k; specifies the buffer space that may be busy sending to the client while the response is still being read from the upstream server; typically set to twice the proxy_buffers buffer size.
29. proxy_temp_file_write_size 64k; controls how much data nginx writes to a temporary file at a time; the larger the value, the longer nginx may block.
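The directives above fit together roughly as follows (a sketch only; the upstream name tomcat_backend, the server address, and the listen port are placeholders for your environment):

```nginx
worker_processes 8;
worker_cpu_affinity 00000001 00000010 00000100 00001000
                    00010000 00100000 01000000 10000000;
worker_rlimit_nofile 65535;

events {
    use epoll;
    worker_connections 65535;
    multi_accept off;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_requests 500;
    keepalive_timeout 300s 300s;
    open_file_cache max=65535 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 1;
    client_header_buffer_size 4k;
    client_header_timeout 15;
    client_body_timeout 15;
    reset_timedout_connection on;
    send_timeout 15;
    server_tokens off;

    # Placeholder upstream: a single Tomcat on localhost
    upstream tomcat_backend {
        server 127.0.0.1:8080;
        keepalive 200;
    }

    server {
        listen 80;
        location / {
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_read_timeout 65;
            proxy_send_timeout 65;
            proxy_buffer_size 4k;
            proxy_buffers 16 32k;
            proxy_busy_buffers_size 64k;
            proxy_temp_file_write_size 64k;
            proxy_pass http://tomcat_backend;
        }
    }
}
```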

Origin www.cnblogs.com/chen/p/11242736.html