Nginx Parameters and Performance Optimization Configuration

[TOC]


Linux system level

conntrack parameters

In general, set nf_conntrack_max to about 2 million and nf_conntrack_buckets to 1/4 or 1/2 of nf_conntrack_max, so that the hash bucket chains do not grow long enough to hurt performance.

$ cat /proc/sys/net/netfilter/nf_conntrack_buckets
524288

$ cat /proc/sys/net/netfilter/nf_conntrack_max
2097152
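A hedged sketch of making these values persistent (the file name under /etc/sysctl.d/ is an assumption; on older kernels nf_conntrack_buckets is read-only via sysctl and must instead be set through the nf_conntrack module's hashsize parameter):

```conf
# /etc/sysctl.d/90-conntrack.conf (assumed file name)
net.netfilter.nf_conntrack_max = 2097152
# 1/4 of nf_conntrack_max; on older kernels write to
# /sys/module/nf_conntrack/parameters/hashsize instead
net.netfilter.nf_conntrack_buckets = 524288
```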

Backlog queue

  • net.core.somaxconn

    • The maximum length of the accept queue for connections waiting to be accepted by Nginx. If it is too small, the resulting Nginx performance problems can usually be seen in the kernel log.
    • Adjust it together with the backlog parameter of the NGINX listen directive.
  • net.core.netdev_max_backlog

    • The maximum number of packets queued on the input side when the network card receives them faster than the CPU can process them; increasing this value can improve performance on machines with high bandwidth.
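A sketch of how these sysctls pair with the backlog parameter of the listen directive (all values are illustrative, not prescriptive):

```conf
# /etc/sysctl.conf (illustrative values)
net.core.somaxconn = 65535
net.core.netdev_max_backlog = 65535
```

```nginx
# nginx: the listen backlog should not exceed net.core.somaxconn
server {
    listen 80 backlog=65535;
}
```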

File descriptor

A file descriptor is the operating system resource used to represent an open file or connection. Each Nginx connection may use up to two file descriptors: for example, when acting as a reverse proxy, one descriptor for the client connection and one for the connection to the upstream backend.

  • fs.file-max

    • The system-wide maximum number of file descriptors allowed by the Linux kernel
    • cat /proc/sys/fs/file-max
  • nofile

    • The maximum number of file descriptors allowed at the application (process) level, generally set in /etc/security/limits.conf
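A minimal limits.conf sketch, assuming nginx runs as user www as in the main configuration later in this article (values are illustrative):

```conf
# /etc/security/limits.conf (illustrative values)
www  soft  nofile  65535
www  hard  nofile  65535
```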

Ports

  • net.ipv4.ip_local_port_range

    • The range of local ports available for outgoing connections
  • On the load-testing (client) side, when using short-lived connections:

    • net.ipv4.tcp_tw_reuse = 1
      • Enables port reuse: sockets in TIME-WAIT may be reused for new outbound TCP connections. The default is 0, meaning off.
    • net.ipv4.tcp_tw_recycle = 1
      • Enables fast recycling of TIME-WAIT sockets on TCP connections. The default is 0, meaning off. Note that this option is unsafe behind NAT and was removed in Linux 4.12.
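A combined sketch of these settings for the load-testing side (values are illustrative):

```conf
# /etc/sysctl.conf (illustrative values, load-testing client only)
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_tw_reuse = 1
# Unsafe behind NAT; removed in Linux 4.12
net.ipv4.tcp_tw_recycle = 1
```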

Nginx level

Open Files

  • worker_rlimit_nofile 65535;
    • The maximum number of open file descriptors (RLIMIT_NOFILE) for nginx worker processes

Worker Processes

  • worker_processes

    • Usually set to auto, so that there is one Nginx worker process per CPU core
  • worker_connections

    • The maximum number of connections per worker process; for high-traffic sites this is generally raised, e.g. to 655350

Keepalive Connections

  • keepalive_requests

    • The maximum number of requests a client may send over a single keepalive connection
  • keepalive_timeout

    • The maximum time an idle keepalive connection stays open
  • keepalive

    • The number of idle keepalive connections to upstream servers that each worker process keeps open.

For long-lived connections to upstream backends to actually be reused, the following configuration is also required:

proxy_http_version 1.1;
proxy_set_header Connection "";   
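Putting keepalive, proxy_http_version and the Connection header together, a minimal sketch (the upstream name and backend address are hypothetical):

```nginx
upstream backend {
    server 10.0.0.1:8080;   # hypothetical backend
    keepalive 32;           # idle keepalive connections per worker
}

server {
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
```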

Events【multi_accept】

The multi_accept directive is shown below. It defaults to off, meaning an nginx worker process accepts only one new connection at a time; when enabled, a worker process accepts all pending new connections at once:

Syntax:	multi_accept on | off;
Default:	
multi_accept off;
Context:	events
events {
        worker_connections 655350; 
        multi_accept on;
}

However, it is generally not worth enabling: it can lead to uneven distribution of requests across workers, and although overall performance in short-connection scenarios may improve, the gain is small.

Logging

access log

Logging each request consumes CPU and I/O cycles. One way to reduce the impact is to enable access-log buffering: instead of performing a separate write operation for each log entry, Nginx buffers a series of entries and writes them to the file together in a single operation.

Adding the buffer=size parameter to the access_log directive enables this feature: the buffer is written to the log file when it becomes full, and the flush=time parameter periodically flushes the buffered entries to the log file.
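A minimal sketch of a buffered access log (the path, buffer size and flush interval are illustrative):

```nginx
access_log /usr/local/nginx/logs/access.log combined buffer=32k flush=5s;
```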

error log

error_log /usr/local/nginx/logs/nginx_error.log error;

Note that error here means only messages at error level or above will be printed to the error log. The severity levels from warn upward are warn, error, crit, alert, emerg. For usage, refer to the official error_log documentation.

For log level constants in nginx Lua, refer to the Nginx log level constants documentation.

In production, messages at error level and above are generally turned into alerts via WeChat Work (enterprise WeChat).

Sendfile

The operating system's sendfile() system call copies data from one file descriptor to another, and is commonly used to achieve zero-copy, which accelerates TCP data transfer. To enable it in NGINX, include the sendfile directive in the http, server, or location context. NGINX can then write cached or on-disk content to a socket without any context switch to user space, which makes the write very fast and consumes fewer CPU cycles. However, because sendfile() copies data while bypassing user space, the data is not subject to NGINX's regular processing chain and content-modifying filters (such as gzip). When a configuration context contains both the sendfile directive and directives that activate a content-modifying filter, NGINX automatically disables sendfile in that context. The official description of sendfile is quoted below; note that directio automatically disables sendfile:

In this configuration, sendfile() is called with the SF_NODISKIO flag which causes it not to block on disk I/O, but, instead, report back that the data are not in memory. nginx then initiates an asynchronous data load by reading one byte. On the first read, the FreeBSD kernel loads the first 128K bytes of a file into memory, although next reads will only load data in 16K chunks. This can be changed using the read_ahead directive.

Use as follows:

http {
    sendfile on;
}

Limits

Limits are set to prevent clients from consuming too many resources on the Nginx server and causing problems; they are commonly used for graceful degradation.

  • limit_conn and limit_conn_zone

    • Limit the number of client connections Nginx accepts, for example per IP
    • Prevents a single client from opening too many connections and consuming more than its share of resources.
  • limit_rate

    • Limits the rate of the response sent on each client connection; a client that opens multiple connections can reach this rate on each of them
    • Prevents some clients from overloading the system, thereby ensuring a higher quality of service for all clients.
  • limit_req and limit_req_zone

    • Limits the rate at which Nginx processes requests; the benefits are similar to those of limit_rate.
  • max_conns

    • max_conns is a parameter of the server directive inside an upstream block; it sets the maximum number of connections that upstream backend will accept.
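A combined sketch of these directives (zone names, sizes, rates and the backend address are all illustrative):

```nginx
http {
    limit_conn_zone $binary_remote_addr zone=perip:10m;
    limit_req_zone  $binary_remote_addr zone=perip_req:10m rate=10r/s;

    upstream backend {
        server 10.0.0.1:8080 max_conns=1000;  # hypothetical backend
    }

    server {
        limit_conn perip 20;                  # at most 20 connections per IP
        limit_req zone=perip_req burst=20;    # allow short bursts above 10r/s
        limit_rate 500k;                      # per-connection response rate

        location / {
            proxy_pass http://backend;
        }
    }
}
```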

Buffers

  • client_body_buffer_size

    • The buffer size for reading the request body from the client
  • client_header_buffer_size

    • The buffer size for reading the request header from the client
  • client_max_body_size

    • Sets the maximum allowed size of a client request body; if exceeded, 413 Request Entity Too Large is returned
  • large_client_header_buffers

    • Sets the maximum number and size of buffers for reading large client request headers

Recommended values are as follows:

client_body_buffer_size 10K;
client_header_buffer_size 1k;
client_max_body_size 8m;
large_client_header_buffers 4 4k;

Timeout

  • client_header_timeout & client_body_timeout

    • How long nginx waits for the client to send the request header or body after a request starts
  • keepalive_timeout

    • How long an idle keepalive connection stays open before nginx closes it
  • send_timeout

    • Timeout for transmitting a response to the client; if it expires, nginx actively closes the connection

Recommended values are as follows:

client_body_timeout 12;
client_header_timeout 12;
keepalive_timeout 15;
send_timeout 10;

There are also some proxy connection timeouts:

proxy_connect_timeout 60;
proxy_send_timeout 60;
proxy_read_timeout 60;

Other configurations

Main configuration

user  www www;

worker_processes auto;

error_log  /usr/local/nginx/logs/nginx_error.log  error;
pid    /usr/local/nginx/nginx.pid;

worker_rlimit_nofile 65535;

events
{
    use epoll;
    worker_connections 65535;
}


tcp_nodelay

tcp_nodelay disables Nagle's algorithm so that small packets are sent immediately instead of waiting to be coalesced, reducing latency on keepalive connections. Use as follows:

http {
    sendfile on;
    tcp_nodelay on;
}

gzip

gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 1;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

Open File Cache
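This section has no body in the source; a minimal sketch of a typical open_file_cache setup (all values are illustrative):

```nginx
http {
    open_file_cache max=10000 inactive=20s;  # cache descriptors/metadata of up to 10k files
    open_file_cache_valid 30s;               # revalidate cached entries every 30s
    open_file_cache_min_uses 2;              # cache only files accessed at least twice
    open_file_cache_errors on;               # also cache file lookup errors
}
```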

reference

Tuning NGINX for Performance

How to Tune and Optimize Performance of Nginx Web Server



Origin juejin.im/post/5d7a4320f265da03b31bfc14