Nginx performance optimization: this is all you need!

1. Number of Nginx worker processes

The number of worker processes is generally set to the number of CPU cores, or cores x 2. If you do not know the number of CPU cores, run top and press 1 to see it, or check the /proc/cpuinfo file: `grep ^processor /proc/cpuinfo | wc -l`.
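As a quick check, the core count can be read in several equivalent ways (a sketch; these are standard Linux commands):

```shell
# Number of logical CPU cores, three ways
grep -c ^processor /proc/cpuinfo
nproc
getconf _NPROCESSORS_ONLN
```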

[root@lx~]# vi /usr/local/nginx1.10/conf/nginx.conf
worker_processes 4;
[root@lx~]# /usr/local/nginx1.10/sbin/nginx -s reload
[root@lx~]# ps -aux | grep nginx |grep -v grep
root 9834 0.0 0.0 47556 1948 ?     Ss 22:36 0:00 nginx: master process nginx
www 10135 0.0 0.0 50088 2004 ?       S   22:58 0:00 nginx: worker process
www 10136 0.0 0.0 50088 2004 ?       S   22:58 0:00 nginx: worker process
www 10137 0.0 0.0 50088 2004 ?       S   22:58 0:00 nginx: worker process
www 10138 0.0 0.0 50088 2004 ?       S   22:58 0:00 nginx: worker process

2. Nginx CPU affinity

For example, 4-core configuration:

worker_processes 4;
worker_cpu_affinity 0001 0010 0100 1000;

For example, 8-core configuration:

worker_processes 8;
worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;

Open at most eight worker_processes; beyond 8, performance does not improve and stability decreases, so eight processes are enough.
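On newer nginx the binding can be delegated to nginx itself; a minimal sketch, assuming nginx 1.9.10 or later:

```nginx
worker_processes auto;     # one worker per CPU core
worker_cpu_affinity auto;  # bind each worker to its own core (nginx >= 1.9.10)
```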


3. Maximum number of open files

worker_rlimit_nofile 65535;

This directive sets the maximum number of file descriptors an nginx process may open. In theory the value would be the maximum number of open files (ulimit -n) divided by the number of nginx processes, but nginx does not distribute requests that evenly, so it is best to keep the value consistent with ulimit -n.

Note: resource limits can be configured in /etc/security/limits.conf, per user (root, a specific user, etc.) or with * for all users:

*   soft nofile   65535
*   hard nofile   65535

Log out and log back in for the change to take effect, then verify it with ulimit -n.
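To confirm the limit actually in force for a running process, /proc can be inspected directly (a sketch; `self` here stands in for an nginx worker's PID, which is an assumption you would fill in):

```shell
# Limit for the current shell session
ulimit -n
# Limit in force for any process (replace "self" with a worker PID)
grep 'open files' /proc/self/limits
```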


4. Nginx event-handling model

events {
  use epoll;
  worker_connections 65535;
  multi_accept on;
}

nginx uses the epoll event model, which is highly efficient.
worker_connections sets the maximum number of client connections a single worker process may hold. The value is generally chosen according to server performance and memory; the actual maximum number of connections is worker_connections multiplied by the number of worker processes.

In practice, filling in 65535 is enough; a site whose concurrency actually reaches such a number would already count as a major site!

multi_accept tells nginx to accept as many pending connections as possible each time it is notified of a new connection; the default is off, meaning a worker accepts only one connection per notification. The serial-versus-parallel behavior often described alongside it is actually governed by accept_mutex: with accept_mutex on, workers handle new connections serially, so only one worker is woken per connection while the others keep sleeping; with accept_mutex off, all workers are woken in parallel and race for the connection, and those that lose go back to sleep. When your server has few connections, serialized accept reduces load; on a high-throughput server, turning the mutex off can improve efficiency.


5. Enable efficient file transmission

http {
  include mime.types;
  default_type application/octet-stream;
  ……

  sendfile on;
  tcp_nopush on;
  ……
}
  • include mime.types: maps file extensions to media (MIME) types; the include directive simply inlines the contents of another file into the current one.

  • default_type application/octet-stream: the default media type when nothing else matches.

  • sendfile on: enables efficient file transfer; the sendfile directive specifies whether nginx uses the sendfile() system call to output files. Set it to on for typical web serving; for applications with heavy disk I/O, such as download services, it may be set to off to balance disk and network I/O and reduce system load. Note: if images are not displayed properly, try setting this to off.

  • tcp_nopush on: only effective when sendfile is on. It prevents network congestion by reducing the number of packets sent: the response headers are transmitted together with the beginning of the body, rather than one piece at a time.

6. Connection timeouts

The main purpose is to protect server resources (CPU and memory) by controlling the number of connections, because even establishing a connection consumes resources.

keepalive_timeout 60;
tcp_nodelay on;
client_header_buffer_size 4k;
open_file_cache max=102400 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 1;
client_header_timeout 15;
client_body_timeout 15;
reset_timedout_connection on;
send_timeout 15;
server_tokens off;
client_max_body_size 10m;
  • keepalive_timeout: how long an idle keep-alive connection to a client is kept open; past this time the server closes the connection.

  • tcp_nodelay: also helps avoid network delay, but it only takes effect for keep-alive connections.

  • client_header_buffer_size 4k: buffer size for the client request header. This can be set according to your system's page size; a request header is normally no larger than 1k, but since system page sizes are generally at least 1k, the page size is used here. The page size can be obtained with the command getconf PAGESIZE.

  • open_file_cache max=102400 inactive=20s: enables the open-file cache, which is off by default. max specifies the number of cache entries (recommended to match the number of open files); inactive is how long a file may go unrequested before its cache entry is deleted.

  • open_file_cache_valid 30s: how often to check the validity of cached open-file information.

  • open_file_cache_min_uses 1: the minimum number of times a file must be used within the inactive period of the open_file_cache directive for its descriptor to stay open in the cache. As in the example above, if a file is not used within the inactive time, it is removed.

  • client_header_timeout: timeout for reading the request header. This can be set fairly low; if the client sends no data within this time, nginx returns a "request time out" error.

  • client_body_timeout: timeout for reading the request body. It can also be set fairly low; if no data is sent within this time, the same error as above is returned.

  • reset_timedout_connection: tells nginx to close connections from clients that stop responding, freeing the memory they occupy.

  • send_timeout: timeout for responses to the client. It is measured between two successive write operations; if the client shows no activity within this time, nginx closes the connection.

  • server_tokens: does not make nginx run faster, but it hides the nginx version number on error pages, which is good for security.

  • client_max_body_size: limits the size of uploaded files.
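The page size mentioned for client_header_buffer_size can be checked directly (a sketch; 4096 bytes is typical on x86 Linux, but other architectures differ):

```shell
# System memory page size, in bytes
getconf PAGESIZE
```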

7. FastCGI tuning

fastcgi_connect_timeout 600;
fastcgi_send_timeout 600;
fastcgi_read_timeout 600;
fastcgi_buffer_size 64k;
fastcgi_buffers 4 64k;
fastcgi_busy_buffers_size 128k;
fastcgi_temp_file_write_size 128k;
fastcgi_temp_path /usr/local/nginx1.10/nginx_tmp;
fastcgi_intercept_errors on;
fastcgi_cache_path /usr/local/nginx1.10/fastcgi_cache levels=1:2 keys_zone=cache_fastcgi:128m inactive=1d max_size=10g;
  • fastcgi_connect_timeout 600: timeout for connecting to the backend FastCGI server.

  • fastcgi_send_timeout 600: timeout for sending a request to the FastCGI server.

  • fastcgi_read_timeout 600: timeout for receiving a response from the FastCGI server.

  • fastcgi_buffer_size 64k: size of the buffer used to read the first part of the FastCGI response. By default it equals the block size set by fastcgi_buffers; it may be set smaller.

  • fastcgi_buffers 4 64k: specifies how many buffers of what size are used locally to buffer FastCGI responses. If a PHP script generates a 256KB page, four 64KB buffers are allocated for it; if the page is larger than 256KB, the part beyond 256KB is cached to the path specified by fastcgi_temp_path. This is not ideal, because memory handles data faster than disk. Generally this value should sit around the median page size produced by the site's PHP scripts; if most pages are about 256KB, you can set it to "8 32k", "4 64k", and so on.

  • fastcgi_busy_buffers_size 128k: the size of buffers that are busy sending data to the client; setting it to twice the fastcgi_buffers block size is recommended.

  • fastcgi_temp_file_write_size 128k: how large the data blocks written to fastcgi_temp_path are; the default is twice the fastcgi_buffers value. If set too small, 502 Bad Gateway may be reported under load.

  • fastcgi_temp_path: the temporary cache directory.

  • fastcgi_intercept_errors on: specifies whether 4xx and 5xx error responses are passed to the client, or whether nginx handles them with error_page. Note: a missing static file returns the 404 page, but a missing PHP page returns a blank page!

  • fastcgi_cache_path /usr/local/nginx1.10/fastcgi_cache levels=1:2 keys_zone=cache_fastcgi:128m inactive=1d max_size=10g: the fastcgi_cache directory. levels sets the directory hierarchy: 1:2 generates 16 * 256 subdirectories. cache_fastcgi is the name of this cache zone, and 128m is how much shared memory it uses (nginx keeps popular content directly in memory for faster access). inactive is the default expiry time: cached data not accessed within it is removed. max_size is the maximum disk space the cache may use.

  • fastcgi_cache cache_fastcgi: turns on the FastCGI cache and gives it a name. Enabling the cache is useful: it reduces CPU load and helps prevent 502 errors. The zone name cache_fastcgi is the one created by the fastcgi_cache_path directive.

  • fastcgi_cache_valid 200 302 1h: specifies caching by response status code; in this example, 200 and 302 responses are cached for one hour. Used together with fastcgi_cache.

  • fastcgi_cache_valid 301 1d: caches 301 responses for one day.

  • fastcgi_cache_valid any 1m: caches all other responses for one minute.

  • fastcgi_cache_min_uses 1: sets how many times the same URL must be requested before it is cached.

  • fastcgi_cache_key http://$host$request_uri: sets the cache key. nginx generally stores entries under an md5 hash of the key, combined from variables such as $host (the domain) and $request_uri (the request path).

  • fastcgi_pass: specifies the address and port the FastCGI server listens on; it may also be on another machine.
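Putting these directives together, a minimal sketch of a PHP location using the cache_fastcgi zone defined above might look like this (the backend address 127.0.0.1:9000 is an assumption for a local php-fpm):

```nginx
location ~ \.php$ {
    fastcgi_pass 127.0.0.1:9000;            # assumed php-fpm address
    fastcgi_index index.php;
    include fastcgi.conf;
    fastcgi_cache cache_fastcgi;            # zone from fastcgi_cache_path
    fastcgi_cache_valid 200 302 1h;
    fastcgi_cache_valid 301 1d;
    fastcgi_cache_valid any 1m;
    fastcgi_cache_min_uses 1;
    fastcgi_cache_key http://$host$request_uri;
}
```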

To sum up:

nginx caching functions: proxy_cache / fastcgi_cache

  • proxy_cache caches content from backend servers, which may be anything, static or dynamic.

  • fastcgi_cache caches content generated by FastCGI, in most cases dynamic content produced by PHP.

  • proxy_cache reduces the number of round trips between nginx and the backend, saving transfer time and backend bandwidth.

  • fastcgi_cache reduces the number of round trips between nginx and PHP, and also reduces the load on PHP and the database (MySQL).
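For comparison, a minimal proxy_cache setup follows the same pattern as the FastCGI cache above (a sketch; the zone name, cache path, and upstream address are assumptions):

```nginx
http {
    # cache zone definition, analogous to fastcgi_cache_path
    proxy_cache_path /usr/local/nginx1.10/proxy_cache levels=1:2
                     keys_zone=cache_proxy:128m inactive=1d max_size=10g;
    server {
        location / {
            proxy_pass http://127.0.0.1:8080;   # assumed backend
            proxy_cache cache_proxy;
            proxy_cache_valid 200 302 1h;
            proxy_cache_key $host$request_uri;
        }
    }
}
```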

8. gzip tuning

Using gzip compression saves bandwidth, speeds up transfers, improves the user experience, and saves cost, so it is a priority.

Enabling compression in nginx requires the ngx_http_gzip_module module; Apache uses mod_deflate.

Generally, the content that needs compressing is text: js, html, css. Do not compress images, video, flash and the like. Also note that the gzip feature consumes CPU!

gzip on;
gzip_min_length 2k;
gzip_buffers   4 32k;
gzip_http_version 1.1;
gzip_comp_level 6;
gzip_types text/plain text/css text/javascript application/json application/javascript application/x-javascript application/xml;
gzip_vary on;
gzip_proxied any;
  • gzip on: enables the compression function.
  • gzip_min_length 1k: sets the minimum page size, in bytes, eligible for compression, taken from the Content-Length response header. The default is 0, which compresses every page regardless of size; it is recommended to set it above 1k, since compressing pages smaller than 1k can actually make them bigger.

  • gzip_buffers 4 32k: the compression buffers; this requests four 32K memory buffers to hold the compressed result. By default nginx allocates a buffer the same size as the original data to store the gzip output.

  • gzip_http_version 1.1: sets the minimum HTTP protocol version for which compression is applied. The default is 1.1; most browsers today support gzip decompression, so just use the default.

  • gzip_comp_level 6: the compression level, from 1 to 9. Level 1 gives the smallest compression ratio but the fastest processing; level 9 gives the largest ratio and the fastest transfer, but is slow and relatively CPU-hungry.

  • gzip_types text/css text/xml application/javascript: specifies the MIME types to compress; the 'text/html' type is always compressed. Default: gzip_types text/html (by default js/css files are not compressed).

    • Compression is matched by MIME type;

    • The wildcard text/* cannot be used;

    • text/html is always compressed (whether or not it is listed);

    • For the available types, refer to conf/mime.types.

  • gzip_vary on: adds a 'Vary: Accept-Encoding' response header, which lets front-end cache servers cache gzip-compressed pages; for example, a Squid cache in front of nginx can then store nginx's compressed data.
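The level-versus-CPU trade-off can be observed with the command-line gzip tool (a sketch; the all-zeros input is artificial and compresses far better than real pages):

```shell
# Compare compressed sizes of the same 100 KB input at levels 1, 6 and 9
head -c 100000 /dev/zero | gzip -1 | wc -c
head -c 100000 /dev/zero | gzip -6 | wc -c
head -c 100000 /dev/zero | gzip -9 | wc -c
```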

9. expires cache tuning

Browser caching is mainly useful for elements that rarely change, such as images, css and js. Images in particular occupy the most bandwidth, so we can have the browser cache them locally for 365 days, and cache css, js and html for around 10 days. The user's first visit loads a little slower, but from the second visit on it is very fast! To enable caching, list the file extensions that should be cached; the expires configuration goes inside the server block.

location ~* \.(ico|jpe?g|gif|png|bmp|swf|flv)$ {
expires 30d;
#log_not_found off;
access_log off;
}

location ~* \.(js|css)$ {
expires 7d;
log_not_found off;
access_log off;
}

Note: log_not_found off; controls whether "file not found" errors are recorded in the error_log. The default is on.

To sum up:

Advantages of expires:

  • expires reduces the bandwidth the site has to buy, saving cost;

  • It also improves the user's browsing experience;

  • It reduces the load on the service and saves server cost, making it a very important feature of a web service.

Disadvantages of expires:

  • When a cached page or its data is updated, users may still see the old content, which hurts the user experience.

Solutions: first, shorten the cache time, for example to one day; this is not a complete fix unless the content updates less often than once a day. Second, rename the object that changed.

Content the site should not cache:

  • Site traffic statistics tools;

  • Frequently updated files (for example, Google's logo).

10. Anti-hotlinking

Hotlinking is when other sites reference your images and links directly, consuming your resources and network traffic. There are several countermeasures:

  1. Watermark the images with your brand; viable if your bandwidth and servers are sufficient;

  2. Block at the firewall; direct control, provided you know the source IPs;

  3. The referer-check strategy below simply returns a 404 error.
location ~* ^.+\.(jpg|gif|png|swf|flv|wma|wmv|asf|mp3|mmf|zip|rar)$ {
  valid_referers none blocked www.benet.com benet.com;
  if ($invalid_referer) {
    #return 302 http://www.benet.com/img/nolink.jpg;
    return 404;
    break;
  }
  access_log off;
}

The parameters can take the following forms:

  • none: the Referer header is absent (i.e. empty, meaning direct access, such as opening an image directly in the browser).
  • blocked: the Referer header is present but has been stripped or disguised by a firewall or proxy, for example "Referer: XXXXXXX".
  • server_names: a list of one or more server names; from version 0.5.33 onward the "*" wildcard can be used in names.

11. Kernel parameter optimization

  • fs.file-max = 999999: the maximum number of file handles the system can allocate to processes (such as worker processes). This parameter directly limits the maximum number of concurrent connections and must be configured according to the actual situation.

  • net.ipv4.tcp_max_tw_buckets = 6000: the maximum number of TIME_WAIT sockets the operating system allows; if this number is exceeded, TIME_WAIT sockets are cleared immediately and a warning is printed. The default is 180,000; an excess of TIME_WAIT sockets slows down a web server. Note: a server that actively closes connections produces connections in the TIME_WAIT state.

  • net.ipv4.ip_local_port_range = 1024 65000: the range of local ports the system is allowed to open.

  • net.ipv4.tcp_tw_recycle = 1: enables fast recycling of TIME_WAIT sockets (note: this option misbehaves behind NAT and was removed in Linux 4.12).

  • net.ipv4.tcp_tw_reuse = 1: enables reuse, allowing TIME_WAIT sockets to be reused for new TCP connections. This makes sense for servers, which always have a large number of connections in the TIME_WAIT state.

  • net.ipv4.tcp_keepalive_time = 30: how frequently TCP sends keepalive probes when keepalive is enabled. The default is 2 hours; a smaller value cleans up dead connections faster.

  • net.ipv4.tcp_syncookies = 1: enables SYN cookies, used to cope with SYN queue overflow.

  • net.core.somaxconn = 40960: the kernel parameter behind the backlog of the listen() call used by web applications.

  • net.core.somaxconn defaults to 128, while nginx defines NGX_LISTEN_BACKLOG as 511, so this value needs to be adjusted. Note: to establish a TCP connection, server and client perform a three-way handshake; when it succeeds, the port's state transitions from LISTEN to ESTABLISHED and data can start flowing on the connection. Every port in the listening (listen) state has its own queue, whose length is related to somaxconn and to the backlog argument of the application's listen() call. somaxconn defines the maximum listen queue length per port; it is a global parameter with a default of 128, which is too small for a regular high-load web service handling many new connections. Raising it to 1024 or more is recommended for most environments; a large listen queue also helps mitigate denial-of-service (DoS) attacks.

  • net.core.netdev_max_backlog = 262144: the maximum number of packets allowed to queue when a network interface receives packets faster than the kernel can process them.

  • net.ipv4.tcp_max_syn_backlog = 262144: the maximum length of the queue of SYN requests accepted during the three-way handshake. The default is 1024; a larger value lets Linux avoid dropping connection requests from clients when nginx is too busy to accept new connections.

  • net.ipv4.tcp_rmem = 10240 87380 12582912: the minimum, default, and maximum size of the TCP receive buffer (the TCP receive sliding window).

  • net.ipv4.tcp_wmem = 10240 87380 12582912: the minimum, default, and maximum size of the TCP send buffer (the TCP send sliding window).

  • net.core.rmem_default = 6291456: the kernel's default socket receive buffer size.

  • net.core.wmem_default = 6291456: the kernel's default socket send buffer size.

  • net.core.rmem_max = 12582912: the kernel's maximum socket receive buffer size.

  • net.core.wmem_max = 12582912: the kernel's maximum socket send buffer size.

  • net.ipv4.tcp_syncookies = 1: unrelated to performance; it mitigates TCP SYN flood attacks.

    Below is a full set of kernel optimization settings:

    fs.file-max = 999999
    net.ipv4.ip_forward = 0
    net.ipv4.conf.default.rp_filter = 1
    net.ipv4.conf.default.accept_source_route = 0
    kernel.sysrq = 0
    kernel.core_uses_pid = 1
    net.ipv4.tcp_syncookies = 1
    kernel.msgmnb = 65536
    kernel.msgmax = 65536
    kernel.shmmax = 68719476736
    kernel.shmall = 4294967296
    net.ipv4.tcp_max_tw_buckets = 6000
    net.ipv4.tcp_sack = 1
    net.ipv4.tcp_window_scaling = 1
    net.ipv4.tcp_rmem = 10240 87380 12582912
    net.ipv4.tcp_wmem = 10240 87380 12582912
    net.core.wmem_default = 8388608
    net.core.rmem_default = 8388608
    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216
    net.core.netdev_max_backlog = 262144
    net.core.somaxconn = 40960
    net.ipv4.tcp_max_orphans = 3276800
    net.ipv4.tcp_max_syn_backlog = 262144
    net.ipv4.tcp_timestamps = 0
    net.ipv4.tcp_synack_retries = 1
    net.ipv4.tcp_syn_retries = 1
    net.ipv4.tcp_tw_recycle = 1
    net.ipv4.tcp_tw_reuse = 1
    net.ipv4.tcp_mem = 94500000 915000000 927000000
    net.ipv4.tcp_fin_timeout = 1
    net.ipv4.tcp_keepalive_time = 30
    net.ipv4.ip_local_port_range = 1024 65000

    Run sysctl -p to make the kernel changes take effect.
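Individual values can be verified before and after applying the changes (a sketch; net.core.somaxconn is used as the example parameter):

```shell
# Read a single kernel parameter via /proc
cat /proc/sys/net/core/somaxconn
# Equivalent, with the sysctl tool if it is installed:
# sysctl -n net.core.somaxconn
```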


12. Optimizing the system's number of open files

The Linux default open-files value is 1024. View the current system value:

# ulimit -n
1024

This means the server only allows 1024 files to be open at once.

ulimit -a shows all of the system's current limits; ulimit -n shows the current maximum number of open files.

A freshly installed Linux system defaults to only 1024; when the server is heavily loaded, it is easy to hit the error "too many open files". So the limit must be raised: append the following to /etc/security/limits.conf:

*       soft    nofile         65535
*       hard    nofile         65535
*       soft    nproc          65535
*       hard    nproc          65535
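After logging out and back in, verify the new limits (a sketch):

```shell
ulimit -n        # max open files for this session
ulimit -u        # max user processes (the "nproc" limit)
ulimit -a        # all limits at once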


Origin blog.51cto.com/4534309/2462483