Some Nginx optimizations (breaking through 100,000 concurrent connections)

worker_processes 8;

It is recommended to set the number of nginx worker processes according to the number of CPUs, usually equal to the number of CPU cores or a multiple of it.
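
On Linux, the number of available CPU cores can be checked before choosing this value, for example:

  nproc
  grep -c ^processor /proc/cpuinfo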

worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;

Bind CPUs to each worker process. In the example above, 8 processes are bound to 8 CPUs, one each. You can of course bind a single process to more than one CPU.
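
Each mask is a bitmask in which the lowest bit corresponds to CPU0. As a sketch, binding 4 worker processes to 2 CPUs each on an 8-core machine could look like this:

  worker_processes 4;
  worker_cpu_affinity 00000011 00001100 00110000 11000000;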

worker_rlimit_nofile 102400;

This directive sets the maximum number of file descriptors an nginx worker process may open. The theoretical value is the maximum number of open files (ulimit -n) divided by the number of nginx processes, but nginx does not distribute requests that evenly, so it is best to keep this value consistent with the value of ulimit -n.
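
The system limit itself can be checked and raised roughly as follows (the values are only examples; the limits.conf entries take effect for new login sessions):

  ulimit -n                 # show the current per-process limit
  ulimit -n 102400          # raise it for the current shell
  # to make it persistent, add entries like these to /etc/security/limits.conf:
  # *  soft  nofile  102400
  # *  hard  nofile  102400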

use epoll;

Use the epoll I/O event model; on Linux this goes without saying.

worker_connections 102400;

The maximum number of connections allowed per worker process. Theoretically, the maximum number of connections an nginx server can handle is worker_processes * worker_connections; with the values above that is 8 x 102400 = 819,200.

keepalive_timeout 60;

Keepalive timeout, in seconds.

client_header_buffer_size 4k;

The buffer size for client request headers. This can be set according to your system's page size. The headers of an ordinary request usually do not exceed 1k, but since the system page size is generally larger than 1k, the page size is used here. The page size can be obtained with the command getconf PAGESIZE.
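
For example, on a typical x86 Linux system the page size is 4096 bytes, which corresponds to the 4k used above (your output may differ):

  getconf PAGESIZE
  4096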

open_file_cache max=102400 inactive=20s;

This enables a cache for open file descriptors; it is not enabled by default. max specifies the number of cached entries and is recommended to match the number of open files; inactive specifies how long a file can go without being requested before its cache entry is removed.

open_file_cache_valid 30s;

How often to check the validity of the cached information.

open_file_cache_min_uses 1;

The minimum number of times a file must be used within the inactive period of the open_file_cache directive. If this number is reached, the file descriptor is kept open in the cache; as in the example above, if a file is not used even once during the inactive period, it is removed.

Optimization of kernel parameters

net.ipv4.tcp_max_tw_buckets = 6000

The maximum number of TIME-WAIT sockets; the default is 180000.

net.ipv4.ip_local_port_range = 1024 65000

The range of local ports the system is allowed to use for outgoing connections.

net.ipv4.tcp_tw_recycle = 1

Enable timewait fast recycling.

net.ipv4.tcp_tw_reuse = 1

Enable reuse. Allow TIME-WAIT sockets to be reused for new TCP connections.

net.ipv4.tcp_syncookies = 1

Enable SYN cookies: when the SYN backlog queue overflows, use cookies to handle the connections.

net.core.somaxconn = 262144

The backlog argument of the listen() call in a web application is capped by the kernel parameter net.core.somaxconn, which defaults to 128, while nginx's own NGX_LISTEN_BACKLOG defaults to 511, so this value needs to be raised.
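
If somaxconn is raised, the backlog that nginx itself requests can be raised as well; as a sketch, nginx's listen directive accepts a backlog parameter:

  listen 80 backlog=262144;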

net.core.netdev_max_backlog = 262144

The maximum number of packets allowed to queue when a network interface receives packets faster than the kernel can process them.

net.ipv4.tcp_max_orphans = 262144

The maximum number of TCP sockets in the system that are not attached to any user file handle. If this number is exceeded, orphaned connections are reset immediately and a warning is printed. This limit exists only to prevent simple DoS attacks; you should not rely on it too much or lower it artificially, but rather increase it (if you also add memory).

net.ipv4.tcp_max_syn_backlog = 262144

The maximum number of remembered connection requests that have not yet received an acknowledgment from the client. The default is 1024 for systems with 128 MB of memory and 128 for systems with little memory.

net.ipv4.tcp_timestamps = 0

Timestamps protect against sequence number wraparound. A 1 Gbps link is certain to encounter previously used sequence numbers, and timestamps let the kernel accept such "abnormal" packets. Here they are turned off.

net.ipv4.tcp_synack_retries = 1

To open a connection to the peer, the kernel sends a SYN with an ACK acknowledging the earlier SYN (the second step of the three-way handshake). This setting determines how many SYN+ACK packets the kernel sends before giving up on the connection.

net.ipv4.tcp_syn_retries = 1

The number of SYN packets sent before the kernel gives up on establishing a connection.

net.ipv4.tcp_fin_timeout = 1

If the socket was closed by the local end, this parameter determines how long it stays in the FIN-WAIT-2 state. The peer may fail and never close its side of the connection, or even crash unexpectedly. The default value is 60 seconds; the usual value on 2.2 kernels was 180 seconds. You can use this setting, but keep in mind that even on a lightly loaded web server there is a risk of running out of memory because of a large number of dead sockets. FIN-WAIT-2 is less dangerous than FIN-WAIT-1 because each such socket consumes at most 1.5 KB of memory, but they live longer.

net.ipv4.tcp_keepalive_time = 30

How often TCP sends keepalive messages when keepalive is enabled. The default is 2 hours.

A complete kernel optimization configuration


  net.ipv4.ip_forward = 0
  net.ipv4.conf.default.rp_filter = 1
  net.ipv4.conf.default.accept_source_route = 0
  kernel.sysrq = 0
  kernel.core_uses_pid = 1
  net.ipv4.tcp_syncookies = 1
  kernel.msgmnb = 65536
  kernel.msgmax = 65536
  kernel.shmmax = 68719476736
  kernel.shmall = 4294967296
  net.ipv4.tcp_max_tw_buckets = 6000
  net.ipv4.tcp_sack = 1
  net.ipv4.tcp_window_scaling = 1
  net.ipv4.tcp_rmem = 4096 87380 4194304
  net.ipv4.tcp_wmem = 4096 16384 4194304
  net.core.wmem_default = 8388608
  net.core.rmem_default = 8388608
  net.core.rmem_max = 16777216
  net.core.wmem_max = 16777216
  net.core.netdev_max_backlog = 262144
  net.core.somaxconn = 262144
  net.ipv4.tcp_max_orphans = 3276800
  net.ipv4.tcp_max_syn_backlog = 262144
  net.ipv4.tcp_timestamps = 0
  net.ipv4.tcp_synack_retries = 1
  net.ipv4.tcp_syn_retries = 1
  net.ipv4.tcp_tw_recycle = 1
  net.ipv4.tcp_tw_reuse = 1
  net.ipv4.tcp_mem = 94500000 915000000 927000000
  net.ipv4.tcp_fin_timeout = 1
  net.ipv4.tcp_keepalive_time = 30
  net.ipv4.ip_local_port_range = 1024 65000
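
These settings are normally kept in /etc/sysctl.conf and applied with sysctl; a minimal sketch (run as root):

  sysctl -p                      # reload /etc/sysctl.conf and apply the values
  sysctl net.core.somaxconn      # verify that a single value took effect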

A simple optimized nginx configuration file


  user www www;
  worker_processes 8;
  worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000;
  error_log /www/log/nginx_error.log crit;
  pid /usr/local/nginx/nginx.pid;
  worker_rlimit_nofile 204800;
  events
  {
      use epoll;
      worker_connections 204800;
  }
  http
  {
      include mime.types;
      default_type application/octet-stream;
      charset utf-8;
      server_names_hash_bucket_size 128;
      client_header_buffer_size 2k;
      large_client_header_buffers 4 4k;
      client_max_body_size 8m;
      sendfile on;
      tcp_nopush on;
      keepalive_timeout 60;
      fastcgi_cache_path /usr/local/nginx/fastcgi_cache levels=1:2 keys_zone=TEST:10m inactive=5m;
      fastcgi_connect_timeout 300;
      fastcgi_send_timeout 300;
      fastcgi_read_timeout 300;
      fastcgi_buffer_size 16k;
      fastcgi_buffers 16 16k;
      fastcgi_busy_buffers_size 16k;
      fastcgi_temp_file_write_size 16k;
      fastcgi_cache TEST;
      fastcgi_cache_valid 200 302 1h;
      fastcgi_cache_valid 301 1d;
      fastcgi_cache_valid any 1m;
      fastcgi_cache_min_uses 1;
      fastcgi_cache_use_stale error timeout invalid_header http_500;
      open_file_cache max=204800 inactive=20s;
      open_file_cache_min_uses 1;
      open_file_cache_valid 30s;
      tcp_nodelay on;
      gzip on;
      gzip_min_length 1k;
      gzip_buffers 4 16k;
      gzip_http_version 1.0;
      gzip_comp_level 2;
      gzip_types text/plain application/x-javascript text/css application/xml;
      gzip_vary on;
      log_format access '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" $http_x_forwarded_for';
      server
      {
          listen 8080;
          server_name ad.test.com;
          index index.php index.htm;
          root /www/html/;
          location /status
          {
              stub_status on;
          }
          location ~ .*\.(php|php5)?$
          {
              fastcgi_pass 127.0.0.1:9000;
              fastcgi_index index.php;
              include fcgi.conf;
          }
          location ~ .*\.(gif|jpg|jpeg|png|bmp|swf|js|css)$
          {
              expires 30d;
          }
          access_log /www/log/access.log access;
      }
  }
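
After editing the configuration, the syntax can be checked and the workers reloaded gracefully; the paths below assume a default source install under /usr/local/nginx:

  /usr/local/nginx/sbin/nginx -t          # test the configuration file
  /usr/local/nginx/sbin/nginx -s reload   # reload worker processes without dropping connections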

A few FastCGI-related directives

fastcgi_cache_path /usr/local/nginx/fastcgi_cache levels=1:2 keys_zone=TEST:10m inactive=5m;

This directive sets the path for the FastCGI cache, the directory hierarchy levels, the shared key zone (name and size), and the inactive removal time.

fastcgi_connect_timeout 300;

Specifies the timeout for connecting to the backend FastCGI server.

fastcgi_send_timeout 300;

The timeout for sending a request to the FastCGI backend, counted after the connection has already been established.

fastcgi_read_timeout 300;

The timeout for receiving a response from the FastCGI backend, again counted after the connection has already been established.

fastcgi_buffer_size 16k;

Specifies how large a buffer is used to read the first part of the FastCGI response. It can be set to the buffer size given in the fastcgi_buffers directive. The directive above means one 16k buffer will be used to read the first part of the response, i.e. the response headers, which in practice are usually very small (no more than 1k); however, if a buffer size is specified in fastcgi_buffers, a buffer of that size will be allocated for caching as well.

fastcgi_buffers 16 16k;

Specifies how many local buffers of what size are used to buffer FastCGI responses. As configured above, if a page generated by a PHP script is 256k, 16 buffers of 16k are allocated to cache it; if it is larger than 256k, the part above 256k is written to the path specified by fastcgi_temp. Of course this is unwise as far as server load is concerned, since data is handled faster in memory than on disk. Usually this value should be set to roughly the median size of the pages produced by the PHP scripts on your site: if most scripts generate pages of about 256k, you could set it to 16 16k, or 4 64k, or 64 4k. The latter two are obviously not good choices, because if a generated page is only 32k, 4 64k would allocate one 64k buffer to cache it and 64 4k would allocate eight 4k buffers, whereas 16 16k would allocate two 16k buffers, which seems more reasonable.

fastcgi_busy_buffers_size 32k;

I am not sure what this directive does; I only know that the default value is twice fastcgi_buffers.

fastcgi_temp_file_write_size 32k;

The size of the data blocks used when writing to fastcgi_temp_path; the default is twice fastcgi_buffers.

fastcgi_cache TEST;

Enables the FastCGI cache and gives it a name. Personally I find enabling the cache very useful: it can effectively lower CPU load and prevent 502 errors. But this cache can also cause many problems, because it caches dynamic pages; whether to use it depends on your own needs.
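
Note that recent nginx versions also require a cache key to be defined when fastcgi_cache is used; a sketch (the key string is an assumption, adjust it to your site):

  fastcgi_cache_key "$scheme$request_method$host$request_uri";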

fastcgi_cache_valid 200 302 1h; fastcgi_cache_valid 301 1d; fastcgi_cache_valid any 1m;

Specifies the cache time for the given response codes. In the example above, 200 and 302 responses are cached for one hour, 301 responses for 1 day, and everything else for 1 minute.

fastcgi_cache_min_uses 1;

The minimum number of times a response must be used within the time given by the inactive parameter of fastcgi_cache_path. In the example above, if a file is not used even once within 5 minutes, it will be removed.

fastcgi_cache_use_stale error timeout invalid_header http_500;

I am not sure what this parameter does; my guess is that it tells nginx which kinds of cached responses are unusable. The above are the FastCGI-related parameters in nginx. In addition, FastCGI itself has some settings that should be tuned. If you use php-fpm to manage FastCGI, you can modify the following values in its configuration file:

<value name="max_children">60</value>

The number of concurrent requests handled at the same time, i.e. php-fpm will start at most 60 child processes to handle concurrent connections.

<value name="rlimit_files">102400</value>

The maximum number of open files.

<value name="max_requests">204800</value>

The maximum number of requests each process may handle before it is respawned.
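
The XML-style values above belong to older php-fpm releases; newer php-fpm versions use an ini-style pool configuration, where the roughly equivalent settings (pool name www assumed) would be:

  [www]
  pm.max_children = 60
  pm.max_requests = 204800
  rlimit_files = 102400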

A few test results

The static page is the test file mentioned in my earlier article about configuring squid for 40,000 concurrent connections. The picture below shows the result of running webbench -c 30000 -t 600 http://ad.test.com:8080/index.html on six machines at the same time:

Connection counts after filtering with netstat:
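
The exact filter command is not shown; a per-state connection count can be obtained with something along these lines:

  netstat -ant | awk '{print $6}' | sort | uniq -c
  netstat -ant | grep -c ESTABLISHED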

Results for the PHP page as shown on the status page (the PHP page simply calls phpinfo):

Connection counts for the PHP page after filtering with netstat:

Server load before the FastCGI cache was enabled:

At this point the PHP page was already somewhat difficult to open and needed several refreshes before it would load. The low load on cpu0 in the figure above is because all network card interrupt requests were assigned to cpu0 during the test, while nginx was started with 7 worker processes pinned to cpu1-cpu7.
After the FastCGI cache was enabled:

Now the PHP page opens easily.

This test did not connect to any database, so it is not of much reference value. I also do not know whether the test above reached the limit; judging by memory and CPU usage it seems not, but I had no more machines left to run webbench on.

 

http://blog.csdn.net/madun/article/details/8660109
