Nginx traffic limit

Nginx traffic limiting can help mitigate DDoS attacks, reduce server load, and relieve pressure on hardware and the network. The feature relies on the limit_req and limit_conn modules, which are compiled in by default in current versions, so there is no need to build or install them separately.

Referenced articles:

About the difference between limit_req and limit_conn: https://www.cnblogs.com/zhoulujun/p/12183179.html

Nginx limit access rate and maximum number of concurrent connections module description: https://www.cnblogs.com/wjoyxt/p/6128183.html

Super-detailed analysis of limit_req module burst parameters under Nginx: https://blog.csdn.net/hellow__world/article/details/78658041

About the difference between limit_req and limit_conn
What is the difference between a connection and a request?

Nginx current limiting configuration – limit_req and limit_conn (to prevent DDOS attacks)
https://www.cnblogs.com/andrew-303/p/12272099.html

A connection is a TCP connection: a complete state machine established through the three-way handshake. A request is an HTTP request. TCP connections are stateful, but HTTP, built on top of TCP, is a stateless protocol. If you open a web page and watch the traffic in Wireshark, you will see that after the connection is established (after the three-way handshake) and before it is closed (before the four-way teardown), many HTTP requests flow over it. That is the difference: a single connection can carry one or many requests during its lifetime. This avoids performing a three-way handshake for every request and speeds things up; HTTP/1.1 supports this feature, called keepalive.

limit_conn_zone $binary_remote_addr zone=conn_zone:1m;

limit_conn conn_zone 1;  

This configuration uses the client IP as the key and limits each IP to at most one concurrent connection when accessing the restricted resource (for example, a limit.html page); any further connections are rejected as unavailable.

In other words, this is a static count, independent of time.

For example, if your connection has not been released, then even if you send more requests over that same connection, the server will handle them as long as it can keep up.

But if you send only two requests, and those two requests arrive at the same time over two separate connections, only one of them is processed; the other is rejected. That is the difference.
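The connection-counting semantics above can be illustrated with a toy Python model (a sketch only, not Nginx's implementation; the class and method names are invented): the counter tracks open connections per key, so many requests on one kept-alive connection still count as a single connection.

```python
# Toy model of limit_conn: reject a new connection when the per-key
# count of currently open connections has reached the limit.

class ConnLimiter:
    def __init__(self, limit):
        self.limit = limit      # like: limit_conn <zone> <limit>
        self.active = {}        # key (e.g. client IP) -> open connections

    def open_conn(self, key):
        """Return True if the connection is accepted, False -> 503."""
        if self.active.get(key, 0) >= self.limit:
            return False
        self.active[key] = self.active.get(key, 0) + 1
        return True

    def close_conn(self, key):
        self.active[key] -= 1

limiter = ConnLimiter(limit=1)
print(limiter.open_conn("1.2.3.4"))   # True: first connection accepted
print(limiter.open_conn("1.2.3.4"))   # False: second concurrent connection rejected
limiter.close_conn("1.2.3.4")
print(limiter.open_conn("1.2.3.4"))   # True: accepted again after release
```

Note that the model never looks at time or at how many requests flow over an accepted connection, which is exactly the "static count" point made above.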

limit_req_zone $binary_remote_addr zone=req_zone:1m rate=1r/s;  # a rate must be configured for the shared memory zone here

limit_req zone=req_zone;

This means: for each IP, requests are processed at no more than 1 request per second.

So this is a rate (while the previous one is a count; there is an intuitive difference between a rate and a number).

If you send 100 requests at once (whether over 100 connections or over a single connection), any request that exceeds the rate of 1 per second is rejected.
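The per-key rate check can also be sketched as a toy Python model (an illustration under simplified assumptions, not Nginx's code; names are invented): a request is rejected if it arrives sooner than 1/rate seconds after the previous accepted request from the same key.

```python
# Toy model of limit_req with no burst: with rate=1r/s, a request is
# rejected if less than one second has passed since the last accepted
# request from the same key.

class ReqLimiter:
    def __init__(self, rate):        # rate in requests per second
        self.interval = 1.0 / rate
        self.last = {}               # key -> timestamp of last accepted request

    def allow(self, key, now):
        """now is a timestamp in seconds; True = accepted, False -> 503."""
        last = self.last.get(key)
        if last is not None and now - last < self.interval:
            return False
        self.last[key] = now
        return True

lim = ReqLimiter(rate=1)
print(lim.allow("1.2.3.4", 0.0))   # True: first request accepted
print(lim.allow("1.2.3.4", 0.5))   # False: arrived too soon, rejected
print(lim.allow("1.2.3.4", 1.2))   # True: more than 1 s later, accepted
```

Unlike the connection counter above, this model compares timestamps, not how many connections are open, which is the rate-vs-count distinction.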

For burst, it is recommended to read: Super-detailed analysis of the limit_req module's burst parameter under Nginx: https://blog.csdn.net/hellow__world/article/details/78658041

The summary is excerpted as follows:

limit_req zone=req_zone;
Requests are processed strictly at the rate configured in limit_req_zone.
Requests arriving faster than the rate can handle are dropped immediately,
meaning accepted requests are never delayed.

limit_req zone=req_zone burst=5;
Requests are processed at the rate configured in limit_req_zone,
with a buffer queue of size 5. Requests in the buffer queue wait to be processed gradually;
requests that exceed both the burst queue length and the rate's processing capacity are dropped,
meaning accepted requests may be delayed.

limit_req zone=req_zone burst=5 nodelay;
Requests are processed at the rate configured in limit_req_zone,
with a buffer queue of size 5, but when requests arrive, a burst of up to the queue's capacity is served immediately; requests beyond that are dropped at once.
After such a peak, the buffer queue cannot accept more requests until its slots are refilled at the configured rate: with rate=10r/m, if no further requests arrive in the meantime, one buffered slot is freed every 6 seconds, until all 5 slots are available again.
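The effect of burst can be sketched with a toy leaky-bucket model in Python (a simplified illustration, not Nginx's actual accounting; the function name and structure are invented). An "excess" counter tracks how far ahead of the configured rate the client is: each arriving request adds 1, the counter drains at `rate` per second, and a request is dropped when the counter would exceed the burst capacity.

```python
# Toy leaky-bucket model of limit_req with burst. Whether "nodelay" is
# set changes only WHEN queued requests are served, not which ones are
# accepted, so this accept/drop accounting covers both variants.

def simulate(arrivals, rate, burst):
    """arrivals: sorted timestamps in seconds. Returns 'ok'/'drop' per request."""
    excess, last = 0.0, None
    results = []
    for t in arrivals:
        if last is not None:
            # drain the bucket at `rate` requests per second
            excess = max(0.0, excess - rate * (t - last))
        if excess + 1 > burst + 1:     # over queue capacity -> 503
            results.append("drop")
        else:
            excess += 1
            results.append("ok")
        last = t
    return results

# 7 requests arriving at the same instant, rate = 1r/s:
print(simulate([0] * 7, rate=1, burst=0))  # only the first passes
print(simulate([0] * 7, rate=1, burst=5))  # first + 5 buffered, 7th dropped
```

With burst=0 only one of the seven simultaneous requests survives; with burst=5 six survive (one processed plus five buffered) and the seventh is dropped, matching the description above.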

Readers may still have doubts about the zone; if you have questions, feel free to comment below and discuss. For example, someone might ask: if each client occupies 5 entries, then 327680 entries can only accommodate 65536 clients, and the 65537th client will get a 503 error.

limit_conn_zone
Syntax:

Syntax: limit_conn_zone key zone=name:size;
Default: —
Context: http

As the syntax above shows, limit_conn_zone can only be used in the http block, for example:

http {
    limit_conn_zone $binary_remote_addr zone=addr:10m;

    server {
        listen 80;
        server_name www.tomener.com tomener.com;

        location / {
            root /var/www/tomener;
            index index.php index.html index.htm;
            limit_conn addr 5;    # limit each IP to 5 concurrent connections
            limit_rate 100k;      # limit transfer speed to 100 KB/s
        }
    }
}

For the mapping:
key  => $binary_remote_addr  # the client IP address in binary form
name => addr                 # an arbitrary name; you could just as well call it abc
size => 10m                  # size of the shared memory zone, here 10 MB

Each entry occupies 32 bytes on a 32-bit machine and 64 bytes on a 64-bit machine. So how much can 10 MB store? Calculate: 10 x 1024 x 1024 / 32 = 327680, which means it can store 327680 entries on a 32-bit platform; a 64-bit platform can store 163840 entries.
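The arithmetic can be checked directly (assuming, as the text states, 32 bytes per entry on 32-bit platforms and 64 bytes on 64-bit platforms):

```python
# Sanity check of the zone-size arithmetic: how many entries fit in a 10m zone?
zone_bytes = 10 * 1024 * 1024   # a 10m shared memory zone
print(zone_bytes // 32)         # entries on a 32-bit platform
print(zone_bytes // 64)         # entries on a 64-bit platform
```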

1. key: the key is essentially a rule, an identifier for the client connection. The example above uses the IP address, but we could also use $query_string: for a request like /index.php?mp=138944093953, connections can then be limited per value of mp. For more Nginx built-in variables, see http://nginx.org/en/docs/varindex.html

2. zone: the shared memory zone; its job is to store the number of connections for each key.

3. size: the size of the shared memory zone, e.g. 1M, 10M, 100K.
When the shared memory zone is exhausted, the server returns a 503 (Service Temporarily Unavailable) error for all further requests.
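As a sketch of point 1 above, here is what keying the counter on a query argument instead of the client IP might look like. This fragment is illustrative only: the zone name permp is made up, and $arg_mp (Nginx's built-in variable for the mp query argument) is one way to capture the mp value from /index.php?mp=... rather than the full $query_string.

http {
    # count connections per value of the "mp" query argument
    limit_conn_zone $arg_mp zone=permp:10m;

    server {
        location /index.php {
            limit_conn permp 2;   # at most 2 concurrent connections per mp value
        }
    }
}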

limit_conn_log_level directive
Syntax: limit_conn_log_level info | notice | warn | error;
Default: limit_conn_log_level error;
Context: http, server, location
Description: sets the logging level used when the server refuses a connection because the maximum number of connections was reached.

limit_conn_status directive
Syntax: limit_conn_status code;
Default: limit_conn_status 503;
Context: http, server, location
Description: sets the response status code returned when the limit is exceeded; the default is 503. Now you know why the examples above return 503 (Service Temporarily Unavailable).
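A hedged sketch combining the two directives (the zone name, location, and chosen values here are illustrative, not from the original article):

http {
    limit_conn_zone $binary_remote_addr zone=addr:10m;

    server {
        location /download/ {
            limit_conn addr 5;
            limit_conn_log_level warn;   # log rejections at "warn" instead of "error"
            limit_conn_status 429;       # return 429 Too Many Requests instead of 503
        }
    }
}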

Example:

1. Limit the maximum concurrent connections per IP and per virtual host at the same time

http {
    limit_conn_zone $binary_remote_addr zone=perip:10m;
    limit_conn_zone $server_name zone=perserver:10m;

    server {
        location / {
            limit_conn perip 10;
            limit_conn perserver 1000;
        }
    }
}

Finally, a requirement I ran into at work. One project calls a service interface on our server, but the program sends requests too frequently, and the rapid requests were flagged by a third party as malicious crawling. The right fix would be to modify the program to lower its request rate, but the developer of that program could not be reached, so the only option was to add a traffic limit on the interface at our Nginx proxy, thereby reducing the request frequency.


Only two of Nginx's main configuration files need to be modified:

vim nginx.conf

http {
    include       mime.types;
    default_type  application/octet-stream;
    limit_req_zone $binary_remote_addr zone=one:10m rate=2r/m;

And in the virtual server's configuration file:

location /xxx {
    limit_req zone=one burst=30 nodelay;
    proxy_pass       http://127.0.0.1:8090;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

When done, write a verification script:

#!/bin/bash
while :
do
    echo "Starting a request"
    curl -s -w '%{http_code}' "http://xxx.xxx.cn/xxx/xxx"
    echo "Sleeping 10 seconds before the next request"
    sleep 10
done

Run it. Since the zone is set to accept one request per 30 seconds and the buffer queue held 5 requests in this test, the first 503 appears on the 7th call; after that, with one request every 10 seconds, roughly every third call returns 200 and the other two return 503. The verification is successful.


Origin blog.csdn.net/qq_35855396/article/details/118314934