Nginx rate limiting

Rate limiting is one of Nginx's most practical features, but it is often misunderstood and misconfigured. It lets us limit the number of HTTP requests a user can make within a given period of time. A request can be as simple as a GET for the home page or a POST to a login form.

Rate limiting can be used for security purposes, for example to slow down brute-force password attacks. It can also help defend against DDoS attacks, by limiting the incoming request rate to a value typical of real users and (with the help of logging) identifying the targeted URLs. More commonly, though, it is used to protect upstream application servers from being overwhelmed by too many simultaneous user requests.

This article introduces the basics of Nginx rate limiting as well as more advanced configurations. Rate limiting works the same way in Nginx Plus.

How Nginx rate limiting works

Nginx rate limiting uses the leaky bucket algorithm, which is widely used in telecommunications and packet-switched computer networks to deal with burstiness when bandwidth is limited. The analogy is a bucket into which water is poured at the top and from which it leaks through a hole in the bottom; if the rate at which water is poured in exceeds the rate at which it leaks out, the bucket overflows. In terms of request processing, the water represents requests from clients, the bucket represents a queue in which requests wait to be processed according to a first-in-first-out (FIFO) scheduling algorithm, the water leaking from the bottom represents requests leaving the buffer to be processed by the server, and the water that overflows represents requests that are discarded and never served.
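To make the analogy concrete, here is a minimal leaky bucket sketch in Python. It is only an illustration of the algorithm, not Nginx's actual implementation; the capacity and leak_rate parameters are hypothetical stand-ins for the queue size and rate discussed below.

import time

class LeakyBucket:
    # Requests "pour" into the bucket; the bucket drains at a fixed rate.
    # If a request arrives while the bucket is full, it overflows and is dropped.
    def __init__(self, capacity, leak_rate):
        self.capacity = capacity        # how many queued requests the bucket can hold
        self.leak_rate = leak_rate      # requests drained (processed) per second
        self.water = 0.0                # current fill level (queued requests)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Drain the bucket according to the time elapsed since the last request.
        self.water = max(0.0, self.water - (now - self.last) * self.leak_rate)
        self.last = now
        if self.water < self.capacity:
            self.water += 1.0           # the request fits and will be processed
            return True
        return False                    # bucket full: the request is discarded

bucket = LeakyBucket(capacity=20, leak_rate=10)    # drains 10 requests per second
print(bucket.allow())                              # True while the bucket has room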

Basic rate limiting configuration

Rate limiting is configured with two main directives, limit_req_zone and limit_req, as shown below:

limit_req_zone $binary_remote_addr zone=mylimit:10m rate=10r/s;

server {
    location /login/ {
        limit_req zone=mylimit;

        proxy_pass http://my_upstream;
    }
}

The limit_req_zone directive defines the parameters for rate limiting, while limit_req enables rate limiting within the context where it appears (in the example, for all requests to /login/).

The limit_req_zone directive is typically defined in the http block, which makes it available for use in multiple contexts. It takes the following three parameters:

  • Key  - defines the request characteristic to which the limit is applied. In the example it is the Nginx variable $binary_remote_addr, which holds the client IP address in binary form. This means we limit each distinct IP address to the request rate set by the third parameter. (This variable is used because it takes up less space than $remote_addr, the string representation of the client IP address.)

  • Zone  - defines the shared memory zone used to store the state of each IP address and how often it has accessed a rate-limited URL. Keeping this information in shared memory means it can be shared among the Nginx worker processes. The definition has two parts: the zone name identified by the zone= keyword, followed by a colon and the zone size. State information for about 16,000 IP addresses takes roughly 1 MB, so the 10 MB zone in the example can store about 160,000 addresses.

  • Rate  - defines the maximum request rate. In the example, the rate may not exceed 10 requests per second. Nginx actually tracks requests at millisecond granularity, so this limit corresponds to one request every 100 milliseconds. Because bursts are not allowed here (see the next section), a request is rejected if it arrives less than 100 milliseconds after the previous permitted one.

If storage is exhausted when Nginx needs to add a new entry, it removes the oldest entry. If the space freed is still not enough for the new record, Nginx returns the 503 status code (Service Temporarily Unavailable). In addition, to prevent memory from being exhausted, every time Nginx creates a new entry it removes up to two entries that have not been used in the previous 60 seconds.

The limit_req_zone directive sets the rate limit parameters and the shared memory zone, but it does not actually limit the request rate. For that you need to add a limit_req directive to apply the limit in a specific location or server block. In the example above, we rate limit requests to /login/.

Each IP address is now limited to 10 requests per second for /login/, or more precisely, it cannot make a request for that URL within 100 milliseconds of its previous one.

Handling bursts

What happens if we receive two requests within 100 milliseconds? For the second request, Nginx returns the 503 status code to the client. This is probably not what we want, because applications tend to be bursty in nature. Instead we want to buffer any excess requests and process them in a timely manner. This is where we use the burst parameter of limit_req, as in the updated configuration:

location /login/ {
    limit_req zone=mylimit burst=20;
    proxy_pass http://my_upstream;
}

The burst parameter defines how many requests a client can make in excess of the rate specified by the zone (for our mylimit zone, the rate limit is 10 requests per second, or one every 100 milliseconds). A request that arrives sooner than 100 milliseconds after the previous one is put into a queue, and here we set the queue size to 20.

This means that if 21 requests arrive from a given IP address simultaneously, Nginx forwards the first one to the upstream server group immediately and puts the remaining 20 in the queue. It then forwards one queued request every 100 milliseconds, and returns 503 to the client only if an incoming request would make the number of queued requests exceed 20.
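To illustrate the timing, here is a simplified model of the behaviour just described (not Nginx code): the forwarding schedule for a batch of simultaneous requests with rate=10r/s and burst=20.

# Simplified model of burst handling without nodelay:
# the first request is forwarded immediately, queued requests are released
# one every 100 ms, and anything that would overflow the queue gets a 503.
RATE_INTERVAL_MS = 100   # 10 r/s -> one request every 100 ms
BURST = 20               # queue size

def schedule(simultaneous_requests):
    outcomes = []
    for i in range(simultaneous_requests):
        if i <= BURST:
            outcomes.append((i + 1, f"forwarded at {i * RATE_INTERVAL_MS} ms"))
        else:
            outcomes.append((i + 1, "rejected with 503"))
    return outcomes

for number, outcome in schedule(21):
    print(f"request {number}: {outcome}")
# request 1 is forwarded at 0 ms; request 21 leaves the queue at 2000 ms.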

Queueing with no delay

A configuration with the burst parameter makes traffic flow more smoothly, but it can make your site appear slow and so may not be practical. In the example above, the 20th packet in the queue waits 2 seconds to be forwarded, at which point a response to it may no longer be useful to the client. To address this situation, add the nodelay parameter along with burst:

location /login/ {
    limit_req zone=mylimit burst=20 nodelay;

    proxy_pass http://my_upstream;
}

With the nodelay parameter, Nginx still allocates slots in the queue according to the burst parameter and applies the configured rate limit, but not by spacing out the forwarding of queued requests. Instead, when a request arrives "too soon", Nginx forwards it immediately as long as a slot is available for it in the queue. That slot is marked as "taken" (occupied) and is not freed for use by another request until the appropriate time has passed (in this example, after 100 milliseconds).

Suppose, as before, that the queue has 20 slots and 21 requests arrive simultaneously from a given IP address. Nginx forwards all 21 requests immediately, marks the 20 slots in the queue as taken, and then frees one slot every 100 milliseconds. If instead 25 requests arrive simultaneously, Nginx forwards 21 of them immediately, marks 20 slots as taken, and rejects the remaining 4 requests with status code 503.

Now suppose that 101 milliseconds after the first set of requests was forwarded, another 20 requests arrive simultaneously. Only one slot in the queue has been freed, so Nginx forwards 1 request and rejects the other 19 with status code 503. If instead 501 milliseconds have passed before the 20 new requests arrive, 5 slots have been freed, so Nginx forwards 5 requests immediately and rejects the other 15.

The effect is thus equivalent to a rate limit of 10 requests per second. The nodelay parameter is useful if you want to impose a rate limit without constraining the allowed spacing between requests.
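The same arithmetic can be sketched with a small model of the slot bookkeeping (again only an illustrative approximation, not Nginx internals): each forwarded request takes a slot, and slots are released at the configured rate.

# Simplified model of limit_req with burst=20 and nodelay:
# requests are forwarded immediately while a slot is free; each taken slot
# is released at the configured rate (one every 100 ms for rate=10r/s).
RATE = 10.0      # requests per second
BURST = 20       # extra slots beyond the one "on-schedule" request

def simulate(arrival_times_ms):
    excess, last_ms, outcomes = 0.0, None, []
    for t in sorted(arrival_times_ms):
        if last_ms is not None:
            # Slots are freed in proportion to the elapsed time.
            excess = max(0.0, excess - (t - last_ms) * RATE / 1000.0)
        last_ms = t
        if excess + 1.0 > BURST + 1.0:
            outcomes.append((t, "rejected with 503"))
        else:
            excess += 1.0
            outcomes.append((t, "forwarded immediately"))
    return outcomes

# 25 simultaneous requests at t=0 ms, then 20 more at t=101 ms:
# 21 forwarded and 4 rejected, then 1 forwarded and 19 rejected.
results = simulate([0] * 25 + [101] * 20)
print(sum(1 for _, outcome in results if outcome == "forwarded immediately"))   # 22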

Note:  for most deployments, we recommend including both the burst and nodelay parameters in the limit_req directive.

Advanced configuration examples

By combining basic rate limiting with other Nginx features, we can implement more fine-grained traffic control.

Whitelisting

The following example shows how to apply rate limiting to any request that does not come from a whitelisted IP address:

geo $limit {
    default         1;
    10.0.0.0/8         0;
    192.168.0.0/24     0;
}

map $limit $limit_key {
    0 "";
    1 $binary_remote_addr;
}

limit_req_zone $limit_key zone=req_zone:10m rate=5r/s;

server {
    location / {
        limit_req zone=req_zone burst=10 nodelay;

        # ...
    }
}

This example uses both the geo and map directives. The geo block assigns the value 0 to the $limit variable for IP addresses in the whitelist and 1 for all others. The map block then turns those values into a key, as follows:

  • If $limit is 0, the $limit_key variable is set to the empty string
  • If $limit is 1, the $limit_key variable is set to the client IP address in binary form

Combining the two directives, $limit_key is set to the empty string for whitelisted IP addresses and to the client IP address otherwise. When the first parameter of limit_req_zone is the empty string, no rate limit is applied, so whitelisted IP addresses (in the 10.0.0.0/8 and 192.168.0.0/24 subnets) are not limited. All other IP addresses are limited to 5 requests per second.

The limit_req directive applies the limit to the / location block and allows bursts of up to 10 requests in excess of the configured rate, with no delay in forwarding.

Multiple limit_req directives in one location

We can include multiple limit_req directives in a single location block. All the limits that match a given request are applied, which means the most restrictive one takes effect. For example, if more than one directive imposes a delay, the longest delay is used. Similarly, a request is rejected if any directive rejects it, even if other directives would allow it through.

Extending the previous example, we can apply rate limiting to the whitelisted IP addresses as well:

http {
    # ...

    limit_req_zone $limit_key zone=req_zone:10m rate=5r/s;
    limit_req_zone $binary_remote_addr zone=req_zone_wl:10m rate=15r/s;

    server {
        # ...
        location / {
            limit_req zone=req_zone burst=10 nodelay;
            limit_req zone=req_zone_wl burst=20 nodelay;
            # ...
        }
    }
}

Whitelisted IP addresses do not match the first rate limit (req_zone), but they do match the second (req_zone_wl) and so are limited to 15 requests per second. IP addresses not on the whitelist match both limits, so the more restrictive one applies: 5 requests per second.

Configuring related features

Logging

By default, Nginx logs requests that are delayed or dropped because of rate limiting, as in this example:

2015/06/13 04:20:00 [error] 120315#0: *32086 limiting requests, excess: 1.000 by zone "mylimit", client: 192.168.1.2, server: nginx.com, request: "GET / HTTP/1.0", host: "nginx.com"

The log entry includes the following fields:

  • limiting requests - indicates that the entry records a rate-limited request
  • excess - the number of requests per millisecond over the configured rate that this request represents
  • zone - the zone defining the rate limit that was applied
  • client - the IP address of the client that made the request
  • server - the IP address or hostname of the server
  • request - the actual HTTP request made by the client
  • host - the value of the Host HTTP header

By default, Nginx logs rejected requests at the error level, as shown by [error] in the example above (delayed requests are logged one level lower than rejected ones). To change the logging level, use the limit_req_log_level directive. Here we set rejected requests to be logged at the warn level:

location /login/ {
    limit_req zone=mylimit burst=20 nodelay;
    limit_req_log_level warn;
    
    proxy_pass http://my_upstream;
} 

Error code sent to the client

By default, when a client exceeds the configured limit, Nginx responds with status code 503 (Service Temporarily Unavailable). Use the limit_req_status directive to set a different status code (444 in this example):

location /login/ {
    limit_req zone=mylimit burst=20 nodelay;
    limit_req_status 444;
}

Denying all requests for a specific location

If you want to deny all requests for a specific URL, rather than just limit their rate, simply configure the deny all directive in a location block:

location /foo.php {
    deny all;
}

Summary

This article has covered many of the rate limiting features offered by Nginx and Nginx Plus, including setting request rates for different locations and configuring rate limiting with the burst and nodelay parameters. It has also covered advanced configurations that apply different limits to whitelisted and non-whitelisted client IP addresses, and explained how to log rejected and delayed requests.

Source: https://legolasng.github.io/2017/08/27/nginx-rate-limiting/#%E6%80%BB%E7%BB%93

https://www.nginx.com/blog/rate-limiting-nginx/

Origin www.cnblogs.com/zjfjava/p/10947264.html