Nginx Rate Limiting and Circuit Breaking

1. Token Bucket Algorithm


The algorithm idea is:

Tokens are generated at a fixed rate and stored in the token bucket;

When the bucket is full, excess tokens are discarded;

A request must consume a corresponding number of tokens before it is processed;

When there are not enough tokens, the request is buffered (queued) until tokens become available.
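The steps above can be sketched in Python (a minimal, illustrative implementation; the class and parameter names are invented for this example and are unrelated to nginx's internals):

```python
import time

class TokenBucket:
    """Minimal token-bucket sketch: tokens accrue at `rate` per second,
    capped at `capacity`; a request is admitted only if enough tokens exist."""

    def __init__(self, rate, capacity):
        self.rate = rate              # tokens generated per second
        self.capacity = capacity     # bucket size; excess tokens are discarded
        self.tokens = capacity       # start with a full bucket
        self.last = time.monotonic()

    def allow(self, cost=1.0):
        now = time.monotonic()
        # refill at a fixed rate, discarding tokens beyond capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost      # the request consumes its tokens
            return True
        return False                 # not enough tokens: caller queues or rejects
```

Because the bucket starts full, a burst of up to `capacity` requests is admitted at once, which is exactly the burst tolerance discussed below.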

2. Leaky Bucket Algorithm


The algorithm idea is:

Water (requests) is poured into the bucket from above and flows out (is processed) from the bottom;

Water that cannot flow out immediately is held in the bucket (a buffer) and drains at a fixed rate;

When the bucket is full, incoming water overflows (the request is discarded).

The core of this algorithm: buffer requests, process them at a uniform rate, and directly discard the excess.
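This behavior can be sketched in Python as well (a minimal, illustrative model with invented names; real implementations drain the queue from a separate worker rather than on arrival):

```python
import time
from collections import deque

class LeakyBucket:
    """Minimal leaky-bucket sketch: requests queue in a fixed-size bucket
    and drain at a constant rate; arrivals that find the bucket full are dropped."""

    def __init__(self, rate, capacity):
        self.rate = rate              # requests drained per second
        self.capacity = capacity      # bucket (buffer) size
        self.queue = deque()
        self.last = time.monotonic()

    def _leak(self):
        now = time.monotonic()
        drained = int((now - self.last) * self.rate)
        if drained:
            self.last = now
            for _ in range(min(drained, len(self.queue))):
                self.queue.popleft()  # these requests get processed

    def offer(self, request):
        self._leak()
        if len(self.queue) >= self.capacity:
            return False              # bucket full: overflow, request discarded
        self.queue.append(request)
        return True                   # buffered; will drain at the fixed rate
```

Note that no matter how fast requests arrive, they leave the queue at `rate` per second, so the processing rate never exceeds the configured threshold.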

Compared with the leaky bucket algorithm, the token bucket algorithm has not only a "bucket" but also a queue: the bucket stores tokens, while the queue stores requests waiting for tokens.

Functionally, the most obvious difference between the two is whether burst traffic is allowed. The leaky bucket algorithm forcibly caps the real-time transmission (processing) rate of data and makes no allowance for bursts; the token bucket algorithm limits the average transmission rate while permitting a certain degree of burst transmission.

Nginx's request-rate limiting module uses the leaky bucket algorithm, which guarantees that the real-time processing rate of requests never exceeds the configured threshold.

3. Configuration case

1. limit_conn_zone

1.1 nginx configuration

http {
    limit_conn_zone $binary_remote_addr zone=one:10m;

    server {
        ......
        limit_conn one 10;
        ......
    }
}

Among them, "limit_conn one 10" can be placed at the server level (valid for the entire server) or inside a location block (valid only for that location). This configuration limits each client IP (the $binary_remote_addr key) to 10 concurrent connections.

1.2 Results

Sending 20 concurrent requests to nginx with the ab tool, the result shows Complete requests: 20 and Failed requests: 9. Since the nginx configuration allows 10 concurrent connections per IP, one might expect 10 successes; there are 11 because the counter starts from zero, permitting one extra connection. The nginx log likewise shows 9 requests returning 503.

2. limit_req_zone

2.1 nginx configuration

http {
    limit_req_zone $binary_remote_addr zone=req_one:10m rate=1r/s;

    server {
        ......
        limit_req zone=req_one burst=120;
        ......
    }
}

Among them, "limit_req zone=req_one burst=120" can be placed at the server level (valid for the entire server) or inside a location block (valid only for that location).

rate=1r/s means each address may issue only one request per second. burst=120 means the bucket can hold up to 120 excess requests, drained at 1 request per second; once all 120 slots are used up, additional requests return 503.
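This accounting can be modeled in Python (a simplified sketch of the leaky-bucket "excess" counter; the class name and per-second units are assumptions for illustration, and real nginx tracks excess in milliseconds inside a shared memory zone):

```python
class LimitReq:
    """Simplified model of limit_req accounting: an excess counter leaks at
    `rate` requests/second; a request is rejected (503) when admitting it
    would push the excess beyond `burst`."""

    def __init__(self, rate, burst):
        self.rate = rate
        self.burst = burst
        self.excess = 0.0   # requests currently queued beyond the rate
        self.last = None    # timestamp of the previous request

    def handle(self, now):
        if self.last is not None:
            # the bucket leaks continuously at the configured rate
            self.excess = max(self.excess - (now - self.last) * self.rate, 0.0)
        self.last = now
        if self.excess + 1 > self.burst:
            return 503      # burst queue full: reject
        self.excess += 1    # request accepted into the queue
        return 200
```

With rate=1 and burst=3, a burst at the same instant fills the queue after three requests, and a request arriving two seconds later is accepted again because two slots have drained.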

3. ngx_http_upstream_module

3.1 Official documentation

According to the official documentation, this module has a max_conns parameter that can limit the load on an upstream server. Unfortunately, it was originally available only in the commercial version of nginx. However, since nginx 1.11.5, the parameter has been separated from the commercial edition; in other words, it becomes usable once we upgrade the nginx 1.9.12 and 1.10 versions widely used in production (testing shows that on those older versions, adding this parameter prevents the nginx service from starting).

3.2 configuration

upstream xxxx {
    server 127.0.0.1:8080 max_conns=10;
    server 127.0.0.1:8081 max_conns=10;
}

3.3 Results

Using two machines, send 20, 30, and 40 concurrent requests to nginx with the ab tool. Regardless of the concurrency level, only 12 requests succeed, which is 2 more than the configured limit. Recall that in the results of section 1.2 the success count was likewise 1 more than the limit; since two client machines were used here, we increased the number of client machines to three, and the success count became 13.

This suggests a hypothesis: the success count exceeds the limit by one per client machine (this is only an assumption). Note two important points: max_conns applies to a single server entry in the upstream block, not to the upstream as a whole; and nginx's worker_processes parameter matters, because max_conns is enforced per worker process.


Origin blog.csdn.net/robinhunan/article/details/131008101