Nginx Rate-Limiting Configuration

This article uses examples to explain Nginx rate-limiting configuration step by step, as a supplement to the rather brief official documentation.

Nginx rate limiting uses the leaky bucket algorithm. If you are interested in the algorithm, you can first read about it on Wikipedia; however, not knowing the algorithm will not prevent you from following this article.

Author: Programmer Zhao Xin

https://www.cnblogs.com/xinzhao/p/11465297.html

Empty bucket

We start with the simplest rate-limiting configuration:


limit_req_zone $binary_remote_addr zone=ip_limit:10m rate=10r/s;

server {
    location /login/ {
        limit_req zone=ip_limit;
        proxy_pass http://login_upstream;
    }
}
  • $binary_remote_addr — rate-limit by client IP;
  • zone=ip_limit:10m — the rule is named ip_limit and may use up to 10MB of memory to store per-IP limiter state;
  • rate=10r/s — limit the rate to 10 requests per second;
  • location /login/ — apply the limit to the login endpoint.

With the rate limited to 10 requests per second, if 10 requests arrive simultaneously at an idle nginx, will they all be executed?


The leaky bucket leaks out requests at a constant rate. What does a constant 10r/s look like? One request leaks out every 100ms.

With this configuration the bucket size is zero, so any request that cannot leak out in real time is rejected.

So if 10 requests arrive at the same time, only one of them will be executed; the other nine are rejected.

This is not very friendly. In most business scenarios, we would prefer all 10 requests to be executed.
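To make this concrete, the behaviour can be modelled in a few lines of Python. This is a minimal sketch under our own simplified accounting — the name make_limiter and its internals are illustrative, not nginx source code:

```python
def make_limiter(rate, burst=0):
    """Hedged sketch of the leaky-bucket counter behind limit_req
    (simplified model, not nginx's exact millisecond accounting).
    `excess` is how full the bucket is; it drains at `rate`
    requests/second, and a request is rejected once accepting it
    would overflow the bucket."""
    state = {"excess": 0.0, "last": None}

    def allow(now):
        if state["last"] is not None:
            # leak: drain the bucket for the time elapsed since the last request
            state["excess"] = max(state["excess"] - (now - state["last"]) * rate, 0.0)
        state["last"] = now
        if state["excess"] + 1 > burst + 1:
            return False            # bucket full -> HTTP 503
        state["excess"] += 1
        return True

    return allow

# 10 requests hit an idle nginx at the same instant, rate=10r/s, no burst:
allow = make_limiter(rate=10)
print([allow(0.0) for _ in range(10)].count(True))   # 1 -- only one passes
```

A request arriving 100ms later would pass again, which is exactly the "one leak per 100ms" pace described above.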

Burst

Let's change the configuration to solve this problem:


limit_req_zone $binary_remote_addr zone=ip_limit:10m rate=10r/s;

server {
    location /login/ {
        limit_req zone=ip_limit burst=12;
        proxy_pass http://login_upstream;
    }
}
  • burst=12 — set the leaky bucket's size to 12.


Logically, the leaky bucket is implemented as a FIFO queue, where requests that cannot be executed yet are temporarily cached.

The leak rate is still one request per 100ms, but requests that arrive concurrently and cannot be executed immediately can now be cached. Only when the queue is full does nginx start rejecting new requests.

In this way the leaky bucket not only limits the request rate but also smooths out traffic peaks.

With this configuration, if 10 requests arrive simultaneously, they are executed in turn, one every 100ms.

Although all 10 requests are now executed, they are executed by queuing, and the added latency is still unacceptable in many scenarios.
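A hedged sketch of where those delays come from: in this simplified model (our own accounting, not nginx's exact internals), each queued request waits until its slot in the bucket has leaked out:

```python
def simulate_burst(n, rate, burst):
    """Hedged sketch: n requests all arrive at t=0 with
    `limit_req ... burst=...` and no nodelay. Requests that fit in
    the bucket are queued and leak out at `rate` per second; returns
    each request's start delay in seconds, or None if rejected.
    Simplified model, not nginx source."""
    excess = 0.0            # requests currently sitting in the bucket
    delays = []
    for _ in range(n):
        if excess + 1 > burst + 1:
            delays.append(None)             # bucket full -> 503
        else:
            delays.append(excess / rate)    # wait for its turn to leak out
            excess += 1
    return delays

# one request starts every 100 ms: delays 0.0, 0.1, ..., 0.9 seconds
print(simulate_burst(10, rate=10, burst=12))
```

The last of the 10 requests only starts 900ms after it arrived, which is the latency problem the next section addresses.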

NoDelay

We continue to modify the configuration, to solve the problem of the long queuing delays:


limit_req_zone $binary_remote_addr zone=ip_limit:10m rate=10r/s;

server {
    location /login/ {
        limit_req zone=ip_limit burst=12 nodelay;
        proxy_pass http://login_upstream;
    }
}
  • nodelay — moves the start of execution forward: previously a request was delayed until it leaked out of the bucket; now there is no delay, and a request is executed as soon as it enters the bucket.


A request is either executed immediately or rejected; requests no longer incur extra latency because of rate limiting.

Because requests still leak out of the bucket at a constant rate and the bucket's capacity is fixed, on average nginx still executes 10 requests per second, so the goal of rate limiting is achieved.

But this also has a drawback: the rate is limited, but not limited smoothly. Taking the configuration above as an example, if 12 requests arrive at the same time, all 12 can execute immediately; subsequent requests can then only enter the bucket at the steady pace, one executed per 100ms. And if there are no requests for a while and the bucket empties, another 12 concurrent requests may once again execute all together.
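These uneven bursts are easy to reproduce with a small standalone Python model (our own simplified accounting; the name nodelay_limiter is illustrative, not nginx source):

```python
def nodelay_limiter(rate, burst):
    """Hedged sketch of limit_req with nodelay: a request runs
    immediately if it fits into the bucket, otherwise it is rejected;
    the bucket still drains at `rate` requests/second. Simplified
    model, not nginx's exact millisecond accounting."""
    state = {"excess": 0.0, "last": None}

    def allow(now):
        if state["last"] is not None:
            # drain the bucket for the time elapsed since the last request
            state["excess"] = max(state["excess"] - (now - state["last"]) * rate, 0.0)
        state["last"] = now
        if state["excess"] + 1 > burst + 1:
            return False        # bucket full -> 503
        state["excess"] += 1
        return True

    return allow

allow = nodelay_limiter(rate=10, burst=12)
print([allow(0.0) for _ in range(12)].count(True))   # 12: all run at once
# after 1.3 s of silence the bucket has fully drained:
print([allow(1.3) for _ in range(12)].count(True))   # 12: another full burst
```

So the average rate holds, but instantaneous concurrency can spike to the bucket size whenever the bucket has had time to empty.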

In most cases this unevenness is not a big problem. However, nginx also provides a parameter to control the number of requests that execute concurrently, i.e., without delay:


limit_req_zone $binary_remote_addr zone=ip_limit:10m rate=10r/s;

server {
    location /login/ {
        limit_req zone=ip_limit burst=12 delay=4;
        proxy_pass http://login_upstream;
    }
}

  • delay=4 — requests are executed without delay only while no more than 4 of them are queued in the bucket; beyond that they are delayed as before (the delay parameter is available since nginx 1.15.7).
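Under that reading of the nginx documentation, 15 simultaneous requests against burst=12 delay=4 would split into three groups. A hedged classification sketch (simplified thresholds, our own accounting, not nginx's exact internals):

```python
def classify(n, burst=12, delay=4):
    """Classify n simultaneous requests under `burst=12 delay=4`,
    following the documented semantics in a simplified way: the part
    of the bucket up to `delay` runs immediately, the rest up to
    `burst` is delayed, and overflow is rejected."""
    out = []
    for excess in range(n):         # bucket fill level seen by each request
        if excess + 1 > burst + 1:
            out.append("rejected")  # bucket overflow -> 503
        elif excess < delay:
            out.append("immediate") # within the delay-free allowance
        else:
            out.append("delayed")   # queued, leaks out at the steady rate
    return out

c = classify(15)
print(c.count("immediate"), c.count("delayed"), c.count("rejected"))   # 4 9 2
```

This gives a middle ground between the fully queued and fully nodelay behaviours: a small concurrent burst is tolerated, the rest is smoothed.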

