Rate Limiting in Practice (Part 2)

Common rate-limiting approaches

What are the common rate-limiting techniques in application development? The techniques themselves are actually fairly simple; the key problem is limiting highly concurrent services efficiently. To implement efficient, effective rate limiting at the load-balancer layer, the common practice is to build the limiting service with Nginx + Lua or Nginx + Redis, which is why the WAF frameworks commonly seen on the market are built on OpenResty. Let's look at a few of the more common rate-limiting patterns.

Rate limiting with OpenResty and a shared-memory counter

First, the code:

lua_shared_dict limit_counter 10m;
server {
    listen 80;
    server_name www.test.com;

    location / {
        root html;
        index index.html index.htm;
    }

    location /test {
        access_by_lua_block {
            local function countLimit()
                local limit_counter = ngx.shared.limit_counter
                -- unique key: client IP + User-Agent + URI + host
                local key = ngx.var.remote_addr .. (ngx.var.http_user_agent or "") .. ngx.var.uri .. ngx.var.host
                local md5Key = ngx.md5(key)
                local limit = 10  -- at most 10 requests
                local exp = 300   -- per 300-second (5-minute) window
                local current = limit_counter:get(md5Key)
                if current ~= nil and current + 1 > limit then
                    return 1
                end
                if current == nil then
                    limit_counter:add(md5Key, 1, exp)
                else
                    limit_counter:incr(md5Key, 1)
                end
                return 0
            end

            local ret = countLimit()
            if ret > 0 then
                ngx.exit(405)
            end
        }
        content_by_lua 'ngx.say(111)';
    }
}

A quick walk-through of this simple code: the unique key is the MD5 of the combination of client IP, User-Agent, URI and host, so each user is allowed only 10 requests to a given URI within 5 minutes. Once a user exceeds 10 requests, the 405 status code is returned; below 10, processing continues to the later phases.
And the result:

curl http://www.test.com/test
111
curl http://www.test.com/test
111
curl http://www.test.com/test
111
curl http://www.test.com/test
111
curl http://www.test.com/test
111
curl http://www.test.com/test
111
curl http://www.test.com/test
111
curl http://www.test.com/test
111
curl http://www.test.com/test
111
curl http://www.test.com/test
<html>
<head><title>405 Not Allowed</title></head>
<body bgcolor="white">
<center><h1>405 Not Allowed</h1></center>
<hr><center>openresty/1.13.6.2</center>
</body>
</html>

This is a simple counter-based rate limiter.
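
One caveat: the get() followed by add()/incr() above is not atomic, so two concurrent requests can both read nil and race on the counter. Below is a minimal sketch of a tighter variant, assuming OpenResty 1.13.6.1 or later, where the shared-dict incr() method accepts init and init_ttl arguments:

local function countLimit()
    local limit_counter = ngx.shared.limit_counter
    local md5Key = ngx.md5(ngx.var.remote_addr .. (ngx.var.http_user_agent or "")
                           .. ngx.var.uri .. ngx.var.host)
    -- incr() creates the key with value 0 and a 300s TTL when it is
    -- missing, then adds 1, all as one shared-dict operation
    local current, err = limit_counter:incr(md5Key, 1, 0, 300)
    if not current then
        ngx.log(ngx.ERR, "incr failed: ", err)
        return 0 -- fail open on shared-dict errors
    end
    if current > 10 then
        return 1
    end
    return 0
end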

Limiting connections and request rates with the OpenResty limit modules

The module for limiting the number of connections and the request rate is lua-resty-limit-traffic. It is based on the leaky-bucket principle we described earlier: water is poured into a pool while the pool drains at a fixed rate. Here the inflow rate is the rate of new requests/connections, and the drain rate is the configured limit. When water flows in faster than it drains (the pool starts to fill), a delay value is returned, and the caller slows the inflow down with ngx.sleep(delay). When the pool is full (the current number of requests/connections exceeds the configured burst), the error "rejected" is returned, and the caller has to drop that overflowing part of the traffic.
Here is the configuration:

http {
    lua_shared_dict my_req_store 100m;
    lua_shared_dict my_conn_store 100m;

    server {
        location / {
            access_by_lua_block {
                local limit_conn = require "resty.limit.conn"
                local limit_req = require "resty.limit.req"
                local limit_traffic = require "resty.limit.traffic"

                -- 300 req/s per host; between 300 and 450 (rate + burst 150)
                -- requests are delayed roughly 0.5s, beyond 450 they get 503
                local lim1, err = limit_req.new("my_req_store", 300, 150)
                -- 200 req/s per client with a burst of 100
                local lim2, err = limit_req.new("my_req_store", 200, 100)
                -- 1000 concurrent connections per client with a burst of 1000:
                -- between 1000 and 2000 a delay of about 1s is applied, beyond
                -- 2000 the connection gets 503; each connection is estimated
                -- to take about 0.5s
                local lim3, err = limit_conn.new("my_conn_store", 1000, 1000, 0.5)
                local limiters = {lim1, lim2, lim3}

                local host = ngx.var.host
                local client = ngx.var.binary_remote_addr
                local keys = {host, client, client}

                local states = {}
                local delay, err = limit_traffic.combine(limiters, keys, states)
                if not delay then
                    if err == "rejected" then
                        return ngx.exit(503)
                    end
                    ngx.log(ngx.ERR, "failed to limit traffic: ", err)
                    return ngx.exit(500)
                end

                if lim3:is_committed() then
                    local ctx = ngx.ctx
                    ctx.limit_conn = lim3
                    ctx.limit_conn_key = keys[3]
                end

                print("sleeping ", delay, " sec, states: ",
                      table.concat(states, ", "))

                if delay >= 0.001 then
                    ngx.sleep(delay)
                end
            }
            log_by_lua_block {
                local ctx = ngx.ctx
                local lim = ctx.limit_conn
                if lim then
                    -- record the connection leaving and feed back the real
                    -- request time so the estimated per-connection delay
                    -- is adjusted dynamically
                    local latency = tonumber(ngx.var.request_time)
                    local key = ctx.limit_conn_key
                    local conn, err = lim:leaving(key, latency)
                    if not conn then
                        ngx.log(ngx.ERR,
                                "failed to record the connection leaving ",
                                "request: ", err)
                        return
                    end
                end
            }
        }
    }
}

The inline comments roughly explain the parameters; the details can be found in the official documentation:
https://github.com/openresty/lua-resty-limit-traffic
Note that for connection limiting there is a leaving() call in the log phase, which feeds the real request time back to adjust the estimated per-connection delay. Do not forget to call leaving().
One pitfall that took me quite a while to notice: when load-testing the effect, you need an ngx.sleep in the content phase. Otherwise a trivial handler, under no real pressure, is finished by Nginx instantly and no delay will ever show up. So add a sleep to the content phase to simulate processing time, and then you can measure the effect.
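
For example, a minimal way to simulate processing time during a load test (the 0.05s value is just an illustration):

content_by_lua_block {
    -- simulate ~50ms of backend work so that limiter delays become
    -- visible under load; a handler that returns instantly never
    -- builds up any queue, and the delay stays at 0
    ngx.sleep(0.05)
    ngx.say("done")
}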

Dynamic rate limiting with OpenResty shared memory

In practice, when an attack or a traffic flood hits, my usual workflow used to be: first notice from the service logs that there is abnormal traffic, then query for the attacking IPs or UIDs, and finally ban those IPs or UIDs. That is always one step behind. What we should do is analyze and intercept the traffic dynamically as it arrives, rather than intercept after the fact; with lagging interception the service may well have been killed by the flood already.
Dynamic rate limiting builds on the counter-based technique above.

lua_shared_dict limit_counter 10m;
server {
    listen 80;
    server_name www.test.com;

    location / {
        root html;
        index index.html index.htm;
    }

    location /test {
        access_by_lua_block {
            local function countLimit()
                local limit_counter = ngx.shared.limit_counter
                local key = ngx.var.remote_addr .. (ngx.var.http_user_agent or "") .. ngx.var.uri .. ngx.var.host
                local md5Key = ngx.md5(key)
                local limit = 5        -- threshold: 5 requests
                local exp = 120        -- per 2-minute window
                local disable = 7200   -- blacklist for 2 hours
                local disableKey = md5Key .. ":disable"
                -- already blacklisted? reject immediately
                local disableRt = limit_counter:get(disableKey)
                if disableRt then
                    return 1
                end
                local current = limit_counter:get(md5Key)
                if current ~= nil and current + 1 > limit then
                    -- threshold hit: blacklist this key for 2 hours
                    limit_counter:set(disableKey, 1, disable)
                    return 1
                end
                if current == nil then
                    limit_counter:add(md5Key, 1, exp)
                else
                    limit_counter:incr(md5Key, 1)
                end
                return 0
            end

            local ret = countLimit()
            if ret > 0 then
                ngx.exit(405)
            end
        }
        content_by_lua 'ngx.say(111)';
    }
}

The result:

curl http://www.test.com/test
111
curl http://www.test.com/test
111
curl http://www.test.com/test
111
curl http://www.test.com/test
111
curl http://www.test.com/test
111
curl http://www.test.com/test
<html>
<head><title>405 Not Allowed</title></head>
<body bgcolor="white">
<center><h1>405 Not Allowed</h1></center>
<hr><center>openresty/1.13.6.2</center>
</body>
</html>

The general idea is simple: once a request triggers the threshold (5 times within 2 minutes), the request's unique key is put on a blacklist for 2 hours, and any later request whose key is found on the blacklist gets a 405 immediately. If the threshold is not triggered, the counter for the unique key is incremented by 1; the counter expires after two minutes, and counting starts over after that. This basically satisfies our need for dynamic rate limiting.
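
If you need to lift a ban before the 2 hours are up, the disable key can simply be deleted from the shared dict. A hypothetical admin location, restricted to localhost (the /unban path and the key query argument are made up for illustration):

location /unban {
    allow 127.0.0.1;
    deny all;
    content_by_lua_block {
        -- the md5 key must be built exactly as in countLimit() above
        local md5Key = ngx.req.get_uri_args()["key"]
        if md5Key then
            ngx.shared.limit_counter:delete(md5Key .. ":disable")
            ngx.say("unbanned")
        else
            ngx.exit(400)
        end
    }
}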

Finally

The three approaches above are the rate-limiting methods I encounter most at work. The second one is the official OpenResty module, and it can satisfy the vast majority of rate-limiting needs and protect the service. Simple counting limits are very easy to implement with OpenResty plus shared.DICT, and swapping the shared.DICT for Redis gives you distributed rate limiting. Of course, there are also many excellent open-source gateway frameworks with WAF features on the market, such as Kong and Orange, which are widely used and adopted by many large companies, and the recently popular APISIX. If there is interest, we can dig into those next.
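
As a rough idea of what the Redis variant looks like, here is a minimal sketch using the bundled lua-resty-redis library; the Redis address, key name and limits are placeholders:

access_by_lua_block {
    local redis = require "resty.redis"
    local red = redis:new()
    red:set_timeout(100) -- 100ms connect/read timeout

    local ok, err = red:connect("127.0.0.1", 6379)
    if not ok then
        ngx.log(ngx.ERR, "failed to connect to redis: ", err)
        return -- fail open if Redis is unreachable
    end

    -- one counter per client+URI, shared by every Nginx node
    local key = "limit:" .. ngx.md5(ngx.var.remote_addr .. ngx.var.uri)
    local count, err = red:incr(key)
    if not count then
        ngx.log(ngx.ERR, "failed to incr: ", err)
        return
    end
    if count == 1 then
        -- first hit in this window: start the 5-minute TTL
        red:expire(key, 300)
    end

    red:set_keepalive(10000, 100) -- put the connection back in the pool

    if count > 10 then
        ngx.exit(405)
    end
}

Note that INCR and EXPIRE here are two round trips and not atomic; in production you would typically wrap them in a small Lua script executed with EVAL so the counter and its TTL are set together.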

Previous: Rate Limiting in Practice (Part 1)

Source: www.cnblogs.com/feixiangmanon/p/11495314.html