A rate-limiting example based on the token bucket algorithm

Rate limiting and degradation
 
The purpose of rate limiting is to protect the stability of core services. It is typically applied when a downstream service has limited capacity and a sudden surge of traffic (for example, malicious crawlers or holiday promotions) would otherwise overload it into denial of service. The two common modes of limiting are concurrency control, which caps the number of simultaneous requests, and rate control, which caps the frequency of access.
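
As a toy illustration of single-node concurrency control, an access_by_lua* phase can count in-flight requests in an nginx shared dict (the dict name conc and the limit of 100 are assumptions for this sketch; it requires lua_shared_dict conc 1m; in nginx.conf):

-- count in-flight requests per api and refuse beyond a cap
local conc = ngx.shared.conc
local inflight = conc:incr("api_1", 1, 0)   -- atomic increment, initialized at 0
if inflight > 100 then                      -- more than 100 concurrent requests
    conc:incr("api_1", -1)
    return ngx.exit(ngx.HTTP_SERVICE_UNAVAILABLE)
end
-- decrement with conc:incr("api_1", -1) in a log_by_lua* phase when the request finishes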
 
About rate limiting
 
Rate-limiting (and degradation) methods include the token bucket, the leaky bucket, counters, and so on; what we need to understand here is rate limiting based on the token bucket algorithm.
 
Rate limiting can be divided into single-node and distributed. A distributed implementation requires a shared backing store for the common services, such as redis; each nginx node then uses lua to read the configuration information from redis.
 
About degradation
 
When service pressure surges, degradation applies a policy, based on the current business situation and traffic, that strategically downgrades some services and pages in order to release server resources for the core tasks, while ensuring that some, or even most, customers still receive a correct response. A request to a degraded part is either not processed, processed by a fallback, or answered with a default value.
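
As a tiny sketch of what a degradation switch can look like in an access phase (the flag name degrade_api_1 and the fallback body are assumptions; ngx.shared.load is the shared dict declared in the nginx.conf below):

-- if the degradation flag is on, answer with a default payload
-- instead of forwarding to the overloaded backend
local degraded = ngx.shared.load:get("degrade_api_1")
if tonumber(degraded) == 1 then
    ngx.header.content_type = "application/json"
    ngx.say('{"code":0,"data":[],"msg":"degraded default response"}')
    return ngx.exit(ngx.HTTP_OK)
end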
 
Description of the token bucket
 
 
The token bucket algorithm keeps a bucket of fixed capacity, into which tokens are added at a fixed rate:
 
  • Assume the limit is r requests per second; then one token is added to the bucket every 1/r seconds
  • The bucket holds at most b tokens; when the bucket is full, a newly added token is discarded
  • When a packet of n bytes arrives, n tokens are removed from the bucket and the packet is sent to the network
  • If fewer than n tokens are available, no tokens are removed and the packet is rate limited (either dropped or left waiting in a buffer)
There are two approaches to generating the tokens in the bucket:
 
1. Start a scheduled task that generates tokens continuously. The problem is the enormous consumption of system resources: suppose an interface needs a per-user access-frequency limit and the system has 60,000 users; then up to 60,000 timers would be needed just to maintain the token count in each bucket, which is a prohibitive overhead.
 
2. Compute the tokens lazily, before each acquisition. The idea of the implementation: if the current time is later than nextFreeTicketMicros, calculate how many tokens could have been generated in the elapsed interval, add them to the bucket, and update the state. This way the computation happens only once per token acquisition, as shown in the formula and sketch below.
 
Token bucket algorithm:

    tokens to add = (current time - time of the last visit, in ms) / 1000 * tokens generated per second
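
To make approach 2 concrete, here is a minimal single-node sketch of the lazy refill in Lua (the bucket table and function name are illustrative assumptions; the distributed redis version follows below):

-- bucket = { max_burst = capacity, rate = tokens per second,
--            curr_permits = remaining tokens, last_ms = last access time in ms }
local function try_acquire(bucket, permits, now_ms)
    if bucket.last_ms then
        -- tokens produced at the fixed rate since the last access
        local produced = math.floor((now_ms - bucket.last_ms) / 1000 * bucket.rate)
        -- bursts are allowed, but never beyond the bucket capacity
        bucket.curr_permits = math.min(bucket.curr_permits + produced, bucket.max_burst)
    else
        bucket.curr_permits = bucket.max_burst -- first access starts with a full bucket
    end
    bucket.last_ms = now_ms
    if bucket.curr_permits >= permits then
        bucket.curr_permits = bucket.curr_permits - permits
        return true  -- token acquired
    end
    return false     -- rate limited
end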
 
Specific steps are as follows:
 
1) In the redis cluster, set the maximum number of tokens the bucket can hold, the number of tokens generated per second, and the current number of tokens remaining in the bucket
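
For example, the bucket for one API can be seeded once via redis-cli (HMSET {api_1_2000} max_burst 10 rate 2) or, as a sketch, through the same cluster client that access.lua builds below; the key name and the numbers are just the values this example assumes:

-- seed the bucket for "{api_1_2000}": capacity 10 tokens, refill rate 2 tokens/second;
-- curr_permits and last_second are written by the limiting script on first access
local ok, err = red_c:hmset("{api_1_2000}", "max_burst", 10, "rate", 2)
if not ok then
    ngx.log(ngx.ERR, "failed to seed the token bucket: ", err)
end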

2) Configure the distribution (nginx) layer

user  root;
worker_processes  2;
daemon off; # keep nginx in the foreground (do not run as a daemon)
#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;


events {
   worker_connections  20480; # maximum number of client connections per worker process
}


http {
    include       mime.types;
    #default_type  application/octet-stream;
    lua_code_cache off; # disable lua code caching for development; remove this once live
    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;
    lua_shared_dict load 20k;
    lua_shared_dict redis_cluster_slot_locks 100k;
    lua_shared_dict redis_cluster_addr 20k;
    lua_shared_dict my_upstream_dict 1m;
    lua_package_path "/usr/local/openresty/lualib/project/common/lualib/?.lua;;/usr/local/openresty/lualib/project/common/resty-redis-cluster/lib/?.lua;;";
    lua_package_cpath "/usr/local/openresty/lualib/project/common/resty-redis-cluster/src/?.so;;";
    init_worker_by_lua_file /usr/local/openresty/lualib/project/init.lua;
    access_by_lua_file /usr/local/openresty/lualib/project/access.lua;
    gzip  on;
	# upstream application-layer servers
	# dynamic load balancing: consistent hash on $key
    upstream swoole_server_hash{
        hash $key;
        server 114.67.105.89:8002;
		upsync 114.67.105.89:8700/v1/kv/upstreams/servers upsync_timeout=20s upsync_interval=500ms upsync_type=consul strong_dependency=on;
		upsync_dump_path /usr/local/openresty/nginx/conf/servers.conf; # dump the discovered servers to a config file
		include /usr/local/openresty/nginx/conf/servers.conf;
    }
    # least connections
    upstream swoole_server_conn{
        least_conn;
        server 114.67.105.89:8002;
        upsync 114.67.105.89:8700/v1/kv/upstreams/servers upsync_timeout=20s upsync_interval=500ms upsync_type=consul strong_dependency=on;
        upsync_dump_path /usr/local/openresty/nginx/conf/servers.conf; # dump the discovered servers to a config file
        include /usr/local/openresty/nginx/conf/servers.conf;
    }
    # round robin
    upstream swoole_server_round{
        server 114.67.105.89:8002;
        upsync 114.67.105.89:8700/v1/kv/upstreams/servers upsync_timeout=20s upsync_interval=500ms upsync_type=consul strong_dependency=on;
        upsync_dump_path /usr/local/openresty/nginx/conf/servers.conf; # dump the discovered servers to a config file
        include /usr/local/openresty/nginx/conf/servers.conf;
    }
	server {
        listen       80;
		# route match rule: e.g. jd.com/546546.html captures 546546 into $key
        if ( $request_uri ~* \/(\d+).html$) {
            set $key $1;
        }
        location /{
            set_by_lua_file $swoole_server /usr/local/openresty/lualib/project/set.lua;
			proxy_set_header Host $host; 
			proxy_set_header X-Real-IP $remote_addr; 
			proxy_set_header REMOTE-HOST $remote_addr; 
			proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; 
			client_max_body_size 50m; 
			client_body_buffer_size 256k; 
			proxy_connect_timeout 30; 
			proxy_send_timeout 30; 
			proxy_read_timeout 60; 
			proxy_buffer_size 256k; 
			proxy_buffers 4 256k; 
			proxy_busy_buffers_size 256k; 
			proxy_temp_file_write_size 256k; 
			proxy_max_temp_file_size 128m; 
			proxy_pass http://$swoole_server;		
         }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        location ~ \.php/?.*   {
            root           /var/www/html; # path inside the php-fpm container, not an nginx path
            fastcgi_pass   114.67.105.89:9002; # port of the corresponding container
            fastcgi_index  index.php;
            # document root passed to php-fpm
            fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name; # this line was added
            # define the variable $path_info to hold the pathinfo portion of the URI
            set $path_info "";
            if ($fastcgi_script_name ~ "^(.+?\.php)(/.+)$") {
                # assign the script path to the variable $real_script_name
                set $real_script_name $1;
                # assign the trailing path after the script to the variable $path_info
                set $path_info $2;
            }
            # remaining fastcgi parameters
            fastcgi_param SCRIPT_NAME $real_script_name;
            fastcgi_param PATH_INFO $path_info;
            include       /usr/local/openresty/nginx/conf/fastcgi_params;
       }
    }
}

3) init.lua script

    -- fetch the redis cluster node list from consul at worker startup
    local delay = 5
    local handler
    handler = function (premature)
        local resty_consul = require('resty.consul')
        local consul = resty_consul:new({
              host            = "114.67.105.89",
              port            = 8700,
              connect_timeout = (60*1000), -- 60s
              read_timeout    = (60*1000), -- 60s
          })
        -- read the flag that decides which load-balancing strategy to use
        local res, err = consul:get_key("load")
        if not res then
          ngx.log(ngx.ERR, err)
          return
        end
        ngx.log(ngx.ERR, "fetched the load-balancing switch flag: ", res.body[1].Value)
        ngx.shared.load:set("load", res.body[1].Value)
        local res, err = consul:list_keys("redis-cluster") -- Get all keys
        if not res then
            ngx.log(ngx.ERR, err)
            return
        end
        -- collect the cluster node addresses
        local keys = {}
        if res.status == 200 then
            keys = res.body
        end
        local ip_addr = ''
        for key, value in ipairs(keys) do
            local res, err = consul:get_key(value)
            if not res then
                ngx.log(ngx.ERR, err)
                return
            end
            if key == #keys then
                ip_addr = ip_addr .. res.body[1].Value
            else
                ip_addr = ip_addr .. res.body[1].Value .. ","
            end
        end
        -- store the comma-separated "ip:port" list for access.lua to read
        ngx.shared.redis_cluster_addr:set("redis_addr", ip_addr)
    end
    if 0 == ngx.worker.id() then
        -- run once immediately; ngx.timer.at fires only once
        local ok, err = ngx.timer.at(0, handler)
        if not ok then
            ngx.log(ngx.ERR, "failed to create the initial timer: ", err)
            return
        end
        -- then re-run the handler every `delay` seconds
        local ok, err = ngx.timer.every(delay, handler)
        if not ok then
            ngx.log(ngx.ERR, "failed to create the periodic timer: ", err)
            return
        end
        ngx.log(ngx.ERR, "----- worker process started")
    end

4) set.lua script

-- pick the upstream group according to the flag stored by init.lua
local flag = ngx.shared.load:get("load")
local load_balance = ''
if tonumber(flag) == 1 then
    load_balance = "swoole_server_round"
elseif tonumber(flag) == 2 then
    load_balance = "swoole_server_conn"
else
    load_balance = "swoole_server_hash"
end
ngx.log(ngx.ERR, load_balance)
return load_balance
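
Switching the strategy at runtime is then just a matter of updating the load key in consul (for example, consul kv put load 1); init.lua picks up the new value within delay seconds.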

5) access.lua script

ngx.header.content_type = "text/html;charset=utf-8"
local ngx_re_split = require("ngx.re").split
local redis_cluster = require "rediscluster"
local redis_list = {}
local ip_addr = ngx.shared.redis_cluster_addr:get("redis_addr")
local ip_addr_table = ngx_re_split(ip_addr,",")
for key,value in ipairs(ip_addr_table) do
    local ip_addr = ngx_re_split(value,":")
    redis_list[key] = {ip=ip_addr[1],port=ip_addr[2]}
end
local config = {
    name = "testCluster",                   -- rediscluster name
    serv_list = redis_list,
    keepalive_timeout = 60000,              -- redis connection pool idle timeout
    keepalive_cons = 1000,                  -- redis connection pool size
    connect_timeout = 1000,                 -- timeout while connecting
    max_redirection = 5,                    -- maximum retry attempts on redirection
    auth = "binleen"                        -- password set while configuring auth
}
local red_c = redis_cluster:new(config)
ngx.update_time() -- refresh nginx's cached time
local key = "{api_1_2000}"
--ngx.say(string.format("%.3f",ngx.now())*1000)
-- run the token bucket logic as a lua script embedded in redis
local res, err = red_c:eval([[
    -- KEYS[1] identifies which application/api the request is for
    local app_name = KEYS[1]
    local rate_limit = redis.call('HMGET',app_name,'max_burst','rate','curr_permits','last_second')
    local max_burst = tonumber(rate_limit[1])         -- maximum capacity of the token bucket (pre-set in the redis cluster)
    local rate = tonumber(rate_limit[2])              -- tokens generated per second (the rate)
    local curr_permits = tonumber(rate_limit[3])      -- tokens currently left in the bucket
    local last_second = tonumber(rate_limit[4])       -- time of the last access
    local curr_second = ARGV[1]                       -- time of the current access
    local permits = ARGV[2]                           -- tokens consumed by this request
    local default_curr_permits = max_burst            -- default token count on first access
    -- a recorded last-access time means this is not the first acquisition
    if last_second ~= nil then
         -- tokens produced at the configured rate since the last access
         local reverse_permits = math.floor((curr_second - last_second)/1000*rate)
         -- a short interval still allows a burst up to the accumulated amount
         local expect_curr_permits = reverse_permits + curr_permits
         -- the usable amount can never exceed the bucket capacity
         default_curr_permits = math.min(expect_curr_permits,max_burst)
    else
       -- first access: record the time; remaining tokens = default - consumed
       local res = redis.call("HMSET",app_name,"last_second",curr_second,"curr_permits",default_curr_permits - permits)
       if res.ok == "OK" then -- status replies arrive as the table {ok = "OK"}
            return 1
       end
    end
    -- if usable tokens - requested tokens >= 0, the acquisition succeeds
    if (default_curr_permits - permits) >= 0 then
        -- record the access time and the remaining token count
        redis.call("HMSET",app_name,"last_second",curr_second,"curr_permits",default_curr_permits - permits)
        return 1
    else
        -- fewer than 0 would remain, so there are not enough tokens
         redis.call("HMSET",app_name,"curr_permits",default_curr_permits)
         return 0
    end
]],1,key,(string.format("%.3f",ngx.now())*1000),1)
-- in the cluster the key must be set up beforehand; assume each api has a number
if tonumber(res) == 1 then
     ngx.say("token acquired")
else
    ngx.say("no token available")
end
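
In a real deployment the request would normally be rejected instead of answered with a message; a minimal sketch (the 503 status is an assumption, not part of the original):

if tonumber(res) ~= 1 then
    -- no token available: refuse the request instead of proxying it
    ngx.status = ngx.HTTP_SERVICE_UNAVAILABLE
    ngx.say("rate limited, please retry later")
    return ngx.exit(ngx.HTTP_OK) -- finish the request here, keeping the 503 status
end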

 
