First, the scenario
In server-side interface development, to prevent clients from abusing the interface and to protect server resources, we generally limit the number of calls allowed on each server interface. For example, the number of times a single user may call a server interface within a given time window (interval), say one minute, must not exceed an upper limit (limit), say 100 calls. If a user exceeds that limit, the server rejects further requests from the user directly and returns an error message.
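The per-user "limit per interval" rule above can be sketched as a simple fixed-window counter. This is a minimal in-memory illustration in Python (the article's actual implementation, shown later, uses PHP + Redis); the class and parameter names are mine:

```python
import time

class FixedWindowLimiter:
    """Reject a user's calls once they exceed `limit` requests per `interval` seconds."""

    def __init__(self, limit=100, interval=60):
        self.limit = limit
        self.interval = interval
        self.windows = {}  # uid -> (window_start_time, count)

    def allow(self, uid, now=None):
        now = time.time() if now is None else now
        start, count = self.windows.get(uid, (now, 0))
        if now - start >= self.interval:   # window expired: start a fresh one
            start, count = now, 0
        if count >= self.limit:            # over the limit: reject the request
            self.windows[uid] = (start, count)
            return False
        self.windows[uid] = (start, count + 1)
        return True
```

For example, with `limit=2, interval=60`, a user's first two calls in a minute pass and the third is rejected until the window rolls over.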
Common flow-control strategies for service interfaces are: traffic splitting, degradation, and rate limiting. This article discusses the rate-limiting strategy: although it reduces the frequency and concurrency of access to a service interface, in exchange it keeps the service interface and the business application system highly available.
Second, common rate-limiting algorithms
1. The leaky bucket algorithm
The idea behind the leaky bucket (Leaky Bucket) algorithm is very simple: water (requests) first flows into the bucket, and the bucket leaks water at a constant rate (the rate at which the interface responds). When the inflow rate exceeds the leak rate, the water overflows (the access frequency exceeds the interface's response rate) and the excess requests are rejected directly. In this way the leaky bucket algorithm imposes a hard cap on the data transmission rate.
Two parameters govern the algorithm: the size of the bucket (burst), i.e. how much water it can hold to absorb a sudden traffic spike, and the size of the hole in the bucket (rate), i.e. how fast it drains.
Because the leak rate of the leaky bucket is a fixed parameter, even when there is no contention for network resources (no congestion), the leaky bucket algorithm cannot let a burst (Burst) of traffic through at full port speed. The leaky bucket algorithm is therefore inefficient for traffic with bursty characteristics.
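The two parameters above (burst capacity and drain rate) can be seen in a minimal leaky bucket sketch. This is an illustrative in-memory version in Python, with names of my choosing; the `start` parameter exists only to make the example deterministic:

```python
class LeakyBucket:
    """Leaky bucket: each request adds one unit of water; the bucket drains
    at a constant `rate`. `capacity` is the burst the bucket can absorb;
    a request that would overflow the bucket is rejected."""

    def __init__(self, rate, capacity, start=0.0):
        self.rate = rate          # drain rate, requests per second
        self.capacity = capacity  # bucket size (burst)
        self.water = 0.0          # current fill level
        self.last = start         # timestamp of the previous request

    def allow(self, now):
        # drain water for the time elapsed since the last request
        self.water = max(0.0, self.water - (now - self.last) * self.rate)
        self.last = now
        if self.water + 1 > self.capacity:  # bucket would overflow: reject
            return False
        self.water += 1                     # request accepted, bucket fills
        return True
```

With `rate=1, capacity=2`, two simultaneous requests are absorbed, a third overflows, and after one second of draining another request fits.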
2. The token bucket algorithm
The token bucket (Token Bucket) algorithm achieves the same effect as the leaky bucket algorithm but works in the opposite direction, which makes it easier to understand. Over time, the system adds a token to the bucket at a constant interval of 1/QPS (if QPS = 100, the interval is 10 ms); imagine the opposite of a leak, a tap that constantly drips water in. If the bucket is full, no more tokens are added. When a new request arrives, it takes one token from the bucket; if no token is available, the request is blocked or the service is refused.
Another advantage of the token bucket is that the rate can be changed easily: whenever a higher rate is needed, simply increase the rate at which tokens are added to the bucket. Tokens are generally added to the bucket on a timer (for example, every 100 milliseconds); some variants of the algorithm instead compute in real time how many tokens should be added.
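The refill-and-take behavior described above can be sketched as follows. This illustrative Python version uses the real-time-computation variant mentioned in the text (tokens owed are credited lazily on each call rather than by a timer); the names and the `start` parameter are mine:

```python
class TokenBucket:
    """Token bucket: tokens accumulate at `qps` per second up to `capacity`;
    each request consumes one token and is rejected when none remain."""

    def __init__(self, qps, capacity, start=0.0):
        self.qps = qps                 # refill rate: one token every 1/qps seconds
        self.capacity = capacity       # maximum tokens the bucket holds
        self.tokens = float(capacity)  # start with a full bucket
        self.last = start              # timestamp of the previous refill

    def allow(self, now):
        # lazily credit the tokens accumulated since the last call, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.qps)
        self.last = now
        if self.tokens < 1:
            return False
        self.tokens -= 1
        return True
```

With `qps=1, capacity=2`, a burst of two requests drains the full bucket at once (the burst tolerance the leaky bucket lacks), and 1.5 seconds later enough tokens have accumulated for another request.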
Third, a token bucket implementation based on PHP + Redis
```php
<?php
namespace Api\Lib;

/**
 * Rate-limit control (token bucket backed by Redis)
 */
class RateLimit
{
    private $minNum = 60;    // allowed requests per user per minute
    private $dayNum = 10000; // allowed requests per user per day

    public function minLimit($uid)
    {
        $minNumKey = $uid . '_minNum';
        $dayNumKey = $uid . '_dayNum';
        $resMin = $this->getRedis($minNumKey, $this->minNum, 60);
        $resDay = $this->getRedis($dayNumKey, $this->dayNum, 86400);
        if (!$resMin['status'] || !$resDay['status']) {
            exit($resMin['msg'] . $resDay['msg']);
        }
    }

    public function getRedis($key, $initNum, $expire)
    {
        $nowtime = time();
        $result  = ['status' => true, 'msg' => ''];
        $redis   = $this->di->get('redis'); // shared Redis connection from the DI container
        $redis->watch($key);
        $limitVal = $redis->get($key);
        if ($limitVal) {
            $limitVal = json_decode($limitVal, true);
            // spend one token, then lazily refill for the elapsed time,
            // capped at the bucket size $initNum
            $newNum = min($initNum, ($limitVal['num'] - 1) + (($initNum / $expire) * ($nowtime - $limitVal['time'])));
            if ($newNum > 0) {
                $redisVal = json_encode(['num' => $newNum, 'time' => $nowtime]);
            } else {
                return ['status' => false, 'msg' => 'rate limit exceeded'];
            }
        } else {
            // first request from this user: start with a full bucket minus this request
            $redisVal = json_encode(['num' => $initNum - 1, 'time' => $nowtime]);
        }
        // commit under optimistic locking: if the watched key changed
        // since WATCH, the transaction is discarded
        $redis->multi();
        $redis->set($key, $redisVal);
        $redis->exec();
        return $result;
    }
}
```
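The heart of `getRedis` is its lazy refill: instead of a timer adding tokens, each request recomputes the token count from the time elapsed since the last request, at a rate of `initNum / expire` tokens per second, capped at the bucket size. That formula can be checked in isolation; this standalone Python sketch mirrors the arithmetic, with the function name being mine:

```python
def refill(num, last_time, now, init_num, expire):
    """Recompute tokens as getRedis does: spend one token for the current
    request, then credit tokens for the elapsed seconds at a rate of
    init_num / expire per second, capped at the bucket size init_num."""
    return min(init_num, (num - 1) + (init_num / expire) * (now - last_time))
```

For the per-minute bucket (`init_num=60, expire=60`, i.e. one token per second): a full bucket just yields 59 after spending a token; a nearly empty bucket recovers 30 tokens after 30 idle seconds; and a bucket with one token, hit again immediately, drops to 0, which the PHP code's `$newNum > 0` check treats as a rejection.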