SpringCloud Alibaba - Rate limiting algorithms used in Sentinel

1. The rate limiting algorithms used in Sentinel

1.1. Fixed window counter algorithm

1.1.1. Overview of the fixed window counter algorithm

  • The fixed window counter algorithm is the simplest rate limiting algorithm and the easiest to implement: maintain a counter for the current unit of time, increment it by 1 every time a request passes, and once the count exceeds the preset threshold, reject all further requests within that unit of time. When the unit of time ends, the counter is cleared and the next round of counting begins. A minimal sketch follows.
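A minimal sketch of the fixed-window idea, assuming a standalone Java class; the class and method names are hypothetical and not part of Sentinel's API:

```java
public class FixedWindowLimiter {
    private final int threshold;     // max requests allowed per window
    private final long windowMillis; // window length, e.g. 1000 ms
    private long windowStart = System.currentTimeMillis();
    private int count = 0;

    public FixedWindowLimiter(int threshold, long windowMillis) {
        this.threshold = threshold;
        this.windowMillis = windowMillis;
    }

    public synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        if (now - windowStart >= windowMillis) {
            windowStart = now; // the window has ended: start the next round
            count = 0;         // clear the counter
        }
        if (count < threshold) {
            count++;           // still under the threshold: admit and count
            return true;
        }
        return false;          // threshold reached: reject until the window resets
    }
}
```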

1.1.2. The problem with the fixed window counter algorithm

  • However, this implementation has a problem. Suppose we set the threshold of requests allowed within 1 second to 99. If a user sends 99 requests in the last few milliseconds of one window and then 200 more requests right at the start of the next window, the first 99 pass and so do the first 99 of the following 200, so the user actually gets about 198 requests through within a single second spanning the window boundary, clearly exceeding the threshold without ever being limited. This is the boundary (critical value) problem. So how do we solve it?

1.2. Sliding window counter algorithm

1.2.1. Overview of the sliding window counter algorithm

  • The sliding window counter was born to solve the boundary problem of fixed window counting described above. Clearly a single window cannot solve it. Suppose we allow 200 requests to pass within 1 second, but now divide that 1 second into several sub-windows, say 5 (the more sub-windows, the smoother the traffic transition), so each sub-window covers 200 milliseconds, and every 200 milliseconds the window slides forward by one sub-window. For easier understanding, see the figure below:
    (figure: the 1-second window divided into five 200 ms sub-windows, sliding forward one sub-window at a time)
  • The figure above divides the window into 5 sub-windows, and the number in each sub-window is the count of requests that arrived in it. From the figure we can see that the number of requests that can still be admitted in the current 200-millisecond sub-window is 70, not 200 (anything beyond 70 is limited), because when counting we accumulate the values of all sub-windows in the current sliding window and compare the total against the threshold to decide whether to limit. A minimal sketch of this accumulation follows.
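A sketch of the sliding-window idea under the same assumptions (standalone class, hypothetical names); Sentinel's own sliding-window statistics are built on its LeapArray structure, so this only illustrates the general mechanism:

```java
public class SlidingWindowLimiter {
    private final int threshold;      // max requests allowed in the whole window
    private final long bucketMillis;  // length of one sub-window, e.g. 200 ms
    private final long[] bucketStart; // start time of the sub-window held in each slot
    private final int[] bucketCount;  // request count of the sub-window in each slot

    public SlidingWindowLimiter(int threshold, int buckets, long bucketMillis) {
        this.threshold = threshold;
        this.bucketMillis = bucketMillis;
        this.bucketStart = new long[buckets];
        this.bucketCount = new int[buckets];
    }

    public synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        long currentBucketStart = now - now % bucketMillis;
        int idx = (int) ((now / bucketMillis) % bucketCount.length);
        if (bucketStart[idx] != currentBucketStart) {
            // the slot still holds an expired sub-window: slide forward by resetting it
            bucketStart[idx] = currentBucketStart;
            bucketCount[idx] = 0;
        }
        // accumulate every sub-window that still falls inside the sliding window
        long windowMillis = bucketMillis * bucketCount.length;
        int total = 0;
        for (int i = 0; i < bucketCount.length; i++) {
            if (now - bucketStart[i] < windowMillis) {
                total += bucketCount[i];
            }
        }
        if (total < threshold) {
            bucketCount[idx]++; // admit and record the request in the current sub-window
            return true;
        }
        return false;           // the accumulated count has reached the threshold: limit
    }
}
```

With the numbers from the example above it would be constructed as new SlidingWindowLimiter(200, 5, 200).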

1.2.2. The problem with the sliding window counter algorithm

  • The sliding window counter is essentially a variant of the fixed window counter. How smooth the traffic transition is depends on the number of sub-windows we configure, that is, on the statistical time interval: the more sub-windows, the more accurate the statistics, but there is no obvious rule for how many sub-windows to choose.

1.3. Leaky Bucket Algorithm

1.3.1. Overview of the leaky bucket algorithm

  • The leaky bucket algorithm limits the outflow (egress) rate to a constant, so it can smooth out bursty traffic. The leaky bucket serves as the traffic container and can be viewed as a FIFO queue. When the inflow rate is greater than the outflow rate, the container being of limited size, any traffic that exceeds the container's capacity is discarded.
  • The figure below illustrates the principle of the leaky bucket algorithm: the faucet is the inflow, the leaky bucket is the traffic container, and the water flowing out at a constant rate is the outflow.
    (figure: a faucet pouring into a leaky bucket that drains at a constant rate)

1.3.2. The characteristics of the leaky bucket algorithm

  • The leaky bucket has a fixed capacity, and the egress flow rate is a fixed constant (outgoing request)
  • Ingress traffic can flow into the leaky bucket at any rate (incoming requests)
  • If the ingress traffic exceeds the capacity of the bucket, the excess overflows and is discarded (new requests are rejected)
  • However, because the leaky bucket limits the outflow rate to a fixed constant, it cannot support bursts of outgoing traffic, while real traffic is often bursty. A minimal sketch of the idea follows.
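A sketch of the leaky bucket as a meter, with hypothetical names and the same standalone-class assumption; this variant rejects overflow immediately rather than queueing it, matching the overflow behaviour described above:

```java
public class LeakyBucketLimiter {
    private final long capacity;    // fixed capacity of the bucket
    private final double leakPerMs; // constant outflow rate, e.g. 0.1 = 100 requests/s
    private double water = 0;       // amount of "water" (pending traffic) in the bucket
    private long lastLeakTime = System.currentTimeMillis();

    public LeakyBucketLimiter(long capacity, double leakPerMs) {
        this.capacity = capacity;
        this.leakPerMs = leakPerMs;
    }

    public synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        // leak water at the constant rate for the time elapsed since the last call
        water = Math.max(0.0, water - (now - lastLeakTime) * leakPerMs);
        lastLeakTime = now;
        if (water + 1 <= capacity) {
            water += 1;  // the bucket still has room: the request enters the bucket
            return true;
        }
        return false;    // the bucket would overflow: the new request is discarded
    }
}
```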

1.4. Token Bucket Algorithm

1.4.1. Overview of Token Bucket Algorithm

  • The token bucket algorithm is an improved version of the leaky bucket algorithm that can support burst traffic. Unlike the leaky bucket, the bucket in the token bucket algorithm stores tokens rather than traffic.

1.4.2. How does the token bucket algorithm solve the problem of burst traffic

  • At the beginning the token bucket is empty. Tokens are added to the bucket at a constant rate, and once the bucket is full, excess tokens are discarded. When a request arrives, it first tries to take a token from the bucket (that is, remove one token); if it succeeds, the request is released, and if it fails, the request is blocked or rejected.

1.4.3. Features of token bucket algorithm

  • At most b tokens can be stored and issued; if a new token arrives when the bucket is full, that token is discarded.
  • Whenever a request arrives, it tries to take a token from the bucket; if no token is available, the request does not pass.
  • The token bucket algorithm limits the average rate, so it allows bursts (as long as there are tokens in the bucket, traffic is not limited). A minimal sketch follows.
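A sketch of the token bucket under the same assumptions (standalone class, hypothetical names, not Sentinel's API):

```java
public class TokenBucketLimiter {
    private final long maxTokens;     // b: the maximum number of tokens the bucket holds
    private final double refillPerMs; // constant token generation rate
    private double tokens = 0;        // the bucket starts empty, as described above
    private long lastRefillTime = System.currentTimeMillis();

    public TokenBucketLimiter(long maxTokens, double refillPerMs) {
        this.maxTokens = maxTokens;
        this.refillPerMs = refillPerMs;
    }

    public synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        // add tokens for the elapsed time; tokens beyond the capacity are discarded
        tokens = Math.min(maxTokens, tokens + (now - lastRefillTime) * refillPerMs);
        lastRefillTime = now;
        if (tokens >= 1) {
            tokens -= 1; // take one token from the bucket: the request is released
            return true;
        }
        return false;    // no token available: the request is blocked or rejected
    }
}
```

Because up to maxTokens tokens can accumulate while traffic is idle, a sudden burst of up to maxTokens requests can be served at once, which is exactly how the algorithm tolerates bursts while still bounding the average rate.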

Origin: blog.csdn.net/li1325169021/article/details/131756857