Common rate-limiting algorithms

Rate limiting: restrict concurrent access or the request rate — for example, the number of requests allowed within a time window — in order to protect the system. Once the limit is reached, the service may reject the request, put it in a waiting queue, or degrade its handling.

1, counter (fixed-window rate limiting):

Pick a starting point in time; whenever a request arrives at the interface, increment a counter by one. If, within the current time window, the accumulated count exceeds the limit set by the rule (e.g. 100 requests allowed per second), reject all subsequent requests. When the next time window starts, clear the counter and count again.

Cons: the limiting policy is too coarse; it cannot cope with bursty traffic that straddles the boundary between two adjacent time windows (e.g. 100 requests in the last instant of one window plus 100 in the first instant of the next means 200 requests within a single second).
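The fixed-window counter described above can be sketched as follows (the class and method names are illustrative, not from any particular library):

```java
// Minimal sketch of a fixed-window rate limiter: count requests per window,
// reset the counter when a new window starts.
public class FixedWindowLimiter {
    private final int limit;          // max requests allowed per window
    private final long windowMillis;  // window length in milliseconds
    private long windowStart = System.currentTimeMillis();
    private int count = 0;

    public FixedWindowLimiter(int limit, long windowMillis) {
        this.limit = limit;
        this.windowMillis = windowMillis;
    }

    public synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        if (now - windowStart >= windowMillis) {
            windowStart = now;  // next window: clear the counter, count again
            count = 0;
        }
        if (count < limit) {
            count++;
            return true;
        }
        return false;           // over the limit: reject this request
    }
}
```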

2, sliding-window rate limiting:

Within any 1s window of time, the number of requests to the interface must not exceed K.

Maintain a circular queue of size K + 1 to record the requests that arrived within the last 1s. (When the queue is full, the slot the tail points to stores no data, so a circular queue built on an array wastes one slot of storage.)

When a new request arrives, first remove from the queue all requests that are more than 1s older than the new one. Then check whether the circular queue has a free slot. If it does, store the new request at the tail of the queue; if not, the number of requests within the last 1s has already reached the limit K, so the request is rejected.

Disadvantages: it only caps the total traffic over the chosen window; it does not limit the access frequency at any finer granularity within that window.

Circular queue code:

/**
 * Empty condition: head == tail
 * Full condition: (tail + 1) % n == head
 * When the queue is full, the slot the tail points to stores no data,
 * so the circular queue wastes one slot of the array's storage.
 */
public class CircularQueue {

    private String[] items;
    private int n; // queue size
    private int head = 0;
    private int tail = 0;

    public CircularQueue(int capacity) {
        items = new String[capacity];
        this.n = capacity;
    }

    public boolean enqueue(String item) {
        // queue is full
        if ((tail + 1) % n == head) return false;
        items[tail] = item;
        tail = (tail + 1) % n;
        return true;
    }

    public String dequeue() {
        if (head == tail) return null; // head == tail queue is empty
        String ret = items[head];
        head = (head + 1) % n;
        return ret;

    }
}  
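Building on the circular-queue idea, the sliding-window limiter itself can be sketched by storing request timestamps in the ring and evicting the ones older than the window (a sketch with illustrative names, not the original author's implementation):

```java
// Sliding-window rate limiter: a circular queue of K + 1 timestamp slots.
// At most K requests may fall inside any window of windowMillis.
public class SlidingWindowLimiter {
    private final long[] times;     // circular queue of request timestamps
    private final int n;            // capacity K + 1 (one slot stays empty)
    private final long windowMillis;
    private int head = 0, tail = 0;

    public SlidingWindowLimiter(int k, long windowMillis) {
        this.n = k + 1;
        this.times = new long[n];
        this.windowMillis = windowMillis;
    }

    public synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        // Remove requests more than windowMillis older than the new one.
        while (head != tail && now - times[head] > windowMillis) {
            head = (head + 1) % n;
        }
        // Full queue means K requests already fall inside the window: reject.
        if ((tail + 1) % n == head) return false;
        times[tail] = now;          // store the new request at the tail
        tail = (tail + 1) % n;
        return true;
    }
}
```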

Smoother rate-limiting algorithms in common use: the leaky bucket algorithm and the token bucket algorithm.

3, leaky bucket algorithm:

Water (requests) first flows into the leaky bucket, and the bucket leaks water at a constant rate (the rate at which the interface responds). When the inflow rate exceeds the outflow rate, the water overflows (the access frequency exceeds the interface's response rate) and the request is rejected directly. The leaky bucket algorithm thus enforces a hard cap on the data transmission rate.

Cons: inefficient for bursts of traffic, since requests can only drain out at the fixed rate.
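A minimal sketch of the leaky bucket, assuming we track the "water level" as a number that drains at a constant rate (names are illustrative):

```java
// Leaky bucket sketch: the water level drains at a constant rate; an
// arriving request is rejected if adding it would overflow the bucket.
public class LeakyBucket {
    private final long capacity;     // bucket size
    private final double leakPerMs;  // constant outflow rate per millisecond
    private double water = 0;        // current water level
    private long lastLeak = System.currentTimeMillis();

    public LeakyBucket(long capacity, double leakPerMs) {
        this.capacity = capacity;
        this.leakPerMs = leakPerMs;
    }

    public synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        // Drain the water that leaked out since the last request.
        water = Math.max(0, water - (now - lastLeak) * leakPerMs);
        lastLeak = now;
        if (water + 1 > capacity) return false; // would overflow: reject
        water += 1;                             // the request enters the bucket
        return true;
    }
}
```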

4, token bucket: Guava, Google's open-source project, uses the token bucket algorithm in its RateLimiter.

Tokens are added to the bucket at a constant interval of 1/QPS (if QPS = 100, the interval is 10ms); if the bucket is not yet full, one more is added. When a new request arrives, it takes one token from the bucket; if no token is available, the request can be blocked or denied service.

Benefit: allows bursts of traffic to a certain extent (up to the number of tokens accumulated in the bucket).

The rate is also easy to change: to raise it, simply increase the rate at which tokens are put into the bucket. Typically a timer adds a certain number of tokens to the bucket periodically (for example, every 100 milliseconds), while some variants of the algorithm compute in real time how many tokens should be added.
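The token bucket can be sketched as follows. Rather than running a timer, this variant computes in real time how many tokens have accrued since the last request (a simplified sketch with illustrative names; Guava's RateLimiter implements a refined version of the same idea):

```java
// Token bucket sketch: tokens accrue at a fixed rate up to the bucket
// capacity; each request consumes one token, or is rejected if none remain.
public class TokenBucket {
    private final long capacity;       // max tokens the bucket can hold
    private final double tokensPerMs;  // refill rate, i.e. QPS / 1000
    private double tokens;             // tokens currently in the bucket
    private long lastRefill = System.currentTimeMillis();

    public TokenBucket(long capacity, double tokensPerMs) {
        this.capacity = capacity;
        this.tokensPerMs = tokensPerMs;
        this.tokens = capacity;        // start full: permits an initial burst
    }

    public synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        // Add the tokens accrued since the last refill, capped at capacity.
        tokens = Math.min(capacity, tokens + (now - lastRefill) * tokensPerMs);
        lastRefill = now;
        if (tokens < 1) return false;  // no token available: reject or block
        tokens -= 1;                   // the request takes one token
        return true;
    }
}
```

Because the bucket starts full and refills continuously, up to `capacity` requests can pass at once — this is exactly the burst tolerance the leaky bucket lacks.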

Origin www.cnblogs.com/wjh123/p/11442632.html