Rate limiting with Spring Cloud Gateway

I. Introduction
In highly concurrent systems, it is often necessary to rate-limit incoming requests. On one hand, this prevents a flood of requests from overloading the server and making the service unavailable; on the other hand, it guards against network attacks.
Common rate-limiting approaches include Hystrix, which isolates application threads into pools and trips a circuit-breaker fallback when a thread pool is overloaded. In general, application servers (such as the Tomcat container) control concurrency by limiting the number of threads, and the request rate can also be controlled over an averaging time window. Common rate-limiting dimensions include the access frequency of an IP, a URI, or a user.
Rate limiting is usually done at the gateway, e.g. Nginx, OpenResty, Kong, Zuul, or Spring Cloud Gateway; it can also be done at the application layer via AOP.

II. Rate-limiting algorithms
1. Counter algorithm
The counter algorithm uses a counter to implement rate limiting; it is simple but somewhat crude. We generally limit the number of requests that may pass within one second, e.g. a QPS of 100. The idea: start counting from the first request, and within the next 1 s increment the count for each request; once the accumulated count reaches 100, all subsequent requests are rejected. After the 1 s window ends, the count resets to 0 and counting starts over. A concrete implementation: for each service call, increment the counter with AtomicLong's incrementAndGet() method, which returns the latest value, and compare that value against the threshold. As many readers will know, this implementation has a drawback: if 100 requests pass within the first 10 milliseconds of the window, then for the remaining 990 milliseconds the limiter can only reject every request. We call this the "spike phenomenon".
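The fixed-window counter described above can be sketched as follows. The class name and parameters are illustrative, not from the original post; only the AtomicLong/incrementAndGet idea comes from the text:

```java
import java.util.concurrent.atomic.AtomicLong;

// Fixed-window counter limiter (hypothetical class name).
// Allows at most `limit` requests per window, then rejects until the window resets.
public class CounterLimiter {
    private final long limit;          // max requests per window, e.g. 100
    private final long windowMillis;   // window length, e.g. 1000 ms
    private final AtomicLong count = new AtomicLong();
    private long windowStart = System.currentTimeMillis();

    public CounterLimiter(long limit, long windowMillis) {
        this.limit = limit;
        this.windowMillis = windowMillis;
    }

    public synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        if (now - windowStart >= windowMillis) {
            windowStart = now;   // a new window begins
            count.set(0);        // counter returns to 0
        }
        // incrementAndGet() returns the latest value, compared with the threshold
        return count.incrementAndGet() <= limit;
    }
}
```

The spike phenomenon is visible here: nothing stops all `limit` permits from being consumed in the first few milliseconds of a window.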

2. Leaky bucket algorithm
To eliminate the spike phenomenon, rate limiting can be implemented with the leaky bucket algorithm, whose name is quite vivid. The algorithm keeps a container, similar to an everyday funnel: when a request arrives, it is like water poured into the funnel, which then flows out slowly and evenly through the small opening at the bottom. No matter how heavy the inflow above, the outflow below stays constant. No matter how unstable the callers are, the leaky bucket limits the rate, e.g. processing one request every 10 milliseconds. Since the processing speed is fixed but the arrival rate of requests is unknown, many requests may arrive suddenly; requests that cannot be processed immediately wait in the bucket. Because it is a bucket, it must have a capacity limit: when the bucket is full, new requests are discarded.

To implement this algorithm, a queue can be prepared to hold the requests, and a thread pool (ScheduledExecutorService) can fetch requests from the queue at a fixed rate and execute them; several requests can also be fetched at a time for concurrent execution.

This algorithm still has a shortcoming: it cannot handle short bursts of traffic.

3. Token bucket algorithm
In a sense, the token bucket algorithm is an improvement on the leaky bucket algorithm. The leaky bucket limits the rate of calls outright, whereas the token bucket limits the average call rate while also allowing a degree of burst traffic. In the token bucket algorithm, a bucket stores a fixed number of tokens, and a mechanism adds tokens to the bucket at a constant rate. Each call must obtain a token, and only after getting one may it proceed; otherwise it must either wait for a token to become available or be rejected outright. Adding tokens is a continuous action; if the number of tokens in the bucket reaches the cap, new tokens are discarded. The bucket may therefore hold many available tokens, in which case incoming requests can take a token and execute immediately. For example, with a QPS of 100, one second after the limiter is initialized the bucket already holds 100 tokens; if the service is still starting up at that point, then once it is up the limiter can absorb 100 instantaneous requests. Only when there are no tokens left in the bucket do requests wait, which amounts to executing at a fixed rate.

Implementation idea: prepare a queue to hold the tokens, and use a thread pool to periodically generate tokens and put them into the queue. For each request, take a token from the queue and proceed.
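That idea can be sketched as follows. Class name, capacity, and refill interval are illustrative; starting with a full bucket (enabling the startup burst described above) and dropping tokens when the bucket is full come from the text:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Token bucket: a scheduled thread adds tokens at a constant rate, tokens
// beyond capacity are discarded, and each request must take a token to proceed.
public class TokenBucket {
    private static final Object TOKEN = new Object();
    private final BlockingQueue<Object> tokens;
    private final ScheduledExecutorService refiller =
            Executors.newSingleThreadScheduledExecutor();

    public TokenBucket(int capacity, long refillIntervalMillis) {
        this.tokens = new ArrayBlockingQueue<>(capacity);
        // start full, so a burst of up to `capacity` requests can pass at once
        for (int i = 0; i < capacity; i++) tokens.offer(TOKEN);
        // offer() silently drops the token when the bucket is already full
        refiller.scheduleAtFixedRate(() -> tokens.offer(TOKEN),
                refillIntervalMillis, refillIntervalMillis, TimeUnit.MILLISECONDS);
    }

    // non-blocking: true if a token was obtained, false to reject the request
    public boolean tryAcquire() {
        return tokens.poll() != null;
    }

    public void shutdown() {
        refiller.shutdown();
    }
}
```

A blocking variant could call `tokens.take()` instead, making callers wait for the next token rather than be rejected.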

III. Rate limiting in Spring Cloud Gateway
In Spring Cloud Gateway, the three algorithms described above can be implemented in a filter to achieve rate limiting.
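As a concrete illustration (not from the original post), Spring Cloud Gateway also ships a built-in `RequestRateLimiter` filter backed by a Redis-based token bucket; a configuration sketch, where the route id, backend URI, and rate values are placeholders:

```yaml
spring:
  cloud:
    gateway:
      routes:
        - id: rate_limited_route         # hypothetical route id
          uri: http://localhost:8081     # hypothetical backend service
          filters:
            - name: RequestRateLimiter
              args:
                redis-rate-limiter.replenishRate: 10   # tokens added per second
                redis-rate-limiter.burstCapacity: 20   # max tokens in the bucket
```

The key used to count requests (per IP, per URI, per user, etc., matching the dimensions from the introduction) is chosen by supplying a `KeyResolver` bean.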

Origin www.cnblogs.com/blogst/p/10930384.html