Disclaimer: This is an original article by the blogger; please credit the source: https://blog.csdn.net/deaidai/article/details/91354576
Foreword
High-concurrency scenarios are everywhere, so how can we broaden our perspective on optimizing for high concurrency? I hope this summary of a few solutions adds a little interest.
Here, "front end" is used relative to the "back end" of a request: the "request entry point" of the back-end architecture is what we call the front end.
Main text
Rate-limiting analysis
- Software load-balancing infrastructure at the front end (LVS / Nginx / HAProxy)
- OpenResty for anti-abuse and rate limiting
- Several ways to implement rate limiting with OpenResty
- Implementing rate limiting with Redis
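A common way to implement rate limiting with Redis is the INCR + EXPIRE counter pattern: increment a per-IP key on every request, set an expiry on the first increment, and reject once the count exceeds the limit. The sketch below simulates that pattern in memory so it is self-contained; the class and method names are illustrative, and in production the counter would live in Redis itself.

```python
import time

class FixedWindowLimiter:
    """In-memory sketch of the Redis INCR + EXPIRE counter pattern."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.counters = {}  # key -> (window_start, count)

    def allow(self, key, now=None):
        now = time.time() if now is None else now
        start, count = self.counters.get(key, (now, 0))
        if now - start >= self.window:  # key expired: start a new window (EXPIRE)
            start, count = now, 0
        count += 1                      # INCR
        self.counters[key] = (start, count)
        return count <= self.limit
```

With Redis itself the same logic is two commands per request (`INCR key`, plus `EXPIRE key window` when the returned count is 1), which keeps the check atomic on a single instance.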
Rate-limiting strategies
- Limit the total concurrency of an interface: limit the number of concurrent connections per IP
- Smoothly limit the request rate of an interface: an IP may call the interface only 120 times per minute, with the requests smoothed out (i.e., only 2 requests are let through per second)
- Limit requests to an interface by time window: an IP may call the interface only 120 times per minute (all 120 requests may pass at once at the start of the window)
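The first strategy above, capping concurrent connections per IP, can be sketched as a simple counter that is incremented when a connection opens and decremented when it closes. The class name is illustrative; in practice this check usually lives in the load balancer (e.g., Nginx's connection limiting) rather than in application code.

```python
from collections import defaultdict

class ConcurrencyLimiter:
    """Minimal sketch of per-IP concurrent-connection limiting."""

    def __init__(self, max_concurrent):
        self.max_concurrent = max_concurrent
        self.active = defaultdict(int)  # ip -> currently open connections

    def acquire(self, ip):
        if self.active[ip] >= self.max_concurrent:
            return False            # too many open connections from this IP
        self.active[ip] += 1
        return True

    def release(self, ip):
        if self.active[ip] > 0:
            self.active[ip] -= 1    # connection closed: free a slot
```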
Rate-limiting algorithms
- Leaky bucket algorithm
- An IP may call the interface only 120 times per minute (bucket size 120), with requests smoothed to 2 per second; excess requests go into the bucket to wait, and once the bucket is full, further requests are rate-limited
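A minimal leaky-bucket sketch, using the article's numbers (capacity 120, drained at 2 requests/second) as defaults. The bucket "leaks" at a constant rate; incoming requests queue in the bucket, and once it is full they are rejected. Class and field names are my own.

```python
class LeakyBucket:
    """Sketch of a leaky bucket: requests queue and drain at a fixed rate."""

    def __init__(self, capacity=120, leak_rate=2.0):
        self.capacity = capacity
        self.leak_rate = leak_rate  # requests drained per second
        self.water = 0.0            # requests currently queued in the bucket
        self.last = 0.0             # timestamp of the last update

    def allow(self, now):
        # Drain the bucket for the time elapsed since the last update.
        self.water = max(0.0, self.water - (now - self.last) * self.leak_rate)
        self.last = now
        if self.water < self.capacity:  # room left: queue this request
            self.water += 1
            return True
        return False                    # bucket full: rate-limit
```

Note that the output rate never exceeds the leak rate, which is what makes the leaky bucket "smooth": bursts are absorbed as queueing delay, not passed through.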
- Token bucket algorithm
- An IP may call the interface only 120 times per minute, but some burst traffic is allowed (burst traffic exceeding the bucket capacity of 120 is rejected directly)
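A token-bucket sketch with the same numbers (capacity 120, refilled at 2 tokens/second). Unlike the leaky bucket, a full token bucket lets an entire burst through at once, up to the bucket capacity; only traffic beyond that is rejected. Names are illustrative.

```python
class TokenBucket:
    """Sketch of a token bucket: tokens refill steadily, bursts spend them."""

    def __init__(self, capacity=120, refill_rate=2.0):
        self.capacity = capacity
        self.refill_rate = refill_rate  # tokens added per second
        self.tokens = float(capacity)   # start with a full bucket
        self.last = 0.0                 # timestamp of the last refill

    def allow(self, now):
        # Refill for the elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1  # spend one token for this request
            return True
        return False          # no tokens left: burst exceeded, reject
```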