High-concurrency solutions

When developing highly concurrent systems, how do we protect the system?

Three common options: caching, degradation, and rate limiting

 

(1) Caching: caching is easy to understand. Data queried from the database is stored in a cache system such as Memcached or Redis, so that the next time the same data is needed it can be fetched directly from the cache. The purpose of caching is to increase access speed and raise the system's processing capacity.
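Below is a minimal sketch of this cache-aside flow in Java. The ConcurrentHashMap is only a stand-in for an external cache such as Memcached or Redis, and loadFromDatabase is a hypothetical database call:

import java.util.concurrent.ConcurrentHashMap;

public class UserCache {
  // stand-in for an external cache such as Memcached or Redis
  private final ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();

  public String getUser(String userId) {
    // 1. try the cache first
    String cached = cache.get(userId);
    if (cached != null) {
      return cached;
    }
    // 2. cache miss: query the database, then populate the cache for next time
    String fromDb = loadFromDatabase(userId);
    if (fromDb != null) {
      cache.put(userId, fromDb);
    }
    return fromDb;
  }

  // hypothetical database lookup; replace with a real DAO/repository call
  private String loadFromDatabase(String userId) {
    return "user-" + userId;
  }
}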

(2) Degradation: when a non-core service has problems or starts to affect the core flow, it is temporarily masked (switched off), and switched back on after the traffic peak has passed or the problem has been fixed.
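As a minimal sketch of such a degradation switch (the recommendation service here is a hypothetical non-core dependency; the switch would normally be flipped from a config center or ops dashboard):

import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;

public class RecommendationFacade {
  // degradation switch, flipped on during traffic peaks or when the service misbehaves
  private final AtomicBoolean degraded = new AtomicBoolean(false);

  public void setDegraded(boolean on) {
    degraded.set(on);
  }

  public List<String> getRecommendations(String userId) {
    if (degraded.get()) {
      // the service is temporarily masked: return a safe default instead of calling it
      return Collections.emptyList();
    }
    return callRecommendationService(userId); // placeholder for the real remote call
  }

  private List<String> callRecommendationService(String userId) {
    return List.of("item-1", "item-2"); // placeholder result
  }
}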

(3) Rate limiting: rate limiting protects the system by limiting the rate of concurrent access/requests, or by limiting the number of requests allowed within a time window; once the limit is reached, further requests can be denied, queued, degraded, or otherwise handled. A simple way to understand it: suppose 1000 requests arrive within one second, but the system can only handle 100 per second. Obviously it cannot handle the excess, and that would lead to system failure. For this traffic we can configure a limit, accepting at most 100 requests per second; the extra requests are blocked, and a message such as "the current number of requests is too high, please try again later" is returned directly. This is rate limiting (capping the maximum traffic within a time window).
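A rough sketch of that reject-when-over-the-limit flow is shown below; the RateLimiter interface and its allow() method are illustrative only, and the counter class implemented later in this post is one possible implementation behind it:

public class OrderController {
  // illustrative limiter abstraction; the counter algorithm below is one way to implement it
  interface RateLimiter {
    boolean allow();
  }

  private final RateLimiter limiter;

  public OrderController(RateLimiter limiter) {
    this.limiter = limiter;
  }

  public String handleRequest(String payload) {
    if (!limiter.allow()) {
      // over the configured rate (e.g. 100 requests/second): reject immediately
      return "429 Too Many Requests - the current number of requests is too high, please try again later";
    }
    return process(payload);
  }

  private String process(String payload) {
    return "OK"; // placeholder for real business logic
  }
}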

 

A common rate limiting algorithm: the counter algorithm

The principle of the counter algorithm

The counter method is the simplest and most easily implemented rate limiting algorithm. Usually we limit the number of requests that may pass within a fixed period. For example, we specify that interface A may not be accessed more than 100 times within one minute. We can do it like this: at the start, set a counter; every time a request arrives, increment the counter by 1. If the counter value exceeds 100 and the interval between the current request and the first request is still within 1 minute, there are too many requests and the new request is rejected; if the interval since the first request is greater than 1 minute and the counter value is still within the limit, the counter is reset and a new window begins.

 

A simple implementation of the counter algorithm

 

For technology, no amount of theory beats hands-on practice for understanding, so let's look at how to implement it.

Look at the specific code:

public class Counter {
  public long timeStamp = System.currentTimeMillis(); // start time of the current window
  public int reqCount = 0;                            // request counter, initialized to 0
  public final int limit = 100;                       // maximum number of requests within a time window
  public final long interval = 60 * 1000;             // time window size, in milliseconds

  public boolean limit() {
    long now = System.currentTimeMillis();
    if (now < timeStamp + interval) {
      // still within the current time window
      reqCount++;
      // check whether the current window has exceeded the maximum number of requests
      return reqCount <= limit;
    } else {
      // the window has expired: reset the start time and the counter
      timeStamp = now;
      reqCount = 1;
      return true;
    }
  }
}

Note: the logic of the above code is very simple: it counts the total number of requests within the current time window (one minute here) and checks whether that count exceeds the configured limit; once more than one window interval has passed, the counter is reset and a new window begins.
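A quick way to exercise the class above (assuming the Counter class exactly as written; with 150 requests fired inside a single window, 100 should be allowed and 50 rejected):

public class CounterDemo {
  public static void main(String[] args) {
    Counter counter = new Counter();
    int allowed = 0;
    int rejected = 0;
    // fire 150 requests in a tight loop, well inside one 60-second window
    for (int i = 0; i < 150; i++) {
      if (counter.limit()) {
        allowed++;
      } else {
        rejected++;
      }
    }
    System.out.println("allowed=" + allowed + ", rejected=" + rejected); // allowed=100, rejected=50
  }
}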

However, this algorithm has a downside:

When requests cluster around the boundary between two time windows, a very serious problem appears: the critical (window-boundary) problem.

 

Suppose there is a malicious user who sends 100 requests instantly at 0:59 and another 100 requests instantly at 1:00. Within that one second the user has actually sent 200 requests. We specified at most 100 requests per minute, which is roughly 1.7 requests per second on average, yet by bursting right at the point where the time window resets, the user can instantly exceed our rate limit. A user could exploit this flaw in the algorithm to overwhelm our application in an instant.
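To make the boundary problem concrete, here is a small demonstration sketch. It copies the counter logic but takes the clock as a parameter (an addition purely for the demonstration, not part of the original class), so the two bursts at 0:59 and 1:00 can be simulated:

public class BoundaryDemo {
  static long windowStart = 0;          // window start time, in ms
  static int reqCount = 0;
  static final int LIMIT = 100;
  static final long INTERVAL = 60_000;  // 60-second window

  // same logic as Counter.limit(), but the current time is passed in for simulation
  static boolean limit(long now) {
    if (now < windowStart + INTERVAL) {
      reqCount++;
      return reqCount <= LIMIT;
    } else {
      windowStart = now;
      reqCount = 1;
      return true;
    }
  }

  public static void main(String[] args) {
    int passed = 0;
    // 100 requests burst at 0:59 (59,000 ms into the first window)
    for (int i = 0; i < 100; i++) {
      if (limit(59_000)) passed++;
    }
    // 100 more requests burst at 1:00 (60,000 ms) -- the window resets here
    for (int i = 0; i < 100; i++) {
      if (limit(60_000)) passed++;
    }
    // all 200 requests pass, even though they arrived within the same one-second span
    System.out.println("passed=" + passed); // passed=200
  }
}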

Solutions to this boundary problem: to be covered in a later update.

 


Origin www.cnblogs.com/mjtabu/p/12556999.html