Zuul isolation mechanisms

Reprinted from: https://blog.csdn.net/farsight1/article/details/80078099

ZuulException REJECTED_SEMAPHORE_EXECUTION is an exception we kept running into in recent performance tests. Digging into it, we found that Zuul isolates each route with a semaphore by default, and the default limit is 100: once the number of concurrent requests on a route exceeds 100, the gateway rejects the request and returns a 500.

Semaphore isolation

Since the default value is too small, we raised the semaphore limit for each gateway route and retested.

zuul:
  routes:
    linkflow:
      path: /api1/**
      serviceId: lf
      stripPrefix: false
      semaphore:
        maxSemaphores: 2000   # per-route semaphore limit (the default is 100)
    oauth:
      path: /api2/**
      serviceId: lf
      stripPrefix: false
      semaphore:
        maxSemaphores: 1000

The semaphores of the two routes were raised to 2000 and 1000 respectively, and we then ran a Gatling test.
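As an aside, besides the per-route settings above, Spring Cloud Netflix also exposes a gateway-wide default for the semaphore limit. A minimal sketch, assuming the zuul.semaphore.max-semaphores property from the Spring Cloud Netflix documentation, which applies to routes that do not override it:

zuul:
  semaphore:
    max-semaphores: 2000   # assumed gateway-wide default, instead of (or alongside) per-route overrides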

setUp(scn.inject(rampUsers(200) over (3 seconds)).protocols(httpConf))

This is our injection model: 200 users start within 3 seconds, and each user calls five APIs in sequence, so there are 1000 requests in total. The test machine has only 2 cores and 16 GB of RAM, with the database running in a Docker container on the same machine, so the overall capacity is not high.

[Gatling report for the semaphore isolation test]

Looking at the results there are still 57 KOs, but that is a big improvement: before the change, more than 900 of the 1000 requests were KO.

Thread isolation

The Edgware version of Spring Cloud provides another isolation mechanism, based on thread pools. It is also very simple to enable:

zuul:
  ribbon-isolation-strategy: THREAD
  thread-pool:
    use-separate-thread-pools: true
    thread-pool-key-prefix: zuulgw

hystrix:
  threadpool:
    default:
      coreSize: 50
      maximumSize: 10000
      allowMaximumSizeToDivergeFromCoreSize: true
      maxQueueSize: -1
  command:
    default:
      execution:
        isolation:
          thread:
            timeoutInMilliseconds: 60000

use-separate-thread-pools means that each route gets its own thread pool instead of sharing a single one.
thread-pool-key-prefix sets a prefix for the thread pool names, which makes debugging easier.
The hystrix section mainly configures the thread pool size. It is set to 10000 here, which is not necessarily better: a larger pool smooths out load spikes more effectively, trading time for space, but the overall load on the system rises and response times grow longer and longer; once the response time exceeds a certain limit, the system is effectively unusable. You can see this in the data later on.

[Gatling report for the thread isolation test]

This time there are no 500s; all 1000 requests completed normally.

Comparison

Let's compare the effect of the two isolation mechanisms with a few charts. In each pair, the first chart is semaphore isolation and the second is thread isolation.

Response time distribution

[Response time distribution charts for semaphore isolation and thread isolation]

Intuitively, the distribution looks better with thread isolation: a somewhat larger share of responses complete within 600 ms.

QPS

[Requests per second and responses per second charts for semaphore isolation and thread isolation]

The two charts show the number of requests and the number of responses per second over time.

Look first at semaphore isolation: responses per second rise gradually, but after reaching a certain level the gateway starts rejecting requests. My guess is that the limit was exceeded or the semaphore timed out.

Thread isolation is more interesting. Requests per second rise faster than in the previous chart, which suggests the system is willing to accept more requests and hand them off to the thread pool. Responses per second, however, start to drop at some point, because continuously creating threads consumes a lot of system resources and slows the responses down. Later on, as fewer requests arrive and the load falls, responses per second recover. So a bigger thread pool is not always better; it needs tuning to find the balance.
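To illustrate that balance, here is a more conservative sizing sketch. The numbers are illustrative assumptions rather than values from this test; the idea is to cap thread growth and use a bounded queue so the gateway sheds excess load before it exhausts itself:

hystrix:
  threadpool:
    default:
      coreSize: 50
      maximumSize: 200                             # cap thread growth instead of allowing 10000
      allowMaximumSizeToDivergeFromCoreSize: true
      maxQueueSize: 1000                           # bounded queue instead of -1 (SynchronousQueue)
      queueSizeRejectionThreshold: 800             # start rejecting before the queue is completely full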

Summary

Thread pools provide a better isolation mechanism than semaphores, and practical testing shows that they complete more requests in high-throughput scenarios. But semaphore isolation has a lower overhead; for a system whose own response time is within about 10 ms, semaphores are clearly the more appropriate choice.
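To make that choice concrete, here is a minimal sketch of the two alternatives (pick one; the values are illustrative, and SEMAPHORE is Zuul's default isolation strategy, as noted at the start):

# Option A: backends answer well within ~10 ms, so keep the cheap semaphore isolation
zuul:
  ribbon-isolation-strategy: SEMAPHORE

# Option B: high-throughput scenario, so switch to per-route thread pools
zuul:
  ribbon-isolation-strategy: THREAD
  thread-pool:
    use-separate-thread-pools: true
    thread-pool-key-prefix: zuulgw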

 
