BeetleX service gateway: rate limiting and caching

Rate limiting and caching are two very important gateway functions: the former keeps services running safely and reliably, and the latter can greatly improve application throughput. The Beetlex.Bumblebee microservice gateway provides two extensions that implement these functions, BeetleX.Bumblebee.ConcurrentLimits and BeetleX.Bumblebee.Caching. ConcurrentLimits provides concurrency-limiting policies per IP or per URL, and Caching lets you configure different cache strategies for different URLs. The following sections describe how to use and configure these two plug-ins.

Plug-in reference

To use rate limiting and caching, Bumblebee needs to reference the corresponding plug-ins: Bumblebee.Configuration, BeetleX.Bumblebee.ConcurrentLimits and BeetleX.Bumblebee.Caching. After the gateway starts, the loaded plug-ins can be configured through the configuration management tool.

            // Create the gateway, start the HTTP listener and load the
            // configuration-management, caching and concurrent-limit plug-ins.
            Gateway g = new Gateway();
            g.HttpOptions(
                o =>
                {
                    o.Port = 80;
                    o.LogToConsole = true;
                    o.LogLevel = BeetleX.EventArgs.LogType.Error;
                });
            g.Open();
            g.LoadPlugin(
                typeof(Bumblebee.Configuration.Management).Assembly,
                typeof(Bumblebee.Caching.default_request_cached_reader).Assembly,
                typeof(Bumblebee.ConcurrentLimits.UrlConcurrentLimits).Assembly
               );

The code above only shows how to reference the plug-ins; it is recommended to download and run the release build directly: https://github.com/IKende/Bumblebee/blob/master/bin/  (supports Windows/Linux, .NET Core 2.1 or later).

After the plug-ins are referenced, they appear in the plug-in management interface; most plug-ins are disabled by default and need to be switched on there.

Rate-limiting configuration

default_ip_concurrent_limits

This plug-in limits concurrent requests per IP address: it restricts the number of requests an IP can make per second, and if an IP exceeds that number it is banned for a period of time. To better match real deployments, a whitelist setting was added to exclude specific IPs or IP ranges from the limit. The following describes how to configure and use this plug-in.

{
    "Limit": 100,
    "DisabledTime": 100,
    "CleanTime": 1800,
    "WhiteList": [
        "192.168.1.1/24",
        "192.168.2.18"
    ]
}
  • Limit: the maximum number of requests allowed per IP per second
  • DisabledTime: how long (in seconds) an IP is banned after exceeding the per-second limit
  • CleanTime: the interval (in seconds) at which inactive IPs are cleared from the limit table
  • WhiteList: IPs or IP ranges on this whitelist are never restricted

The configuration above limits each IP to 100 requests per second, but excludes "192.168.1.1/24" and "192.168.2.18". The test results are shown below.

The above shows two stress-test runs from 192.168.2.19. The first run triggered the limit, so more than 99% of its requests were rejected; every request in the second run was rejected outright. From the results, roughly 200,000 requests per second were treated as illegal; imagine the damage that traffic would do if it reached the upstream services directly! The IP was then tested again after being added to the whitelist.

[image: stress-test results with the IP added to the whitelist]

As the whitelisted run shows, the upstream services can only handle about 40,000 rps, so the concurrency limit provides very effective flood protection.
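To make the mechanism easier to follow, here is a minimal C# sketch of how a per-IP, per-second counter with a temporary ban could work. It only illustrates the idea behind Limit and DisabledTime; it is not the plug-in's actual code, all names in it are made up, and whitelist handling is omitted.

using System;
using System.Collections.Concurrent;

// Illustrative sketch only: counts requests per IP for the current second and
// bans an IP for DisabledTime seconds once it exceeds Limit requests in one second.
public class IpLimiterSketch
{
    public int Limit { get; set; } = 100;          // max requests per IP per second
    public int DisabledTime { get; set; } = 100;   // ban duration in seconds

    private class Entry
    {
        public long Second;            // the second the counter belongs to
        public int Count;              // requests counted in that second
        public DateTime BannedUntil;   // when an active ban expires
    }

    private readonly ConcurrentDictionary<string, Entry> mEntries =
        new ConcurrentDictionary<string, Entry>();

    // Returns true if the request from this IP should be forwarded.
    public bool Allow(string ip)
    {
        DateTime now = DateTime.UtcNow;
        Entry entry = mEntries.GetOrAdd(ip, _ => new Entry());
        lock (entry)
        {
            if (entry.BannedUntil > now)
                return false;                          // still banned
            long second = now.Ticks / TimeSpan.TicksPerSecond;
            if (entry.Second != second)
            {
                entry.Second = second;                 // a new second starts, reset the counter
                entry.Count = 0;
            }
            entry.Count++;
            if (entry.Count > Limit)
            {
                entry.BannedUntil = now.AddSeconds(DisabledTime);
                return false;                          // limit exceeded, ban the IP
            }
            return true;
        }
    }
}

A background timer driven by something like CleanTime would periodically drop entries for IPs that have not been seen for a while; the sketch leaves that part out.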

default_url_concurrent_limits

This plug-in applies different concurrency limits to different URLs. In any service it is inevitable that some APIs involve complex logic and consume a lot of resources; if those interfaces are hit with excessive concurrency, the whole service can be affected. With this plug-in you can restrict the concurrency of specific APIs and keep them from impacting other resources. Here is a simple example configuration:

{
    "UrlLimits": [
        {
            "Url": "^/jso.*",
            "Rps": 300
        },
        {
            "Url": "^/emp.*",
            "Rps": 100
        }
    ],
    "CleanTime": 1800
}

The configuration defines concurrency limits for two URL patterns, restricting them to 300 and 100 requests per second respectively. The stress-test results are shown below (a sketch of how such URL pattern matching might work follows the results).

  • /Json

With the limit at 300 requests per second and the test running just over 5 seconds, about 1,800 requests succeeded and the remaining requests (more than 990,000) were rejected.

  • /Employee/2

With the limit at 100 requests per second over the same test period, about 600 requests succeeded and the remaining requests (again more than 990,000) were rejected.
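To illustrate how URL patterns such as "^/jso.*" and "^/emp.*" can be mapped to their Rps limits, here is a small C# sketch that resolves the limit for a request path by regular-expression matching. It is an illustration under assumptions, not the plug-in's implementation; for example, it treats the patterns as case-insensitive, which the actual plug-in may or may not do.

using System.Collections.Generic;
using System.Text.RegularExpressions;

// Illustrative sketch only: map a request path to the Rps limit of the first matching rule.
public class UrlLimitRule
{
    public Regex Pattern { get; }
    public int Rps { get; }

    public UrlLimitRule(string url, int rps)
    {
        // "Url" values such as "^/jso.*" are treated here as case-insensitive regexes.
        Pattern = new Regex(url, RegexOptions.IgnoreCase | RegexOptions.Compiled);
        Rps = rps;
    }
}

public static class UrlLimitMatcher
{
    // Returns the Rps limit of the first matching rule, or null if the path is unlimited.
    public static int? GetLimit(IReadOnlyList<UrlLimitRule> rules, string path)
    {
        foreach (UrlLimitRule rule in rules)
            if (rule.Pattern.IsMatch(path))
                return rule.Rps;
        return null;
    }
}

With the example configuration above, GetLimit would return 300 for "/Json" and 100 for "/Employee/2"; the per-second counting itself would then work along the lines of the IP limiter sketch shown earlier.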

Cache configuration 

The cache plug-in has two parts, a writer and a reader; the reader only takes effect when the writer is enabled. Only the writer plug-in needs to be configured; the reader plug-in requires no configuration.

default_request_cache_writer

This plug-in lets you define request-caching strategies for different URL paths. The configuration is also very simple, as shown below:

{
    "Caches": [
        {
            "Url": "^/jso.*",
            "TimeOut": 100
        },
        {
            "Url": "^/api.*",
            "TimeOut": 200
        }
    ],
    "WhiteList": [
        "192.168.2.1/24"
    ]
}

The cache plug-in's configuration is straightforward: just set a cache timeout (in seconds) for each URL pattern. WhiteList is the list of addresses authorized to perform cache-management operations. The caching mechanism is built on .NET Core's MemoryCache; if you need to use Redis you have to extend the plug-in yourself, but for a gateway handling heavy traffic, caching in local memory is still considerably more efficient.
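As a rough illustration of the idea, and assuming the standard Microsoft.Extensions.Caching.Memory package rather than the plug-in's own code, a URL-keyed response cache with a per-pattern timeout could look like this:

using System;
using Microsoft.Extensions.Caching.Memory;

// Illustrative sketch only: cache response bodies keyed by request URL,
// expiring each entry after the TimeOut configured for its URL pattern.
public class ResponseCacheSketch
{
    private readonly MemoryCache mCache = new MemoryCache(new MemoryCacheOptions());

    // Store a response body for the given URL for timeoutSeconds seconds.
    public void Store(string url, byte[] body, int timeoutSeconds)
    {
        mCache.Set(url, body, TimeSpan.FromSeconds(timeoutSeconds));
    }

    // Try to answer a request from the cache.
    public bool TryGet(string url, out byte[] body)
    {
        return mCache.TryGetValue(url, out body);
    }

    // Remove a single cached URL (compare the remove interface described later).
    public void Remove(string url) => mCache.Remove(url);

    // Drop all entries (compare the clean interface described later).
    public void Clean() => mCache.Compact(1.0);
}

Swapping MemoryCache for a Redis client would move the cache out of process, at the cost of the extra network hop mentioned above.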

Test

To test the effect of gateway-level caching, the plug-in was stress-tested. To let the cache show its full benefit, the tests were run on a 10Gb network (the gateway server is an older E3-1230 V2 machine), so the effect can be observed without bandwidth becoming the bottleneck. The tests retrieve data lists of different sizes with the cache turned off and on, and compare the difference.

http://192.168.2.18/customers/5

[image: plug-in monitoring for /customers/5 with the cache off and on]

The plug-in monitoring above shows that before the cache was enabled, concurrency was around 40,000 rps with bandwidth around 500Mb; after the cache was enabled, concurrency reached 200,000 rps or more (the plug-in's chart caps the displayed concurrency at 100,000 rps) with bandwidth close to 3Gb.

http://192.168.2.18/customers/20

[image: plug-in monitoring for /customers/20 with the cache off and on]

The plug-in monitoring again shows that before the cache was enabled, concurrency was around 20,000 rps with bandwidth under 1Gb; after the cache was enabled, concurrency reached about 170,000 rps (again beyond the chart's 100,000 rps display cap) with bandwidth close to 8Gb.

Generally speaking, the benefit of enabling a gateway-level cache is very obvious; once it is on, the limit on the service's concurrency may well be the gateway's outbound bandwidth.

Cache Operations

Once the two cache plug-ins are installed, the gateway exposes interfaces for deleting the cache entry of a specific URL and for clearing the whole cache. Both interfaces can only be called from IPs on the whitelist described above; a small usage sketch follows the list.

  • http://host/__system/bumblebee/cache/remove?url=<the cached URL to remove>
  • http://host/__system/bumblebee/cache/clean
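As a quick usage sketch, assuming the gateway address used in the tests above (http://192.168.2.18), that the calling machine's IP is on the cache WhiteList, and that the interfaces accept plain GET requests, the two operations could be called like this:

using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class CacheAdminSketch
{
    public static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // Remove the cache entry for one URL; the url parameter is the cached request path.
            HttpResponseMessage remove = await client.GetAsync(
                "http://192.168.2.18/__system/bumblebee/cache/remove?url=" +
                Uri.EscapeDataString("/customers/5"));
            Console.WriteLine($"remove: {remove.StatusCode}");

            // Clear all cached entries.
            HttpResponseMessage clean = await client.GetAsync(
                "http://192.168.2.18/__system/bumblebee/cache/clean");
            Console.WriteLine($"clean: {clean.StatusCode}");
        }
    }
}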
