RocksDB Rate Limiter Source Code Notes

This article focuses on one component of RocksDB: the Rate Limiter. The idea behind a rate limiter is widely used in many other systems as well.

In RocksDB, background compaction and flush operations run continuously and generate a large volume of disk writes. The Rate Limiter can be used to cap the maximum write rate, because in some scenarios a burst of writes causes significant read latency and hurts overall system performance.

The basic principle of the Rate Limiter is the token bucket algorithm: the system adds tokens to a bucket at a fixed rate per second (up to the bucket's capacity), and a write request is processed only after it obtains a token. When the bucket has no tokens, requests are rejected or blocked. The implementation in RocksDB can be used as a reference.
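To make the idea concrete, here is a minimal sketch of a token bucket in C++. This is not RocksDB's actual implementation; the class name SimpleTokenBucket and its fields are hypothetical. It refills byte-tokens based on elapsed time and blocks a request until enough tokens are available:

#include <algorithm>
#include <chrono>
#include <cstdint>
#include <mutex>
#include <thread>

class SimpleTokenBucket {
 public:
  SimpleTokenBucket(int64_t bytes_per_sec, int64_t refill_period_us)
      : bytes_per_sec_(bytes_per_sec),
        refill_period_us_(refill_period_us),
        available_(0),
        last_refill_(std::chrono::steady_clock::now()) {}

  // Block until `bytes` tokens are available, then consume them.
  // Assumes `bytes` is at most one refill period's worth of tokens.
  void Request(int64_t bytes) {
    std::unique_lock<std::mutex> lock(mutex_);
    while (available_ < bytes) {
      Refill();
      if (available_ < bytes) {
        lock.unlock();
        std::this_thread::sleep_for(
            std::chrono::microseconds(refill_period_us_));
        lock.lock();
      }
    }
    available_ -= bytes;
  }

 private:
  // Add tokens proportional to the time elapsed since the last refill,
  // capped at one refill period's worth so bursts stay bounded.
  void Refill() {
    auto now = std::chrono::steady_clock::now();
    int64_t elapsed_us = std::chrono::duration_cast<std::chrono::microseconds>(
                             now - last_refill_).count();
    if (elapsed_us <= 0) return;
    int64_t new_tokens = bytes_per_sec_ * elapsed_us / 1000000;
    int64_t capacity = bytes_per_sec_ * refill_period_us_ / 1000000;
    available_ = std::min(available_ + new_tokens, capacity);
    last_refill_ = now;
  }

  const int64_t bytes_per_sec_;
  const int64_t refill_period_us_;
  int64_t available_;  // tokens (bytes) currently in the bucket
  std::chrono::steady_clock::time_point last_refill_;
  std::mutex mutex_;
};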

The Rate Limiter has the following adjustable parameters:

  • int64_t rate_bytes_per_sec: the upper limit on the total write rate of compaction and flush. In general this is the only parameter you need to adjust.
  • int64_t refill_period_us: controls how often tokens are refilled. For example, if rate_bytes_per_sec is 10MB/s and refill_period_us is 100ms, then 1MB of tokens is refilled every 100ms.
  • int32_t fairness: balances high- and low-priority requests to prevent starvation of low-priority requests.

A more detailed description can be found directly in rate_limiter.h:

// Create a RateLimiter object, which can be shared among RocksDB instances to control write rate of flush and compaction.
// @rate_bytes_per_sec: this is the only parameter you want to set most of the time. It controls the total write rate of compaction and flush in bytes per second. Currently, RocksDB does not enforce rate limit for anything other than flush and compaction, e.g. write to WAL.
// @refill_period_us: this controls how often tokens are refilled. For example, when rate_bytes_per_sec is set to 10MB/s and refill_period_us is set to 100ms, then 1MB is refilled every 100ms internally. Larger value can lead to burstier writes while smaller value introduces more CPU overhead. The default should work for most cases.
// @fairness: RateLimiter accepts high-pri requests and low-pri requests. A low-pri request is usually blocked in favor of hi-pri request. Currently, RocksDB assigns low-pri to request from compaction and high-pri to request from flush. Low-pri requests can get blocked if flush requests come in continuously. This fairness parameter grants low-pri requests permission by 1/fairness chance even though high-pri requests exist to avoid starvation. You should be good by leaving it at default 10.
// @mode: Mode indicates which types of operations count against the limit.
// @auto_tuned: Enables dynamic adjustment of rate limit within the range `[rate_bytes_per_sec / 20, rate_bytes_per_sec]`, according to the recent demand for background I/O.
extern RateLimiter* NewGenericRateLimiter(
    int64_t rate_bytes_per_sec, int64_t refill_period_us = 100 * 1000,
    int32_t fairness = 10,
    RateLimiter::Mode mode = RateLimiter::Mode::kWritesOnly,
    bool auto_tuned = false);

}  // namespace rocksdb
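For reference, a minimal usage sketch based on the declaration above might look like the following. It attaches the limiter through Options::rate_limiter (a shared_ptr<RateLimiter> shared by flush and compaction); the rate, path, and option values are example choices, not recommendations:

#include <rocksdb/db.h>
#include <rocksdb/options.h>
#include <rocksdb/rate_limiter.h>

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;

  // Cap flush + compaction writes at 10 MB/s, refilling tokens every 100ms,
  // with the default fairness of 10.
  options.rate_limiter.reset(rocksdb::NewGenericRateLimiter(
      10 * 1024 * 1024 /* rate_bytes_per_sec */,
      100 * 1000       /* refill_period_us */,
      10               /* fairness */));

  rocksdb::DB* db = nullptr;
  rocksdb::Status s =
      rocksdb::DB::Open(options, "/tmp/rate_limit_demo", &db);
  if (s.ok()) {
    // ... normal reads/writes; background flush/compaction I/O is now capped ...
    delete db;
  }
  return 0;
}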

Note the bool auto_tuned parameter: RocksDB ships with an auto-tuned Rate Limiter module. The motivation is that rate_bytes_per_sec (the upper limit on write speed) is very hard to tune by hand: if it is too large it has little effect, and if it is too small a large number of write operations cannot proceed. RocksDB therefore provides this module to adjust the limit automatically. When it is enabled, the meaning of rate_bytes_per_sec changes: it becomes the upper bound of the window [rate_bytes_per_sec / 20, rate_bytes_per_sec] within which the actual rate limit is picked dynamically. The auto tuner then periodically monitors the amount of I/O written and raises or lowers the write limit accordingly (resetting rate_bytes_per_sec_ and refill_bytes_per_period_). Benchmarks show that the auto-tuned Rate Limiter can effectively reduce write bursts.
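Enabling auto-tuning only requires passing true for the last parameter of the declaration shown earlier. The sketch below uses an arbitrary 64 MB/s upper bound chosen for illustration:

#include <rocksdb/options.h>
#include <rocksdb/rate_limiter.h>

rocksdb::Options MakeAutoTunedOptions() {
  rocksdb::Options options;
  // With auto_tuned = true, 64 MB/s is the upper bound of the window
  // [rate_bytes_per_sec / 20, rate_bytes_per_sec] from which the actual
  // limit is picked dynamically based on recent background I/O demand.
  options.rate_limiter.reset(rocksdb::NewGenericRateLimiter(
      64 * 1024 * 1024,  // rate_bytes_per_sec (upper bound of the window)
      100 * 1000,        // refill_period_us (default)
      10,                // fairness (default)
      rocksdb::RateLimiter::Mode::kWritesOnly,
      true /* auto_tuned */));
  return options;
}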

 

Origin www.cnblogs.com/pdev/p/11619881.html