RocksDB out of memory

Dirk:

I'm trying to find out why my Kafka Streams application runs out of memory. I have already found that RocksDB is consuming lots of native memory, and I tried to restrict it with the following configuration:

# put index and filter blocks in the blockCache to avoid letting them grow unbounded (https://github.com/facebook/rocksdb/wiki/Block-Cache#caching-index-and-filter-blocks)
cache_index_and_filter_blocks = true

# keep L0 filter and index blocks pinned in the blockCache to reduce the performance impact of caching them (https://github.com/facebook/rocksdb/wiki/Block-Cache#caching-index-and-filter-blocks)
pinL0FilterAndIndexBlocksInCache = true

# blockCacheSize should be about 1/3 of the total memory available (https://github.com/facebook/rocksdb/wiki/Setup-Options-and-Basic-Tuning#block-cache-size)
blockCacheSize = 1350 * 1024 * 1024

# use a larger blockSize to reduce the index block size (https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide#difference-of-spinning-disk)
blockSize = 256 * 1024

but the memory usage still seems to grow unbounded, and my container eventually gets OOMKilled.
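
For context, in Kafka Streams such RocksDB options are applied through a custom RocksDBConfigSetter registered under rocksdb.config.setter; a minimal sketch of such a setter (the class name is just illustrative) looks roughly like this:

import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.Options;

public class CustomRocksDBConfig implements RocksDBConfigSetter {

    @Override
    public void setConfig(final String storeName, final Options options, final Map<String, Object> configs) {
        final BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
        // put index and filter blocks into the block cache so they don't grow unbounded
        tableConfig.setCacheIndexAndFilterBlocks(true);
        // keep L0 filter/index blocks pinned in the cache to limit the performance impact
        tableConfig.setPinL0FilterAndIndexBlocksInCache(true);
        // block cache of 1350 MB (roughly 1/3 of the memory available to the container)
        tableConfig.setBlockCacheSize(1350 * 1024 * 1024L);
        // larger blocks -> smaller index blocks
        tableConfig.setBlockSize(256 * 1024L);
        options.setTableFormatConfig(tableConfig);
    }
}

// registered in the streams configuration, e.g.:
// props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, CustomRocksDBConfig.class);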

I used jemalloc to profile the memory usage (as described here), and the result clearly shows that RocksDB is responsible, but I have no clue how to further restrict its memory usage.

jemalloc profiling

I don't know if it is helpful, but for completeness, here are the statistics gathered from a running RocksDB instance:

rocksDB statistics

I'd be glad for any hints.

Dirk:

I found out what was causing this.

I thought that my Kafka Streams application would have only one RocksDB instance, but there is one instance per stream partition. So this configuration:

blockCacheSize=1350 * 1024 * 1024

does not necessarily mean that the RocksDB memory is restricted to 1350 MB. If the application has, e.g., 8 stream partitions assigned, it also has 8 block caches and can thus take up to 1350 MB * 8 = ~11 GB of memory.
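
One way to keep the total bounded, following the pattern described in the Kafka Streams memory-management documentation, is to share a single block cache and write buffer manager across all RocksDB instances by holding them in static fields of the config setter. A rough sketch of that setter (sizes and class name are illustrative):

import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.Cache;
import org.rocksdb.LRUCache;
import org.rocksdb.Options;
import org.rocksdb.WriteBufferManager;

public class BoundedMemoryRocksDBConfig implements RocksDBConfigSetter {

    // Shared by ALL RocksDB instances of this application instance, so the total
    // stays bounded no matter how many partitions (stores) are assigned.
    // Sizes are illustrative.
    private static final long TOTAL_BLOCK_CACHE_MEMORY = 1350 * 1024 * 1024L;
    private static final long TOTAL_MEMTABLE_MEMORY = 256 * 1024 * 1024L;

    private static final Cache CACHE = new LRUCache(TOTAL_BLOCK_CACHE_MEMORY);
    private static final WriteBufferManager WRITE_BUFFER_MANAGER =
            new WriteBufferManager(TOTAL_MEMTABLE_MEMORY, CACHE);

    @Override
    public void setConfig(final String storeName, final Options options, final Map<String, Object> configs) {
        final BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
        tableConfig.setBlockCache(CACHE);                    // one shared block cache for all stores
        tableConfig.setCacheIndexAndFilterBlocks(true);      // index/filter blocks count against it
        tableConfig.setPinL0FilterAndIndexBlocksInCache(true);
        tableConfig.setBlockSize(256 * 1024L);
        // memtable (write buffer) memory is accounted against the same shared cache
        options.setWriteBufferManager(WRITE_BUFFER_MANAGER);
        options.setTableFormatConfig(tableConfig);
    }
}

With this pattern every store partition still gets its own RocksDB instance, but they all draw from the same fixed-size cache, so the total off-heap usage stays bounded regardless of how many partitions are assigned; the static cache and write buffer manager must not be closed per store, since all instances share them.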
