Redis memory optimization

Memory optimization methods and parameters

Turn off Redis's virtual memory (VM) feature, i.e. set vm-enabled no in redis.conf.
Set maxmemory in redis.conf to tell Redis how much physical memory it may use before refusing further write requests; this prevents Redis performance from degrading or even crashing.
Memory usage rules can be set per data type, which improves the memory efficiency of that type.
Hash has the following two parameters in redis.conf; if either threshold is exceeded, the value is stored as a hash table (HashMap) instead:
hash-max-zipmap-entries 64 means that when the number of entries in the hash is less than 64, a zipmap is used to store the value
hash-max-zipmap-value 512 means that when each member of the hash is shorter than 512 bytes, a zipmap is used to store the value
List has two analogous parameters in redis.conf:
list-max-ziplist-entries 64
list-max-ziplist-value 512
There is a macro definition in the Redis source code, REDIS_SHARED_INTEGERS (default 10000); changing this value changes the memory overhead of storing numeric values, because integers below it are served from a shared object pool instead of being allocated per key.
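A minimal sketch of inspecting and adjusting some of these settings at runtime with redis-cli CONFIG, assuming a reasonably modern Redis where the zipmap options are named ziplist; the values shown are examples, not recommendations:

redis-cli config get maxmemory
redis-cli config set maxmemory 4gb               # example cap; with the default noeviction policy, writes are refused once it is hit
redis-cli config get hash-max-ziplist-entries    # called hash-max-zipmap-entries in very old versions
redis-cli config get hash-max-ziplist-value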

 

------------ Reprinted ------------------------------------

Business scenario: Redis is currently the mainstream key-value in-memory database. Because of its high concurrency and fast read/write speed, much of our hot data is stored in Redis. When the data volume is large, memory becomes an expensive line item, so optimizing Redis memory usage is necessary.
The following is the memory usage of our company's Redis service:

[Figure: Redis memory usage]

The figure above shows that the number of objects in Redis has reached 1,599,098,020 (about 1.5 billion), memory usage has reached 371 GB, and both keep growing. We need to optimize as soon as possible to free the excess space.
Current status of storage
  • The data type currently used for storage is: string
  • The key has the form: type:business_tag:user_id:item_id:item_detail_id
  • The value has the form:
  {
    "expireTime": "253402271999000",
    "value": "test_test_test_test"
  }
The main optimization methods are as follows:
  • Change the storage structure from string to hash

The main basis for the memory optimization:
The Hash data structure in Redis has two main encodings:

  • OBJ_ENCODING_ZIPLIST (compressed list)
  • OBJ_ENCODING_HT (hash table)

Redis adaptively chooses the better of these two encodings for each hash according to its contents; the switch is completely transparent to the user. The conditions for using compressed-list (ziplist) storage are mainly:

  • Few entries (hash-max-ziplist-entries): the number of fields in the hash is less than 512
  • Small values (hash-max-ziplist-value): the length of each field value is less than 64 bytes

That is, the condition for compressed-list storage is:

field_num < hash-max-ziplist-entries && value.length < hash-max-ziplist-value

In summary: the business scenario described earlier fully satisfies these conditions, which is one reason for switching from string storage to Hash storage.

The main idea of OBJ_ENCODING_ZIPLIST is to trade time for space; it suits scenarios with few fields and small field values.
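A quick way to watch this encoding switch happen, as a minimal sketch against a local instance with default thresholds (user:1001 is a made-up key, not taken from the article):

redis-cli hset user:1001 name tom age 18
redis-cli object encoding user:1001        # -> "ziplist" (reported as "listpack" on Redis 7+)
redis-cli hset user:1001 note xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx   # 80-byte value, over the 64-byte limit
redis-cli object encoding user:1001        # -> "hashtable"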
  • Key optimization

To squeeze out memory in every direction, the key (and the hash field) is also compressed where necessary:

  • If a segment is a numeric string, it can be shortened by converting it to a higher base such as hexadecimal (see the sketch below)
  • Use abbreviations for string segments
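For instance, the trailing id from the sample key used later in this article can be shortened with a plain base conversion (a hypothetical illustration, not the article's exact scheme):

printf '%x\n' 545953888100     # -> 7f1d633764 (12 decimal digits shrink to 10 hex characters)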

[Figure: key/value example]

Comparison of Redis instance storage after migration

To ensure the accuracy of the test, Redis was installed on a private server and the memory footprint was measured before and after each optimization.
The whole test process follows:

  • Install Redis with Docker
docker pull redis

[screenshot]
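The step that actually starts the container (with the password used below) is not shown in the original; a plausible command, assuming the official image, would be:

docker run -d --name redis-test -p 6379:6379 redis redis-server --requirepass 123456   # hypothetical name and port mapping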

docker ps # list running containers

[screenshot]

docker exec -it ****** /bin/bash # enter the container (****** is the container id)
redis-cli -a 123456 # open redis-cli and connect to Redis
info memory # check the memory usage of the freshly installed Redis

[screenshot]

The above is a brief record of setting up the Redis service.

  • Data storage test (before optimization)

Before optimization, store 1,000,000 string entries of the following form:

-- string key
0:1:201155:100:545953888100
-- string value
{ "value": "3.6230093077568726-0.3630194103106919100", "expireTime": "2147483647" }
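A rough sketch of how such a data set can be loaded and measured (the loop index stands in for the real item_detail_id; redis-cli executes commands read from a pipe):

for i in $(seq 1 1000000); do
  echo "SET 0:1:201155:100:$i '{\"value\":\"3.6230093077568726-0.3630194103106919100\",\"expireTime\":\"2147483647\"}'"
done | redis-cli -a 123456
redis-cli -a 123456 info memory | grep used_memory_human   # note this figure before and after each change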

[screenshot]
As the memory stats in the figure above show, 1,000,000 strings before optimization consume 200.12 MB of memory.
After the test completes, the stored data is deleted.

[screenshot]

  • (1) Data storage test (key and value optimized)

    • Shorten the key: convert the trailing segment that carries no business identifier to hexadecimal
    • Shorten the value: compress the value data

[screenshot]
After this optimization, storing 1,000,000 entries uses 139.10 MB of memory, about 30% less than before ((200.12 - 139.10) / 200.12 ≈ 30%).
By a rough estimate, applying this optimization to the production data set (371 GB) would save around 115 GB of memory.

  • (2) Data storage test (key and value optimized)

    • Shorten the key: convert the trailing segment that carries no business identifier to base 64
    • Shorten the value: compress the value data

[screenshot]
This change makes almost no difference: memory drops by less than 1 MB, which is puzzling.
So next the key length was left alone and only the value was compressed. The test result is as follows:

[screenshot]
From this test, 1,000,000 entries use 146.59 MB, so compressing the key, whether to base 16 or base 64, has no obvious effect.

  • (3) Data storage test (key and value optimized)

    • Shorten the key: convert the trailing segment that carries no business identifier to base 64
    • Shorten the value: drop the JSON wrapper and store only attr1:value,attr2:value2 (see the byte-count sketch below)
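A rough byte-count comparison of the two value layouts, using the sample value from earlier (illustrative only):

echo -n '{"value":"3.6230093077568726-0.3630194103106919100","expireTime":"2147483647"}' | wc -c   # 78 bytes
echo -n 'value:3.6230093077568726-0.3630194103106919100,expireTime:2147483647' | wc -c             # 68 bytes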

[screenshot]

After this optimization, storing 1,000,000 entries uses 123.86 MB, a saving of about 38%.

  • (4) Data storage test (key and value optimized)

    • Shorten the key: convert the trailing segment that carries no business identifier to base 64
    • Shorten the value: drop the JSON wrapper and store only attr1:value

[screenshot]
The fourth optimization saves only a little compared with the third, but usage is still shrinking: 123.71 MB this time. So we keep compressing the value.

  • (5) Data storage test (key and value optimized)

    • Shorten the key: convert the trailing segment that carries no business identifier to base 64
    • Shorten the value: store only attr1:value, with exp (the expiry timestamp) also converted to base 64

[screenshot]
After the fifth optimization, the memory footprint is down to 116.27 MB, a saving of about 42%.

  • Data storage test (on top of the optimized key and value, change the data structure to HASH)

[screenshot]

Time to witness the miracle: after switching to the HASH structure, 1,000,000 entries occupy 100.49 MB, a saving of about 50%.
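The article does not show the exact key split it used; a purely illustrative sketch keeps the shared key prefix as the hash key and moves the (hex-converted) trailing id into the field, so each hash stays small enough for ziplist encoding and the per-key overhead is paid once per prefix instead of once per item:

redis-cli -a 123456 hset 0:1:201155:100 7f1d633764 "3.6230093077568726-0.3630194103106919100"
redis-cli -a 123456 object encoding 0:1:201155:100   # stays "ziplist" (or "listpack") while field count and value sizes remain small

One trade-off worth noting: hash fields cannot carry their own TTL, which is presumably why the original value keeps an explicit expireTime.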

Thinking:

Redis memory optimization must be targeted at the business scenario rather than blindly treating smaller memory as the only goal; a balance between space and time has to be found. In this case, although memory was reduced by 50%, the data also has high performance requirements in practice, and the extra encoding and decoding adds time cost, so the scheme finally chosen is the one that best balances memory savings against that overhead.

Origin www.cnblogs.com/weigy/p/12677458.html