Interviewer, please stop asking me about Redis!

Thank you for reading. If this article helps you, please like, bookmark, and share it. Many thanks!

Table of Contents


Foreword:

Main text:

So why use caching? Or, why use Redis?

Why use Redis for caching instead of a map or Guava?

What are the differences between Redis and Memcached?

What is Redis's memory eviction mechanism? (There are 20 million records in MySQL but only 200,000 in Redis. How do you ensure that the data in Redis is all hot data?)

How can data be restored after Redis crashes and restarts?

Summary:


Foreword:

Simply put, Redis is a database, but unlike traditional databases, Redis keeps its data in memory, so reads and writes are very fast. That is why Redis is widely used for caching, and it is also often used to implement distributed locks. Redis provides a variety of data types to support different business scenarios, and it additionally supports transactions, persistence, Lua scripts, LRU eviction, and multiple cluster solutions.
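For a quick feel of those data types and the distributed-lock use case, here are a few illustrative redis-cli commands (the key names and values are made up for the example):

SET page:home "<html>...</html>" EX 300      // string value with a 5-minute TTL, a typical cache entry
HSET user:1 name "Alice" age 30              // hash, convenient for object-like data
LPUSH queue:orders "order-1001"              // list, often used as a lightweight queue
SET lock:order:1001 "client-token" NX EX 30  // SET with NX + EX is the basic building block of a distributed lock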

 

Main text:

So why use caching? Or, why use Redis?

There are two main reasons:

1. High performance

Serving hot data from Redis's memory avoids repeating slow database queries, so cached reads return far faster than going to the database every time.

 

2. High concurrency

A single Redis instance can absorb far more requests per second than a relational database, so routing reads through the cache shields the database under heavy traffic.

 

Why use Redis for caching instead of a map or Guava?

Caches fall into two categories: local caches and distributed caches.

Taking Java as an example, a local cache can be built with the language's built-in map or with Guava. Its main advantages are that it is lightweight and fast, but its life cycle ends when the JVM shuts down, and when there are multiple application instances, each instance has to keep its own copy of the cache, so the copies are not consistent with one another.

Using Redis or Memcached is what we call a distributed cache. With multiple application instances, all instances share a single copy of the cached data, so the cache stays consistent. The downside is that you have to keep the Redis or Memcached service highly available, which makes the overall architecture more complex.

 

What are the differences between Redis and Memcached?

 

What is Redis's memory eviction mechanism? (There are 20 million records in MySQL but only 200,000 in Redis. How do you ensure that the data in Redis is all hot data?)

Redis lets you set an expiration time on keys: every value stored in the Redis database can be given a time to live, which is very practical for a cache. Expired keys are cleaned up through a combination of periodic deletion and lazy deletion.

However, setting expiration times alone is not enough. Periodic deletion misses many expired keys, and if those keys are never accessed again, lazy deletion never kicks in either; the expired keys left sitting in memory can eventually exhaust Redis's memory. This is where the memory eviction policies come in.
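As a small illustration of key expiration (the key name and TTL values are made up for the example), a TTL can be attached when the key is written or added afterwards:

SET session:42 "user-data" EX 60   // write the key with a 60-second time to live
EXPIRE session:42 120              // reset the TTL of an existing key to 120 seconds
TTL session:42                     // check how many seconds remain before the key expires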

Redis provides six memory eviction policies:

1. volatile-lru: evict the least recently used key from the set of keys that have an expiration time set (server.db[i].expires).

2. volatile-ttl: evict the key that is closest to expiring from the set of keys that have an expiration time set (server.db[i].expires).

3. volatile-random: evict a random key from the set of keys that have an expiration time set (server.db[i].expires).

4. allkeys-lru: when memory is insufficient to hold newly written data, evict the least recently used key from the whole key space (this is the most commonly used policy).

5. allkeys-random: evict a random key from the whole key space (server.db[i].dict).

6. noeviction: never evict data; when memory cannot hold newly written data, new write operations simply return an error.

(Redis 4.0 added two more policies: volatile-lfu, which evicts the least frequently used key among the keys that have an expiration time set, and allkeys-lfu, which evicts the least frequently used key from the whole key space.)
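For the hot-data interview question above (only 200,000 of the 20 million records fit in Redis), the usual answer is to cap Redis's memory and let allkeys-lru push out the least recently used keys, so what remains is the hot data. A minimal redis.conf sketch (the 512mb limit is just an illustrative value):

maxmemory 512mb               // upper bound on the memory Redis may use for data
maxmemory-policy allkeys-lru  // when the limit is hit, evict the least recently used keys first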

 

How can data be restored after Redis crashes and restarts?

One important way Redis differs from Memcached is that Redis supports persistence, and it supports two different persistence mechanisms, whereas Memcached does not support persistence at all.

Redis's first persistence method is called snapshotting (RDB); the other is the append-only file (AOF).

1. Snapshot (RDB) persistence

Snapshot persistence is the method Redis uses by default; it is configured in the redis.conf configuration file as follows:

save 900 1    // after 900 seconds (15 minutes), if at least 1 key has changed, Redis automatically triggers the BGSAVE command to create a snapshot
save 300 10   // after 300 seconds (5 minutes), if at least 10 keys have changed, Redis automatically triggers the BGSAVE command to create a snapshot
save 60 10000 // after 60 seconds (1 minute), if at least 10000 keys have changed, Redis automatically triggers the BGSAVE command to create a snapshot
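Snapshots can also be triggered manually with the SAVE or BGSAVE commands (BGSAVE forks a child process so the main process keeps serving requests). The snapshot file's name and location are controlled by two directives; a minimal sketch (the directory path is illustrative):

dbfilename dump.rdb   // name of the RDB snapshot file
dir /var/lib/redis    // directory where the RDB (and AOF) files are written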

2. AOF (append-only file) persistence

Compared with snapshot persistence, AOF persistence keeps the data on disk more up to date, so it has become the mainstream persistence solution. Redis does not enable AOF (append-only file) persistence by default; it can be switched on with the appendonly parameter:

appendonly yes

With AOF enabled, every command that changes data in Redis is appended to the AOF file on disk, in execution order. The AOF file is saved in the same location as the RDB file; both are set with the dir parameter, and the default file name is appendonly.aof.

The Redis configuration file offers three AOF fsync policies:

appendfsync always   // write to the AOF file every time a data modification happens; this severely slows Redis down
appendfsync everysec // sync once per second, explicitly flushing the buffered write commands to disk (recommended)
appendfsync no       // let the operating system decide when to sync

Supplement: Redis 4.0's improvements to the persistence mechanism

Starting with Redis 4.0, hybrid RDB + AOF persistence is supported (it is disabled by default and can be turned on with the aof-use-rdb-preamble configuration item).

When hybrid persistence is enabled, the RDB content is written directly at the beginning of the AOF file whenever the AOF is rewritten. The advantage is that this combines the strengths of RDB and AOF: the file loads quickly and little data is lost. The drawback is that the RDB portion inside the AOF file is compressed binary rather than AOF-format commands, so the file is less readable.
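To try the hybrid mode described above, AOF itself must be on as well as the RDB preamble; a minimal redis.conf sketch:

appendonly yes             // hybrid persistence only applies when AOF is enabled
aof-use-rdb-preamble yes   // make AOF rewrites emit an RDB preamble followed by AOF commands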

 

Summary:

In upcoming posts we will continue to cover Redis transactions, cache avalanche, cache penetration, concurrent key contention, data consistency when double-writing the cache and the database, and more. Stay tuned!

 


Origin blog.csdn.net/l_mloveforever/article/details/111574049