Redis Interview Notes

1. What is Redis?

Redis is an open-source (BSD-licensed) database written in C. Unlike traditional databases, Redis keeps its data in memory (it is an in-memory database), giving it very fast reads and writes, which is why it is widely used as a cache. Redis stores data as key-value (KV) pairs.

2. Advantages and disadvantages of Redis

Redis is an open-source, in-memory, key-value data structure store with optional persistence. Its main advantages and disadvantages are as follows:
Advantages:

  1. High performance: Redis keeps all data in memory and executes commands on a single thread, so reads and writes are extremely fast.
  2. Rich data structures: Redis supports a variety of data structures, including strings, lists, hashes, sets, and sorted sets, covering the needs of many different application scenarios.
  3. Data persistence: Redis can persist data to disk, via snapshots (RDB) or an append-only log (AOF), so data can be recovered quickly after a crash.
  4. Distributed support: Redis supports distributed deployment, spreading data across multiple nodes to improve scalability and fault tolerance.
  5. Ease of use: Redis provides a simple command-line interface and client libraries for many programming languages, making it very convenient to use.

Disadvantages:

  1. Limited data capacity: because Redis keeps all data in memory, the dataset is bounded by memory size; large datasets require correspondingly more RAM.
  2. Single-threaded model: the single-threaded command execution that keeps Redis simple and fast can itself become a bottleneck under extreme concurrency or with slow commands.
  3. Not suited to large blobs: since everything lives in memory, Redis is a poor fit for large binary objects such as big files.
  4. Data consistency: Redis supports persistence, but under high concurrency there can be consistency gaps (for example, writes since the last sync can be lost), which requires extra configuration and handling.
  5. Configuration complexity: Redis exposes many configuration options that must be tuned per use case, which can add to operational complexity.

3. Why is Redis so fast

  1. Memory-based: Redis keeps all data in memory, and memory access is orders of magnitude faster than disk access, so reads and writes are very fast.
  2. Efficient event-handling model: Redis implements an efficient event-handling model based on the Reactor pattern, namely a single-threaded event loop combined with IO multiplexing, which lets it handle a large number of concurrent requests efficiently.
  3. Optimized data structure implementations: Redis ships with carefully optimized data structures (hash tables, skip lists for sorted sets, and so on) that handle common operations such as insertion, deletion, and lookup efficiently.
  4. Asynchronous, non-blocking network model: Redis never blocks a thread waiting on I/O, which improves throughput under heavy concurrency.
  5. Data persistence: persistence to disk (snapshots and append-only logs) lets Redis recover quickly after a crash; strictly speaking this improves reliability rather than speed, but it is part of why Redis can be relied on at all.

4. Why does Redis choose single thread

Redis chose a single-threaded model mainly to avoid the overhead of thread switching and the complexity of race conditions, improving both performance and reliability. Concretely, the single-threaded model brings the following advantages:

  1. No thread-switching overhead: context switches carry CPU and memory costs; a single thread avoids them entirely, improving performance.
  2. No race conditions: a multi-threaded model needs locks and other synchronization mechanisms to handle races between threads; a single thread avoids that complexity, improving reliability.
  3. Easier to implement: a single-threaded model is simpler to build and maintain because there is no inter-thread synchronization or communication to reason about.
  4. Easier to optimize: with no cross-thread contention, optimizations such as exploiting the CPU cache are more straightforward.

Note that Redis's single-threaded model does not mean it can serve only one request at a time. Through its event loop and asynchronous, non-blocking network model, Redis handles a large number of concurrent connections, achieving high throughput and low latency.

5. What are the application scenarios of Redis?

As a high-performance, high-reliability in-memory database, Redis has been widely used in many application scenarios. Here are some common use cases for Redis:

  1. Caching: as a high-speed cache, Redis keeps hot data in memory, improving the read performance of the system.
  2. Message queue: Redis's publish/subscribe model and List data structure can implement a lightweight message queue for asynchronous communication.
  3. Counters and leaderboards: Redis's atomic operations and the Sorted Set data structure make efficient counters and leaderboards easy to build.
  4. Distributed locks: Redis's atomic operations plus key expiration can implement an efficient distributed lock, guaranteeing mutually exclusive access to shared data across processes or machines.
  5. Session storage: Redis can store user session data in memory, supporting high-concurrency web applications.
  6. Geolocation and spatial indexing: Redis's Geo commands provide efficient geolocation and spatial-index capabilities for location-based (LBS) applications.
  7. Durable storage: Redis can persist data to disk, supporting backup and recovery and improving reliability.

In summary, Redis performs well across many scenarios: caching, message queues, counters and leaderboards, distributed locks, session storage, geolocation and spatial indexing, and durable storage.
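The distributed-lock pattern mentioned above (an atomic set-if-absent plus an expiry, then delete only if you still own the lock) can be sketched as follows. This is a toy model: a plain dict stands in for Redis, and the names acquire_lock/release_lock are invented for illustration. Against a real server the acquire step is SET key token NX EX ttl.

```python
import time
import uuid

_store = {}  # {lock_name: (token, deadline)} -- a dict standing in for Redis

def acquire_lock(name, ttl=10.0):
    """Mimics SET name <token> NX EX ttl: succeeds only if the lock is free or expired."""
    now = time.monotonic()
    held = _store.get(name)
    if held is not None and held[1] > now:
        return None  # lock is held by someone else
    token = uuid.uuid4().hex  # random token identifies the owner
    _store[name] = (token, now + ttl)
    return token

def release_lock(name, token):
    """Delete the lock only if the caller's token still matches, so one client
    cannot free a lock that has since expired and been re-acquired by another."""
    held = _store.get(name)
    if held is not None and held[0] == token:
        del _store[name]
        return True
    return False
```

In real Redis the compare-token-then-delete in release_lock must itself be atomic, which is why it is usually done with a small Lua script.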

6. The difference between Memcached and Redis

  • Redis supports richer data types (and therefore more complex use cases). Besides simple k/v strings, it provides list, set, zset, hash, and other structures; Memcached supports only the plain k/v type.
  • Redis supports data persistence: it can flush in-memory data to disk and reload it on restart. Memcached keeps everything in memory only.
  • Redis therefore has a disaster-recovery story, since cached data can be persisted to disk.
  • When server memory is exhausted, Redis can evict keys according to a configurable eviction policy (its old virtual-memory feature that swapped cold data to disk was removed long ago); Memcached evicts items via LRU or fails writes, depending on configuration.
  • Memcached has no native cluster mode and relies on the client to shard data across nodes; Redis supports clustering natively (Redis Cluster).
  • Memcached uses a multi-threaded, non-blocking IO-multiplexing network model; Redis uses a single-threaded event loop with IO multiplexing (Redis 6.0 added multi-threaded network IO).
  • Redis supports publish/subscribe, Lua scripting, transactions, and other features that Memcached lacks, and has client libraries for more programming languages.
  • Memcached deletes expired data only lazily; Redis combines lazy deletion with periodic deletion.

7. What are the Redis data types?

  • 5 basic data structures: String, List, Set, Hash, Zset (sorted set).
  • 3 special data structures: HyperLogLog (cardinality estimation), Bitmap (bit-level storage), Geospatial (location data).

8. Problems with the keys command

  1. Performance: KEYS traverses every key in the database. On a large database this can take a long time and consume significant resources, degrading server performance.
  2. Blocking: because KEYS scans the whole keyspace on the single main thread, the server is blocked for the duration and cannot serve other requests.
  3. Security: KEYS can return every key in the database, including sensitive ones; misusing it in production risks leaking sensitive data.

In production, the incremental SCAN command is the usual alternative.

9. Similarities and differences between SortedSet and List

Similarities:

  1. Both store multiple elements.
  2. Elements can be accessed by position (index in a List, rank in a Sorted Set).
  3. Both support removing elements at the ends (the head or tail of a List; the lowest or highest score of a Sorted Set).
  4. Both support range queries, i.e. fetching all elements within a given range.
  5. Both support blocking pop operations (BLPOP/BRPOP for Lists, BZPOPMIN/BZPOPMAX for Sorted Sets).
  6. Both expose a rich set of commands for the basic add, delete, update, and query operations.

Differences:

  1. Underlying data structures: a List is a linked-list structure (a quicklist in modern Redis), while a Sorted Set is implemented with a skip list plus a hash table (not a balanced tree), or a compact encoding when small.
  2. Element uniqueness: a List allows duplicate elements; a Sorted Set does not, and each element carries a score.
  3. Ordering: List elements keep insertion order; Sorted Set elements are ordered by their score.
  4. Operation complexity: head/tail operations on a List are O(1) (random access by index is O(N)), while most Sorted Set operations are O(log N), where N is the number of elements.
  5. Typical scenarios: Lists suit first-in-first-out (FIFO) workloads such as message or task queues; Sorted Sets suit ordered, deduplicated data such as leaderboards and rankings.

10. Redis transaction

The basic usage of Redis transactions is as follows:

  1. Start a transaction with MULTI.
  2. Issue the commands of the transaction; they are not executed immediately but queued.
  3. Execute all queued commands in order with EXEC.
  4. Before EXEC, the queued commands can be abandoned with DISCARD.

Here is an example of a simple Redis transaction:

MULTI
SET key1 value1
SET key2 value2
GET key1
EXEC

In this example, MULTI opens a transaction, then three commands are issued: SET key1 value1, SET key2 value2, and GET key1. They are queued rather than executed immediately. Finally, EXEC runs every queued command in order and returns the array of their replies. If the transaction is aborted (for example, because a key watched with WATCH was modified), EXEC returns nil instead.
Redis transactions also support the WATCH command, which provides check-and-set (CAS) semantics and can be used to build optimistic locks.
Note that Redis transactions are not strictly ACID. Atomicity is not guaranteed in the rollback sense: if a queued command fails during EXEC, the commands that already ran are not rolled back. Extra care is therefore needed when relying on Redis transactions.
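The optimistic-lock behavior of WATCH can be modeled with per-key version counters. This is a toy sketch, not the Redis implementation: the class WatchStore and its method names are invented for illustration, and real code would issue WATCH, MULTI, and EXEC against a server.

```python
class WatchStore:
    """Toy model of WATCH/MULTI/EXEC: EXEC aborts if a watched key changed."""
    def __init__(self):
        self._data = {}
        self._versions = {}  # key -> write counter

    def get(self, key):
        return self._data.get(key)

    def set(self, key, value):
        self._data[key] = value
        self._versions[key] = self._versions.get(key, 0) + 1

    def watch(self, key):
        # Remember the key's version at WATCH time.
        return self._versions.get(key, 0)

    def exec_if_unchanged(self, key, watched_version, writes):
        # Like EXEC after WATCH: abort (return None) if the key was modified since.
        if self._versions.get(key, 0) != watched_version:
            return None
        replies = []
        for k, v in writes:
            self.set(k, v)
            replies.append("OK")
        return replies
```

The abort-and-retry loop this enables is exactly how clients implement "decrement a balance only if it has not changed".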

11. Persistence mechanism

RDB mode
Redis can take a point-in-time snapshot of the dataset held in memory. The snapshot file can be backed up, copied to other servers to create replicas with identical data (the basis of Redis master-slave setups, used mainly to scale Redis), or simply kept in place so the server can restore from it on restart.
Snapshotting is Redis's default persistence method and is configured in redis.conf:

save 900 1           # After 900 seconds (15 minutes), trigger bgsave to create a snapshot if at least 1 key changed.
save 300 10          # After 300 seconds (5 minutes), trigger bgsave to create a snapshot if at least 10 keys changed.
save 60 10000        # After 60 seconds (1 minute), trigger bgsave to create a snapshot if at least 10000 keys changed.

Redis provides two commands to generate RDB snapshot files:

  • save: a synchronous save that blocks the Redis main thread;
  • bgsave: forks a child process that writes the snapshot while the main thread keeps serving requests; this is the default.

AOF method
AOF persistence offers better real-time durability than snapshotting, which is why it has become the mainstream choice. AOF (append-only file) persistence is off by default and is enabled with the appendonly parameter:

appendonly yes 

With AOF enabled, every command that changes data in Redis is first appended to an in-memory buffer (server.aof_buf); the appendfsync configuration option then decides when that buffer is synced to the AOF file on disk.
The AOF file lives in the same directory as the RDB file (set by the dir parameter), and the default file name is appendonly.aof.
There are three different AOF persistence methods in the Redis configuration file, they are:

appendfsync always    # sync to the AOF file on every write; safest, but severely slows Redis down
appendfsync everysec  # sync once per second, explicitly flushing buffered write commands to disk
appendfsync no        # let the operating system decide when to sync

To balance durability against write performance, appendfsync everysec is usually the right choice: Redis syncs the AOF file once per second with almost no performance impact, and even after a crash at most about one second of writes is lost. When the disk is busy with writes, Redis also gracefully slows down to match the disk's maximum write speed.

12. Do you understand AOF rewriting?

When AOF becomes too large, Redis can automatically rewrite AOF in the background to generate a new AOF file. This new AOF file is the same as the database state saved by the original AOF file, but smaller in size.
Despite what the name suggests, AOF rewriting does not read, parse, or modify the existing AOF file at all. It works by reading the current key-value pairs in the database and generating a minimal sequence of commands that recreates them.
When executing the BGREWRITEAOF command, the Redis server will maintain an AOF rewrite buffer, which will record all write commands executed by the server during the creation of a new AOF file by the child process. When the child process completes the work of creating a new AOF file, the server will append all the content in the rewrite buffer to the end of the new AOF file, so that the database state saved in the new AOF file is consistent with the existing database state. Finally, the server replaces the old AOF file with the new AOF file to complete the AOF file rewriting operation.
Before Redis version 7.0, if there are write commands during rewrite, AOF may use a lot of memory, and all write commands arriving during rewrite will be written to disk twice.

13. How to choose RDB and AOF?

RDB is better than AOF :

  • An RDB file contains compressed binary data representing the dataset at a point in time. It is small and well suited to backup and disaster recovery. An AOF file records every write command (similar to MySQL's binlog), so it is usually much larger. When the AOF grows too big, Redis can rewrite it in the background into a smaller file describing the same database state; however, before Redis 7.0, write commands arriving during a rewrite could use a lot of memory and were written to disk twice.
  • Restoring from an RDB file only requires parsing the data back into memory, with no command execution, so it is very fast. AOF restoration replays every write command in turn, which is much slower. In short, RDB restores large datasets faster than AOF.

AOF is better than RDB :

  • RDB's data safety is weaker than AOF's: it cannot persist in real time or even at second granularity, and everything written since the last snapshot is lost if the server goes down. Producing an RDB file is also relatively heavy; although the BGSAVE child process does not block the main thread while writing, the fork and copy-on-write pressure still consume the machine's CPU and memory. AOF, by contrast, just appends commands to a file, a lightweight operation, and can limit loss to about one second of writes (depending on the fsync policy; with everysec, at most roughly 1 second of data is lost).
  • RDB files use a specific binary format that has changed across Redis versions, so an older Redis service may not be able to load an RDB file produced by a newer one.
  • The AOF is a log of all write operations in an easy-to-understand, easy-to-parse format. It can be exported for analysis, or even edited directly to fix problems. For example, if FLUSHALL is run by accident, you can delete that last command from the AOF (as long as the file has not been rewritten) and restart to recover the previous state.

14. What are the common deployment methods of Redis

  1. Stand-alone deployment

Stand-alone deployment means running Redis on a single server, serving clients on a specific port. It is simple and suits small-scale scenarios, but it is a single point of failure and cannot meet high-availability requirements.

  2. Master-slave replication deployment

In master-slave replication, data is written on the master node and replicated to the slave nodes. Multiple slaves allow read/write separation, improving read performance, and a slave can serve as a backup of the master to improve availability.

  3. Cluster deployment

Cluster deployment is to distribute Redis instances on multiple physical nodes, and manage and schedule the instances through the cluster manager to achieve high availability and horizontal expansion. Compared with master-slave replication deployment, cluster deployment can provide higher performance and better scalability, but data fragmentation and network communication between nodes need to be considered.

  4. Sentinel deployment

Sentinel deployment is based on master-slave replication by adding a sentinel node to monitor the status of the master node. When the master node goes down, the sentinel node will automatically switch the slave node to the new master node. Sentinel deployment can provide high availability, and has lower management and maintenance costs than cluster deployment, but for large-scale data and high concurrency scenarios, its performance and scalability are not as good as cluster deployment.

15. Master-slave replication

Redis master-slave replication is a data synchronization mechanism, in which one Redis instance acts as the master node (Master), while other Redis instances act as slave nodes (Slave) to replicate the data of the master node.
The master node receives the client's write operations and sends the results of these write operations to the slave nodes through the network, and the slave nodes receive these results and update the local data. In this way, even if the master node goes down, the slave nodes can continue to provide services and maintain data integrity.
To configure master-slave replication, set the replicaof option (formerly slaveof) in the slave node's configuration file to the master's IP address and port. Once the slave successfully connects to the master, it starts receiving the master's data and stores it in local memory.
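A minimal replica-side configuration might look like this; the master address 192.0.2.10:6379 is a placeholder:

```conf
# redis.conf on the replica
replicaof 192.0.2.10 6379    # "slaveof 192.0.2.10 6379" is the pre-5.0 spelling

# optional: if the master requires a password
# masterauth <master-password>
```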
Master-slave replication can be used in various scenarios, such as:

  1. Data backup: The slave node can be used as a backup of the data of the master node to avoid data loss.
  2. Load balancing: The slave nodes can be used to share the load of the master node to improve the performance and stability of the system.
  3. Read-write separation: The master node can focus on processing write operations, while the slave nodes can process read operations to improve the concurrent performance of the system.

It should be noted that master-slave replication is not a high-availability solution, because when the master node goes down, it is necessary to manually switch the slave node to become the master node to maintain service availability. If you need high availability, you can use Redis Sentinel or Redis Cluster.

16. Sentinel

https://pdai.tech/md/db/nosql-redis/db-redis-x-sentinel.html
https://juejin.cn/post/6998564627525140494#heading-13
Functions

  1. Monitoring : Monitor the status of all Redis nodes.
  2. Failover: when Sentinel decides the master is offline, it elects one of the slaves as the new master and repoints all other nodes at it. The old master is demoted to a slave and its configuration updated to point at the new master, so when it comes back online it automatically rejoins as a slave.
  3. Notification : After Sentinel elects a new master node, it can notify the client through the API.

How Sentinel works

Discovering slaves: Sentinel only needs to be configured with the master's address. After connecting to the master, it calls the INFO command, parses the list of slaves from the reply, and then connects to and monitors those slaves as well.

Discovering other Sentinels: Sentinels watching the same master discover each other through Redis's publish/subscribe (pub/sub) mechanism.

Monitoring: after establishing a TCP connection to a Redis node, Sentinel periodically sends PING (every 1s by default) to check whether the node is healthy. If no valid reply arrives within down-after-milliseconds, the node is considered offline.

Subjective offline: when a Sentinel loses contact with a node it is connected to, it marks that node as subjectively down (+sdown); masters, slaves, and other Sentinels can all be marked sdown.

Objective offline: for a master, the Sentinel then confirms with the other Sentinels whether the master is really down; this step is called objective-offline confirmation. If enough Sentinels agree, the master is marked objectively down (+odown). The voting process is:

  1. When a Sentinel finds the master unreachable, it marks the master as sdown.
  2. That Sentinel sends SENTINEL is-master-down-by-addr to the other Sentinels, asking whether they also consider the master offline.
  3. Each Sentinel that receives the request checks the master's status in its local view and replies (1 means offline, 0 means normal).
  4. The Sentinel that initiated the inquiry tallies the "offline" votes.
  5. When the offline votes exceed half the number of Sentinels and reach at least the configured quorum, the master is marked odown and failover preparation begins.
  6. The vote runs against a countdown; if it expires without enough votes, the objective-offline attempt is abandoned and the Sentinel keeps trying to reconnect to the master.

17. Redis Cluster

Redis Cluster is a distributed implementation of Redis that allows you to scale Redis horizontally across multiple nodes, providing high availability and automatic data sharding. Redis Cluster automatically distributes data across multiple nodes, which helps distribute load and increase application throughput.
Redis Cluster shards data using hash slots: each key is mapped to one of 16384 slots, and each node in the cluster owns a subset of the slots. This ensures each node stores only part of the data, making the database easier to scale and manage.
Redis Cluster supports a master-slave replication model where each master node has one or more slave nodes. The master node is responsible for processing read and write operations, while the slave node replicates the data of the master node and acts as a hot backup, which can immediately take over the function of the master node when the master node fails.
Redis Cluster also provides automatic failover, when the master node fails, it will be automatically replaced by one of the slave nodes. This helps ensure high availability and prevents data loss in the event of node failure.

18. What are the hash partition algorithms

  1. Node-modulo partitioning: compute hash(key) mod N, where N is the number of nodes. It is simple, but adding or removing a node changes N and remaps almost every key.
  2. Consistent hashing: nodes and keys are placed on a hash ring, and a key is stored on the first node found clockwise from its position. Adding or removing a node only remaps the keys on the adjacent arc of the ring.
  3. Virtual slots: each key is mapped to one of a fixed number of slots, and each node owns a range of slots. This is what Redis Cluster uses: the slot for a key is CRC16(key) mod 16384, where CRC16 is a fast hash with good distribution and a low collision rate. Resharding moves whole slots between nodes rather than rehashing individual keys.

Different partitioning schemes suit different scenarios. Redis Cluster uses the virtual-slot scheme with CRC16 because it combines even key distribution with cheap, incremental resharding.
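Redis Cluster's actual key-to-slot mapping is HASH_SLOT = CRC16(key) mod 16384, using the CRC16-CCITT (XModem) variant, with hash tags ({...}) letting related keys land in the same slot. A minimal Python sketch:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XModem): polynomial 0x1021, initial value 0.
    This is the CRC16 variant Redis Cluster specifies."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_hash_slot(key: str) -> int:
    """Map a key to one of 16384 slots. If the key contains a non-empty
    {hash tag}, only the tag is hashed, so related keys share a slot."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:  # non-empty tag only
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384
```

Because {user1000}.following and {user1000}.followers hash only the tag user1000, they land in the same slot, which is what makes multi-key operations on them possible in a cluster.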

19. Deletion strategy for expired keys

There are two commonly used strategies for deleting expired data (important, and worth keeping in mind if you ever roll your own cache):

  1. Lazy deletion: a key's expiry is checked only when the key is accessed. This is the most CPU-friendly approach, but it can leave many expired keys undeleted.
  2. Periodic deletion: at regular intervals, a batch of keys is sampled and the expired ones deleted. Redis limits the duration and frequency of each pass to bound the CPU cost of deletion.
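The two strategies can be sketched together in Python. This is a toy model (the class ExpiringStore is invented for illustration), not how Redis implements it internally:

```python
import time

class ExpiringStore:
    """Toy store combining lazy and periodic expired-key deletion."""
    def __init__(self):
        self._data = {}      # key -> value
        self._expires = {}   # key -> absolute deadline (monotonic seconds)

    def set(self, key, value, ttl=None):
        self._data[key] = value
        if ttl is not None:
            self._expires[key] = time.monotonic() + ttl
        else:
            self._expires.pop(key, None)

    def get(self, key):
        # Lazy deletion: expiry is checked only when the key is accessed.
        deadline = self._expires.get(key)
        if deadline is not None and time.monotonic() >= deadline:
            self._data.pop(key, None)
            self._expires.pop(key, None)
            return None
        return self._data.get(key)

    def purge_step(self, sample_size=20):
        # Periodic deletion: scan a bounded sample so one pass stays cheap.
        now = time.monotonic()
        removed = 0
        for key, deadline in list(self._expires.items())[:sample_size]:
            if now >= deadline:
                self._data.pop(key, None)
                del self._expires[key]
                removed += 1
        return removed
```

A background loop would call purge_step() periodically; bounding sample_size is what keeps each pass from blocking for long, mirroring Redis's limit on delete-cycle duration.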

Periodic deletion is friendlier to memory; lazy deletion is friendlier to the CPU. Since each has trade-offs, Redis uses periodic deletion plus lazy deletion. Even so, setting expiration times alone is not enough: many expired keys can slip past both mechanisms, accumulate in memory, and eventually exhaust it.
How to solve this problem? The answer is: Redis memory elimination mechanism.

20. What are the memory elimination strategies?

Redis provides 6 eviction policies:

  1. volatile-lru (least recently used): evict the least recently used key from the set of keys that have an expiration time (server.db[i].expires).
  2. volatile-ttl: from the keys with an expiration time, evict those closest to expiring.
  3. volatile-random: evict a random key from the keys with an expiration time.
  4. allkeys-lru (least recently used): when memory cannot hold newly written data, evict the least recently used key from the whole keyspace (the most commonly used policy).
  5. allkeys-random: evict a random key from the whole keyspace (server.db[i].dict).
  6. no-eviction: never evict; when memory cannot hold newly written data, new writes return an error. This is rarely what you want for a cache.

The following two types are added after version 4.0:

  1. volatile-lfu (least frequently used): evict the least frequently used key from the keys that have an expiration time (server.db[i].expires).
  2. allkeys-lfu (least frequently used): when memory cannot hold newly written data, evict the least frequently used key from the whole keyspace.
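The allkeys-lru idea can be sketched with an exact LRU cache. Note the hedge: Redis actually implements an approximate LRU that samples a few keys per eviction rather than tracking exact recency; the class below is an illustrative toy.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal exact-LRU eviction sketch (Redis's real LRU is approximate)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()  # insertion/move order tracks recency

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)  # mark as most recently used
        return self._data[key]

    def set(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict the least recently used key
```

Redis samples (maxmemory-samples keys, 5 by default) instead of keeping a global order because maintaining exact recency for millions of keys would cost too much memory and CPU.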

21. How to ensure the data consistency between the cache and the database double write

https://juejin.cn/post/6850418121754050567#comment

22. Cache penetration

Cache penetration, simply put, is when a large number of requests use invalid keys that exist neither in the cache nor in the database. These requests bypass the cache layer entirely and hit the database directly, putting enormous pressure on it; enough of them can bring the database down.
Solution
The most basic defense is parameter validation: reject obviously invalid requests up front and return an error to the client. For example, if a database id must be positive, or a submitted email address is malformed, return an error immediately instead of querying.

23. Cache Avalanche

A large portion of the cache becomes invalid or unavailable at the same moment, so a flood of requests lands directly on the database. Like an avalanche, the destructive pressure on the database is enormous, and it may be brought down entirely.
For the case where the Redis service is unavailable:

  1. Use a Redis cluster so that a single machine failing does not take down the whole cache service.
  2. Apply rate limiting to avoid handling too many requests at once.

For hotspot cache invalidation:

  1. Stagger expiration times, for example by adding a random offset to each key's TTL.
  2. Let the cache never expire (not recommended; rarely practical).
  3. Add a second-level cache.

What is the difference between cache avalanche and cache breakdown?

Cache avalanche is similar to cache breakdown, but an avalanche is caused by a large portion (or all) of the cached data becoming invalid at once, while a breakdown is caused by a single piece of hot data missing from the cache (usually because it expired).

24. Cache breakdown

In cache breakdown, the requested key is hot data that exists in the database but not in the cache (usually because the cached copy expired). A burst of requests for it can hit the database simultaneously, putting enormous pressure on it and potentially bringing it down.

What are the solutions?

  • Set hot data to never expire, or give it a very long expiration time.
  • Preheat hot data: load it into the cache in advance and set a reasonable expiration time. For example, data for a flash-sale (seckill) scenario should not expire before the sale ends.
  • Before querying the database to rebuild a cache entry, acquire a mutex lock so that only one request reaches the database, reducing the pressure on it.
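The mutex idea in the last bullet can be sketched with double-checked locking. This is a single-process illustration only: a dict stands in for Redis, `threading.Lock` stands in for a distributed lock, and `load_from_db` is a hypothetical placeholder for the real database query.

```python
import threading

cache = {}
lock = threading.Lock()
calls = {"db": 0}   # counts how many requests actually reached the "DB"

def load_from_db(key):
    return f"value-for-{key}"   # placeholder for the real query

def get(key):
    if key in cache:
        return cache[key]
    with lock:
        # Double-check: another thread may have rebuilt the entry
        # while we were waiting for the lock.
        if key in cache:
            return cache[key]
        calls["db"] += 1
        cache[key] = load_from_db(key)
        return cache[key]
```

Even if many threads request the expired hot key at once, only the first one through the lock rebuilds the entry; the rest find it already cached.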

What is the difference between cache penetration and cache breakdown?

In cache penetration, the requested key exists neither in the cache nor in the database.
In cache breakdown, the requested key corresponds to hot data that exists in the database but not in the cache (usually because the cached copy has expired).

25. How does Redis implement message queue?

(1) the List structure (e.g., LPUSH to produce, BRPOP to consume); (2) Pub/Sub mode; (3) the Stream structure (Redis 5.0+)
https://mp.weixin.qq.com/s/_q0bI62iFrG8h-gZ-bCvNQ
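As a rough illustration of option (1), the List pattern pushes on one end and pops on the other. Here a `collections.deque` stands in for the Redis list; a real consumer would normally use the blocking `BRPOP` rather than polling an empty queue.

```python
from collections import deque

queue = deque()   # stands in for a Redis list

def lpush(msg):
    """Producer side: push onto the left, like LPUSH."""
    queue.appendleft(msg)

def rpop():
    """Consumer side: pop from the right, like RPOP (BRPOP would block)."""
    return queue.pop() if queue else None
```

Pushing left and popping right gives first-in, first-out delivery.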

26. The role of pipeline

Redis Pipeline is a technique for optimizing batched Redis operations. It allows the client to send multiple Redis commands to the server at once, without waiting for the response to each command. This reduces round trips between the client and the server, improving Redis performance.
Specifically, Redis Pipeline works as follows:

  1. The client packs multiple Redis commands into one request.
  2. The client sends the packed request to the Redis server.
  3. When the Redis server receives the request, it does not reply command by command, but queues the commands.
  4. The server executes the commands one by one, in order, buffering each reply.
  5. The client finally receives the results of all the commands in a single response.

Compared with sending each Redis command separately, using Pipeline can greatly reduce communication delay and network bandwidth, thereby improving the throughput of Redis. In some scenarios where Redis needs to be queried in batches, using Pipeline can significantly improve the performance of the application.
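The five steps above can be sketched with a toy client. The `FakePipeline` class and its dict-backed "server" are purely illustrative; they only mimic the queue-then-`execute()` shape that real clients such as redis-py expose.

```python
class FakePipeline:
    """Toy model of a pipeline: buffer commands, flush in one round trip."""

    def __init__(self, store):
        self.store = store        # the dict plays the Redis server
        self.buffer = []          # commands queued, not yet sent

    def set(self, key, value):
        self.buffer.append(("SET", key, value))
        return self               # allow chaining, as real clients do

    def get(self, key):
        self.buffer.append(("GET", key, None))
        return self

    def execute(self):
        # One "round trip": run every buffered command in order,
        # collect all replies, and return them together.
        results = []
        for op, key, value in self.buffer:
            if op == "SET":
                self.store[key] = value
                results.append(True)
            else:
                results.append(self.store.get(key))
        self.buffer.clear()
        return results
```

Ten commands sent this way cost one network exchange instead of ten.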

27. LUA script

  • Reduced network overhead: multiple commands can run inside a single Lua script, in one round trip.
  • Atomicity: Redis executes the whole script as a unit, and no other commands are interleaved in the middle, so there is no need to worry about race conditions while the script runs.
  • Reusability: a script sent by the client is cached by Redis (keyed by its SHA1 digest), so other clients can re-run it with EVALSHA instead of resending the script body.
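A classic example of why the atomicity point matters is the "compare and delete" unlock script for a distributed lock: the GET and DEL must happen as one unit, or one client could delete a lock that another client has since acquired. The Lua string below is the standard form of that script; `run_unlock` only simulates its semantics against a dict so the logic can be checked without a server.

```python
# The real Redis Lua script (would be run with EVAL script 1 key token):
UNLOCK_SCRIPT = """
if redis.call('get', KEYS[1]) == ARGV[1] then
    return redis.call('del', KEYS[1])
else
    return 0
end
"""

def run_unlock(store, key, token):
    """Local simulation of the script's atomic compare-and-delete."""
    if store.get(key) == token:
        del store[key]
        return 1
    return 0
```

Only the client whose unique token still matches the stored value can release the lock; a stale client gets back 0 and deletes nothing.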

28. When to use pipeline and when to use Lua

  • When multiple Redis commands have no dependency or ordering relationship between them, pipeline is recommended.
  • When commands do depend on each other (for example, the second command needs the result of the first), pipeline cannot express this; consider using a Lua script instead.

29. What is RedLock?

With RedLock, multiple clients compete to acquire a distributed lock, and only the client that acquires it may access the shared resource. To avoid a single point of failure in a distributed environment, RedLock stores lock information on multiple independent Redis instances; a client must successfully take the lock on a majority of those instances for the acquisition to count, which gives the distributed lock both reliability and high availability.

As a distributed lock built on Redis, RedLock guarantees the following properties:

  • Mutual exclusion: at any moment, only one client can hold the lock.
  • Deadlock avoidance: even if a network partition occurs or the lock-holding client crashes, the lock is eventually released, because each lock key is given a limited time-to-live.
  • Fault tolerance: as long as a majority of the Redis instances are running normally, clients can still acquire and release locks.

The core idea of the RedLock algorithm is that the lock must not be created on a single Redis instance. Instead, it is created on n independent instances, and the lock as a whole counts as acquired only when it has been taken on a majority of them (at least n / 2 + 1 nodes). This avoids the problems that come from locking on only one Redis instance.
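The quorum rule can be sketched as follows. Each dict stands in for one independent Redis instance, and `key not in inst` stands in for a `SET key token NX` attempt; real RedLock additionally checks the total time spent acquiring against the lock's validity time, which is omitted here.

```python
def try_lock(instances, key, token):
    """Acquire the lock only if a majority (n // 2 + 1) of instances grant it."""
    acquired = 0
    for inst in instances:
        if key not in inst:        # stands in for SET key token NX
            inst[key] = token
            acquired += 1
    quorum = len(instances) // 2 + 1
    if acquired >= quorum:
        return True
    # Failed to reach quorum: release whatever was taken so the
    # partial locks do not block other clients.
    for inst in instances:
        if inst.get(key) == token:
            del inst[key]
    return False
```

With five instances, a client must lock at least three of them; a competitor can then never also reach three at the same time.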

30. IO Multiplexing

Redis adopts an IO multiplexing mechanism so that it can serve a large number of client connections concurrently in its network IO path and achieve high throughput.
Concretely, Redis uses the operating system's IO multiplexing facilities (such as epoll on Linux), which allow one thread to monitor many IO streams.
Put simply, even though Redis processes commands in a single thread, this mechanism lets the kernel track many listening sockets and connected sockets at the same time.
The kernel watches these sockets for connection requests or data requests, and whenever one becomes ready it hands the event to the Redis thread, so one Redis thread effectively serves many IO streams.
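The idea can be sketched with Python's `selectors` module, which wraps epoll/kqueue much as Redis's own event loop does: one thread registers many sockets and is woken only for the ones that actually have data. The uppercase echo below is just a stand-in for command handling.

```python
import selectors

def serve_once(conns):
    """One thread, many sockets: echo one message from each connection."""
    sel = selectors.DefaultSelector()
    for conn in conns:
        conn.setblocking(False)
        sel.register(conn, selectors.EVENT_READ)
    handled = 0
    while handled < len(conns):
        # Block until the kernel reports at least one ready socket.
        for key, _ in sel.select(timeout=1):
            data = key.fileobj.recv(1024)
            key.fileobj.sendall(data.upper())   # the "command handler"
            handled += 1
    sel.close()
    return handled
```

No thread is ever parked on an idle socket; the single thread only does work when the kernel says a socket is ready.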

Origin blog.csdn.net/weixin_56640241/article/details/129838680