[Redis Interview] Summary of Basic Questions (Part 3)

Related articles:
[Redis Interview] Summary of Basic Questions (Part 1)
[Redis Interview] Summary of Basic Questions (Part 2)
For basic knowledge of Redis, see the column: Redis

1. What is the difference between set and zset?

set:
The elements in a set are unordered and non-repeatable, and a set can store up to 2^32 - 1 elements;
in addition to adding, deleting, modifying, and querying elements, sets also support intersection, union, and difference across multiple sets.
zset:
A sorted set keeps the property that its elements cannot repeat;
each element is assigned a score, which is used as the basis for ordering;
a sorted set cannot contain duplicate elements, but different elements may have the same score.
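
For illustration, here is a minimal sketch contrasting the two types, assuming the redis-py client and a local Redis server; the key names "tags" and "leaderboard" are made up for the demo:

```python
import redis

r = redis.Redis(decode_responses=True)

r.sadd("tags", "redis", "cache", "redis")     # the duplicate "redis" is ignored
print(r.smembers("tags"))                     # unordered, unique members
print(r.sinter("tags", "tags"))               # sets support SINTER/SUNION/SDIFF

r.zadd("leaderboard", {"alice": 90, "bob": 75, "carol": 90})  # different members may share a score
print(r.zrange("leaderboard", 0, -1, withscores=True))        # ordered by score
```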

2. Talk about the watch command in Redis

Many times, we need to ensure that the data involved in a transaction has not been modified by other clients before the transaction executes. Redis provides the watch command to solve this kind of problem; it is an optimistic-locking mechanism. Through the watch command, the client asks the server to monitor one or more keys. If any of these keys changes before the client executes the transaction, the server refuses to execute the submitted transaction and returns a null reply to the client.
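
A minimal sketch of this optimistic-locking pattern, assuming the redis-py client; the key name "balance" and the retry loop are illustrative only:

```python
import redis

r = redis.Redis(decode_responses=True)
r.set("balance", 100)

with r.pipeline() as pipe:
    while True:
        try:
            pipe.watch("balance")              # ask the server to monitor the key
            current = int(pipe.get("balance"))
            pipe.multi()                       # start queuing the transaction
            pipe.set("balance", current - 10)
            pipe.execute()                     # rejected if "balance" changed meanwhile
            break
        except redis.WatchError:
            continue                           # another client touched the key: retry
```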

3. Talk about the related operations of the List structure in Redis

A list is a linear, ordered data structure; its elements can repeat, and a list can store up to 2^32 - 1 elements. The list type includes the following frequently used commands:

lpush/rpush : push elements onto the left/right end of the list;

lrange : return the elements within a specified index range;

lindex : return the element at a specified index;

lpop/rpop : pop an element from the left/right end of the list;

blpop/brpop : pop an element from the left/right end of the list, blocking if the list is empty.
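
A quick sketch of the commands above, assuming the redis-py client; the key name "queue" is chosen just for the demo:

```python
import redis

r = redis.Redis(decode_responses=True)

r.rpush("queue", "a", "b")           # append on the right
r.lpush("queue", "z")                # prepend on the left -> ["z", "a", "b"]
print(r.lrange("queue", 0, -1))      # elements in an index range
print(r.lindex("queue", 1))          # element at index 1 -> "a"
print(r.lpop("queue"))               # pop from the left -> "z"
print(r.brpop("queue", timeout=1))   # blocking pop; returns None after 1 s if empty
```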

4. How do you design the expiration time of Redis?

Do not set an expiration time on hot data, so that it "physically" never expires; this avoids cache breakdown.
When setting expiration times, add a random offset so that a large number of keys do not expire at the same moment and cause a cache avalanche.
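
A sketch of adding a random offset (jitter) to the TTL, assuming redis-py; the base TTL, jitter range, and key name are arbitrary choices for the example:

```python
import random
import redis

r = redis.Redis(decode_responses=True)

base_ttl = 3600                                   # 1 hour base expiration
jitter = random.randint(0, 300)                   # up to 5 extra minutes
r.set("product:1001", "cached-value", ex=base_ttl + jitter)
print(r.ttl("product:1001"))                      # remaining lifetime in seconds
```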

5. Redis common commands

Generic commands are commands that can be used with any data type. Common ones include:

KEYS : return the keys matching a given pattern. Not recommended in production (Redis is single-threaded and processes one command at a time; matching against all keys in a large dataset is expensive and blocks the server).

DEL : delete a specified key.

EXISTS : check whether a key exists.

EXPIRE : set an expiration time on a key; the key is deleted automatically once it expires.

TTL : return the remaining time to live of a key.
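
A short redis-py sketch of these commands (the key name is made up); SCAN is shown as the non-blocking way to enumerate matching keys in production:

```python
import redis

r = redis.Redis(decode_responses=True)

r.set("session:1", "data")
print(r.exists("session:1"))              # EXISTS -> 1
r.expire("session:1", 60)                 # EXPIRE: delete automatically after 60 s
print(r.ttl("session:1"))                 # TTL: remaining seconds
for key in r.scan_iter("session:*"):      # iterate matches without blocking the server
    print(key)
r.delete("session:1")                     # DEL
```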

6. When to use redis

Caching hot data;
storing data with a specific time limit;
distributed locks;
sorted lists (leaderboards) of hot data ranked by weight/score.

7. The difference between redis and memcache

Redis uses a single-threaded model to process network requests, while Memcache uses multi-threaded asynchronous IO.
Redis supports data persistence; Memcache does not.
Redis supports more data types than Memcache.

8. Talk about the advantages and disadvantages of redis

Advantages:
Based on memory operations, the memory read and write speed is fast.
Supports multiple data types, including String, Hash, List, Set, ZSet, etc.
Persistence is supported. Redis supports two persistence mechanisms, RDB and AOF, and the persistence function can effectively avoid data loss.
Supports transactions. Individual Redis operations are atomic, and Redis also supports combining several operations and executing them together.
Support master-slave replication. The master node will automatically synchronize the data to the slave node, which can separate read and write.
Redis command processing is single-threaded. Redis 6.0 introduced multi-threading, but note that the extra threads handle network reading/writing and protocol parsing; command execution is still single-threaded.

Disadvantages:
Poor support for structured queries.
The database capacity is limited by physical memory, so it is not suitable for high-performance reading and writing of massive data. Therefore, the suitable scenarios for Redis are mainly limited to operations with small data volumes.
It is difficult for Redis to support online expansion, and online expansion will become very complicated when the cluster capacity reaches the upper limit.

9. Tell me about the threading model of Redis?

Redis developed a network event handler based on the Reactor pattern, called the file event handler. It consists of four parts: multiple sockets, an I/O multiplexing program, a file event dispatcher, and event handlers. Because the file event dispatcher consumes its queue in a single thread, Redis is described as a single-threaded model.

The file event handler uses an I/O multiplexing program to listen to multiple sockets at the same time, and associates different event handlers with each socket according to the task the socket is currently performing.
When a monitored socket becomes ready for an operation such as accept, read, write, or close, the corresponding file event is generated, and the file event handler calls the handler previously associated with that socket to process it.
Although the file event handler runs in a single thread, by using an I/O multiplexer to listen to many sockets it achieves a high-performance network communication model, and it connects cleanly with the other modules in the Redis server that also run single-threaded, which keeps the single-threaded design inside Redis simple.
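
As an analogy only (not Redis's actual code), here is a minimal sketch of the Reactor idea using Python's selectors module: one thread multiplexes many sockets and dispatches each ready socket to its associated handler, much like the file event dispatcher.

```python
import selectors
import socket

sel = selectors.DefaultSelector()

def accept_handler(server_sock):
    conn, _ = server_sock.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, read_handler)

def read_handler(conn):
    data = conn.recv(1024)
    if data:
        conn.sendall(data)           # echo back, standing in for command execution
    else:
        sel.unregister(conn)
        conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 6380))     # hypothetical port for the demo
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept_handler)

while True:                          # single-threaded event loop
    for key, _ in sel.select():      # I/O multiplexer: wait on all sockets at once
        key.data(key.fileobj)        # dispatch to the associated handler
```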

10. What are the deployment options for Redis?

Stand-alone: deployed on a single machine; a single-machine Redis can handle roughly tens of thousands of QPS, depending on the hardware. This deployment is rarely used. Problems:

1. The memory capacity is limited. 2. The processing capacity is limited. 3. It cannot be highly available.

Master-slave mode: one master and multiple slaves; the master handles writes and replicates the data to the slave nodes, which serve reads, so all read requests go to the slaves. This makes horizontal scaling of reads easy and supports high read concurrency. However, after the master goes down, a new master has to be designated manually, so availability is low, and this mode alone is rarely used.

Sentinel mode: master-slave replication cannot fail over automatically and therefore cannot provide high availability; sentinel mode solves these problems. The sentinel mechanism switches master and slave automatically: after the master goes down, the sentinel processes elect a new master, which gives high availability. However, every node stores the same data, which wastes memory. It suits deployments where the data volume and cluster size are not very large but automatic failover and disaster recovery are required.
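
A sketch of how a client talks to a sentinel-managed deployment, assuming the redis-py library; the sentinel address and the master name "mymaster" are placeholders for a real deployment:

```python
from redis.sentinel import Sentinel

sentinel = Sentinel([("localhost", 26379)], socket_timeout=0.5)

master = sentinel.master_for("mymaster", socket_timeout=0.5)   # writes go to the current master
replica = sentinel.slave_for("mymaster", socket_timeout=0.5)   # reads can go to a replica

master.set("greeting", "hello")
print(replica.get("greeting"))
```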

Redis Cluster: server-side sharding, officially available since version 3.0. Redis Cluster does not use consistent hashing; instead it uses the concept of slots, 16384 in total. A request can be sent to any node, and the node that receives it forwards the query to the correct node for execution. It is aimed at scenarios with massive data, high concurrency, and high availability requirements; if you have a very large amount of data, Redis Cluster is recommended. The sum of the capacities of all master nodes is the total data capacity the cluster can cache.
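
A sketch of connecting to a cluster, assuming redis-py 4.x (which provides redis.cluster.RedisCluster); the startup node address is a placeholder. The client hashes each key to one of the 16384 slots and routes the command to the node that owns that slot:

```python
from redis.cluster import RedisCluster

rc = RedisCluster(host="localhost", port=7000, decode_responses=True)

rc.set("user:42", "alice")     # routed to the node owning the key's hash slot
print(rc.get("user:42"))
```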

11. How to deal with Redis big key?

Usually, a key whose value is very large, or that has a very large number of members, is called a big key.
The following describes big keys for each data type:
the value is of STRING type and exceeds 5 MB;
the value is a collection type such as ZSET, Hash, List, or Set, and its number of members exceeds 10,000.
The definitions above are not absolute; the threshold mainly depends on the number and size of the value's members and should be set according to the business scenario.
How to deal with it:
When the value is a string, serialization and compression algorithms can keep the key within a reasonable size, but serialization and deserialization add time overhead. Alternatively, split the key: divide one big key into several parts, record the key of each part, and use operations such as MGET to read them together.
When the value is a collection type such as a List or Set, shard it according to the estimated data size and map each element to its shard by calculation (see the sketch below).
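
A sketch of sharding one big Hash into smaller sub-keys, assuming redis-py; the shard count, key prefix, and CRC32-based routing are illustrative choices, not a prescribed scheme:

```python
import zlib
import redis

r = redis.Redis(decode_responses=True)
SHARDS = 16

def shard_key(base, field):
    # Route each field to one of SHARDS sub-keys so no single key grows huge.
    return f"{base}:shard:{zlib.crc32(field.encode()) % SHARDS}"

def hset_sharded(base, field, value):
    r.hset(shard_key(base, field), field, value)

def hget_sharded(base, field):
    return r.hget(shard_key(base, field), field)

hset_sharded("user:profile", "1001", "alice")
print(hget_sharded("user:profile", "1001"))
```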

12. What is RedLock?

The Redis website proposes an authoritative way to implement distributed locks on top of Redis called Redlock, which is safer than the original single-node approach. It guarantees the following properties:
safety: mutual exclusion, i.e. at any moment only one client can hold the lock;
deadlock freedom: it is always possible to eventually acquire the lock, even if the client that originally locked a resource crashes;
fault tolerance: as long as the majority of Redis nodes are alive, the lock service can be provided normally.
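
A rough sketch of the Redlock acquisition step, assuming redis-py and three independent Redis instances (the addresses are placeholders); a real implementation would also release the lock with a Lua script so the check-and-delete is atomic:

```python
import time
import uuid
import redis

nodes = [redis.Redis(port=p) for p in (6379, 6380, 6381)]   # independent masters

def acquire(resource, ttl_ms=10_000):
    token = str(uuid.uuid4())                  # unique value identifying this owner
    start = time.monotonic()
    acquired = 0
    for node in nodes:
        try:
            if node.set(resource, token, nx=True, px=ttl_ms):
                acquired += 1                  # SET NX PX succeeded on this node
        except redis.RedisError:
            pass                               # an unreachable node simply does not count
    elapsed_ms = (time.monotonic() - start) * 1000
    if acquired >= len(nodes) // 2 + 1 and elapsed_ms < ttl_ms:
        return token                           # majority reached within the TTL: lock held
    for node in nodes:                         # otherwise best-effort release of partial locks
        try:
            if node.get(resource) == token.encode():
                node.delete(resource)
        except redis.RedisError:
            pass
    return None
```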

13. What is the role of the pipeline?

A Redis client executes a command in four steps: send the command, queue the command, execute the command, and return the result. With a pipeline, requests are sent in a batch and results are returned in a batch, which is faster than executing commands one by one.
The number of commands assembled into one pipeline should not be too large; otherwise the payload gets too big, increasing the client's waiting time and possibly causing network congestion. A large batch of commands can be split into several smaller pipelines.
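
A short sketch of batching with a redis-py pipeline (the key names are made up; transaction=False sends the commands without wrapping them in MULTI/EXEC):

```python
import redis

r = redis.Redis(decode_responses=True)

with r.pipeline(transaction=False) as pipe:
    for i in range(100):
        pipe.set(f"item:{i}", i)       # queued on the client, nothing sent yet
    pipe.get("item:42")
    results = pipe.execute()           # one round trip; results come back in order

print(results[-1])                     # "42"
```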
Comparison of native batch commands (mset and mget) with pipeline:
native batch commands are atomic, whereas a pipeline is non-atomic; if a pipeline aborts partway through, the commands that already succeeded are not rolled back.
A native batch command performs a single kind of operation, while a pipeline can combine multiple different commands.

Origin blog.csdn.net/qq_54796785/article/details/126478061