Get familiar with these high-frequency Redis interview questions, and you won't need to worry in the interview

1. What are the common usage scenarios of Redis?

  • Caching: This is probably the most important use of Redis and an essential mechanism for large websites. Used properly, a cache not only speeds up data access but also effectively reduces the load on back-end data sources.
  • Session sharing: Some services depend on session state. When moving from a single machine to a cluster, Redis can be chosen to manage sessions centrally.
  • Message queue: Message queues are a fundamental building block of large websites thanks to properties such as business decoupling and peak shaving for non-real-time work. Redis provides publish/subscribe and blocking-queue features; they are not as powerful as a dedicated message queue, but they satisfy basic queueing needs. For example, a distributed crawler system can manage its URL queue centrally in Redis.
  • Distributed lock: In distributed services, the SETNX command of Redis can be used to implement a distributed lock, although this usage is less common.

Of course, features such as leaderboards and likes can also be implemented with Redis, but Redis cannot do everything. For example, it is not suitable when the data volume is particularly large. Redis is memory-based, and although memory is cheap, if you produce a particularly large amount of data every day, say hundreds of millions of user-behavior log records, storing them in Redis would be extremely expensive.
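The distributed-lock idea above can be sketched as follows. `FakeRedis` is a hypothetical in-memory stand-in for a real Redis server, written only to illustrate the check-and-set semantics; against a real server you would issue `SET key value NX EX <ttl>` through a Redis client instead:

```python
import time
import uuid

class FakeRedis:
    """Tiny in-memory stand-in for a Redis server (illustration only)."""
    def __init__(self):
        self.store = {}  # key -> (value, absolute expiry timestamp or None)

    def set(self, key, value, nx=False, ex=None):
        now = time.time()
        entry = self.store.get(key)
        if entry and entry[1] is not None and entry[1] <= now:
            del self.store[key]           # drop an already-expired key
            entry = None
        if nx and entry is not None:      # NX: only set if the key is absent
            return False
        self.store[key] = (value, now + ex if ex else None)
        return True

def acquire_lock(r, name, ttl=10):
    token = str(uuid.uuid4())             # unique token identifies the owner
    ok = r.set("lock:" + name, token, nx=True, ex=ttl)
    return token if ok else None

r = FakeRedis()
token = acquire_lock(r, "job")            # first caller gets the lock
second = acquire_lock(r, "job")           # second caller is refused
```

The expiry (`EX`) matters: without it, a crashed lock holder would block everyone forever.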

2. Why is single-threaded Redis so fast?

How fast is Redis? The official answer is about 100,000 reads/writes per second. That figure alone is not surprising, but Redis achieves it with a single thread. Why is single-threaded Redis so fast? The main reasons are:

  • Pure in-memory operations: Redis works entirely in memory, so reads and writes are extremely efficient. Redis does perform persistence, but persistence forks a child process and relies on the Linux page cache, so it does not affect the performance of the Redis main thread.
  • Single-threaded execution: A single thread is not necessarily a bad thing; it avoids frequent context switching, which would otherwise hurt performance.
  • Simple and efficient data structures.
  • Non-blocking I/O multiplexing: The multiplexing model uses select, poll, or epoll to monitor I/O events on many streams at the same time. While idle, the thread blocks; when one or more streams have an I/O event, it wakes up and processes only the ready streams in order (epoll additionally reports exactly which streams are ready). This approach avoids a great deal of useless work.
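The multiplexing idea can be sketched with Python's standard `selectors` module, which picks epoll, kqueue, or select depending on the platform. This is a toy demonstration of the pattern, not Redis's actual event loop: one thread registers streams, blocks until something is ready, then handles only the ready ones.

```python
import selectors
import socket

sel = selectors.DefaultSelector()       # uses epoll on Linux
a, b = socket.socketpair()              # a connected pair of streams for the demo
sel.register(b, selectors.EVENT_READ)   # watch b for readable events

a.sendall(b"PING")                      # make b readable
events = sel.select(timeout=1)          # block until at least one stream is ready
ready = [key.fileobj for key, _ in events]
data = b.recv(4)                        # process only the ready stream
a.close(); b.close()
```

A single thread can watch thousands of client sockets this way, which is exactly why Redis does not need one thread per connection.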

3. Talk about the Redis data structures and their usage scenarios

Redis provides five data structures, each with its own usage scenarios.

1. String

The string is the most basic Redis data structure: first of all, every key is a string, and the other data structures are built on top of the string type. The most commonly used command is set key value. Strings are used for caching, counters, session sharing, and rate limiting.
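The counting and rate-limiting uses of strings typically follow the INCR + EXPIRE pattern. The sketch below simulates that pattern with a plain dict instead of a live Redis server; the class and names are illustrative:

```python
import time

class Counter:
    """Sketch of the INCR + EXPIRE fixed-window rate-limit pattern."""
    def __init__(self, limit, window):
        self.limit, self.window = limit, window
        self.buckets = {}  # key -> (count, window start time)

    def allow(self, key, now=None):
        now = time.time() if now is None else now
        count, start = self.buckets.get(key, (0, now))
        if now - start >= self.window:    # window expired: reset (EXPIRE role)
            count, start = 0, now
        count += 1                        # INCR role
        self.buckets[key] = (count, start)
        return count <= self.limit

limiter = Counter(limit=2, window=60)
first = limiter.allow("user:1", now=0)    # allowed
second = limiter.allow("user:1", now=1)   # allowed
third = limiter.allow("user:1", now=2)    # over the limit, refused
later = limiter.allow("user:1", now=61)   # new window, allowed again
```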

2. Hash

In Redis, a hash is itself a key-value structure of the form value = {{field1, value1}, ..., {fieldN, valueN}}. The command to add a field is hset key field value. Hashes can be used to store objects such as user profiles, or to implement a shopping cart.
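A shopping cart maps naturally onto one hash per user: HSET cart:&lt;user&gt; &lt;sku&gt; &lt;qty&gt; to put an item in, HINCRBY to adjust quantities. In this illustrative sketch a plain dict stands in for the Redis hash:

```python
def add_to_cart(cart, sku, qty=1):
    # Mirrors HINCRBY cart:<user> <sku> <qty>: create or increment the field.
    cart[sku] = cart.get(sku, 0) + qty
    return cart[sku]

cart = {}                       # one dict plays the role of one Redis hash
add_to_cart(cart, "sku:1001", 2)
add_to_cart(cart, "sku:1001", 1)  # same item again: quantity becomes 3
```

The point of using a hash rather than one string per field is that the whole cart can be read back with a single HGETALL.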

3. List

A list stores multiple ordered strings. It can serve as a simple message queue. In addition, the lrange command can be used to build Redis-based pagination, with excellent performance and a good user experience.
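The pagination trick relies on LRANGE's index semantics: LRANGE key start stop is inclusive on both ends, so page n of size s is LRANGE key n*s (n+1)*s-1. Simulated on a Python list:

```python
def lrange(lst, start, stop):
    # Redis LRANGE includes the stop index, unlike Python slicing.
    return lst[start:stop + 1]

timeline = [f"post:{i}" for i in range(10)]  # newest-first feed, for example
page_size = 3
page0 = lrange(timeline, 0 * page_size, 1 * page_size - 1)
page1 = lrange(timeline, 1 * page_size, 2 * page_size - 1)
```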

4. Set

A set, like a list, stores multiple string elements, but unlike a list it does not allow duplicates, and its elements are unordered, so they cannot be fetched by index. Using intersection, union, and difference operations, sets can compute features such as common interests, combined interests, and the interests unique to one user.
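Redis's SINTER, SUNION, and SDIFF map directly onto Python's set operators, so the "common interests" idea can be shown without a server (the user data here is made up):

```python
likes_a = {"redis", "mysql", "kafka"}   # SMEMBERS likes:userA
likes_b = {"redis", "kafka", "nginx"}   # SMEMBERS likes:userB

common = likes_a & likes_b              # SINTER: shared interests
either = likes_a | likes_b              # SUNION: all interests combined
only_a = likes_a - likes_b              # SDIFF: interests unique to user A
```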

5. Sorted Set

A sorted set adds one extra parameter to the set: a score. The elements of the set are ordered by score. It can be applied to leaderboards and to taking the top N elements.
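The leaderboard pattern is ZADD to record scores and ZREVRANGE 0 n-1 WITHSCORES to read the top N. A sketch of those semantics on a plain dict (real sorted sets use a skip list internally, which is what keeps these operations fast):

```python
def top_n(scores, n):
    # Mirrors ZREVRANGE board 0 n-1 WITHSCORES: highest scores first.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:n]

board = {}                    # member -> score, like one sorted set
board["alice"] = 90           # ZADD board 90 alice
board["bob"] = 75
board["carol"] = 99
top2 = top_n(board, 2)
```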

4. Talk about the Redis data expiration policies

The conclusion first: Redis expires data using a combination of periodic deletion and lazy deletion.

1. What are the periodic deletion and lazy deletion policies?

  • Periodic deletion: Redis runs a timer that monitors keys, checks whether they have expired, and deletes the expired ones. This policy guarantees that every expired key is eventually removed, but it has serious drawbacks: scanning all the keys in memory on every pass wastes CPU, and a key that has already expired but has not yet been visited by the timer can still be read during that window.
  • Lazy deletion: when a key is fetched, Redis first checks whether it has expired and deletes it if so. The drawback of this approach: a key that has expired but is never accessed again simply stays in memory, wasting a lot of space.

2. How do periodic deletion and lazy deletion work together?

The two policies are naturally complementary. When combined, the periodic policy changes slightly: instead of scanning all keys on each pass, Redis randomly samples a portion of the keys for expiry checks, which reduces the CPU cost, while lazy deletion catches the keys that the sampling misses. Together they satisfy most requirements. But what about a key so unlucky that the sampler never draws it and no client ever reads it again; how does it leave memory? That is handled by the memory eviction mechanism: when memory runs low, eviction kicks in. Redis offers the following eviction policies:
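The two deletion policies can be sketched together in one small class. This is an illustrative in-memory model, not the real Redis implementation (Redis's actual sampling loop is adaptive and more sophisticated):

```python
import random
import time

class ExpiringStore:
    """Sketch of lazy deletion plus random-sample periodic deletion."""
    def __init__(self):
        self.data = {}      # key -> value
        self.expires = {}   # key -> absolute expiry timestamp

    def set(self, key, value, ttl=None, now=None):
        now = time.time() if now is None else now
        self.data[key] = value
        if ttl is not None:
            self.expires[key] = now + ttl

    def get(self, key, now=None):
        now = time.time() if now is None else now
        exp = self.expires.get(key)
        if exp is not None and exp <= now:   # lazy deletion: expire on access
            self.data.pop(key, None)
            self.expires.pop(key, None)
            return None
        return self.data.get(key)

    def periodic_sweep(self, sample=20, now=None):
        now = time.time() if now is None else now
        keys = random.sample(list(self.expires), min(sample, len(self.expires)))
        for k in keys:                       # check only a random sample
            if self.expires[k] <= now:
                self.data.pop(k, None)
                del self.expires[k]

store = ExpiringStore()
store.set("session", "abc", ttl=5, now=0)
live = store.get("session", now=3)           # before expiry: "abc"
dead = store.get("session", now=6)           # after expiry: lazily deleted
store.set("tmp", 1, ttl=5, now=0)
store.periodic_sweep(now=10)                 # sweep removes "tmp" without a read
swept = "tmp" not in store.data
```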

  • noeviction: when memory is insufficient to hold newly written data, new writes return an error. (The Redis default policy.)
  • allkeys-lru: when memory is insufficient to hold newly written data, remove the least recently used key from the whole key space. (Recommended.)
  • allkeys-random: when memory is insufficient to hold newly written data, remove a random key from the whole key space.
  • volatile-lru: when memory is insufficient to hold newly written data, remove the least recently used key among the keys that have an expiration time set. This is usually used when Redis serves both as a cache and as persistent storage.
  • volatile-random: when memory is insufficient to hold newly written data, remove a random key among the keys that have an expiration time set.
  • volatile-ttl: when memory is insufficient to hold newly written data, among the keys that have an expiration time set, preferentially remove the keys whose expiration time is earlier.
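The allkeys-lru idea can be sketched with an ordered dictionary. Note that real Redis does not keep an exact LRU list; it approximates LRU by sampling a few keys and evicting the least recently used among them. This toy class shows only the exact-LRU concept:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal exact-LRU sketch: evict the least recently used key."""
    def __init__(self, maxsize):
        self.maxsize = maxsize
        self.data = OrderedDict()           # insertion order = recency order

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)          # mark as most recently used
        return self.data[key]

    def set(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.maxsize:   # over the memory budget
            self.data.popitem(last=False)   # evict the least recently used

c = LRUCache(2)
c.set("a", 1)
c.set("b", 2)
c.get("a")                                  # touch "a", so "b" is now the LRU
c.set("c", 3)                               # evicts "b"
```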

To configure the memory eviction mechanism, you only need to set the maxmemory-policy parameter in the redis.conf configuration file.
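For example, the relevant redis.conf lines might look like this (the 2gb limit is an illustrative value; pick one that suits your machine):

```conf
maxmemory 2gb
maxmemory-policy allkeys-lru
```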

5. How to solve the Redis cache avalanche and cache penetration problems

Cache avalanche: the cache layer carries a large share of the requests and thereby effectively protects the storage layer. But if for some reason the cache layer cannot provide service, for example a Redis node crashes or a batch of hot keys all expire at the same moment, then every request goes straight to the database, which may bring the database down.

To prevent and resolve the cache avalanche problem, we can proceed from the following three aspects:

  • 1. Use a highly available Redis architecture: use Redis Sentinel or Redis Cluster to ensure the Redis service does not go down.
  • 2. Stagger cache expiration times: add a random value to each cache expiration time to avoid mass simultaneous expiry.
  • 3. Rate limiting and service degradation: for example, when the personalized recommendation service is unavailable, fall back to recommending hot items instead.
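Point 2 above, staggering expiration, is a one-liner worth showing. The base TTL and spread below are example values:

```python
import random

def jittered_ttl(base=3600, spread=300):
    # Add a random offset so keys cached at the same time do not all
    # expire at the same moment and stampede the database together.
    return base + random.randint(0, spread)

ttl = jittered_ttl()   # somewhere between 3600 and 3900 seconds
```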

Cache penetration: this refers to querying data that does not exist at all. Such data is certainly not in the cache, so every request falls through to the database, which may also bring the database down.

To prevent and resolve the cache penetration problem, consider the following two approaches:

  • 1. Cache empty objects: cache the null result. The problem with this approach is that a large number of useless null values occupy space, which is very wasteful.
  • 2. Bloom filter interception: map every key that could possibly be queried into a Bloom filter first. On each query, first check whether the key exists in the Bloom filter; continue to the lower layers only if it does, and otherwise return immediately. A Bloom filter has a certain false-positive rate, so your business must tolerate a degree of error.
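A toy Bloom filter makes the trade-off concrete. This sketch uses a Python list of booleans and SHA-256 for the k hash positions; a production setup would use a compact bit array or a Redis-side module, and these sizes are illustrative:

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: k hashed positions in a bit array.
    May report false positives, but never false negatives."""
    def __init__(self, size=1024, hashes=3):
        self.size = size
        self.hashes = hashes
        self.bits = [False] * size

    def _positions(self, item):
        # Derive k positions by salting the item with the hash index.
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        # False here is definitive: the item was never added,
        # so the request can be rejected without touching the database.
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("user:1")
```

In the penetration scenario, all valid keys are loaded into the filter up front, and any key the filter rejects is turned away before it can reach the database.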

Finally

There are already many excellent articles on high-frequency Redis interview questions online; any similarity is coincidental, please forgive me. Writing original content is not easy, so I hope you can support it. If anything in this article is incorrect, corrections are welcome. Thank you.

Welcome to scan the QR code and follow the WeChat official account "Flathead Bro's Tech Blog"; let's learn and make progress together.


Origin: www.cnblogs.com/jamaler/p/12100783.html