Understanding Redis: common data structures, persistence, and high-concurrency business scenarios

A brief description of Redis data structures
Redis has five common data structures and three special ones (the special ones are not covered here).

                                      
commonly used data structures:
    STRING:
    A string object, implemented with integer encodings and SDS (simple dynamic strings).
    Application scenarios:
        1. As a cache
        2. As a counter
        3. As a shared user session store
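
A minimal redis-py sketch of these three STRING uses (the key names and the local connection are illustrative assumptions):

    import redis
    r = redis.Redis(decode_responses=True)  # assumes a local Redis on the default port

    # cache: store a value with a 60-second expiry
    r.set("page:home", "<html>...</html>", ex=60)
    html = r.get("page:home")

    # counter: atomically increment a view count
    r.incr("article:42:views")

    # shared session: any app server can read the same session entry
    r.set("session:abc123", "user_id=7", ex=1800)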

     HASH: a hash object, implemented with compressed lists (ziplists) and dictionaries (hash tables).
     Application scenarios:
        1. Storing structured records, similar to rows in a relational database, e.g. user-related information.
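
A rough redis-py sketch of storing user information as a hash (field names are made up; the mapping= form needs redis-py 3.5+):

    import redis
    r = redis.Redis(decode_responses=True)  # assumes a local Redis on the default port

    # store user attributes as hash fields instead of one serialized blob
    r.hset("user:1001", mapping={"name": "alice", "age": "30", "city": "Beijing"})
    r.hincrby("user:1001", "age", 1)   # update a single field atomically
    user = r.hgetall("user:1001")      # {'name': 'alice', 'age': '31', 'city': 'Beijing'}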
        
     LIST: a list object, implemented with compressed lists (ziplists) and double-ended linked lists.
     Application scenarios:
         1. As a message queue / blocking queue: push on the left and pop on the right
         2. For data paging, such as a user's article list on a blog
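
A small redis-py sketch of both LIST scenarios; BRPOP blocks until an element arrives, which is what gives the blocking-queue behaviour (key names are illustrative):

    import redis
    r = redis.Redis(decode_responses=True)  # assumes a local Redis on the default port

    # message queue: producer pushes on the left, consumer blocks popping on the right
    r.lpush("queue:tasks", "task-1")
    key, task = r.brpop("queue:tasks", timeout=5)   # blocks for up to 5 seconds

    # paging: newest articles first, fetch page 1 (items 0-9)
    r.lpush("user:7:articles", "article:99")
    page1 = r.lrange("user:7:articles", 0, 9)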

     SET: a set object, implemented with integer sets (intsets) and dictionaries (hash tables).
     Application scenarios:
          1. Tags
          2. Common (mutual) friends
          3. Counting unique (independent) IPs
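
A short redis-py sketch covering tags, common friends, and unique-IP counting (keys and members are illustrative):

    import redis
    r = redis.Redis(decode_responses=True)  # assumes a local Redis on the default port

    # tags: duplicates are ignored automatically
    r.sadd("article:42:tags", "redis", "cache", "nosql")

    # common friends: the intersection of two users' friend sets
    r.sadd("friends:alice", "bob", "carol")
    r.sadd("friends:dave", "carol", "erin")
    common = r.sinter("friends:alice", "friends:dave")   # {'carol'}

    # unique IPs: add every visitor IP, then count the set
    r.sadd("uv:2020-07-02", "1.2.3.4", "5.6.7.8")
    unique_visitors = r.scard("uv:2020-07-02")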

     ZSET: a sorted-set object, implemented with compressed lists (ziplists), dictionaries, and skip lists (jump tables).
     Application scenarios:
         1. Rankings/leaderboards, sorted by some score field
         2. Weighted processing: assign weight values as scores and have workers process items according to those weights
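
A minimal redis-py sketch for a leaderboard and weight-ordered processing (scores and members are made up):

    import redis
    r = redis.Redis(decode_responses=True)  # assumes a local Redis on the default port

    # ranking: score = points, fetch the top 3 with their scores
    r.zadd("leaderboard", {"alice": 320, "bob": 150, "carol": 410})
    top3 = r.zrevrange("leaderboard", 0, 2, withscores=True)

    # weighted tasks: score = weight, take the highest-weight item first
    r.zadd("tasks:weighted", {"task:low": 1, "task:high": 10})
    next_task = r.zrevrange("tasks:weighted", 0, 0)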


A brief description of Redis's persistence mechanisms: RDB and AOF

Both are persistence mechanisms of Redis.

RDB is snapshot-level persistence: the dataset is written to a binary file via the save and bgsave commands. When save is executed, the Redis server process is blocked; until the RDB file has been generated, the server cannot handle other command requests. When bgsave is executed, the Redis server forks a child process to generate the RDB file, which effectively runs in the background, so the server stays non-blocking and can keep serving other commands. Note that Redis does not allow save or bgsave to run while a bgsave child process is still generating the file, in order to prevent the parent and child processes from competing with each other.
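Both snapshot commands can also be triggered from a client. A hedged redis-py sketch, assuming a local Redis on the default port:

    import redis
    r = redis.Redis(decode_responses=True)  # assumes a local Redis on the default port

    r.save()              # SAVE: the server blocks until the RDB file is written
    r.bgsave()            # BGSAVE: a forked child writes the RDB file in the background
    print(r.lastsave())   # timestamp of the last successful snapshot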


AOF is log-level persistence. It appends every command that modifies the database to the end of the AOF file; after a crash, the commands in the AOF file are loaded and re-executed to restore the database state. It offers three write (fsync) policies: write after every command, write once per second, and leave the flush interval to the operating system. The default AOF policy is to write once per second.
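The three policies correspond to the appendfsync setting. A sketch that turns AOF on at runtime with redis-py (CONFIG SET does not rewrite redis.conf unless CONFIG REWRITE is issued):

    import redis
    r = redis.Redis(decode_responses=True)  # assumes a local Redis on the default port

    r.config_set("appendonly", "yes")         # enable AOF
    r.config_set("appendfsync", "everysec")   # options: always / everysec / no
    print(r.config_get("appendfsync"))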

In the Redis configuration file, RDB is enabled by default and AOF is disabled by default. The default save rules in the Redis configuration file take three forms:

                save 900 1      ------ within 900 s, the database has been modified at least once
                save 300 10     ------ within 300 s, the database has been modified at least 10 times
                save 60 10000   ------ within 60 s, the database has been modified at least 10,000 times

 


Breakdown, penetration, and avalanche: scenarios Redis may face under high concurrency, and their solutions

Breakdown:

Suppose a hot key (for example a trending Weibo topic) receives 10,000 requests per second; access is extremely frequent and the system is under high concurrency. If that key suddenly expires, from that moment the 10,000 requests per second hit the database directly. The database obviously cannot withstand such high-frequency access and will be overwhelmed.

Solution:

Set the hotspot data to never expire, or use a mutex lock: acquire the lock, let the first request load the data and write it into the cache, then release the lock, so that subsequent highly concurrent requests for the same data read it directly from the cache.
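
A rough mutex-lock sketch with redis-py, using SET NX EX so only one request rebuilds the cache while the others briefly wait and retry (load_from_db and the key names are hypothetical):

    import time
    import redis

    r = redis.Redis(decode_responses=True)  # assumes a local Redis on the default port

    def get_hot_data(key, load_from_db):
        while True:
            value = r.get(key)
            if value is not None:
                return value
            # only the request that grabs the lock rebuilds the cache
            if r.set("lock:" + key, "1", nx=True, ex=10):
                try:
                    value = load_from_db(key)    # hypothetical DB loader
                    r.set(key, value, ex=300)
                    return value
                finally:
                    r.delete("lock:" + key)
            time.sleep(0.05)                     # others wait briefly, then re-check the cache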

Penetration:

Suppose there are 10,000 requests per second, of which 8,000 are sent by an attacker using ids that exist in neither the cache nor the database (imagine negative ids). In this case the database will also be overwhelmed.

Solution:

Whenever a key is requested for the first time and found in neither the cache nor the database, write a null value for that key into the cache. Then even the following 7,999 requests can return null directly from the cache instead of querying the database every time.
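
A hedged redis-py sketch of caching the miss; an empty string stands in for null here, with a short expiry so a value that appears later can still be picked up (find_in_db is hypothetical):

    import redis

    r = redis.Redis(decode_responses=True)  # assumes a local Redis on the default port

    def get_by_id(key, find_in_db):
        cached = r.get(key)
        if cached is not None:
            return cached or None          # "" means "known to be missing"
        value = find_in_db(key)            # hypothetical DB lookup
        if value is None:
            r.set(key, "", ex=60)          # cache the miss so repeat requests skip the DB
            return None
        r.set(key, value, ex=300)
        return value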

Avalanche:

Suppose there are 10,000 requests per second and the cache machine can handle up to 8,000 requests per second; normally there is no problem. But if the cache machine suddenly goes down, those 8,000 requests per second fall directly onto the database, which inevitably cannot cope. In theory it raises an alarm first and then goes down; in practice it is more likely to simply crash.

Solution:

Beforehand: make Redis highly available (read-write separation, master-slave replication + Sentinel mode, or Redis Cluster) to avoid a total crash.
During the event: local ehcache caching + Hystrix rate limiting & degradation, to prevent MySQL from being overwhelmed.
Afterwards: Redis persistence; once Redis restarts, data is automatically loaded from disk and the cache is quickly rebuilt.

Redis transactions
Open a transaction: multi
Execute the commands queued in the transaction: exec
Abandon the transaction (clear the command queue): discard
Watch a key (optimistic lock): watch key
Cancel watching: unwatch
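
A minimal redis-py sketch of the multi/exec/watch flow; the client wraps it in a pipeline, and a WatchError means the watched key changed so the queued commands were discarded (the key name is illustrative):

    import redis

    r = redis.Redis(decode_responses=True)  # assumes a local Redis on the default port

    with r.pipeline() as pipe:
        try:
            pipe.watch("balance:7")              # WATCH: optimistic lock on the key
            balance = int(pipe.get("balance:7") or 0)
            pipe.multi()                         # MULTI: start queuing commands
            pipe.set("balance:7", balance + 100)
            pipe.execute()                       # EXEC: run the queued commands atomically
        except redis.WatchError:
            pass                                 # the watched key changed; the transaction was discarded, retry if needed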


Source: blog.csdn.net/weixin_43562937/article/details/107086702