Redis study notes (Part 2): RDB and AOF persistence + master-slave replication (chained replication, slave-to-master promotion, one master with multiple slaves, sentinel mode) + Redis cluster

11. Persistence: RDB and AOF

Persistence: save data to hard disk

11.1 RDB(Redis Database)

RDB: writes a snapshot of the in-memory data set to disk at specified time intervals (also called a Snapshot). On recovery, the snapshot file is read directly back into memory.

How the backup is performed:

Fork: to take the snapshot, Redis forks a child process that is a copy of the parent (memory is shared copy-on-write). The child writes the data set to a temporary file, and only when it finishes does the temporary file replace the previous dump file, so the old snapshot stays intact until the new one is complete.

In the /usr/local/bin directory, enter vim /etc/redis.conf and find the SNAPSHOTTING section.

dbfilename dump.rdb sets the name of the generated RDB file to dump.rdb.

stop-writes-on-bgsave-error yes means: when Redis cannot write the snapshot to disk, it rejects further write operations. yes is recommended.

rdbcompression yes compresses string values in the snapshot file (LZF compression); yes is recommended.

rdbchecksum yes means: store a checksum so the file's integrity can be verified; yes is recommended.

save <seconds> <changes>: trigger a snapshot after the given number of write operations within the time window. By default (Redis 6.2), persistence runs if 1 key changed within 60 minutes, 100 keys changed within 5 minutes, or 10000 keys changed within 1 minute.

Operation steps: in the redis.conf configuration file, add save 20 3 below that location and save with :wq!. Then use ps -ef | grep redis to find and kill the process, restart with redis-server /etc/redis.conf, enter ll, and watch the size of the dump.rdb file while a client tool is connected to redis. If snapshots fail, you can try: sysctl vm.overcommit_memory=1.
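For reference, the SNAPSHOTTING settings discussed above as they would appear in /etc/redis.conf (save 20 3 is the line added in this walkthrough; the rest are the recommended values):

save 20 3
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb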

bgsave: Redis performs the snapshot operation asynchronously in the background and keeps responding to client requests while the snapshot runs.

Advantages: the snapshot is a single compact file, well suited for backups and for fast recovery of large data sets; the fork child does the disk I/O, so the main process keeps serving requests.

Disadvantages: any writes after the most recent snapshot are lost if Redis goes down; fork duplicates the process, which can cost significant memory and time on large data sets.

RDB backup

Operation steps: first connect to redis, run flushdb, then exit; enter ll in the bin directory and note the size of dump.rdb at this point (148 bytes). Run sudo vim /etc/redis.conf to open redis.conf and add save 20 3. Then add several pieces of data, from set a a through set f f.

Then cp dump.rdb d.rdb copies dump.rdb to d.rdb. Delete the original with sudo rm -f dump.rdb to simulate data loss, then enter sudo mv d.rdb dump.rdb to rename d.rdb back to dump.rdb; restarting redis restores the data from the snapshot.

11.2 AOF(Append Only File)

Enable AOF: vim /etc/redis.conf, type /appendonly to search for the keyword and set appendonly yes; appendfilename sets the name of the generated AOF file:
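A minimal sketch of the corresponding lines in /etc/redis.conf (note: the appendonlydir layout seen below is the multi-part AOF used by newer Redis versions):

appendonly yes
appendfilename "appendonly.aof"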

Use ps -ef | grep redis to find the process, kill it with kill -9 <pid>, then restart with redis-server /etc/redis.conf. Open a new session in Xshell, cd /usr/local/bin, run ll, and you will see the appendonlydir folder. Entering redis with redis-cli now shows no data, which means that when RDB and AOF are both enabled, AOF is read by default on startup.

Recovery operation: first back up with cp appendonly.aof appendonly.aof.bak. Run shutdown in redis and exit. Then rm -rf appendonly.aof, then mv appendonly.aof.bak appendonly.aof. Restart redis and connect again.

Abnormal recovery: first cd appendonlydir, then vi appendonly.aof.1.incr.aof and append Hello at the end; enter redis, shutdown, and try to start redis from the bin directory again. The server now fails to start because the AOF file is corrupt. The bin directory contains a redis-check-aof tool; running redis-check-aof --fix <filename> reports where the error occurred and repairs the file. Restarting afterwards works without problems.
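For reference, the repair command as it would be run here, using the file name from this walkthrough:

redis-check-aof --fix appendonly.aof.1.incr.aof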

AOF sync frequency setting: appendfsync always syncs on every write (safest, slowest); appendfsync everysec syncs once per second (the default; at most one second of data can be lost); appendfsync no leaves syncing to the operating system.

Rewrite compression:

Rewriting compresses the AOF by expressing the effect of many commands as a single statement, keeping only the final state of each key. The principle is as follows:

Triggering conditions: a rewrite is triggered when the AOF has grown by auto-aof-rewrite-percentage (100% by default, i.e. doubled since the last rewrite) and has reached at least auto-aof-rewrite-min-size (64mb by default).
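The corresponding defaults in redis.conf:

auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb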

Rewrite process: Redis forks a child process that writes a new, minimal AOF from the current in-memory data; the parent buffers writes that arrive during the rewrite, appends them to the new file once the child finishes, and then atomically replaces the old AOF.

Advantages: better durability than RDB (with everysec, at most one second of writes is lost); the AOF is an append-only, human-readable log that is easy to inspect and repair.

Disadvantages: the AOF is usually larger than the equivalent RDB file, and recovery by replaying commands is slower.

Summary:

If some data loss is acceptable, RDB alone can be used. Using AOF alone is not recommended, as bugs may occur; the common practice is to enable both.

12. Master-slave replication

Master-slave replication: after data on the master is updated, it is automatically synchronized to the slaves according to the configured policy (the master/slave mechanism). The master handles writes; the slaves handle reads.

Features: 1. Read/write separation (the master mainly writes, the slaves mainly read; one master, multiple slaves). 2. Disaster recovery with fast failover (if one slave goes down, reads quickly switch to another slave).

What if there is only one master server and it goes down? Configure a cluster: each shard has one master and several slaves, and multiple shards together form the cluster.

12.1 Build one master and multiple slaves

First cd /myredis, then copy the configuration file sudo cp /etc/redis.conf /myredis/redis.conf,

sudo vim redis.conf, search with /appendonly, and change yes to no. Then enter sudo vi redis6379.conf and write:

include /myredis/redis.conf
pidfile /var/run/redis_6379.pid
port 6379
dbfilename dump6379.rdb

sudo cp redis6379.conf redis6380.conf, sudo vi redis6380.conf, change every 6379 to 6380. sudo cp redis6379.conf redis6381.conf, sudo vi redis6381.conf, change every 6379 to 6381.
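After the substitution, redis6380.conf should read:

include /myredis/redis.conf
pidfile /var/run/redis_6380.pid
port 6380
dbfilename dump6380.rdb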

Open 3 sessions, then start the redis servers [note: make sure you are the root user when starting]: redis-server redis6379.conf, redis-server redis6380.conf, redis-server redis6381.conf. Enter ps -ef | grep redis to check that they have all started:

First, open 3 sessions in Xshell, enter: cd /myredis to enter the folder, respectively enter: redis-cli -p 6379, redis-cli -p 6380, redis-cli -p 6381 to enter redis.

Enter info replication in 6379 and see that there are 0 slave servers. Enter slaveof 127.0.0.1 6379 in 6380 and 6381. Enter keys * in 6379; the result is empty. Then enter set a1 v1; now keys * in 6380 and 6381 shows a1 [provided all redis servers run as root].

12.2 Replication principle and one master, two slaves

Replication principle: when a slave connects to the master, the slave sends a data synchronization request. The master receives it, persists its data (an RDB snapshot), and sends the snapshot file to the slave, which loads it into memory (full resynchronization). After that, every write command on the master is forwarded to the slaves (incremental replication).

First enter shutdown in 6380 to simulate a crash, and write data with set a2 k2 in 6379. If you then start redis-cli -p 6380 again and enter slaveof 127.0.0.1 6379 to make it a slave once more, keys * still shows the newly added keys (the slave does a full resync on reconnect).

If the master goes down (enter shutdown in 6379), the slaves remain slaves. When the master restarts, it is still the master, and the slaves still recognize it.

12.3 Chained replication and slave-to-master promotion

Chained replication (薪火相传): a slave can itself have slaves, forming a chain. Enter slaveof 127.0.0.1 6381 in 6380 to make 6380 a slave of the slave 6381.

Slave-to-master promotion (反客为主): when the master goes down, a designated slave can take over. Run slaveof no one on the slave that should take over; once the master is down, it becomes the new master. Note that this promotion is manual.

12.4 Sentinel mode

Sentinel mode is the automatic version of slave-to-master promotion: sentinels monitor the master in the background, and if it fails, they automatically promote a slave to master based on votes.

1. First shut down all redis servers, then start them one by one and connect. Make 6380 and 6381 slaves of 6379.

2. Open another client connection. cd /myredis/, then vi sentinel.conf and write sentinel monitor mymaster 127.0.0.1 6379 1 (mymaster is an alias; the final parameter 1 is the quorum, the number of sentinels that must agree before a failover).

3. Run redis-sentinel sentinel.conf (default port 26379) in the 6381 session, then enter shutdown in 6379 to simulate a crash. The sentinel notices and promotes 6380 to be the new master.

4. Now restart 6379 and run info replication: 6379 has become a slave.

Replication delay: all writes go to the master first and are synchronized to the slaves afterwards, so the slaves lag slightly behind; the lag grows when the master is busy or the number of slaves increases. When a failed master is replaced, the new master is chosen by three criteria:

Replica-priority comes first: the default in redis.conf is 100, and the lower the value, the higher the priority.

Offset comes second: the offset measures how much data a slave has synchronized from the master; the larger the offset, the more complete the data, and the better the candidate.

The runid is the tiebreaker: each redis instance gets a randomly generated runid at startup, and the slave with the smallest runid is chosen.

13. Redis6 cluster

If capacity is not enough, redis needs to scale out, and concurrent write operations need to be spread across multiple redis instances.

Proxy host: the older approach scales by putting a proxy in front of multiple redis instances; the proxy forwards every request, which adds an extra hop and is itself a single point of failure.

Redis 3.0 introduced the decentralized cluster: any server can serve as the entry point to the cluster; nodes connect to each other and forward requests as needed.

What is a cluster: a redis cluster achieves horizontal scaling by starting N redis nodes and distributing the whole data set across them, with each node storing roughly 1/N of the data.

Build a redis cluster

1. cd /myredis. ll. Delete leftover files: rm -rf dump63*, rm -rf redis6380.conf, rm -rf redis6381.conf. vi redis6379.conf, delete the extra content, keep the first 4 lines, and add the following 3 settings:

cluster-enabled yes
cluster-config-file nodes-6379.conf
cluster-node-timeout 15000

Line 1 turns on cluster mode. Line 2 sets the node configuration file name. Line 3 sets the node timeout in milliseconds; if a node is unreachable for longer than this, the cluster automatically performs a master/slave switchover.

2. Copy 6379 to 5 more copies. cp redis6379.conf redis6380.conf. cp redis6379.conf redis6381.conf. cp redis6379.conf redis6389.conf. cp redis6379.conf redis6390.conf. cp redis6379.conf redis6391.conf.

3. vi redis6380.conf, :%s/6379/6380. vi redis6381.conf, :%s/6379/6381. vi redis6389.conf, :%s/6379/6389. vi redis6390.conf, :%s/6379/6390. vi redis6391.conf, :%s/6379/6391.

4. [Note that you must be in the root role] Start redis: redis-server redis6379.conf. redis-server redis6380.conf. redis-server redis6381.conf. redis-server redis6389.conf. redis-server redis6390.conf. redis-server redis6391.conf.

5. Enter ll to see the 6 nodes-xxxx.conf and 6 redis63xx.conf files.

6. First cd /opt/redis-6.2.1/src, then run the following command (using your own host IP):

redis-cli --cluster create --cluster-replicas 1 192.168.182.151:6379 192.168.182.151:6380 192.168.182.151:6381 192.168.182.151:6389 192.168.182.151:6390 192.168.182.151:6391

7. redis-cli -c -p 6379 (-c connects with the cluster strategy, so writes are automatically redirected to the host that owns the key's slot). The cluster nodes command displays cluster information.

Cluster operation and failure recovery

When the --cluster-replicas parameter is 1, the layout is the simplest possible: each master node gets exactly one slave node.

Slots: a redis cluster contains 16384 hash slots. When a key is added, the cluster computes CRC16(key) mod 16384 to decide which slot the key belongs to. Each node in the cluster is responsible for a portion of the slots; the goal is to spread keys as evenly as possible across the nodes.

1. Because the cluster is decentralized, any redis server can serve as the entry point. Entering set k1 v1 returns slot 12706, and the connection is redirected to host 6381. Entering mset name lucy age 20 fails with an error, because multiple keys may map to different slots. With a hash tag such as {user} (a group name), mset name{user} lucy age{user} 20 works: the slot is computed from the tag alone, giving slot 5474. See the Jedis sketch after this list.

2. cluster keyslot <key> returns the slot value for a key. cluster countkeysinslot <slot> returns how many keys that slot holds. Note that it must be run on the host responsible for that slot range.
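As an illustration of the hash-tag behavior, here is a minimal Jedis sketch (not from the original notes; it assumes the cluster from step 6 is running and Jedis is on the classpath):

import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;

public class HashTagDemo {
    public static void main(String[] args) {
        JedisCluster cluster = new JedisCluster(new HostAndPort("192.168.182.151", 6379));
        // only the text inside {} is hashed, so both keys land in the same slot
        cluster.mset("name{user}", "lucy", "age{user}", "20");
        System.out.println(cluster.get("name{user}")); // lucy
        cluster.close();
    }
}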

After the master hangs, the cluster waits cluster-node-timeout (15 seconds here); then its slave becomes the master, and when the old master comes back it rejoins as a slave.

Clustered Jedis development

Create RedisClusterDemo:

public class RedisClusterDemo {
    public static void main(String[] args) {
        // create the cluster connection object
        HostAndPort hostAndPort = new HostAndPort("192.168.182.151", 6379);
        JedisCluster jedisCluster = new JedisCluster(hostAndPort);
        // perform operations
        jedisCluster.set("b1","value1");
        String value = jedisCluster.get("b1");
        System.out.println("value:"+value);
        jedisCluster.close();
    }
}

14. Redis6 application problems and solutions

Cache penetration

Phenomena: 1. Pressure on the application server suddenly increases. 2. The redis hit rate drops. 3. The database is queried constantly.

Causes: 1. The requested data does not exist in redis, so every request falls through to the database. 2. Many abnormal URL accesses (requests for nonexistent resources, malicious attacks).

Solutions:

1. Cache null values: even if a lookup finds nothing in the database, cache the empty result anyway, with a very short expiration time (no more than 5 minutes).

2. Set an accessible list (whitelist): use the bitmaps type to define a whitelist; each access is checked against the id in the bitmap, and anything not on the list is intercepted. Efficiency is relatively low.

3. Use a Bloom filter: essentially a long binary vector (bitmap) plus a series of random mapping functions (hash functions); it can say definitively that an id was never inserted. See the sketch after this list.

4. Perform real-time monitoring: when the redis hit rate drops sharply, inspect the accessing sources and work with operations to blacklist suspicious addresses.
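A minimal sketch of the Bloom-filter check (item 3), using Guava's BloomFilter purely as an illustration; Guava is an assumption here, not something the original notes use:

import com.google.common.hash.BloomFilter;
import com.google.common.hash.Funnels;
import java.nio.charset.StandardCharsets;

public class BloomFilterDemo {
    public static void main(String[] args) {
        // sized for 1,000,000 ids with a 1% false-positive rate
        BloomFilter<String> ids = BloomFilter.create(
                Funnels.stringFunnel(StandardCharsets.UTF_8), 1_000_000, 0.01);
        ids.put("user:1001"); // preload every legitimate id

        // request path: "false" means the id definitely does not exist, so reject it
        System.out.println(ids.mightContain("user:1001")); // true
        System.out.println(ids.mightContain("user:9999")); // false (with high probability)
    }
}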

Cache breakdown

Phenomena: 1. Database load spikes instantaneously. 2. There is no large-scale key expiry in redis. 3. redis itself runs normally, but the database crashes.

Cause of the problem: a single redis key expired at the moment it was being hit by a large number of requests (a hot key).

Solutions:

1. Preload popular data: store hot data in redis in advance and lengthen the TTL of hot keys.

2. Adjust in real time: monitor which data becomes popular and extend its expiration time on the fly.

3. Use locks: on a cache miss, only the thread that wins a mutex rebuilds the cache entry; the other threads wait and retry, so the database sees a single rebuild instead of a stampede. A sketch follows this list.
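A rough sketch of the mutex idea (item 3), reusing the setIfAbsent pattern from the distributed-lock section below; loadFromDatabase and the key names are hypothetical:

import java.util.concurrent.TimeUnit;
import org.springframework.data.redis.core.StringRedisTemplate;

public class MutexCache {
    private final StringRedisTemplate redisTemplate;

    public MutexCache(StringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    public String get(String key) throws InterruptedException {
        String value = redisTemplate.opsForValue().get(key);
        if (value != null) {
            return value; // cache hit
        }
        // only one thread at a time may rebuild this hot key
        Boolean locked = redisTemplate.opsForValue()
                .setIfAbsent("mutex:" + key, "1", 10, TimeUnit.SECONDS);
        if (Boolean.TRUE.equals(locked)) {
            try {
                value = loadFromDatabase(key); // hypothetical database loader
                redisTemplate.opsForValue().set(key, value, 600, TimeUnit.SECONDS);
            } finally {
                redisTemplate.delete("mutex:" + key); // release the mutex
            }
            return value;
        }
        Thread.sleep(50); // someone else is rebuilding; wait and retry
        return get(key);
    }

    private String loadFromDatabase(String key) {
        return "value-for-" + key; // stand-in for a real database query
    }
}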

Cache avalanche

Phenomenon: database pressure surges and the server crashes.

Reason: within a very short window, a large number of keys expire together; queries can no longer be served from the cache, and the database collapses under the load.

Solutions:

1. Build a multi-level cache architecture: nginx cache + redis cache + other caches.

2. Use locks or queues: Ensure that there will not be a large number of threads reading and writing to the database at one time.

3. Set an expiration flag to refresh the cache: when the flag expires, notify another thread to rebuild the actual key's cache in the background.

4. Spread out cache expiration times: add a random value to the base expiration time so that keys written together do not expire together. A sketch follows this list.
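A minimal sketch of the randomized TTL (item 4), assuming Spring's StringRedisTemplate as used later in these notes; the base and spread values are illustrative:

import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;
import org.springframework.data.redis.core.StringRedisTemplate;

public class SpreadTtlWriter {
    private final StringRedisTemplate redisTemplate;

    public SpreadTtlWriter(StringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    public void put(String key, String value) {
        // 10-minute base TTL plus a random 0-5 minute spread
        long ttl = 600 + ThreadLocalRandom.current().nextInt(300);
        redisTemplate.opsForValue().set(key, value, ttl, TimeUnit.SECONDS);
    }
}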

Distributed lock

As a business grows, a single-node deployment evolves into a distributed cluster. The concurrency-control locking that worked within a single JVM no longer applies; a cross-JVM mutual exclusion mechanism is needed to control access to shared resources.

Mainstream distributed-lock solutions: 1. Database-based. 2. Cache-based with Redis (highest performance). 3. Zookeeper-based (most reliable).

Set lock and expiration time

EX seconds: sets the key's expiration time in seconds. SET key value EX seconds has the same effect as SETEX key seconds value.

1. Enter setnx users value in 6379 to take the lock; del users releases it.

2. Enter setnx users 10 in 6379, then expire users 10 to set a 10-second expiry, and ttl users to check the remaining time.

3. Or do both atomically: set users 10 nx ex 12 (nx takes the lock only if the key does not exist, ex sets the expiration; 10 is the value, 12 is the time in seconds). The same pattern in Java with RedisTemplate:

public void testLock() {
    // try to take the lock: SET lock 111 NX EX 3
    Boolean lock = redisTemplate.opsForValue().setIfAbsent("lock", "111", 3, TimeUnit.SECONDS);
    if (Boolean.TRUE.equals(lock)) {
        Object value = redisTemplate.opsForValue().get("num");
        if (StringUtils.isEmpty(value)) { // nothing to increment, just return
            return;
        }
        int num = Integer.parseInt(value + ""); // parse the stored value as int
        redisTemplate.opsForValue().set("num", ++num); // write num+1 back to redis
        redisTemplate.delete("lock"); // release the lock (DEL)
    } else {
        try {
            Thread.sleep(100); // lock taken; wait briefly and retry
            testLock();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}

ab -n 1000 -c 100 http://192.168.182.151:8080/redisTest/testLock 

Each request takes the lock, increments num, and releases the lock; eventually num reaches 1000.

UUID prevents accidental deletion

Releasing someone else's lock: client a takes the lock first and starts its operation. Suppose the server stalls during the operation for more than 10 seconds, so the lock expires and is released automatically. Client b then grabs the lock and starts its own operation. Before b finishes, a suddenly recovers, completes its operation, and manually releases the lock; what it releases is b's lock.

Use UUID to prevent accidental deletion:

Step 1: use a uuid as the lock's value to distinguish operations: set lock <uuid> nx ex 10.

Step 2: When releasing the lock, first determine whether the current uuid and the uuid of the lock to be released are the same.

Change the code as follows:

public void testLock() {
    String uuid = UUID.randomUUID().toString(); // identifies this client's lock
    Boolean lock = redisTemplate.opsForValue().setIfAbsent("lock", uuid, 3, TimeUnit.SECONDS);
    if (Boolean.TRUE.equals(lock)) {
        Object value = redisTemplate.opsForValue().get("num");
        if (StringUtils.isEmpty(value)) { // nothing to increment, just return
            return;
        }
        int num = Integer.parseInt(value + ""); // parse the stored value as int
        redisTemplate.opsForValue().set("num", ++num); // write num+1 back to redis
        String lockUuid = (String) redisTemplate.opsForValue().get("lock");
        if (uuid.equals(lockUuid)) { // release only if the lock is still ours
            redisTemplate.delete("lock");
        }
    } else {
        try {
            Thread.sleep(100);
            testLock();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}

Lua guarantees delete atomicity

The compare-then-delete release above is not atomic.

First lock, then do the work, finally release the lock; releasing requires comparing the uuid and deleting only on a match. Suppose a has passed the comparison but has not yet executed the delete when the lock reaches its expiration time and is released automatically. b then acquires the lock and starts its own work. Because a still goes on to execute its pending delete, a ends up releasing b's lock. Putting the compare and the delete into a single Lua script makes them one atomic step on the server.
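A common sketch of the compare-and-delete script (the canonical pattern; the original notes do not show the exact script used in the course), run atomically through Spring's DefaultRedisScript:

import java.util.Collections;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.data.redis.core.script.DefaultRedisScript;

public class AtomicUnlock {
    // GET, compare, and DEL execute as one atomic server-side step
    private static final DefaultRedisScript<Long> UNLOCK_SCRIPT = new DefaultRedisScript<>(
            "if redis.call('get', KEYS[1]) == ARGV[1] then " +
            "  return redis.call('del', KEYS[1]) " +
            "else return 0 end", Long.class);

    public static void unlock(StringRedisTemplate redisTemplate, String uuid) {
        // returns 1 if our lock was deleted, 0 if it already belonged to someone else
        redisTemplate.execute(UNLOCK_SCRIPT, Collections.singletonList("lock"), uuid);
    }
}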

15. New features of Redis6

— — — — — — — — — — — — — — — —

To enter as before: cd /usr/local/bin. To re-enter: redis-server /etc/redis.conf, then /usr/bin/redis-cli to enter redis.

Reason for snapshot failure: first sudo su to become root, then ps -ef | grep redis and kill the process, then redis-server /etc/redis.conf to start the service. Make sure redis starts under the root user so that permissions on dump.rdb are sufficient.
