Redis master-slave replication configuration and scenario testing

Why use master-slave replication?

Although Redis reads and writes much faster than a traditional relational database, a single instance can still come under heavy read pressure. To avoid that, and the poor user experience it causes, it is worth considering Redis master-slave replication. In a master-slave setup, one Master has one or more Slave nodes. Since reads usually far outnumber writes, we send write operations only to the Master node and hand reads over to the Slave nodes. This spreads the load well and preserves Redis's read and write performance.
In Redis, data replication is one-way: data can only flow from the Master node to a Slave node, and a Slave node can have only one Master node.

The role of master-slave replication:

  • Data redundancy: Master-slave replication provides a hot backup of the data, a form of redundancy in addition to persistence.
  • Crash recovery: When the Master node fails, a Slave node can take over so that data reads (and, after promotion, writes) continue normally and the fault can be repaired quickly.
  • Load balancing: On top of master-slave replication, combined with read-write separation, the Master node handles write operations while the Slave nodes serve reads, sharing the server load. Especially in read-heavy, write-light workloads, spreading reads across multiple Slave nodes greatly increases the concurrency Redis can sustain.
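The read-write separation described above can be sketched in Python. This is a toy model (plain dicts stand in for Redis instances, and all names are my own), not real Redis client code:

```python
import itertools

class ReadWriteRouter:
    """Route writes to the master and spread reads across the replicas."""

    def __init__(self, master, replicas):
        self.master = master        # the only store that accepts writes
        self.replicas = replicas    # read-only stores
        self._rr = itertools.cycle(range(len(replicas)))

    def set(self, key, value):
        # Writes always go to the master; a real replica would answer READONLY.
        self.master[key] = value
        # Simulate replication: copy the write to every replica.
        for r in self.replicas:
            r[key] = value

    def get(self, key):
        # Reads are load-balanced round-robin over the replicas.
        replica = self.replicas[next(self._rr)]
        return replica.get(key)

master = {}
replicas = [{}, {}]
router = ReadWriteRouter(master, replicas)
router.set("name", "Silence")
print(router.get("name"))  # → Silence, served by a replica
```

In a real deployment the routing is usually done by the client library or a proxy, but the decision is the same: writes to the master, reads to the replicas.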

A Master node can have multiple Slave nodes configured under it, and a Slave node can in turn have further Slave nodes configured under it.

Manual configuration to achieve one master and multiple slaves:

Although we could also configure a slave node from the command line, that configuration is lost once the node goes down, so here I use the configuration file instead.

Configuration file setup (using one server with multiple redis.conf files to achieve one master and multiple slaves as the example):
Open redis.conf and find the REPLICATION section. Only the slave nodes' configuration files need editing; the master node requires no changes.

slaveof <masterip> <masterport>    # IP and port of the master node
masterauth <master-password>       # if the master node has a password, set it here
slave-serve-stale-data yes         # during replication, the slave may still answer client requests
slave-read-only yes                # the slave node is read-only, not writable
repl-diskless-sync no              # do not use diskless sync by default
repl-diskless-sync-delay 5         # with diskless sync, wait this many seconds before
                                   # transferring, so more slaves can join the pending transfer (default 5)
repl-timeout 60                    # replication timeout

Before configuring, let's check with a command whether the 6379 Master node has any Slave nodes:

127.0.0.1:6379> info replication
# Replication
role:master               # this node is a Master
connected_slaves:0        # no slave nodes
master_replid:26dda30d25e234451caf414978532d5a1a55b257
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
127.0.0.1:6379> 

To make Redis6380 a Slave node of Redis6379, we set slaveof to Redis6379's IP and port. In addition, four settings (port, pidfile, logfile, and dbfilename) must be changed so they do not conflict with the master node; when configuring multiple redis.conf files on one server for one master and multiple slaves, all four must be modified. If the Master node has a password, masterauth must also be set to Redis6379's password. With those changes in place, Redis6380 becomes a slave node of Redis6379.
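As an illustration, the changed lines in a hypothetical redis6380.conf might look like the following (the file name and paths here are examples of mine, not taken from the original setup):

```conf
# redis6380.conf: hypothetical slave config; file name and paths are examples
port 6380
pidfile /var/run/redis_6380.pid
logfile "redis_6380.log"
dbfilename dump6380.rdb
slaveof 127.0.0.1 6379
# masterauth <password>   # only needed if the master requires a password
```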
After the configuration file is modified, let's check whether the processes are started normally:

Checking the processes shows that the instances on ports 6379, 6380 and 6381 have all started normally.
Then let's look at the information of the 6380 node at this time:

127.0.0.1:6380> info replication
# Replication
role:slave
master_host:127.0.0.1
master_port:6379
master_link_status:up
master_last_io_seconds_ago:9
master_sync_in_progress:0
slave_repl_offset:42
slave_priority:100
slave_read_only:1
connected_slaves:0
master_replid:8aba049a5b9bcec45ad5afa87fa5c7f682189d40
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:42
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:42
127.0.0.1:6380> 

We can see that 6380's role is slave, its Master node's IP is 127.0.0.1 and port is 6379, along with other master information. The 6381 node shows the same output as 6380. What does the Master node's information look like now? Let's take a look.

127.0.0.1:6379> info replication
# Replication
role:master
connected_slaves:2
slave0:ip=127.0.0.1,port=6380,state=online,offset=364,lag=1
slave1:ip=127.0.0.1,port=6381,state=online,offset=364,lag=1
master_replid:8aba049a5b9bcec45ad5afa87fa5c7f682189d40
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:364
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:364
127.0.0.1:6379> 

In the Master node's output, we can see the information of its two Slave nodes.
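If you want to check the topology from a script rather than by eye, the `info replication` text shown above is easy to parse. A minimal sketch (the parser function and sample text below are my own, not part of any Redis client API):

```python
def parse_info_replication(text):
    """Parse the body of `INFO replication` into a dict of plain strings."""
    fields = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and section headers like "# Replication"
        key, _, value = line.partition(":")
        fields[key] = value
    return fields

# Sample taken from the master's output above
sample = """# Replication
role:master
connected_slaves:2
slave0:ip=127.0.0.1,port=6380,state=online,offset=364,lag=1
slave1:ip=127.0.0.1,port=6381,state=online,offset=364,lag=1
master_repl_offset:364"""

info = parse_info_replication(sample)
print(info["role"], info["connected_slaves"])  # → master 2
```

A monitoring script could run this against every node and alert when `role` or `connected_slaves` is not what it expects.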

Scenario tests

Now that the master-slave replication has been configured, we might as well test several scenarios:

  • Scenario 1: Insert data on the Master node and see whether it can be queried on the two Slave nodes.
  • Scenario 2: The 6380 node suddenly goes down while the Master keeps inserting data. Restart 6380 and see whether the data the Master inserted during the downtime can be found.
  • Scenario 3: The Master suddenly goes down; can the two Slave nodes still be queried normally?

Scenario one test result:

#Insert a record on the Master node (6379)
127.0.0.1:6379> set name Silence
OK
127.0.0.1:6379> get name
"Silence"
127.0.0.1:6379> 
#The 6380 Slave node can read the data inserted on the Master
127.0.0.1:6380> get name
"Silence"
127.0.0.1:6380>
#The 6381 Slave node can read the data inserted on the Master
127.0.0.1:6381> get name
"Silence"
127.0.0.1:6381> 

As we can see, the data we inserted on the Master node can be queried on both Slave nodes.

Scenario two test result:
We kill the process of the 6380 Slave node, and then insert data on the Master node.

[root@Silence /]# ps -ef|grep redis
root      6254  6208  0 16:32 pts/2    00:00:02 redis-server 127.0.0.1:6379
root      6280  6261  0 16:33 pts/0    00:00:00 redis-cli
root      6307     1  0 16:37 ?        00:00:01 redis-server 127.0.0.1:6380
root      6315  6283  0 16:37 pts/1    00:00:00 redis-cli -p 6380
root      6343     1  0 16:41 ?        00:00:01 redis-server 127.0.0.1:6381
root      6351  6323  0 16:41 pts/3    00:00:00 redis-cli -p 6381
root      6403  6382  0 17:00 pts/4    00:00:00 grep --color=auto redis
[root@Silence /]# kill 6307
[root@Silence /]# ps -ef|grep redis
root      6254  6208  0 16:32 pts/2    00:00:02 redis-server 127.0.0.1:6379
root      6280  6261  0 16:33 pts/0    00:00:00 redis-cli
root      6315  6283  0 16:37 pts/1    00:00:00 redis-cli -p 6380
root      6343     1  0 16:41 ?        00:00:01 redis-server 127.0.0.1:6381
root      6351  6323  0 16:41 pts/3    00:00:00 redis-cli -p 6381
root      6405  6382  0 17:00 pts/4    00:00:00 grep --color=auto redis
[root@Silence /]# 

The 6380 Slave node's service has been killed.
Now we insert data on the Master node; here is what the Master and the two Slave nodes return when queried:

127.0.0.1:6379> set name Silence
OK
127.0.0.1:6379> get name
"Silence"
127.0.0.1:6379> set name wen
OK
127.0.0.1:6379> get name
"wen"
127.0.0.1:6379> 

#The 6380 Slave node
127.0.0.1:6380> get name
"Silence"
127.0.0.1:6380> get name
Could not connect to Redis at 127.0.0.1:6380: Connection refused
not connected> 

#The 6381 Slave node
127.0.0.1:6381> get name
"Silence"
127.0.0.1:6381> get name
"wen"
127.0.0.1:6381> 

Analyzing the results: both the Master and the 6381 Slave node return the inserted data normally, but 6380 cannot be reached. So now let's restart the 6380 node and query it, to see whether we can get the data the Master inserted during its downtime.

127.0.0.1:6380> get name
"Silence"
127.0.0.1:6380> get name
Could not connect to Redis at 127.0.0.1:6380: Connection refused
not connected> exit
[root@Silence bin]# redis-server redis6380.conf 
[root@Silence bin]# redis-cli -p 6380
127.0.0.1:6380> get name
"wen"
127.0.0.1:6380>

Sure enough: after 6380 comes back, the data the Master inserted during 6380's downtime can still be found. That is the power of master-slave replication!
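The reason 6380 catches up is Redis's partial resynchronization: on reconnecting, the replica reports its replication offset, and if the commands it missed are still in the master's replication backlog (the repl_backlog_size field shown earlier), the master sends only that missing delta; otherwise it falls back to a full resync. A toy model of that decision, measuring offsets in commands instead of bytes for simplicity (real Redis uses byte offsets and the PSYNC protocol):

```python
class ToyMaster:
    """Toy model of the master's replication stream and backlog."""

    def __init__(self, backlog_size=1048576):
        self.stream = []                  # every write command, in order
        self.backlog_size = backlog_size  # how far back a replica may lag

    def write(self, cmd):
        self.stream.append(cmd)

    @property
    def offset(self):
        return len(self.stream)

    def sync(self, replica_offset):
        # If the replica's gap still fits in the backlog, send only the
        # missing commands (partial resync); otherwise a full resync.
        gap = self.offset - replica_offset
        if gap <= self.backlog_size:
            return ("partial", self.stream[replica_offset:])
        return ("full", list(self.stream))

master = ToyMaster()
master.write("set name Silence")
replica_offset = master.offset      # replica in sync, then it goes down
master.write("set name wen")        # written while the replica is offline
kind, delta = master.sync(replica_offset)
print(kind, delta)  # → partial ['set name wen']
```

If the replica had been down long enough for the backlog to wrap past its offset, the master would instead ship a full snapshot, which is why repl-backlog-size is worth tuning on busy masters.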

Scenario three test result:
We kill the Master node.

[root@Silence /]# ps -ef|grep redis
root      6254  6208  0 16:32 pts/2    00:00:03 redis-server 127.0.0.1:6379
root      6280  6261  0 16:33 pts/0    00:00:00 redis-cli
root      6343     1  0 16:41 ?        00:00:02 redis-server 127.0.0.1:6381
root      6351  6323  0 16:41 pts/3    00:00:00 redis-cli -p 6381
root      6422     1  0 17:05 ?        00:00:00 redis-server 127.0.0.1:6380
root      6427  6283  0 17:06 pts/1    00:00:00 redis-cli -p 6380
root      6431  6382  0 17:08 pts/4    00:00:00 grep --color=auto redis
[root@Silence /]# kill 6254
[root@Silence /]# ps -ef|grep redis
root      6280  6261  0 16:33 pts/0    00:00:00 redis-cli
root      6343     1  0 16:41 ?        00:00:02 redis-server 127.0.0.1:6381
root      6351  6323  0 16:41 pts/3    00:00:00 redis-cli -p 6381
root      6422     1  0 17:05 ?        00:00:00 redis-server 127.0.0.1:6380
root      6427  6283  0 17:06 pts/1    00:00:00 redis-cli -p 6380
root      6433  6382  0 17:08 pts/4    00:00:00 grep --color=auto redis
[root@Silence /]# 

At this time, go to the two Slave nodes and query the data.

#The 6380 Slave node can still query the data the Master node inserted earlier
127.0.0.1:6380> get name
"wen"

#The 6381 Slave node can also still query the data the Master node inserted earlier
127.0.0.1:6381> get name
"wen"

The two Slave nodes can still read the data the Master inserted before it went down, but if a user now needs to insert new data, can the Slave nodes accept the write?
The answer is no. If you don't believe it, let's try:

127.0.0.1:6380> set age 23
(error) READONLY You can't write against a read only replica.  
127.0.0.1:6380> 

127.0.0.1:6381> set age 25
(error) READONLY You can't write against a read only replica.  
127.0.0.1:6381> 

The results show that neither Slave node can insert data.

This raises the question: when the Master suddenly goes down, how do we handle write operations? See the next article.

Origin blog.csdn.net/nxw_tsp/article/details/107984857