The three Redis modes: master-slave replication, sentinel mode, and cluster

Table of contents

1. Master-slave replication
  1.1 The concept of master-slave replication
  1.2 Functions of Redis master-slave replication
    1.2.1 Data redundancy
    1.2.2 Fault recovery
    1.2.3 Load balancing
    1.2.4 Cornerstone of High Availability
  1.3 Redis master-slave replication process
  1.4 Deploy Redis master-slave replication
    1.4.1 Environment deployment
    1.4.2 Disable the firewall on all servers
    1.4.3 Install Redis on all servers
    1.4.4 Modify the Redis configuration file on the Master node
  1.6 Verify master-slave replication (192.168.40.17)
    1.6.1 Verify the slave nodes on the Master node

2. Redis sentinel mode
  2.1 The principle of sentinel mode
  2.2 The role of sentinel mode
  2.3 Structure of Sentinel Mode
  2.4 Failover mechanism
    2.4.1 The sentinel nodes regularly monitor whether the master node is faulty
    2.4.2 When the primary node fails
    2.4.3 Failover is performed by the leader sentinel node
  2.5 Election of master node
  2.7 Environment preparation
  2.8 Modify the Redis configuration file (all node operations)
  2.9 Start sentinel mode and view information
  2.10 Fault simulation

3. Redis cluster mode
  3.1 The concept of redis cluster
  3.2 The role of the cluster
    3.2.1 Data partition
    3.2.2 High availability
  3.3 Data sharding in cluster mode
  3.4 Master-slave replication model in cluster mode
  3.5 Redis cluster deployment
    3.5.1 Environment preparation
  3.6 Preparing for operation
  3.7 Enable the cluster function
  3.8 Start the Redis nodes
  3.9 Start the cluster
  3.10 Test the cluster

4. Summary

1. Master-slave replication

 1.1 The concept of master-slave replication

        Master-slave replication refers to copying the data of one Redis server to other Redis servers. The former is called the master node (Master) and the latter the slave node (Slave); data replication is one-way, flowing only from the master node to the slave nodes.

        By default, each Redis server is a master node; and a master node can have multiple slave nodes (or no slave nodes), but a slave node can only have one master node.

 

  1.2 Functions of Redis master-slave replication

 1.2.1 Data redundancy

  • Master-slave replication provides a hot backup of the data, a form of data redundancy in addition to persistence.

 1.2.2 Fault recovery

  • When the master node has a problem, a slave node can take over serving requests, achieving rapid failure recovery; this is effectively a form of service redundancy.

  1.2.3 Load balancing

  • Building on master-slave replication and combined with read-write separation, the master node provides write services and the slave nodes provide read services (that is, applications connect to the master node when writing Redis data and to a slave node when reading), sharing the server load. Especially in write-rarely, read-often scenarios, spreading the read load across multiple slave nodes can greatly increase the concurrency a Redis deployment can handle.
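Read-write separation is usually implemented in the client or in a proxy layer. A minimal Python sketch of the routing idea described above (the class, command set, and addresses are illustrative, not part of any Redis client library; the addresses come from the environment table later in this guide):

```python
import random

class ReadWriteRouter:
    """Toy router for read-write separation: write commands go to the
    master connection, reads are spread across the slave connections."""

    def __init__(self, master, slaves):
        self.master = master          # address of the Master node
        self.slaves = slaves          # addresses of the Slave nodes

    def connection_for(self, command):
        # Write commands must go to the master; reads can use any slave.
        write_commands = {"SET", "DEL", "LPUSH", "HSET", "EXPIRE"}
        if command.upper() in write_commands or not self.slaves:
            return self.master
        return random.choice(self.slaves)

router = ReadWriteRouter(master="192.168.40.172:6379",
                         slaves=["192.168.40.170:6379", "192.168.40.17:6379"])
print(router.connection_for("SET"))   # always the master
print(router.connection_for("GET"))   # one of the slaves
```

A real deployment would keep actual connections instead of address strings and refresh the slave list on failover, but the routing decision is the same.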

 1.2.4 Cornerstone of High Availability

  • In addition to the functions above, master-slave replication is the foundation on which sentinel mode and clusters are built, so it is the basis for Redis high availability.

 1.3 Redis master-slave replication process

  • When a Slave starts, it sends a sync command (SYNC/PSYNC) to the Master to request a synchronization connection.
  • Whether it is a first connection or a reconnection, the Master starts a background process to save a snapshot of its data to a file on disk (an RDB dump), and at the same time records and buffers every command that modifies data.
  • When the background save completes, the Master sends the data file to the Slave; the Slave saves it to disk and then loads it into memory, after which the Master streams all buffered data-modifying commands to the Slave. If the Slave goes down due to a failure, it automatically reconnects once it recovers.
  • When the Master accepts a Slave's connection, it sends its complete data file to that Slave. If the Master receives synchronization requests from multiple Slaves at the same time, it performs a single background save and then sends the resulting file to all of the Slaves, ensuring every Slave ends up consistent.
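The full-sync steps above can be modeled in a few lines of Python. This is a deliberately simplified sketch (the classes are illustrative, not a Redis protocol implementation): the "RDB dump" is a dict copy, and the command buffer is replayed after the snapshot is loaded.

```python
class MasterSim:
    """Toy model of a full sync: snapshot (RDB) plus buffered write commands."""
    def __init__(self):
        self.data = {}
        self.repl_buffer = []     # write commands recorded during/after the dump

    def write(self, key, value):
        self.data[key] = value
        self.repl_buffer.append(("SET", key, value))

    def full_sync(self, slave):
        snapshot = dict(self.data)               # background RDB dump
        slave.data = dict(snapshot)              # slave saves and loads the file
        for _, key, value in self.repl_buffer:   # replay buffered write commands
            slave.data[key] = value
        self.repl_buffer.clear()

class SlaveSim:
    def __init__(self):
        self.data = {}

master, slave = MasterSim(), SlaveSim()
master.write("name", "redis")
master.full_sync(slave)                  # slave now holds {"name": "redis"}
master.write("mode", "replication")      # in real Redis this is streamed to the slave
```

Real Redis additionally streams post-sync writes continuously and, since 2.8, supports partial resynchronization (PSYNC) after short disconnects, which this sketch omits.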

 

   1.4 Deploy Redis master-slave replication

1.4.1. Environment deployment

Master node 192.168.40.172   redis-5.0.7.tar.gz
Slave1 node 192.168.40.170   redis-5.0.7.tar.gz
Slave2 node 192.168.40.17    redis-5.0.7.tar.gz

 1.4.2. Disable the firewall on all servers

systemctl stop firewalld
setenforce 0
systemctl disable firewalld

 1.4.3. Install Redis on all servers

 
yum install -y gcc gcc-c++ make
 
tar zxvf redis-5.0.7.tar.gz -C /opt/
 
cd /opt/redis-5.0.7/
make
make PREFIX=/usr/local/redis install
 
cd /opt/redis-5.0.7/utils
./install_server.sh
 
# Press Enter four times; the next prompt must be entered manually
 
Please select the redis executable path [] /usr/local/redis/bin/redis-server    
 
ln -s /usr/local/redis/bin/* /usr/local/bin/

 1.4.4 Modify the Redis configuration file on the Master node

vim /etc/redis/6379.conf
#line 70, change the bind entry; 0.0.0.0 listens on all interfaces
bind 0.0.0.0
#line 137, run as a daemon
daemonize yes
#line 172, log file path
logfile /var/log/redis_6379.log
#line 264, working directory
dir /var/lib/redis/6379
#line 700, enable AOF persistence
appendonly yes
 
/etc/init.d/redis_6379 restart
netstat -natp | grep redis
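The steps above only configure the Master. Each slave needs the same edits plus a replicaof directive pointing at the master (in Redis 5, `replicaof` supersedes the older `slaveof`). A sketch for the two slaves from the environment table above:

```shell
# On Slave1 (192.168.40.170) and Slave2 (192.168.40.17):
vim /etc/redis/6379.conf
# set bind 0.0.0.0, daemonize yes, logfile, dir and appendonly yes as on the Master,
# then add or uncomment the replicaof line:
replicaof 192.168.40.172 6379    # point this node at the Master

/etc/init.d/redis_6379 restart
netstat -natp | grep redis
```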

  1.6 Verify master-slave replication (192.168.40.17)

# First, view the log on the Master node
 
tail -f /var/log/redis_6379.log

   1.6.1 Verify the slave node on the Master node

redis-cli info replication

 

 

2. Redis sentinel mode 

Sentinel's core function: on top of master-slave replication, Sentinel adds automatic failover of the master node.

 2.1 The principle of sentinel mode 

  • Sentinel: a distributed system used to monitor each server in the master-slave structure. When a failure occurs, a new Master is selected through a voting mechanism, and all Slaves are pointed at the new Master. For this reason, a sentinel deployment must run no fewer than 3 sentinel nodes.

 2.2 The role of sentinel mode

  • Monitoring: the sentinels constantly check whether the master node and the slave nodes are functioning properly.
  • Automatic failover: when the master node cannot work normally, Sentinel starts an automatic failover operation: it promotes one of the failed master's slave nodes to be the new master and makes the other slave nodes replicate the new master instead.
  • Notification (alerting): Sentinel can send the result of a failover to clients.

 2.3 Structure of Sentinel Mode

The sentinel structure consists of two parts, the sentinel nodes and the data nodes:

  • Sentinel nodes: the sentinel system consists of one or more sentinel nodes, which are special Redis nodes that do not store data.
  • Data nodes: both master and slave nodes are data nodes.

 Sentinel depends on master-slave replication, so master-slave replication must be set up before sentinel mode is installed, and sentinel mode is deployed on every node. Sentinel monitors whether all Redis working nodes are healthy. When the Master has a problem, the other nodes that have lost contact with it vote; if more than half agree, the master is judged to be faulty, the sentinels are notified, and one of the Slaves is chosen as the new master.

  • Special attention: objective offline is a concept unique to the master node. If a slave node or a sentinel node fails, it is marked subjectively offline by the sentinels, but no objective offline or failover follows.

  2.4 Failover mechanism

 2.4.1 The sentinel nodes regularly monitor whether the master node is faulty

  • Each sentinel node sends a ping command to the master node, the slave nodes, and the other sentinel nodes every second as a heartbeat check. If the master node does not reply within the configured time window, or replies with an error, the sentinel considers the master subjectively (unilaterally) offline. When more than half of the sentinel nodes consider the master subjectively offline, it is marked objectively offline.
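The subjective-to-objective offline decision is just a majority vote, which can be sketched in a few lines of Python (the function is illustrative; real Sentinel uses the quorum value from `sentinel monitor`, as configured later in section 2.8):

```python
def objectively_down(votes, total_sentinels, quorum=None):
    """Sketch of the subjective -> objective offline decision.

    `votes` is how many sentinels currently consider the master
    subjectively down. The master becomes objectively down once the
    configured quorum (by default a strict majority) agrees.
    """
    if quorum is None:
        quorum = total_sentinels // 2 + 1   # strict majority
    return votes >= quorum

# 3 sentinels; 2 of them see the master as subjectively down:
print(objectively_down(votes=2, total_sentinels=3))  # True
print(objectively_down(votes=1, total_sentinels=3))  # False
```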

  2.4.2 When the primary node fails

  • At this point, the sentinel nodes use the Raft algorithm (a leader-election algorithm) to jointly elect one sentinel node as the leader, which is responsible for handling the failover of the master node and the notifications. This is why a sentinel deployment must run no fewer than 3 sentinel nodes.

  2.4.3 The failover is performed by the leader sentinel node; the process is as follows:

  • Upgrade a slave node to a new master node, and let other slave nodes point to the new master node;
  • If the original master node recovers, it becomes a slave node and points to the new master node;
  • Notify the client that the primary node has been replaced.


  2.5 Election of master node

  • Filter out unhealthy (offline) slave nodes and slave nodes that do not respond to the sentinel's ping.
  • Select the slave node with the highest configured priority (replica-priority: the lower the value, the higher the priority; the default is 100, and 0 means the node is never promoted).
  • Select the slave node with the largest replication offset, i.e. the one with the most complete copy of the data.
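The selection rules above can be sketched as a sort over the candidate replicas. This is an illustrative simplification (field names and sample data are hypothetical, and real Sentinel also considers run IDs as a final tie-breaker):

```python
def pick_new_master(replicas):
    """Sketch of the replica-selection rules: drop unhealthy replicas
    and those with replica-priority 0, then prefer the lowest
    replica-priority value, then the largest replication offset."""
    candidates = [r for r in replicas if r["healthy"] and r["priority"] > 0]
    return min(candidates, key=lambda r: (r["priority"], -r["offset"]))

replicas = [
    {"name": "slave1", "healthy": True,  "priority": 100, "offset": 5000},
    {"name": "slave2", "healthy": True,  "priority": 100, "offset": 7200},
    {"name": "slave3", "healthy": False, "priority": 100, "offset": 9000},
]
print(pick_new_master(replicas)["name"])  # slave2: healthy and largest offset
```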

Sentinel depends on master-slave replication, so master-slave replication must be deployed before setting up sentinel mode.

  2.7 Environment preparation

Master: 192.168.40.17
Slave1: 192.168.40.170
Slave2: 192.168.40.172

2.8 Modify the Redis configuration file (all node operations)

vim /opt/redis-5.0.7/sentinel.conf
protected-mode no     # line 17: disable protected mode
port 26379            # line 21: the default Redis sentinel listening port
daemonize yes         # line 26: run sentinel in the background
logfile "/var/log/sentinel.log"     # line 36: log file path
dir "/var/lib/redis/6379"           # line 65: working directory
sentinel monitor mymaster 192.168.40.17 6379 2    # line 84: monitor the master at 192.168.40.17:6379 under the name mymaster; the trailing 2 is the quorum: at least 2 sentinel nodes must agree before the master is judged faulty and a failover is started
sentinel down-after-milliseconds mymaster 30000   # line 113: time after which an unresponsive server is considered down, default 30000 ms (30 s)
sentinel failover-timeout mymaster 180000         # line 146: maximum failover timeout, 180000 ms (180 s)

 2.9 Start sentinel mode and view information

cd /opt/redis-5.0.7/
redis-sentinel sentinel.conf &
# Note: start the master server first, then the slave servers
redis-cli -p 26379 info Sentinel

  2.10 Fault simulation

# View the redis-server process ID on the Master:
ps -elf | grep redis
 
# Kill the redis-server process on the Master node
kill -9 <redis PID>     # the PID of redis-server on the Master node
 
# Watch the sentinel log to verify that the master role fails over to a slave server
tail -f /var/log/sentinel.log
 
# On a Slave, check whether the switchover succeeded
redis-cli -p 26379 INFO Sentinel


  3. Redis cluster mode 

 3.1 The concept of redis cluster

  • Cluster, namely Redis Cluster, is a distributed storage solution introduced since Redis 3.0.
  • The cluster consists of multiple nodes (Nodes) , and Redis data is distributed among these nodes.
  • The nodes in the cluster are divided into master nodes and slave nodes: the master nodes handle read and write requests and maintain cluster information; the slave nodes only replicate the data and state of their master node.

  3.2 The role of the cluster

 3.2.1 Data partition

  • Data partitioning (or data sharding) is the core function of the cluster.
  • The cluster distributes data across multiple nodes. On the one hand, this breaks through the single-machine memory limit of Redis and greatly increases storage capacity; on the other hand, every master node can serve reads and writes, which greatly improves the cluster's responsiveness.
  • The single-machine memory limit of Redis was mentioned when introducing persistence and master-slave replication. For example, if a single machine holds too much data, the fork performed by bgsave and bgrewriteaof may block the main process; in a master-slave environment, a host switch may leave slave nodes unable to serve for a long time; and the master's replication buffer may overflow during the full replication phase.

 3.2.2 High availability

  • The cluster supports master-slave replication and automatic failover of the master node (similar to Sentinel); when any node fails, the cluster can still provide external services.

  3.3 Data sharding in cluster mode

  • Redis cluster introduces the concept of hash slots.
  • A Redis cluster has 16384 hash slots (numbered 0-16383).
  • Each node of the cluster is responsible for a portion of the hash slots.
  • Each key is run through CRC16 and the result is taken modulo 16384 to determine which hash slot it belongs to; the cluster uses this value to find the node responsible for that slot and automatically redirects the operation to that node.

 <- - - Take a cluster composed of 3 nodes as an example - - ->
Node A contains hash slots 0 to 5460
Node B contains hash slots 5461 to 10922
Node C contains hash slots 10923 to 16383
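The slot calculation can be reproduced in a few lines of Python. This is a sketch with no Redis client library: the CRC16 here is the CCITT/XMODEM variant that Redis Cluster uses, and the three node ranges match the example above (it ignores hash tags like `{user}`, which real clusters also support).

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XMODEM): polynomial 0x1021, initial value 0x0000."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def keyslot(key: str) -> int:
    """Hash slot of a key, as in CLUSTER KEYSLOT."""
    return crc16(key.encode()) % 16384

# Slot ranges of the 3-node example above
NODE_RANGES = {"A": range(0, 5461), "B": range(5461, 10923), "C": range(10923, 16384)}

slot = keyslot("test")
node = next(name for name, slots in NODE_RANGES.items() if slot in slots)
print(slot, node)   # the slot number and the node responsible for it
```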

 3.4 Master-slave replication model in cluster mode

  • Suppose the cluster has three nodes A, B, and C. If node B fails, the entire cluster becomes unavailable because the slots in the range 5461-10922 are missing.
  • Add a slave node to each master (A1, B1, and C1), so that the cluster consists of three master nodes and three slave nodes. After node B fails, the cluster elects B1 as the new master and continues serving. Only when both B and B1 fail does the cluster become unavailable.

 3.5 Redis cluster deployment

 3.5.1 Environment preparation

  • A Redis cluster generally requires 6 nodes, 3 masters and 3 slaves. For convenience, all six nodes here are simulated on a single server (192.168.40.16), distinguished by port number:
  • Three master node port numbers: 6001, 6002, 6003
  • Corresponding slave node port numbers: 6004, 6005, 6006 (as deployed below)

3.6 Preparing for operation

cd /etc/redis
mkdir -p redis-cluster/redis600{1..6}
 
for i in {1..6}
do
cp /opt/redis-5.0.7/redis.conf /etc/redis/redis-cluster/redis600$i
cp /opt/redis-5.0.7/src/redis-cli /opt/redis-5.0.7/src/redis-server /etc/redis/redis-cluster/redis600$i
done

  3.7 Enable the cluster function

  • The configuration files in the other 5 folders are modified in the same way; the 6 ports must all be different.
cd /etc/redis/redis-cluster/redis6001
 
vim redis.conf
#bind 127.0.0.1                        # line 69: comment out the bind entry to listen on all interfaces
protected-mode no                      # line 88: disable protected mode
port 6001                              # line 92: the port this redis node listens on
daemonize yes                          # line 136: run as a daemon (independent process)
cluster-enabled yes                    # line 832: uncomment to enable the cluster function
cluster-config-file nodes-6001.conf    # line 840: uncomment; the cluster config file for this node
cluster-node-timeout 15000             # line 846: uncomment; cluster node timeout
appendonly yes                         # line 700: enable AOF persistence

  3.8 Start the Redis nodes

# Enter each of the six directories and run redis-server redis.conf to start the redis nodes:
 
for i in {1..6}
do
cd /etc/redis/redis-cluster/redis600$i
redis-server redis.conf
done
 
ps -ef | grep redis

 3.9 Start the cluster

redis-cli --cluster create 127.0.0.1:6001 127.0.0.1:6002 127.0.0.1:6003 127.0.0.1:6004 127.0.0.1:6005 127.0.0.1:6006 --cluster-replicas 1
 
yes    # confirm the proposed master/slave assignment and slot allocation

  3.10 Test the cluster

redis-cli -p 6001 -c   # the -c flag enables cluster mode so the client follows redirects between nodes
cluster slots          # view each node's hash slot ranges
 
set test lisi
cluster keyslot test   # view the slot number of the key "test"

 

  4. Summary

1. Master-slave replication is suitable for multi-machine backup of data and provides load balancing for read operations and simple fault recovery.


2. Sentinel mode is built on master-slave replication, so deploying it requires deploying master-slave replication first. It adds automatic failure recovery, but write operations still cannot be load-balanced and storage capacity remains limited to a single machine.


3. Redis cluster provides a distributed storage solution that solves both remaining problems: write operations can be load-balanced and storage capacity is no longer limited by a single machine, yielding a relatively complete high-availability solution. A cluster requires a minimum of 6 nodes (three masters and three slaves) to achieve Redis high availability.


Origin blog.csdn.net/m0_57554344/article/details/131980643