NoSQL database in practice -- Redis high-availability solution -- Redis + Sentinel

Redis high-availability solution -- Redis + Sentinel

Preface

This article builds a Redis learning environment on a CentOS 7.8 system. For the detailed setup, please refer to Redis-5.0.9 Environment Deployment.

Redis master-slave replication solves the problem of data redundancy, but when the master node in a group of Redis nodes goes down, the whole Redis deployment is effectively paralyzed. This problem has to be addressed with a Redis high-availability solution.
Next, I will focus on one such solution: Redis Sentinel.


Redis high availability solution

  • Master-slave replication (Replication-Sentinel mode)
  • Redis cluster (Redis-Cluster mode)

1. What is a sentinel

What is a sentinel

Sentinel is a distributed system used to monitor every server in a master-slave setup. When the master node fails, a new master is chosen through a voting mechanism, and all slave nodes are switched over to the new master.

Sentinel is the high-availability solution officially provided by Redis; it can be used to monitor the running state of multiple Redis service instances.
Redis Sentinel is a Redis server running in a special mode, and multiple Sentinel processes work together cooperatively.

The role of the sentinel

  • Monitoring: The sentinel will constantly check whether your Master and Slave are working properly.
  • Notification: When there is a problem with a monitored Redis, the sentinel can send notifications to the administrator or other applications through the API.
  • Automatic failover: when a master stops working properly, Sentinel starts an automatic failover: it promotes one of the failed master's slaves to be the new master and makes the failed master's other slaves replicate from the new master instead. When a client asks for the failed master, Sentinel returns the address of the new master, so the deployment keeps serving requests through the new master (see the example after this list).
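For example, a client or an administrator can ask any Sentinel for the address of the current master. This is a minimal illustration, assuming a Sentinel listening on port 26379 and the master name mymaster that is configured later in this article:

# Returns the IP and port of the node that Sentinel currently considers the master
redis-cli -p 26379 sentinel get-master-addr-by-name mymaster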

Sentinel architecture

[Figure: Sentinel architecture]
[Figure: multiple Sentinels monitoring the master]
[Figure: besides monitoring the master, the Sentinels also monitor each other]

2. Sentinel configuration

Sentinel configuration

Sentinel monitors the Redis instances, and its own robustness and high availability come from a leader-election algorithm. Sentinel should therefore be deployed on at least 3 nodes, following the majority principle; odd counts such as 3, 5, or 7 are used, because strictly more than half of the surviving Sentinels (not exactly half) must agree before a leader is elected, and only an elected leader can carry out the master-slave switch.
At least one Redis service must remain alive for Sentinel to keep working. When a new master is chosen, Sentinel prefers the replica that is currently reachable, has the most up-to-date data, the highest priority, and has been active the longest.
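The replica priority mentioned above can be tuned per node; the following is a minimal sketch, assuming the directive is placed in each replica's redis.conf (in Redis 5 it is called replica-priority; lower values are preferred, and 0 excludes the replica from promotion entirely):

# Prefer this replica during failover (lower value = higher promotion priority)
replica-priority 10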

There are several points to note when building the Sentinel system:

  • The master-slave nodes in the sentinel system are no different from ordinary master-slave nodes. Fault discovery and transfer are controlled and completed by the sentinel.
  • The sentinel node is essentially a redis node.
  • For each Sentinel node, you only need to configure the master node to monitor; the other Sentinel nodes and the slave nodes are discovered automatically.
  • During the startup and failover phase of the sentinel node, the configuration file of each node will be rewritten (config rewrite).
  • A single sentinel monitor directive monitors exactly one master node; a Sentinel process can monitor multiple masters by configuring multiple sentinel monitor directives (see the sketch after this list).
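A minimal sketch of the last point, with a second, purely hypothetical master named othermaster added next to the mymaster instance used in this article:

# One sentinel.conf may contain several independent monitor entries
sentinel monitor mymaster 192.168.5.11 6379 2
sentinel monitor othermaster 192.168.5.21 6379 2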

Environmental preparation

Master-slave replication between all nodes has already been configured and enabled.

Role     Node                IP             Redis Version
master   reids-yum           192.168.5.11   Redis-5.0.9
slave1   reids_source_code   192.168.5.12   Redis-5.0.9
slave2   redis-server        192.168.5.13   Redis-5.0.9
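Before introducing Sentinel it is worth confirming that replication itself is healthy, for example (assuming no requirepass is set on the instances):

# On the master this should report role:master and connected_slaves:2
redis-cli -h 192.168.5.11 -p 6379 info replication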

Provide a systemd service script for redis-sentinel on reids_source_code

[root@reids_source_code ~]# vim /usr/lib/systemd/system/redis-sentinel.service
[Unit]
Description=Redis Sentinel
After=network.target

[Service]
ExecStart=/usr/local/redis/bin/redis-sentinel /etc/redis/sentinel.conf --supervised systemd
# systemd does not perform shell expansion, so stop via the service's main PID
ExecStop=/usr/bin/kill -s TERM $MAINPID
Type=notify
User=redis
Group=redis
RuntimeDirectory=redis
RuntimeDirectoryMode=0755

[Install]
WantedBy=multi-user.target
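After creating the unit file, reload systemd and, if desired, enable the service at boot:

[root@reids_source_code ~]# systemctl daemon-reload
[root@reids_source_code ~]# systemctl enable redis-sentinel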

All three nodes provide a Sentinel configuration file

[root@reids-yum ~]# vim /etc/sentinel.conf
[root@reids_source_code redis]# vim /etc/redis/sentinel.conf
[root@redis-server ~]# vim /etc/sentinel.conf 
# Run as a daemon
daemonize yes
pidfile "/var/run/redis/redis-sentinel.pid"
logfile "/var/log/redis/redis-sentinel.log"
# Bind to this node's own address (192.168.5.13 on redis-server; adjust per node)
bind 192.168.5.13
port 26379
protected-mode no
# Working directory
dir "/var/lib/redis"
# Declare the master to monitor: its name is mymaster, its IP and port are 192.168.5.11 and 6379.
# The trailing 2 is the quorum: at least 2 sentinels must agree that the master is
# unreachable before it is marked down and a failover leader election is started.
sentinel monitor mymaster 192.168.5.11 6379 2
# Consider mymaster subjectively down after 30 seconds without a valid reply
sentinel down-after-milliseconds mymaster 30000
# During a failover, at most 1 slave may resynchronize with the new master at a time
sentinel parallel-syncs mymaster 1
# Failover timeout of 180 seconds; the exact meaning of this parameter is fairly
# involved, see the comments in the official sentinel.conf for details
sentinel failover-timeout mymaster 180000

# The lines below are maintained by Sentinel itself (CONFIG REWRITE)
# Unique ID of this sentinel (generated automatically; it must differ on each node)
sentinel myid c0fc53842608bba5e5807226ce96d7c412bd069b
sentinel deny-scripts-reconfig yes
# The epochs act like version numbers for configuration changes
sentinel config-epoch mymaster 0
sentinel leader-epoch mymaster 0
sentinel current-epoch 0
# The two discovered slave (replica) nodes
sentinel known-replica mymaster 192.168.5.13 6379
sentinel known-replica mymaster 192.168.5.12 6379
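Note that the unit file above runs Sentinel as User=redis, and Sentinel rewrites its own configuration file during startup and failover, so the configuration file and the directories it uses should be writable by that user. For example, on redis-server (assuming the redis user and group already exist; adjust the path to /etc/redis/sentinel.conf on reids_source_code):

[root@redis-server ~]# chown redis:redis /etc/sentinel.conf
[root@redis-server ~]# chown -R redis:redis /var/lib/redis /var/log/redis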

Start sentinel

[root@reids-yum ~]# systemctl start redis-sentinel
[root@reids_source_code ~]# systemctl start redis-sentinel
[root@redis-server ~]# systemctl start redis-sentinel

# Check the listening processes
[root@reids-yum ~]# netstat -lnutp | grep 6379
tcp        0      0 127.0.0.1:6379          0.0.0.0:*               LISTEN      8236/redis-server 1 
tcp        0      0 192.168.5.11:6379       0.0.0.0:*               LISTEN      8236/redis-server 1 
tcp        0      0 0.0.0.0:26379           0.0.0.0:*               LISTEN      8077/redis-sentinel 
tcp6       0      0 :::26379                :::*                    LISTEN      8077/redis-sentinel 

[root@reids_source_code ~]# netstat -lnutp | grep 6379
tcp        0      0 192.168.5.12:26379      0.0.0.0:*               LISTEN      3800/redis-sentinel 
tcp        0      0 192.168.5.12:6379       0.0.0.0:*               LISTEN      3265/redis-server 1

[root@redis-server ~]# netstat -lnutp | grep 6379
tcp        0      0 127.0.0.1:6379          0.0.0.0:*               LISTEN      12404/redis-server  
tcp        0      0 192.168.5.13:6379       0.0.0.0:*               LISTEN      12404/redis-server  
tcp        0      0 0.0.0.0:26379           0.0.0.0:*               LISTEN      12326/redis-sentine 
tcp6       0      0 :::26379                :::*                    LISTEN      12326/redis-sentine 

Sentinel is configured successfully!
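Beyond checking the listening ports, you can also ask Sentinel itself what it is monitoring, for example:

# Monitored masters, their quorum, and the number of known sentinels and replicas
[root@reids-yum ~]# redis-cli -p 26379 sentinel masters
# Details of the replicas discovered for mymaster (older releases use: sentinel slaves mymaster)
[root@reids-yum ~]# redis-cli -p 26379 sentinel replicas mymaster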

Test
View node information

[Figure: node info on the master (reids-yum)]
[Figure: node info on slave1]
[Figure: node info on slave2]
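The node information in the figures can also be pulled from the command line, for instance (assuming the screenshots show the output of info replication):

[root@reids-yum ~]# redis-cli -h 192.168.5.11 -p 6379 info replication   # role:master
[root@reids-yum ~]# redis-cli -h 192.168.5.12 -p 6379 info replication   # role:slave
[root@reids-yum ~]# redis-cli -h 192.168.5.13 -p 6379 info replication   # role:slave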
Stop the Redis service on the master

[root@reids-yum ~]# systemctl stop redis
[root@reids-yum ~]# netstat -lnutp | grep 6379
tcp        0      0 0.0.0.0:26379           0.0.0.0:*               LISTEN      9751/redis-sentinel 
tcp6       0      0 :::26379  

View node information
[Figure: node info on slave1]
[Figure: node info on slave2]
Redis Sentinel has successfully performed a master-slave failover.
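The promotion can also be confirmed from any surviving Sentinel, for example from slave1; the +switch-master event is likewise recorded in /var/log/redis/redis-sentinel.log:

# Should now return the IP and port of the newly promoted master instead of 192.168.5.11
[root@reids_source_code ~]# redis-cli -h 192.168.5.12 -p 26379 sentinel get-master-addr-by-name mymaster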


Origin blog.csdn.net/XY0918ZWQ/article/details/113804107