Redis High Availability Solution -- Redis + Sentinel (Hands-On Project)
Preface
This environment is built on a CentOS 7.8 system as a Redis learning lab. For the
detailed build steps, see Redis-5.0.9 Environment Deployment
1. Sentinel architecture design
2. Environmental preparation
role | host | ip | Redis-Version |
---|---|---|---|
master | node01 | 192.168.5.11 | Redis-5.0.9 |
slave1 | node02 | 192.168.5.12 | Redis-5.0.9 |
slave2 | node03 | 192.168.5.13 | Redis-5.0.9 |
Note: Redis in this environment is installed via yum
3. Configure master-slave synchronization
master
[root@node01 ~]# vim /etc/redis.conf
bind 192.168.5.11
daemonize yes
appendonly yes
[root@node01 ~]# systemctl enable --now redis
slave1
[root@node02 ~]# vim /etc/redis.conf
bind 192.168.5.12
daemonize yes
appendonly yes
replicaof 192.168.5.11 6379
[root@node02 ~]# systemctl enable --now redis
slave2
[root@node03 ~]# vim /etc/redis.conf
bind 192.168.5.13
daemonize yes
appendonly yes
replicaof 192.168.5.11 6379
[root@node03 ~]# systemctl enable --now redis
Check master-slave synchronization status
Master
slave1
slave2
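The replication status shown above can be verified from the command line. A sketch, assuming `redis-cli` is available on each node and the instances listen on the addresses configured earlier:

```shell
# On the master: role should be "master" with connected_slaves:2
redis-cli -h 192.168.5.11 -p 6379 info replication

# On each slave: role should be "slave" with master_link_status:up
redis-cli -h 192.168.5.12 -p 6379 info replication
redis-cli -h 192.168.5.13 -p 6379 info replication
```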
Master-slave synchronization configured successfully!
4. Configure the Sentinel service
Modify the configuration file
master
[root@node01 ~]# vim /etc/redis-sentinel.conf
daemonize yes
sentinel monitor mymaster 192.168.5.11 6379 2
[root@node01 ~]# systemctl enable --now redis-sentinel.service
Created symlink from /etc/systemd/system/multi-user.target.wants/redis-sentinel.service to /usr/lib/systemd/system/redis-sentinel.service.
[root@node01 ~]# netstat -lnutp | grep 6379
tcp 0 0 192.168.5.11:26379 0.0.0.0:* LISTEN 11911/redis-sentine
tcp 0 0 192.168.5.11:6379 0.0.0.0:* LISTEN 1698/redis-server 1
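The key line is `sentinel monitor mymaster 192.168.5.11 6379 2`: it names the monitored master `mymaster` and sets a quorum of 2, meaning at least two of the three Sentinels must agree the master is unreachable before a failover is triggered. A fuller sketch of `/etc/redis-sentinel.conf` with the commonly tuned directives (the timeout values below are Redis defaults, not taken from the original environment):

```shell
daemonize yes
# Monitor the master named "mymaster"; quorum of 2 out of 3 Sentinels
sentinel monitor mymaster 192.168.5.11 6379 2
# Consider the master subjectively down after 30s without a valid reply (default)
sentinel down-after-milliseconds mymaster 30000
# Resynchronize one slave at a time to the new master after a failover (default)
sentinel parallel-syncs mymaster 1
# Abort an in-progress failover attempt after 3 minutes (default)
sentinel failover-timeout mymaster 180000
```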
slave1
[root@node02 ~]# vim /etc/redis-sentinel.conf
bind 192.168.5.12
daemonize yes
sentinel monitor mymaster 192.168.5.11 6379 2
[root@node02 ~]# systemctl enable --now redis-sentinel.service
Created symlink from /etc/systemd/system/multi-user.target.wants/redis-sentinel.service to /usr/lib/systemd/system/redis-sentinel.service.
[root@node02 ~]# netstat -lnutp | grep 6379
tcp 0 0 192.168.5.12:26379 0.0.0.0:* LISTEN 11884/redis-sentine
tcp 0 0 192.168.5.12:6379 0.0.0.0:* LISTEN 1717/redis-server 1
slave2
[root@node03 ~]# vim /etc/redis-sentinel.conf
bind 192.168.5.13
daemonize yes
sentinel monitor mymaster 192.168.5.11 6379 2
[root@node03 ~]# systemctl enable --now redis-sentinel.service
Created symlink from /etc/systemd/system/multi-user.target.wants/redis-sentinel.service to /usr/lib/systemd/system/redis-sentinel.service.
[root@node03 ~]# netstat -lnutp | grep 6379
tcp 0 0 192.168.5.13:26379 0.0.0.0:* LISTEN 11791/redis-sentine
tcp 0 0 192.168.5.13:6379 0.0.0.0:* LISTEN 1648/redis-server 1
View sentinel status
Track sentinel log files
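The Sentinel log location is set by the `logfile` directive; on a yum-based install it commonly defaults to `/var/log/redis/sentinel.log` (an assumption here — check your `redis-sentinel.conf` if the path differs):

```shell
# Follow the Sentinel log to watch monitoring and failover events
tail -f /var/log/redis/sentinel.log
```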
View sentinel information
View master sentinel information
View slave Sentinel information
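The Sentinel state referenced above can be queried through the Sentinel port (26379). A sketch, assuming the lab addresses from this environment:

```shell
# Overall Sentinel state: master name, number of slaves and Sentinels
redis-cli -h 192.168.5.11 -p 26379 info sentinel

# Details of the monitored master
redis-cli -h 192.168.5.11 -p 26379 sentinel master mymaster

# Details of the slaves and of the other Sentinels
redis-cli -h 192.168.5.11 -p 26379 sentinel slaves mymaster
redis-cli -h 192.168.5.11 -p 26379 sentinel sentinels mymaster
```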
5. Simulate a Sentinel failover
Stop the master redis service
[root@node01 ~]# systemctl stop redis
[root@node01 ~]# netstat -lnutp | grep redis
tcp 0 0 192.168.5.11:26379 0.0.0.0:* LISTEN 11911/redis-sentine
View the master-slave status after about 3 minutes
The master has switched to node02
Trace log
View master-slave status
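The post-failover status can be confirmed from the command line. A sketch, assuming node02 was the node promoted in this run:

```shell
# Ask any Sentinel which node currently holds the master role
redis-cli -h 192.168.5.12 -p 26379 sentinel get-master-addr-by-name mymaster

# On node02: role should now be "master", with node03 as a connected slave
redis-cli -h 192.168.5.12 -p 6379 info replication
```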
The master writes and deletes data
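Writes and deletes can be exercised against the new master to confirm it accepts commands. A sketch with a hypothetical key, assuming node02 is now the master:

```shell
# Writes should succeed on the promoted master
redis-cli -h 192.168.5.12 -p 6379 set testkey hello
redis-cli -h 192.168.5.12 -p 6379 get testkey
redis-cli -h 192.168.5.12 -p 6379 del testkey
```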
Failover successful!
Start node01 redis service
[root@node01 ~]# systemctl start redis
[root@node01 ~]# netstat -lnutp | grep redis
tcp 0 0 192.168.5.11:6379 0.0.0.0:* LISTEN 12031/redis-server
tcp 0 0 192.168.5.11:26379 0.0.0.0:* LISTEN 11911/redis-sentine
View master-slave status
The master writes data, and the slaves check the synchronization status
master writes data
node01 view
node03 view
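The checks above can be reproduced with a test key (hypothetical name), assuming node02 remained master after node01 rejoined as a slave:

```shell
# Write on the current master (node02 after the failover in this run)
redis-cli -h 192.168.5.12 -p 6379 set synckey "sentinel-test"

# Read on the rejoined node01 and on node03 -- both should return the value
redis-cli -h 192.168.5.11 -p 6379 get synckey
redis-cli -h 192.168.5.13 -p 6379 get synckey
```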
Failover and failback both work!
Failback succeeded (the recovered node rejoins as a slave rather than preempting the master role)
Note: After the master fails, Sentinel performs the role switch by rewriting each node's configuration file, which is how failover and failback are achieved
Redis Sentinel is configured successfully!