Redis Sentinel deployment solution

Summary

At this stage there are two main types of Redis clusters. One is the high-availability cluster, Redis Sentinel: a master-slave architecture in which every instance holds a complete copy of the data. The other is the distributed cluster, Redis Cluster: a multi-master architecture in which the data is sharded across instances, each instance being responsible for reading and writing its own share. Today we walk through the process of building Redis Sentinel.

Sentinel

Features

Redis' Sentinel system is used to manage multiple Redis servers. The system performs the following three tasks:

  • Monitoring : Sentinel constantly checks whether your master and slave servers are working properly.
  • Notification : when a monitored Redis server has a problem, Sentinel can notify the administrator or other applications through an API.
  • Automatic failover : when a master server is not working properly, Sentinel starts an automatic failover: it promotes one of the failed master's slaves to be the new master and reconfigures the other slaves to replicate from the new master. When clients try to connect to the failed master, the cluster returns the address of the new master instead, so the new master transparently replaces the failed one.

Distributed nature

Sentinel is a distributed system: it is designed to run in a configuration where multiple Sentinel processes cooperate. The advantages are as follows:

  • Multiple Sentinels vote to decide whether to fail over the Master, which reduces the false-positive rate;
  • If the failover system were a single point of failure, the system as a whole could not be highly available.

Basic knowledge you need to know before deployment

Sentinel must be run with a configuration file, otherwise it will refuse to start.

Sentinels listen for connections on TCP port 26379 by default, so for Sentinel to work properly, port 26379 on each server must be open to connections from the IP addresses of the other Sentinel instances. !!! This is very important !!!
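For example, on a distribution that uses firewalld, the port could be opened with something like the following (firewalld is an assumption here; adjust for iptables or ufw as appropriate):

```shell
# Open Sentinel's default port so the other Sentinel instances can connect
firewall-cmd --permanent --add-port=26379/tcp
firewall-cmd --reload
```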

  • A robust system requires at least three Sentinel instances.

  • Three Sentinel instances should be placed on separate computers or virtual machines.

  • Because Redis uses asynchronous replication, the Sentinel + Redis combination cannot guarantee that acknowledged writes are retained during a failure. For the details of asynchronous replication, please refer to " Analysis of Redis Replication Process ".

  • The client library must support Sentinel.
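One way to limit the write loss that asynchronous replication allows is the min-slaves options in redis.conf. This is an optional hardening step, not required for the deployment below; the thresholds shown are illustrative:

```conf
# Stop accepting writes on the master if fewer than 1 slave is connected,
# or if the slave's replication lag exceeds 10 seconds
min-slaves-to-write 1
min-slaves-max-lag 10
```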

Environment

Deployment architecture diagram

(Architecture diagram: three hosts, each running one Redis instance and one Sentinel; one Master with two Slaves.)

The deployment process

The overall deployment process is to first deploy the master-slave cluster in the lower part of the above figure, and then deploy the sentinel cluster.

Deploy master-slave cluster

Download Redis on Master, Slave1, and Slave2, storing it under /usr/redis/

[root@localhost ~]wget -P /usr/redis/ http://download.redis.io/releases/redis-4.0.14.tar.gz

Unzip and install the downloaded file on each machine

[root@localhost ~]cd /usr/redis/
[root@localhost redis]gunzip redis-4.0.14.tar.gz
[root@localhost redis]tar -xvf redis-4.0.14.tar
[root@localhost redis]cd redis-4.0.14
[root@localhost redis-4.0.14]make install

Configure the redis.conf files of Slave1 and Slave2 so that they replicate data from the Master. For the principle of replication, see the analysis of the Redis replication process. The minimum configuration required for the master-slave cluster is:

# slaveof <masterip> <masterport>
slaveof 192.168.33.4 6379

# masterauth <master-password>
masterauth 123456

# Default and recommended: make the slave read-only
slave-read-only yes

# bind 127.0.0.1 by default; allow access from other machines
bind 0.0.0.0

# daemonize no by default; run in the background
daemonize yes

Start Master, Slave1, Slave2 respectively

[root@localhost redis-4.0.14]cd src
[root@localhost src]./redis-server ../redis.conf
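Before moving on, the replication state can be checked on the Master with the INFO command (the field values shown are what we would expect in this deployment, not guaranteed output):

```shell
[root@localhost src]./redis-cli info replication
# Expected on the Master: role:master and connected_slaves:2
```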

Connect to the Master, set a value on it, and check the synchronization

[root@localhost src]./redis-cli
127.0.0.1:6379>set key1 value1
"ok"
127.0.0.1:6379>quit

Connect to Slave1 and Slave2 and try to get the value of key1

[root@localhost src]./redis-cli
127.0.0.1:6379>get key1
"value1"
127.0.0.1:6379>quit

At this point, master-slave synchronization is deployed. Next, let's deploy the sentinel cluster.

Deploy sentinel cluster

Sentinel is installed on the same machines as Redis; the correspondence is as follows:

| Host  | IP           | Redis  | Sentinel  |
|-------|--------------|--------|-----------|
| Host1 | 192.168.33.4 | Master | Sentinel1 |
| Host2 | 192.168.33.5 | Slave1 | Sentinel2 |
| Host3 | 192.168.33.6 | Slave2 | Sentinel3 |

Since Sentinel is already included in Redis 2.8.0 and above, we can directly use the redis-4.0.14 deployment we downloaded earlier.

Configure sentinel.conf on the Master, Slave1, and Slave2 machines respectively. The minimum configuration required for the sentinel cluster is:

sentinel monitor mymaster 192.168.33.4 6379 2
sentinel down-after-milliseconds mymaster 60000
sentinel failover-timeout mymaster 180000
sentinel parallel-syncs mymaster 1
# Start in the background
daemonize yes
# Log to a file
logfile "/usr/redis/redis-4.0.14/sentinel_log.log"
# Allow access from other machines; in particular, the machines in the
# sentinel cluster must be able to reach each other
bind 0.0.0.0
# Disable protected mode to make testing easier
protected-mode no

Description:

  • monitor : this line instructs Sentinel to monitor a master server named mymaster with IP address 192.168.33.4 and port 6379, and states that declaring this master failed requires the agreement of at least 2 Sentinels (regardless of the quorum you set for declaring a server failed, a Sentinel needs the support of a majority of the Sentinels in the system to actually start an automatic failover).
  • down-after-milliseconds : the number of milliseconds after which Sentinel considers the server to be down.
  • parallel-syncs : how many slaves may resynchronize with the new master at the same time during a failover. The smaller the number, the longer the failover takes to complete.
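The distinction between the quorum and the majority can be sketched with a quick calculation, using the numbers from the configuration above (3 Sentinels, quorum 2):

```shell
# quorum: votes needed to flag the master as objectively down (odown)
# majority: Sentinels needed to authorize the actual failover
sentinels=3
quorum=2
majority=$(( sentinels / 2 + 1 ))
echo "odown needs $quorum votes; failover needs $majority of $sentinels Sentinels"
```

With 3 Sentinels both numbers happen to be 2, but with 5 Sentinels and a quorum of 2, the failover itself would still require 3.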

Start the sentinel on Master, Slave1, Slave2 respectively

[root@localhost src]./redis-server ../sentinel.conf --sentinel

At this point, the sentinel cluster has been built.
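To verify that the sentinels see the master, query any one of them with the SENTINEL command on port 26379 (the reply shown is what we would expect in this deployment):

```shell
[root@localhost src]./redis-cli -p 26379
127.0.0.1:26379>sentinel get-master-addr-by-name mymaster
1) "192.168.33.4"
2) "6379"
```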

Test

Test the failover capability of sentinel cluster:

  1. Kill the Master process, where 7081 is the Master's process ID:

    [root@localhost ~]ps -ef | grep redis
    [root@localhost ~]kill -s 9 7081
    
  2. Observe the sentinel logs:

    (sentinel log output showing +sdown, +odown, and +switch-master events)
    Description:

    • +sdown master : the Master is subjectively down;
    • +odown master : the Master is objectively down;
    • +switch-master : a new master has been elected;
  3. Now the Redis at 192.168.33.6, i.e. Slave2, has been elected Master. Let's try an operation on the new Master (the former Slave2) and see whether it is synchronized to Slave1:

    # Connect to the new Master (the former Slave2)
    [root@192.168.33.6 src]./redis-cli
    127.0.0.1:6379>set key2 value2
    OK
    
  4. Log in to the Redis at 192.168.33.5 (Slave1) and see whether we can get the value of key2:

    # Connect to Slave1
    [root@192.168.33.5 src]./redis-cli
    127.0.0.1:6379>get key2
    "value2"
    

    We can clearly see that the value newly inserted on the new Master (the former Slave2) can be read on Slave1, so Sentinel has successfully completed a failover.
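Besides killing the process, a failover can also be rehearsed without crashing a node by asking a sentinel to switch masters by hand, using the standard SENTINEL failover command:

```shell
127.0.0.1:26379>sentinel failover mymaster
OK
```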



Origin blog.csdn.net/qq_36011946/article/details/105596638