Building a Redis cluster with master-slave synchronization

Preface

  • The previous article covered compiling and installing Redis and optimizing its configuration. Here we look at Redis clustering in depth and simulate building a master-slave Redis service cluster.

1. Overview of Redis cluster

1.1 What is the role of a Redis cluster?

  • Before starting the deployment, we should understand why a Redis cluster is needed: what problems does it solve, and what are its advantages? We can approach this from the limitations of a single Redis server.

1.2 Problems with a single Redis server

  • If you have already deployed MySQL master-slave replication with read-write separation and MHA high availability, the problems of a single Redis server are easy to guess. Mainly:
1) It is a single point of failure;
2) It cannot meet high-concurrency demands;
3) Data loss is disastrous (fault tolerance is very low).
  • The most obvious remedies are keeping backups and scaling horizontally, which is exactly why we build a Redis cluster to meet business needs.

1.3 Introduction and advantages of Redis cluster

  • Introduction:
1) A Redis cluster is a facility that shares data across multiple Redis nodes;
2) Redis Cluster does not support commands that touch multiple keys at once, because they would require moving data between nodes; that cannot match standalone Redis performance and may cause unpredictable errors under heavy load;
3) Redis Cluster provides a degree of availability through partitioning: it keeps processing commands even when a node goes down or becomes unreachable in a real environment.
  • Advantages:
1) Data is automatically partitioned across the nodes;
2) The cluster can still serve commands when some of its nodes fail or are unreachable.

1.4 Data sharding in a Redis-Cluster cluster

  • Redis Cluster does not use consistent hashing; instead it introduces the concept of hash slots;

  • The Redis cluster has a total of 16,384 hash slots (0-16383);

  • Each key is run through a CRC16 checksum, and the result is taken modulo 16384 to determine which slot stores it;

  • Each node of the cluster is responsible for part of the hash slot;

  • In a Redis cluster, you can add or delete nodes without stopping the service.

Redis-Cluster data sharding in detail, using a cluster of 3 nodes as an example

Node A holds hash slots 0-5500
Node B holds hash slots 5501-11000
Node C holds hash slots 11001-16383
Nodes can be added or removed without stopping the service

For example
To add a new node D, some of the slots on nodes A, B and C are moved to D
To remove node A, its slots are moved to nodes B and C first, and the now slot-less node A is then removed from the cluster
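The key-to-slot mapping described above can be reproduced in a few lines. A minimal sketch in Python, assuming the CRC16 variant is CRC-16/XMODEM (the one Redis Cluster uses); the `{hash tag}` refinement that real Redis applies first is omitted for simplicity:

```python
def crc16(data: bytes) -> int:
    """CRC-16/XMODEM: polynomial 0x1021, initial value 0."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    # Every key maps to one of the 16384 hash slots (0-16383).
    # Note: real Redis first checks for a {hash tag} inside the key;
    # that step is left out of this sketch.
    return crc16(key.encode()) % 16384

print(key_slot("weather"))  # → 8949, matching the redirect seen when testing the cluster later
```

This is why a key written on one node can be read from any node: every client and server agrees on the same slot arithmetic.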

1.5 Redis-Cluster's master-slave replication model

1. Suppose the cluster has three nodes A, B and C. If node B fails, the whole cluster becomes unavailable because B's slot range is no longer served.

2. Give each node a replica (A1, B1, C1), so the cluster consists of three master nodes and three slave nodes. When node B fails, the cluster elects B1 as the new master and keeps serving.

3. If both A and A1 fail, the cluster becomes unavailable.
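The availability rules above can be expressed as a toy model. This is purely illustrative (it says nothing about how Redis actually runs elections); it only encodes the rule that every slot range needs a live master or a promotable replica:

```python
def cluster_available(masters: dict) -> bool:
    """masters maps a master name to {'alive': bool, 'replicas': [(name, alive), ...]}.
    The cluster stays available only if every master's slot range is served:
    the master itself is alive, or at least one replica is alive to be promoted."""
    for node in masters.values():
        if node["alive"]:
            continue
        if not any(alive for _, alive in node["replicas"]):
            return False  # this slot range has no live server left
    return True

nodes = {
    "A": {"alive": True,  "replicas": [("A1", True)]},
    "B": {"alive": False, "replicas": [("B1", True)]},   # B1 would be promoted
    "C": {"alive": True,  "replicas": [("C1", True)]},
}
print(cluster_available(nodes))  # True: B1 covers B's slots

nodes["A"]["alive"] = False
nodes["A"]["replicas"] = [("A1", False)]                 # A and A1 both down
print(cluster_available(nodes))  # False: A's slot range is lost
```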

2. Building Redis master-slave synchronization

2.1 Case topology and environment


Project environment

master1 192.168.140.20
master2 192.168.140.21
master3 192.168.140.22
slave1  192.168.140.13
slave2  192.168.140.14
slave3  192.168.140.15

2.2 Project operation

  • Set network parameters and turn off the firewall and SELinux on all nodes

  • Download and install Redis on all nodes

  • Modify the Redis configuration file on all nodes

  • Create a Redis cluster (on the master1 node)

Cluster creation steps

1) Import the key file and install RVM
2) Source the environment variables so they take effect
3) Install Ruby 2.4.1
4) Install the Redis client gem
5) Create the Redis cluster

2.3 Specific implementation steps

  • Import the redis package to each server

Send keyboard input to all sessions, i.e. perform the following configuration on every server

1) Unzip

tar zxvf redis-5.0.4.tar.gz

2) Configuration and installation

cd redis-5.0.4/
make
make PREFIX=/usr/local/redis install

3) Link shortcut commands

ln -s /usr/local/redis/bin/* /usr/local/bin/

4) Run the install script and check the port status

cd redis-5.0.4/utils/
./install_server.sh
netstat -anptu | grep redis


5) Modify the main configuration file and start the service

[root@master1 ~]# vi /etc/redis/6379.conf
...
bind 192.168.140.20            '//change 127.0.0.1 to the local address (each of the 6 servers uses its own IP)'
cluster-enabled yes
appendonly yes                    '//enable AOF persistence'
cluster-config-file nodes-6379.conf
cluster-node-timeout 15000
cluster-require-full-coverage yes
Restart the redis service
/etc/init.d/redis_6379 stop
/etc/init.d/redis_6379 start

6) On master1, use the script to create a cluster

  • First copy redis-3.2.0.gem to the server
  • Install Ruby and the redis gem
[root@master1 ~]# yum -y install ruby rubygems
[root@master1 ~]# gem install redis-3.2.0.gem
  • Build a cluster
[root@master1 ~]# redis-cli --cluster create --cluster-replicas 1 \
192.168.140.20:6379 192.168.140.21:6379 192.168.140.22:6379 \
192.168.140.13:6379 192.168.140.14:6379 192.168.140.15:6379

>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.140.14:6379 to 192.168.140.20:6379
Adding replica 192.168.140.15:6379 to 192.168.140.21:6379
Adding replica 192.168.140.13:6379 to 192.168.140.22:6379
M: b32bddc815edec59943aef28275b073925f3bf6c 192.168.140.20:6379
   slots:[0-5460] (5461 slots) master
M: 7ab1a75dbac2dd91be898874895c636d2fa3b790 192.168.140.21:6379
   slots:[5461-10922] (5462 slots) master
M: 440c768ed0378686f347244bf37d6e5adb191401 192.168.140.22:6379
   slots:[10923-16383] (5461 slots) master
S: 550f265ad5d2714a20731cbd7cd8a61e826da443 192.168.140.13:6379
   replicates 440c768ed0378686f347244bf37d6e5adb191401
S: 4b8052a36136df43078db53c7d472b4acc848dcb 192.168.140.14:6379
   replicates b32bddc815edec59943aef28275b073925f3bf6c
S: c3f7dc4e4c17c385fdad4af782100266d2c691e2 192.168.140.15:6379
   replicates 7ab1a75dbac2dd91be898874895c636d2fa3b790
Can I set the above configuration? (type 'yes' to accept): yes	'//you must type yes here'
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.....
>>> Performing Cluster Check (using node 192.168.140.20:6379)
M: b32bddc815edec59943aef28275b073925f3bf6c 192.168.140.20:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 440c768ed0378686f347244bf37d6e5adb191401 192.168.140.22:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
M: 7ab1a75dbac2dd91be898874895c636d2fa3b790 192.168.140.21:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 4b8052a36136df43078db53c7d472b4acc848dcb 192.168.140.14:6379
   slots: (0 slots) slave
   replicates b32bddc815edec59943aef28275b073925f3bf6c
S: 550f265ad5d2714a20731cbd7cd8a61e826da443 192.168.140.13:6379
   slots: (0 slots) slave
   replicates 440c768ed0378686f347244bf37d6e5adb191401
S: c3f7dc4e4c17c385fdad4af782100266d2c691e2 192.168.140.15:6379
   slots: (0 slots) slave
   replicates 7ab1a75dbac2dd91be898874895c636d2fa3b790
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
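The slot allocation printed above (0-5460, 5461-10922, 10923-16383) comes from splitting the 16384 slots as evenly as possible among the three masters. A sketch of the arithmetic, using rounded cut points so the middle master absorbs the remainder the way redis-cli's output shows:

```python
def split_slots(n_masters: int, total: int = 16384):
    """Divide slots 0..total-1 into n contiguous, nearly equal ranges.
    Cut points are rounded, which reproduces the split redis-cli printed."""
    cuts = [round(total * i / n_masters) for i in range(n_masters + 1)]
    return [(cuts[i], cuts[i + 1] - 1) for i in range(n_masters)]

print(split_slots(3))  # → [(0, 5460), (5461, 10922), (10923, 16383)]
```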

2.4 Testing the cluster

  • Log in and write data
[root@master1 ~]# redis-cli -h 192.168.140.20 -p 6379 -c		//log in to the server
192.168.140.20:6379> set weather sunny							//set the key weather to the value sunny
-> Redirected to slot [8949] located at 192.168.140.21:6379
OK
  • Read the data back from the other servers
[root@master3 ~]# redis-cli -h 192.168.140.22 -p 6379 -c		//log in to the server
192.168.140.22:6379> get weather
-> Redirected to slot [8949] located at 192.168.140.21:6379		//the key hashes to slot 8949, so the client is redirected to the server holding it
"sunny"
[root@master2 ~]# redis-cli -h 192.168.140.21 -p 6379 -c		//log in to the server
192.168.140.21:6379> get weather
"sunny"
[root@slave1 ~]# redis-cli -h 192.168.140.13 -p 6379 -c
192.168.140.13:6379> get weather
-> Redirected to slot [8949] located at 192.168.140.21:6379
"sunny"
  • Writing data on a slave server works the same way
[root@slave1 ~]# redis-cli -h 192.168.140.13 -p 6379 -c
192.168.140.13:6379> set centos 7.5
-> Redirected to slot [467] located at 192.168.140.20:6379	
OK

'//the key hashes to slot 467, which lives on the master1 server'
[root@master1 ~]# redis-cli -h 192.168.140.20 -p 6379 -c
192.168.140.20:6379> get centos		'//get the value of the key centos'
"7.5"
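The `-> Redirected to slot ...` lines are redis-cli in `-c` mode following the cluster's `MOVED` error replies, whose format is `MOVED <slot> <host>:<port>`. A toy sketch of how a cluster-aware client parses such a reply:

```python
def parse_moved(error: str):
    """Parse a Redis Cluster MOVED error into (slot, host, port)."""
    kind, slot, addr = error.split()
    assert kind == "MOVED", "not a MOVED redirection"
    host, port = addr.rsplit(":", 1)  # rsplit keeps IPv6-style colons in the host
    return int(slot), host, int(port)

# The redirect seen when setting 'weather' above:
print(parse_moved("MOVED 8949 192.168.140.21:6379"))
# → (8949, '192.168.140.21', 6379)
```

A real client would cache the slot-to-node mapping from this reply and send subsequent commands for slot 8949 directly to 192.168.140.21.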
  • The cluster can be inspected from any server
After logging in to a server:

cluster info    view cluster information
cluster nodes   view node information


Origin blog.csdn.net/weixin_42449832/article/details/111322902