Installing and configuring a Redis cluster on CentOS 7.0

1. Introduction to redis cluster

When a Redis cluster starts, the data is automatically sharded across multiple nodes. At the same time, the cluster provides availability across shards: when some Redis nodes fail or the network is interrupted, the cluster can continue to work. However, when a large number of nodes fail or the network is partitioned (for example, when most of the master nodes are unavailable), the cluster stops working. 
Therefore, from a practical point of view, Redis cluster provides the following functions: 
● Automatically divide data into multiple redis nodes 
● When some nodes are down or unreachable, the cluster can still continue to work

2. Redis cluster data sharding

Instead of using consistent hashing, Redis Cluster uses hash slots. The entire cluster has 16384 hash slots. The algorithm for determining which slot a key belongs to is: compute the CRC16 of the key and take it modulo 16384. 
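You can check which slot a key maps to on any node of a running cluster with the CLUSTER KEYSLOT command (illustrated here against the deployment built later in this article; the key "foo" hashes to slot 12182):

[root@apollo ~]# redis-cli -h 192.168.56.181 -p 7000 cluster keyslot foo
(integer) 12182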
Each node in the cluster is responsible for a part of the hash slots. For example, with 3 nodes in the cluster: 
● The range of hash slots stored by node A is: 0 – 5500 
● The range of hash slots stored by node B is: 5501 – 11000 
● The range of hash slots stored by node C is: 11001 – 16383 
This distribution makes it easy to add and remove nodes. For example, to add a new node D, you only need to move some of the hash slots from A, B and C to node D. Similarly, to remove node A from the cluster, you only need to move the hash slots held by node A to nodes B and C; once all of node A's data has been moved away, node A can be removed from the cluster entirely. 
Because moving hash slots from one node to another requires no downtime, adding or removing nodes, or changing the share of hash slots held by a node, requires no downtime either. 
If multiple keys belong to the same hash slot, the cluster supports operating on all of them at once with a single command (or transaction, or Lua script). Through the concept of "hash tags", users can force multiple keys into the same hash slot. Hash tags are described in the cluster specification; in brief: if a key contains curly brackets "{}", only the string inside the brackets participates in the hash, so "this{foo}" and "another{foo}" are assigned to the same hash slot and can therefore be manipulated together in one command.
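Since only the string between the braces ("foo") is hashed, both keys land in slot 12182 and can be used together in a single MSET or MGET. This can be verified on any running cluster node:

192.168.56.181:7000> CLUSTER KEYSLOT this{foo}
(integer) 12182
192.168.56.181:7000> CLUSTER KEYSLOT another{foo}
(integer) 12182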

3. The master-slave mode of redis

To ensure that the cluster can keep working when some nodes fail or become unreachable, the cluster uses a master-slave model: each hash slot has 1 to N replicas (one master node and N-1 slave nodes). In our example cluster with three nodes A, B and C, if node B fails, the cluster cannot work properly, because the hash slot data on node B can no longer be served. However, if we add a slave node for each master, the cluster becomes: A, B, C as master nodes, with A1, B1, C1 as their slaves. Now if node B goes down, the cluster still works: B1 is a replica of B, so the cluster promotes B1 to master and continues to operate normally. However, if B and B1 fail at the same time, the cluster cannot continue to work. 
Consistency guarantee of Redis Cluster

Redis Cluster cannot guarantee strong consistency: under certain circumstances, write operations that have already been acknowledged to the client can be lost. 
The first reason writes can be lost is that the master and slave nodes replicate data asynchronously. 
A write operation proceeds as follows: 
1) The client sends a write to master node B 
2) Master node B acknowledges the write to the client 
3) Master node B propagates the write to its slave nodes B1, B2, B3 
As this sequence shows, master node B does not wait for slaves B1, B2 and B3 to complete the write before replying to the client. So if master node B fails after acknowledging the write to the client but before propagating it to its slaves, one of the slaves (which never received the write) will be promoted to master, and the write is lost forever. 
This resembles a traditional database that, with no replication involved, flushes to disk every second: it could improve consistency by replying to the client only after the write has reached disk, but that would cost performance. The equivalent in Redis Cluster would be to use synchronous replication. 
Basically, there is a trade-off between performance and consistency.
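For writes that must not be lost, Redis (3.0 and later) offers the WAIT command, which blocks the client until the preceding writes have been acknowledged by at least the given number of replicas, or until the timeout (in milliseconds) expires. A brief sketch (note that even WAIT cannot make Redis Cluster strongly consistent in every failure scenario):

192.168.56.181:7000> SET balance 100
OK
192.168.56.181:7000> WAIT 1 1000
(integer) 1

The reply is the number of replicas that acknowledged the write.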

4. Create and use redis cluster

4.1. Download the redis file

[root@apollo dtadmin]# wget http://download.redis.io/releases/redis-3.2.9.tar.gz

4.2. Extract redis to the /opt/ directory

[root@apollo dtadmin]# tar -zxvf redis-3.2.9.tar.gz -C /opt/

4.3. Compile redis

# Enter the directory /opt/redis-3.2.9
[root@apollo dtadmin]# cd /opt/redis-3.2.9/
[root@apollo redis-3.2.9]# make && make install   # if this errors out, dependency packages are missing; run the following command first
[root@apollo redis-3.2.9]# yum -y install ruby ruby-devel rubygems rpm-build gcc
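make install places the binaries (redis-server, redis-cli, etc.) in /usr/local/bin. A quick check that the build succeeded, which should report v=3.2.9:

[root@apollo redis-3.2.9]# redis-server --version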

4.4. Configure redis cluster

4.4.1. Environment preparation

#   hostname         ip               software   ports              notes
1   apollo.dt.com    192.168.56.181   redis      7000, 7001, 7002
2   artemis.dt.com   192.168.56.182   redis      7003, 7004, 7005
3   uranus.dt.com    192.168.56.183   redis      7006, 7007, 7008

4.4.2. Create a directory redis-cluster in the /opt/redis-3.2.9/ directory

# Create the directory redis-cluster
[root@apollo redis-3.2.9]# mkdir redis-cluster
[root@apollo redis-3.2.9]# cd redis-cluster
# Create three subdirectories under redis-cluster
[root@apollo redis-cluster]# mkdir -p 7000 7001 7002
# Copy redis.conf from /opt/redis-3.2.9 into each of the 7000, 7001 and 7002 directories:
[root@apollo redis-cluster]# cp /opt/redis-3.2.9/redis.conf /opt/redis-3.2.9/redis-cluster/7000
[root@apollo redis-cluster]# cp /opt/redis-3.2.9/redis.conf /opt/redis-3.2.9/redis-cluster/7001
[root@apollo redis-cluster]# cp /opt/redis-3.2.9/redis.conf /opt/redis-3.2.9/redis-cluster/7002

4.4.3. Configure the redis.conf file in each of the three subdirectories 7000, 7001 and 7002 under /opt/redis-3.2.9/redis-cluster/. The main modifications are:

[root@apollo redis-cluster]# vim 7000/redis.conf 
[root@apollo redis-cluster]# vim 7001/redis.conf 
[root@apollo redis-cluster]# vim 7002/redis.conf 
############### Settings to modify ########################
bind 192.168.56.181                   # change to the local machine's IP
port 7000                             # set according to the subdirectory
daemonize yes
pidfile /var/run/redis_7000.pid       # set according to the subdirectory
logfile "/var/log/redis-7000.log"     # set according to the subdirectory
appendonly yes
cluster-enabled yes
cluster-config-file nodes-7000.conf   # set according to the subdirectory
cluster-node-timeout 15000
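Note that besides the client port (e.g. 7000), every cluster node also listens on a cluster bus port, which is the client port plus 10000 (e.g. 17000) and is used for node-to-node communication; this is visible in the netstat output in section 6. If firewalld is enabled on CentOS 7, both port ranges must be opened on every server. A sketch for the first server (adjust the ranges on the other two):

[root@apollo ~]# firewall-cmd --permanent --add-port=7000-7002/tcp
[root@apollo ~]# firewall-cmd --permanent --add-port=17000-17002/tcp
[root@apollo ~]# firewall-cmd --reload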

4.4.4. Configure the other two servers in the same way

The difference is that artemis uses ports 7003, 7004, 7005 and uranus uses ports 7006, 7007, 7008, with correspondingly named subdirectories; see the scripted sketch below.
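Editing each copy of redis.conf by hand is error-prone, so the per-port configuration can also be generated with a small shell loop. A sketch for the second server, writing only the settings listed in 4.4.3 rather than editing the stock file:

[root@artemis redis-cluster]# for p in 7003 7004 7005; do
>   mkdir -p /opt/redis-3.2.9/redis-cluster/$p
>   cat > /opt/redis-3.2.9/redis-cluster/$p/redis.conf <<EOF
> bind 192.168.56.182
> port $p
> daemonize yes
> pidfile /var/run/redis_$p.pid
> logfile "/var/log/redis-$p.log"
> appendonly yes
> cluster-enabled yes
> cluster-config-file nodes-$p.conf
> cluster-node-timeout 15000
> EOF
> done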

5. Start the redis cluster

5.1. Start redis on the first server

[root@apollo redis-cluster]# redis-server 7000/redis.conf 
[root@apollo redis-cluster]# redis-server 7001/redis.conf 
[root@apollo redis-cluster]# redis-server 7002/redis.conf 

5.2. Start redis on the second machine

[root@artemis redis-cluster]# redis-server 7003/redis.conf 
[root@artemis redis-cluster]# redis-server 7004/redis.conf 
[root@artemis redis-cluster]# redis-server 7005/redis.conf 

5.3 Start redis on the third server

[root@uranus redis-cluster]# redis-server 7006/redis.conf 
[root@uranus redis-cluster]# redis-server 7007/redis.conf 
[root@uranus redis-cluster]# redis-server 7008/redis.conf 

6. Verify the redis startup status on each server

6.1. The first server

[root@apollo redis-cluster]# ps -ef | grep redis
root     18313     1  0 16:44 ?        00:00:00 redis-server 192.168.56.181:7001 [cluster]
root     18325     1  0 16:44 ?        00:00:00 redis-server 192.168.56.181:7002 [cluster]
root     18371     1  0 16:45 ?        00:00:00 redis-server 192.168.56.181:7000 [cluster]
root     18449  2564  0 16:46 pts/0    00:00:00 grep --color=auto redis

[root@apollo redis-cluster]# netstat -tnlp | grep redis
tcp        0      0 192.168.56.181:7001     0.0.0.0:*               LISTEN      18313/redis-server  
tcp        0      0 192.168.56.181:7002     0.0.0.0:*               LISTEN      18325/redis-server  
tcp        0      0 192.168.56.181:17000    0.0.0.0:*               LISTEN      18371/redis-server  
tcp        0      0 192.168.56.181:17001    0.0.0.0:*               LISTEN      18313/redis-server  
tcp        0      0 192.168.56.181:17002    0.0.0.0:*               LISTEN      18325/redis-server  
tcp        0      0 192.168.56.181:7000     0.0.0.0:*               LISTEN      18371/redis-server  

6.2. Second server

[root@artemis redis-cluster]# ps -ef | grep redis
root      5351     1  0 16:45 ?        00:00:00 redis-server 192.168.56.182:7003 [cluster]
root      5355     1  0 16:45 ?        00:00:00 redis-server 192.168.56.182:7004 [cluster]
root      5359     1  0 16:46 ?        00:00:00 redis-server 192.168.56.182:7005 [cluster]

[root@artemis redis-cluster]# netstat -tnlp | grep redis
tcp        0      0 192.168.56.182:7003     0.0.0.0:*               LISTEN      5351/redis-server 1 
tcp        0      0 192.168.56.182:7004     0.0.0.0:*               LISTEN      5355/redis-server 1 
tcp        0      0 192.168.56.182:7005     0.0.0.0:*               LISTEN      5359/redis-server 1 
tcp        0      0 192.168.56.182:17003    0.0.0.0:*               LISTEN      5351/redis-server 1 
tcp        0      0 192.168.56.182:17004    0.0.0.0:*               LISTEN      5355/redis-server 1 
tcp        0      0 192.168.56.182:17005    0.0.0.0:*               LISTEN      5359/redis-server 1 

6.3. The third server

[root@uranus redis-cluster]# ps -ef | grep redis
root     21138     1  0 16:46 ?        00:00:00 redis-server 192.168.56.183:7006 [cluster]
root     21156     1  0 16:46 ?        00:00:00 redis-server 192.168.56.183:7008 [cluster]
root     21387     1  0 16:50 ?        00:00:00 redis-server 192.168.56.183:7007 [cluster]
root     21394  9287  0 16:50 pts/0    00:00:00 grep --color=auto redis

[root@uranus redis-cluster]# netstat -tnlp | grep redis
tcp        0      0 192.168.56.183:7006     0.0.0.0:*               LISTEN      2959/redis-server 1 
tcp        0      0 192.168.56.183:7007     0.0.0.0:*               LISTEN      2971/redis-server 1 
tcp        0      0 192.168.56.183:7008     0.0.0.0:*               LISTEN      2982/redis-server 1 
tcp        0      0 192.168.56.183:17006    0.0.0.0:*               LISTEN      2959/redis-server 1 
tcp        0      0 192.168.56.183:17007    0.0.0.0:*               LISTEN      2971/redis-server 1 
tcp        0      0 192.168.56.183:17008    0.0.0.0:*               LISTEN      2982/redis-server 1 

7. Create a redis cluster
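redis-trib.rb is a Ruby script shipped in /opt/redis-3.2.9/src/ and needs the redis rubygem, which is why ruby and rubygems were installed in section 4.3. If the gem is missing, install it first (on CentOS 7's Ruby 2.0 you may need to pin an older version, e.g. gem install redis -v 3.3.3):

[root@apollo src]# gem install redis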

[root@apollo src]# ./redis-trib.rb create --replicas 1 192.168.56.181:7000 192.168.56.181:7001 192.168.56.181:7002 192.168.56.182:7003 192.168.56.182:7004 192.168.56.182:7005 192.168.56.183:7006 192.168.56.183:7007 192.168.56.183:7008


>>> Creating cluster
>>> Performing hash slots allocation on 9 nodes...
Using 4 masters:
192.168.56.181:7000
192.168.56.182:7003
192.168.56.183:7006
192.168.56.181:7001
Adding replica 192.168.56.182:7004 to 192.168.56.181:7000
Adding replica 192.168.56.183:7007 to 192.168.56.182:7003
Adding replica 192.168.56.181:7002 to 192.168.56.183:7006
Adding replica 192.168.56.182:7005 to 192.168.56.181:7001
Adding replica 192.168.56.183:7008 to 192.168.56.181:7000
M: 4d007a1e8efdc43ca4ec3db77029709b4e8413d0 192.168.56.181:7000
   slots:0-4095 (4096 slots) master
M: 0d0b4528f32db0111db2a78b8451567086b66d97 192.168.56.181:7001
   slots:12288-16383 (4096 slots) master
S: e7b8ba7a800683ba017401bde9a72bb34ad252d8 192.168.56.181:7002
   replicates 3b9056e1c92ee9b94870a4100b89f6dc474ec1fa
M: 4b34dcec53e46ad990b0e6bc36d5cd7b7f3f4cce 192.168.56.182:7003
   slots:4096-8191 (4096 slots) master
S: 13863d63aa323fd58e7ceeba1ccc91b6304d0539 192.168.56.182:7004
   replicates 4d007a1e8efdc43ca4ec3db77029709b4e8413d0
S: da3556753efe388a64fafc259338ea420a795163 192.168.56.182:7005
   replicates 0d0b4528f32db0111db2a78b8451567086b66d97
M: 3b9056e1c92ee9b94870a4100b89f6dc474ec1fa 192.168.56.183:7006
   slots:8192-12287 (4096 slots) master
S: ab90ee3ff9834a88416da311011e9bdfaa9a831f 192.168.56.183:7007
   replicates 4b34dcec53e46ad990b0e6bc36d5cd7b7f3f4cce
S: b0dda91a2527f71fe555cdd28fa8be4b571a4bed 192.168.56.183:7008
   replicates 4d007a1e8efdc43ca4ec3db77029709b4e8413d0
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join........
>>> Performing Cluster Check (using node 192.168.56.181:7000)
M: 4d007a1e8efdc43ca4ec3db77029709b4e8413d0 192.168.56.181:7000
   slots:0-4095 (4096 slots) master
   2 additional replica(s)
S: e7b8ba7a800683ba017401bde9a72bb34ad252d8 192.168.56.181:7002
   slots: (0 slots) slave
   replicates 3b9056e1c92ee9b94870a4100b89f6dc474ec1fa
S: ab90ee3ff9834a88416da311011e9bdfaa9a831f 192.168.56.183:7007
   slots: (0 slots) slave
   replicates 4b34dcec53e46ad990b0e6bc36d5cd7b7f3f4cce
M: 4b34dcec53e46ad990b0e6bc36d5cd7b7f3f4cce 192.168.56.182:7003
   slots:4096-8191 (4096 slots) master
   1 additional replica(s)
M: 0d0b4528f32db0111db2a78b8451567086b66d97 192.168.56.181:7001
   slots:12288-16383 (4096 slots) master
   1 additional replica(s)
M: 3b9056e1c92ee9b94870a4100b89f6dc474ec1fa 192.168.56.183:7006
   slots:8192-12287 (4096 slots) master
   1 additional replica(s)
S: b0dda91a2527f71fe555cdd28fa8be4b571a4bed 192.168.56.183:7008
   slots: (0 slots) slave
   replicates 4d007a1e8efdc43ca4ec3db77029709b4e8413d0
S: 13863d63aa323fd58e7ceeba1ccc91b6304d0539 192.168.56.182:7004
   slots: (0 slots) slave
   replicates 4d007a1e8efdc43ca4ec3db77029709b4e8413d0
S: da3556753efe388a64fafc259338ea420a795163 192.168.56.182:7005
   slots: (0 slots) slave
   replicates 0d0b4528f32db0111db2a78b8451567086b66d97
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
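The cluster can now be smoke-tested from any node with redis-cli in cluster mode (-c), which follows MOVED redirections automatically. The key "foo" maps to slot 12182, which this slot layout places on 192.168.56.183:7006:

[root@apollo src]# redis-cli -c -h 192.168.56.181 -p 7000
192.168.56.181:7000> set foo bar
-> Redirected to slot [12182] located at 192.168.56.183:7006
OK
192.168.56.183:7006> get foo
"bar"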
