Redis Cluster Test

     1. Create a script that builds the cluster: cluster-start.sh

./redis-trib.rb create --replicas 1 192.168.58.101:7000 192.168.58.101:7001 192.168.58.101:7002 192.168.58.102:7000 192.168.58.102:7001 192.168.58.102:7002
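With --replicas 1, redis-trib treats 3 of the 6 nodes as masters, gives each master one replica, and divides the 16384 hash slots among the masters. Here is a rough sketch of that split (my approximation of redis-trib's rounding logic, not its actual code):

```shell
# Approximate sketch of how redis-trib splits 16384 slots over 3 masters.
# The boundaries come from floating-point rounding, which is why one
# master ends up with 5462 slots and the other two with 5461.
alloc=$(awk 'BEGIN {
  total = 16384; masters = 3
  per = total / masters; cursor = 0
  for (i = 0; i < masters; i++) {
    first = int(cursor + 0.5)          # round to the first slot of this share
    last  = int(cursor + per - 0.5)    # round to the last slot of this share
    printf "master %d: slots %d-%d (%d slots)\n", i, first, last, last - first + 1
    cursor += per
  }
}')
echo "$alloc"
```

This reproduces the uneven 5461/5462/5461 split that shows up in the run below: rounding, not an exact division, decides the range boundaries.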

    2. Create a script that starts the Redis service on each node: servers-start.sh

#!/bin/sh
# For each node directory: clear stale data and old cluster state, then start the node.
for port in 7000 7001 7002; do
  cd "$port"
  rm -f appendonly.aof dump.rdb nodes.conf
  redis-server redis.conf
  cd ..
done
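For the scripts above to work, each node directory's redis.conf must enable cluster mode. A minimal sketch (the concrete values here are my assumptions; only the port differs per directory):

```
port 7000
daemonize yes                    # let the start script move on to the next node
cluster-enabled yes
cluster-config-file nodes.conf   # the state file the start script deletes on reset
cluster-node-timeout 5000        # ms before a node is considered failing
appendonly yes                   # produces the appendonly.aof the script removes
```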

    3. Cluster test process:

[root@localhost redis-cluster]# sh cluster-start.sh
>>> Creating cluster
Connecting to node 192.168.58.101:7000: OK
Connecting to node 192.168.58.101:7001: OK
Connecting to node 192.168.58.101:7002: OK
Connecting to node 192.168.58.102:7000: OK
Connecting to node 192.168.58.102:7001: OK
Connecting to node 192.168.58.102:7002: OK
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
192.168.58.102:7000
192.168.58.101:7000
192.168.58.102:7001
Adding replica 192.168.58.101:7001 to 192.168.58.102:7000
Adding replica 192.168.58.102:7002 to 192.168.58.101:7000
Adding replica 192.168.58.101:7002 to 192.168.58.102:7001
M: aec976f33acd4971cf5e087ceaf2b5e606c56f36 192.168.58.101:7000
   slots:5461-10922 (5462 slots) master
S: 514548a9a01d7e125d716fd51d9ffd36165a2647 192.168.58.101:7001
   replicates d5ec4dee922385007f09005d0ef24024f3d513a3
S: 3a1962324921188520896b1e9329e210906c1641 192.168.58.101:7002
   replicates 3c2a3600f9b8ea11f7991c8180ecc24ea4266a6b
M: d5ec4dee922385007f09005d0ef24024f3d513a3 192.168.58.102:7000
   slots:0-5460 (5461 slots) master
M: 3c2a3600f9b8ea11f7991c8180ecc24ea4266a6b 192.168.58.102:7001
   slots:10923-16383 (5461 slots) master
S: 990c7f1b44034646cacb51a7754668ee5ada6005 192.168.58.102:7002
   replicates aec976f33acd4971cf5e087ceaf2b5e606c56f36
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join..
>>> Performing Cluster Check (using node 192.168.58.101:7000)
M: aec976f33acd4971cf5e087ceaf2b5e606c56f36 192.168.58.101:7000
   slots:5461-10922 (5462 slots) master
M: 514548a9a01d7e125d716fd51d9ffd36165a2647 192.168.58.101:7001
   slots: (0 slots) master
   replicates d5ec4dee922385007f09005d0ef24024f3d513a3
M: 3a1962324921188520896b1e9329e210906c1641 192.168.58.101:7002
   slots: (0 slots) master
   replicates 3c2a3600f9b8ea11f7991c8180ecc24ea4266a6b
M: d5ec4dee922385007f09005d0ef24024f3d513a3 192.168.58.102:7000
   slots:0-5460 (5461 slots) master
M: 3c2a3600f9b8ea11f7991c8180ecc24ea4266a6b 192.168.58.102:7001
   slots:10923-16383 (5461 slots) master
M: 990c7f1b44034646cacb51a7754668ee5ada6005 192.168.58.102:7002
   slots: (0 slots) master
   replicates aec976f33acd4971cf5e087ceaf2b5e606c56f36
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
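Each key is mapped onto one of these 16384 slots as CRC16(key) mod 16384; Redis uses the XMODEM variant of CRC16. A small standalone sketch of that mapping (illustrative only, not part of this setup):

```shell
# CRC16/XMODEM (poly 0x1021, init 0), the checksum Redis Cluster uses
# to map a key to one of the 16384 hash slots.
crc16_xmodem() {
  s=$1; crc=0
  for ((i = 0; i < ${#s}; i++)); do
    c=$(printf '%d' "'${s:i:1}")              # byte value of the character
    crc=$(( (crc ^ (c << 8)) & 0xFFFF ))      # fold the byte into the high bits
    for ((j = 0; j < 8; j++)); do
      if (( crc & 0x8000 )); then
        crc=$(( ((crc << 1) ^ 0x1021) & 0xFFFF ))
      else
        crc=$(( (crc << 1) & 0xFFFF ))
      fi
    done
  done
  echo "$crc"
}
echo "key foo -> slot $(( $(crc16_xmodem foo) & 16383 ))"
```

Key foo lands on slot 12182, so in this cluster it would be served by the master holding 10923-16383, i.e. 192.168.58.102:7001.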

./redis-trib.rb check 192.168.58.102:7000 -- checks the cluster status; the ip:port argument can be any node inside the cluster

[root@localhost redis-cluster]# ./redis-trib.rb check 192.168.58.102:7000
Connecting to node 192.168.58.102:7000: OK
Connecting to node 192.168.58.101:7002: OK
Connecting to node 192.168.58.101:7001: OK
Connecting to node 192.168.58.102:7001: OK
Connecting to node 192.168.58.101:7000: OK
Connecting to node 192.168.58.102:7002: OK
>>> Performing Cluster Check (using node 192.168.58.102:7000)
M: d5ec4dee922385007f09005d0ef24024f3d513a3 192.168.58.102:7000
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
S: 3a1962324921188520896b1e9329e210906c1641 192.168.58.101:7002
   slots: (0 slots) slave
   replicates 3c2a3600f9b8ea11f7991c8180ecc24ea4266a6b
S: 514548a9a01d7e125d716fd51d9ffd36165a2647 192.168.58.101:7001
   slots: (0 slots) slave
   replicates d5ec4dee922385007f09005d0ef24024f3d513a3
M: 3c2a3600f9b8ea11f7991c8180ecc24ea4266a6b 192.168.58.102:7001
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
M: aec976f33acd4971cf5e087ceaf2b5e606c56f36 192.168.58.101:7000
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 990c7f1b44034646cacb51a7754668ee5ada6005 192.168.58.102:7002
   slots: (0 slots) slave
   replicates aec976f33acd4971cf5e087ceaf2b5e606c56f36
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

1. I configured 6 nodes across two machines and joined them into one cluster; redis-trib automatically picked 3 master nodes and attached one slave node to each.

2. Simulate a master node going down (I kill the node's process directly with the kill command).
The test kills the master at 192.168.58.102:7000, id d5ec4dee922385007f09005d0ef24024f3d513a3. Under Redis's master-slave failover mechanism, when a master goes down, its slave is promoted to master.
So its slave 514548a9a01d7e125d716fd51d9ffd36165a2647 should become the new master.

On 192.168.58.102:
[root@localhost redis-cluster]# ps -ef | grep redis
root      2460     1  0 14:00 ?        00:00:03 redis-server *:7000 [cluster]
root      2465     1  0 14:00 ?        00:00:03 redis-server *:7001 [cluster]
root      2474     1  0 14:00 ?        00:00:03 redis-server *:7002 [cluster]
root      2572  2378  0 14:16 pts/0    00:00:00 grep redis
[root@localhost redis-cluster]# kill 2460
[root@localhost redis-cluster]# ./redis-trib.rb check  192.168.58.102:7000
Connecting to node 192.168.58.102:7000: [ERR] Sorry, can't connect to node 192.168.58.102:7000
[root@localhost redis-cluster]# ./redis-trib.rb check  192.168.58.102:7000
Connecting to node 192.168.58.102:7000: [ERR] Sorry, can't connect to node 192.168.58.102:7000
[root@localhost redis-cluster]# ./redis-trib.rb check  192.168.58.102:7002
Connecting to node 192.168.58.102:7002: OK
Connecting to node 192.168.58.101:7002: OK
Connecting to node 192.168.58.101:7000: OK
Connecting to node 192.168.58.101:7001: OK
Connecting to node 192.168.58.102:7001: OK
>>> Performing Cluster Check (using node 192.168.58.102:7002)
S: 990c7f1b44034646cacb51a7754668ee5ada6005 192.168.58.102:7002
   slots: (0 slots) slave
   replicates aec976f33acd4971cf5e087ceaf2b5e606c56f36
S: 3a1962324921188520896b1e9329e210906c1641 192.168.58.101:7002
   slots: (0 slots) slave
   replicates 3c2a3600f9b8ea11f7991c8180ecc24ea4266a6b
M: aec976f33acd4971cf5e087ceaf2b5e606c56f36 192.168.58.101:7000
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
M: 514548a9a01d7e125d716fd51d9ffd36165a2647 192.168.58.101:7001
   slots:0-5460 (5461 slots) master
   0 additional replica(s)
M: 3c2a3600f9b8ea11f7991c8180ecc24ea4266a6b 192.168.58.102:7001
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Sure enough, as expected, the killed node d5ec4dee922385007f09005d0ef24024f3d513a3 has disappeared from the cluster, and its slave
514548a9a01d7e125d716fd51d9ffd36165a2647 has been promoted to master. And once the node is killed, connecting to it directly fails.

Next, kill the master at 192.168.58.101:7000, id aec976f33acd4971cf5e087ceaf2b5e606c56f36.
On 192.168.58.101:
[root@localhost src]# ps -ef | grep redis
root      2483     1  0 14:00 ?        00:00:13 redis-server *:7000 [cluster]
root      2488     1  0 14:00 ?        00:00:13 redis-server *:7001 [cluster]
root      2497     1  0 14:00 ?        00:00:13 redis-server *:7002 [cluster]
root      3001  2352  0 14:44 pts/0    00:00:00 grep redis
[root@localhost src]# kill 2483


[root@localhost src]# ./redis-trib.rb check  192.168.58.102:7001
Connecting to node 192.168.58.102:7001: OK
Connecting to node 192.168.58.101:7002: OK
Connecting to node 192.168.58.101:7001: OK
Connecting to node 192.168.58.102:7002: OK
>>> Performing Cluster Check (using node 192.168.58.102:7001)
M: 3c2a3600f9b8ea11f7991c8180ecc24ea4266a6b 192.168.58.102:7001
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 3a1962324921188520896b1e9329e210906c1641 192.168.58.101:7002
   slots: (0 slots) slave
   replicates 3c2a3600f9b8ea11f7991c8180ecc24ea4266a6b
M: 514548a9a01d7e125d716fd51d9ffd36165a2647 192.168.58.101:7001
   slots:0-5460 (5461 slots) master
   0 additional replica(s)
M: 990c7f1b44034646cacb51a7754668ee5ada6005 192.168.58.102:7002
   slots:5461-10922 (5462 slots) master
   0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

As expected, its slave 990c7f1b44034646cacb51a7754668ee5ada6005 became the master.

Conclusion: killing a master that still has a slave does not break the cluster; the slave takes over its slot range.

Next, kill the master at 192.168.58.101:7001 (id 514548a9a01d7e125d716fd51d9ffd36165a2647), which now has no slave.
On 192.168.58.101:
[root@localhost src]# ps -ef | grep redis
root      2488     1  0 14:00 ?        00:00:15 redis-server *:7001 [cluster]
root      2497     1  0 14:00 ?        00:00:15 redis-server *:7002 [cluster]
root      3029  2352  0 14:49 pts/0    00:00:00 grep redis
[root@localhost src]# kill 2488
[root@localhost src]# ./redis-trib.rb check  192.168.58.102:7001
Connecting to node 192.168.58.102:7001: OK
Connecting to node 192.168.58.101:7002: OK
Connecting to node 192.168.58.102:7002: OK
>>> Performing Cluster Check (using node 192.168.58.102:7001)
M: 3c2a3600f9b8ea11f7991c8180ecc24ea4266a6b 192.168.58.102:7001
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 3a1962324921188520896b1e9329e210906c1641 192.168.58.101:7002
   slots: (0 slots) slave
   replicates 3c2a3600f9b8ea11f7991c8180ecc24ea4266a6b
M: 990c7f1b44034646cacb51a7754668ee5ada6005 192.168.58.102:7002
   slots:5461-10922 (5462 slots) master
   0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[ERR] Not all 16384 slots are covered by nodes.
[root@localhost src]# redis-cli -c  -h 192.168.58.101 -p 7001
Could not connect to Redis at 192.168.58.101:7001: Connection refused
not connected> set b
[root@localhost src]# redis-cli -c  -h 192.168.58.102 -p 7001
192.168.58.102:7001> set b c
(error) CLUSTERDOWN The cluster is down. Use CLUSTER INFO for more information
192.168.58.102:7001>

The cluster has now failed: the slots owned by the killed master are no longer covered by any node, and trying to access a key returns (error) CLUSTERDOWN The cluster is down.
I have not found a good way to recover from this online. For now the only fix is to delete everything except redis.conf in each node directory, kill every node process, restart the services, and build the cluster again.
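One related setting worth knowing (not tested in this run, and it does not bring the lost slots back): by default, Redis Cluster refuses all commands as soon as any slot is uncovered. The still-covered slot ranges can be kept online with a config change on each node:

```
# in each node's redis.conf; the default is yes
cluster-require-full-coverage no
```

Keys in the uncovered ranges still fail either way; this only keeps the rest of the keyspace usable.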
