Redis Cluster Setup (Part 9): Building a Cluster with redis-cli --cluster

1. Run redis-cli --cluster help to see the commands available for building and managing a cluster.

[root@hadoop05 bin]# ./redis-cli --cluster  help
Cluster Manager Commands:
  create         host1:port1 ... hostN:portN
                 --cluster-replicas <arg>
  check          host:port
                 --cluster-search-multiple-owners
  info           host:port
  fix            host:port
                 --cluster-search-multiple-owners
  reshard        host:port
                 --cluster-from <arg>
                 --cluster-to <arg>
                 --cluster-slots <arg>
                 --cluster-yes
                 --cluster-timeout <arg>
                 --cluster-pipeline <arg>
                 --cluster-replace
  rebalance      host:port
                 --cluster-weight <node1=w1...nodeN=wN>
                 --cluster-use-empty-masters
                 --cluster-timeout <arg>
                 --cluster-simulate
                 --cluster-pipeline <arg>
                 --cluster-threshold <arg>
                 --cluster-replace
  add-node       new_host:new_port existing_host:existing_port
                 --cluster-slave
                 --cluster-master-id <arg>
  del-node       host:port node_id
  call           host:port command arg arg .. arg
  set-timeout    host:port milliseconds
  import         host:port
                 --cluster-from <arg>
                 --cluster-copy
                 --cluster-replace
  help           

For check, fix, reshard, del-node, set-timeout you can specify the host and port of any working node in the cluster.
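
The transcript below uses create to stand up a fresh cluster; check and info are the usual follow-ups for inspecting it. Both accept the address of any live node, for example (ports matching the nodes used below):

# Summarize masters, slot counts and replica counts:
[root@hadoop05 bin]# ./redis-cli --cluster info 127.0.0.1:8000
# Verify slot coverage and node agreement:
[root@hadoop05 bin]# ./redis-cli --cluster check 127.0.0.1:8000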

--- Kill the multiple node processes running on this single machine

# Kill every redis-server process whose command line contains "800" (ports 8000-8005)
[root@hadoop05 bin]# ps -ef | grep redis-server | grep 800 | awk '{ print $2 }' | xargs kill
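
One caveat before re-creating the cluster: if the nodes still hold data or old cluster state, the create step below fails with "[ERR] Node ... is not empty". A minimal cleanup sketch; the file names here are assumptions, so match them to the dir, dbfilename, appendfilename and cluster-config-file values in each node's config before deleting anything:

# Assumed file names; verify against each node's config first.
[root@hadoop05 bin]# rm -f nodes-800*.conf dump-800*.rdb appendonly-800*.aof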

--- Start the redis-server instances
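
The startup commands are not shown in the original post; a minimal sketch, assuming one config file per port (redis-8000.conf ... redis-8005.conf), each containing at least port <n>, cluster-enabled yes, cluster-config-file nodes-<n>.conf and daemonize yes:

# Hypothetical config file names; adjust to your own layout.
[root@hadoop05 bin]# for port in 8000 8001 8002 8003 8004 8005; do ./redis-server redis-${port}.conf; done

With all six instances running, hand the list of nodes to create, asking for one replica per master: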

[root@hadoop05 bin]# ./redis-cli --cluster  create 127.0.0.1:8000  127.0.0.1:8001  127.0.0.1:8002 127.0.0.1:8003 127.0.0.1:8004 127.0.0.1:8005  --cluster-replicas 1 
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 127.0.0.1:8004 to 127.0.0.1:8000
Adding replica 127.0.0.1:8005 to 127.0.0.1:8001
Adding replica 127.0.0.1:8003 to 127.0.0.1:8002
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: 432fd513c07235f74a31bed836f533bf4ecb3889 127.0.0.1:8000
   slots:[0-5460] (5461 slots) master
M: 30309728ad48df61e40d6f476740579e66398b02 127.0.0.1:8001
   slots:[5461-10922] (5462 slots) master
M: ba807182f5cc573fc100d4d64ef9a4b1ecf0419f 127.0.0.1:8002
   slots:[10923-16383] (5461 slots) master
S: a1021b42a93d35dbf75b5b515ea50e20515e92b6 127.0.0.1:8003
   replicates ba807182f5cc573fc100d4d64ef9a4b1ecf0419f
S: d238b4d6c4c54a614a57f0c6f99d07c296f9ff1b 127.0.0.1:8004
   replicates 432fd513c07235f74a31bed836f533bf4ecb3889
S: 8c2846698b49aed95674ad2e4eef5d501563f874 127.0.0.1:8005
   replicates 30309728ad48df61e40d6f476740579e66398b02
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
......
>>> Performing Cluster Check (using node 127.0.0.1:8000)
M: 432fd513c07235f74a31bed836f533bf4ecb3889 127.0.0.1:8000
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: a1021b42a93d35dbf75b5b515ea50e20515e92b6 127.0.0.1:8003
   slots: (0 slots) slave
   replicates ba807182f5cc573fc100d4d64ef9a4b1ecf0419f
S: 8c2846698b49aed95674ad2e4eef5d501563f874 127.0.0.1:8005
   slots: (0 slots) slave
   replicates 30309728ad48df61e40d6f476740579e66398b02
M: ba807182f5cc573fc100d4d64ef9a4b1ecf0419f 127.0.0.1:8002
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: d238b4d6c4c54a614a57f0c6f99d07c296f9ff1b 127.0.0.1:8004
   slots: (0 slots) slave
   replicates 432fd513c07235f74a31bed836f533bf4ecb3889
M: 30309728ad48df61e40d6f476740579e66398b02 127.0.0.1:8001
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
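
With the cluster reported healthy, connect to one of the nodes with a plain (non-cluster-mode) client, presumably ./redis-cli -p 8000, and try a read:
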
127.0.0.1:8000> get key1
(error) MOVED 9189 127.0.0.1:8001
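
The MOVED reply is expected: key1 hashes to slot 9189, which is owned by 127.0.0.1:8001, and a client started without the -c option does not follow redirections. Reconnecting in cluster mode lets redis-cli chase the redirect itself; the session looks roughly like:

[root@hadoop05 bin]# ./redis-cli -c -p 8000
127.0.0.1:8000> get key1
-> Redirected to slot [9189] located at 127.0.0.1:8001
(nil)

Back on the original connection, cluster info confirms the cluster is healthy: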
127.0.0.1:8000> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:146
cluster_stats_messages_pong_sent:150
cluster_stats_messages_sent:296
cluster_stats_messages_ping_received:145
cluster_stats_messages_pong_received:146
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:296
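
The highlights: cluster_state:ok, all 16384 slots assigned and serving, six known nodes, and cluster_size:3, meaning three masters actually own slots. For the per-node view (node IDs, roles, slot ranges), CLUSTER NODES can be run from the same session:

127.0.0.1:8000> cluster nodes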

--- With a cluster built this way, I found that when one of the nodes went down, the failed master's slave was not promoted to take its place; no failover happened and the cluster simply went into a failed state...

An open problem, still to be analyzed.
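
A few starting points for that analysis (a sketch, not a confirmed diagnosis): verify that the surviving nodes still agree on the topology, and review the failover-related settings. cluster-node-timeout controls how long a master must be unreachable before it is marked as failing, and with the default cluster-require-full-coverage yes the whole cluster stops serving while any slot range is uncovered.

# Inspect the cluster from a surviving node:
[root@hadoop05 bin]# ./redis-cli --cluster check 127.0.0.1:8001
[root@hadoop05 bin]# ./redis-cli -p 8001 cluster nodes

# Settings worth reviewing in each node's config (defaults shown):
#   cluster-node-timeout 15000
#   cluster-require-full-coverage yes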

Reposted from blog.csdn.net/u012842247/article/details/103568673