Redis HA: Cluster Setup

About: a hands-on guide to setting up a Redis cluster
Tags: Redis cluster, Redis high availability, Redis distributed, Redis 4.0.2
Note: anything non-essential has been trimmed so beginners can follow it at a glance
Tip: if you spot a mistake or know a better approach, please leave a comment or message me so I can improve it


★ Preface
※ Architecture used here: single host, 3 shards with 2 copies each (6 Redis instances in total)
※ System: CentOS 6.3 x86_64, 4 GB RAM
※ All steps are simplified to make Redis easy for beginners to pick up
※ Redis stands for "remote dictionary server"
※ Redis is a key-value store in the NoSQL family
※ Redis Sentinel has been supported since Redis 2.8; cluster mode since Redis 3.0
※ The Sentinel system manages multiple Redis servers and performs three main tasks: monitoring, notification, and automatic failover
※ A Redis master serves reads and writes; a replica is read-only
※ For more configuration parameters, see my other Redis articles


★ Related articles
Redis Ops: Installing a Single Instance
Redis HA: Master-Replica Setup
Redis HA: Sentinel Setup
Redis HA: Cluster Setup


★ Configure the cluster
※ Create the directories (brace expansion creates zzt_cluster/1/data through zzt_cluster/6/data)
rm -rf /soft/redis-4.0.2/zzt_cluster/
mkdir -p /soft/redis-4.0.2/zzt_cluster/{1..6}/data
tree /soft/redis-4.0.2/zzt_cluster/

※ Create the per-node config files from the stock redis.conf
cp /soft/redis-4.0.2/redis.conf    /soft/redis-4.0.2/zzt_redis_cluster_6391.conf
cp /soft/redis-4.0.2/redis.conf    /soft/redis-4.0.2/zzt_redis_cluster_6392.conf
cp /soft/redis-4.0.2/redis.conf    /soft/redis-4.0.2/zzt_redis_cluster_6393.conf
cp /soft/redis-4.0.2/redis.conf    /soft/redis-4.0.2/zzt_redis_cluster_6394.conf
cp /soft/redis-4.0.2/redis.conf    /soft/redis-4.0.2/zzt_redis_cluster_6395.conf
cp /soft/redis-4.0.2/redis.conf    /soft/redis-4.0.2/zzt_redis_cluster_6396.conf

※ Edit the config file (node 1)
# Change the service port: port
sed -i 's/^port 6379/port 6391/g' /soft/redis-4.0.2/zzt_redis_cluster_6391.conf
cat /soft/redis-4.0.2/zzt_redis_cluster_6391.conf |grep "^port 63"
# Change the PID file: pidfile
sed -i 's/redis_6379.pid/redis_6391.pid/g' /soft/redis-4.0.2/zzt_redis_cluster_6391.conf
cat /soft/redis-4.0.2/zzt_redis_cluster_6391.conf |grep "redis_639"
# Enable cluster mode: cluster-enabled
sed -i 's/# cluster-enabled yes/cluster-enabled yes/g' /soft/redis-4.0.2/zzt_redis_cluster_6391.conf
cat /soft/redis-4.0.2/zzt_redis_cluster_6391.conf |grep "cluster-enabled"
# Set the cluster config file: cluster-config-file
sed -i 's/# cluster-config-file nodes-6379.conf/cluster-config-file nodes-6391.conf/g' /soft/redis-4.0.2/zzt_redis_cluster_6391.conf
cat /soft/redis-4.0.2/zzt_redis_cluster_6391.conf |grep "cluster-config-file"
# Enable AOF persistence: appendonly
sed -i 's/appendonly no/appendonly yes/g' /soft/redis-4.0.2/zzt_redis_cluster_6391.conf
cat /soft/redis-4.0.2/zzt_redis_cluster_6391.conf |grep "appendonly "
# Set the cluster node timeout: cluster-node-timeout
sed -i 's/# cluster-node-timeout 15000/cluster-node-timeout 15000/g' /soft/redis-4.0.2/zzt_redis_cluster_6391.conf
cat /soft/redis-4.0.2/zzt_redis_cluster_6391.conf |grep "cluster-node-timeout 15000"
# Set the data directory: dir
sed -i 's:dir ./:dir /soft/redis-4.0.2/zzt_cluster/1/data:g' /soft/redis-4.0.2/zzt_redis_cluster_6391.conf
cat /soft/redis-4.0.2/zzt_redis_cluster_6391.conf |grep "dir "

※ Edit the config file (node 2)
# Change the service port: port
sed -i 's/^port 6379/port 6392/g' /soft/redis-4.0.2/zzt_redis_cluster_6392.conf
cat /soft/redis-4.0.2/zzt_redis_cluster_6392.conf |grep "^port 63"
# Change the PID file: pidfile
sed -i 's/redis_6379.pid/redis_6392.pid/g' /soft/redis-4.0.2/zzt_redis_cluster_6392.conf
cat /soft/redis-4.0.2/zzt_redis_cluster_6392.conf |grep "redis_639"
# Enable cluster mode: cluster-enabled
sed -i 's/# cluster-enabled yes/cluster-enabled yes/g' /soft/redis-4.0.2/zzt_redis_cluster_6392.conf
cat /soft/redis-4.0.2/zzt_redis_cluster_6392.conf |grep "cluster-enabled"
# Set the cluster config file: cluster-config-file
sed -i 's/# cluster-config-file nodes-6379.conf/cluster-config-file nodes-6392.conf/g' /soft/redis-4.0.2/zzt_redis_cluster_6392.conf
cat /soft/redis-4.0.2/zzt_redis_cluster_6392.conf |grep "cluster-config-file"
# Enable AOF persistence: appendonly
sed -i 's/appendonly no/appendonly yes/g' /soft/redis-4.0.2/zzt_redis_cluster_6392.conf
cat /soft/redis-4.0.2/zzt_redis_cluster_6392.conf |grep "appendonly "
# Set the cluster node timeout: cluster-node-timeout
sed -i 's/# cluster-node-timeout 15000/cluster-node-timeout 15000/g' /soft/redis-4.0.2/zzt_redis_cluster_6392.conf
cat /soft/redis-4.0.2/zzt_redis_cluster_6392.conf |grep "cluster-node-timeout 15000"
# Set the data directory: dir
sed -i 's:dir ./:dir /soft/redis-4.0.2/zzt_cluster/2/data:g' /soft/redis-4.0.2/zzt_redis_cluster_6392.conf
cat /soft/redis-4.0.2/zzt_redis_cluster_6392.conf |grep "dir "

※ Edit the config file (node 3)
# Change the service port: port
sed -i 's/^port 6379/port 6393/g' /soft/redis-4.0.2/zzt_redis_cluster_6393.conf
cat /soft/redis-4.0.2/zzt_redis_cluster_6393.conf |grep "^port 63"
# Change the PID file: pidfile
sed -i 's/redis_6379.pid/redis_6393.pid/g' /soft/redis-4.0.2/zzt_redis_cluster_6393.conf
cat /soft/redis-4.0.2/zzt_redis_cluster_6393.conf |grep "redis_639"
# Enable cluster mode: cluster-enabled
sed -i 's/# cluster-enabled yes/cluster-enabled yes/g' /soft/redis-4.0.2/zzt_redis_cluster_6393.conf
cat /soft/redis-4.0.2/zzt_redis_cluster_6393.conf |grep "cluster-enabled"
# Set the cluster config file: cluster-config-file
sed -i 's/# cluster-config-file nodes-6379.conf/cluster-config-file nodes-6393.conf/g' /soft/redis-4.0.2/zzt_redis_cluster_6393.conf
cat /soft/redis-4.0.2/zzt_redis_cluster_6393.conf |grep "cluster-config-file"
# Enable AOF persistence: appendonly
sed -i 's/appendonly no/appendonly yes/g' /soft/redis-4.0.2/zzt_redis_cluster_6393.conf
cat /soft/redis-4.0.2/zzt_redis_cluster_6393.conf |grep "appendonly "
# Set the cluster node timeout: cluster-node-timeout
sed -i 's/# cluster-node-timeout 15000/cluster-node-timeout 15000/g' /soft/redis-4.0.2/zzt_redis_cluster_6393.conf
cat /soft/redis-4.0.2/zzt_redis_cluster_6393.conf |grep "cluster-node-timeout 15000"
# Set the data directory: dir
sed -i 's:dir ./:dir /soft/redis-4.0.2/zzt_cluster/3/data:g' /soft/redis-4.0.2/zzt_redis_cluster_6393.conf
cat /soft/redis-4.0.2/zzt_redis_cluster_6393.conf |grep "dir "

※ Edit the config file (node 4)
# Change the service port: port
sed -i 's/^port 6379/port 6394/g' /soft/redis-4.0.2/zzt_redis_cluster_6394.conf
cat /soft/redis-4.0.2/zzt_redis_cluster_6394.conf |grep "^port 63"
# Change the PID file: pidfile
sed -i 's/redis_6379.pid/redis_6394.pid/g' /soft/redis-4.0.2/zzt_redis_cluster_6394.conf
cat /soft/redis-4.0.2/zzt_redis_cluster_6394.conf |grep "redis_639"
# Enable cluster mode: cluster-enabled
sed -i 's/# cluster-enabled yes/cluster-enabled yes/g' /soft/redis-4.0.2/zzt_redis_cluster_6394.conf
cat /soft/redis-4.0.2/zzt_redis_cluster_6394.conf |grep "cluster-enabled"
# Set the cluster config file: cluster-config-file
sed -i 's/# cluster-config-file nodes-6379.conf/cluster-config-file nodes-6394.conf/g' /soft/redis-4.0.2/zzt_redis_cluster_6394.conf
cat /soft/redis-4.0.2/zzt_redis_cluster_6394.conf |grep "cluster-config-file"
# Enable AOF persistence: appendonly
sed -i 's/appendonly no/appendonly yes/g' /soft/redis-4.0.2/zzt_redis_cluster_6394.conf
cat /soft/redis-4.0.2/zzt_redis_cluster_6394.conf |grep "appendonly "
# Set the cluster node timeout: cluster-node-timeout
sed -i 's/# cluster-node-timeout 15000/cluster-node-timeout 15000/g' /soft/redis-4.0.2/zzt_redis_cluster_6394.conf
cat /soft/redis-4.0.2/zzt_redis_cluster_6394.conf |grep "cluster-node-timeout 15000"
# Set the data directory: dir
sed -i 's:dir ./:dir /soft/redis-4.0.2/zzt_cluster/4/data:g' /soft/redis-4.0.2/zzt_redis_cluster_6394.conf
cat /soft/redis-4.0.2/zzt_redis_cluster_6394.conf |grep "dir "

※ Edit the config file (node 5)
# Change the service port: port
sed -i 's/^port 6379/port 6395/g' /soft/redis-4.0.2/zzt_redis_cluster_6395.conf
cat /soft/redis-4.0.2/zzt_redis_cluster_6395.conf |grep "^port 63"
# Change the PID file: pidfile
sed -i 's/redis_6379.pid/redis_6395.pid/g' /soft/redis-4.0.2/zzt_redis_cluster_6395.conf
cat /soft/redis-4.0.2/zzt_redis_cluster_6395.conf |grep "redis_639"
# Enable cluster mode: cluster-enabled
sed -i 's/# cluster-enabled yes/cluster-enabled yes/g' /soft/redis-4.0.2/zzt_redis_cluster_6395.conf
cat /soft/redis-4.0.2/zzt_redis_cluster_6395.conf |grep "cluster-enabled"
# Set the cluster config file: cluster-config-file
sed -i 's/# cluster-config-file nodes-6379.conf/cluster-config-file nodes-6395.conf/g' /soft/redis-4.0.2/zzt_redis_cluster_6395.conf
cat /soft/redis-4.0.2/zzt_redis_cluster_6395.conf |grep "cluster-config-file"
# Enable AOF persistence: appendonly
sed -i 's/appendonly no/appendonly yes/g' /soft/redis-4.0.2/zzt_redis_cluster_6395.conf
cat /soft/redis-4.0.2/zzt_redis_cluster_6395.conf |grep "appendonly "
# Set the cluster node timeout: cluster-node-timeout
sed -i 's/# cluster-node-timeout 15000/cluster-node-timeout 15000/g' /soft/redis-4.0.2/zzt_redis_cluster_6395.conf
cat /soft/redis-4.0.2/zzt_redis_cluster_6395.conf |grep "cluster-node-timeout 15000"
# Set the data directory: dir
sed -i 's:dir ./:dir /soft/redis-4.0.2/zzt_cluster/5/data:g' /soft/redis-4.0.2/zzt_redis_cluster_6395.conf
cat /soft/redis-4.0.2/zzt_redis_cluster_6395.conf |grep "dir "

※ Edit the config file (node 6)
# Change the service port: port
sed -i 's/^port 6379/port 6396/g' /soft/redis-4.0.2/zzt_redis_cluster_6396.conf
cat /soft/redis-4.0.2/zzt_redis_cluster_6396.conf |grep "^port 63"
# Change the PID file: pidfile
sed -i 's/redis_6379.pid/redis_6396.pid/g' /soft/redis-4.0.2/zzt_redis_cluster_6396.conf
cat /soft/redis-4.0.2/zzt_redis_cluster_6396.conf |grep "redis_639"
# Enable cluster mode: cluster-enabled
sed -i 's/# cluster-enabled yes/cluster-enabled yes/g' /soft/redis-4.0.2/zzt_redis_cluster_6396.conf
cat /soft/redis-4.0.2/zzt_redis_cluster_6396.conf |grep "cluster-enabled"
# Set the cluster config file: cluster-config-file
sed -i 's/# cluster-config-file nodes-6379.conf/cluster-config-file nodes-6396.conf/g' /soft/redis-4.0.2/zzt_redis_cluster_6396.conf
cat /soft/redis-4.0.2/zzt_redis_cluster_6396.conf |grep "cluster-config-file"
# Enable AOF persistence: appendonly
sed -i 's/appendonly no/appendonly yes/g' /soft/redis-4.0.2/zzt_redis_cluster_6396.conf
cat /soft/redis-4.0.2/zzt_redis_cluster_6396.conf |grep "appendonly "
# Set the cluster node timeout: cluster-node-timeout
sed -i 's/# cluster-node-timeout 15000/cluster-node-timeout 15000/g' /soft/redis-4.0.2/zzt_redis_cluster_6396.conf
cat /soft/redis-4.0.2/zzt_redis_cluster_6396.conf |grep "cluster-node-timeout 15000"
# Set the data directory: dir
sed -i 's:dir ./:dir /soft/redis-4.0.2/zzt_cluster/6/data:g' /soft/redis-4.0.2/zzt_redis_cluster_6396.conf
cat /soft/redis-4.0.2/zzt_redis_cluster_6396.conf |grep "dir "
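The six edit blocks above differ only in the port and directory number, so the whole procedure can also be sketched as one loop. This is a sketch, not part of the original steps: the function name gen_cluster_conf is made up here, and it assumes the stock redis.conf still contains the default "port 6379", commented cluster-* lines, "appendonly no", and "dir ./" entries.

```shell
# gen_cluster_conf BASE_DIR
# Hypothetical helper: generate the six node configs under BASE_DIR,
# applying the same sed edits as the per-node blocks above.
gen_cluster_conf() {
  base=$1
  for i in 1 2 3 4 5 6; do
    port=$((6390 + i))
    conf=$base/zzt_cluster/$i/zzt_redis_cluster_${port}.conf
    mkdir -p "$base/zzt_cluster/$i/data"
    cp "$base/redis.conf" "$conf"
    sed -i \
      -e "s/^port 6379/port ${port}/" \
      -e "s/redis_6379.pid/redis_${port}.pid/" \
      -e "s/# cluster-enabled yes/cluster-enabled yes/" \
      -e "s/# cluster-config-file nodes-6379.conf/cluster-config-file nodes-${port}.conf/" \
      -e "s/appendonly no/appendonly yes/" \
      -e "s/# cluster-node-timeout 15000/cluster-node-timeout 15000/" \
      -e "s:dir ./:dir $base/zzt_cluster/$i/data:" \
      "$conf"
  done
}
# usage: gen_cluster_conf /soft/redis-4.0.2
```

The explicit per-node blocks remain the reference; the loop is just the same edits written once.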


※ Move each config file into its node directory
mv /soft/redis-4.0.2/zzt_redis_cluster_6391.conf    /soft/redis-4.0.2/zzt_cluster/1/zzt_redis_cluster_6391.conf
mv /soft/redis-4.0.2/zzt_redis_cluster_6392.conf    /soft/redis-4.0.2/zzt_cluster/2/zzt_redis_cluster_6392.conf
mv /soft/redis-4.0.2/zzt_redis_cluster_6393.conf    /soft/redis-4.0.2/zzt_cluster/3/zzt_redis_cluster_6393.conf
mv /soft/redis-4.0.2/zzt_redis_cluster_6394.conf    /soft/redis-4.0.2/zzt_cluster/4/zzt_redis_cluster_6394.conf
mv /soft/redis-4.0.2/zzt_redis_cluster_6395.conf    /soft/redis-4.0.2/zzt_cluster/5/zzt_redis_cluster_6395.conf
mv /soft/redis-4.0.2/zzt_redis_cluster_6396.conf    /soft/redis-4.0.2/zzt_cluster/6/zzt_redis_cluster_6396.conf
tree /soft/redis-4.0.2/zzt_cluster/

★ Install cluster dependencies
※ Notes
The cluster script "redis-trib.rb" is written in Ruby, so the Ruby toolchain must be installed first
Ruby 2.2 or later is reportedly required
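As a quick guard, the version requirement can be checked before proceeding. The helper name ruby_ok is made up for this sketch:

```shell
# ruby_ok VERSION — succeed if VERSION is 2.2 or newer (hypothetical helper).
ruby_ok() {
  maj=${1%%.*}
  rest=${1#*.}
  min=${rest%%.*}
  [ "$maj" -gt 2 ] || { [ "$maj" -eq 2 ] && [ "$min" -ge 2 ]; }
}
# usage: ruby_ok "$(ruby -e 'print RUBY_VERSION')" || echo "Ruby is too old for redis-trib.rb"
```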

※ Build Ruby from source
# Compile and install (run from /soft, where the tarball lives)
cd /soft
tar xvf ruby-2.6.6.tar.gz
cd /soft/ruby-2.6.6
./configure 
make && make install
ruby -v

※ Install RubyGems (a package-management framework for Ruby)
cd /soft
tar xvf rubygems-3.2.12.tgz
cd /soft/rubygems-3.2.12/
ruby setup.rb
gem -v

※ Install the Ruby Redis client used to manage Redis
cd /soft/
gem install -l redis-4.0.2.gem 


★ Create the cluster
※ Architecture (6 nodes: 3 masters + 3 replicas)

※ Start the 6 Redis nodes
# Nodes running in cluster mode refuse standalone reads/writes until the cluster is formed; they reply: (error) CLUSTERDOWN Hash slot not served
pkill redis
/soft/redis-4.0.2/src/redis-server /soft/redis-4.0.2/zzt_cluster/1/zzt_redis_cluster_6391.conf &
/soft/redis-4.0.2/src/redis-server /soft/redis-4.0.2/zzt_cluster/2/zzt_redis_cluster_6392.conf &
/soft/redis-4.0.2/src/redis-server /soft/redis-4.0.2/zzt_cluster/3/zzt_redis_cluster_6393.conf &
/soft/redis-4.0.2/src/redis-server /soft/redis-4.0.2/zzt_cluster/4/zzt_redis_cluster_6394.conf &
/soft/redis-4.0.2/src/redis-server /soft/redis-4.0.2/zzt_cluster/5/zzt_redis_cluster_6395.conf &
/soft/redis-4.0.2/src/redis-server /soft/redis-4.0.2/zzt_cluster/6/zzt_redis_cluster_6396.conf &
ps -ef|grep redis

※ Configure the cluster
# The cluster management tool ships in Redis's src directory
/soft/redis-4.0.2/src/redis-trib.rb create --replicas 1 127.0.0.1:6391 127.0.0.1:6392 127.0.0.1:6393 127.0.0.1:6394 127.0.0.1:6395 127.0.0.1:6396
# Type "yes" at the prompt to finish


★ Verify the cluster
※ Check status
/soft/redis-4.0.2/src/redis-trib.rb check 127.0.0.1:6391
/soft/redis-4.0.2/src/redis-trib.rb info 127.0.0.1:6391
echo "info replication" | /soft/redis-4.0.2/src/redis-cli -p 6391 -c
echo "cluster nodes" | /soft/redis-4.0.2/src/redis-cli -p 6391 -c
echo "cluster info" | /soft/redis-4.0.2/src/redis-cli -p 6391 -c


※ Data test: a "Redirected to slot xxx located at xxx" reply means the key was successfully routed across the cluster
# Data can be read and written through any node in the cluster
echo "set zzt_01 'hello zzt friend ^-^'" | /soft/redis-4.0.2/src/redis-cli -p 6396 -c
echo "get zzt_01" | /soft/redis-4.0.2/src/redis-cli -p 6391 -c
echo "get zzt_01" | /soft/redis-4.0.2/src/redis-cli -p 6392 -c
echo "get zzt_01" | /soft/redis-4.0.2/src/redis-cli -p 6393 -c
echo "get zzt_01" | /soft/redis-4.0.2/src/redis-cli -p 6394 -c
echo "get zzt_01" | /soft/redis-4.0.2/src/redis-cli -p 6395 -c
echo "get zzt_01" | /soft/redis-4.0.2/src/redis-cli -p 6396 -c
echo "save" | /soft/redis-4.0.2/src/redis-cli -p 6396 -c
# Clean up the test data
echo "del zzt_01" | /soft/redis-4.0.2/src/redis-cli -p 6396 -c
echo "get zzt_01" | /soft/redis-4.0.2/src/redis-cli -p 6396 -c
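The slot number in the "Redirected to slot ..." reply can be reproduced offline: Redis maps each key to one of 16384 slots using CRC16 (the XMODEM variant) modulo 16384. A minimal sketch follows; keyslot is a made-up name, and it ignores {hash tag} handling for simplicity.

```shell
# keyslot KEY — compute the Redis Cluster hash slot of KEY:
# CRC16/XMODEM (poly 0x1021, init 0) of the key bytes, mod 16384.
keyslot() {
  key=$1
  crc=0
  while [ -n "$key" ]; do
    rest=${key#?}           # strip first character
    ch=${key%"$rest"}       # the first character
    key=$rest
    c=$(printf '%d' "'$ch") # its byte value
    crc=$((crc ^ (c << 8)))
    b=0
    while [ $b -lt 8 ]; do
      if [ $((crc & 0x8000)) -ne 0 ]; then
        crc=$(( ((crc << 1) ^ 0x1021) & 0xFFFF ))
      else
        crc=$(( (crc << 1) & 0xFFFF ))
      fi
      b=$((b + 1))
    done
  done
  echo $((crc % 16384))
}
# usage: keyslot zzt_01   (compare with: redis-cli -p 6391 cluster keyslot zzt_01)
```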


★ Shut down the cluster
# Quick
pkill redis
# Graceful
echo "shutdown" | /soft/redis-4.0.2/src/redis-cli -p 6391 -c
echo "shutdown" | /soft/redis-4.0.2/src/redis-cli -p 6392 -c
echo "shutdown" | /soft/redis-4.0.2/src/redis-cli -p 6393 -c
echo "shutdown" | /soft/redis-4.0.2/src/redis-cli -p 6394 -c
echo "shutdown" | /soft/redis-4.0.2/src/redis-cli -p 6395 -c
echo "shutdown" | /soft/redis-4.0.2/src/redis-cli -p 6396 -c

★ Test: replica node down
# In cluster mode a downed replica does not affect normal operation; once restarted it rejoins the cluster as a replica and automatically resyncs from its master
# Test commands
/soft/redis-4.0.2/src/redis-trib.rb check 127.0.0.1:6392 |grep 127 |sort
echo "shutdown" | /soft/redis-4.0.2/src/redis-cli -p 6391 -c
/soft/redis-4.0.2/src/redis-trib.rb check 127.0.0.1:6392 |grep 127 |sort
/soft/redis-4.0.2/src/redis-server /soft/redis-4.0.2/zzt_cluster/1/zzt_redis_cluster_6391.conf &
/soft/redis-4.0.2/src/redis-trib.rb check 127.0.0.1:6392 |grep 127 |sort
echo "get zzt_01" | /soft/redis-4.0.2/src/redis-cli -p 6396 -c


★ Test: master node down
# In cluster mode a downed master does not stop the cluster; one of its replicas is automatically promoted to master, restoring the N-master layout
# Test commands
/soft/redis-4.0.2/src/redis-trib.rb check 127.0.0.1:6391 |grep 127 |sort
echo "shutdown" | /soft/redis-4.0.2/src/redis-cli -p 6392 -c
/soft/redis-4.0.2/src/redis-trib.rb check 127.0.0.1:6391 |grep 127 |sort
/soft/redis-4.0.2/src/redis-server /soft/redis-4.0.2/zzt_cluster/2/zzt_redis_cluster_6392.conf &
/soft/redis-4.0.2/src/redis-trib.rb check 127.0.0.1:6391 |grep 127 |sort
echo "get zzt_01" | /soft/redis-4.0.2/src/redis-cli -p 6396 -c


★ Test: a master and its replica down together
# If a master and its own replica are both down, the cluster stops serving requests, but no data is lost; restart the pair and they rejoin automatically, returning the cluster to normal
# Show node roles and the master/replica pairing
echo "cluster nodes" | /soft/redis-4.0.2/src/redis-cli -p 6396 -c |sort --key=3
  ##Current state
    c281af3e3ec3156bd78ac2b55c9132d20a5106e8 127.0.0.1:6395@16395 master - 0 1614760783000 10 connected 5461-10922
    67d7c1a1e0f38041ae9bba2ebb181de65ccddc2d 127.0.0.1:6393@16393 master - 0 1614760783227 3 connected 10923-16383
    e31ab666f8b305c75a5dc9d88376c85830b5afb5 127.0.0.1:6394@16394 master - 0 1614760785242 8 connected 0-5460
    0ad2962f55949e0d63245fcd6e13099c6683cec5 127.0.0.1:6396@16396 myself,slave 67d7c1a1e0f38041ae9bba2ebb181de65ccddc2d 0 1614760784000 6 connected
    17405eeae5d972714e8d7384c3b1007bbe332ba7 127.0.0.1:6392@16392 slave c281af3e3ec3156bd78ac2b55c9132d20a5106e8 0 1614760782220 10 connected
    b9e2c4601f4d861063dc0101530e0014d851f863 127.0.0.1:6391@16391 slave e31ab666f8b305c75a5dc9d88376c85830b5afb5 0 1614760784235 8 connected
  ##Current master -> replica pairing
    m6393 > s6396
    m6394 > s6391
    m6395 > s6392
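The pairing above was read off the cluster nodes output by eye. For reference, the same mapping can be derived automatically; this is a sketch, and the helper name pairs is invented here:

```shell
# pairs — read "cluster nodes" output on stdin and print one
# "masterport <- replicaport" line per master (hypothetical helper).
pairs() {
  awk '
    # field 2 is ip:port@cport, field 3 the flags, field 4 the master id
    $3 ~ /master/ { split($2, a, /[:@]/); port[$1] = a[2] }
    $3 ~ /slave/  { split($2, a, /[:@]/); rep[$4] = rep[$4] " " a[2] }
    END { for (id in port) print port[id] " <-" rep[id] }
  ' | sort
}
# usage: echo "cluster nodes" | /soft/redis-4.0.2/src/redis-cli -p 6396 -c | pairs
```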
# Stop one master together with its replica
echo "shutdown" | /soft/redis-4.0.2/src/redis-cli -p 6392 -c
echo "shutdown" | /soft/redis-4.0.2/src/redis-cli -p 6395 -c
# Check the cluster state
echo "cluster nodes" | /soft/redis-4.0.2/src/redis-cli -p 6396 -c |sort --key=3
    67d7c1a1e0f38041ae9bba2ebb181de65ccddc2d 127.0.0.1:6393@16393 master - 0 1614760911430 3 connected 10923-16383
    e31ab666f8b305c75a5dc9d88376c85830b5afb5 127.0.0.1:6394@16394 master - 0 1614760913450 8 connected 0-5460
    c281af3e3ec3156bd78ac2b55c9132d20a5106e8 127.0.0.1:6395@16395 master - 1614760901018 1614760898000 10 disconnected 5461-10922
    0ad2962f55949e0d63245fcd6e13099c6683cec5 127.0.0.1:6396@16396 myself,slave 67d7c1a1e0f38041ae9bba2ebb181de65ccddc2d 0 1614760911000 6 connected
    17405eeae5d972714e8d7384c3b1007bbe332ba7 127.0.0.1:6392@16392 slave c281af3e3ec3156bd78ac2b55c9132d20a5106e8 1614760901018 1614760899000 10 disconnected
    b9e2c4601f4d861063dc0101530e0014d851f863 127.0.0.1:6391@16391 slave e31ab666f8b305c75a5dc9d88376c85830b5afb5 0 1614760909000 8 connected
# Check the cluster state
/soft/redis-4.0.2/src/redis-trib.rb info 127.0.0.1:6391
    127.0.0.1:6394 (e31ab666...) -> 1 keys | 5461 slots | 1 slaves.
    127.0.0.1:6393 (67d7c1a1...) -> 0 keys | 5461 slots | 1 slaves.
    [OK] 1 keys in 2 masters.
    0.00 keys per slot on average.
# The cluster state is "fail"; it cannot serve requests
echo "cluster info" | /soft/redis-4.0.2/src/redis-cli -p 6391 -c |grep cluster_state
    cluster_state:fail
# Bring the stopped master/replica pair back up
/soft/redis-4.0.2/src/redis-server /soft/redis-4.0.2/zzt_cluster/2/zzt_redis_cluster_6392.conf &
/soft/redis-4.0.2/src/redis-server /soft/redis-4.0.2/zzt_cluster/5/zzt_redis_cluster_6395.conf &
# Check the cluster state: back to normal
echo "cluster info" | /soft/redis-4.0.2/src/redis-cli -p 6391 -c |grep cluster_state
    cluster_state:ok
# Check the cluster state: back to normal
echo "cluster nodes" | /soft/redis-4.0.2/src/redis-cli -p 6396 -c |sort --key=3
    c281af3e3ec3156bd78ac2b55c9132d20a5106e8 127.0.0.1:6395@16395 master - 0 1614762089000 10 connected 5461-10922
    67d7c1a1e0f38041ae9bba2ebb181de65ccddc2d 127.0.0.1:6393@16393 master - 0 1614762090269 3 connected 10923-16383
    e31ab666f8b305c75a5dc9d88376c85830b5afb5 127.0.0.1:6394@16394 master - 0 1614762091278 8 connected 0-5460
    0ad2962f55949e0d63245fcd6e13099c6683cec5 127.0.0.1:6396@16396 myself,slave 67d7c1a1e0f38041ae9bba2ebb181de65ccddc2d 0 1614762088000 6 connected
    17405eeae5d972714e8d7384c3b1007bbe332ba7 127.0.0.1:6392@16392 slave c281af3e3ec3156bd78ac2b55c9132d20a5106e8 0 1614762090000 10 connected
    b9e2c4601f4d861063dc0101530e0014d851f863 127.0.0.1:6391@16391 slave e31ab666f8b305c75a5dc9d88376c85830b5afb5 0 1614762088000 8 connected
# Check the cluster state: back to normal
/soft/redis-4.0.2/src/redis-trib.rb info 127.0.0.1:6391
    127.0.0.1:6394 (e31ab666...) -> 1 keys | 5461 slots | 1 slaves.
    127.0.0.1:6393 (67d7c1a1...) -> 0 keys | 5461 slots | 1 slaves.
    127.0.0.1:6395 (c281af3e...) -> 0 keys | 5462 slots | 1 slaves.
    [OK] 1 keys in 3 masters.
    0.00 keys per slot on average.
# Test finished!
 

★ Sample output

◆ Installing RubyGems
[root@main rubygems-3.2.12]# ruby setup.rb
  Successfully built RubyGem
  Name: bundler
  Version: 2.2.12
  File: bundler-2.2.12.gem
Bundler 2.2.12 installed
RubyGems 3.2.12 installed
Regenerating binstubs
Regenerating plugins
Parsing documentation for rubygems-3.2.12
Installing ri documentation for rubygems-3.2.12

...(RubyGems changelog output trimmed)...

------------------------------------------------------------------------------

RubyGems installed the following executables:
	/usr/local/bin/gem
	/usr/local/bin/bundle
	/usr/local/bin/bundler

Ruby Interactive (ri) documentation was installed. ri is kind of like man 
pages for Ruby libraries. You may access it like this:
  ri Classname
  ri Classname.class_method
  ri Classname#instance_method
If you do not wish to install this documentation in the future, use the
--no-document flag, or set it as the default in your ~/.gemrc file. See
'gem help env' for details.


[root@main rubygems-3.2.12]# gem -v
3.2.12




◆ Installing the Ruby Redis client
[root@main soft]# gem install -l redis-4.0.2.gem 
Successfully installed redis-4.0.2
Parsing documentation for redis-4.0.2
Installing ri documentation for redis-4.0.2
Done installing documentation for redis after 0 seconds
1 gem installed



◆ Checking the Redis processes
[root@main redis-4.0.2]# ps -ef|grep redis
root     124538  66880  0 14:02 pts/1    00:00:00 /soft/redis-4.0.2/src/redis-server 127.0.0.1:6391 [cluster]    
root     124539  66880  0 14:02 pts/1    00:00:00 /soft/redis-4.0.2/src/redis-server 127.0.0.1:6392 [cluster]    
root     124540  66880  0 14:02 pts/1    00:00:00 /soft/redis-4.0.2/src/redis-server 127.0.0.1:6393 [cluster]    
root     124541  66880  0 14:02 pts/1    00:00:00 /soft/redis-4.0.2/src/redis-server 127.0.0.1:6394 [cluster]    
root     124545  66880  0 14:02 pts/1    00:00:00 /soft/redis-4.0.2/src/redis-server 127.0.0.1:6395 [cluster]    
root     124546  66880  0 14:02 pts/1    00:00:00 /soft/redis-4.0.2/src/redis-server 127.0.0.1:6396 [cluster]    
root     124581  66880  0 14:02 pts/1    00:00:00 grep redis




◆ Configuring the cluster
[root@main redis-4.0.2]# /soft/redis-4.0.2/src/redis-trib.rb create --replicas 1 127.0.0.1:6391 127.0.0.1:6392 127.0.0.1:6393 127.0.0.1:6394 127.0.0.1:6395 127.0.0.1:6396
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
127.0.0.1:6391
127.0.0.1:6392
127.0.0.1:6393
Adding replica 127.0.0.1:6394 to 127.0.0.1:6391
Adding replica 127.0.0.1:6395 to 127.0.0.1:6392
Adding replica 127.0.0.1:6396 to 127.0.0.1:6393
M: b9e2c4601f4d861063dc0101530e0014d851f863 127.0.0.1:6391
   slots:0-5460 (5461 slots) master
M: 17405eeae5d972714e8d7384c3b1007bbe332ba7 127.0.0.1:6392
   slots:5461-10922 (5462 slots) master
M: 67d7c1a1e0f38041ae9bba2ebb181de65ccddc2d 127.0.0.1:6393
   slots:10923-16383 (5461 slots) master
S: e31ab666f8b305c75a5dc9d88376c85830b5afb5 127.0.0.1:6394
   replicates b9e2c4601f4d861063dc0101530e0014d851f863
S: c281af3e3ec3156bd78ac2b55c9132d20a5106e8 127.0.0.1:6395
   replicates 17405eeae5d972714e8d7384c3b1007bbe332ba7
S: 0ad2962f55949e0d63245fcd6e13099c6683cec5 127.0.0.1:6396
   replicates 67d7c1a1e0f38041ae9bba2ebb181de65ccddc2d
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
124538:M 03 Mar 14:02:56.617 # configEpoch set to 1 via CLUSTER SET-CONFIG-EPOCH
124539:M 03 Mar 14:02:56.618 # configEpoch set to 2 via CLUSTER SET-CONFIG-EPOCH
124540:M 03 Mar 14:02:56.618 # configEpoch set to 3 via CLUSTER SET-CONFIG-EPOCH
124541:M 03 Mar 14:02:56.619 # configEpoch set to 4 via CLUSTER SET-CONFIG-EPOCH
124545:M 03 Mar 14:02:56.619 # configEpoch set to 5 via CLUSTER SET-CONFIG-EPOCH
124546:M 03 Mar 14:02:56.620 # configEpoch set to 6 via CLUSTER SET-CONFIG-EPOCH
>>> Sending CLUSTER MEET messages to join the cluster
124538:M 03 Mar 14:02:56.643 # IP address for this node updated to 127.0.0.1
124546:M 03 Mar 14:02:56.722 # IP address for this node updated to 127.0.0.1
124539:M 03 Mar 14:02:56.722 # IP address for this node updated to 127.0.0.1
124540:M 03 Mar 14:02:56.722 # IP address for this node updated to 127.0.0.1
124545:M 03 Mar 14:02:56.723 # IP address for this node updated to 127.0.0.1
124541:M 03 Mar 14:02:56.724 # IP address for this node updated to 127.0.0.1
Waiting for the cluster to join....124538:M 03 Mar 14:03:01.568 # Cluster state changed: ok
124540:M 03 Mar 14:03:01.593 # Cluster state changed: ok
124539:M 03 Mar 14:03:01.613 # Cluster state changed: ok
.124541:M 03 Mar 14:03:01.690 # Cluster state changed: ok
124546:M 03 Mar 14:03:01.692 # Cluster state changed: ok
....124545:M 03 Mar 14:03:06.541 # Cluster state changed: ok

124541:S 03 Mar 14:03:06.694 * Before turning into a slave, using my master parameters to synthesize a cached master: I may be able to synchronize with the new master with just a partial transfer.
124545:S 03 Mar 14:03:06.695 * Before turning into a slave, using my master parameters to synthesize a cached master: I may be able to synchronize with the new master with just a partial transfer.
124546:S 03 Mar 14:03:06.696 * Before turning into a slave, using my master parameters to synthesize a cached master: I may be able to synchronize with the new master with just a partial transfer.
>>> Performing Cluster Check (using node 127.0.0.1:6391)
M: b9e2c4601f4d861063dc0101530e0014d851f863 127.0.0.1:6391
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: 17405eeae5d972714e8d7384c3b1007bbe332ba7 127.0.0.1:6392
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
M: 67d7c1a1e0f38041ae9bba2ebb181de65ccddc2d 127.0.0.1:6393
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 0ad2962f55949e0d63245fcd6e13099c6683cec5 127.0.0.1:6396
   slots: (0 slots) slave
   replicates 67d7c1a1e0f38041ae9bba2ebb181de65ccddc2d
S: e31ab666f8b305c75a5dc9d88376c85830b5afb5 127.0.0.1:6394
   slots: (0 slots) slave
   replicates b9e2c4601f4d861063dc0101530e0014d851f863
S: c281af3e3ec3156bd78ac2b55c9132d20a5106e8 127.0.0.1:6395
   slots: (0 slots) slave
   replicates 17405eeae5d972714e8d7384c3b1007bbe332ba7
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
[root@main redis-4.0.2]# 124541:S 03 Mar 14:03:07.647 * Connecting to MASTER 127.0.0.1:6391
124541:S 03 Mar 14:03:07.647 * MASTER <-> SLAVE sync started
124541:S 03 Mar 14:03:07.647 * Non blocking connect for SYNC fired the event.
124546:S 03 Mar 14:03:07.647 * Connecting to MASTER 127.0.0.1:6393
124546:S 03 Mar 14:03:07.648 * MASTER <-> SLAVE sync started
124546:S 03 Mar 14:03:07.648 * Non blocking connect for SYNC fired the event.
124541:S 03 Mar 14:03:07.648 * Master replied to PING, replication can continue...
124546:S 03 Mar 14:03:07.648 * Master replied to PING, replication can continue...
124541:S 03 Mar 14:03:07.648 * Trying a partial resynchronization (request cd0ab8e342d43d5b53dc8736821e4c7a719adf3f:1).
124538:M 03 Mar 14:03:07.649 * Slave 127.0.0.1:6394 asks for synchronization
124538:M 03 Mar 14:03:07.649 * Partial resynchronization not accepted: Replication ID mismatch (Slave asked for 'cd0ab8e342d43d5b53dc8736821e4c7a719adf3f', my replication IDs are '2dbc5a3314b94529ea0395c396f633392160e5bb' and '0000000000000000000000000000000000000000')
124538:M 03 Mar 14:03:07.649 * Starting BGSAVE for SYNC with target: disk
124538:M 03 Mar 14:03:07.649 * Background saving started by pid 124638
124541:S 03 Mar 14:03:07.650 * Full resync from master: 45edd21c9035b313e69bcd31bc3a2d5f78ddf51d:0
124541:S 03 Mar 14:03:07.650 * Discarding previously cached master state.
124546:S 03 Mar 14:03:07.650 * Trying a partial resynchronization (request 3668e7fce535784d66ffaafb224497183d174bf7:1).
124540:M 03 Mar 14:03:07.650 * Slave 127.0.0.1:6396 asks for synchronization
124540:M 03 Mar 14:03:07.650 * Partial resynchronization not accepted: Replication ID mismatch (Slave asked for '3668e7fce535784d66ffaafb224497183d174bf7', my replication IDs are '0c8f1537ac99ea66172cb59385f8c3de329448d3' and '0000000000000000000000000000000000000000')
124540:M 03 Mar 14:03:07.650 * Starting BGSAVE for SYNC with target: disk
124540:M 03 Mar 14:03:07.651 * Background saving started by pid 124639
124546:S 03 Mar 14:03:07.652 * Full resync from master: 3b8eb1ddb3df00407f29ba692c874da9f6bee723:0
124546:S 03 Mar 14:03:07.652 * Discarding previously cached master state.
124638:C 03 Mar 14:03:07.653 * DB saved on disk
124638:C 03 Mar 14:03:07.654 * RDB: 0 MB of memory used by copy-on-write
124545:S 03 Mar 14:03:07.669 * Connecting to MASTER 127.0.0.1:6392
124545:S 03 Mar 14:03:07.669 * MASTER <-> SLAVE sync started
124545:S 03 Mar 14:03:07.669 * Non blocking connect for SYNC fired the event.
124545:S 03 Mar 14:03:07.670 * Master replied to PING, replication can continue...
124639:C 03 Mar 14:03:07.673 * DB saved on disk
124545:S 03 Mar 14:03:07.673 * Trying a partial resynchronization (request 054e197e043e1a0706ee08519b7c497902ac4eee:1).
124639:C 03 Mar 14:03:07.674 * RDB: 0 MB of memory used by copy-on-write
124539:M 03 Mar 14:03:07.678 * Slave 127.0.0.1:6395 asks for synchronization
124539:M 03 Mar 14:03:07.678 * Partial resynchronization not accepted: Replication ID mismatch (Slave asked for '054e197e043e1a0706ee08519b7c497902ac4eee', my replication IDs are '49d230d1ba7b105e795b694dd38bbb8bcab3fdf3' and '0000000000000000000000000000000000000000')
124539:M 03 Mar 14:03:07.679 * Starting BGSAVE for SYNC with target: disk
124539:M 03 Mar 14:03:07.679 * Background saving started by pid 124644
124545:S 03 Mar 14:03:07.680 * Full resync from master: ec08c4e02376cf2ce5aeae8ab456ab2699efdcf1:0
124545:S 03 Mar 14:03:07.680 * Discarding previously cached master state.
124644:C 03 Mar 14:03:07.692 * DB saved on disk
124644:C 03 Mar 14:03:07.692 * RDB: 0 MB of memory used by copy-on-write
124538:M 03 Mar 14:03:07.729 * Background saving terminated with success
124538:M 03 Mar 14:03:07.729 * Synchronization with slave 127.0.0.1:6394 succeeded
124541:S 03 Mar 14:03:07.729 * MASTER <-> SLAVE sync: receiving 175 bytes from master
124541:S 03 Mar 14:03:07.729 * MASTER <-> SLAVE sync: Flushing old data
124541:S 03 Mar 14:03:07.729 * MASTER <-> SLAVE sync: Loading DB in memory
124541:S 03 Mar 14:03:07.729 * MASTER <-> SLAVE sync: Finished with success
124541:S 03 Mar 14:03:07.729 * Background append only file rewriting started by pid 124645
124540:M 03 Mar 14:03:07.751 * Background saving terminated with success
124540:M 03 Mar 14:03:07.751 * Synchronization with slave 127.0.0.1:6396 succeeded
124546:S 03 Mar 14:03:07.751 * MASTER <-> SLAVE sync: receiving 175 bytes from master
124546:S 03 Mar 14:03:07.751 * MASTER <-> SLAVE sync: Flushing old data
124546:S 03 Mar 14:03:07.751 * MASTER <-> SLAVE sync: Loading DB in memory
124546:S 03 Mar 14:03:07.751 * MASTER <-> SLAVE sync: Finished with success
124546:S 03 Mar 14:03:07.751 * Background append only file rewriting started by pid 124646
124541:S 03 Mar 14:03:07.769 * AOF rewrite child asks to stop sending diffs.
124645:C 03 Mar 14:03:07.769 * Parent agreed to stop sending diffs. Finalizing AOF...
124645:C 03 Mar 14:03:07.769 * Concatenating 0.00 MB of AOF diff received from parent.
124645:C 03 Mar 14:03:07.769 * SYNC append only file rewrite performed
124645:C 03 Mar 14:03:07.769 * AOF rewrite: 0 MB of memory used by copy-on-write
124539:M 03 Mar 14:03:07.770 * Background saving terminated with success
124539:M 03 Mar 14:03:07.770 * Synchronization with slave 127.0.0.1:6395 succeeded
124545:S 03 Mar 14:03:07.770 * MASTER <-> SLAVE sync: receiving 175 bytes from master
124545:S 03 Mar 14:03:07.770 * MASTER <-> SLAVE sync: Flushing old data
124545:S 03 Mar 14:03:07.770 * MASTER <-> SLAVE sync: Loading DB in memory
124545:S 03 Mar 14:03:07.770 * MASTER <-> SLAVE sync: Finished with success
124545:S 03 Mar 14:03:07.770 * Background append only file rewriting started by pid 124647
124546:S 03 Mar 14:03:07.788 * AOF rewrite child asks to stop sending diffs.
124646:C 03 Mar 14:03:07.788 * Parent agreed to stop sending diffs. Finalizing AOF...
124646:C 03 Mar 14:03:07.788 * Concatenating 0.00 MB of AOF diff received from parent.
124646:C 03 Mar 14:03:07.788 * SYNC append only file rewrite performed
124646:C 03 Mar 14:03:07.788 * AOF rewrite: 0 MB of memory used by copy-on-write
124545:S 03 Mar 14:03:07.808 * AOF rewrite child asks to stop sending diffs.
124647:C 03 Mar 14:03:07.808 * Parent agreed to stop sending diffs. Finalizing AOF...
124647:C 03 Mar 14:03:07.808 * Concatenating 0.00 MB of AOF diff received from parent.
124647:C 03 Mar 14:03:07.808 * SYNC append only file rewrite performed
124647:C 03 Mar 14:03:07.808 * AOF rewrite: 0 MB of memory used by copy-on-write
124541:S 03 Mar 14:03:07.849 * Background AOF rewrite terminated with success
124541:S 03 Mar 14:03:07.849 * Residual parent diff successfully flushed to the rewritten AOF (0.00 MB)
124541:S 03 Mar 14:03:07.849 * Background AOF rewrite finished successfully
124546:S 03 Mar 14:03:07.849 * Background AOF rewrite terminated with success
124546:S 03 Mar 14:03:07.849 * Residual parent diff successfully flushed to the rewritten AOF (0.00 MB)
124546:S 03 Mar 14:03:07.849 * Background AOF rewrite finished successfully
124545:S 03 Mar 14:03:07.871 * Background AOF rewrite terminated with success
124545:S 03 Mar 14:03:07.871 * Residual parent diff successfully flushed to the rewritten AOF (0.00 MB)
124545:S 03 Mar 14:03:07.871 * Background AOF rewrite finished successfully

[root@main redis-4.0.2]# 



◆ Cluster status
[root@main ~]# /soft/redis-4.0.2/src/redis-trib.rb check 127.0.0.1:6391
>>> Performing Cluster Check (using node 127.0.0.1:6391)
M: b9e2c4601f4d861063dc0101530e0014d851f863 127.0.0.1:6391
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: 17405eeae5d972714e8d7384c3b1007bbe332ba7 127.0.0.1:6392
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
M: 67d7c1a1e0f38041ae9bba2ebb181de65ccddc2d 127.0.0.1:6393
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 0ad2962f55949e0d63245fcd6e13099c6683cec5 127.0.0.1:6396
   slots: (0 slots) slave
   replicates 67d7c1a1e0f38041ae9bba2ebb181de65ccddc2d
S: e31ab666f8b305c75a5dc9d88376c85830b5afb5 127.0.0.1:6394
   slots: (0 slots) slave
   replicates b9e2c4601f4d861063dc0101530e0014d851f863
S: c281af3e3ec3156bd78ac2b55c9132d20a5106e8 127.0.0.1:6395
   slots: (0 slots) slave
   replicates 17405eeae5d972714e8d7384c3b1007bbe332ba7
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
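The "All 16384 slots covered" line above can also be verified straight from the raw `cluster nodes` output. The following sketch sums the slot ranges owned by the masters; the `count_slots` helper and the shortened node IDs (`id1`…`id3`) are illustrative, not part of the Redis toolchain:

```shell
#!/bin/sh
# count_slots: read "cluster nodes" output on stdin and sum the slot
# ranges owned by master nodes (fields 9 onward hold ranges like 0-5460).
count_slots() {
  awk '$3 ~ /master/ {
    for (i = 9; i <= NF; i++) {
      n = split($i, r, "-")
      total += (n == 2) ? r[2] - r[1] + 1 : 1
    }
  } END { print total + 0 }'
}

# Feed the three master lines from the sample output in this document:
printf '%s\n' \
  'id1 127.0.0.1:6391@16391 myself,master - 0 0 1 connected 0-5460' \
  'id2 127.0.0.1:6392@16392 master - 0 0 2 connected 5461-10922' \
  'id3 127.0.0.1:6393@16393 master - 0 0 3 connected 10923-16383' \
  | count_slots     # prints 16384 when every slot is assigned
```

Against a live cluster you would pipe `echo "cluster nodes" | /soft/redis-4.0.2/src/redis-cli -p 6391` into the helper instead of the sample lines.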



[root@main ~]# /soft/redis-4.0.2/src/redis-trib.rb info 127.0.0.1:6391
127.0.0.1:6391 (b9e2c460...) -> 0 keys | 5461 slots | 1 slaves.
127.0.0.1:6392 (17405eea...) -> 0 keys | 5462 slots | 1 slaves.
127.0.0.1:6393 (67d7c1a1...) -> 0 keys | 5461 slots | 1 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.



[root@main ~]# echo "cluster nodes" | /soft/redis-4.0.2/src/redis-cli -p 6391
17405eeae5d972714e8d7384c3b1007bbe332ba7 127.0.0.1:6392@16392 master - 0 1614753544000 2 connected 5461-10922
67d7c1a1e0f38041ae9bba2ebb181de65ccddc2d 127.0.0.1:6393@16393 master - 0 1614753544000 3 connected 10923-16383
0ad2962f55949e0d63245fcd6e13099c6683cec5 127.0.0.1:6396@16396 slave 67d7c1a1e0f38041ae9bba2ebb181de65ccddc2d 0 1614753545461 6 connected
e31ab666f8b305c75a5dc9d88376c85830b5afb5 127.0.0.1:6394@16394 slave b9e2c4601f4d861063dc0101530e0014d851f863 0 1614753543000 4 connected
c281af3e3ec3156bd78ac2b55c9132d20a5106e8 127.0.0.1:6395@16395 slave 17405eeae5d972714e8d7384c3b1007bbe332ba7 0 1614753544451 5 connected
b9e2c4601f4d861063dc0101530e0014d851f863 127.0.0.1:6391@16391 myself,master - 0 1614753543000 1 connected 0-5460



[root@main ~]# echo "cluster info" | /soft/redis-4.0.2/src/redis-cli -p 6391 -c
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:4767
cluster_stats_messages_pong_sent:4362
cluster_stats_messages_sent:9129
cluster_stats_messages_ping_received:4357
cluster_stats_messages_pong_received:4767
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:9129
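For monitoring, `cluster_state` is the field in the output above worth scripting against. A minimal sketch (the `cluster_healthy` helper name is my own):

```shell
#!/bin/sh
# cluster_healthy: read "cluster info" output on stdin; report ok only
# when cluster_state is ok.
cluster_healthy() {
  grep -q '^cluster_state:ok' && echo "cluster healthy" || echo "cluster DOWN"
}

# Against the sample output above:
printf 'cluster_state:ok\ncluster_slots_assigned:16384\n' | cluster_healthy
# prints: cluster healthy
```

In a live check, note that `cluster info` lines may carry a trailing carriage return; stripping it first with `... | tr -d '\r' | cluster_healthy` is safer when comparing full field values.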




◆ Cluster master-slave failover example
[root@main 1]# /soft/redis-4.0.2/src/redis-trib.rb check 127.0.0.1:6392 |grep 127 |sort
>>> Performing Cluster Check (using node 127.0.0.1:6392)
M: 17405eeae5d972714e8d7384c3b1007bbe332ba7 127.0.0.1:6392
M: 67d7c1a1e0f38041ae9bba2ebb181de65ccddc2d 127.0.0.1:6393
M: e31ab666f8b305c75a5dc9d88376c85830b5afb5 127.0.0.1:6394
S: 0ad2962f55949e0d63245fcd6e13099c6683cec5 127.0.0.1:6396
S: b9e2c4601f4d861063dc0101530e0014d851f863 127.0.0.1:6391
S: c281af3e3ec3156bd78ac2b55c9132d20a5106e8 127.0.0.1:6395

[root@main 1]# echo "shutdown" | /soft/redis-4.0.2/src/redis-cli -p 6392 -c
6801:M 03 Mar 15:55:05.306 # User requested shutdown...
6801:M 03 Mar 15:55:05.306 * Calling fsync() on the AOF file.
6801:M 03 Mar 15:55:05.306 * Saving the final RDB snapshot before exiting.
6801:M 03 Mar 15:55:05.318 * DB saved on disk
6801:M 03 Mar 15:55:05.318 * Removing the pid file.
6801:M 03 Mar 15:55:05.318 # Redis is now ready to exit, bye bye...

[root@main 1]# /soft/redis-4.0.2/src/redis-trib.rb check 127.0.0.1:6391 |grep 127 |sort
>>> Performing Cluster Check (using node 127.0.0.1:6391)
M: 67d7c1a1e0f38041ae9bba2ebb181de65ccddc2d 127.0.0.1:6393
M: c281af3e3ec3156bd78ac2b55c9132d20a5106e8 127.0.0.1:6395
M: e31ab666f8b305c75a5dc9d88376c85830b5afb5 127.0.0.1:6394
S: 0ad2962f55949e0d63245fcd6e13099c6683cec5 127.0.0.1:6396
S: b9e2c4601f4d861063dc0101530e0014d851f863 127.0.0.1:6391

[root@main 1]# /soft/redis-4.0.2/src/redis-trib.rb check 127.0.0.1:6391 |grep 127 |sort
>>> Performing Cluster Check (using node 127.0.0.1:6391)
M: 67d7c1a1e0f38041ae9bba2ebb181de65ccddc2d 127.0.0.1:6393
M: c281af3e3ec3156bd78ac2b55c9132d20a5106e8 127.0.0.1:6395
M: e31ab666f8b305c75a5dc9d88376c85830b5afb5 127.0.0.1:6394
S: 0ad2962f55949e0d63245fcd6e13099c6683cec5 127.0.0.1:6396
S: 17405eeae5d972714e8d7384c3b1007bbe332ba7 127.0.0.1:6392
S: b9e2c4601f4d861063dc0101530e0014d851f863 127.0.0.1:6391
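To read the three checks above: shutting down 6392 promoted its slave 6395 to master, and after 6392 was started again (the restart step is implied between the second and third check; e.g. `/soft/redis-4.0.2/src/redis-server /soft/redis-4.0.2/zzt_cluster/2/zzt_redis_cluster_6392.conf`, matching the file tree in the next section) it rejoined the cluster as a slave. A throwaway helper to pull a node's role out of the grepped check output; `role_of` is illustrative, not part of redis-trib:

```shell
#!/bin/sh
# role_of: read "redis-trib.rb check ... | grep 127 | sort" output on
# stdin and print the role letter (M or S) of the node on the given port.
role_of() {
  awk -v p=":$1\$" '$3 ~ p { print substr($1, 1, 1) }'
}

# Against the post-failover check output above:
printf '%s\n' \
  'M: c281af3e3ec3156bd78ac2b55c9132d20a5106e8 127.0.0.1:6395' \
  'S: 17405eeae5d972714e8d7384c3b1007bbe332ba7 127.0.0.1:6392' \
  | role_of 6395   # prints M: 6395 was promoted when 6392 went down
```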




◆ Cluster file tree
[root@main ~]# tree /soft/redis-4.0.2/zzt_cluster/
/soft/redis-4.0.2/zzt_cluster/
├── 1
│   ├── data
│   │   ├── appendonly.aof
│   │   ├── dump.rdb
│   │   └── nodes-6391.conf
│   └── zzt_redis_cluster_6391.conf
├── 2
│   ├── data
│   │   ├── appendonly.aof
│   │   ├── dump.rdb
│   │   └── nodes-6392.conf
│   └── zzt_redis_cluster_6392.conf
├── 3
│   ├── data
│   │   ├── appendonly.aof
│   │   ├── dump.rdb
│   │   └── nodes-6393.conf
│   └── zzt_redis_cluster_6393.conf
├── 4
│   ├── data
│   │   ├── appendonly.aof
│   │   ├── dump.rdb
│   │   └── nodes-6394.conf
│   └── zzt_redis_cluster_6394.conf
├── 5
│   ├── data
│   │   ├── appendonly.aof
│   │   ├── dump.rdb
│   │   └── nodes-6395.conf
│   └── zzt_redis_cluster_6395.conf
└── 6
    ├── data
    │   ├── appendonly.aof
    │   ├── dump.rdb
    │   └── nodes-6396.conf
    └── zzt_redis_cluster_6396.conf

12 directories, 24 files
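The layout above can be sanity-checked once the cluster is up: each `nodes-<port>.conf` is written by Redis itself after the node starts with `cluster-enabled yes`. A small sketch; the `check_layout` helper is my own:

```shell
#!/bin/sh
# check_layout: verify each of the 6 node directories under BASE
# contains its cluster state file nodes-<port>.conf.
check_layout() {
  base=$1
  missing=0
  for i in 1 2 3 4 5 6; do
    port=$((6390 + i))
    if [ ! -f "$base/$i/data/nodes-$port.conf" ]; then
      echo "missing nodes-$port.conf"
      missing=1
    fi
  done
  return $missing
}

# Usage against the tree above:
# check_layout /soft/redis-4.0.2/zzt_cluster && echo "layout OK"
```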


※ If you found this article helpful, don't forget to give the author a like at the end ~

over


Reposted from blog.csdn.net/zzt_2009/article/details/114395642