[Illustrated] Redis 3.2.0 Cluster (Three Masters, Three Slaves): From Setup to a Jedis Coding Test

Preface:
This article builds a Redis 3.2.0 cluster with three masters and three slaves, then tests it in code with JedisCluster.
It is shared openly for the benefit of fellow engineers.


Note:
https://chenssy.blog.csdn.net/article/details/107739262
The cluster in the link above was built with Redis 5.0.3 and could not be operated through Jedis, so this article drops to an older version.


1. Request three CentOS 7.6 ARM64 virtual machines

  • Peng Cheng Laboratory virtual machine application

    Kunpeng ARM64 CentOS 7 virtual machine (for learning)

    Preparation

    IP          hostname  Redis instances
    16.0.0.146  pc146     redis1 (6379, master), redis2 (6380, slave)
    16.0.0.147  pc147     redis1 (6379, master), redis2 (6380, slave)
    16.0.0.148  pc148     redis1 (6379, master), redis2 (6380, slave)

2. Configure the base environment

  • Configure the yum repository (on all three machines)
# Before configuration
[root@localhost yum.repos.d]# yum repolist 
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.bfsu.edu.cn
 * extras: mirrors.bfsu.edu.cn
 * updates: mirrors.bfsu.edu.cn
repo id                                            repo name                                           status
base/7/aarch64                                     CentOS-7 - Base                                     7,622
extras/7/aarch64                                   CentOS-7 - Extras                                     418
updates/7/aarch64                                  CentOS-7 - Updates                                    832
repolist: 8,872
[root@localhost yum.repos.d]#

# During configuration
[root@localhost ~]# cd /etc/yum.repos.d/
[root@localhost yum.repos.d]# ls
CentOS-Base.repo  CentOS-CR.repo  CentOS-Debuginfo.repo  CentOS-fasttrack.repo  CentOS-Media.repo  CentOS-Sources.repo  CentOS-Vault.repo
[root@localhost yum.repos.d]# mkdir yum-bak.repo
[root@localhost yum.repos.d]# mv CentOS-* yum-bak.repo/
[root@localhost yum.repos.d]# ls
yum-bak.repo
[root@localhost yum.repos.d]# vim CentOS-Base-kunpeng.repo
[root@localhost yum.repos.d]# cat CentOS-Base-kunpeng.repo
[kunpeng]
name=CentOS-kunpeng - Base - mirrors.huaweicloud.com
baseurl=https://mirrors.huaweicloud.com/kunpeng/yum/el/7/aarch64/
gpgcheck=0
enabled=1
[root@localhost yum.repos.d]#
[root@localhost yum.repos.d]# yum clean all
Loaded plugins: fastestmirror, langpacks
Cleaning repos: kunpeng
Cleaning up list of fastest mirrors
[root@localhost yum.repos.d]# yum makecache
Loaded plugins: fastestmirror, langpacks
Determining fastest mirrors
kunpeng                                                                               | 2.9 kB  00:00:00     
(1/3): kunpeng/primary_db                                                             |  89 kB  00:00:00     
(2/3): kunpeng/other_db                                                               | 8.3 kB  00:00:00     
(3/3): kunpeng/filelists_db                                                           | 656 kB  00:00:00     
Metadata Cache Created
[root@localhost yum.repos.d]#

# After configuration
[root@localhost yum.repos.d]# yum repolist 
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
repo id                        repo name                                                 status
kunpeng                        CentOS-kunpeng - Base - mirrors.huaweicloud.com             80
repolist: 80
[root@localhost yum.repos.d]#
  • Set the hostname (on all three machines)
[root@localhost ~]# hostname
localhost
[root@localhost ~]# hostnamectl set-hostname pc146
[root@localhost ~]# hostname
pc146
[root@localhost ~]# reboot

# After reboot
[root@pc146 ~]#
[root@pc147 ~]#
[root@pc148 ~]#
  • Configure /etc/hosts (on all three machines)
[root@pc146 ~]# ifconfig 
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 16.0.0.146  netmask 255.255.255.0  broadcast 16.0.0.255
        inet6 fe80::f816:3eff:fe31:bf08  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:31:bf:08  txqueuelen 1000  (Ethernet)
        RX packets 592  bytes 90996 (88.8 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 115  bytes 16755 (16.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 48  bytes 4944 (4.8 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 48  bytes 4944 (4.8 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@pc146 ~]#
[root@pc146 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
[root@pc146 ~]# vim /etc/hosts
[root@pc146 ~]# cat /etc/hosts
# 127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
# ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
16.0.0.146 pc146
16.0.0.147 pc147
16.0.0.148 pc148
[root@pc146 ~]#

# Copy the /etc/hosts entries to the other two machines
[root@pc147 ~]# vim /etc/hosts
[root@pc147 ~]# cat /etc/hosts
# 127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
# ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
16.0.0.146 pc146
16.0.0.147 pc147
16.0.0.148 pc148
[root@pc147 ~]#

[root@pc148 ~]# vim /etc/hosts
[root@pc148 ~]# cat /etc/hosts
# 127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
# ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
16.0.0.146 pc146
16.0.0.147 pc147
16.0.0.148 pc148
[root@pc148 ~]# 
  • Test connectivity

    [root@pc146 ~]# ping pc147
    PING pc147 (16.0.0.147) 56(84) bytes of data.
    64 bytes from pc147 (16.0.0.147): icmp_seq=1 ttl=64 time=0.467 ms
    64 bytes from pc147 (16.0.0.147): icmp_seq=2 ttl=64 time=0.273 ms
    ^C
    --- pc147 ping statistics ---
    2 packets transmitted, 2 received, 0% packet loss, time 1048ms
    rtt min/avg/max/mdev = 0.273/0.370/0.467/0.097 ms
    [root@pc146 ~]# ping pc148
    PING pc148 (16.0.0.148) 56(84) bytes of data.
    64 bytes from pc148 (16.0.0.148): icmp_seq=1 ttl=64 time=0.303 ms
    64 bytes from pc148 (16.0.0.148): icmp_seq=2 ttl=64 time=0.164 ms
    64 bytes from pc148 (16.0.0.148): icmp_seq=3 ttl=64 time=0.191 ms
    ^C
    --- pc148 ping statistics ---
    3 packets transmitted, 3 received, 0% packet loss, time 2088ms
    rtt min/avg/max/mdev = 0.164/0.219/0.303/0.061 ms
    [root@pc146 ~]# 
    
  • Set up passwordless SSH among the three machines (configure on all three) to simplify cluster administration

    [root@pc146 ~]# ssh-keygen # press Enter through every prompt
    ...
    
    [root@pc146 ~]# ssh-copy-id root@pc147
    ...
    Are you sure you want to continue connecting (yes/no)? yes
    ...
    root@pc147's password: 
    Number of key(s) added: 1
    
    Now try logging into the machine, with:   "ssh 'root@pc147'"
    and check to make sure that only the key(s) you wanted were added.
    
    
    [root@pc146 ~]# ssh-copy-id root@pc148
    ...
    Are you sure you want to continue connecting (yes/no)? yes
    ...
    root@pc148's password: 
    
    Number of key(s) added: 1
    
    Now try logging into the machine, with:   "ssh 'root@pc148'"
    and check to make sure that only the key(s) you wanted were added.
    
    # Test
    [root@pc146 ~]# ssh pc147
    [root@pc147 ~]# exit
    logout
    Connection to pc147 closed.
    [root@pc146 ~]# 
    [root@pc146 ~]# ssh pc148
    [root@pc148 ~]# exit
    logout
    Connection to pc148 closed.
    [root@pc146 ~]# 
    
    
    # Repeat the same steps on the other two machines
    
    [root@pc147 ~]# ssh-keygen # press Enter through every prompt
    [root@pc147 ~]# ssh-copy-id root@pc146
    [root@pc147 ~]# ssh-copy-id root@pc148
    
    [root@pc147 ~]# ssh pc146
    [root@pc146 ~]# exit
    logout
    Connection to pc146 closed.
    [root@pc147 ~]# ssh pc148
    [root@pc148 ~]# exit
    logout
    Connection to pc148 closed.
    [root@pc147 ~]# 
    
    
    [root@pc148 ~]# ssh-keygen 
    [root@pc148 ~]# ssh-copy-id root@pc146
    [root@pc148 ~]# ssh-copy-id root@pc147
    [root@pc148 ~]# ssh pc146
    [root@pc146 ~]# exit
    logout
    Connection to pc146 closed.
    [root@pc148 ~]# ssh pc147
    [root@pc147 ~]# exit
    logout
    Connection to pc147 closed.
    [root@pc148 ~]# 
    

3. Open the Redis ports in the firewall

[root@pc146 ~]# systemctl status firewalld.service 
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
[root@pc146 ~]# systemctl start firewalld.service 
[root@pc146 ~]# systemctl status firewalld.service 
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: active (running) since Fri 2020-08-21 01:47:07 CST; 4s ago
     Docs: man:firewalld(1)
 Main PID: 6295 (firewalld)
   CGroup: /system.slice/firewalld.service
           ├─6295 /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid
           └─6505 /usr/sbin/iptables-restore -w -n

Aug 21 01:47:06 pc146 systemd[1]: Starting firewalld - dynamic firewall daemon...
Aug 21 01:47:07 pc146 systemd[1]: Started firewalld - dynamic firewall daemon.
[root@pc146 ~]# firewall-cmd --list-ports 

[root@pc146 ~]# firewall-cmd --zone=public --add-port=6379/tcp --permanent
success
[root@pc146 ~]# firewall-cmd --zone=public --add-port=6380/tcp --permanent
success
[root@pc146 ~]# firewall-cmd --reload
success
[root@pc146 ~]#
[root@pc147 ~]# systemctl start firewalld.service
[root@pc147 ~]# firewall-cmd --zone=public --add-port=6379/tcp --permanent
success
[root@pc147 ~]# firewall-cmd --zone=public --add-port=6380/tcp --permanent
success
[root@pc147 ~]# firewall-cmd --reload
success
[root@pc147 ~]#
[root@pc148 ~]# systemctl start firewalld.service
[root@pc148 ~]# firewall-cmd --zone=public --add-port=6379/tcp --permanent
success
[root@pc148 ~]# firewall-cmd --zone=public --add-port=6380/tcp --permanent
success
[root@pc148 ~]# firewall-cmd --reload
success
[root@pc148 ~]# 

4. Download Redis

  • Since we will use Jedis, we choose Redis 3.2.0 (downloading on one machine is enough)
[root@pc146 ~]# cd /opt
[root@pc146 opt]# ls
rh
[root@pc146 opt]# wget http://download.redis.io/releases/redis-3.2.0.tar.gz
--2020-08-20 22:50:29--  http://download.redis.io/releases/redis-3.2.0.tar.gz
Resolving download.redis.io (download.redis.io)... 45.60.125.1
Connecting to download.redis.io (download.redis.io)|45.60.125.1|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1525900 (1.5M) [application/octet-stream]
Saving to: ‘redis-3.2.0.tar.gz’

100%[=======================================================================================================================================================================>] 1,525,900   11.4KB/s   in 81s    

2020-08-20 22:51:50 (18.4 KB/s) - ‘redis-3.2.0.tar.gz’ saved [1525900/1525900]

[root@pc146 opt]# ll
total 1496
-rw-r--r--. 1 root root 1525900 Jun 27 23:51 redis-3.2.0.tar.gz
drwxr-xr-x. 2 root root    4096 Oct 31  2018 rh
[root@pc146 opt]#

5. Extract Redis

[root@pc146 opt]# tar -zxf redis-3.2.0.tar.gz 
[root@pc146 opt]# ll
total 1500
drwxrwxr-x. 6 root root    4096 May  6  2016 redis-3.2.0
-rw-r--r--. 1 root root 1525900 Jun 27 23:51 redis-3.2.0.tar.gz
drwxr-xr-x. 2 root root    4096 Oct 31  2018 rh
[root@pc146 opt]#

6. Rename the Redis directory

[root@pc146 opt]# mv redis-3.2.0 redis
[root@pc146 opt]# ll
total 1500
drwxrwxr-x. 6 root root    4096 May  6  2016 redis
-rw-r--r--. 1 root root 1525900 Jun 27 23:51 redis-3.2.0.tar.gz
drwxr-xr-x. 2 root root    4096 Oct 31  2018 rh
[root@pc146 opt]# 

7. Compile Redis

[root@pc146 opt]# cd redis/
[root@pc146 redis]# ls
00-RELEASENOTES  BUGS  CONTRIBUTING  COPYING  deps  INSTALL  Makefile  MANIFESTO  README.md  redis.conf  runtest  runtest-cluster  runtest-sentinel  sentinel.conf  src  tests  utils
[root@pc146 redis]# 
[root@pc146 redis]# make MALLOC=libc

Once compilation finishes, we can move on to the cluster installation below. (MALLOC=libc builds Redis against the system libc allocator instead of the bundled jemalloc, which sidesteps jemalloc build problems on some platforms.)

Only one machine needs to be configured by hand; the configuration is then copied to the other machines with the scp command.

8. Cluster installation

  • Create the Redis directories
[root@pc146 redis]# mkdir -p /var/redis/{6379,6380}
  • Copy the redis.conf configuration file
[root@pc146 redis]# cp redis.conf /var/redis/6379/redis6379.conf
  • Edit the redis.conf configuration file
[root@pc146 redis]# vim /var/redis/6379/redis6379.conf

Change the following settings.

In vim, use :set nu to display line numbers.

Edit by line number (use :61 to jump straight to line 61, and similarly for the rest):

61 		#bind 127.0.0.1
80 		protected-mode no
84 		port 6379
127 	daemonize yes
138 	supervised no
149 	pidfile /var/run/redis_6379.pid
162 	logfile "/var/redis/6379/log"
691 	cluster-enabled yes
699 	cluster-config-file /var/redis/6379/nodes-6379.conf
705 	cluster-node-timeout 15000

When finished, press Esc, then :wq to save the file. (For reference: cluster-enabled yes switches the instance into cluster mode, cluster-config-file names the state file the node itself maintains, and cluster-node-timeout is how long a node may be unreachable before it is considered failing.)

Copy redis6379.conf to the /var/redis/6380 directory:

[root@pc146 redis]# ll /var/redis/6379
-rw-r--r--. 1 root root 45411 Aug 20 23:26 redis6379.conf
[root@pc146 redis]# ll /var/redis/6380
total 0
[root@pc146 redis]# cp /var/redis/6379/redis6379.conf /var/redis/6380/redis6380.conf
[root@pc146 redis]# ll /var/redis/6380
total 48
-rw-r--r--. 1 root root 45411 Aug 20 23:33 redis6380.conf
[root@pc146 redis]#

Then change every occurrence of 6379 in the copy to 6380.

Reference for bulk replacement in vim: https://blog.csdn.net/frdevolcqzyxynjds/article/details/105282200

[root@pc146 redis]# vim /var/redis/6380/redis6380.conf 

A single command does it: :%s/6379/6380/g

When finished, press Esc, then :wq to save the file.

9. Copy everything to the other two machines with scp

[root@pc146 ~]# scp -r /opt/redis root@pc147:/opt/
[root@pc146 ~]# scp -r /opt/redis root@pc148:/opt/
[root@pc146 ~]# scp -r /var/redis root@pc147:/var
[root@pc146 ~]# scp -r /var/redis root@pc148:/var

10. Start Redis

  • Start the two Redis instances
[root@pc146 ~]# /opt/redis/src/redis-server /var/redis/6379/redis6379.conf 
[root@pc146 ~]# /opt/redis/src/redis-server /var/redis/6380/redis6380.conf 
  • Check the Redis processes
[root@pc146 ~]# ps -ef | grep redis
root      6131     1  0 00:13 ?        00:00:00 /opt/redis/src/redis-server *:6379 [cluster]
root      6135     1  0 00:13 ?        00:00:00 /opt/redis/src/redis-server *:6380 [cluster]
root      6139  4967  0 00:13 pts/0    00:00:00 grep --color=auto redis
[root@pc146 ~]#
  • Check the node information
[root@pc146 ~]# cat /var/redis/6379/nodes-6379.conf 
7e1f7ae752a60876aa39d4c9adf83c4185724a21 :0 myself,master - 0 0 0 connected
vars currentEpoch 0 lastVoteEpoch 0
[root@pc146 ~]# cat /var/redis/6380/nodes-6380.conf 
7bfe9bb206fc4acbe69e034220fc953a53cccb94 :0 myself,master - 0 0 0 connected
vars currentEpoch 0 lastVoteEpoch 0
[root@pc146 ~]#

11. Start Redis on the other two machines

[root@pc147 ~]# /opt/redis/src/redis-server /var/redis/6379/redis6379.conf
[root@pc147 ~]# /opt/redis/src/redis-server /var/redis/6380/redis6380.conf
[root@pc147 ~]# ps -ef | grep redis
root      5254     1  0 00:16 ?        00:00:00 /opt/redis/src/redis-server *:6379 [cluster]
root      5258     1  0 00:16 ?        00:00:00 /opt/redis/src/redis-server *:6380 [cluster]
root      5262  5140  0 00:16 pts/0    00:00:00 grep --color=auto redis
[root@pc147 ~]# cat /var/redis/6379/nodes-6379.conf
e1adfd2212b4fc7fdff1bc9d2f476be31cd8d71c :0 myself,master - 0 0 0 connected
vars currentEpoch 0 lastVoteEpoch 0
[root@pc147 ~]# cat /var/redis/6380/nodes-6380.conf
9feccd677d3d48ef975efaa26263d6a91408f3a1 :0 myself,master - 0 0 0 connected
vars currentEpoch 0 lastVoteEpoch 0
[root@pc147 ~]#
[root@pc148 ~]# /opt/redis/src/redis-server /var/redis/6379/redis6379.conf
[root@pc148 ~]# /opt/redis/src/redis-server /var/redis/6380/redis6380.conf
[root@pc148 ~]# ps -ef | grep redis
root      5205     1  0 00:16 ?        00:00:00 /opt/redis/src/redis-server *:6379 [cluster]
root      5209     1  0 00:16 ?        00:00:00 /opt/redis/src/redis-server *:6380 [cluster]
root      5213  5098  0 00:16 pts/0    00:00:00 grep --color=auto redis
[root@pc148 ~]# cat /var/redis/6379/nodes-6379.conf
c6652cf8018988d2f3717d3da66cca42b0c8e5aa :0 myself,master - 0 0 0 connected
vars currentEpoch 0 lastVoteEpoch 0
[root@pc148 ~]# cat /var/redis/6380/nodes-6380.conf
f933491925b7b8934003077f31278223807dc990 :0 myself,master - 0 0 0 connected
vars currentEpoch 0 lastVoteEpoch 0
[root@pc148 ~]# 

The most important field here is the node ID: a 40-character hexadecimal string that uniquely identifies a node within the cluster. The node ID is created only once, when the node is first initialized; on restart, the node reloads it from the cluster configuration file and reuses it.

All six Redis instances are now up, but the cluster itself is not yet formed: the instances know nothing about each other and are still isolated rather than joined into one cluster.

12. Node handshake

Node handshakes are what let the six Redis nodes learn about each other and form a cluster.

A node handshake is the process by which a set of nodes running in cluster mode communicate over the Gossip protocol until they become mutually aware.

Executing cluster meet {ip} {port} from a client initiates a handshake between two nodes. The command is asynchronous: it returns immediately while the handshake proceeds internally, as follows (a Jedis-based sketch of the same operation appears after the CLI session below):

  1. Node 6379 creates a local node-info object for node 6380 and sends it a meet message.
  2. On receiving the meet message, node 6380 saves node 6379's information and replies with a pong message.
  3. From then on, nodes 6379 and 6380 communicate normally through periodic ping/pong messages.
  • Node handshake
[root@pc146 ~]# /opt/redis/src/redis-cli -p 6379
127.0.0.1:6379> cluster meet 16.0.0.146 6380
OK
127.0.0.1:6379> cluster meet 16.0.0.147 6379
OK
127.0.0.1:6379> cluster meet 16.0.0.147 6380
OK
127.0.0.1:6379> cluster meet 16.0.0.148 6379
OK
127.0.0.1:6379> cluster meet 16.0.0.148 6380
OK
127.0.0.1:6379> exit
[root@pc146 ~]#
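The handshake can also be driven from code. Below is a minimal Jedis-based sketch (not part of the original session): it connects to the first node and introduces the other five, after which Gossip propagates the membership to every node.

import redis.clients.jedis.Jedis;

public class ClusterMeet {
    public static void main(String[] args) {
        // Connect to the first node and introduce the other five to it.
        Jedis jedis = new Jedis("16.0.0.146", 6379);
        jedis.clusterMeet("16.0.0.146", 6380);
        jedis.clusterMeet("16.0.0.147", 6379);
        jedis.clusterMeet("16.0.0.147", 6380);
        jedis.clusterMeet("16.0.0.148", 6379);
        jedis.clusterMeet("16.0.0.148", 6380);
        // Same view as the CLUSTER NODES command shown below.
        System.out.println(jedis.clusterNodes());
        jedis.close();
    }
}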
  • View the cluster node information
127.0.0.1:6379> cluster nodes
f933491925b7b8934003077f31278223807dc990 16.0.0.148:6380 master - 0 1597941368316 0 connected
7bfe9bb206fc4acbe69e034220fc953a53cccb94 16.0.0.146:6380 master - 0 1597941364311 1 connected
e1adfd2212b4fc7fdff1bc9d2f476be31cd8d71c 16.0.0.147:6379 master - 0 1597941369317 5 connected
c6652cf8018988d2f3717d3da66cca42b0c8e5aa 16.0.0.148:6379 master - 0 1597941367316 4 connected
7e1f7ae752a60876aa39d4c9adf83c4185724a21 16.0.0.146:6379 myself,master - 0 0 2 connected
9feccd677d3d48ef975efaa26263d6a91408f3a1 16.0.0.147:6380 master - 0 1597941368816 3 connected
127.0.0.1:6379> 

Viewed from any of the other five nodes, the cluster node information is identical.

13. Assign slots

The command for assigning slots is:

redis-cli -h IP -p port cluster addslots {begin..end}

A Redis cluster is split into masters and slaves: the nodes that are started first and that get slots assigned are the masters, while the slaves replicate their master's slot information and the associated data.

Here we configure three masters and three slaves: the 6379 instance on each of the three servers is a master, and the 6380 instance is a slave.

  • Assign the slots (a Jedis-based sketch follows the commands below)
[root@pc146 ~]# /opt/redis/src/redis-cli -h 16.0.0.146 -p 6379 cluster addslots {0..5460}
OK
[root@pc146 ~]# /opt/redis/src/redis-cli -h 16.0.0.147 -p 6379 cluster addslots {5461..10922}
OK
[root@pc146 ~]# /opt/redis/src/redis-cli -h 16.0.0.148 -p 6379 cluster addslots {10923..16383}
OK
[root@pc146 ~]#
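The slot assignment can likewise be scripted. A sketch using Jedis's clusterAddSlots; the hosts and ranges simply mirror the three redis-cli calls above.

import redis.clients.jedis.Jedis;

public class AddSlots {
    public static void main(String[] args) {
        // One contiguous range per master; together they cover all 16384 slots.
        String[] hosts = {"16.0.0.146", "16.0.0.147", "16.0.0.148"};
        int[][] ranges = {{0, 5460}, {5461, 10922}, {10923, 16383}};
        for (int i = 0; i < hosts.length; i++) {
            // Expand {begin..end} into the explicit slot array the API expects.
            int[] slots = new int[ranges[i][1] - ranges[i][0] + 1];
            for (int s = 0; s < slots.length; s++) {
                slots[s] = ranges[i][0] + s;
            }
            Jedis jedis = new Jedis(hosts[i], 6379);
            jedis.clusterAddSlots(slots);
            jedis.close();
        }
    }
}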
[root@pc146 ~]# /opt/redis/src/redis-cli -p 6379
127.0.0.1:6379> cluster nodes
f933491925b7b8934003077f31278223807dc990 16.0.0.148:6380 master - 0 1597942299358 0 connected
7bfe9bb206fc4acbe69e034220fc953a53cccb94 16.0.0.146:6380 master - 0 1597942298356 1 connected
e1adfd2212b4fc7fdff1bc9d2f476be31cd8d71c 16.0.0.147:6379 master - 0 1597942301360 5 connected 5461-10922
c6652cf8018988d2f3717d3da66cca42b0c8e5aa 16.0.0.148:6379 master - 0 1597942300360 4 connected 10923-16383
7e1f7ae752a60876aa39d4c9adf83c4185724a21 16.0.0.146:6379 myself,master - 0 0 2 connected 0-5460
9feccd677d3d48ef975efaa26263d6a91408f3a1 16.0.0.147:6380 master - 0 1597942294350 3 connected
127.0.0.1:6379>

With that, the masters are fully configured; next we configure the slaves.

cluster replicate {nodeId} makes the node it is run on a slave of the node with the given ID.

Run the following three commands, one per server; note that they are executed on the 6380 instances, not 6379.

Make 146:6380 a slave of 146:6379,

make 147:6380 a slave of 147:6379,

and make 148:6380 a slave of 148:6379.

  • Configure the slaves (a Jedis-based sketch follows these three commands)
16.0.0.146:6380> cluster replicate 7e1f7ae752a60876aa39d4c9adf83c4185724a21

16.0.0.147:6380> cluster replicate e1adfd2212b4fc7fdff1bc9d2f476be31cd8d71c

16.0.0.148:6380> cluster replicate c6652cf8018988d2f3717d3da66cca42b0c8e5aa
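The same three commands as a Jedis sketch; the node IDs are the master IDs from this cluster's CLUSTER NODES output and would differ in any other deployment.

import redis.clients.jedis.Jedis;

public class Replicate {
    public static void main(String[] args) {
        // Each 6380 instance becomes a slave of the 6379 master on the same host.
        attach("16.0.0.146", "7e1f7ae752a60876aa39d4c9adf83c4185724a21");
        attach("16.0.0.147", "e1adfd2212b4fc7fdff1bc9d2f476be31cd8d71c");
        attach("16.0.0.148", "c6652cf8018988d2f3717d3da66cca42b0c8e5aa");
    }

    static void attach(String host, String masterNodeId) {
        Jedis jedis = new Jedis(host, 6380);
        jedis.clusterReplicate(masterNodeId);
        jedis.close();
    }
}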

  • Verify the cluster
[root@pc146 ~]# /opt/redis/src/redis-cli -p 6379 cluster slots
1) 1) (integer) 5461
   2) (integer) 10922
   3) 1) "16.0.0.147"
      2) (integer) 6379
      3) "e1adfd2212b4fc7fdff1bc9d2f476be31cd8d71c"
   4) 1) "16.0.0.147"
      2) (integer) 6380
      3) "9feccd677d3d48ef975efaa26263d6a91408f3a1"
2) 1) (integer) 10923
   2) (integer) 16383
   3) 1) "16.0.0.148"
      2) (integer) 6379
      3) "c6652cf8018988d2f3717d3da66cca42b0c8e5aa"
   4) 1) "16.0.0.148"
      2) (integer) 6380
      3) "f933491925b7b8934003077f31278223807dc990"
3) 1) (integer) 0
   2) (integer) 5460
   3) 1) "16.0.0.146"
      2) (integer) 6379
      3) "7e1f7ae752a60876aa39d4c9adf83c4185724a21"
   4) 1) "16.0.0.146"
      2) (integer) 6380
      3) "7bfe9bb206fc4acbe69e034220fc953a53cccb94"
[root@pc146 ~]#
[root@pc146 ~]# /opt/redis/src/redis-cli -c -p 6379
127.0.0.1:6379> cluster nodes
f933491925b7b8934003077f31278223807dc990 16.0.0.148:6380 slave c6652cf8018988d2f3717d3da66cca42b0c8e5aa 0 1597944935908 4 connected
7bfe9bb206fc4acbe69e034220fc953a53cccb94 16.0.0.146:6380 slave 7e1f7ae752a60876aa39d4c9adf83c4185724a21 0 1597944937911 3 connected
e1adfd2212b4fc7fdff1bc9d2f476be31cd8d71c 16.0.0.147:6379 master - 0 1597944939914 5 connected 5461-10922
c6652cf8018988d2f3717d3da66cca42b0c8e5aa 16.0.0.148:6379 master - 0 1597944938913 4 connected 10923-16383
7e1f7ae752a60876aa39d4c9adf83c4185724a21 16.0.0.146:6379 myself,master - 0 0 2 connected 0-5460
9feccd677d3d48ef975efaa26263d6a91408f3a1 16.0.0.147:6380 slave e1adfd2212b4fc7fdff1bc9d2f476be31cd8d71c 0 1597944936909 5 connected
127.0.0.1:6379> set name zhangsan
-> Redirected to slot [5798] located at 16.0.0.147:6379
OK
16.0.0.147:6379> get name
"zhangsan"
16.0.0.147:6379>
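The redirection above is cluster routing at work: every key maps to one of the 16384 slots as CRC16(key) mod 16384, and the client is redirected to whichever master owns that slot ("name" hashes to slot 5798, owned by 16.0.0.147:6379). A small sketch to compute a key's slot client-side, assuming the JedisClusterCRC16 helper in the redis.clients.util package of Jedis 2.x:

import redis.clients.util.JedisClusterCRC16;

public class SlotOf {
    public static void main(String[] args) {
        // CRC16(key) mod 16384 picks the slot, and hence the owning master.
        System.out.println(JedisClusterCRC16.getSlot("name")); // expected: 5798
        System.out.println(JedisClusterCRC16.getSlot("key1")); // expected: 9189
    }
}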

14. Remote test with RedisManageDesktop

All six nodes connect normally.

15. Jedis coding test

The cluster nodes are reached from outside through the following address mappings (internal >> external):

16.0.0.146:6379 >> 210.22.22.150:4406

16.0.0.146:6380 >> 210.22.22.150:4415

16.0.0.147:6379 >> 210.22.22.150:4425

16.0.0.147:6380 >> 210.22.22.150:4434

16.0.0.148:6379 >> 210.22.22.150:4387

16.0.0.148:6380 >> 210.22.22.150:4396

Error 1

redis.clients.jedis.exceptions.JedisClusterMaxAttemptsException: No more cluster attempts left.

Error 2

Exception in thread "main" redis.clients.jedis.exceptions.JedisClusterMaxRedirectionsException: Too many Cluster redirections?
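Both exceptions mean JedisCluster exhausted its retry/redirection budget while chasing MOVED redirections or reconnecting; as the troubleshooting below shows, the root cause here was blocked cluster bus ports and network latency rather than a client bug. The budget can also be raised when constructing the client; a hedged sketch, assuming the (nodes, timeout, maxRedirections) constructor of Jedis 2.7.x:

import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;
import java.util.HashSet;
import java.util.Set;

public class TunedClient {
    public static void main(String[] args) throws Exception {
        Set<HostAndPort> nodes = new HashSet<HostAndPort>();
        nodes.add(new HostAndPort("210.22.22.150", 4406)); // 16.0.0.146:6379
        // Assumption: this 3-arg constructor (nodes, socket timeout in ms,
        // max redirections) is available in the Jedis version used here.
        JedisCluster cluster = new JedisCluster(nodes, 10000, 10);
        System.out.println(cluster.get("key1"));
        cluster.close();
    }
}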

16. Troubleshooting

  • Open the cluster bus ports in the firewall (each Redis Cluster node uses a second port for the cluster bus, the data port plus 10000, hence 16379 and 16380)
[root@pc146 ~]# firewall-cmd --zone=public --add-port=16379/tcp --permanent
success
[root@pc146 ~]# firewall-cmd --zone=public --add-port=16380/tcp --permanent
success
[root@pc146 ~]# firewall-cmd --reload
success
[root@pc146 ~]# firewall-cmd --list-ports 
6379/tcp 6380/tcp 16379/tcp 16380/tcp
[root@pc146 ~]# systemctl stop firewalld.service 
[root@pc146 ~]#


[root@pc147 ~]# firewall-cmd --zone=public --add-port=16379/tcp --permanent
success
[root@pc147 ~]# firewall-cmd --zone=public --add-port=16380/tcp --permanent
success
[root@pc147 ~]# firewall-cmd --reload
success
[root@pc147 ~]# firewall-cmd --list-ports
6379/tcp 6380/tcp 16379/tcp 16380/tcp
[root@pc147 ~]# systemctl stop firewalld.service
[root@pc147 ~]#


[root@pc148 ~]# firewall-cmd --zone=public --add-port=16379/tcp --permanent
success
[root@pc148 ~]# firewall-cmd --zone=public --add-port=16380/tcp --permanent
success
[root@pc148 ~]# firewall-cmd --reload
success
[root@pc148 ~]# firewall-cmd --list-ports
6379/tcp 6380/tcp 16379/tcp 16380/tcp
[root@pc148 ~]# systemctl stop firewalld.service
[root@pc148 ~]#

Check the logs:

[root@pc148 ~]# cat /var/redis/6379/log

5205:M 21 Aug 11:32:52.203 # Cluster state changed: fail
5205:M 21 Aug 11:32:52.860 * Marking node 7bfe9bb206fc4acbe69e034220fc953a53cccb94 as failing (quorum reached).
  • Kill all six Redis node processes by their PIDs

kill -9 {redis PID}

  • Delete the nodes-*.conf file on every node
[root@pc146 ~]# rm -rf /var/redis/6379/nodes-6379.conf 
[root@pc146 ~]# rm -rf /var/redis/6380/nodes-6380.conf 
[root@pc146 ~]# ls /var/redis/6379/
log  redis6379.conf
[root@pc146 ~]# ls /var/redis/6380/
log  redis6380.conf
[root@pc146 ~]# 
  • Start the Redis instances
[root@pc146 ~]# /opt/redis/src/redis-server /var/redis/6379/redis6379.conf 
[root@pc146 ~]# /opt/redis/src/redis-server /var/redis/6380/redis6380.conf 

[root@pc146 ~]# ps -ef|grep redis
root      9315     1  0 23:52 ?        00:00:00 /opt/redis/src/redis-server *:6379 [cluster]
root      9319     1  0 23:52 ?        00:00:00 /opt/redis/src/redis-server *:6380 [cluster]
root      9323  9055  0 23:52 pts/0    00:00:00 grep --color=auto redis
[root@pc146 ~]# cat /var/redis/6379/nodes-6379.conf 
0e856e8de62c8e099bc451f94d855045bf38de75 :0 myself,master - 0 0 0 connected
vars currentEpoch 0 lastVoteEpoch 0
[root@pc146 ~]# cat /var/redis/6380/nodes-6380.conf 
cb05a9ffe6b5481a146715107d6e1d9aca08da78 :0 myself,master - 0 0 0 connected
vars currentEpoch 0 lastVoteEpoch 0
[root@pc146 ~]#
[root@pc147 ~]# /opt/redis/src/redis-server /var/redis/6379/redis6379.conf 
[root@pc147 ~]# /opt/redis/src/redis-server /var/redis/6380/redis6380.conf
[root@pc147 ~]# ps -ef|grep redis
root      8048     1  0 23:55 ?        00:00:00 /opt/redis/src/redis-server *:6379 [cluster]
root      8052     1  0 23:55 ?        00:00:00 /opt/redis/src/redis-server *:6380 [cluster]
root      8056  7061  0 23:55 pts/0    00:00:00 grep --color=auto redis
[root@pc147 ~]# cat /var/redis/6379/nodes-6379.conf 
cad71fa88b55ea82b783afc8560656b588f92e20 :6379 myself,master - 0 0 0 connected 6918 9189
vars currentEpoch 0 lastVoteEpoch 0
[root@pc147 ~]# cat /var/redis/6380/nodes-6380.conf
a936350408caf9754d21f92349ab57c38d2d6e52 :6380 myself,master - 0 0 0 connected 6918 9189
vars currentEpoch 0 lastVoteEpoch 0
[root@pc147 ~]#
[root@pc148 ~]# /opt/redis/src/redis-server /var/redis/6379/redis6379.conf 
[root@pc148 ~]# /opt/redis/src/redis-server /var/redis/6380/redis6380.conf
[root@pc148 ~]# ps -ef|grep redis
root      8067     1  0 23:55 ?        00:00:00 /opt/redis/src/redis-server *:6379 [cluster]
root      8071     1  0 23:55 ?        00:00:00 /opt/redis/src/redis-server *:6380 [cluster]
root      8075  7032  0 23:56 pts/0    00:00:00 grep --color=auto redis
[root@pc148 ~]# cat /var/redis/6379/nodes-6379.conf 
c09f3f39164bff102abfe48529557fffc5d70da4 :0 myself,master - 0 0 0 connected
vars currentEpoch 0 lastVoteEpoch 0
[root@pc148 ~]# cat /var/redis/6380/nodes-6380.conf
c782696a4932474e9db6a8833323fc2a754ea1a5 :0 myself,master - 0 0 0 connected
vars currentEpoch 0 lastVoteEpoch 0
[root@pc148 ~]#
[root@pc146 ~]# /opt/redis/src/redis-cli -h 16.0.0.146 -p 6379
16.0.0.146:6379> cluster meet 16.0.0.146 6380
OK
16.0.0.146:6379> cluster meet 16.0.0.147 6379
OK
16.0.0.146:6379> cluster meet 16.0.0.147 6380
OK
16.0.0.146:6379> cluster meet 16.0.0.148 6379
OK
16.0.0.146:6379> cluster meet 16.0.0.148 6380
OK
16.0.0.146:6379> cluster nodes
a936350408caf9754d21f92349ab57c38d2d6e52 16.0.0.147:6380 master - 0 1598025595977 3 connected 6918 9189
c782696a4932474e9db6a8833323fc2a754ea1a5 16.0.0.148:6380 master - 0 1598025594974 5 connected
c09f3f39164bff102abfe48529557fffc5d70da4 16.0.0.148:6379 master - 0 1598025597980 4 connected
cb05a9ffe6b5481a146715107d6e1d9aca08da78 16.0.0.146:6380 master - 0 1598025596977 0 connected
0e856e8de62c8e099bc451f94d855045bf38de75 16.0.0.146:6379 myself,master - 0 0 1 connected
cad71fa88b55ea82b783afc8560656b588f92e20 16.0.0.147:6379 slave a936350408caf9754d21f92349ab57c38d2d6e52 0 1598025593972 3 connected
16.0.0.146:6379> cluster info
cluster_state:fail
cluster_slots_assigned:2
cluster_slots_ok:2
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:1
cluster_current_epoch:5
cluster_my_epoch:1
cluster_stats_messages_sent:283
cluster_stats_messages_received:280
16.0.0.146:6379> 
16.0.0.146:6379> 
[root@pc146 ~]# /opt/redis/src/redis-server /var/redis/6379/redis6379.conf 
[root@pc146 ~]# /opt/redis/src/redis-server /var/redis/6380/redis6380.conf 
[root@pc146 ~]# ps -ef|grep redis
root      9813     1  0 00:34 ?        00:00:00 /opt/redis/src/redis-server *:6379 [cluster]
root      9817     1  0 00:34 ?        00:00:00 /opt/redis/src/redis-server *:6380 [cluster]
root      9821  9055  0 00:34 pts/0    00:00:00 grep --color=auto redis
[root@pc146 ~]# /opt/redis/src/redis-cli cluster nodes
4a11dee7f2b7ba5639a5822ca95b7433f7100353 :6379 myself,master - 0 0 0 connected
[root@pc146 ~]# cat /var/redis/6379/nodes-6379.conf 
4a11dee7f2b7ba5639a5822ca95b7433f7100353 :0 myself,master - 0 0 0 connected
vars currentEpoch 0 lastVoteEpoch 0
[root@pc146 ~]# 
[root@pc146 ~]# /opt/redis/src/redis-cli -p 6379
127.0.0.1:6379> exit
[root@pc146 ~]# 
[root@pc146 ~]# /opt/redis/src/redis-cli -h 16.0.0.146 -p 6379
16.0.0.146:6379> cluster meet 16.0.0.146 6380
OK
16.0.0.146:6379> cluster meet 16.0.0.147 6379
OK
16.0.0.146:6379> cluster meet 16.0.0.147 6380
OK
16.0.0.146:6379> cluster meet 16.0.0.148 6379
OK
16.0.0.146:6379> cluster meet 16.0.0.148 6380
OK
16.0.0.146:6379> 
16.0.0.146:6379> cluster nodes
9679e4bdbe0d9d12b4682b81bea4b9cde7978463 16.0.0.147:6380 master - 0 1598028083682 3 connected 6918 9189
fd9bf9ecd07a95a11209f0db99bb7d42cc97d573 16.0.0.148:6379 master - 0 1598028081680 0 connected
f35ac49e83762292c1bda3a60d78fa070e3f0c86 16.0.0.147:6379 slave 9679e4bdbe0d9d12b4682b81bea4b9cde7978463 0 1598028077674 3 connected
4a11dee7f2b7ba5639a5822ca95b7433f7100353 16.0.0.146:6379 myself,master - 0 0 1 connected
9e81993c1443c0797d36d26673815f8a8315f950 16.0.0.148:6380 master - 0 1598028079676 4 connected
b5a600050927738c9977dd33fde448515bc0ae1e 16.0.0.146:6380 master - 0 1598028082682 2 connected
16.0.0.146:6379> 
16.0.0.146:6379> exit
[root@pc146 ~]# 
[root@pc146 ~]# 
[root@pc146 ~]# /opt/redis/src/redis-cli -h 16.0.0.146 -p 6379 cluster addslots {0..5460}
OK
[root@pc146 ~]# /opt/redis/src/redis-cli -h 16.0.0.147 -p 6380 cluster addslots {5461..10922}
(error) ERR Slot 6918 is already busy
[root@pc146 ~]# 
[root@pc146 ~]# 
[root@pc146 ~]# /opt/redis/src/redis-cli -h 16.0.0.146 -p 6379
16.0.0.146:6379> flush all
(error) ERR unknown command 'flush'
16.0.0.146:6379> 
16.0.0.146:6379> flushall
OK
16.0.0.146:6379> exit
[root@pc146 ~]# /opt/redis/src/redis-cli -h 16.0.0.146 -p 6380
16.0.0.146:6380> flushall
OK
16.0.0.146:6380> exit
[root@pc146 ~]# 
[root@pc146 ~]# /opt/redis/src/redis-cli -h 16.0.0.147 -p 6379
16.0.0.147:6379> flushall
(error) READONLY You can't write against a read only slave.
16.0.0.147:6379> cluster reset
OK
16.0.0.147:6379> 
16.0.0.147:6379> flushall
OK
16.0.0.147:6379> 
16.0.0.147:6379> cluster reset
OK
16.0.0.147:6379> 
16.0.0.147:6379> exit
[root@pc146 ~]# 
[root@pc146 ~]# /opt/redis/src/redis-cli -h 16.0.0.146 -p 6380
16.0.0.146:6380> flushall
OK
16.0.0.146:6380> cluster reset
OK
16.0.0.146:6380> 
16.0.0.146:6380> exit
[root@pc146 ~]# 
[root@pc146 ~]# /opt/redis/src/redis-cli -h 16.0.0.146 -p 6379
16.0.0.146:6379> flushall
OK
16.0.0.146:6379> cluster reset
OK
16.0.0.146:6379> exit
[root@pc146 ~]# 
[root@pc146 ~]# /opt/redis/src/redis-cli -h 16.0.0.147 -p 6379
16.0.0.147:6379> flushall
OK
16.0.0.147:6379> cluster reset
OK
16.0.0.147:6379> exit
[root@pc146 ~]# /opt/redis/src/redis-cli -h 16.0.0.147 -p 6380
16.0.0.147:6380> flushall
OK
(0.65s)
16.0.0.147:6380> cluster reset
OK
16.0.0.147:6380> 
16.0.0.147:6380> exit
[root@pc146 ~]# 
[root@pc146 ~]# /opt/redis/src/redis-cli -h 16.0.0.148 -p 6379
16.0.0.148:6379> flushall
OK
16.0.0.148:6379> cluster reset
OK
16.0.0.148:6379> exit
[root@pc146 ~]# 
[root@pc146 ~]# /opt/redis/src/redis-cli -h 16.0.0.148 -p 6380
16.0.0.148:6380> flushall
OK
(0.57s)
16.0.0.148:6380> cluster reset
OK
16.0.0.148:6380> exit
[root@pc146 ~]# 
[root@pc146 ~]# /opt/redis/src/redis-cli -h 16.0.0.146 -p 6379
16.0.0.146:6379> 
16.0.0.146:6379> cluster nodes
4a11dee7f2b7ba5639a5822ca95b7433f7100353 16.0.0.146:6379 myself,master - 0 0 1 connected
16.0.0.146:6379> exit
[root@pc146 ~]# 
[root@pc146 ~]# /opt/redis/src/redis-cli -c -h 16.0.0.146 -p 6379
16.0.0.146:6379> cluster nodes
4a11dee7f2b7ba5639a5822ca95b7433f7100353 16.0.0.146:6379 myself,master - 0 0 1 connected
16.0.0.146:6379> 
16.0.0.146:6379> exit
[root@pc146 ~]# 
[root@pc146 ~]# /opt/redis/src/redis-cli -h 16.0.0.146 -p 6379
16.0.0.146:6379> cluster meet 16.0.0.146 6380
OK
16.0.0.146:6379> cluster meet 16.0.0.147 6379
OK
16.0.0.146:6379> cluster meet 16.0.0.147 6380
OK
16.0.0.146:6379> cluster meet 16.0.0.148 6379
OK
16.0.0.146:6379> cluster meet 16.0.0.148 6380
OK
16.0.0.146:6379> 
16.0.0.146:6379> cluster nodes
9679e4bdbe0d9d12b4682b81bea4b9cde7978463 16.0.0.147:6380 master - 0 1598028700957 3 connected
fd9bf9ecd07a95a11209f0db99bb7d42cc97d573 16.0.0.148:6379 master - 0 1598028697951 0 connected
f35ac49e83762292c1bda3a60d78fa070e3f0c86 16.0.0.147:6379 master - 0 1598028699955 5 connected
4a11dee7f2b7ba5639a5822ca95b7433f7100353 16.0.0.146:6379 myself,master - 0 0 1 connected
9e81993c1443c0797d36d26673815f8a8315f950 16.0.0.148:6380 master - 0 1598028695947 4 connected
b5a600050927738c9977dd33fde448515bc0ae1e 16.0.0.146:6380 master - 0 1598028698953 2 connected
16.0.0.146:6379> 
16.0.0.146:6379> /opt/redis/src/redis-cli -h 16.0.0.146 -p 6379 cluster addslots {0..5460}
(error) ERR unknown command '/opt/redis/src/redis-cli'
16.0.0.146:6379> exit
[root@pc146 ~]# 
[root@pc146 ~]# /opt/redis/src/redis-cli -h 16.0.0.146 -p 6379 cluster addslots {0..5460}
OK
[root@pc146 ~]# /opt/redis/src/redis-cli -h 16.0.0.147 -p 6379 cluster addslots {5461..10922}
OK
[root@pc146 ~]# /opt/redis/src/redis-cli -h 16.0.0.148 -p 6379 cluster addslots {10923..16383}
OK
[root@pc146 ~]# 
[root@pc146 ~]# /opt/redis/src/redis-cli -h 16.0.0.146 -p 6379
16.0.0.146:6379> cluster nodes
9679e4bdbe0d9d12b4682b81bea4b9cde7978463 16.0.0.147:6380 master - 0 1598028833741 3 connected
fd9bf9ecd07a95a11209f0db99bb7d42cc97d573 16.0.0.148:6379 master - 0 1598028830738 0 connected 10923-16383
f35ac49e83762292c1bda3a60d78fa070e3f0c86 16.0.0.147:6379 master - 0 1598028834742 5 connected 5461-10922
4a11dee7f2b7ba5639a5822ca95b7433f7100353 16.0.0.146:6379 myself,master - 0 0 1 connected 0-5460
9e81993c1443c0797d36d26673815f8a8315f950 16.0.0.148:6380 master - 0 1598028831740 4 connected
b5a600050927738c9977dd33fde448515bc0ae1e 16.0.0.146:6380 master - 0 1598028835743 2 connected
16.0.0.146:6379> exit
[root@pc146 ~]# 
[root@pc146 ~]# /opt/redis/src/redis-cli -h 16.0.0.146 -p 6380
16.0.0.146:6380> cluster replicate 4a11dee7f2b7ba5639a5822ca95b7433f7100353
OK
16.0.0.146:6380> exit
[root@pc146 ~]# /opt/redis/src/redis-cli -h 16.0.0.147 -p 6380
16.0.0.147:6380> cluster replicate f35ac49e83762292c1bda3a60d78fa070e3f0c86
OK
16.0.0.147:6380> exit
[root@pc146 ~]# /opt/redis/src/redis-cli -h 16.0.0.148 -p 6380
16.0.0.148:6380> cluster replicate fd9bf9ecd07a95a11209f0db99bb7d42cc97d573
OK
16.0.0.148:6380> exit
[root@pc146 ~]# 
[root@pc146 ~]# /opt/redis/src/redis-cli -h 16.0.0.146 -p 6379
16.0.0.146:6379> cluster nodes
9679e4bdbe0d9d12b4682b81bea4b9cde7978463 16.0.0.147:6380 slave f35ac49e83762292c1bda3a60d78fa070e3f0c86 0 1598029044603 5 connected
fd9bf9ecd07a95a11209f0db99bb7d42cc97d573 16.0.0.148:6379 master - 0 1598029043602 0 connected 10923-16383
f35ac49e83762292c1bda3a60d78fa070e3f0c86 16.0.0.147:6379 master - 0 1598029042599 5 connected 5461-10922
4a11dee7f2b7ba5639a5822ca95b7433f7100353 16.0.0.146:6379 myself,master - 0 0 1 connected 0-5460
9e81993c1443c0797d36d26673815f8a8315f950 16.0.0.148:6380 slave fd9bf9ecd07a95a11209f0db99bb7d42cc97d573 0 1598029039597 4 connected
b5a600050927738c9977dd33fde448515bc0ae1e 16.0.0.146:6380 slave 4a11dee7f2b7ba5639a5822ca95b7433f7100353 0 1598029040598 2 connected
16.0.0.146:6379> 
16.0.0.146:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:5
cluster_my_epoch:1
cluster_stats_messages_sent:1812
cluster_stats_messages_received:1811
16.0.0.146:6379> 
16.0.0.146:6379> cluster slots
1) 1) (integer) 10923
   2) (integer) 16383
   3) 1) "16.0.0.148"
      2) (integer) 6379
      3) "fd9bf9ecd07a95a11209f0db99bb7d42cc97d573"
   4) 1) "16.0.0.148"
      2) (integer) 6380
      3) "9e81993c1443c0797d36d26673815f8a8315f950"
2) 1) (integer) 5461
   2) (integer) 10922
   3) 1) "16.0.0.147"
      2) (integer) 6379
      3) "f35ac49e83762292c1bda3a60d78fa070e3f0c86"
   4) 1) "16.0.0.147"
      2) (integer) 6380
      3) "9679e4bdbe0d9d12b4682b81bea4b9cde7978463"
3) 1) (integer) 0
   2) (integer) 5460
   3) 1) "16.0.0.146"
      2) (integer) 6379
      3) "4a11dee7f2b7ba5639a5822ca95b7433f7100353"
   4) 1) "16.0.0.146"
      2) (integer) 6380
      3) "b5a600050927738c9977dd33fde448515bc0ae1e"
16.0.0.146:6379> 
16.0.0.146:6379> keys *
(empty list or set)
16.0.0.146:6379> set key1 value1
(error) MOVED 9189 16.0.0.147:6379
16.0.0.146:6379> get key1
(error) MOVED 9189 16.0.0.147:6379
16.0.0.146:6379> 
16.0.0.146:6379> exit
[root@pc146 ~]# 
[root@pc146 ~]# /opt/redis/src/redis-cli -c -h 16.0.0.146 -p 6379
16.0.0.146:6379> cluster nodes
9679e4bdbe0d9d12b4682b81bea4b9cde7978463 16.0.0.147:6380 slave f35ac49e83762292c1bda3a60d78fa070e3f0c86 0 1598029189357 5 connected
fd9bf9ecd07a95a11209f0db99bb7d42cc97d573 16.0.0.148:6379 master - 0 1598029190861 0 connected 10923-16383
f35ac49e83762292c1bda3a60d78fa070e3f0c86 16.0.0.147:6379 master - 0 1598029188856 5 connected 5461-10922
4a11dee7f2b7ba5639a5822ca95b7433f7100353 16.0.0.146:6379 myself,master - 0 0 1 connected 0-5460
9e81993c1443c0797d36d26673815f8a8315f950 16.0.0.148:6380 slave fd9bf9ecd07a95a11209f0db99bb7d42cc97d573 0 1598029191862 4 connected
b5a600050927738c9977dd33fde448515bc0ae1e 16.0.0.146:6380 slave 4a11dee7f2b7ba5639a5822ca95b7433f7100353 0 1598029189859 2 connected
16.0.0.146:6379> 
16.0.0.146:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:5
cluster_my_epoch:1
cluster_stats_messages_sent:2052
cluster_stats_messages_received:2051
16.0.0.146:6379> 
16.0.0.146:6379> cluster slots
1) 1) (integer) 10923
   2) (integer) 16383
   3) 1) "16.0.0.148"
      2) (integer) 6379
      3) "fd9bf9ecd07a95a11209f0db99bb7d42cc97d573"
   4) 1) "16.0.0.148"
      2) (integer) 6380
      3) "9e81993c1443c0797d36d26673815f8a8315f950"
2) 1) (integer) 5461
   2) (integer) 10922
   3) 1) "16.0.0.147"
      2) (integer) 6379
      3) "f35ac49e83762292c1bda3a60d78fa070e3f0c86"
   4) 1) "16.0.0.147"
      2) (integer) 6380
      3) "9679e4bdbe0d9d12b4682b81bea4b9cde7978463"
3) 1) (integer) 0
   2) (integer) 5460
   3) 1) "16.0.0.146"
      2) (integer) 6379
      3) "4a11dee7f2b7ba5639a5822ca95b7433f7100353"
   4) 1) "16.0.0.146"
      2) (integer) 6380
      3) "b5a600050927738c9977dd33fde448515bc0ae1e"
16.0.0.146:6379> 
16.0.0.146:6379> keys *
(empty list or set)
16.0.0.146:6379> 
16.0.0.146:6379> set key1 value11
-> Redirected to slot [9189] located at 16.0.0.147:6379
OK
16.0.0.147:6379> get key1
"value11"
16.0.0.147:6379> exit
[root@pc146 ~]# /opt/redis/src/redis-cli -c -h 16.0.0.146 -p 6379
16.0.0.146:6379> get key1
-> Redirected to slot [9189] located at 16.0.0.147:6379
"value11"
16.0.0.147:6379> exit
[root@pc146 ~]# /opt/redis/src/redis-cli -c -h 16.0.0.146 -p 6380
16.0.0.146:6380> get key1
-> Redirected to slot [9189] located at 16.0.0.147:6379
"value11"
16.0.0.147:6379> exit
[root@pc146 ~]# /opt/redis/src/redis-cli -c -h 16.0.0.148 -p 6380
16.0.0.148:6380> get key1
-> Redirected to slot [9189] located at 16.0.0.147:6379
"value11"
16.0.0.147:6379> exit
[root@pc146 ~]# 
[root@pc146 ~]# /opt/redis/src/redis-cli -c -h 16.0.0.148 -p 6379
16.0.0.148:6379> get key1
-> Redirected to slot [9189] located at 16.0.0.147:6379
"value11"
16.0.0.147:6379> exit
[root@pc146 ~]# 

17. Troubleshooting plan

Plan 1: redo all six nodes

(1) Kill the Redis processes

kill -9 {redis PID}

(2) Delete the RDB file

rm -rf /root/dump.rdb

(3) Delete the nodes.conf files

rm -rf /var/redis/6379/nodes-6379.conf 
rm -rf /var/redis/6380/nodes-6380.conf

Finish steps (1) through (3) on all three machines before moving on to step (4) below.

(4) Reconfigure redis.conf

61 		#bind 127.0.0.1
80 		protected-mode no
84 		port 6379
127 	daemonize yes
138 	supervised no
149 	pidfile /var/run/redis_6379.pid
162 	logfile "/var/redis/6379/log"
691 	cluster-enabled yes
699 	cluster-config-file /var/redis/6379/nodes-6379.conf
705 	#cluster-node-timeout 15000
706 	cluster-node-timeout 60000
  • Increase cluster-node-timeout, here raised to 60s:
vim /var/redis/6379/redis6379.conf 
vim /var/redis/6380/redis6380.conf 

Make this change on all three machines.

(5) Restart the cluster

Start all six nodes:

[root@pc146 ~]# /opt/redis/src/redis-server /var/redis/6379/redis6379.conf 
[root@pc146 ~]# /opt/redis/src/redis-server /var/redis/6380/redis6380.conf 
[root@pc146 ~]# ps -ef|grep redis
root     12405     1  0 11:47 ?        00:00:00 /opt/redis/src/redis-server *:6379 [cluster]
root     12413     1  0 11:47 ?        00:00:00 /opt/redis/src/redis-server *:6380 [cluster]
root     12417 12112  0 11:47 pts/0    00:00:00 grep --color=auto redis
[root@pc146 ~]# 
[root@pc147 ~]# /opt/redis/src/redis-server /var/redis/6379/redis6379.conf
[root@pc147 ~]# /opt/redis/src/redis-server /var/redis/6380/redis6380.conf 
[root@pc147 ~]# ps -ef|grep redis
root     11007     1  0 11:53 ?        00:00:00 /opt/redis/src/redis-server *:6379 [cluster]
root     11015     1  0 11:54 ?        00:00:00 /opt/redis/src/redis-server *:6380 [cluster]
root     11019 10967  0 11:54 pts/0    00:00:00 grep --color=auto redis
[root@pc147 ~]#
[root@pc148 ~]# /opt/redis/src/redis-server /var/redis/6379/redis6379.conf
[root@pc148 ~]# /opt/redis/src/redis-server /var/redis/6380/redis6380.conf 
[root@pc148 ~]# ps -ef|grep redis
root     11086     1  0 11:55 ?        00:00:00 /opt/redis/src/redis-server *:6379 [cluster]
root     11090     1  0 11:55 ?        00:00:00 /opt/redis/src/redis-server *:6380 [cluster]
root     11094 11046  0 11:55 pts/0    00:00:00 grep --color=auto redis
[root@pc148 ~]#

(6) Check the cluster status

[root@pc146 ~]# /opt/redis/src/redis-cli -c -h 16.0.0.146 -p 6379
16.0.0.146:6379> cluster nodes
540aa2fe460e5453fa10410b8b3a9c151a4e1ab5 :6379 myself,master - 0 0 0 connected
16.0.0.146:6379> exit
[root@pc146 ~]# 
[root@pc146 ~]# /opt/redis/src/redis-cli -c -h 16.0.0.146 -p 6380 cluster nodes
f308758941d1c76df7a1792bd749d0634929ddb6 :6380 myself,master - 0 0 0 connected
[root@pc146 ~]# 
[root@pc146 ~]# /opt/redis/src/redis-cli -c -h 16.0.0.147 -p 6379 cluster nodes
2dfc5edacdbf1a9bd0aeacf0704d283092d60cbc :6379 myself,master - 0 0 0 connected
[root@pc146 ~]# 
[root@pc146 ~]# /opt/redis/src/redis-cli -c -h 16.0.0.147 -p 6380 cluster nodes
56c6206790f6dbb49776f0098e246424bb5d65b0 :6380 myself,master - 0 0 0 connected
[root@pc146 ~]# 
[root@pc146 ~]# /opt/redis/src/redis-cli -c -h 16.0.0.148 -p 6379 cluster nodes
16d0831427c99d550e346ce69eca6c55a9b0483f :6379 myself,master - 0 0 0 connected
[root@pc146 ~]# 
[root@pc146 ~]# /opt/redis/src/redis-cli -c -h 16.0.0.148 -p 6380 cluster nodes
5f1baa4accba21b3a4ebd0008adaeb4a897182b7 :6380 myself,master - 0 0 0 connected
[root@pc146 ~]# 

(7) Node handshake and slot assignment

  • Node handshake
[root@pc146 ~]# /opt/redis/src/redis-cli -p 6379
127.0.0.1:6379> keys *
(empty list or set)
127.0.0.1:6379> 
127.0.0.1:6379> cluster meet 16.0.0.146 6380
OK
127.0.0.1:6379> 
127.0.0.1:6379> cluster meet 16.0.0.147 6379
OK
127.0.0.1:6379> 
127.0.0.1:6379> cluster meet 16.0.0.147 6380
OK
127.0.0.1:6379> 
127.0.0.1:6379> cluster meet 16.0.0.148 6379
OK
127.0.0.1:6379> 
127.0.0.1:6379> cluster meet 16.0.0.148 6380
OK
127.0.0.1:6379> 
127.0.0.1:6379> exit
[root@pc146 ~]#
  • Check the cluster status
[root@pc146 ~]# /opt/redis/src/redis-cli cluster nodes
f308758941d1c76df7a1792bd749d0634929ddb6 16.0.0.146:6380 master - 0 1598182874880 0 connected
540aa2fe460e5453fa10410b8b3a9c151a4e1ab5 16.0.0.146:6379 myself,master - 0 0 1 connected
5f1baa4accba21b3a4ebd0008adaeb4a897182b7 16.0.0.148:6380 master - 0 1598182872877 5 connected
16d0831427c99d550e346ce69eca6c55a9b0483f 16.0.0.148:6379 master - 0 1598182875882 4 connected
2dfc5edacdbf1a9bd0aeacf0704d283092d60cbc 16.0.0.147:6379 master - 0 1598182870874 2 connected
56c6206790f6dbb49776f0098e246424bb5d65b0 16.0.0.147:6380 master - 0 1598182873878 3 connected
[root@pc146 ~]#
  • Assign the slots
[root@pc146 ~]# /opt/redis/src/redis-cli -h 16.0.0.146 -p 6379 cluster addslots {0..5460}
OK
[root@pc146 ~]# 
[root@pc146 ~]# /opt/redis/src/redis-cli -h 16.0.0.147 -p 6379 cluster addslots {5461..10922}
OK
[root@pc146 ~]# 
[root@pc146 ~]# /opt/redis/src/redis-cli -h 16.0.0.148 -p 6379 cluster addslots {10923..16383}
OK
[root@pc146 ~]# 
  • Check the cluster status
[root@pc146 ~]# /opt/redis/src/redis-cli cluster nodes
f308758941d1c76df7a1792bd749d0634929ddb6 16.0.0.146:6380 master - 0 1598184321898 0 connected
540aa2fe460e5453fa10410b8b3a9c151a4e1ab5 16.0.0.146:6379 myself,master - 0 0 1 connected 0-5460
5f1baa4accba21b3a4ebd0008adaeb4a897182b7 16.0.0.148:6380 master - 0 1598184326907 5 connected
16d0831427c99d550e346ce69eca6c55a9b0483f 16.0.0.148:6379 master - 0 1598184327909 4 connected 10923-16383
2dfc5edacdbf1a9bd0aeacf0704d283092d60cbc 16.0.0.147:6379 master - 0 1598184324904 2 connected 5461-10922
56c6206790f6dbb49776f0098e246424bb5d65b0 16.0.0.147:6380 master - 0 1598184325905 3 connected
[root@pc146 ~]#
  • Assign the slaves (on each machine, make the 6380 instance a slave of the 6379 instance, using the 6379 node's ID)
[root@pc146 ~]# /opt/redis/src/redis-cli -h 16.0.0.146 -p 6380 cluster replicate 540aa2fe460e5453fa10410b8b3a9c151a4e1ab5
OK
[root@pc146 ~]# 
[root@pc146 ~]# /opt/redis/src/redis-cli -h 16.0.0.147 -p 6380 cluster replicate 2dfc5edacdbf1a9bd0aeacf0704d283092d60cbc
OK
[root@pc146 ~]# 
[root@pc146 ~]# /opt/redis/src/redis-cli -h 16.0.0.148 -p 6380 cluster replicate 16d0831427c99d550e346ce69eca6c55a9b0483f
OK
[root@pc146 ~]# 
  • Check the cluster status
[root@pc146 ~]# /opt/redis/src/redis-cli cluster nodes
f308758941d1c76df7a1792bd749d0634929ddb6 16.0.0.146:6380 slave 540aa2fe460e5453fa10410b8b3a9c151a4e1ab5 0 1598184567772 1 connected
540aa2fe460e5453fa10410b8b3a9c151a4e1ab5 16.0.0.146:6379 myself,master - 0 0 1 connected 0-5460
5f1baa4accba21b3a4ebd0008adaeb4a897182b7 16.0.0.148:6380 slave 16d0831427c99d550e346ce69eca6c55a9b0483f 0 1598184568774 5 connected
16d0831427c99d550e346ce69eca6c55a9b0483f 16.0.0.148:6379 master - 0 1598184566769 4 connected 10923-16383
2dfc5edacdbf1a9bd0aeacf0704d283092d60cbc 16.0.0.147:6379 master - 0 1598184569776 2 connected 5461-10922
56c6206790f6dbb49776f0098e246424bb5d65b0 16.0.0.147:6380 slave 2dfc5edacdbf1a9bd0aeacf0704d283092d60cbc 0 1598184565767 3 connected
[root@pc146 ~]# 
  • Check the cluster slots
[root@pc146 ~]# /opt/redis/src/redis-cli cluster slots
1) 1) (integer) 0
   2) (integer) 5460
   3) 1) "16.0.0.146"
      2) (integer) 6379
      3) "540aa2fe460e5453fa10410b8b3a9c151a4e1ab5"
   4) 1) "16.0.0.146"
      2) (integer) 6380
      3) "f308758941d1c76df7a1792bd749d0634929ddb6"
2) 1) (integer) 10923
   2) (integer) 16383
   3) 1) "16.0.0.148"
      2) (integer) 6379
      3) "16d0831427c99d550e346ce69eca6c55a9b0483f"
   4) 1) "16.0.0.148"
      2) (integer) 6380
      3) "5f1baa4accba21b3a4ebd0008adaeb4a897182b7"
3) 1) (integer) 5461
   2) (integer) 10922
   3) 1) "16.0.0.147"
      2) (integer) 6379
      3) "2dfc5edacdbf1a9bd0aeacf0704d283092d60cbc"
   4) 1) "16.0.0.147"
      2) (integer) 6380
      3) "56c6206790f6dbb49776f0098e246424bb5d65b0"
[root@pc146 ~]# 
  • Check the cluster info
[root@pc146 ~]# /opt/redis/src/redis-cli cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:5
cluster_my_epoch:1
cluster_stats_messages_sent:3770
cluster_stats_messages_received:3770
[root@pc146 ~]# 

(8) Tune redis.conf

See (4) Reconfigure redis.conf above, and section 8, Cluster installation (editing the redis.conf configuration file).

  • The key setting is cluster-node-timeout
  • The original value is 15s; it was raised to 20s, 25s, 40s, then 60s
  • At 80s, 5 nodes worked fine, but 6 nodes only worked sometimes
  • At 100s, 6 nodes connected at first, then failures reappeared
  • At 150s, the 6-node test sometimes passed and sometimes still failed
    • @RepeatedTest(10): repeated the 6-node test 10 times; 3 connections succeeded, 7 failed
    • @RepeatedTest(100): repeated the 6-node test 100 times; 34 succeeded, 66 failed (a sketch of this unit test follows this list)
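A sketch of the repeated connectivity test referenced above, assuming JUnit 5 (@RepeatedTest) and the NAT-mapped endpoints from section 15; the class and key names are illustrative, not taken from the original test code.

import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.RepeatedTest;
import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;

import java.util.HashSet;
import java.util.Set;

public class ClusterConnectivityTest {

    @RepeatedTest(10) // run the same round-trip check ten times
    void clusterRoundTrip() throws Exception {
        Set<HostAndPort> nodes = new HashSet<HostAndPort>();
        nodes.add(new HostAndPort("210.22.22.150", 4406)); // 16.0.0.146:6379
        nodes.add(new HostAndPort("210.22.22.150", 4415)); // 16.0.0.146:6380
        nodes.add(new HostAndPort("210.22.22.150", 4425)); // 16.0.0.147:6379
        nodes.add(new HostAndPort("210.22.22.150", 4434)); // 16.0.0.147:6380
        nodes.add(new HostAndPort("210.22.22.150", 4387)); // 16.0.0.148:6379
        nodes.add(new HostAndPort("210.22.22.150", 4396)); // 16.0.0.148:6380
        JedisCluster cluster = new JedisCluster(nodes);
        try {
            cluster.set("probe", "ok"); // goes to whichever master owns the slot
            Assertions.assertEquals("ok", cluster.get("probe"));
        } finally {
            cluster.close();
        }
    }
}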

Plan 2: increase the network bandwidth

  • My personal impression is that the network of the Peng Cheng Laboratory developer cloud (Huawei Kunpeng virtual machines) suffers from bandwidth and latency problems

  • So contact the Peng Cheng Laboratory developer cloud administrators and request a larger network bandwidth allocation

18. Expected success case

  • Development environment: JDK 8, Jedis 2.7.2

  • All 6 nodes connected

<dependency>
    <groupId>redis.clients</groupId>
    <artifactId>jedis</artifactId>
    <version>2.7.2</version>
    <type>jar</type>
    <scope>compile</scope>
</dependency>
import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;

public class Cluster {

    public static void main(String[] args) throws IOException {
        // Create the JedisCluster object; its nodes parameter is a Set of HostAndPort objects
        Set<HostAndPort> nodes = new HashSet<HostAndPort>();
        nodes.add(new HostAndPort("210.22.22.150", 4406)); // 16.0.0.146:6379
        nodes.add(new HostAndPort("210.22.22.150", 4415)); // 16.0.0.146:6380
        nodes.add(new HostAndPort("210.22.22.150", 4425)); // 16.0.0.147:6379
        nodes.add(new HostAndPort("210.22.22.150", 4434)); // 16.0.0.147:6380
        nodes.add(new HostAndPort("210.22.22.150", 4387)); // 16.0.0.148:6379
        nodes.add(new HostAndPort("210.22.22.150", 4396)); // 16.0.0.148:6380
        JedisCluster jedisCluster = new JedisCluster(nodes);
        // Operate on Redis through jedisCluster
        System.out.println("--------- it works ---------");
        System.out.println(jedisCluster.get("key1"));
        System.out.println("--------- success ---------");

//        jedisCluster.set("test3", "Boss Wang, the test passed!");
//        System.out.println(jedisCluster.get("test3"));

        // Close the connection pool
        jedisCluster.close();
    }

}

Run output:

--------- it works ---------
success
--------- success ---------

Process finished with exit code 0


Appendix:
Success screenshots (images omitted).


Update 2020-08-24:

  • Set cluster-node-timeout in redis.conf to 150000 and keep it at that value (screenshot omitted).

  • Then run the coding test as a unit test.


To be continued……

Reposted from blog.csdn.net/frdevolcqzyxynjds/article/details/108196993