Redis Master-Slave Replication (Single Machine and Cluster)

I. Redis master-slave replication: single-machine test
1. Install Redis
tar -zxvf redis-2.8.4.tar.gz
cd redis-2.8.4
make && make install
2. Configure the master-slave relationship
The slave server's redis.conf needs the following directive:
slaveof 192.168.1.1 6379  # the master's IP and port
The full configuration used here:
cp redis.conf redis-master-6379.conf
vi redis-master-6379.conf
logfile "/appcom/Redis/redis-2.8.4/redis-master-6379.log"

cp redis.conf redis-slave-6389.conf
vi redis-slave-6389.conf
port 6389
slaveof localhost 6379
logfile "/appcom/Redis/redis-2.8.4/redis-slave-6389.log"

3. Start the master server and the slave server
./src/redis-server redis-master-6379.conf &
[19810] 28 Jan 14:18:55.825 * The server is now ready to accept connections on port 6379
[19810] 28 Jan 14:23:19.918 * Slave asks for synchronization
[19810] 28 Jan 14:23:19.919 * Full resync requested by slave.
[19810] 28 Jan 14:23:19.919 * Starting BGSAVE for SYNC
[19810] 28 Jan 14:23:19.928 * Background saving started by pid 22336
[22336] 28 Jan 14:23:19.947 * DB saved on disk
[22336] 28 Jan 14:23:19.948 * RDB: 6 MB of memory used by copy-on-write
[19810] 28 Jan 14:23:19.985 * Background saving terminated with success
[19810] 28 Jan 14:23:19.986 * Synchronization with slave succeeded
[19810] 28 Jan 14:23:21.038 # Connection with slave ::1:6389 lost.
[19810] 28 Jan 14:23:25.159 * Slave asks for synchronization
[19810] 28 Jan 14:23:25.159 * Full resync requested by slave.
[19810] 28 Jan 14:23:25.159 * Starting BGSAVE for SYNC
[19810] 28 Jan 14:23:25.163 * Background saving started by pid 22399
[22399] 28 Jan 14:23:25.177 * DB saved on disk
[22399] 28 Jan 14:23:25.178 * RDB: 6 MB of memory used by copy-on-write
[19810] 28 Jan 14:23:25.210 * Background saving terminated with success
[19810] 28 Jan 14:23:25.210 * Synchronization with slave succeeded

./src/redis-server redis-slave-6389.conf &
[22327] 28 Jan 14:23:18.915 * The server is now ready to accept connections on port 6389
[22327] 28 Jan 14:23:19.913 * Connecting to MASTER localhost:6379
[22327] 28 Jan 14:23:19.915 * MASTER <-> SLAVE sync started
[22327] 28 Jan 14:23:19.915 * Non blocking connect for SYNC fired the event.
[22327] 28 Jan 14:23:19.916 * Master replied to PING, replication can continue...
[22327] 28 Jan 14:23:19.917 * Partial resynchronization not possible (no cached master)
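
Replication can be sanity-checked by writing a key on the master and reading it back from the slave (the key name here is only illustrative):
./src/redis-cli -p 6379 set foo bar
OK
./src/redis-cli -p 6389 get foo
"bar"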

After the master is shut down, data can still be read from the slave, but the log below keeps appearing, and the slave is not automatically promoted to master:
[7084] 28 Jan 14:04:59.940 * Connecting to MASTER localhost:6379
[7084] 28 Jan 14:04:59.941 * MASTER <-> SLAVE sync started
[7084] 28 Jan 14:04:59.941 # Error condition on socket for SYNC: Connection refused
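
Plain replication has no automatic failover. If needed, the slave can be promoted by hand with SLAVEOF NO ONE (the next section uses Sentinel to automate this):
./src/redis-cli -p 6389 slaveof no one
OK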

II. Using Redis Sentinel for automatic failover: single-machine test
1. Redis installation and instance layout
master localhost 6379
slave1 localhost 6389
slave2 localhost 6399
master-sentinel: localhost 26379
slave1-sentinel: localhost 26389
slave2-sentinel: localhost 26399
2. Redis configuration
Master configuration:
cp redis.conf redis-master-6379.conf
vi redis-master-6379.conf
port 6379
requirepass rd123
masterauth rd123
#rename-command
appendonly yes  # enable AOF persistence
save ""  # disable RDB snapshot save points
slave-read-only yes
logfile "/appcom/Redis/redis-2.8.4/redis-master-6379.log"

vi sentinel-6379.conf
port 26379
sentinel monitor mymaster 127.0.0.1 6379 2  # the master for this sentinel to monitor: <master-name> <ip> <port> <quorum>; <quorum> is the number of sentinels that must agree the master is unreachable before it is marked ODOWN ("objectively down"), so it should not exceed the number of sentinel instances
sentinel auth-pass mymaster rd123
sentinel down-after-milliseconds mymaster 30000  # how long the master must be unreachable before this sentinel marks it SDOWN ("subjectively down")
sentinel parallel-syncs mymaster 1  # how many slaves may be reconfigured to replicate from the new master, and resync, at the same time during a failover
sentinel failover-timeout mymaster 180000  # failover timeout: if the failover makes no progress within this time, the current sentinel considers the attempt failed

slave1 configuration:
cp redis-master-6379.conf redis-slave-6389.conf
vi redis-slave-6389.conf
port 6389
slaveof localhost 6379
logfile "/appcom/Redis/redis-2.8.4/redis-slave-6389.log"

cp sentinel-6379.conf sentinel-6389.conf
vi sentinel-6389.conf
port 26389

slave2 configuration:
cp redis-master-6379.conf redis-slave-6399.conf
vi redis-slave-6399.conf
port 6399
slaveof localhost 6379
logfile "/appcom/Redis/redis-2.8.4/redis-slave-6399.log"

cp sentinel-6379.conf sentinel-6399.conf
vi sentinel-6399.conf
port 26399

3. Startup
First start the master server and the master sentinel:
./src/redis-server --include redis-master-6379.conf &
./src/redis-sentinel sentinel-6379.conf > sentinel-6379.log &
Then start the slave1 server and its sentinel:
./src/redis-server --include redis-slave-6389.conf &
./src/redis-sentinel sentinel-6389.conf > sentinel-6389.log &
Then start the slave2 server and its sentinel:
./src/redis-server --include redis-slave-6399.conf &
./src/redis-sentinel sentinel-6399.conf > sentinel-6399.log &

[45564] 28 Jan 15:03:37.444 * +slave slave 127.0.0.1:6389 127.0.0.1 6389 @ mymaster 127.0.0.1 6379
[45564] 28 Jan 15:03:37.444 * +slave slave 127.0.0.1:6399 127.0.0.1 6399 @ mymaster 127.0.0.1 6379
[45564] 28 Jan 15:04:02.364 * +sentinel sentinel 127.0.0.1:26389 127.0.0.1 26389 @ mymaster 127.0.0.1 6379
[45564] 28 Jan 15:04:19.711 * +sentinel sentinel 127.0.0.1:26399 127.0.0.1 26399 @ mymaster 127.0.0.1 6379
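
Once all three sentinels are up, any of them can be asked for the current master's address (output shown assumes the layout above):
./src/redis-cli -p 26379 sentinel get-master-addr-by-name mymaster
1) "127.0.0.1"
2) "6379"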

Check the master's status:

# ./src/redis-cli -h 127.0.0.1 -p 6379 -a rd123
localhost:6379> info Replication
# Replication
role:master
connected_slaves:2
slave0:ip=127.0.0.1,port=6389,state=online,offset=54505,lag=0
slave1:ip=127.0.0.1,port=6399,state=online,offset=54505,lag=1
master_repl_offset:54505
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:54504

Check slave1's status:
# ./src/redis-cli -h localhost -p 6389 -a rd123
localhost:6389> info Replication
# Replication
role:slave
master_host:127.0.0.1
master_port:6379
master_link_status:up
master_last_io_seconds_ago:2
master_sync_in_progress:0
slave_repl_offset:59720
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

Check slave2's status:
# ./src/redis-cli -h localhost -p 6399 -a rd123
localhost:6399> info Replication
# Replication
role:slave
master_host:127.0.0.1
master_port:6379
master_link_status:up
master_last_io_seconds_ago:0
master_sync_in_progress:0
slave_repl_offset:68701
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

4. Testing
(1) Scenario 1: slave1 goes down
localhost:6389> shutdown
In the sentinel log:
[45794] 28 Jan 15:12:10.335 # +sdown slave 127.0.0.1:6389 127.0.0.1 6389 @ mymaster 127.0.0.1 6379

# ./src/redis-cli -h localhost -p 6379 -a rd123
localhost:6379> info Replication
# Replication
role:master
connected_slaves:1
slave0:ip=127.0.0.1,port=6399,state=online,offset=120536,lag=1
master_repl_offset:120669
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:120668

(2) Scenario 2: the slave recovers
Restart slave1:
./src/redis-server --include redis-slave-6389.conf &
[3] 52287

[45794] 28 Jan 15:15:19.726 * +reboot slave 127.0.0.1:6389 127.0.0.1 6389 @ mymaster 127.0.0.1 6379
[45794] 28 Jan 15:15:19.874 # -sdown slave 127.0.0.1:6389 127.0.0.1 6389 @ mymaster 127.0.0.1 6379

localhost:6379> info Replication
# Replication
role:master
connected_slaves:2
slave0:ip=127.0.0.1,port=6399,state=online,offset=197860,lag=1
slave1:ip=127.0.0.1,port=6389,state=online,offset=197727,lag=1
master_repl_offset:198126
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:198125

(3) Scenario 3: the master goes down
localhost:6379> shutdown
localhost:6379> info Replication

[45564] 28 Jan 15:36:37.710 # +sdown master mymaster 127.0.0.1 6379
[45564] 28 Jan 15:36:37.967 # +new-epoch 1
[45564] 28 Jan 15:36:37.968 # +vote-for-leader 1f6f588c7c28a2176c2886e540a638ce92033e65 1
[45564] 28 Jan 15:36:38.892 # +odown master mymaster 127.0.0.1 6379 #quorum 3/2
[45564] 28 Jan 15:36:39.178 # +switch-master mymaster 127.0.0.1 6379 127.0.0.1 6399
[45564] 28 Jan 15:36:39.178 * +slave slave 127.0.0.1:6389 127.0.0.1 6389 @ mymaster 127.0.0.1 6399
[45564] 28 Jan 15:36:39.180 * +slave slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6399
[45564] 28 Jan 15:37:09.193 # +sdown slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6399

The master role has switched over to slave2 (port 6399):
localhost:6399> info Replication
# Replication
role:master
connected_slaves:1
slave0:ip=127.0.0.1,port=6389,state=online,offset=21724,lag=1
master_repl_offset:21990
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:21989
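
Asking any sentinel for the master address now returns the new master (compare with the same query before the failover):
./src/redis-cli -p 26379 sentinel get-master-addr-by-name mymaster
1) "127.0.0.1"
2) "6399"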

(4) Scenario 4: the old master recovers
./src/redis-server --include redis-master-6379.conf &
[1] 67400

[45564] 28 Jan 15:41:47.608 # -sdown slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6399
[45564] 28 Jan 15:41:57.513 * +reboot slave 127.0.0.1:6379 127.0.0.1 6379 @ mymaster 127.0.0.1 6399

The original master automatically rejoins as a slave; it is not automatically restored to being the master.

localhost:6379> info Replication
# Replication
role:slave
master_host:127.0.0.1
master_port:6399
master_link_status:up
master_last_io_seconds_ago:0
master_sync_in_progress:0
slave_repl_offset:70642
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

localhost:6399> info Replication
# Replication
role:master
connected_slaves:2
slave0:ip=127.0.0.1,port=6389,state=online,offset=93539,lag=0
slave1:ip=127.0.0.1,port=6379,state=online,offset=93539,lag=0
master_repl_offset:93553
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:93552

III. Building a Redis Cluster
1. Download the latest development (unstable) version of Redis from GitHub: https://codeload.github.com/antirez/redis/zip/unstable
2. Install Redis on each node
node1 10.25.22.185 6379
node2 10.25.22.186 6379
node3 10.25.22.187 6379 
3. Modify the configuration on each node:
cluster-enabled yes  # start the instance as a cluster node
cluster-config-file nodes-6379.conf  # cluster state file, maintained by Redis itself
cluster-node-timeout 15000  # ms a node may be unreachable before it is considered failing
logfile "/appcom/Redis/redis-unstable/redis.log"

Start the server on each of the three nodes:
./src/redis-server redis.conf &
[1] 6856
./src/redis-server redis.conf &
[1] 43951
./src/redis-server redis.conf &
[1] 80642

Check the cluster state on node1:
# ./src/redis-cli
127.0.0.1:6379> cluster nodes
af6224cbc9ce9b66e21b90af442678ba096989d9 :0 myself,master - 0 0 0 connected
127.0.0.1:6379> cluster info
cluster_state:fail
cluster_slots_assigned:0
cluster_slots_ok:0
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:1
cluster_size:0
cluster_current_epoch:0
cluster_stats_messages_sent:0
cluster_stats_messages_received:0

Link the cluster's servers together with the CLUSTER MEET command:
127.0.0.1:6379> cluster meet 10.25.22.186 6379
OK
127.0.0.1:6379> cluster meet 10.25.22.187 6379
OK
127.0.0.1:6379> cluster nodes
ed85b32aa566511bf917e8ecdc6150df7449dcf2 10.25.22.187:6379 master - 0 1390897200350 0 connected
af6224cbc9ce9b66e21b90af442678ba096989d9 :0 myself,master - 0 0 0 connected
918fc015490599a93e680893c7e387336dac35bc 10.25.22.186:6379 master - 0 1390897199347 0 connected
127.0.0.1:6379> cluster info
cluster_state:fail
cluster_slots_assigned:0
cluster_slots_ok:0
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:3
cluster_size:0
cluster_current_epoch:0
cluster_stats_messages_sent:23
cluster_stats_messages_received:23

Assign hash slots to the servers in the cluster.
Redis Cluster partitions data by key across hash slots: each key is mapped to a slot automatically (slot = CRC16(key) mod 16384), but which node serves a given slot is not automatic; the cluster administrator has to assign the slots. Per the source code there are 16384 hash slots in total. This is also why cluster_state is still fail above: no slots have been assigned yet.
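
Which slot a key maps to can be checked with the CLUSTER KEYSLOT command (available in cluster-enabled builds); for example, the key "name" used later maps to slot 5798, matching the MOVED error shown below:
127.0.0.1:6379> cluster keyslot name
(integer) 5798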

Edit each node's nodes-6379.conf file: keep only the line containing myself, delete the other records, and append the slot range assigned to that node.
node1: af6224cbc9ce9b66e21b90af442678ba096989d9 :0 myself,master - 0 0 0 connected 0-5000

node2: 918fc015490599a93e680893c7e387336dac35bc :0 myself,master - 0 0 0 connected 5001-10000

node3: ed85b32aa566511bf917e8ecdc6150df7449dcf2 :0 myself,master - 0 0 0 connected 10001-16383
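
As an alternative sketch to editing nodes-6379.conf by hand, the same assignment can be done online with the CLUSTER ADDSLOTS command, which takes individual slot numbers (shown here for node1, using the host from the layout above), avoiding the restart:
./src/redis-cli -h 10.25.22.185 -p 6379 cluster addslots $(seq 0 5000)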

After editing the files, restart the servers.

Re-run CLUSTER MEET to link the nodes again:

127.0.0.1:6379> cluster meet 10.25.22.186 6379
OK
127.0.0.1:6379> cluster meet 10.25.22.187 6379
OK
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:3
cluster_size:3
cluster_current_epoch:0
cluster_stats_messages_sent:29
cluster_stats_messages_received:29
127.0.0.1:6379>

[root@CNSZ141195 redis-unstable]# ./src/redis-cli
127.0.0.1:6379> set name "Make"
(error) MOVED 5798 10.25.22.186:6379
[root@CNSZ141196 redis-unstable]# ./src/redis-cli
127.0.0.1:6379> set name "Make"
OK
127.0.0.1:6379> get name
"Make"

Reposted from blog.csdn.net/qq_20960159/article/details/79004082