Redis master-slave replication: principle, optimization and high availability

One What is master-slave replication

Problems of a single Redis node: machine failure, capacity bottleneck, QPS bottleneck

One master with one slave, or one master with multiple slaves

Used for read/write splitting (see the sketch below)

Used as a data copy (replica)

Used to scale read performance

A master can have multiple slaves

A slave can only have one master

The data flow is unidirectional from the master to the slave
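
As a minimal illustration of read/write splitting, the sketch below writes through the master and reads from a slave with redis-py; the host 10.0.0.101 and ports 6379/6378 are placeholders matching the setup used later in this article, and replication is asynchronous, so a read may briefly return stale data.

```
import redis

# placeholder addresses: master on 6379, read-only slave on 6378
master = redis.Redis(host="10.0.0.101", port=6379, decode_responses=True)
slave = redis.Redis(host="10.0.0.101", port=6378, decode_responses=True)

master.set("greeting", "hello")        # writes always go to the master

# reads are served by the slave; replication is asynchronous,
# so a freshly written key may not be visible for a moment
print(slave.get("greeting"))
print(slave.info("replication")["role"])   # expected: "slave"
```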

Two Replication configuration

2.1 The slaveof command

# 6380 is the slave, 6379 is the master
# executed on 6380
slaveof 127.0.0.1 6379   # asynchronous
slaveof no one           # cancel replication; data replicated earlier is not cleared
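
The same commands can also be issued from a client; a minimal sketch with redis-py, assuming a slave reachable on 127.0.0.1:6380 (calling slaveof() with no arguments sends SLAVEOF NO ONE):

```
import redis

# placeholder address of the node that should become the slave
r = redis.Redis(host="127.0.0.1", port=6380)

r.slaveof("127.0.0.1", 6379)   # start replicating from 127.0.0.1:6379
print(r.info("replication")["role"])

r.slaveof()                    # no arguments -> SLAVEOF NO ONE, stop replicating
```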

 

2.2 Configuration file

slaveof ip port       # on the slave node: configure the master's ip and port
slave-read-only yes   # the slave is read-only; writing to it would make the data inconsistent
```
mkdir -p redis1/conf redis1/data redis2/conf redis2/data redis3/conf redis3/data
vi redis.conf
daemonize no
pidfile redis.pid
bind 0.0.0.0
protected-mode no
port 6379
timeout 0
logfile redis.log
dbfilename dump.rdb
dir /data
# on the slave nodes: replicate from the master at 10.0.0.101:6379
slaveof 10.0.0.101 6379
slave-read-only yes
cp redis.conf /home/redis2/conf/
docker run -p 6379:6379 --name redis_6379 -v /home/redis1/conf/redis.conf:/etc/redis/redis.conf -v /home/redis1/data:/data -d redis redis-server /etc/redis/redis.conf
docker run -p 6378:6379 --name redis_6378 -v /home/redis2/conf/redis.conf:/etc/redis/redis.conf -v /home/redis2/data:/data -d redis redis-server /etc/redis/redis.conf
docker run -p 6377:6379 --name redis_6377 -v /home/redis3/conf/redis.conf:/etc/redis/redis.conf -v /home/redis3/data:/data -d redis redis-server /etc/redis/redis.conf
# run INFO replication in redis-cli on each node to check replication status
info replication
```
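
To confirm that the three containers really replicate, a small check with redis-py is sketched below, assuming the host 10.0.0.101 and the port mapping above (6379 master, 6378 slave):

```
import redis

master = redis.Redis(host="10.0.0.101", port=6379, decode_responses=True)
slave = redis.Redis(host="10.0.0.101", port=6378, decode_responses=True)

info = master.info("replication")
print(info["role"], info["connected_slaves"])   # expected: master 2

master.set("replication-test", "ok")
print(slave.get("replication-test"))            # "ok" once the write has replicated

# the slave is read-only, so writing to it should be rejected
try:
    slave.set("x", "1")
except redis.exceptions.ResponseError as e:
    print("slave rejected the write:", e)
```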

 

 

Four Troubleshooting

Slave failure: move that slave's read traffic to another node

Master failure: promote one of the slaves to be the new master and point the other slaves at it (failover)

Five Common replication problems

1 Read/write splitting

Read traffic is distributed across the slave nodes

Possible problems: replication lag, reading stale/expired data, slave node failure
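
Replication lag can be watched by comparing the master's and the slave's replication offsets from INFO replication; a rough sketch, using the placeholder addresses from above:

```
import redis

master = redis.Redis(host="10.0.0.101", port=6379)
slave = redis.Redis(host="10.0.0.101", port=6378)

master_offset = master.info("replication")["master_repl_offset"]
slave_offset = slave.info("replication")["slave_repl_offset"]

# the difference is how many bytes of the replication stream the slave still has to apply
print("replication lag (bytes):", master_offset - slave_offset)
```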

2 Inconsistent master/slave configuration

Inconsistent maxmemory: the node with the smaller limit evicts keys, so data is lost

Inconsistent data-structure optimization parameters (e.g. the ziplist settings): if they are tuned on the master but not on the slave, memory usage and behaviour differ
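
One way to catch such mismatches early is to compare CONFIG GET output between the master and each slave; a sketch, with an example parameter list:

```
import redis

master = redis.Redis(host="10.0.0.101", port=6379, decode_responses=True)
slave = redis.Redis(host="10.0.0.101", port=6378, decode_responses=True)

# parameters that should normally be identical on master and slaves (example list)
params = ["maxmemory", "maxmemory-policy", "hash-max-ziplist-entries"]

for name in params:
    m_val = master.config_get(name)
    s_val = slave.config_get(name)
    if m_val != s_val:
        print(f"mismatch for {name}: master={m_val} slave={s_val}")
```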

3 Avoiding full resynchronization

The first full sync is unavoidable: keep the master's dataset small and run it at low-peak hours (at night)

Run id mismatch: when the master restarts, its run id changes, which forces a full sync

Replication backlog buffer too small: increase the buffer size via repl-backlog-size
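
The values involved can be inspected and adjusted at runtime; the sketch below reads the master's run id and enlarges the replication backlog (the 64 MB figure is only an example):

```
import redis

master = redis.Redis(host="10.0.0.101", port=6379, decode_responses=True)

# the run id changes on every restart of the master; a changed id forces a full sync
print("run_id:", master.info("server")["run_id"])

# current backlog size, then enlarge it so a partial resync survives longer disconnects
print("backlog:", master.config_get("repl-backlog-size"))
master.config_set("repl-backlog-size", 64 * 1024 * 1024)   # example value: 64 MB
```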

4 Avoiding replication storms

Single-master replication storm: when the master restarts, all of its slaves start a full resync at the same time

 

 

One Master-slave replication and high availability (Sentinel)

# Problems with master-slave replication:
# 1 When the master node fails, a failover is needed; it can be done manually by making one of the slaves the new master
# 2 With master-slave replication only the master accepts writes, so write throughput and storage capacity are limited

Two Architecture

Sentinel performs failure detection, failover, and client notification (a sentinel is in fact a separate process); clients connect to the sentinels rather than to the Redis address directly.


1 Multiple sentinels detect and confirm that the master has a problem

2 The sentinels elect one of themselves as the leader

3 The leader selects one slave to become the new master

4 The remaining slaves are told to become slaves of the new master

5 Clients are notified that the master has changed

6 If the old master comes back, it becomes a slave of the new master

Three Installation and configuration

1 Configure and start the master and slave nodes
2 Configure and start the sentinel nodes that monitor the master (a sentinel is a special Redis process)
3 In a real deployment these should run on multiple machines

# configure and start the sentinel nodes that monitor the master
mkdir -p redis4/conf redis4/data redis5/conf redis5/data redis6/conf redis6/data
vi sentinel.conf
port 26379
daemonize no
dir /data
protected-mode no
bind 0.0.0.0
logfile "redis_sentinel.log"
sentinel monitor mymaster 10.0.0.1 6379 2
sentinel down-after-milliseconds mymaster 30000
sentinel parallel-syncs mymaster 1
sentinel failover-timeout mymaster 180000
docker run -p 26379:26379 --name redis_26379 -v /home/redis4/conf/sentinel.conf:/etc/redis/sentinel.conf -v /home/redis4/data:/data -d redis redis-sentinel /etc/redis/sentinel.conf
docker run -p 26378:26379 --name redis_26378 -v /home/redis5/conf/sentinel.conf:/etc/redis/sentinel.conf -v /home/redis5/data:/data -d redis redis-sentinel /etc/redis/sentinel.conf
docker run -p 26377:26379 --name redis_26377 -v /home/redis6/conf/sentinel.conf:/etc/redis/sentinel.conf -v /home/redis6/data:/data -d redis redis-sentinel /etc/redis/sentinel.conf
redis-sentinel sentinel.conf
# run INFO against a sentinel (redis-cli -p 26379) to check its state
info
The sentinel configuration file is rewritten automatically, and slave nodes are discovered automatically.
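
Once the sentinels are up, their view of the topology can be queried from a client; a sketch with redis-py's sentinel_masters()/sentinel_slaves() helpers (which wrap SENTINEL MASTERS and SENTINEL SLAVES), using the placeholder address from above:

```
import redis

# placeholder address of one sentinel (port 26379 mapped above)
sent = redis.Redis(host="10.0.0.101", port=26379, decode_responses=True)

masters = sent.sentinel_masters()            # SENTINEL MASTERS
print(masters["mymaster"]["ip"], masters["mymaster"]["port"])

for s in sent.sentinel_slaves("mymaster"):   # SENTINEL SLAVES mymaster
    print("slave:", s["ip"], s["port"], s["flags"])
```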

 

Four Client connection

import redis
from redis.sentinel import Sentinel

# connect to the sentinel servers (the host may also be a domain name)
# 10.0.0.101:26379
sentinel = Sentinel([('10.0.0.101', 26379),
                     ('10.0.0.101', 26378),
                     ('10.0.0.101', 26377)
                     ],
                    socket_timeout=5)
print(sentinel)

# get the address of the master server
master = sentinel.discover_master('mymaster')
print(master)

# get the addresses of the slave servers
slave = sentinel.discover_slaves('mymaster')
print(slave)

# write through the master server
# master = sentinel.master_for('mymaster', socket_timeout=0.5)
# w_ret = master.set('foo', 'bar')

# read from a slave server
# slave = sentinel.slave_for('mymaster', socket_timeout=0.5)
# r_ret = slave.get('foo')
# print(r_ret)

 

 

Five Implementation principle

Six Frequently Asked Questions

Origin www.cnblogs.com/Gaimo/p/12121837.html