Redis (3): Redis master-slave replication

I. Master-slave replication

Master-slave replication: the master node is responsible for writing data and the slave nodes are responsible for reading data, achieving read/write separation and improving Redis availability.

It allows one server to replicate another server. The server being copied is called the master node (master), and the server that replicates from the master is called the slave node (slave).

(Figure: master-slave replication topology, omitted)

Main characteristics of replication:

1. A master can have multiple slaves

2. A slave can have only one master

3. Data flows in one direction only, from master to slave

 

Main purposes of replication:

1. Data redundancy: keeping one or more extra copies of the data ensures Redis high availability

2. Scalability: a single Redis instance has limited performance; replication allows reads to be scaled out, increasing capacity and QPS

 

II. Implementing master-slave replication

Client command: slaveof
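For example, a minimal sketch of using the command (the IP address and ports are illustrative; the command is issued on the instance that should become the slave):

./redis-cli -p 6380 slaveof 127.0.0.1 6379     # make the 6380 instance a slave of 6379
./redis-cli -p 6380 slaveof no one             # stop replicating and become a standalone master again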

 

Configuration file:

Create a new redis-6380.conf and add the following settings:

# 1. Specify which node is the master
slaveof  your-master-ip  your-master-port

# 2. Make the slave node read-only, so reads and writes are separated and the slave's data stays consistent with the master
slave-read-only yes
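After adding these settings, start the slave with the new configuration file and check that replication is up; a quick sanity check might look like this (paths and output are illustrative and vary by environment):

./redis-server redis-6380.conf
./redis-cli -p 6380 info replication | grep -E 'role|master_link_status'
# role:slave
# master_link_status:up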

 

III. Full replication and partial replication

1. run_id: each time Redis starts, it generates a new id that identifies the currently running instance. The slave saves the run_id of its master; if the master restarts, the slave will find on reconnecting that the run_id has changed (the master's ip and port are unchanged, but a changed run_id means the master's data may have changed substantially), so a full replication is triggered, that is, all of the master's data is copied over again. The run_id can be seen in the server section of info:

root@f9eb2360ed36:/usr/local/bin# ./redis-cli -p 6379 info server
# Server
redis_version:4.0.14
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:9ac979c18029eef1
redis_mode:standalone
os:Linux 3.10.0-514.26.2.el7.x86_64 x86_64
arch_bits:64
multiplexing_api:epoll
atomicvar_api:atomic-builtin
gcc_version:6.3.0
process_id:1
run_id:49dbc223587cbdadd158adc21816979722b65ae1
tcp_port:6379
uptime_in_seconds:105535
uptime_in_days:1
hz:10
lru_clock:1086433
executable:/data/redis-server
config_file:

2. Offset: whenever the master executes a command that changes data (adds, updates, or deletes), it records the change; the offset marks how much data has changed on the master. As the master changes data, its offset grows accordingly. When the master sends the data-change commands to the slave, it also sends the offset, so the offsets of the master and slave can be compared to see whether they are out of sync.

The offsets of the master and slave can be viewed on the master with ./redis-cli -p 6379 info replication
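For reference, a sketch of the relevant part of the info replication output on the master (the values are purely illustrative; the slave0 offset and master_repl_offset are the numbers to compare):

./redis-cli -p 6379 info replication
# Replication
role:master
connected_slaves:1
slave0:ip=127.0.0.1,port=6380,state=online,offset=5642,lag=0
master_repl_offset:5642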

3.1 Full replication

Process:

1. The slave sends PSYNC to the master. PSYNC takes two parameters: the first is the run_id and the second is the offset. On the first replication the slave knows neither the master's run_id nor the offset, so it sends ? -1.

2. When the master receives the command, it can tell from ? -1 that this is the first replication, so it replies to the slave with its own run_id and offset.

3. The slave saves the master's basic information (run_id and offset).

4-5-6. The master performs a bgsave snapshot and transfers the generated RDB file to the slave. Any data-change commands the master executes while the snapshot is being generated and transferred are recorded in a buffer and sent to the slave afterwards.

7-8. The slave clears all of its old data, loads the RDB file to restore the master's data, and then applies the newly changed data it received from the buffer.
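Put together, the first synchronization looks roughly like this (a simplified sketch of the exchange, not actual wire output):

slave  -> master : PSYNC ? -1
master -> slave  : +FULLRESYNC <run_id> <offset>
master           : bgsave generates the RDB file, new writes go to the buffer
master -> slave  : RDB file
master -> slave  : buffered write commands
slave            : clear old data, load the RDB, apply the buffered commands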

 

Notes:

  •  Performance overhead of full replication: 1. the time for bgsave to generate the RDB file; 2. the time to transfer the RDB file between the nodes over the network; 3. the time for the slave to clear its old data; 4. the time to load the RDB file; 5. a possible AOF rewrite afterwards.
  •  repl_back_buffer, the buffer for data-change commands: when Redis uses the Linux fork() call to start a child process for other work (for example generating an RDB file during bgsave, or generating an AOF file during bgrewriteaof), the main process (the one handling client commands) temporarily stores any data-change commands it executes in the meantime in this buffer. The buffer's space is limited (1mb by default; it can be adjusted with repl-backlog-size in the configuration file, as sketched after this list).
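A minimal sketch for enlarging the buffer (the 16mb value is only an illustration; size it according to your write volume and the longest disconnection you want to tolerate):

# in the configuration file
repl-backlog-size 16mb

# or at runtime; this assumes the server accepts the mb suffix here, otherwise pass the value in bytes
./redis-cli -p 6379 config set repl-backlog-size 16mb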

3.2 Partial replication

The problem partial replication solves: in a real environment the network between the master and slave may fluctuate, and the connection between them can be lost (while neither Redis instance shuts down). When the connection comes back, a full replication could be used to re-synchronize the slave with the master, but full replication has a high performance cost, and the slave may already hold a lot of data that the master has not changed in the meantime and that does not need to be synchronized again, so a full replication would waste resources unnecessarily. Partial replication exists to solve this problem.

Process:

1. The connection between the slave and the master is lost.

2. The master keeps executing data-change commands and records them in the repl_back_buffer buffer.

3. The slave reconnects to the master.

4. The slave automatically sends the command psync {offset} {run_id}, i.e. the master run_id it has saved and its own offset, to the master.

5. The master receives the offset and run_id sent by the slave and compares its own current offset with the received one. If the gap between the two offsets is larger than the data held in the repl_back_buffer, the slave has missed more data during the disconnection than the buffer can cover, so a full replication is needed; otherwise a partial replication is performed.

6. The data in the master's buffer is synchronized to the slave, so only the missing part of the data is replicated, which greatly reduces the performance overhead.
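A simplified sketch of the exchange after reconnection (not actual wire output):

slave  -> master : PSYNC <run_id> <offset>
master -> slave  : +CONTINUE                     (offset still covered by repl_back_buffer)
master -> slave  : the commands from the buffer starting at <offset>

If the offset has already fallen out of the buffer, the master replies with +FULLRESYNC instead and a full replication takes place.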

 

IV. Fault handling for master and slave nodes

  1. Automatic failover when a fault occurs: when a node fails and stops serving, another node automatically takes over the service that node provided, which gives a high-availability effect
  2. Slave failure: if a slave node fails, it can no longer serve reads for the clients attached to it. The fix is to move those clients to another available slave, but when moving them you should consider how much client load that slave can take
  3. Master failure: if the master fails, clients that use the master for reads and writes can no longer work, while clients that only read from slaves can continue. The fix is to pick one of the slaves and promote it to master, connect the original master's clients to the new master, and then attach the remaining slaves to the new master (a sketch of this manual promotion follows this list)
  4. Master-slave replication does help deal with failures, but it cannot fail over automatically; it requires manual operations and is quite cumbersome. To get automatic failover another feature is needed: Redis provides Sentinel for this
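A minimal sketch of the manual promotion described above, using the slaveof command (IPs and ports are illustrative):

# promote one slave (6380) to be the new master
./redis-cli -p 6380 slaveof no one

# point the remaining slaves (for example 6381) at the new master
./redis-cli -p 6381 slaveof 127.0.0.1 6380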

 

V. Common problems with master-slave replication

  1. Read/write separation: read and write commands from clients are split, with writes handled by the master and reads handled by the slaves. This reduces the load on the master and increases read capacity, but it also causes some problems:

  • The blocking delay of data replication between master and slave can lead to inconsistency: the master performs a write first, but because of replication delay a read on the slave may return data that differs from the master (the lag can be observed by comparing offsets, as sketched after this list)
  • Reading expired data: replication copies keys with expiration times to the slave, but the slave cannot delete data itself, even expired data. So the master may have already deleted an expired key while, due to replication delay, the slave still has it, and a client reading from the slave gets expired data
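A quick way to see the replication lag is to compare the master's offset with the slave's offset in info replication (illustrative output; the difference between master_repl_offset and the slave's offset is the unreplicated data in bytes):

./redis-cli -p 6379 info replication | grep -E 'master_repl_offset|slave0'
# slave0:ip=127.0.0.1,port=6380,state=online,offset=871203,lag=0
# master_repl_offset:871460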

    2. Inconsistent master/slave configuration, which causes problems such as:

  • For example, if the maxmemory parameter is configured differently, say 2GB on the master and 1GB on the slave, data may be lost; other configuration mismatches can cause similar problems (a quick way to compare the setting is sketched below)
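A minimal check (ports are illustrative) is simply to read the same parameter from both nodes and compare:

./redis-cli -p 6379 config get maxmemory    # master
./redis-cli -p 6380 config get maxmemory    # slave, should return the same value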

    3. Avoiding full replication: full replication has a high performance cost, so it should be avoided as much as possible.

  • A full replication always happens the first time the master-slave relationship is established; you can keep Redis's maxmemory reasonably small so the RDB step is faster, or do it during the clients' off-peak hours, for example late at night
  • A full replication also always happens when the master run_id saved on the slave no longer matches (for example after a master restart); this can largely be avoided through failover, for example Redis Sentinel and Redis Cluster
  • A full replication also happens when the offset gap between the master and slave is larger than the data held in the command buffer repl_back_buffer, as described in the partial-replication process above; it can largely be avoided by increasing repl-backlog-size, i.e. the buffer size, in the configuration file (a rough sizing estimate follows this list)
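A rough way to size the buffer (the numbers are only an illustration): if the master writes about 1 MB of command data per second and you want to survive disconnections of up to 60 seconds without a full replication, the buffer must hold at least 1 MB/s x 60 s = 60 MB, so a setting like the following would cover it under those assumptions:

repl-backlog-size 64mb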

    4. Avoiding replication storms:

  • Replication storm caused by a single master: when the master restarts, it must do a full replication with every one of its slaves, which is very costly. The topology can be changed to a tree-like structure, where the master keeps a master-slave relationship with only a few slaves, and those slaves in turn act as masters for the other slaves (see the sketch after this list)
  • Replication storm caused by a single machine hosting the masters: if one machine is dedicated to running several master nodes while the slaves run on other machines, then as soon as that machine goes down and restarts, full replications are triggered between all of those masters and their slaves, causing a very large performance overhead. The masters can be spread across several machines instead, or automatic failover can be used to promote a slave to master for high availability
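A minimal sketch of the tree-like (cascading) topology mentioned above, built with the slaveof command (IPs and ports are illustrative):

# level 1: slave 6380 replicates directly from the master 6379
./redis-cli -p 6380 slaveof 127.0.0.1 6379

# level 2: slave 6381 replicates from 6380 instead of from the master,
# so a master restart only forces a direct full resync with 6380
./redis-cli -p 6381 slaveof 127.0.0.1 6380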

 

Thanks for your support, and thanks for reading.

Reference: https://my.oschina.net/ProgramerLife/blog/2254321

 

Source: www.cnblogs.com/haoprogrammer/p/11077121.html