Redis Primer (Part 2)

Foreword

Redis Primer (Part 1) gave a brief introduction to Redis's history, its installation and deployment, and its basic data structures and APIs. This part covers persistence, high availability, Redis Cluster, and related distributed-systems knowledge.

Persistence

Redis is an in-memory database: all of its data is stored in memory, so a power outage or crash would lose it. Redis therefore ships with two built-in persistence mechanisms: RDB persistence and AOF persistence.

RDB

RDB persistence saves a point-in-time snapshot of the process's data to disk; in other words, it writes all the data currently in Redis memory to the hard disk. An RDB save can be triggered manually or automatically.
Manual trigger

Two commands, save and bgsave, take an RDB snapshot manually.

- save: blocks the main Redis process until the RDB save completes. save is effectively deprecated and not recommended in production environments.
- bgsave: Redis forks a child process to write the RDB snapshot, so only the brief fork itself blocks. bgsave is the recommended way to take a snapshot, and all RDB operations inside Redis now go through bgsave.

127.0.0.1:26379> save
OK
127.0.0.1:26379> bgsave
Background saving started
Automatic trigger

- Via the save configuration directive, e.g. save m n: when the dataset is modified n times within m seconds, bgsave is triggered automatically.
- When a slave node performs a full resynchronization, the master automatically runs bgsave and sends the generated RDB file to the slave.
- Executing debug reload to reload Redis also triggers a save operation. (The redis debug command provides several very useful debugging functions.)
- By default, when the shutdown command is issued and AOF persistence is not enabled, if an automatic RDB save policy is configured, an RDB save is performed automatically before exit.
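The automatic save points above are set in redis.conf; the values below are the classic defaults shipped with Redis (check your own redis.conf, as distributions may differ):

```conf
# save <seconds> <changes>: trigger an automatic bgsave when at least
# <changes> keys were modified within <seconds>. Several rules can coexist.
save 900 1       # after 900 s if at least 1 key changed
save 300 10      # after 300 s if at least 10 keys changed
save 60 10000    # after 60 s if at least 10000 keys changed
```

Setting `save ""` disables automatic snapshots entirely.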
Principle

- When bgsave is executed, the Redis parent process first checks whether an RDB/AOF child process is already running; if one is, the bgsave command returns immediately.
- The parent process forks a child process; the parent is blocked for the duration of the fork. The latest_fork_usec field of the info stats command reports how long the most recent fork took, in microseconds.

127.0.0.1:26379> info stats
# Stats
total_connections_received:1
...
latest_fork_usec:5391

- Once the fork completes, bgsave returns "Background saving started" and no longer blocks the parent process, which continues to serve other commands.
- The child process creates the RDB file, generating a temporary snapshot file from the parent's memory, and atomically replaces the original file when it finishes. The lastsave command returns the time of the last RDB save, which corresponds to the rdb_last_save_time statistic.
- The child process signals the parent on completion, and the parent updates its statistics; see the rdb_* fields under info Persistence.

127.0.0.1:26379> lastsave
(integer) 1572423635
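As a small illustration of reading latest_fork_usec programmatically, here is a sketch that parses the body of an INFO reply. The reply string is hard-coded from the transcript above rather than fetched from a live server, and `parse_info` is a name of my own.

```python
def parse_info(raw: str) -> dict:
    """Parse the colon-separated body of a Redis INFO reply into a dict."""
    stats = {}
    for line in raw.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip section headers like "# Stats" and blank lines
        key, _, value = line.partition(":")
        stats[key] = value
    return stats

reply = """# Stats
total_connections_received:1
latest_fork_usec:5391
"""
stats = parse_info(reply)
fork_usec = int(stats["latest_fork_usec"])
print(fork_usec)  # -> 5391, i.e. the last fork took ~5.4 ms
```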
Common Configuration

Configuration | Explanation |
---|---|
save | Auto-save when the dataset is modified n times within m seconds |
dbfilename | Name of the RDB file, which is saved under the path configured by dir |

The RDB file name can be changed at runtime with config set dbfilename; the next RDB save will be written to the new file name.
Experience

- Enabling RDB file compression can significantly reduce the file size.
- If the disk fails, the RDB directory and file name can be changed at runtime with the config set command.
- An RDB file loads much faster than an AOF file.
- RDB saves are not real-time, so RDB alone cannot be used for data that must never be lost.
- Every RDB save writes the full in-memory dataset, so RDB is not suitable for scenarios with large datasets and frequent saves.
AOF

AOF (append-only file) persistence logs every write command to an independent file; on restart, the commands in the AOF file are re-executed to recover the data. Because AOF stores commands rather than data, the AOF file is generally larger than the RDB file.

Principle

- Every write command is appended to aof_buf (a buffer).
- The AOF buffer is flushed to the file on disk according to the configured policy. There are three AOF file-synchronization policies.
- As the AOF file grows, it must periodically be rewritten to compact it.
- When the Redis server restarts, the AOF file can be loaded to recover the data.
Buffer synchronization strategy

- Real-time sync: configured with appendfsync always. Each command is written to the buffer and fsync is called to force it to disk.
- Per-second sync: configured with appendfsync everysec. Each command is written with the write system call; a dedicated thread calls fsync once per second.
- Let the operating system decide: configured with appendfsync no. Commands are written to the buffer but fsync is never called; the operating system is responsible for flushing, generally on a cycle of up to 30 seconds.

- The write operation relies on the delayed-write mechanism: the Linux kernel provides a page cache to improve disk I/O performance, so write returns as soon as the data is in the cache. The actual flush depends on OS scheduling, for example when the page cache fills up or a time threshold is reached. If the system crashes before the flush, the buffered data is lost.
- fsync operates on a single file (such as the AOF file) and forces a hard sync; it blocks until the data has been written to disk, guaranteeing durability.
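The three policies can be sketched in Python. This is a toy model under my own simplifying assumptions (the class name and the `cron_tick` hook are invented; real Redis flushes aof_buf from its event loop and a background fsync thread), but it shows where write and fsync happen for each policy:

```python
import os
import tempfile

class AofWriter:
    """Toy model of AOF flushing: 'always' | 'everysec' | 'no'."""

    def __init__(self, path, policy="everysec"):
        self.f = open(path, "ab")
        self.policy = policy
        self.fsyncs = 0  # count forced syncs, for illustration only

    def append(self, command: bytes):
        self.f.write(command + b"\r\n")   # write() -> OS page cache only
        if self.policy == "always":
            self.f.flush()
            os.fsync(self.f.fileno())     # force to disk on every command
            self.fsyncs += 1

    def cron_tick(self):
        """Stands in for the ~once-per-second background job."""
        if self.policy == "everysec":
            self.f.flush()
            os.fsync(self.f.fileno())
            self.fsyncs += 1
        # policy "no": never fsync; the OS decides when to flush

path = os.path.join(tempfile.mkdtemp(), "appendonly.aof")
w = AofWriter(path, policy="everysec")
w.append(b"*3\r\n$3\r\nSET\r\n$1\r\nk\r\n$1\r\nv")
w.cron_tick()    # one second elapsed -> one fsync
print(w.fsyncs)  # -> 1
```

With `always`, a crash loses at most the in-flight command; with `everysec`, up to about one second of writes; with `no`, everything still sitting in the page cache.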
Rewriting mechanism

As commands keep being appended, the AOF file grows without bound. To solve this, Redis introduced the AOF rewrite mechanism to compact the command log. An AOF rewrite converts the data currently in the Redis process into write commands and writes them to a new AOF file. Rewriting the AOF periodically not only reduces disk usage, it also speeds up AOF loading when Redis restarts.

During a rewrite, the new AOF can drop data that has already expired, drop invalid commands from the old AOF (e.g. keys added and later deleted), and merge multiple write commands into one (e.g. several writes to the same set can be combined into a single command).
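The compaction idea can be sketched with a toy model. This is a hypothetical simplification of my own, restricted to string SET/DEL commands (real Redis rewrites every data type and also splits very large aggregates across several commands):

```python
def rewrite_aof(commands):
    """Toy AOF rewrite: replay the history to rebuild the final state,
    then emit one SET per surviving key instead of the full log."""
    state = {}
    for cmd, *args in commands:
        if cmd == "SET":
            key, value = args
            state[key] = value
        elif cmd == "DEL":
            state.pop(args[0], None)
    # The compacted log: one command per key that still exists.
    return [("SET", k, v) for k, v in state.items()]

history = [
    ("SET", "a", "1"),
    ("SET", "a", "2"),   # overwrites the previous SET -> only one survives
    ("SET", "b", "9"),
    ("DEL", "b"),        # "b" was added then deleted -> dropped entirely
]
print(rewrite_aof(history))  # -> [('SET', 'a', '2')]
```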
Manual trigger: call the bgrewriteaof command directly.

127.0.0.1:26379> bgrewriteaof
Background append only file rewriting started

Automatic trigger: the auto-aof-rewrite-min-size and auto-aof-rewrite-percentage parameters determine when a rewrite is triggered automatically.

The rewrite proceeds as follows:

- An AOF rewrite is requested. If an AOF rewrite child process is already running, the request returns immediately; if a bgsave is in progress, the rewrite is delayed until the bgsave completes.
- The parent process forks a child process, at the same cost as the bgsave fork.
- After the fork, the parent continues to serve other commands. All write commands are still appended to the AOF buffer and synced to disk according to the appendfsync policy, so the original AOF mechanism remains correct.
- Because fork uses copy-on-write, the child process only sees the memory as it was at the moment of the fork. The child writes its memory snapshot to the new AOF file using the command-merging rules. The amount of data written to disk per batch is controlled by aof-rewrite-incremental-fsync (32MB by default), to avoid the blocking caused by flushing too much data in a single sync.
- Because the parent keeps serving commands, Redis uses an "AOF rewrite buffer" to hold the writes that arrive while the new AOF file is being generated, so that none of this new data is lost.
- After the new AOF file is fully written, the child process signals the parent.
- The parent updates its statistics; see the aof_* fields under info persistence.
Persistence file loading

- If AOF persistence is enabled and the AOF file exists, the AOF file is loaded first.
- If AOF is disabled or the AOF file does not exist, the RDB file is loaded.
- If the AOF/RDB file loads successfully, Redis starts successfully.
- If the AOF/RDB file contains errors, Redis fails to start and prints an error message.
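The load-order decision above can be sketched as a small function (the function name and flags are mine, for illustration):

```python
def choose_startup_file(aof_enabled: bool, aof_exists: bool, rdb_exists: bool):
    """Which persistence file Redis loads at startup, per the rules above."""
    if aof_enabled and aof_exists:
        return "AOF"          # AOF wins whenever it is enabled and present
    if rdb_exists:
        return "RDB"          # otherwise fall back to the RDB snapshot
    return None               # nothing to load: start with an empty dataset

print(choose_startup_file(True, True, True))    # -> AOF
print(choose_startup_file(False, True, True))   # -> RDB
print(choose_startup_file(True, False, True))   # -> RDB
```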
High Availability

Redis supports master-slave replication, but when the master fails, failover must be performed manually, and manual failover does not provide real service availability.

- Before version 2.8, Redis replication used the sync command: both the first master-slave synchronization and every resynchronization after a reconnect were full syncs, which is very expensive.
- From 2.8 to 4.0, replication uses the psync command, whose main addition is partial resynchronization after a reconnect, based on replication offsets.
- From version 4.0 on, psync is still used, but incremental replication is further optimized compared to 2.8; this version can be called psync2, and the 2.8 version psync1.

For a detailed comparison of the replication flows, see "The difference between psync1 and psync2 in Redis master-slave replication".
Sentinel

Redis Sentinel consists of several Sentinel nodes and Redis data nodes. Each Sentinel node monitors the data nodes and the other Sentinel nodes; when it finds a node unreachable, it marks that node as offline. If the offline node is the master, the Sentinel "negotiates" with the other Sentinels; once a majority of Sentinel nodes consider the master unreachable, they elect one Sentinel node to carry out the automatic failover and notify the Redis application side of the change in real time. The whole process is fully automatic, with no human intervention, so this architecture effectively solves Redis's availability problem.

Sentinel only adds a monitoring process on top of master-slave replication, so the actual data architecture is unchanged.

Redis 2.8 shipped Sentinel 2, a rewrite of the initial Sentinel implementation that uses stronger and simpler prediction algorithms. Sentinel 1 shipped with Redis 2.6 and has been deprecated.
Process

- Sentinel nodes monitor the master regularly.
- When the master fails, once the Sentinel nodes agree that the master has failed, they elect one Sentinel node as the leader responsible for failover.
- The leader Sentinel elects one slave node as the new master and runs slaveof no one on it to promote it.
- The remaining slave nodes are repointed to the new master by running slaveof masterip masterport, after which they resynchronize from it.

From Redis 4.0 on, the full resynchronization on master-slave switchover is avoided.
Installation and deployment

For building a Sentinel deployment, see my other post "Exploring high availability for the Windows version of Redis", which describes setting up Sentinel on Windows; on Linux it is much the same.

Redis service configuration

Configuration name | Configuration instructions |
---|---|
slaveof | Ip and port of the master node |
requirepass | The current node's password |
masterauth | The master node's password |

The master and slave passwords must be set to the same value; otherwise, after a master-slave switchover, password mismatches may prevent slaves from connecting to the new master.
Configuring Sentinel

A complete Sentinel configuration file looks like this:
port 26379
daemonize yes
logfile "26379.log"
dir "/opt/soft/redis/data"
sentinel myid 5511e27289c117b38f42d2b8edb1d5446a3edf68
sentinel monitor mymaster 127.0.0.1 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 10000
sentinel auth-pass mymaster test1
sentinel config-epoch mymaster 0
sentinel leader-epoch mymaster 0
# two discovered slave nodes
sentinel known-slave mymaster 127.0.0.1 6380
sentinel known-slave mymaster 127.0.0.1 6381
# two discovered sentinel nodes
sentinel known-sentinel mymaster 127.0.0.1 26380 282a70ff56c36ed56e8f7ee6ada74124140d6f53
sentinel known-sentinel mymaster 127.0.0.1 26381 f714470d30a61a8e39ae031192f1feae7eb5b2be
sentinel current-epoch 0
- Main monitoring configuration: sentinel monitor <master-name> <ip> <port> <quorum>
  - master-name: an alias for the master node.
  - ip and port: the ip and port of the master node.
  - quorum: the number of Sentinel votes needed to decide that the master node is unreachable.

One Sentinel can monitor several master nodes; simply give each master a different alias.

- Sentinel id: sentinel myid ID. On first start, Sentinel generates a unique 40-character id and writes it to the configuration file.
- Other optional Sentinel settings: sentinel <option_name> <master_name> <option_value>. All other settings begin with sentinel, followed by the option name, the alias of the monitored master, and the option value.
  - down-after-milliseconds: each Sentinel node periodically sends the ping command to the other Sentinel and Redis nodes to check that they are reachable; if a node has not responded within the configured time, it is considered unreachable, also known as subjectively down. Format: sentinel down-after-milliseconds <master-name> <times>, where times is the timeout in milliseconds.
  - parallel-syncs: once the Sentinel nodes agree that the master has failed, the elected Sentinel leader performs the failover and promotes a new master, and the remaining slaves replicate from it. If many slaves resynchronize at once, a lot of network bandwidth is consumed, especially before Redis 4.0, where every master-slave switchover required a full resynchronization. Format: sentinel parallel-syncs <master-name> <nums>, where nums is the number of parallel syncs; with a value of 1, the slaves synchronize one at a time, in turn.
  - failover-timeout: how long to wait before retrying after a failed failover. Format: sentinel failover-timeout <master-name> <times>, where times is the retry interval in milliseconds.
  - auth-pass: if the Redis nodes have a password configured, the Sentinel nodes must be configured with the password as well. Note that the master and all slaves must share the same password.
  - notification-script: when Sentinel events of warning level occur during a failover (e.g. -sdown: subjectively down, -odown: objectively down), the script at the configured path is triggered and passed the event parameters; the script can then send notifications by email, SMS, or other means. Format: sentinel notification-script <master-name> <script-path>, where script-path is the path of the script.
  - client-reconfig-script: when a master-slave switchover happens during a failover, this script can be invoked to perform specific tasks, such as notifying applications of the new master's location. Format: sentinel client-reconfig-script <master-name> <script-path>.

Subjectively down: every second, each Sentinel sends the ping command to the master, the slaves, and the other Sentinel nodes as a heartbeat; when a node does not respond within down-after-milliseconds, that Sentinel considers the node unreachable, i.e. subjectively down.

Objectively down: when a Sentinel considers the master subjectively down, it asks the other Sentinel nodes to confirm via the sentinel is-master-down-by-addr command. When at least quorum Sentinels consider the master unreachable, the master is considered objectively down (most Sentinels agree it is down); only when the master is objectively down does the Sentinel leader start the failover.
Dynamically modify the configuration

Sentinel nodes, like Redis nodes, support dynamic configuration changes: sentinel set <master_name> <option_name> <option_value> modifies the configuration for the specified master on the current Sentinel node.
Configuration Tips

- Multiple Sentinel nodes should not be deployed on the same physical machine.
- Deploy at least three Sentinel nodes, and an odd number of them, because electing the Sentinel leader requires the votes of more than half of the Sentinel nodes.
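The "more than half" rule can be made concrete with a tiny sketch (the function name is mine):

```python
def leader_votes_needed(num_sentinels: int) -> int:
    """Votes required to elect the Sentinel leader: a strict majority."""
    return num_sentinels // 2 + 1

# With 3 sentinels, 2 votes elect a leader, so one node may fail.
print(leader_votes_needed(3))  # -> 2
# With 4 sentinels, 3 votes are needed: still only one failure tolerated,
# which is why an even count adds cost without adding fault tolerance.
print(leader_votes_needed(4))  # -> 3
```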
Clusters

Redis Cluster is the official distributed solution for Redis, launched in version 3.0.

Principle

A Redis cluster stores its key-value data by sharding. The two common partitioning schemes are hash partitioning and sequential (range) partitioning; Redis uses hash partitioning to spread data evenly. Internally, Redis defines 16384 virtual slots, numbered 0 to 16383, which are distributed among the Redis nodes. The slot assignment must be completed before the cluster can go online.

When a Redis node is assigned its virtual slots, it announces them to the other nodes via cluster messages, so every Redis node keeps an up-to-date copy of the slot map.
Cluster command execution

When a client sends a command to a cluster node, the node computes which slot the key belongs to. If the slot is its own, it executes the command directly; if the slot belongs to another node, it returns a MOVED redirection error, and the client re-sends the command to the node indicated by the MOVED reply.
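The slot computation can be sketched as follows. Redis Cluster hashes keys with CRC16 (the XMODEM variant, polynomial 0x1021) modulo 16384; this sketch ignores hash tags ({...}) for brevity and uses a bitwise implementation for clarity, not speed:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16/XMODEM, the checksum Redis Cluster uses for key hashing."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else crc << 1
            crc &= 0xFFFF
    return crc

def key_slot(key: bytes) -> int:
    """Slot for a key: CRC16(key) mod 16384 (hash tags not handled)."""
    return crc16_xmodem(key) % 16384

# Standard CRC16/XMODEM check value for "123456789" is 0x31C3:
assert crc16_xmodem(b"123456789") == 0x31C3
print(key_slot(b"123456789"))  # -> 12739
```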
Re-sharding

When a Redis cluster is re-sharded, virtual slots and their data are migrated to target nodes; this migration does not block serving new command requests.

ASK error

While a slot is being migrated, part of its data may already be on the new node while the rest is still on the old node. Redis Cluster handles this gracefully with the ASK redirection: the old node replies with an ASK error pointing at the migration target, and the client re-executes the command on the new node.
Cluster Setup

- Prepare the configuration
- Start all Redis nodes
- Node handshake to discover the cluster
- Virtual slot allocation
- Cluster goes online
- Build a master-slave cluster

We build a cluster of three Redis nodes. The Redis data directory is set to the data directory under the Redis root; all RDB files, AOF files, logs, and configuration data are stored there.

Preparing the configuration

Prepare three configuration files named redis-{port}.config. For example, the node on port 7379 is configured as follows; 7380 and 7381 are configured similarly.

port 7379
pidfile /var/run/redis_7379.pid
logfile "log/redis-7379.txt"
dbfilename dump-7379.rdb
dir ./data/
appendfilename "appendonly-7379.aof"
# enable cluster mode
cluster-enabled yes
# node timeout, in milliseconds
cluster-node-timeout 15000
# cluster internal configuration file
cluster-config-file "nodes-7379.conf"
Start the nodes

Start the three Redis nodes:

jake@Jake-PC:~/tool/demo/redis-cluster/redis$ src/redis-server data/redis-7379.config
jake@Jake-PC:~/tool/demo/redis-cluster/redis$ src/redis-server data/redis-7380.config
jake@Jake-PC:~/tool/demo/redis-cluster/redis$ src/redis-server data/redis-7381.config

Since this is the first start and no cluster configuration exists yet, each node creates a default cluster configuration file nodes-{port}.conf.

jake@Jake-PC:~/tool/demo/redis-cluster/redis/data$ ls
appendonly-7379.aof appendonly-7381.aof nodes-7380.conf redis-7379.config redis-7380.config redis-7381.config
appendonly-7380.aof nodes-7379.conf nodes-7381.conf redis-7379.txt redis-7380.txt redis-7381.txt

After a successful start, the log shows "Running in cluster mode".

Node handshake
The node handshake is the process by which cluster nodes communicate with each other via the Gossip protocol and become aware of one another. It only needs to be initiated from the client with the cluster meet {ip} {port} command.

127.0.0.1:7379> cluster meet 127.0.0.1 7380
127.0.0.1:7379> cluster meet 127.0.0.1 7381

After the handshake completes, cluster nodes shows the current cluster members:

127.0.0.1:7379> cluster nodes
ffff2fe734c1ae5be4f66d574484a89f8bd303f3 127.0.0.1:7379@17379 myself,master - 0 1572506163000 0 connected
1d3f7bd0d705ce2926ccc847b4323fcfbfe29f53 127.0.0.1:7381@17381 master - 0 1572506162658 2 connected
36f26b6c6a87202a4a29eba4daf7bf2ff47e2914 127.0.0.1:7380@17380 master - 0 1572506163689 1 connected

And cluster info shows the current cluster state:

127.0.0.1:7379> cluster info
cluster_state:fail
cluster_slots_assigned:0
cluster_slots_ok:0
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:3
cluster_size:0
cluster_current_epoch:2
cluster_my_epoch:0
cluster_stats_messages_ping_sent:84
cluster_stats_messages_pong_sent:88
cluster_stats_messages_meet_sent:2
cluster_stats_messages_sent:174
cluster_stats_messages_ping_received:88
cluster_stats_messages_pong_received:86
cluster_stats_messages_received:174

Reading or writing data at this point returns an error:

127.0.0.1:7379> set hello redis-cluster
(error) CLUSTERDOWN Hash slot not served
127.0.0.1:7379> get hello
(error) CLUSTERDOWN Hash slot not served

As mentioned under Cluster Setup, slot allocation must be completed before the cluster goes online. cluster_slots_assigned is the number of assigned virtual slots; it is currently 0, so the next step is to allocate the virtual slots.

Virtual slot allocation
The CLUSTER ADDSLOTS <slot> [slot ...] command assigns virtual slots, but the native Redis command only accepts explicit slot numbers, one or more at a time; there is no way to assign a whole range of slots directly. You can either patch the Redis source to support ranges, or write a script to assign slots in bulk.

Bulk slot allocation

On Linux this can be scripted in shell; on Windows, in PowerShell. PowerShell natively supports the m..n range syntax to generate an array from m to n, which is very convenient. I am not very familiar with shell scripting on Linux and did not find a one-dimensional array initialization as concise as PowerShell's or Python's.

PowerShell Core (PowerShell 6.0) has been released and is cross-platform, so below we allocate the slots in bulk with a PowerShell script; first, PowerShell has to be installed on Linux.

My machine runs Ubuntu 18.04; register the Microsoft repository as superuser first. After installation, PowerShell can be updated with sudo apt-get upgrade powershell.

# Download the Microsoft repository GPG keys
wget -q https://packages.microsoft.com/config/ubuntu/18.04/packages-microsoft-prod.deb
# Register the Microsoft repository GPG keys
sudo dpkg -i packages-microsoft-prod.deb
# Update the list of products
sudo apt-get update
# Enable the "universe" repositories
sudo add-apt-repository universe
# Install PowerShell
sudo apt-get install -y powershell
# Start PowerShell
pwsh

After installation, run pwsh to start PowerShell and execute PowerShell scripts. We can then run redis-cli -p port CLUSTER ADDSLOTS <slot> [slot ...] directly to assign the virtual slots.
In PowerShell, 0..5 generates the one-dimensional array 0 through 5:

PS C:\Users\Dm_ca> 0..5
0
1
2
3
4
5

So redis-cli -p 7379 CLUSTER ADDSLOTS (0..5000) assigns slots 0 to 5000 to the node on port 7379, with PowerShell expanding the range into individual arguments:

PS /home/jake/tool/demo/redis-cluster/redis> ./src/redis-cli -p 7379 CLUSTER ADDSLOTS (0..5000)
OK

The remaining slots are assigned to the other Redis nodes in the same way:

PS /home/jake/tool/demo/redis-cluster/redis> ./src/redis-cli -p 7380 CLUSTER ADDSLOTS (5001..10000)
OK
PS /home/jake/tool/demo/redis-cluster/redis> ./src/redis-cli -p 7381 CLUSTER ADDSLOTS (10001..16383)
OK
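As an alternative to PowerShell, the same bulk allocation can be generated from Python. This sketch (function name is mine) only builds the redis-cli command strings; you could pass them to subprocess.run or a shell yourself:

```python
def addslots_command(port: int, start: int, end: int) -> str:
    """Build the redis-cli call that assigns slots start..end (inclusive)
    to the node on the given port, listing every slot explicitly."""
    slots = " ".join(str(s) for s in range(start, end + 1))
    return f"redis-cli -p {port} CLUSTER ADDSLOTS {slots}"

# The three commands equivalent to the PowerShell ranges above:
ranges = {7379: (0, 5000), 7380: (5001, 10000), 7381: (10001, 16383)}
for port, (start, end) in ranges.items():
    cmd = addslots_command(port, start, end)
    print(cmd[:60] + " ...")  # truncated: each command names every slot
```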
Checking the cluster status again, the state has changed from fail to ok, and cluster_slots_ok shows all 16384 slots assigned:

127.0.0.1:7379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:3
cluster_size:3
cluster_current_epoch:2
cluster_my_epoch:0
cluster_stats_messages_ping_sent:3734
cluster_stats_messages_pong_sent:3677
cluster_stats_messages_meet_sent:2
cluster_stats_messages_sent:7413
cluster_stats_messages_ping_received:3677
cluster_stats_messages_pong_received:3736
cluster_stats_messages_received:7413

Viewing the cluster nodes again shows the slot range assigned to each node:

127.0.0.1:7379> cluster nodes
ffff2fe734c1ae5be4f66d574484a89f8bd303f3 127.0.0.1:7379@17379 myself,master - 0 1572510104000 0 connected 0-5000
1d3f7bd0d705ce2926ccc847b4323fcfbfe29f53 127.0.0.1:7381@17381 master - 0 1572510106000 2 connected 10001-16383
36f26b6c6a87202a4a29eba4daf7bf2ff47e2914 127.0.0.1:7380@17380 master - 0 1572510106756 1 connected 5001-10000
Build a master-slave cluster

So far, the three Redis master nodes with their allocated slots form a cluster. However, if any one node goes down, the whole cluster becomes unavailable.

Shut down the 7379 node, then check the cluster state:

127.0.0.1:7380> cluster nodes
36f26b6c6a87202a4a29eba4daf7bf2ff47e2914 127.0.0.1:7380@17380 myself,master - 0 1572510227000 1 connected 5001-10000
1d3f7bd0d705ce2926ccc847b4323fcfbfe29f53 127.0.0.1:7381@17381 master - 0 1572510230245 2 connected 10001-16383
ffff2fe734c1ae5be4f66d574484a89f8bd303f3 127.0.0.1:7379@17379 master,fail - 1572510210543 1572510209905 0 disconnected 0-5000
127.0.0.1:7380> cluster info
cluster_state:fail
cluster_slots_assigned:16384
cluster_slots_ok:11383
cluster_slots_pfail:0
cluster_slots_fail:5001
cluster_known_nodes:3
cluster_size:3
cluster_current_epoch:2
cluster_my_epoch:1
...

So, to make the cluster highly available, we add a slave node for each Redis master.

Prepare three configuration files for ports 7479, 7480 and 7481, to serve as slaves of 7379, 7380 and 7381 respectively. Start the three Redis nodes and have them join the cluster:

jake@Jake-PC:~/tool/demo/redis-cluster/redis$ src/redis-server data/redis-7479.config
jake@Jake-PC:~/tool/demo/redis-cluster/redis$ src/redis-server data/redis-7480.config
jake@Jake-PC:~/tool/demo/redis-cluster/redis$ src/redis-server data/redis-7481.config
jake@Jake-PC:~/tool/demo/redis-cluster/redis$ src/redis-cli -p 7479 cluster meet 127.0.0.1 7379
OK
jake@Jake-PC:~/tool/demo/redis-cluster/redis$ src/redis-cli -p 7480 cluster meet 127.0.0.1 7380
OK
jake@Jake-PC:~/tool/demo/redis-cluster/redis$ src/redis-cli -p 7481 cluster meet 127.0.0.1 7379
OK
View the cluster nodes again:

jake@Jake-PC:~/tool/demo/redis-cluster/redis$ src/redis-cli -p 7481 cluster nodes
36f26b6c6a87202a4a29eba4daf7bf2ff47e2914 127.0.0.1:7380@17380 master - 0 1572514591000 1 connected 5001-10000
1d3f7bd0d705ce2926ccc847b4323fcfbfe29f53 127.0.0.1:7381@17381 master - 0 1572514593720 2 connected 10001-16383
44b31c845115b8e20ad07c50ef1fa035a8f77574 127.0.0.1:7479@17479 master - 0 1572514592000 3 connected
57dd93502af7600b074ed1a021f4f64fbb56c3f4 127.0.0.1:7481@17481 myself,master - 0 1572514591000 5 connected
0e0899d1c692fa3106073880d974acd93c426011 127.0.0.1:7480@17480 master - 0 1572514592713 4 connected
ffff2fe734c1ae5be4f66d574484a89f8bd303f3 127.0.0.1:7379@17379 master - 0 1572514592000 0 connected 0-5000
The cluster replicate {nodeId} command makes the current node a slave of the specified cluster node.

jake@Jake-PC:~/tool/demo/redis-cluster/redis$ src/redis-cli -p 7479 cluster replicate ffff2fe734c1ae5be4f66d574484a89f8bd303f3
OK
jake@Jake-PC:~/tool/demo/redis-cluster/redis$ src/redis-cli -p 7480 cluster replicate 36f26b6c6a87202a4a29eba4daf7bf2ff47e2914
OK
jake@Jake-PC:~/tool/demo/redis-cluster/redis$ src/redis-cli -p 7481 cluster replicate 1d3f7bd0d705ce2926ccc847b4323fcfbfe29f53
OK

Checking the node status again, the three new nodes have become slaves:

jake@Jake-PC:~/tool/demo/redis-cluster/redis$ src/redis-cli -p 7481 cluster nodes
36f26b6c6a87202a4a29eba4daf7bf2ff47e2914 127.0.0.1:7380@17380 master - 0 1572514841965 1 connected 5001-10000
1d3f7bd0d705ce2926ccc847b4323fcfbfe29f53 127.0.0.1:7381@17381 master - 0 1572514842981 2 connected 10001-16383
44b31c845115b8e20ad07c50ef1fa035a8f77574 127.0.0.1:7479@17479 slave ffff2fe734c1ae5be4f66d574484a89f8bd303f3 0 1572514842000 3 connected
57dd93502af7600b074ed1a021f4f64fbb56c3f4 127.0.0.1:7481@17481 myself,slave 1d3f7bd0d705ce2926ccc847b4323fcfbfe29f53 0 1572514841000 5 connected
0e0899d1c692fa3106073880d974acd93c426011 127.0.0.1:7480@17480 slave 36f26b6c6a87202a4a29eba4daf7bf2ff47e2914 0 1572514841000 4 connected
ffff2fe734c1ae5be4f66d574484a89f8bd303f3 127.0.0.1:7379@17379 master - 0 1572514840000 0 connected 0-5000
Now shut down the 7381 master; 7481 automatically becomes the new master:

jake@Jake-PC:~/tool/demo/redis-cluster/redis$ src/redis-cli -p 7381 shutdown
127.0.0.1:7481> cluster nodes
36f26b6c6a87202a4a29eba4daf7bf2ff47e2914 127.0.0.1:7380@17380 master - 0 1572515223688 1 connected 5001-10000
1d3f7bd0d705ce2926ccc847b4323fcfbfe29f53 127.0.0.1:7381@17381 master,fail - 1572515116020 1572515114203 2 disconnected
44b31c845115b8e20ad07c50ef1fa035a8f77574 127.0.0.1:7479@17479 slave ffff2fe734c1ae5be4f66d574484a89f8bd303f3 0 1572515221634 3 connected
57dd93502af7600b074ed1a021f4f64fbb56c3f4 127.0.0.1:7481@17481 myself,master - 0 1572515220000 6 connected 10001-16383
0e0899d1c692fa3106073880d974acd93c426011 127.0.0.1:7480@17480 slave 36f26b6c6a87202a4a29eba4daf7bf2ff47e2914 0 1572515221000 4 connected
ffff2fe734c1ae5be4f66d574484a89f8bd303f3 127.0.0.1:7379@17379 master - 0 1572515222656 0 connected 0-5000
Finally, bring 7381 back up; it becomes a slave of 7481:

jake@Jake-PC:~/tool/demo/redis-cluster/redis$ src/redis-cli -p 7381 cluster nodes
57dd93502af7600b074ed1a021f4f64fbb56c3f4 127.0.0.1:7481@17481 master - 0 1572515324842 6 connected 10001-16383
36f26b6c6a87202a4a29eba4daf7bf2ff47e2914 127.0.0.1:7380@17380 master - 0 1572515325852 1 connected 5001-10000
44b31c845115b8e20ad07c50ef1fa035a8f77574 127.0.0.1:7479@17479 slave ffff2fe734c1ae5be4f66d574484a89f8bd303f3 0 1572515322000 3 connected
1d3f7bd0d705ce2926ccc847b4323fcfbfe29f53 127.0.0.1:7381@17381 myself,slave 57dd93502af7600b074ed1a021f4f64fbb56c3f4 0 1572515324000 2 connected
0e0899d1c692fa3106073880d974acd93c426011 127.0.0.1:7480@17480 slave 36f26b6c6a87202a4a29eba4daf7bf2ff47e2914 0 1572515324000 4 connected
ffff2fe734c1ae5be4f66d574484a89f8bd303f3 127.0.0.1:7379@17379 master - 0 1572515323837 0 connected 0-5000
Reference Documents

- Redis Development and Operations
- Detailed explanation of the Redis configuration file
- Detailed explanation of the redis debug command
- The difference between psync1 and psync2 in Redis master-slave replication
- Installing PowerShell Core on Linux

Original article: https://www.cnblogs.com/Jack-Blog/p/11776847.html
Author's blog: Jiege busy
You are welcome to reprint this article; please cite the source and include a link in a prominent place.