Understanding Redis Cluster in Depth

Redis Cluster uses virtual slot partitioning: a hash function maps every key to an integer slot in the range 0 to 16383. The calculation formula:

slot = CRC16(key) & 16383

Each node is responsible for a subset of the slots and for the keys that map to those slots.
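For example, once a cluster node is running (as set up below), you can ask it which slot any key maps to; the value here matches the CLUSTER KEYSLOT output shown later in this article:

$ redis-cli -p 6379 cluster keyslot hello
(integer) 866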

Virtual slot partitioning decouples data from nodes, which simplifies scaling the cluster out and in. It does come with limitations, though:

1. Limited support for multi-key operations. Batch operations such as MGET and MSET are only supported for keys that map to the same slot (see the hash tag example after this list).

2. Limited transaction support. Transactions are only supported when all keys involved are located on the same node.

3. The key is the minimum granularity of partitioning, so a single large object, such as a hash or a list, cannot be split across multiple nodes.

4. Multiple databases are not supported. A standalone Redis server supports 16 databases, but a cluster can use only a single database namespace, namely db 0.

5. Only a single level of replication is supported: a slave can replicate only from a master node; nested tree replication structures are not supported.
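Limitations 1 and 2 can be worked around with hash tags: when a key name contains a substring wrapped in {}, only that substring is hashed, so keys sharing the same tag always land in the same slot. A quick illustration, assuming a running cluster node on port 6379 (the key names are made up):

redis-cli -c -p 6379 mset user:{42}:name Alice user:{42}:age 30    # same slot, succeeds
redis-cli -c -p 6379 mget user:{42}:name user:{42}:age
redis-cli -c -p 6379 mset k1 v1 k2 v2    # keys in different slots fail with a CROSSSLOT error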

How to manually create a Redis Cluster

Create three directories to store the data, configuration files, and logs respectively.

mkdir -p /opt/redis/data/
mkdir -p /opt/redis/conf/
mkdir -p /opt/redis/log

Edit the configuration file

redis_6379.conf

port 6379
daemonize yes
pidfile "/opt/redis/data/redis_6379.pid"
loglevel notice
logfile "/opt/redis/log/redis_6379.log"
dbfilename "dump_6379.rdb"
dir "/opt/redis/data"
appendonly yes
appendfilename "appendonly_6379.aof"
cluster-enabled yes
cluster-config-file /opt/redis/conf/nodes-6379.conf
cluster-node-timeout 15000

For simplicity, only a few key Redis parameters are shown here; the last three are the Cluster-related ones.

cp redis_6379.conf redis_6380.conf
cp redis_6379.conf redis_6381.conf
cp redis_6379.conf redis_6382.conf
cp redis_6379.conf redis_6383.conf
cp redis_6379.conf redis_6384.conf


sed -i 's/6379/6380/g' redis_6380.conf
sed -i 's/6379/6381/g' redis_6381.conf
sed -i 's/6379/6382/g' redis_6382.conf
sed -i 's/6379/6383/g' redis_6383.conf
sed -i 's/6379/6384/g' redis_6384.conf
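Equivalently, the copy and substitute steps can be scripted in one small loop (a sketch, assuming the directory layout above):

cd /opt/redis/conf
for PORT in 6380 6381 6382 6383 6384; do
    cp redis_6379.conf redis_${PORT}.conf          # clone the base config
    sed -i "s/6379/${PORT}/g" redis_${PORT}.conf   # rewrite every port-derived path and name
done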

Start all nodes

cd /opt/redis/conf
redis-server redis_6379.conf
redis-server redis_6380.conf
redis-server redis_6381.conf
redis-server redis_6382.conf
redis-server redis_6383.conf
redis-server redis_6384.conf

After the nodes start, each creates a nodes-xxxx.conf file in the conf directory, which records the node's ID.

[root@slowtech conf]# ls
nodes-6379.conf  nodes-6381.conf  nodes-6383.conf  redis_6379.conf  redis_6381.conf  redis_6383.conf
nodes-6380.conf  nodes-6382.conf  nodes-6384.conf  redis_6380.conf  redis_6382.conf  redis_6384.conf

[root@slowtech conf]# cat  nodes-6379.conf
260a27a4afd7be954f7cb4fe12be10641f379746 :0@0 myself,master - 0 0 0 connected
vars currentEpoch 0 lastVoteEpoch 0

Add the nodes to the cluster

redis-cli -p 6379 cluster meet 127.0.0.1 6380
redis-cli -p 6379 cluster meet 127.0.0.1 6381
redis-cli -p 6379 cluster meet 127.0.0.1 6382
redis-cli -p 6379 cluster meet 127.0.0.1 6383
redis-cli -p 6379 cluster meet 127.0.0.1 6384

Taking the first command as an example, the cluster meet handshake proceeds as follows:

1. After receiving the command, node 6379 creates a clusterNode structure for node 6380 and adds it to its own clusterState.nodes dictionary. Then node 6379 sends a MEET message to node 6380.

2. After receiving the MEET message from node 6379, node 6380 creates a clusterNode structure for node 6379 and adds it to its own clusterState.nodes dictionary. It then returns a PONG message to node 6379.

3. After receiving the PONG message, node 6379 returns a PING message to node 6380.

4. When node 6380 receives the PING message, it knows node 6379 has received its PONG, and the handshake is complete.

After that, node 6379 spreads the news of node 6380 to the other nodes in the cluster via the Gossip protocol, so that they shake hands with node 6380 as well; eventually node 6380 is recognized by all nodes in the cluster.

View the current cluster node information

127.0.0.1:6379> cluster nodes
260a27a4afd7be954f7cb4fe12be10641f379746 127.0.0.1:6379@16379 myself,master - 0 1539088861000 1 connected
645438fcdb241603fbc92770ef08fa6d2d4c7ffc 127.0.0.1:6380@16380 master - 0 1539088860000 2 connected
bf1aa1e626988a5a35bc2a837c3923d472e49a4c 127.0.0.1:6381@16381 master - 0 1539088860730 0 connected
5350673149500f4c2fd8b87a8ec1b01651572fae 127.0.0.1:6383@16383 master - 0 1539088861000 4 connected
7dd5f5cc8d96d08f35ff395d05eb30ac199f7568 127.0.0.1:6382@16382 master - 0 1539088862745 3 connected
8679f302610e9ea9a464c247f70924e34cd20512 127.0.0.1:6384@16384 master - 0 1539088862000 5 connected

Although all six nodes have been added to the cluster, at this point the cluster is still offline.

127.0.0.1:6379> cluster info
cluster_state:fail
cluster_slots_assigned:0
cluster_slots_ok:0
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:0
cluster_current_epoch:5
cluster_my_epoch:1
cluster_stats_messages_ping_sent:799
cluster_stats_messages_pong_sent:826
cluster_stats_messages_meet_sent:5
cluster_stats_messages_sent:1630
cluster_stats_messages_ping_received:826
cluster_stats_messages_pong_received:804
cluster_stats_messages_received:1630

Assign slots

Distribute the 16384 slots evenly across the three nodes 6379, 6380, and 6381.

redis-cli -p 6379 cluster addslots {0..5461}

redis-cli -p 6380 cluster addslots {5462..10922}

redis-cli -p 6381 cluster addslots {10923..16383}

The cluster's entire keyspace is divided into 16384 slots, and each node can be responsible for anywhere from 0 to 16384 of them. The cluster goes online (state ok) only when all 16384 slots are handled by some node; conversely, if even one slot is unhandled, the cluster is offline (state fail).

View cluster status

# redis-cli -p 6379
127.0.0.1:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:5
cluster_my_epoch:1
cluster_stats_messages_ping_sent:6212
cluster_stats_messages_pong_sent:6348
cluster_stats_messages_meet_sent:5
cluster_stats_messages_sent:12565
cluster_stats_messages_ping_received:6348
cluster_stats_messages_pong_received:6217
cluster_stats_messages_received:12565

View the mapping between nodes and slots

127.0.0.1:6379> cluster nodes
260a27a4afd7be954f7cb4fe12be10641f379746 127.0.0.1:6379@16379 myself,master - 0 1539094639000 1 connected 0-5461
645438fcdb241603fbc92770ef08fa6d2d4c7ffc 127.0.0.1:6380@16380 master - 0 1539094636362 2 connected 5462-10922
bf1aa1e626988a5a35bc2a837c3923d472e49a4c 127.0.0.1:6381@16381 master - 0 1539094639389 0 connected 10923-16383
5350673149500f4c2fd8b87a8ec1b01651572fae 127.0.0.1:6383@16383 master - 0 1539094637000 4 connected
7dd5f5cc8d96d08f35ff395d05eb30ac199f7568 127.0.0.1:6382@16382 master - 0 1539094638000 3 connected
8679f302610e9ea9a464c247f70924e34cd20512 127.0.0.1:6384@16384 master - 0 1539094638381 5 connected

Adding slave nodes using cluster replicate

The cluster replicate command must be executed on the corresponding slave node, with the master's node ID as the argument.

[root@slowtech conf]# redis-cli -p 6382
127.0.0.1:6382> cluster replicate 260a27a4afd7be954f7cb4fe12be10641f379746
OK
127.0.0.1:6382> quit
[root@slowtech conf]# redis-cli -p 6383
127.0.0.1:6383> cluster replicate 645438fcdb241603fbc92770ef08fa6d2d4c7ffc
OK
127.0.0.1:6383> quit
[root@slowtech conf]# redis-cli -p 6384
127.0.0.1:6384> cluster replicate bf1aa1e626988a5a35bc2a837c3923d472e49a4c
OK

Shortcut commands

echo "cluster replicate `redis-cli -p 6379 cluster nodes | grep 6379 | awk '{print $1}'`" | redis-cli -p 6382 -x
echo "cluster replicate `redis-cli -p 6379 cluster nodes | grep 6380 | awk '{print $1}'`" | redis-cli -p 6383 -x
echo "cluster replicate `redis-cli -p 6379 cluster nodes | grep 6381 | awk '{print $1}'`" | redis-cli -p 6384 -x

View the mapping between nodes and slots

127.0.0.1:6384> cluster nodes
8679f302610e9ea9a464c247f70924e34cd20512 127.0.0.1:6384@16384 myself,slave bf1aa1e626988a5a35bc2a837c3923d472e49a4c 0 1539094947000 5 connected
7dd5f5cc8d96d08f35ff395d05eb30ac199f7568 127.0.0.1:6382@16382 slave 260a27a4afd7be954f7cb4fe12be10641f379746 0 1539094947000 3 connected
5350673149500f4c2fd8b87a8ec1b01651572fae 127.0.0.1:6383@16383 slave 645438fcdb241603fbc92770ef08fa6d2d4c7ffc 0 1539094946000 4 connected
bf1aa1e626988a5a35bc2a837c3923d472e49a4c 127.0.0.1:6381@16381 master - 0 1539094948000 0 connected 10923-16383
645438fcdb241603fbc92770ef08fa6d2d4c7ffc 127.0.0.1:6380@16380 master - 0 1539094947306 2 connected 5462-10922
260a27a4afd7be954f7cb4fe12be10641f379746 127.0.0.1:6379@16379 master - 0 1539094948308 1 connected 0-5461

At this point, we have manually created a Redis Cluster of six nodes: three masters responsible for handling data, and three slaves responsible for failover.

The key-to-slot mapping algorithm

HASH_SLOT = CRC16(key) mod 16384

Since 16384 is a power of two, this is equivalent to the CRC16(key) & 16383 form shown at the beginning. If the key contains a hash tag, that is, a non-empty substring enclosed in {}, only that substring is fed to the hash function.

Resharding process

1. Send the cluster setslot <slot> importing <source-node-id> command to the target node, so that the target node prepares to import the slot's data.

2. Send the cluster setslot <slot> migrating <destination-node-id> command to the source node, so that the source node prepares to migrate the slot's data out.

3. On the source node, execute the cluster getkeysinslot {slot} {count} command in a loop to fetch up to count keys belonging to slot {slot}.

4. For each key obtained in step 3, redis-trib.rb sends the source node a MIGRATE <target_ip> <target_port> <key_name> 0 <timeout> command, which atomically migrates the key from the source node to the target node.

5. Repeat steps 3 and 4 until all keys belonging to the slot have been migrated from the source node to the target node.

6. Send the CLUSTER SETSLOT <slot> NODE <node-id> command to an arbitrary node in the cluster to assign the slot to the target node; this assignment is propagated to the entire cluster.
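The same procedure can be scripted by hand. Below is a minimal sketch in shell, assuming a single-host cluster and that slot 4096 is being moved from the node on port 6379 to the node on port 6380; redis-trib.rb (or redis-cli --cluster reshard in newer Redis versions) automates exactly these steps:

SLOT=4096
SRC=6379; DST=6380
SRC_ID=$(redis-cli -p $SRC cluster myid)
DST_ID=$(redis-cli -p $DST cluster myid)

# Steps 1 and 2: mark the slot importing on the target, migrating on the source
redis-cli -p $DST cluster setslot $SLOT importing $SRC_ID
redis-cli -p $SRC cluster setslot $SLOT migrating $DST_ID

# Steps 3 to 5: move the keys in batches until the slot is empty
# (assumes key names contain no whitespace)
while true; do
    KEYS=$(redis-cli -p $SRC cluster getkeysinslot $SLOT 100)
    [ -z "$KEYS" ] && break
    for KEY in $KEYS; do
        redis-cli -p $SRC migrate 127.0.0.1 $DST "$KEY" 0 5000
    done
done

# Step 6: assign the slot to the target node
redis-cli -p $SRC cluster setslot $SLOT node $DST_ID
redis-cli -p $DST cluster setslot $SLOT node $DST_ID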

ASK redirection from the client's perspective

Redis Cluster supports online migration of slot data for horizontal scaling. While a slot's data is being migrated from the source node to the target node, clients need to recognize the situation so that key commands still execute correctly: during the migration, part of the slot's data may remain on the source node while the rest has already arrived at the target node.

When this happens, the client's execution flow for key commands changes, as shown below:

1. The client sends the key command to the source node of the slot. If the key exists, the command is executed directly and the result is returned.

2. If the key does not exist on the source node, it may have already been migrated to the target node, in which case the source node returns an ASK redirection error in the format: (error) ASK {slot} {targetIP}:{targetPort}.

3. The client extracts the target node's address from the ASK redirection error, sends an ASKING command to the target node to set a one-shot connection flag, and then re-sends the key command. If the key exists it is executed; if not, a key-not-found reply is returned.

Although ASK and MOVED both redirect the client, they are essentially different. ASK means the slot's data is still being migrated, so the redirection is only temporary and the client does not update its slot cache. MOVED means the slot has been definitively assigned to a new node, so the client updates its slot cache accordingly.
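A hypothetical session showing the flow (the key, slot number, and value are made up for illustration):

127.0.0.1:6379> get user:1
(error) ASK 5798 127.0.0.1:6380

Retrying on the target node without ASKING is rejected, because the slot still officially belongs to the source node:

127.0.0.1:6380> get user:1
(error) MOVED 5798 127.0.0.1:6379

Sending ASKING first sets a one-shot flag that allows the next command to execute on the importing node:

127.0.0.1:6380> asking
OK
127.0.0.1:6380> get user:1
"v1"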

Simulating a Redis Cluster failover

To simulate a master failure, manually kill the 6379 node.

1. First, look at the log output of the failed master's slave node.

16387:S 15 Oct 10:34:30.149 # Connection with master lost.
16387:S 15 Oct 10:34:30.149 * Caching the disconnected master state.
16387:S 15 Oct 10:34:30.845 * Connecting to MASTER 127.0.0.1:6379
16387:S 15 Oct 10:34:30.845 * MASTER <-> SLAVE sync started
16387:S 15 Oct 10:34:30.845 # Error condition on socket for SYNC: Connection refused
...
16387:S 15 Oct 10:34:49.994 * MASTER <-> SLAVE sync started
16387:S 15 Oct 10:34:49.994 # Error condition on socket for SYNC: Connection refused
16387:S 15 Oct 10:34:50.898 * FAIL message received from bd341bb4c10e0dbff593bf7bafb1309842fba155 about 72af03587f5e9f064721d3b3a92b1439b3785623
16387:S 15 Oct 10:34:50.898 # Cluster state changed: fail

Note that the connection was lost at 10:34:30.149 and the node was judged subjectively offline at 10:34:50, a difference of roughly 20 s, which corresponds to the cluster-node-timeout setting used here.

2. Next, look at the log of the 6380 node.

16383:M 15 Oct 10:34:50.897 * Marking node 72af03587f5e9f064721d3b3a92b1439b3785623 as failing (quorum reached).
16383:M 15 Oct 10:34:50.897 # Cluster state changed: fail

The log of the 6381 node shows the same. With more than half of the masters agreeing, node 6379 is marked objectively offline.

3. Look at the slave node's log again.

16387:S 15 Oct 10:34:51.003 * Connecting to MASTER 127.0.0.1:6379
16387:S 15 Oct 10:34:51.003 * MASTER <-> SLAVE sync started
16387:S 15 Oct 10:34:51.003 # Start of election delayed for 566 milliseconds (rank #0, offset 154).
16387:S 15 Oct 10:34:51.003 # Error condition on socket for SYNC: Connection refused

Once the master it replicates is marked objectively offline, the slave starts preparing for the election; the log shows the election was delayed by 566 ms before being executed.

When the election delay elapses, the slave increments its configuration epoch and initiates the failover election.

16387:S 15 Oct 10:34:51.605 # Starting a failover election for epoch 7.

4. Master nodes 6380 and 6381 grant their votes.

16385:M 15 Oct 10:34:51.618 # Failover auth granted to 886c1f990191854df1972c4bc4d928e44bd36937 for epoch 7

5. After the slave collects two votes from masters, more than half, it performs the master-replacement operation and completes the failover.

16387:S 15 Oct 10:34:51.622 # Failover election won: I'm the new master.
16387:S 15 Oct 10:34:51.622 # configEpoch set to 7 after successful failover
16387:M 15 Oct 10:34:51.622 # Setting secondary replication ID to 207c65316707a8ec2ca83725ae53ab49fa25dbfb, valid up to offset: 155. New replication ID is 0ec4aac9562b3f4165244153646d9c9006953736
16387:M 15 Oct 10:34:51.622 * Discarding previously cached master state.
16387:M 15 Oct 10:34:51.622 # Cluster state changed: ok

Failover process

Subjective offline

Each node in the cluster periodically sends ping messages to the other nodes, and the receiving node replies with a pong message. If communication keeps failing for cluster-node-timeout, the sending node considers the receiving node faulty and marks it as subjectively offline (PFAIL).


Objective offline

When a node marks another node as subjectively offline, it propagates this state within the cluster via Gossip messages, and the nodes keep exchanging and collecting offline reports about the failed node. When more than half of the slot-holding masters have marked a node as subjectively offline, the objective offline process is triggered.

Whenever a cluster node receives a message reporting another node's pfail state, it tries to trigger objective offline. The process:

1. Tally the valid offline reports; if the count is not more than half of the slot-holding masters in the cluster, exit.

2. If the offline reports exceed half of the slot-holding masters, mark the failed node as objectively offline.

3. Broadcast a fail message to the cluster, notifying all nodes to mark the failed node as objectively offline; the body of the fail message contains only the failed node's ID.

Broadcasting the fail message is the final step of taking a node objectively offline, and it serves two important purposes:

1. It notifies all nodes in the cluster to mark the failed node as objectively offline, taking effect immediately.

2. It notifies the failed node's slaves, which triggers the failover process.

Failover

Once the failed node is objectively offline, if it is a master holding slots, one of its slaves must be promoted to replace it to keep the cluster highly available. All slaves of the offline master bear this responsibility: when a slave finds, in its internal periodic task, that the master it replicates has gone objectively offline, it triggers the failover process.

1. Eligibility check

Each slave checks how long it has been disconnected from its master to determine whether it is eligible to replace the failed master. If the disconnection time exceeds cluster-node-timeout * cluster-slave-validity-factor, the slave is not eligible for failover (with the defaults, 15 s * 10 = 150 s). The cluster-slave-validity-factor parameter is the validity factor for slaves and defaults to 10.

2. Prepare election time

When a slave is eligible for failover, it computes and records the time at which its election will be triggered; the subsequent steps run only once that time arrives.

The reason for this delayed trigger mechanism is to support priority among multiple slaves by assigning them different election delays. The larger a slave's replication offset (that is, the more complete its copy of the data), the lower its delay, giving it higher priority to replace the failed master.
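For reference, the Redis Cluster specification computes this delay as

DELAY = 500 ms + random(0 ~ 500 ms) + SLAVE_RANK * 1000 ms

where SLAVE_RANK is the slave's rank when the slaves are sorted by replication offset (the most up-to-date slave has rank 0). This is consistent with the failover simulation above, where the rank #0 slave logged a delay of 566 ms.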

3. Initiating the election

When the slave's periodic fault-detection task finds that the election time (failover_auth_time) has arrived, it initiates the election as follows:

1> Update the configuration epoch.

2> Broadcast the election message.

The slave broadcasts a FAILOVER_AUTH_REQUEST message within the cluster and records that it has done so, which guarantees that a slave initiates at most one election per configuration epoch.

4. Voting

Only masters that hold slots process the FAILOVER_AUTH_REQUEST message, because each slot-holding master has exactly one vote per configuration epoch: it replies to the first slave that requests a vote with a FAILOVER_AUTH_ACK message, and ignores election messages from other slaves in the same configuration epoch.

Redis Cluster does not let the slaves elect a leader among themselves directly, mainly because that would require at least 3 slaves to secure an N/2 + 1 majority, which would waste resources. Instead, all slot-holding masters in the cluster vote in the leader election, so the election can complete even for a master with a single slave.

5. Replace the master node

After a slave collects enough votes, it triggers the operations to replace the master:

1> The current slave cancels replication and becomes a master.

2> It executes the clusterDelSlot operation to revoke the slots the failed master was responsible for, then executes clusterAddSlot to assign those slots to itself.

3> It broadcasts its own pong message to the cluster, notifying all nodes that it has changed from slave to master and has taken over the failed master's slots.

Failover time

With fault detection and recovery covered, we can estimate the failover time:

1> Subjective offline (PFAIL) detection time = cluster-node-timeout.

2> Subjective offline state propagation time <= cluster-node-timeout / 2. The message exchange mechanism pings any node that has not communicated within cluster-node-timeout / 2, and when choosing which nodes to include in the message body it prefers those in the offline state, so pfail reports from more than half of the masters can usually be collected within this window.

3> Slave switchover time <= 1000 ms. Because of the delayed election mechanism, the slave with the largest offset delays at most one second before starting the election. The first election usually succeeds, so the slave completes the switchover within about one second.

Based on the above analysis the failover time can be estimated as follows:

failover-time (ms) ≤ cluster-node-timeout + cluster-node-timeout/2 + 1000

The failover time is therefore closely tied to the cluster-node-timeout parameter, which defaults to 15 seconds.
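Plugging in the default value gives a worst-case estimate:

failover-time ≤ 15000 + 15000/2 + 1000 = 23500 ms ≈ 23.5 s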

Relevant Redis Cluster parameters

cluster-enabled <yes/no>: whether to enable cluster mode.

cluster-config-file <filename>: the cluster configuration file, maintained automatically by the cluster; manual editing is not recommended.

cluster-node-timeout <milliseconds>: each node in the cluster periodically sends ping messages to the other nodes, and the receiving node replies with a pong message. If communication keeps failing for cluster-node-timeout, the sending node considers the receiving node faulty and marks it as subjectively offline (PFAIL). Defaults to 15000, i.e. 15 s.

cluster-slave-validity-factor <factor>: each slave checks how long it has been disconnected from its master to determine whether it is eligible to replace a failed master. If the disconnection time exceeds cluster-node-timeout * cluster-slave-validity-factor, the slave is not eligible for failover.

cluster-migration-barrier <count>: the minimum number of slaves a master must retain; only when a master has more than this many slaves can the surplus slaves migrate to other masters that have been left without slaves.

cluster-require-full-coverage <yes/no>: by default, if any of the 16384 slots is not handled by some node, the entire cluster is unavailable. In practice this means that if a master goes down and has no slave to take over, the whole cluster stops serving requests. Setting this parameter to no is recommended, so that the failure of one master and its slaves does not drag down the rest of the cluster.

Redis Cluster related commands

CLUSTER ADDSLOTS slot [slot ...]: manually assign slots to the current node.

CLUSTER MEET ip port: add a node to the Redis Cluster.

CLUSTER INFO: print information about the cluster.

# redis-cli -c cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:702
cluster_stats_messages_pong_sent:664
cluster_stats_messages_sent:1366
cluster_stats_messages_ping_received:659
cluster_stats_messages_pong_received:702
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:1366

CLUSTER KEYSLOT key: show which slot a given key maps to.

127.0.0.1:6379> cluster keyslot hello
(integer) 866
127.0.0.1:6379> cluster keyslot world
(integer) 9059
127.0.0.1:6379> cluster keyslot hello{tag}
(integer) 8338
127.0.0.1:6379> cluster keyslot world{tag}
(integer) 8338

CLUSTER NODES: get the cluster's node information. The output is consistent with the contents of the current node's cluster configuration file, except that the file additionally records the current node's configuration epoch (the vars line).

[root@slowtech conf]# redis-cli -p 6380 -c cluster nodes
72969ae6214dce5783d5b13b1bad34701303e96c 127.0.0.1:6382@16382 slave 7396e133fd8143335d5991734e68fcfcfc5adfd1 0 1539594959692 4 connected
a0efce44c96f95b2cdaf1101805710f41dfe4d06 127.0.0.1:6381@16381 master - 0 1539594962724 3 connected 10923-16383
276cf1128c50faa81a6b073079cc5e2c7a51a4ec 127.0.0.1:6380@16380 myself,master - 0 1539594958000 2 connected 5461-10922
b39826ebe9e741c8dc1fea7ee6966a42c5030726 127.0.0.1:6384@16384 slave a0efce44c96f95b2cdaf1101805710f41dfe4d06 0 1539594961000 6 connected
81f99ce264626895e30a5030ac27b84efedfa622 127.0.0.1:6383@16383 slave 276cf1128c50faa81a6b073079cc5e2c7a51a4ec 0 1539594961713 5 connected
7396e133fd8143335d5991734e68fcfcfc5adfd1 127.0.0.1:6379@16379 master - 0 1539594960703 1 connected 0-5460

[root@slowtech conf]# cat nodes-6380.conf
72969ae6214dce5783d5b13b1bad34701303e96c 127.0.0.1:6382@16382 slave 7396e133fd8143335d5991734e68fcfcfc5adfd1 0 1539592972569 4 connected
a0efce44c96f95b2cdaf1101805710f41dfe4d06 127.0.0.1:6381@16381 master - 0 1539592969000 3 connected 10923-16383
276cf1128c50faa81a6b073079cc5e2c7a51a4ec 127.0.0.1:6380@16380 myself,master - 0 1539592969000 2 connected 5461-10922
b39826ebe9e741c8dc1fea7ee6966a42c5030726 127.0.0.1:6384@16384 slave a0efce44c96f95b2cdaf1101805710f41dfe4d06 0 1539592971000 6 connected
81f99ce264626895e30a5030ac27b84efedfa622 127.0.0.1:6383@16383 slave 276cf1128c50faa81a6b073079cc5e2c7a51a4ec 0 1539592971000 5 connected
7396e133fd8143335d5991734e68fcfcfc5adfd1 127.0.0.1:6379@16379 master - 0 1539592971558 1 connected 0-5460
vars currentEpoch 6 lastVoteEpoch 0

CLUSTER REPLICATE node-id: must be executed on the corresponding slave node, with the master's node ID as the argument.

CLUSTER SLAVES node-id: list the slaves of the given master node.

[root@slowtech conf]# redis-cli -c cluster slaves a0efce44c96f95b2cdaf1101805710f41dfe4d06
1) "b39826ebe9e741c8dc1fea7ee6966a42c5030726 127.0.0.1:6384@16384 slave a0efce44c96f95b2cdaf1101805710f41dfe4d06 0 1539596409000 6 connected"

[root@slowtech conf]# redis-cli -c cluster slaves b39826ebe9e741c8dc1fea7ee6966a42c5030726
(error) ERR The specified node is not a master

CLUSTER SLOTS: output the mapping between slots and nodes.

# redis-cli cluster slots
1) 1) (integer) 5461
  2) (integer) 10922
  3) 1) "127.0.0.1"
      2) (integer) 6380
      3) "276cf1128c50faa81a6b073079cc5e2c7a51a4ec"
  4) 1) "127.0.0.1"
      2) (integer) 6383
      3) "81f99ce264626895e30a5030ac27b84efedfa622"
2) 1) (integer) 0
  2) (integer) 5460
  3) 1) "127.0.0.1"
      2) (integer) 6379
      3) "7396e133fd8143335d5991734e68fcfcfc5adfd1"
  4) 1) "127.0.0.1"
      2) (integer) 6382
      3) "72969ae6214dce5783d5b13b1bad34701303e96c"
3) 1) (integer) 10923
  2) (integer) 16383
  3) 1) "127.0.0.1"
      2) (integer) 6381
      3) "a0efce44c96f95b2cdaf1101805710f41dfe4d06"
  4) 1) "127.0.0.1"
      2) (integer) 6384
      3) "b39826ebe9e741c8dc1fea7ee6966a42c5030726"

READONLY: by default, a slave does not serve read requests; even when it receives a read, it redirects it to the corresponding master node. To have a slave serve reads, execute READONLY on the connection.

# redis-cli -p 6382
127.0.0.1:6382> get k3
(error) MOVED 4576 127.0.0.1:6379
127.0.0.1:6382> readonly
OK
127.0.0.1:6382> get k3
"hello"

READWRITE: turn off the READONLY option.

# redis-cli -p 6382
127.0.0.1:6382> get k3
(error) MOVED 4576 127.0.0.1:6379
127.0.0.1:6382> readonly
OK
127.0.0.1:6382> get k3
"hello"
127.0.0.1:6382> readwrite
OK
127.0.0.1:6382> get k3
(error) MOVED 4576 127.0.0.1:6379

CLUSTER SETSLOT slot IMPORTING|MIGRATING|STABLE|NODE [node-id]: set the state of a slot, as used in the resharding process above.

CLUSTER DELSLOTS slot [slot ...]: remove the specified slots from the current node.

Note:

1. When cluster mode is enabled, it is also visible in the process name.

[root@slowtech conf]# ps -ef | grep redis
root    17497    1  0 20:18 ?        00:00:00 redis-server 127.0.0.1:6379 [cluster]
root    17720    1  0 20:21 ?        00:00:00 redis-server 127.0.0.1:6380 [cluster]
root    17727    1  0 20:21 ?        00:00:00 redis-server 127.0.0.1:6381 [cluster]
root    17734    1  0 20:21 ?        00:00:00 redis-server 127.0.0.1:6382 [cluster]
root    17741    1  0 20:21 ?        00:00:00 redis-server 127.0.0.1:6383 [cluster]
root    17748    1  0 20:21 ?        00:00:00 redis-server 127.0.0.1:6384 [cluster]
root    18154 15726  0 20:29 pts/5    00:00:00 grep --color=auto redis
