Redis Advanced - Redis Sharded Cluster

The original article has been moved; the reading experience is better there:

Redis Advanced - Redis Sharded Cluster | CoderMast Programming Mast: https://www.codermast.com/database/redis/redis-advance-sharded-cluster.html

# Build a Sharded Cluster

Master-slave replication and sentinel can solve the problems of high availability and high-concurrency reads, but there are still two unresolved issues:

  • Massive data storage
  • High-concurrency writes

Both problems can be solved with a sharded cluster. Features of a sharded cluster:

  • There are multiple masters in the cluster, and each master stores different data
  • Each master can have multiple slave nodes
  • Masters monitor each other's health status via ping
  • Client requests can access any node in the cluster and will eventually be forwarded to the correct node

# Cluster Structure

A sharded cluster requires a large number of nodes. Here we build a minimal sharded cluster with 3 master nodes, each of which has one slave node. The structure is as follows:

 

Here we will start 6 Redis instances on the same virtual machine to simulate a sharded cluster. The details are as follows:

| IP              | PORT | Role   |
| --------------- | ---- | ------ |
| 192.168.150.101 | 7001 | master |
| 192.168.150.101 | 7002 | master |
| 192.168.150.101 | 7003 | master |
| 192.168.150.101 | 8001 | slave  |
| 192.168.150.101 | 8002 | slave  |
| 192.168.150.101 | 8003 | slave  |

# Prepare Instances and Configuration

Delete the previous directories 7001, 7002, and 7003, and recreate the directories 7001, 7002, 7003, 8001, 8002, and 8003:

# Enter the /tmp directory
cd /tmp
# Remove the old directories to avoid configuration interference
rm -rf 7001 7002 7003
# Create the new directories
mkdir 7001 7002 7003 8001 8002 8003

Prepare a new redis.conf file under /tmp with the following content:

port 6379
# Enable cluster mode
cluster-enabled yes
# Cluster configuration file; we don't create it, Redis maintains it itself
cluster-config-file /tmp/6379/nodes.conf
# Node heartbeat failure timeout
cluster-node-timeout 5000
# Directory for persistence files
dir /tmp/6379
# Bind address
bind 0.0.0.0
# Run Redis in the background
daemonize yes
# IP announced for this instance
replica-announce-ip 192.168.150.101
# Protected mode
protected-mode no
# Number of databases
databases 1
# Log file
logfile /tmp/6379/run.log

Copy this file into each directory:

# Enter the /tmp directory
cd /tmp
# Copy the file
echo 7001 7002 7003 8001 8002 8003 | xargs -t -n 1 cp redis.conf
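
To double-check that every directory received a copy (assuming you are still in /tmp):

# Should list one redis.conf per directory
ls 7001/redis.conf 7002/redis.conf 7003/redis.conf 8001/redis.conf 8002/redis.conf 8003/redis.conf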

Modify redis.conf in each directory, changing every occurrence of 6379 (the port and the paths) to match the directory name:

# Enter the /tmp directory
cd /tmp
# Modify the configuration files
printf '%s\n' 7001 7002 7003 8001 8002 8003 | xargs -I{} -t sed -i 's/6379/{}/g' {}/redis.conf
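
To confirm the replacement worked, a quick sanity check is to print the port line of each file; every file should now show its own port:

grep -H '^port' 7001/redis.conf 7002/redis.conf 7003/redis.conf 8001/redis.conf 8002/redis.conf 8003/redis.conf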

# Start

Since the background startup mode has been configured, the service can be started directly:

# Enter the /tmp directory
cd /tmp
# Start all services in one go
printf '%s\n' 7001 7002 7003 8001 8002 8003 | xargs -I{} -t redis-server {}/redis.conf

View status via ps:

ps -ef | grep redis

We can see that all services have started normally:

 

If you want to stop all of the processes, you can run:

ps -ef | grep redis | awk '{print $2}' | xargs kill

or (the recommended way):

printf '%s\n' 7001 7002 7003 8001 8002 8003 | xargs -I{} -t redis-cli -p {} shutdown

# Create a Cluster

Although the services are started, they are currently independent of each other, with no association.

We need to run a command to create the cluster. Before Redis 5.0, creating a cluster was cumbersome; since 5.0, the cluster management commands have been integrated into redis-cli.

  1. Before Redis 5.0

Before Redis 5.0, cluster commands were implemented with src/redis-trib.rb inside the Redis installation package. Because redis-trib.rb is written in Ruby, the Ruby environment must be installed first.

# Install dependencies
yum -y install zlib ruby rubygems
gem install redis

Then use the command to manage the cluster:

# Enter the src directory of the redis installation
cd /tmp/redis-6.2.4/src
# Create the cluster
./redis-trib.rb create --replicas 1 192.168.150.101:7001 192.168.150.101:7002 192.168.150.101:7003 192.168.150.101:8001 192.168.150.101:8002 192.168.150.101:8003

  2. After Redis 5.0

We are using Redis 6.2.4, in which cluster management is integrated into redis-cli. The format is as follows:

redis-cli --cluster create --cluster-replicas 1 192.168.150.101:7001 192.168.150.101:7002 192.168.150.101:7003 192.168.150.101:8001 192.168.150.101:8002 192.168.150.101:8003

Command description:

  • redis-cli --cluster or ./redis-trib.rb: indicates a cluster operation command
  • create: represents the creation of a cluster
  • --replicas 1 or --cluster-replicas 1: specifies that each master in the cluster has 1 replica. The number of masters is total nodes ÷ (replicas + 1), so with 6 nodes here we get 3 masters. The first n nodes in the list become masters, and the remaining nodes become slaves, randomly assigned to different masters.

What it looks like after running:

 

Enter yes here, and the cluster will start to be created:

 

You can view the cluster status with the command:

redis-cli -p 7001 cluster nodes
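
You can also check the overall cluster health with the cluster info command:

redis-cli -p 7001 cluster info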

 

# Test

Try to connect to node 7001 and store a piece of data:

# Connect
redis-cli -p 7001
# Store data
set num 123
# Read the data back
get num
# Store again
set a 1

Unfortunately, the last write fails, because the key a maps to a slot on a different node:

 

When operating in cluster mode, you need to add the -c parameter to redis-cli:

redis-cli -c -p 7001

This time it works:

 

# Hash Slots

Redis maps each master node to a portion of the 16384 hash slots, numbered 0 to 16383. You can see this when viewing the cluster information:

Data keys are bound to slots, not to nodes. Redis calculates the slot value from the effective part of the key, which is determined in two cases:

  • If the key contains {} and {} wraps at least one character, the part inside {} is the effective part
  • If the key does not contain {}, the entire key is the effective part

For example, if the key is num, the slot is calculated from num; if the key is {itcast}num, it is calculated from itcast. The calculation uses the CRC16 algorithm to obtain a hash value, which is then taken modulo 16384; the result is the slot value.
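
You can check which slot a key maps to with the cluster keyslot command, for example (ports as in the setup above):

# Slot computed from the whole key
redis-cli -p 7001 cluster keyslot num
# Slot computed only from the hash tag "itcast"
redis-cli -p 7001 cluster keyslot '{itcast}num'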

How does Redis determine which instance a key should be in?

  • Allocate 16384 slots to different instances
  • Calculate a hash value from the effective part of the key and take it modulo 16384
  • The result is the slot number; the key lives on the instance that holds that slot

How can the same type of data always be stored on the same Redis instance?

  • Give that type of data the same effective part, for example by prefixing all of its keys with {typeId}
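
As a small illustration (the key names here are only examples), keys that share the same hash tag resolve to the same slot and therefore land on the same instance:

# Both keys share the hash tag "user", so both commands return the same slot number
redis-cli -p 7001 cluster keyslot '{user}:1:name'
redis-cli -p 7001 cluster keyslot '{user}:1:age'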

# Cluster Scaling

Cluster scaling means that cluster nodes can be added and removed dynamically; while the cluster scales, slots and the data in those slots are moved between nodes.

redis-cli --cluster provides many commands for operating the cluster; you can view them with redis-cli --cluster help.

Add a new master node to the cluster and store num = 1000:

  1. Start a new Redis instance with port 7004

# Create the instance directory
mkdir 7004
# Copy the config file and change the port to 7004
cp redis.conf 7004
sed -i 's/6379/7004/g' 7004/redis.conf
# Start the redis service
redis-server 7004/redis.conf

  2. Add 7004 to the existing cluster as a master node

redis-cli --cluster add-node 192.168.150.101:7004 192.168.150.101:7001
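
After the add-node command completes, you can confirm that 7004 has joined the cluster (as a master that holds no slots yet):

redis-cli -p 7001 cluster nodes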

  3. Assign slots to the 7004 node so that the key num can be stored on the 7004 instance

# Reshard
redis-cli --cluster reshard 192.168.150.101:7001
# Move 3000 slots
How many slots do you want to move (from 1 to 16384)? 3000
# ID of the node that will receive the slots (enter the ID of 7004 here)
What is the receiving node ID?
# For the source nodes, enter the ID of 7001, then type done to finish
# Confirm the move
Do you want to proceed with the proposed reshard plan (yes/no)? yes
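
Once the resharding finishes, you can check where num now lives and store the value (a quick verification based on the setup above):

# See which slot ranges each node holds after the reshard
redis-cli -p 7001 cluster nodes
# Check the slot of num, then store it in cluster mode (-c follows the redirection)
redis-cli -p 7001 cluster keyslot num
redis-cli -c -p 7001 set num 1000
redis-cli -c -p 7001 get num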

# Failover

What happens when a master in the cluster goes down?

  1. First, the instance loses its connection to the other instances

  2. It is then marked as suspected to be down

  3. Finally, it is confirmed as offline, and one of its slaves is automatically promoted to be the new master

The slave here is selected by filtering on its replication offset and node ID.

# Data Migration

With the cluster failover command, you can manually make a master in the cluster step down and switch over to the slave node on which the cluster failover command is executed, achieving data migration that is imperceptible to clients. The process is as follows:

 

Manual Failover supports three different modes:

  • default: the default process
  • force: skips the consistency check on the offset
  • takeover: executes step 5 directly, ignoring data consistency, the master's status, and the other masters' opinions
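
For example, assuming 8002 is a slave in the cluster built above, a sketch of a manual failover looks like this (run the command on the slave that should become the new master):

# Connect to the slave that should be promoted
redis-cli -p 8002
# Graceful manual failover (append FORCE or TAKEOVER for the other modes)
cluster failover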

# Accessing a Sharded Cluster with RedisTemplate

Under the hood, RedisTemplate also supports sharded clusters via Lettuce, and the steps are basically the same as in sentinel mode:

  1. Add the Redis starter dependency

  2. Configure the sharded cluster addresses

  3. Configure read-write separation

Compared with sentinel mode, only the sharded cluster configuration differs slightly, as follows:

spring:
  redis:
    cluster:
      nodes:    # Specify every node in the sharded cluster
        - 192.168.150.101:7001
        - 192.168.150.101:7002
        - 192.168.150.101:7003
        - 192.168.150.101:8001
        - 192.168.150.101:8002
        - 192.168.150.101:8003
