SpringCloud microservice technology stack. Dark horse follow-up (10)

today's goal

insert image description here

distributed cache

– Based on Redis clusters, solving the problems of standalone Redis

There are four major problems in stand-alone Redis:
insert image description here

1. Redis persistence

Redis has two persistence schemes:

  • RDB persistence
  • AOF persistence

1.1. RDB persistence

The full name of RDB is Redis Database Backup file (Redis data backup file), also known as Redis data snapshot. Simply put, all data in memory is recorded to disk. When the Redis instance fails and restarts, read the snapshot file from the disk and restore the data. Snapshot files are called RDB files, which are saved in the current running directory by default.

1.1.1. Execution Timing

RDB persistence is performed in four situations:

  • Execute the save command
  • Execute the bgsave command
  • When Redis is down
  • When an RDB condition is triggered

1) save command

Execute the following command to execute the RDB immediately:
insert image description here
The save command makes the main process execute the RDB, and all other commands are blocked during this process. It is only suitable for scenarios such as data migration.

2) bgsave command⭐

The following command can execute RDB asynchronously:
insert image description here
After this command is executed, an independent process will be started to complete the RDB, and the main process can continue to process user requests without being affected.

3) On Redis shutdown
When Redis shuts down, it executes a save command to perform RDB persistence.
When we exit Redis with Ctrl + C, it automatically saves once
insert image description here

4) Trigger RDB conditions
Redis has an internal trigger RDB mechanism, which can be found in the redis.conf file, the format is as follows:

# If at least 1 key is modified within 900 seconds, execute bgsave; save "" disables RDB
save 900 1  
save 300 10  
save 60 10000 
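
The trigger logic of these save rules can be sketched in Python (a toy model, not Redis source): given the seconds elapsed since the last save and the number of modified keys, bgsave fires as soon as any configured rule is satisfied.

```python
# Toy model of the "save <seconds> <changes>" trigger rules from redis.conf.
# A rule fires when at least `changes` keys were modified and at least
# `seconds` seconds have passed since the last snapshot.
SAVE_RULES = [(900, 1), (300, 10), (60, 10000)]

def should_bgsave(elapsed_seconds: int, changed_keys: int) -> bool:
    """Return True if any configured save rule is satisfied."""
    return any(elapsed_seconds >= seconds and changed_keys >= changes
               for seconds, changes in SAVE_RULES)

print(should_bgsave(900, 1))     # one key changed, 900s elapsed -> True
print(should_bgsave(60, 10000))  # heavy write load triggers quickly -> True
print(should_bgsave(30, 5))      # too few changes, too soon -> False
```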

insert image description here
insert image description here

Other configurations of RDB can also be set in the redis.conf file:

# Whether to compress. Enabling it is not recommended: compression costs CPU, and disk space is cheap
rdbcompression yes

# RDB file name
dbfilename dump.rdb  

# Directory where the file is saved
dir ./ 

insert image description here
insert image description here

1.1.2. Principle of RDB

When bgsave starts, the main process is forked to obtain a child process, and the child process shares the memory data of the main process. After the fork completes, the child process reads the memory data and writes it to the RDB file.

fork uses copy-on-write technology:

  • When the main process performs a read operation, it accesses the shared memory;
  • When the main process performs a write operation, a copy of the data is copied and the write operation is performed.
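
The copy-on-write behavior can be illustrated with a toy model (page-granularity, not real OS code): the bgsave child keeps reading the frozen shared pages, while the parent copies a page before its first write to it.

```python
# Toy model of fork + copy-on-write at "page" granularity (not real OS code).
# The child (bgsave) sees a frozen snapshot; the parent copies a page on write.
class CowMemory:
    def __init__(self, pages):
        self.shared = dict(pages)   # pages shared between parent and child
        self.parent_private = {}    # pages the parent has copied on write

    def child_read(self, page):
        # The child process only ever sees the shared (snapshot) pages.
        return self.shared[page]

    def parent_write(self, page, value):
        # The first write to a shared page copies it; writes go to the copy.
        if page not in self.parent_private:
            self.parent_private[page] = self.shared[page]
        self.parent_private[page] = value

    def parent_read(self, page):
        return self.parent_private.get(page, self.shared[page])

mem = CowMemory({"page0": "num=1"})
mem.parent_write("page0", "num=2")   # parent mutates after the "fork"
print(mem.child_read("page0"))       # child still sees the snapshot: num=1
print(mem.parent_read("page0"))      # parent sees its own copy: num=2
```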

insert image description here

1.1.3. Summary

Basic process of bgsave in RDB mode?

  • Fork the main process to get a child process, shared memory space
  • The child process reads memory data and writes it to a new RDB file
  • Replace old RDB files with new RDB files

When will RDB be executed? What does save 60 1000 mean?

  • The default is when the service is stopped
  • It means that RDB is triggered when at least 1000 modifications are performed within 60 seconds

Disadvantages of RDB?

  • The RDB execution interval is long, and there is a risk of data loss between two RDB writes.
  • Forking the child process, compressing, and writing out the RDB file are all time-consuming

1.2. AOF persistence

1.2.1. Principle of AOF

The full name of AOF is Append Only File (append file). Every write command processed by Redis will be recorded in the AOF file, which can be regarded as a command log file.
insert image description here

1.2.2. AOF configuration

AOF is disabled by default, you need to modify the redis.conf configuration file to enable AOF:

# Whether to enable AOF; default is no
appendonly yes
# Name of the AOF file
appendfilename "appendonly.aof"

First disable RDB
insert image description here
and enable AOF
insert image description here

The frequency of AOF command recording can also be configured through the redis.conf file:

# Record to the AOF file immediately after every write command
appendfsync always 
# Put the write command into the AOF buffer first, then flush the buffer to the AOF file every 1 second; this is the default
appendfsync everysec 
# Put the write command into the AOF buffer first; the OS decides when to flush the buffer to disk
appendfsync no

insert image description here
After changing the configuration, restart Redis (we started it earlier as a background process), then execute:

select 4
set school 4

View the appendonly.aof file
insert image description here

Comparison of three strategies:
insert image description here

1.2.3. AOF file rewriting

Because it is a record command, the AOF file will be much larger than the RDB file. And AOF will record multiple write operations to the same key, but only the last write operation is meaningful.
For example, if we modify num twice
insert image description here
and look at the appendonly.aof file, we find that it has been recorded twice.
insert image description here
By executing the bgrewriteaof command, the AOF file can be rewritten to achieve the same effect with the least number of commands.
insert image description here
let's do it

BGREWRITEAOF

insert image description here
Then we look at the appendonly.aof file and see that it has been compacted.
insert image description here
Then we restart Linux and check whether the data exists, and find that it still exists.
insert image description here

As shown in the figure, the AOF originally has three commands, but set num 123 and set num 666 are both operations on num. The second overwrites the first value, so recording the first command is meaningless.

So after the rewrite, the content of the AOF file is: mset name jack num 666
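
The idea behind the rewrite can be sketched in Python (a simplified model handling only set commands, not the real bgrewriteaof): replay the logged writes against an in-memory dict, then emit the minimal commands that rebuild the final state.

```python
# Sketch of the idea behind bgrewriteaof: replay the logged write commands
# against an in-memory dict, then emit the minimal commands to rebuild it.
# Simplified: only "set key value" commands are modeled.
def rewrite_aof(commands):
    state = {}
    for cmd in commands:
        op, key, value = cmd.split(maxsplit=2)
        if op == "set":
            state[key] = value   # later writes overwrite earlier ones
    if not state:
        return []
    # A single mset restores every key with its final value.
    pairs = " ".join(f"{k} {v}" for k, v in state.items())
    return [f"mset {pairs}"]

log = ["set name jack", "set num 123", "set num 666"]
print(rewrite_aof(log))   # ['mset name jack num 666']
```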

Redis will also automatically rewrite the AOF file when the threshold is triggered. Thresholds can also be configured in redis.conf:

# Trigger a rewrite when the AOF file has grown by more than this percentage since the last rewrite
auto-aof-rewrite-percentage 100
# Minimum AOF file size before a rewrite is triggered
auto-aof-rewrite-min-size 64mb 
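
The interplay of the two thresholds can be sketched as follows (a simplified model of the decision, not Redis source): the file must exceed the minimum size AND have grown by the configured percentage since the last rewrite.

```python
# Sketch of the auto-rewrite trigger: the AOF must exceed the minimum size
# AND have grown by the configured percentage since the last rewrite.
def should_rewrite_aof(current_size, size_after_last_rewrite,
                       growth_pct=100, min_size=64 * 1024 * 1024):
    if current_size < min_size:
        return False                       # too small to bother rewriting
    if size_after_last_rewrite == 0:
        return True                        # no baseline yet
    growth = (current_size - size_after_last_rewrite) / size_after_last_rewrite * 100
    return growth >= growth_pct

MB = 1024 * 1024
print(should_rewrite_aof(128 * MB, 64 * MB))  # doubled since last rewrite -> True
print(should_rewrite_aof(32 * MB, 8 * MB))    # below 64mb minimum -> False
```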

1.3. Comparison between RDB and AOF

RDB and AOF each have their own advantages and disadvantages. If data safety requirements are high, they are often used in combination in actual development.
insert image description here

2. Redis master-slave

2.1. Build a master-slave architecture

The concurrency capability of single-node Redis has an upper limit. To further improve the concurrency capability of Redis, it is necessary to build a master-slave cluster to achieve read-write separation.
insert image description here

For the specific construction process, refer to the pre-class material "Redis Cluster.md":
insert image description here

2.1.1 Cluster structure

The master-slave cluster structure we built is shown in the figure:

insert image description here

There are three nodes in total, one master node and two slave nodes.

Here we will open 3 redis instances in the same virtual machine to simulate the master-slave cluster. The information is as follows:

IP PORT Role
192.168.150.101 7001 master
192.168.150.101 7002 slave
192.168.150.101 7003 slave

2.1.2. Prepare instance and configuration

To start three instances on the same virtual machine, three different configuration files and directories must be prepared. The directory where the configuration files are located is also the working directory.

1) Create a directory

We create three folders named 7001, 7002, and 7003:

# Enter the /tmp directory
cd /tmp
# Create the directories
mkdir 7001 7002 7003

As shown in the picture:
insert image description here

2) Restore the original configuration

Modify the redis-6.2.4/redis.conf file, change the persistence mode to the default RDB mode, and keep AOF off.

# Enable RDB
# save ""
save 3600 1
save 300 100
save 60 10000

# Disable AOF
appendonly no

insert image description here
close AOF
insert image description here

3) Copy the configuration file to each instance directory

Then copy the redis-6.2.4/redis.conf file to three directories (execute the following command in the /tmp directory):

# Option 1: copy one by one
cp redis-6.2.4/redis.conf 7001
cp redis-6.2.4/redis.conf 7002
cp redis-6.2.4/redis.conf 7003
# Option 2: pipe-and-xargs, one-shot copy
echo 7001 7002 7003 | xargs -t -n 1 cp redis-6.2.4/redis.conf
# On my machine:
echo redis7001 redis7002 redis7003 | xargs -t -n 1 cp /usr/local/src/redis-6.2.6/redis.conf

insert image description here

4) Modify the port and working directory of each instance

Modify the configuration file in each folder, modify the ports to 7001, 7002, and 7003 respectively, and modify the storage location of the rdb file to your own directory (execute the following commands in the /tmp directory):

sed -i -e 's/6379/7001/g' -e 's/dir .\//dir \/tmp\/7001\//g' 7001/redis.conf
sed -i -e 's/6379/7002/g' -e 's/dir .\//dir \/tmp\/7002\//g' 7002/redis.conf
sed -i -e 's/6379/7003/g' -e 's/dir .\//dir \/tmp\/7003\//g' 7003/redis.conf

# On our machine:
sed -i -e 's/6379/7001/g' -e 's/dir .\//dir \/tmp\/redis7001\//g' redis7001/redis.conf
sed -i -e 's/6379/7002/g' -e 's/dir .\//dir \/tmp\/redis7002\//g' redis7002/redis.conf
sed -i -e 's/6379/7003/g' -e 's/dir .\//dir \/tmp\/redis7003\//g' redis7003/redis.conf

It can be seen that the modification succeeded
insert image description here
and the port change succeeded
insert image description here

5) Modify the declared IP of each instance

The virtual machine itself has multiple IPs. To avoid confusion later, we specify the announced IP of each instance in the redis.conf file. The format is as follows:

# Announced IP of the redis instance
replica-announce-ip 192.168.150.101

Each directory needs to be changed, and we can complete the modification with one click (execute the following command in the /tmp directory):

# Execute one by one
sed -i '1a replica-announce-ip 192.168.150.101' redis7001/redis.conf
sed -i '1a replica-announce-ip 192.168.150.101' redis7002/redis.conf
sed -i '1a replica-announce-ip 192.168.150.101' redis7003/redis.conf

# Or modify all at once
printf '%s\n' redis7001 redis7002 redis7003 | xargs -I{} -t sed -i '1a replica-announce-ip 192.168.150.101' {}/redis.conf

insert image description here
Then open one of the configuration files to check
insert image description here

2.1.3. Startup

In order to view the logs conveniently, we open three ssh windows, start three redis instances respectively, and start the command:
insert image description here

# Instance 1
redis-server redis7001/redis.conf
# Instance 2
redis-server redis7002/redis.conf
# Instance 3
redis-server redis7003/redis.conf

After startup:
insert image description here

If you want to stop them all at once, you can run the following command:

printf '%s\n' 7001 7002 7003 | xargs -I{} -t redis-cli -p {} shutdown

2.1.4. Establish the master-slave relationship

Now the three instances are unrelated to each other. To configure master-slave replication, use the replicaof or slaveof (pre-5.0) command.

There are two modes, temporary and permanent:

  • Modify the configuration file (permanent)

    • Add a line to redis.conf: slaveof <masterip> <masterport>
  • Use the redis-cli client to connect to the redis service and execute the slaveof command (lost after a restart):

  slaveof <masterip> <masterport>

Note: The replicaof command was added in 5.0 and has the same effect as slaveof.

Here we use the second method for the convenience of demonstration.

Connect to 7002 through the redis-cli command, and execute the following command:

# Connect to 7002
redis-cli -p 7002
# Execute slaveof
slaveof 192.168.150.101 7001

Connect to 7003 through the redis-cli command, and execute the following command:

# Connect to 7003
redis-cli -p 7003
# Execute slaveof
slaveof 192.168.150.101 7001

Then connect to node 7001 to view the cluster status:

# Connect to 7001
redis-cli -p 7001
# Check the status
info replication

result:
insert image description here

2.1.5. Testing

Do the following to test:

  • Use redis-cli to connect to 7001, execute set num 123

insert image description here

  • Use redis-cli to connect to 7002, execute get num, and then execute set num 666

insert image description here

  • Use redis-cli to connect to 7003, execute get num, and then execute set num 888

insert image description here

It can be found that only the master node 7001 can perform write operations, and the two slave nodes 7002 and 7003 can only perform read operations.

Summary:
Suppose there are two Redis instances, A and B, how to make B the slave node of A?
● Execute the command on node B: slaveof A's IP A's port

2.2. Master-slave data synchronization principle

2.2.1. Full synchronization

When the master-slave establishes a connection for the first time, it will perform full synchronization and copy all the data of the master node to the slave node. The process is as follows:
insert image description here

Here is a question: how does the master know that the slave is connecting for the first time?

There are several concepts that can be used as the basis for judgment:

  • Replication Id: replid for short, the mark of a data set; the same id means the same data set. Each master has a unique replid, and a slave inherits the replid of its master node
  • offset: the offset, which grows as data is written to repl_baklog. When the slave synchronizes, it records its current offset. If the slave's offset is smaller than the master's, the slave's data lags behind the master and needs to be updated.

Therefore, for data synchronization, the slave must declare its own replication id and offset to the master, so that the master can determine which data needs to be synchronized.

Because the slave was originally a master with its own replid and offset, when it becomes a slave for the first time and establishes a connection with the master, the replid and offset it sends are its own.

The master judges that the replid sent by the slave is inconsistent with its own, indicating that this is a brand new slave, and it knows that it needs to do full synchronization.

The master sends its replid and offset to the slave, and the slave saves this information. From then on, the slave's replid is the same as the master's.

Therefore, the basis for the master to judge whether a node is synchronized for the first time is to see whether the replid is consistent .

Figure:
insert image description here
complete process description:

  • The slave node requests incremental synchronization
  • The master node judges replid, finds inconsistency, and refuses incremental synchronization
  • Master generates RDB with complete memory data and sends RDB to slave
  • The slave clears the local data and loads the RDB of the master
  • The master records the commands during the RDB period in repl_baklog, and continuously sends the commands in the log to the slave
  • The slave executes the received command and keeps in sync with the master
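
The master's decision between full and incremental sync can be sketched as follows (a simplified model based on the replid/offset rules above, not Redis source):

```python
# Sketch of how the master chooses between full and incremental sync,
# based on the replid and offset the slave reports (simplified model).
def choose_sync(master_replid, backlog_min_offset, slave_replid, slave_offset):
    if slave_replid != master_replid:
        return "full"          # different data set: first-time connection
    if slave_offset < backlog_min_offset:
        return "full"          # slave's offset already overwritten in repl_baklog
    return "incremental"       # replay the commands after slave_offset

print(choose_sync("abc", 100, "xyz", 0))    # brand-new slave -> full
print(choose_sync("abc", 100, "abc", 150))  # offset still in backlog -> incremental
print(choose_sync("abc", 100, "abc", 50))   # fell out of the backlog -> full
```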

2.2.2. Incremental synchronization

Full synchronization needs to create RDB first, and then transfer the RDB file to a slave through the network, which is too expensive. Therefore, except for the first full synchronization, the slave and the master perform incremental synchronization most of the time .

What is incremental sync? It updates only the part of the data that differs between the slave and the master. As shown in the picture:
insert image description here

So how does the master know where the data difference between the slave and itself is?

2.2.3. repl_backlog principle

How does the master know where the data difference between the slave and itself is?

This is about the repl_baklog file during full synchronization.

This file is a fixed-size array, but the array is circular: after the write position reaches the end of the array, it starts again from 0, so the data at the head of the array will eventually be overwritten.

Repl_baklog will record the command log and offset processed by Redis, including the current offset of the master and the offset that the slave has copied to:
insert image description here

The difference between the offset of the slave and the master is the data that the slave needs to incrementally copy.

As data continues to be written, the offset of the master gradually increases, and the slave keeps copying to catch up with the offset of the master:
insert image description here

until the array is filled:
insert image description here

At this point, if new data is written, the old data in the array will be overwritten. However, as long as the overwritten data is green (already synchronized to the slave), overwriting it has no effect, because only the red part has not been synchronized.

However, if the slave has network congestion, the offset of the master far exceeds the offset of the slave:
insert image description here

If the master continues to write new data, its offset will overwrite the old data until the current offset of the slave is also overwritten:
insert image description here

The red part in the brown box is the data that has not been synchronized but has been overwritten. If the slave recovers at this point and tries to synchronize, it finds that its offset is gone, so incremental synchronization cannot be completed. It can only do a full sync.
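
The whole ring-buffer behavior can be sketched with a toy model (a circular array indexed by offset modulo size; real Redis uses a byte buffer, not a command list): once the master runs ahead of a slave by more than the buffer size, only a full sync can recover.

```python
# Toy ring-buffer model of repl_baklog: a fixed-size circular array indexed
# by offset % size. Once the master runs ahead by more than the buffer size,
# the slave's position has been overwritten and a full sync is required.
class ReplBacklog:
    def __init__(self, size):
        self.size = size
        self.buf = [None] * size
        self.master_offset = 0

    def write(self, commands):
        for cmd in commands:
            self.buf[self.master_offset % self.size] = cmd
            self.master_offset += 1

    def can_incremental_sync(self, slave_offset):
        # Incremental sync works only while the gap fits inside the buffer.
        return self.master_offset - slave_offset <= self.size

    def read_from(self, slave_offset):
        assert self.can_incremental_sync(slave_offset)
        return [self.buf[o % self.size]
                for o in range(slave_offset, self.master_offset)]

log = ReplBacklog(size=4)
log.write(["set a 1", "set b 2", "set c 3"])
print(log.can_incremental_sync(0))   # gap of 3 fits in 4 slots -> True
log.write(["set d 4", "set e 5"])    # master_offset is now 5
print(log.can_incremental_sync(0))   # gap of 5 > 4 -> full sync required
print(log.read_from(2))              # ['set c 3', 'set d 4', 'set e 5']
```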
insert image description here

2.3. Master-slave synchronization optimization

Master-slave synchronization can ensure the consistency of master-slave data, which is very important.

Redis master-slave clusters can be optimized from the following aspects:

  • Configure repl-diskless-sync yes in the master to enable diskless replication to avoid disk IO during full synchronization. (This scenario can be used if the network bandwidth is sufficient)

Change the configuration in the master's redis.conf
insert image description here

  • The memory usage on a Redis single node should not be too large to reduce excessive disk IO caused by RDB
  • Appropriately increase the size of repl_baklog, realize fault recovery as soon as possible when the slave is down, and avoid full synchronization as much as possible
  • Limit the number of slave nodes on a master. If there are too many slaves, you can use a master-slave-slave chain structure to reduce the pressure on the master

Master-slave architecture diagram:
insert image description here

2.4. Summary

Briefly describe the difference between full synchronization and incremental synchronization?

  • Full synchronization: the master generates an RDB with complete memory data and sends the RDB to the slave. Subsequent commands are recorded in repl_baklog and sent to slave one by one.
  • Incremental synchronization: the slave submits its own offset to the master, and the master obtains the commands after the offset in the repl_baklog to the slave

When to perform full synchronization?

  • When the slave node connects to the master node for the first time
  • The slave node has been disconnected for too long and the offset in the repl_baklog has been overwritten

When is an incremental sync performed?

  • When the slave node is disconnected and restored, and the offset can be found in the repl_baklog

3. Redis Sentinel

Redis provides a Sentinel mechanism to achieve automatic failure recovery of the master-slave cluster.

3.1. Sentinel principle

insert image description here

3.1.1. Cluster structure and function

The structure of the sentinel is shown in the figure:
insert image description here
the role of the sentinel is as follows:

  • Monitoring : Sentinel constantly checks that your master and slave are working as expected
  • Automatic failure recovery: If the master fails, Sentinel promotes a slave to master. When the faulty instance recovers, the new master remains the master
  • Notification : Sentinel acts as the service discovery source for the Redis client, and when the cluster fails over, it will push the latest information to the Redis client

3.1.2. Cluster monitoring principle

Sentinel monitors the service status based on the heartbeat mechanism, and sends a ping command to each instance of the cluster every 1 second:

• Subjective offline: If a sentinel node finds that an instance does not respond within the specified time, it considers the instance to be offline subjectively .

• Objective offline: If more than the specified number (quorum) of sentinels think that the instance is offline subjectively, the instance will be objectively offline . The quorum value is preferably more than half of the number of Sentinel instances.
insert image description here

3.1.3. Cluster failure recovery principle

Once a master failure is found, sentinel needs to select one of the slaves as the new master. The selection basis is as follows:

  • First, judge how long the slave node has been disconnected from the master. If it exceeds the specified value (down-after-milliseconds * 10), the slave node is excluded
  • Then judge the slave-priority value of the slave node: the smaller the value, the higher the priority; 0 means it never participates in the election
  • If the slave-priority values are the same, judge the offset of the slave node: the larger the offset, the newer the data and the higher the priority
  • Finally, judge the run id of the slave node: the smaller the id, the higher the priority
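
The selection rules above can be sketched as a sort (a simplified model; the field names and the down-after-milliseconds value are illustrative, not Sentinel's internals):

```python
# Sketch of the new-master selection rules (simplified model).
# Each candidate: name, ms disconnected, slave-priority, offset, run id.
DOWN_AFTER_MS = 5000  # illustrative down-after-milliseconds value

def pick_new_master(slaves):
    # Rule 1: exclude slaves disconnected longer than down-after-milliseconds * 10,
    # and slaves with priority 0 (they never participate in the election).
    eligible = [s for s in slaves
                if s["disconnect_ms"] <= DOWN_AFTER_MS * 10
                and s["priority"] != 0]
    # Rules 2-4: smaller priority wins, then larger offset, then smaller run id.
    eligible.sort(key=lambda s: (s["priority"], -s["offset"], s["run_id"]))
    return eligible[0]["name"] if eligible else None

slaves = [
    {"name": "7002", "disconnect_ms": 1000,  "priority": 1, "offset": 200, "run_id": "b"},
    {"name": "7003", "disconnect_ms": 1000,  "priority": 1, "offset": 150, "run_id": "a"},
    {"name": "7004", "disconnect_ms": 99999, "priority": 1, "offset": 500, "run_id": "c"},
]
print(pick_new_master(slaves))   # '7002': eligible, same priority, newest data
```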

When a new master is elected, how to implement the switch?

The process is as follows:

  • Sentinel sends the slaveof no one command to the candidate slave1 node to make the node the master
  • Sentinel sends the slaveof 192.168.150.101 7002 command to all other slaves to make these slaves become slave nodes of the new master and start synchronizing data from the new master.
  • Finally, sentinel marks the failed node as a slave, and when the failed node recovers, it will automatically become the slave node of the new master

insert image description here

3.1.4. Summary

What are the three functions of Sentinel?

  • monitor
  • failover
  • notify

How does Sentinel judge whether a redis instance is healthy?

  • Send a ping command every 1 second, if there is no communication for a certain period of time, it is considered as a subjective offline
  • If most of the sentinels think that the instance is offline subjectively, it is determined that the service is offline

What are the failover steps?

  • First select a slave as the new master, execute slaveof no one
  • Then let all nodes execute slaveof new master
  • Modify the faulty node configuration, add slaveof new master

3.2. Building a sentinel cluster

For the specific construction process, refer to the pre-class material "Redis Cluster.md":
insert image description here

3.2.1. Cluster structure

Here we build a Sentinel cluster formed by three nodes to supervise the previous Redis master-slave cluster. As shown in the picture:
insert image description here

The information of the three sentinel instances is as follows:

node IP PORT
s1 192.168.150.101 27001
s2 192.168.150.101 27002
s3 192.168.150.101 27003

3.2.2. Prepare instance and configuration

To start three instances on the same virtual machine, three different configuration files and directories must be prepared. The directory where the configuration files are located is also the working directory.

We create three folders named sentinel1, sentinel2, and sentinel3 (the course material calls them s1, s2, s3):

# Enter the /tmp directory
cd /tmp
# Create the directories
mkdir sentinel1 
mkdir sentinel2
mkdir sentinel3

As shown in the picture:
insert image description here

Then we create a sentinel.conf file in the s1 directory and add the following content:

port 27001
sentinel announce-ip 192.168.150.101
sentinel monitor mymaster 192.168.150.101 7001 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
dir "/tmp/sentinel1"

Interpretation:

  • port 27001: the port of the current sentinel instance
  • sentinel monitor mymaster 192.168.150.101 7001 2: specifies the master node information
    • mymaster: the master node name, user-defined, any name will do
    • 192.168.150.101 7001: the IP and port of the master node
    • 2: the quorum value for failure judgment (preferably more than half of the sentinel instances)

Then copy the s1/sentinel.conf file to the s2 and s3 directories (execute the following commands in the /tmp directory):

# Option 1: copy one by one
cp s1/sentinel.conf s2
cp s1/sentinel.conf s3
# Option 2: pipe-and-xargs, one-shot copy
echo sentinel2 sentinel3 | xargs -t -n 1 cp sentinel1/sentinel.conf

Modify the configuration files in the two folders s2 and s3, and change the ports to 27002 and 27003 respectively:

sed -i -e 's/27001/27002/g' -e 's/s1/s2/g' s2/sentinel.conf
sed -i -e 's/27001/27003/g' -e 's/s1/s3/g' s3/sentinel.conf

3.2.3. Startup

In order to view the logs conveniently, we open three ssh windows and start the three sentinel instances respectively. The startup commands:

insert image description here

The command is as follows:

# Instance 1
redis-sentinel sentinel1/sentinel.conf
# Instance 2
redis-sentinel sentinel2/sentinel.conf
# Instance 3
redis-sentinel sentinel3/sentinel.conf

After startup:
insert image description here

3.2.4. Testing

Try to shut down the master node 7001, check the sentinel log:
insert image description here

View the log of 7003:
insert image description here
View the log of 7002:
insert image description here

3.3.RedisTemplate

In a Redis master-slave cluster supervised by a Sentinel cluster, the nodes change due to automatic failover, and the Redis client must perceive this change and update the connection information in time. The underlying layer of Spring's RedisTemplate uses Lettuce to implement node awareness and automatic switching.

Next, we implement the RedisTemplate integration sentinel mechanism through a test.

3.3.1. Import Demo project

First, we introduce the demo project provided by the pre-class materials:
insert image description here

3.3.2. Introducing dependencies

Introduce dependencies in the project's pom file:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>

3.3.3. Configure Redis address

Then specify the sentinel related information of redis in the configuration file application.yml:

spring:
  redis:
    sentinel:
      master: mymaster
      nodes:
        - 192.168.150.101:27001
        - 192.168.150.101:27002
        - 192.168.150.101:27003

3.3.4. Configure read-write separation

In the startup class of the project, add a new bean:

@Bean
public LettuceClientConfigurationBuilderCustomizer clientConfigurationBuilderCustomizer(){
    return clientConfigurationBuilder -> clientConfigurationBuilder.readFrom(ReadFrom.REPLICA_PREFERRED);
}

or written as

    @Bean
    public LettuceClientConfigurationBuilderCustomizer clientConfigurationBuilderCustomizer() {
        return new LettuceClientConfigurationBuilderCustomizer() {
            @Override
            public void customize(LettuceClientConfiguration.LettuceClientConfigurationBuilder clientConfigurationBuilder) {
                clientConfigurationBuilder.readFrom(ReadFrom.REPLICA_PREFERRED);
            }
        };
    }

Look at the data in redis
insert image description here

access

http://localhost:8080/get/num

Get
insert image description here
Look at the log, the current reading is 7002

insert image description here
Now let's call the write endpoint:

http://localhost:8080/set/num/666

We find that the write is routed to the master node
insert image description here
. So we shut down the current master node, 7003,
and found that 7001 became the new master
insert image description here
. We then restarted 7003 and found that 7001 is still the master.
Now we visit

http://localhost:8080/set/num/777

Look at the log.
insert image description here
This bean configures the read strategy. There are four options:

  • MASTER: read from the master node
  • MASTER_PREFERRED: prefer the master node; read replicas when the master is unavailable
  • REPLICA: read from the slave (replica) nodes
  • REPLICA_PREFERRED: prefer the slave (replica) nodes; read the master when all replicas are unavailable

4. Redis sharded cluster

4.1. Build a sharded cluster

Master-slave and sentry can solve the problem of high availability and high concurrent reading. But there are still two unresolved issues:

  • Mass data storage problem

  • The problem of high concurrent writing

Using a sharded cluster can solve the above problems, as shown in the figure:
insert image description here

Sharded cluster features:

  • There are multiple masters in the cluster, and each master saves different data

  • Each master can have multiple slave nodes

  • The master monitors each other's health status through ping

  • Client requests can access any node in the cluster and will eventually be forwarded to the correct node

For the specific construction process, refer to the pre-class material "Redis Cluster.md":

insert image description here

4.1.1. Cluster structure

Sharded clusters require a large number of nodes. Here we build a minimal sharded cluster, including 3 master nodes, and each master contains a slave node. The structure is as follows:
insert image description here

Here we will open 6 redis instances in the same virtual machine to simulate a sharded cluster. The information is as follows:

IP PORT Role
192.168.150.101 7001 master
192.168.150.101 7002 master
192.168.150.101 7003 master
192.168.150.101 8001 slave
192.168.150.101 8002 slave
192.168.150.101 8003 slave

4.1.2. Prepare instance and configuration

Stop all previous redis clusters first

Delete the previous directories 7001, 7002, and 7003, and recreate the directories 7001, 7002, 7003, 8001, 8002, and 8003:

# Enter the /tmp directory
cd /tmp
# Remove the old directories to avoid configuration interference
rm -rf 7001 7002 7003
# Create the directories
mkdir 7001 7002 7003 8001 8002 8003

Prepare a new redis.conf file under /tmp with the following content:

port 6379
# Enable cluster mode
cluster-enabled yes
# Cluster configuration file name; no need to create it, redis maintains it itself
cluster-config-file /tmp/6379/nodes.conf
# Node heartbeat failure timeout
cluster-node-timeout 5000
# Directory for persistence files
dir /tmp/6379
# Bind address
bind 0.0.0.0
# Run redis in the background
daemonize yes
# Announced instance IP
replica-announce-ip 192.168.150.101
# Protected mode
protected-mode no
# Number of databases
databases 1
# Log file
logfile /tmp/6379/run.log

Copy this file into each directory:

# Enter the /tmp directory
cd /tmp
# Execute the copy
echo 7001 7002 7003 8001 8002 8003 | xargs -t -n 1 cp redis.conf

Modify redis.conf in each directory, and modify 6379 to be consistent with the directory:

# Enter the /tmp directory
cd /tmp
# Modify the configuration files
printf '%s\n' 7001 7002 7003 8001 8002 8003 | xargs -I{} -t sed -i 's/6379/{}/g' {}/redis.conf

4.1.3. Startup

Since the background startup mode has been configured, the service can be started directly:

# Enter the /tmp directory
cd /tmp
# Start all services at once
printf '%s\n' 7001 7002 7003 8001 8002 8003 | xargs -I{} -t redis-server {}/redis.conf

View status via ps:

ps -ef | grep redis

Discovery services have been started normally:
insert image description here

If you want to close all processes, you can execute the command:

ps -ef | grep redis | awk '{print $2}' | xargs kill

or (recommended this way):

printf '%s\n' 7001 7002 7003 8001 8002 8003 | xargs -I{} -t redis-cli -p {} shutdown

4.1.4. Create a cluster

Although the service is started, each service is currently independent without any association.

We need to execute commands to create the cluster. Before Redis 5.0, creating a cluster was troublesome; since 5.0, the cluster management commands have been integrated into redis-cli.

1) Before Redis 5.0

Cluster commands before Redis 5.0 were implemented with src/redis-trib.rb in the redis installation package. Because redis-trib.rb is written in Ruby, the Ruby environment must be installed first.

# Install dependencies
yum -y install zlib ruby rubygems
gem install redis

Then use the command to manage the cluster:

# Enter redis's src directory
cd /tmp/redis-6.2.4/src
# Create the cluster
./redis-trib.rb create --replicas 1 192.168.150.101:7001 192.168.150.101:7002 192.168.150.101:7003 192.168.150.101:8001 192.168.150.101:8002 192.168.150.101:8003

2) After Redis5.0

We are using Redis 6.2.4, where cluster management is integrated into redis-cli. The format is as follows:

redis-cli --cluster create --cluster-replicas 1 192.168.150.101:7001 192.168.150.101:7002 192.168.150.101:7003 192.168.150.101:8001 192.168.150.101:8002 192.168.150.101:8003

Command description:

  • redis-cli --cluster or ./redis-trib.rb: indicates a cluster operation command
  • create: creates a cluster
  • --replicas 1 or --cluster-replicas 1: specifies that each master in the cluster has 1 replica. The number of masters is then total nodes ÷ (replicas + 1); with 6 nodes and 1 replica each, that gives 6 ÷ 2 = 3 masters. Therefore the first 3 nodes in the list become masters, and the remaining nodes become slaves, randomly assigned to different masters.
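The master/slave split described by the last bullet can be sketched as follows (an illustration only; the actual assignment of slaves to specific masters is decided by redis-cli):

```python
# The node list passed to the create command
nodes = ["192.168.150.101:%d" % p for p in (7001, 7002, 7003, 8001, 8002, 8003)]
replicas = 1

# number of masters = total nodes / (replicas + 1)
master_count = len(nodes) // (replicas + 1)
masters = nodes[:master_count]   # the first n nodes in the list become masters
slaves = nodes[master_count:]    # the rest become slaves of some master

print(master_count)  # 3
print(masters)       # the 7001 / 7002 / 7003 instances
```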

Appearance after running:
insert image description here
Enter yes here, and the cluster will start to be created:
insert image description here

You can view the cluster status with the command:

redis-cli -p 7001 cluster nodes

insert image description here

4.1.5. Testing

Try to connect to node 7001 and store a piece of data:

# connect
redis-cli -p 7001
# store data
set num 123
# read data
get num
# store again
set a 1

The last command results in an error:

insert image description here

When operating on a cluster, you need to add the -c parameter to redis-cli:

redis-cli -c -p 7001

This time it works. The redirected message indicates a redirection: when we access a key on one node, the node determines which instance the key belongs to according to its hash slot, and the client is redirected to that instance for the query.
insert image description here

4.2. Hash slots

4.2.1. Slot principle

Redis maps a total of 16384 hash slots (numbered 0 to 16383) onto the master nodes, which can be seen when viewing the cluster information:
insert image description here

Data keys are not bound to nodes, but to slots. Redis will calculate the slot value based on the effective part of the key, in two cases:

  • The key contains "{}", and "{}" contains at least 1 character, and the part in "{}" is a valid part
  • The key does not contain "{}", the entire key is a valid part

For example: if the key is num, the slot is calculated from num; if it is {itcast}num, it is calculated from itcast. The calculation uses the CRC16 algorithm to obtain a hash value, which is then taken modulo 16384; the result is the slot value.
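The calculation above can be sketched in Python — a minimal illustration of CRC16 (the XMODEM variant used by Redis Cluster) plus the hash-tag rule, not Redis's actual C implementation; the expected slot values are the ones from this section:

```python
def crc16(data: bytes) -> int:
    """CRC16-XMODEM (poly 0x1021, init 0), the variant used by Redis Cluster."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Compute the hash slot for a key, honoring the {hash tag} rules."""
    start = key.find('{')
    if start != -1:
        end = key.find('}', start + 1)
        if end != -1 and end != start + 1:   # "{}" must contain at least 1 character
            key = key[start + 1:end]         # only the tag is hashed
    return crc16(key.encode()) % 16384

print(key_slot('num'))   # 2765, as in the example below
print(key_slot('a'))     # 15495
print(key_slot('{itcast}num') == key_slot('{itcast}a'))  # True: same tag, same slot
```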

insert image description here

As shown in the figure, when set a 1 is executed on node 7001, a hash is computed on a and taken modulo 16384, giving 15495, so the key must be stored on node 7003.

After being redirected to 7003, when get num is executed, a hash is computed on num and taken modulo 16384, giving 2765, so the request is redirected back to node 7001.
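This redirection dance can be illustrated with a toy simulation in Python. It is purely conceptual — not the real Redis protocol — and the slot ranges assume the default even split of a 3-master cluster:

```python
# Assumed slot ownership for a fresh 3-master cluster (default even split)
SLOTS_PER_NODE = {
    "7001": range(0, 5461),
    "7002": range(5461, 10923),
    "7003": range(10923, 16384),
}

def node_for_slot(slot: int) -> str:
    """Look up which node owns a given slot."""
    return next(n for n, r in SLOTS_PER_NODE.items() if slot in r)

def server_get(node: str, slot: int):
    """A node either serves the slot or answers with a MOVED redirection."""
    owner = node_for_slot(slot)
    if owner != node:
        return ("MOVED", slot, owner)
    return ("OK", slot, node)

def cluster_client_get(start_node: str, slot: int):
    """Like redis-cli -c: follow the MOVED reply to the owning node."""
    reply = server_get(start_node, slot)
    if reply[0] == "MOVED":
        reply = server_get(reply[2], slot)   # retry on the node named in MOVED
    return reply

# Key a (slot 15495) asked of 7001: redirected to 7003, then served
print(cluster_client_get("7001", 15495))  # ('OK', 15495, '7003')
```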

4.2.2. Summary

How does Redis determine which instance a key should be in?

  • Allocate 16384 slots to different instances
  • Calculate the hash value according to the effective part of the key, and take the remainder of 16384
  • The remainder is the slot value; the key lives on the instance that holds that slot

How to save the same type of data in the same Redis instance?

  • This type of data uses the same effective part, for example, keys are all prefixed with {typeId}

4.3. Cluster scaling

redis-cli --cluster provides many commands to operate the cluster, which can be viewed in the following ways:
insert image description here
For example, the command to add a node:
insert image description here

4.3.1. Demand Analysis

Requirement: Add a new master node to the cluster and store num = 10 in it

  • Start a new redis instance with port 7004
  • Add 7004 to the previous cluster and act as a master node
  • Assign a slot to the 7004 node so that the key num can be stored in the 7004 instance

Two new operations are needed here:

  • Add a node to the cluster
  • Assign some slots to the new node

4.3.2. Create a new redis instance

Create a folder:

mkdir 7004

Copy the configuration file:

cp redis.conf  7004

Modify the configuration file:

sed -i 's/6379/7004/g' 7004/redis.conf

Start it up:

redis-server 7004/redis.conf

4.3.3. Add new nodes to redis

The syntax for adding a node is as follows:
insert image description here

Execute the command:

redis-cli --cluster add-node  192.168.150.101:7004 192.168.150.101:7001

Check the cluster status with the command:

redis-cli -p 7001 cluster nodes

As shown in the figure, 7004 has joined the cluster and is a master node by default:

insert image description here
However, you can see that node 7004 holds 0 slots, so no data can be stored on 7004 yet.

4.3.4. Transfer slots

We want to store num on node 7004, so first we need to see which slot num belongs to:

redis-cli -c -p 7001
get num
get a
get num

insert image description here

As shown above, the slot of num is 2765.

We can transfer slots 0~3000 from 7001 to 7004. The command format is as follows:
insert image description here

The specific commands are as follows:

Establish a connection:

redis-cli --cluster reshard 192.168.150.101:7001

insert image description here

Get the following feedback:
insert image description here
It asks how many slots to move; we plan to move 3000:

Then comes the next question:
insert image description here

Which node should receive these slots?

Obviously it is 7004, so what is the id of node 7004?
insert image description here

Copy this id and paste it into the console:
insert image description here

Next it asks where the slots should be moved from:

  • all: every master node transfers part of its slots
  • a specific id: the id of a source node to take slots from
  • done: finished entering source nodes

Here we want to get from 7001, so fill in the id of 7001:
insert image description here

After entering the source, type done, and the slot transfer plan is ready:
insert image description here

Are you sure you want to transfer? Enter yes:

Then, view the results with the command:

redis-cli -p 7001 cluster nodes

insert image description here

You can see:
insert image description here

The goal is achieved. Connect again to verify:

redis-cli -c -p 7001

insert image description here
Homework: Delete node 7004
First check the command to delete the node

redis-cli --cluster help

The help documentation shows:

del-node       host:port node_id

Check the node id by command:

redis-cli -p 7001 cluster nodes

Connect to the cluster:

redis-cli -c -p 7001

Delete the node:

redis-cli --cluster del-node 192.168.150.101:7004 fce0c2f09c4a2fbf5d9caefdf4aa3e6ab0aeb259

Deleting it directly reports an error:
insert image description here
The slots on 7004 must first be moved back to 7001:

redis-cli --cluster reshard 192.168.150.101:7004

Enter 3000 as the number of slots to move:
insert image description here
Enter the id of 7001 as the receiving node:
insert image description here
Enter the id of 7004 as the slot source, then done:
insert image description here
Enter yes to confirm:
insert image description here
Finally, delete the node:

redis-cli --cluster del-node 192.168.150.101:7004 fce0c2f09c4a2fbf5d9caefdf4aa3e6ab0aeb259

Check:

redis-cli -p 7001 cluster nodes

7004 is gone:
insert image description here

4.4. Failover

The initial state of the cluster is as follows:
insert image description here

Among them, 7001, 7002, and 7003 are masters, and we plan to shut down 7002.

4.4.1. Automatic failover

What happens when a master in the cluster goes down?
Stop a redis instance directly, such as 7002:

redis-cli -p 7002 shutdown

1) First, the instance loses connection with other instances

2) Then the node is suspected to be down:
insert image description here

3) Finally, it is confirmed as down, and a slave is automatically promoted to be the new master:
insert image description here

4) When 7002 starts up again, it becomes a slave node:

redis-server 7002/redis.conf

insert image description here

4.4.2. Manual failover

With the cluster failover command, you can manually make a master in the cluster step down and transfer its role to the slave node that executes the command, achieving data migration without any perceived downtime. The process is as follows:
insert image description here

This failover command can specify three modes:

  • default: the default flow, steps 1~6 shown in the figure
  • force: skips the offset consistency check
  • takeover: executes step 5 directly, ignoring data consistency, the master's status, and other masters' opinions

Case requirement: perform a manual failover on the slave node 7002 to regain master status

Proceed as follows:

1) Use redis-cli to connect to node 7002

2) Execute the cluster failover command

As shown in the picture:

redis-cli -p 7002
cluster failover

insert image description here

Effect: 7002 has become the master again
insert image description here

4.5. RedisTemplate access to a sharded cluster

Under the hood, RedisTemplate also supports sharded clusters via Lettuce, and the usage steps are basically the same as in sentinel mode:

1) Introduce the starter dependency of redis

2) Configure the shard cluster address

3) Configure read-write separation

Compared with sentinel mode, only the sharded-cluster configuration is slightly different, as follows:

spring:
  redis:
    cluster:
      nodes:
        - 192.168.150.101:7001
        - 192.168.150.101:7002
        - 192.168.150.101:7003
        - 192.168.150.101:8001
        - 192.168.150.101:8002
        - 192.168.150.101:8003


Restart the service after configuration, then visit:

http://localhost:8080/get/num

The value is read from a slave node:
insert image description here

Then visit:

http://localhost:8080/set/num/777

The write is routed to the master node.

Origin blog.csdn.net/sinat_38316216/article/details/129788825