Article directory
- 1. Distributed cache
- 2. Multi-level cache
- 3. Best Practices
- Log
1. Distributed cache
Summary of notes:
- See each section below for details.
1.1 Overview
Explanation:
A single-node Redis has several problems: data loss, limited read concurrency, no automatic failure recovery, and limited storage capacity. The sections below address them in turn.
1.2 Redis persistence
Summary of notes:
- Overview: Redis is an in-memory database that can persist data to disk to prevent data loss.
- RDB (Redis database backup file):
- Redis forks a child process to write the snapshot asynchronously.
- By default, Redis performs one RDB save when it shuts down.
- By default, Redis triggers an RDB save if at least 1 key changed within 15 minutes, 10 keys within 5 minutes, or 10,000 keys within 1 minute.
- AOF (Append Only File):
- Every write command processed by Redis is recorded in the AOF file.
- Common operations: enable AOF, adjust the fsync frequency, set the rewrite thresholds.
1.2.1 Overview
Redis is an in-memory database that can save data to disk through a persistence mechanism to prevent data loss.
1.2.2 RDB
1.2.2.1 Overview
The full name of RDB is Redis Database Backup file, also called a Redis data snapshot. Simply put, all the data in memory is written to disk; when a Redis instance restarts after a failure, it reads the snapshot file from disk and restores the data.
Disadvantages of RDB: the execution interval is long, so data written between two RDB saves is at risk of being lost; forking the child process, compressing, and writing out the RDB file are all time-consuming.
Explanation:
- The snapshot file is called an RDB file and is saved in the current running directory by default. By default, Redis will execute RDB once when it is shut down.
1.2.2.2 Basic Use Cases
- Modify the snapshot frequency
# If at least 1 key is modified within 900 seconds, run bgsave; save "" disables RDB
save 900 1
save 300 10
save 60 10000
Explanation:
Redis has built-in rules that trigger RDB, which can be found in the `redis.conf` file.
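As a supplement, the way these save rules combine can be sketched in code. This is a hypothetical illustration of the rule semantics (names such as `shouldBgsave` are invented), not Redis's actual implementation:

```java
// Sketch of how the `save <seconds> <changes>` rules decide whether to
// trigger bgsave. Hypothetical illustration, not Redis source code.
public class RdbTrigger {
    // each rule: {seconds, minChanges}, mirroring save 900 1 / 300 10 / 60 10000
    static final int[][] SAVE_RULES = {{900, 1}, {300, 10}, {60, 10000}};

    // bgsave fires if, for any rule, at least `minChanges` writes happened
    // and at least `seconds` have elapsed since the last snapshot
    public static boolean shouldBgsave(int elapsedSeconds, int changes) {
        for (int[] rule : SAVE_RULES) {
            if (elapsedSeconds >= rule[0] && changes >= rule[1]) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(shouldBgsave(900, 1));   // one change in 15 minutes -> true
        System.out.println(shouldBgsave(60, 9999)); // not enough changes in 1 minute -> false
    }
}
```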
- Other parameter settings
# Whether to compress; recommended off, since compression also costs CPU and disk space is cheap
rdbcompression yes
# RDB file name
dbfilename dump.rdb
# directory where the file is saved
dir ./
Explanation:
- Compression is on by default.
- Other RDB options can also be set in the `redis.conf` file.
1.2.2.3 Principle
Explanation:
- In Redis, the main process does not read physical memory directly; it reads it through page-table mappings.
- When the `bgsave` command starts, the main process forks a child process, and the child process shares the main process's memory data. After the fork completes, the child process reads the memory data and writes it to the RDB file.
- Fork uses copy-on-write: when the main process reads, it accesses the shared memory; when it writes, it first copies the affected data and performs the write on the copy.
1.2.3 AOF
1.2.3.1 Overview
AOF stands for Append Only File. Every write command processed by Redis will be recorded in the AOF file, which can be regarded as a command log file.
Explanation:
Every write command is appended to the AOF file, so the file keeps growing.
1.2.3.2 Basic use cases
- Turn on AOF
# Whether to enable AOF; default is no
appendonly yes
# Name of the AOF file
appendfilename "appendonly.aof"
Explanation:
- AOF is disabled by default; enable it in the `redis.conf` configuration file.
- When enabling AOF, it is recommended to disable RDB.
- Modify the fsync frequency
# fsync immediately after every write command
appendfsync always
# write commands go to the AOF buffer first, then are flushed to the AOF file every second (default)
appendfsync everysec
# write commands go to the AOF buffer; the OS decides when to flush to disk
appendfsync no
Explanation:
- The AOF fsync frequency can also be configured in the `redis.conf` file.
- Set the rewrite thresholds
# trigger a rewrite when the AOF file has grown by more than this percentage since the last rewrite
auto-aof-rewrite-percentage 100
# minimum AOF file size before a rewrite is triggered
auto-aof-rewrite-min-size 64mb
Explanation:
- Because AOF records commands, the AOF file is much larger than the RDB file. AOF also records multiple writes to the same key even though only the last write matters.
- Running the `bgrewriteaof` command rewrites the AOF file, achieving the same effect with the fewest possible commands.
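The two thresholds work together; the decision can be sketched as follows (a simplified model with invented names, assuming both conditions must hold, not Redis's source):

```java
// Sketch of the automatic AOF rewrite decision based on the two
// thresholds above. Hypothetical illustration, not Redis source code.
public class AofRewriteTrigger {
    static final long MIN_SIZE = 64L * 1024 * 1024; // auto-aof-rewrite-min-size 64mb
    static final int GROWTH_PERCENT = 100;          // auto-aof-rewrite-percentage 100

    // rewrite when the file is at least MIN_SIZE and has grown by at least
    // GROWTH_PERCENT percent since the last rewrite
    public static boolean shouldRewrite(long currentSize, long sizeAfterLastRewrite) {
        if (currentSize < MIN_SIZE) return false;
        long growthPercent = (currentSize - sizeAfterLastRewrite) * 100 / sizeAfterLastRewrite;
        return growthPercent >= GROWTH_PERCENT;
    }

    public static void main(String[] args) {
        long mb = 1024 * 1024;
        System.out.println(shouldRewrite(130 * mb, 64 * mb)); // grew ~103% -> true
        System.out.println(shouldRewrite(32 * mb, 16 * mb));  // below the 64mb floor -> false
    }
}
```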
1.2.4 Summary
Explanation:
RDB and AOF each have their own pros and cons. When data safety matters, the two are usually combined in practice.
1.3 Redis master-slave replication
Summary of notes:
- Overview: a master-slave cluster separates reads from writes and improves data reliability and integrity.
- Full synchronization: based on the `Replication Id` dataset marker, the `offset`, an `RDB` file generated by the master, and the `repl_baklog` command buffer.
- Incremental synchronization: after a slave restarts, the master replays the commands in the `repl_baklog` buffer from the slave's `offset`; if that offset has been overwritten, a full synchronization is performed instead.
- Summary: see below for details.
1.3.1 Overview
Explanation:
A single Redis node's concurrency has an upper limit. To increase it further, build a master-slave cluster and separate reads from writes.
1.3.2 Building a master-slave cluster
The three Redis nodes are as follows:
IP | PORT | Role |
---|---|---|
10.13.164.55 | 6379 | master |
10.13.164.55 | 6380 | slave |
10.13.164.55 | 6381 | slave |
Explanation:
The master node handles write operations; the slave nodes handle reads.
Step 1: Configure the environment
Explanation:
These Redis master-slave nodes are installed with Docker.
1. Create files and directories
cd /home
mkdir redis
cd redis
mkdir /home/redis/myredis1
mkdir /home/redis/myredis1/data
touch /home/redis/myredis1/myredis.conf
// Create a myredis.conf file and a data directory in myredis2 and myredis3 as well (commands omitted)
mkdir /home/redis/myredis2
mkdir /home/redis/myredis3
Explanation: view the results.
The content of the `myredis.conf` file is as follows:
bind 0.0.0.0
protected-mode no
port 6379
tcp-backlog 511
requirepass qweasdzxc
timeout 0
tcp-keepalive 300
daemonize no
supervised no
pidfile /var/run/redis_6379.pid
loglevel notice
logfile ""
databases 30
always-show-logo yes
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir ./
replica-serve-stale-data yes
replica-read-only yes
repl-diskless-sync no
repl-disable-tcp-nodelay no
replica-priority 100
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
appendonly yes
appendfilename "appendonly.aof"
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble yes
lua-time-limit 5000
slowlog-max-len 128
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
stream-node-max-bytes 4096
stream-node-max-entries 100
activerehashing yes
hz 10
dynamic-hz yes
aof-rewrite-incremental-fsync yes
rdb-save-incremental-fsync yes
masterauth qweasdzxc # password of the master Redis node
Note:
For the other two nodes, replace the port with 6380 and 6381 respectively.
Step 2: Run the Docker service
Note:
The `myredis.conf` files and `data` folders need to be created in advance.
1. Run the following commands on the host:
sudo docker run \
--restart=always \
--net=host \
--name myredis1 \
-v /home/redis/myredis1/myredis.conf:/etc/redis/redis.conf \
-v /home/redis/myredis1/data:/data \
-d redis redis-server /etc/redis/redis.conf \
--appendonly yes \
--requirepass qweasdzxc
sudo docker run \
--restart=always \
--net=host \
--name myredis2 \
-v /home/redis/myredis2/myredis.conf:/etc/redis/redis.conf \
-v /home/redis/myredis2/data:/data \
-d redis redis-server /etc/redis/redis.conf \
--appendonly yes \
--requirepass qweasdzxc
sudo docker run \
--restart=always \
--net=host \
--name myredis3 \
-v /home/redis/myredis3/myredis.conf:/etc/redis/redis.conf \
-v /home/redis/myredis3/data:/data \
-d redis redis-server /etc/redis/redis.conf \
--appendonly yes \
--requirepass qweasdzxc
Explanation: view the results.
2. Establish the master-slave relationship (run on each slave node):
slaveof 10.13.164.55 6379 # the master node's IP address and port
Step 3: Test
- Check the replication status on the master node:
info replication
Explanation: view the results.
- The replication info lists each slave node's IP address and port.
- Data written on the master can be read from the slaves, which shows the setup succeeded.
1.3.3 Principle of full synchronization
Explanation:
The first master-slave synchronization is a full synchronization. When a slave synchronizes for the first time, it sends a request to the master carrying its data version information. The master generates an `RDB` file of its existing data and sends it to the slave. Any newer writes arriving in the meantime are recorded as commands in the `repl_baklog` buffer and continuously forwarded to the slave.
Supplement: How does the master determine whether the slave is synchronizing data for the first time?
- Replication Id: replid for short, the marker of the data set. Instances with the same replid hold the same data set. Each master has a unique replid, and a slave inherits the replid of its master.
- offset: the replication offset, which grows as data is written to repl_baklog. When a slave synchronizes, it records its current offset.
Explanation:
- If the slave's replid differs from the master's, it is a first-time synchronization.
- If the slave's offset is smaller than the master's, the slave's data lags behind and needs to be updated.
Therefore, when a slave requests synchronization, it declares its replid and offset to the master, so the master can determine which data to send.
Synchronization process:
- The slave node requests incremental synchronization.
- The master node checks the replid, finds it inconsistent, and rejects incremental synchronization.
- The master generates an RDB of its complete memory data and sends it to the slave.
- The slave clears its local data and loads the master's RDB.
- The master records the commands executed during the RDB phase in the `repl_baklog` command buffer and continuously sends them to the slave.
- The slave executes the received commands and stays in sync with the master.
1.3.4 Incremental synchronization principle
Explanation:
The first master-slave synchronization is a full synchronization; if a slave restarts and resynchronizes afterwards, an incremental synchronization is performed.
Supplement:
The size of repl_baklog has an upper limit; when it is full, the oldest data is overwritten. If a slave is disconnected for too long and data it has not yet copied is overwritten, incremental synchronization from the log is impossible and a full synchronization must be performed again.
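The choice between full and incremental synchronization described in these two sections can be sketched as follows (a simplified model with invented names, not Redis's actual code):

```java
// Simplified model of the master's sync decision described above.
// Names and structure are illustrative, not Redis source code.
public class SyncDecision {
    public static String decide(String slaveReplid, long slaveOffset,
                                String masterReplid, long backlogMinOffset) {
        // different replid -> first synchronization -> full sync with an RDB
        if (!masterReplid.equals(slaveReplid)) return "FULL";
        // slave's offset already overwritten in repl_baklog -> full sync again
        if (slaveOffset < backlogMinOffset) return "FULL";
        // otherwise replay commands from repl_baklog starting at the offset
        return "INCREMENTAL";
    }

    public static void main(String[] args) {
        System.out.println(decide("abc", 0, "def", 0));     // FULL (first sync)
        System.out.println(decide("abc", 500, "abc", 100)); // INCREMENTAL
        System.out.println(decide("abc", 50, "abc", 100));  // FULL (backlog overwritten)
    }
}
```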
1.3.5 Summary
Full synchronization is expensive, so the Redis master-slave cluster can be optimized as follows:
- Redis cluster optimization
- Configure diskless replication in the master's configuration file (`repl-diskless-sync yes`) to avoid disk IO during full synchronization.
- Keep the memory usage of a single Redis node moderate to reduce the disk IO caused by RDB.
- Increase the size of `repl_baklog` appropriately, so that a slave can recover as soon as possible after downtime and full synchronization is avoided where possible.
- Limit the number of slave nodes per master; if there are too many slaves, use a master-slave-slave chain structure to reduce the master's pressure.
- Full synchronization and incremental synchronization
- Full synchronization: the master generates an RDB of its complete memory data and sends it to the slave; subsequent commands are recorded in `repl_baklog` and sent to the slave one by one.
Explanation: full synchronization is performed when a slave connects to the master for the first time, or when a slave has been disconnected so long that its offset in repl_baklog has been overwritten.
- Incremental synchronization: the slave submits its own offset to the master, and the master sends the commands after that offset in repl_baklog to the slave.
Explanation: incremental synchronization is performed when a disconnected slave recovers and its offset can still be found in repl_baklog.
Note:
Incremental synchronization can fail, depending on whether the needed region of `repl_baklog` has been overwritten.
Supplement:
After a slave node crashes and recovers, it can resynchronize data from the master. But if the master node crashes, it cannot recover by itself; the next section addresses this problem.
1.4 Redis Sentinel
Summary of notes:
- Overview:
- Meaning: monitors Redis instances and manages automatic failover.
- Status monitoring: subjective offline and objective offline.
- Master election rules: disconnection time, `slave-priority` weight, `offset`, and running id.
- Failover: promote a `slave` node to the new `master` and mark the failed node as a slave.
- Basic use case: import the `spring-boot-starter-data-redis` dependency, configure the sentinel master name and cluster nodes in the `yml` configuration file, and set the cluster read mode with a `LettuceClientConfigurationBuilderCustomizer` configuration class.
- Summary: see below for details.
1.4.1 Overview
1.4.1.1 Meaning
Redis' Sentinel mechanism (Sentinel) is a high-availability solution provided by Redis for monitoring and managing automatic failover of Redis instances.
The core of the sentinel mechanism is a set of independently running sentinel processes. They monitor the Redis master node and its slave nodes, and automatically promote a slave to the new master when the master fails, thereby achieving failover.
Sentinel's structure and functions include:
- Monitoring: Sentinel constantly checks whether the master and slaves are working as expected.
- Automatic failure recovery: if the master fails, Sentinel promotes a slave to master; when the failed instance recovers, it follows the new master.
- Notification: Sentinel acts as the service-discovery source for Redis clients; when the cluster fails over, it pushes the latest information to the clients.
1.4.1.2 Service status monitoring
Sentinel monitors the service status based on the heartbeat mechanism, and sends a ping command to each instance of the cluster every 1 second:
- Subjective offline: If a sentinel node finds that an instance does not respond within the specified time, the instance is considered to be subjectively offline.
- Objective offline: If more than the specified number (quorum) of sentinels think that the instance is subjectively offline, the instance will be objectively offline . The quorum value should preferably exceed half of the number of Sentinel instances
Explanation:
The quorum is set in the Sentinel configuration file, typically to more than half the number of Sentinel instances.
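A minimal sketch of the two offline states described above (a simplified model; the names are invented):

```java
// Sketch of Sentinel's subjective/objective offline states.
// Simplified model, not Sentinel's actual code.
public class OfflineState {
    // subjectively down: this sentinel got no reply within the timeout
    public static boolean subjectivelyDown(long lastReplyMs, long nowMs, long downAfterMs) {
        return nowMs - lastReplyMs > downAfterMs;
    }

    // objectively down: at least `quorum` sentinels report it subjectively down
    public static boolean objectivelyDown(int sdownVotes, int quorum) {
        return sdownVotes >= quorum;
    }

    public static void main(String[] args) {
        System.out.println(subjectivelyDown(0, 6000, 5000)); // no reply for 6s with a 5s limit -> true
        System.out.println(objectivelyDown(2, 2));           // 2 votes with quorum 2 -> true
    }
}
```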
1.4.1.3 Master election rules
If Sentinel finds that the Master node is faulty, Sentinel needs to select one of the slaves as the new master. The rules are as follows:
- First, exclude any slave node whose disconnection from the master exceeds the threshold (down-after-milliseconds * 10).
- Then compare the slave-priority values of the slave nodes: the smaller the value, the higher the priority; a value of 0 never participates in the election.
- If slave-priority is equal, compare the slave nodes' offset values: the larger the offset, the newer the data and the higher the priority.
- Finally, compare the slave nodes' running IDs: the smaller the ID, the higher the priority.
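These four rules amount to a filter plus an ordering, which can be sketched like this (illustrative names and structure, not Sentinel's actual code):

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Sketch of the master election rules above. Illustrative only.
public class MasterElection {
    record Slave(String runId, long disconnectedMs, int priority, long offset) {}

    public static Optional<Slave> elect(List<Slave> slaves, long downAfterMs) {
        return slaves.stream()
                // exclude slaves disconnected longer than down-after-milliseconds * 10
                .filter(s -> s.disconnectedMs() <= downAfterMs * 10)
                // priority 0 never participates in the election
                .filter(s -> s.priority() != 0)
                // smaller priority wins, then larger offset, then smaller run id
                .min(Comparator.comparingInt(Slave::priority)
                        .thenComparing(Comparator.comparingLong(Slave::offset).reversed())
                        .thenComparing(Slave::runId));
    }

    public static void main(String[] args) {
        List<Slave> slaves = List.of(
                new Slave("a", 1000, 1, 500),
                new Slave("b", 1000, 1, 900),    // same priority, newer data
                new Slave("c", 99999999L, 1, 999) // disconnected too long, excluded
        );
        System.out.println(elect(slaves, 5000).get().runId()); // "b"
    }
}
```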
1.4.1.4 Failover
When one of the slaves (for example, slave1) is selected as the new master, failover proceeds as follows:
- Sentinel sends the `slaveof no one` command to the candidate node slave1 to make it the master.
- Sentinel sends `slaveof 192.168.150.101 7002` to all other slaves so they become slaves of the new master and start synchronizing data from it.
- Finally, Sentinel marks the failed node as a slave; when it recovers, it automatically becomes a slave of the new master.
1.4.2 Building a sentinel cluster
The three Sentinel instances are as follows:
IP | PORT |
---|---|
10.13.164.55 | 27001 |
10.13.164.55 | 27002 |
10.13.164.55 | 27003 |
Step 1: Configure the environment
Explanation:
These Sentinel cluster nodes are installed with Docker.
1. Create files and directories
cd /home
mkdir redis
cd redis
mkdir /home/redis/mysentinel1
vim /home/redis/mysentinel1/sentinel.conf
// Create a sentinel.conf file in mysentinel2 and mysentinel3 as well (commands omitted)
mkdir /home/redis/mysentinel2
mkdir /home/redis/mysentinel3
Explanation: view the results.
The content of the `sentinel.conf` file is as follows:
port 27001 # note: use 27002 and 27003 in the other two sentinel.conf files
sentinel announce-ip 10.13.164.55
sentinel monitor mymaster 10.13.164.55 6379 2 # make sure this IP and port are correct
sentinel auth-pass mymaster qweasdzxc
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
sentinel parallel-syncs mymaster 1
Explanation:
- `port 27001`: the port this Redis Sentinel listens on.
- `sentinel announce-ip 10.13.164.55`: the IP address Sentinel announces to other nodes.
- `sentinel monitor mymaster 10.13.164.55 6379 2`: monitor the master named `mymaster` at 10.13.164.55:6379. The trailing `2` is the quorum: the number of Sentinels that must agree the master is unreachable before it is marked objectively offline and a failover starts.
- `sentinel auth-pass mymaster qweasdzxc`: the password `qweasdzxc` that Sentinel uses to authenticate to the master.
- `sentinel down-after-milliseconds mymaster 5000`: consider the master offline if it does not respond within 5000 milliseconds (5 seconds).
- `sentinel failover-timeout mymaster 60000`: failover timeout of 60000 milliseconds (60 seconds); if the failover does not complete within this time, it is considered failed.
- `sentinel parallel-syncs mymaster 1`: the number of slaves resynchronized simultaneously during failover; synchronizing one at a time avoids excessive resource load.
Step 2: Run the Docker service
Explanation:
Run the following commands on the host:
docker run --restart=always \
--net=host \
--name mysentinel1 \
-v /home/redis/mysentinel1/sentinel.conf:/sentinel.conf \
-d redis redis-sentinel /sentinel.conf
docker run --restart=always \
--net=host \
--name mysentinel2 \
-v /home/redis/mysentinel2/sentinel.conf:/sentinel.conf \
-d redis redis-sentinel /sentinel.conf
docker run --restart=always \
--net=host \
--name mysentinel3 \
-v /home/redis/mysentinel3/sentinel.conf:/sentinel.conf \
-d redis redis-sentinel /sentinel.conf
Note:
Each `sentinel.conf` configuration file must correspond to its own Sentinel node.
Step 3: Test
1. Stop the master node and check the Sentinel logs.
2. View the log of 7003:
3. View the log of 7002:
Explanation:
A new master has now been elected, consistent with the log messages showing node 6380 as the master.
1.4.3 Basic Use Cases
Explanation:
In a Redis master-slave cluster supervised by a Sentinel cluster, the nodes change after automatic failover, and the Redis client must detect the change and update its connection information in time. Spring's `RedisTemplate` is backed by Lettuce, which implements node discovery and automatic switching.
Step 1: Import dependencies
- Modify the `pom.xml` file
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
Step 2: Add configuration
1. Modify the `application.yaml` configuration file
logging:
  level:
    io.lettuce.core: debug
  pattern:
    dateformat: MM-dd HH:mm:ss:SSS
server:
  port: 8081
spring:
  redis:
    sentinel:
      master: mymaster # the master name
      nodes: # the redis-sentinel cluster nodes
        - 10.13.164.55:27001
        - 10.13.164.55:27002
        - 10.13.164.55:27003
    password: qweasdzxc
2. Add the `RedisConfig` configuration class
@Configuration
public class RedisConfig {
    @Bean
    LettuceClientConfigurationBuilderCustomizer getLettuceClientConfigurationBuilderCustomizer() {
        // Set the cluster read mode: read from replicas first, fall back to the master
        return clientConfigurationBuilder -> clientConfigurationBuilder.readFrom(ReadFrom.REPLICA_PREFERRED);
    }
}
Step 3: Test
1. Write the `HelloController` controller class
@RestController
public class HelloController {
    @Autowired
    private StringRedisTemplate redisTemplate;

    @GetMapping("/get/{key}")
    public String hi(@PathVariable String key) {
        return redisTemplate.opsForValue().get(key);
    }

    @GetMapping("/set/{key}/{value}")
    public String hi(@PathVariable String key, @PathVariable String value) {
        redisTemplate.opsForValue().set(key, value);
        return "success";
    }
}
2. Enter the console to request testing
3. View the Idea log
Explanation:
The log output is normal, so the test passed.
4. Test with the master node down
Explanation:
After the failed node recovers, it automatically rejoins as a slave of the new master, so the test passed.
1.4.4 Summary
- What are Sentinel's three functions?
- Monitoring
- Automatic failover
- Notification
- How does Sentinel determine whether a Redis instance is healthy?
- It sends a ping command every second; if there is no response within the configured time, the instance is considered subjectively offline.
- If a quorum of Sentinels consider the instance subjectively offline, it is marked objectively offline.
- What are the failover steps?
- First select a slave as the new master and have it execute slaveof no one.
- Then have all other nodes execute slaveof to the new master.
- Mark the faulty node as a slave; when it recovers, it executes slaveof to the new master.
Supplement:
Master-slave replication and Sentinel solve high availability and high-concurrency reads, but two problems remain: massive data storage and high-concurrency writes.
1.5 Redis sharded cluster
Summary of notes:
- Overview: dividing data into multiple shards distributed across different nodes achieves horizontal scaling and load balancing, increasing cluster capacity and performance.
- Hash slots:
- Meaning: store data in specified Redis slots in a customizable way.
- Note: if a key contains **"{ }" with at least 1 character inside, the part inside "{ }" is the effective part**.
- Cluster scaling: `add-node` to add nodes, `reshard` to allocate slots, `del-node` to delete nodes.
- Failover: `cluster failover` promotes a slave to master; the principle relies on the `offset`.
- Java access: import the `spring-boot-starter-data-redis` dependency to integrate Redis with `SpringBoot`, configure the cluster nodes in the `yml` configuration file, and set the cluster read mode with a `LettuceClientConfigurationBuilderCustomizer` configuration class.
1.5.1 Overview
Redis sharding cluster is a scheme that distributes data on multiple Redis nodes. By dividing data into multiple shards and distributing them to different nodes, horizontal expansion and load balancing of data can be achieved. Each node can process a part of the data independently, and the capacity and performance of the cluster can be dynamically adjusted by adding or removing nodes.
Explanation:
The cluster has multiple masters, each storing different data, and each master can have multiple slave nodes. Masters monitor each other's health via ping. A client can access any node in the cluster, and the request is eventually forwarded to the correct node.
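The forwarding idea can be sketched as a lookup from hash slot to owning node. The slot ranges below are illustrative (a typical even split for three masters); real clients learn the actual mapping from the cluster and follow MOVED redirects:

```java
import java.util.TreeMap;

// Sketch of routing a hash slot to the node that owns it.
// Slot ranges are illustrative; real clients learn them via CLUSTER SLOTS
// and follow MOVED redirects from the servers.
public class SlotRouter {
    // maps the first slot of each range to the node serving it
    private final TreeMap<Integer, String> rangeStartToNode = new TreeMap<>();

    public void assignRange(int firstSlot, String node) {
        rangeStartToNode.put(firstSlot, node);
    }

    public String nodeForSlot(int slot) {
        // the owner is the node whose range starts at the largest key <= slot
        return rangeStartToNode.floorEntry(slot).getValue();
    }

    public static void main(String[] args) {
        SlotRouter router = new SlotRouter();
        router.assignRange(0, "10.13.164.55:7001");     // slots 0-5460
        router.assignRange(5461, "10.13.164.55:7002");  // slots 5461-10922
        router.assignRange(10923, "10.13.164.55:7003"); // slots 10923-16383
        System.out.println(router.nodeForSlot(12182));  // 10.13.164.55:7003
    }
}
```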
1.5.2 Build a sharded cluster
The six Redis instances are as follows:
IP | PORT | Role |
---|---|---|
10.13.164.55 | 7001 | master |
10.13.164.55 | 7002 | master |
10.13.164.55 | 7003 | master |
10.13.164.55 | 7004 | slave |
10.13.164.55 | 7005 | slave |
10.13.164.55 | 7006 | slave |
Step 1: Configure the environment
Explanation:
These sharded cluster nodes are installed with Docker.
1. Create files and directories
cd /home
mkdir redis
cd redis
mkdir /home/redis/myredis1
touch /home/redis/myredis1/redis.conf
mkdir /home/redis/myredis1/data
// Create a redis.conf file and a data directory in each of myredis2 to myredis6 (commands omitted)
mkdir /home/redis/myredis2
……
mkdir /home/redis/myredis6
touch /home/redis/myredis6/redis.conf
mkdir /home/redis/myredis6/data
Explanation: view the results.
The content of the `redis.conf` file is as follows.
Note: each node's configuration file needs its own port and related settings.
# bind address
bind 0.0.0.0
# redis port; each node uses a different one, 7001 ~ 7006
port 7001
# redis access password
requirepass qweasdzxc
# password for accessing the master node
masterauth qweasdzxc
# disable protected mode
protected-mode no
# enable cluster mode
cluster-enabled yes
# cluster node configuration file
cluster-config-file nodes.conf
# node timeout
cluster-node-timeout 5000
# cluster node IP; in host mode this is the host machine's IP
cluster-announce-ip 10.13.164.55
# cluster node port; each node uses a different one, 7001 ~ 7006
cluster-announce-port 7001
cluster-announce-bus-port 17001
# enable appendonly backup mode
appendonly yes
# fsync every second
appendfsync everysec
# whether to fsync while the aof file is being rewritten
no-appendfsync-on-rewrite no
# rewrite again when the current aof file is 100% larger than after the last rewrite
auto-aof-rewrite-percentage 100
# minimum AOF file size before a rewrite, default 64mb
auto-aof-rewrite-min-size 64mb
# log level
# debug: prints a large amount of information, for development/testing
# verbose: much not-so-useful information, but less chaotic than debug
# notice: moderately verbose, suitable for production
# warning: only very important, critical warning messages
loglevel notice
# log file path
logfile "/data/redis.log"
Step 2: Run the container
Redis node 1
sudo docker run \
--name myredis1 \
-p 7001:7001 \
-p 17001:17001 \
-v /home/redis/myredis1/redis.conf:/etc/redis/redis.conf \
-v /home/redis/myredis1/data/:/data \
-d redis redis-server /etc/redis/redis.conf \
--appendonly yes \
--requirepass qweasdzxc
Redis node 2
sudo docker run \
--name myredis2 \
-p 7002:7002 \
-p 17002:17002 \
-v /home/redis/myredis2/redis.conf:/etc/redis/redis.conf \
-v /home/redis/myredis2/data/:/data \
-d redis redis-server /etc/redis/redis.conf \
--appendonly yes \
--requirepass qweasdzxc
Redis node 3
sudo docker run \
--name myredis3 \
-p 7003:7003 \
-p 17003:17003 \
-v /home/redis/myredis3/redis.conf:/etc/redis/redis.conf \
-v /home/redis/myredis3/data/:/data \
-d redis redis-server /etc/redis/redis.conf \
--appendonly yes \
--requirepass qweasdzxc
Redis node 4
sudo docker run \
--name myredis4 \
-p 7004:7004 \
-p 17004:17004 \
-v /home/redis/myredis4/redis.conf:/etc/redis/redis.conf \
-v /home/redis/myredis4/data/:/data \
-d redis redis-server /etc/redis/redis.conf \
--appendonly yes \
--requirepass qweasdzxc
Redis node 5
sudo docker run \
--name myredis5 \
-p 7005:7005 \
-p 17005:17005 \
-v /home/redis/myredis5/redis.conf:/etc/redis/redis.conf \
-v /home/redis/myredis5/data/:/data \
-d redis redis-server /etc/redis/redis.conf \
--appendonly yes \
--requirepass qweasdzxc
Redis node 6
sudo docker run \
--name myredis6 \
-p 7006:7006 \
-p 17006:17006 \
-v /home/redis/myredis6/redis.conf:/etc/redis/redis.conf \
-v /home/redis/myredis6/data/:/data \
-d redis redis-server /etc/redis/redis.conf \
--appendonly yes \
--requirepass qweasdzxc
Step 3: Create a cluster
redis-cli --cluster create --cluster-replicas 1 -h 10.13.164.55 -p 7001 -a qweasdzxc 10.13.164.55:7001 10.13.164.55:7002 10.13.164.55:7003 10.13.164.55:7004 10.13.164.55:7005 10.13.164.55:7006
Explanation:
- Connect a client to one of the cluster nodes and create the cluster.
- View the node status:
redis-cli -h 10.13.164.55 -p 7001 -a qweasdzxc cluster nodes
Supplement: parameter explanation
`--cluster-replicas` specifies how many slave nodes each master should have when creating the Redis sharded cluster. A value of `1` means one slave node is automatically created for each master.
1.5.3 Hash slots
Hash slots are a data sharding mechanism in Redis sharding clusters. It stores data distributedly on multiple nodes to achieve horizontal distribution and load balancing of data.
In a Redis sharded cluster, Redis Cluster divides the entire key space into a fixed number of hash slots (16384 slots). Each key is hashed to obtain a slot number, and the key-value pair is assigned to the node responsible for that slot.
In a Redis sharded cluster, the data key is not bound to the node, but to the slot . Redis will calculate the slot value based on the valid part of the key, in two situations:
- The key contains "{}", and "{}" contains at least 1 character. The part in "{}" is a valid part.
- The key does not contain "{}", the entire key is a valid part
Explanation:
If the key is num, the slot is computed from num; if it is {itcast}num, it is computed from itcast. The computation uses the CRC16 algorithm to obtain a hash value, which is then taken modulo 16384; the result is the slot number. To read data back, the hash of the key's effective part is computed the same way, the remainder modulo 16384 gives the slot, and the instance holding that slot is located.
Supplement:
To keep data of the same type on the same Redis instance, give those keys the same effective part, for example the prefix {typeId}.
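The calculation just described can be written out directly. The CRC16 variant used by Redis Cluster is CRC16-CCITT (XMODEM), and the effective-part rule is the "{ }" hash-tag rule above; this is a self-contained sketch:

```java
// Sketch of the hash-slot calculation described above:
// slot = CRC16(effective part of key) mod 16384.
public class HashSlot {
    // CRC16-CCITT (XMODEM): polynomial 0x1021, initial value 0x0000
    static int crc16(byte[] data) {
        int crc = 0;
        for (byte b : data) {
            crc ^= (b & 0xFF) << 8;
            for (int i = 0; i < 8; i++) {
                crc = ((crc & 0x8000) != 0) ? ((crc << 1) ^ 0x1021) : (crc << 1);
                crc &= 0xFFFF;
            }
        }
        return crc;
    }

    // the "{...}" hash-tag rule: if the key contains a non-empty "{...}",
    // only the part inside the braces is hashed
    static String effectivePart(String key) {
        int open = key.indexOf('{');
        if (open != -1) {
            int close = key.indexOf('}', open + 1);
            if (close > open + 1) return key.substring(open + 1, close);
        }
        return key;
    }

    public static int slot(String key) {
        return crc16(effectivePart(key).getBytes()) % 16384;
    }

    public static void main(String[] args) {
        // keys sharing a hash tag always land in the same slot
        System.out.println(slot("{itcast}num") == slot("{itcast}user")); // true
    }
}
```

This is why keys with the same {typeId} prefix end up on the same instance: only the tag inside the braces is hashed.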
1.5.4 Cluster scaling
- Add node
Step 1: Build Redis service
Explanation:
Similar to building the sharded cluster: first create a 7007 node and run it.
sudo docker run \
--name myredis7 \
-p 7007:7007 \
-p 17007:17007 \
-v /home/redis/myredis7/redis.conf:/etc/redis/redis.conf \
-v /home/redis/myredis7/data/:/data \
-d redis redis-server /etc/redis/redis.conf \
--appendonly yes \
--requirepass qweasdzxc
Step 2: Add nodes to the existing cluster
# format:
# add-node new_host:new_port existing_host:existing_port
# --cluster-slave
# --cluster-master-id <arg>
# example:
redis-cli -h 10.13.164.55 -p 7001 -a qweasdzxc --cluster add-node 10.13.164.55:7007 10.13.164.55:7001
Explanation: view the results.
- Check the slot assignment:
redis-cli -h 10.13.164.55 -p 7001 -a qweasdzxc cluster nodes
- The new master node has no slots assigned yet; slots must be allocated before it can be used.
- Assign slots
# format: reshard host:port
# --cluster-from <arg>
# --cluster-to <arg>
# --cluster-slots <arg>
# --cluster-yes
# --cluster-timeout <arg>
# --cluster-pipeline <arg>
# --cluster-replace
# example:
redis-cli -h 10.13.164.55 -p 7001 -a qweasdzxc --cluster reshard 10.13.164.55:7001
Explanation: view the results.
- Reassign slots from node 7001 to node 7007.
- Check the slot assignment:
redis-cli -h 10.13.164.55 -p 7001 -a qweasdzxc cluster nodes
- Delete node
Step 1: Transfer the slots back
redis-cli -h 10.13.164.55 -p 7001 -a qweasdzxc --cluster reshard 10.13.164.55:7001
Step 2: Delete nodes
# format: del-node host:port node_id
# example:
redis-cli -h 10.13.164.55 -p 7001 -a qweasdzxc --cluster del-node 10.13.164.55:7007 489417ac7de6be3997ba26911efa7fc95ce3be40
Explanation: view the results.
- Check the slot assignment:
redis-cli -h 10.13.164.55 -p 7001 -a qweasdzxc cluster nodes
- Node 7007 has now disappeared from the cluster.
1.5.5 Failover
- View master-slave switching
watch redis-cli -h 10.13.164.55 -p 7001 -a qweasdzxc cluster nodes
Explanation:
- Observe the master node: it has been replaced.
- Data migration
Step 1: Connect to a slave node
redis-cli -h 10.13.164.55 -p 7002 -a qweasdzxc
Step 2: Switch nodes
cluster failover
Explanation:
- As shown, the node that ran the command becomes the master again.
Supplement:
- The cluster failover command manually demotes a master in the cluster and switches to the slave node that executed the command, achieving imperceptible data migration.
- Manual failover supports three modes: default (the full process, steps 1~6 in the figure), force (skips the offset consistency check), and takeover (executes step 5 directly, ignoring data consistency, the master's status, and the other masters' opinions).
1.5.6 Basic use cases
Explanation:
In a Redis sharded cluster, the nodes change after automatic failover, and the Redis client must detect the change and update its connection information in time. Spring's `RedisTemplate` is backed by Lettuce, which implements node discovery and automatic switching.
Step 1: Import dependencies
- Modify the `pom.xml` file
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
Step 2: Add configuration
1. Modify the `application.yaml` configuration file
logging:
level:
io.lettuce.core: debug
pattern:
dateformat: MM-dd HH:mm:ss:SSS
server:
port: 8081
spring:
redis:
cluster:
nodes: # 指定分片集群的每一个节点信息
- 10.13.164.55:7001
- 10.13.164.55:7002
- 10.13.164.55:7003
- 10.13.164.55:7004
- 10.13.164.55:7005
- 10.13.164.55:7006
password: qweasdzxc
2. Add the `RedisConfig` configuration class
@Configuration
public class RedisConfig {
@Bean
LettuceClientConfigurationBuilderCustomizer getLettuceClientConfigurationBuilderCustomizer(){
// 设置集群的读取模式,先读取从结点,若失败则再读取主节点
return clientConfigurationBuilder -> clientConfigurationBuilder.readFrom(ReadFrom.REPLICA_PREFERRED);
}
}
Step 3: Test
1. Write the `HelloController` presentation-layer class
@RestController
public class HelloController {
@Autowired
private StringRedisTemplate redisTemplate;
@GetMapping("/get/{key}")
public String hi(@PathVariable String key) {
return redisTemplate.opsForValue().get(key);
}
@GetMapping("/set/{key}/{value}")
public String hi(@PathVariable String key, @PathVariable String value) {
redisTemplate.opsForValue().set(key, value);
return "success";
}
}
2. Enter the console to request testing
3. View the Idea log
Explanation:
The log shows that reads and writes are separated: writes go to the master and reads go to the replicas.
1.6 Summary
1. Comparison of the advantages and disadvantages of Redis master-slave clusters and Redis sharded clusters:
- Redis master-slave cluster
  - Advantages:
    - Data replication: the master copies data to the slaves, providing backup and redundancy and improving data reliability and availability.
    - Read/write separation: the master handles writes while the slaves handle reads, improving the system's concurrency and read performance.
    - Fault tolerance: when the master fails, a slave can be automatically promoted to the new master, achieving high availability.
  - Disadvantages:
    - Write operations depend on the master, so the master's performance and stability strongly affect the whole cluster.
    - Read/write separation can cause stale reads, because slaves are not necessarily synchronized with the master in real time.
- Redis sharded cluster
  - Advantages:
    - Data sharding: data is distributed across multiple nodes, increasing storage capacity and throughput.
    - Parallel processing: each node independently handles its own shard, improving the system's concurrency.
    - Horizontal scaling: the cluster grows by adding nodes, supporting larger-scale data storage and processing.
  - Disadvantages:
    - Node failure: when a node fails, the data it owns becomes inaccessible, which may mean data loss or unavailability.
    - Data balance: data is not necessarily distributed evenly across a sharded cluster, so some nodes may carry a higher load; data balance and consistency must be considered.
    - Cross-node transactions: transactions that span multiple shards add complexity in data consistency and concurrency control.
- Summary:
  - Redis distributed caching offers high performance, high availability, and rich functionality, and suits most scenarios. Weigh its trade-offs against the specific business requirements and data characteristics, and configure and manage it accordingly.
  - Sharded clusters suit scenarios with large data volumes and scattered reads and writes, providing horizontal scaling and high throughput; pay attention to data balance, node failures, and cross-node transactions.
2. Traditional caching strategy:
Description:
In the traditional strategy, a request first reaches Tomcat, which queries Redis and, on a miss, queries the database. At the billion-record scale this causes problems:
- Every request must pass through Tomcat, so Tomcat's performance becomes the bottleneck of the whole system.
- When the Redis cache fails, the load hits the database directly.
Explanation:
So how do we solve cache failure and the Tomcat bottleneck? See the next section.
2. Multi-level cache
Summary of notes:
- Please review each section
- Summary: Please check for details
2.1 Overview
Summary of notes:
- Overview: Redis's multi-level cache, which consists of multiple levels of cache to improve system performance and scalability.
- Workflow: when data is accessed, the first-level, second-level, and third-level caches are queried in sequence, and finally Tomcat is queried.
Redis's multi-level cache is a common cache architecture that consists of multiple levels of cache to improve system performance and scalability. Each cache level has different characteristics and purposes
Explanation:
The Nginx instances used as caches are the business Nginx servers, which should be deployed as a cluster, with a dedicated Nginx acting as the reverse proxy in front of them.
By using multi-level cache, the performance and scalability of the system can be greatly improved, the number of accesses to the back-end data storage system can be reduced , the system load can be reduced, and a better user experience can be provided. At the same time, multi-level cache can also be flexibly configured and managed according to the access mode and importance of data to meet different business needs.
Principle process:
- When the application needs data, it first queries the first-level cache (L1). If the data is in the L1 cache, it is returned directly without accessing the back-end data store.
- If the data is not in the L1 cache, the second-level cache (L2) is queried; on a hit the data is returned to the application and the L1 cache is updated.
- If the data is not in the L2 cache either, the third-level cache (L3) is queried; on a hit the data is returned to the application and the L1 and L2 caches are updated.
- If the data exists at no cache level, the application fetches it from the back-end data store and writes it into every cache level for subsequent accesses.
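The lookup-and-backfill flow above can be sketched in plain Java. This is only an illustration of the principle, not the actual Nginx/Redis/Tomcat stack; the class and method names are hypothetical:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Supplier;

// Sketch of a multi-level lookup: try each level in order; on a hit,
// copy the value back into the faster levels; on a total miss, load
// from the "database" and back-fill every level.
public class MultiLevelCache {
    private final List<Map<String, String>> levels = new ArrayList<>();

    public MultiLevelCache(int levelCount) {
        for (int i = 0; i < levelCount; i++) levels.add(new HashMap<>());
    }

    public Map<String, String> level(int i) { return levels.get(i); }

    public String get(String key, Supplier<String> loader) {
        for (int i = 0; i < levels.size(); i++) {
            String val = levels.get(i).get(key);
            if (val != null) {
                // promote the value into all faster levels
                for (int j = 0; j < i; j++) levels.get(j).put(key, val);
                return val;
            }
        }
        // miss at every level: load from the back-end store and back-fill
        String val = loader.get();
        for (Map<String, String> level : levels) level.put(key, val);
        return val;
    }
}
```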
2.2 JVM process cache
Summary of notes:
- Overview: Caffeine is a high-performance local caching library with the best hit rate
- Basic usage: create a cache with the builder, then `get`/`put`
- Cache eviction policy: bound the cache by capacity with `maximumSize`, or expire entries by time with `expireAfterWrite`
2.2.1 Overview
Caffeine is a high-performance local cache library developed based on Java8 that provides near-optimal hit rate. Currently, Spring's internal cache uses Caffeine. GitHub address: https://github.com/ben-manes/caffeine
2.2.2 Case-Basic Usage
- Create
Test
class
@Test
void testBasicOps() {
// 1.创建缓存对象
Cache<String, String> cache = Caffeine.newBuilder().build();
// 2.存数据
cache.put("gf", "迪丽热巴");
// 3.取数据
// 3.1不存在则返回null
String gf = cache.getIfPresent("gf");
System.out.println("gf = " + gf);
// 3.2不存在则去数据库查询
String defaultGF = cache.get("defaultGF", key -> {
// 这里可以去数据库根据 key查询value
return "柳岩";
});
System.out.println("defaultGF = " + defaultGF);
}
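Caffeine's load-on-miss call `cache.get(key, loader)` behaves much like the JDK's `Map.computeIfAbsent`. A minimal JDK-only sketch of the same pattern (this is an analogy, not Caffeine's implementation; the class name is hypothetical):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Load-on-miss with the JDK alone: the loader runs only when the key
// is absent, mirroring Caffeine's cache.get(key, loader) behavior.
public class LoadOnMiss {
    private final ConcurrentHashMap<String, String> map = new ConcurrentHashMap<>();

    // Like Caffeine's getIfPresent: returns null on a miss
    public String getIfPresent(String key) { return map.get(key); }

    // Like Caffeine's get(key, loader): loads and caches on a miss
    public String get(String key, Function<String, String> loader) {
        return map.computeIfAbsent(key, loader);
    }
}
```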
2.2.3 Cache Eviction Strategy
Caffeine is a high-performance caching library based on Java. It provides a variety of cache eviction strategies to control cache size and memory usage.
Note that eviction does not happen instantly: an expired entry may linger for some time (for example 10 or 20 seconds) before it is actually removed. Here are some common cache eviction strategies supported by Caffeine:
- Based on capacity
// 创建缓存对象
Cache<String, String> cache = Caffeine.newBuilder()
.maximumSize(1) // 设置缓存大小上限为 1
.build();
Explanation:
Sets an upper limit on the number of cached entries.
- time based
// 创建缓存对象
Cache<String, String> cache = Caffeine.newBuilder()
.expireAfterWrite(Duration.ofSeconds(10)) // 设置缓存有效期为 10 秒,从最后一次写入开始计时
.build();
Explanation:
Sets the cache validity period.
- Reference-based
Explanation:
Store cache entries as soft or weak references and let GC reclaim the cached data. Performance is poor; not recommended.
Supplement:
By default, when a cache element expires, Caffeine will not automatically clean and evict it immediately. Instead, the eviction of invalid data is completed after a read or write operation, or during idle time
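To see what a capacity bound like `maximumSize` does, here is a minimal LRU cache built on the JDK's `LinkedHashMap`. Unlike Caffeine, it evicts eagerly on write, so it only illustrates the policy, not Caffeine's lazy-eviction behavior noted above (the class name is hypothetical):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A capacity-bounded LRU cache: LinkedHashMap in access order evicts
// the least-recently-used entry once size() exceeds maximumSize.
public class BoundedCache<K, V> extends LinkedHashMap<K, V> {
    private final int maximumSize;

    public BoundedCache(int maximumSize) {
        super(16, 0.75f, true); // true = access order (LRU)
        this.maximumSize = maximumSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // evict the eldest entry whenever the capacity is exceeded
        return size() > maximumSize;
    }
}
```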
2.2.4 Basic Use Cases
Step 1: Import dependencies
- Modify the `pom.xml` file
<dependency>
<groupId>com.github.ben-manes.caffeine</groupId>
<artifactId>caffeine</artifactId>
</dependency>
Step 2: Create configuration file
- Create the `CaffeineConfig` configuration class
@Configuration
public class CaffeineConfig {
@Bean
public Cache<Long, Item> itemCache() {
return Caffeine.newBuilder()
.initialCapacity(100) // 设置缓存的初始容量为100个条目
.maximumSize(10000) // 设置缓存的最大容量为10000个条目
.build();
}
@Bean
public Cache<Long, ItemStock> StockCache() {
return Caffeine.newBuilder()
.initialCapacity(100) // 设置缓存的初始容量为100个条目
.maximumSize(10000) // 设置缓存的最大容量为10000个条目
.build();
}
}
Step 3: Implement the query
- Modify the controller-layer `ItemController` class
@GetMapping("/{id}")
public Item findById(@PathVariable("id") Long id) {
return itemCache.get(id, key -> itemService.query()
.ne("status", 3).eq("id", key)
.one());
}
@GetMapping("/stock/{id}")
public ItemStock findStockById(@PathVariable("id") Long id) {
return StockCache.get(id, key -> stockService.getById(key));
}
2.3 Introduction to Lua syntax
Summary of notes:
- Overview: Lua is a lightweight and compact scripting language designed to be embedded in applications to provide flexible extensions and customization capabilities for applications.
- The syntax is similar to Python; see each section for details
2.3.1 Overview
Lua is a lightweight and compact scripting language , written in standard C language and open in source code form. It is designed to be embedded in applications to provide flexible expansion and customization functions for applications. Official website: https://www.lua.org/
2.3.2 Basic use cases
Step 1: Create Lua script
touch hello.lua
Step 2: Add some content
print("Hello World!")
Step 3: Run
lua hello.lua
2.3.3 Data types
| Data type | Description |
|---|---|
| nil | The simplest type; only the value nil belongs to it, representing an invalid value (equivalent to false in conditional expressions). |
| boolean | Contains the two values false and true. |
| number | Represents a real double-precision floating-point number. |
| string | A string, delimited by a pair of double or single quotes. |
| function | A function written in C or Lua. |
| table | A Lua table is effectively an "associative array" whose indices can be numbers, strings, or tables. In Lua, tables are created with a "constructor expression"; the simplest is {}, which creates an empty table. |
Explanation:
- View a variable's data type
print(type("hello,world"))
2.3.4 Variables
-- 声明字符串
local str = 'hello'
-- 字符串拼接可以使用 ..
local str2 = 'hello' .. 'world'
-- 声明数字
local num = 21
-- 声明布尔类型
local flag = true
-- 声明数组 key为索引的 table
local arr = {'java', 'python', 'lua'}
-- 声明table,类似java的map
local map = {name='Jack', age=21}
Explanation:
- Access variables
-- 访问数组,lua数组的角标从1开始
print(arr[1])
-- 访问table
print(map['name'])
print(map.name)
2.3.5 Loop
- Traverse array
-- 声明数组 key为索引的 table
local arr = {'java', 'python', 'lua'}
-- 遍历数组
for index,value in ipairs(arr) do
print(index, value)
end
- Traverse table
-- 声明map,也就是table
local map = {name='Jack', age=21}
-- 遍历table
for key,value in pairs(map) do
print(key, value)
end
2.3.6 Function
- define function
function 函数名( argument1, argument2..., argumentn)
-- 函数体
return 返回值
end
-- 例如
function printArr(arr)
for index, value in ipairs(arr) do
print(value)
end
end
2.3.7 Conditional control
- Conditional control
if(布尔表达式)
then
--[ 布尔表达式为 true 时执行该语句块 --]
else
--[ 布尔表达式为 false 时执行该语句块 --]
end
2.4 OpenResty Quick Start
Summary of notes:
- Overview: OpenResty is a high-performance web platform based on Nginx , which is used to easily build dynamic web applications, web services and dynamic gateways that can handle ultra-high concurrency and high scalability
2.4.1 Overview
OpenResty® is a high-performance web platform based on Nginx, which is used to easily build dynamic web applications, web services, and dynamic gateways that can handle ultra-high concurrency and high scalability .
OpenResty has the complete functionality of Nginx, extends it with the Lua language, integrates a large number of well-crafted Lua libraries and third-party modules, and lets you use Lua to customize business logic and write custom libraries.
Official website: https://openresty.org/cn/
2.4.2 Installation
Explanation:
This tutorial installs OpenResty through Docker.
Step 1: Create a directory
cd /home
mkdir openresty
cd /home/openresty
mkdir conf
mkdir lua
Step 2: Install OpenResty
docker run -id --name openresty -p 8080:8080 sevenyuan/openresty
Step 3: Mount configuration
1. Copy the OpenResty configuration
docker cp openresty:/usr/local/openresty/nginx/conf/nginx.conf /home/openresty/conf
docker cp openresty:/usr/local/openresty/lualib /home/openresty
Description: View results
2. Modify the `/home/openresty/conf/nginx.conf` configuration
#user nobody;
worker_processes 1;
error_log logs/error.log;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
server {
listen 8080;
server_name localhost;
location / {
root html;
index index.html index.htm;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
}
}
Step 4: Reinstall
1. Delete the OpenResty container
docker rm -f openresty
2. Reinstall OpenResty
docker run -id -p 8080:8080 \
--name openresty \
-v /home/openresty/conf/nginx.conf:/usr/local/openresty/nginx/conf/nginx.conf \
-v /home/openresty/lua:/usr/local/openresty/nginx/lua \
-v /home/openresty/lualib/:/usr/local/openresty/lualib \
-v /etc/localtime:/etc/localtime \
-d sevenyuan/openresty
Explanation:
Do not add the `--restart always` option, otherwise startup will fail.
Step 5: Access OpenResty
Explanation:
Being able to open the OpenResty default page in the browser indicates that the installation succeeded.
2.5 Query local cache
Summary of notes:
- Overview: implement a local caching solution through an Nginx cluster
- Nginx reverse proxying of requests: how to use `upstream`
- Nginx dynamic handling of request parameters
2.5.1 Overview
Explanation:
When the client browser sends a request, the NGINX reverse proxy will forward the request to the NGINX local cache.
2.5.2 Basic Use Cases
Step 1: Modify the NGINX reverse proxy
Explanation:
Let the nginx proxy forward requests to the OpenResty business cluster to handle the business.
1. Modify Nginx's reverse-proxy path to the business cluster
upstream nginx-cluster{
# 定义多个请求代理的服务器
server 10.13.167.28:8080;
}
server {
listen 8080;
server_name localhost;
# 当nginx拦截到任一api开头的请求时,会自动的代理到upstream后端服务器模块中
location /api {
proxy_pass http://nginx-cluster;
}
}
2. Restart the Nginx reverse proxy
nginx.exe -s stop
start nginx
Step 2: Modify the NGINX local cache
- Modify OpenResty's `nginx.conf` configuration file
#user nobody;
worker_processes 1;
error_log logs/error.log;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
# 添加对OpenResty的Lua模块的加载
#lua 模块
lua_package_path "/home/openresty/lualib/?.lua;;";
#c模块
lua_package_cpath "/home/openresty/lualib/?.so;;";
server {
listen 8080;
server_name localhost;
# 添加对/api/item这个路径的监听
location /api/item {
# 默认的响应类型
default_type application/json;
# 响应结果有lua/item.lua文件来决定
content_by_lua_file lua/item.lua;
}
location / {
root html;
index index.html index.htm;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
}
}
Explanation:
After the configuration file is modified, OpenResty refreshes it automatically, so there is no need to restart.
Step 3: Add script execution file
1. Create the `item.lua` file
vim /home/openresty/lua/item.lua
2. Add the file content as follows
ngx.say('{"id":10001,"name":"SALSA AIR","title":"RIMOWA 21寸托运箱拉杆箱 SALSA AIR系列果绿色 820.70.36.4","price":27900,"image":"https://m.360buyimg.com/mobilecms/s720x720_jfs/t6934/364/1195375010/84676/e9f2c55f/597ece38N0ddcbc77.jpg!q70.jpg.webp","category":"拉杆箱","brand":"RIMOWA","spec":"","status":1,"createTime":"2019-04-30T16:00:00.000+00:00","updateTime":"2019-04-30T16:00:00.000+00:00","stock":2999,"sold":31290}')
Step 4: Restart OpenResty
docker restart openresty
Step 5: View results
1. View the browser’s response data
Explanation:
The data response succeeded.
2. View the browser front-end page
Explanation:
The price has changed; the Nginx proxy experiment succeeded.
2.5.3 Request parameter processing
How do we get the parameters from the request URL in OpenResty? OpenResty provides APIs for obtaining the different kinds of request parameters:
2.5.4 Basic Example - Improvements
Explanation:
Get the parameters in the path placeholder
Step 1: Edit the OpenResty configuration file
- Modify OpenResty's `nginx.conf` file
location ~ /api/item/(\d+) {
# 默认的响应类型
default_type application/json;
# 响应结果有lua/item.lua文件来决定
content_by_lua_file lua/item.lua;
}
Step 2: Write the corresponding Lua script
- Modify the `item.lua` file
local id = ngx.var[1]
ngx.say('{"id":' .. id .. ',"name":"SALSA AIR","title":"RIMOWA 21寸托运箱拉杆箱 SALSA AIR系列果绿色 820.70.36.4","price":27900,"image":"https://m.360buyimg.com/mobilecms/s720x720_jfs/t6934/364/1195375010/84676/e9f2c55f/597ece38N0ddcbc77.jpg!q70.jpg.webp","category":"拉杆箱","brand":"RIMOWA","spec":"","status":1,"createTime":"2019-04-30T16:00:00.000+00:00","updateTime":"2019-04-30T16:00:00.000+00:00","stock":2999,"sold":31290}')
Explanation:
`..` concatenates strings.
Step 3: Demo
Explanation:
When the id in the request changes, the returned data changes with it.
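The capture group in `location ~ /api/item/(\d+)` works like an ordinary regex capture; `ngx.var[1]` reads capture group 1. This hypothetical Java snippet shows the same extraction:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Extracts the numeric id from a URI, mirroring the nginx location
// regex ~ /api/item/(\d+) whose group 1 becomes ngx.var[1] in Lua.
public class PathId {
    private static final Pattern ITEM = Pattern.compile("/api/item/(\\d+)");

    public static String extractId(String uri) {
        Matcher m = ITEM.matcher(uri);
        return m.find() ? m.group(1) : null; // null when the URI does not match
    }
}
```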
2.6 Query Tomcat
Summary of notes:
- Overview: Encapsulate Lua script HTTP requests and implement Tomcat cluster query
- Object serialization and deserialization using CJSON
2.6.1 Overview
Explanation:
When OpenResty receives a request, for now it does not query the Redis cluster directly; it queries Tomcat to obtain the data.
2.6.2 Sending HTTP requests
How do we send requests inside nginx? OpenResty provides an internal API for sending HTTP subrequests:
local resp = ngx.location.capture("/path",{
    method = ngx.HTTP_GET, -- 请求方式
    args = {a=1, b=2},     -- get方式传参数
    body = "c=3&d=4"       -- post方式传参数
})
Explanation:
Use the `ngx.location.capture` API to send the request.
The response content returned includes:
- resp.status: response status code
- resp.header: response header, which is a table
- resp.body: response body, which is the response data
Notice:
- The path here is the path and does not include IP and port. This request will be monitored and processed by the server inside nginx.
location /path {
    # 这里是windows电脑的ip和Java服务端口,需要确保windows防火墙处于关闭状态
    proxy_pass http://192.168.150.1:8081;
}
- But we want this request to be sent to the Tomcat server, so we also need to write a server to reverse proxy this path.
2.6.3 Encapsulating HTTP request tools
Step 1: Create the `common.lua` file
Create the common.lua file under the /home/openresty/lualib directory so that OpenResty's nginx.conf module can import it.
Step 2: Write the `common.lua` file
1. Encapsulate the function that sends the HTTP request
-- 函数,发送http请求,并解析响应
local function read_http(path, params)
local resp = ngx.location.capture(path,{
method = ngx.HTTP_GET,
args = params,
})
if not resp then
-- 记录错误信息,返回404
        ngx.log(ngx.ERR, "http not found, path: ", path , ", args: ", params)
ngx.exit(404)
end
return resp.body
end
2. Export the method
-- 将方法导出
local _M = {
read_http = read_http
}
return _M
2.6.4 CJSON tool class
OpenResty provides a cjson module to handle JSON serialization and deserialization. Official address: https://github.com/openresty/lua-cjson/
How to use:
- Import cjson module
local cjson = require ("cjson")
- Serialization
local obj = {
name = 'jack',
age = 21
}
local json = cjson.encode(obj)
- Deserialization
local json = '{"name": "jack", "age": 21}'
-- 反序列化
local obj = cjson.decode(json);
print(obj.name)
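For intuition about what `cjson.encode` produces for a flat object, here is a toy encoder in Java. It is an illustration only, handling just flat string/number maps; real code should use cjson in Lua or a full JSON library such as Jackson in Java:

```java
import java.util.Map;

// A toy JSON encoder for flat objects: strings are quoted, numbers are
// emitted as-is, matching what cjson.encode does for {name='jack', age=21}.
public class TinyJson {
    public static String encode(Map<String, Object> obj) {
        StringBuilder sb = new StringBuilder("{");
        boolean first = true;
        for (Map.Entry<String, Object> e : obj.entrySet()) {
            if (!first) sb.append(",");
            first = false;
            sb.append('"').append(e.getKey()).append("\":");
            Object v = e.getValue();
            if (v instanceof String) sb.append('"').append(v).append('"');
            else sb.append(v);
        }
        return sb.append("}").toString();
    }
}
```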
2.6.5 Basic use cases
Premise:
The functions encapsulated earlier must already exist in `common.lua`.
Step 1: Add the Tomcat proxy address to OpenResty's `nginx.conf`
http {
……
server {
listen 8080;
server_name localhost;
# 这里是配置Tomcat服务的电脑的ip和Java服务端口,需要确保其防火墙处于关闭状态
location /item{
proxy_pass http://10.13.122.51:8081;
}
……
}
Step 2: Modify the `item.lua` file to implement the real business logic
-- 导入common函数库
local common = require('common')
local read_http = common.read_http
-- 导入cjson库
local cjson = require('cjson')
-- 获取路径参数
local id = ngx.var[1]
-- 根据id查询商品
local itemJSON = read_http("/item/".. id, nil)
-- 根据id查询商品库存
local itemStockJSON = read_http("/item/stock/".. id, nil)
-- JSON转化为lua的table
local item = cjson.decode(itemJSON)
local stock = cjson.decode(itemStockJSON)
-- 组合数据
item.stock = stock.stock
item.sold = stock.sold
-- 把item序列化为json 返回结果
ngx.say(cjson.encode(item))
Step 3: Demo
1. Check the background log
Explanation:
The back-end query succeeded.
2. View the data returned by the browser
Explanation:
The front-end data was returned successfully.
2.7 Tomcat cluster load balancing
Summary of notes:
- Overview: modify Nginx's `upstream` configuration to achieve load balancing
2.7.1 Overview
Explanation:
In real development Tomcat is not necessarily deployed stand-alone; usually a Tomcat cluster is deployed, so a multi-instance Tomcat deployment is tested here.
2.7.2 Basic use cases
Step 1: Configure OpenResty
1. Modify OpenResty's `nginx.conf` file
http{
    ……
    # tomcat集群配置
    upstream tomcat-cluster{
        hash $request_uri;
        server 10.13.122.51:8081;
        server 10.13.122.51:8082;
    }
    server{
        ……
        location /item {
            proxy_pass http://tomcat-cluster;
        }
        ……
    }
}
Notice:
Keep the file formatting consistent when writing the configuration. It is recommended to type it by hand rather than paste it, otherwise strange errors may be reported!
Explanation:
- Nginx's `hash $request_uri;` load-balancing algorithm is used here so that the same URI always lands on the same Tomcat instance, avoiding duplicate cache entries across the different JVM process caches.
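The effect of `hash $request_uri` can be sketched as hash-modulo routing: the same URI always maps to the same backend index, so each Tomcat's process cache holds its own disjoint subset of items. nginx's actual hash function differs; this hypothetical snippet only illustrates the stability property:

```java
// URI-hash routing: a given request URI deterministically selects one
// backend, so repeated requests for the same item hit the same Tomcat.
public class UriHashRouting {
    public static int pick(String requestUri, int serverCount) {
        // Math.floorMod guards against negative hashCode values
        return Math.floorMod(requestUri.hashCode(), serverCount);
    }
}
```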
2. Restart OpenResty
docker restart openresty
Explanation:
This reloads OpenResty's `nginx.conf` configuration.
Step 2: Start the Tomcat cluster
- Run multiple Tomcat instances from Idea
Step 3: Demo
- View the Idea log
Explanation:
Check the browser; the access succeeds.
2.8 Redis warm-up
Summary of notes:
- Overview: Implement early loading of data in Redis when the project starts
- Basic use case: create a handler class that implements the `InitializingBean` interface, override the `afterPropertiesSet` method, and perform the cache preheating inside it
2.8.1 Overview
Explanation:
When the service has just started, there is no cache in Redis. If all product data were cached at first-query time, it could put heavy pressure on the database. Cache preheating is therefore performed at startup.
Cache preheating :
In actual development, we can use big data to count the hot data accessed by users, and query these hot data in advance and save them to Redis when the project is started.
2.8.2 Basic use cases
Premise:
A password-protected Redis service must already exist; see the 搭建Redis log entry for details.
Step 1: Import dependencies
- Import the Spring Boot Redis integration dependency
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
Step 2: Add configuration file
1. Modify the `application.yml` configuration file
spring:
redis:
host: 10.13.167.28
port: 6379
password: qweasdzxc
2. Add the Redis preheating mechanism
Explanation:
This project has little data, so all of it is read out and put into Redis.
@Configuration
public class RedisHandler implements InitializingBean {
@Autowired
StringRedisTemplate stringRedisTemplate;
@Autowired
IItemService itemService;
@Autowired
IItemStockService iItemStockService;
private static final ObjectMapper MAPPER = new ObjectMapper();
@Override
/**
* 初始化缓存
* 此方法会在项目启动时,本类加载完成,和@Autowired加载完成之后执行该方法
* @throws Exception 异常
*/
public void afterPropertiesSet() throws Exception {
// 1.获得Item数据
List<Item> itemList = itemService.list();
for (Item item : itemList) {
// 2.设置Key
String key = "item:id:" + item.getId();
// 3.将数据序列化
String jsonItem = MAPPER.writeValueAsString(item);
stringRedisTemplate.opsForValue().set(key, jsonItem);
}
// 4.获取stock数据
List<ItemStock> stockList = iItemStockService.list();
for (ItemStock itemStock : stockList) {
// 5.设置Key
            String key = "item:stock:id:" + itemStock.getId();
// 6.将数据序列化
String jsonItem = MAPPER.writeValueAsString(itemStock);
stringRedisTemplate.opsForValue().set(key, jsonItem);
}
}
}
Step 3: Demo
Explanation:
The log shows the project queried the database at startup.
Explanation:
A Redis GUI tool shows that the data is already stored in Redis.
2.9 Query Redis cache
Summary of notes:
- Overview: Encapsulate the Lua script Redis query function to implement Redis query data query
2.9.1 Overview
Explanation:
Tomcat has already preloaded the data into Redis. Now modify the project logic so that OpenResty queries Redis first and falls back to Tomcat.
2.9.2 Encapsulating the Redis query tool
Step 1: Create/rewrite the `common.lua` file
Create or rewrite the common.lua file under the /home/openresty/lualib directory so that OpenResty's nginx.conf module can import it.
Step 2: Write the `common.lua` file
1. Import the Redis module and initialize the Redis object
-- 导入redis
local redis = require('resty.redis')
-- 初始化redis
local red = redis:new()
red:set_timeouts(1000, 1000, 1000)
2. Encapsulate the function that releases the Redis connection
-- 关闭redis连接的工具方法,其实是放入连接池
local function close_redis(red)
local pool_max_idle_time = 10000 -- 连接的空闲时间,单位是毫秒
local pool_size = 100 --连接池大小
local ok, err = red:set_keepalive(pool_max_idle_time, pool_size)
if not ok then
ngx.log(ngx.ERR, "放入redis连接池失败: ", err)
end
end
3. Encapsulate the function that queries Redis by key
-- 查询redis的方法 ip和port是redis地址,key是查询的key
local function read_redis(ip, port, password, key)
-- 获取一个连接
local ok, err = red:connect(ip, port)
if not ok then
ngx.log(ngx.ERR, "连接redis失败 : ", err)
return nil
end
-- 验证密码
if password then
local res, err = red:auth(password)
if not res then
ngx.log(ngx.ERR, "Redis 密码认证失败: ", err)
close_redis(red)
return nil
end
end
-- 查询redis
local resp, err = red:get(key)
-- 查询失败处理
if not resp then
ngx.log(ngx.ERR, "查询Redis失败: ", err, ", key = " , key)
end
--得到的数据为空处理
if resp == ngx.null then
resp = nil
ngx.log(ngx.ERR, "查询Redis数据为空, key = ", key)
end
close_redis(red)
return resp
end
4. Export the method
-- 将方法导出
local _M = {
read_http = read_http, -- 此方法为封装HTTP请求的工具导出
read_redis = read_redis
}
return _M
Notice:
The connection logic in this `common.lua` file only supports a single-node Redis; it cannot connect to a Redis master-slave or sharded cluster. To connect to a Redis cluster, see the CSDN blog post "lua连接redis集群" by CurryYoung11.
Supplement: the complete `common.lua` code
-- 导入redis
local redis = require('resty.redis')
-- 初始化redis
local red = redis:new()
red:set_timeouts(1000, 1000, 1000)
-- 关闭redis连接的工具方法,其实是放入连接池
local function close_redis(red)
    local pool_max_idle_time = 10000 -- 连接的空闲时间,单位是毫秒
    local pool_size = 100 --连接池大小
    local ok, err = red:set_keepalive(pool_max_idle_time, pool_size)
    if not ok then
        ngx.log(ngx.ERR, "放入redis连接池失败: ", err)
    end
end
-- 查询redis的方法 ip和port是redis地址,key是查询的key
local function read_redis(ip, port, password, key)
    -- 获取一个连接
    local ok, err = red:connect(ip, port)
    if not ok then
        ngx.log(ngx.ERR, "连接redis失败 : ", err)
        return nil
    end
    -- 验证密码
    if password then
        local res, err = red:auth(password)
        if not res then
            ngx.log(ngx.ERR, "Redis 密码认证失败: ", err)
            close_redis(red)
            return nil
        end
    end
    -- 查询redis
    local resp, err = red:get(key)
    -- 查询失败处理
    if not resp then
        ngx.log(ngx.ERR, "查询Redis失败: ", err, ", key = " , key)
    end
    --得到的数据为空处理
    if resp == ngx.null then
        resp = nil
        ngx.log(ngx.ERR, "查询Redis数据为空, key = ", key)
    end
    close_redis(red)
    return resp
end
-- 封装函数,发送http请求,并解析响应
local function read_http(path, params)
    local resp = ngx.location.capture(path,{
        method = ngx.HTTP_GET,
        args = params,
    })
    if not resp then
        -- 记录错误信息,返回404
        ngx.log(ngx.ERR, "http查询失败, path: ", path , ", args: ", params)
        ngx.exit(404)
    end
    return resp.body
end
-- 将方法导出
local _M = {
    read_http = read_http,
    read_redis = read_redis
}
return _M
2.9.3 Basic Use Cases
Step 1: Modify the `item.lua` file to implement the real business logic
1. Import the `common` function library
-- 导入common函数库
local common = require('common')
local read_redis = common.read_redis
2. Encapsulate query function
-- 封装查询函数
function read_data(key, path, params)
    -- 查询redis缓存
local val = read_redis("10.13.164.55", 7001, "qweasdzxc", key)
-- 判断查询结果
if not val then
ngx.log(ngx.ERR, "redis查询失败,尝试查询http, key: ", key)
-- redis查询失败,去查询http
val = read_http(path, params)
end
-- 返回数据
return val
end
3. Modify the business of product and library query
-- 获取路径参数
local id = ngx.var[1]
-- 根据Id查询商品
local itemJSON = read_data("item:id:" .. id, "/item/" .. id,nil)
-- 根据Id查询商品库存
local stockJson = read_data("item:stock:id:" .. id, "/item/stock/" .. id,nil)
Supplement: the complete `item.lua` code
-- 导入common函数库
local common = require('common')
local read_http = common.read_http
local read_redis = common.read_redis
-- 导入cjson库
local cjson = require('cjson')
-- 封装查询函数
function read_data(key, path, params)
    -- 查询redis缓存
    local val = read_redis("10.13.167.28", 6379, "qweasdzxc", key)
    -- 判断查询结果
    if not val then
        ngx.log(ngx.ERR, "redis查询失败,尝试查询http, key: ", key)
        -- redis查询失败,去查询http
        val = read_http(path, params)
    end
    -- 返回数据
    return val
end
-- 获取路径参数
local id = ngx.var[1]
-- 查询商品信息
local itemJSON = read_data("item:id:" .. id, "/item/" .. id, nil)
-- 查询库存信息
local stockJSON = read_data("item:stock:id:" .. id, "/item/stock/" .. id, nil)
-- JSON转化为lua的table
local item = cjson.decode(itemJSON)
local stock = cjson.decode(stockJSON)
-- 组合数据
item.stock = stock.stock
item.sold = stock.sold
-- 把item序列化为json 返回结果
ngx.say(cjson.encode(item))
Step 2: Restart OpenResty
docker restart openresty
Explanation:
Restarting the service reloads the nginx.conf configuration.
Step 3: Demo
1. View Idea
Explanation:
Since Redis was preheated earlier, stop the Tomcat service now.
2. View browser
Explanation:
Although the Tomcat service is stopped, the data lives in Redis, and the OpenResty cluster queries Redis first, so the data is still displayed normally.
2.10 Nginx local cache
Summary of notes:
Overview: Encapsulate Lua script Nginx query function to implement Nginx local cache data query
2.10.1 Overview
Explanation:
When a client makes a request, OpenResty queries its local cache first, then Redis; only if the Redis query fails is Tomcat queried. This completes the final level of the multi-level cache.
2.10.2 Local Cache API
OpenResty provides the shared dict feature for Nginx, which shares data among nginx's multiple worker processes and implements a caching function.
Basic use case:
- Enable shared dictionary
# 共享字典,也就是本地缓存,名称叫做:item_cache,大小150m
lua_shared_dict item_cache 150m;
Explanation:
Add this to OpenResty's `nginx.conf` configuration.
- Manipulate shared dictionaries
-- 获取本地缓存对象
local item_cache = ngx.shared.item_cache
-- 存储, 指定key、value、过期时间,单位s,默认为0代表永不过期
item_cache:set('key', 'value', 1000)
-- 读取
local val = item_cache:get('key')
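The shared-dict `set(key, value, expire)`/`get(key)` semantics can be mimicked with a small TTL map. A hypothetical Java sketch, where a ttl of 0 means "never expire" as noted above:

```java
import java.util.concurrent.ConcurrentHashMap;

// A minimal TTL cache: set stores a value with an expiry timestamp,
// get returns null (and removes the entry) once it has gone stale.
public class TtlCache {
    private static final class Entry {
        final String value; final long expiresAt;
        Entry(String value, long expiresAt) { this.value = value; this.expiresAt = expiresAt; }
    }
    private final ConcurrentHashMap<String, Entry> map = new ConcurrentHashMap<>();

    public void set(String key, String value, long ttlSeconds) {
        // ttl 0 means never expire, matching the shared-dict default
        long expiresAt = ttlSeconds == 0 ? Long.MAX_VALUE
                : System.currentTimeMillis() + ttlSeconds * 1000;
        map.put(key, new Entry(value, expiresAt));
    }

    public String get(String key) {
        Entry e = map.get(key);
        if (e == null) return null;
        if (System.currentTimeMillis() >= e.expiresAt) { map.remove(key); return null; }
        return e.value;
    }
}
```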
2.10.3 Basic use cases
Premise:
The shared dictionary must be enabled in OpenResty's configuration.
Step 1: Modify the query function
1. Modify the `read_data` function in `item.lua` so that it checks the local cache first, then the Redis cache, and finally Tomcat
-- import the shared dictionary used as the local cache
local item_cache = ngx.shared.item_cache
-- wrap the query logic in a function
function read_data(key, expire, path, params)
    -- first, query the local cache
    local val = item_cache:get(key)
    if not val then
        ngx.log(ngx.ERR, "local cache miss, trying Redis, key: ", key)
        -- next, query redis
        val = read_redis("10.13.167.28", 6379, "qweasdzxc", key)
        -- check the query result
        if not val then
            ngx.log(ngx.ERR, "redis query failed, trying http, key: ", key)
            -- finally, redis failed, fall back to http
            val = read_http(path, params)
        end
    end
    -- on success, write the data into the local cache
    item_cache:set(key, val, expire)
    -- return the data
    return val
end
2. Modify the calls in the item.lua file to pass a cache expiry time
-- query item info, cached for 1800 seconds
local itemJSON = read_data("item:id:" .. id, 1800, "/item/" .. id, nil)
-- query stock info, cached for 60 seconds
local stockJSON = read_data("item:stock:id:" .. id, 60, "/item/stock/" .. id, nil)
Supplement: complete item.lua code
-- import the common function library
local common = require('common')
local read_http = common.read_http
local read_redis = common.read_redis
-- import the cjson library
local cjson = require('cjson')
-- import the shared dictionary used as the local cache
local item_cache = ngx.shared.item_cache
-- wrap the query logic in a function
function read_data(key, expire, path, params)
    -- query the local cache
    local val = item_cache:get(key)
    if not val then
        ngx.log(ngx.ERR, "local cache miss, trying Redis, key: ", key)
        -- query redis
        val = read_redis("10.13.167.28", 6379, "qweasdzxc", key)
        -- check the query result
        if not val then
            ngx.log(ngx.ERR, "redis query failed, trying http, key: ", key)
            -- redis failed, fall back to http
            val = read_http(path, params)
        end
    end
    -- on success, write the data into the local cache
    item_cache:set(key, val, expire)
    -- return the data
    return val
end
-- get the path parameter
local id = ngx.var[1]
-- query item info
local itemJSON = read_data("item:id:" .. id, 1800, "/item/" .. id, nil)
-- query stock info
local stockJSON = read_data("item:stock:id:" .. id, 60, "/item/stock/" .. id, nil)
-- convert the JSON into lua tables
local item = cjson.decode(itemJSON)
local stock = cjson.decode(stockJSON)
-- combine the data
item.stock = stock.stock
item.sold = stock.sold
-- serialize item to JSON and return the result
ngx.say(cjson.encode(item))
Step 2: Demonstration
1. Refresh to warm the Nginx local cache
Explanation:
On the user's first request the Nginx local cache misses, so Redis is queried.
2. Stop the Redis service
docker stop myredis
3. Query the data in the browser
Explanation:
With the Redis service stopped, the response still succeeds, showing that the Nginx local cache has taken effect.
2.11 Cache Synchronization
Summary of notes:
- Overview: Synchronously update cached data shared between multiple nodes
- Canal: parses the database's incremental logs (binlog) and provides incremental data subscription & consumption; by monitoring these logs it synchronizes caches with the database and triggers the corresponding processing
2.11.1 Overview
Cache synchronization refers to the process of sharing cached data and maintaining consistency among multiple nodes in a distributed system. When the data in the cache changes, these changes need to be synchronized to the caches of other nodes to ensure that the data obtained by all nodes is the latest.
The methods of cache data synchronization:
- Set a validity period: give the cache a TTL; after it expires the entry is deleted automatically and reloaded on the next query
- Advantages: simple and convenient
- Disadvantages: poor timeliness; the cache may be inconsistent before it expires
- Scenario: businesses with a low update frequency and low timeliness requirements
- Synchronous double write: modify the cache directly while modifying the database
- Advantages: strong timeliness; strong consistency between cache and database
- Disadvantages: code intrusion and high coupling
- Scenario: cached data with high consistency and timeliness requirements
- Asynchronous notification: an event notification is sent when the database is modified, and the relevant services update the cache after receiving the notification
- Advantages: low coupling; multiple cache services can be notified at once
- Disadvantages: average timeliness; there may be intermediate inconsistencies
- Scenario: average timeliness requirements, with multiple services that need to be synchronized
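The second option, synchronous double write, can be sketched in a few lines of Java. This is a minimal illustration with in-memory maps standing in for MySQL and Redis (the class name DoubleWriteSketch and both maps are invented for the example); in a real project the cache write would be a Redis call made in the same code path as the database update:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of synchronous double write: every database update also
// updates the cache in the same call path, so the two stay consistent.
public class DoubleWriteSketch {
    static final Map<Long, String> db = new ConcurrentHashMap<>();    // stands in for MySQL
    static final Map<Long, String> cache = new ConcurrentHashMap<>(); // stands in for Redis

    public static void updateItem(long id, String value) {
        db.put(id, value);     // 1. write the database
        cache.put(id, value);  // 2. write the cache in the same operation
    }

    public static String readItem(long id) {
        // reads hit the cache first and fall back to the database
        String v = cache.get(id);
        return v != null ? v : db.get(id);
    }

    public static void main(String[] args) {
        updateItem(10001L, "item-v2");
        System.out.println(readItem(10001L)); // prints item-v2
    }
}
```

The price of this consistency is the coupling noted above: every write path in the business code must know about the cache.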
Asynchronous notification implements cache synchronization
1. Asynchronous notification based on MQ
Explanation:
After the product service completes the modification of the data, it only needs to send a message to MQ. The cache service listens to MQ messages and then completes updates to the cache
2. Canal-based notifications
Explanation:
After the product service completes the product modification, the business ends directly without any code intrusion. Canal monitors MySQL changes. When changes are found, the cache service is immediately notified. The cache service receives the canal notification and updates the cache.
2.11.2 Canal Introduction
Overview
Canal [kə'næl], meaning waterway/pipeline/ditch, is an open-source project under Alibaba developed in Java. Based on incremental database log parsing, it provides incremental data subscription & consumption. GitHub address: https://github.com/alibaba/canal
Canal is implemented on top of MySQL's master-slave replication. The principle of MySQL master-slave replication is as follows:
- The MySQL master writes data changes to the binary log (binlog); the recorded entries are called binary log events
- The MySQL slave copies the master's binary log events to its relay log
- The MySQL slave replays the events in the relay log, applying the data changes to its own data
Explanation:
Realize master-slave synchronization based on the binary log generated by MySQL
Canal disguises itself as a MySQL slave node and listens for the master's binary log changes. It then forwards the captured change information to the Canal client, which completes the synchronization of other data stores.
Explanation:
With Canal, synchronization with other databases can also be achieved.
2.11.3 Install and configure Canal
Step 1: Configure MySQL master and slave
1. Modify my.cnf and enable the binlog
vim /home/mysql/conf/my.cnf
2.Add the following content
log-bin=/home/mysql/mysql-bin # set the binlog file path and base name, here mysql-bin
binlog-do-db=heima # record binary log events only for the heima database
Supplement: complete my.cnf content
[mysqld]
skip-name-resolve
character_set_server=utf8
datadir=/home/mysql
server-id=1000
log-bin=/home/mysql/mysql-bin # set the binlog file path and base name, here mysql-bin
binlog-do-db=heima # record binary log events only for the heima database
3. Set user permissions
3.1 Add the canal user and grant permissions
create user canal@'%' IDENTIFIED by 'canal'; # create the canal user with password canal
GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT, SUPER ON *.* TO canal@'%'; # grant privileges
FLUSH PRIVILEGES; # reload privileges
3.2 Restart the MySQL container
docker restart mysql
3.3 View the main database binary log
show master status;
Explanation: view the result
Position is the offset of the synchronized data, similar to the offset in Redis replication; it is recorded in order to implement master-slave synchronization.
Step 2: Configure the network
1. Create a network
docker network create heima
Explanation:
Create a network and put MySQL and Canal into the same Docker network
2. Add MySQL to the network
docker network connect heima mysql
Step 3: Install Canal
1. Pull the image
docker pull canal/canal-server:v1.1.5
2. Run the Canal container
docker run -p 11111:11111 --name canal \
-e canal.destinations=heima \
-e canal.instance.master.address=mysql:3306 \
-e canal.instance.dbUsername=canal \
-e canal.instance.dbPassword=canal \
-e canal.instance.connectionCharset=UTF-8 \
-e canal.instance.tsdb.enable=true \
-e canal.instance.gtidon=false \
-e canal.instance.filter.regex=heima\\..* \
--network heima \
-d canal/canal-server:v1.1.5
Notice:
Remember to change the corresponding account and connection password
Explanation:
When running the Canal container, join the heima network.
Supplement: parameter meanings
- -p 11111:11111: canal's default listening port
- -e canal.instance.master.address=mysql:3306: database address and port; if you do not know the mysql container's address, check it with docker inspect <container id>
- -e canal.instance.dbUsername=canal: database username
- -e canal.instance.dbPassword=canal: database password
- -e canal.instance.filter.regex=: regex for the table names to monitor
Supplement: table name regex rules
- Canal matches tables with Perl regular expressions; separate multiple regexes with commas (,), and the escape character requires a double backslash (\\), for example:
- All tables: .* or .*\\..*
- All tables under the canal schema: canal\\..*
- Tables under canal whose names start with canal: canal\\.canal.*
- A single table under the canal schema: canal.test1
- Combine multiple rules with commas: canal\\..*,mysql.test1,mysql.test2
Step 4: Demonstrate
1. Check Canal's status
docker logs canal
Explanation:
- Indicates that Canal started successfully
2. View the heima instance logs
docker exec -it canal bash # enter the container
tail -f /home/admin/canal-server/logs/heima/heima.log
Explanation: view the result
Supplement:
- If the log output reports an error
2023-07-07 16:22:16.085 [MultiStageCoprocessor-other-heima-0] WARN com.taobao.tddl.dbsync.binlog.LogDecoder - Skipping unrecognized binlog event Unknown from: mysql-bin.000005:2262
- The current MySQL version does not match the Canal version; align the two versions.
2.11.4 Basic use cases
Explanation:
Configure Canal so that after data changes in MySQL, the Redis cache and JVM cache are updated automatically.
Prerequisite:
Canal has been installed and configured.
Step 1: Import dependencies
- Modify the pom.xml file
<!--canal-->
<dependency>
<groupId>top.javatool</groupId>
<artifactId>canal-spring-boot-starter</artifactId>
<version>1.2.1-RELEASE</version>
</dependency>
Step 2: Write configuration
- Modify the application.yml file
canal:
  destination: heima # canal instance name; must match the destination set when running canal-server
  server: 10.13.164.55:11111 # canal server address
Step 3: Write entity class
- Modify the Item entity class
@Data
@TableName("tb_item")
public class Item {
    @TableId(type = IdType.AUTO)
    @Id // for canal, marks the table's id field
    private Long id; // item id
    @Column(name = "name") // for canal, marks a column whose name differs from the property; set here for demonstration
    private String name; // item name
    private String title; // item title
    private Long price; // price (in cents)
    private String image; // item image
    private String category; // category name
    private String brand; // brand name
    private String spec; // specification
    private Integer status; // item status: 1 - normal, 2 - delisted
    private Date createTime; // creation time
    private Date updateTime; // update time
    @TableField(exist = false)
    @Transient // for canal, marks a field that is not a table column
    private Integer stock;
    @TableField(exist = false)
    @Transient
    private Integer sold;
}
Explanation:
Canal pushes the modified row data to the canal-client, and the canal-client starter we imported encapsulates each row into an Item entity. To do this it needs the mapping between table columns and entity properties, which is declared with several JPA annotations.
Step 4: Write the listener
- Add an ItemHandler class that implements the EntryHandler<Item> interface and overrides the insert, update, and delete methods
@CanalTable("tb_item") // the table to listen to
@Component // let Spring manage the listener
public class ItemHandler implements EntryHandler<Item> {
    @Autowired
    RedisHandler redisHandler;
    @Autowired
    private StringRedisTemplate stringRedisTemplate;
    @Autowired
    private Cache<Long, Item> itemCache;
    private static final ObjectMapper MAPPER = new ObjectMapper();

    /**
     * An item insert was detected
     *
     * @param item the item
     */
    @Override
    public void insert(Item item) {
        // write the data to the JVM process cache
        itemCache.put(item.getId(), item);
        // write the data to redis
        saveItem(item);
    }

    /**
     * An item update was detected
     *
     * @param before the item before the change
     * @param after  the item after the change
     */
    @Override
    public void update(Item before, Item after) {
        // write the data to the JVM process cache
        itemCache.put(after.getId(), after);
        // write the data to redis
        saveItem(after);
    }

    /**
     * An item delete was detected
     *
     * @param item the item
     */
    @Override
    public void delete(Item item) {
        // remove the data from the JVM process cache
        itemCache.invalidate(item.getId());
        // remove the data from redis
        deleteItemById(item.getId());
    }

    private void saveItem(Item item) {
        try {
            String json = MAPPER.writeValueAsString(item);
            stringRedisTemplate.opsForValue().set("item:id:" + item.getId(), json);
        } catch (JsonProcessingException e) {
            throw new RuntimeException(e);
        }
    }

    private void deleteItemById(Long id) {
        stringRedisTemplate.delete("item:id:" + id);
    }
}
Step 5: Demonstrate
1. Check the IDEA log
Explanation:
The current project now receives Canal's notifications of MySQL changes.
2. After modifying the database information, check that the browser data has been changed
Explanation:
When the data changes appear, the JVM cache and the Redis data have both been refreshed; verify it yourself if in doubt.
2.12 Summary
Explanation:
The local project now runs a single OpenResty node. To run multiple nodes, remember to configure the local cache on each node and set up load balancing with an Nginx reverse proxy.
Based on the study of Redis6, the following questions are raised:
- If the OpenResty cache fails, how is it handled? Implement proactive updates for sensitive data
- How is single-node Redis downtime handled? Use the Lua script to access a Redis cluster
- Are there other alternatives to multi-level caching? None found yet
3. Best Practices
3.1 Redis key-value design
Summary of notes: see the summary of each subsection
3.1.1 Elegant Key structure
Summary of notes:
- Key best practices:
- Fixed format: [Business Name]:[Data Name]:[id]
- Short enough: no more than 44 bytes
- Does not contain special characters
- Best practices for Value:
- Reasonably split data and reject BigKey
- Choose the appropriate data structure
- The number of entries in the Hash structure should not exceed 1000
- Set a reasonable timeout
- Appropriate data types, such as Hash structure, etc.
Although the Key of Redis can be customized, it is best to follow the following best practices:
- Follow the basic format: [business name]:[data name]:[id]
- No more than 44 bytes in length
- Does not contain special characters
Explanation: for example
Advantages:
- Highly readable
- Avoids key conflicts
- Easy to manage
- Saves memory: keys are strings whose underlying encodings include int, embstr, and raw; embstr is used below 44 bytes and stores the string in one contiguous allocation, so the memory footprint is smaller
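The naming convention above can be enforced with a tiny helper. A sketch (the KeyFormat class and redisKey method are invented for illustration): it joins the segments with colons and rejects keys over the 44-byte limit recommended above.

```java
import java.nio.charset.StandardCharsets;

// Build keys in the [business]:[data]:[id] format and reject keys
// longer than 44 bytes, per the best practice above.
public class KeyFormat {
    public static String redisKey(String business, String data, Object id) {
        String key = business + ":" + data + ":" + id;
        if (key.getBytes(StandardCharsets.UTF_8).length > 44) {
            throw new IllegalArgumentException("key longer than 44 bytes: " + key);
        }
        return key;
    }

    public static void main(String[] args) {
        System.out.println(redisKey("login", "user", 10)); // prints login:user:10
    }
}
```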
Supplement:
- When the key's value exceeds 44 bytes, the raw encoding is used automatically; it uses non-contiguous memory, so more memory is occupied.
Supplement:
A stored piece of data often occupies more bytes in memory than the value itself, because Redis also stores meta information at the bottom layer.
3.1.2 Reject BigKey
Definition:
In Redis, a BigKey (big key) is a key-value pair that occupies an abnormally large amount of space. When a key-value pair exceeds a chosen threshold (for example, 10 KB for a string value), it is treated as a BigKey.
- The amount of data in the Key itself is too large : a String type Key with a value of 5 MB.
- Too many members in Key : A ZSET type Key has 10,000 members.
- The data volume of the members in the Key is too large: a Hash type Key has only 1,000 members, but the total size of the Value (value) of these members is 100 MB.
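The rules of thumb above can be written down as a simple check. A sketch (the BigKeyCheck class and both thresholds are taken from the examples in the text, not from any Redis setting):

```java
// Classify a key as a BigKey using the rule-of-thumb thresholds above.
public class BigKeyCheck {
    static final long MAX_VALUE_BYTES = 10 * 1024; // 10 KB for a string value
    static final long MAX_MEMBERS = 10_000;        // member count for collections

    public static boolean isBigKey(long valueBytes, long memberCount) {
        return valueBytes > MAX_VALUE_BYTES || memberCount > MAX_MEMBERS;
    }

    public static void main(String[] args) {
        System.out.println(isBigKey(5L * 1024 * 1024, 1)); // 5 MB string -> true
        System.out.println(isBigKey(100, 1));              // small key -> false
    }
}
```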
Dangers of BigKey:
- Network congestion: when reading a BigKey, even a small QPS can saturate the bandwidth, slowing down the Redis instance and even the physical machine it runs on
- Data skew: the instance holding the BigKey uses far more memory than the others, so the shard's memory resources cannot be balanced
- Redis blocking: operations on a hash, list, zset, etc. with many elements take time and block the main thread
- CPU pressure: serializing and deserializing a BigKey makes CPU usage soar, affecting the Redis instance and other local applications
Discovering BigKeys:
- redis-cli --bigkeys: the --bigkeys option of redis-cli traverses and analyzes all keys, returning overall key statistics and the top big key of each data type
- scan: write your own program that scans all keys with scan and judges their size with commands such as strlen and hlen (MEMORY USAGE is not recommended here)
- Third-party tools ✔️: use tools such as Redis-Rdb-Tools to analyze an RDB snapshot file and get a comprehensive view of memory usage
- Network monitoring: a custom tool that monitors Redis's network traffic and alerts proactively when a warning threshold is exceeded
Deleting BigKeys:
- Redis 3.0 and below: for collection types, traverse the BigKey's elements, delete the sub-elements one by one, and finally delete the BigKey itself
- Redis 4.0 and later: Redis 4.0 added an asynchronous deletion command: unlink
3.1.3 Appropriate data types
Explanation:
Choose a suitable data structure for storage; the right structure occupies less space at the bottom layer.
Explanation:
The most serious problem now is that a Hash with too many entries becomes a BigKey. How can this be solved?
Explanation:
The String type is simple and brute-force, but does little to optimize memory.
Explanation:
When splitting the key, let each Hash hold 100 entries; since the entry count never exceeds 500, the data stays in the Hash's compact encoding, reducing memory usage.
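The splitting rule described above (100 entries per Hash) amounts to deriving the bucket key and field from the id. A sketch (the HashBucket class and the id: key prefix are invented for the example):

```java
public class HashBucket {
    // each bucket holds 100 entries, keeping every Hash under the
    // compact-encoding threshold (500 entries in the text's example)
    static final int BUCKET_SIZE = 100;

    public static String bucketKey(long id) {
        return "id:" + (id / BUCKET_SIZE); // which Hash the entry lives in
    }

    public static long field(long id) {
        return id % BUCKET_SIZE;           // field inside that Hash
    }

    public static void main(String[] args) {
        System.out.println(bucketKey(1234) + " / " + field(1234)); // id:12 / 34
    }
}
```

In other words, entry 1234 is stored as HSET id:12 34 value, and every Hash stays small.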
3.2 Batch processing optimization
Summary of notes:
- Overview: When the amount of data to be transferred is too large, a batch processing solution can be used to reduce network transmission time and improve business execution time.
- Batch processing plan:
- Native M operation
- Pipeline batch processing
- Note: Pipeline's multiple commands are not atomic. It is not recommended to carry too many commands at one time during batch processing
3.2.1 Overview
Explanation:
When N commands are sent one by one, each command pays a network round-trip delay. The execution time inside Redis is tiny by comparison (on the order of microseconds), so the total response time is dominated by network transmission.
Explanation:
When N commands are sent in one batch, the network delay is paid only once. The commands still execute quickly inside Redis, so the total response time drops dramatically.
3.2.2 MSET
Redis provides many commands such as Mxxx, which can insert data in batches, for example:
- mset
- hmset
Code example: Use mset to insert 100,000 pieces of data in batches
@Test
void testMxx() {
String[] arr = new String[2000];
int j;
for (int i = 1; i <= 100000; i++) {
j = (i % 1000) << 1;
arr[j] = "test:key_" + i;
arr[j + 1] = "value_" + i;
if (j == 0) {
jedis.mset(arr);
}
}
}
Explanation:
Shifting left by one bit multiplies by 2, so (i % 1000) << 1 maps each i to an even index. The 2000-element array is filled with alternating key/value entries, and mset is issued once every 1000 pairs (when j wraps back to 0).
Notice:
Don't carry too many commands in a single batch, otherwise the one request occupies too much bandwidth and causes network congestion.
3.2.3 Pipeline
Although MSET can perform batch processing, it can only operate on some data types. Therefore, if you need batch processing of complex data types, it is recommended to use the Pipeline function.
@Test
void testPipeline() {
    // create a pipeline
    Pipeline pipeline = jedis.pipelined();
    for (int i = 1; i <= 100000; i++) {
        // queue the command into the pipeline
        pipeline.set("test:key_" + i, "value_" + i);
        if (i % 1000 == 0) {
            // every 1000 queued commands, send and execute the batch
            pipeline.sync();
        }
    }
}
Explanation:
Any command can be added to the pipeline. The queued commands are sent to Redis in one batch and executed in order. This takes a little longer than MSET, because when the pipelined commands arrive they join Redis's command queue and may be interleaved with other clients' commands, delaying their execution.
3.2.4 Batch processing optimization under a cluster
Explanation:
Batch processing such as MSET or Pipeline carries multiple commands in one request. If Redis runs as a cluster, all keys of one batch command must fall in the same slot, otherwise execution fails.
Code example:
@SpringBootTest
public class MultipleTest {
    @Autowired
    private StringRedisTemplate stringRedisTemplate;

    @Test
    void testMsetInCluster() {
        Map<String, String> map = new HashMap<>();
        map.put("name", "yueyue");
        map.put("age", "18");
        stringRedisTemplate.opsForValue().multiSet(map);
    }
}
Explanation:
When batch commands are used, the Spring framework's Redis batch operations detect whether the target is a cluster and, if so, group the keys by slot before sending them.
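Whether several keys land in one slot is decided by CRC16(key) mod 16384, and when a key contains a {hash tag}, only the substring inside the braces is hashed. The sketch below mirrors Redis's published slot algorithm (the ClusterSlot class name is invented); verify the values against CLUSTER KEYSLOT on your own cluster:

```java
import java.nio.charset.StandardCharsets;

// Compute a key's cluster slot: CRC16-CCITT (XMODEM) of the key
// (or of its {hash tag}, if one is present) modulo 16384.
public class ClusterSlot {
    static int crc16(byte[] bytes) {
        int crc = 0;
        for (byte b : bytes) {
            crc ^= (b & 0xFF) << 8;
            for (int i = 0; i < 8; i++) {
                crc = ((crc & 0x8000) != 0) ? ((crc << 1) ^ 0x1021) & 0xFFFF
                                            : (crc << 1) & 0xFFFF;
            }
        }
        return crc;
    }

    public static int slot(String key) {
        int open = key.indexOf('{');
        if (open >= 0) {
            int close = key.indexOf('}', open + 1);
            if (close > open + 1) { // non-empty tag: hash only the tag
                key = key.substring(open + 1, close);
            }
        }
        return crc16(key.getBytes(StandardCharsets.UTF_8)) % 16384;
    }

    public static void main(String[] args) {
        // keys sharing a hash tag always map to the same slot,
        // so they can be batched together with MSET or Pipeline
        System.out.println(slot("{user:1}:name") == slot("{user:1}:age")); // true
    }
}
```

This is why giving related keys a common hash tag is the usual way to make cluster batch operations succeed.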
3.3 Server-side optimization
Summary of notes:
- Persistence configuration: reserve enough memory space and do not deploy together with CPU-intensive applications
- Slow query: configure the slow query threshold and the log capacity limit; operate the slow query log with commands for listing entries, querying the log length, and so on
- Commands and security configuration: disable commands such as keys *, set a Redis password, enable the firewall
3.3.1 Persistence configuration
Although Redis persistence can ensure data security, it also brings a lot of additional overhead. Therefore, please follow the following suggestions for persistence:
- Redis instances used for caching should try not to enable persistence.
- It is recommended to turn off the RDB persistence function and use AOF persistence.
- Use scripts to regularly create RDB on the slave node to achieve data backup.
- Set a reasonable rewrite threshold to avoid frequent bgrewrite
- Configure no-appendfsync-on-rewrite = yes to prohibit AOF during rewrite to avoid blocking caused by AOF.
Deployment recommendations:
- The physical machine of the Redis instance must reserve enough memory to cope with fork and rewrite.
- The memory limit of a single Redis instance should not be too large, such as 4G or 8G. It can speed up the speed of fork, reduce the pressure of master-slave synchronization and data migration
- Do not deploy with CPU-intensive applications
- Do not deploy with high disk load applications. For example: database, message queue
3.3.2 Slow query
Slow query: the slow query log records any command whose execution time, measured by the Redis server before and after execution, exceeds a given threshold.
Explanation:
Because Redis executes commands on a single thread, a command whose execution exceeds the threshold also makes every command queued behind it wait.
View the slow query log list:
- slowlog len: Query the slow query log length
- slowlog get [n]: Read n slow query logs
- slowlog reset: Clear the slow query list
Supplement:
Set the slow query threshold:
- slowlog-log-slower-than : Slow query threshold, in microseconds. The default is 10000, 1000 is recommended
Explanation:
Executing a single command normally takes on the order of ten microseconds.
Slow queries will be put into the slow query log. The length of the log has an upper limit , which can be specified by configuration:
- slowlog-max-len : The length of the slow query log (essentially a queue). The default is 128, 1000 is recommended
Explanation:
You can enlarge the slow query log to make it easier to retrieve slow queries later.
Supplement: both configurations can be modified at runtime with the config set command
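Both recommendations can also be written into redis.conf so they survive restarts (a sketch using the values recommended above):

```
# log commands slower than 1000 microseconds (1 ms)
slowlog-log-slower-than 1000
# keep up to 1000 entries in the slow query log
slowlog-max-len 1000
```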
3.3.3 Commands and security configuration
By default Redis binds to 0.0.0.0:6379, which exposes the service to the public network; if Redis does no authentication, this is a serious security hole.
How to reproduce the vulnerability: https://cloud.tencent.com/developer/article/1039000
The core causes of the vulnerability are as follows:
- Redis has no password set
- Redis's config set command can dynamically rewrite the Redis configuration
- Redis is started with root account privileges
To avoid such vulnerabilities, here are some suggestions:
- Redis must set a password
- Prohibit the following commands in production: keys, flushall, flushdb, config set, and so on; they can be disabled with rename-command
- bind: restrict the listening network card and refuse access from external network cards
- Turn on the firewall
- Do not start Redis with the root account
- Try not to use the default port
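The suggestions above map onto a few redis.conf directives. A sketch (the password and port are placeholders to replace with your own values):

```
# set a password
requirepass your-strong-password
# listen only on the internal interface
bind 127.0.0.1
# an empty name disables the command
rename-command KEYS ""
rename-command FLUSHALL ""
rename-command FLUSHDB ""
rename-command CONFIG ""
# avoid the default port 6379
port 6380
```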
3.4 Memory configuration
Summary of notes:
- Overview: properly size the replication buffer, AOF buffer, and client buffers in memory to improve performance.
3.4.1 Overview
When Redis memory is insufficient, it may cause problems such as frequent deletion of keys, prolonged response time, and unstable QPS. When the memory usage reaches more than 90%, we need to be vigilant and quickly locate the cause of memory usage.
| Memory usage | Explanation |
| --- | --- |
| Data memory | The most important part of Redis; it stores Redis's key-value data. The main problems are BigKeys and memory fragmentation. |
| Process memory | The Redis main process itself needs memory to run (code, constant pool, etc.); this is only a few megabytes and, in most production environments, negligible next to the data memory. |
| Buffer memory | Generally includes the client buffers, AOF buffer, and replication buffer. Client buffers include input and output buffers. This part fluctuates a lot, and improper use of BigKeys may cause memory overflow. |
Redis provides some commands to view the current memory allocation status of Redis:
- info memory
- memory xxx
3.4.2 Memory buffer configuration
There are three common types of memory buffers:
- Replication buffer: the repl_backlog_buffer used by master-slave replication. If it is too small, frequent full resynchronizations may occur and hurt performance. Set with repl-backlog-size, default 1mb
- AOF buffer: the buffer before AOF flushes to disk, and the buffer AOF uses during rewrite. Its capacity cannot be configured
- Client buffers: divided into input and output buffers. The input buffer is capped at 1 GB and cannot be configured; the output buffer can be configured
The default configuration is as follows:
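For the client output buffers, the shipped redis.conf contains limits like these (a sketch of the well-known defaults; older versions spell replica as slave, so check your own redis.conf):

```
# class  hard-limit  soft-limit  soft-seconds
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
```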
3.5 Cluster Best Practices
Summary of notes:
- Overview: Building a cluster requires considering bandwidth, data skew, data integrity, client performance and many other issues.
- Note: by default, a sharded or master-slave cluster stops serving when any slot is uncovered; configure cluster-require-full-coverage as false, according to your needs, to improve Redis cluster availability.
- Do not build clusters with too many nodes, to avoid inter-node traffic causing business timeouts.
3.5.1 Overview
Although the cluster has high availability features and can realize automatic fault recovery, if used improperly, there will be some problems:
- Cluster integrity issues
- Cluster bandwidth problem
- Data skew problem
- Client performance issues
- Cluster compatibility issues with commands
- Lua and transaction issues
3.5.2 Cluster integrity issues
In the default configuration of Redis, if any slot is found to be unavailable, the entire cluster will stop external services:
Supplement:
To keep the cluster available, it is recommended to configure cluster-require-full-coverage as false.
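In redis.conf the setting uses yes/no rather than true/false:

```
# keep serving the slots that are still covered even if some slots fail
cluster-require-full-coverage no
```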
3.5.3 Cluster bandwidth problem
Cluster nodes will constantly ping each other to determine the status of other nodes in the cluster. The information carried by each Ping includes at least:
- Slot information
- Cluster status information
The more nodes there are in the cluster, the larger the amount of cluster status information data. The relevant information of 10 nodes may reach 1kb. At this time, the bandwidth required for each cluster intercommunication will be very high.
Solutions:
- Avoid large clusters. The number of cluster nodes should not be too many , preferably less than 1,000. If the business is large, establish multiple clusters.
- Avoid running too many Redis instances in a single physical machine
- Configure appropriate cluster-node-timeout values according to the number of nodes and bandwidth to ensure node failure detection.
3.5.4 Cluster or master-slave
Notice:
Single Redis (master-slave Redis) can already reach 10,000-level QPS, and also has strong high-availability features. If the master and slave can meet business needs, try not to build a Redis cluster.
Log
A persistent sdown problem when building Redis Sentinel.
sdown indicates that the Sentinel subjectively believes the node is down.
- Check whether the IP and port of the Sentinel configuration file Master node are configured correctly.
- Check whether the Sentinel configuration file contains the cluster's connection password
sentinel auth-pass mymaster qweasdzxc
- Check whether the system firewall opens the ports of each node in the Redis cluster
Build MySQL
Step 1: Prepare the basic environment
1. Create the directories
cd /home
mkdir mysql
cd mysql
mkdir logs
mkdir data
mkdir conf
Step 2: Run the container
- Use Docker to execute the following command
sudo docker run \
-p 3306:3306 \
--name mysql \
-v /home/mysql/logs:/logs \
-v /home/mysql/data:/var/lib/mysql \
-v /home/mysql/conf:/etc/mysql/conf.d \
-e MYSQL_ROOT_PASSWORD=qweasdzxc \
-d mysql:latest
Step 3: Add configuration
1. Enter the conf directory and create the my.cnf file
[mysqld]
skip-name-resolve
character_set_server=utf8
datadir=/home/mysql
server-id=1000
2. Restart the container
docker restart mysql
Step 4: Initialize the project table
/*
Navicat Premium Data Transfer
Source Server : 192.168.150.101
Source Server Type : MySQL
Source Server Version : 50725
Source Host : 192.168.150.101:3306
Source Schema : heima
Target Server Type : MySQL
Target Server Version : 50725
File Encoding : 65001
Date: 16/08/2021 14:45:07
*/
SET NAMES utf8mb4;
SET FOREIGN_KEY_CHECKS = 0;
-- ----------------------------
-- Table structure for tb_item
-- ----------------------------
DROP TABLE IF EXISTS `tb_item`;
CREATE TABLE `tb_item` (
`id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT '商品id',
`title` varchar(264) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL COMMENT '商品标题',
`name` varchar(128) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL DEFAULT '' COMMENT '商品名称',
`price` bigint(20) NOT NULL COMMENT '价格(分)',
`image` varchar(200) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '商品图片',
`category` varchar(200) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '类目名称',
`brand` varchar(100) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '品牌名称',
`spec` varchar(200) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT '规格',
`status` int(1) NULL DEFAULT 1 COMMENT '商品状态 1-正常,2-下架,3-删除',
`create_time` datetime NULL DEFAULT NULL COMMENT '创建时间',
`update_time` datetime NULL DEFAULT NULL COMMENT '更新时间',
PRIMARY KEY (`id`) USING BTREE,
INDEX `status`(`status`) USING BTREE,
INDEX `updated`(`update_time`) USING BTREE
) ENGINE = InnoDB AUTO_INCREMENT = 50002 CHARACTER SET = utf8 COLLATE = utf8_general_ci COMMENT = '商品表' ROW_FORMAT = COMPACT;
-- ----------------------------
-- Records of tb_item
-- ----------------------------
INSERT INTO `tb_item` VALUES (10001, 'RIMOWA 21寸托运箱拉杆箱 SALSA AIR系列果绿色 820.70.36.4', 'SALSA AIR', 16900, 'https://m.360buyimg.com/mobilecms/s720x720_jfs/t6934/364/1195375010/84676/e9f2c55f/597ece38N0ddcbc77.jpg!q70.jpg.webp', '拉杆箱', 'RIMOWA', '{\"颜色\": \"红色\", \"尺码\": \"26寸\"}', 1, '2019-05-01 00:00:00', '2019-05-01 00:00:00');
INSERT INTO `tb_item` VALUES (10002, '安佳脱脂牛奶 新西兰进口轻欣脱脂250ml*24整箱装*2', '脱脂牛奶', 68600, 'https://m.360buyimg.com/mobilecms/s720x720_jfs/t25552/261/1180671662/383855/33da8faa/5b8cf792Neda8550c.jpg!q70.jpg.webp', '牛奶', '安佳', '{\"数量\": 24}', 1, '2019-05-01 00:00:00', '2019-05-01 00:00:00');
INSERT INTO `tb_item` VALUES (10003, '唐狮新品牛仔裤女学生韩版宽松裤子 A款/中牛仔蓝(无绒款) 26', '韩版牛仔裤', 84600, 'https://m.360buyimg.com/mobilecms/s720x720_jfs/t26989/116/124520860/644643/173643ea/5b860864N6bfd95db.jpg!q70.jpg.webp', '牛仔裤', '唐狮', '{\"颜色\": \"蓝色\", \"尺码\": \"26\"}', 1, '2019-05-01 00:00:00', '2019-05-01 00:00:00');
INSERT INTO `tb_item` VALUES (10004, '森马(senma)休闲鞋女2019春季新款韩版系带板鞋学生百搭平底女鞋 黄色 36', '休闲板鞋', 10400, 'https://m.360buyimg.com/mobilecms/s720x720_jfs/t1/29976/8/2947/65074/5c22dad6Ef54f0505/0b5fe8c5d9bf6c47.jpg!q70.jpg.webp', '休闲鞋', '森马', '{\"颜色\": \"白色\", \"尺码\": \"36\"}', 1, '2019-05-01 00:00:00', '2019-05-01 00:00:00');
INSERT INTO `tb_item` VALUES (10005, '花王(Merries)拉拉裤 M58片 中号尿不湿(6-11kg)(日本原装进口)', '拉拉裤', 38900, 'https://m.360buyimg.com/mobilecms/s720x720_jfs/t24370/119/1282321183/267273/b4be9a80/5b595759N7d92f931.jpg!q70.jpg.webp', '拉拉裤', '花王', '{\"型号\": \"XL\"}', 1, '2019-05-01 00:00:00', '2019-05-01 00:00:00');
-- ----------------------------
-- Table structure for tb_item_stock
-- ----------------------------
DROP TABLE IF EXISTS `tb_item_stock`;
CREATE TABLE `tb_item_stock` (
  `item_id` bigint(20) NOT NULL COMMENT 'product id, references tb_item',
  `stock` int(10) NOT NULL DEFAULT 9999 COMMENT 'product stock',
  `sold` int(10) NOT NULL DEFAULT 0 COMMENT 'product sales count',
  PRIMARY KEY (`item_id`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci ROW_FORMAT = COMPACT;
-- ----------------------------
-- Records of tb_item_stock
-- ----------------------------
INSERT INTO `tb_item_stock` VALUES (10001, 99996, 3219);
INSERT INTO `tb_item_stock` VALUES (10002, 99999, 54981);
INSERT INTO `tb_item_stock` VALUES (10003, 99999, 189);
INSERT INTO `tb_item_stock` VALUES (10004, 99999, 974);
INSERT INTO `tb_item_stock` VALUES (10005, 99999, 18649);
SET FOREIGN_KEY_CHECKS = 1;
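The schema above pairs each product (`tb_item`) with a stock row (`tb_item_stock`), and the `stock`/`sold` columns support the classic guarded-deduction pattern for flash sales. The following is an illustrative sketch only, using in-memory SQLite in place of MySQL; `deduct_stock` is a hypothetical helper, not part of the project code:

```python
import sqlite3

# In-memory SQLite stand-in for the MySQL tb_item_stock table above.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE tb_item_stock (
        item_id INTEGER PRIMARY KEY,            -- product id, references tb_item
        stock   INTEGER NOT NULL DEFAULT 9999,  -- product stock
        sold    INTEGER NOT NULL DEFAULT 0      -- product sales count
    )""")
conn.execute("INSERT INTO tb_item_stock VALUES (10001, 99996, 3219)")

def deduct_stock(conn, item_id, n=1):
    """Deduct n units and bump sales in one statement; the WHERE guard
    prevents overselling. Returns True only if a row was updated."""
    cur = conn.execute(
        "UPDATE tb_item_stock SET stock = stock - ?, sold = sold + ? "
        "WHERE item_id = ? AND stock >= ?", (n, n, item_id, n))
    return cur.rowcount == 1

print(deduct_stock(conn, 10001))  # True: one unit deducted
print(conn.execute(
    "SELECT stock, sold FROM tb_item_stock WHERE item_id = 10001").fetchone())
```

Putting the stock check inside the `UPDATE`'s `WHERE` clause makes the deduction atomic at the database level, so concurrent requests cannot drive `stock` negative.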
Set up Nginx
#user  nobody;
worker_processes  1;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    # nginx business cluster: nginx local cache -> redis cache -> tomcat query
    upstream nginx-cluster {
        server 10.13.164.55:8081;
    }

    server {
        listen       8080;
        server_name  localhost;

        location /api {
            proxy_pass http://nginx-cluster;
        }

        location / {
            root   html;
            index  index.html index.htm;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}
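With this configuration, requests to `/api` on port 8080 are proxied to the `nginx-cluster` upstream, while everything else is served from the `html` directory. A small probe can confirm the gateway is answering; this is a sketch only — the `/api/item/10001` path is an assumed example route, and the function returns `None` when no gateway is running locally:

```python
import urllib.request
import urllib.error

def check_gateway(base="http://localhost:8080", path="/api/item/10001", timeout=2):
    """Probe the nginx gateway; /api requests should be proxied to the
    nginx-cluster upstream configured above. Returns the HTTP status
    code on success, or None if the request fails."""
    try:
        with urllib.request.urlopen(base + path, timeout=timeout) as resp:
            return resp.status
    except (urllib.error.URLError, OSError):
        return None
```

For a quick manual check, `curl http://localhost:8080/api/...` from the host achieves the same thing.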
Set up Redis
Step 1: Add the redis.conf configuration file
bind 0.0.0.0
protected-mode no
port 6379
tcp-backlog 511
requirepass qweasdzxc
timeout 0
tcp-keepalive 300
daemonize no
supervised no
pidfile /var/run/redis_6379.pid
loglevel notice
logfile ""
databases 30
always-show-logo yes
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir ./
replica-serve-stale-data yes
replica-read-only yes
repl-diskless-sync no
repl-disable-tcp-nodelay no
replica-priority 100
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
appendonly yes
appendfilename "appendonly.aof"
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble yes
lua-time-limit 5000
slowlog-max-len 128
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
stream-node-max-bytes 4096
stream-node-max-entries 100
activerehashing yes
hz 10
dynamic-hz yes
aof-rewrite-incremental-fsync yes
rdb-save-incremental-fsync yes
Step 2: Run the Redis container
sudo docker run \
--restart=always \
-p 6379:6379 \
--name myredis \
-v /home/redis/myredis/redis.conf:/etc/redis/redis.conf \
-v /home/redis/myredis/data:/data \
-d redis redis-server /etc/redis/redis.conf \
--appendonly yes \
--requirepass qweasdzxc
Note:
- If this Redis container needs to be connected to remotely, a password must be configured, i.e. the `--requirepass` parameter passed above.
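Because a password is set, any client must AUTH before issuing commands. The check below is a minimal, illustrative RESP client using only the standard library (a stand-in for `redis-cli -a qweasdzxc ping`); it assumes the container above is mapped to local port 6379 and degrades to `None` when nothing is listening:

```python
import socket

def redis_ping(host="127.0.0.1", port=6379, password="qweasdzxc", timeout=2.0):
    """AUTH then PING over the RESP protocol. Returns True on +PONG,
    False on an unexpected reply, None if the server is unreachable."""
    def encode(*args):
        # RESP array of bulk strings, e.g. *2\r\n$4\r\nAUTH\r\n...
        out = b"*%d\r\n" % len(args)
        for arg in args:
            raw = arg.encode()
            out += b"$%d\r\n%s\r\n" % (len(raw), raw)
        return out
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(encode("AUTH", password))
            s.recv(64)                       # expect +OK\r\n
            s.sendall(encode("PING"))
            return s.recv(64).startswith(b"+PONG")
    except OSError:
        return None

print(redis_ping())  # True if the container above is reachable
```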