Redis (7): Introduction to Redis Basics

Redis usage

Redis is an open-source, in-memory NoSQL database designed for high-performance data storage and access. It supports multiple data types, including strings, hashes, lists, sets, and sorted sets, and can be deployed and operated in a distributed fashion. Its speed, high availability, and easy scalability make it suitable for a wide range of application scenarios.

Typical uses: cache, message queue, distributed lock, counter, primary database, distributed cache, real-time statistics, recommendation systems, hot data storage, geolocation data, time-series data, and so on.

Redis advantages and disadvantages

  • Advantages:
    • Fast: data lives in memory, so reads and writes are very fast.
    • Multiple data types: Redis supports several data structures that adapt to different application scenarios.
    • Rich features: Redis supports transactions, Lua scripts, publish/subscribe, and other advanced features.
    • Scalability: high availability and horizontal scaling can be achieved through master-slave replication, sentinel mode, and cluster mode.
  • Disadvantages:
    • Memory limits: because Redis stores data in memory, capacity is bounded by available RAM.
    • Persistence: data held only in memory can be lost, so a persistence mechanism (RDB/AOF) has to be relied on to avoid data loss.
    • Single-threaded command execution: a single instance processes commands on one thread; throughput can be scaled out with multiple instances, but one instance cannot spread command processing across CPU cores.

Running Redis with Docker

$ docker run -d -p 6379:6379 --name="local-redis" redis --requirepass 123456
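To check that the container is up, you can open a redis-cli session inside it. A minimal sketch, assuming the container name (local-redis) and password (123456) used above:

-- Open an interactive redis-cli shell inside the container
$ docker exec -it local-redis redis-cli
-- Authenticate and check connectivity (expected replies: OK, then PONG)
auth 123456
ping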

Redis common commands

String commands

Equivalent to a Map<?, ?>

-- $ set key value
set age 18

-- $ setnx key value
-- Sets key to value only if key does not already exist
-- If key already exists, the command does nothing
setnx name Tom

-- $ setex key seconds value  Sets key to value with a lifetime of seconds; once it expires, the key is removed
setex name 10 Lee

-- $ ttl key  Returns the remaining time to live of key, in seconds
ttl name

-- $ del key
del name

-- $ get key
get age

-- $ incr key  Increments the value of key by 1
incr age

-- $ decr key  Decrements the value of key by 1
decr age

-- $ incrby key increment  Increments the value of key by increment
incrby age 10

-- $ mset k1 v1 k2 v2 k3 v3 ...  Sets multiple key-value pairs in one command
mset name Tom age 18 sex 1

-- $ mget k1 k2 k3 ...  Gets the values of multiple keys in one command
mget name age sex

-- $ append key value  Appends value to the end of the string stored at key
append name ,World

-- $ setrange key offset value  Overwrites part of the string stored at key, starting at index offset
setrange name 6 Lee
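As a small illustration of how the string commands above combine in practice, here is a sketch of a page-view counter that resets daily; the key name pv:1001 is hypothetical:

-- Count one page view for a (hypothetical) user 1001; incr creates the key at 0 if it does not exist
incr pv:1001
-- Let the counter expire after one day (86400 seconds)
expire pv:1001 86400
-- Check the current count and the remaining lifetime
get pv:1001
ttl pv:1001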

Hash commands

Equivalent to a Map<?, Map<?, ?>>

-- $ hset key field value  Stores field-value inside the hash stored at key
hset user name Lee

-- $ hget key field  Gets the value of field in the hash stored at key
hget user name

-- $ hexists key field  Checks whether field exists in the hash stored at key
hexists user age

-- $ hdel key field  Deletes field from the hash stored at key
hdel user name

-- $ hincrby key field increment  Increments the value of field by increment
hincrby user age 10

-- $ hlen key  Returns the number of fields in the hash stored at key
hlen user

-- $ hkeys key  Returns all field names in the hash stored at key
hkeys user

-- $ hvals key  Returns all values in the hash stored at key
hvals user

-- $ hgetall key  Returns all field-value pairs in the hash stored at key
hgetall user
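A common way to use a hash is one key per object and one field per attribute, which avoids serializing the whole object into a single string. A sketch, with the key user:1001 as a hypothetical example:

-- Store several attributes of one object under a single hash key (hset accepts multiple field-value pairs)
hset user:1001 name Lee age 18 sex 1
-- Read a single attribute, or the whole object
hget user:1001 name
hgetall user:1001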

List commands

Equivalent to a Map<?, List<?>>

-- $ rpush key value  Appends value on the right (tail) of the list stored at key
rpush name Lee
rpush name Tom

-- $ lpush key value  Prepends value on the left (head) of the list stored at key
lpush name zhangsan

-- $ lrange key start stop  Returns the elements of the list from start to stop, left to right
lrange name 0 2

-- $ lpop key  Removes and returns the leftmost element of the list stored at key
lpop name

-- $ rpop key  Removes and returns the rightmost element of the list stored at key
rpop name

-- $ llen key  Returns the length of the list stored at key
llen name

-- $ linsert key before pivot value  Inserts value before the element pivot in the list stored at key
linsert name before Lee Hello

-- $ linsert key after pivot value  Inserts value after the element pivot in the list stored at key
linsert name after Lee World

-- $ lset key index value  Sets the element at position index of the list stored at key to value
lset name 2 Hello,World

-- $ lrem key count value  Removes up to count occurrences of value from the list stored at key
lrem name 3 Tom

-- $ ltrim key start stop  Trims the list stored at key so that only the elements from start to stop remain
ltrim name 2 4

-- $ lindex key index  Returns the element at position index of the list stored at key
lindex name 2
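Because lpush adds on the left and rpop removes on the right, a list can act as a simple FIFO queue. A sketch with a hypothetical key tasks:

-- Producer: push jobs onto the head of the list
lpush tasks job1
lpush tasks job2
-- Consumer: pop jobs from the tail, in the order they were pushed (job1 first)
rpop tasks
-- Remaining queue length
llen tasks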

Set commands

Equivalent to a Map<?, Set<?>>

-- $ sadd key member [member ...]  Adds one or more members to the set stored at key
sadd name a b c
sadd name1 Tom Lee ZhangSan XiaoMing
sadd name2 XiaoLiu Tom Lee XiXi

-- $ smembers key  Returns all members of the set stored at key
smembers name

-- $ srem key member [member ...]  Removes one or more members from the set stored at key
srem name a b d

-- $ spop key count  Randomly removes count members from the set stored at key and returns them
spop name 3

-- $ sdiff key1 key2  Returns the members of key1 that are not in key2 (difference); here: ZhangSan XiaoMing
sdiff name1 name2

-- $ sdiffstore key key1 key2  Stores the members of key1 that are not in key2 into the set key
sdiffstore name name1 name2

-- $ sinter key1 key2  Returns the intersection of key1 and key2
sinter name1 name2

-- $ sinterstore key key1 key2  Stores the intersection of key1 and key2 into the set key
sinterstore name name1 name2

-- $ sunion key1 key2  Returns the union of key1 and key2
sunion name1 name2

-- $ sunionstore key key1 key2  Stores the union of key1 and key2 into the set key
sunionstore name name1 name2

-- $ srandmember key count  Returns count random members of the set stored at key
srandmember name 3

-- $ sismember key member  Checks whether member is in the set stored at key
sismember name Tom

-- $ smove key1 key2 member  Moves member from the set key1 to the set key2
smove name1 name2 XiaoMing
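Set operations such as sinter map naturally onto problems like mutual friends or shared tags. A sketch using hypothetical keys:

-- Hypothetical follower sets of two users
sadd followers:1001 Tom Lee ZhangSan
sadd followers:1002 Lee ZhangSan XiaoMing
-- Followers the two users have in common
sinter followers:1001 followers:1002
-- Store the result under its own (hypothetical) key for reuse
sinterstore mutual:1001:1002 followers:1001 followers:1002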

ZSet (sorted set) commands

Equivalent to a Map<?, SortedSet<?>> in which each member carries a score that determines its order

-- $ zadd key score member [score member ...]  Adds members to the sorted set at key, ordered by score
zadd name 1 熊大 3 张三 5 王五 7 孙七 9 吴九 2 熊二 4 李四 6 赵六 8 周八 0 郑十

-- Equivalently, one member at a time:
zadd name 1 熊大
zadd name 3 张三
zadd name 5 王五
zadd name 7 孙七
zadd name 9 吴九
zadd name 2 熊二
zadd name 4 李四
zadd name 6 赵六
zadd name 8 周八
zadd name 0 郑十

-- $ zincrby key increment member  Increases the score of member in the sorted set at key by increment
zincrby name 10 郑十

-- $ zcard key  Returns the number of members in the sorted set at key
zcard name

-- $ zrank key member  Returns the rank of member in ascending score order (0-based)
zrank name 周八

-- $ zrevrank key member  Returns the rank of member in descending score order (0-based)
zrevrank name 周八

-- $ zscore key member  Returns the score of member in the sorted set at key
zscore name 郑十

-- $ zrange key start stop [withscores]  Returns members ranked start to stop in ascending order; withscores also shows the scores
zrange name 0 9 withscores
zrange name 0 -1 withscores

-- $ zrevrange key start stop [withscores]  Returns members ranked start to stop in descending order; withscores also shows the scores
zrevrange name 3 8 withscores

-- $ zrangebyscore key min max [withscores]  Returns members with scores in [min, max], in ascending order
zrangebyscore name 3 7 withscores

-- $ zrevrangebyscore key max min [withscores]  Returns members with scores in [min, max], in descending order
zrevrangebyscore name 7 2 withscores

-- $ zrem key member  Removes member from the sorted set at key
zrem name 郑十

-- $ zremrangebyscore key min max  Removes members with scores in [min, max]
zremrangebyscore name 3 7

-- $ zremrangebyrank key start stop  Removes members ranked from start to stop
zremrangebyrank name 3 7

-- $ zcount key min max  Returns the number of members with scores in [min, max]
zcount name 3 7
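A sorted set is a natural fit for a leaderboard: zincrby updates a player's score and zrevrange reads the top entries. A sketch with a hypothetical key rank and hypothetical player names:

-- Add or update player scores
zadd rank 100 playerA 80 playerB
zincrby rank 30 playerB
-- Top 3 players with their scores, highest first
zrevrange rank 0 2 withscores
-- A single player's position in descending order (0-based)
zrevrank rank playerB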

Global commands

-- List the keys in the current database
keys *
keys *a*

-- Check whether a key exists
exists name

-- Set an expiration time (in seconds) on a key
expire name 10

-- Remove the expiration time from a key
persist name

-- Switch databases; the default database is 0
select 2

-- Move a key to the specified database
move name 0

-- Return a random key
randomkey

-- Rename the key name1 to name0
rename name1 name0

-- Echo the given message
echo Hi,Lee

-- Return the number of keys in the current database
dbsize

-- Show server information
info

-- Show all Redis configuration settings
config get *

-- Remove all keys from the current database
flushdb

-- Remove all keys from all databases
flushall

Redis transactions


-- Start a transaction
multi
-- Commands are queued, not executed yet
sadd name "Tom" "Lee" "ZhangSan" "XiaoMing"
smembers name
-- Execute all queued commands
exec
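The transaction above is committed with exec; a queued transaction can instead be abandoned with discard, which flushes the command queue without executing anything. A minimal sketch:

-- Queue a command, then abandon the transaction instead of executing it
multi
sadd name "Tom"
discard
-- Nothing was executed; the queued sadd had no effect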

Redis persistence mechanism

Persistence refers to writing data to persistent storage, such as a solid-state drive (SSD)

RDB

RDB takes point-in-time snapshots of the dataset at specified intervals.

  • Manual triggers

    • save: runs synchronously and blocks the main process; other commands are served only after the save completes.
    • bgsave: forks a child process and does not block; the child notifies the main process when it finishes and then exits.
  • Automatic triggers

    • save m n: automatically runs bgsave when the dataset has been modified at least n times within m seconds (see the config sketch after this list).
  • Advantages

    • RDB is a very compact single-file point-in-time representation of Redis data. RDB files are great for backups.
      • For example, you might want to archive an RDB file every hour for the last 24 hours, and save an RDB snapshot every day for 30 days. This enables you to easily restore different versions of your data set in the event of a disaster.
    • RDB is great for disaster recovery, it is a compact file that can be transferred to a remote data center or Amazon S3 (possibly encrypted).
    • RDB maximizes Redis performance because the only work the Redis parent process needs to do to be persistent is to fork a child process to do the rest. The parent process never performs disk I/O or similar operations.
    • RDB allows faster restarts with large data sets compared to AOF.
    • On replicas, RDB supports partial resynchronization after restart and failover.
  • Disadvantages

    • RDB is not good if you need to minimize the possibility of data loss in the event that Redis stops working (such as after a power outage). You can configure different save points at which an RDB is produced (for example, after at least five minutes and 100 writes against the dataset), but you will typically create an RDB snapshot every five minutes or more, so if Redis stops working without a proper shutdown you should be prepared to lose the last few minutes of data.
    • RDB needs to fork() a child process in order to persist to disk. If the dataset is large, fork() can be time-consuming and may cause Redis to stop serving clients for a few milliseconds, or even a second when the dataset is very large and CPU performance is poor. AOF also needs to fork(), but less frequently, and you can tune how often the log is rewritten without any trade-off in durability.
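As referenced in the trigger list above, a sketch of driving RDB from redis-cli; the save points shown are illustrative values, not recommendations:

-- Automatic snapshots: e.g. after 900 s with at least 1 change, or 60 s with at least 10000 changes
config set save "900 1 60 10000"
-- Trigger a non-blocking snapshot by hand, then check the time of the last successful save
bgsave
lastsave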

AOF

AOF logs every write operation received by the server; the log can be replayed when the server starts, reconstructing the original dataset. Commands are recorded in the same format as the Redis protocol itself.


  • appendonly

    • no: disables AOF
    • yes: enables AOF
  • appendfsync (a config sketch follows this list)

    • always: fsync after every write command; slowest, but guarantees full durability.
    • everysec: fsync once per second; a good compromise between performance and durability.
    • no: leaves syncing entirely to the OS; the typical sync interval is about 30 seconds.
  • Advantages

    • Using AOF Redis is more durable: you can have different fsync strategies: no fsync at all, fsync every second, fsync on every query. With the default policy of fsync per second, write performance is still great. fsync is performed using a background thread, and when no fsync is in progress, the main thread will try to perform writes, so you only lose a second of writes.
    • AOF logs are append-only so there are no lookup issues and no corruption issues in the event of a power outage. Even if for some reason (disk full or otherwise) the log ends up with half-written commands, the redis-check-aof tool can easily fix it.
    • When the AOF gets too big, Redis can automatically rewrite it in the background. The rewrite is completely safe: while Redis continues appending to the old file, a brand-new file is produced with the minimal set of operations needed to recreate the current dataset, and once this second file is ready Redis switches the two and starts appending to the new one.
    • AOF contains a log of all operations sequentially in a format that is easy to understand and parse. You can even export AOF files easily. For example, even if you accidentally flushed everything with the FLUSHALL command, as long as no log rewriting was performed in the meantime, you can still save the data set by stopping the server, removing the latest command, and restarting Redis again.
  • Disadvantages

    • AOF files are typically larger than equivalent RDB files for the same data set.
    • AOF can be slower than RDB, depending on the exact fsync strategy. In general, performance with fsync set to once per second is still very high, and with fsync disabled it should be as fast as an RDB even under heavy load. RDB is still able to provide more guarantees on maximum latency even under huge write loads.
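As referenced in the appendfsync list above, a sketch of enabling AOF at runtime with redis-cli; note that changes made with config set are not written back to redis.conf unless config rewrite is also run:

-- Turn on AOF and use the once-per-second fsync policy
config set appendonly yes
config set appendfsync everysec
-- Trigger a background rewrite to compact the AOF file
bgrewriteaof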

RDB + AOF hybrid (default when AOF is enabled)

Hybrid mode: the AOF rewrite writes its base in compact RDB format and appends subsequent write commands in AOF format (controlled by aof-use-rdb-preamble), combining fast loading with better durability.


Redis memory eviction policies

When the memory limit is reached, keys can be evicted by LRU, LFU, TTL, or at random (a config sketch follows the list below).


  • noeviction (default): new values are not saved when the memory limit is reached; with replication, this applies to the primary database
  • allkeys-lru: keeps the most recently used keys and evicts the least recently used (LRU) keys
  • allkeys-lfu: keeps frequently used keys and evicts the least frequently used (LFU) keys
  • volatile-lru: evicts the least recently used keys among those that have an expiration set
  • volatile-lfu: evicts the least frequently used keys among those that have an expiration set
  • allkeys-random: evicts keys at random to make room for new data
  • volatile-random: evicts random keys among those that have an expiration set
  • volatile-ttl: evicts keys with an expiration set, shortest remaining time to live (TTL) first
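As noted above, a sketch of setting a memory limit and an eviction policy at runtime; the 100mb limit is an arbitrary example value:

-- Cap memory usage and evict the least recently used keys when the cap is reached
config set maxmemory 100mb
config set maxmemory-policy allkeys-lru
-- Inspect the current settings
config get maxmemory
config get maxmemory-policy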

Redis's handling of expired keys

The Redis server combines two strategies: lazy deletion and periodic deletion. Used together, they strike a reasonable balance between making good use of the CPU and avoiding wasted memory (a small demonstration follows the list below).

  • Lazy deletion: when a key is accessed, Redis checks whether it has expired and deletes it if so (CPU-friendly, but a key that is never accessed again lingers in memory and wastes space)
  • Timed deletion: a timer is created when the key's expiration is set, and the key is deleted the moment it expires (CPU-unfriendly, since the CPU has to maintain the timers)
  • Periodic deletion: expired keys are scanned for and removed at regular intervals (how many keys are checked and deleted each round is decided by the algorithm)
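As mentioned above, a small demonstration of expiry from the client's point of view; the key tmp is hypothetical:

-- Give a (hypothetical) key a 5-second lifetime
set tmp 1
expire tmp 5
ttl tmp
-- After the timeout, the key behaves as deleted: exists returns 0 and get returns nil
exists tmp
get tmp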

Source: blog.csdn.net/weixin_43526371/article/details/131367043