Redis ~ From Getting Started to Getting Hooked.




Redis ~ What.

REmote DIctionary Server.

Redis (Remote Dictionary Server), i.e. a remote dictionary service, is an open-source, networked, in-memory, optionally persistent, log-structured key-value database written in ANSI C, with APIs for many languages. From March 15, 2010, development of Redis was led by VMware; since May 2013 it has been sponsored by Pivotal.
~ Baike.

REmote DIctionary Server (Redis) is a key-value storage system written by Salvatore Sanfilippo.

Redis is an open-source, BSD-licensed, networked, in-memory, optionally persistent, log-structured key-value database written in ANSI C, with APIs for many languages.

It is often called a data structure server, because values can be of types such as String, Hash, list, set, and sorted set.
~ runoob.

Redis is an open-source (BSD-licensed), in-memory data structure store that can be used as a database, cache, and message broker. It supports many kinds of data structures, such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs, and geospatial indexes with radius queries. Redis has built-in replication, Lua scripting, LRU eviction, transactions, and different levels of on-disk persistence, and provides high availability via Redis Sentinel and automatic partitioning with Redis Cluster.
~ http://www.redis.cn/

Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache and message broker. It supports data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs, geospatial indexes with radius queries and streams. Redis has built-in replication, Lua scripting, LRU eviction, transactions and different levels of on-disk persistence, and provides high availability via Redis Sentinel and automatic partitioning with Redis Cluster.

~https://redis.io/


Features.
  • Redis supports persistence: data in memory can be saved to disk and loaded back on restart.

  • Redis is not limited to simple key-value data; it also provides list, set, zset, hash and other data structures.

  • Redis supports data backup, i.e. master-slave replication (see the sketch below).
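
A minimal sketch of wiring up a replica, assuming a second Redis instance is already running on port 6380 (in Redis 4.x the replica-side command is SLAVEOF; newer versions spell it REPLICAOF):

127.0.0.1:6380> SLAVEOF 127.0.0.1 6379
OK
127.0.0.1:6380> INFO replication
# Replication
role:slave
master_host:127.0.0.1
master_port:6379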


Installing Redis.

  • Download locally.

http://download.redis.io/releases/redis-4.0.11.tar.gz

  • Transfer it to the server with scp.
geek@geek-PC:~/Downloads$ scp redis-4.0.11.tar.gz [email protected]:/root/geek/tools_my/
[email protected]'s password: 
redis-4.0.11.tar.gz                       100% 1699KB  36.9MB/s   00:00
  • Log in to the server.

  • make

[root@localhost tools_my]# tar -zxvf redis-4.0.11.tar.gz 
[root@localhost tools_my]# cd redis-4.0.11
[root@localhost redis-4.0.11]# ls
00-RELEASENOTES  deps       README.md        runtest-sentinel  utils
BUGS             INSTALL    redis.conf       sentinel.conf
CONTRIBUTING     Makefile   runtest          src
COPYING          MANIFESTO  runtest-cluster  tests

[root@localhost redis-4.0.11]# make

...

Hint: It's a good idea to run 'make test' ;)

make[1]: Leaving directory `/root/geek/tools_my/redis-4.0.11/src'

Best to skip make test; it's involved (among other things it requires Tcl).

  • make install
[root@localhost redis-4.0.11]# make install
cd src && make install
make[1]: Entering directory `/root/geek/tools_my/redis-4.0.11/src'
    CC Makefile.dep
make[1]: Leaving directory `/root/geek/tools_my/redis-4.0.11/src'
make[1]: Entering directory `/root/geek/tools_my/redis-4.0.11/src'

Hint: It's a good idea to run 'make test' ;)

    INSTALL install
    INSTALL install
    INSTALL install
    INSTALL install
    INSTALL install
make[1]: Leaving directory `/root/geek/tools_my/redis-4.0.11/src'

  • Where did Redis get installed?
[root@localhost ~]# cd /usr/local/bin/
[root@localhost bin]# ls
pcre-config  pcretest         redis-check-aof  redis-cli       redis-server
pcregrep     redis-benchmark  redis-check-rdb  redis-sentinel
[root@localhost bin]# ll
total 35780
-rwxr-xr-x. 1 root root    2363 Feb 19 09:30 pcre-config
-rwxr-xr-x. 1 root root   90207 Feb 19 09:30 pcregrep
-rwxr-xr-x. 1 root root  186075 Feb 19 09:30 pcretest
-rwxr-xr-x. 1 root root 5599918 Mar 15 03:53 redis-benchmark
-rwxr-xr-x. 1 root root 8333544 Mar 15 03:53 redis-check-aof
-rwxr-xr-x. 1 root root 8333544 Mar 15 03:53 redis-check-rdb
-rwxr-xr-x. 1 root root 5740282 Mar 15 03:53 redis-cli
lrwxrwxrwx. 1 root root      12 Mar 15 03:53 redis-sentinel -> redis-server
-rwxr-xr-x. 1 root root 8333544 Mar 15 03:53 redis-server


Configuring Redis.

Back up any configuration file before modifying it; this is an important habit.
[root@localhost redis-4.0.11]# cp redis.conf redis.conf.bak
  • By default Redis does not run as a daemon. Edit the config file to change that.
[root@localhost redis-4.0.11]# vim redis.conf
################################# GENERAL #####################################

# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
# daemonize no
daemonize yes


Using Redis.

  • Start redis-server with a specified config file.
[root@localhost redis-4.0.11]# /usr/local/bin/redis-server redis.conf
7189:C 15 Mar 05:34:17.025 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
7189:C 15 Mar 05:34:17.025 # Redis version=4.0.11, bits=64, commit=00000000, modified=0, pid=7189, just started
7189:C 15 Mar 05:34:17.025 # Configuration loaded

  • Connect to redis-server with redis-cli.
    The default port is 6379.
[root@localhost redis-4.0.11]# /usr/local/bin/redis-cli
127.0.0.1:6379> ping
PONG
127.0.0.1:6379> 

  • Check the processes.
[root@localhost ~]# ps -ef | grep redis
root       7190      1  0 05:34 ?        00:00:00 /usr/local/bin/redis-server 127.0.0.1:6379
root       7214   3666  0 05:36 pts/0    00:00:00 /usr/local/bin/redis-cli
root       7232   7218  0 05:39 pts/1    00:00:00 grep redis
  • Shut down redis-server.
[root@localhost redis-4.0.11]# /usr/local/bin/redis-cli
127.0.0.1:6379> shutdown
not connected> exit

At the interactive prompt, shutdown stops redis-server.
At the interactive prompt, exit quits redis-cli.

[root@localhost ~]# ps -ef | grep redis
root       7275   7218  0 05:59 pts/1    00:00:00 grep redis

Other Redis-related notes.

The Redis binaries live in /usr/local/bin (see the listing above).
redis-benchmark ~ the performance-testing tool.
[root@localhost bin]# ./redis-benchmark 

Single-threaded.
  • Redis uses a single-threaded model to handle client requests. Responses to read/write events are implemented by wrapping the epoll function. Redis's actual processing speed therefore depends entirely on the efficiency of the main thread.

  • epoll is the Linux kernel's improved mechanism for handling large numbers of file descriptors, an enhanced version of the multiplexed I/O interfaces select/poll. It significantly improves system CPU utilization when a program has many concurrent connections of which only a few are active.


There are 16 databases by default, indexed from 0 like an array; database 0 is used initially.
[root@localhost redis-4.0.11]# vim redis.conf

# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
databases 16


The select command switches databases.
127.0.0.1:6379> select 1
OK
127.0.0.1:6379[1]> 

dbsize shows the number of keys in the current database.
127.0.0.1:6379> DBSIZE
(integer) 4
127.0.0.1:6379> keys *
1) "myset:__rand_int__"
2) "key:__rand_int__"
3) "mylist"
4) "counter:__rand_int__"
flushdb empties the current database.
flushall wipes every database.
Password management is unified: all 16 databases share the same password; either every connection works or none does.
Redis indexes start at zero.
Why is the default port 6379?

On a phone keypad: 6379 ——> MERZ ——> the actress Alessia Merz.


Redis Data Types ~ the Common Five.

http://redisdoc.com/

String.

Comparable to the type Memcached offers.
One key maps to one value.
Binary-safe ——> a value can hold any data, e.g. a JPG image or a serialized object.
Redis's most basic data type. A single string value can be up to 512 MB.

set / get / del / append / strlen
127.0.0.1:6379> get k1
"geek"
127.0.0.1:6379> append k1 666
(integer) 7
127.0.0.1:6379> get k1
"geek666"
127.0.0.1:6379> strlen k1
(integer) 7
127.0.0.1:6379> 


incr / decr / incrby / decrby (the value must be a number).
127.0.0.1:6379> incr k2
(integer) 1
127.0.0.1:6379> incr k2
(integer) 2
127.0.0.1:6379> incr k2
(integer) 3
127.0.0.1:6379> incrby k2 2
(integer) 5
127.0.0.1:6379> incrby k2 2
(integer) 7
127.0.0.1:6379> incrby k2 2
(integer) 9
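
If the value is not an integer, the command errors; a quick sketch with a hypothetical string key k9:

127.0.0.1:6379> set k9 geek
OK
127.0.0.1:6379> incr k9
(error) ERR value is not an integer or out of range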

getrange / setrange
127.0.0.1:6379> get k1
"geek666"
127.0.0.1:6379> GETRANGE k1 0 -1
"geek666"
127.0.0.1:6379> GETRANGE k1 0 2
"gee"

127.0.0.1:6379> SETRANGE k1 0 xxx
(integer) 7
127.0.0.1:6379> get k1
"xxxk666"


setex (set with expire) key seconds value / setnx (set if not exist)
127.0.0.1:6379> setex k3 10 v3
OK
127.0.0.1:6379> ttl k3
(integer) 8
127.0.0.1:6379> setnx k1 q
(integer) 0


mset / mget / msetnx (atomic: if any key already exists, nothing is set).
127.0.0.1:6379> mset k1 v1 k2 v2 k3 v3
OK
127.0.0.1:6379> mget k1 k2 k3
1) "v1"
2) "v2"
3) "v3"
127.0.0.1:6379> mget k1 k2 k3 k4
1) "v1"
2) "v2"
3) "v3"
4) (nil)

127.0.0.1:6379> msetnx k3 v3 k4 v4
(integer) 0


List ~ one key, many values.

A Redis list is a simple list of strings, sorted by insertion order.
Elements can be added at the head (left) or the tail (right).
The underlying structure is a linked list.


lpush / rpush / lrange
127.0.0.1:6379> LPUSH list01 1 2 3 4 5
(integer) 5
127.0.0.1:6379> LRANGE list01 0 -1
1) "5"
2) "4"
3) "3"
4) "2"
5) "1"
127.0.0.1:6379> RPUSH list02 1 2 3 4 5
(integer) 5
127.0.0.1:6379> LRANGE list02 0 -1
1) "1"
2) "2"
3) "3"
4) "4"
5) "5"


lpop / rpop

lindex ~ get an element by its index (top to bottom).

llen ~ get the length of the list.
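
A quick sketch of these, using a hypothetical list lst built with LPUSH (so it holds [c b a]):

127.0.0.1:6379> LPUSH lst a b c
(integer) 3
127.0.0.1:6379> LPOP lst
"c"
127.0.0.1:6379> RPOP lst
"a"
127.0.0.1:6379> LINDEX lst 0
"b"
127.0.0.1:6379> LLEN lst
(integer) 1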

lrem key n value ~ remove n occurrences of value.
127.0.0.1:6379> LRANGE list02 0 -1
 1) "1"
 2) "2"
 3) "3"
 4) "4"
 5) "5"
 6) "1"
 7) "1"
 8) "2"
 9) "2"
10) "3"
11) "3"
12) "3"
13) "5"
14) "4"
15) "1"
127.0.0.1:6379> LREM list02 2 3
(integer) 2
127.0.0.1:6379> LRANGE list02 0 -1
 1) "1"
 2) "2"
 3) "4"
 4) "5"
 5) "1"
 6) "1"
 7) "2"
 8) "2"
 9) "3"
10) "3"
11) "5"
12) "4"
13) "1"
127.0.0.1:6379> 


ltrim key startIndex endIndex ~ trim the list to the given range and assign the result back to key.
127.0.0.1:6379> lpush list01 1 2 3 4 5 6 7 8
(integer) 8
127.0.0.1:6379> LTRIM list01 0 4
OK
127.0.0.1:6379> lrange list01 0 -1
1) "8"
2) "7"
3) "6"
4) "5"
5) "4"
127.0.0.1:6379> 


rpoplpush sourceList destinationList ~ pop from the source list's tail and push onto the destination's head.
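A sketch with hypothetical keys src and dst:
127.0.0.1:6379> RPUSH src a b c
(integer) 3
127.0.0.1:6379> RPOPLPUSH src dst
"c"
127.0.0.1:6379> LRANGE dst 0 -1
1) "c"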

lset key index value
127.0.0.1:6379> lset list01 1 6
OK
127.0.0.1:6379> LRANGE list01 0 -1
1) "5"
2) "6"
3) "3"
4) "2"
5) "1"


linsert key before/after value1 value2
127.0.0.1:6379> LINSERT list01 before 6 Java
(integer) 6
127.0.0.1:6379> LRANGE list01 0 -1
1) "5"
2) "Java"
3) "6"
4) "3"
5) "2"
6) "1"


Set.

An unordered collection of String values, implemented with a hash table.

sadd / smembers / sismember
127.0.0.1:6379> sadd set01 1 1 2 2 3 3
(integer) 3
127.0.0.1:6379> SMEMBERS set01
1) "1"
2) "2"
3) "3"
127.0.0.1:6379> SISMEMBER set01 1
(integer) 1
127.0.0.1:6379> SISMEMBER set01 x
(integer) 0


scard ~ get the number of elements in the set.
127.0.0.1:6379> SCARD set01
(integer) 3

srem key value ~ remove an element from the set.
127.0.0.1:6379> SMEMBERS set01
1) "1"
2) "2"
3) "3"
127.0.0.1:6379> SREM set01 2
(integer) 1
127.0.0.1:6379> SMEMBERS set01
1) "1"
2) "3"

srandmember key n (pick n random members).
127.0.0.1:6379> sadd set01 1 2 3 4 5 6 7 8
(integer) 6
127.0.0.1:6379> SMEMBERS set01
1) "1"
2) "2"
3) "3"
4) "4"
5) "5"
6) "6"
7) "7"
8) "8"
127.0.0.1:6379> SRANDMEMBER set01 5
1) "6"
2) "1"
3) "2"
4) "5"
5) "3"

spop key ~ pop a random member.
127.0.0.1:6379> spop set01
"4"
127.0.0.1:6379> SMEMBERS set01
1) "1"
2) "2"
3) "3"
4) "5"
5) "6"
6) "7"
7) "8"


smove key1 key2 member ~ move the given member from key1 into key2.
127.0.0.1:6379> smove set01 set02 8
(integer) 1
127.0.0.1:6379> SMEMBERS set02
1) "8"
127.0.0.1:6379> SMEMBERS set01
1) "1"
2) "2"
3) "3"
4) "5"
5) "6"
6) "7"

Set algebra: sdiff (difference), sinter (intersection), sunion (union).
127.0.0.1:6379> SADD set01 1 2 3 4 5
(integer) 5
127.0.0.1:6379> SADD set02 1 2 3 a b
(integer) 5
127.0.0.1:6379> SDIFF set01 set02
1) "4"
2) "5"
127.0.0.1:6379> SINTER set01 set02
1) "1"
2) "2"
3) "3"
127.0.0.1:6379> SUNION set01 set02
1) "2"
2) "1"
3) "4"
4) "b"
5) "a"
6) "5"
7) "3"

Hash.

A Redis hash is a collection of key-value pairs.
Similar to Map<String, Object> in Java.
A Redis hash is a mapping of String fields to String values.
Hashes are particularly well suited to storing objects.

The KV model is unchanged, but V is itself a set of field-value pairs.
hset / hget / hmset / hmget / hgetall / hdel
127.0.0.1:6379> set str01 v1
OK
127.0.0.1:6379> get str01
"v1"

Hashes map conveniently onto JSON objects.

127.0.0.1:6379> hset user name geek
(integer) 1
127.0.0.1:6379> hget user name
"geek"
127.0.0.1:6379> HSET customer id 11 name zh3 age 25
(integer) 3
127.0.0.1:6379> HGET customer id
"11"
127.0.0.1:6379> HGET customer name
"zh3"
127.0.0.1:6379> HGETALL customer
1) "id"
2) "11"
3) "name"
4) "zh3"
5) "age"
6) "25"

hlen。
127.0.0.1:6379> HLEN customer
(integer) 3

hexists key field ~ does the hash at key contain the given field?
127.0.0.1:6379> HEXISTS customer id
(integer) 1
127.0.0.1:6379> HEXISTS customer add
(integer) 0

hkeys / hvals
127.0.0.1:6379> hkeys customer
1) "id"
2) "name"
3) "age"
127.0.0.1:6379> HVALS customer
1) "11"
2) "zh3"
3) "25"


hincrby / hincrbyfloat
127.0.0.1:6379> HGET customer age
"25"
127.0.0.1:6379> HINCRBY customer age 2
(integer) 27
127.0.0.1:6379> hset customer score 91
(integer) 1
127.0.0.1:6379> HINCRBYFLOAT customer score 0.5
"91.5"

hsetnx ~ set a field only if it does not already exist.
127.0.0.1:6379> hsetnx customer age 18
(integer) 0

(0: the age field already exists, so nothing is written.)

Sorted Set ~ zset.

Like a set, a Redis zset is a collection of unique String elements.
The difference is that every element is associated with a score of type double.
Redis uses the score to sort the members from low to high.
Members are unique, but scores may repeat.

zadd / zrange (withscores)
127.0.0.1:6379> zadd zset01 60 v1 70 v2 80 v3 90 v4 100 v5
(integer) 5
127.0.0.1:6379> ZRANGE zset01 0 -1
1) "v1"
2) "v2"
3) "v3"
4) "v4"
5) "v5"
127.0.0.1:6379> ZRANGE zset01 0 -1 withscores
 1) "v1"
 2) "60"
 3) "v2"
 4) "70"
 5) "v3"
 6) "80"
 7) "v4"
 8) "90"
 9) "v5"
10) "100"
127.0.0.1:6379> ZRANGEBYSCORE zset01 70 90
1) "v2"
2) "v3"
3) "v4"
  • Prefix a bound with ( to exclude it.
127.0.0.1:6379> ZRANGEBYSCORE zset01 (70 90
1) "v3"
2) "v4"
  • limit slices the result.

Take 2 starting from the second element.

127.0.0.1:6379> ZRANGEBYSCORE zset01 60 90 limit 1 2
1) "v2"
2) "v3"

zrem key member ~ remove an element.
127.0.0.1:6379> ZREM zset01 v5
(integer) 1
127.0.0.1:6379> ZRANGE zset01 0 -1
1) "v1"
2) "v2"
3) "v3"
4) "v4"

zcard / zcount key min max / zrank key member ~ get the member's index (rank) / zscore key member ~ get the member's score.
127.0.0.1:6379> ZCARD zset01
(integer) 4
127.0.0.1:6379> ZCOUNT zset01 60 80
(integer) 3
127.0.0.1:6379> ZRANK zset01 v4
(integer) 3
127.0.0.1:6379> ZSCORE zset01 v4
"90"

zrevrank key member ~ get the index in reverse order.
127.0.0.1:6379> ZREVRANK zset01 v4
(integer) 0

zrevrange
127.0.0.1:6379> ZREVRANGE zset01 0 -1
1) "v4"
2) "v3"
3) "v2"
4) "v1"

zrevrangebyscore key max min
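A sketch against the same zset01 (scores 60-90 after v5 was removed):
127.0.0.1:6379> ZREVRANGEBYSCORE zset01 90 60
1) "v4"
2) "v3"
3) "v2"
4) "v1"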

Bitmaps and HyperLogLogs.
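
A minimal sketch of both, with hypothetical keys sign (a bitmap) and uv (a HyperLogLog):

127.0.0.1:6379> SETBIT sign 7 1
(integer) 0
127.0.0.1:6379> GETBIT sign 7
(integer) 1
127.0.0.1:6379> BITCOUNT sign
(integer) 1
127.0.0.1:6379> PFADD uv user1 user2 user1
(integer) 1
127.0.0.1:6379> PFCOUNT uv
(integer) 2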

Redis Keys.

keys * ~ list all keys.

exists key ~ check whether a key exists.
127.0.0.1:6379> set k1 v1
OK
127.0.0.1:6379> set k2 v2
OK
127.0.0.1:6379> exists k1
(integer) 1

move key db ~ move a key to another database.
127.0.0.1:6379> move k2 1
(integer) 1
127.0.0.1:6379> exists k2
(integer) 0
127.0.0.1:6379> select 1
OK
127.0.0.1:6379[1]> keys *
1) "k2"

expire key seconds ~ set a time-to-live on the given key.
ttl key ~ time to live ~ how many seconds until it expires. -1 means it never expires; -2 means it has already expired.
127.0.0.1:6379> ttl k2
(integer) -1

127.0.0.1:6379> expire k2 10
(integer) 1
127.0.0.1:6379> ttl k2
(integer) 8
127.0.0.1:6379> ttl k2
(integer) -2
Overwriting:
127.0.0.1:6379> get k1
"v1"
127.0.0.1:6379> set k1 geek
OK
127.0.0.1:6379> get k1
"geek"
127.0.0.1:6379> 


type key ~ show a key's data type.
127.0.0.1:6379> type k1
string

The Configuration File.

A very important habit on Linux: back up the config file first (cp redis.conf redis.conf.bak).
First, a note on units in Redis:
# Note on units: when memory size is needed, it is possible to specify
# it in the usual form of 1k 5GB 4M and so forth:
#
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
#
# units are case insensitive so 1GB 1Gb 1gB are all the same.

INCLUDES.

Similar to Struts2 configuration: redis.conf acts as the master file and can include others.

################################## INCLUDES ###################################

# Include one or more other config files here.  This is useful if you
# have a standard template that goes to all Redis servers but also need
# to customize a few per-server settings.  Include files can include
# other files, so use this wisely.
#
# Notice option "include" won't be rewritten by command "CONFIG REWRITE"
# from admin or Redis Sentinel. Since Redis always uses the last processed
# line as value of a configuration directive, you'd better put includes
# at the beginning of this file to avoid overwriting config change at runtime.
#
# If instead you are interested in using includes to override configuration
# options, it is better to use include as the last line.
#
# include /path/to/local.conf
# include /path/to/other.conf


GENERAL.
  • daemonize no —> yes

  • supervised no

  • pidfile /var/run/redis_6379.pid

  • port 6379

  • tcp-backlog

In a high-concurrency environment you need a high backlog value to avoid slow-client connection issues. Note that the Linux kernel silently truncates it to the value of /proc/sys/net/core/somaxconn, so make sure to raise both somaxconn and tcp_max_syn_backlog to get the desired effect.
# TCP listen() backlog.
#
# In high requests-per-second environments you need an high backlog in order
# to avoid slow clients connections issues. Note that the Linux kernel
# will silently truncate it to the value of /proc/sys/net/core/somaxconn so
# make sure to raise both the value of somaxconn and tcp_max_syn_backlog
# in order to get the desired effect.
tcp-backlog 511
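
For the directive to take full effect, the kernel limits must be at least as high; a sketch (values illustrative):

[root@localhost ~]# sysctl -w net.core.somaxconn=511
[root@localhost ~]# sysctl -w net.ipv4.tcp_max_syn_backlog=511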

  • bind

################################## NETWORK #####################################

# By default, if no "bind" configuration directive is specified, Redis listens
# for connections from all the network interfaces available on the server.
# It is possible to listen to just one or multiple selected interfaces using
# the "bind" configuration directive, followed by one or more IP addresses.
#
# Examples:
#
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1 ::1
#
# ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the
# internet, binding to all the interfaces is dangerous and will expose the
# instance to everybody on the internet. So by default we uncomment the
# following bind directive, that will force Redis to listen only into
# the IPv4 loopback interface address (this means Redis will be able to
# accept connections only from clients running into the same computer it
# is running).
#
# IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES
# JUST COMMENT THE FOLLOWING LINE.
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#bind 127.0.0.1


  • timeout 0
# Close the connection after a client is idle for N seconds (0 to disable)
timeout 0
  • tcp-keepalive 300 (the default since Redis 3.2.1)
# TCP keepalive.
#
# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
# of communication. This is useful for two reasons:
#
# 1) Detect dead peers.
# 2) Take the connection alive from the point of view of network
#    equipment in the middle.
#
# On Linux, the specified value (in seconds) is the period used to send ACKs.
# Note that to close the connection the double of the time is needed.
# On other kernels the period depends on the kernel configuration.
#
# A reasonable value for this option is 300 seconds, which is the new
# Redis default starting with Redis 3.2.1.
tcp-keepalive 300

  • loglevel.

  • logfile.


Password security.
[root@localhost redis-4.0.11]# redis-server redis.conf
1723:C 16 Mar 02:00:50.549 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1723:C 16 Mar 02:00:50.549 # Redis version=4.0.11, bits=64, commit=00000000, modified=0, pid=1723, just started
1723:C 16 Mar 02:00:50.549 # Configuration loaded
[root@localhost redis-4.0.11]# redis-cli
127.0.0.1:6379> ping
PONG

127.0.0.1:6379> CONFIG GET dir
1) "dir"
2) "/usr/local/bin"

127.0.0.1:6379> CONFIG GET requirepass
1) "requirepass"
2) ""

Once a password is set, ping stops working until you authenticate.

127.0.0.1:6379> CONFIG SET requirepass 123.
OK
127.0.0.1:6379> ping
(error) NOAUTH Authentication required.
127.0.0.1:6379> auth 123.
OK
127.0.0.1:6379> ping
PONG

Removing the password:

127.0.0.1:6379> CONFIG SET requirepass ""
OK
127.0.0.1:6379> ping
PONG

LIMITS.
maxclients 10000
maxmemory <bytes>
maxmemory-policy noeviction
  • LRU ——> Least Recently Used.

  • volatile-lru ——> evict keys by LRU, but only among keys with an expire set.

  • allkeys-lru ——> evict any key by LRU.

  • volatile-random ——> evict random keys among those with an expire set.

  • allkeys-random ——> evict random keys.

  • volatile-ttl ——> evict the keys with the smallest TTL, i.e. those closest to expiring.

  • noeviction ——> evict nothing; write operations simply return an error. A config sketch follows this list.
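
A hedged sketch of enabling eviction (the 100 MB cap is illustrative):

maxmemory 100mb
maxmemory-policy allkeys-lru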

# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. You can select among five behaviors:
#
# volatile-lru -> Evict using approximated LRU among the keys with an expire set.
# allkeys-lru -> Evict any key using approximated LRU.
# volatile-lfu -> Evict using approximated LFU among the keys with an expire set.
# allkeys-lfu -> Evict any key using approximated LFU.
# volatile-random -> Remove a random key among the ones with an expire set.
# allkeys-random -> Remove a random key, any key.
# volatile-ttl -> Remove the key with the nearest expire time (minor TTL)
# noeviction -> Don't evict anything, just return an error on write operations.
#
# LRU means Least Recently Used
# LFU means Least Frequently Used
#
# Both LRU, LFU and volatile-ttl are implemented using approximated
# randomized algorithms.
#
# Note: with any of the above policies, Redis will return an error on write
#       operations, when there are no suitable keys for eviction.
#
#       At the date of writing these commands are: set setnx setex append
#       incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
#       sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
#       zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
#       getset mset msetnx exec sort
#
# The default is:
#
# maxmemory-policy noeviction


maxmemory-samples 5

Sets the sample size. The LRU and minimal-TTL algorithms are approximations, not exact algorithms, so the sample size is tunable.
By default Redis checks this many keys and evicts the least recently used among them.

# LRU, LFU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory), so you can tune it for speed or
# accuracy. For default Redis will check five keys and pick the one that was
# used less recently, you can change the sample size using the following
# configuration directive.
#
# The default of 5 produces good enough results. 10 Approximates very closely
# true LRU but costs more CPU. 3 is faster but not very accurate.
#
# maxmemory-samples 5


Redis Persistence ~ RDB ~ Redis DataBase.

RDB ~ what.

Writes a point-in-time snapshot of the in-memory dataset to disk at specified intervals.
Jargon: a snapshot ——> on recovery, the snapshot is read straight back into memory.

Redis forks a separate child process to do the persistence. The data is first written to a temporary file; once the snapshot completes, the temporary file replaces the previous snapshot file.
Throughout the whole process the main process performs no disk I/O at all, which ensures very high performance.

If you need to restore a large dataset and are not very sensitive to the completeness of the most recent writes, RDB is more efficient than AOF. RDB's drawback: the data written after the last snapshot may be lost.


RDB ~ fork.

fork duplicates the current process. The new process has the same data (variables, environment variables, program counter, and so on) as the original, but it is a brand-new process running as a child of the original.

If the main process is large, forking can waste a lot of memory.


RDB saves to the dump.rdb file.
Configured in the SNAPSHOTTING section:
################################ SNAPSHOTTING  ################################
#
# Save the DB on disk:
#
#   save <seconds> <changes>
#
#   Will save the DB if both the given number of seconds and the given
#   number of write operations against the DB occurred.
#
#   In the example below the behaviour will be to save:
#   after 900 sec (15 min) if at least 1 key changed
#   after 300 sec (5 min) if at least 10 keys changed
#   after 60 sec if at least 10000 keys changed
#
#   Note: you can disable saving completely by commenting out all "save" lines.
#
#   It is also possible to remove all the previously configured save
#   points by adding a save directive with a single empty string argument
#   like in the following example:
#
#   save ""

save 900 1
save 300 10
save 60 10000

Defaults:

10,000 changes within 1 minute,
10 changes within 5 minutes,
1 change within 15 minutes.

Modify the config file:

save 120 10

After 10 writes within 2 minutes, dump.rdb is generated.

Note: whenever the condition is met again (more than 10 keys changed within 120 s; flushall counts too), a new dump.rdb is written over the previous one. So back up dump.rdb regularly.
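
A quick way to watch it happen (hypothetical keys; dump.rdb lands in the directory reported by CONFIG GET dir):

127.0.0.1:6379> mset a1 1 a2 2 a3 3 a4 4 a5 5 a6 6 a7 7 a8 8 a9 9 a10 10
OK
[root@localhost redis-4.0.11]# ls -l dump.rdb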


The save command generates dump.rdb immediately.

stop-writes-on-bgsave-error yes

If background saving fails, stop accepting writes in the foreground.

# By default Redis will stop accepting writes if RDB snapshots are enabled
# (at least one save point) and the latest background save failed.
# This will make the user aware (in a hard way) that data is not persisting
# on disk properly, otherwise chances are that no one will notice and some
# disaster will happen.
#
# If the background saving process will start working again Redis will
# automatically allow writes again.
#
# However if you have setup your proper monitoring of the Redis server
# and persistence, you may want to disable this feature so that Redis will
# continue to work as usual even if there are problems with disk,
# permissions, and so forth.
stop-writes-on-bgsave-error yes


rdbcompression yes

Snapshots written to disk are compressed with the LZF algorithm. This costs CPU.

# Compress string objects using LZF when dump .rdb databases?
# For default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
rdbcompression yes


rdbchecksum yes

After storing a snapshot, Redis validates it with a CRC64 checksum. This adds roughly 10% performance overhead.

# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
# This makes the format more resistant to corruption but there is a performance
# hit to pay (around 10%) when saving and loading RDB files, so you can disable it
# for maximum performances.
#
# RDB files created with checksum disabled have a checksum of zero that will
# tell the loading code to skip the check.
rdbchecksum yes


dbfilename dump.rdb
# The filename where to dump the DB
dbfilename dump.rdb

dir ./

# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
dir ./

How to trigger an RDB snapshot.
  • The save points in the config file.
  • save / bgsave
  • save.
    save does nothing but save; everything else blocks.
  • bgsave.
    Redis snapshots asynchronously in the background.
    It keeps responding to client requests while snapshotting.
    The lastsave command returns the time of the last successful snapshot.
  • flushall triggers one too, but the resulting dump.rdb is empty, hence meaningless.

  • How to disable it.

In redis-cli:

config set save ""
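
A sketch of the manual commands (the LASTSAVE timestamp is illustrative):

127.0.0.1:6379> BGSAVE
Background saving started
127.0.0.1:6379> LASTSAVE
(integer) 1552646500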


Inspect the Redis process.
[root@localhost redis-4.0.11]# ps -ef | grep redis
root       1838      1  0 02:40 ?        00:00:02 redis-server *:6379    
root       1951   1846  0 03:25 pts/0    00:00:00 grep redis
[root@localhost redis-4.0.11]# lsof -i :6379
COMMAND    PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
redis-ser 1838 root    6u  IPv6  13836      0t0  TCP *:6379 (LISTEN)
redis-ser 1838 root    7u  IPv4  13837      0t0  TCP *:6379 (LISTEN)

Redis Persistence ~ AOF ~ Append Only File.

RDB loses the data written since the last snapshot.

↓ ↓ ↓

AOF: loses at most about one second of data.

appendonly.aof records every write command.

Each write operation is logged: every write command Redis executes is recorded (reads are not). The file may only be appended to, never rewritten in place. On startup Redis reads this file to rebuild the data; in other words, when Redis restarts it replays the logged write commands from first to last to restore the dataset.


Configuration.

Change appendonly no to yes.

############################## APPEND ONLY MODE ###############################

# By default Redis asynchronously dumps the dataset on disk. This mode is
# good enough in many applications, but an issue with the Redis process or
# a power outage may result into a few minutes of writes lost (depending on
# the configured save points).
#
# The Append Only File is an alternative persistence mode that provides
# much better durability. For instance using the default data fsync policy
# (see later in the config file) Redis can lose just one second of writes in a
# dramatic event like a server power outage, or a single write if something
# wrong with the Redis process itself happens, but the operating system is
# still running correctly.
#
# AOF and RDB persistence can be enabled at the same time without problems.
# If the AOF is enabled on startup Redis will load the AOF, that is the file
# with the better durability guarantees.
#
# Please check http://redis.io/topics/persistence for more information.

appendonly no

# The name of the append only file (default: "appendonly.aof")

appendfilename "appendonly.aof"

[root@localhost redis-4.0.11]# cat appendonly.aof 

*2
$6
SELECT
$1
0
*3
$3
set
$2
k1
$2
v1
*3
$3
set
$2
k2
$2
v2

Note: FLUSHALL also counts as a write.


appendfsync

Default: everysec.

# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log. Slow, Safest.
# everysec: fsync only one time every second. Compromise.
#

+ Always

Synchronous persistence: every change is immediately written to disk.
~ Worse performance, better data integrity.

+ Everysec

The factory-default recommendation. Asynchronous: fsync once per second; if the machine dies within that second, data is lost.

+ no

Don't fsync; let the OS flush when it wants. Fastest.
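
A minimal AOF configuration sketch combining these directives:

appendonly yes
appendfsync everysec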

no-appendfsync-on-rewrite no

Whether to apply appendfsync during rewrites. Keep the default no to guarantee data safety.


auto-aof-rewrite-percentage 100

auto-aof-rewrite-min-size 64mb

Rewrite.
What.

Because AOF appends to the file, the file keeps growing. To avoid this, a rewrite mechanism was added.

When the AOF file exceeds the configured threshold, Redis compacts its contents, keeping only the minimal set of commands needed to rebuild the data. A rewrite can also be triggered manually with the bgrewriteaof command; a sketch follows.
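
127.0.0.1:6379> BGREWRITEAOF
Background append only file rewriting started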


How rewriting works.

When the AOF file keeps growing and gets too large, Redis forks a new child process to rewrite the file (again writing a temporary file first and renaming it at the end). It walks the in-memory data of the new process, emitting a set command for each record. The rewrite never reads the old AOF file; it writes a brand-new AOF from the entire in-memory dataset expressed as commands, much like taking a snapshot.


Trigger mechanism.

Redis records the AOF size at the last rewrite. By default, a rewrite triggers when the AOF has doubled since the last rewrite and the file is larger than 64 MB.


RDB vs AOF.
  • If both are configured, Redis reads the AOF first on startup.

If we mangle appendonly.aof (editing the file and sprinkling "alien language" among the write-command records), the Redis server will not start.

[root@localhost redis-4.0.11]# vim appendonly.aof 
[root@localhost redis-4.0.11]# redis-server redis.conf
[root@localhost ~]# redis-cli
Could not connect to Redis at 127.0.0.1:6379: Connection refused
Could not connect to Redis at 127.0.0.1:6379: Connection refused
not connected> 

[root@localhost redis-4.0.11]# ps -ef | grep redis
root       2985   2894  0 11:40 pts/1    00:00:00 redis-cli
root       2987   2852  0 11:41 pts/0    00:00:00 grep redis

So redis-server did not start.

  • If appendonly.aof is damaged, it can be repaired with the tool that ships with Redis.

Usage:

[root@localhost bin]# ./redis-check-aof 
Usage: ./redis-check-aof [--fix] <file.aof>

[root@localhost redis-4.0.11]# /usr/local/bin/redis-check-aof --fix appendonly.aof 
0x               0: Expected prefix '*', got: 'a'
AOF analyzed: size=98, ok_up_to=0, diff=98
This will shrink the AOF from 98 bytes, with 98 bytes, to 0 bytes
Continue? [y/N]: y
Successfully truncated AOF
  • AOF and RDB persistence can be enabled at the same time without problems.

AOF's strengths: three fsync modes.
  • appendfsync always

Synchronous persistence: every change is immediately written to disk.
~ Worse performance, better data integrity.

  • appendfsync everysec

The factory-default recommendation. Asynchronous: fsync once per second; if the machine dies within that second, data is lost.

  • appendfsync no.

AOF's weaknesses: efficiency.
  • For the same dataset, the AOF file is far larger than the RDB file, and recovery is slower than with RDB.
  • AOF runs slower than RDB: the per-second fsync policy performs well, and with fsync disabled performance matches RDB.

Redis Transactions.
