Redis: From Getting Started to Mastery

1. Introduction to NoSQL

​ Main NoSQL databases: Redis, Memcached, MongoDB

1. Functional advantages

  • Easy to expand

    There are many types of NoSQL databases, but a common feature is to remove the relational features of relational databases.

    There is no relationship between the data items, so it is very easy to scale out; this implicitly brings scalability at the architectural level.

  • Large amount of data and high performance

    NoSQL databases have very high read and write performance, especially in the case of large amounts of data, and also perform well. This is due to its non-relational nature and the simple structure of the database.

    Generally, MySQL uses Query Cache, and the cache becomes invalid every time the table is updated. It is a large-grained cache, and the performance of the cache is not high in applications that interact frequently with web2.0. The NoSQL cache is record-level, which is a fine-grained cache, so NoSQL has much higher performance at this level.

  • Diverse and flexible data models

    NoSQL does not need to create fields for the data to be stored in advance, and can store customized data formats at any time. In a relational database, adding and deleting fields is a very troublesome thing. If it is a table with a very large amount of data, adding fields is simply a nightmare.


2. Traditional RDBMS vs NoSQL

  • RDBMS
  1. Highly Organized Structured Data

  2. Structured Query Language (SQL)

  3. Both data and relationships are stored in separate tables

  4. Data Manipulation Language, Data Definition Language

  5. strict consistency

  6. Support for transactions

  • NOSQL
  1. NoSQL stands for "Not Only SQL": more than just SQL

  2. no declarative query language

  3. no predefined schemas

  4. Key-value pair storage, column storage, document storage, graph database

  5. Eventual consistency, not ACID properties

  6. Unstructured and Unpredictable Data

  7. CAP theorem

  8. High Performance, High Availability and Scalability


3. 3V + 3 High

  • 3V of big data: Volume (huge amounts of data), Variety (many kinds of data), Velocity (high speed of data)
  • 3 High of internet applications: high concurrency, high scalability, high performance


4. Four categories of NoSQL database

  • Key-value stores: Redis, Memcached
  • Column stores: HBase, Cassandra
  • Document stores: MongoDB, CouchDB
  • Graph databases: Neo4j

5. CAP principle CAP + BASE in distributed databases

5.1. CAP:

  • C (Consistency): strong consistency
  • A (Availability)
  • P (Partition tolerance)

5.2. CAP: pick 2 out of 3

A distributed storage system can satisfy at most two of the three properties at the same time. Since network partitions are unavoidable in a distributed system, P must be guaranteed, so the real trade-off is between consistency (CP) and availability (AP).

5.3 What is BASE

BASE is a solution proposed to address the loss of availability caused by the strong consistency of relational databases.

BASE is in fact an acronym for the following three terms: 

 - Basically Available
 - Soft state
 - Eventually consistent

Its idea is to improve the overall scalability and performance of the system by allowing it to relax the data-consistency requirement at any given moment. Why? Because large-scale systems, being geographically distributed and under extreme performance demands, usually cannot rely on distributed transactions to meet these goals; another approach is needed, and BASE is that solution;

5.4. Distributed system + cluster

distributed system

It consists of multiple computers, and the communicating software components connecting them, over a computer network (local or wide area). It is a software system built on top of the network; precisely because of the characteristics of software, distributed systems are highly cohesive and transparent. The difference between a mere network and a distributed system therefore lies more in the high-level software (especially the operating system) than in the hardware. Distributed systems can run on various platforms, such as PCs, workstations, LANs and WANs, etc.

simply speaking:

  • Distributed: different service modules (projects) are deployed on different servers and communicate via RPC/RMI to provide services externally and collaborate internally.
  • Cluster: The same service module is deployed on different servers, and the distributed scheduling software is used for unified scheduling to provide external services and access.

2. Introduction to Redis

1. What is redis
   Redis (REmote DIctionary Server) is completely free and open source, written in C, and BSD-licensed. It is a high-performance key/value distributed in-memory database that runs in memory and supports persistence. It is one of the hottest NoSQL databases today and is also called a data-structure server;
2. Three features of Redis:
  • Redis supports data persistence, which can keep the data in memory on the disk, and can be loaded again for use when restarting

  • Redis not only supports simple key-value type data, but also provides storage of data structures such as list, set, zset, and hash

  • Redis supports data backup, that is, data backup in master-slave mode

3. Main functions
  • Memory storage and persistence: redis supports asynchronously writing data in memory to the hard disk without affecting the continued service
  • The operation of fetching the latest N data, such as: you can put the IDs of the latest 10 comments in the List collection of Redis
  • Simulate a function similar to HttpSession that needs to set an expiration time
  • Publish and subscribe message system
  • timer, counter
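The "latest N data" bullet above can be sketched in plain Python — a toy model of the LPUSH + LTRIM pattern (names are illustrative; this is not a Redis client):

```python
# Mimic "keep only the newest N comment ids" with LPUSH + LTRIM semantics.
def push_latest(items, item, n=10):
    """LPUSH key item, then LTRIM key 0 n-1: newest first, capped at n."""
    items.insert(0, item)      # LPUSH: push to the head
    del items[n:]              # LTRIM: keep only indexes 0..n-1
    return items

comments = []
for cid in range(1, 15):       # 14 comment ids arrive
    push_latest(comments, cid, n=10)

# Only the 10 newest ids remain, newest first.
print(comments)  # -> [14, 13, 12, 11, 10, 9, 8, 7, 6, 5]
```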
4. Linux installation

​ redis Chinese website: http://www.redis.cn/

​ Tutorial link: http://www.cnblogs.com/L-Test/p/9239709.html

1. Download redis-4.0.10

Download redis-4.0.10 on redis official website (https://redis.io/download)

2. Upload the installation package to the Linux server

Create a directory package under the root directory of the Linux server, and upload the installation package to this directory;

3. Install dependent packages (some may not be installed)

# installing gcc alone should be enough
[root@Cherry /]# yum install -y gcc  

[root@Cherry /]# yum install tcl

4. Create an installation directory (omitted)

[root@Cherry /]# mkdir /usr/local/redis

5. Unzip redis-4.0.10.tar.gz

[root@Cherry /]# cd /package/

[root@Cherry package]# tar -zxvf redis-4.0.10.tar.gz

6. Compile

[root@Cherry package]# cd redis-4.0.10

[root@Cherry redis-4.0.10]# make

7. Installation


[root@Cherry redis-4.0.10]# make PREFIX=/usr/local/redis install

Note: PREFIX should be capitalized

After the installation is complete, a bin directory will be generated in the /usr/local/redis directory, which contains the following files:

redis-benchmark performance testing tool
  redis-check-aof tool for checking aof files
  redis-check-rdb tool for checking rdb files
  redis-cli client
  redis-server server

redis.conf configuration file (if it is missing, go to step 8)

8. Copy the redis configuration file to the installation directory (omitted)

[root@Cherry redis-4.0.10]# cp redis.conf /usr/local/redis/bin/

9. Modify the redis configuration file and configure redis background startup

[root@Cherry redis-4.0.10]# cd /usr/local/redis/bin

[root@Cherry bin]# vim /usr/local/redis/bin/redis.conf

Note: change daemonize no to daemonize yes

​ The default now seems to be yes, i.e. it runs silently in the background;

​ To install vim: yum -y install vim*

​ To search in vim, type " :/daemonize "; n jumps to the next match, N to the previous one; press j to scroll down if the file fills the screen;

10. Add redis to boot

[root@Cherry bin]# vi /etc/rc.local

Add content inside: /usr/local/redis/bin/redis-server /usr/local/redis/bin/redis.conf

11. Start redis

[root@Cherry bin]# /usr/local/redis/bin/redis-server /usr/local/redis/bin/redis.conf

#check whether redis is running on its port
[root@Cherry bin]# ps -ef|grep redis

12. Test

[root@Cherry bin]# ./redis-cli 

  127.0.0.1:6379> set tomorrow bad		# store a key-value pair in redis
  OK
  127.0.0.1:6379> get tomorrow			# get the value by key
  "bad"
  127.0.0.1:6379> exit					# exit redis 

13. Close redis

[root@Cherry bin]# ./redis-cli shutdown

14. Modify the redis configuration file, redis can be connected remotely

[root@Cherry bin]# vi /usr/local/redis/bin/redis.conf

Change bind 127.0.0.1 to #bind 127.0.0.1 (comment it out)

Change protected-mode yes to protected-mode no

Uncomment #requirepass foobared and change foobared to your own password

15. Open port 6379 of the Linux server

If you still can’t connect remotely, you need to open port 6379 of the Linux server

[root@Cherry bin]# vi /etc/sysconfig/iptables

Add port configuration: -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 6379 -j ACCEPT

Save changes: service iptables save

Restart the firewall: service iptables restart

5. Miscellaneous explanation after Redis startup
  1. single process

    Redis handles client requests with a single-process model; responses to read/write and other events are dispatched through a wrapper around the epoll function, so Redis's actual processing speed depends entirely on the efficiency of the main process;

    epoll is an improved mechanism in the Linux kernel for handling large numbers of file descriptors. It is an enhanced version of the multiplexed I/O interfaces select/poll under Linux, and it can significantly improve a system's CPU utilization when a program has many concurrent connections of which only a few are active;

  2. There are 16 databases by default, indexed from zero, and database 0 is used initially; switch with select <index> (0~15);

    • select: switch to another database
    • dbsize: number of keys in the current database
      • keys * : list all keys of the current database
      • keys ? : match keys of the current database whose name is a single character
    • flushdb: clear the current database; flushall: clear all 16 databases;
    • All 16 databases share one password: either all are accessible or none can be connected to
    • Redis indexes start from zero
    • Why the default port is 6379: it spells MERZ on a phone keypad, a reference to Italian showgirl Alessia Merz chosen by Redis's author

3. Redis data types

1. Redis five major data types
  • String

    string is the most basic redis type; you can think of it as the same model as Memcached: one key maps to one value

    The string type is binary safe. It means that the string of redis can contain any data. Such as jpg images or serialized objects.

    The string type is the most basic Redis data type; a single string value can be up to 512M (a huge amount)

  • Hash (hashing, like Map in java)

    Redis hash is a collection of key-value pairs.

    Redis hash is a mapping table between field and value of string type, and hash is especially suitable for storing objects.

    Similar to Map<String, Object> in Java

  • List (list)

    Redis lists are simply lists of strings, sorted by insertion order. You can add an element to the head (left) or tail (right) of the list

    Its bottom layer is actually a linked list

  • Set

    Redis's Set is an unordered collection of strings. It is implemented with a hash table

  • Zset (sorted set: ordered collection)

    Like set, Redis zset is also a collection of string type elements, and duplicate members are not allowed.

    The difference is that each element will be associated with a score of type double.

    Redis uses scores to sort the members of the set from small to large. The members of zset are unique, but the score (score) can be repeated.

2. Redis operation instructions

​ Reference: http://redisdoc.com/

3. Redis key (key)

​ key: the name of the stored key; db: the database index

  1. Single key, single value:

    type key : View the type of the current key;

    del key : delete the specified key;

4. Redis string (String)
  1. Commonly used:

  2. case:

    set k1 1			# set key k1 to the String value "1"
    get k1				# get the value by key
    del k1				# delete by key
    append k1 2			# append to the value of the key, value="12"
    strlen k1			# length of the value of the key
    
    # the commands below only work when the string is purely numeric
    incr k1				# increment the value of the key by 1, value=13
    decr k1				# decrement by 1
    incrby k1 5			# increment by exactly 5
    decrby k1 2			# decrement by exactly 2
    
    
    set k2 abc1234
    getrange k2 0 3		# substring over the inclusive index range, value="abc1"; the range 0 -1 returns the whole string
    setrange k2 1 wang	# overwrite in place starting at the given offset, value="awang34" (original value: abc1234)
    
    
    setex k3 20 456a	# set k3="456a" with a 20-second time to live
    ttl k3				# seconds of TTL left for k3; -2 means expired, -1 means it never expires
    setnx k3 123		# set only if the key does not already exist in the database
    
    
    mset k4 v4 k5 v5	# set several keys at once: k4:v4, k5:v5
    mget k4 k5			# get the values of k4 and k5 at once
    msetnx k5 v78 k6 v6	 # set several keys only if none of them exists; fails if any already exists
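As a cross-check of the getrange/setrange/incr semantics in the case above, here is a minimal pure-Python model of a few String commands (a sketch of the command semantics, not the Redis implementation; function names are illustrative):

```python
# Toy in-memory "database": every value is stored as a string, like Redis.
db = {}

def set_(key, value):
    db[key] = str(value)

def get(key):
    return db.get(key)

def incrby(key, n):                 # INCRBY: only valid for integer strings
    db[key] = str(int(db[key]) + n)
    return int(db[key])

def getrange(key, start, end):      # inclusive end; -1 means "to the last char"
    s = db[key]
    return s[start:] if end == -1 else s[start:end + 1]

def setrange(key, offset, value):   # overwrite in place starting at offset
    s = db[key]
    db[key] = s[:offset] + value + s[offset + len(value):]
    return len(db[key])

set_("k2", "abc1234")
assert getrange("k2", 0, 3) == "abc1"     # matches "getrange k2 0 3"
setrange("k2", 1, "wang")
assert get("k2") == "awang34"             # matches "setrange k2 1 wang"
set_("k1", "12")
assert incrby("k1", 1) == 13              # matches "incr k1" on value "12"
```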
    
5. Redis list (List)
  1. Single value multi-value:

  2. case:

     # with lpush the list behaves like a stack: what goes in first comes out last (first in, last out)
     lpush l1 1 2 3 4 5		# create list l1 = [1,2,3,4,5], pushed from the left
     lrange l1 0 -1			# all values of l1, in order: 5 4 3 2 1
      
     rpush l2 1 2 3 4 5		# same, but pushed from the right
     lrange l2 0 -1			# all values of l2, in order: 1 2 3 4 5
     
     # l1 is now ordered: 5 4 3 2 1
     lpop l1				# pop one element from the left: removes 5
     rpop l1				# pop one element from the right: removes 1
     lrange	l1 0 -1			# remaining order: 4 3 2 
     lindex l1 1			# get by index from the head (left to right): returns 3
     llen l1				# length of l1, now 3
     
     
     lpush l3 1 2 2 2 3 3 4	5  # pushed from the left
     lrem l3 3 2			 # remove three occurrences of 2 from l3; l3 result: 5 4 3 3 1
     ltrim l3 0 2			 # keep only the given index range of l3; l3 result: 5 4 3
     
     rpoplpush l1 l3		 # pop the rightmost element of l1 and push it onto the left of l3
     LRANGE l1 0 -1			 # l1 is now: 4 3
     LRANGE l3 0 -1			 # l3 is now: 2 5 4 3
     
     lset l1 1 x			# replace the value at index 1 of l1 with x; l1 is now: 4 x
     linsert l1 before x java  # insert java before the value x in l1; l1: 4 java x
     linsert l1 after x oracle # insert oracle after the value x in l1; l1: 4 java x oracle
     
     
    
    sadd key1 member1		# add a value to a SET; returns 0 if the member already exists in the set
    
  3. Performance summary:

    • It is a linked list of strings; elements can be inserted at both the left and the right end;
    • If the key does not exist, a new linked list is created;
    • If the key already exists, the new content is appended;
    • If all values are removed, the corresponding key disappears;
    • Operations on the head or tail of the linked list are extremely efficient, but operations on middle elements are painfully slow;
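The head/tail-versus-middle point in the summary can be seen with collections.deque, whose cost profile matches a doubly linked list: O(1) pushes and pops at both ends, O(n) access in the middle (a plain-Python analogy, not Redis itself):

```python
from collections import deque

l1 = deque()
for v in [1, 2, 3, 4, 5]:
    l1.appendleft(v)                   # LPUSH: stored as 5 4 3 2 1
assert list(l1) == [5, 4, 3, 2, 1]     # LRANGE l1 0 -1

l1.popleft()                           # LPOP:  removes 5, O(1)
l1.pop()                               # RPOP:  removes 1, O(1)
assert list(l1) == [4, 3, 2]

assert l1[1] == 3                      # LINDEX 1: O(n) in a real linked list
```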
6. Redis collection (Set)
  1. Single value multi-value:

  2. case:

    sadd s1 1 1 2 3 3			# create set s1; duplicate values are not added
    smembers s1					# list the members of s1, returns: 1 2 3
    sismember s1 1				# check whether 1 is a member of s1
    scard s1					# number of members in s1
    srem s1 3					# remove the specified member 3 from s1
    srandmember s1 3			# return 3 random members of s1
    spop s1 					# pop (remove and return) one random member of s1
    smove s1 s2 2				# move the member 2 from s1 into s2
    
    del s1						# delete s1 
    # re-create s1 and s2
    sadd s1 1 2 3 4 5			 
    sadd s2 1 2 3 a b
    
    sdiff s1 s2					# difference, returns: 4 5	(in s1 but not in s2)
    sinter s1 s2				# intersection, returns: 1 2 3
    sunion s1 s2				# union, returns: 5 b 4 a 3 2 1
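The three set operators above map directly onto Python set algebra; a quick model of the sdiff/sinter/sunion results shown (order is irrelevant in a set):

```python
s1 = {"1", "2", "3", "4", "5"}
s2 = {"1", "2", "3", "a", "b"}

assert s1 - s2 == {"4", "5"}                              # SDIFF s1 s2
assert s1 & s2 == {"1", "2", "3"}                         # SINTER s1 s2
assert s1 | s2 == {"1", "2", "3", "4", "5", "a", "b"}     # SUNION s1 s2
```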
    
7. Redis hash (Hash)
  1. The KV schema remains the same, but V is a key-value pair:

  2. case:

    # key=user, field=name, value=zhangsan; hashes are a natural fit for storing objects
    hset user name zhangsan		# user now has							1 field
    hget user name				# to read, give the key plus the field name inside the value
    
    hmset user age 12 sex nan	 # set several fields at once; user now has	3 fields						
    hmget user age sex			# get several fields at once
    hgetall user				# print everything stored under the key, like an object's toString
    hdel user sex				# delete the sex field of user;				2 fields remain
    
    hlen user					# number of fields in user:					prints 2
    
    hexists user name			# does the field name exist? returns 1 if yes, 0 if no
    hkeys user					# list the field names (the keys inside the value) of user
    hvals user					# list the field values (the values inside the value) of user
    
    hincrby user age 5			# add 5 to the age field; only works when the field value is numeric
    hsetnx user score 92.5		# set the field only if it does not exist yet; sets score to 92.5
    hincrbyfloat  user score 2.5	# increment by a float
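Since a hash is a field-to-value map stored under one key, it can be modelled as a dict of dicts; a small sketch of the hset/hget/hincrby semantics above (illustrative only, not a Redis client):

```python
db = {}

def hset(key, field, value):
    db.setdefault(key, {})[field] = value

def hget(key, field):
    return db.get(key, {}).get(field)

def hincrby(key, field, n):         # only meaningful for numeric field values
    db[key][field] = int(db[key][field]) + n
    return db[key][field]

hset("user", "name", "zhangsan")
hset("user", "age", 12)
assert hget("user", "name") == "zhangsan"   # matches "hget user name"
assert hincrby("user", "age", 5) == 17      # matches "hincrby user age 5"
assert len(db["user"]) == 2                 # HLEN user
```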
    
    
8. Redis sorted set Zset (Sorted set)
  1. Parse:

    1. On top of set, each member is given a score. Where a set was k1 v1 v2 v3, a zset is k1 score1 v1 score2 v2
  2. case:

    zadd z1 70 v1 80 v2 90 v3 100 v4		# create key z1, giving each value a score
    zrange z1 0 -1					# all values in z1 by ascending score, result: v1 v2 v3 v4
    zrange z1 0 -1 withscores		# print with scores: v1 70 v2 80 ....
    
    zrangebyscore z1 70 90			# print the values with score in 70~90: v1 v2 v3
    zrangebyscore z1 70 (90			# ( means exclusive: 90 is excluded, so it prints: v1 v2
    zrangebyscore z1 70 90 limit 1 2 # limit: start at result index 1 and take 2, prints: v2 v3
    
     zrem z1 v4						# remove the specified value from z1; its score goes with it
     zcard z1 						# number of members in z1
     
     zrank z1 v1					# get the index (rank) of v1 in z1
     zscore z1 v2					# get the score assigned to v2 in z1
     zrevrank z1 v3					# get the rank of v3 counting from the bottom (reverse order)
     zrevrange z1 0 -1				# print the whole result set in reverse order
     zrevrangebyscore z1 80 70		 # print the values with score 70~80 in reverse order
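A sorted set pairs each member with a double score and returns members by ascending score; a minimal dict-based model of the zadd/zrange/zrangebyscore calls above (a sketch of the semantics, not the skiplist structure Redis actually uses; names are illustrative):

```python
z1 = {}                                    # member -> score

def zadd(z, *pairs):                       # zadd z 70 v1 80 v2 ...
    for score, member in zip(pairs[::2], pairs[1::2]):
        z[member] = float(score)

def zrange(z):                             # ZRANGE z 0 -1: ascending by score
    return sorted(z, key=z.get)

def zrangebyscore(z, lo, hi):              # inclusive score range
    return [m for m in zrange(z) if lo <= z[m] <= hi]

zadd(z1, 70, "v1", 80, "v2", 90, "v3", 100, "v4")
assert zrange(z1) == ["v1", "v2", "v3", "v4"]
assert zrangebyscore(z1, 70, 90) == ["v1", "v2", "v3"]
assert zrange(z1).index("v1") == 0         # ZRANK z1 v1
assert z1["v2"] == 80                      # ZSCORE z1 v2
```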
     
    

4. Redis configuration file redis.conf

1. Units
  1. Configure the size unit, some basic measurement units are defined at the beginning, only bytes are supported, bits are not supported
  2. case insensitive
# Redis configuration file example

# Note on units: when memory size is needed, it is possible to specify
# it in the usual form of 1k 5GB 4M and so forth:
#
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
#
# units are case insensitive so 1GB 1Gb 1gB are all the same.
2. INCLUDE contains
  1. Similar to the Struts2 configuration file, other files can be pulled in via include; redis.conf can act as the main entry file that includes the others
################################## INCLUDES ###################################

# Include one or more other config files here.  This is useful if you
# have a standard template that goes to all Redis servers but also need
# to customize a few per-server settings.  Include files can include
# other files, so use this wisely.
#
# Notice option "include" won't be rewritten by command "CONFIG REWRITE"
# from admin or Redis Sentinel. Since Redis always uses the last processed
# line as value of a configuration directive, you'd better put includes
# at the beginning of this file to avoid overwriting config change at runtime.
#
# If instead you are interested in using includes to override configuration
# options, it is better to use include as the last line.
#
# include /path/to/local.conf
# include /path/to/other.conf
3. GENERAL
pidfile /var/run/redis.pid				# pid file for the daemonized process
daemonize yes						   # run silently in the background
port 6379							   # default port 6379

tcp-backlog:

​ Sets the TCP backlog. The backlog is effectively a connection queue: backlog total = queue of connections with the three-way handshake pending + queue of connections with the handshake completed.

In a high-concurrency environment you need a high backlog value to avoid slow-client connection problems. Note that the Linux kernel silently caps this value at /proc/sys/net/core/somaxconn, so raise both somaxconn and tcp_max_syn_backlog to achieve the desired effect.

tcp-backlog 511						# keep the default if unsure
timeout 0							# close the connection after the client is idle for N seconds; 0 disables the timeout

# unit is seconds; 0 disables keepalive probing; 60 is the recommended setting
tcp-keepalive 0						

loglevel notice						# log level, one of 4: debug, verbose, notice, warning
logfile ""							# log file name; if not set, logs go to the console...

# syslog-enabled no					 # whether to send logs to syslog
# syslog-ident redis				# identity string used in syslog
# syslog-facility local0		     # syslog facility: USER or LOCAL0-LOCAL7
databases 16						# 16 databases by default

4. SNAPSHOTTING snapshot
  1. Save

    # the save rules complement each other: a snapshot is taken as soon as any one of them is met
    
    save 900 1						# snapshot if at least 1 change within 900 seconds
    save 300 10						# snapshot if at least 10 changes within 300 seconds
    save 60 10000					# snapshot if at least 10000 changes within 60 seconds
    
    # to disable the RDB persistence strategy, simply set no save directives at all, or pass save an empty string:
    save ""
    
    # to save immediately by hand, just type save
    127.0.0.1:6379> save
    
    dbfilename dump.rdb				# the snapshot file read on startup, dump.rdb by default
    
    # whether to stop accepting writes when a background save errors; configure no if you do not care about inconsistency or have other means of detecting and handling it
    stop-writes-on-bgsave-error yes  
    
    
    # rdbcompression: whether snapshots stored to disk are compressed. If yes, redis compresses with the LZF algorithm; if you would rather not spend CPU on compression, set this to no
    rdbcompression yes			
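The three save lines are OR-ed together: a snapshot triggers as soon as any (seconds, changes) pair is satisfied. A sketch of that decision in Python (the function name is illustrative, not part of Redis):

```python
# Each rule is (window in seconds, minimum number of changes).
SAVE_RULES = [(900, 1), (300, 10), (60, 10000)]

def should_snapshot(elapsed_seconds, changes, rules=SAVE_RULES):
    """True if any rule's time window has passed with enough changes."""
    return any(elapsed_seconds >= secs and changes >= n
               for secs, n in rules)

assert should_snapshot(901, 1)        # 1 change within 900s -> save
assert should_snapshot(61, 10000)     # burst of 10000 writes -> save
assert not should_snapshot(100, 5)    # too few changes, too soon -> no save
```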
    
7. Security
  1. Viewing, setting and clearing the access password

  2. redis.conf configures dir ./ , so generated files are saved under the path from which redis-server was started;

    127.0.0.1:6379> config get requirepass				# check the password; it is empty by default
    1) "requirepass"
    2) ""		
    127.0.0.1:6379> config get dir						# current startup path
    1) "dir"
    2) "/usr/local/redis/bin"
    127.0.0.1:6379> 
    
    127.0.0.1:6379> ping							# connected, before any password is set
    PONG											# this reply means the connection works
    127.0.0.1:6379> config set requirepass "123456"	   # set the password to 123456
    OK
    127.0.0.1:6379> ping							# ping no longer works unauthenticated
    (error) NOAUTH Authentication required.
    127.0.0.1:6379> auth 123456						# from now on, authenticate before each use
    OK
    127.0.0.1:6379> ping
    PONG
    127.0.0.1:6379> config set requirepass ""		# restore the default passwordless state
    OK
    127.0.0.1:6379> ping
    PONG
    
8. LIMITS
# maxclients 10000								# up to 10000 clients may connect simultaneously by default
			
# maxmemory <bytes>								# cap on memory usage				

# maxmemory-policy noeviction						 # eviction policy when memory is full; default: never evict

# volatile-lru -> remove the key with an expire set using an LRU algorithm
# allkeys-lru -> remove any key according to the LRU algorithm
# volatile-random -> remove a random key with an expire set
# allkeys-random -> remove a random key, any key
# volatile-ttl -> remove the key with the nearest expire time (minor TTL)
# noeviction -> don't expire at all, just return an error on write operations

# maxmemory-samples 5					# 5 samples by default
Sets the sample size. Neither the LRU algorithm nor the minimal-TTL algorithm is exact; both are approximations. Redis checks this many keys and evicts the best candidate among them, so the sample size is tunable;
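To make the sampling idea concrete: Redis's LRU is approximate, evicting the least recently used key among a small random sample rather than scanning all keys. A toy model of that idea (random sampling over an access-time map; not Redis's actual implementation):

```python
import random

def evict_one(last_access, samples=5, rng=random.Random(42)):
    """Pick `samples` random keys; evict the one with the oldest access time."""
    candidates = rng.sample(list(last_access), min(samples, len(last_access)))
    victim = min(candidates, key=last_access.get)
    del last_access[victim]
    return victim

# access time = i, so "k0" is the coldest key overall
keys = {f"k{i}": i for i in range(100)}
victim = evict_one(keys)
assert victim not in keys        # the sampled victim was removed
assert len(keys) == 99           # exactly one key was evicted
```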


9. APPEND ONLY MODE
  1. Enabling aof

    # Whether to log every update operation. By default Redis writes data to disk asynchronously; if AOF is not enabled, a power failure may lose the data written since the last save, because redis syncs its data file according to the save rules above, so some data exists only in memory for a while. Default: no
    
    appendonly no
    appendfilename appendonly.aof			# name of the aof log file, appendonly.aof by default
    
  2. Appendfsync save policy

    • always: synchronous persistence; every data change is immediately written to disk; poor performance but best data integrity

    • everysec: the recommended factory default; asynchronous, syncs once per second; a crash can lose at most one second of data

    • no: never fsync; flushing is left entirely to the operating system

  3. no-appendfsync-on-rewrite: whether to fsync while a rewrite is in progress; the default no ensures data safety.

  4. auto-aof-rewrite-min-size: the minimum AOF file size before a rewrite may trigger

  5. auto-aof-rewrite-percentage: the growth percentage over the last rewritten size that triggers a rewrite

5. Redis Persistence

​ In brief: RDB and AOF;

1. Introduction to RDB (Redis DataBase):
  1. What is rdb?

    Writes a snapshot of the in-memory data set to disk within the specified time interval (the jargon is a Snapshot); on recovery it reads the snapshot file straight back into memory.

    ​ Redis will create (fork) a sub-process for persistence separately, and will first write the data into a temporary file, and after the persistence process is over, then use this temporary file to replace the last persisted file. During the whole process, the main process does not perform any IO operations, which ensures extremely high performance. If large-scale data recovery is required and the integrity of data recovery is not very sensitive, the RDB method is more efficient than the AOF method. The disadvantage of RDB is that the data after the last persistence may be lost.

  2. The role of fork:

    ​ The role of Fork is to copy a process that is the same as the current process. All data (variables, environment variables, program counters, etc.) values ​​of the new process are consistent with the original process, but it is a brand new process, and it is used as a child process of the original process;

  3. Rdb saves the dump.rdb file:

  4. RDB is a compressed snapshot of the entire memory. The snapshot trigger conditions and save settings can be configured in redis.conf; this also supplements what section 4 above did not cover;

    # the save rules complement each other: a snapshot is taken as soon as any one of them is met
    
    save 900 1						# snapshot if at least 1 change within 900 seconds
    save 300 10						# snapshot if at least 10 changes within 300 seconds
    save 60 10000					# snapshot if at least 10000 changes within 60 seconds
    
    dbfilename dump.rdb				# the snapshot file read on startup, dump.rdb by default
    
  5. How to trigger Rdb

    • The default snapshot configuration (the save rules) in the configuration file; the dump.rdb file can also be reused after a cold copy: cp dump.rdb dump_new.rdb;

    • The save or bgsave command;

      • SAVE: saves immediately, blocking everything else while it runs;
      • BGSAVE: Redis takes the snapshot asynchronously in the background while still answering client requests. Use the lastsave command to get the time of the last successful snapshot;
    • Executing flushall also produces a dump.rdb file, but an empty, meaningless one;

  6. how to recover

    Move the backup file (dump.rdb) into the redis startup directory and start the service; use CONFIG GET dir to obtain the startup directory;

  7. Advantage

    • Suitable for large-scale data recovery

    • Suitable when the requirements on data integrity and consistency are not high

  8. Disadvantages

    • Backups are made at intervals, so if redis goes down unexpectedly, all changes since the last snapshot are lost;

    • fork() clones the data in memory, so a roughly 2x memory expansion must be budgeted for;

  9. Small summary:


2. AOF (Append Only File) introduction:
  1. What is AOF:

    ​ Each write operation is recorded as a log: redis appends every write command it executes (reads are not logged) to the file, only ever appending, never rewriting in place. On startup, redis reads the file back and rebuilds the data; in other words, after a restart the write commands are replayed from front to back according to the log file to complete the recovery;

  2. AOF on

    # Whether to log every update operation. By default Redis writes data to disk asynchronously; if AOF is not enabled, a power failure may lose the data written since the last save, because redis syncs its data file according to the save rules above, so some data exists only in memory for a while. Default: no
    
    appendonly no
    
    appendfilename appendonly.aof			# name of the aof log file, appendonly.aof by default
    
  3. Appendfsync save policy

    • always: synchronous persistence; every data change is immediately written to disk; poor performance but best data integrity

    • everysec: the recommended factory default; asynchronous, syncs once per second; a crash can lose at most one second of data

    • no: never fsync; flushing is left entirely to the operating system

  4. AOF start/repair/recovery

    • normal recovery
      • Start: change the default appendonly no to yes
      • Copy an aof file that has data into the corresponding directory (config get dir)
      • Recovery: restart redis and the data is reloaded
    • exception recovery
      • Start: change the default appendonly no to yes
      • Back up the corrupted AOF file
      • Fix: run redis-check-aof --fix against it
      • Recovery: restart redis and the data is reloaded

    [Note]: once appendonly is enabled, the startup directory contains both appendonly.aof and dump.rdb. On startup the Redis server reads appendonly.aof first; if that file is corrupt the service will not start, so any content in appendonly.aof that the system cannot recognize must be removed:


  5. What is rewrite?

    ​ AOF adopts the method of file appending, and the file will become larger and larger. To avoid this situation, a rewriting mechanism has been added. When the size of the AOF file exceeds the set threshold, Redis will start the content compression of the AOF file. Only keep the minimum set of instructions that can restore data. You can use the command bgrewriteaof;

  6. rewriting principle

    ​ When the AOF file continues to grow and is too large, a new process will be forked to rewrite the file (also write the temporary file first and then rename), traverse the data in the memory of the new process, and each record has a Set statement. The operation of rewriting the aof file does not read the old aof file, but rewrites a new aof file with the entire in-memory database content by command, which is somewhat similar to the snapshot;

  7. trigger mechanism

    1. Redis records the AOF size after the last rewrite. By default a rewrite triggers when the AOF file has grown to twice its size after the last rewrite and the file is larger than 64M;
    auto-aof-rewrite-percentage 100					# 100 percent, i.e. 1x growth
    auto-aof-rewrite-min-size 64mb					#  64M
    
    2. no-appendfsync-on-rewrite: whether to fsync while a rewrite is in progress; the default no ensures data safety.
    3. auto-aof-rewrite-min-size: the minimum size threshold for a rewrite
    4. auto-aof-rewrite-percentage: the growth-percentage threshold for a rewrite
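A sketch of the two thresholds working together: a rewrite fires only when the AOF has both grown past the configured percentage of its size after the last rewrite and exceeds the minimum size (the function and variable names are illustrative, not Redis internals):

```python
def should_rewrite(current_size, last_rewrite_size,
                   percentage=100, min_size=64 * 1024 * 1024):
    """Mirror the default trigger: doubled since last rewrite AND > 64M."""
    grown = current_size >= last_rewrite_size * (1 + percentage / 100)
    return grown and current_size > min_size

MB = 1024 * 1024
assert not should_rewrite(50 * MB, 20 * MB)    # doubled, but still under 64M
assert should_rewrite(130 * MB, 64 * MB)       # doubled and past 64M -> rewrite
assert not should_rewrite(100 * MB, 64 * MB)   # big, but not doubled yet
```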
  8. Advantage

    1. appendfsync always: synchronous persistence; every data change is immediately written to disk; poor performance but best data integrity
    2. appendfsync everysec: asynchronous; syncs once per second; a crash can lose at most one second of data
    3. appendfsync no: never sync
  9. disadvantage

    1. For the same dataset, the aof file is much larger than the rdb file, and recovery is slower than with rdb
    2. aof runs more slowly than RDB; the once-per-second sync strategy is relatively efficient, and with sync disabled aof matches RDB
  10. small summary


3. Summary

Performance suggestions:

  1. Because RDB files are only used for backup, it is recommended to persist RDB only on the Slave, and a backup every 15 minutes is enough: keep only the save 900 1 rule.
  2. If you enable AOF, the benefit is that in the worst case you lose no more than two seconds of data, and the startup script is simple: it just loads the AOF file. The price is, first, continuous I/O, and second, at the end of an AOF rewrite the new data produced during the rewrite must be written to the new file, which almost inevitably blocks. As long as hard disks permit, the rewrite frequency should be minimized: the default base size of 64M for triggering a rewrite is too small and can be raised to 5G or more; the default trigger of 100% growth over the original size can also be changed to a suitable value.
  3. If AOF is not enabled, high availability can be achieved with Master-Slave replication alone, saving a lot of I/O and avoiding the system jitter a rewrite brings. The price is that if Master and Slave go down at the same time, ten-plus minutes of data are lost; the startup script should then compare the RDB files on the two machines and load the newer one. Sina Weibo chose this architecture.

6. Redis transaction

1. What is a transaction

​ A transaction executes multiple commands in one go; it is essentially a group of commands. All commands in a transaction are serialized and executed serially in order, with no other command allowed to cut into the queue;

2. What can it do?

​ Execute a series of commands in one queue: at once, sequentially, and exclusively;

3. How to play
  1. Common commands

    1. DISCARD: Cancel the transaction and give up executing all commands in the transaction block.

    2. EXEC: Executes all commands within a transaction block.

    3. MULTI : Marks the beginning of a transaction block.

    4. UNWATCH: Cancel the monitoring of all keys by the WATCH command.

    5. WATCH key [key ...] : watch one (or more) keys; if any of them is modified by another command before the transaction executes, the transaction is aborted.

  2. Case1: normal execution

    # If a transaction is a supermarket shopping trip, then after the transaction is opened and before it is committed, every operation is recorded in a queue waiting to be executed one by one; think of the queue as the shopping cart;
    
    127.0.0.1:6379> multi				# open the transaction
    OK
    127.0.0.1:6379> set k1 v1			# first entry into the shopping-cart queue
    QUEUED		
    127.0.0.1:6379> set k2 v2			# second operation
    QUEUED
    127.0.0.1:6379> get k1				# third operation
    QUEUED
    127.0.0.1:6379> set k3 v3	
    QUEUED
    127.0.0.1:6379> exec				# execute: checkout at the till; the queue is first-in first-out and each receipt prints in turn:
    1) OK
    2) OK
    3) "v1"
    4) OK
    
  3. Case2: Abandoning the transaction

    127.0.0.1:6379> multi				# open the transaction
    OK
    127.0.0.1:6379> set k1 v1 
    QUEUED
    127.0.0.1:6379> set k2 22
    QUEUED
    127.0.0.1:6379> discard				# abandon the transaction; nothing will be executed
    OK
    
  4. Case3: collective punishment (an error while queueing aborts the whole transaction)

    127.0.0.1:6379> multi					# open the transaction
    OK
    127.0.0.1:6379> set k5 v5 
    QUEUED
    127.0.0.1:6379> set45					# this command is malformed while queueing
    (error) ERR unknown command 'set45'
    127.0.0.1:6379> set k6 v6				# the transaction queue still accepts commands
    QUEUED	
    127.0.0.1:6379> exec					# EXEC reports an error
    (error) EXECABORT Transaction discarded because of previous errors.
    127.0.0.1:6379> get k5					# get k5: the set failed; one bad command sinks them all
    (nil)
    
  5. Case4: each debt has its debtor (a runtime error fails only the offending command)

    127.0.0.1:6379> multi					# open the transaction
    OK
    127.0.0.1:6379> set k1 aa				
    QUEUED
    127.0.0.1:6379> incr k1					# INCR k1 queues without error, but is certain to fail at EXEC
    QUEUED
    127.0.0.1:6379> set k2 22
    QUEUED
    127.0.0.1:6379> get k2
    QUEUED
    127.0.0.1:6379> exec					# the other commands succeed; only one errors
    1) OK
    2) (error) ERR value is not an integer or out of range	# the INCR fails here
    3) OK
    4) "22"
    
  6. Case5: watch monitoring

    1. pessimistic lock

      ​ Pessimistic locking, as the name suggests, is pessimistic: every time you fetch the data you assume someone else will modify it, so you lock it on every access, and anyone else who wants the data blocks until they obtain the lock. Traditional relational databases rely on many such locking mechanisms, e.g. row locks, table locks, read locks, and write locks, all of which lock before operating;

    2. optimistic lock

      ​ Optimistic locking, as the name suggests, is optimistic: every time you fetch the data you assume others will not modify it, so you take no lock. Only when updating do you check whether anyone else updated the data in the meantime, using a mechanism such as a version number. Optimistic locking suits read-heavy applications and can improve throughput;

      ​ Optimistic locking strategy: an update is performed only if the submitted version is greater than the record's current version;

    3. CAS(Check And Set)

    4. Example: initialize a credit card's available balance and amount owed:

    5. No tampering: WATCH first, then open MULTI, to ensure both amount changes happen in the same transaction

      127.0.0.1:6379> set balance 100				# set the card's available balance
      OK
      127.0.0.1:6379> set debt 0					# set the amount owed
      OK
      127.0.0.1:6379> watch balance				# start watching the balance
      OK
      127.0.0.1:6379> multi						# open the transaction
      OK
      127.0.0.1:6379> decrby balance 20			# spend 20: balance decreases by 20
      QUEUED
      127.0.0.1:6379> incrby debt 20				# amount owed increases by 20
      QUEUED
      127.0.0.1:6379> exec						# executes successfully
      1) (integer) 80
      2) (integer) 20
      
    6. tampered with

      As above, if another client modifies the balance before the transaction's EXEC runs, the version of balance stored in the database becomes newer than the version you watched, and the commit fails with an error. You can run unwatch to cancel the monitoring, then watch balance again and repeat the steps above until the operation succeeds;

    7. unwatch: cancels the watch manually; note that once EXEC has executed, every monitoring lock added before it is cancelled automatically;

    8. Summarize:

      1. The WATCH command works like an optimistic lock: at commit time, if the value of a watched Key has been changed by another client (for example, a list that has been pushed/popped by another connection), the entire transaction queue will not be executed;

      2. With WATCH, multiple Keys can be monitored before the transaction executes. If the value of any watched Key changes after the WATCH, the transaction triggered by EXEC is abandoned and a null multi-bulk reply is returned to notify the caller that the transaction failed;
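The version-number idea behind WATCH can be sketched in a few lines of Python. This is a toy in-memory model for illustration only, not Redis itself: a writer remembers the version it read, and a commit is rejected if the version has moved on, forcing a re-read and retry.

```python
# Illustrative sketch of optimistic locking with a version number (not Redis):
# an update succeeds only if the version read earlier is still current.

class Record:
    def __init__(self, value):
        self.value = value
        self.version = 0

def read(record):
    # Read both value and version; no lock is taken.
    return record.value, record.version

def try_update(record, new_value, read_version):
    # Commit only if nobody bumped the version since we read it.
    if record.version != read_version:
        return False          # someone else updated first; caller must retry
    record.value = new_value
    record.version += 1
    return True

balance = Record(100)
value, ver = read(balance)

# A concurrent writer sneaks in and commits first.
other_value, other_ver = read(balance)
assert try_update(balance, other_value - 50, other_ver)   # succeeds, version -> 1

# Our stale update is rejected (like EXEC after the watched key changed);
# re-read and retry, like unwatch + watch + multi again.
assert not try_update(balance, value - 20, ver)
value, ver = read(balance)
assert try_update(balance, value - 20, ver)
print(balance.value)   # 30
```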

4. 3 stages
  1. Open: start a transaction with MULTI

  2. Enqueue: Enqueue multiple commands into the transaction. These commands will not be executed immediately after receiving them, but will be placed in the transaction queue waiting to be executed.

  3. Execution: Transactions are triggered by the EXEC command

5. 3 features
  1. Isolated operations: all commands in a transaction are serialized and executed sequentially. While the transaction is executing, it will not be interrupted by command requests from other clients.
  2. No concept of isolation levels: the queued commands are not actually executed until the transaction is committed. Because nothing runs before EXEC, headaches such as "a query inside the transaction must see the transaction's own updates, while queries outside must not" simply do not arise.
  3. Atomicity is not guaranteed: if a command inside a transaction fails at execution time (a runtime error that was not reported when it was queued, the "each debt has its debtor" case), the subsequent commands are still executed and nothing is rolled back.
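The difference between the two failure modes (a queue-time error versus a run-time error) can be modelled with a toy transaction queue. `MiniTx` and its tiny command set are invented here purely for illustration and are not part of Redis:

```python
# Illustrative model of Redis transaction behaviour (not the real server):
# a syntactically bad command aborts the whole queue at EXEC time,
# while a command that only fails at runtime leaves the rest untouched.

KNOWN = {"set", "get", "incr"}

class MiniTx:
    def __init__(self, store):
        self.store, self.queue, self.aborted = store, [], False

    def enqueue(self, cmd, *args):
        if cmd not in KNOWN:            # like (error) ERR unknown command
            self.aborted = True
            return "ERR unknown command"
        self.queue.append((cmd, args))
        return "QUEUED"

    def exec(self):
        if self.aborted:                 # like EXECABORT: nothing runs
            return None
        results = []
        for cmd, args in self.queue:
            try:
                if cmd == "set":
                    self.store[args[0]] = args[1]; results.append("OK")
                elif cmd == "get":
                    results.append(self.store.get(args[0]))
                elif cmd == "incr":      # fails at runtime on non-integers
                    self.store[args[0]] = str(int(self.store[args[0]]) + 1)
                    results.append(self.store[args[0]])
            except ValueError:
                results.append("ERR value is not an integer")
        return results

db = {}
tx = MiniTx(db)
tx.enqueue("set", "k1", "aa")
tx.enqueue("incr", "k1")        # queues fine, errors only at EXEC
tx.enqueue("set", "k2", "22")
print(tx.exec())   # ['OK', 'ERR value is not an integer', 'OK']
```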

7. Redis publish and subscribe

1. What is it?

A message communication mode between processes: the sender (pub) sends messages, and the subscriber (sub) receives messages.
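A minimal in-memory sketch of the pattern (illustrative Python, not the Redis implementation): subscribers register an inbox on a channel, and a publish fans the message out to whoever is subscribed at that moment, which is why subscribing must happen first.

```python
# Toy pub/sub broker: channel -> list of subscriber inboxes.

from collections import defaultdict

class Broker:
    def __init__(self):
        self.channels = defaultdict(list)   # channel -> inboxes

    def subscribe(self, inbox, *channels):
        for ch in channels:
            self.channels[ch].append(inbox)

    def publish(self, channel, message):
        receivers = self.channels.get(channel, [])
        for inbox in receivers:
            inbox.append((channel, message))
        return len(receivers)               # like PUBLISH's integer reply

broker = Broker()
r1 = []                                     # client r1's inbox
broker.subscribe(r1, "c1", "c2", "c3")
print(broker.publish("c2", "hello-redis")) # 1 (one subscriber reached)
print(r1)                                  # [('c2', 'hello-redis')]
```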

2. Order
  1. PSUBSCRIBE pattern [pattern …] : Subscribe to one or more channels matching the given patterns.

  2. PUBSUB subcommand [argument [argument …]] : View subscription and publishing system status

  3. PUBLISH channel message : Send the message to the specified channel.

  4. PUNSUBSCRIBE [pattern [pattern …]] : Unsubscribe from all channels matching the given patterns.

  5. SUBSCRIBE channel [channel …] : Subscribe to information on the given channel or channels.

  6. UNSUBSCRIBE [channel [channel …]] : Unsubscribe from the given channels.

3. Case
   Messages are received only if the subscription happens before the publish
  1. Multiple channels can be subscribed at once: SUBSCRIBE c1 c2 c3; publish a message with PUBLISH c2 hello-redis

    # First open redis client r1 and subscribe to channels c1, c2, c3
    127.0.0.1:6379> subscribe c1 c2 c3						# subscribe to the channels
    Reading messages... (press Ctrl-C to quit)
    1) "subscribe"
    2) "c1"
    3) (integer) 1
    1) "subscribe"
    2) "c2"
    3) (integer) 2
    1) "subscribe"
    2) "c3"
    3) (integer) 3
    
    
    # Open a second client r2 and publish a message
    127.0.0.1:6379> publish c2 hello-redis  				# publish a message to channel c2
    (integer) 1
    
    
    # Client r1 receives the message and prints it as follows:
    1) "message"
    2) "c2"
    3) "hello-redis"
    
  2. Subscribe to multiple channels with a wildcard: PSUBSCRIBE new* then receives messages published with PUBLISH new1 redis2015

    The operation is the same as above, except that when client r2 publishes, the target channel name begins with the new prefix; the rest of the name is matched by the wildcard;
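Pattern subscription can be sketched the same way; here Python's `fnmatch` stands in for Redis's glob-style matcher (an illustrative model, not the real server):

```python
# Toy PSUBSCRIBE-style delivery: a message goes to every pattern it matches.

from fnmatch import fnmatch

subscriptions = {"new*": []}        # pattern -> subscriber inbox

def publish(channel, message):
    delivered = 0
    for pattern, inbox in subscriptions.items():
        if fnmatch(channel, pattern):
            inbox.append((channel, message))
            delivered += 1
    return delivered

print(publish("new1", "redis2015"))   # 1 — matches new*
print(publish("old1", "redis2015"))   # 0 — no pattern matches
print(subscriptions["new*"])          # [('new1', 'redis2015')]
```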

8. Redis Replication (Master/Slave)

1. What is it?

​ In jargon: master-slave replication. After the master's data is updated, it is automatically synchronized to its slaves according to the configured strategy. The master mainly handles writes, the slaves mainly handle reads;

2. What can it do?

​ Its main uses are read-write separation and disaster recovery;

3. How to play
  1. Configure the slave (library), not the master (library)

    1. Configuration is performed on the slave; the master is left untouched.
  2. Slave configuration: slaveof <master IP> <master port>.

    1. After every disconnect from the master, the slave must be reconnected manually, unless the relationship is configured in the redis.conf file
    2. info replication
  3. Detailed steps for modifying the configuration files.

    1. Copy multiple redis.conf files

      # Use cp to make three copies of the redis.conf configuration file
      cp redis-4.0.10/redis.conf myredis/redis6379.conf
      cp redis-4.0.10/redis.conf myredis/redis6380.conf
      cp redis-4.0.10/redis.conf myredis/redis6381.conf
      
      # Then edit each of the three files as follows (values shown for 6379)
      1. Run as a background daemon		daemonize yes 
      2. Pid file name				  pidfile /var/run/redis6379.pid
      3. Port						  port 6379
      4. Log file name				  logfile "6379.log"
      5. Dump.rdb file name				dbfilename dump6379.rdb
      
      
      
  4. 3 common tricks

    1. One master and two servants

      # By default, every instance starts as a master
      127.0.0.1:6379> info replication						# check 6379's replication info
      # Replication
      role:master											  # this instance is a master
      connected_slaves:0									  # no slaves attached yet
      master_repl_offset:0
      repl_backlog_active:0									# backlog not active yet
      repl_backlog_size:1048576
      repl_backlog_first_byte_offset:0
      repl_backlog_histlen:0
      127.0.0.1:6379> set v1 k1								# set a key v1
      OK
      
      
      # Configure a slave (connect to the instance with redis-cli -p 6380)
      127.0.0.1:6380> slaveof 127.0.0.1 6379				# make this instance a slave of the master on port 6379
      OK
      127.0.0.1:6380> info replication					# check 6380's own replication info
      # Replication
      role:slave										  # this instance is a slave
      master_host:127.0.0.1							   # the master's address
      master_port:6379								   # the master's port
      master_link_status:up							    # up: the link to the master is established
      master_last_io_seconds_ago:8
      master_sync_in_progress:0
      slave_repl_offset:169
      slave_priority:100
      slave_read_only:1
      connected_slaves:0
      master_repl_offset:0
      repl_backlog_active:0
      repl_backlog_size:1048576
      repl_backlog_first_byte_offset:0
      repl_backlog_histlen:0
      127.0.0.1:6380> get v1							  # can read the v1 set on master 6379
      "k1"
      
      -- Once replication is set up, the slave copies all of the master's data, whether it was written before or after the slave connected
      
      
      # Check the master's info again
      127.0.0.1:6379> info replication				
      # Replication
      role:master
      connected_slaves:2						 		 # two slaves attached: 6380 and 6381
      slave0:ip=127.0.0.1,port=6380,state=online,offset=57,lag=0
      slave1:ip=127.0.0.1,port=6381,state=online,offset=57,lag=1
      master_repl_offset:57
      repl_backlog_active:1								# backlog is now active
      repl_backlog_size:1048576
      repl_backlog_first_byte_offset:2
      repl_backlog_histlen:56
      
      
      # Read-write separation: a slave cannot write; writing on slave 6380 fails as follows
      127.0.0.1:6380> set v2 22
      (error) READONLY You can't write against a read only slave.
      
      
      -- With this basic setup, if master 6379 dies, slaves 6380 and 6381 remain slaves and do not take over
      -- If the master then recovers, data written on it is still replicated to the slaves
      
      -- If slave 6380 dies, slave 6381 is unaffected; the master's slave count just drops by one
      -- If slave 6380 is restarted, it comes back as a master and no longer has any relationship with 6379;
      -- it must be reconnected manually after every disconnect, unless the replication is configured in redis.conf
      
    2. Passing the torch.

      1. A slave can in turn be the master of the next slave; a slave can likewise accept connection and synchronization requests from other slaves. That slave then acts as the next master in the chain, which effectively relieves the master's write pressure.

      2. Changing master midway: the slave's previous data is cleared and the latest copy is rebuilt;

      3. slaveof <new master IP> <new master port>;

        # Master 6379; slave 6380 connects to 6379; slave 6381 connects to 6380
        # Data set on the master can be replicated to both 6380 and 6381
        
        
        127.0.0.1:6380> info replication		# 6380 checks its own info
        # Replication
        role:slave							 # still nominally a slave, so it cannot run SET itself
        master_host:127.0.0.1
        master_port:6379
        master_link_status:up
        master_last_io_seconds_ago:5
        master_sync_in_progress:0
        slave_repl_offset:4131
        slave_priority:100
        slave_read_only:1
        connected_slaves:1						# it also has a slave of its own; below is slave 6381's info
        slave0:ip=127.0.0.1,port=6381,state=online,offset=71,lag=0
        master_repl_offset:71
        repl_backlog_active:1
        repl_backlog_size:1048576
        repl_backlog_first_byte_offset:2
        repl_backlog_histlen:70
        
    3. Turning the guest into the host.

      -- SLAVEOF no one: stop syncing with other databases and turn this instance into a master!
      
      # Premise: master 6379, with slaves 6380 and 6381 both following 6379
      # After master 6379 dies, manually promote 6380 to be the new master
      127.0.0.1:6380> slaveof no one			
      OK
      # Manually repoint slave 6381 to follow 6380
      127.0.0.1:6381> slaveof 127.0.0.1 6380		
      OK
      
      # 6380 and 6381 now form their own master-slave pair; if 6379 restarts, it has no relationship with them anymore
      
4. Replication principle
  1. After the slave connects to the master successfully, it sends a sync command.

  2. On receiving the command, the master starts a background save process and meanwhile collects all write commands it receives. When the background save finishes, the master transfers the entire data file to the slave, completing one full synchronization.

  3. Full replication: on receiving the database file, the slave saves it and loads it into memory.

  4. Incremental replication: the master then forwards each newly collected write command to the slave to keep it synchronized. However, whenever the slave reconnects to the master, a full synchronization (full replication) is performed automatically.
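The four steps above can be modelled with a toy master/slave pair in Python (purely illustrative; the real protocol ships an RDB snapshot followed by a command stream):

```python
# Toy model of full + incremental sync (not the real Redis protocol).

class Master:
    def __init__(self):
        self.data, self.slaves = {}, []

    def attach(self, slave):                # slave sends SYNC
        slave.data = dict(self.data)        # full copy: snapshot transfer
        self.slaves.append(slave)

    def set(self, key, value):              # writes happen on the master
        self.data[key] = value
        for slave in self.slaves:           # incremental replication
            slave.data[key] = value

class Slave:
    def __init__(self):
        self.data = {}

m, s = Master(), Slave()
m.set("k1", "v1")              # written before the slave connects
m.attach(s)
assert s.data["k1"] == "v1"    # full sync copied the existing data
m.set("k2", "v2")
assert s.data["k2"] == "v2"    # incremental sync forwarded the new write
```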

5. Sentinel mode (sentinel)
  1. what is it

    ​ An automatic version of "turning the guest into the host": sentinels monitor in the background whether the master has failed and, if it has, automatically promote a slave to master based on the votes;

  2. How to play?

    1. Adjust the structure: 6379 as master, with 6380 and 6381 as slaves

    2. In the custom /myredis directory, create a new sentinel.conf file; the name must be exactly this: touch sentinel.conf

    3. Configure sentry, fill in the content

      1. sentinel monitor <monitored master name (chosen by yourself)> 127.0.0.1 6379 1

        # host6379 is a custom name; the trailing 1 is the quorum: how many sentinels
        # must agree the master is down before a failover is started
        sentinel monitor host6379 127.0.0.1 6379 1
        
      2. The trailing number is the quorum of sentinels that must agree the master is objectively down; once it is reached, the sentinels elect a new master from among the slaves

    4. start sentinel

    # Adjust this path to your own setup; the directory may differ
    redis-sentinel myredis/sentinel.conf	# command to start the sentinel
    
    1. Normal master-slave demo

      1. 6379 as master host, 6380, 6381 follow 6379 as slave
      2. The original 6379 master hangs up
      3. The slaves vote and elect a new master
      4. The new master-slave group continues working; check with info replication
      5. If the previous 6379 master is restarted, it will be the slave of the newly selected master, and will no longer be the master.
  3. A group of sentinels can monitor multiple Masters at the same time

6. Disadvantages of replication

​ Since all writes are performed on the Master first and only then synchronized to the Slaves, there is some delay in propagating from Master to Slave. When the system is very busy the delay gets worse, and increasing the number of Slave machines aggravates the problem further.

9. Redis's Java client Jedis

  1. Five types of operations ( TestAPI.java )

  2. Transaction ( TestTX.java )

  3. watch optimistic lock transaction ( TestTransactionLock.java )

  4. Master-slave replication ( TestMS.java )

  5. jedis connection pool ( JedisPoolUtil.java, TestPool.java )

    1. Obtaining a Jedis instance needs to be obtained from the JedisPool

    2. When the Jedis instance is used up, it needs to be returned to JedisPool

    3. If Jedis makes an error during use, it also needs to be returned to JedisPool
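The borrow/return discipline these points describe is the classic object-pool pattern. A language-neutral sketch in Python (the names here are illustrative; in Java you would use `JedisPool.getResource()` and return the connection when done):

```python
# Toy connection pool: connections must always go back to the pool,
# including when the work in between raises an error.

import queue

class Pool:
    def __init__(self, size):
        self.free = queue.Queue()
        for i in range(size):
            self.free.put(f"conn-{i}")

    def borrow(self):
        return self.free.get()

    def give_back(self, conn):
        self.free.put(conn)

pool = Pool(2)
conn = pool.borrow()
try:
    pass  # use the connection here; any exception still returns it below
finally:
    pool.give_back(conn)     # always return, even after an error

print(pool.free.qsize())     # 2 — nothing leaked
```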


Origin blog.csdn.net/w2462140956/article/details/99295035