Redis in Detail

Preamble:

Before getting to the hands-on part, some background (it may run a bit long):

  • Differences between NoSQL and SQL

    • MySQL is a relational database whose storage model is completely different from NoSQL, which stores data as key-value pairs
    • Their application scenarios differ: SQL supports complex relational queries across data, while NoSQL generally does not
    • SQL databases support transactions; most NoSQL databases do not
  • Redis advantages and application scenarios

    • High performance: roughly 100,000 reads per second and 80,000 writes per second
    • All operations are atomic
    • Works as a cache database, with data kept in memory
    • Can replace MySQL in certain scenarios, such as a social networking app
    • In large-scale systems it can store session information, shopping carts, orders, etc.

  • Redis features

    • A NoSQL, in-memory database with fast reads and writes
    • Data lives in memory, with support for persistence to disk
    • A key-value database

1. Install redis

  1. yum installation: the simplest method, as long as the yum repository is configured

    #Prerequisite: the Aliyun yum repository and the epel repository are configured
    #Check whether a redis package is available
    yum list redis
    #Install redis
    yum install redis -y
    #Once installed, start redis
    systemctl start redis
    
  2. Compile from source: specify the installation path and customize optional modules

    • Advantages of compiling from source:
      • You can choose which extension modules to enable at compile time; php, apache, nginx and mysql work the same way, each with many third-party modules, e.g. choosing a storage engine (InnoDB or MyISAM) when building mysql
      • You can keep a consistent installation path; by Linux convention, self-compiled software lives under /opt/
      • Repository versions are usually old, while compiling from source lets you install the latest version on demand
    • This is the more hands-on way to install, and not every developer enjoys it...
    1. Download the redis source code
    wget http://download.redis.io/releases/redis-4.0.10.tar.gz
    2. Extract it
    tar -zxf redis-4.0.10.tar.gz
    3. Switch to the redis source directory
    cd redis-4.0.10
    4. Compile the sources
    make 
    5. After compiling, the built redis binaries are in the src/ directory
    6. make install installs them to the target directory, /usr/local/bin by default (a PREFIX sketch follows below)
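
    A hedged sketch of step 6: the redis Makefile accepts a PREFIX variable if you would rather keep the binaries under /opt than the default /usr/local/bin (the /opt/redis-4.0.10 path here is only an example):

    # install into a custom prefix; binaries end up in /opt/redis-4.0.10/bin
    make install PREFIX=/opt/redis-4.0.10
    # optionally put that bin directory on PATH
    echo 'export PATH=/opt/redis-4.0.10/bin:$PATH' >> /etc/profile.d/redis.sh
    source /etc/profile.d/redis.sh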
  3. Verify that redis started successfully

    redis-cli    #the redis command-line client
    #Inside the interactive prompt, run ping; a PONG reply means the installation works
    127.0.0.1:6379> ping
    PONG

2.redis Configuration

  • ps -ef | grep redis    view the redis process
  • netstat -tunlp | grep redis    view the listening port
  1. Start a more secure redis server with a specified configuration file

    • Change the listening port
    • Set a redis password
    • Turn on redis protected mode
    #The default redis configuration file is redis.conf
    #Filter the useful lines out of the config file (drop blank lines and comments)
    grep -v "^#" redis.conf  |grep -v "^$"
  2. Start with the configuration file

    • redis-server /opt/redis-4.0.10/redis.conf &   # start redis with this configuration file, in the background
    bind 192.168.182.130  #address the server binds to
    protected-mode yes   #protected mode
    port 6800        #port
    requirepass  haohaio             #password
    daemonize yes    #run in the background
    pidfile /var/run/redis_6379.pid  #pid file
    loglevel notice      #log level
    logfile ""
  3. Connect to the password-protected redis (the config above uses port 6800; the examples below use 6380)

    • Method 1: authenticate after connecting

      [root@oldboy_python ~ 09:48:41]#redis-cli -p 6380
      127.0.0.1:6380> auth xxxx
      OK
    • Method 2: not recommended, because the password is easily exposed (it ends up in the shell history)

      [root@oldboy_python ~ 09:49:46]#redis-cli -p 6380 -a xxxx
      Warning: Using a password with '-a' option on the command line interface may not be safe.
      127.0.0.1:6380> ping
      PONG

1.redis data types

  • redis is an advanced key-value store; the value can be one of five data types

    • Strings
    • Hashes
    • Lists
    • Sets
    • Sorted sets
  • Basic redis key commands (a sample session follows the list)

    keys *         list all keys
    type key       show the type of a key
    expire key seconds    set an expiration time
    ttl key        show the remaining time before a key expires (-2 means the key no longer exists)
    persist key    remove the expiration from a key (a ttl of -1 means the key exists with no expiration)
    exists key     check whether a key exists (returns 1 if it exists, otherwise 0)
    del key ...    delete one or more keys
    dbsize         count the number of keys
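
    For reference, a short redis-cli session exercising these commands might look like this (the key name and the outputs shown are illustrative):

    127.0.0.1:6379> set greeting hello
    OK
    127.0.0.1:6379> type greeting
    string
    127.0.0.1:6379> expire greeting 30
    (integer) 1
    127.0.0.1:6379> ttl greeting
    (integer) 27
    127.0.0.1:6379> persist greeting
    (integer) 1
    127.0.0.1:6379> ttl greeting
    (integer) -1
    127.0.0.1:6379> exists greeting
    (integer) 1
    127.0.0.1:6379> del greeting
    (integer) 1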
  1. String type (example below)

    • set: set a key
    • get: get a key's value
    • append: append to a string
    • mset: set multiple key-value pairs at once
    • mget: get multiple values at once
    • del: delete a key
    • incr: increment by 1
    • decr: decrement by 1
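
    A hedged example session, with illustrative keys and values:

      127.0.0.1:6379> set name "alex"
      OK
      127.0.0.1:6379> append name " wusir"
      (integer) 10
      127.0.0.1:6379> get name
      "alex wusir"
      127.0.0.1:6379> mset age 20 city beijing
      OK
      127.0.0.1:6379> mget age city
      1) "20"
      2) "beijing"
      127.0.0.1:6379> incr age
      (integer) 21
      127.0.0.1:6379> decr age
      (integer) 20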
  2. List type (example below)

    • lpush: push onto the left end of the list
    • rpush: push onto the right end of the list
    • lrange: get a range of elements, lrange key start stop
    • ltrim: trim the list to the given range
    • lpop: pop the leftmost element
    • rpop: pop the rightmost element
    • lpushx / rpushx: push only if the key already exists, otherwise do nothing
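
    A hedged example, with an illustrative key named duilie:

      127.0.0.1:6379> lpush duilie "alex" "wusir" "yuyu"
      (integer) 3
      127.0.0.1:6379> lrange duilie 0 -1
      1) "yuyu"
      2) "wusir"
      3) "alex"
      127.0.0.1:6379> rpush duilie "tiger"
      (integer) 4
      127.0.0.1:6379> lpop duilie
      "yuyu"
      127.0.0.1:6379> rpop duilie
      "tiger"
      127.0.0.1:6379> ltrim duilie 0 0
      OK
      127.0.0.1:6379> lrange duilie 0 -1
      1) "wusir"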
  3. Set type (unordered; example below)

    • sadd / srem: add / remove elements
    • sismember: check whether an element is in the set
    • smembers: return all members of the set
    • sdiff: return the difference between one set and others
    • sinter: return the intersection of several sets
    • sunion: return the union of several sets
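
    A hedged example; since sets are unordered, the members may come back in a different order:

      127.0.0.1:6379> sadd s1 a b c
      (integer) 3
      127.0.0.1:6379> sadd s2 b c d
      (integer) 3
      127.0.0.1:6379> sismember s1 a
      (integer) 1
      127.0.0.1:6379> sdiff s1 s2
      1) "a"
      127.0.0.1:6379> sinter s1 s2
      1) "b"
      2) "c"
      127.0.0.1:6379> sunion s1 s2
      1) "a"
      2) "b"
      3) "c"
      4) "d"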
  4. Sorted set type

    • A sorted set keeps its members ordered by score, e.g. student exam results


      127.0.0.1:6379> ZADD mid_test 70 "alex"
      (integer) 1
      127.0.0.1:6379> ZADD mid_test 80 "wusir"
      (integer) 1
      127.0.0.1:6379> ZADD mid_test 99 "yuyu"
    • zrevrange: list members in descending score order

    • zrange: list members in ascending score order

    • zrem: remove a member

    • Return the cardinality of the sorted set mid_test

      127.0.0.1:6379> ZCARD mid_test
      (integer) 3
    • Return a member's score

      127.0.0.1:6379> ZSCORE mid_test alex
      "70"
  5. Hash type (example below)

    • A hash stores a mapping of string fields to string values; it is well suited to storing an object such as a user with a name, age, and other attributes.
      • hset: set one field of a hash
      • hget: get one field of a hash
      • hmset: set multiple fields at once
      • hmget: get multiple fields at once
      • hsetnx: set a field only if it does not already exist (prevents overwriting)
      • hkeys: return all field names
      • hvals: return all values
      • hlen: return the number of fields in the hash
      • hdel: delete the given fields from the hash
      • hexists: check whether a field exists
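
    A hedged example storing an illustrative user object under the key user:1:

      127.0.0.1:6379> hset user:1 name "alex"
      (integer) 1
      127.0.0.1:6379> hmset user:1 age 20 city beijing
      OK
      127.0.0.1:6379> hget user:1 name
      "alex"
      127.0.0.1:6379> hmget user:1 age city
      1) "20"
      2) "beijing"
      127.0.0.1:6379> hsetnx user:1 name "wusir"
      (integer) 0
      127.0.0.1:6379> hkeys user:1
      1) "name"
      2) "age"
      3) "city"
      127.0.0.1:6379> hlen user:1
      (integer) 3
      127.0.0.1:6379> hdel user:1 city
      (integer) 1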

2.redis publish-subscribe

  • For redis publish/subscribe, see the blog post at: https://www.cnblogs.com/pyyu/p/10013703.html (a minimal example follows)
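
  • Since the blog post above covers the details, here is only a minimal sketch of the pattern, using an illustrative channel name "news" and two redis-cli sessions:

    # terminal 1: subscribe to the channel
    127.0.0.1:6379> SUBSCRIBE news
    Reading messages... (press Ctrl-C to quit)
    1) "subscribe"
    2) "news"
    3) (integer) 1

    # terminal 2: publish a message to the same channel
    127.0.0.1:6379> PUBLISH news "hello subscribers"
    (integer) 1

    # terminal 1 then receives:
    1) "message"
    2) "news"
    3) "hello subscribers"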

3.redis persistent storage

  • Redis is an in-memory database: once the server process exits, the data in the database is lost. To solve this, Redis provides two persistence mechanisms that save the in-memory data to disk and so avoid data loss.

3.1 RDB persistence

  • Redis provides RDB persistence, which saves the in-memory state to a file on disk. It can be triggered manually.

  • It can also be configured in redis.conf to run automatically at intervals.

  • RDB persistence produces a compressed binary file saved on disk; redis can use this file to restore the database to the state it was in when the snapshot was taken.

  • Hands-on

    1. Trigger mechanisms (a manual-trigger sketch follows the config below)

      • Run the save command manually
      • Or configure a trigger condition, e.g. save 200 10 means: within 200 seconds, more than 10 write operations
    2. Create a redis configuration file that enables RDB

      #Configuration file s21_rdb.conf, contents below; the RDB-related parameters are dbfilename and the save rules (e.g. save 900 1)
      
      daemonize yes
      port 6379
      logfile /data/6379/redis.log
      dir /data/6379        #directory where persistence files are stored
      dbfilename  s21redis.rdb        #rdb persistence file
      bind  127.0.0.1         #address redis binds to
      requirepass redhat                   
      save 900 1                           
      save 300 10                        
      save  60  10000                     
      
      save  20  2  #within 20 seconds, more than 2 write operations
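
      A hedged walk-through of the manual trigger, using the password and data directory from the config above (the key k1 is illustrative):

      mkdir -p /data/6379
      redis-server s21_rdb.conf
      redis-cli -a redhat set k1 v1
      redis-cli -a redhat bgsave     # snapshot in a background child process
      redis-cli -a redhat save       # blocking snapshot; avoid on a busy instance
      ls /data/6379                  # the s21redis.rdb file should now exist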

3.2 AOF mechanism

  • The second mechanism is AOF: every write command is appended to a log file.

  • AOF saves commands in the redis protocol format, appending each new write command to the end of the file.

  • Advantages and disadvantages

    • Advantage: minimizes the chance of losing data
    • Disadvantage: the log file can grow very large
    redis-client   writes data  >  redis-server   syncs the command   >  AOF file
  • Configuration parameters

    appendonly yes
    appendfsync  always     fsync after every write command
                 everysec   fsync once per second
                 no         rely on the operating system's own write-back caching

1. Prepare an AOF-enabled configuration file redis.conf

daemonize yes
port 6379
logfile /data/6379/redis.log
dir /data/6379
dbfilename  dbmp.rdb
requirepass redhat
save 900 1
save 300 10
save 60  10000
appendonly yes
appendfsync everysec

2. Start redis service

redis-server /etc/redis.conf

3. Check whether the AOF file exists in the data directory /data/6379

[root@web02 6379]# ls
appendonly.aof  dbmp.rdb  redis.log

4. Log in with redis-cli, write data, and watch the AOF file in real time

[root@web02 6379]# tail -f appendonly.aof

5. Set a new key, check the AOF contents, then restart Redis and verify that the data persisted (a worked example follows)

redis-cli -a redhat shutdown

redis-server /etc/redis.conf

redis-cli -a redhat
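
A hedged end-to-end check, using the password from the config above and an illustrative key s21:

redis-cli -a redhat set s21 "hello aof"
tail -n 8 /data/6379/appendonly.aof     # the SET command appears in redis protocol format

redis-cli -a redhat shutdown            # stop redis
redis-server /etc/redis.conf            # start it again
redis-cli -a redhat get s21             # "hello aof" comes back, replayed from the AOF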

3.3 Summary

  • What persistence options does redis offer? What is the difference?
    • rdb: snapshot-based persistence; faster, typically used for backups; master-slave replication also relies on the rdb mechanism
    • aof: appends every redis write operation to a log file; it offers the strongest data-safety guarantee, similar to mysql's binlog

4.redis master-slave replication

Without further ado, let's go straight to the case:

  1. Prepare the environment

    Master/slave plan
    Master node: 6379
    Slave nodes: 6380, 6381
    
  • Run 3 redis instances and configure one master with two slaves

    #master  6379.conf 
      port 6379
      daemonize yes
      pidfile /data/6379/redis.pid
      loglevel notice
      logfile "/data/6379/redis.log"
      dbfilename dump.rdb
      dir /data/6379
    
    #slave 6380
      port 6380
      daemonize yes
      pidfile /data/6380/redis.pid
      loglevel notice
      logfile "/data/6380/redis.log"
      dbfilename dump.rdb
      dir /data/6380
      slaveof  127.0.0.1  6379 
    
    
    
    #slave 6381  
      port 6381
      daemonize yes
      pidfile /data/6381/redis.pid
      loglevel notice
      logfile "/data/6381/redis.log"
      dbfilename dump.rdb
      dir /data/6381
      slaveof  127.0.0.1  6379 
    
  2. Check master-slave replication

    redis-cli info    # view database information
    redis-cli info replication

  • On the 6380 and 6381 instances, the master-slave relation can also be set at runtime with the commands below; this takes effect immediately but is temporary, unlike the slaveof line in the configuration file. A replication check follows the commands.

    redis-cli -p 6380 slaveof 127.0.0.1 6379
    redis-cli -p 6381 slaveof 127.0.0.1 6379
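
    A hedged replication check, with an illustrative key (slaves refuse writes because they are read-only by default):

    redis-cli -p 6379 set s21 "master data"
    redis-cli -p 6380 get s21      # returns "master data", replicated from the master
    redis-cli -p 6381 get s21      # same here
    redis-cli -p 6380 set foo bar  # (error) READONLY You can't write against a read only slave.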
    
  3. Simulate a master failure and switch the master-slave roles manually

    1. Kill the 6379 process, taking down the master

    2. Manually promote 6381 to be the new master; first remove its slave role
    redis-cli -p 6381  slaveof no one 

    3. Point 6380 at the new master 6381
    redis-cli -p 6380 slaveof  127.0.0.1 6381
    

5. Sentinel high availability

  • The redis-sentinel feature: monitor the master-slave setup and fail over automatically when the master dies
  1. Prepare the environment

    • Three redis instances, configured as one master and two slaves
    #1
    6379.conf
    port 6379
    daemonize yes
    logfile "6379.log"
    dbfilename "dump-6379.rdb"
    dir "/var/redis/data/"
    #2
    6380.conf 
    port 6380
    daemonize yes
    logfile "6380.log"
    dbfilename "dump-6380.rdb"
    dir "/var/redis/data/"
    slaveof 127.0.0.1 6379
    #3
    6381.conf 
    port 6381
    daemonize yes
    logfile "6381.log"
    dbfilename "dump-6381.rdb"
    dir "/var/redis/data/"
    slaveof 127.0.0.1 6379
    
    
    • Three redis sentinel processes that monitor the master; prepare three configuration files as follows
    sentinel-26379.conf  
    port 26379  
    dir /var/redis/data/
    logfile "26379.log"
    
    // This sentinel node monitors the master node at 192.168.182.130:6379
    // The trailing 2 means at least 2 sentinel nodes must agree before the master is judged down
    // s21ms is the alias given to the master node
        sentinel monitor s21ms  0.0.0.0 6379 2
    
    // Every sentinel node periodically PINGs the redis data nodes and the other sentinel nodes;
    // a node that does not reply within the configured time (20000 ms here) is judged unreachable
        sentinel down-after-milliseconds s21ms  20000
    
    // When the sentinel nodes agree that the master has failed, the sentinel leader performs the
    // failover and elects a new master; the former slaves then replicate from the new master.
    // parallel-syncs limits how many slaves sync from the new master at the same time (1 here)
        sentinel parallel-syncs s21ms 1
    
    // Failover timeout: 180000 milliseconds
        sentinel failover-timeout s21ms 180000
    
    
        #The three sentinel configuration files are identical except for the port
        sentinel-26380.conf  
    
        sentinel-26381.conf  
    
  2. Start the three redis instances and the three sentinel processes

    • Note: after a sentinel starts for the first time, it rewrites its own configuration file; if something goes wrong, delete the file and write it again
    #the configuration files are here
    # 1
     sentinel-26379.conf 
     port 26379  
     dir /var/redis/data/
     logfile "26379.log"
     sentinel monitor s21ms  127.0.0.1  6379 2
     sentinel down-after-milliseconds s21ms  20000
     sentinel parallel-syncs s21ms 1
     sentinel failover-timeout s21ms 180000
     #add background mode
     daemonize yes 
    
    #2
    #only the port differs
     sentinel-26380.conf 
    #3
     sentinel-26381.conf 
    
    • Start Sentinel
    redis-sentinel sentinel-26379.conf 
    redis-sentinel sentinel-26380.conf 
    redis-sentinel sentinel-26381.conf 
    
  3. Verify that the sentinels are working correctly

    redis-cli -p 26379 info sentinel
    
    # expected result: master0:name=s21ms,status=ok,address=127.0.0.1:6379,slaves=2,sentinels=3
    
  4. Kill the master, then check whether a slave has taken over

    kill -9 12749
    ps -ef|grep redis
    redis-cli -p 6380 info replication
    redis-cli -p 6381 info replication
    redis-cli -p 6380 info replication
    redis-cli -p 6381 info replication
    
  5. If one of the slaves has been promoted to master, the sentinel high-availability setup is complete

6.redis-cluster (Cluster Setup)

6.1 Basics

  • First compare the two diagrams:

    • Before building the cluster

      (figure omitted)

    • After building the cluster

      (figure omitted)

  • Why redis-cluster:

    Officially a single redis instance can execute about 100,000 commands per second. What if the business needs 1,000,000 commands per second?
    
    
  • Data distribution diagram:


  • Data distribution theory

    • The core problem for a distributed database is how to map the whole data set onto multiple nodes according to partitioning rules, i.e. splitting the data across nodes so that each node holds a subset of the data.

    • Redis Cluster uses hash partitioning, so hash partitioning rules are discussed next.

      Modulo partitioning by node
      Consistent hashing partitioning
      Virtual slot partitioning (the approach redis-cluster uses)
      
      
      • Modulo partitioning by node

      • Consistent hashing partitioning

        • As a simple illustration of modulo partitioning: hashing each item and taking the result modulo 3 divides the data into three groups
          • remainder 0
          • remainder 1
          • remainder 2

        (figure omitted)

        • Consistent hashing: client-side sharding, placing keys clockwise on a hash ring (hash + modulo)
      • Virtual slot partitioning (the approach redis-cluster uses)

        Virtual slot partitioning makes clever use of the hash space: a hash function with good distribution maps every key to an integer in a fixed range, and each integer is called a slot.
        
        Redis Cluster's slot range is 0 ~ 16383.
        
        Slots are the basic unit of data management and migration inside the cluster. A large slot range is used mainly to make it easy to split the data and to scale the cluster;
        each node is responsible for a certain number of slots.
        

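
        The slot for a key is CRC16(key) mod 16384. Once the cluster from section 6.2 is running, any cluster node can report which slot a key maps to; the key name below matches the write test at the end of that section:

        redis-cli -c -p 7000 cluster keyslot name
        (integer) 5798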

6.2 build redis cluster

redis-cluster architecture
Multiple server nodes handle reads and writes and communicate with each other; redis defines 16384 slots.
Think of the nodes as a team of horses carrying the data: the 16384 slots are divided among the horses, and each one manages the data in its slots.
A ruby script does the slot assignment automatically.

  1. Prepare the environment
    • redis configuration

      redis-7000.conf 
      port 7000
      daemonize yes
      dir "/opt/redis/data"
      logfile "7000.log"
      dbfilename "dump-7000.rdb"
      
      cluster-enabled yes   #enable cluster mode
      cluster-config-file nodes-7000.conf  #the cluster's internal configuration file
      cluster-require-full-coverage no   
         #By default redis cluster only serves requests while all 16384 slots are healthy; in other words, if any single slot is down the whole cluster stops serving. Production environments therefore usually set this to no.
      
      
    • Prepare the six "horses", i.e. six redis nodes

      # six configuration files, differing only in the port
      redis-7000.conf
         port 7000
         daemonize yes
         dir "/opt/redis/data"
         logfile "7000.log"
         dbfilename "dump-7000.rdb"
         cluster-enabled yes
         cluster-config-file nodes-7000.conf
         cluster-require-full-coverage no
      
      redis-7001.conf 
      redis-7002.conf 
      redis-7003.conf 
      redis-7004.conf 
      redis-7005.conf 
      
    • redis supports multiple instances on one machine. This demo builds the cluster on a single host with six instances: three masters and three slaves, enough nodes to keep the cluster highly available.

      Each node runs the same way, only on a different port!

      [root@yugo /opt/redis/config 17:12:30]#ls
      redis-7000.conf  redis-7002.conf  redis-7004.conf
      redis-7001.conf  redis-7003.conf  redis-7005.conf
      
      #make sure the port is changed in every configuration file!!
      
  2. Start the 6 nodes (the six horses)
    redis-server 7000.conf 
    redis-server 7001.conf 
    redis-server 7002.conf 
    redis-server 7003.conf 
    redis-server 7004.conf 
    redis-server 7005.conf 
    
    #check the log file
    
    cat 7000.log
    #check the redis ports and processes
    
    netstat -tunlp|grep redis
    
    
    ps -ef|grep redis
    #the cluster is not usable yet, which you can confirm by logging into redis
    
    redis-cli -p 7000
    set hello world
    
    #(error)CLUSTERDOWN The cluster is down
    
  3. Assign the redis slots
    • Use the redis-trib.rb tool (written in ruby) to assign the slots automatically

      Download, compile, and install Ruby
      Install the rubygem redis package
      Install the redis-trib.rb command
      
      
    • Download and install ruby

      #download ruby
      wget https://cache.ruby-lang.org/pub/ruby/2.3/ruby-2.3.1.tar.gz
      
      #compile and install ruby
      tar -xvf ruby-2.3.1.tar.gz
      cd ruby-2.3.1
      ./configure --prefix=/opt/ruby/
      make && make install
      
      #this provides a ruby command and a gem package-management command
      #copy both onto the PATH
      cp /opt/ruby/bin/ruby /usr/local/bin
      cp /opt/ruby/bin/gem /usr/local/bin
      
    • Install the redis gem (ruby's redis client library)

      wget http://rubygems.org/downloads/redis-3.3.0.gem
      
      gem install -l redis-3.3.0.gem
      #confirm the gem is installed
      gem list redis
      
    • Install redis-trib.rb command

      [root@yugo /opt/redis/src 18:38:13]#cp /opt/redis/src/redis-trib.rb /usr/local/bin/
      
  4. Create the redis-cluster with one command

    #--replicas 1 means each master node gets one slave node
    redis-trib.rb create --replicas 1 127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005
    
    #the cluster assigns master/slave relations automatically: 7000, 7001, 7002 become the masters of 7003, 7004, 7005
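
    As an optional sanity check (assuming redis-trib.rb is on the PATH as installed above), the same tool can verify that all 16384 slots are covered:

    redis-trib.rb check 127.0.0.1:7000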
    
  5. View cluster status

    redis-cli -p 7000 cluster info  
    
    redis-cli -p 7000 cluster nodes  #equivalent to reading the node info in the nodes-7000.conf file
    
    #state of the cluster's master nodes
    redis-cli -p 7000 cluster nodes | grep master
    #state of the cluster's slave nodes
    redis-cli -p 7000 cluster nodes | grep slave
    
  6. After installation, check the status of the cluster

    [root@yugo /opt/redis/src 18:42:14]#redis-cli -p 7000 cluster info
    cluster_state:ok
    cluster_slots_assigned:16384
    cluster_slots_ok:16384
    cluster_slots_pfail:0
    cluster_slots_fail:0
    cluster_known_nodes:6
    cluster_size:3
    cluster_current_epoch:6
    cluster_my_epoch:1
    cluster_stats_messages_ping_sent:10468
    cluster_stats_messages_pong_sent:10558
    cluster_stats_messages_sent:21026
    cluster_stats_messages_ping_received:10553
    cluster_stats_messages_pong_received:10468
    cluster_stats_messages_meet_received:5
    cluster_stats_messages_received:21026
    
  • A final write test

    # Test writing data to the cluster; you must connect with redis-cli -c -p 7000 (the -c flag enables cluster redirection)
    
    127.0.0.1:7000> set name chao     
    -> Redirected to slot [5798] located at 127.0.0.1:7001 
        OK
    127.0.0.1:7001> exit
    [root@yugo /opt/redis/src 18:46:07]#redis-cli -c -p 7000
    127.0.0.1:7000> ping
    PONG
    127.0.0.1:7000> keys *
    (empty list or set)
    127.0.0.1:7000> get name
    -> Redirected to slot [5798] located at 127.0.0.1:7001
    "chao"
    
  • How it all works together:

    redis master-slave: a backup relationship. We write to the master and the data is replicated to the slaves. If the master machine dies, a slave can take over; it is like losing a file on your D: drive while still having a copy on an external disk.
    redis sentinel: sentinels provide HA, switching over automatically on failure. The sentinels watch your redis master-slave setup, and when the master dies they tell you who the new boss is.
    redis cluster: the cluster provides high concurrency, because extra "brothers" help carry the load. It also spreads the data out: the cluster is divided into hash slots, so different keys are placed in different slots.
    Master-slave gives data backup, sentinel gives HA (failover on faults), and the cluster gives high concurrency.
    

    A client may connect to any redis instance in the cluster; if the requested data is not on that instance, the client is redirected to the instance that holds it.


Source: www.cnblogs.com/bigox/p/11565024.html