Big Data - Redis


Distributed Cache Database

Single-node installation

  • tar -zxvf Redis-3.2.9.tar.gz

  • cd /opt/sxt/redis-3.2.9

  • yum -y install gcc tcl (install the build dependencies)

  • make (run in the /opt/sxt/redis-3.2.9 directory)

  • make install (run in the same directory)

  • Modify the configuration file: vim redis.conf

    daemonize yes (run as a background daemon)

    bind 127.0.0.1 192.168.163.201 (bind the machine's own IP)

  • Start with a specified configuration file: redis-server /opt/sxt/redis-3.2.9/redis.conf

  • Start the client locally: redis-cli

  • Stopping the process

    ps -ef | grep redis (find the redis process ID)

    kill -9 <pid> (kill the process)

Command Line

Library and key operations
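A brief illustrative redis-cli session with database and key commands; the outputs shown assume a fresh, empty local server:

```
127.0.0.1:6379> select 1          # switch to database 1 (databases 0-15 by default)
OK
127.0.0.1:6379[1]> select 0
OK
127.0.0.1:6379> set k1 v1
OK
127.0.0.1:6379> keys *            # list keys matching a pattern
1) "k1"
127.0.0.1:6379> exists k1
(integer) 1
127.0.0.1:6379> expire k1 60      # set a 60-second time-to-live
(integer) 1
127.0.0.1:6379> ttl k1
(integer) 60
127.0.0.1:6379> del k1
(integer) 1
```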

String operations
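An illustrative session with the basic string commands (outputs assume the keys do not exist yet):

```
127.0.0.1:6379> set name redis
OK
127.0.0.1:6379> get name
"redis"
127.0.0.1:6379> append name -cache     # returns the new length ("redis-cache")
(integer) 11
127.0.0.1:6379> strlen name
(integer) 11
127.0.0.1:6379> incr counter           # counters: create and increment
(integer) 1
127.0.0.1:6379> incrby counter 10
(integer) 11
```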

List operations
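An illustrative session with the list commands (outputs assume the key does not exist yet):

```
127.0.0.1:6379> lpush mylist a b c     # push to the left: list is now c, b, a
(integer) 3
127.0.0.1:6379> rpush mylist d         # push to the right
(integer) 4
127.0.0.1:6379> lrange mylist 0 -1     # full range
1) "c"
2) "b"
3) "a"
4) "d"
127.0.0.1:6379> lpop mylist
"c"
127.0.0.1:6379> llen mylist
(integer) 3
```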

Set operations
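An illustrative session with the set commands (members come back in no particular order, so those outputs are omitted):

```
127.0.0.1:6379> sadd s1 a b c
(integer) 3
127.0.0.1:6379> sismember s1 a
(integer) 1
127.0.0.1:6379> smembers s1            # all members, unordered
127.0.0.1:6379> sadd s2 b c d
(integer) 3
127.0.0.1:6379> sinter s1 s2           # intersection: b and c
```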

Sorted set operations
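An illustrative session with the sorted set commands (outputs assume the key does not exist yet):

```
127.0.0.1:6379> zadd board 100 alice 90 bob
(integer) 2
127.0.0.1:6379> zrange board 0 -1 withscores   # ascending by score
1) "bob"
2) "90"
3) "alice"
4) "100"
127.0.0.1:6379> zscore board alice
"100"
127.0.0.1:6379> zincrby board 5 bob            # returns the new score
"95"
```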

Hash operations
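An illustrative session with the hash commands (outputs assume the key does not exist yet):

```
127.0.0.1:6379> hset user:1 name alice
(integer) 1
127.0.0.1:6379> hset user:1 age 30
(integer) 1
127.0.0.1:6379> hget user:1 name
"alice"
127.0.0.1:6379> hgetall user:1
1) "name"
2) "alice"
3) "age"
4) "30"
127.0.0.1:6379> hdel user:1 age
(integer) 1
```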

Master-slave node operations

Temporarily enslaving a node, used while the program is running

  • Command to make an external node a temporary slave: slaveof 192.168.163.201 6379

    After connecting to the master node, the node's original data is emptied, then the master node synchronizes its data over

  • Command to release a temporary slave from its master: slaveof no one (the already-synchronized data is not cleared)
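An illustrative session on the node being enslaved, using the master address from above:

```
127.0.0.1:6379> slaveof 192.168.163.201 6379
OK
127.0.0.1:6379> info replication      # shows role:slave, master_host:192.168.163.201
127.0.0.1:6379> slaveof no one        # detach; the synchronized data is kept
OK
```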

Persistence

RDB

  • Saves a snapshot of memory to the dump.rdb file on disk (the storage directory is set in the configuration file)

  • Triggers (the save properties in the configuration file): 10000 changes within 60s; 10 changes within 300s; 1 change within 900s

  • A normal shutdown also triggers an RDB save; the snapshot is restored into memory at the next startup

  • Disabling RDB

    Temporarily: run config set save ""

    Permanently: set save "" in the configuration file and comment out the other save lines

  • One-off snapshots

    The save command triggers RDB manually and blocks while executing

    The bgsave command triggers RDB manually in the background, without blocking

  • RDB may lose the data written since the last snapshot
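The thresholds above correspond to the stock save lines in redis.conf:

```
# default snapshot triggers in redis.conf
save 900 1        # after 900s if at least 1 key changed
save 300 10       # after 300s if at least 10 keys changed
save 60 10000     # after 60s if at least 10000 keys changed

dbfilename dump.rdb
dir ./            # directory where the dump file is written
```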

AOF

  • Saves an operation log to the appendonly.aof file (the storage directory is set in the configuration file)

  • Redis prefers AOF when recovering data, i.e. it replays the log file (RDB has lower priority)

  • Rewriting the log carries a risk of data loss

  • redis.conf parameter configuration

    1. Enable AOF: appendonly yes

      AOF persistence is then used preferentially (RDB second)

    2. appendfsync strategy

      • always: persist on every write; no data loss, but the worst efficiency (the response is sent only after the log entry is persisted)

      • everysec: persist the log once per second, the default (no need to wait for log persistence)

      • no: leave flushing the log to the operating system; the data is least safe, but this is the fastest

    3. rewrite mechanism

      Merges the log file and compacts it by parsing (removing invalid entries)

      Configuration parameters

      Do not suspend fsync of new writes during a rewrite: no-appendfsync-on-rewrite no

      Growth multiple: auto-aof-rewrite-percentage, default 100 (i.e. double): the log file is rewritten once it grows by this percentage over its size after the last rewrite, avoiding repeated rewrites

      Size floor: auto-aof-rewrite-min-size, default 64mb (the initial upper limit on log file size; once the log file reaches this reference size, rewriting starts)

  • flushall empties the data. If the data must be recovered, the only way is to edit the AOF file (remove the flushall entry) before it is replayed
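The AOF parameters discussed above, as they appear in redis.conf:

```
appendonly yes                      # enable AOF
appendfilename "appendonly.aof"
appendfsync everysec                # always | everysec | no
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
```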

Master-slave replication

The client performs reads and writes on the master node, which then synchronizes the data to the slave nodes. Slave nodes usually only serve reads.

Master-slave replication thus separates reads and writes.

Background data synchronization: non-blocking asynchronous replication, performed once per second (if synchronization fails, a small amount of written data is lost)

Cascading: a slave node can itself have its own slave nodes

Master-slave node configuration

  • Copy the installation directory to the other nodes

    cd /opt/sxt
    scp -r redis-3.2.9 node2:`pwd`
    scp -r redis-3.2.9 node3:`pwd`
    cd /usr/local/bin
    scp * node2:`pwd`
    scp * node3:`pwd`
  • Modify the configuration file

    • Configure the local ip

      bind 127.0.0.1 192.168.163.202

    • On the slave, configure the master's IP and port (default port 6379)

      slaveof 192.168.163.201 6379

  • redis password

    • On the master node, configure requirepass <password>

    • On the slave nodes, configure masterauth <password>

  • Write safety on the master: the master performs a write operation only when these conditions are met

    Minimum number of connected slaves: min-slaves-to-write <number of slaves>

    Maximum replication lag: min-slaves-max-lag <number of seconds>

  • Start the master and slave nodes (run on all 3 nodes simultaneously)

    redis-server redis.conf

    redis-cli

Sentinel

Monitors the master and slave nodes in real time; on failure, it performs automatic failover.

The master-slave switch Sentinel performs during automatic failover is permanent: it modifies the replication configuration in each node's configuration file.

After the original master goes down and later comes back online, Sentinel reconfigures it as a slave node.

Sentinel nodes judge whether a node is offline by voting.

Configuring Sentinel

  1. Edit the configuration file

    vim /opt/sxt/redis-3.2.9/sentinel.conf

    • Modify: the name of the monitored master-slave cluster; the current master node's IP and port; the number of votes needed to declare it down (multiple small clusters can be configured)

      sentinel monitor mymaster 192.168.78.101 6379 2

      Through the master node, a sentinel discovers the slave nodes and the other sentinels

    • Modify the down-judgment time; by default a sentinel confirms a node as down after 30s

      sentinel down-after-milliseconds mymaster 30000

    • Failover timeout; if exceeded, the migration is considered to have failed

      sentinel failover-timeout mymaster 180000

    • During failover, how many slave nodes synchronize data from the new master at the same time

      sentinel parallel-syncs mymaster 1

    • Close Protected Mode

      protected-mode no

    Alternatively, use a configuration file of your own containing just the lines below, rather than modifying the sample file

    sentinel monitor mymaster 192.168.163.201 6379 2
    sentinel down-after-milliseconds mymaster 30000
    sentinel failover-timeout mymaster 180000
    sentinel parallel-syncs mymaster 1
    protected-mode no
  2. Copy the configuration file to the other nodes

    scp sentinel.conf node2:/opt/sxt/redis-3.2.9

    scp sentinel.conf node3:/opt/sxt/redis-3.2.9

  3. Start redis and Sentinel (in the redis directory)

    redis-server redis.conf

    redis-sentinel sentinel.conf

    In the sentinel logs you can see the sentinel nodes connecting to the redis nodes
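Once running, a sentinel can be queried over its own port (26379 by default) to inspect cluster state; an illustrative session, assuming the master configured above:

```
$ redis-cli -p 26379
127.0.0.1:26379> sentinel master mymaster            # state of the monitored master
127.0.0.1:26379> sentinel slaves mymaster            # discovered slave nodes
127.0.0.1:26379> sentinel get-master-addr-by-name mymaster
1) "192.168.163.201"
2) "6379"
```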

Distributed Cluster

Supported since version 3.0.

Principle: a hash is computed from the key (CRC16), then taken modulo a fixed value (16384), yielding one of 16384 hash slots; data is placed into slots by its hash value. The hash slots are distributed across the nodes of the cluster.

Hash slots can be migrated between nodes in the background without blocking the nodes.

By migrating its hash slots away under operator control, a node can be taken offline; the data is reassigned automatically.

The cluster also supports master-slave replication.
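The slot for any key can be inspected from a cluster client; a brief illustrative session (cluster built as described below):

```
127.0.0.1:7000> cluster keyslot foo      # CRC16("foo") mod 16384
(integer) 12182
127.0.0.1:7000> cluster nodes            # each node and its slot ranges
127.0.0.1:7000> cluster info             # cluster_state, slots assigned, etc.
```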

Data consistency problems

  • Synchronization loss: slave nodes synchronize data from the master by non-blocking replication, performed once per second (if synchronization fails, a small amount of written data is lost)

  • Cluster partition: a minority side, containing a client and at least one master node, becomes isolated from the rest of the cluster.

    After reconnecting, the split-off nodes lose some of the data written during the partition.

    Cluster partition mitigation: nodes that find themselves on the minority side log off automatically, reducing data loss

Building the distributed cluster

Configure 6 configuration files and start the 6 nodes with them

Configure different ports so that several redis nodes can run on the same machine

  1. Profile redis.conf

    • Set the port: port 7000

    • Clusters are available: cluster-enabled yes

    • Cluster configuration: cluster-config-file nodes.conf

    • Node timeout: cluster-node-timeout 5000

    • aof persistence: appendonly yes

    • Background processes: daemonize yes

    • Protected mode off: protected-mode no

    port 7001
    cluster-enabled yes
    cluster-config-file nodes.conf
    cluster-node-timeout 5000
    appendonly yes
    daemonize yes
    protected-mode no
  2. Put the different configuration files into different folders, so that the persistent data goes into different directories

    For example: create a redis-cluster directory, and inside it create a 7000 folder and a 7001 folder

    The 7000 directory holds the port-7000 configuration file; the 7001 directory holds the port-7001 configuration file

    With this configuration, three machines each run two redis nodes, one on port 7000 and one on port 7001

  3. Start each redis node

  4. Install the ruby dependency (run on a single node)

    yum install ruby

    yum install rubygems

    gem install redis-3.2.1.gem (installs the local redis-3.2.1.gem; run the command in the directory containing the gem file)

  5. Create the cluster

    The --replicas parameter:

    1 means each master node is assigned one slave node, i.e. with 6 nodes, 3 masters and 3 slaves

    2 means each master node is assigned two slave nodes, i.e. with 6 nodes, 2 masters and 4 slaves

    Finally, list each node's IP and port

    cd /opt/sxt/redis-3.2.9/src 
    ./redis-trib.rb create --replicas 1 192.168.163.201:7000 192.168.163.201:7001 192.168.163.202:7000 192.168.163.202:7001 192.168.163.203:7000 192.168.163.203:7001
  6. Connecting to the cluster

    redis-cli -p 7000 -c (connect to port 7000; -c allows automatic switching between nodes)

    redis-cli -h node1 -p 7000 -c (specify host + port, allowing automatic switching between nodes)

  7. Shutting down the cluster

    Execute the shutdown command in the client

    Or shut down a node from linux: redis-cli -p 7001 shutdown
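With the -c option from step 6, redis-cli follows slot redirects transparently; an illustrative session (the target address depends on the cluster's slot layout):

```
$ redis-cli -p 7000 -c
127.0.0.1:7000> set foo bar
-> Redirected to slot [12182] located at 192.168.163.202:7000
OK
192.168.163.202:7000> get foo
"bar"
```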

Redis API


Origin www.cnblogs.com/javaxiaobu/p/11703011.html