Cache database Redis - deployment and configuration

Relational databases and non-relational databases

Relational Database:

A database built on the relational model, generally oriented around records (rows).
Examples include Oracle, MySQL, SQL Server, and DB2.

Non-relational databases:

Apart from the mainstream relational databases, any other database is considered non-relational.
Examples include Redis, MongoDB, HBase, and CouchDB.

Non-relational database background

  • High demand for concurrent reads and writes on the database
  • Efficient storage and access of massive amounts of data
  • High scalability and high availability of the database

Redis Introduction

Redis runs in memory and supports persistence.

Data is stored in the form of key-value pairs.

Advantages:

  • Very fast data reads and writes
  • Support for rich data types (see the sketch after this list)
  • Support for data persistence
  • Atomicity
  • Support for data backup
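
As a minimal illustration of those data types, the sketch below writes one key of each kind with redis-cli (it assumes an instance is already running locally on the default port; all key names are made up for this example):

[root@localhost ~]# redis-cli                       ## connect to the local instance
127.0.0.1:6379> SET title hello                     ## string
OK
127.0.0.1:6379> LPUSH tasks a b                     ## list
(integer) 2
127.0.0.1:6379> HSET user name zhangsan             ## hash
(integer) 1
127.0.0.1:6379> SADD tags web cache                 ## set
(integer) 2
127.0.0.1:6379> ZADD board 100 player1              ## sorted set
(integer) 1
127.0.0.1:6379> TYPE tasks                          ## check the type of a key
list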

Installation and Configuration Redis

1. Install the necessary environment components, then install Redis

[root@localhost ~]# yum install gcc gcc-c++ make -y  ## install the build environment components
[root@localhost ~]# mkdir /mnt/tools
[root@localhost ~]# mount.cifs //192.168.100.100/tools /mnt/tools/  ## mount the shared directory
Password for root@//192.168.100.100/tools:  
[root@localhost ~]# cd /mnt/tools/redis/
[root@localhost redis]# ls
redis-5.0.7.tar.gz
[root@localhost redis]# tar xf redis-5.0.7.tar.gz -C /opt/   ## extract
[root@localhost redis]# cd /opt/
[root@localhost opt]# ls
redis-5.0.7  rh
[root@localhost opt]# cd redis-5.0.7/
[root@localhost redis-5.0.7]# make   ## compile
..........// process output omitted
[root@localhost redis-5.0.7]# make PREFIX=/usr/local/redis/ install   ## install
..........// process output omitted

2. Run the Redis setup script and configure the service

[root@localhost redis-5.0.7]# cd utils/
[root@localhost utils]# ./install_server.sh    ## run the setup script
Welcome to the redis service installer
This script will help you easily set up a running redis server

Please select the redis port for this instance: [6379]   ## default port
Selecting default: 6379
Please select the redis config file name [/etc/redis/6379.conf]   ## config file
Selected default - /etc/redis/6379.conf
Please select the redis log file name [/var/log/redis_6379.log]   ## log file
Selected default - /var/log/redis_6379.log
Please select the data directory for this instance [/var/lib/redis/6379]   ## data directory
Selected default - /var/lib/redis/6379
Please select the redis executable path [] /usr/local/redis/bin/redis-server
## path to the redis-server executable
Selected config:
Port           : 6379
Config file    : /etc/redis/6379.conf
Log file       : /var/log/redis_6379.log
Data dir       : /var/lib/redis/6379
Executable     : /usr/local/redis/bin/redis-server
Cli Executable : /usr/local/redis/bin/redis-cli
Is this ok? Then press ENTER to go on or Ctrl-C to abort.
Copied /tmp/6379.conf => /etc/init.d/redis_6379
Installing service...
Successfully added to chkconfig!
Successfully added to runlevels 345!
Starting Redis server...
Installation successful! 
[root@localhost utils]# ln -s /usr/local/redis/bin/* /usr/local/bin/   ## make the binaries available in PATH
[root@localhost utils]# netstat -ntap | grep 6379    ## check the listening port
tcp        0      0 127.0.0.1:6379          0.0.0.0:*               LISTEN      18004/redis-server  
[root@localhost utils]# /etc/init.d/redis_6379 stop  ## stop redis
Stopping ...
Redis stopped
[root@localhost utils]# netstat -ntap | grep 6379   ## check the listening port again
tcp        0      0 127.0.0.1:6379          127.0.0.1:33970         TIME_WAIT   -   
[root@localhost utils]# /etc/init.d/redis_6379 start  ## start redis
Starting Redis server...
[root@localhost utils]# netstat -ntap | grep 6379
tcp        0      0 127.0.0.1:6379          0.0.0.0:*               LISTEN      18091/redis-server  
tcp        0      0 127.0.0.1:6379          127.0.0.1:33970         TIME_WAIT   -                   
[root@localhost utils]# 
[root@localhost utils]# vim /etc/redis/6379.conf   ## edit the config file
bind 127.0.0.1 192.168.52.149  ## set the listening addresses
[root@localhost utils]# /etc/init.d/redis_6379 restart  ## restart the redis service
Stopping ...
Redis stopped
Starting Redis server...

Basic Redis database operations

[root@localhost utils]# redis-cli -h 192.168.52.149 -p 6379   ## log in to redis
192.168.52.149:6379> help @list    ## show help for the list command group

  BLPOP key [key ...] timeout
  summary: Remove and get the first element in a list, or block until one is available
  since: 2.0.0
...............................// partial output omitted
 RPUSHX key value
  summary: Append a value to a list, only if the list exists
  since: 2.2.0
192.168.52.149:6379> help set  ## help for a single command

    SET key value [expiration EX seconds|PX milliseconds] [NX|XX]
    summary: Set the string value of a key
    since: 1.0.0
    group: string

192.168.52.149:6379> set name zhangsan   ## set a key-value pair
OK
192.168.52.149:6379> set net www
OK
192.168.52.149:6379> KEYS *   ## list all keys
1) "name"
2) "net"
192.168.52.149:6379> KEYS n??  ## list keys starting with n followed by exactly two characters
1) "net"
192.168.52.149:6379> KEYS n*   ## list keys starting with n
1) "name"
2) "net"
192.168.52.149:6379> GET net   ## get the value of a key
"www"
192.168.52.149:6379> EXISTS net  ## check whether a key exists
(integer) 1     ## 1 means it exists, 0 means it does not
192.168.52.149:6379> EXISTS nat
(integer) 0
192.168.52.149:6379> del net  ## delete a key
(integer) 1
192.168.52.149:6379> KEYS *
1) "name"
192.168.52.149:6379> type name   ## check the type of a key
string
192.168.52.149:6379> rename name n1   ## rename a key
OK
192.168.52.149:6379> KEYS *
1) "n1"
192.168.52.149:6379> get n1
"zhangsan"
192.168.52.149:6379> hset person score 80    ## create a key-value pair using a hash
(integer) 1
192.168.52.149:6379> hset person name zhangsan   ## create a key-value pair using a hash
(integer) 1
192.168.52.149:6379> hset person age 30   ## create a key-value pair using a hash
(integer) 1
192.168.52.149:6379> KEYS *
1) "person"
192.168.52.149:6379> hget person name   ## get a field value from the hash
"zhangsan"
192.168.52.149:6379> hget person age   ## get a field value from the hash
"30"
192.168.52.149:6379> set name lisi   ## set a key-value pair
OK
192.168.52.149:6379> KEYS *
1) "name"
2) "person"
192.168.52.149:6379> EXPIRE name 10   ## set the key to expire automatically after 10s
(integer) 1
192.168.52.149:6379> KEYS *    ## list all keys within the 10s
1) "name"
2) "person"
192.168.52.149:6379> KEYS *   ## list all keys after the 10s
1) "person"
192.168.52.149:6379> exit   ## quit

3. Stress testing (benchmark)

[root@localhost utils]# redis-benchmark -h 192.168.52.149 -p 6379 -c 100 -n 100000
## 100 concurrent clients, 100000 requests
====== SET ======
    100000 requests completed in 1.14 seconds   ## time taken to complete the requests
    100 parallel clients
    3 bytes payload
    keep alive: 1

84.66% <= 1 milliseconds
98.48% <= 2 milliseconds
99.69% <= 3 milliseconds
99.90% <= 18 milliseconds
100.00% <= 18 milliseconds
87642.41 requests per second

====== GET ======
    100000 requests completed in 1.13 seconds
    100 parallel clients
    3 bytes payload
    keep alive: 1
[root@localhost utils]# redis-benchmark -h 192.168.52.149 -p 6379 -q -d 100
    ## -d sets the size of the SET/GET value in bytes; -q prints only the requests-per-second summary
    SET: 90497.73 requests per second
    GET: 90991.81 requests per second
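
If only specific commands need to be measured, redis-benchmark also accepts a -t option listing the tests to run; a minimal sketch reusing the host and request count from above:

[root@localhost utils]# redis-benchmark -h 192.168.52.149 -p 6379 -t set,get -n 100000 -q
    ## run only the SET and GET tests, printing just the requests-per-second summary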

4. Moving keys to another database (there are 16 databases in total)

[root@localhost utils]# redis-cli -h 192.168.52.149 -p 6379         ## enter Redis
192.168.52.149:6379> KEYS *
1) "mylist"
2) "counter:__rand_int__"
3) "n1"
4) "key:__rand_int__"
5) "myset:__rand_int__"
192.168.52.149:6379> SELECT 10  ## switch to the 11th database (index 10)
OK
192.168.52.149:6379[10]> KEYS *
(empty list or set)
192.168.52.149:6379[10]> SELECT 0  ## switch back to the 1st database (index 0)
OK
192.168.52.149:6379> MOVE n1 10  ## move the key to the 11th database
(integer) 1
192.168.52.149:6379> KEYS *
1) "mylist"
2) "counter:__rand_int__"
3) "key:__rand_int__"
4) "myset:__rand_int__"
192.168.52.149:6379> SELECT 10  ## switch to the 11th database
OK
192.168.52.149:6379[10]> KEYS *   ## list keys
1) "n1"
192.168.52.149:6379[10]> GET n1
"zhangsan"
192.168.52.149:6379[10]> FLUSHDB  ## clear the data in the current database
OK
192.168.52.149:6379[10]> KEYS *     ## list all keys
(empty list or set)
192.168.52.149:6379[10]> SELECT 0   ## switch back to the 1st database
OK
192.168.52.149:6379> KEYS *      ## list all keys
1) "myset:__rand_int__"
2) "mylist"
3) "key:__rand_int__"
4) "counter:__rand_int__"
192.168.52.149:6379> exit
[root@localhost utils]# 

Redis persistence

Redis runs in memory, so the data in memory is lost once the process stops.
To be able to reuse the data after Redis restarts, or to guard against system failures, the data in Redis needs to be written to disk; this is called persistence.

Persistence classification

  • RDB: takes snapshots, producing a copy of all the data in Redis at a point in time
  • AOF: records data changes as a log by appending every write command to the end of a file

RDB persistence

The default Redis persistence mode;
the default file name is dump.rdb

Triggering conditions:

  • A specified number of write operations occurs within a specified time interval (controlled by the config file)
  • The save or bgsave (asynchronous) command is executed (see the sketch after this list)
  • The flushall command is executed, emptying all data in all databases
  • The shutdown command is executed, ensuring the server shuts down normally without losing any data
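
A minimal sketch of triggering a snapshot by hand from redis-cli (BGSAVE forks and saves in the background, SAVE blocks the server while it saves):

192.168.52.149:6379> BGSAVE
Background saving started
192.168.52.149:6379> SAVE
OK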

Advantages and disadvantages:

  • Suitable for large-scale data recovery
  • If the business has low requirements for data integrity and consistency, RDB is a good choice
  • Data integrity and consistency are relatively low (anything written after the last snapshot can be lost)
  • Taking the backup (snapshot) consumes extra memory

Recovering data from an RDB file

Copy the dump.rdb file into the Redis data directory (the dir setting in the config file, /var/lib/redis/6379 in this setup), then restart the Redis service.
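
A minimal recovery sketch using the paths from this article (the backup location /opt/backup/dump.rdb is only an example):

[root@localhost ~]# /etc/init.d/redis_6379 stop                     ## stop redis first
[root@localhost ~]# cp /opt/backup/dump.rdb /var/lib/redis/6379/    ## put the snapshot back in the data directory
[root@localhost ~]# /etc/init.d/redis_6379 start                    ## redis loads dump.rdb on startup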

Configuring RDB persistence

[root@localhost utils]# vim /etc/redis/6379.conf 

# at least 1 write operation within 900 seconds
save 900 1

# at least 10 write operations within 300 seconds
save 300 10

# at least 10000 write operations within 60 seconds
save 60 10000

# meeting any one of these conditions triggers a snapshot; commenting out all save lines disables RDB

# RDB file name
dbfilename dump.rdb

# RDB file path
dir /var/lib/redis/6379

# enable compression
rdbcompression yes

AOF persistence

Not enabled by default in Redis;
it makes up for the shortcoming of RDB (possible data loss/inconsistency).
Every write operation is recorded as a log entry and appended to a file.
When Redis restarts, it replays the write commands in the log file from beginning to end to rebuild the data.

Recovering data from an AOF file

Copy the appendonly.aof file into the Redis data directory (the dir setting in the config file), then restart the Redis service.
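
If the AOF file may have been truncated (for example by a crash), the redis-check-aof tool shipped with Redis can verify it, and with --fix truncate any incomplete trailing command, before the restart; a minimal sketch:

[root@localhost ~]# /usr/local/redis/bin/redis-check-aof /var/lib/redis/6379/appendonly.aof         ## verify only
[root@localhost ~]# /usr/local/redis/bin/redis-check-aof --fix /var/lib/redis/6379/appendonly.aof   ## repair by truncating the incomplete part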

AOF persistence configuration

[root@localhost utils]# vim /etc/redis/6379.conf 

# enable AOF persistence
appendonly yes

# AOF file name
appendfilename "appendonly.aof"

# always: synchronous persistence; every data change is written to disk immediately
# appendfsync always

# everysec: recommended; asynchronously sync once per second (the default)
appendfsync everysec

# no: do not sync; let the operating system decide when to flush
# appendfsync no

# ignore a possibly truncated last command when loading the AOF
aof-load-truncated yes

AOF rewrite mechanism

AOF works by appending every write to the file, so the file accumulates more and more redundant content.
When the AOF file size exceeds the configured threshold, Redis compacts (rewrites) the contents of the AOF file.

AOF rewrite principle

Redis forks a new process that reads the data in memory (rather than reading the old file), rewrites it to a temporary file, and finally replaces the old AOF file with it.
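
Besides the automatic thresholds configured below, a rewrite can also be triggered by hand from redis-cli; a minimal sketch:

192.168.52.149:6379> BGREWRITEAOF
Background append only file rewriting started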

AOF rewrite the configuration

[root@localhost utils]# vim /etc/redis/6379.conf 
# while BGREWRITEAOF is running, yes means new writes are not fsynced
# but only buffered, to avoid contending for disk I/O; they are flushed after the rewrite finishes. The Redis default is no
no-appendfsync-on-rewrite no

# trigger BGREWRITEAOF when the current AOF file is twice the size it was after the last rewrite
auto-aof-rewrite-percentage 100

# minimum AOF file size before BGREWRITEAOF is triggered,
# to avoid frequent rewrites right after Redis starts while the file is still small
auto-aof-rewrite-min-size 64mb

Redis Performance Management

## check redis memory usage
[root@localhost utils]# /usr/local/redis/bin/redis-cli
127.0.0.1:6379> info memory

Memory fragmentation ratio

● The memory fragmentation ratio is calculated by dividing used_memory_rss (the amount of memory allocated by the operating system) by used_memory (the amount of memory used by Redis)
● Memory fragmentation is caused by the operating system allocating and reclaiming physical memory inefficiently, so the physical memory backing the allocations is not contiguous
● Tracking the memory fragmentation ratio is very important for understanding the resource performance of a Redis instance:
  a ratio slightly greater than 1 is reasonable and indicates low memory fragmentation;
  a ratio above 1.5 means Redis is consuming 150% of the physical memory it actually needs, 50% of which is fragmentation;
  a ratio below 1 means Redis has allocated more memory than the available physical memory and the operating system is swapping
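
A quick way to read the fields discussed above from info memory (a sketch; the grep pattern simply filters those three fields):

[root@localhost utils]# redis-cli info memory | grep -E "used_memory:|used_memory_rss:|mem_fragmentation_ratio:"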

Memory Usage

● If a Redis instance's memory usage exceeds the maximum available memory, the operating system will start swapping between memory and swap space
● To avoid memory swapping (a maxmemory sketch follows this list):
  choose an appropriate amount of memory for the size of the cached data;
  use the hash data structure as much as possible;
  set expiration times on keys
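
A minimal sketch of capping Redis memory in the config file (the 1gb value is only an example and should be sized to the host's available RAM):

[root@localhost utils]# vim /etc/redis/6379.conf
# maximum amount of memory redis may use for data
maxmemory 1gb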

Key eviction

● Ensures reasonable allocation of Redis's limited memory resources
● When memory usage reaches the configured maximum threshold, a key eviction policy has to be chosen;
  the default policy is not to delete any keys (writes are refused instead);
  modify the maxmemory-policy property in the redis.conf configuration file to one of the values below (a config sketch follows the list)

  • volatile-lru: evict data using the LRU algorithm from the set of keys that have an expiration time
  • volatile-ttl: evict the keys closest to expiring from the set of keys that have an expiration time (recommended)
  • volatile-random: evict random keys from the set of keys that have an expiration time
  • allkeys-lru: evict data using the LRU algorithm from the whole key space
  • allkeys-random: evict random keys from the whole key space
  • noeviction: do not evict any data; writes that would exceed the memory limit return an error
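
A minimal sketch of setting the policy in the config file (volatile-ttl is simply the option recommended above; any value from the list can be used instead):

[root@localhost utils]# vim /etc/redis/6379.conf
# evict the keys closest to expiry first once maxmemory is reached
maxmemory-policy volatile-ttl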

Origin blog.51cto.com/14449541/2460723