Redis slow log and Redis master-slave replication configuration

Redis slow query log:

vim /etc/redis.conf

slowlog-log-slower-than 10000

Like MySQL, Redis has a slow query log. This parameter means that any command whose execution time exceeds 10000 microseconds (10 milliseconds; 1 second = 1,000,000 microseconds) is recorded in the slow log.

slowlog-max-len 128

This parameter defines the maximum number of slow log entries that are kept. The slow log itself is stored in memory; once the limit is reached, the oldest entries are dropped to make room for new ones.

Supplement (slowlog commands, run in redis-cli):

slowlog get //list all slow query log entries

slowlog get 2 //list only the 2 most recent entries

slowlog len //show the number of slow query log entries
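To verify that the slow log is working on a test instance, the threshold can be temporarily lowered to 0 so that every command is recorded (remember to restore the original value afterwards), for example:

config set slowlog-log-slower-than 0 //temporarily record every command (testing only)

slowlog reset //clear the existing entries

slowlog get 1 //the most recent command now appears here

config set slowlog-log-slower-than 10000 //restore the original threshold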

Using Redis in PHP (PHP was installed in an earlier document; install PHP first if it is not already present)

Install the PHP redis extension module. Method 1: install with pecl

/usr/local/php-fpm/bin/pecl install redis

Error: Cannot find autoconf. Please check your autoconf installation and the $PHP_AUTOCONF environment variable. Then, rerun this script.

ERROR: `phpize' failed

Solution: yum -y install m4 autoconf

Re-install using pecl:

/usr/local/php-fpm/bin/pecl install redis

vim /usr/local/php-fpm/etc/php.ini //add extension = redis.so at the end of the file

Check whether the redis module is now listed:

/usr/local/php-fpm/bin/php -m

Method 2: install from source code

wget http://pecl.php.net/get/redis-5.3.2.tgz

tar -xzvf redis-5.3.2.tgz

cd redis-5.3.2

/usr/local/php-fpm/bin/phpize

./configure --with-php-config=/usr/local/php-fpm/bin/php-config

make && make install

vi /usr/local/php-fpm/etc/php.ini //add extension = redis.so
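Whichever installation method is used, a short script can confirm that PHP can actually talk to Redis. The following is a minimal sketch (it assumes Redis is listening on 127.0.0.1:6379 without a password; the file name redis_test.php is just an example):

<?php
// redis_test.php - minimal connectivity check using the phpredis extension
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$redis->set('test_key', 'test_value');
echo $redis->get('test_key') . "\n"; // should print "test_value"

Run it with /usr/local/php-fpm/bin/php redis_test.php; if the extension is loaded correctly, it prints test_value.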

Using Redis to store PHP sessions

vim /usr/local/php-fpm/etc/php.ini

;session.save_handler = files

Add the following two lines below it:

session.save_handler = "redis"

session.save_path = "tcp://127.0.0.1:6379"

Alternatively, it can be configured in the Apache virtual host configuration file like this:

php_value session.save_handler "redis"

php_value session.save_path "tcp://127.0.0.1:6379"

Or add it to the corresponding pool section in the php-fpm configuration file:

php_value[session.save_handler] = redis

php_value[session.save_path] = "tcp://127.0.0.1:6379"
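To confirm which values the php binary actually picked up from php.ini, the loaded settings can be checked as below (per-virtual-host or per-pool php_value overrides will not show up here):

/usr/local/php-fpm/bin/php -i | grep -E 'session.save_handler|session.save_path'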

Create test file

wget http://study.lishiming.net/.mem_se.txt

mv .mem_se.txt session.php

Test: /usr/local/php-fpm/bin/php session.php
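The downloaded script simply exercises the session handler; a minimal equivalent sketch (not the contents of the original file) looks roughly like this:

<?php
// rough equivalent of a session test script (a sketch, not the downloaded file)
session_start();
if (!isset($_SESSION['count'])) {
    $_SESSION['count'] = 0;
}
$_SESSION['count']++;
echo 'session id: ' . session_id() . ', count: ' . $_SESSION['count'] . "\n";

If the redis handler is working, a key prefixed with PHPREDIS_SESSION: appears in Redis and can be inspected with redis-cli.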

Redis master-slave configuration

Through its persistence features, Redis ensures that data is not lost (or only a small amount is lost) when the server restarts. However, because the data is stored on a single server, a server failure such as a broken hard disk can still cause data loss.

To avoid this single point of failure, we deploy copies of the data on several different servers, so that if one server fails, the others can continue to provide service.

This requires that whenever the data on one server is updated, the update is automatically synchronized to the other servers, and that is exactly what Redis master-slave replication provides.

Two Linux machines:

Master Redis: 192.168.111.136

Slave Redis: 192.168.111.140

Configure the master Redis (set a password, enable AOF, and bind its addresses):

vim /etc/redis.conf

requirepass admin123

appendonly yes

bind 127.0.0.1 192.168.111.136

Modify the configuration on the slave Redis:

vim /etc/redis.conf

# replicaof <masterip> <masterport>

replicaof 192.168.111.136 6379

# masterauth <master-password>

masterauth admin123

bind 127.0.0.1 192.168.111.140

Restart the Redis service on both the master and the slave:

systemctl restart redis

Test: create a key on the master Redis and check whether it shows up on the slave.

Master Redis:

[root@jinkai redis-6.0.6]# redis-cli

127.0.0.1:6379> auth admin123

OK

127.0.0.1:6379> info replication

# Replication

role:master

connected_slaves:1

slave0:ip=192.168.111.140,port=6379,state=online,offset=294,lag=0

master_replid:a678d12ebaea9481bc9d322ef9b9fcc8330a2c3d

master_replid2:0000000000000000000000000000000000000000

master_repl_offset:294

second_repl_offset:-1

repl_backlog_active:1

repl_backlog_size:1048576

repl_backlog_first_byte_offset:1

repl_backlog_histlen:294

127.0.0.1:6379> flushall

OK

127.0.0.1:6379> set km1 vm1

OK

127.0.0.1:6379> get km1

"vm1"

Slave Redis (no requirepass was set on the slave itself, so the -a password is unnecessary and triggers the AUTH failed warning below):

[root@jinkai redis-6.0.6]# redis-cli -a admin123

Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.

Warning: AUTH failed

127.0.0.1:6379> info replication

# Replication

role:slave

master_host:192.168.111.136

master_port:6379

master_link_status:up

master_last_io_seconds_ago:3

master_sync_in_progress:0

slave_repl_offset:350

slave_priority:100

slave_read_only:1

connected_slaves:0

master_replid:a678d12ebaea9481bc9d322ef9b9fcc8330a2c3d

master_replid2:0000000000000000000000000000000000000000

master_repl_offset:350

second_repl_offset:-1

repl_backlog_active:1

repl_backlog_size:1048576

repl_backlog_first_byte_offset:99

repl_backlog_histlen:252

127.0.0.1:6379> get km1

"vm1"

By default the slave is read-only (replica-read-only yes) and rejects writes, as shown below; if the option is changed to no, data can also be written directly to the slave:

127.0.0.1:6379> set ks1 vs1

(error) READONLY You can't write against a read only replica.
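In application code, the pattern that follows from this is to send writes to the master and reads to the replica. Below is a minimal phpredis sketch using the addresses and password configured above (the key km2 is just an example):

<?php
// write on the master, read from the read-only replica (sketch based on the setup above)
$master = new Redis();
$master->connect('192.168.111.136', 6379);
$master->auth('admin123'); // requirepass configured on the master

$replica = new Redis();
$replica->connect('192.168.111.140', 6379); // no requirepass was set on the replica

$master->set('km2', 'vm2'); // writes must go to the master
usleep(100000); // replication is asynchronous, so give it a moment
echo $replica->get('km2') . "\n"; // the replicated value can be read from the replica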

Origin: blog.51cto.com/11451960/2640782