1. Enterprise-level persistent configuration strategy
In the enterprise, the RDB generation strategy usually stays close to the default.
save 60 10000: if you want the RDB to lose at most about 1 minute of data, arrange for a snapshot to be generated roughly every minute. During off-peak periods the write volume is small, so no snapshot is triggered, and none is needed.
Whether 10000 changes trigger an RDB or 1000 do is up to you; decide based on your own application and business data.
AOF must be turned on, with appendfsync set to everysec.
auto-aof-rewrite-percentage 100: trigger a rewrite once the current AOF has swelled to more than 100% of its size after the last rewrite, i.e. to twice that size.
auto-aof-rewrite-min-size 64mb: adjust according to your data volume, e.g. 16mb or 32mb.
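Taken together, the settings above correspond to a redis.conf fragment like this (the save thresholds are illustrative; tune them to your own write volume):

```
# snapshot if at least 10000 keys changed within 60 seconds
save 60 10000

# AOF on, fsync once per second
appendonly yes
appendfsync everysec

# rewrite once the AOF doubles, but never below 64mb
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
```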
2. Enterprise-level data backup solution
RDB is very well suited for cold backup: once a snapshot file has been generated, it is never modified again.
Data backup solution
(1) Write a crontab-scheduled script to perform the data backups.
(2) Every hour, copy an RDB backup into a directory of its own, keeping only the backups of the last 48 hours.
(3) Every day, keep one copy of that day's RDB backup in a directory of its own, keeping only the backups of the most recent month.
(4) Each time a backup is copied, delete the old backups that have aged out.
(5) Every night, upload all the backup data on the current server to a remote cloud service.
/usr/local/redis
Copy to the backup directory every hour, deleting backups from more than 48 hours ago.
crontab -e
0 * * * * sh /usr/local/redis/copy/redis_rdb_copy_hourly.sh
redis_rdb_copy_hourly.sh
#!/bin/sh
# Hourly RDB cold backup; keeps the last 48 hours of snapshots.
# Note: %H (zero-padded hour) instead of %k, which pads with a space
# and would break the directory paths below.
cur_date=`date +%Y%m%d%H`
rm -rf /usr/local/redis/snapshotting/$cur_date
mkdir -p /usr/local/redis/snapshotting/$cur_date
cp /var/redis/6379/dump.rdb /usr/local/redis/snapshotting/$cur_date

del_date=`date -d -48hour +%Y%m%d%H`
rm -rf /usr/local/redis/snapshotting/$del_date
Copy once a day, deleting backups from more than a month ago.
crontab -e
0 0 * * * sh /usr/local/redis/copy/redis_rdb_copy_daily.sh
redis_rdb_copy_daily.sh
#!/bin/sh
# Daily RDB cold backup; keeps roughly the last month of snapshots.
cur_date=`date +%Y%m%d`
rm -rf /usr/local/redis/snapshotting/$cur_date
mkdir -p /usr/local/redis/snapshotting/$cur_date
cp /var/redis/6379/dump.rdb /usr/local/redis/snapshotting/$cur_date

del_date=`date -d -1month +%Y%m%d`
rm -rf /usr/local/redis/snapshotting/$del_date
Upload all data to a remote cloud server once a day
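The nightly upload can be sketched as follows, assuming the backup directory layout above. The archive helper is generic; the bucket name and the aws CLI upload line are placeholders for whatever your cloud provider offers:

```shell
#!/bin/sh
# Pack a backup directory into a single compressed archive so one
# object can be uploaded, rather than many small snapshot files.
archive_backups() {
    src_dir="$1"    # directory holding the RDB snapshots
    out_file="$2"   # tar.gz archive to produce
    tar -czf "$out_file" -C "$src_dir" .
}

# Intended nightly use (hypothetical bucket; provider CLI may differ):
# archive_backups /usr/local/redis/snapshotting /tmp/redis_backup.tar.gz
# aws s3 cp /tmp/redis_backup.tar.gz s3://my-redis-backups/
```

Scheduled the same way as the other scripts, e.g. `0 3 * * * sh /usr/local/redis/copy/redis_rdb_copy_to_cloud.sh` in crontab.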
3. Data recovery plan
(1) If the redis process dies, simply restart the redis process; the data is restored directly from the AOF log file.
This was already demonstrated in the AOF data-recovery section: with appendfsync everysec, at most about one second of data is lost.
(2) If the machine hosting the redis process goes down, then after restarting the machine, try to restart the redis process and recover the data directly from the AOF log file.
If the AOF is undamaged, recovery from the AOF works directly.
The AOF is append-only and written sequentially, so damage usually sits at the tail of the file; if the AOF file is damaged, repair it with redis-check-aof --fix before restarting.
(3) If the latest AOF and RDB files in redis are both lost or corrupted, you can try to recover the data from the latest RDB cold-backup copy kept on the machine.
Having the current latest AOF and RDB files lost or damaged beyond recovery is generally not a machine fault but human error.
It happens even in big-data systems such as hadoop: someone accidentally runs rm -rf on the directory holding a large number of data files. At a friend's small company the operations team was unreliable and permissions were poorly controlled.
Suppose the files under /var/redis/6379 have been deleted.
Find the latest RDB backup; the hour-level backup is necessarily the freshest. Copy it back into the redis data directory and you can restore to within an hour of the failure.
Disaster Recovery Drill
With appendonly.aof + dump.rdb both present, redis prefers appendonly.aof for recovery; but in the drill we found that the appendonly.aof redis had generated automatically contained no data.
Our copied dump.rdb did have data, but it was obviously not used.
On startup, redis also regenerated a fresh RDB snapshot from the (empty) data in memory, directly overwriting the dump.rdb we had copied in with empty data.
So after stopping redis, the obvious fix seems to be: delete appendonly.aof, copy in our dump.rdb, then restart redis.
That does not work either: even with appendonly.aof deleted, because AOF persistence is enabled, redis still tries to recover from AOF first; finding no file, it creates a new empty AOF file and ignores the RDB.
Stop redis, temporarily disable AOF in the configuration, copy in the RDB backup, then restart redis: can the data be recovered? Yes, it can.
But if, in a hot-headed moment, you then stop redis, manually edit the configuration file to re-enable AOF, and restart redis, the data is gone again: an empty AOF file is loaded and everything is wiped.
So, after data loss, how do you recover perfectly from an RDB cold backup while keeping both AOF and RDB enabled?
Stop redis, disable AOF in the configuration file, copy in the RDB backup, restart redis, and confirm the data has been recovered. Then enable AOF on the fly from the command line with config set appendonly yes; redis will write the log corresponding to the data currently in memory into the AOF file.
At that point the aof and rdb data files are in sync.
config set is a hot modification: it does not permanently change the configuration file on disk. So stop redis once more, manually edit the configuration file to enable AOF, and restart redis.
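The whole procedure can be sketched as a shell session. This is an operational sketch, not a fixed recipe: the config path /etc/redis/6379.conf and the backup placeholder are assumptions for a typical installation, and before the final restart you should let the AOF rewrite triggered in step 4 finish:

```shell
# 1. Stop redis and disable AOF in the config file.
redis-cli shutdown nosave
sed -i 's/^appendonly yes/appendonly no/' /etc/redis/6379.conf

# 2. Copy the cold-backup RDB into the data directory.
cp /usr/local/redis/snapshotting/<backup>/dump.rdb /var/redis/6379/dump.rdb

# 3. Start redis and confirm the data is back.
redis-server /etc/redis/6379.conf
redis-cli dbsize

# 4. Re-enable AOF hot; redis rewrites a full AOF from memory.
redis-cli config set appendonly yes

# 5. Make it permanent: stop redis, edit the config file, restart.
redis-cli shutdown
sed -i 's/^appendonly no/appendonly yes/' /etc/redis/6379.conf
redis-server /etc/redis/6379.conf
```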
(4) If all the RDB files on the current machine are damaged as well, pull the latest RDB snapshot down from the remote cloud service to recover the data.
(5) If a major data error is discovered, for example a program that went live at a certain hour has completely polluted the data and everything is wrong, you can choose an earlier point in time to recover to.
For example, code went live at 12 o'clock and was then found to have a bug, so all the cached data it generated and wrote into redis was wrong.
Find the 11 o'clock RDB cold backup and follow the steps above to restore the data to its 11 o'clock state.