Redis dump data optimization (repost)

It has been two months since our Redis overhaul project was launched. Here are some notes on our experience with Redis, for reference:
Our Redis deployment is one master and one slave on R710 hardware (8 cores, 24 GB of memory). About 2 million records are inserted every day, and the store now holds roughly 30 million records, occupying 9 GB of memory. Because memory grows so quickly, I worried it would soon no longer fit, so I wrote a script that deletes expired data once a day (a sketch of such a cleanup job is shown below).
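A minimal sketch of what such a daily cleanup job could look like, in Python with redis-py; the "record:*" key prefix, the "created_at" hash field, the 30-day retention, and the connection details are assumptions for illustration, not the author's actual script:

    # cleanup_expired.py -- hypothetical daily cleanup job, not the author's script
    import time
    import redis

    EXPIRE_AFTER = 30 * 24 * 3600                      # assumed retention: 30 days
    r = redis.Redis(host="localhost", port=6379)

    # SCAN iterates incrementally and keeps the server responsive;
    # KEYS would block it while walking ~30 million records.
    for key in r.scan_iter(match="record:*", count=1000):   # assumed key prefix
        ts = r.hget(key, "created_at")                       # assumed per-record timestamp field
        if ts is not None and time.time() - float(ts) > EXPIRE_AFTER:
            r.delete(key)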
The problems we have run into in operation:
1. Redis itself runs basically stably, and the service has never gone down on its own. Driving SET from a PHP script, we can write about 10,000 small values per second, which is lower than the official figures (see the benchmark sketch after this list). However, the service has to be restarted after changing the configuration, and it then takes 1 to 2 minutes to fully load the data from disk back into memory; until loading completes, Redis cannot serve requests.
2. With the default configuration, Redis dumps to disk whenever 10,000 records have changed within 60 seconds (the save 60 10000 rule). In practice we exceed that threshold constantly, so our Redis is dumping data to disk almost nonstop. When it dumps, I assume that to make the write atomic and avoid data loss, Redis first writes the data to a temporary file and then renames it to the data file name set in the configuration file (see the second sketch after this list). As mentioned above, loading the data takes 1 to 2 minutes, and the dump itself takes about 1 minute; the dumped file is 1 to 2 GB. The server therefore sustains the I/O load of writing a roughly 2 GB file every minute, and the disk is almost never idle.
3. Also while dumping, besides the busy disk, the CPU spikes: Redis forks a child process to dump the data to disk. The parent process uses 30%+ of a CPU, while the dumping child process alone pegs a full CPU core at 100%.
4. The fact that Redis forks a child process to dump the data causes another problem: Redis originally occupied 9 GB of memory, and when it forks for the dump, the child process inherits the parent's memory allocation and also shows up as 9 GB, so Redis suddenly accounts for 18 GB of memory.
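Regarding item 1, a rough throughput check along the lines of the PHP script mentioned there, sketched here in Python with redis-py; the key names, value size, and loop count are assumptions, not the original script:

    # set_benchmark.py -- rough SET throughput check (assumed details)
    import time
    import redis

    r = redis.Redis(host="localhost", port=6379)

    N = 10000
    start = time.time()
    for i in range(N):
        r.set("bench:%d" % i, "x" * 32)                # small values, one SET per request
    elapsed = time.time() - start
    print("%d SETs in %.2fs (%.0f ops/s)" % (N, elapsed, N / elapsed))

Each iteration here is a separate round trip, which is roughly what a simple PHP loop does and part of why the numbers come in below the official figures, which drive many connections in parallel.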
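Regarding item 2, the write-to-a-temp-file-then-rename pattern suspected above can be illustrated in a few lines; on the same filesystem, rename() replaces the target atomically, so a reader never sees a half-written dump. The file names here are hypothetical, and this is not Redis's actual implementation:

    # atomic_write.py -- illustrates temp-file-plus-rename, not Redis internals
    import os

    def atomic_write(path, data):
        tmp = path + ".tmp"              # hypothetical temp name; Redis uses its own scheme
        with open(tmp, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())         # ensure the bytes are on disk before the rename
        os.rename(tmp, path)             # atomic replacement on the same filesystem

    atomic_write("dump.rdb", b"...snapshot bytes...")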
After discovering these problems, I changed the Redis configuration so that a dump is triggered only when there has been at least one write within 30 minutes, which greatly reduced the system load.
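Concretely, that means replacing the default save rules in redis.conf with a single 30-minute rule; the commented-out defaults below are the usual stock values and may differ slightly between Redis versions:

    # stock defaults: any one rule triggers a background dump
    # save 900 1
    # save 300 10
    # save 60 10000

    # our setting: dump only if at least 1 write happened in the last 30 minutes
    save 1800 1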
The idea we are considering:

Have the master Redis not dump at all: no matter how many writes occur, it never dumps to disk, or only at very long intervals. The slave Redis is responsible for dumping the data to disk at a sensible interval as the backup. When the master Redis starts up, it first pulls the data file back from the slave via scp or FTP. This still needs to be verified; a configuration sketch follows.
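A sketch of how that split might look in redis.conf, with the caveat above that it still has to be verified; the values below are assumptions:

    # master: disable automatic dumping (an empty save directive clears all save points)
    save ""
    # or keep a very long window instead, e.g.:
    # save 86400 1

    # slave: replicate from the master and own the dump/backup duty
    slaveof <master-ip> 6379
    save 1800 1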

 

Article source: http://www.162cm.com/archives/1062.html
