33_redis: Common problems in practice and optimization ideas (including Linux kernel parameter tuning)


By this point, you should basically be able to walk into a company and set up redis yourself.

That said, some details have not been covered in depth, such as how certain parameters should be set.

Different companies, different businesses, and different data volumes may all require different parameter tuning.

Up to here, the approach is roughly the same for everyone: follow this line of thinking to build and deploy a redis architecture that supports high concurrency, high availability, and massive data.

You can import some of the company's existing data, on the order of millions or tens of millions of records.

Then run all kinds of stress tests: performance with redis-benchmark, concurrency and QPS, high-availability drills, how much data each machine can hold, and horizontal scaling to support more data.

Based on the test environment and test data, run these exercises to work out the details that suit your own situation best.

Expecting to get 100% of everything from a single course is technically impossible.

The teacher opens the door; the practice is up to you.

The only criterion for a good course is that, at this price, it teaches you technology and architecture worth the money, things you cannot learn elsewhere, or that would take several times as long to figure out on your own.

This course delivers that value.

You cannot expect that after spending a few hundred dollars on a course you will immediately be a lone master swordsman who can walk into a company and solve every kind of problem with ease.

No such course exists in this world; with reasonable expectations, everyone can have a healthy, positive interaction.

The same applies to the spark and other courses.

When you actually take a course and do projects, you will definitely run into many problems you did not anticipate. When you hit them, try to solve them yourself first; working through problems is how you accumulate experience.

If you get stuck on a problem, add my QQ and consult me, and I can walk you through it; that works too.

For the spark, elasticsearch, and java architecture courses, I can help you resolve 70% to 80% of the questions.

1. Fork latency causing delays under high concurrency

Both RDB and AOF involve the main process forking a child process: generating RDB snapshots and performing AOF rewrites consumes disk IO, and the fork itself also has a cost.

When forking, the child process has to copy the parent process's memory page tables, which takes a certain amount of time.

Roughly speaking, if the parent process holds 1 GB of data, the fork may take about 20 ms; with 10 GB to 30 GB it can take 20 * 10 or even 20 * 30 ms, i.e. several hundred milliseconds.

latest_fork_usec in info stats shows how long the last fork took.
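
For example, a quick way to read this metric from the command line (the host and port here are just placeholders):

redis-cli -h 127.0.0.1 -p 6379 info stats | grep latest_fork_usec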

A single redis instance usually handles tens of thousands of QPS, so one slow fork can suddenly push the latency of tens of thousands of requests from a few milliseconds up to around 1 second.

Optimization ideas

Fork time is proportional to the memory of the redis master process, so generally keep a redis instance's memory within 10 GB; this also limits the cost of a slave -> master full resynchronization.
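
As a rough sketch, you could cap per-instance memory in redis.conf (maxmemory is a standard directive; the 10gb figure just reflects the guideline above, tune it to your workload):

# redis.conf
maxmemory 10gb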

2. AOF blocking

Redis writes data to the AOF buffer, and a separate thread performs the fsync to disk once per second.

But the main redis thread checks the timestamp of the last fsync; if the last fsync was more than 2 seconds ago, write requests will block.

With appendfsync everysec, you lose at most 2 seconds of data.

Once fsync lags by more than 2 seconds, the entire redis instance is slowed down.
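
A minimal way to check for this, assuming the usual AOF settings (aof_delayed_fsync in info persistence counts fsyncs that were reported as delayed):

# redis.conf
appendonly yes
appendfsync everysec

redis-cli info persistence | grep aof_delayed_fsync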

Optimization ideas

Optimize disk write speed: use an SSD rather than an ordinary mechanical hard disk; an SSD greatly improves disk read and write speed.

3. Master-slave replication delay

Master-slave replication can lag badly, so you need a good monitoring and alerting mechanism for it.

In info replication you can see the replication offsets of the master and the slaves; the difference between them is the replication lag.

If the lag is too large, raise an alert.
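
A minimal sketch of such a check (the field names come from info replication; the master IP and the alert threshold are placeholders):

redis-cli -h <master-ip> info replication
# compare master_repl_offset with the offset reported on each slaveN line;
# lag = master_repl_offset - slave offset, alert when it exceeds your threshold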

4. Master-slave replication storm

If many slaves perform a full resynchronization from the master at the same time, the master sends a large rdb to multiple slaves simultaneously, which can saturate the network bandwidth.

If a master really needs many slaves attached, use a tree-shaped topology rather than a star topology, as sketched below.
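
A minimal sketch of the tree layout, assuming one intermediate slave relays replication to the leaf slaves (IPs and ports are placeholders; redis 5+ prefers replicaof as the name for slaveof):

# on the intermediate slave: replicate from the master
slaveof <master-ip> 6379
# on each leaf slave: replicate from the intermediate slave instead of the master
slaveof <intermediate-slave-ip> 6379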

5. vm.overcommit_memory

0: heuristically check whether there is enough memory; if not, the allocation fails
1: always allow overcommit, use memory until it runs out
2: committed address space cannot exceed swap plus 50% of physical RAM

If it is set to 0, operations such as fork may fail because enough memory cannot be allocated.

cat /proc/sys/vm/overcommit_memory
echo "vm.overcommit_memory=1" >> /etc/sysctl.conf
sysctl vm.overcommit_memory=1
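
The first command shows the current value, the second persists the setting in /etc/sysctl.conf, and the third applies it to the running kernel (sysctl -p would also reload the file).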

6. swappiness

Check the Linux kernel version with cat /proc/version.

If the Linux kernel version is < 3.5, set swappiness to 0, so that the system prefers swapping over invoking the OOM killer (which kills the process).
If the Linux kernel version is >= 3.5, set swappiness to 1, so that the system prefers swapping over invoking the OOM killer.

This helps ensure that the redis process is not killed.

echo 0 > /proc/sys/vm/swappiness
echo "vm.swappiness=0" >> /etc/sysctl.conf
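
On kernels >= 3.5, use 1 instead of 0 in both commands, per the rule above; running sysctl -p afterwards reloads /etc/sysctl.conf.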

7. Maximum open file handles

ulimit -n 10032

Look up the exact procedure online; different operating systems and versions are configured differently.
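
A minimal sketch for persisting the limit on a typical Linux setup (assuming pam_limits is in effect; the user name and value are placeholders):

# /etc/security/limits.conf
redis soft nofile 10032
redis hard nofile 10032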

8. tcp backlog

cat /proc/sys/net/core/somaxconn
echo 511 > /proc/sys/net/core/somaxconn
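
Redis also has its own tcp-backlog setting (511 by default in redis.conf), and the kernel silently caps it at somaxconn, so raise both together; persisting the kernel value might look like this:

# redis.conf
tcp-backlog 511

echo "net.core.somaxconn=511" >> /etc/sysctl.conf
sysctl -p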

 
