Redis cluster construction, master-slave replication, sentinel mode, cache breakdown, penetration, avalanche

        Let me explain first: since I'm too poor to buy several servers, I can only build a pseudo cluster on one server, haha.

        1. Basic service establishment

                The setup of a stand-alone service was covered in the previous article, so it won't be repeated here; just follow those steps. To run multiple redis services on one server, simply start them from different configuration files. After the stand-alone service is built (you only need to build one), make a few copies of redis.conf, naming them redis1.conf, redis2.conf, redis3.conf, and so on, then modify the port number and pid file in each so they don't clash. If they live in the same directory, you also need to change the log file name and the dump.rdb name.

        Example:

        port 6379 #The master can stay at 6379; change the slaves to 6380, 6381, 6382, etc.

        pidfile /var/run/redis_6379.pid #The master can stay as-is; give each slave a distinct pid file name.

        logfile "6379.log" #Empty by default. If the instances are in different directories this can stay as-is; if they share a directory, each just needs a different name.

        dbfilename dump.rdb #Likewise: no change needed across directories; in the same directory, each just needs a different name.

         After modifying, enter the installation directory and start each instance normally with: redis-server /bug/redis/redis1.conf (substitute your own path).
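        Assuming three such configuration files in /bug/redis (the path is this article's example; substitute your own), starting and verifying them looks like:

```
redis-server /bug/redis/redis1.conf
redis-server /bug/redis/redis2.conf
redis-server /bug/redis/redis3.conf
ps -ef|grep redis    # all three redis-server processes should appear
```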

        Three services are now configured and running normally. The question is: which one is the master and which are the slaves? Key point: every redis instance is a master by default. How to check? Connect with the client (redis-cli -p 6379 -a password; drop -a if there is no password), then check with the command: info replication, as shown in the figure:
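        For reference (the original figure is not reproduced here), a freshly started stand-alone instance reports something along these lines, with role:master being the key field:

```
127.0.0.1:6379> info replication
# Replication
role:master
connected_slaves:0
...
```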

                 If the instances are not on the same server, the renaming above is unnecessary. So how is a slave configured?

        2. The slave connects to the host

                ① For temporary connection, just execute the command: SLAVEOF 127.0.0.1 6379, as shown in the figure:

                 ② For a permanent connection, modify the configuration in the configuration file redis.conf, as shown in the figure: (Note: this means modifying the slave's configuration file, not the master's)

                         After configuring, just restart the service (if the instances are not on the same server, mind the firewall). The master will then show the slave's information after connecting, as shown in the figure:

                 There is actually another topology. The current one is B->A, C->A. The other is C->B->A: A is the master, B and C are slaves, B replicates from master A, and C replicates from slave B. Both forms work.
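                 For the permanent connection in ②, the slave's redis.conf needs a line along these lines (note that replicaof is the Redis 5+ name for the older slaveof directive, and masterauth is only needed when the master has a password; the address and password here are placeholders):

```
# in the slave's redis.conf
replicaof 127.0.0.1 6379     # on Redis versions before 5: slaveof 127.0.0.1 6379
masterauth yourpassword      # only if the master requires a password
```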

2. Redis master-slave replication

        1. What is master-slave replication?

                Simply put, in a one-master-multiple-slaves setup, copying the master's data to the slaves is called master-slave replication. Master-slave replication only arises in a multi-instance (cluster) environment.

        2. Features

                        ① Read/write separation: the master handles only writes, the slaves handle only reads.

                        ② One-way: data can only be copied from the master to the slaves.

                        ③ Data redundancy and backup: in a cluster environment, even if the master goes down, the data can be recovered from a slave.

                        ④ Service reliability: with sentinel mode, when the master goes down a slave is immediately elected as the new master, keeping the application running normally.

                        ⑤ Load balancing: because reads and writes are separated across the cluster, read traffic is easily balanced across the slaves.

        3. Two rules for copying

                Full replication: all data is copied. When a slave disconnects and then reconnects, a full replication is performed.

                Incremental replication: only newly written data is copied. After the master writes a piece of data, it is replicated to the slaves immediately; only the new data is sent, not the whole dataset.

                That is almost everything about master-slave replication; there really isn't much to it. Once the cluster is set up, redis runs it by itself with no further handling needed, which is very friendly.

3. Sentry Mode

        If you only build the redis service as above, then when the master goes offline and the maintenance staff do not deal with it immediately, the entire service becomes unavailable for writes, and someone has to reconfigure a new master by hand every time. Very inconvenient!

        Hence sentinel mode was introduced. When the master goes offline, the sentinels automatically elect one of the slaves as the new master by voting, thereby keeping the service available.

        The three most commonly used and basic configurations of the Sentinel service are:

        sentinel down-after-milliseconds mymaster 1000

        # sentinel down-after-milliseconds <name> <milliseconds>. This can be understood as the master's heartbeat: if the master is unreachable for more than 1000 ms, the sentinel considers that something may be wrong with it.

        sentinel monitor mymaster 127.0.0.1 6379 2

        # sentinel monitor <name> <master-ip> <port> <quorum>. If the master misses its heartbeat, a sentinel will suspect it is down, but will not declare it offline on its own. Instead it asks the other sentinels whether they also consider the master down; once at least 2 of them (the quorum configured above) agree, the master is confirmed offline, and the sentinels then hold an election to promote a new master.

        sentinel auth-pass mymaster xxxxx

        # sentinel auth-pass <name> <password>. The master's password; if the master has no password, this is not needed.

        These are the most important configurations. Once they are set, you can start the sentinel. Of course, if you are building a pseudo cluster on one server, you must also change each sentinel's port number.

        Let's modify the configuration. First, find the sentinel.conf file in the extraction directory; this is the sentinel configuration file. Several other commonly used settings:

daemonize yes #Whether to run as a daemon (background) process; usually changed to yes

port 26379 #The sentinel's port number. When building a sentinel cluster, give each sentinel a different port.

logfile "" #Sentinel's log file name

sentinel monitor mymaster 127.0.0.1 6379 2 #explained separately above

sentinel auth-pass mymaster xxxxx #redis host password

sentinel down-after-milliseconds mymaster 30000 #explained separately above 

sentinel parallel-syncs <master-name> <numslaves> #How many slaves may resynchronize with the new master at the same time during a failover; while syncing, a slave may be unable to serve requests, so this is usually set to 1.

        Start the sentinel through the configuration file: redis-sentinel /bug/redis/sentinel.conf (be careful to use your own service path), then check the process: ps -ef|grep redis
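        Once a sentinel is up, it can be queried through redis-cli like any redis instance; for example, assuming a sentinel listening on port 26379 that monitors a master named mymaster as configured above:

```
redis-cli -p 26379 info sentinel                               # sentinel status and monitored masters
redis-cli -p 26379 sentinel get-master-addr-by-name mymaster   # ip and port of the current master
```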

         Sentinel Notes:

                1. When building a sentinel cluster, the sentinel service must be an odd number.

                2. After the master goes offline and a new master has been elected, the original master becomes a slave when it comes back online.

                It’s that simple, get it done and call it a day!

4. What is cache breakdown, cache penetration, cache avalanche and how to avoid and solve it?

        Before talking about this, we must first understand our normal request process:

        The user initiates a query --> the backend receives the request --> the program queries the redis cache:

                --> if the data exists in the cache, it is returned directly;

                --> if not, the database is queried --> the data comes back from the database --> the program saves it into the redis cache --> the data is returned.
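        The flow above is the classic cache-aside pattern. Here is a minimal sketch of it in Python, with plain dicts standing in for redis and the database (all names and data are illustrative, not from the original article):

```python
cache = {}                     # stand-in for the redis cache
db = {"user:1": "Alice"}       # stand-in for the database
db_queries = 0                 # counts how often we fall through to the "database"

def get(key):
    global db_queries
    if key in cache:           # 1. query the cache first
        return cache[key]
    db_queries += 1
    value = db.get(key)        # 2. cache miss: query the database
    if value is not None:
        cache[key] = value     # 3. save into the cache for next time
    return value               # 4. return the data

print(get("user:1"), db_queries)   # first call falls through to the database
print(get("user:1"), db_queries)   # second call is served from the cache
```

        The three cache problems below are all failure modes of this one pattern.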

        1. Cache breakdown

                Cause: cache breakdown mainly occurs at the "query the redis cache" step. In a high-concurrency environment, when a large number of requests query the same piece of data and that data is missing from the cache or has just expired, all of those requests fall through to the database at once; if the database cannot withstand the pressure, it collapses.

                Solution:

                        1. Since the problem is caused by the data expiring in redis, just let the data never expire. Haha, of course this takes up a lot of space, and "never expire" is unrealistic; for hot data, the expiration time can be extended appropriately.

                        2. Use a mutex lock: only one thread is allowed to query the database at a time, and subsequent query requests are blocked. Once the data has been fetched, it is stored in the redis cache, so the blocked requests can read it from the cache. This solves cache breakdown well (note: it places higher demands on the mutex lock).
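                        A minimal sketch of the mutex approach in Python, again with a dict standing in for redis; the re-check inside the lock is what guarantees the database is queried only once (names and values are illustrative):

```python
import threading

cache = {}
db_queries = 0              # counts fall-throughs to the "database"
lock = threading.Lock()

def load_from_db(key):
    global db_queries
    db_queries += 1
    return "some hot value"  # stand-in for a real database query

def get_with_mutex(key):
    if key in cache:
        return cache[key]
    with lock:               # only one thread may rebuild the entry at a time
        if key in cache:     # re-check: another thread may have filled it already
            return cache[key]
        value = load_from_db(key)
        cache[key] = value
        return value

# simulate a burst of concurrent requests for the same expired hot key
threads = [threading.Thread(target=get_with_mutex, args=("hot-key",)) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(db_queries)  # 1: despite 10 concurrent requests, the database was queried once
```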

        2. Cache penetration

                Cause: cache penetration mainly occurs at the "query the database" step. Requests arrive with a large number of keys that exist neither in the redis cache nor in the database, yet they keep on querying; this too can crash the database server.

                Solution:

                        1. When a key exists neither in the redis cache nor in the database, return an empty string and store it in the cache; the database will then not be queried again for that key. However, this method is not recommended: the empty entries take up cache space, and if real data for the key appears later, the cached empty value can serve stale results for a while.

                        2. Use a Bloom filter to reject keys that cannot possibly exist before they ever reach the database.
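                        To make the Bloom filter idea concrete, here is a toy implementation in Python; a real deployment would use a proper library (or redis bitmaps), and the size and hash count here are arbitrary:

```python
import hashlib

class TinyBloom:
    """Toy Bloom filter: a membership test with no false negatives."""
    def __init__(self, size=1024, hashes=3):
        self.size, self.hashes, self.bits = size, hashes, 0
    def _positions(self, item):
        # derive several bit positions per item from salted md5 hashes
        for i in range(self.hashes):
            digest = hashlib.md5(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size
    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p
    def might_contain(self, item):
        return all(self.bits >> p & 1 for p in self._positions(item))

# load every key that actually exists into the filter at startup
bf = TinyBloom()
for uid in ("user:1", "user:2", "user:3"):
    bf.add(uid)

print(bf.might_contain("user:1"))       # True: known keys always pass
print(bf.might_contain("user:999999"))  # almost certainly False: rejected before the DB
```

                        Requests whose key fails the filter are rejected immediately; a small false-positive rate is the price paid for never missing a real key.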

        3. Cache avalanche

                Cause: cache avalanche also mainly occurs at the "query the database" step. When a large amount of redis cached data expires at the same moment, or the redis service itself goes offline, and a large number of requests arrive at that point, they can only query the database; this too can crash our database server.

                Solution:

                        1. Do not let a large amount of data expire at the same time. When heavy traffic is expected, warm the cache up first, loading data into it in batches over time so that the keys do not all expire together.

                        2. Locking, the same as the cache breakdown treatment above. Although this degrades the user experience somewhat, it at least keeps the service available.

                        3. Add more redis servers, so that even if one or two go offline, the overall service is barely affected.
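                        For solution 1, a common way to stagger expirations is to add random jitter to each key's TTL when warming the cache; a minimal sketch (the base and jitter values are arbitrary):

```python
import random

def jittered_ttl(base=3600, jitter=600):
    # spread expirations across [base, base + jitter] seconds so that
    # keys loaded in the same batch do not all expire at the same moment
    return base + random.randint(0, jitter)

ttls = [jittered_ttl() for _ in range(5)]
print(ttls)  # five TTLs scattered between 3600 and 4200 seconds
```

                        Each key is then written with its own TTL, e.g. SETEX key <ttl> value.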


Summarize

        My five years of Java development experience tell me that if a project really needs a redis cluster, it will certainly come with dedicated operations staff. But in today's environment, we programmers are all expected to become full-stack engineers, so even if we are not proficient in redis, we must be able to use it and know the basic concepts.


Origin blog.csdn.net/xiaobug_zs/article/details/124553645