Redis and Message Queues in Practice

Message queues are a very common technology at LeEco. Within our department, different projects use different message queue implementations. The following is the flow chart of the payment system (drawn by a colleague in the department; borrowed here):

As you can see from the figure, a Kafka message queue is used in it. Its job is to aggregate data after the database has been split into multiple databases and tables, asynchronously merging the shards into one summary table. Redis is also used, to handle repeated order submissions under high concurrency. We also use the company's unified Apache Qpid message queue cluster, an implementation of AMQP, mainly for communication between different departments. Large companies generally have some company-wide unified clusters, but such a cluster is something of a black box to developers, so it is used mostly when departments cooperate with each other; within a department, to avoid pitfalls, everyone would rather build their own. Redis sees even more use: Brother Yang from Alibaba built an exception log monitoring platform that mainly uses Redis for data transmission and storage.
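To make the duplicate-submission idea concrete, here is a minimal sketch, assuming a local Redis instance, the Jedis client, an order:submit: key prefix, and a 30-second window; none of these details come from the payment system itself.

    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.params.SetParams;

    public class DuplicateSubmitGuard {

        private final Jedis jedis = new Jedis("localhost", 6379); // assumed address

        /**
         * Returns true if this is the first submission of the order within the window,
         * false if the same order id was already submitted recently.
         */
        public boolean tryAcceptOrder(String orderId) {
            // SET key value NX EX 30: only succeeds if the key does not exist yet,
            // and Redis expires the marker automatically after 30 seconds.
            String result = jedis.set("order:submit:" + orderId, "1",
                    SetParams.setParams().nx().ex(30));
            return "OK".equals(result);
        }

        public static void main(String[] args) {
            DuplicateSubmitGuard guard = new DuplicateSubmitGuard();
            System.out.println(guard.tryAcceptOrder("1001")); // true: first submission
            System.out.println(guard.tryAcceptOrder("1001")); // false: duplicate rejected
        }
    }

Because the SET ... NX EX call is atomic, concurrent submissions of the same order id race on a single Redis operation, and only one of them wins.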

I won't say much more about what others do. Next, I will talk about how Redis is actually used in my own framework. This is the flow chart of the epiphany framework's offline data processing (official website: www.fhadmin.org).

As can be seen from the figure, the processing basically revolves around Redis. A core data structure in Redis is the skip list (it underlies Redis sorted sets). When dealing with this kind of storage, you have to understand the data structures involved. For example, early versions of Lucene search also used a skip list, which was later replaced by a graph-based finite state automaton. If you want to know more about skip lists, you can read my other article "Basic Rules and Algorithms You Must Know When Reading Lucene Source Code". Similarly, frameworks written in Java, such as Dubbo and Spring IoC, need to register components in one place, and the data structure used in the JVM is generally a HashMap. To be precise, Spring IoC holds the loaded BeanDefinition objects in a HashMap that serves as the registry.
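To make the sorted-set connection concrete, here is a minimal sketch, assuming a local Redis instance and the Jedis client (neither is specified above): it fills a sorted set and asks Redis which encoding it uses. Once the set grows past the small-collection thresholds, OBJECT ENCODING reports skiplist.

    import redis.clients.jedis.Jedis;

    public class SortedSetEncodingDemo {
        public static void main(String[] args) {
            try (Jedis jedis = new Jedis("localhost", 6379)) { // assumed address
                // A sorted set keeps members ordered by score; internally Redis uses
                // a skip list (plus a hash table) once the set grows beyond the
                // small-collection thresholds.
                for (int i = 0; i < 200; i++) {
                    jedis.zadd("demo:scores", i, "member-" + i);
                }
                // OBJECT ENCODING reports the underlying representation.
                System.out.println(jedis.objectEncoding("demo:scores")); // e.g. "skiplist"
            }
        }
    }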

Redis persistence principle

Redis provides two ways to persist data: RDB (Redis DataBase) and AOF (Append Only File). RDB persistence takes snapshots of the data at specified intervals. AOF persistence records every write operation received by the server; when the server restarts, these commands are re-executed to restore the original data. AOF appends each write command to the end of the file using the Redis protocol, and Redis can also rewrite the AOF file in the background so that it does not grow too large. However, many departments I have asked do not enable persistence at all, for performance reasons. If both persistence methods are enabled, then when Redis restarts it loads the AOF file first to restore the data, because the data set saved in the AOF file is usually more complete than the one saved in the RDB file.
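As a small illustration (my own sketch, assuming a local Redis instance and the Jedis client, not the configuration of any system mentioned above), these persistence settings can be inspected and changed at runtime with CONFIG GET/SET; in production they normally live in redis.conf.

    import redis.clients.jedis.Jedis;

    public class PersistenceConfigDemo {
        public static void main(String[] args) {
            try (Jedis jedis = new Jedis("localhost", 6379)) { // assumed address
                // Inspect the current persistence-related settings.
                System.out.println(jedis.configGet("save"));        // RDB snapshot rules
                System.out.println(jedis.configGet("appendonly"));  // AOF on/off

                // Turn AOF on at runtime; with both enabled, Redis prefers the
                // AOF file when it restarts, since it is usually more complete.
                jedis.configSet("appendonly", "yes");
                jedis.configSet("appendfsync", "everysec");
            }
        }
    }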

Take a look at how persistence is implemented in the C source. When Redis needs to produce an RDB file, the server does the following: Redis calls the system function fork() to create a child process; the child process writes the data set to a temporary RDB file; when the child process has finished writing, Redis replaces the original RDB file with the new temporary one and deletes the old file. When fork() is executed, the Linux operating system (which is what large companies' servers generally run) uses a copy-on-write strategy: at the moment of the fork, the parent and child processes share the same memory, and when the parent process needs to update a piece of data, the operating system first copies that piece so that the child process's view is not affected. The new RDB file therefore stores the memory data exactly as it was at the moment the fork was executed. (Official website: www.fhadmin.org) The RDB file is in a compressed binary format, so it takes less space than the data occupies in memory. However, the compression is CPU-intensive, so it can be disabled in the configuration file.

Now look at the corresponding Redis commands. Besides automatic snapshots, you can also send the save or bgsave command manually to make Redis take a snapshot immediately. The save command runs in the main process and blocks other requests; bgsave forks a child process to perform the snapshot.
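A minimal sketch of triggering a snapshot from a client, again assuming a local Redis instance and the Jedis client: BGSAVE forks a child process in the background, and LASTSAVE reports the Unix timestamp of the last successful snapshot.

    import redis.clients.jedis.Jedis;

    public class SnapshotDemo {
        public static void main(String[] args) throws InterruptedException {
            try (Jedis jedis = new Jedis("localhost", 6379)) { // assumed address
                long before = jedis.lastsave();   // timestamp of the last successful RDB save
                jedis.bgsave();                   // forks a child process to write the RDB file
                Thread.sleep(1000);               // give the background save a moment to finish
                System.out.println("last save before: " + before + ", after: " + jedis.lastsave());
            }
        }
    }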

Comparing with MySQL storage: the RDB approach is similar to a mysqldump backup, while AOF is closer to the binlog.

Redis memory optimization

The Redis configuration file has a maxmemory setting. If it is not set, Redis keeps allocating memory and can gradually eat up all available memory, so it is usually recommended to configure a limit and an eviction policy. The advantage of doing so is that the machine will not die of memory starvation; the disadvantage is that Redis may return out-of-memory errors for write commands. Redis has six eviction policies:

1> volatile-lru: apply LRU only to keys that have an expire time set

  2> allkeys-lru: apply LRU to all keys

  3> volatile-random: randomly evict among keys that have an expire time set

  4> allkeys-random: randomly evict from all keys

  5> volatile-ttl: evict the keys with the shortest remaining TTL (time to live) among keys with an expire time set

  6> noeviction: never evict; return an error for write commands

The parameter can be set either with a command or in the configuration file (all of these settings support both ways). For example, with a command:

  config set maxmemory-policy volatile-lru

You can also set the number of keys randomly sampled for eviction, for example:

config set maxmemory-samples 5 means that each time an eviction is performed, 5 keys are randomly sampled and the least recently used one among them is evicted (see the Jedis sketch below).
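Here is that Jedis sketch: the same eviction settings applied from a client, assuming a local Redis instance (the client, the address, and the 256mb limit are my own example values).

    import redis.clients.jedis.Jedis;

    public class EvictionConfigDemo {
        public static void main(String[] args) {
            try (Jedis jedis = new Jedis("localhost", 6379)) { // assumed address
                jedis.configSet("maxmemory", "256mb");               // memory limit
                jedis.configSet("maxmemory-policy", "volatile-lru"); // LRU among keys with a TTL
                jedis.configSet("maxmemory-samples", "5");           // sample 5 keys per eviction
                System.out.println(jedis.configGet("maxmemory-policy"));
            }
        }
    }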

   

Redis compressed list (ziplist). The ziplist is one of the underlying implementations of list keys and hash keys. When a list key contains only a small number of entries, and every entry is either a small integer or a short string, Redis uses a ziplist as the underlying implementation of that list key. (Official website: www.fhadmin.org) Likewise, when a hash key contains only a small number of key-value pairs, and every key and value is either a small integer or a short string, Redis uses a ziplist as the underlying implementation of that hash key.
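A small sketch, assuming a local Redis instance and the Jedis client, that makes the hash encoding visible: a hash with a few short fields is reported as ziplist (listpack in newer Redis versions), and once it grows past the configured thresholds Redis converts it to a real hash table.

    import redis.clients.jedis.Jedis;

    public class HashEncodingDemo {
        public static void main(String[] args) {
            try (Jedis jedis = new Jedis("localhost", 6379)) { // assumed address
                // Small hash with short string values: compact encoding.
                jedis.hset("demo:small", "name", "foo");
                jedis.hset("demo:small", "age", "30");
                System.out.println(jedis.objectEncoding("demo:small")); // ziplist / listpack

                // Exceed hash-max-ziplist-entries (default 128): converted to a hash table.
                for (int i = 0; i < 200; i++) {
                    jedis.hset("demo:big", "field-" + i, "value-" + i);
                }
                System.out.println(jedis.objectEncoding("demo:big")); // hashtable
            }
        }
    }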

  

In my epiphany framework (also reflected in the flow chart above), when the structure behind a key is a hash, I use a Redis hash directly if the hash is smaller than about 1 KB; if it is larger than 1 KB, write performance for the hash becomes poor, so I pack and compress the whole hash into one large value for storage. One reason for keeping both strategies is that small hash tables use very little memory, which saves storage space.
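The framework's real code is not shown here, so the following is only my own sketch of that idea, assuming the Jedis client, a rough 1 KB threshold, and a placeholder serialization: small hashes are written field by field as a Redis hash, while a larger one is serialized, GZIP-compressed, and stored as a single value.

    import redis.clients.jedis.Jedis;

    import java.io.ByteArrayOutputStream;
    import java.nio.charset.StandardCharsets;
    import java.util.Map;
    import java.util.zip.GZIPOutputStream;

    public class HashStorageStrategy {

        private static final int THRESHOLD_BYTES = 1024; // rough 1 KB cut-off (assumption)

        public static void store(Jedis jedis, String key, Map<String, String> data) throws Exception {
            int approxSize = data.entrySet().stream()
                    .mapToInt(e -> e.getKey().length() + e.getValue().length())
                    .sum();

            if (approxSize < THRESHOLD_BYTES) {
                // Small hash: keep it as a real Redis hash, which stays in the compact encoding.
                jedis.hset(key, data);
            } else {
                // Big hash: serialize and GZIP-compress the whole map into one value.
                String serialized = data.toString(); // placeholder serialization
                ByteArrayOutputStream bos = new ByteArrayOutputStream();
                try (GZIPOutputStream gzip = new GZIPOutputStream(bos)) {
                    gzip.write(serialized.getBytes(StandardCharsets.UTF_8));
                }
                jedis.set(key.getBytes(StandardCharsets.UTF_8), bos.toByteArray());
            }
        }
    }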

Off topic time:

  The name of the painting is "Washing the Lead"

 
