Best practices for cache usage

During design reviews, the author has summarized a number of good practices that developers follow when designing cache systems.

Best Practice 1

A cache system mainly consumes server memory. Therefore, before using a cache you must first estimate the size of the data the application needs to cache, including the cache data structures, the size and number of cached objects, and the cache expiration times. Then, based on the business situation, project how much capacity will be needed over a future period, and apply for and allocate cache resources according to that capacity assessment; otherwise you will either waste resources or run out of cache space.
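
As an illustration, here is a minimal sketch of such a capacity estimate in Java; the entry count, average sizes, overhead factor, and growth factor are hypothetical numbers used only for the example.

```java
public class CacheCapacityEstimate {
    public static void main(String[] args) {
        long entries = 2_000_000L;        // assumed number of cached objects
        long avgValueBytes = 512;         // assumed average serialized value size
        long avgKeyBytes = 64;            // assumed average key size
        double overheadFactor = 1.5;      // rough allowance for per-key metadata
        double growthFactor = 1.3;        // assumed growth over the planning period

        double bytes = entries * (avgValueBytes + avgKeyBytes) * overheadFactor * growthFactor;
        System.out.printf("Estimated cache memory: %.2f GB%n", bytes / (1024.0 * 1024 * 1024));
    }
}
```

The result of an estimate like this is what you bring to the capacity review before applying for cache resources.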

Best Practice 2

It is recommended to separate the services that use the cache: core services and non-core services should use different cache instances so that they are physically isolated from each other. If conditions allow, give each service its own instance or cluster to reduce the chance that applications affect one another. The author has often heard of companies sharing a cache across services, which led to cache data being overwritten and to online incidents caused by corrupted cache data.
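
A minimal sketch of this isolation using Jedis, assuming two physically separate Redis instances (the host names are placeholders): the core service and the non-core service each get their own connection pool pointing at their own instance.

```java
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

public class CachePools {
    // Core and non-core traffic go to physically separate instances,
    // so a problem in one cannot overwrite or evict the other's data.
    private static final JedisPool CORE_POOL =
            new JedisPool(new JedisPoolConfig(), "redis-core.internal", 6379);
    private static final JedisPool NON_CORE_POOL =
            new JedisPool(new JedisPoolConfig(), "redis-batch.internal", 6379);
}
```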

Best Practice 3

Work out how many cache instances the application needs from the amount of memory each instance provides. A company usually has a cache operations team that virtualizes the cache resources into multiple instances of the same memory size; for example, each instance may have 4GB of memory, and you apply for as many instances as the application needs and shard its data across them. Note that if the RDB backup mechanism is used and each instance holds 4GB of data, the system needs more than 8GB of memory for it, because an RDB backup relies on copy-on-write: Redis forks a child process, and memory pages written during the backup are duplicated, so in the worst case close to double the memory is required.
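
When the data does not fit into a single 4GB instance, it has to be sharded across several. A minimal sketch of a simple modulo-based routing scheme (a real deployment would more likely use consistent hashing or a proxy, which are not shown here):

```java
import java.util.List;
import redis.clients.jedis.JedisPool;

public class ShardedCacheRouter {
    private final List<JedisPool> shards;   // one pool per 4GB instance

    public ShardedCacheRouter(List<JedisPool> shards) {
        this.shards = shards;
    }

    // Route a key to one of the instances by hashing it.
    public JedisPool shardFor(String key) {
        int idx = (key.hashCode() & 0x7fffffff) % shards.size();
        return shards.get(idx);
    }
}
```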

Best Practice 4

A cache is generally used to speed up database reads: the cache is accessed first, and the database only afterwards, so setting the cache access timeout is very important. The author once encountered an Internet company where, because of an operations mistake, the cache timeout was set too long; request threads backed up until the service thread pool was exhausted, and the result was a service avalanche.
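
A minimal sketch of setting a short client-side timeout with Jedis, so that a slow cache cannot tie up request threads for long; the 200 ms value and the pool sizes are illustrative assumptions to be tuned to your latency budget.

```java
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

public class CacheClientConfig {
    public static JedisPool buildPool() {
        JedisPoolConfig poolConfig = new JedisPoolConfig();
        poolConfig.setMaxTotal(64);          // cap concurrent connections
        poolConfig.setMaxWaitMillis(100);    // fail fast when the pool is exhausted
        // Connection/read timeout of 200 ms: a hanging cache call cannot
        // hold a request thread long enough to exhaust the thread pool.
        return new JedisPool(poolConfig, "redis.internal", 6379, 200);
    }
}
```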

Best Practice 5

Every cache instance needs monitoring. This is very important: we need reliable monitoring of slow queries, large objects, and memory usage.
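
These metrics can also be pulled ad hoc from Redis itself. A minimal sketch with Jedis (the host and key names are placeholders); a real setup would export the data into a monitoring system rather than print it.

```java
import redis.clients.jedis.Jedis;

public class CacheInspection {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("redis.internal", 6379)) {
            // Memory usage of the instance (INFO memory section).
            System.out.println(jedis.info("memory"));
            // Recent slow queries recorded by the server (SLOWLOG GET).
            System.out.println(jedis.slowlogGet());
            // Rough size check of a suspected large string key.
            System.out.println("order:12345 length = " + jedis.strlen("order:12345"));
        }
    }
}
```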

Best Practice 6

Multiple services sharing one cache instance is of course not something we recommend, but for cost reasons it happens often. In that case the naming specification must require that every key used by each application carries a unique prefix, and the key spaces must be designed to be isolated, to avoid applications overwriting each other's cache entries.
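
A minimal sketch of such a prefix convention; the application name and separator are assumptions, the point is only that every key is namespaced.

```java
public final class CacheKeys {
    // Each application gets a unique, fixed prefix agreed on in the naming specification.
    private static final String APP_PREFIX = "order-service:";

    private CacheKeys() {
    }

    public static String of(String business, String id) {
        // e.g. of("detail", "12345") -> "order-service:detail:12345"
        return APP_PREFIX + business + ":" + id;
    }
}
```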

Best Practice 7

Every cache key must have an expiration time set, and the expiration times must not be concentrated at a single point in time; otherwise the cache will keep occupying memory, or a burst of simultaneous expirations will let requests penetrate straight through to the database.
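
A minimal sketch of setting an expiration time with random jitter, using Jedis; the base TTL and jitter range are illustrative assumptions.

```java
import java.util.concurrent.ThreadLocalRandom;
import redis.clients.jedis.Jedis;

public class CacheWriter {
    private static final int BASE_TTL_SECONDS = 30 * 60;   // assumed base TTL: 30 minutes
    private static final int JITTER_SECONDS = 5 * 60;      // spread expirations over 5 minutes

    public void put(Jedis jedis, String key, String value) {
        int ttl = BASE_TTL_SECONDS + ThreadLocalRandom.current().nextInt(JITTER_SECONDS);
        // Every key gets a TTL, and the jitter keeps keys from expiring at the same moment.
        jedis.setex(key, ttl, value);
    }
}
```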

Best Practice 8

Do not put data that is accessed infrequently into the cache. As we said before, the main purpose of a cache is to improve read performance. A colleague once designed a batch processing system that needed a large data model for its calculations, so he stored the model in the local cache of every node and kept it current through update messages from a message queue. But the model was only used once per run, so using the cache this way is wasteful. Since it is a batch task, the task should instead be divided and processed in batches, computing the final result step by step in a divide-and-conquer fashion.
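
A minimal sketch of that divide-and-conquer structure; the chunk size and the processChunk step are hypothetical placeholders for the real calculation.

```java
import java.util.List;

public class BatchJob {
    private static final int CHUNK_SIZE = 1_000;   // assumed batch size

    public long run(List<Long> allIds) {
        long result = 0;
        // Divide the task into chunks and fold the partial results step by step,
        // instead of loading one huge model into every node's local cache.
        for (int from = 0; from < allIds.size(); from += CHUNK_SIZE) {
            int to = Math.min(from + CHUNK_SIZE, allIds.size());
            result += processChunk(allIds.subList(from, to));
        }
        return result;
    }

    private long processChunk(List<Long> chunk) {
        // Placeholder for the real per-chunk calculation.
        return chunk.size();
    }
}
```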

Best Practice 9

Cached values should not be too large, especially in Redis: because Redis uses a single-threaded model, a single cache key whose value is too large will block the processing of other requests.

Best Practice 10

For keys that hold many values, try not to use whole-collection commands such as HGETALL; such a command blocks the request and affects other applications' access.
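
One common alternative is to iterate the hash incrementally with HSCAN instead of pulling everything at once with HGETALL. A minimal sketch assuming the Jedis 3.x API (the key name and COUNT hint are assumptions):

```java
import java.util.Map;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.ScanParams;
import redis.clients.jedis.ScanResult;

public class HashScanner {
    public void scanHash(Jedis jedis, String key) {
        ScanParams params = new ScanParams().count(100);   // fetch ~100 fields per round trip
        String cursor = ScanParams.SCAN_POINTER_START;
        do {
            ScanResult<Map.Entry<String, String>> page = jedis.hscan(key, cursor, params);
            for (Map.Entry<String, String> entry : page.getResult()) {
                // Process one field/value pair at a time.
                System.out.println(entry.getKey() + " = " + entry.getValue());
            }
            cursor = page.getCursor();
        } while (!"0".equals(cursor));
    }
}
```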

Best Practice 11

A cache is generally used to accelerate queries in a transactional system. When a large amount of data must be updated, especially in batch processing, use a batch (pipelined) mode; this scenario is, however, relatively rare.
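
For those batch updates, a pipeline keeps the round trips down. A minimal sketch with Jedis (the key prefix and TTL are assumptions):

```java
import java.util.Map;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Pipeline;

public class BatchCacheUpdater {
    public void updateAll(Jedis jedis, Map<String, String> entries) {
        Pipeline pipeline = jedis.pipelined();
        for (Map.Entry<String, String> e : entries.entrySet()) {
            // Queue all writes without waiting for individual replies.
            pipeline.setex("product:" + e.getKey(), 1800, e.getValue());
        }
        pipeline.sync();   // flush the queued commands in one batch
    }
}
```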

Best Practice 12

If the performance requirements are not extremely high, prefer a distributed cache over a local cache, because a local cache is replicated on every node of the service and the copies can be inconsistent at any given moment. If the cached value represents a switch, and requests in a distributed system can be retried, a repeated request may reach two different nodes where the switch is on at one and off at the other; the request is then processed twice, which in severe cases can cause financial losses.

Best Practice 13

When writing to the cache, write only completely correct data. If part of the data is valid and part is invalid, it is better to skip the cache than to write partial data into it; otherwise the cached data will cause null pointers, program exceptions, and so on.
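
A minimal sketch of that all-or-nothing rule; the ProductView type, its fields, and the serialization format are hypothetical.

```java
import redis.clients.jedis.Jedis;

public class ProductCacheWriter {
    // Hypothetical view object used only for this example.
    static class ProductView {
        String name;
        String price;
    }

    public void cacheIfComplete(Jedis jedis, String key, ProductView view) {
        // Only cache a fully populated object; otherwise skip the cache entirely
        // rather than write partial data that would break readers later.
        if (view == null || view.name == null || view.price == null) {
            return;
        }
        jedis.setex(key, 1800, view.name + "|" + view.price);
    }
}
```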

Best Practice 14

Under normal circumstances, the order of reading is cache first, then database; the order of writing is database first, then cache.
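
A minimal sketch of that order with Jedis; loadFromDb and saveToDb stand in for the real database access, and invalidating (deleting) the cache entry after the database write is one common variant of "database first, then cache".

```java
import redis.clients.jedis.Jedis;

public class UserCache {
    private static final int TTL_SECONDS = 600;   // assumed TTL

    // Read: cache first, then the database on a miss.
    public String getUser(Jedis jedis, String userId) {
        String key = "user:" + userId;
        String cached = jedis.get(key);
        if (cached != null) {
            return cached;
        }
        String fromDb = loadFromDb(userId);
        if (fromDb != null) {
            jedis.setex(key, TTL_SECONDS, fromDb);
        }
        return fromDb;
    }

    // Write: database first, then the cache (here: drop the stale entry).
    public void updateUser(Jedis jedis, String userId, String newValue) {
        saveToDb(userId, newValue);
        jedis.del("user:" + userId);
    }

    private String loadFromDb(String userId) { return "user-" + userId; }   // placeholder
    private void saveToDb(String userId, String value) { /* placeholder */ }
}
```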

Best Practice 15

When using a local cache (such as Ehcache), strictly control the number of cached objects and their lifetime. Because of the characteristics of the JVM, too many cached objects greatly affect JVM performance and can even cause problems such as out-of-memory errors.
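
A minimal sketch of such limits, assuming the Ehcache 2.x API (net.sf.ehcache); the element count and TTL values are illustrative.

```java
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class LocalCacheSetup {
    public static Cache build() {
        CacheManager manager = CacheManager.getInstance();
        // Hard limits: at most 1000 objects in memory, 5 minutes to live,
        // 2 minutes idle; no disk overflow, entries are not eternal.
        Cache cache = new Cache("modelCache", 1000, false, false, 300, 120);
        manager.addCache(cache);
        return cache;
    }

    public static void main(String[] args) {
        Cache cache = build();
        cache.put(new Element("k", "v"));
        System.out.println(cache.get("k"));
    }
}
```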

Best Practice 16

When using a cache, there must be a degradation path, especially on critical business links: when the cache is faulty or the data is missing, the request must fall back to the source for processing.
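
A minimal sketch of that fallback: if the cache call fails or the key is missing, go back to the source. The loadFromSource method is a placeholder for the real database or service call.

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.exceptions.JedisException;

public class DegradableReader {
    public String read(Jedis jedis, String key) {
        try {
            String cached = jedis.get(key);
            if (cached != null) {
                return cached;
            }
        } catch (JedisException e) {
            // Cache is down or timing out: degrade instead of failing the request.
        }
        // Fall back to the source of truth when the cache is missing or broken.
        return loadFromSource(key);
    }

    private String loadFromSource(String key) {
        return "value-for-" + key;   // placeholder for the real DB/service lookup
    }
}
```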


Source: www.cnblogs.com/lupeng2010/p/12705817.html