The interviewer asked: What is caching? What are the advantages of Redis in caching?

Table of contents

1. Introduction

2. The concept of caching

3. Local cache and buffer cache

Local cache:

1. Functions and advantages:

2. Common implementation methods:

3. Life cycle management:

4. Failure strategy:

Buffer Cache:

1. Definition and function:

2. Caching algorithms:

3. Implementation level:

4. Optimization techniques:

The relationship between the two:

4. Advantages of Redis in caching


1. Introduction

When we talk about caching, we are referring to a storage technique for temporarily holding frequently accessed data so that subsequent requests can be served faster. The core idea of caching is to trade space for time: spend some extra memory to make data access much quicker.

A cache usually sits between the data source (such as a database or an API) and the application. It stores previously retrieved or computed results so that subsequent requests for the same data can be served faster, which also reduces the load on the data source.

2. The concept of caching

  1. Cache Hit and Cache Miss:

    • Hit: A cache hit occurs when the data requested by the application is present in the cache. The data can then be returned directly from the cache without going back to the data source.
    • Miss: A cache miss occurs when the requested data is not in the cache. The data must then be loaded from the data source and stored in the cache for future use (see the cache-aside sketch after this list).
  2. Cache expiration and invalidation strategies:

    • Cache expiration: Cached data can be given an expiration time. Once that time has passed, the data is considered expired and must be reloaded from the data source.
    • Invalidation strategy: When the underlying data changes, the corresponding cache entries may need to be invalidated so that subsequent reads see the latest data.
  3. Cache eviction strategies:

    • LRU (Least Recently Used): Evict the data that was used least recently.
    • LFU (Least Frequently Used): Evict the data that is used least often.
    • FIFO (First In, First Out): Evict data in the order it entered the cache.
  4. Local cache and distributed cache:

    • Local cache: A cache stored inside the application process, visible only to a single application instance.
    • Distributed cache: A cache shared by multiple application instances, usually hosted on a separate cache server (such as Redis).
  5. Cache breakdown, avalanche and penetration:

    • Cache breakdown: The moment a hot key expires, a flood of concurrent requests for that key all fall through to the database at once, causing a surge in database load.
    • Cache avalanche: A large number of cache entries expire at the same time, so a wave of requests goes straight to the database, putting it under enormous pressure.
    • Cache penetration: Requests arrive for keys that exist in neither the cache nor the database (often malicious requests with ever-changing keys), so every such request bypasses the cache and hits the database.
  6. Caching application scenarios:

    • Read-heavy, write-light scenarios: Caching suits data that is read frequently but updated rarely, such as article content or product information.
    • Hotspot data: For data that is accessed very heavily, caching increases read speed and reduces the load on the data source.
    • Expensive results: When a computation is time-consuming, its result can be cached to avoid recomputing it on every request.
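
To make the hit/miss, expiration, and cache-aside ideas concrete, here is a minimal Java sketch. The `loadFromDataSource` method is a hypothetical stand-in for a database or API call, and the 60-second TTL is an arbitrary choice for illustration:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A minimal cache-aside sketch: on a miss the value is loaded from the
// (hypothetical) data source and stored with a TTL; expired entries are
// treated as misses and reloaded.
public class CacheAsideDemo {
    private static final long TTL_MILLIS = 60_000; // expiration time: 60s

    // value plus the moment it was cached, so expiration can be checked
    private record Entry(String value, long cachedAt) {}

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();

    public String get(String key) {
        Entry e = cache.get(key);
        long now = System.currentTimeMillis();
        if (e != null && now - e.cachedAt() < TTL_MILLIS) {
            return e.value();                      // cache hit
        }
        String value = loadFromDataSource(key);    // cache miss (or expired)
        cache.put(key, new Entry(value, now));     // store for future requests
        return value;
    }

    // Stand-in for a database or API call; assumed for illustration.
    private String loadFromDataSource(String key) {
        return "value-of-" + key;
    }

    public static void main(String[] args) {
        CacheAsideDemo demo = new CacheAsideDemo();
        System.out.println(demo.get("user:1")); // miss: loads from the source
        System.out.println(demo.get("user:1")); // hit: served from the cache
    }
}
```

In practice, caching a placeholder value for keys the data source does not contain is one common guard against the cache-penetration problem described above.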

3. Local cache and buffer cache

With these concepts in place, let's look at local caches and buffer caches in more detail.

Local cache:

1. Functions and advantages:
  • Function: Local cache refers to the data cache stored in the application process, which is used to speed up access to frequently accessed data.
  • Advantages: Because it is stored directly in the application memory, the read and write speed of the local cache is very fast, which is very effective for data that is frequently used within the application.
2. Common implementation methods:
  • Map or ConcurrentHashMap: Use the data structure of key-value pairs to quickly locate data through keys.
  • Memory Objects: Store data directly as objects within your application for faster access.
3. Life cycle management:
  • Manual management: Developers need to manually control the loading, updating and expiration of cached data.
  • Automatic management: Caching libraries such as Guava Cache can manage cached data automatically through configuration (see the sketch after this list).
4. Failure strategy:
  • Based on time: You can set the validity time of cached data. Once the set time is exceeded, the data will be considered expired.
  • Event-based: When the data source changes, the cached data is invalidated through the event mechanism.
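
As a concrete illustration of automatic management, here is a minimal sketch using Guava Cache, assuming the Guava library is on the classpath; the loader body is a hypothetical stand-in for the real data source:

```java
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import java.util.concurrent.TimeUnit;

public class LocalCacheDemo {
    public static void main(String[] args) throws Exception {
        // A local cache with automatic size-based eviction and
        // time-based expiration, both managed by the library.
        LoadingCache<String, String> cache = CacheBuilder.newBuilder()
                .maximumSize(1_000)                     // evict beyond 1000 entries
                .expireAfterWrite(10, TimeUnit.MINUTES) // time-based invalidation
                .build(new CacheLoader<String, String>() {
                    @Override
                    public String load(String key) {
                        // called on a miss; stands in for the real data source
                        return "value-of-" + key;
                    }
                });

        System.out.println(cache.get("product:42")); // first call loads, later calls hit
        cache.invalidate("product:42");              // manual, event-driven invalidation
    }
}
```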

Buffer Cache:

1. Definition and function:
  • Definition: The buffer cache is a memory area used to hold copies of disk data, typically disk blocks or pages, in order to speed up reads of that data.
  • Function: Reduce frequent access to slow disks by caching recently used disk data, thereby improving overall system performance.
2. Caching algorithms:
  • LRU (Least Recently Used): Evict the least recently used data from the cache (see the LRU sketch after this list).
  • FIFO (First In, First Out): Evict data in the order it entered the cache.
3. Implementation level:
  • File system cache: The operating system caches disk data through the file system cache.
  • Database cache: Database systems also typically have their own buffers for caching data blocks on disk.
4. Optimization techniques:
  • Prefetching: Load data that is likely to be needed into the cache ahead of time to reduce the latency of later accesses.
  • Write-back strategy: Buffer write operations in the cache and flush them to disk later, under certain conditions, to make writes more efficient.
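
The LRU policy above can be sketched in a few lines of Java on top of LinkedHashMap's access-order mode. This is a toy illustration of the algorithm, not how an operating system or database actually implements its buffer pool:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A minimal LRU cache: the map re-orders entries on every get(), and
// removeEldestEntry() evicts the least recently used entry once the
// capacity is exceeded.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        super(16, 0.75f, true); // accessOrder = true enables LRU ordering
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict when over capacity
    }

    public static void main(String[] args) {
        LruCache<String, String> cache = new LruCache<>(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");          // "a" becomes most recently used
        cache.put("c", "3");     // evicts "b", the least recently used
        System.out.println(cache.keySet()); // prints [a, c]
    }
}
```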

The relationship between the two:

  • Relationship: The local cache is typically an application-level cache, while the buffer cache is an operating-system- or database-level cache. They work together to improve overall system performance.

  • Management: The local cache is usually managed manually by developers based on business needs, while the buffer cache is managed automatically by the operating system or database system.

  • Scope: The local cache serves data within a single application, while the buffer cache serves disk reads for the entire system.

4. Advantages of Redis in caching

  1. Fast read and write operations: Redis stores data primarily in memory, so reads and writes are very fast. Compared with disk-based databases, Redis can serve data requests far more quickly, improving access speed.

  2. Flexible data structures: Redis supports a variety of data structures, such as strings, hashes, lists, sets, and sorted sets. This makes it suitable for many different kinds of data storage and processing needs and provides more flexible caching options.

  3. Persistence options: Although Redis is primarily an in-memory database, it provides a variety of persistence options that allow data to be saved to disk so that it can be restored on restart. This provides data durability and reliability.

  4. Distributed cache: Redis supports distributed cache, which can be scaled out to handle large-scale data and high traffic through sharding and replication mechanisms.

  5. Atomic operations: Redis commands such as increment, decrement, and set operations execute atomically, which preserves data consistency in concurrent environments (see the sketch after this list).

  6. Publish/subscribe pattern: Redis provides a publish/subscribe mechanism that can be used to implement message queues, enable asynchronous communication, and decouple system components.

  7. Flexible expiration policy: Redis lets you set an expiration time on cached data; entries that are no longer needed expire automatically and free their memory, preventing stale data from lingering.

  8. High availability: Redis supports master-slave replication and Sentinel mechanisms, providing a high-availability solution to ensure that service availability can be maintained even if there is a node failure.
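
As a small illustration of points 1, 5, and 7, here is a sketch using the Jedis client, assuming a Redis server on localhost:6379 and the Jedis library on the classpath; the key names are made up for the example:

```java
import redis.clients.jedis.Jedis;

public class RedisCacheDemo {
    public static void main(String[] args) {
        // Connection details are illustrative.
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Fast write with a flexible expiration policy:
            // SETEX stores the value and lets Redis expire it after 60s.
            jedis.setex("article:100:title", 60, "What is caching?");
            System.out.println(jedis.get("article:100:title"));

            // Atomic operation: INCR increments a counter in a single,
            // race-free step, even under concurrent access.
            long views = jedis.incr("article:100:views");
            System.out.println("views = " + views);
        }
    }
}
```

Because INCR executes as a single atomic command inside Redis, concurrent clients never lose an update to the counter.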

Overall, the advantages of Redis in caching include high performance, flexible data structures, persistence options, distributed capabilities, and rich functionality, making it one of the preferred technologies widely used in the caching layer.
