Cache optimization essentials: mastering hot/cold separation and reordering

Author: Michael Odby

In today's era of high concurrency and big data, system performance optimization matters more than ever. Caching is one of the most effective ways to improve system performance and is used across a wide range of scenarios. Hot/cold separation and reordering are two common cache optimization techniques. This post explains the principles, implementation, and application scenarios of both, in the hope that it helps with your own performance work.


1. Hot/cold separation

Cache hit rate depends on many factors, and cache size is one of the most important. In practice the data set is often very large; if all of it were pushed into the cache, the hit rate would be low and system performance would suffer. In that situation, a hot/cold separation strategy is worth considering.

Hot/cold separation means dividing the data set into two parts: cold data and hot data. Cold data is accessed infrequently and need not live in the cache; hot data is accessed frequently and should be cached first. Separating the two tiers effectively raises the cache hit rate and, with it, system performance.
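As a minimal, platform-neutral sketch of the idea (the `HotColdCache` class and its capacity threshold are illustrative, not from the original post), hot/cold separation can be modeled as a small hot tier in front of a larger cold tier, with demotion on overflow and promotion on access:

```kotlin
// Illustrative sketch: a small "hot" LRU tier backed by a "cold" tier.
// In a real system the cold tier would be disk or a remote store.
class HotColdCache<K, V>(private val hotCapacity: Int) {
    // Cold tier: cheap in-memory stand-in for a slower store
    private val cold = HashMap<K, V>()

    // Hot tier: LinkedHashMap with accessOrder = true behaves as an LRU map
    private val hot = object : LinkedHashMap<K, V>(16, 0.75f, true) {
        override fun removeEldestEntry(eldest: MutableMap.MutableEntry<K, V>): Boolean {
            if (size > hotCapacity) {
                // Demote the least-recently-used entry to the cold tier
                cold[eldest.key] = eldest.value
                return true
            }
            return false
        }
    }

    fun put(key: K, value: V) { hot[key] = value }

    fun get(key: K): V? {
        hot[key]?.let { return it }                  // hot hit: fastest path
        // Cold hit: promote the entry back into the hot tier
        return cold.remove(key)?.also { hot[key] = it }
    }
}
```

The hot tier stays small, so frequently accessed data is always cheap to reach, while the cold tier absorbs everything else instead of polluting the hot cache.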

2. Reordering

In practice, data access is rarely random; it usually follows patterns. If data is laid out and accessed according to those patterns, the cache hit rate improves. A reordering strategy exploits this.

Reordering means rearranging the data by some rule so that frequently accessed items come first and rarely accessed items come last. Lookups then touch the front-ranked, hot items first, which raises the cache hit rate.

Note that the reordering rule must be chosen for the specific data set; different data sets may need different strategies. Reordering also adds computation of its own, so a balance must be struck between that overhead and the hit-rate gain.
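A hedged sketch of the frequency-based variant described above (the `ReorderingStore` class and its method names are illustrative): track access counts, then periodically sort the backing list so hot items sit at the front. The periodic `reorder()` call is exactly the computation/hit-rate trade-off the text mentions.

```kotlin
// Illustrative sketch: count accesses, then periodically reorder the backing
// list so frequently accessed items come first and cold items last.
class ReorderingStore<T>(items: List<T>) {
    private val items = items.toMutableList()
    private val hits = HashMap<T, Int>()

    fun access(index: Int): T {
        val item = items[index]
        hits[item] = (hits[item] ?: 0) + 1
        return item
    }

    // Sorting costs O(n log n), so run it periodically, not on every access --
    // this is the overhead-versus-hit-rate balance the text describes.
    fun reorder() {
        items.sortByDescending { hits[it] ?: 0 }
    }

    fun snapshot(): List<T> = items.toList()
}
```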

An example

Here is an example of using hot/cold separation and reordering in Android to improve the image-loading cache hit rate:

class ImageLoader(private val context: Context) {

    // Hot tier: in-memory LRU cache for frequently accessed bitmaps
    private val memoryCache: LruCache<String, Bitmap>
    // Cold tier: disk cache for less frequently accessed bitmaps
    private val diskCache: DiskLruCache

    init {
        // Compute the available VM memory in KB and use 1/8 of it as the cache size
        val maxMemory = (Runtime.getRuntime().maxMemory() / 1024).toInt()
        val cacheSize = maxMemory / 8
        memoryCache = object : LruCache<String, Bitmap>(cacheSize) {
            override fun sizeOf(key: String, value: Bitmap): Int {
                // Measure entries by bitmap size in KB, matching cacheSize's unit
                return value.byteCount / 1024
            }
        }
        // Prefer external storage for the disk cache, falling back to internal
        val cacheDir = context.externalCacheDir?.path ?: context.cacheDir.path
        val diskCacheDir = File(cacheDir + File.separator + "image_cache")
        if (!diskCacheDir.exists()) {
            diskCacheDir.mkdirs()
        }
        // 10 MB disk cache, one value per entry
        diskCache = DiskLruCache.open(diskCacheDir, 1, 1, 10 * 1024 * 1024)
    }

    // Look up the hot tier first, then the cold tier, then the network.
    fun displayImage(url: String, imageView: ImageView) {
        val bitmap = memoryCache.get(url)
        if (bitmap != null) {
            imageView.setImageBitmap(bitmap)
            return
        }
        if (loadFromDiskCache(url, imageView)) {
            return
        }
        loadFromNetwork(url, imageView)
    }

    // Returns true if the image was found in the disk cache and displayed.
    private fun loadFromDiskCache(url: String, imageView: ImageView): Boolean {
        try {
            val snapshot = diskCache.get(diskCacheKey(url)) ?: return false
            // DiskLruCache snapshots are backed by files, so the stream is a FileInputStream
            val inputStream = snapshot.getInputStream(0)
            val fileDescriptor = (inputStream as FileInputStream).fd
            val bitmap = BitmapFactory.decodeFileDescriptor(fileDescriptor) ?: return false
            // Promote the cold entry to the hot tier before displaying it
            memoryCache.put(url, bitmap)
            imageView.setImageBitmap(bitmap)
            return true
        } catch (e: IOException) {
            e.printStackTrace()
            return false
        }
    }

    private fun loadFromNetwork(url: String, imageView: ImageView) {
        // Send a network request for the image bytes (implementation elided);
        // imageData, reqWidth and reqHeight come from the request and the target view
        // ...

        // Decode the image data and display it
        val bitmap = decodeBitmapFromData(imageData, reqWidth, reqHeight)
        if (bitmap != null) {
            memoryCache.put(url, bitmap)
            try {
                val editor = diskCache.edit(diskCacheKey(url))
                if (editor != null) {
                    val outputStream = editor.newOutputStream(0)
                    bitmap.compress(Bitmap.CompressFormat.PNG, 100, outputStream)
                    editor.commit()
                }
            } catch (e: IOException) {
                e.printStackTrace()
            }
            imageView.setImageBitmap(bitmap)
        }
    }

    private fun decodeBitmapFromData(data: ByteArray, reqWidth: Int, reqHeight: Int): Bitmap? {
        // Decode the byte array into a Bitmap, downsampled to the requested size
        // (implementation elided)
        // ...
    }

    // DiskLruCache keys must match [a-z0-9_-]{1,120}, so hash the URL first
    private fun diskCacheKey(url: String): String {
        val digest = MessageDigest.getInstance("MD5").digest(url.toByteArray())
        return digest.joinToString("") { "%02x".format(it) }
    }
}

The ImageLoader class encapsulates the image-loading logic. It implements hot/cold separation with LruCache and DiskLruCache: frequently accessed images go into LruCache, infrequently used ones into DiskLruCache. When loading an image, it first checks LruCache and displays the bitmap directly on a hit; otherwise it looks in DiskLruCache; if the image is still not found, it is fetched over the network, cached in both LruCache and DiskLruCache, and finally displayed in the ImageView.

In this example, reordering shows up in the lookup order: LruCache first, then DiskLruCache, then a network request. This order maximizes the cache hit rate, cuts the number of network requests, and shortens image loading time.

Hot/cold separation shows up in pushing infrequently used images into DiskLruCache. Because disk reads and writes are comparatively slow, keeping cold images there prevents LruCache from filling up and thrashing through evictions. Commonly used images can then stay resident in LruCache, improving the cache hit rate.

Other application scenarios

1. ViewHolder recycling in RecyclerView: RecyclerView reuses item views through ViewHolders. Caching frequently accessed views greatly improves scrolling performance, especially with large data sets (multi-layout screens or comment lists).
2. Database queries: hot and cold data can be separated by access frequency, with the hot data cached to speed up queries.
3. The JIT (Just-In-Time) compiler: on Android, the JIT compiler turns bytecode into native code to speed up execution. Reordering can optimize the compiler's code-generation process, improving both compilation and execution speed.
4. UI rendering: hot/cold separation can cache commonly used layouts and components so they are not re-rendered every time, improving UI responsiveness and performance.
5. Dynamic class loading: when classes are loaded reflectively, reordering the loading process can improve the application's responsiveness.
6. Resource preloading: at startup, hot/cold separation can guide preloading of commonly used resources so they need not be loaded on demand, improving startup speed and performance.
7. Network requests: frequently requested data can be cached via hot/cold separation to avoid repeated requests, improving responsiveness and performance.
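For the network-request scenario above, a hedged sketch (the `ResponseCache` class and its names are illustrative, not from the post): caching responses by URL with an LFU-style policy keeps frequently requested "hot" responses in memory, while the coldest entry is evicted first when the cache fills.

```kotlin
// Illustrative LFU-style response cache: hot (frequently requested) responses
// stay resident; the least-frequently-used entry is evicted when full.
class ResponseCache(private val capacity: Int) {
    private data class Entry(val body: String, var uses: Int)
    private val entries = HashMap<String, Entry>()

    fun get(url: String): String? =
        entries[url]?.let { it.uses++; it.body }

    fun put(url: String, body: String) {
        if (entries.size >= capacity && url !in entries) {
            // Evict the coldest entry (fewest uses)
            entries.minByOrNull { it.value.uses }?.let { entries.remove(it.key) }
        }
        entries[url] = Entry(body, uses = 1)
    }
}
```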

In the end, it is the underlying idea that does the work.

Summary

To use a technique well, you have to understand what it rests on and where it can hurt. For example:

  1. There must be enough data to justify hot/cold separation and reordering; otherwise these optimizations may bring no measurable gain and may even add overhead.
  2. The implementation must account for the data's life cycle, so entries are not cached or destroyed at the wrong time.
  3. Hot/cold separation and reordering can change the order in which data is displayed, so extra handling may be needed to keep the presentation matching user expectations.
  4. Thread safety must be considered, so concurrent access does not corrupt the data or cause other anomalies.
  5. Thorough testing and profiling are required to confirm that these techniques deliver the expected gains without introducing new problems or risks.


Origin blog.csdn.net/maniuT/article/details/130019715