Analysis of Android's Glide image framework

Why use Glide?

  1. Simple to use: a concise, chainable API. An image can be loaded in three steps: with, load, and into.
  2. The life cycle is automatically bound, and image requests are managed according to the bound Activity or Fragment life cycle.
  3. Processes Bitmaps efficiently: supports bitmap reuse and active recycling to reduce pressure on the garbage collector.
  4. Lower memory usage (using the RGB_565 format): each ARGB_8888 pixel occupies four bytes, twice as many as RGB_565, so RGB_565 roughly halves a bitmap's memory footprint.
  5. Supports multiple image formats (Gif, Webp, Jpg)

Glide loading process

The loading process of Glide is roughly as follows:

  1. Glide.with gets the RequestManager bound to the life cycle
  2. RequestManager obtains the corresponding RequestBuilder through load
  3. The RequestBuilder's into method builds the corresponding Request and Target, then hands both to the RequestManager for unified management.
  4. Call Request.track to start image request
  5. The request attempts to load the image via the Engine from the active cache, the LRU memory cache, and the file cache. Only when none of these caches contain the image is it fetched from the network.
  6. The network path can be roughly divided into ModelLoader model matching and DataFetcher data fetching, followed by decoding, image transformation, and transcoding. If the original data is cacheable, the decoded data is also encoded and written to the file cache.
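The tiered lookup in steps 5 and 6 can be sketched in plain Java. This is an illustrative model only: the class and method names below (TieredLookup, the String-based "resources") are invented for the sketch and are not Glide's API.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative model of the lookup order: active cache -> memory cache ->
// disk cache -> network. Real Glide works with EngineResource objects and
// weak references; plain maps and strings stand in for them here.
public class TieredLookup {
    public final Map<String, String> active = new HashMap<>();
    public final Map<String, String> memory = new HashMap<>();
    public final Map<String, String> disk = new HashMap<>();

    public String load(String key) {
        // 1. active (weak-reference) cache
        String r = active.get(key);
        if (r != null) return r + " (active)";
        // 2. LRU memory cache; a hit is promoted to the active cache
        r = memory.get(key);
        if (r != null) {
            memory.remove(key);
            active.put(key, r);
            return r + " (memory)";
        }
        // 3. disk cache
        r = disk.get(key);
        if (r != null) return r + " (disk)";
        // 4. network, then populate the caches
        r = "bytes-for-" + key;
        disk.put(key, r);    // original data cached on disk
        active.put(key, r);  // decoded resource becomes active
        return r + " (network)";
    }
}
```

A first load falls all the way through to the network; a repeated load hits the active cache immediately.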

Glide’s life cycle management

  1. Create a UI-less Fragment (SupportRequestManagerFragment) and bind it to the current Activity so that the Fragment can sense the Activity life cycle;
  2. When creating a Fragment, initialize Lifecycle and LifecycleListener, and call related methods in onStart(), onStop(), and onDestroy() of the life cycle.
  3. The Lifecycle object is passed in when the RequestManager is created, and the RequestManager implements the LifecycleListener interface.
  4. In this way, whenever the life cycle changes, the RequestManager is notified through the interface callback and can process its requests accordingly.

Glide's life-cycle handling boils down to two questions:
How does Glide perceive the life cycle of the current page?
By creating a UI-less Fragment.
How is the life cycle passed on?
The Fragment and the RequestManager are connected through callbacks on the Lifecycle and LifecycleListener interfaces.
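The relay described above can be sketched in plain Java. The Lifecycle and RequestManager classes below are simplified stand-ins for Glide's ActivityFragmentLifecycle and RequestManager, not the real implementations:

```java
import java.util.ArrayList;
import java.util.List;

public class LifecycleDemo {
    interface LifecycleListener { void onStart(); void onStop(); void onDestroy(); }

    // Plays the role of the Lifecycle owned by the UI-less fragment: the
    // fragment's onStart/onStop/onDestroy call the notify* methods.
    static class Lifecycle {
        private final List<LifecycleListener> listeners = new ArrayList<>();
        void addListener(LifecycleListener l) { listeners.add(l); }
        void notifyStart()   { for (LifecycleListener l : listeners) l.onStart(); }
        void notifyStop()    { for (LifecycleListener l : listeners) l.onStop(); }
        void notifyDestroy() { for (LifecycleListener l : listeners) l.onDestroy(); }
    }

    // Plays the role of RequestManager: registers itself on construction and
    // reacts to life-cycle callbacks by adjusting its requests.
    static class RequestManager implements LifecycleListener {
        String state = "idle";
        RequestManager(Lifecycle lifecycle) { lifecycle.addListener(this); }
        @Override public void onStart()   { state = "resumed requests"; }
        @Override public void onStop()    { state = "paused requests"; }
        @Override public void onDestroy() { state = "cleared requests"; }
    }
}
```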

Glide cache management

Glide divides its cache into two parts: a memory cache and a disk cache. The memory cache itself has two levels, a weak-reference layer and an LruCache; the disk cache is a DiskLruCache, whose eviction algorithm is similar to LruCache's. Seen this way, Glide's three-level cache is WeakReference + LruCache + DiskLruCache.

The purpose of introducing caching

  • Reduce traffic consumption and speed up response
  • The creation/destruction of Bitmap consumes more memory and may lead to frequent GC; using cache can load Bitmap more efficiently and reduce lag.

Glide caching process

Glide's cache is divided into a memory cache and a disk cache, with the memory cache composed of weak references + LruCache.
The order of reading is: weak references > LruCache > DiskLruCache > network.
The order of writing is: network -> DiskLruCache -> weak references -> LruCache.

Memory caching principle

1. Weak references
Underlying data structure: a HashMap, where the key is the cache key (built from about 10 parameters such as the image URL, width, and height) and the value is a weak reference to the image resource object.

Map<Key, ResourceWeakReference> activeEngineResources = new HashMap<>();

2. LruCache
Underlying data structure: LinkedHashMap. LRU (least recently used) is a common replacement algorithm that evicts the entries that have gone unused the longest. With access ordering enabled, LinkedHashMap moves each accessed entry to the tail of its internal linked list, so the least recently used entry sits at the head, ready for eviction.
This is Glide's custom LruCache:

#LruCache
Map<T, Y> cache = new LinkedHashMap<>(100, 0.75f, true);
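The effect of that third constructor argument (accessOrder = true) can be demonstrated with a plain LinkedHashMap; the class below is a standalone sketch, not Glide code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// With accessOrder = true, every get() moves the touched entry to the tail
// of the iteration order, so the first key in iteration order is always the
// least-recently-used entry -- the one an LRU cache would evict first.
public class AccessOrderDemo {
    public static String leastRecentlyUsed() {
        Map<String, Integer> cache = new LinkedHashMap<>(16, 0.75f, true);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.put("c", 3);
        cache.get("a"); // touching "a" moves it to the most-recently-used end
        // iteration order is now b, c, a -> "b" is the LRU entry
        return cache.keySet().iterator().next();
    }
}
```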

3. Access principle
Retrieving data:
The memory cache keeps a reference counter for each image. Specifically, EngineResource defines an acquired variable that records how many times the image is referenced; calling acquire() increments it and calling release() decrements it.
To obtain an image resource, Glide first looks in the weak-reference cache; on a hit, the reference count is incremented. Otherwise it looks in the LruCache; on a hit there, the reference count is also incremented and the image is moved from the LruCache into the weak-reference pool. If both miss, an EngineJob starts a thread pool to load the image; once loaded, the reference count is incremented and the image is placed into the weak-reference cache.
Storing data:
This happens after the image has been loaded: the EngineJob's thread pool loads the image, the result is posted back to the main thread, and the image is saved into the weak-reference cache. When the image is no longer used (for example, when the request is paused, finishes loading, or the resource is cleared), it is moved from the weak-reference cache into the LruCache pool. In short, images in use are cached with weak references, and images temporarily out of use are cached in the LruCache; the same image appears in only one of the two at a time.
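A minimal sketch of this acquire/release bookkeeping, assuming a simplified EngineResource with an acquired counter. Real Glide also uses actual weak references and thread-safe counting, both omitted here:

```java
import java.util.HashMap;
import java.util.Map;

public class RefCountDemo {
    public static class EngineResource {
        public final String key;
        public int acquired; // Glide's field is also called "acquired"
        public EngineResource(String key) { this.key = key; }
    }

    // Plain maps stand in for the weak-reference pool and the LruCache.
    public final Map<String, EngineResource> active = new HashMap<>();
    public final Map<String, EngineResource> lru = new HashMap<>();

    public void acquire(EngineResource r) {
        // Resource is now in use: count up and keep it in the active pool.
        r.acquired++;
        lru.remove(r.key);
        active.put(r.key, r);
    }

    public void release(EngineResource r) {
        // Last reference gone: demote the resource to the LruCache.
        if (--r.acquired == 0) {
            active.remove(r.key);
            lru.put(r.key, r);
        }
    }
}
```

The invariant from the text holds: a resource is in the active pool or the LRU pool, never both.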
Why introduce weak references?

  1. Load splitting: it reduces the probability of LruCache calling trimToSize. If an in-use image is large and LruCache is near its size limit, removing that image from LruCache (into the weak-reference pool) postpones LruCache's trimToSize work.
  2. Efficiency: the weak-reference layer uses a HashMap while LruCache uses a LinkedHashMap, and HashMap has lower access overhead.

Handwriting an LruCache

The ready-made data structure for a handwritten LRU is LinkedHashMap.
The core data structure of the LRU cache algorithm is the hash linked list: a doubly linked list combined with a hash table.

import java.util.HashMap;
import java.util.Map;

// Doubly linked list + hash table
public class LRUCache {

    class DLinkedNode {
        int key;
        int value;
        DLinkedNode prev;
        DLinkedNode next;

        public DLinkedNode() {
        }

        public DLinkedNode(int key, int value) {
            this.key = key;
            this.value = value;
        }
    }

    private Map<Integer, DLinkedNode> cache = new HashMap<>();
    private int size;
    private int capacity;
    private DLinkedNode head, tail;

    public LRUCache(int capacity) {
        this.size = 0;
        this.capacity = capacity;
        // Dummy head and tail nodes avoid null checks at the list ends
        head = new DLinkedNode();
        tail = new DLinkedNode();
        head.next = tail;
        tail.prev = head;
    }

    public int get(int key) {
        DLinkedNode node = cache.get(key);
        if (node == null) {
            return -1;
        }
        // Key exists: locate it via the hash table, then move it to the head
        moveToHead(node);
        return node.value;
    }

    public void put(int key, int value) {
        DLinkedNode node = cache.get(key);
        if (node == null) {
            // Key does not exist: create a new node
            DLinkedNode newNode = new DLinkedNode(key, value);
            // Add it to the hash table
            cache.put(key, newNode);
            // Add it to the head of the doubly linked list
            addToHead(newNode);
            ++size;
            if (size > capacity) {
                // Over capacity: remove the tail node of the list
                DLinkedNode removed = removeTail();
                // and the corresponding entry in the hash table
                cache.remove(removed.key);
                --size;
            }
        } else {
            // Key exists: update the value and move the node to the head
            node.value = value;
            moveToHead(node);
        }
    }

    private void addToHead(DLinkedNode node) {
        node.prev = head;
        node.next = head.next;
        head.next.prev = node;
        head.next = node;
    }

    private void removeNode(DLinkedNode node) {
        node.prev.next = node.next;
        node.next.prev = node.prev;
    }

    private void moveToHead(DLinkedNode node) {
        removeNode(node);
        addToHead(node);
    }

    private DLinkedNode removeTail() {
        DLinkedNode res = tail.prev;
        removeNode(res);
        return res;
    }
}

The resulting structure is a hash table whose entries point into a doubly linked list.
With this structure, let's analyze it one by one:
1. In the code above, every touched element is moved to the head of the linked list, so elements closer to the head are the most recently used, and the element at the tail is the one that has gone unused the longest and is evicted first.
2. For a certain key, we can quickly locate the node in the linked list through the hash table to obtain the corresponding val.
3. A linked list supports fast insertion and deletion at any position: only pointers change. A traditional linked list cannot jump to an element at a given position quickly, but with the hash table we can map a key straight to its list node and then insert or delete it.
Why a doubly linked list?
Because of the delete operation: removing a node requires not only the node's own pointer but also its predecessor's. Only a doubly linked list gives direct access to the predecessor, keeping the operation at O(1).

Disk cache principle (DiskLruCache)

  • DiskCacheStrategy.DATA: Only cache original images;
  • DiskCacheStrategy.RESOURCE: Only cache converted images;
  • DiskCacheStrategy.ALL: caches both original images and converted images; for remote images, caches DATA and RESOURCE; for local images, only caches RESOURCE;
  • DiskCacheStrategy.NONE: Do not cache anything;
  • DiskCacheStrategy.AUTOMATIC: the default strategy, which tries to pick the best option for local and remote images. For network images it uses DATA, since re-downloading is far more expensive than re-processing a local copy; for local images it uses RESOURCE.

If the data is not obtained in the memory cache, the thread pool will be opened through EngineJob to load the image. There are two key classes here: DecodeJob and EngineJob. EngineJob maintains a thread pool internally to manage resource loading and notify callbacks when the resources are loaded; DecodeJob is a task in the thread pool.

The disk cache is managed through DiskLruCache. Depending on the cache policy, there are two kinds of cached images: DATA (original images) and RESOURCE (transformed images). Disk cache data is obtained through ResourceCacheGenerator, DataCacheGenerator, and SourceGenerator in sequence: ResourceCacheGenerator reads transformed cache data; DataCacheGenerator reads original, untransformed cache data; SourceGenerator fetches the image data from the network and then writes it to the disk cache according to the caching strategy.

Glide memory management

Glide memory management is divided into

  • OOM prevention
  • Handling memory jitter

OOM prevention
1. Glide image sampling
For larger images, Glide computes a sample size from the ratio of the target UI size to the image's actual size, which reduces the memory the image occupies. In general, the memory size of an image = width × height × bytes per pixel.
For pictures in the resource folder:

Height of the loaded image = height of the original image × (device dpi / dpi of the resource directory)

Width of the loaded image = width of the original image × (device dpi / dpi of the resource directory)
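As plain arithmetic (the dpi values below are just an example: a 100×100 px image in a 320 dpi resource directory loaded on a 480 dpi device):

```java
// Computes one scaled dimension of a resource-folder image using the
// formula above: original size x (device dpi / directory dpi).
public class DensityScale {
    public static int scaled(int originalPx, int deviceDpi, int folderDpi) {
        return originalPx * deviceDpi / folderDpi;
    }
}
```

For the example values, 100 × 480 / 320 gives a 150 px edge.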

onLowMemory / onTrimMemory

  • When memory runs very low, onLowMemory is called, and in it Glide clears some of its cached memory to ease memory recycling.
  • When onTrimMemory is called with a level indicating that system resources are tight, Glide evicts the contents of the Lru cache and the Bitmap reuse pool.
  • If onTrimMemory is called for other reasons, Glide trims the cache to 1/2 of its configured maximum.
2. Weak references
Glide manages image requests through RequestManager, which internally relies on RequestTracker and TargetTracker; both hold their requests via weak references.
3. Life-cycle binding
Together, these measures reduce the size of images loaded into memory and clear unnecessary object references, lowering the probability of OOM.

Handling memory jitter

1. Glide uses object pooling for commonly created objects in the loading pipeline, such as EngineJob and DecodeJob, which would otherwise be created and discarded in large numbers; the reuse pool lets them be recycled instead.
2. BitmapPool reuses Bitmap objects: when decoding images, BitmapFactory.Options.inBitmap is set so that existing Bitmap memory is reused.

Threads and thread pools in Glide

There are two aspects about threads and thread pools in Glide:
1. Image loading callback
Glide has two image-loading entry points, into and submit:

  • Images loaded through into will be called back to the main thread through MAIN_THREAD_EXECUTOR of Executors.
  • The callback through submit will be processed in the current thread through DIRECT_EXECUTOR of Executors.

2.Glide’s thread pool configuration

As the smallest unit of CPU scheduling, a thread costs real resources each time it is created and destroyed. By using a thread pool, you can:

  1. Reduce resource consumption: Reduce the consumption caused by thread creation and destruction by reusing created threads.
  2. Improve response speed: When a task arrives, it can be executed immediately without waiting for thread creation
  3. Improve the manageability of threads: Threads are scarce resources. If they are created without restrictions, they will not only consume system resources, but also reduce the stability of the system. The thread pool can be used for unified allocation, monitoring and tuning.
  4. Effective control of concurrency

There are four thread pool configurations provided in Glide.

1. DiskCacheExecutor: this pool has a single core thread and no non-core threads, so all tasks execute serially. Glide uses it for loading images from the disk cache.

2. SourceExecutor: this pool also has only core threads and no non-core threads. Unlike DiskCacheExecutor, its core-thread count depends on the number of CPU cores: 4 core threads if there are more than 4 cores, otherwise one per core. Glide uses it for loading images from the network.

3. UnlimitedSourceExecutor: no core threads and an unlimited number of non-core threads. This kind of pool suits bursts of short-lived tasks; once all tasks finish, it consumes almost no resources.

4. AnimationExecutor: no core threads; the number of non-core threads depends on the CPU core count: 2 when there are 4 or more cores, otherwise 1.
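The four configurations can be approximated with plain ThreadPoolExecutor constructors. This is a rough sketch of the thread counts described above, not Glide's actual GlideExecutor (which adds thread naming, priorities, and uncaught-exception handling); for simplicity, the animation variant here uses core threads where Glide uses non-core threads:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class GlidePools {
    // One core thread, no extras: all disk-cache tasks run serially.
    public static ThreadPoolExecutor diskCacheExecutor() {
        return new ThreadPoolExecutor(1, 1, 0, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<>());
    }

    // Core threads = min(4, available CPU cores), used for network loads.
    public static ThreadPoolExecutor sourceExecutor() {
        int n = Math.min(4, Runtime.getRuntime().availableProcessors());
        return new ThreadPoolExecutor(n, n, 0, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<>());
    }

    // No core threads, effectively unbounded extras that die when idle.
    public static ThreadPoolExecutor unlimitedSourceExecutor() {
        return new ThreadPoolExecutor(0, Integer.MAX_VALUE, 10, TimeUnit.SECONDS,
                new SynchronousQueue<>());
    }

    // 2 threads on machines with >= 4 cores, otherwise 1.
    public static ThreadPoolExecutor animationExecutor() {
        int n = Runtime.getRuntime().availableProcessors() >= 4 ? 2 : 1;
        return new ThreadPoolExecutor(n, n, 0, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<>());
    }
}
```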

How Glide loads different types of resources

Glide determines the resource type the current request's Target ultimately needs through RequestManager's as method, and the load method determines the type of model to load. Resource loading then goes through ModelLoader matching, decoder decoding, and transcoder conversion; these steps are assembled into LoadPaths, each of which contains several DecodePaths. The main job of a DecodePath is to decode and convert the data produced by the ModelLoader. Glide traverses every LoadPath that might parse the data until one actually succeeds.

How Glide loads Gif

First, the type of the loaded image must be determined: after the network request returns an input stream, Glide reads its first 3 bytes, and if they match the GIF file header, the image type is reported as GIF.
Once the image is identified as a GIF, a GIF decoder is built that reads each frame from the animation and converts it into a Bitmap. The Bitmap is drawn to the ImageView with a Canvas, and a delayed Handler message schedules the next frame; when every frame has been drawn, the sequence starts over, producing a continuously playing GIF animation.
In other words, the GIF is decoded into multiple images that cycle indefinitely: each frame switch is an image-load request, and after a new frame loads, the old frame's data is cleared and the load request for the next frame is issued.
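The type check described above amounts to inspecting the first three bytes of the stream, since every GIF file begins with the ASCII signature "GIF" ("GIF87a" or "GIF89a" in full). A minimal standalone sketch:

```java
// Sniffs the GIF signature from the first bytes of an image stream.
public class ImageTypeSniffer {
    public static boolean isGif(byte[] firstBytes) {
        return firstBytes != null
                && firstBytes.length >= 3
                && firstBytes[0] == 'G'
                && firstBytes[1] == 'I'
                && firstBytes[2] == 'F';
    }
}
```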

Origin blog.csdn.net/ChenYiRan123456/article/details/131586846