YYCache Source Code Reading: Memory Cache Design

The overall structure of YYCache is divided into two parts: a memory cache and a disk cache.

The small-capacity memory cache provides high-speed access, while the large-capacity disk cache provides slower but persistent storage.

Why do we need a cache?

Memory read/write speeds are far greater than disk speeds. By keeping frequently used data in memory, it can be read directly from memory the next time it is needed, which improves performance.

Key points of memory cache design

1. Caching algorithm: choose an algorithm appropriate to the usage scenario in order to achieve a higher cache hit rate. Common caching algorithms include LRU (least recently used), LRU-2 (similar to LRU, but the entry threshold becomes 2 accesses), LFU (least frequently used), FIFO (first in, first out), and so on.

2. Read/write performance: under the same conditions, how fast data can be stored and read.

3. Thread safety: the cache may be read and written from multiple threads, which must be taken into account.

LRU algorithm


The LRU (least recently used) algorithm, as the name suggests, evicts the data that has been used least recently. For example, with a capacity of 3 and the access sequence A, B, C, D, inserting D evicts A.

1. Access misses the cache and the cache is not yet full: insert the new data directly at the front. (This determines that the underlying structure cannot be an array, only a linked list: inserting at the head of an array is O(n), while inserting at the head of a linked list is O(1).)

2. Access hits the cache: move the accessed data to the head of the list.

3. Access misses the cache and the cache is full: evict the node at the tail of the list, then insert the new data at the head.

YYCache memory cache design

The memory cache part of YYCache is the YYMemoryCache class. YYMemoryCache implements the LRU caching algorithm with a hash map plus a doubly linked list. Hash map reads are O(1), which guarantees read speed; doubly linked list insertions and deletions are O(1), which guarantees write speed.
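
Before looking at the internals, here is a minimal usage sketch based on YYMemoryCache's public API (setObject:forKey:withCost:, objectForKey:, countLimit, costLimit); the limit values and the cached object are arbitrary illustration choices.

#import <YYCache/YYMemoryCache.h>   // assuming YYCache is integrated so this header is importable

YYMemoryCache *cache = [YYMemoryCache new];
cache.countLimit = 100;                  // keep at most 100 objects
cache.costLimit  = 10 * 1024 * 1024;     // or at most ~10 MB of user-defined "cost"

NSData *avatar = [NSData data];          // placeholder object for illustration
[cache setObject:avatar forKey:@"user_1_avatar" withCost:avatar.length];

NSData *cached = [cache objectForKey:@"user_1_avatar"];  // O(1) lookup; the hit also marks it as recently used
[cache removeObjectForKey:@"user_1_avatar"];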

Why not a singly linked list?

1. Deleting a node from a singly linked list requires the previous node, and finding the previous node requires traversing the list again.

2. Some might argue otherwise: copy the next node's data into the current node and then delete the next node, which has the same effect as deleting the current node's data. But first, this costs an extra data copy; and second, it does not work when deleting the last node, and the cache's eviction mechanism deletes exactly the last node (see the sketch below).
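
To make point 1 concrete, here is a small sketch (the Node, head, and tail names are illustrative, matching the pseudo-code later in the article): unlinking a node we already hold a pointer to is O(1) in a doubly linked list, because the node carries a pointer to its predecessor.

// Removing a node we already hold a pointer to (e.g. one found via the hash map):
- (void)removeNode:(Node *)node {
    if (node.prev) node.prev.next = node.next;
    if (node.next) node.next.prev = node.prev;
    if (node == self.head) self.head = node.next;
    if (node == self.tail) self.tail = node.prev;   // eviction always removes the tail
}
// With a singly linked list there is no node.prev, so finding the predecessor
// means walking the list from the head: O(n) instead of O(1).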

YYMemoryCache structure


The hash map's role is fast lookup. The doubly linked list maintains usage order and implements the eviction mechanism.
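
For reference, here is a sketch of the structures the pseudo-code below assumes. The Node / LinkedList / hashmap names are simplified stand-ins; YYMemoryCache's actual internal classes are _YYLinkedMapNode and _YYLinkedMap, whose nodes additionally record a cost and a last-access timestamp.

@interface Node : NSObject
@property (nonatomic, strong) id key;       // kept so the hash map entry can be removed on eviction
@property (nonatomic, strong) id data;
@property (nonatomic, weak)   Node *prev;   // weak to avoid a retain cycle inside the list
@property (nonatomic, strong) Node *next;
@end
@implementation Node
@end

@interface LinkedList : NSObject
@property (nonatomic, strong) Node *head;   // most recently used
@property (nonatomic, strong) Node *tail;   // least recently used, evicted first
@end
@implementation LinkedList
@end

// The cache then owns:
//   NSMutableDictionary *hashmap;   // key -> Node, O(1) lookup
//   LinkedList *linkedList;         // maintains usage order, O(1) insert/unlink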

Pseudo-code for the corresponding LRU operations:

1. Access misses the cache and the cache is not full: insert the new data directly at the head.

if (!hashmap[@"key"]) {
    Node *node = [[Node alloc] init];
    node.data = data;
    hashmap[@"key"] = node;
    if (self.linkedList.head) {
        node.next = self.linkedList.head;
        node.prev = nil;
        self.linkedList.head.prev = node;
        self.linkedList.head = node;
    } else {
        node.prev = nil;
        node.next = nil;
        self.linkedList.head = node;
        self.linkedList.tail = node;
    }
}复制代码

2. Access hits the cache: move the accessed data to the head of the list.

if (hashmap[@"key"]) {
    Node *node = hashmap[@"key"];

    if (node == self.linkedList.head) {
    	return;
    } else if (node == self.linkedList.tail) {
    	node.prev.next = nil;
    	self.linkedList.tail = node.prev;
    	node.next = self.linkedList.head;
    	self.linkedList.head.prev = node;
    	self.linkedList.head = node;
    } else {
    	node.prev.next = node.next;
    	node.next.prev = node.prev;
    	node.next = self.linkedList.head;
    	self.linkedList.head.prev = node;
    	self.linkedList.head = node;
    }
}复制代码

3. Access misses the cache and the cache is full: evict the node at the tail of the list, then insert the new data at the head.

Node *node = self.linkedList.tail;
node.prev.next = nil;
self.linkedList.tail = node.prev;
[hashmap removeObjectForKey:node.key];  // remove by the evicted node's own key
node = nil;  // under ARC the node is released here (free() is not valid for an Objective-C object)
// then insert the new data at the head, exactly as in step 1

Thread Safety

1. Thread safety for reads and writes is guaranteed by a lock (see the sketch after the snippet below).

2. Removed nodes are released on an asynchronous queue, which improves performance.

if (_lru->_totalCost > _costLimit) {
    dispatch_async(_queue, ^{
        [self trimToCost:_costLimit];
    });
}
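
Putting the two points together, here is a hedged sketch of what a locked write path can look like. It is not YYMemoryCache's actual code; the ivar and property names (_lock as a pthread_mutex_t, hashmap, linkedList, countLimit, releaseQueue) are assumptions, but the pattern of a pthread_mutex around every access plus an asynchronous release of evicted nodes matches what YYMemoryCache does.

#import <pthread.h>

- (void)setObject:(id)object forKey:(id)key {
    pthread_mutex_lock(&_lock);                      // point 1: every read/write is guarded by the lock

    Node *evicted = nil;
    if (self.hashmap.count >= self.countLimit) {     // cache full: unlink the LRU victim at the tail
        evicted = self.linkedList.tail;
        self.linkedList.tail = evicted.prev;
        evicted.prev.next = nil;
        if (self.linkedList.head == evicted) self.linkedList.head = nil;  // single-node edge case
        [self.hashmap removeObjectForKey:evicted.key];
    }
    // ... create the new node and insert it at the head, as in the pseudo-code above ...

    pthread_mutex_unlock(&_lock);

    if (evicted) {
        // point 2: hand the node to a background queue so the (possibly expensive)
        // dealloc of the cached object happens off the caller's thread.
        dispatch_async(self.releaseQueue, ^{
            [evicted class];                         // the block retains the node; it is released
        });                                          // when the block finishes executing
    }
}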


Reproduced from: https://juejin.im/post/5d08efe46fb9a07ee30e1beb


Origin: blog.csdn.net/weixin_34013044/article/details/93172956