LinkedHashMap source code analysis and LRU application
Introduction to LinkedHashMap
As is well known, LinkedHashMap is built on top of HashMap and preserves the order in which elements are added; in addition, it supports LRU ordering and can therefore serve as a cache.
Goals of this source code analysis
- How element order is maintained
- How LRU is implemented
Source code analysis
How order is maintained
public class LinkedHashMap<K,V> extends HashMap<K,V> implements Map<K,V> {
    // head (eldest) and tail (youngest) of the doubly linked list
    transient LinkedHashMap.Entry<K,V> head;
    transient LinkedHashMap.Entry<K,V> tail;
    // true: iteration in access order (LRU); false: insertion order
    final boolean accessOrder;

    static class Entry<K,V> extends HashMap.Node<K,V> {
        Entry<K,V> before, after;
        Entry(int hash, K key, V value, Node<K,V> next) {
            super(hash, key, value, next);
        }
    }

    Node<K,V> newNode(int hash, K key, V value, Node<K,V> e) { /* ... */ }
    void afterNodeAccess(Node<K,V> e) { /* ... */ }
    void afterNodeInsertion(boolean evict) { /* ... */ }
    void afterNodeRemoval(Node<K,V> e) { /* ... */ }
}
LinkedHashMap extends HashMap and overrides four hook methods: newNode, afterNodeAccess, afterNodeInsertion, and afterNodeRemoval. HashMap calls these back when it creates, accesses, inserts, and removes nodes; LinkedHashMap uses them to maintain a doubly linked list through the elements, which is what makes ordered traversal possible.
- The head and tail fields point to the first and last elements of the linked list respectively
- Each added element is wrapped in an Entry object; Entry extends HashMap.Node and adds the before and after fields, which point to the previous and next elements, forming the doubly linked structure
- accessOrder: when true, elements are ordered for LRU, i.e. the most recently accessed element is moved to the tail of the linked list and the least recently accessed one sits at the head; the default is false (insertion order)
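Before looking at the hooks in detail, here is a minimal demonstration of the default insertion-order guarantee (the keys and values are my own illustrative choices):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class InsertionOrderDemo {
    public static void main(String[] args) {
        // Default constructor: accessOrder = false, so iteration
        // follows insertion order regardless of hash distribution
        Map<String, Integer> linked = new LinkedHashMap<>();
        linked.put("banana", 2);
        linked.put("apple", 1);
        linked.put("cherry", 3);
        System.out.println(linked.keySet()); // prints [banana, apple, cherry]
    }
}
```

A plain HashMap gives no such guarantee; the same three puts could iterate in any order depending on the keys' hashes.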
Adding an element
LinkedHashMap does not override HashMap's put method. From HashMap's put source code we know that when an element is inserted, newNode is called to create the node, and afterNodeInsertion is called once insertion completes. Let's look at how LinkedHashMap overrides these two methods:
Node<K,V> newNode(int hash, K key, V value, Node<K,V> e) {
    LinkedHashMap.Entry<K,V> p =
        new LinkedHashMap.Entry<K,V>(hash, key, value, e);
    linkNodeLast(p);
    return p;
}

// link the new node at the end of the list
private void linkNodeLast(LinkedHashMap.Entry<K,V> p) {
    LinkedHashMap.Entry<K,V> last = tail;
    tail = p;
    if (last == null)
        head = p;
    else {
        p.before = last;
        last.after = p;
    }
}
- When a node is created, a LinkedHashMap.Entry is instantiated and then linked to the tail of the doubly linked list via linkNodeLast
void afterNodeInsertion(boolean evict) { // possibly remove eldest
    LinkedHashMap.Entry<K,V> first;
    if (evict && (first = head) != null && removeEldestEntry(first)) {
        K key = first.key;
        removeNode(hash(key), key, null, false, true);
    }
}
- After the element is inserted, afterNodeInsertion is called. LinkedHashMap's override delegates to removeEldestEntry to decide whether to remove the head element. This exists mainly to support LRU: if removeEldestEntry returns true (e.g. the cache is full), the element at the head of the linked list is removed
- As for the evict parameter, HashMap documents it as false when in "creation mode", i.e. when put is invoked while populating the map during deserialization; eviction is skipped in that case
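The eviction hook can be exercised without writing a named subclass; the sketch below uses an anonymous subclass with an illustrative cap of 3 entries (both choices are mine, not from the JDK):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class EvictionHookDemo {
    public static void main(String[] args) {
        // accessOrder = false: eviction follows insertion order here.
        // removeEldestEntry is consulted after every successful insertion.
        Map<Integer, String> cache = new LinkedHashMap<>(16, 0.75f, false) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Integer, String> eldest) {
                return size() > 3; // illustrative capacity
            }
        };
        for (int i = 1; i <= 5; i++) {
            cache.put(i, "v" + i);
        }
        // The two oldest insertions (1 and 2) were evicted from the head
        System.out.println(cache.keySet()); // prints [3, 4, 5]
    }
}
```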
Getting an element
public V get(Object key) {
    Node<K,V> e;
    if ((e = getNode(hash(key), key)) == null)
        return null;
    if (accessOrder)
        afterNodeAccess(e);
    return e.value;
}

void afterNodeAccess(Node<K,V> e) { // move node to last
    LinkedHashMap.Entry<K,V> last;
    if (accessOrder && (last = tail) != e) {
        LinkedHashMap.Entry<K,V> p =
            (LinkedHashMap.Entry<K,V>)e, b = p.before, a = p.after;
        p.after = null;
        if (b == null)
            head = a;
        else
            b.after = a;
        if (a != null)
            a.before = b;
        else
            last = b;
        if (last == null)
            head = p;
        else {
            p.before = last;
            last.after = p;
        }
        tail = p;
        ++modCount;
    }
}
LinkedHashMap overrides HashMap's get method: after fetching the node, it checks the accessOrder flag to decide whether to reposition the element for LRU. The default is false, so a plain get does not change the order and insertion order is preserved.
To use the map as an LRU cache, set accessOrder to true; get will then call the overridden afterNodeAccess, which in essence moves the accessed element to the tail of the linked list.
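A small demonstration of the reordering that afterNodeAccess performs when accessOrder is true (keys are my own):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AccessOrderDemo {
    public static void main(String[] args) {
        // Third constructor argument enables access ordering
        Map<String, Integer> map = new LinkedHashMap<>(16, 0.75f, true);
        map.put("a", 1);
        map.put("b", 2);
        map.put("c", 3);
        map.get("a"); // afterNodeAccess moves "a" to the tail of the list
        System.out.println(map.keySet()); // prints [b, c, a]
    }
}
```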
Removing an element
void afterNodeRemoval(Node<K,V> e) { // unlink
    LinkedHashMap.Entry<K,V> p =
        (LinkedHashMap.Entry<K,V>)e, b = p.before, a = p.after;
    p.before = p.after = null;
    if (b == null)
        head = a;
    else
        b.after = a;
    if (a == null)
        tail = b;
    else
        a.before = b;
}
After HashMap removes a node, afterNodeRemoval also unlinks it from the doubly linked list; the logic is straightforward.
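Unlinking leaves the order of the remaining elements intact, which a quick check confirms (keys are my own):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class RemovalDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new LinkedHashMap<>();
        map.put("a", 1);
        map.put("b", 2);
        map.put("c", 3);
        map.remove("b"); // afterNodeRemoval unlinks "b" from the list
        System.out.println(map.keySet()); // prints [a, c]
    }
}
```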
Iterating over the collection
LinkedHashMap also overrides the methods that expose the map's collection views, such as entrySet and keySet, so that callers traverse elements in the order of the doubly linked list:
public Set<Map.Entry<K,V>> entrySet() {
    Set<Map.Entry<K,V>> es;
    return (es = entrySet) == null ? (entrySet = new LinkedEntrySet()) : es;
}

final class LinkedEntrySet extends AbstractSet<Map.Entry<K,V>> {
    public final int size() { return size; }
    public final void clear() { LinkedHashMap.this.clear(); }
    public final Iterator<Map.Entry<K,V>> iterator() {
        return new LinkedEntryIterator();
    }
    public final boolean contains(Object o) {
        if (!(o instanceof Map.Entry))
            return false;
        Map.Entry<?,?> e = (Map.Entry<?,?>) o;
        Object key = e.getKey();
        Node<K,V> candidate = getNode(hash(key), key);
        return candidate != null && candidate.equals(e);
    }
    public final boolean remove(Object o) {
        if (o instanceof Map.Entry) {
            Map.Entry<?,?> e = (Map.Entry<?,?>) o;
            Object key = e.getKey();
            Object value = e.getValue();
            return removeNode(hash(key), key, value, true, true) != null;
        }
        return false;
    }
    public final Spliterator<Map.Entry<K,V>> spliterator() {
        return Spliterators.spliterator(this, Spliterator.SIZED |
                                        Spliterator.ORDERED |
                                        Spliterator.DISTINCT);
    }
    public final void forEach(Consumer<? super Map.Entry<K,V>> action) {
        if (action == null)
            throw new NullPointerException();
        int mc = modCount;
        // walk the doubly linked list from head to tail
        for (LinkedHashMap.Entry<K,V> e = head; e != null; e = e.after)
            action.accept(e);
        if (modCount != mc)
            throw new ConcurrentModificationException();
    }
}
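Note the modCount check at the end of forEach: structural modification during traversal is detected and rejected. A small demonstration of this fail-fast behavior (keys are my own):

```java
import java.util.ConcurrentModificationException;
import java.util.LinkedHashMap;
import java.util.Map;

public class FailFastDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new LinkedHashMap<>();
        map.put("a", 1);
        map.put("b", 2);
        try {
            // Removing an entry mid-traversal changes modCount,
            // so forEach throws once the walk finishes
            map.entrySet().forEach(e -> map.remove("a"));
        } catch (ConcurrentModificationException ex) {
            System.out.println("fail-fast triggered");
        }
    }
}
```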
LRU support
LinkedHashMap supports LRU out of the box and can serve as an LRU cache: when constructing the map, pass true for the accessOrder parameter to indicate that the order should be adjusted on every access;
public LinkedHashMap(int initialCapacity,
                     float loadFactor,
                     boolean accessOrder) {
    super(initialCapacity, loadFactor);
    this.accessOrder = accessOrder;
}
With this flag set, every accessed element is moved to the tail of the linked list, so after a series of accesses the most recently used element sits at the tail and the least recently used one at the head;
public V get(Object key) {
    Node<K,V> e;
    if ((e = getNode(hash(key), key)) == null)
        return null;
    if (accessOrder)
        afterNodeAccess(e); // move the entry to the tail of the list
    return e.value;
}
On insertion, the removeEldestEntry hook decides, based on the cache's maximum capacity, whether to remove the head element; the default implementation simply returns false:
void afterNodeInsertion(boolean evict) { // possibly remove eldest
    LinkedHashMap.Entry<K,V> first;
    if (evict && (first = head) != null && removeEldestEntry(first)) {
        K key = first.key;
        removeNode(hash(key), key, null, false, true);
    }
}

protected boolean removeEldestEntry(Map.Entry<K,V> eldest) {
    return false;
}
So building an LRU cache takes only two steps:
- Pass true for accessOrder when creating the LinkedHashMap
- Override removeEldestEntry so that it returns true when the current number of elements exceeds the cache's maximum capacity, which removes the head element of the doubly linked list
Simple LRU cache center implementation:
public class LruCacheCenter<K, V> extends LinkedHashMap<K, V> {
    private static final int MAX_CACHE_NUM = 16; // default maximum cache size

    private final int mMaxCacheSize; // maximum number of cached entries

    public LruCacheCenter() {
        this(MAX_CACHE_NUM);
    }

    public LruCacheCenter(int maxCacheSize) {
        super(maxCacheSize, 0.75f, true); // accessOrder = true
        this.mMaxCacheSize = maxCacheSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > mMaxCacheSize;
    }
}
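To show the eviction policy in action, here is a self-contained sketch equivalent to LruCacheCenter above (the class name, capacity of 3, and keys are my own illustrative choices):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruDemo {
    // Minimal LRU cache: access order plus a capacity-based eviction rule
    static class LruCache<K, V> extends LinkedHashMap<K, V> {
        private final int maxSize;

        LruCache(int maxSize) {
            super(maxSize, 0.75f, true); // accessOrder = true
            this.maxSize = maxSize;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > maxSize;
        }
    }

    public static void main(String[] args) {
        LruCache<String, Integer> cache = new LruCache<>(3);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.put("c", 3);
        cache.get("a");    // "a" becomes the most recently used entry
        cache.put("d", 4); // over capacity: evicts "b", the least recently used
        System.out.println(cache.keySet()); // prints [c, a, d]
    }
}
```

Because "a" was touched before "d" was inserted, "b" sat at the head of the list and was the entry evicted.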