Java collection interview questions, Map interview questions, List interview questions, etc.





1. What interfaces does the Java collection framework have?

List, Set, Queue, and Deque (all under the Collection hierarchy), plus Map (a separate hierarchy that is still part of the framework)

2. What is the difference between List, Set, Map, Queue, and Deque?

  • List (keeps insertion order): elements are ordered, duplicates are allowed, and null is allowed
  • Set (guarantees uniqueness): no duplicate elements; ordering depends on the implementation (HashSet is unordered, TreeSet is sorted), and HashSet allows a single null
  • Map (key-based lookup): keys are unique; null support depends on the implementation (HashMap allows one null key and any number of null values, while Hashtable and ConcurrentHashMap allow neither)
  • Queue (models a FIFO queue): elements are ordered and repeatable; most implementations, such as ArrayDeque, reject null
  • Deque (double-ended queue, usable as both a stack and a queue): elements are ordered and repeatable; ArrayDeque rejects null, LinkedList allows it
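
The contrasts above can be seen directly in code (a small illustrative sketch; the class name is mine):

```java
import java.util.*;

public class CollectionBasics {
    public static void main(String[] args) {
        // List: keeps insertion order, keeps duplicates and null
        List<String> list = new ArrayList<>(Arrays.asList("a", "a", null));
        // Set: collapses duplicates; HashSet accepts a single null
        Set<String> set = new HashSet<>(list);
        // Map: a repeated key overwrites the previous value
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("a", 2);
        // Deque: usable as a stack (push/pop) or a queue (addLast/pollFirst)
        Deque<String> deque = new ArrayDeque<>();
        deque.push("x");
        deque.addLast("y");

        System.out.println(list.size());       // 3
        System.out.println(set.size());        // 2 ("a" and null)
        System.out.println(map.get("a"));      // 2
        System.out.println(deque.peekFirst()); // x
    }
}
```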

2.1 What is the difference between ArrayList (array) and LinkedList (doubly linked list)?

  • ArrayList: backed by a dynamic array, supports O(1) random access, and implements the RandomAccess, Serializable, and Cloneable interfaces; it is not thread-safe
  • LinkedList: backed by a doubly linked list, does not support efficient random access (so it does not implement RandomAccess), and implements the Serializable and Cloneable interfaces; it is not thread-safe

2.1.1 Reasons for using ArrayList instead of LinkedList

  • Insertion: ArrayList is fast at the tail (amortized O(1)); LinkedList is fast at the head and tail. Insertion in the middle is slow for both, because ArrayList must shift the trailing elements and LinkedList must first traverse to the position.
  • Deletion: the same pattern. ArrayList deletes quickly at the tail and LinkedList at either end, but deleting in the middle is slow for both, for the same reasons.
  • Modification: ArrayList writes any index in O(1); LinkedList needs an O(n) traversal first.
  • Query: ArrayList reads any index in O(1); LinkedList again needs an O(n) traversal.
    So in practice ArrayList is almost always the better default.
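
The access-cost difference is easy to demonstrate (an illustrative sketch; names are mine). Both lists return the same values, but `get(i)` is O(1) on ArrayList and O(n) on LinkedList:

```java
import java.util.*;

public class ListAccessDemo {
    // Index-based traversal: cheap on ArrayList (array indexing),
    // quadratic overall on LinkedList (each get(i) walks the chain).
    static long sumByIndex(List<Integer> list) {
        long sum = 0;
        for (int i = 0; i < list.size(); i++) {
            sum += list.get(i);
        }
        return sum;
    }

    public static void main(String[] args) {
        List<Integer> array = new ArrayList<>();
        List<Integer> linked = new LinkedList<>();
        for (int i = 0; i < 10_000; i++) {
            array.add(i);
            linked.add(i);
        }
        // Same result either way; only the cost differs.
        System.out.println(sumByIndex(array) == sumByIndex(linked)); // true
    }
}
```

If you do need to walk a LinkedList, iterate with an Iterator or an enhanced for loop instead of `get(i)` to avoid the repeated traversal.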

2.2 What is the difference between HashMap and HashSet? (hash table red-black tree)

  • HashMap: implemented internally as an array plus linked lists plus red-black trees (since JDK 1.8); keys are unique, one null key is allowed, and it is not thread-safe
  • HashSet: implemented internally on top of a HashMap (each element is stored as a key with a shared dummy value); duplicate elements are not allowed, one null element is allowed, and it is not thread-safe
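
The delegation shows up in behavior: HashSet.add returns false exactly when the backing map already contains that key (a small check; the class name is mine):

```java
import java.util.*;

public class HashSetDemo {
    public static void main(String[] args) {
        Set<String> set = new HashSet<>();
        System.out.println(set.add("k"));  // true: newly added
        System.out.println(set.add("k"));  // false: duplicate rejected
        System.out.println(set.add(null)); // true: one null element allowed
        System.out.println(set.size());    // 2
    }
}
```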

2.3 What is the difference between HashMap and Hashtable?

  • HashMap: array + linked lists + red-black trees; one null key and null values are allowed; not thread-safe, and therefore faster
  • Hashtable: array + linked lists only (no tree conversion); neither null keys nor null values are allowed; thread-safe because every method is synchronized, which makes it slow; it is a legacy class, so prefer ConcurrentHashMap when you need thread safety
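
A quick check of the null rules (illustrative; the class name is mine):

```java
import java.util.*;

public class NullKeyDemo {
    public static void main(String[] args) {
        Map<String, String> hashMap = new HashMap<>();
        hashMap.put(null, "v");                // HashMap stores the null key in bucket 0
        System.out.println(hashMap.get(null)); // v

        Map<String, String> table = new Hashtable<>();
        try {
            table.put(null, "v");              // Hashtable rejects null keys
        } catch (NullPointerException e) {
            System.out.println("Hashtable: null key rejected");
        }
    }
}
```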

2.4 What is the difference between HashMap and ConcurrentHashMap?

  • HashMap: array + linked lists + red-black trees; one null key is allowed; not thread-safe
  • ConcurrentHashMap: array + linked lists + red-black trees (JDK 1.8); neither null keys nor null values are allowed; thread-safe with fine-grained locking (segment locks in JDK 1.7, CAS plus per-bucket synchronized in JDK 1.8), so it performs well under concurrency
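
Under concurrent writes, ConcurrentHashMap keeps per-key updates atomic; a plain HashMap would risk lost updates here (an illustrative sketch; names are mine):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentCounter {
    public static void main(String[] args) throws InterruptedException {
        Map<String, Integer> counts = new ConcurrentHashMap<>();
        Runnable task = () -> {
            for (int i = 0; i < 1_000; i++) {
                counts.merge("hits", 1, Integer::sum); // atomic per-key update
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counts.get("hits")); // 2000, no updates lost
    }
}
```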

2.5 What is the difference between HashMap and TreeMap?

  • HashMap: array + linked lists + red-black trees; keys are unordered; one null key is allowed; not thread-safe; O(1) average lookup
  • TreeMap: implemented as a red-black tree; keys are unique and kept sorted by their natural ordering or a supplied Comparator; null keys are rejected (under natural ordering) while null values are allowed; not thread-safe; implements the SortedMap/NavigableMap interfaces, with O(log n) operations
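
TreeMap's sorted iteration and range views in action (illustrative):

```java
import java.util.*;

public class TreeMapDemo {
    public static void main(String[] args) {
        TreeMap<String, Integer> map = new TreeMap<>();
        map.put("banana", 2);
        map.put("apple", 1);
        map.put("cherry", 3);
        // Keys come back in sorted order regardless of insertion order.
        System.out.println(map.firstKey());        // apple
        System.out.println(map.keySet());          // [apple, banana, cherry]
        // Range view: all entries with keys strictly before "cherry".
        System.out.println(map.headMap("cherry")); // {apple=1, banana=2}
    }
}
```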

3. HashMap

  • Before JDK 1.8: hash collisions are resolved purely by chaining (the "zipper" method): colliding entries form a linked list in the bucket.
  • Since JDK 1.8: when a bucket's list grows beyond 8 entries and the table length is at least 64, the list is converted into a red-black tree to improve lookup from O(n) to O(log n); if the table is still shorter than 64, it is resized instead of treeified. When a tree shrinks back to 6 or fewer nodes, it is converted back into a linked list.

3.1 Initial capacity and load factor of HashMap

  • Initial capacity: the size of the hash table when it is created; the default is 16 (since JDK 8 the table itself is allocated lazily, on the first put).
  • Load factor: a measure of how full the HashMap may get before resizing; the default is 0.75f, meaning the table is resized once the number of entries exceeds capacity * load factor (the threshold).
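
Knowing the threshold rule lets you pre-size a HashMap to avoid repeated resizes (a sketch; the sizing formula is a common rule of thumb, not JDK code):

```java
import java.util.HashMap;
import java.util.Map;

public class CapacityDemo {
    public static void main(String[] args) {
        // Default: capacity 16, load factor 0.75 -> resize once size exceeds 12.
        Map<String, Integer> defaults = new HashMap<>();

        // Pre-sizing for an expected 1000 entries avoids repeated resizing:
        // choose initialCapacity > expected / loadFactor.
        int expected = 1000;
        int initialCapacity = (int) (expected / 0.75f) + 1;
        Map<String, Integer> preSized = new HashMap<>(initialCapacity);
        System.out.println(initialCapacity); // 1334
    }
}
```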

3.2 How does HashMap expand?

When the number of entries exceeds capacity * load factor, a new table of twice the old capacity is allocated and every entry is rehashed into it (in JDK 1.8 each entry either stays at its old index or moves to old index + old capacity).

3.3 Why is the expansion doubled? Why is the length of HashMap a power of 2?

  • Doubling keeps the table length a power of two, which reduces hash collisions as the map grows and makes rehashing cheap: each entry either stays at its old index or moves up by exactly the old capacity.
  • Because the length is a power of two, the modulo operation hash % length can be replaced by the bit operation hash & (length - 1), which is faster.
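
The equivalence is easy to verify (an illustrative class, not JDK source):

```java
public class IndexDemo {
    // When length is a power of two, (hash & (length - 1)) equals
    // (hash % length) for non-negative hashes, and unlike %, the mask
    // also keeps negative hashes inside [0, length).
    static int indexFor(int hash, int length) {
        return hash & (length - 1);
    }

    public static void main(String[] args) {
        int length = 16; // power of two: binary 10000, mask 01111
        System.out.println(indexFor(37, length));  // 5 (same as 37 % 16)
        System.out.println(indexFor(-37, length)); // 11 (37 % 16 would be -5)
    }
}
```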

3.4 How does HashMap resolve hash conflicts?

  • Chaining (the "zipper" method): each array slot holds a list of entries; when a collision occurs, the new entry is appended to that bucket's list. This is what HashMap uses (with red-black trees for long chains since JDK 1.8).
  • Open addressing: when a collision occurs, probe for the next empty slot in the table according to some rule (linear probing, for example); used by ThreadLocal's ThreadLocalMap, but not by HashMap.
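
The chaining idea can be sketched in a few lines (a hypothetical minimal class, not the JDK implementation; no resizing or treeification):

```java
import java.util.LinkedList;

public class ChainedMap<K, V> {
    private static class Entry<K, V> {
        final K key;
        V value;
        Entry(K key, V value) { this.key = key; this.value = value; }
    }

    // Each bucket holds a linked list of colliding entries.
    @SuppressWarnings("unchecked")
    private final LinkedList<Entry<K, V>>[] buckets = new LinkedList[16];

    private int indexOf(K key) {
        // Mask off the sign bit, then reduce modulo the table size.
        return (key.hashCode() & 0x7fffffff) % buckets.length;
    }

    public V put(K key, V value) {
        int index = indexOf(key);
        if (buckets[index] == null) buckets[index] = new LinkedList<>();
        for (Entry<K, V> e : buckets[index]) {
            if (e.key.equals(key)) {          // key already present: replace
                V old = e.value;
                e.value = value;
                return old;
            }
        }
        buckets[index].add(new Entry<>(key, value)); // collision: append to chain
        return null;
    }

    public V get(K key) {
        int index = indexOf(key);
        if (buckets[index] == null) return null;
        for (Entry<K, V> e : buckets[index]) {
            if (e.key.equals(key)) return e.value;
        }
        return null;
    }
}
```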

3.5 How to determine the position of an element in an array?

The bucket index is computed from the key: take the key's hashCode(), perturb it, and reduce it modulo the table length. Because the length is a power of two, the modulo reduces to a bitwise AND, as in the simplified excerpt below:

public class AbstractHashMap<K, V> extends AbstractMap<K, V> implements Map<K, V> {

    @Override
    public V put(final K key, final V value) {
        final Object convertedKey = convertKey(key);
        final int hashCode = hash(convertedKey);
        final int index = hashIndex(hashCode, data.length);
        HashEntry<K, V> entry = data[index];
        // ... walk the chain at data[index]; if the key is already present,
        // update that entry and return its old value ...
        addMapping(index, hashCode, key, value);
        return null;
    }

    // dataSize is always a power of two, so this AND is equivalent to
    // hashCode % dataSize but avoids the slower division.
    protected int hashIndex(final int hashCode, final int dataSize) {
        return hashCode & (dataSize - 1);
    }
}

3.6 How do HashMap and HashSet check for duplicates? How is equality judged?

The hashCode of the element locates a candidate bucket; equality is then confirmed with equals(). An entry counts as a duplicate only when both the hash codes match and equals() returns true.

public class AbstractHashMap<K, V> extends AbstractMap<K, V> implements Map<K, V> {

    @Override
    public V put(final K key, final V value) {
        //...
        // Walk the bucket's chain: an entry is a duplicate only if the stored
        // hash code matches AND the keys compare equal.
        while (entry != null) {
            if (entry.hashCode == hashCode && isEqualKey(convertedKey, entry.key)) {
                final V oldValue = entry.getValue();
                updateEntry(entry, value);
                return oldValue;
            }
            entry = entry.next;
        }
        //...
    }

    // Fast path on reference equality first, then equals().
    protected boolean isEqualKey(final Object key1, final Object key2) {
        return key1 == key2 || key1.equals(key2);
    }
}

3.7 The hash algorithm disturbance function of HashMap, why is JDK1.8 better than the hash algorithm of JDK1.7?

The hash (perturbation) method in JDK 1.8 is simpler than in JDK 1.7, but the principle is the same: mix the high bits of hashCode() into the low bits so they take part in the bucket index.

static final int hash(Object key) {
    int h;
    // key.hashCode(): the key's hash code
    // ^   : bitwise XOR
    // >>> : unsigned right shift (vacated bits are filled with 0)
    // XOR-ing the high 16 bits into the low 16 bits mixes the high bits
    // into the bucket index even when the table is small.
    return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}

For comparison, the source of the hash method in JDK 1.7's HashMap:

static int hash(int h) {
    // This function ensures that hashCodes that differ only by
    // constant multiples at each bit position have a bounded
    // number of collisions (approximately 8 at default load factor).
    h ^= (h >>> 20) ^ (h >>> 12);
    return h ^ (h >>> 7) ^ (h >>> 4);
}

Compared with JDK 1.8, the JDK 1.7 hash method is slightly slower, because it perturbs the hash four times (four shift-and-XOR rounds) instead of once.

3.8 HashMap's multi-threaded infinite loop problem

  • JDK 1.7: when multiple threads resize at the same time, the head-insertion transfer reverses each chain, and a thread holding stale next pointers can link the list into a cycle; a later get() on that bucket then loops forever.
  • JDK 1.8: resize uses tail insertion and preserves chain order, so the cycle no longer forms. HashMap is still not thread-safe, however (concurrent puts can silently lose entries), so use ConcurrentHashMap under concurrency.

4. ArrayList

4.1 ArrayList expansion mechanism

  • The default capacity of ArrayList is 10 (since JDK 8 the backing array is allocated lazily, on the first add). When an add would exceed the current capacity, a new array of about 1.5 times the old capacity (newCapacity = oldCapacity + (oldCapacity >> 1)) is allocated and the elements are copied into it.
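
The 1.5x growth rule can be sketched as follows (illustrative; it mirrors the oldCapacity + (oldCapacity >> 1) expression, simplified, and is not the JDK source):

```java
import java.util.Arrays;

public class GrowDemo {
    // Growth rule: newCapacity = oldCapacity + (oldCapacity >> 1),
    // i.e. roughly 1.5x (10 -> 15 -> 22 -> 33 ...).
    static int[] grow(int[] elements) {
        int oldCapacity = elements.length;
        int newCapacity = oldCapacity + (oldCapacity >> 1);
        return Arrays.copyOf(elements, newCapacity); // copy into the larger array
    }

    public static void main(String[] args) {
        int[] data = new int[10];
        System.out.println(grow(data).length);       // 15
        System.out.println(grow(grow(data)).length); // 22
    }
}
```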






My GitHub address: everyone is welcome to join my open source projects, or contact me via my homepage to have me join yours. Click Github-Stars.

  • spring-boot-starter-trie (pom, 1.0.0-SNAPSHOT): for specific lookup requirements its query speed far exceeds common open-source search tools; neither InnoDB's B+ tree nor Elasticsearch's inverted index compares.
  • spring-boot-starter-trie (jar, 1.0.0-M1): provides SpringCloud-based service nodes that register with Nacos for service discovery, supporting dynamic scaling of the tree and dynamic service online/offline.
  • Data-Provider (pom, 1.0.0-SNAPSHOT): offers queries across multiple data sources and data-type synchronization; as a jar dependency it can dynamically supply data to other services.

Origin blog.csdn.net/jj89929665/article/details/130908738