A summary of the HashMap data-structure knowledge asked about in interviews

  If, while reading this article, you do not yet have a clear understanding of HashMap's structure, I suggest first reading the earlier post, the "HashMap series analysis (based on JDK 1.8)"; it should help with some of the points mentioned below.

1. What is the data structure of HashMap?

  A: It is implemented as a hash table (bucket array + linked list), combining the advantages of arrays and linked lists. When the length of a chain exceeds 8, the linked list is converted into a red-black tree.
  transient Node<K, V>[] table;

2. How does HashMap work?

  The underlying structure of HashMap is an array of singly linked lists (a hash table); each chain in the array is made up of instances of the inner class Node (which implements the Map.Entry<K, V> interface). HashMap stores and retrieves entries through its put and get methods.

When storing an object, the key-value pair K/V is passed to the put() method:

  ①. Call the hash(K) method to compute K's hash value, then combine it with the array length to compute the array index;

  ②. Adjust the array size if needed (when the number of elements in the container exceeds capacity * loadFactor, resize() expands the container to 2n);
  ③. Handle the insertion:
    i. If K's hash value is not yet present in the HashMap, insert the entry; if it is present, a collision occurs;
    ii. If K's hash value is present in the HashMap and equals() on the two keys returns true, update that key-value pair;
    iii. If K's hash value is present in the HashMap and equals() on the two keys returns false, insert at the end of the linked list (tail insertion) or into the red-black tree (tree insertion).

(JDK 1.7 used head insertion; JDK 1.8 uses tail insertion.)
(Note: when a collision makes a chain longer than TREEIFY_THRESHOLD = 8, the linked list is converted into a red-black tree.)

  When retrieving an object, K is passed to the get() method: ①. call hash(K) to compute K's hash value and thereby locate the index in the bucket array; ②. traverse that linked list in order, comparing keys with the equals() method, to find the Node whose key equals K and return its value V.

  hashCode determines the storage location; equals determines whether two keys are equal.
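A minimal sketch of this locate-then-compare logic (not the JDK source; the LookupSketch class and its Node type are illustrative names, assuming a power-of-two table length):

    import java.util.Objects;

    // Minimal sketch of HashMap's locate-then-compare lookup (not the actual JDK code).
    class LookupSketch<K, V> {
        static class Node<K, V> {
            final int hash; final K key; V value; Node<K, V> next;
            Node(int hash, K key, V value, Node<K, V> next) {
                this.hash = hash; this.key = key; this.value = value; this.next = next;
            }
        }

        @SuppressWarnings("unchecked")
        Node<K, V>[] table = (Node<K, V>[]) new Node[16]; // length is always a power of two

        V get(Object key) {
            int h = Objects.hashCode(key);
            int hash = h ^ (h >>> 16);                 // spread the high bits (see question 4)
            int index = (table.length - 1) & hash;     // same as hash % length for power-of-two lengths
            for (Node<K, V> node = table[index]; node != null; node = node.next) {
                // hashCode/hash locates the bucket; equals decides whether the key matches
                if (node.hash == hash && (node.key == key || (key != null && key.equals(node.key)))) {
                    return node.value;
                }
            }
            return null;
        }
    }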

3. What happens when two objects have the same hashCode?

  Because objects with the same hashCode are not necessarily equal (as determined by the equals method), the two objects land at the same array index, and that is when a "collision" occurs. Since HashMap uses linked lists to store colliding entries, both Nodes end up stored in the same list.
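A small, purely illustrative demo of such a collision (the CollidingKey class is hypothetical; it forces every instance into the same bucket):

    import java.util.HashMap;
    import java.util.Map;

    // Every CollidingKey returns the same hashCode, but equals() still distinguishes instances.
    class CollidingKey {
        private final String name;
        CollidingKey(String name) { this.name = name; }

        @Override public int hashCode() { return 42; }          // forces every key into the same bucket

        @Override public boolean equals(Object o) {
            return o instanceof CollidingKey && ((CollidingKey) o).name.equals(this.name);
        }

        public static void main(String[] args) {
            Map<CollidingKey, String> map = new HashMap<>();
            map.put(new CollidingKey("a"), "first");
            map.put(new CollidingKey("b"), "second");
            // Same hashCode but equals() returns false, so both entries are kept
            // (chained in the same bucket) and the size is 2.
            System.out.println(map.size());                      // prints 2
            System.out.println(map.get(new CollidingKey("a")));  // prints "first"
        }
    }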

4. Do you know how hash() is implemented? Why is it implemented this way?

  In JDK 1.8, it is implemented by XOR-ing the high 16 bits of the hashCode with the low 16 bits: (h = k.hashCode()) ^ (h >>> 16). The main considerations are speed, utility, and hash quality: it keeps the overhead low while making the high bits take part in the index calculation, which would otherwise never contribute and would lead to more collisions.

5. Why use the XOR operator?
  It guarantees that as long as any of the 32 bits of the object's hashCode changes, the value returned by hash() changes as well, which reduces collisions as much as possible.
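For reference, the JDK 1.8 perturbation function is essentially the following one-liner; the HashDemo wrapper class is added here only to make it runnable:

    // Essentially the JDK 1.8 perturbation: XOR the high 16 bits into the low 16 bits
    // so that the high bits also influence the bucket index (which only uses the low bits).
    final class HashDemo {
        static int hash(Object key) {
            int h;
            return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
        }

        public static void main(String[] args) {
            String key = "interview";
            System.out.printf("hashCode = %08x, hash = %08x%n", key.hashCode(), hash(key));
        }
    }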

6. How is the table capacity of HashMap determined? What is loadFactor? How does this capacity change? What problems can this change cause?

  ①. The size of the table array is determined by the capacity parameter, which defaults to 16 and can also be passed in via the constructor, with an upper limit of 1 << 30;
  ②. loadFactor is the load factor; its main purpose is to decide whether the table array needs to be dynamically expanded. The default is 0.75; for example, with a table size of 16 and a load factor of 0.75, the threshold is 12, and once the actual number of entries exceeds 12 the table must be expanded;
  ③. On expansion, the resize() method is called and the table length becomes twice the original (note: the table length, not the threshold);
  ④. With large amounts of data, expansion brings a performance cost; in performance-critical scenarios that cost can be deadly.
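A small illustration of how capacity, loadFactor, and threshold interact (the CapacityDemo class and its variable names are made up; the numbers follow the defaults described above):

    import java.util.HashMap;
    import java.util.Map;

    public class CapacityDemo {
        public static void main(String[] args) {
            // Default constructor: capacity 16, loadFactor 0.75 -> threshold = 16 * 0.75 = 12.
            Map<String, String> map = new HashMap<>();

            // The 13th put pushes the size past the threshold of 12, so resize()
            // doubles the table length from 16 to 32.
            for (int i = 1; i <= 13; i++) {
                map.put("key" + i, "value" + i);
            }
            System.out.println(map.size()); // 13

            // A requested capacity is rounded up to the next power of two
            // (e.g. new HashMap<>(20) actually allocates a table of 32 slots),
            // and the hard upper limit is 1 << 30.
            Map<String, String> preSized = new HashMap<>(32, 0.75f);
            preSized.put("k", "v");
        }
    }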

7. What is the flow of HashMap's put method?

  • Call the hash function to obtain the hash value of the key, then compute its array index (a simplified sketch of the whole flow follows this list);
  • If there is no hash collision, place the entry directly into the array; if a collision occurs, append it to the linked list at that slot;
  • If the chain length exceeds the threshold (TREEIFY_THRESHOLD == 8), convert the linked list into a red-black tree; if the chain length later drops below 6, convert the red-black tree back into a linked list;
  • If the node's key already exists, replace its value;
  • When the number of key-value pairs exceeds the threshold (12 by default), call resize() to expand the array.
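The bullet points above condensed into a simplified, self-contained sketch (PutSketch is an illustrative class, not the JDK source; treeification and resizing are reduced to comments):

    import java.util.Objects;

    class PutSketch<K, V> {
        static class Node<K, V> {
            final int hash; final K key; V value; Node<K, V> next;
            Node(int hash, K key, V value, Node<K, V> next) {
                this.hash = hash; this.key = key; this.value = value; this.next = next;
            }
        }

        @SuppressWarnings("unchecked")
        Node<K, V>[] table = (Node<K, V>[]) new Node[16];
        int size;
        int threshold = 12; // capacity 16 * loadFactor 0.75

        V put(K key, V value) {
            int h = Objects.hashCode(key);
            int hash = h ^ (h >>> 16);                    // spread the hashCode (question 4)
            int index = (table.length - 1) & hash;        // bucket index
            for (Node<K, V> node = table[index]; node != null; node = node.next) {
                if (node.hash == hash && Objects.equals(node.key, key)) {
                    V old = node.value;
                    node.value = value;                   // key already present: replace the value
                    return old;
                }
                if (node.next == null) {
                    node.next = new Node<>(hash, key, value, null); // collision: tail insertion (JDK 1.8 style)
                    // if the chain length now exceeds TREEIFY_THRESHOLD (8): treeify this bucket
                    if (++size > threshold) { /* resize(): double the table length */ }
                    return null;
                }
            }
            table[index] = new Node<>(hash, key, value, null);      // empty bucket: insert directly
            if (++size > threshold) { /* resize(): double the table length */ }
            return null;
        }
    }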

8. What is the process of expanding the array?

  Create a new array with twice the capacity of the old array, and recompute the storage position of each node from the old array. A node can land in only one of two positions in the new array: its original index, or its original index plus the size of the old array.
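A worked example of that "original index or original index + old capacity" rule, using the JDK 1.8 trick of testing the single extra hash bit (ResizeIndexDemo is an illustrative name):

    // Because the new capacity is oldCap * 2, only one extra hash bit matters:
    // if that bit is 0 the node stays at its old index; if it is 1 it moves to oldIndex + oldCap.
    public class ResizeIndexDemo {
        public static void main(String[] args) {
            int oldCap = 16;
            int hash = 0b1_0101;                         // example hash value (21)
            int oldIndex = hash & (oldCap - 1);          // 5
            int newIndex = hash & (oldCap * 2 - 1);      // 21 = 5 + 16
            boolean staysPut = (hash & oldCap) == 0;     // the extra bit is 1 here, so it moves
            System.out.printf("old=%d, new=%d, moved by oldCap=%b%n",
                    oldIndex, newIndex, !staysPut);      // old=5, new=21, moved by oldCap=true
        }
    }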

9. The chaining (zipper) method can lead to an overly deep linked list; why choose a red-black tree rather than a binary search tree? And why not always use a red-black tree?

  The red-black tree was chosen to fix a defect of the plain binary search tree: in degenerate cases a binary search tree becomes a linear structure (the same deep-chain problem as the original linked list), and traversal and lookup become very slow. A red-black tree may need left rotations, right rotations, and recoloring after inserting new data in order to stay balanced; it is introduced to make lookups fast and to solve the problem of deep chains. A red-black tree is a kind of balanced binary tree, and maintaining that "balance" has a cost, but the cost is lower than traversing a linear linked list. So the red-black tree is used only when the chain length is greater than 8; if the chain is very short, there is no need to introduce a red-black tree, and doing so would actually be slower.

10. Tell me what you know about red-black trees.

  • 1. Every node is either red or black;
  • 2. The root is always black;
  • 3. If a node is red, its child nodes must be black (the reverse is not necessarily true);
  • 4. Every leaf node is a black empty node (NIL node);
  • 5. Every path from the root to a leaf or empty node contains the same number of black nodes (i.e. the same black height).

11. What changes did JDK 1.8 make to HashMap?

  • In Java 1.8, if the length of a chain exceeds 8, the linked list is converted into a red-black tree (but only when the number of buckets is greater than 64; below 64 the table is resized instead);
  • When a hash collision occurs, Java 1.7 inserts at the head of the list, while Java 1.8 inserts at the tail;
  • In Java 1.8, Entry was replaced by Node (essentially just a rename).

12. What are the differences between HashMap, LinkedHashMap, and TreeMap?

  HashMap: see the other questions;
  LinkedHashMap preserves the insertion order, so when traversing with an Iterator, the records inserted first come out first; traversal is slower than HashMap;
  TreeMap implements the SortedMap interface and keeps its records sorted by key (ascending key order by default; a custom Comparator can be specified).

13. What are the usage scenarios of HashMap, TreeMap, and LinkedHashMap?

  In general, HashMap is used most of the time.
  HashMap: for inserting, deleting, and locating elements in a Map;
  TreeMap: when the keys need to be traversed in natural or custom order;
  LinkedHashMap: when the output order must match the insertion order.
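A quick demo of the three iteration orders (the MapOrderDemo class name is made up):

    import java.util.HashMap;
    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.TreeMap;

    // Iteration-order differences between the three Map implementations.
    public class MapOrderDemo {
        public static void main(String[] args) {
            String[] keys = {"banana", "apple", "cherry"};

            Map<String, Integer> hashMap = new HashMap<>();
            Map<String, Integer> linkedMap = new LinkedHashMap<>();
            Map<String, Integer> treeMap = new TreeMap<>();
            for (String k : keys) {
                hashMap.put(k, k.length());
                linkedMap.put(k, k.length());
                treeMap.put(k, k.length());
            }

            System.out.println(hashMap.keySet());   // no guaranteed order
            System.out.println(linkedMap.keySet()); // [banana, apple, cherry] - insertion order
            System.out.println(treeMap.keySet());   // [apple, banana, cherry] - sorted by key
        }
    }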

14. What are the differences between HashMap and Hashtable?

  ①. HashMap is not thread-safe; Hashtable is thread-safe;
  ②. Because of that thread safety, Hashtable is less efficient than HashMap;
  ③. HashMap allows at most one record with a null key and any number of records with null values; Hashtable allows neither;
  ④. HashMap's default initial array size is 16 and it doubles on resize; Hashtable's is 11 and it grows to 2n + 1;
  ⑤. HashMap recomputes (spreads) the hash value, while Hashtable uses the object's hashCode directly.
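A short demo of point ③, the null-handling difference (the NullKeyDemo class name is illustrative):

    import java.util.HashMap;
    import java.util.Hashtable;
    import java.util.Map;

    // Null-handling difference between HashMap and Hashtable.
    public class NullKeyDemo {
        public static void main(String[] args) {
            Map<String, String> hashMap = new HashMap<>();
            hashMap.put(null, "ok");                // allowed: one null key
            hashMap.put("k", null);                 // allowed: null values
            System.out.println(hashMap.get(null));  // ok

            Map<String, String> hashtable = new Hashtable<>();
            try {
                hashtable.put(null, "boom");        // Hashtable rejects null keys (and null values)
            } catch (NullPointerException e) {
                System.out.println("Hashtable does not accept null keys");
            }
        }
    }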

15. What other thread-safe Java class is very similar to HashMap? It is also thread-safe; how does it differ from Hashtable in the way it synchronizes threads?

  ConcurrentHashMap (the thread-safe and efficient HashMap implementation provided by the java.util.concurrent package).
  Hashtable locks with the synchronized keyword (locking the whole object);
  ConcurrentHashMap uses segment locking in JDK 1.7, and CAS (a lock-free technique) plus synchronized directly in JDK 1.8.

16. What is the difference between HashMap and ConcurrentHashMap?

  Apart from the locking, there is no big difference in principle. Note also that HashMap allows null keys and values, while ConcurrentHashMap does not.
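A short demo of the null restriction (the ConcurrentNullDemo class name is illustrative):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // ConcurrentHashMap rejects null keys and null values, unlike HashMap.
    public class ConcurrentNullDemo {
        public static void main(String[] args) {
            Map<String, String> hashMap = new HashMap<>();
            hashMap.put("k", null);                    // fine in HashMap

            Map<String, String> chm = new ConcurrentHashMap<>();
            try {
                chm.put("k", null);                    // throws NullPointerException
            } catch (NullPointerException e) {
                System.out.println("ConcurrentHashMap does not accept null keys or values");
            }
        }
    }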

17. Why is ConcurrentHashMap more efficient than Hashtable?

  Hashtable uses a single lock (locking the entire table structure) to handle concurrency, so multiple threads compete for one lock and are easily blocked;

  ConcurrentHashMap:

  • JDK 1.7 uses segment locking (ReentrantLock + Segment + HashEntry), which is roughly equivalent to splitting one HashMap into multiple segments and giving each segment its own lock, so multiple threads can access it concurrently. Lock granularity: one Segment, which contains multiple HashEntry objects.
  • JDK 1.8 uses CAS + synchronized + Node + red-black trees. Lock granularity: a single Node (the head node of a bucket, which implements Map.Entry<K, V>). The lock granularity is much finer.

18. Give a detailed analysis of ConcurrentHashMap's locking mechanism (JDK 1.7 vs JDK 1.8).

  In JDK 1.7, a segmented locking mechanism is used to support concurrent updates. The underlying storage structure is array + linked list, built around two core static inner classes, Segment and HashEntry.
    ①. Segment extends ReentrantLock (a reentrant lock) and acts as the lock; each Segment object guards a number of buckets of the hash table;
    ②. HashEntry wraps the key-value pairs of the table;
    ③. Each bucket is a linked list made up of a number of HashEntry objects.

  In JDK 1.8, Node + CAS + synchronized are used to guarantee concurrency safety. The Segment class is dropped and the Node array table is used directly to store the data; when the length of a bucket's list exceeds TREEIFY_THRESHOLD, the list is converted into a red-black tree to improve performance. The underlying structure becomes array + linked list + red-black tree.

 

19. In JDK 1.8, why does ConcurrentHashMap use the built-in lock synchronized instead of the reentrant lock ReentrantLock?

  ①. The lock granularity is reduced;
  ②. The JVM development team has not given up on synchronized; a JVM-based synchronized has more room for optimization and is the more natural choice;
  ③. Under heavy data operations, the API-based ReentrantLock consumes more memory and puts more pressure on the JVM.

20. Give a brief overview of ConcurrentHashMap.

①. Important constant:
  private transient volatile int sizeCtl;
  When it is negative: -1 means the table is being initialized, and -N means N - 1 threads are currently carrying out the resize;
  when it is 0, the table has not been initialized;
  when it is positive, it is either the initial capacity to use or the size at which the next resize should happen.

②. Data structures:
  Node is the basic unit of the storage structure; it plays the same role as HashMap's Entry and is used to store data;
  TreeNode extends Node but is organized as a binary tree; it is the storage unit of the red-black tree and holds the red-black tree's data;
  TreeBin is the container that wraps TreeNodes; it provides the conditions for converting to and from a red-black tree as well as the lock control.

③. Storing an object (the put() method):
  1. If the table has not been initialized, call initTable() to initialize it;
  2. If there is no hash conflict, insert directly with a lock-free CAS;
  3. If a resize is in progress, help with the resize first;
  4. If there is a hash conflict, lock the bucket to guarantee thread safety; there are two cases: if it is a linked list, traverse it and insert at the tail; if it is a red-black tree, insert according to the red-black tree structure;
  5. If the length of the list is greater than the threshold of 8, first convert the structure into a red-black tree, then break and loop again;
  6. If the addition succeeds, call addCount() to update the size and check whether a resize is needed.

④. The resize method transfer(): the default capacity is 16; on each resize the capacity doubles.
  helpTransfer(): lets multiple threads cooperate on the resize, which makes it more efficient.

⑤. Retrieving an object (the get() method):
  1. Compute the hash value and locate the index in the table; if the first node matches, return it;
  2. If a resize is in progress, call the find() method of ForwardingNode, the node that marks a bucket as moved, to locate the entry, and return it on a match;
  3. If neither of the above applies, traverse down the chain and return the node on a match; otherwise return null.

21. What is the concurrency level of ConcurrentHashMap?

  The concurrency level is the maximum number of threads that can update the ConcurrentHashMap at the same time without causing lock contention. The default is 16, and it can be set in the constructor. When the user specifies a concurrency level, ConcurrentHashMap uses the smallest power of two that is greater than or equal to it as the actual concurrency level (if the user sets the concurrency level to 17, the actual concurrency level is 32).
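For illustration, the concurrency level can be passed through the three-argument constructor; note that in JDK 1.7 it determined the number of Segments (rounded up to a power of two, e.g. 17 -> 32), while in JDK 1.8 it survives mainly as a sizing hint (the ConcurrencyLevelDemo class name is made up):

    import java.util.concurrent.ConcurrentHashMap;

    public class ConcurrencyLevelDemo {
        public static void main(String[] args) {
            // initialCapacity = 16, loadFactor = 0.75, concurrencyLevel = 17
            ConcurrentHashMap<String, String> map =
                    new ConcurrentHashMap<>(16, 0.75f, 17);
            map.put("k", "v");
            System.out.println(map.get("k")); // v
        }
    }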


When there is time, a separate analysis of Hashtable and ConcurrentHashMap will follow.

Reference blogs:
  https://www.cnblogs.com/heqiyoujing/p/11143298.html
  https://www.jianshu.com/p/75adf47958a7
