Why the time complexity of hash table operations is constant, O(1)

When adding, deleting, or looking up elements, a hash table performs extremely well: ignoring hash collisions, an element can be located in a single step, so the time complexity is O(1). How does a hash table achieve this remarkable constant-time O(1) behavior?

We know that there are only two physical storage structures for data: sequential storage and linked storage (logical structures such as stacks, queues, trees, and graphs are abstractions that must ultimately be mapped into memory in one of these two physical forms). With an array, an element can be reached in a single step given its index, and a hash table exploits exactly this property: the backbone of a hash table is an array.

For example, when we want to add or find an element, we map the key of that element to a position in the array through a function, and a single access by array subscript completes the operation.

memory location = f(key)

Here, the function f is generally called the hash function. How well it is designed directly determines the quality of the hash table.
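As a concrete illustration, here is one possible hash function, a minimal sketch only: summing character codes and taking the remainder modulo the table size is an assumption made for this example, not a recommended production design.

```python
# A minimal illustrative hash function (assumed for this sketch):
# sum the character codes of the key, then take the remainder
# modulo the array size to get an array subscript.
def f(key: str, table_size: int) -> int:
    return sum(ord(ch) for ch in key) % table_size

# The same key always maps to the same slot, in a single computation.
print(f("apple", 16))  # -> 2
```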

The same applies to lookup: use the hash function to compute the storage location, then read the value from the corresponding position in the array.
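Putting the two operations together, a minimal sketch of an array-backed hash table might look like the following. Collisions are ignored, as in the discussion above, and the class and method names (MiniHashTable, put, get) are hypothetical, chosen just for the example.

```python
# A minimal sketch of an array-backed hash table, ignoring collisions.
class MiniHashTable:
    def __init__(self, size: int = 16):
        self.slots = [None] * size          # the array "backbone"

    def _index(self, key: str) -> int:
        # f(key): map the key to an array subscript in one computation
        return sum(ord(ch) for ch in key) % len(self.slots)

    def put(self, key: str, value) -> None:
        # one hash computation + one array write -> O(1)
        self.slots[self._index(key)] = (key, value)

    def get(self, key: str):
        # one hash computation + one array read -> O(1)
        entry = self.slots[self._index(key)]
        return entry[1] if entry and entry[0] == key else None

table = MiniHashTable()
table.put("apple", 5)
print(table.get("apple"))   # 5
```

Both put and get do a fixed amount of work regardless of how many elements are stored, which is exactly where the constant time comes from.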

Therefore, the time complexity of hash table operations is constant, O(1).


Origin: www.cnblogs.com/gaopengpy/p/12058055.html