Analysis of the Data Structures Used by Redis

I. Common data structures
1. String:
a simple key-value type; the value may be a string or a number. Use cases: caching, drop-down box values, distributed session storage, interface rate limiting (via the expire mechanism and the incr key-increment command), user points, and so on.
2. List:
a list of strings with FIFO queue semantics; data can be inserted or deleted at either the head or the tail. Common scenarios: message queues (rpush, lpop) and latest-item lists; real-time leaderboards are usually built on sorted sets instead.
3. Hash: a mapping table of string fields and values. Scenarios: storing user login state, or a shopping cart (objects whose properties change frequently).
4. Set:
an unordered collection of strings with no duplicate elements. Scenarios: system whitelist and blacklist settings.
5. Sorted Set (ZSet):
an ordered collection of strings with no duplicate elements. Commonly used for real-time leaderboards and similar designs.
II. Simple Dynamic String (SDS)

struct sdshdr {

    // number of bytes used in the buf array,
    // equal to the length of the string stored in the SDS
    int len;

    // number of unused bytes in the buf array
    int free;

    // byte array used to hold the string
    char buf[];

};
Its memory layout is shown below:
![SDS memory layout](https://img-blog.csdnimg.cn/20190724134444162.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L05hbkd1b0h1YW5nRG91,size_16,color_FFFFFF,t_70)
free: 0 means the SDS has been allocated no unused space.
len: 5 means the currently stored string is 5 characters long.
buf: a char array holding the characters.
![SDS with unused space](https://img-blog.csdnimg.cn/20190724135159624.png)
Here, by contrast, free equals 5, meaning the character array contains five unallocated units.
With this unused space, Redis implements two strategies: space pre-allocation and lazy release.
Space pre-allocation: when the string grows, more memory is allocated than strictly necessary, which reduces the number of reallocations needed when append operations are performed repeatedly.
Lazy release: when the string shrinks, the bytes freed by the shortening are not immediately reclaimed through reallocation; instead the free attribute records their number for later use. (SDS also provides an API so the unused space can be released manually when needed.)
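As a hedged illustration of the pre-allocation and lazy-release ideas (not the actual Redis source: the sdsnew/sdscat names follow the real API, but the doubling policy here is a simplification — real Redis caps pre-allocation at 1 MB):

```c
#include <stdlib.h>
#include <string.h>

// Simplified SDS header, matching the struct described above.
struct sdshdr {
    int len;     // bytes of buf in use
    int free;    // unused bytes left in buf
    char buf[];  // flexible array member holding the string
};

// Create an SDS from a C string with no pre-allocated space (free = 0).
struct sdshdr *sdsnew(const char *init) {
    size_t len = strlen(init);
    struct sdshdr *sh = malloc(sizeof(*sh) + len + 1);
    sh->len = (int)len;
    sh->free = 0;
    memcpy(sh->buf, init, len + 1);
    return sh;
}

// Append t to the SDS. If buf is too small, pre-allocate extra slack
// (here: capacity becomes twice the new length) so that repeated
// appends rarely need another reallocation.
struct sdshdr *sdscat(struct sdshdr *sh, const char *t) {
    size_t addlen = strlen(t);
    if ((size_t)sh->free < addlen) {
        size_t newlen = (size_t)sh->len + addlen;
        // space pre-allocation: reserve 2 * newlen bytes in total
        sh = realloc(sh, sizeof(*sh) + newlen * 2 + 1);
        sh->free = (int)(newlen * 2 - (size_t)sh->len);
    }
    memcpy(sh->buf + sh->len, t, addlen + 1);
    sh->len += (int)addlen;
    sh->free -= (int)addlen;  // leftover slack stays recorded in free
    return sh;
}
```

A shrink operation under this scheme would simply decrease len and increase free without calling realloc at all, which is exactly the lazy-release strategy.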

III. List
List node definition:

typedef struct listNode {

    // previous node
    struct listNode *prev;

    // next node
    struct listNode *next;

    // node value
    void *value;

} listNode;

The nodes form a doubly linked list:
![doubly linked list](https://img-blog.csdnimg.cn/20190724143745229.png)
Redis uses a list structure to operate on these lists:

typedef struct list {

    // head node
    listNode *head;

    // tail node
    listNode *tail;

    // number of nodes in the list
    unsigned long len;

    // node value copy function
    void *(*dup)(void *ptr);

    // node value release function
    void (*free)(void *ptr);

    // node value comparison function
    int (*match)(void *ptr, void *key);

} list;

The list structure provides the head pointer head, the tail pointer tail, and the length counter len for the list; the dup, free and match members are the type-specific functions needed to implement a polymorphic list:
● dup copies the value stored in a list node;
● free releases the value stored in a list node;
● match compares an input value with the value stored in a list node for equality.
![list structure with nodes](https://img-blog.csdnimg.cn/20190724144054122.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L05hbkd1b0h1YW5nRG91,size_16,color_FFFFFF,t_70)
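A minimal sketch of how these structs support O(1) tail insertion (modeled loosely on Redis's adlist; the dup/free/match members are omitted here, and listCreate/listAddNodeTail mirror the real API names but this is not the actual source):

```c
#include <stdlib.h>

typedef struct listNode {
    struct listNode *prev;  // previous node
    struct listNode *next;  // next node
    void *value;            // node value
} listNode;

typedef struct list {
    listNode *head;     // head of the list
    listNode *tail;     // tail of the list
    unsigned long len;  // number of nodes (dup/free/match omitted)
} list;

// Allocate an empty list.
list *listCreate(void) {
    list *l = malloc(sizeof(*l));
    l->head = l->tail = NULL;
    l->len = 0;
    return l;
}

// Append a value at the tail in O(1), thanks to the tail pointer.
list *listAddNodeTail(list *l, void *value) {
    listNode *node = malloc(sizeof(*node));
    node->value = value;
    node->next = NULL;
    node->prev = l->tail;
    if (l->tail)
        l->tail->next = node;  // link after the old tail
    else
        l->head = node;        // list was empty
    l->tail = node;
    l->len++;
    return l;
}
```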
IV. Dictionary
The Redis dictionary uses a hash table as its underlying implementation. A hash table can contain multiple hash table nodes, and each node holds one of the dictionary's key-value pairs. This is similar to HashMap in Java.

typedef struct dictht {

    // hash table array
    dictEntry **table;

    // hash table size
    unsigned long size;

    // hash table size mask, used to calculate index values
    // always equal to size - 1
    unsigned long sizemask;

    // number of existing nodes in the hash table
    unsigned long used;

} dictht;

Node definition:

typedef struct dictEntry {

    // key
    void *key;

    // value
    union {
        void *val;
        uint64_t u64;
        int64_t s64;
    } v;

    // pointer to the next hash table node, forming a linked list
    struct dictEntry *next;

} dictEntry;

The relationship between the two structures is as follows:
![dictht structure](https://img-blog.csdnimg.cn/2019072414502986.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L05hbkd1b0h1YW5nRG91,size_16,color_FFFFFF,t_70)
![dictht with dictEntry nodes](https://img-blog.csdnimg.cn/20190724145131933.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L05hbkd1b0h1YW5nRG91,size_16,color_FFFFFF,t_70)

When a new key-value pair is added to the dictionary, the program first computes the hash value and then the index value from the key; based on the index value, the new node containing the key-value pair is placed at the specified index of the hash table array.

**Hash Algorithm**
> The hash value of the key is computed with the hash function set for the dictionary:
> hash = dict->type->hashFunction(key);
> The index value is then computed from the hash value and the hash table's sizemask attribute:
> // depending on the situation, ht[x] may be ht[0] or ht[1]
> index = hash & dict->ht[x].sizemask;


**Resolving Key Conflicts**

> When two or more keys are assigned to the same index of the hash table array, we say these keys collide (a collision).
> Redis hash tables use separate chaining to resolve key collisions: every hash table node has a next pointer, so multiple nodes can be joined into a singly linked list through it. Nodes assigned to the same index are connected by this linked list, which resolves the collision.

![separate chaining](https://img-blog.csdnimg.cn/20190724162244200.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L05hbkd1b0h1YW5nRG91,size_16,color_FFFFFF,t_70)
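Putting the index calculation and separate chaining together, here is a simplified sketch (not dict.c itself: a toy djb2-style hash stands in for Redis's real hash function, string keys replace the void * keys, and the second rehash table is ignored):

```c
#include <stdlib.h>
#include <string.h>

typedef struct dictEntry {
    char *key;               // string key (toy simplification)
    void *val;               // value
    struct dictEntry *next;  // chain of entries that collided on this index
} dictEntry;

typedef struct dictht {
    dictEntry **table;       // array of buckets
    unsigned long size;      // number of buckets, a power of two
    unsigned long sizemask;  // size - 1, used to compute the index
    unsigned long used;      // number of stored entries
} dictht;

// Toy djb2-style hash, a stand-in for Redis's real hash function.
static unsigned long hashKey(const char *key) {
    unsigned long h = 5381;
    while (*key) h = h * 33 + (unsigned char)*key++;
    return h;
}

// Create a table; size must be a power of two so sizemask works.
dictht *dicthtCreate(unsigned long size) {
    dictht *ht = malloc(sizeof(*ht));
    ht->table = calloc(size, sizeof(dictEntry *));
    ht->size = size;
    ht->sizemask = size - 1;
    ht->used = 0;
    return ht;
}

// Insert a key-value pair; a colliding entry is chained at the
// head of its bucket's list, which keeps insertion O(1).
void dicthtAdd(dictht *ht, const char *key, void *val) {
    unsigned long idx = hashKey(key) & ht->sizemask;  // index = hash & sizemask
    dictEntry *e = malloc(sizeof(*e));
    e->key = malloc(strlen(key) + 1);
    strcpy(e->key, key);
    e->val = val;
    e->next = ht->table[idx];  // head insertion into the chain
    ht->table[idx] = e;
    ht->used++;
}

// Walk the chain at the key's bucket to find its value.
void *dicthtFind(dictht *ht, const char *key) {
    dictEntry *e = ht->table[hashKey(key) & ht->sizemask];
    while (e) {
        if (strcmp(e->key, key) == 0) return e->val;
        e = e->next;
    }
    return NULL;
}
```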
**Expansion and Contraction**

> When the hash table holds too many or too few key-value pairs, it must be expanded or contracted accordingly by rehashing. The steps:
>
> 1. For an expansion, create a hash table whose size is the first power of two greater than or equal to ht[0].used * 2 (that is, each expansion creates a new hash table based on doubling the space currently used by the original table). Conversely, for a contraction, create a new hash table whose size is the first power of two greater than or equal to ht[0].used.
>
> 2. Recompute the index value of every key with the hash algorithm above, and place each key-value pair at its position in the new hash table.
>
> 3. After all key-value pairs have been migrated, release the memory of the original hash table.
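The sizing rule in step 1 can be sketched as follows; the minimum table size of 4 matches Redis's DICT_HT_INITIAL_SIZE, but the helper names here are invented for this example:

```c
// Smallest power of two >= target: keeping the size a power of two is
// what makes sizemask = size - 1 usable for index calculation.
unsigned long nextPower(unsigned long target) {
    unsigned long size = 4;  // minimum table size, as in Redis
    while (size < target) size *= 2;
    return size;
}

// Expansion target: ht[0].used * 2.
unsigned long expandSize(unsigned long used) { return nextPower(used * 2); }

// Contraction target: ht[0].used.
unsigned long shrinkSize(unsigned long used) { return nextPower(used); }
```

For example, a table with 30 used nodes expands into a 64-bucket table (the first power of two at or above 60) and would contract into a 32-bucket one.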

**Conditions That Trigger Expansion**

> 1. The server is not currently executing the BGSAVE or BGREWRITEAOF command, and the load factor is greater than or equal to 1.
>
> 2. The server is currently executing the BGSAVE or BGREWRITEAOF command, and the load factor is greater than or equal to 5.
>
> PS: load factor = number of nodes saved in the hash table / hash table size.

**Progressive Rehash**

> The expansion and contraction operations are not completed in one centralized pass, but progressively, over multiple steps. If Redis holds only a few dozen key-value pairs, the rehash can be done instantly; but if there are millions, tens of millions, or even hundreds of millions of pairs, a one-time rehash would inevitably block all other Redis operations for a period of time. So Redis uses progressive rehash. During a progressive rehash, lookups, deletions, updates and other operations may be performed on both hash tables: if a key is not found in the first hash table, the second is searched. Additions, however, are always performed on the new hash table.
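A minimal sketch of the progressive scheme under these rules (heavily simplified: integer keys, a rehashidx field tracking the next ht[0] bucket to migrate, and one migration step piggybacked on each operation; real Redis also installs ht[1] as the new ht[0] once the rehash finishes, which is omitted here):

```c
#include <stdlib.h>

// Entry with an integer key, kept deliberately simple for the sketch.
typedef struct entry {
    unsigned long key;
    struct entry *next;
} entry;

typedef struct {
    entry **table[2];       // table[0] = old table, table[1] = new table
    unsigned long size[2];  // bucket counts, both powers of two
    long rehashidx;         // next table[0] bucket to migrate; -1 = not rehashing
} dict;

// Create a dictionary that has just begun expanding from 4 to 8 buckets.
dict *dictCreate(void) {
    dict *d = malloc(sizeof(*d));
    d->size[0] = 4;
    d->size[1] = 8;
    d->table[0] = calloc(d->size[0], sizeof(entry *));
    d->table[1] = calloc(d->size[1], sizeof(entry *));
    d->rehashidx = 0;  // rehash in progress, starting at bucket 0
    return d;
}

// Migrate one bucket of table[0] into table[1]. Each add/find call
// performs one such step, spreading the rehash cost over many operations.
static void rehashStep(dict *d) {
    if (d->rehashidx < 0) return;  // no rehash in progress
    entry *e = d->table[0][d->rehashidx];
    while (e) {
        entry *next = e->next;
        unsigned long idx = e->key & (d->size[1] - 1);
        e->next = d->table[1][idx];  // re-chain into the new table
        d->table[1][idx] = e;
        e = next;
    }
    d->table[0][d->rehashidx] = NULL;
    if ((unsigned long)++d->rehashidx >= d->size[0]) d->rehashidx = -1;  // done
}

// While rehashing, additions go only to the new table, so the old
// table can only ever shrink.
void dictAdd(dict *d, unsigned long key) {
    rehashStep(d);
    int t = (d->rehashidx >= 0) ? 1 : 0;
    unsigned long idx = key & (d->size[t] - 1);
    entry *e = malloc(sizeof(*e));
    e->key = key;
    e->next = d->table[t][idx];
    d->table[t][idx] = e;
}

// Lookups check the old table first, then the new one.
int dictFind(dict *d, unsigned long key) {
    rehashStep(d);
    for (int t = 0; t < 2; t++) {
        entry *e = d->table[t][key & (d->size[t] - 1)];
        while (e) {
            if (e->key == key) return 1;
            e = e->next;
        }
    }
    return 0;
}
```

Because the sketch never swaps the tables, entries can end up split across both; that is exactly why dictFind must consult both tables, mirroring the lookup rule described above.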

Origin www.cnblogs.com/jarvisblogs/p/11257828.html