Redis and Memcache

https://www.cnblogs.com/xrq730/p/4948707.html

What is MemCache

Understanding Memcached's memory storage mechanism

The Slab Allocator memory allocation mechanism
By default, Memcached manages its memory with an allocation mechanism called the Slab Allocator.

Drawbacks of memory allocation before the Slab Allocator
Before this mechanism existed, memory was allocated by simply calling malloc and free for every record. This approach, however, causes memory fragmentation and increases the burden on the operating system's memory manager; in the worst case it can make the operating system slower than the memcached process itself. The Slab Allocator was created to solve this problem.

The basic principle of the Slab Allocator is to allocate memory in page units of a predetermined size (a page is 1MB by default and can be changed with the -I startup parameter), divide each page into blocks of a specific size (chunks), and group chunks of the same size into sets. When more memory is needed, memcached assigns a freshly allocated page to the slab that needs space and divides it into that slab's chunks. Once a page has been assigned, it is never reclaimed or reassigned before a restart, which eliminates the memory fragmentation problem.

Page
The unit of memory space assigned to a slab, 1MB by default. After a page is assigned to a slab, it is cut into chunks according to that slab's chunk size.
Chunk
The block of memory used to cache a record. A chunk is where the actual data is stored, and a slab's chunk size is the maximum size of item that slab can manage. Every chunk inside the same slab is the same size; for example, slab1's chunks might be 88 bytes and slab2's 112 bytes. Because memory is handed out in these fixed-length blocks, the allocated memory cannot be used with perfect efficiency.
Slab Class
A group of chunks of the same size. (The slab idea is borrowed from the slab allocator, a memory allocation mechanism in the Linux operating system.)
Memcached does not store data of every size in one place. Instead, it divides the data space into a series of slabs in advance, and each slab is only responsible for data within a certain size range. Based on the size of the data it receives, memcached selects the slab that fits best. Memcached keeps a list of free chunks for each slab, picks a chunk from that list, and caches the data in it.


Each slab only stores data whose size is less than or equal to its own maximum size and greater than the previous slab's maximum. For example, a 100-byte string is saved into slab2 (88-112 bytes). The slabs therefore cover ranges of unequal width: by default, each slab's maximum is 1.25 times that of the previous slab, and this growth ratio can be changed with the -f startup parameter.
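The rule can be made concrete with a short Python sketch. It is only an approximation of the behaviour described above, not memcached's actual code: the 88-byte base chunk size and the 1.25 growth factor are taken from the examples in this article, and real memcached additionally aligns chunk sizes.

    # Approximate sketch: grow chunk sizes by a fixed factor and pick
    # the smallest slab class whose chunk can hold a given item.
    PAGE_SIZE = 1024 * 1024          # a page is 1MB by default (-I)
    GROWTH_FACTOR = 1.25             # default chunk growth factor (-f)
    BASE_CHUNK_SIZE = 88             # assumed smallest chunk size

    def build_chunk_sizes(base=BASE_CHUNK_SIZE, factor=GROWTH_FACTOR,
                          page_size=PAGE_SIZE):
        """Generate chunk sizes until a chunk would exceed one page."""
        sizes = []
        size = base
        while size <= page_size:
            sizes.append(int(size))
            size *= factor
        return sizes

    def pick_slab_class(item_size, chunk_sizes):
        """Return the index of the smallest chunk size that fits the item."""
        for index, chunk_size in enumerate(chunk_sizes):
            if item_size <= chunk_size:
                return index
        raise ValueError("item larger than the 1MB page limit")

    chunk_sizes = build_chunk_sizes()
    print(chunk_sizes[:5])                    # [88, 110, 137, 171, 214]
    print(pick_slab_class(100, chunk_sizes))  # a 100-byte item goes to slab class 1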


The concrete slab memory allocation process is as follows:
When memcached starts, the -m parameter sets the maximum amount of memory it may use, but this memory is not occupied all at once; it is assigned to the individual slabs gradually.
When a new piece of data needs to be stored:
First a suitable slab is chosen, and memcached checks whether that slab has a free chunk. If it does, the data is stored there directly. If it does not, the slab requests more memory, and memory is always requested in page units: regardless of the data's size, a 1MB page is assigned to the slab (the page is never reclaimed or redistributed afterwards; it belongs to that slab permanently).
After receiving the page, the slab cuts it up into chunks of its own chunk size, turning the page into an array of chunks, and then one chunk is selected from this array to store the data.
If no free page is available either, LRU eviction is performed within that slab, not across the whole memcache.
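The flow above can be sketched roughly in Python. This is a simplified model under the assumptions already stated (1MB pages, a fixed chunk size per slab), not memcached's implementation:

    # Simplified model of the store flow: reuse a free chunk if one
    # exists, otherwise request a whole 1MB page and split it into
    # chunks of this slab's fixed size.
    PAGE_SIZE = 1024 * 1024

    class Slab:
        def __init__(self, chunk_size):
            self.chunk_size = chunk_size
            self.free_chunks = []        # offsets of unused chunks
            self.used = {}               # key -> chunk offset
            self.pages_allocated = 0

        def allocate_page(self):
            """Cut a new 1MB page into chunks and add them to the free list."""
            base = self.pages_allocated * PAGE_SIZE
            chunks_per_page = PAGE_SIZE // self.chunk_size
            self.free_chunks.extend(base + i * self.chunk_size
                                    for i in range(chunks_per_page))
            self.pages_allocated += 1

        def store(self, key):
            """Place a key into a free chunk, allocating a page if needed."""
            if not self.free_chunks:
                self.allocate_page()     # real memcached falls back to
                                         # per-slab LRU when no page is left
            self.used[key] = self.free_chunks.pop()
            return self.used[key]

    slab = Slab(chunk_size=112)
    slab.store("user:42")                # the first store triggers a page allocation
    print(len(slab.free_chunks))         # 1MB // 112 - 1 = 9361 chunks remain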

1. Slab class: in memcached, elements are managed in slab units. Each slab class corresponds to one or more chunks of the same size.
2. Chunk: the smallest unit used to store an element. The user's data (key, value, and so on) is ultimately saved into a chunk. memcached places each element into a suitable slab class according to its size. Because every chunk in a slab class has the same size, some space may be left over in the chunk after the element is stored.
3. Page: fixed at 1MB. When a slab class runs out of space, it requests a page, and the page is cut up according to that class's chunk size.


MemCache is a free, open-source, high-performance distributed memory object caching system, used by dynamic web applications to reduce database load.
It reduces the number of database reads by caching data and objects in memory, thereby speeding up access to the site.
Compared with Redis: for plain key-value storage, memcached has the higher memory utilization; but when Redis stores key-value data inside hash structures, its combined compression gives it higher memory utilization than MemCache.
Cache:
Objects that will be used are kept in memory so they can be fetched quickly when needed, without creating new duplicate instances. This reduces system overhead and improves efficiency.
MemCache is called a "distributed cache", but MemCache itself has no distributed functionality; a distributed MemCache deployment can only be achieved by the client, for example through a consistent-hashing distribution algorithm.
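A minimal sketch of such client-side consistent hashing is shown below. The server addresses and the virtual-node count are made-up examples, and real clients (spymemcached, libmemcached and so on) use more refined schemes:

    # Client-side consistent hashing: each key is mapped onto a ring of
    # virtual nodes, and the first node clockwise from the key owns it.
    import bisect
    import hashlib

    class ConsistentHashRing:
        def __init__(self, nodes, replicas=100):
            self.ring = {}               # position on the ring -> node
            self.sorted_keys = []
            for node in nodes:
                for i in range(replicas):
                    pos = self._hash(f"{node}#{i}")
                    self.ring[pos] = node
                    bisect.insort(self.sorted_keys, pos)

        @staticmethod
        def _hash(value):
            return int(hashlib.md5(value.encode()).hexdigest(), 16)

        def get_node(self, key):
            """Return the first node at or after the key's ring position."""
            pos = self._hash(key)
            idx = bisect.bisect(self.sorted_keys, pos) % len(self.sorted_keys)
            return self.ring[self.sorted_keys[idx]]

    ring = ConsistentHashRing(["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"])
    print(ring.get_node("session:abc"))  # the client talks to this node only

Because each key deterministically maps to one node, adding or removing a node only remaps the keys that fell on that node's section of the ring, which is why client libraries prefer consistent hashing over a simple modulo scheme.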

The principle of MemCache
MemCache stores its data in memory, which, in my opinion, implies the following:
1. Accessing the data is faster than with a traditional relational database, because Oracle, MySQL and other traditional relational databases keep their data on disk to make it persistent, and disk IO is slow.
2. Keeping the data in memory also means that as soon as MemCache is restarted, the data is gone.
3. Since the data lives in memory, it is constrained by the machine's word size: a 32-bit machine can use at most 2GB of memory for the process, while a 64-bit machine has no such upper limit.
Within MemCache's internals, the most important part is how it allocates memory. MemCache uses fixed-size space allocation, which involves four concepts: slab_class, slab, page and chunk.
Slab Class
A group composed of chunks of the same size.
Slab
Named after the slab allocator, a memory allocation mechanism of the Linux operating system.
Page
The memory space assigned to a slab, 1MB by default. After being assigned to a slab, the page is cut into chunks according to that slab's chunk size.
Chunk
The block of memory used to cache a record.
The relationship between them is:
1. MemCache's memory space is divided into a group of slabs.
2. Each slab contains a number of pages, and each page is 1MB by default; so if a slab occupies 100MB of memory, that slab holds 100 pages (see the arithmetic sketch after this list).
3. Each page contains a group of chunks; a chunk is where the data is actually stored, and every chunk inside the same slab has the same fixed size.
4. Chunks of the same size, grouped together, are referred to as a slab_class.
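As a quick worked example of that arithmetic (the 100MB figure and the 128-byte chunk size are just the illustrative values used in this article):

    # Quick arithmetic on the slab / page / chunk relationship.
    PAGE_SIZE = 1024 * 1024              # each page is 1MB

    slab_memory = 100 * PAGE_SIZE        # a slab occupying 100MB
    chunk_size = 128                     # one of the example chunk sizes

    print(slab_memory // PAGE_SIZE)      # 100 pages in that slab
    print(PAGE_SIZE // chunk_size)       # 8192 chunks of 128 bytes per page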
MemCache's memory allocation component is called the allocator. The number of slabs is limited (a handful, a dozen, or a few dozen), depending on the configuration and startup parameters.
Where a value is stored in MemCache is determined by the size of the value: it always goes into the slab whose chunk size is closest to, and no smaller than, the value. For example, if slab[1]'s chunk size is 80 bytes, slab[2]'s chunk size is 100 bytes and slab[3]'s chunk size is 128 bytes (the chunk sizes of adjacent slabs grow at a ratio of roughly 1.25, which can be set with the -f option when MemCache starts), then an 88-byte value will be placed into slab[2]. When storing into a slab, the slab must first request memory, and memory is requested in page units, so the first time data is put in, a 1MB page is assigned to the slab regardless of how small the data is. Once the page has been obtained, the slab cuts it into chunks of its own chunk size, producing an array of chunks, and finally one chunk is selected from this array to store the data.
What if the slab has no chunk left to hand out? Provided MemCache was not started with -M (which disables LRU; in that case an Out Of Memory error is reported when memory runs out), MemCache cleans out the data in that slab's least recently used chunk and then stores the new data in its place. MemCache's memory allocation and reclamation can be summarized in three points:
1. MemCache's chunk-based allocation wastes some memory: following the example above, an 88-byte value placed into a 100-byte chunk (the next larger size) loses 12 bytes, but in exchange the problem of managing memory fragmentation is avoided.
2. MemCache's LRU algorithm is not global; it operates per slab (see the sketch after this list).
3. This should also make it clear why the size of a value stored in MemCache is limited: when a new piece of data arrives, the slab first requests memory in page units, and a single page request is at most 1MB, so a value naturally cannot be larger than 1MB.
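A minimal sketch of the per-slab LRU idea, assuming a toy slab that can hold only a fixed number of chunks (this illustrates the concept only, not memcached's eviction code):

    # Per-slab LRU: each slab keeps its own recency-ordered map, so
    # running out of chunks in one slab evicts only that slab's
    # least recently used item.
    from collections import OrderedDict

    class SlabLRU:
        def __init__(self, max_chunks):
            self.max_chunks = max_chunks
            self.items = OrderedDict()        # key -> value, oldest first

        def get(self, key):
            if key in self.items:
                self.items.move_to_end(key)   # mark as most recently used
            return self.items.get(key)

        def set(self, key, value):
            if key in self.items:
                self.items.move_to_end(key)
            elif len(self.items) >= self.max_chunks:
                evicted, _ = self.items.popitem(last=False)   # evict LRU of this slab only
                print("evicted:", evicted)
            self.items[key] = value

    slab = SlabLRU(max_chunks=2)
    slab.set("a", 1)
    slab.set("b", 2)
    slab.get("a")            # "a" becomes most recently used
    slab.set("c", 3)         # evicts "b"; other slabs are untouched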

Features and limitations of MemCache
MemCache has already been explained in some detail above; here its features and limitations are summarized once more:
1. There is no limit on the number of items MemCache can save, as long as there is enough memory.
2. The maximum amount of memory a single MemCache process can use on a 32-bit machine is 2GB, as mentioned several times above; on a 64-bit machine there is no such restriction.
3. A key can be at most 250 bytes; anything longer cannot be stored.
4. A single item of data can be at most 1MB; data larger than 1MB cannot be stored.
5. A MemCache server is not secure: given a known MemCache node, you can simply telnet to it and, for example, immediately invalidate every existing key with flush_all (see the protocol sketch after this list).
6. MemCache cannot traverse all of its items, because this operation is relatively slow and blocks other operations.
7. MemCache's performance comes from its two-stage hash structure: in the first stage, the client computes a hash of the key to choose a node; in the second stage, the server uses an internal hash to locate the actual item and returns it to the client. From an implementation standpoint, MemCache is a non-blocking, event-based server program.
8. When storing a key, passing an expiry of 0 means the key is permanent and never expires on its own (though it can still be evicted); the largest relative expiry MemCache accepts is 30 days, and a larger value is interpreted as an absolute Unix timestamp.
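The security point in item 5 and the set/expiry behaviour in item 8 can both be seen by speaking MemCache's plain text protocol directly. The sketch below does over a raw Python socket exactly what telnet would do; it assumes a memcached instance is listening on 127.0.0.1:11211:

    # Anyone who can reach the port can issue these commands.
    import socket

    def send_command(sock, command):
        sock.sendall(command.encode() + b"\r\n")
        return sock.recv(4096).decode()

    with socket.create_connection(("127.0.0.1", 11211)) as sock:
        # set <key> <flags> <exptime> <bytes> followed by the data block;
        # an exptime of 0 means the item does not expire on its own
        print(send_command(sock, "set greeting 0 0 5\r\nhello"))  # STORED
        print(send_command(sock, "get greeting"))                 # VALUE greeting 0 5 / hello / END
        print(send_command(sock, "flush_all"))                    # OK: every existing key is now invalid
        print(send_command(sock, "get greeting"))                 # END (nothing is returned)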
Origin www.cnblogs.com/wyf2019/p/10959558.html