memcache internals 1.5.8: memory allocation and eviction

Alright, let's get into it.

First, we need to understand how memcache manages its memory.

Memory allocation
First, the -m command-line option reserves a fixed amount of memory for data. That memory is handed out to slab classes in pages (1MB each by default), and each 1MB page is then cut into chunks of the fixed size defined by that slab class.
Here is some of the log output when memcached is started with -vv:
$ ./memcached -vv
slab class   1: chunk size        80 perslab   13107
slab class   2: chunk size       104 perslab   10082
slab class   3: chunk size       136 perslab    7710
slab class   4: chunk size       176 perslab    5957
slab class   5: chunk size       224 perslab    4681
slab class   6: chunk size       280 perslab    3744
slab class   7: chunk size       352 perslab    2978
slab class   8: chunk size       440 perslab    2383
slab class   9: chunk size       552 perslab    1899
slab class  10: chunk size       696 perslab    1506
[...etc...]
In the example above, an item of 50 bytes would be stored in slab class 1, and a 90-byte item in slab class 2: memcache finds the smallest slab class whose chunk size can hold the item and stores it there.
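To make the sizing concrete, here is a minimal Python sketch (not memcached's actual code) of how chunk sizes could be generated with the default growth factor of 1.25, and how an item is mapped to the smallest class that fits. The starting chunk size of 80 bytes and the 8-byte alignment are assumptions chosen to reproduce the -vv log above; the real values depend on the memcached version, the item header size, and the -n/-f options.

def build_slab_classes(base=80, factor=1.25, page_size=1024 * 1024, align=8):
    """Generate per-class chunk sizes, assuming 1MB pages and 8-byte alignment."""
    classes = []
    size = base
    while size <= page_size / 2:
        size = (size + align - 1) // align * align   # round up to the alignment boundary
        classes.append({"chunk_size": size, "perslab": page_size // size})
        size = int(size * factor)                    # grow by the factor for the next class
    return classes

def pick_class(classes, item_size):
    """Return the first (smallest) slab class whose chunk can hold the item."""
    for idx, c in enumerate(classes, start=1):
        if item_size <= c["chunk_size"]:
            return idx, c
    return None, None   # larger than any chunk: rejected (over the item size limit)

if __name__ == "__main__":
    classes = build_slab_classes()
    for size in (50, 90, 200):
        idx, c = pick_class(classes, size)
        print(f"{size} bytes -> slab class {idx} (chunk size {c['chunk_size']})")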

It's worth emphasizing that before the older 1.4.25 release, memcache used lazy expiration (there was no automatic cleanup of expired items) and had no mechanism for reclaiming and reassigning pages. This is what people usually mean by "once memcache's storage layout is settled, it never changes."

In newer versions, however, both are supported. Below I'll go through the memory allocation and eviction mechanisms of the newer memcache in more detail.

The overall process works like this:
First, the lru_crawler periodically scans for expired items and, when it finds them, frees them immediately (unlike the lazy expiration of the historical 1.4.x versions, where an expired item was only reclaimed when it was next touched).
Next, if the slab_automove=1 option is enabled, a background thread watches for slab classes holding more than 2 pages' worth of free chunks; when such a class has a page it can give up, that page is reclaimed into the global_page_pool for other slab classes to use.
Finally, when a slab class needs another page, it first takes one from the global_page_pool; only if the pool is empty does it allocate a new page from memory (within the -m limit).
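This loop can be watched from outside over the plain text protocol. Below is a small sketch that dumps the relevant counters; in recent versions the general "stats" output includes a slab_global_page_pool counter (pages parked in the global pool) and slab_reassign_* counters, and "stats slabs" reports total_pages per class. The exact field names can vary by version, and the host/port here simply match the example startup command further down.

import socket

def text_command(cmd, host="127.0.0.1", port=13212):
    """Send one text-protocol command and return its response lines (up to END)."""
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall((cmd + "\r\n").encode())
        buf = b""
        while not buf.endswith(b"END\r\n"):
            chunk = s.recv(4096)
            if not chunk:
                break
            buf += chunk
    return buf.decode().splitlines()

if __name__ == "__main__":
    # global counters: pages in the global pool, reassignment activity
    for line in text_command("stats"):
        if "page_pool" in line or "slab_reassign" in line:
            print(line)
    # per-class view: how many pages each slab class currently holds
    for line in text_command("stats slabs"):
        if "total_pages" in line:
            print(line)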


(Figure: a memcache page being reclaimed into the global_page_pool.)


(Figure: memcache's default settings.)



Note that the startup command needs to include the following options:

/usr/local/memcached/bin/memcached -d -m 4 -u root -p 13212 -o slab_reassign,slab_automove=1
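On a running instance you can confirm these options took effect over the text protocol: "stats settings" should, in versions that expose them, report the current slab_reassign and slab_automove values.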


Now let's look at some explanations from the official documentation:

Slabs Reassign
--------------

NOTE: This command is subject to change as of this writing.

The slabs reassign command is used to redistribute memory once a running
instance has hit its limit. It might be desirable to have memory laid out
differently than was automatically assigned after the server started.

slabs reassign <source class> <dest class>\r\n

- <source class> is an id number for the slab class to steal a page from

A source class id of -1 means "pick from any valid class"

- <dest class> is an id number for the slab class to move a page to

The response line could be one of:

- "OK" to indicate the page has been scheduled to move

- "BUSY [message]" to indicate a page is already being processed, try again
  later.

- "BADCLASS [message]" a bad class id was specified

- "NOSPARE [message]" source class has no spare pages

- "NOTFULL [message]" dest class must be full to move new pages to it

- "UNSAFE [message]" source class cannot move a page right now

- "SAME [message]" must specify different source/dest ids.

Slabs Automove
--------------

NOTE: This command is subject to change as of this writing.

The slabs automove command enables a background thread which decides on its
own when to move memory between slab classes. Its implementation and options
will likely be in flux for several versions. See the wiki/mailing list for
more details.

The automover can be enabled or disabled at runtime with this command.

slabs automove <0|1|2>

- 0|1|2 is the indicator on whether to enable the slabs automover or not.

The response should always be "OK\r\n"

- <0> means to set the thread on standby

- <1> means to return pages to a global pool when there are more than 2 pages
  worth of free chunks in a slab class. Pages are then re-assigned back into
  other classes as-needed.

- <2> is a highly aggressive mode which causes pages to be moved every time
  there is an eviction. It is not recommended to run for very long in this
  mode unless your access patterns are very well understood.
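The automover can be toggled at runtime in the same way as the reassign sketch above (host/port are assumptions):

import socket

def slabs_automove(mode, host="127.0.0.1", port=13212):
    """Set the automover mode: 0 = standby, 1 = return free pages to the global pool, 2 = aggressive."""
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(f"slabs automove {mode}\r\n".encode())
        return s.recv(4096).decode().strip()   # should be "OK"

print(slabs_automove(1))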


LRU_Crawler
-----------

NOTE: This command (and related commands) are subject to change as of this
writing.

The LRU Crawler is an optional background thread which will walk from the tail
toward the head of requested slab classes, actively freeing memory for expired
items. This is useful if you have a mix of items with both long and short
TTL's, but aren't accessed very often. This system is not required for normal
usage, and can add small amounts of latency and increase CPU usage.

lru_crawler <enable|disable>

- Enable or disable the LRU Crawler background thread.

The response line could be one of:

- "OK" to indicate the crawler has been started or stopped.

- "ERROR [message]" something went wrong while enabling or disabling.

lru_crawler sleep <microseconds>

- The number of microseconds to sleep in between each item checked for
  expiration. Smaller numbers will obviously impact the system more.
  A value of "0" disables the sleep, "1000000" (one second) is the max.

The response line could be one of:

- "OK"

- "CLIENT_ERROR [message]" indicating a format or bounds issue.

lru_crawler tocrawl <32u>

- The maximum number of items to inspect in a slab class per run request. This
  allows you to avoid scanning all of very large slabs when it is unlikely to
  find items to expire.

The response line could be one of:

- "OK"

- "CLIENT_ERROR [message]" indicating a format or bound issue.

lru_crawler crawl <classid,classid,classid|all>

- Takes a single, or a list of, numeric classids (ie: 1,3,10). This instructs
  the crawler to start at the tail of each of these classids and run to the
  head. The crawler cannot be stopped or restarted until it completes the
  previous request.

  The special keyword "all" instructs it to crawl all slabs with items in
  them.

The response line could be one of:

- "OK" to indicate successful launch.

- "BUSY [message]" to indicate the crawler is already processing a request.

- "BADCLASS [message]" to indicate an invalid class was specified.

lru_crawler metadump <classid,classid,classid|all>

- Similar in function to the above "lru_crawler crawl" command, this function
  outputs one line for every valid item found in the matching slab classes.
  Similar to "cachedump", but does not lock the cache and can return all
  items, not just 1MB worth.

  Lines are in "key=value key2=value2" format, with value being URI encoded
  (ie: %20 for a space).

  The exact keys available are subject to change, but will include at least:

  "key", "exp" (expiration time), "la", (last access time), "cas",
  "fetch" (if item has been fetched before).

The response line could be one of:

- "OK" to indicate successful launch.

- "BUSY [message]" to indicate the crawler is already processing a request.

- "BADCLASS [message]" to indicate an invalid class was specified.


Reposted from blog.csdn.net/wild46cat/article/details/80916457