Memcached Introduction
Installing and Running Memcached
Memcached runs on many platforms: Linux, FreeBSD, Solaris, and Mac OS, and it can also be installed on Windows.
On Linux, the libevent library must be installed before installing memcached.
Ubuntu/Debian
sudo apt-get install libevent libevent-dev
Redhat/Fedora/Centos
yum install libevent libevent-devel
Memcached installation
Ubuntu/Debian
sudo apt-get install memcached
Redhat/Fedora/Centos
yum install memcached
FreeBSD
portmaster databases/memcached
Running Memcached
To see memcached's command-line help, run:
$ /usr/local/memcached/bin/memcached -h
Note: if memcached was installed automatically, the command is usually located at /usr/local/bin/memcached.
Startup options:
- -d run as a daemon;
- -m amount of memory allocated to memcached, in MB;
- -u user to run memcached as;
- -l IP address to listen on; multiple addresses are allowed;
- -p TCP port to listen on, preferably 1024 or above;
- -c maximum number of simultaneous connections, default 1024;
- -P file in which memcached saves its PID.
(1) Running as a foreground process:
Enter the following command in a terminal to start memcached:
/usr/local/memcached/bin/memcached -p 11211 -m 64m -vv
slab class   1: chunk size 88 perslab 11915
slab class   2: chunk size 112 perslab 9362
slab class   3: chunk size 144 perslab 7281
... (output omitted) ...
slab class  38: chunk size 391224 perslab 2
slab class  39: chunk size 489032 perslab 2
<23 server listening
<24 send buffer was 110592, now 268435456
<24 server listening (udp)
<24 server listening (udp)
<24 server listening (udp)
<24 server listening (udp)
The -vv flag prints verbose debug output. Memcached is now running in the foreground, listening on TCP port 11211 with a memory limit of 64 MB, and it logs plenty of debugging information as it runs.
(2) Running as a background daemon:
# /usr/local/memcached/bin/memcached -p 11211 -m 64m -d
or
/usr/local/memcached/bin/memcached -d -m 64M -u root -l 192.168.0.200 -p 11211 -c 256 -P /tmp/memcached.pid
How Memcached Works
Distribution is a key feature of memcached: you can install memcached on multiple servers and combine them into one larger cache pool. Memcached can thus greatly reduce the load on the database, letting us build faster, more scalable web applications.
Many web applications save their data in an RDBMS; the application server reads it from there and renders it in the browser. But as the amount of data and the concentration of accesses grow, the burden on the RDBMS increases, database response deteriorates, and the site's latency suffers noticeably. Memcached is a high-performance, distributed in-memory cache server. By caching database query results, it reduces the number of database accesses, which speeds up dynamic web applications and improves their scalability. The following figure shows how memcached cooperates with the database:
1. When a request arrives, the application first looks for the requested data in the cache.
2. If the requested data is not found in the cache, the application queries the database, and while returning the requested data it also stores a copy of it in the cache.
3. To keep the cache "fresh", whenever the data changes (for example, it is modified or deleted), the cached copy must be updated in step, so that users never see stale data.
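The three steps above form the classic cache-aside pattern. A minimal sketch in Python, where a plain dict stands in for memcached and `fetch_from_db` is a hypothetical placeholder for a real database query:

```python
cache = {}

def fetch_from_db(key):
    # Hypothetical stand-in for an RDBMS query.
    return f"db-value-for-{key}"

def get(key):
    # Step 1: look for the data in the cache first.
    if key in cache:
        return cache[key]
    # Step 2: on a miss, query the database and store a copy in the cache.
    value = fetch_from_db(key)
    cache[key] = value
    return value

def update(key, new_value):
    # Step 3: when the data changes, write the database first, then
    # refresh (or delete) the cache entry so readers never see stale data.
    # ... write new_value to the database here ...
    cache[key] = new_value
```

A real application would replace the dict with a memcached client and add an expiration time per entry.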
- Simple protocol
- Libevent-based event processing
- Built-in memory storage
- Distributed design: memcached servers do not communicate with each other
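The "simple protocol" above is plain text over TCP. As a sketch, here is how a client frames the two basic commands; the framing follows the classic memcached text protocol, while the helper function names are my own:

```python
def build_set(key: str, value: bytes, flags: int = 0, exptime: int = 0) -> bytes:
    # set <key> <flags> <exptime> <bytes>\r\n<data>\r\n
    # The server replies STORED\r\n on success.
    header = f"set {key} {flags} {exptime} {len(value)}\r\n".encode()
    return header + value + b"\r\n"

def build_get(key: str) -> bytes:
    # get <key>\r\n
    # The server replies VALUE <key> <flags> <bytes>\r\n<data>\r\nEND\r\n.
    return f"get {key}\r\n".encode()
```

Because the protocol is line-oriented text, you can even talk to memcached interactively with telnet for debugging.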
How to implement distributed scalability
Memcached's distribution is not implemented on the server side; it is implemented in the client application. The client library uses a built-in algorithm to choose the target node for each piece of data, as shown below:
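A minimal sketch of such client-side key distribution, assuming simple modulo hashing (real clients often use consistent hashing instead, so that adding or removing a node remaps fewer keys; the server addresses here are hypothetical):

```python
import zlib

servers = ["192.168.0.200:11211", "192.168.0.201:11211", "192.168.0.202:11211"]

def pick_server(key: str) -> str:
    # Hash the key and map it onto one of the configured nodes.
    # The choice is made entirely on the client side; the memcached
    # servers themselves never communicate with each other.
    h = zlib.crc32(key.encode("utf-8"))
    return servers[h % len(servers)]
```

Every client configured with the same server list and the same algorithm will route a given key to the same node, which is what makes the pool behave as one large cache.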