Redis vs. Memcached

A straightforward comparison of Redis and Memcached usually yields the following points:
1. Redis supports not only simple k/v data but also the storage of data structures such as lists, sets, and hashes. Memcached only stores serialized objects: it can hold maps, lists, and so on, but only as serialized blobs that must be deserialized in the client on every access.
2. Redis supports data backup, i.e. backup in master-slave mode.
3. Redis supports data persistence: it can keep in-memory data on disk and load it again for use after a restart.
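Point 1 can be made concrete with a small sketch. The following Python simulates the two access patterns in plain dicts (no servers involved; `memcached_store`, `redis_store`, and the helper names are illustrative, not real client APIs):

```python
import json

# Memcached-style: the whole map is one serialized blob, so updating a
# single field means fetch -> deserialize -> modify -> serialize -> store.
memcached_store = {"user:1": json.dumps({"name": "alice", "age": 30})}

def memcached_update_field(key, field, value):
    obj = json.loads(memcached_store[key])   # deserialize ALL fields
    obj[field] = value
    memcached_store[key] = json.dumps(obj)   # reserialize ALL fields

# Redis-style: a hash is a native server-side structure, so one field
# can be updated in place (analogous to HSET user:1 age 31).
redis_store = {"user:1": {"name": "alice", "age": 30}}

def redis_hset(key, field, value):
    redis_store[key][field] = value          # touches only one field

memcached_update_field("user:1", "age", 31)
redis_hset("user:1", "age", 31)
```

The Memcached path pays serialization cost proportional to the whole object on every update; the Redis path pays only for the field touched.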
In Redis, not all data is necessarily kept in memory at all times; this is (in my personal view) the biggest difference from Memcached. With Redis's virtual memory (VM) feature, Redis caches only the key information in memory. When Redis finds that memory usage exceeds a certain threshold, it triggers a swap operation: it computes "swappability = age * log(size_in_memory)" for each key to decide which values should be swapped to disk, then persists those values to disk and clears them from memory at the same time. This feature lets Redis hold more data than the machine's own memory can fit. Of course, the machine's memory must still be able to hold all the keys, since keys are never swapped. When Redis swaps in-memory data to disk, the main thread serving requests and the sub-thread performing the swap share that memory, so if data that is being swapped is updated, Redis blocks the update until the sub-thread has finished the swap.
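The swappability heuristic above can be sketched directly. This toy Python scores two hypothetical keys and picks the swap victim (the key names and sizes are made up for illustration):

```python
import math

def swappability(age_seconds, size_in_memory_bytes):
    """The VM heuristic from the text: swappability = age * log(size_in_memory)."""
    return age_seconds * math.log(size_in_memory_bytes)

# Older and larger values score higher and are swapped to disk first.
candidates = {
    "small:recent": swappability(age_seconds=10,   size_in_memory_bytes=256),
    "large:old":    swappability(age_seconds=3600, size_in_memory_bytes=4096),
}
victim = max(candidates, key=candidates.get)   # the key swapped out first
```

The log dampens the size term, so a value's idle time dominates the decision, which matches the cache intuition that cold data should go to disk first.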
For reference, here is a before/after comparison of memory usage with Redis's VM memory model:


VM off: 300k keys, 4096 bytes values: 1.3G used
VM on: 300k keys, 4096 bytes values: 73M used
VM off: 1 million keys, 256 bytes values: 430.12M used
VM on: 1 million keys, 256 bytes values: 160.09M used
VM on: 1 million keys, values as large as you want, still: 160.09M used

When reading data from Redis, if the value for the requested key is not in memory, Redis must first load it from the swap file and only then return it to the requester, which raises an I/O thread pool question. By default, Redis blocks: it responds only after everything has been loaded from the swap file. This strategy suits a small number of clients doing batch operations, but if Redis is used in a large-scale website, it obviously cannot handle highly concurrent traffic. So when running Redis we can set the size of the I/O thread pool, handling the read requests that need to load data from the swap file concurrently, to reduce blocking time.
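The benefit of that thread pool can be sketched in Python with simulated swap-file I/O (the `load_value_from_swap` function and its sleep are stand-ins, not real Redis internals):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def load_value_from_swap(key):
    """Stand-in for reading one value back from the swap file (simulated I/O wait)."""
    time.sleep(0.05)
    return f"value-of-{key}"

keys_not_in_memory = ["k1", "k2", "k3", "k4"]

# Blocking strategy: load one key at a time (~0.05s x 4 in total).
serial = [load_value_from_swap(k) for k in keys_not_in_memory]

# Thread-pool strategy: overlap the I/O waits (~0.05s in total).
with ThreadPoolExecutor(max_workers=4) as pool:
    pooled = list(pool.map(load_value_from_swap, keys_not_in_memory))
```

Because the work is I/O-bound waiting rather than computation, a pool of four workers serves the four requests in roughly the time of one, which is exactly the blocking-time reduction the text describes.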

Comparing Redis, Memcached, and MongoDB
We compare Redis, Memcached, and MongoDB along the following dimensions; feedback is welcome.
1. Performance
All three are fairly high-performance, so performance should not be a bottleneck for us. In terms of TPS, Redis and Memcached are roughly equal, and both are higher than MongoDB.
2. Convenience of data operations
Memcached offers only a single, simple data structure; Redis is richer in data operations and needs fewer network I/O round trips.
MongoDB supports rich data expressions and indexes; it is the most similar to a relational database, and its query language is very rich.
3. Memory space and data volume
Redis added its own VM feature after version 2.0, breaking through the limits of physical memory; an expiration time can be set per key (similar to Memcached).
Memcached's maximum usable memory can be adjusted; it evicts with the LRU algorithm.
MongoDB suits the storage of large data volumes; it relies on the operating system's virtual memory for memory management and also consumes a lot of memory, so it should not be deployed together with other services.
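The LRU eviction mentioned for Memcached can be sketched in a few lines of Python (a toy illustration of the policy, not Memcached's actual slab-based implementation):

```python
from collections import OrderedDict

class LRUCache:
    """Toy LRU cache: evicts the least recently used entry when full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)           # mark as most recently used
        return self.data[key]

    def set(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)    # evict least recently used

cache = LRUCache(capacity=2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")      # "a" becomes most recently used
cache.set("c", 3)   # capacity exceeded: evicts "b", the LRU entry
```

Reads refresh an entry's recency, so frequently accessed keys survive while cold ones are evicted first.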
4. Availability (single point of failure)
On the single-point problem:
Redis relies on the client to implement distributed reads and writes. During master-slave replication, every time the slave reconnects to the master it must pull a full snapshot; there is no incremental replication. Because of these performance and efficiency issues, the single-point problem is fairly complicated. Automatic sharding is not supported, so a consistent-hashing scheme must be set up in the application.
An alternative is to bypass Redis's own replication and do your own active replication (multiple copies), or implement incremental replication yourself, balancing consistency against performance.
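A minimal sketch of the client-side consistent hashing mentioned above, in Python with virtual nodes (node names and replica count are illustrative; real clients use similar but more tuned schemes):

```python
import bisect
import hashlib

def _hash(key):
    """Map a string to a point on the ring."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    """Toy consistent-hash ring with virtual nodes for smoother balance."""
    def __init__(self, nodes, replicas=100):
        self.replicas = replicas
        self.ring = {}            # ring point -> node name
        self.sorted_points = []
        for node in nodes:
            self.add(node)

    def add(self, node):
        for i in range(self.replicas):
            point = _hash(f"{node}#{i}")
            self.ring[point] = node
            bisect.insort(self.sorted_points, point)

    def node_for(self, key):
        point = _hash(key)
        idx = bisect.bisect(self.sorted_points, point) % len(self.sorted_points)
        return self.ring[self.sorted_points[idx]]

ring = ConsistentHashRing(["redis-a", "redis-b", "redis-c"])
owner = ring.node_for("user:42")   # a given key always maps to the same node
```

When a node is added or removed, only the keys between it and its neighbor on the ring move, which is why this scheme limits the "jitter" a single node change would otherwise cause.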
Memcached itself has no data redundancy, nor does it need a redundancy mechanism; for fault tolerance, mature hashing or ring-based algorithms are used to mitigate the jitter caused by a single node failing.
MongoDB supports master-slave replication, replica sets (which internally use a Paxos-style election algorithm for automatic failure recovery), and an auto-sharding mechanism, shielding the client from failover and partitioning.
5. Reliability (persistence)
For data persistence and data recovery:
Redis supports both snapshots and AOF: it relies on snapshots for persistence, while AOF improves reliability at some cost to performance.
Memcached does not support persistence; it is usually used as a cache to improve performance.
MongoDB has supported durable persistence via a binlog-style journal since version 1.8.
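For reference, the snapshot and AOF behaviors described above correspond to standard `redis.conf` directives; an illustrative fragment (the thresholds shown are the common defaults, adjust to taste):

```conf
# RDB snapshots: dump to disk if at least 1 change in 900s,
# 10 changes in 300s, or 10000 changes in 60s.
save 900 1
save 300 10
save 60 10000

# AOF: log every write command; fsync once per second is the usual
# middle ground between durability and performance.
appendonly yes
appendfsync everysec
```

`appendfsync always` is safer but slower, and `appendfsync no` leaves flushing to the OS, which is the performance impact trade-off the text refers to.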
6. Data consistency (transaction support)
Memcached uses CAS (check-and-set) to ensure consistency in concurrent scenarios.
Redis's transaction support is relatively weak: it only guarantees that the operations within a transaction execute consecutively.
MongoDB does not support transactions.
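Memcached's CAS pattern can be sketched with a toy in-memory store in Python (the `CASStore` class simulates the `gets`/`cas` version-token semantics; it is not a real Memcached client):

```python
import threading

class CASStore:
    """Toy store with Memcached-style gets/cas: writes carry a version token."""
    def __init__(self):
        self._lock = threading.Lock()
        self._value = 0
        self._version = 0

    def gets(self):
        """Return (value, version), like Memcached's gets command."""
        with self._lock:
            return self._value, self._version

    def cas(self, new_value, expected_version):
        """Store only if nobody wrote since we read; True on success."""
        with self._lock:
            if self._version != expected_version:
                return False          # someone else wrote first; caller retries
            self._value = new_value
            self._version += 1
            return True

store = CASStore()

def increment(store):
    while True:                       # optimistic retry loop
        value, version = store.gets()
        if store.cas(value + 1, version):
            return

threads = [threading.Thread(target=increment, args=(store,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Each failed `cas` simply re-reads and retries, so all eight concurrent increments land without a lost update, which is the consistency guarantee the text attributes to CAS.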
7. Application scenarios
Memcached: used to reduce database load and improve performance as a cache (suitable for read-heavy, write-light workloads; for large data volumes, sharding can be used).
MongoDB: mainly solves the problem of access efficiency for massive data.

Reprinted from http://250688049.blog.51cto.com/643101/1132097





