Redis usage scenarios: high performance and high concurrency

High performance

Imagine this scenario: a request comes in and you run a pile of complicated queries against MySQL, grinding away for 600ms before you finally get a result. But that result may not change for the next few hours, or even if it does change, it doesn't need to be shown to the user immediately. What should you do?

Cache it. Take the result that took 600ms to compute, throw it into the cache as a key-value pair, and the next time someone asks for the same data, don't go back to MySQL and burn another 600ms. Read it straight from the cache: one key lookup, 2ms, done. A 300x performance improvement.

This is called high performance.

In other words, when a time-consuming, complex operation produces a result that rarely changes but is read by a lot of requests, put that result directly into the cache and serve subsequent reads from the cache.
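To make the idea concrete, here is a minimal cache-aside sketch in Python, assuming the redis-py client and a local Redis instance; the key name, the one-hour TTL, and run_slow_mysql_query are hypothetical placeholders for whatever expensive query your code actually runs.

```python
import json
import redis  # assumes the redis-py client is installed

r = redis.Redis(host="localhost", port=6379, db=0)

def get_user_profile(user_id):
    """Cache-aside: try Redis first, fall back to the slow MySQL query."""
    cache_key = f"user:profile:{user_id}"

    cached = r.get(cache_key)
    if cached is not None:
        # Cache hit: a ~2ms key lookup instead of the ~600ms query.
        return json.loads(cached)

    # Cache miss: run the expensive query (placeholder for the real SQL).
    profile = run_slow_mysql_query(user_id)

    # Store the result with a TTL so stale data eventually expires.
    r.set(cache_key, json.dumps(profile), ex=3600)
    return profile

def run_slow_mysql_query(user_id):
    # Stand-in for the multi-join query that takes ~600ms.
    return {"id": user_id, "name": "example"}
```

The first request still pays the 600ms, but every later request for the same key is served from memory until the TTL expires.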

 

High concurrency

MySQL is a heavyweight database that was never designed for high concurrency. You can push it, but it doesn't handle it naturally: a single MySQL instance starts to struggle, and trigger alerts, at around 2000 QPS.

So if your system sees peaks of more than 10,000 requests per second, a single MySQL instance will definitely fall over. At that point you have to bring in a cache: keep most of the data and most of the traffic on the cache, not on MySQL. A cache only does simple key-value operations, so a single instance can easily handle tens of thousands, even over a hundred thousand, requests per second, far more than a single MySQL instance can sustain. Supporting high concurrency becomes easy.
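As a sketch of keeping the hot read path off MySQL entirely (again assuming redis-py; the homepage:products key and the five-minute TTL are invented for illustration):

```python
import json
import redis  # assumes redis-py

r = redis.Redis(host="localhost", port=6379, db=0)

def publish_homepage_products(products):
    """Write path: refresh the hot data in Redis whenever it changes in MySQL."""
    r.set("homepage:products", json.dumps(products), ex=300)

def get_homepage_products():
    """Read path: the high-QPS homepage traffic only ever touches Redis."""
    cached = r.get("homepage:products")
    return json.loads(cached) if cached else []
```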

 

 

So combine these two scenarios and think about it: why do you use a cache in your own project?

Many students' projects don't have a real high-concurrency scenario. In that case don't force it; just lean on the high-performance angle: find the place where you cached the result of a complex query so that later requests are much faster and the user experience improves. If such a scenario exists, explain it; if it doesn't, you'd better come up with one, otherwise the answer falls flat.
 

Problem analysis

(1) What is the difference between Redis and Memcached?

Data types: Redis supports richer data types, and therefore more complex scenarios. Besides simple k/v string data, Redis also provides list, set, zset, hash and other data structures, while Memcached only supports simple string values (a short example follows this list).

Persistence: both keep data in memory for efficiency, but Redis can periodically write its dataset to disk or append every modification to a log file, and builds master-slave replication on top of that, whereas Memcached keeps all of its data in memory only.

Cluster mode: Memcached has no native cluster mode; you have to rely on client-side sharding to spread data across nodes. Redis natively supports cluster mode.

Threading model: Memcached is multi-threaded with non-blocking IO multiplexing; Redis uses a single-threaded IO multiplexing model.
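A quick tour of those richer data types through redis-py (a hedged sketch; it assumes redis-py 3.5+ and the key names and values are invented for illustration):

```python
import redis  # assumes redis-py

r = redis.Redis(host="localhost", port=6379, db=0)

# list: e.g. a timeline of recent events
r.rpush("events:recent", "login", "purchase", "logout")
print(r.lrange("events:recent", 0, -1))

# set: unique visitors, duplicates are ignored automatically
r.sadd("visitors:today", "user:1", "user:2", "user:1")
print(r.smembers("visitors:today"))

# zset: a leaderboard kept sorted by score
r.zadd("leaderboard", {"alice": 120, "bob": 95})
print(r.zrange("leaderboard", 0, -1, withscores=True))

# hash: an object with named fields under one key
r.hset("user:1", mapping={"name": "alice", "age": "30"})
print(r.hgetall("user:1"))
```

None of this maps cleanly onto Memcached, where every value is just an opaque string.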


(2) Why is the single-threaded Redis model still so efficient?

1) Pure in-memory operations

2) The core is a non-blocking IO multiplexing mechanism (see the sketch below)

3) Being single-threaded actually avoids the frequent context switching that a multi-threaded design would incur
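The multiplexing point is easier to see in code. Below is a toy single-threaded event loop built on Python's selectors module: one thread asks the OS multiplexer (epoll/kqueue/select) which sockets are ready and only then handles them, so no thread ever blocks waiting on a single client and there is no context switching between worker threads. This only illustrates the mechanism, it is not Redis's actual implementation; the port number and the fake +OK reply are arbitrary.

```python
import selectors
import socket

sel = selectors.DefaultSelector()

def accept(server_sock):
    # A new client connected: watch its socket for readable data.
    conn, _ = server_sock.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, handle)

def handle(conn):
    data = conn.recv(1024)
    if data:
        conn.sendall(b"+OK\r\n")  # pretend every command succeeds
    else:
        sel.unregister(conn)
        conn.close()

server = socket.socket()
server.bind(("localhost", 7000))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)

while True:
    # Block until at least one socket is ready, then dispatch its callback.
    for key, _ in sel.select():
        key.data(key.fileobj)
```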

 


Origin www.cnblogs.com/zhaosq/p/12655722.html