7 application scenarios of Redis

One: Cache (hot data)

For hot data (data that is queried frequently but rarely modified or deleted), a Redis cache is the first choice. Its very high QPS and strong stability are hard to find in similar tools, and compared with Memcached it also offers a rich set of data types. In addition, the in-memory data can optionally be persisted via AOF or RDB, so Redis can serve as a pure in-memory cache, as persistent storage, or as a mix of the two.

In practice, note that many people use Spring AOP to automatically populate and clear the Redis cache. The process is roughly as follows:

  • On a read, query Redis first; if the data is there, use it and skip the database query; if not, query the database and then insert the result into Redis

  • Before updating or deleting a row in the database, check whether the data exists in Redis; if it does, delete it from Redis first, and then update or delete the row in the database (a minimal cache-aside sketch follows this list)
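A minimal cache-aside sketch of the two steps above, using the redis-py client; the key naming scheme and the `query_db`/`update_db` callbacks are hypothetical stand-ins for the database layer:

```python
import json
import redis

r = redis.Redis(host="localhost", port=6379, db=0, decode_responses=True)

def get_article(article_id, query_db):
    """Read path: try Redis first, fall back to the database and repopulate."""
    key = f"article:{article_id}"            # hypothetical key naming scheme
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)
    row = query_db(article_id)               # hits the database only on a cache miss
    if row is not None:
        r.setex(key, 3600, json.dumps(row))  # cache the result for an hour
    return row

def update_article(article_id, new_row, update_db):
    """Write path: delete the cached copy first, then update the database."""
    r.delete(f"article:{article_id}")
    update_db(article_id, new_row)
```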

The above flow is basically fine when concurrency is low, but under high concurrency watch out for the following scenario:

To perform an update, the data is first deleted from Redis. At that moment another thread runs a query, finds nothing in Redis, executes the SQL instantly, and inserts the (still old) row back into Redis. Control then returns to the original update statement, which has no idea that the other thread just repopulated the cache with stale data. That stale entry will sit in Redis until the next update or delete of the same key.

Two: Counter

Applications such as counting page clicks. Because Redis executes commands on a single thread, concurrent increments do not race with each other, the count stays accurate, and each operation completes in milliseconds.

Command: INCRBY

Of course, don't forget persistence; after all, Redis keeps the data in memory.
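A minimal click-counter sketch with redis-py; the `page:{id}:clicks` key pattern is just an example:

```python
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def record_click(page_id, amount=1):
    # INCRBY is atomic on the server, so concurrent clicks never lose updates.
    return r.incrby(f"page:{page_id}:clicks", amount)

def read_clicks(page_id):
    value = r.get(f"page:{page_id}:clicks")
    return int(value) if value is not None else 0
```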

Three: Queue

  • A Redis list can act as a simple message system, similar in spirit to ActiveMQ, RocketMQ and other tools. For lightweight use it works fine, but if the requirements on delivery guarantees and data consistency are high, a dedicated system such as RocketMQ should be used.

  • Because a push returns the length of the list after the element is added, you know the position of the element you just enqueued, so you can tell, for example, which place in line a user's request holds.

  • A Redis list not only turns concurrent requests into a serial stream; depending on which end you push to and pop from, it can be used as either a queue (FIFO) or a stack (LIFO), as sketched below.
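A minimal sketch of both shapes with redis-py; the `jobs` key name is arbitrary:

```python
import redis

r = redis.Redis(host="localhost", port=6379, db=0, decode_responses=True)

def enqueue(job):
    # LPUSH returns the list length after the push, i.e. this job's position in line.
    return r.lpush("jobs", job)

def dequeue_fifo():
    # Pop from the opposite end -> first in, first out (queue behaviour).
    return r.rpop("jobs")

def pop_lifo():
    # Pop from the same end we pushed to -> last in, first out (stack behaviour).
    return r.lpop("jobs")
```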

Four: Bit operations (big data processing)

Bitmaps are used in scenarios involving hundreds of millions of records, such as daily check-ins for hundreds of millions of users, counting repeat logins, or checking whether a given user is online.

Think about Tencent's one billion users: you need to know within a few milliseconds whether a given user is online. What can you do? Don't create a key for each user and record the status one by one (estimate the memory that would take; it is frightening, and there are many similar requirements, so the cost would be enormous). Instead, use bit operations here: the SETBIT, GETBIT and BITCOUNT commands.
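As a rough estimate (assuming, for illustration, that user ids are used directly as bit offsets): one billion users need 1,000,000,000 bits ÷ 8 ≈ 125,000,000 bytes, roughly 120 MB for the entire online/offline map, while a separate key per user carries tens of bytes of per-key overhead and would run to tens of gigabytes.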

The principle is:

Build a sufficiently long bit array in Redis. Each element can only take the value 0 or 1, and the offset (index) into the array represents the user id (which must be a number) from the example above. This array of hundreds of millions of bits, addressed by offset and holding 0/1 values, is enough to implement all of the scenarios mentioned. The commands used are SETBIT, GETBIT and BITCOUNT.
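A minimal online-status sketch with redis-py, assuming the user id itself is used as the bit offset and `online:users` is a hypothetical key holding the whole bitmap:

```python
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

ONLINE_KEY = "online:users"   # hypothetical key for the entire bitmap

def set_online(user_id, online=True):
    # SETBIT flips the single bit at offset user_id to 1 (online) or 0 (offline).
    r.setbit(ONLINE_KEY, user_id, 1 if online else 0)

def is_online(user_id):
    # GETBIT reads one bit, regardless of how many users exist.
    return r.getbit(ONLINE_KEY, user_id) == 1

def count_online():
    # BITCOUNT counts all bits set to 1, i.e. how many users are currently online.
    return r.bitcount(ONLINE_KEY)
```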

Five: Distributed lock and single-threaded mechanism

  • To filter out repeated requests from the front end (similar situations generalize easily), Redis can act as a gatekeeper: hash the request IP, parameters and endpoint into a key and store it in Redis with an expiry (making the request idempotent). When the next request arrives, look that key up first to check whether it is a repeated submission within the window (see the sketch after this list).

  • Flash-sale ("seckill") systems: Redis's single-threaded command execution keeps the stampede from overwhelming the database.

  • Globally incrementing ID generation, in the same spirit as the flash-sale case.
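A minimal sketch of the duplicate-request filter from the first bullet, using redis-py; the fingerprint format and the 10-second window are assumptions for illustration:

```python
import hashlib
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def is_duplicate(ip, path, params, window_seconds=10):
    # Fingerprint the request by IP + endpoint + parameters.
    fingerprint = hashlib.sha256(f"{ip}|{path}|{params}".encode()).hexdigest()
    key = f"req:{fingerprint}"
    # SET ... NX EX is atomic: it succeeds only for the first request in the window,
    # so a falsy return value here means this submission is a repeat.
    first = r.set(key, "1", nx=True, ex=window_seconds)
    return not first
```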

Six: Latest list

For example, the list of latest items on a news page: when the total row count is huge, try not to rely on low-efficiency queries such as select a from A limit 10; instead build a Redis List with the LPUSH command and insert items in order. But what if the list has been evicted from memory? Also simple: if the key cannot be found, fall back to MySQL, query the latest rows, and re-initialize the List in Redis.
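A minimal latest-news sketch with redis-py; the key name, the list cap, and the `load_latest_from_mysql` callback (expected to return ids newest-first) are hypothetical:

```python
import redis

r = redis.Redis(host="localhost", port=6379, db=0, decode_responses=True)

LATEST_KEY = "news:latest"   # hypothetical key name
MAX_ITEMS = 100

def push_news(news_id):
    r.lpush(LATEST_KEY, news_id)
    r.ltrim(LATEST_KEY, 0, MAX_ITEMS - 1)    # keep only the newest MAX_ITEMS ids

def latest_news(count, load_latest_from_mysql):
    ids = r.lrange(LATEST_KEY, 0, count - 1)
    if not ids:
        # Cache was evicted or never built: rebuild it from MySQL.
        ids = load_latest_from_mysql(MAX_ITEMS)
        if ids:
            r.rpush(LATEST_KEY, *ids)        # newest-first input keeps the newest id at the head
    return ids[:count]
```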

Seven: Leaderboard

Whoever has the higher score ranks higher. Command: ZADD (backed by the sorted set data type).
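A minimal leaderboard sketch with redis-py; the `leaderboard` key name is arbitrary:

```python
import redis

r = redis.Redis(host="localhost", port=6379, db=0, decode_responses=True)

def add_score(user_id, score):
    # ZADD stores the member with its score in a sorted set.
    r.zadd("leaderboard", {user_id: score})

def top(n=10):
    # ZREVRANGE returns members ordered by score, highest first.
    return r.zrevrange("leaderboard", 0, n - 1, withscores=True)
```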
