Common Redis usage scenarios in a project


I recently wrote a scaffold that uses Redis in quite a few places, so here is a summary of the common usage scenarios.

01 Cache

> set User:1:name shanyue EX 100 NX
OK
> get User:1:name
"shanyue"

Caching is the most common Redis usage scenario. It takes nothing more than SET / GET to implement, but there are a few points to consider:

(1) how best to set the cache and its expiry

(2) how to keep the cache consistent with the upstream data source

(3) how to deal with cache penetration, breakdown, and avalanche problems
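In application code, the basic read-through pattern looks roughly like the sketch below. This assumes the Python redis-py client, and load_user_name_from_db is a made-up stand-in for the real database query, not part of the original example:

import redis

r = redis.Redis()  # assumes a local Redis instance

def load_user_name_from_db(user_id):
    return "shanyue"  # stand-in for the real database lookup

def get_user_name(user_id):
    key = f"User:{user_id}:name"
    cached = r.get(key)
    if cached is not None:
        return cached.decode()              # cache hit
    name = load_user_name_from_db(user_id)  # cache miss: go to the database
    # EX sets the expiry; NX avoids overwriting a value another writer just set
    r.set(key, name, ex=100, nx=True)
    return name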

02 Session: user login and verification codes

> set 5d27e60e6fb9a07f03576687 '{"id": 10086, "role": "ADMIN"}' EX 7200
OK
> get 5d27e60e6fb9a07f03576687
"{\"id\": 10086, \"role\": \"ADMIN\"}"

This is also a very common scenario. Compared with a stateful session you can also consider JWT; each has its pros and cons.
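A minimal sketch of the same idea from application code, again assuming redis-py (the field names and TTL simply follow the example above):

import json
import uuid
import redis

r = redis.Redis()

def create_session(user_id, role, ttl=7200):
    sid = uuid.uuid4().hex                       # session token handed to the client
    r.set(sid, json.dumps({"id": user_id, "role": role}), ex=ttl)
    return sid

def load_session(sid):
    raw = r.get(sid)
    return json.loads(raw) if raw else None     # None means expired or never existed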

03 message queue

> LPUSH UserEmailQueue 1 2 3 4
> RPOP UserEmailQueue
1
> RPOP UserEmailQueue
2

Redis can be treated as a distributed queue and used as a message queue: producers push data in at one end, and consumers pop it out at the other (LPUSH / RPOP, or RPUSH / LPOP). It has some shortcomings, and for some systems they may be fatal, but in scenarios where losing a few messages does not matter it is still worth considering:

(1) there is no ack, so a message may be lost

(2) Redis persistence needs to be configured
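As a sketch of the producer/consumer pair under those caveats (redis-py assumed; BRPOP is used instead of polling RPOP so the consumer blocks until a job arrives; send_email is a stand-in for the real work):

import redis

r = redis.Redis()

def send_email(user_id):
    print(f"sending email to user {user_id}")   # stand-in for the real mail job

def produce(user_id):
    # producer: push a job onto the left end of the list
    r.lpush("UserEmailQueue", user_id)

def consume():
    while True:
        # consumer: block until a job is available at the right end
        _queue, user_id = r.brpop("UserEmailQueue")
        send_email(user_id)  # if the worker crashes here, the job is lost (no ack)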

04 Deduplication filter (dupefilter)

> sadd UrlSet http://1
(integer) 1
> sadd UrlSet http://2
(integer) 1
> sadd UrlSet http://2
(integer) 0
> smembers UrlSet
1) "http://1"
2) "http://2"

scrapy-redis, a distributed crawler framework, uses Redis's Set data structure to deduplicate the URLs that are about to be crawled.

# https://github.com/rmax/scrapy-redis/blob/master/src/scrapy_redis/dupefilter.py
def request_seen(self, request):
    """Returns True if request was already seen.

    Parameters
    ----------
    request : scrapy.http.Request

    Returns
    -------
    bool

    """
    fp = self.request_fingerprint(request)
    added = self.server.sadd(self.key, fp)
    return added == 0

However, when there are too many URLs, memory usage becomes an issue.

05 Distributed lock

set Lock:User:10086 06be97fc-f258-4202-b60b-8d5412dd5605 EX 60 NX

# Release the lock with a short Lua script
if redis.call("get",KEYS[1]) == ARGV[1] then
    return redis.call("del",KEYS[1])
else
    return 0
end

This is the simplest, single-instance version of a distributed lock. The key points are:

(1) EX means the lock expires and is eventually released

(2) NX guarantees atomicity (the key is only set if it does not already exist)

When unlocking, compare the UUID generated for this acquisition so that you do not release a lock held by someone else.

If you use a distributed lock to solve performance problems, such as keeping a distributed scheduled task from running more than once (make the task idempotent as well), and given that a single Redis node going down is unlikely, this single-instance distributed lock is good enough.
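Putting the two halves together, a sketch with redis-py (the key name, TTL, and Lua release script follow the example above; do_exclusive_work is a made-up placeholder for the critical section):

import uuid
import redis

r = redis.Redis()

RELEASE_LUA = """
if redis.call("get", KEYS[1]) == ARGV[1] then
    return redis.call("del", KEYS[1])
else
    return 0
end
"""
release_script = r.register_script(RELEASE_LUA)

def do_exclusive_work():
    pass  # placeholder for the critical section

def with_user_lock(user_id, ttl=60):
    key = f"Lock:User:{user_id}"
    token = str(uuid.uuid4())
    # SET ... NX EX: acquire only if the key does not exist, with an expiry as a safety net
    if not r.set(key, token, nx=True, ex=ttl):
        return False            # someone else holds the lock
    try:
        do_exclusive_work()
    finally:
        # release only if the token still matches, via the atomic Lua script
        release_script(keys=[key], args=[token])
    return True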

06 Rate Limit

Rate limiting means allowing only a certain number of requests through per unit of time. There are two key parameters:

(1) window, the unit of time

(2) max, the maximum number of requests

The most common scenario: an SMS verification code may only be sent twice per minute.

FUNCTION LIMIT_API_CALL(ip):
current = GET(ip)
IF current != NULL AND current > 10 THEN
    ERROR "too many requests per second"
ELSE
    value = INCR(ip)
    IF value == 1 THEN
        EXPIRE(ip,1)
    END
    PERFORM_API_CALL()
END

A counter like this can be used to rate-limit API requests, but a few issues need attention:

(1) measured over a smooth sliding window, the worst case can let through twice the allowed number of requests

(2) race conditions

To address these, you can apply further limits in code based on the key's TTL, or keep a LIST of the timestamps of incoming requests and filter them in real time. Below is a rate limiter fragment implemented in Node.

// First fragment: atomically create the counter if it does not exist,
// add the consumed points, and read the remaining TTL in one MULTI/EXEC
this.client
  .multi()
  .set(rlKey, 0, 'EX', secDuration, 'NX')
  .incrby(rlKey, points)
  .pttl(rlKey)
  .exec((err, res) => {
    if (err) {
      return reject(err);
    }
    return resolve(res);
  })

// Second fragment: based on the processed result, decide whether to reject
// the request, delay it (to smooth traffic), or let it through
if (res.consumedPoints > this.points) {
  // ...
} else if (this.execEvenly && res.msBeforeNext > 0 && !res.isFirstInDuration) {
  // ...
  setTimeout(resolve, delay, res);
} else {
  resolve(res);
}

07 Distributed WebSocket

Redis PUB/SUB can be used for communication between WebSocket servers.
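A rough sketch of that idea with redis-py (the channel name and the broadcast_to_local_clients helper are made up): every WebSocket server subscribes to a shared channel, so a message published by any one of them reaches the client connections held by all of them.

import redis

r = redis.Redis()
CHANNEL = "ws:broadcast"  # assumed channel name

def broadcast_to_local_clients(data):
    print("forwarding to local websocket clients:", data)  # stand-in for real socket writes

def publish(message):
    # called by whichever server receives the message from one of its clients
    r.publish(CHANNEL, message)

def relay_loop():
    # each WebSocket server runs this loop and forwards messages to its own clients
    pubsub = r.pubsub()
    pubsub.subscribe(CHANNEL)
    for item in pubsub.listen():
        if item["type"] == "message":
            broadcast_to_local_clients(item["data"])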

