OpenResty: cache synchronization with Redis

Preface

"Everything is the devil single cache, instead of being destroyed, not as good as another dance."

The cause

A while ago our uAuth service hit a bug: when a user changed their password, the original token should have become invalid, yet in production the old token could often still be used to access the service normally. In the test environment, however, we could not reproduce it no matter what we tried.

Later, after a careful analysis of the source code, the cause turned out to be that the token is stored in the OpenResty cache. When a token is invalidated, only one of the n servers online actually performs the invalidation, while the caches on the other n-1 machines still hold a valid copy.

Thinking

Cache inconsistency: this is a problem you are likely to run into in many scenarios, so what can be done? There are two options:

  1. Drop the OpenResty cache entirely, changing the design from the three-layer openresty / redis / mysql structure to a two-layer redis / mysql structure.

  2. Design a cache synchronization mechanism for OpenResty and solve the problem at its root.

Option 1 is indeed simple, direct, and effective, but:

If I'm using OpenResty but not allowed to use its cache, what makes me any different from a salted fish?

So I chose the second way. The question then becomes: how do we design the synchronization mechanism? The classic pattern for this kind of synchronization is publish / subscribe, and the first thing that comes to mind is naturally Kafka. But looking around the OpenResty ecosystem, lua-resty-kafka only supports producing, and it is mostly used for log collection. Redis, on the other hand, has a classic subscribe example in lua-resty-redis, so fine, Redis it is.

Coding it up

Wrapping the publish / subscribe operations

Since we are going to synchronize, let's get to it. Step one: wrap the pub/sub operations in a redis_message.lua:

local redis_c = require "resty.redis"
local cjson = require "cjson.safe"

local M = {}
local mt = { __index = M }

function M:new(cfg)
    local ins = {
        timeout = cfg.timeout or 60000,
        pool = cfg.pool or { maxIdleTime = 120000, size = 200 },
        database = cfg.database or 0,
        host = cfg.host,
        port = cfg.port,
        password = cfg.password or ""
    }
    setmetatable(ins, mt)
    return ins
end

-- get a connection and authenticate only when it is not a reused pool connection
local function get_con(cfg)
    local red = redis_c:new()
    red:set_timeout(cfg.timeout)
    local ok, err = red:connect(cfg.host, cfg.port)
    if not ok then
        return nil
    end
    local count, err = red:get_reused_times()
    if 0 == count then
        ok, err = red:auth(cfg.password)
    elseif err then
        return nil
    end

    red:select(cfg.database)
    return red
end

-- put the connection back into the keepalive pool, closing it on failure
local function keep_alive(red, cfg)
    local ok, err = red:set_keepalive(cfg.pool.maxIdleTime, cfg.pool.size)
    if not ok then
        red:close()
    end
    return true
end

function M:subscribe(key, func)
    local co = coroutine.create(function()
        local red = get_con(self)
        local ok, err = red:subscribe(key)
        if not ok then
            return err
        end
        local flag = true
        while flag do
            local res, err = red:read_reply()
            if not err and res[1] == "message" then
                local obj = cjson.decode(res[3])
                -- the callback decides whether to keep subscribing
                flag = func(obj.msg)
            end
        end
        red:unsubscribe(key)
        keep_alive(red, self)
    end)
    coroutine.resume(co)
end

function M:publish(key, msg)
    local red = get_con(self)
    local obj = {}
    obj.type = type(msg)
    obj.msg = msg
    local ok, err = red:publish(key, cjson.encode(obj))
    keep_alive(red, self)
    if not ok then
        return false
    else
        return true
    end
end

return M

 

A few points to note about this redis_message.lua:

  1. The Redis subscribe operation is SUBSCRIBE channel.

  2. What a subscriber receives is an array of the form [event, channel, data], so to keep things simple we only handle the message event.

  3. Our subscribe wrapper takes a callback function from the caller; when that function returns false, the subscription loop stops (a minimal usage sketch follows this list).
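
To make the callback contract concrete, here is a minimal usage sketch; the Redis address and the log line are placeholders, and the commented reply shape is what the wrapper sees from read_reply before it decodes element 3:

local redis_message = require "libs.redis.redis_message"

local rm = redis_message:new({ host = "127.0.0.1", port = 6379 })

rm:subscribe("lua:tiny:cache:sync", function(msg)
    -- internally, red:read_reply() returned something shaped like
    -- { "message", "lua:tiny:cache:sync", '{"type":"table","msg":{...}}' }
    -- and the wrapper passed us the decoded obj.msg
    ngx.log(ngx.INFO, "sync message for key: ", tostring(msg.key))
    return true   -- keep subscribing; return false to end the loop
end)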

Designing the message format

To keep cache operations generic, the message format is designed as:

local msg = {key = key, cache = cache_name, op = op, data = data, timeout = timeout}

In this message:

  1. cache is the name of the shared cache (lua_shared_dict) we want to synchronize.

  2. key is the cache key whose value is being synchronized.

  3. op has three possible values: set, expire, and del.

  4. Depending on the op, data and timeout (used to set an expiry) can be passed in as needed; see the example messages after this list.
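
As an illustration, assuming a shared dict named my_cache and a token key (both placeholders), messages for the three operations would look like:

-- set: write data into the shared dict on every node
local set_msg    = { cache = "my_cache", key = "token:abc", op = "set", data = { uid = 1001 } }
-- del: remove the key from every node's shared dict
local del_msg    = { cache = "my_cache", key = "token:abc", op = "del" }
-- expire: rewrite the value with a timeout in seconds so it expires everywhere
local expire_msg = { cache = "my_cache", key = "token:abc", op = "expire", data = { uid = 1001 }, timeout = 5 }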

Adding the configuration

Add a configuration file; in my case it is saved as app/config/cacheSync:

return {
    redis = {
        host = "10.10.10.111",
        port = 6379,
        database = 3,
        password = "Pa88word"
    },
    queueName = 'lua:tiny:cache:sync',
}

 

Wrapping the publish operation at the service layer

With the base library wrapped and the message format designed, we can now wrap the business-layer publish operation, namely:

local cfg = require('app.config.cacheSync')               -- sync configuration, including the redis server
local redis_message = require('libs.redis.redis_message') -- the redis_message wrapper from above

local function async_cache(cache_name, key, op, data, timeout)
    local rm = redis_message:new(cfg.redis)
    local message = {key = key, cache = cache_name, op = op}
    if data then
        message.data = data
    end
    if timeout then
        message.timeout = timeout
    end
    rm:publish(cfg.queueName, message)
end
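
As a sketch of how the business layer might use this for the original token bug, assuming the module above exports async_cache and the shared dict is named token_cache (both assumptions, not something the original code spells out):

-- hypothetical module path; the snippet above keeps async_cache as a local function
local sync = require('app.services.cache_sync')

-- after the password change commits: drop the token on this node right away,
-- then broadcast a del so the other n-1 nodes drop it from their shared dict too
-- (the broadcast also reaches this node again; deleting twice is harmless)
ngx.shared.token_cache:delete(token)
sync.async_cache('token_cache', token, 'del')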

 

Wrapping the subscribe operation

The subscribe operation is registered in the init_worker_by_lua_file lifecycle phase (ngx.timer.at and ngx.worker.id are available there). Without further ado, the code is as follows:

local cfg = require('app.config.cacheSync')               -- sync configuration, including the redis server
local redis_message = require('libs.redis.redis_message') -- the redis_message wrapper from above
local cjson = require('cjson.safe')

local function handler(msg)
    local key = msg.key
    local shared = ngx.shared[msg.cache]
    if shared ~= nil and shared ~= ngx.null then
        if msg.op == 'del' then
            shared:delete(key)
        elseif msg.op == 'set' then
            local data = cjson.encode(msg.data)
            if data == nil or data == ngx.null then
                data = msg.data
            end
            local ok, err = shared:set(key, data)
            if err then
                ngx.log(ngx.ERR, err)
            end
        elseif msg.op == 'expire' then
            shared:set(key, cjson.encode(msg.data), msg.timeout)
        end
    end
    return true
end

-- only worker 0 subscribes, so each node consumes the channel exactly once
local worker_id = ngx.worker.id()
if worker_id == 0 then
    ngx.timer.at(0, function()
        local message = redis_message:new(cfg.redis)
        message:subscribe(cfg.queueName, handler)
    end)
end

 

OK, at this point we have cache synchronization working in OpenResty, along with a preliminary set of reusable components. Of course, a few tips:

  1. We need to set lua_socket_log_errors off; in the nginx conf file, otherwise the error log will keep reporting timeout errors (the subscriber's read_reply times out whenever the channel is idle); see the conf sketch after these tips.

  2. The reliability of Redis itself has not been considered here.
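
For reference, the nginx.conf side of all this might look roughly like the sketch below; the shared dict name, size, and file path are assumptions, so adjust them to your own layout:

http {
    # the shared dict that handler() reaches via ngx.shared[msg.cache]
    lua_shared_dict token_cache 10m;

    # silence the periodic timeout errors from the subscriber's read_reply
    lua_socket_log_errors off;

    # runs the subscription code above in every worker; only worker 0 actually subscribes
    init_worker_by_lua_file app/init/cache_sync.lua;
}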


Origin www.cnblogs.com/ashaff/p/11648577.html