Redis 6.0 Source Code Reading Notes (1): Server Startup and Command Execution

Preface

This article is based on Redis 6.0; interested readers can download the source code from GitHub. Redis is written in C, and its source is full of functions several hundred lines long, which can be a reading obstacle for those used to object-oriented programming, though it is all understandable with a careful read. The overall flow of Redis server startup and command execution is shown in the figure below; the rest of this article analyzes the code behind this flow

[Figure: overall flow of Redis server startup and command execution]

1. Redis Server Startup Flow

Redis uses a typical single-Reactor, single-threaded event-driven model, so starting the server is essentially the process of setting up the event loop. The entry point for server startup is the server.c#main() function; a minimal sketch of this event-loop pattern is shown right below

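Before diving into the source, a minimal self-contained sketch of the pattern may help (illustrative C only, not Redis code): one thread blocks in a multiplexing call and dispatches every ready descriptor to its handler, which is in essence what ae.c does on top of epoll/kqueue/select.

    /* Minimal single-Reactor, single-threaded event loop (illustrative
     * sketch only, not Redis code). */
    #include <poll.h>
    #include <stdio.h>
    #include <unistd.h>

    typedef void (*event_handler)(int fd);

    static void on_readable(int fd) {
        char buf[256];
        ssize_t n = read(fd, buf, sizeof(buf));
        if (n > 0) printf("fd %d: read %zd bytes\n", fd, n);
    }

    int main(void) {
        struct pollfd fds[1] = {{ .fd = STDIN_FILENO, .events = POLLIN }};
        event_handler handlers[1] = { on_readable };

        for (;;) {                          /* the event loop: cf. aeMain() */
            int n = poll(fds, 1, 1000);     /* wait for events: cf. aeApiPoll() */
            if (n <= 0) continue;           /* timeout: timer events would run here */
            for (int i = 0; i < 1; i++)
                if (fds[i].revents & POLLIN)
                    handlers[i](fds[i].fd); /* dispatch: cf. fe->rfileProc(...) */
        }
    }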
  1. The server.c#main() function is very long, so only a few of the important function calls are excerpted here for analysis

    1. initServerConfig()
      Initializes the server's various configuration settings. This article focuses on the populateCommandTable() call inside it, which stores the redisCommand struct of each Redis command into the server's data structures
    2. loadServerConfig()
      Loads the configuration file and initializes the related server settings; for example, the Redis listening port is configured in this step
    3. initServer()
      Mainly creates the event loop instance and registers the server's handler for socket events as well as the handler for timer events
    4. aeMain()
      Starts the event loop, which begins accepting client connections and processing client commands
    int main(int argc, char **argv) {
     ......
    
     initServerConfig();
    
     ......
     
     loadServerConfig(configfile,options);
    
     ......
    
     initServer();
     
     ......
     
     aeMain(server.el);
     
     ......
    
    }   
    
  2. server.c#initServerConfig() initializes the server's various configuration settings; the call this article cares most about is populateCommandTable()

    void initServerConfig(void) {
     ......
     
     server.hz = CONFIG_DEFAULT_HZ; /* Initialize it ASAP, even if it may get
                                       updated later after loading the config.
                                       This value may be used before the server
                                       is initialized. */
     ......
     
     server.commands = dictCreate(&commandTableDictType,NULL);
     server.orig_commands = dictCreate(&commandTableDictType,NULL);
     populateCommandTable();
     server.delCommand = lookupCommandByCString("del");
     server.multiCommand = lookupCommandByCString("multi");
     server.lpushCommand = lookupCommandByCString("lpush");
     server.lpopCommand = lookupCommandByCString("lpop");
     server.rpopCommand = lookupCommandByCString("rpop");
     
     ......
    }
    
  3. server.c#populateCommandTable() parses the command list hard-coded in the source and stores it into server.commands (a simplified lookup sketch follows the source below).
    Note that redisCommandTable is a hard-coded array whose entries are stored as follows:

    struct redisCommand redisCommandTable[] = {
        {"get",getCommand,2,
         "read-only fast @string",
         0,NULL,1,1,1,0,0,0},

        ......
    };
    

    redisCommand is the storage structure for a single command: the name field is the command name, and proc is a function pointer that establishes the mapping from the command name get to its handler getCommand

    struct redisCommand {
    char *name;
    redisCommandProc *proc;
    int arity;
    char *sflags;   /* Flags as string representation, one char per flag. */
    uint64_t flags; /* The actual flags, obtained from the 'sflags' field. */
    /* Use a function to determine keys arguments in a command line.
    * Used for Redis Cluster redirect. */
    redisGetKeysProc *getkeys_proc;
    /* What keys should be loaded in background when calling this command? */
    int firstkey; /* The first argument that's a key (0 = no keys) */
    int lastkey;  /* The last argument that's a key */
    int keystep;  /* The step between first and last key */
    long long microseconds, calls;
    int id;     /* Command ID. This is a progressive ID starting from 0 that
                  is assigned at runtime, and is used in order to check
                  ACLs. A connection is able to execute a given command if
                  the user associated to the connection has this command
                  bit set in the bitmap of allowed commands. */
    };
    
    void populateCommandTable(void) {
     int j;
     int numcommands = sizeof(redisCommandTable)/sizeof(struct redisCommand);
    
     for (j = 0; j < numcommands; j++) {
         struct redisCommand *c = redisCommandTable+j;
         int retval1, retval2;
    
         /* Translate the command string flags description into an actual
          * set of flags. */
         if (populateCommandTableParseFlags(c,c->sflags) == C_ERR)
             serverPanic("Unsupported command flag");
    
         c->id = ACLGetCommandID(c->name); /* Assign the ID used for ACL. */
         retval1 = dictAdd(server.commands, sdsnew(c->name), c);
         /* Populate an additional dictionary that will be unaffected
          * by rename-command statements in redis.conf. */
         retval2 = dictAdd(server.orig_commands, sdsnew(c->name), c);
         serverAssert(retval1 == DICT_OK && retval2 == DICT_OK);
     }
    }
    
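    To make the mapping concrete, here is a simplified sketch (not the actual implementation, which stores commands in a dict keyed by sds strings) of how a table of name/handler pairs resolves a command name to its handler; the demo handler names are hypothetical:

    /* Simplified sketch of the name -> handler mapping that
     * populateCommandTable() builds; a linear scan stands in for the
     * dict lookup, and the demo handlers are hypothetical. */
    #include <stdio.h>
    #include <string.h>
    #include <strings.h>

    typedef void (*commandProc)(void);

    static void getCommandDemo(void) { puts("would execute GET"); }
    static void setCommandDemo(void) { puts("would execute SET"); }

    struct demoCommand {
        const char *name;
        commandProc proc;
    };

    static struct demoCommand demoTable[] = {
        {"get", getCommandDemo},
        {"set", setCommandDemo},
    };

    static commandProc lookupDemo(const char *name) {
        for (size_t j = 0; j < sizeof(demoTable)/sizeof(demoTable[0]); j++)
            if (strcasecmp(name, demoTable[j].name) == 0)  /* case-insensitive, as in Redis */
                return demoTable[j].proc;
        return NULL;
    }

    int main(void) {
        commandProc proc = lookupDemo("GET");
        if (proc) proc();   /* same shape as c->cmd->proc(c) in call() */
        return 0;
    }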
  4. config.c#loadServerConfig() mainly loads the configuration file at server startup: it reads the file contents into a single string, then calls loadServerConfigFromString() to parse that string into the server's configuration fields (a parsing sketch follows the source below)

    void loadServerConfig(char *filename, char *options) {
     sds config = sdsempty();
     char buf[CONFIG_MAX_LINE+1];
    
     /* Load the file content */
 if (filename) {
         FILE *fp;
    
     if (filename[0] == '-' && filename[1] == '\0') {
             fp = stdin;
     } else {
         if ((fp = fopen(filename,"r")) == NULL) {
                 serverLog(LL_WARNING,
                     "Fatal error, can't open config file '%s'", filename);
                 exit(1);
             }
         }
         while(fgets(buf,CONFIG_MAX_LINE+1,fp) != NULL)
             config = sdscat(config,buf);
         if (fp != stdin) fclose(fp);
     }
     /* Append the additional options */
 if (options) {
         config = sdscat(config,"\n");
         config = sdscat(config,options);
     }
     loadServerConfigFromString(config);
     sdsfree(config);
    }
    
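    As a rough illustration of what loadServerConfigFromString() then does with that string (a sketch under simplifying assumptions; the real parser uses sds helpers and supports quoting, comments, and many more directives), each line is split into a directive name and its arguments:

    /* Rough sketch of line-oriented config parsing in the spirit of
     * loadServerConfigFromString(); the directive handling here is
     * hypothetical and heavily simplified. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <strings.h>

    static void parseConfigLine(const char *line) {
        char name[64], arg[64];
        /* skip comments and lines without a "name value" shape */
        if (line[0] == '#' || sscanf(line, "%63s %63s", name, arg) != 2)
            return;
        if (strcasecmp(name, "port") == 0)
            printf("would set server.port = %d\n", atoi(arg));
        else if (strcasecmp(name, "maxclients") == 0)
            printf("would set server.maxclients = %d\n", atoi(arg));
    }

    int main(void) {
        const char *lines[] = { "# demo config", "port 6379", "maxclients 10000" };
        for (size_t j = 0; j < sizeof(lines)/sizeof(lines[0]); j++)
            parseConfigLine(lines[j]);
        return 0;
    }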
  5. The body of server.c#initServer() is very long, but the main logic can be summarized in the following steps. Most of them, such as listening on ports and creating the redisDb structures, are fairly straightforward, so this article only takes the aeCreateFileEvent() implementation as an example for analysis

    1. Initialize the server's configuration fields, including the timer task frequency, the client linked list, the Slave node linked list, and so on
    2. Call aeCreateEventLoop() to create the event loop instance
    3. Call listenToPort() to bind the server's listening socket
    4. Initialize the redisDb structures for the Redis databases, 16 of them by default
    5. Call aeCreateTimeEvent() to register serverCron() as the handler for timer events, which covers the periodic communication between master and slave nodes and between the nodes of a cluster
    6. Call aeCreateFileEvent() to register acceptTcpHandler() as the handler for readable socket events, used to handle new connections
    7. Call aeSetBeforeSleepProc() to set beforeSleep() as the operation performed before each round of event processing; among other things, this function evicts expired data
    void initServer(void) {
     ......
     
     /* Initialization after setting defaults from the config system. */
     server.aof_state = server.aof_enabled ? AOF_ON : AOF_OFF;
     server.hz = server.config_hz;
     server.pid = getpid();
     server.current_client = NULL;
     server.fixed_time_expire = 0;
     server.clients = listCreate();
     server.clients_index = raxNew();
     server.clients_to_close = listCreate();
     server.slaves = listCreate();
     server.monitors = listCreate();
     server.clients_pending_write = listCreate();
     server.clients_pending_read = listCreate();
     server.clients_timeout_table = raxNew();
     server.slaveseldb = -1; /* Force to emit the first SELECT command. */
     server.unblocked_clients = listCreate();
     server.ready_keys = listCreate();
     server.clients_waiting_acks = listCreate();
     server.get_ack_from_slaves = 0;
     server.clients_paused = 0;
     server.events_processed_while_blocked = 0;
     server.system_memory_size = zmalloc_get_memory_size();
     
     ......
     
     server.el = aeCreateEventLoop(server.maxclients+CONFIG_FDSET_INCR);
 if (server.el == NULL) {
         serverLog(LL_WARNING,
             "Failed creating the event loop. Error message: '%s'",
             strerror(errno));
         exit(1);
     }
     server.db = zmalloc(sizeof(redisDb)*server.dbnum);
    
     /* Open the TCP listening socket for the user commands. */
     if (server.port != 0 &&
         listenToPort(server.port,server.ipfd,&server.ipfd_count) == C_ERR)
         exit(1);
     if (server.tls_port != 0 &&
         listenToPort(server.tls_port,server.tlsfd,&server.tlsfd_count) == C_ERR)
         exit(1);
    
     /* Open the listening Unix domain socket. */
 if (server.unixsocket != NULL) {
         unlink(server.unixsocket); /* don't care if this fails */
         server.sofd = anetUnixServer(server.neterr,server.unixsocket,
             server.unixsocketperm, server.tcp_backlog);
     if (server.sofd == ANET_ERR) {
             serverLog(LL_WARNING, "Opening Unix socket: %s", server.neterr);
             exit(1);
         }
         anetNonBlock(NULL,server.sofd);
     }
    
     /* Abort if there are no listening sockets at all. */
 if (server.ipfd_count == 0 && server.tlsfd_count == 0 && server.sofd < 0) {
         serverLog(LL_WARNING, "Configured to not listen anywhere, exiting.");
         exit(1);
     }
    
     /* Create the Redis databases, and initialize other internal state. */
 for (j = 0; j < server.dbnum; j++) {
         server.db[j].dict = dictCreate(&dbDictType,NULL);
         server.db[j].expires = dictCreate(&keyptrDictType,NULL);
         server.db[j].expires_cursor = 0;
         server.db[j].blocking_keys = dictCreate(&keylistDictType,NULL);
         server.db[j].ready_keys = dictCreate(&objectKeyPointerValueDictType,NULL);
         server.db[j].watched_keys = dictCreate(&keylistDictType,NULL);
         server.db[j].id = j;
         server.db[j].avg_ttl = 0;
         server.db[j].defrag_later = listCreate();
         listSetFreeMethod(server.db[j].defrag_later,(void (*)(void*))sdsfree);
     }
     
     ......
    
     /* Create the timer callback, this is our way to process many background
      * operations incrementally, like clients timeout, eviction of unaccessed
      * expired keys and so forth. */
 if (aeCreateTimeEvent(server.el, 1, serverCron, NULL, NULL) == AE_ERR) {
         serverPanic("Can't create event loop timers.");
         exit(1);
     }
    
     /* Create an event handler for accepting new connections in TCP and Unix
      * domain sockets. */
 for (j = 0; j < server.ipfd_count; j++) {
         if (aeCreateFileEvent(server.el, server.ipfd[j], AE_READABLE,
             acceptTcpHandler,NULL) == AE_ERR)
         {
                 serverPanic(
                     "Unrecoverable error creating server.ipfd file event.");
             }
     }
     
     ......
     
     /* Register before and after sleep handlers (note this needs to be done
      * before loading persistence since it is used by processEventsWhileBlocked. */
     aeSetBeforeSleepProc(server.el,beforeSleep);
     aeSetAfterSleepProc(server.el,afterSleep);
    
     /* Open the AOF file if needed. */
 if (server.aof_state == AOF_ON) {
         server.aof_fd = open(server.aof_filename,
                                O_WRONLY|O_APPEND|O_CREAT,0644);
     if (server.aof_fd == -1) {
             serverLog(LL_WARNING, "Can't open the append-only file: %s",
                 strerror(errno));
             exit(1);
         }
     }
    
     /* 32 bit instances are limited to 4GB of address space, so if there is
      * no explicit limit in the user provided configuration we set a limit
      * at 3 GB using maxmemory with 'noeviction' policy'. This avoids
      * useless crashes of the Redis instance for out of memory. */
 if (server.arch_bits == 32 && server.maxmemory == 0) {
         serverLog(LL_WARNING,"Warning: 32 bit instance detected but no memory limit set. Setting 3 GB maxmemory limit with 'noeviction' policy now.");
         server.maxmemory = 3072LL*(1024*1024); /* 3 GB */
         server.maxmemory_policy = MAXMEMORY_NO_EVICTION;
     }
    
     if (server.cluster_enabled) clusterInit();
     replicationScriptCacheInit();
     scriptingInit(1);
     slowlogInit();
     latencyMonitorInit();
    }
    
  6. The internal logic of ae.c#aeCreateFileEvent() is simple: based on the AE_READABLE flag passed in, it assigns acceptTcpHandler() to the file event's read handler pointer rfileProc

    int aeCreateFileEvent(aeEventLoop *eventLoop, int fd, int mask,
         aeFileProc *proc, void *clientData)
    {
 if (fd >= eventLoop->setsize) {
         errno = ERANGE;
         return AE_ERR;
     }
     aeFileEvent *fe = &eventLoop->events[fd];
    
     if (aeApiAddEvent(eventLoop, fd, mask) == -1)
         return AE_ERR;
     fe->mask |= mask;
     if (mask & AE_READABLE) fe->rfileProc = proc;
     if (mask & AE_WRITABLE) fe->wfileProc = proc;
     fe->clientData = clientData;
     if (fd > eventLoop->maxfd)
         eventLoop->maxfd = fd;
     return AE_OK;
    }
    
  7. After server initialization completes, ae.c#aeMain() starts event processing in a while loop; its core function is aeProcessEvents()

    aeProcessEvents() handles two kinds of events: timer-triggered events and the read/write events triggered by socket I/O. Because Redis is single-threaded, the two kinds cannot be processed at the same time, so the function uses a time-slicing strategy. Simply put, it first computes the time difference T between now and the moment the nearest timer event fires, then calls aeApiPoll() with T as the socket polling timeout; within that window only socket read/write events are processed, and once the timeout expires, processTimeEvents() is called to handle the timer events (a distilled sketch follows the source below)

    void aeMain(aeEventLoop *eventLoop) {
     eventLoop->stop = 0;
 while (!eventLoop->stop) {
         aeProcessEvents(eventLoop, AE_ALL_EVENTS|
                                    AE_CALL_BEFORE_SLEEP|
                                    AE_CALL_AFTER_SLEEP);
     }
    }
    
    int aeProcessEvents(aeEventLoop *eventLoop, int flags)
    {
     int processed = 0, numevents;
    
     /* Nothing to do? return ASAP */
     if (!(flags & AE_TIME_EVENTS) && !(flags & AE_FILE_EVENTS)) return 0;
    
     /* Note that we want call select() even if there are no
      * file events to process as long as we want to process time
      * events, in order to sleep until the next time event is ready
      * to fire. */
     if (eventLoop->maxfd != -1 ||
     ((flags & AE_TIME_EVENTS) && !(flags & AE_DONT_WAIT))) {
         int j;
         aeTimeEvent *shortest = NULL;
         struct timeval tv, *tvp;
    
         if (flags & AE_TIME_EVENTS && !(flags & AE_DONT_WAIT))
             shortest = aeSearchNearestTimer(eventLoop);
     if (shortest) {
             long now_sec, now_ms;
    
             aeGetTime(&now_sec, &now_ms);
             tvp = &tv;
    
             /* How many milliseconds we need to wait for the next
              * time event to fire? */
             long long ms =
                 (shortest->when_sec - now_sec)*1000 +
                 shortest->when_ms - now_ms;
    
             if (ms > 0) {
                 tvp->tv_sec = ms/1000;
                 tvp->tv_usec = (ms % 1000)*1000;
             } else {
                 tvp->tv_sec = 0;
                 tvp->tv_usec = 0;
             }
         } else {
             /* If we have to check for events but need to return
              * ASAP because of AE_DONT_WAIT we need to set the timeout
              * to zero */
             if (flags & AE_DONT_WAIT) {
                 tv.tv_sec = tv.tv_usec = 0;
                 tvp = &tv;
             } else {
                 /* Otherwise we can block */
                 tvp = NULL; /* wait forever */
             }
         }
    
         if (eventLoop->flags & AE_DONT_WAIT) {
             tv.tv_sec = tv.tv_usec = 0;
             tvp = &tv;
         }
    
         if (eventLoop->beforesleep != NULL && flags & AE_CALL_BEFORE_SLEEP)
             eventLoop->beforesleep(eventLoop);
    
         /* Call the multiplexing API, will return only on timeout or when
          * some event fires. */
         numevents = aeApiPoll(eventLoop, tvp);
    
         /* After sleep callback. */
         if (eventLoop->aftersleep != NULL && flags & AE_CALL_AFTER_SLEEP)
             eventLoop->aftersleep(eventLoop);
    
         for (j = 0; j < numevents; j++) {
             aeFileEvent *fe = &eventLoop->events[eventLoop->fired[j].fd];
             int mask = eventLoop->fired[j].mask;
             int fd = eventLoop->fired[j].fd;
             int fired = 0; /* Number of events fired for current fd. */
    
             /* Normally we execute the readable event first, and the writable
              * event laster. This is useful as sometimes we may be able
              * to serve the reply of a query immediately after processing the
              * query.
              *
              * However if AE_BARRIER is set in the mask, our application is
              * asking us to do the reverse: never fire the writable event
              * after the readable. In such a case, we invert the calls.
              * This is useful when, for instance, we want to do things
              * in the beforeSleep() hook, like fsynching a file to disk,
              * before replying to a client. */
             int invert = fe->mask & AE_BARRIER;
    
             /* Note the "fe->mask & mask & ..." code: maybe an already
              * processed event removed an element that fired and we still
              * didn't processed, so we check if the event is still valid.
              *
              * Fire the readable event if the call sequence is not
              * inverted. */
             if (!invert && fe->mask & mask & AE_READABLE) {
                 fe->rfileProc(eventLoop,fd,fe->clientData,mask);
                 fired++;
                 fe = &eventLoop->events[fd]; /* Refresh in case of resize. */
             }
    
             /* Fire the writable event. */
             if (fe->mask & mask & AE_WRITABLE) {
                 if (!fired || fe->wfileProc != fe->rfileProc) {
                     fe->wfileProc(eventLoop,fd,fe->clientData,mask);
                     fired++;
                 }
             }
    
             /* If we have to invert the call, fire the readable event now
              * after the writable one. */
             if (invert) {
                 fe = &eventLoop->events[fd]; /* Refresh in case of resize. */
                 if ((fe->mask & mask & AE_READABLE) &&
                     (!fired || fe->wfileProc != fe->rfileProc))
                 {
                     fe->rfileProc(eventLoop,fd,fe->clientData,mask);
                     fired++;
                 }
             }
    
             processed++;
         }
     }
     /* Check time events */
     if (flags & AE_TIME_EVENTS)
         processed += processTimeEvents(eventLoop);
    
     return processed; /* return the number of processed file/time events */
    }
    
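    A distilled example of the time-slicing strategy (a sketch, not Redis code): if the nearest timer is due 35 ms from now, the multiplexing call may block for at most 35 ms, after which the due timer runs.

    /* Distilled version of the timeout computation in aeProcessEvents():
     * block in the multiplexing call at most until the nearest timer
     * event is due, so socket events are served in the meantime. */
    #include <poll.h>
    #include <stdio.h>

    /* milliseconds to wait before the nearest timer fires */
    static int pollTimeout(long long now_ms, long long when_ms) {
        long long ms = when_ms - now_ms;
        return ms > 0 ? (int)ms : 0;   /* timer already due: do not block */
    }

    int main(void) {
        /* suppose "now" is at 10000 ms and serverCron is due at 10035 ms */
        int timeout = pollTimeout(10000, 10035);
        printf("poll() may block for at most %d ms\n", timeout);   /* 35 */

        poll(NULL, 0, timeout);   /* stands in for aeApiPoll(eventLoop, tvp) */
        /* ...after which processTimeEvents() would run the due timers */
        return 0;
    }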

2. Redis Command Execution Flow

  1. Once the Redis server has started, it can accept client connections. When a connection request arrives, the server's event polling picks up a readable event to process, which triggers the handler registered during startup, networking.c#acceptTcpHandler(). This function calls connCreateAcceptedSocket() to create the socket between the client and the server, and acceptCommonHandler() then performs the related setup on the newly created socket

    void acceptTcpHandler(aeEventLoop *el, int fd, void *privdata, int mask) {
     int cport, cfd, max = MAX_ACCEPTS_PER_CALL;
     char cip[NET_IP_STR_LEN];
     UNUSED(el);
     UNUSED(mask);
     UNUSED(privdata);
    
 while(max--) {
         cfd = anetTcpAccept(server.neterr, fd, cip, sizeof(cip), &cport);
     if (cfd == ANET_ERR) {
             if (errno != EWOULDBLOCK)
                 serverLog(LL_WARNING,
                     "Accepting client connection: %s", server.neterr);
             return;
         }
         serverLog(LL_VERBOSE,"Accepted %s:%d", cip, cport);
         acceptCommonHandler(connCreateAcceptedSocket(cfd),0,cip);
     }
    }
    
  2. networking.c#acceptCommonHandler() calls createClient() to create a new client object as the abstraction of a client, used to maintain the connection between the server and that client

    static void acceptCommonHandler(connection *conn, int flags, char *ip) {
     ......
    
     /* Create connection and client */
 if ((c = createClient(conn)) == NULL) {
         char conninfo[100];
         serverLog(LL_WARNING,
             "Error registering fd event for the new client: %s (conn: %s)",
             connGetLastError(conn),
             connGetInfo(conn, conninfo, sizeof(conninfo)));
         connClose(conn); /* May be already closed, just ignore errors */
         return;
     }
    
     /* Last chance to keep flags */
     c->flags |= flags;
    
     /* Initiate accept.
      *
      * Note that connAccept() is free to do two things here:
      * 1. Call clientAcceptHandler() immediately;
      * 2. Schedule a future call to clientAcceptHandler().
      *
      * Because of that, we must do nothing else afterwards.
      */
 if (connAccept(conn, clientAcceptHandler) == C_ERR) {
         char conninfo[100];
         if (connGetState(conn) == CONN_STATE_ERROR)
             serverLog(LL_WARNING,
                     "Error accepting a client connection: %s (conn: %s)",
                     connGetLastError(conn), connGetInfo(conn, conninfo, sizeof(conninfo)));
         freeClient(connGetPrivateData(conn));
         return;
     }
    }
    
  3. The body of networking.c#createClient() is fairly long, but the main logic can be summarized in the following steps:

    1. If the connection has been established successfully, call connSetReadHandler() to set the read handler on this connection to readQueryFromClient()
    2. selectDb() sets the database the client will operate on to the database at index 0 by default
    3. Set the remaining fields of the client object, then call linkClient() to append the newly created client to the tail of the client list maintained by the server
    client *createClient(connection *conn) {
     client *c = zmalloc(sizeof(client));
    
     /* passing NULL as conn it is possible to create a non connected client.
      * This is useful since all the commands needs to be executed
      * in the context of a client. When commands are executed in other
      * contexts (for instance a Lua script) we need a non connected client. */
 if (conn) {
         connNonBlock(conn);
         connEnableTcpNoDelay(conn);
         if (server.tcpkeepalive)
             connKeepAlive(conn,server.tcpkeepalive);
         connSetReadHandler(conn, readQueryFromClient);
         connSetPrivateData(conn, c);
     }
    
     selectDb(c,0);
     uint64_t client_id = ++server.next_client_id;
     c->id = client_id;
     c->resp = 2;
     c->conn = conn;
     c->name = NULL;
     c->bufpos = 0;
     c->qb_pos = 0;
     c->querybuf = sdsempty();
     c->pending_querybuf = sdsempty();
     c->querybuf_peak = 0;
     c->reqtype = 0;
     c->argc = 0;
     c->argv = NULL;
     c->cmd = c->lastcmd = NULL;
     c->user = DefaultUser;
     c->multibulklen = 0;
     c->bulklen = -1;
     c->sentlen = 0;
     c->flags = 0;
     c->ctime = c->lastinteraction = server.unixtime;
     /* If the default user does not require authentication, the user is
      * directly authenticated. */
     c->authenticated = (c->user->flags & USER_FLAG_NOPASS) &&
                        !(c->user->flags & USER_FLAG_DISABLED);
     c->replstate = REPL_STATE_NONE;
     c->repl_put_online_on_ack = 0;
     c->reploff = 0;
     c->read_reploff = 0;
     c->repl_ack_off = 0;
     c->repl_ack_time = 0;
     c->slave_listening_port = 0;
     c->slave_ip[0] = '\0';
     c->slave_capa = SLAVE_CAPA_NONE;
     c->reply = listCreate();
     c->reply_bytes = 0;
     c->obuf_soft_limit_reached_time = 0;
     listSetFreeMethod(c->reply,freeClientReplyValue);
     listSetDupMethod(c->reply,dupClientReplyValue);
     c->btype = BLOCKED_NONE;
     c->bpop.timeout = 0;
     c->bpop.keys = dictCreate(&objectKeyHeapPointerValueDictType,NULL);
     c->bpop.target = NULL;
     c->bpop.xread_group = NULL;
     c->bpop.xread_consumer = NULL;
     c->bpop.xread_group_noack = 0;
     c->bpop.numreplicas = 0;
     c->bpop.reploffset = 0;
     c->woff = 0;
     c->watched_keys = listCreate();
     c->pubsub_channels = dictCreate(&objectKeyPointerValueDictType,NULL);
     c->pubsub_patterns = listCreate();
     c->peerid = NULL;
     c->client_list_node = NULL;
     c->client_tracking_redirection = 0;
     c->client_tracking_prefixes = NULL;
     c->client_cron_last_memory_usage = 0;
     c->client_cron_last_memory_type = CLIENT_TYPE_NORMAL;
     c->auth_callback = NULL;
     c->auth_callback_privdata = NULL;
     c->auth_module = NULL;
     listSetFreeMethod(c->pubsub_patterns,decrRefCountVoid);
     listSetMatchMethod(c->pubsub_patterns,listMatchObjects);
     if (conn) linkClient(c);
     initClientMultiState(c);
     return c;
    }
    
  4. connection.h#connSetReadHandler() sets the read handler on a connection; however, because a function pointer is used here, the function that actually ends up being called is connection.c#connSocketSetReadHandler(). As the code shows, connSocketSetReadHandler() does two main things (a minimal sketch of this function-pointer dispatch pattern follows the source below):

    1. First, conn->read_handler = func sets the connection's read handler to the function passed in, readQueryFromClient()
    2. Then it calls aeCreateFileEvent() to register a file event with the event loop, with connSocketEventHandler(), the function pointed to by ae_handler, as the handler for readable events
    // connection.h
    static inline int connSetReadHandler(connection *conn, ConnectionCallbackFunc func) {
     return conn->type->set_read_handler(conn, func);
    }
    // connection.c
    ConnectionType CT_Socket = {
     .ae_handler = connSocketEventHandler,
     .close = connSocketClose,
     .write = connSocketWrite,
     .read = connSocketRead,
     .accept = connSocketAccept,
     .connect = connSocketConnect,
     .set_write_handler = connSocketSetWriteHandler,
     .set_read_handler = connSocketSetReadHandler,
     .get_last_error = connSocketGetLastError,
     .blocking_connect = connSocketBlockingConnect,
     .sync_write = connSocketSyncWrite,
     .sync_read = connSocketSyncRead,
     .sync_readline = connSocketSyncReadLine
    };
    
    static int connSocketSetReadHandler(connection *conn, ConnectionCallbackFunc func) {
     if (func == conn->read_handler) return C_OK;
    
     conn->read_handler = func;
     if (!conn->read_handler)
         aeDeleteFileEvent(server.el,conn->fd,AE_READABLE);
     else
         if (aeCreateFileEvent(server.el,conn->fd,
                     AE_READABLE,conn->type->ae_handler,conn) == AE_ERR) return C_ERR;
     return C_OK;
    }
    
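    The ConnectionType struct above acts like a vtable: a bundle of function pointers that keeps callers agnostic of the underlying transport (plain socket vs. TLS). Below is a minimal sketch of the pattern using hypothetical names, not the Redis definitions:

    /* Minimal sketch of the "struct of function pointers" dispatch used
     * by ConnectionType: callers always go through the table, so another
     * transport can be swapped in without touching the calling code. */
    #include <stdio.h>

    typedef struct demoConnection demoConnection;
    typedef void (*demoCallback)(demoConnection *conn);

    typedef struct {
        int (*set_read_handler)(demoConnection *conn, demoCallback func);
    } demoConnectionType;

    struct demoConnection {
        const demoConnectionType *type;
        demoCallback read_handler;
    };

    static int demoSocketSetReadHandler(demoConnection *conn, demoCallback func) {
        conn->read_handler = func;   /* cf. conn->read_handler = func */
        printf("read handler registered on socket connection\n");
        return 0;
    }

    static const demoConnectionType demoCTSocket = {
        .set_read_handler = demoSocketSetReadHandler,
    };

    static void demoReadQuery(demoConnection *conn) { (void)conn; }

    int main(void) {
        demoConnection conn = { .type = &demoCTSocket, .read_handler = NULL };
        /* same shape as connSetReadHandler(conn, readQueryFromClient) */
        conn.type->set_read_handler(&conn, demoReadQuery);
        return 0;
    }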
  5. connection.c#connSocketEventHandler() is triggered when the server polls a readable event after a client has sent a command. Internally it dispatches to different handlers according to the event type, so for a readable event it invokes the read handler set on the connection, readQueryFromClient()

    static void connSocketEventHandler(struct aeEventLoop *el, int fd, void *clientData, int mask)
    {
     ......
    
     /* Normally we execute the readable event first, and the writable
      * event later. This is useful as sometimes we may be able
      * to serve the reply of a query immediately after processing the
      * query.
      *
      * However if WRITE_BARRIER is set in the mask, our application is
      * asking us to do the reverse: never fire the writable event
      * after the readable. In such a case, we invert the calls.
      * This is useful when, for instance, we want to do things
      * in the beforeSleep() hook, like fsync'ing a file to disk,
      * before replying to a client. */
     int invert = conn->flags & CONN_FLAG_WRITE_BARRIER;
    
     int call_write = (mask & AE_WRITABLE) && conn->write_handler;
     int call_read = (mask & AE_READABLE) && conn->read_handler;
    
     /* Handle normal I/O flows */
 if (!invert && call_read) {
         if (!callHandler(conn, conn->read_handler)) return;
     }
     /* Fire the writable event. */
 if (call_write) {
         if (!callHandler(conn, conn->write_handler)) return;
     }
     /* If we have to invert the call, fire the readable event now
      * after the writable one. */
 if (invert && call_read) {
         if (!callHandler(conn, conn->read_handler)) return;
     }
    }
    
  6. networking.c#readQueryFromClient() reads the data sent by the client out of the socket, then calls processInputBuffer() to turn that data into string commands for further processing. The internal logic of processInputBuffer() is as follows:

    1. Process the incoming data according to the request type: processInlineBuffer() handles inline commands such as the PING sent by telnet, while processMultibulkBuffer() handles multibulk commands, including get/set and the like (see the protocol sketch after the code below)
    2. processCommandAndResetClient() then starts executing the parsed command string
    void processInputBuffer(client *c) {
     /* Keep processing while there is something in the input buffer */
 while(c->qb_pos < sdslen(c->querybuf)) {
         /* Return if clients are paused. */
         if (!(c->flags & CLIENT_SLAVE) && clientsArePaused()) break;
    
         /* Immediately abort if the client is in the middle of something. */
         if (c->flags & CLIENT_BLOCKED) break;
    
         /* Don't process more buffers from clients that have already pending
          * commands to execute in c->argv. */
         if (c->flags & CLIENT_PENDING_COMMAND) break;
    
         /* Don't process input from the master while there is a busy script
          * condition on the slave. We want just to accumulate the replication
          * stream (instead of replying -BUSY like we do with other clients) and
          * later resume the processing. */
         if (server.lua_timedout && c->flags & CLIENT_MASTER) break;
    
         /* CLIENT_CLOSE_AFTER_REPLY closes the connection once the reply is
          * written to the client. Make sure to not let the reply grow after
          * this flag has been set (i.e. don't process more commands).
          *
          * The same applies for clients we want to terminate ASAP. */
         if (c->flags & (CLIENT_CLOSE_AFTER_REPLY|CLIENT_CLOSE_ASAP)) break;
    
         /* Determine request type when unknown. */
     if (!c->reqtype) {
         if (c->querybuf[c->qb_pos] == '*') {
                 c->reqtype = PROTO_REQ_MULTIBULK;
         } else {
                 c->reqtype = PROTO_REQ_INLINE;
             }
         }
    
     if (c->reqtype == PROTO_REQ_INLINE) {
             if (processInlineBuffer(c) != C_OK) break;
             /* If the Gopher mode and we got zero or one argument, process
              * the request in Gopher mode. */
             if (server.gopher_enabled &&
                 ((c->argc == 1 && ((char*)(c->argv[0]->ptr))[0] == '/') ||
                   c->argc == 0))
             {
                 processGopherRequest(c);
                 resetClient(c);
                 c->flags |= CLIENT_CLOSE_AFTER_REPLY;
                 break;
             }
     } else if (c->reqtype == PROTO_REQ_MULTIBULK) {
             if (processMultibulkBuffer(c) != C_OK) break;
     } else {
             serverPanic("Unknown request type");
         }
    
         /* Multibulk processing could see a <= 0 length. */
     if (c->argc == 0) {
             resetClient(c);
     } else {
             /* If we are in the context of an I/O thread, we can't really
              * execute the command here. All we can do is to flag the client
              * as one that needs to process the command. */
         if (c->flags & CLIENT_PENDING_READ) {
                 c->flags |= CLIENT_PENDING_COMMAND;
                 break;
             }
    
             /* We are finally ready to execute the command. */
         if (processCommandAndResetClient(c) == C_ERR) {
                 /* If the client is no longer valid, we avoid exiting this
                  * loop and trimming the client buffer later. So we return
                  * ASAP in that case. */
                 return;
             }
         }
     }
    
     /* Trim to pos */
 if (c->qb_pos) {
         sdsrange(c->querybuf,c->qb_pos,-1);
         c->qb_pos = 0;
     }
    }
    
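    For context on what processMultibulkBuffer() parses: clients send commands in the RESP protocol, where SET key value arrives as a multibulk array starting with '*' (which is exactly what the c->querybuf[c->qb_pos] == '*' check above looks for). Below is a toy parse of such a request, a sketch that assumes well-formed input; the real parser works incrementally on a possibly partial buffer:

    /* What "SET key value" looks like on the wire, and a toy parse of it.
     * '*' introduces the argument count, '$' the length of each bulk
     * string; every token is terminated by \r\n. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        const char *req = "*3\r\n$3\r\nSET\r\n$3\r\nkey\r\n$5\r\nvalue\r\n";
        const char *p = req;

        int argc = atoi(p + 1);          /* after '*': number of arguments */
        p = strstr(p, "\r\n") + 2;
        printf("argc = %d\n", argc);

        for (int j = 0; j < argc; j++) {
            int len = atoi(p + 1);       /* after '$': bulk string length */
            p = strstr(p, "\r\n") + 2;
            printf("argv[%d] = %.*s\n", j, len, p);
            p += len + 2;                /* skip the payload and its \r\n */
        }
        return 0;
    }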
  7. The logic of networking.c#processCommandAndResetClient() is actually quite simple; the key point is its call to processCommand(), whose source is shown below

    1. First, lookupCommand() takes the command name sent by the client and looks up the corresponding redisCommand in the command table stored during server startup
    2. If Redis was started in cluster mode, getNodeByQuery() is called to determine whether the slot the key belongs to should be handled by the current server; if not, clusterRedirectClient() tells the client which server it should contact instead (see the hash-slot sketch after the code below)
    3. Error conditions are checked, including whether the memory limit has been exceeded and whether the command is valid
    4. Inside a transaction, every command except EXEC, DISCARD, MULTI, and WATCH is enqueued onto the transaction queue by queueMultiCommand() for later processing
    5. For a regular command, call() is invoked to execute it
    int processCommand(client *c) {
     ......
    
     /* Now lookup the command and check ASAP about trivial error conditions
      * such as wrong arity, bad command name and so forth. */
     c->cmd = c->lastcmd = lookupCommand(c->argv[0]->ptr);
    
     ......
     
     /* If cluster is enabled perform the cluster redirection here.
      * However we don't perform the redirection if:
      * 1) The sender of this command is our master.
      * 2) The command has no key arguments. */
     if (server.cluster_enabled &&
         !(c->flags & CLIENT_MASTER) &&
         !(c->flags & CLIENT_LUA &&
           server.lua_caller->flags & CLIENT_MASTER) &&
         !(c->cmd->getkeys_proc == NULL && c->cmd->firstkey == 0 &&
           c->cmd->proc != execCommand))
     {
         int hashslot;
         int error_code;
         clusterNode *n = getNodeByQuery(c,c->cmd,c->argv,c->argc,
                                         &hashslot,&error_code);
         if (n == NULL || n != server.cluster->myself) {
             if (c->cmd->proc == execCommand) {
                 discardTransaction(c);
             } else {
                 flagTransaction(c);
             }
             clusterRedirectClient(c,n,hashslot,error_code);
             return C_OK;
         }
     }
    
     ......
     
     /* Exec the command */
     if (c->flags & CLIENT_MULTI &&
         c->cmd->proc != execCommand && c->cmd->proc != discardCommand &&
         c->cmd->proc != multiCommand && c->cmd->proc != watchCommand)
     {
         queueMultiCommand(c);
         addReply(c,shared.queued);
     } else {
         call(c,CMD_CALL_FULL);
         c->woff = server.master_repl_offset;
         if (listLength(server.ready_keys))
             handleClientsBlockedOnKeys();
     }
     return C_OK;
    }
    
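    On the cluster redirection in step 2: getNodeByQuery() maps each key to one of 16384 hash slots and checks whether that slot is served by the current node. The slot is CRC16(key) modulo 16384, where a non-empty "{...}" hash tag, if present, restricts hashing to its contents. A sketch of the computation follows, using a bitwise CRC-16/XMODEM in place of the table-driven crc16() in cluster.c:

    /* Sketch of the hash-slot computation behind cluster redirection:
     * slot = CRC16(key) % 16384. The bitwise CRC below (poly 0x1021,
     * init 0) computes the same CRC-16/XMODEM that cluster.c implements
     * with a lookup table. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint16_t crc16_xmodem(const char *buf, size_t len) {
        uint16_t crc = 0;
        for (size_t j = 0; j < len; j++) {
            crc ^= (uint16_t)((unsigned char)buf[j]) << 8;
            for (int b = 0; b < 8; b++)
                crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                     : (uint16_t)(crc << 1);
        }
        return crc;
    }

    static unsigned int keyHashSlot(const char *key, size_t keylen) {
        const char *s = memchr(key, '{', keylen);
        if (s) {   /* non-empty hash tag: hash only the "{...}" contents */
            const char *e = memchr(s + 1, '}', keylen - (size_t)(s - key) - 1);
            if (e && e != s + 1) { keylen = (size_t)(e - (s + 1)); key = s + 1; }
        }
        return crc16_xmodem(key, keylen) & 16383;   /* same as % 16384 */
    }

    int main(void) {
        printf("slot(\"foo\") = %u\n", keyHashSlot("foo", 3));   /* 12182 */
        return 0;
    }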
  8. The logic of server.c#call() consists of two main steps; the command execution below takes the set command as an example

    1. c->cmd->proc(c) invokes the handler of the redisCommand corresponding to the client's command; for the set command, this ends up in t_string.c#setCommand()
    2. propagate() is called to propagate the data to the AOF file and the slave nodes. This function decides the propagation strategy based on the server configuration: within it, feedAppendOnlyFile() propagates the command to the AOF file, and replicationFeedSlaves() propagates it to the Slave nodes
    void call(client *c, int flags) {
     ......
    
     /* Call the command. */
     dirty = server.dirty;
     updateCachedTime(0);
     start = server.ustime;
     c->cmd->proc(c);
     duration = ustime()-start;
     dirty = server.dirty-dirty;
     if (dirty < 0) dirty = 0;
    
     ......
    
     /* Propagate the command into the AOF and replication link */
     if (flags & CMD_CALL_PROPAGATE &&
         (c->flags & CLIENT_PREVENT_PROP) != CLIENT_PREVENT_PROP)
     {
         int propagate_flags = PROPAGATE_NONE;
    
         /* Check if the command operated changes in the data set. If so
          * set for replication / AOF propagation. */
         if (dirty) propagate_flags |= (PROPAGATE_AOF|PROPAGATE_REPL);
    
         /* If the client forced AOF / replication of the command, set
          * the flags regardless of the command effects on the data set. */
         if (c->flags & CLIENT_FORCE_REPL) propagate_flags |= PROPAGATE_REPL;
         if (c->flags & CLIENT_FORCE_AOF) propagate_flags |= PROPAGATE_AOF;
    
         /* However prevent AOF / replication propagation if the command
          * implementations called preventCommandPropagation() or similar,
          * or if we don't have the call() flags to do so. */
         if (c->flags & CLIENT_PREVENT_REPL_PROP ||
             !(flags & CMD_CALL_PROPAGATE_REPL))
                 propagate_flags &= ~PROPAGATE_REPL;
         if (c->flags & CLIENT_PREVENT_AOF_PROP ||
             !(flags & CMD_CALL_PROPAGATE_AOF))
                 propagate_flags &= ~PROPAGATE_AOF;
    
         /* Call propagate() only if at least one of AOF / replication
          * propagation is needed. Note that modules commands handle replication
          * in an explicit way, so we never replicate them automatically. */
         if (propagate_flags != PROPAGATE_NONE && !(c->cmd->flags & CMD_MODULE))
             propagate(c->cmd,c->db->id,c->argv,c->argc,propagate_flags);
     }
    
     /* Restore the old replication flags, since call() can be executed
      * recursively. */
     c->flags &= ~(CLIENT_FORCE_AOF|CLIENT_FORCE_REPL|CLIENT_PREVENT_PROP);
     c->flags |= client_old_flags &
         (CLIENT_FORCE_AOF|CLIENT_FORCE_REPL|CLIENT_PREVENT_PROP);
    
     /* Handle the alsoPropagate() API to handle commands that want to propagate
      * multiple separated commands. Note that alsoPropagate() is not affected
      * by CLIENT_PREVENT_PROP flag. */
 if (server.also_propagate.numops) {
         int j;
         redisOp *rop;
    
     if (flags & CMD_CALL_PROPAGATE) {
             int multi_emitted = 0;
             /* Wrap the commands in server.also_propagate array,
              * but don't wrap it if we are already in MULTI context,
              * in case the nested MULTI/EXEC.
              *
              * And if the array contains only one command, no need to
              * wrap it, since the single command is atomic. */
             if (server.also_propagate.numops > 1 &&
                 !(c->cmd->flags & CMD_MODULE) &&
                 !(c->flags & CLIENT_MULTI) &&
                 !(flags & CMD_CALL_NOWRAP))
             {
                 execCommandPropagateMulti(c);
                 multi_emitted = 1;
             }
    
         for (j = 0; j < server.also_propagate.numops; j++) {
                 rop = &server.also_propagate.ops[j];
                 int target = rop->target;
                 /* Whatever the command wish is, we honor the call() flags. */
                 if (!(flags&CMD_CALL_PROPAGATE_AOF)) target &= ~PROPAGATE_AOF;
                 if (!(flags&CMD_CALL_PROPAGATE_REPL)) target &= ~PROPAGATE_REPL;
                 if (target)
                     propagate(rop->cmd,rop->dbid,rop->argv,rop->argc,target);
             }
    
             if (multi_emitted) {
                 execCommandPropagateExec(c);
             }
         }
         redisOpArrayFree(&server.also_propagate);
     }
     server.also_propagate = prev_also_propagate;
    
     /* If the client has keys tracking enabled for client side caching,
      * make sure to remember the keys it fetched via this command. */
 if (c->cmd->flags & CMD_READONLY) {
         client *caller = (c->flags & CLIENT_LUA && server.lua_caller) ?
                             server.lua_caller : c;
         if (caller->flags & CLIENT_TRACKING &&
             !(caller->flags & CLIENT_TRACKING_BCAST))
         {
             trackingRememberKeys(caller);
         }
     }
    
     server.fixed_time_expire--;
     server.stat_numcommands++;
    }
    
  9. The core of t_string.c#setCommand() is its call to setGenericCommand(), whose source is shown below. Internally it first validates the command arguments, then calls genericSetKey() to add the data to the redisDb. At this point the execution of the Redis command is complete, and networking.c#addReply() writes the server's response into the output buffer, to be sent back to the client

    void setGenericCommand(client *c, int flags, robj *key, robj *val, robj *expire, int unit, robj *ok_reply, robj *abort_reply) {
     long long milliseconds = 0; /* initialized to avoid any harmness warning */
    
 if (expire) {
         if (getLongLongFromObjectOrReply(c, expire, &milliseconds, NULL) != C_OK)
             return;
     if (milliseconds <= 0) {
             addReplyErrorFormat(c,"invalid expire time in %s",c->cmd->name);
             return;
         }
         if (unit == UNIT_SECONDS) milliseconds *= 1000;
     }
    
     if ((flags & OBJ_SET_NX && lookupKeyWrite(c->db,key) != NULL) ||
         (flags & OBJ_SET_XX && lookupKeyWrite(c->db,key) == NULL))
     {
         addReply(c, abort_reply ? abort_reply : shared.null[c->resp]);
         return;
     }
     genericSetKey(c,c->db,key,val,flags & OBJ_SET_KEEPTTL,1);
     server.dirty++;
     if (expire) setExpire(c,c->db,key,mstime()+milliseconds);
     notifyKeyspaceEvent(NOTIFY_STRING,"set",key,c->db->id);
     if (expire) notifyKeyspaceEvent(NOTIFY_GENERIC,
         "expire",key,c->db->id);
     addReply(c, ok_reply ? ok_reply : shared.ok);
    }
    
  10. The core processing of db.c#genericSetKey() consists of the following steps (a toy hash-table sketch follows the source below):

    1. Call lookupKeyWrite() to check whether the given key already exists in the Redis database. Since the Redis database structure can be viewed as a HashMap, the lookup works in much the same way as the HashMap implementation in Java. If the key exists, dbOverwrite() overwrites its value; if not, dbAdd() adds the key-value pair from the set command to the database
    2. incrRefCount() increments the value's reference count by 1
    void genericSetKey(client *c, redisDb *db, robj *key, robj *val, int keepttl, int signal) {
    if (lookupKeyWrite(db,key) == NULL) {
        dbAdd(db,key,val);
    } else {
        dbOverwrite(db,key,val);
    }
    incrRefCount(val);
    if (!keepttl) removeExpire(db,key);
    if (signal) signalModifiedKey(c,db,key);
    }
    
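    To make the HashMap analogy concrete, here is a toy chained hash table (a sketch only; the real dict.c adds incremental rehashing, a seeded SipHash function, type callbacks, and more) showing the same add-or-overwrite behavior:

    /* Toy chained hash table mirroring the lookup/overwrite/add logic of
     * genericSetKey(); names and sizes here are illustrative only. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define NBUCKETS 16

    typedef struct demoEntry {
        char *key, *val;
        struct demoEntry *next;    /* chaining, like dictEntry->next */
    } demoEntry;

    static demoEntry *buckets[NBUCKETS];

    static unsigned int demoHash(const char *s) {   /* stand-in for the seeded hash */
        unsigned int h = 5381;
        while (*s) h = h * 33 + (unsigned char)*s++;
        return h % NBUCKETS;
    }

    static void demoSet(const char *key, const char *val) {
        unsigned int h = demoHash(key);
        for (demoEntry *e = buckets[h]; e; e = e->next) {
            if (strcmp(e->key, key) == 0) {   /* key exists: dbOverwrite() path */
                free(e->val);
                e->val = strdup(val);
                return;
            }
        }
        demoEntry *e = malloc(sizeof(*e));    /* key missing: dbAdd() path */
        e->key = strdup(key);
        e->val = strdup(val);
        e->next = buckets[h];
        buckets[h] = e;
    }

    int main(void) {
        demoSet("foo", "bar");
        demoSet("foo", "baz");   /* the second set overwrites the first */
        printf("foo -> %s\n", buckets[demoHash("foo")]->val);   /* baz */
        return 0;
    }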

Reprinted from blog.csdn.net/weixin_45505313/article/details/107816424