Essential skills: Nginx source code study (the nginx rate-limiting modules)

Overview:

High-concurrency systems rely on three powerful tools: caching, degradation (fallback), and rate limiting;

The purpose of rate limiting is to protect the system by limiting the rate of concurrent access/requests. Once the limit is reached, requests can be rejected (redirected to an error page), queued (as in flash-sale scenarios), or degraded (served fallback or default data);

Common rate limits in high-concurrency systems include: limiting the total number of concurrent connections (e.g. a database connection pool), limiting the number of instantaneous concurrent connections (such as nginx's limit_conn module), and limiting the average rate within a time window (nginx's limit_req module, which limits the average requests per second);

In addition, limits can also be applied based on the number of network connections, network traffic, CPU or memory load, and so on.

Mainstream rate-limiting algorithms in the industry

1. Rate-limiting algorithms

The simplest and crudest rate-limiting algorithm is the counter method; the leaky bucket and token bucket algorithms are more commonly used.

1.1 Counter

The counter method is the simplest rate-limiting algorithm and the easiest to implement. For example, suppose we stipulate that interface A may be accessed at most 100 times per minute.

We can set up a counter whose validity period is one minute (that is, the counter is reset to 0 every minute). Whenever a request arrives, the counter is incremented by 1 (in Java this could be an AtomicLong, which guarantees atomic updates within a single JVM, possibly combined with volatile so that threads see the latest value in time). If the counter exceeds 100, the request rate is too high.
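As a minimal illustration (not nginx code; the window length, limit, and function names below are hypothetical), a fixed-window counter can be sketched in C11 like this:

#include <stdatomic.h>
#include <stdbool.h>
#include <time.h>

#define LIMIT      100      /* max requests per window */
#define WINDOW_SEC 60       /* window length in seconds */

static atomic_long counter      = 0;   /* requests seen in the current window */
static atomic_long window_start = 0;   /* unix time at which the window began */

bool allow_request(void)
{
    long now   = (long) time(NULL);
    long start = atomic_load(&window_start);

    /* if the current window has expired, one thread resets the counter */
    if (now - start >= WINDOW_SEC &&
        atomic_compare_exchange_strong(&window_start, &start, now)) {
        atomic_store(&counter, 0);
    }

    /* admit the request only while the window still has budget */
    return atomic_fetch_add(&counter, 1) < LIMIT;
}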

Although this algorithm is simple, it has a fatal flaw: the boundary (critical window) problem.

Suppose 100 requests arrive just before 1:00, the counter is reset at 1:00, and another 100 requests arrive just after 1:00. The counter never exceeds 100, so none of the requests are intercepted;

yet 200 requests were handled within this short window, far more than the intended 100.

1.2 Leaky Bucket Algorithm

Imagine a leaky bucket of fixed capacity from which water drips out at a constant, fixed rate; if the bucket is empty, no water flows out. Water may flow into the bucket at an arbitrary rate, but if the incoming water would exceed the bucket's capacity, the excess overflows and is discarded.

The leaky bucket algorithm thus inherently forces requests out at a constant rate, which makes it suitable for traffic shaping and rate limiting.
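A minimal, single-threaded sketch of the idea (illustrative only; the structure and function names are made up for this example) might look like this:

#include <stdbool.h>
#include <time.h>

typedef struct {
    double capacity;   /* bucket size */
    double rate;       /* leak rate, requests per second */
    double water;      /* current water level */
    time_t last;       /* time of the last update */
} leaky_bucket_t;

bool leaky_bucket_allow(leaky_bucket_t *b)
{
    time_t now = time(NULL);

    /* drain the water that leaked out since the last request */
    b->water -= (double) (now - b->last) * b->rate;
    if (b->water < 0) {
        b->water = 0;
    }
    b->last = now;

    if (b->water + 1 > b->capacity) {
        return false;              /* bucket would overflow: reject */
    }
    b->water += 1;                 /* accept: one more unit in the bucket */
    return true;
}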

 

1.3 Token Bucket Algorithm

The token bucket is a bucket with a fixed capacity for storing tokens. Tokens are added to the bucket at a fixed rate r; at most b tokens can be stored, and when the bucket is full, newly added tokens are discarded;

When a request arrives, it tries to take a token from the bucket; if a token is available, the request is allowed to proceed; if not, the request either waits in a queue or is discarded;

Note that the outflow rate of the leaky bucket algorithm is either constant or 0 (when no requests arrive), whereas the outflow rate of the token bucket algorithm may temporarily exceed the rate r (bursts are allowed up to the bucket size b);
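For comparison, a minimal token bucket sketch (again illustrative, with hypothetical names) differs mainly in that unused capacity accumulates as tokens, so short bursts of up to b requests can pass at once:

#include <stdbool.h>
#include <time.h>

typedef struct {
    double capacity;   /* b: maximum number of tokens */
    double rate;       /* r: tokens added per second */
    double tokens;     /* tokens currently in the bucket */
    time_t last;       /* time of the last refill */
} token_bucket_t;

bool token_bucket_allow(token_bucket_t *tb)
{
    time_t now = time(NULL);

    /* refill the tokens accumulated since the last call, capped at capacity */
    tb->tokens += (double) (now - tb->last) * tb->rate;
    if (tb->tokens > tb->capacity) {
        tb->tokens = tb->capacity;
    }
    tb->last = now;

    if (tb->tokens < 1) {
        return false;              /* no token available: reject or queue */
    }
    tb->tokens -= 1;               /* consume one token */
    return true;
}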

In the Java concurrency package JUC (java.util.concurrent), the counting semaphore Semaphore can be used in a similar token-based fashion to implement rate limiting.

 

2. Nginx basics

Nginx provides two main rate-limiting mechanisms: limiting by number of connections (ngx_http_limit_conn_module) and limiting by request rate (ngx_http_limit_req_module);

Before studying the rate-limiting modules, you also need to understand how nginx processes HTTP requests and how it handles events;

2.1 HTTP request processing process

Nginx divides HTTP request processing into 11 phases. Most HTTP modules register their own handler with some phase (4 of the phases do not accept custom handlers). When nginx processes an HTTP request, it invokes the registered handlers one by one;

Pay particular attention to NGX_HTTP_PREACCESS_PHASE: the rate-limiting modules register their handlers in this phase (this article focuses on rate limiting, so for the other phases it is enough to know the general flow).

typedef enum {
    NGX_HTTP_POST_READ_PHASE = 0,   // currently only the realip module registers a handler here (useful when nginx acts as a proxy; the backend uses it to obtain the client's original IP)

    NGX_HTTP_SERVER_REWRITE_PHASE,  // rewrite directives configured in the server block rewrite the URL

    NGX_HTTP_FIND_CONFIG_PHASE,     // find the matching location; custom handlers cannot be registered
    NGX_HTTP_REWRITE_PHASE,         // rewrite directives configured in the location block rewrite the URL
    NGX_HTTP_POST_REWRITE_PHASE,    // check whether the URL was rewritten; if so, go back to FIND_CONFIG; custom handlers cannot be registered

    NGX_HTTP_PREACCESS_PHASE,       // access control; the rate-limiting modules register their handlers in this phase

    NGX_HTTP_ACCESS_PHASE,          // access permission control
    NGX_HTTP_POST_ACCESS_PHASE,     // act on the result of the access phase; custom handlers cannot be registered

    NGX_HTTP_TRY_FILES_PHASE,       // only present when the try_files directive is configured; custom handlers cannot be registered
    NGX_HTTP_CONTENT_PHASE,         // content generation phase, returns the response to the client

    NGX_HTTP_LOG_PHASE              // logging
} ngx_http_phases;

Nginx represents a module with the structure ngx_module_s, whose field ctx points to the module's context structure (for HTTP modules this is ngx_http_module_t, shown further below, whose fields are all function pointers).

 ngx_module_s is the most basic data structure of the entire Nginx modular architecture. It describes the basic attributes that every module in the Nginx program should have; the structure is defined in tengine/src/core/ngx_conf_file.h

The structure is defined as follows, with comments describing each field (C struct types are roughly analogous to Java classes):


struct ngx_module_s {  
    ngx_uint_t            ctx_index;      
    /* per-category module counter:
    nginx modules fall into four categories: core, event, http and mail;
    each category is counted separately, and ctx_index is the module's index within its own category */  
      
    ngx_uint_t            index;          
    /* global module counter: assigned from 0 in the order the modules are declared in the ngx_modules[] array */  
  
    ngx_uint_t            spare0;  
    ngx_uint_t            spare1;  
    ngx_uint_t            spare2;  
    ngx_uint_t            spare3;
    ngx_uint_t            version;      // nginx module version  
  
    void                 *ctx;            
    /* the module's context; different categories of modules have different contexts,
    hence four different context structures exist */  
      
    ngx_command_t        *commands;  
    /* the module's directive set;
    each directive corresponds to one ngx_command_t structure in the source */  
      
    ngx_uint_t            type;         // module type, distinguishes core, event, http and mail  
  
    ngx_int_t           (*init_master)(ngx_log_t *log);         // called when the master is initialized  
  
    ngx_int_t           (*init_module)(ngx_cycle_t *cycle);     // called when the module is initialized
    ngx_int_t           (*init_process)(ngx_cycle_t *cycle);    // called when a worker process is initialized  
    ngx_int_t           (*init_thread)(ngx_cycle_t *cycle);     // called when a thread is initialized  
    void                (*exit_thread)(ngx_cycle_t *cycle);     // called when a thread exits  
    void                (*exit_process)(ngx_cycle_t *cycle);    // called when a worker process exits  
  
    void                (*exit_master)(ngx_cycle_t *cycle);     // called when the master exits  
  
// the purpose of the following fields is unclear (reserved hooks)  
    uintptr_t             spare_hook0;  
    uintptr_t             spare_hook1;  
    uintptr_t             spare_hook2;  
    uintptr_t             spare_hook3;  
    uintptr_t             spare_hook4;  
    uintptr_t             spare_hook5;  
    uintptr_t             spare_hook6;  
    uintptr_t             spare_hook7;  
};  
  
typedef struct ngx_module_s      ngx_module_t;  

Nginx defines many modules, each with its own type. A module defines its own functions and assigns them to the function pointers of the context structure for its type, thereby registering different callbacks and implementing different behavior, somewhat like polymorphism in Java.

typedef struct {
 ngx_int_t (*preconfiguration)(ngx_conf_t *cf);
 ngx_int_t (*postconfiguration)(ngx_conf_t *cf); // this method registers handlers with the appropriate phase
 
 void  *(*create_main_conf)(ngx_conf_t *cf); // main configuration in the http block
 char  *(*init_main_conf)(ngx_conf_t *cf, void *conf);
 
 void  *(*create_srv_conf)(ngx_conf_t *cf); // server configuration
 char  *(*merge_srv_conf)(ngx_conf_t *cf, void *prev, void *conf);
 
 void  *(*create_loc_conf)(ngx_conf_t *cf); // location configuration
 char  *(*merge_loc_conf)(ngx_conf_t *cf, void *prev, void *conf);
} ngx_http_module_t;

Taking ngx_http_limit_req_module as an example, its postconfiguration method is, in simplified form, implemented as follows (this article focuses on rate limiting):

static ngx_int_t ngx_http_limit_req_init(ngx_conf_t *cf)
{
 h = ngx_array_push(&cmcf->phases[NGX_HTTP_PREACCESS_PHASE].handlers);
 
 *h = ngx_http_limit_req_handler; // the rate-limiting handler of ngx_http_limit_req_module; nginx calls it while processing every HTTP request to decide whether to continue or reject the request
 
 return NGX_OK;
}

2.2 A brief introduction to nginx event processing

Assume that nginx uses epoll (the I/O multiplexing mechanism differs across platforms; on Linux distributions such as CentOS, nginx uses the epoll I/O model).

Nginx must register every fd it cares about (file descriptor; Linux represents network I/O resources as files) with epoll. The declaration of the add-event method is as follows:

static ngx_int_t ngx_epoll_add_event(ngx_event_t *ev, ngx_int_t event, ngx_uint_t flags);

The first parameter of the method is a pointer to an ngx_event_t structure, which represents a read or write event of interest. Nginx may attach a timeout timer to the event so that it can handle the event's timeout; the structure is defined (in part) as follows:

struct ngx_event_s {
 
 ngx_event_handler_pt handler; // function pointer: the event's handler
 
 ngx_rbtree_node_t timer;  // timeout timer, stored in a red-black tree (the node's key is the event's expiry time)
 
 unsigned   timedout:1; // records whether the event has timed out
 
};

Typically epoll_wait is called in a loop to monitor all fds and process the read/write events that occur. epoll_wait is a blocking call whose last parameter, timeout, is the maximum time it will block; if no event occurs within that time, the call returns;

When nginx computes this timeout, it looks up the node that will expire soonest in the red-black tree of timers mentioned above and uses its remaining time as the timeout of epoll_wait, as shown in the following code;

ngx_msec_t ngx_event_find_timer(void)
{
 node = ngx_rbtree_min(root, sentinel);
 timer = (ngx_msec_int_t) (node->key - ngx_current_msec);
 
 return (ngx_msec_t) (timer > 0 ? timer : 0);
}

At the end of each loop iteration, nginx also checks the red-black tree for expired events; for each expired event it sets timedout = 1 and calls the event's handler;

void ngx_event_expire_timers(void)
{
 for ( ;; ) {
  node = ngx_rbtree_min(root, sentinel);
 
  if ((ngx_msec_int_t) (node->key - ngx_current_msec) <= 0) { // this event has timed out
   ev = (ngx_event_t *) ((char *) node - offsetof(ngx_event_t, timer));
 
   ev->timedout = 1;
 
   ev->handler(ev);
 
   continue;
  }
 
  break;
 }
}

In this way nginx handles both socket events and timer events;
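The overall pattern can be summarised with the following simplified loop (this is not the actual nginx code; find_nearest_timer and expire_timers are stand-ins for ngx_event_find_timer and ngx_event_expire_timers):

#include <sys/epoll.h>

#define MAX_EVENTS 512

extern int  find_nearest_timer(void);   /* ms until the earliest timeout, or -1 if none */
extern void expire_timers(void);        /* fire handlers whose timers have expired */
extern void handle_event(struct epoll_event *ev);

void event_loop(int epfd)
{
    struct epoll_event events[MAX_EVENTS];

    for ( ;; ) {
        int timeout = find_nearest_timer();          /* cap the blocking time */
        int n = epoll_wait(epfd, events, MAX_EVENTS, timeout);

        for (int i = 0; i < n; i++) {
            handle_event(&events[i]);                /* run the read/write handlers */
        }

        expire_timers();                             /* mark timedout = 1 and call handlers */
    }
}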

3. Analysis of the ngx_http_limit_req_module module

The ngx_http_limit_req_module module limits the request rate, i.e. the number of requests a user may make within a given period of time; it is implemented with the leaky bucket algorithm;

3.1 Configuration directives

The ngx_http_limit_req_module module provides configuration directives that let users define rate-limiting policies:

// each configuration directive mainly contains two fields: the name and the handler that parses the configuration
static ngx_command_t ngx_http_limit_req_commands[] = {
 
 // typical usage: limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;
 // $binary_remote_addr is the remote client IP;
 // zone configures a storage area (space is needed to record each client's request rate; when the space limit is exceeded, records are evicted with an LRU policy; note that this area is allocated in shared memory, so all worker processes can access it)
 // rate is the limit rate, 1 request per second (1 qps) in this example
 { ngx_string("limit_req_zone"),
  ngx_http_limit_req_zone,
  },
 
 // usage: limit_req zone=one burst=5 nodelay;
 // zone selects which shared zone to use
 // are requests above the rate dropped immediately? burst handles burst traffic: it is the maximum number of queued requests; when a client exceeds the limit rate, excess requests are queued, and only requests beyond burst are rejected outright;
 // nodelay must be used together with burst; queued requests are then processed immediately instead of being delayed; otherwise, if the queued requests were still paced at the limit rate, the client might have timed out long before the server finishes
 { ngx_string("limit_req"),
  ngx_http_limit_req,
  },
 
 // log level used when a request is rate limited; usage: limit_req_log_level info | notice | warn | error;
 { ngx_string("limit_req_log_level"),
  ngx_conf_set_enum_slot,
  },
 
 // status code returned to the client when a request is rate limited; usage: limit_req_status 503
 { ngx_string("limit_req_status"),
  ngx_conf_set_num_slot,
 },
};
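Putting the directives together, a typical configuration might look like the following (the zone name, size, rate, and location are example values only):

http {
    # 10 MB shared zone "one", keyed by client IP, limited to 1 request per second
    limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;

    server {
        location /search/ {
            # allow a burst of up to 5 queued requests and process them without extra delay
            limit_req zone=one burst=5 nodelay;
            limit_req_log_level warn;
            limit_req_status 503;
        }
    }
}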

Note: $binary_remote_addr is a variable provided by nginx that users can use directly in the configuration file. Nginx provides many other variables as well; see the ngx_http_core_variables array in ngx_http_variable.c:

static ngx_http_variable_t ngx_http_core_variables[] = {
 
 { ngx_string("http_host"), NULL, ngx_http_variable_header,
  offsetof(ngx_http_request_t, headers_in.host), 0, 0 },
 
 { ngx_string("http_user_agent"), NULL, ngx_http_variable_header,
  offsetof(ngx_http_request_t, headers_in.user_agent), 0, 0 },
 …………
}

3.2 Source code analysis

During postconfiguration, ngx_http_limit_req_module registers the ngx_http_limit_req_handler method with the NGX_HTTP_PREACCESS_PHASE phase of HTTP processing;

ngx_http_limit_req_handler executes the leaky bucket algorithm, determines whether the configured rate limit is exceeded, and then drops, queues, or passes the request;

On a user's first request, a new record is created (storing mainly the access count and access time). The hash of the rate-limiting key (e.g. the client IP when $binary_remote_addr is configured) is used as the key in a red-black tree (for fast lookup), and the record is also linked into an LRU queue (when storage runs out, records are evicted from the tail). On subsequent requests, the record is looked up in the red-black tree, updated, and moved to the head of the LRU queue;

3.2.1 Data structure

limit_req_zone configures the storage space (name and size) required by the rate-limiting algorithm, the limit rate, and the rate-limiting variable (client IP, etc.). The corresponding structures are:

typedef struct {
 ngx_http_limit_req_shctx_t *sh;
 ngx_slab_pool_t    *shpool; // memory pool
 ngx_uint_t     rate; // limit rate (qps multiplied by 1000 for storage)
 ngx_int_t     index; // variable index (nginx provides a set of variables; this is the index of the user-configured rate-limiting variable)
 ngx_str_t     var; // name of the rate-limiting variable
 ngx_http_limit_req_node_t *node;
} ngx_http_limit_req_ctx_t;
 
// the shared storage zone is initialized at the same time
struct ngx_shm_zone_s {
 void      *data; // data points to the ngx_http_limit_req_ctx_t structure
 ngx_shm_t     shm; // shared memory
 ngx_shm_zone_init_pt  init; // function pointer to the initialization method
 void      *tag; // points to the ngx_http_limit_req_module structure
};

limit_req configures which shared zone to use, the queue size (burst), and whether queued requests are processed without delay. The structure is:

typedef struct {
 ngx_shm_zone_t    *shm_zone; // shared storage zone
  
 ngx_uint_t     burst;  // queue size
 ngx_uint_t     nodelay; // whether queued requests are processed immediately; used together with burst (if set, queued requests are handled right away instead of being paced at the limit rate)
} ngx_http_limit_req_limit_t;

 As mentioned earlier, each user's access record is stored in both the red-black tree and the LRU queue. The structures are:

// the record structure
typedef struct {
 u_char      color;
 u_char      dummy;
 u_short      len; // data length
 ngx_queue_t     queue; 
 ngx_msec_t     last; // time of the last access
  
 ngx_uint_t     excess; // number of pending (excess) requests (nginx uses this to implement its leaky bucket rate limiting)
 ngx_uint_t     count; // total number of requests for this record
 u_char      data[1]; // data content (lookup is first by key (hash value), then the data content is compared for equality)
} ngx_http_limit_req_node_t;
 
// red-black tree node; the key is the hash of the user-configured rate-limiting variable
struct ngx_rbtree_node_s {
 ngx_rbtree_key_t  key;
 ngx_rbtree_node_t  *left;
 ngx_rbtree_node_t  *right;
 ngx_rbtree_node_t  *parent;
 u_char     color;
 u_char     data;
};
 
 
typedef struct {
 ngx_rbtree_t     rbtree; // red-black tree
 ngx_rbtree_node_t    sentinel; // NIL node
 ngx_queue_t     queue; // LRU queue
} ngx_http_limit_req_shctx_t;
 
// the queue node only has prev and next pointers
struct ngx_queue_s {
 ngx_queue_t *prev;
 ngx_queue_t *next;
};

 

Question 1: ngx_http_limit_req_node_t records form a doubly linked list via the prev and next pointers, implementing the LRU queue; a newly accessed node is always inserted at the head of the list, and eviction removes nodes from the tail;

ngx_http_limit_req_ctx_t *ctx;
ngx_queue_t    *q;
 
q = ngx_queue_last(&ctx->sh->queue);
 
lr = ngx_queue_data(q, ngx_http_limit_req_node_t, queue); // this macro obtains the start address of the ngx_http_limit_req_node_t structure from the ngx_queue_t pointer; it is implemented as follows:
 
#define ngx_queue_data(q, type, link) (type *) ((u_char *) q - offsetof(type, link)) // subtracting the offset of the queue field within the structure from the field's address yields the structure's start address

Question 2: the rate-limiting algorithm first looks up the red-black tree node by key to find the corresponding record. How is the red-black tree node related to the ngx_http_limit_req_node_t record structure? The following code can be found in the ngx_http_limit_req_module module:

 

size = offsetof(ngx_rbtree_node_t, color) // allocate memory for a new record; compute the required size
  + offsetof(ngx_http_limit_req_node_t, data)
  + len;
 
node = ngx_slab_alloc_locked(ctx->shpool, size);
 
node->key = hash;
 
lr = (ngx_http_limit_req_node_t *) &node->color; // color is of type u_char; why can it be cast to an ngx_http_limit_req_node_t pointer?
 
lr->len = (u_char) len;
lr->excess = 0;
 
ngx_memcpy(lr->data, data, len);
 
ngx_rbtree_insert(&ctx->sh->rbtree, node);
 
ngx_queue_insert_head(&ctx->sh->queue, &lr->queue);

Analyzing the code above shows that the color and data fields of ngx_rbtree_node_s are not used as such: the declared form of the structure differs from how records are actually stored. The ngx_http_limit_req_node_t record is laid out starting at the color field, so each record ends up stored in the combined form sketched below;
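Based on the size calculation above, the approximate layout of one record in shared memory is (a reconstruction for illustration, not a diagram from the nginx source):

|<-- offsetof(ngx_rbtree_node_t, color) -->|<-------- ngx_http_limit_req_node_t -------->|
+------+------+-------+--------+----------+-------+-----+-------+------+--------+-------+-----------+
| key  | left | right | parent |  color   | dummy | len | queue | last | excess | count | data[len] |
+------+------+-------+--------+----------+-------+-----+-------+------+--------+-------+-----------+
                                ^
                                lr = (ngx_http_limit_req_node_t *) &node->color;

The cast of &node->color works because the allocation reserves offsetof(ngx_http_limit_req_node_t, data) + len bytes from that point onward, which is enough to hold the entire record, with data[] holding the key's content.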

3.2.2 The rate-limiting algorithm

As mentioned above, the ngx_http_limit_req_handler method is registered with the NGX_HTTP_PREACCESS_PHASE phase of HTTP processing during postconfiguration;

Therefore, while processing an HTTP request, nginx executes ngx_http_limit_req_handler to decide whether the request must be rate limited;

3.2.2.1 Implementation of the leaky bucket algorithm

A user may configure several rate limits at once, so for each HTTP request nginx has to iterate over all limit policies to decide whether the request should be limited;

The ngx_http_limit_req_lookup method implements the leaky bucket algorithm and can return four results:

  • NGX_BUSY: the request rate exceeds the limit configuration and the request is rejected;
  • NGX_AGAIN: the request has passed the current limit policy check; continue checking the next limit policy;
  • NGX_OK: the request has passed all limit policy checks and processing can move on to the next phase;
  • NGX_ERROR: an error occurred
// limit: the rate-limit policy; hash: hash of the record key; data: content of the record key; len: length of the record key; ep: number of pending requests; account: whether this is the last rate-limit policy
static ngx_int_t ngx_http_limit_req_lookup(ngx_http_limit_req_limit_t *limit, ngx_uint_t hash, u_char *data, size_t len, ngx_uint_t *ep, ngx_uint_t account)
{
 // search the red-black tree for the specified node
 while (node != sentinel) {
 
  if (hash < node->key) {
   node = node->left;
   continue;
  }
 
  if (hash > node->key) {
   node = node->right;
   continue;
  }
 
  // hash values are equal; compare the data for equality
  lr = (ngx_http_limit_req_node_t *) &node->color;
 
  rc = ngx_memn2cmp(data, lr->data, len, (size_t) lr->len);
  // record found
  if (rc == 0) {
   ngx_queue_remove(&lr->queue);
   ngx_queue_insert_head(&ctx->sh->queue, &lr->queue); // move the record to the head of the LRU queue
  
   ms = (ngx_msec_int_t) (now - lr->last); // current time minus the time of the last access
 
   excess = lr->excess - ctx->rate * ngx_abs(ms) / 1000 + 1000; // pending requests - limit rate * elapsed time + 1 request (rate, request counts etc. are all scaled by 1000)
 
   if (excess < 0) {
    excess = 0;
   }
 
   *ep = excess;
 
   // if the number of pending requests exceeds burst (the queue size), return NGX_BUSY and reject the request (burst is 0 when not configured)
   if ((ngx_uint_t) excess > limit->burst) {
    return NGX_BUSY;
   }
 
   if (account) { // if this is the last rate-limit policy, update the last access time and the pending request count, and return NGX_OK
    lr->excess = excess;
    lr->last = now;
    return NGX_OK;
   }
   // increment the access count
   lr->count++;
 
   ctx->node = lr;
 
   return NGX_AGAIN; // not the last rate-limit policy: return NGX_AGAIN and continue checking the next policy
  }
 
  node = (rc < 0) ? node->left : node->right;
 }
 
 // if no node was found, a new record must be created
 *ep = 0;
 // for the size calculation of the storage space, see the data structures in section 3.2.1
 size = offsetof(ngx_rbtree_node_t, color)
   + offsetof(ngx_http_limit_req_node_t, data)
   + len;
 // try to evict old records (LRU)
 ngx_http_limit_req_expire(ctx, 1);
 
  
 node = ngx_slab_alloc_locked(ctx->shpool, size); // allocate space
 if (node == NULL) { // not enough space, allocation failed
  ngx_http_limit_req_expire(ctx, 0); // force-evict records
 
  node = ngx_slab_alloc_locked(ctx->shpool, size); // allocate space again
  if (node == NULL) { // allocation failed, return NGX_ERROR
   return NGX_ERROR;
  }
 }
 
 node->key = hash; // fill in the record
 lr = (ngx_http_limit_req_node_t *) &node->color;
 lr->len = (u_char) len;
 lr->excess = 0;
 ngx_memcpy(lr->data, data, len);
 
 ngx_rbtree_insert(&ctx->sh->rbtree, node); // insert the record into the red-black tree and the LRU queue
 ngx_queue_insert_head(&ctx->sh->queue, &lr->queue);
 
 if (account) { // if this is the last rate-limit policy, update the last access time and the pending request count, and return NGX_OK
  lr->last = now;
  lr->count = 0;
  return NGX_OK;
 }
 
 lr->last = 0;
 lr->count = 1;
 
 ctx->node = lr;
 
 return NGX_AGAIN; // not the last rate-limit policy: return NGX_AGAIN and continue checking the next policy
  
}

For example, suppose burst is configured as 0; excess is then the current number of pending requests, and one request is drained from the bucket every period T.
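To make the numbers concrete (a hypothetical example based on the formula in ngx_http_limit_req_lookup above, where rate, excess and burst are all scaled by 1000): with rate=2r/s, ctx->rate is 2000. If a client's previous request left lr->excess at 0 and the next request arrives 100 ms later, then excess = 0 - 2000*100/1000 + 1000 = 800, i.e. 0.8 pending requests. With burst 0 this exceeds the limit and the request gets NGX_BUSY; if the request instead arrives 500 ms later, excess = 0 - 1000 + 1000 = 0 and it is allowed, which corresponds exactly to the configured 2 requests per second.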

 

3.2.2.2 The LRU eviction strategy

In the algorithm described above, ngx_http_limit_req_expire is called to evict records; eviction always starts from the tail of the LRU queue;

The behaviour depends on the second parameter n: when n == 0, the tail record is deleted unconditionally and then up to two more records may be deleted if they have expired; when n == 1, only up to two records may be deleted, and only if they have expired. The implementation is as follows:

static void ngx_http_limit_req_expire(ngx_http_limit_req_ctx_t *ctx, ngx_uint_t n)
{
 // delete at most 3 records
 while (n < 3) {
  // tail node of the LRU queue
  q = ngx_queue_last(&ctx->sh->queue);
  // get the record
  lr = ngx_queue_data(q, ngx_http_limit_req_node_t, queue);
   
  // note: when n is 0, the if block is not entered, so the tail node is always deleted; when n is not 0, the if block checks whether the record may be deleted
  if (n++ != 0) {
 
   ms = (ngx_msec_int_t) (now - lr->last);
   ms = ngx_abs(ms);
   // accessed recently, must not be deleted; return immediately
   if (ms < 60000) {
    return;
   }
    
   // there are still pending requests, must not be deleted; return immediately
   excess = lr->excess - ctx->rate * ms / 1000;
   if (excess > 0) {
    return;
   }
  }
 
  // delete the record
  ngx_queue_remove(q);
 
  node = (ngx_rbtree_node_t *)
     ((u_char *) lr - offsetof(ngx_rbtree_node_t, color));
 
  ngx_rbtree_delete(&ctx->sh->rbtree, node);
 
  ngx_slab_free_locked(ctx->shpool, node);
 }
}

3.2.2.3 burst implementation

burst exists to handle burst traffic: when an occasional traffic spike arrives, the server should be allowed to process more requests;

When burst is 0, requests exceeding the limit rate are rejected; when burst is greater than 0, requests exceeding the limit rate are queued for later processing instead of being rejected outright;

How is the queuing implemented, given that nginx must also process the queued requests at the right time?

Section 2.2 mentioned that events can carry a timer; nginx implements the queuing and deferred processing of requests through the combination of events and timers;

The ngx_http_limit_req_handler method has the following code:

// compute how long the current request must wait in the queue before it can be processed
delay = ngx_http_limit_req_account(limits, n, &excess, &limit);

// register the read event
if (ngx_handle_read_event(r->connection->read, 0) != NGX_OK) {
 return NGX_HTTP_INTERNAL_SERVER_ERROR;
}

r->read_event_handler = ngx_http_test_reading;
r->write_event_handler = ngx_http_limit_req_delay; // handler for the write event
ngx_add_timer(r->connection->write, delay); // add a timer to the write event (the response must not be sent to the client before the timer fires)

Computing the delay is straightforward: iterate over all rate-limit policies, compute the time needed to drain the pending requests for each, and return the maximum;

if (limits[n].nodelay) { // when nodelay is configured, the request is not delayed and delay stays 0
 continue;
}
 
delay = excess * 1000 / ctx->rate;
 
if (delay > max_delay) {
 max_delay = delay;
 *ep = excess;
 *limit = &limits[n];
}
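Continuing the hypothetical example from section 3.2.2.1: with ctx->rate = 2000 (2r/s) and excess = 800, the delay is 800 * 1000 / 2000 = 400 ms, so a request that arrived 100 ms after the previous one is held back until the full 500 ms spacing implied by the limit rate has elapsed.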

Finally, take a brief look at the implementation of the write-event handler ngx_http_limit_req_delay:

 

static void ngx_http_limit_req_delay(ngx_http_request_t *r)
{
 
 wev = r->connection->write;
 
 if (!wev->timedout) { // if the timer has not fired yet, the request is not processed
 
  if (ngx_handle_write_event(wev, 0) != NGX_OK) {
   ngx_http_finalize_request(r, NGX_HTTP_INTERNAL_SERVER_ERROR);
  }
 
  return;
 }
 
 wev->timedout = 0;
 
 r->read_event_handler = ngx_http_block_reading;
 r->write_event_handler = ngx_http_core_run_phases;
 
 ngx_http_core_run_phases(r); // the timer has fired: resume processing the HTTP request
}

This concludes the walkthrough of the source code of the nginx rate-limiting module.

 

Origin: blog.csdn.net/Coder_Boy_/article/details/110479206