How to Modify Nginx Source Code to Implement Worker Process Isolation

Background

We recently migrated our online gateway to APISIX and ran into some problems along the way. One of the harder ones to solve was process isolation in APISIX.

How Different Kinds of Requests Interact in APISIX

The first problem we hit was that the APISIX Prometheus plugin degraded normal business API responses when the volume of monitoring data grew too large. When the Prometheus plugin is enabled, the monitoring information collected by APISIX can be pulled through an HTTP endpoint and displayed on a dashboard.

curl http://172.30.xxx.xxx:9091/apisix/prometheus/metrics

The business systems behind our gateway are quite complex, with 4000+ routes. Every time the Prometheus endpoint is scraped, the number of metrics exceeds 500,000 and the payload exceeds 80 MB. All of this data is assembled and sent in the Lua layer, so the worker process handling the scrape runs at very high CPU for more than 2s, which in turn delays the normal business requests handled by that same worker by 2s+.

Our first workaround was to modify the Prometheus plugin to reduce the scope and volume of what is collected and sent, temporarily sidestepping the problem. Analyzing the information the plugin collects, the metric counts break down as follows.

407171 apisix_http_latency_bucket
29150 apisix_http_latency_sum
29150 apisix_http_latency_count
20024 apisix_bandwidth
17707 apisix_http_status
  11 apisix_etcd_modify_indexes
   6 apisix_nginx_http_current_connections
   1 apisix_node_info

Based on our actual business needs, we dropped some of this information, which reduced the delay somewhat.

Later, through a GitHub issue ( github.com/apache/apis… ), I learned that APISIX provides this capability in its commercial version. Since we wanted to stick with the open source version, and the problem could be bypassed for the time being, I didn't investigate further.

Later, however, we hit another problem: Admin API requests were not processed in time during business peaks. We use the Admin API to perform version switching, and during one peak period the APISIX load was high enough to affect the Admin endpoints, causing occasional timeout failures when switching versions.
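For context, a version switch here is essentially an Admin API call such as the following (the route ID and body are illustrative only, not our real configuration):

curl -X PUT http://127.0.0.1:9180/apisix/admin/routes/1 \
  -H 'X-API-KEY: <admin-key>' \
  -d '{"uri": "/api/*", "upstream_id": "version-b-upstream"}'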

The cause is obvious, and the impact goes both ways: with the Prometheus plugin above, APISIX's internal requests affected normal business requests; here it is the reverse, with normal business requests affecting APISIX's internal requests. Isolating APISIX's internal requests from normal business requests therefore became critical, so I spent some time implementing this feature.

The setup above generates an nginx.conf roughly like the following example.

# Port 9091 serves Prometheus plugin requests
server {
    listen 0.0.0.0:9091;

    access_log off;

    location / {
        content_by_lua_block {
            local prometheus = require("apisix.plugins.prometheus.exporter")
            prometheus.export_metrics()
        }
    }
}

# Port 9180 serves the Admin API
server {
    listen 0.0.0.0:9180;
    location /apisix/admin {
        content_by_lua_block {
            apisix.http_admin()
        }
    }
}
# Ports 80 and 443 serve normal business requests
server {
    listen 0.0.0.0:80;
    listen 0.0.0.0:443 ssl;
    server_name _;

    location / {
        proxy_pass $upstream_scheme://apisix_backend$upstream_uri;

        access_by_lua_block {
            apisix.http_access_phase()
        }
    }
}

Modifying the Nginx Source Code to Implement Process Isolation

Those familiar with OpenResty will know that OpenResty extends Nginx with an extra privileged agent process (enabled from Lua via process.enable_privileged_agent() in lua-resty-core).

The privileged agent process does not listen on any port and does not serve any external traffic; it is mainly used for scheduled tasks and the like.

What we need is similar: add 1 or more worker processes dedicated to handling APISIX's internal requests.

Nginx uses a multi-process model: the master process calls bind and listen to create the listening sockets, and the worker processes created by fork inherit copies of these listening socket descriptors.
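This inheritance is plain POSIX behavior. A minimal standalone C sketch (not Nginx code; error handling omitted) that demonstrates it:

// minimal demo: a socket bound and listened on before fork()
// can be accept()ed in the child, which is exactly how Nginx
// workers inherit the master's listening sockets
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);

    bind(fd, (struct sockaddr *) &addr, sizeof(addr)); // "master" binds and listens
    listen(fd, 128);

    pid_t pid = fork();
    if (pid == 0) { // "worker": the child inherits the listening fd
        int conn = accept(fd, NULL, NULL);
        const char *msg = "handled by worker\n";
        write(conn, msg, strlen(msg));
        close(conn);
        return 0;
    }

    waitpid(pid, NULL, 0); // the parent just waits in this demo
    close(fd);
    return 0;
}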

Pseudocode for how the Nginx source creates the worker child processes:

void
ngx_master_process_cycle(ngx_cycle_t *cycle) {
    ngx_setproctitle("master process");
    ngx_start_worker_processes()
        for (i = 0; i < n; i++) { // create one child process per CPU core
            ngx_spawn_process(i, "worker process");
                pid = fork();
                ngx_worker_process_cycle()
                    ngx_setproctitle("worker process")
                    for(;;) { // infinite loop of the worker child process
                        // ...
                    }
        }

    for(;;) {
        // ... infinite loop of the master process
    }
}

The change we need to make is to start 1 (or N) extra child processes in this for loop, dedicated to handling requests on specific ports.

The demo below starts 1 extra worker process as an example. Modify the logic of ngx_start_worker_processes as follows: start one additional worker process and name it "isolation process" to mark it as the internal isolation process.

static void
ngx_start_worker_processes(ngx_cycle_t *cycle, ngx_int_t n, ngx_int_t type)
{
    ngx_int_t  i;
    // ...
    for (i = 0; i < n + 1; i++) { // n changed to n + 1 to start one extra process

        if (i == 0) { // use the first child in the group as the isolation process
            ngx_spawn_process(cycle, ngx_worker_process_cycle,
                              (void *) (intptr_t) i, "isolation process", type);
        } else {
            ngx_spawn_process(cycle, ngx_worker_process_cycle,
                              (void *) (intptr_t) i, "worker process", type);
        }
    }
    // ...
}

Next, in the logic of ngx_worker_process_cycle, handle worker 0 specially. The demo uses 18080, 18081, and 18082 as the isolation ports.

static void
ngx_worker_process_cycle(ngx_cycle_t *cycle, void *data)
{
    ngx_int_t worker = (intptr_t) data;
    
    int ports[3];
    ports[0] = 18080;
    ports[1] = 18081;
    ports[2] = 18082; 
    ngx_worker_process_init(cycle, worker);

    if (worker == 0) { // worker 0 becomes the isolation process
        ngx_setproctitle("isolation process");
        ngx_close_not_isolation_listening_sockets(cycle, ports, 3);
    } else { // every other worker stays a normal worker
        ngx_setproctitle("worker process");
        ngx_close_isolation_listening_sockets(cycle, ports, 3);
    }
    // ... the rest of the worker cycle (the event loop) is unchanged
}

Two new functions are added here:

  • ngx_close_not_isolation_listening_sockets: keeps only the listeners on the isolation ports and removes all other listeners; used by the isolation process
  • ngx_close_isolation_listening_sockets: closes the listeners on the isolation ports and keeps only the normal business listeners; used by the normal workers
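
Both functions rely on a small helper, check_isolation_port, which is referenced in the code below but not shown; a minimal sketch of it, under the obvious semantics, looks like this:

// returns 1 if port is one of the isolation ports, 0 otherwise
static int
check_isolation_port(in_port_t port, int isolation_ports[], int port_num)
{
    for (int i = 0; i < port_num; i++) {
        if (port == isolation_ports[i]) {
            return 1;
        }
    }
    return 0;
}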

A trimmed-down version of ngx_close_not_isolation_listening_sockets:

// used in isolation process
void
ngx_close_not_isolation_listening_sockets(ngx_cycle_t *cycle, int isolation_ports[], int port_num)
{
    ngx_connection_t  *c;
    int port_match = 0;
    ngx_listening_t* ls = cycle->listening.elts;
    for (int i = 0; i < cycle->listening.nelts; i++) {

        c = ls[i].connection;
        // get the port number from the sockaddr structure
        in_port_t port = ngx_inet_get_port(ls[i].sockaddr);
        // check whether this port is one of the isolation ports
        int is_isolation_port = check_isolation_port(port, isolation_ports, port_num);

        // if this is not an isolation port, stop handling its listen events
        if (c && !is_isolation_port) {
            // remove the read event (via epoll_ctl under the hood)
            ngx_del_event(c->read, NGX_READ_EVENT, 0);
            ngx_free_connection(c);
            c->fd = (ngx_socket_t) -1;
        }

        if (!is_isolation_port) {
            port_match++;
            ngx_close_socket(ls[i].fd); // close the fd
            ls[i].fd = (ngx_socket_t) -1;
        }
    }
    cycle->listening.nelts -= port_match;
}

The counterpart, ngx_close_isolation_listening_sockets, closes all isolation ports and keeps only the normal business listeners. The simplified code is as follows.

void
ngx_close_isolation_listening_sockets(ngx_cycle_t *cycle, int isolation_ports[], int port_num)
{
    ngx_connection_t  *c;
    int port_match;

    port_match = 0;
    ngx_listening_t   * ls = cycle->listening.elts;

    for (int i = 0; i < cycle->listening.nelts; i++) {
        c = ls[i].connection;
        in_port_t port = ngx_inet_get_port(ls[i].sockaddr);
        int is_isolation_port = check_isolation_port(port, isolation_ports, port_num);

        // if this is an isolation port, stop listening on it
        if (c && is_isolation_port) {
            ngx_del_event(c->read, NGX_READ_EVENT, 0);
            ngx_free_connection(c);
            c->fd = (ngx_socket_t) -1;
        }

        if (is_isolation_port) {
            port_match++;
            ngx_close_socket(ls[i].fd); // close the fd
            ls[i].fd = (ngx_socket_t) -1;
        }
    }
    cycle->listening.nelts -= port_match;
}

With this, we have implemented port-based process isolation in Nginx.

Verifying the Effect

For verification we use ports 18080–18082 as the isolation ports and the other ports as normal business ports. To simulate requests that consume a lot of CPU, we use Lua to compute sqrt many times, which also makes it easier to observe how Nginx balances load across its workers.

server {
    listen 18080; # 18081 and 18082 are configured the same way
    server_name localhost;

    location / {
        content_by_lua_block {
             local sum = 0;
             for i = 1,10000000,1 do
                sum = sum + math.sqrt(i)
             end
             ngx.say(sum)
        }
    }
}

server {
    listen 28080;
    server_name localhost;

    location / {
        content_by_lua_block {
             local sum = 0;
             for i = 1,10000000,1 do
                sum = sum + math.sqrt(i)
             end
             ngx.say(sum)
        }
    }
}

First, let's record the current worker processes.

You can see that 1 internal isolation worker process (pid=3355) and 4 normal worker processes (pid=3356–3359) have been started.

Next, we can check the port listeners to confirm that our change took effect.
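
For example, which process listens on which port can be checked with:

netstat -tlnp | grep nginx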

You can see that the isolation process 3355 listens on 18080, 18081, and 18082, while the normal processes such as 3356 listen on ports 20880 and 20881.

Use ab to send requests to port 18080 and see whether only process 3355's CPU gets saturated.

ab -n 10000 -c 10 localhost:18080

top -p 3355,3356,3357,3358,3359

You can see that only 3355, the isolation process, is saturated.

Next, send requests to a non-isolation port and check that only the other four worker processes get saturated.

ab -n 10000 -c 10 localhost:28080

top -p 3355,3356,3357,3358,3359

As expected, only the 4 normal worker processes (pid=3356–3359) are saturated, and 3355's CPU usage stays at 0.

At this point, we have implemented a port-based process isolation scheme by modifying the Nginx source code. The port numbers in this demo are hard-coded; in actual use they are passed in from Lua code.

init_by_lua_block {
    local process = require "ngx.process"

    local ports = {18080, 18081, 18082}
    local ok, err = process.enable_isolation_process(ports)
    if not ok then
       ngx.log(ngx.ERR, "enable_isolation_process failed: ", err)
       return
    else
       ngx.log(ngx.ERR, "enable_isolation_process succeeded")
    end
}

The ports need to be passed from Lua into OpenResty via FFI. Since that is not the focus of this article, I won't expand on it here.
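Roughly, the C side can expose a setter that records the ports for the modified worker-startup code to consult; the sketch below is hypothetical (the function name and globals are illustrative, not the actual patch):

// Hypothetical FFI entry point: records the isolation ports in globals
// that the modified ngx_worker_process_cycle() reads instead of a
// hard-coded array. Names here are illustrative only.
static int isolation_ports[16];
static int isolation_port_num = 0;

int
ngx_http_lua_ffi_enable_isolation_process(const int *ports, int port_num)
{
    if (ports == NULL || port_num <= 0 || port_num > 16) {
        return -1; // surfaced to Lua as ok = false
    }

    for (int i = 0; i < port_num; i++) {
        isolation_ports[i] = ports[i];
    }
    isolation_port_num = port_num;

    return 0; // success
}

The Lua wrapper process.enable_isolation_process would then declare this symbol with ffi.cdef and pass the ports as a C int array.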

Postscript

This solution is a bit of a hack. It solves our current problem quite well, but it has a cost: you need to maintain your own OpenResty code branch. If you enjoy tinkering, or you really need this feature, give it a try.

The changes above are based on my rather shallow understanding of the Nginx source code. If anything here is used improperly, feedback is welcome.
