Nginx passive and active health checks

1. Passive health check

Nginx ships with the ngx_http_upstream_module, which provides basic passive health checks via the following configuration:

upstream cluster {
    server 172.16.0.23:80 max_fails=1 fail_timeout=10s;
    server 172.16.0.24:80 max_fails=1 fail_timeout=10s;
    # max_fails=1 and fail_timeout=10s mean: within one 10s window, a single
    # failed connection marks the node as unavailable. After the next cycle
    # (also fail_timeout) the node receives a request again, which determines
    # whether the connection now succeeds.
}

server {
    listen 80;
    server_name xxxxxxx.com;
    location / {
        proxy_pass http://cluster;
    }
}

Nginx only probes a back-end node when a request actually arrives. If the node happens to fail on that request, Nginx still forwards the request to the failed node first, and only then re-forwards it to a healthy node for processing. The request itself therefore still completes normally, but efficiency suffers because of the extra forwarding hop, and the built-in module provides no early warning.
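Which failures count toward max_fails, and how long nginx waits before giving up on a node, can be tuned with the standard proxy directives. A minimal sketch (IPs and timeout values are illustrative):

```nginx
upstream cluster {
    server 172.16.0.23:80 max_fails=1 fail_timeout=10s;
    server 172.16.0.24:80 max_fails=1 fail_timeout=10s;
}

server {
    listen 80;
    location / {
        proxy_pass http://cluster;
        # Fail over to the next upstream on connect errors, timeouts,
        # and these 5xx responses; such events also count toward max_fails.
        proxy_next_upstream error timeout http_500 http_502 http_503;
        # Bound how long a single failed connect attempt can take.
        proxy_connect_timeout 2s;
    }
}
```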

2. Active health check (requires a third-party module)

With active health checks, nginx periodically probes the servers in the back-end list on its own initiative. When a server becomes abnormal, it is removed from the healthy list; when a server is found to have recovered, it is added back to the healthy list. Taobao has open-sourced an implementation, the nginx_upstream_check_module module.
Official documentation: http://tengine.taobao.org/document_cn/http_upstream_check_cn.html

http {
    upstream cluster1 {
        # simple round-robin
        server 192.168.0.1:80;
        server 192.168.0.2:80;

        check interval=3000 rise=2 fall=5 timeout=1000 type=http;
        check_http_send "HEAD / HTTP/1.0\r\n\r\n";
        check_http_expect_alive http_2xx http_3xx;
    }

    upstream cluster2 {
        # simple round-robin
        server 192.168.0.3:80;
        server 192.168.0.4:80;

        check interval=3000 rise=2 fall=5 timeout=1000 type=http;
        check_keepalive_requests 100;
        check_http_send "HEAD / HTTP/1.1\r\nConnection: keep-alive\r\n\r\n";
        check_http_expect_alive http_2xx http_3xx;
    }

    server {
        listen 80;

        location /1 {
            proxy_pass http://cluster1;
        }

        location /2 {
            proxy_pass http://cluster2;
        }

        location /status {
            check_status;

            access_log   off;
            allow SOME.IP.ADD.RESS;
            deny all;
        }
    }
}
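The check directive above is not limited to HTTP probes. According to the Tengine documentation, type can also be tcp, ssl_hello, mysql, or ajp. A plain TCP connect check for a non-HTTP backend might look like this (addresses are illustrative):

```nginx
upstream cluster3 {
    server 192.168.0.5:3306;
    server 192.168.0.6:3306;

    # Probe every 3s; mark the node down after 5 failed TCP connects,
    # and up again after 2 successful ones.
    check interval=3000 rise=2 fall=5 timeout=1000 type=tcp;
}
```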

3. Integrating and deploying the third-party module

3.1 Download the nginx_upstream_check_module module

    # Enter the nginx installation directory
    cd /usr/local/nginx

    # Download the nginx_upstream_check_module module
    wget https://codeload.github.com/yaoweibin/nginx_upstream_check_module/zip/master
    # or: wget https://github.com/yaoweibin/nginx_upstream_check_module/archive/master.zip

    # Unpack
    unzip master

    # Enter the nginx source directory
    cd nginx-1.12

    # Apply the patch (-p0 strips no leading path component, -p1 strips one)
    patch -p1 < ../nginx_upstream_check_module-master/check_1.11.5+.patch

    # nginx -V shows the original configure arguments;
    # re-run configure with the upstream_check module added
    ./configure --prefix=/usr/local/nginx --add-module=../nginx_upstream_check_module-master
    make

    # Test the new binary for problems
    /usr/local/nginx/sbin/nginx -t

Note that nginx_upstream_check_module has nginx version restrictions: for nginx 1.12 or higher, use the check_1.11.5+.patch patch. See https://github.com/yaoweibin/nginx_upstream_check_module for details.

3.2 Modify the configuration file so that the nginx_upstream_check_module module takes effect

http {
    upstream cluster1 {
        # simple round-robin
        server 192.168.0.1:80;
        server 192.168.0.2:80;

        check interval=3000 rise=2 fall=5 timeout=1000 type=http;
        check_http_send "HEAD / HTTP/1.0\r\n\r\n";
        check_http_expect_alive http_2xx http_3xx;
    }

    upstream cluster2 {
        # simple round-robin
        server 192.168.0.3:80;
        server 192.168.0.4:80;

        check interval=3000 rise=2 fall=5 timeout=1000 type=http;
        check_keepalive_requests 100;
        check_http_send "HEAD / HTTP/1.1\r\nConnection: keep-alive\r\n\r\n";
        check_http_expect_alive http_2xx http_3xx;
    }

    server {
        listen 80;

        location /1 {
            proxy_pass http://cluster1;
        }

        location /2 {
            proxy_pass http://cluster2;
        }

        location /status {
            check_status;

            access_log   off;
            allow SOME.IP.ADD.RESS;
            deny all;
        }
    }
}

3.3 Reload nginx

Visit http://<nginx-host>/status to view the status page. Then manually shut down one of the back-end nodes and refresh http://<nginx-host>/status to watch its state change.
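The status page rendered by check_status defaults to HTML. Per the module's documentation it can also emit CSV or JSON, either by giving the directive an argument or with a format query parameter per request. A sketch of the status location with a JSON default:

```nginx
location /status {
    # Default output format; can be overridden per request
    # with /status?format=html|csv|json.
    check_status json;

    access_log off;
    allow SOME.IP.ADD.RESS;
    deny all;
}
```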

 

The module above cannot health-check UDP upstreams when reverse-proxying UDP. Another developer extended nginx_upstream_check_module to implement layer-4 health checks for both TCP and UDP proxying:

https://github.com/zhouchangxun/ngx_healthcheck_module
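Based on that module's README, it adds check support inside the stream block (including a UDP probe type) plus a status handler for the http block. A rough sketch follows; the directive names are taken from the README and should be verified against the module version you actually build:

```nginx
stream {
    upstream udp_backend {
        server 192.168.0.7:53;
        # Layer-4 UDP probe; type=udp is what this module adds
        # beyond nginx_upstream_check_module.
        check interval=3000 rise=2 fall=5 timeout=1000 default_down=true type=udp;
    }

    server {
        listen 53 udp;
        proxy_pass udp_backend;
    }
}

http {
    server {
        listen 80;
        location /status {
            healthcheck_status;
        }
    }
}
```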


Origin www.cnblogs.com/linyouyi/p/11502282.html