Tengine reverse proxy status detection
Install Tengine:
Compile and install:
./configure --prefix=/usr/local/nginx
make && make install
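The two lines above assume the Tengine source tree is already unpacked. A fuller build sketch follows; the version number, download URL, and dependency list are examples only, so adjust them to your environment:

```shell
# Build dependencies (CentOS/yum shown as an example)
yum install -y gcc make pcre-devel zlib-devel openssl-devel
# Version 2.3.3 is only an example; use the release you need
wget http://tengine.taobao.org/download/tengine-2.3.3.tar.gz
tar xzf tengine-2.3.3.tar.gz && cd tengine-2.3.3
./configure --prefix=/usr/local/nginx
make && make install
```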
Configure the upstream servers:
#user nobody;
user nginx nginx;
worker_processes 1;

error_log logs/error.log crit;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#error_log "pipe:rollback logs/error_log interval=1d baknum=7 maxsize=2G";

#pid logs/nginx.pid;

events {
    worker_connections 1024;
}

# load modules compiled as Dynamic Shared Objects (DSO)
#
#dso {
#    load ngx_http_fastcgi_module.so;
#    load ngx_http_rewrite_module.so;
#}

http {
    include mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log logs/access.log main;
    #access_log "pipe:rollback logs/access_log interval=1d baknum=7 maxsize=2G" main;

    sendfile on;
    #tcp_nopush on;

    #keepalive_timeout 0;
    keepalive_timeout 65;

    #gzip on;
    upstream tomcat {
        ip_hash;
        server 192.168.137.201:8080;
        server 192.168.137.202:8080;
        check interval=3000 rise=2 fall=5 timeout=1000 type=http;
        check_http_send "HEAD / HTTP/1.0\r\n\r\n";
        check_http_expect_alive http_2xx http_3xx;
    }

    upstream tomcat-1 {
        ip_hash;
        server 192.168.137.201:8081;
        server 192.168.137.202:8081;
        check interval=3000 rise=2 fall=5 timeout=1000 type=http;  # status detection
        check_http_send "HEAD / HTTP/1.0\r\n\r\n";
        check_http_expect_alive http_2xx http_3xx;
    }

    server {
        listen 80;
        server_name 127.0.0.1;
        index index.jsp index.html;

        location / {
            proxy_pass http://tomcat;  # forward to the backend upstream group
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            client_max_body_size 10m;
            client_body_buffer_size 128k;
            proxy_connect_timeout 90;
            proxy_send_timeout 90;
            proxy_read_timeout 90;
            proxy_buffer_size 4k;
            proxy_buffers 4 32k;
            proxy_busy_buffers_size 64k;
            proxy_temp_file_write_size 64k;
        }
        location /status {
            check_status;
            access_log off;
        }

        location /nginx_status {
            stub_status on;
            access_log off;
        }
    }
    server {
        listen 8000;
        server_name 127.0.0.1;
        index index.jsp index.html;

        location / {
            proxy_pass http://tomcat-1;  # forward to the backend upstream group
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            client_max_body_size 10m;
            client_body_buffer_size 128k;
            proxy_connect_timeout 90;
            proxy_send_timeout 90;
            proxy_read_timeout 90;
            proxy_buffer_size 4k;
            proxy_buffers 4 32k;
            proxy_busy_buffers_size 64k;
            proxy_temp_file_write_size 64k;
        }
    }

    include /usr/local/nginx/conf/vhosts/*.conf;
}
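After saving the configuration, it can be checked and loaded. The paths below assume the --prefix=/usr/local/nginx install used earlier:

```shell
/usr/local/nginx/sbin/nginx -t        # test the configuration syntax
/usr/local/nginx/sbin/nginx           # start (use -s reload if already running)
curl http://127.0.0.1/nginx_status    # stub_status counters
curl http://127.0.0.1/status          # health status page served by check_status
```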
The health check module provides Tengine with active health checking of backend servers.
Before Tengine-1.4.0 this module was not enabled by default; it can be enabled when configuring the build: ./configure --with-http_upstream_check_module
Edit /etc/nginx/nginx.conf
http {
    upstream cluster1 {
        # simple round-robin
        server 192.168.30.116:80;
        #server 192.168.0.2:80;
        check interval=3000 rise=2 fall=5 timeout=1000 type=http;
        check_http_send "HEAD / HTTP/1.0\r\n\r\n";
        check_http_expect_alive http_2xx http_3xx;
    }

    upstream cluster2 {
        # simple round-robin
        server 192.168.30.113:80;
        server 192.168.30.114:80;
        check interval=3000 rise=2 fall=5 timeout=1000 type=http;
        check_keepalive_requests 100;
        check_http_send "HEAD / HTTP/1.1\r\nConnection: keep-alive\r\n\r\n";
        check_http_expect_alive http_2xx http_3xx;
    }

    server {
        listen 80;

        location /1 {
            proxy_pass http://cluster1;
        }

        location /2 {
            proxy_pass http://cluster2;
        }

        location /status {
            check_status;
            access_log off;
            allow SOME.IP.ADD.RESS;
            deny all;
        }
    }
}
The parameters of the check directive mean the following:
interval: the interval, in milliseconds, between health check packets sent to the backend.
fall (fall_count): the server is considered down after fall_count consecutive failed checks.
rise (rise_count): the server is considered up after rise_count consecutive successful checks.
timeout: the timeout, in milliseconds, for a health check request.
default_down: the initial state of the server. If true, the server starts as down and is not considered healthy until the check has succeeded rise_count times; if false, it starts as up. The default is true.
type: the type of health check packet. The following types are currently supported:
tcp: a plain TCP connection; the backend is healthy if the connection succeeds.
ssl_hello: sends an initial SSL hello packet and expects the server's SSL hello packet in reply.
http: sends an HTTP request and judges whether the backend is alive by the status of its response.
mysql: connects to the MySQL server and judges whether the backend is alive by the greeting packet it returns.
ajp: sends an AJP Cping packet to the backend and judges whether it is alive by whether a Cpong packet is received.
port: the port to check on the backend server, which may differ from the port that provides the real service. For example, if the backend serves its application on port 443, you can check the health of port 80 instead. The default is 0, meaning the same port the backend uses for real service. This option appeared in Tengine-1.4.0.
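As a sketch of the port option, a backend whose real service runs on 443 can be probed with a plain TCP connect on port 80 instead (the addresses here are illustrative):

```nginx
upstream secure_app {
    server 192.168.30.115:443;
    # probe port 80 with a TCP connect instead of the service port 443
    check interval=3000 rise=2 fall=5 timeout=1000 type=tcp port=80;
}
```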
check_http_send http_packet:
This directive configures the request sent by the HTTP health check. To reduce the amount of data transferred, the "HEAD" method is recommended.
When using a keep-alive connection for health checks, add a keep-alive request header to this directive, for example "HEAD / HTTP/1.1\r\nConnection: keep-alive\r\n\r\n". Also, when using the "GET" method, the request URI should not be too large, so that the transfer can complete within one check interval; otherwise the health check module will treat it as a backend server or network failure.
check_http_expect_alive: specifies which HTTP response statuses are treated as success. By default, 2xx and 3xx statuses are considered healthy.
check_status:
Displays the health status page of the servers. This directive is configured inside a location block, as in the examples above.
Since Tengine-1.4.0 the format of the page can be configured. Supported formats are html, csv, and json; the default is html.
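For example, a status location that always renders JSON can be declared like this (a minimal sketch):

```nginx
location /status {
    check_status json;   # html (the default), csv, and json are accepted
    access_log off;
}
```

The format can also be chosen per request with a query argument, e.g. /status?format=csv.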