nginx proxy: worked examples in detail

The ngx_http_upstream_module module

     The ngx_http_upstream_module module defines groups of servers that can be referenced by the proxy_pass, fastcgi_pass, uwsgi_pass, scgi_pass, and memcached_pass directives.
    
1. upstream name { ... }
    Defines a group of backend servers and introduces a new configuration context; Context: http
    
    upstream httpdsrvs {
        server ...
        server...
        ...
    }
    
2. server address [parameters];
    Defines a server member inside the upstream context, together with its parameters; Context: upstream
    
    Formats for address:
        unix:/PATH/TO/SOME_SOCK_FILE
        IP[:PORT]
        HOSTNAME[:PORT]
        
    parameters:
        weight=number
            Scheduling weight; defaults to 1;
        max_fails=number
            Maximum number of failed attempts; once this count is exceeded, the server is marked unavailable;
        fail_timeout=time
            The window during which the max_fails failures are counted, and also how long the server is then considered unavailable;
        max_conns
            Maximum number of concurrent connections to this server;
        backup
            Marks the server as a "backup": it receives requests only when all primary servers are unavailable;
        down
            Marks the server as permanently unavailable;

      Example:
      server 192.168.10.11:80 weight=3 max_conns=10;
      server 192.168.10.12:80 max_fails=3 fail_timeout=10;
      server 127.0.0.1:80 backup;


3. least_conn;
    Least-connections scheduling algorithm; when the servers carry different weights it behaves as weighted least connections (wlc);
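As an illustration, enabling least-connections scheduling only takes one extra directive inside the upstream block; a minimal sketch reusing the httpdsrvs group name from above (the addresses are placeholders):

```nginx
upstream httpdsrvs {
    least_conn;                        # pick the server with the fewest active connections
    server 192.168.10.11:80 weight=2;  # with unequal weights this acts as wlc
    server 192.168.10.12:80;
}
```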
    
4. ip_hash;
    Source-address hashing: requests from the same client address are always sent to the same server;
    
5. hash key [consistent];
    Schedules requests using a hash table over the specified key; the key may be literal text, a variable, or a combination of the two;
    
    Purpose: classify requests so that requests in the same class are sent to the same upstream server;
    
    If the consistent parameter is specified the ketama consistent hashing method will be used instead.
        
    Examples:
        hash $request_uri consistent;
        hash $remote_addr;
        
6. keepalive connections;
    The number of idle keepalive connections to upstream servers preserved per worker process;
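Note that for keepalive to take effect with proxy_pass, the proxied connection has to use HTTP/1.1 and the Connection header must be cleared; a configuration sketch (group name and addresses reused from the examples above):

```nginx
upstream httpdsrvs {
    server 192.168.10.11:80;
    server 192.168.10.12:80;
    keepalive 32;                        # idle connections kept per worker
}
server {
    location / {
        proxy_pass http://httpdsrvs;
        proxy_http_version 1.1;          # keepalive requires HTTP/1.1 upstream
        proxy_set_header Connection "";  # drop the client's "Connection: close"
    }
}
```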

 Lab environment:
Upstream Server1: node1, 192.168.10.11
Upstream Server2: node2, 192.168.10.12
nginx proxy server: node3, external NIC ens192: 192.168.170.10, internal NIC ens224: 192.168.10.254

Lab topology: clients reach node3 (the nginx proxy) on its external NIC; node3 forwards requests to node1 and node2 over the internal network.

1. Make sure every host's hostname can be resolved, NTP time is synchronized, and SELinux and the firewall are disabled on all hosts.
2. Make sure the client resolves the site name locally; using node2 as the example:

[root@node2 ~]# vi /etc/hosts
192.168.170.10 www.mylinux.com

 Example 1: define an upstream group in the configuration file and reference it by name, giving round-robin scheduling through the nginx proxy

[root@node3 ~]# vi /etc/nginx/conf.d/ilinx.conf
server {
	listen 80;
	server_name www.mylinux.com;
	location / {
		root /data/nginx/html;
		proxy_pass http://backend1;
	}
}
upstream backend1 {
    server 192.168.10.11:80;
    server 192.168.10.12:80;
}
[root@node3 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@node3 ~]# nginx -s reload
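In practice the proxy usually also forwards the original Host header and the client address, so the backends log the real client instead of the proxy's internal address 192.168.10.254; a sketch of how the location above could be extended (the X-* header names are conventional, not required):

```nginx
location / {
    proxy_pass http://backend1;
    proxy_set_header Host $host;                                  # preserve the requested host
    proxy_set_header X-Real-IP $remote_addr;                      # real client address
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;  # append to any existing chain
}
```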

Test from the client: the responses show round-robin scheduling

[root@node2 ~]#  for i in {1..10}; do curl http://www.mylinux.com/index.html; done
<h1>Upsteam Server1</h1>
<h1>Upsteam Server1</h1>
<h1>Upsteam Server2</h1>
<h1>Upsteam Server1</h1>
<h1>Upsteam Server2</h1>
<h1>Upsteam Server1</h1>
<h1>Upsteam Server1</h1>
<h1>Upsteam Server2</h1>
<h1>Upsteam Server1</h1>
<h1>Upsteam Server2</h1>
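A quick way to see the distribution is to tally the responses instead of eyeballing them. On the live setup you would pipe the curl loop through sort | uniq -c; the sketch below demonstrates the same tally on a captured sample of responses, so it runs without the lab network:

```shell
# On the live setup (assumes www.mylinux.com resolves to the proxy):
#   for i in {1..20}; do curl -s http://www.mylinux.com/index.html; done | sort | uniq -c
# Demonstration of the same tally on a captured sample:
sample='<h1>Upsteam Server1</h1>
<h1>Upsteam Server2</h1>
<h1>Upsteam Server1</h1>
<h1>Upsteam Server2</h1>'
tally=$(printf '%s\n' "$sample" | sort | uniq -c | sort -rn)
printf '%s\n' "$tally"
```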

Example 2: health-state checking

Modify the configuration file: set a 3:1 weight ratio, mark node2 unavailable for 10 s after 3 failed attempts, and add a backup server on the proxy itself

[root@node3 ~]# vi /etc/nginx/conf.d/nginx.conf
upstream backend1 {
    server 192.168.10.11:80 weight=3; 
    server 192.168.10.12:80 max_fails=3 fail_timeout=10;
    server 127.0.0.1:80 backup;
}
Configure the maintenance test page served by the backup server (on the proxy, node3):
[root@node3 ~]# vi /data/nginx/html/index.html
<h1>Maintance Time</h1>
[root@node3 ~]# nginx -s reload
Test from the client: the 3:1 weight ratio is visible
[root@node2 ~]# for i in {1..10}; do curl http://www.mylinux.com/index.html; done
<h1>Upsteam Server1</h1>
<h1>Upsteam Server1</h1>
<h1>Upsteam Server2</h1>
<h1>Upsteam Server1</h1>
<h1>Upsteam Server1</h1>
<h1>Maintance Time</h1>
<h1>Upsteam Server1</h1>
<h1>Upsteam Server1</h1>
<h1>Upsteam Server2</h1>
<h1>Upsteam Server1</h1>

Example 3: health-state checking with the backup server as the fallback

Verify the same configuration and reload the service
# vi /etc/nginx/conf.d/nginx.conf
upstream backend1 {
    server 192.168.10.11:80 weight=3; 
    server 192.168.10.12:80 max_fails=3 fail_timeout=10;
    server 127.0.0.1:80 backup;
}
[root@node3 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@node3 ~]# nginx -s reload

Test while all upstream servers are healthy:
[root@node4 ~]# for i in {1..10}; do curl http://www.mylinux.com/index.html; done
<h1>Upsteam Server1</h1>
<h1>Upsteam Server1</h1>
<h1>Upsteam Server1</h1>
<h1>Upsteam Server1</h1>
<h1>Upsteam Server2</h1>
<h1>Upsteam Server1</h1>
<h1>Upsteam Server1</h1>
<h1>Upsteam Server1</h1>
<h1>Upsteam Server2</h1>
<h1>Upsteam Server1</h1>
[root@node4 ~]#

Stop the http service on both upstream servers:
[root@node1 ~]# systemctl stop httpd
[root@node2 ~]# systemctl stop httpd

Test from the client: with all backends down, every request is answered by the backup (maintenance) page
[root@node2 ~]# for i in {1..10}; do curl http://www.mylinux.com/index.html; done
<h1>Maintance Time</h1>
<h1>Maintance Time</h1>
<h1>Maintance Time</h1>
<h1>Maintance Time</h1>
<h1>Maintance Time</h1>
<h1>Maintance Time</h1>
<h1>Maintance Time</h1>
<h1>Maintance Time</h1>
<h1>Maintance Time</h1>
<h1>Maintance Time</h1>

Example 4: configure hash binding based on the client IP address (ip_hash)

[root@node3 ~]# vi /etc/nginx/conf.d/nginx.conf
upstream backend1 {
    ip_hash;
    server 192.168.10.11:80 weight=3; 
    server 192.168.10.12:80 max_fails=3 fail_timeout=10;
}
[root@node3 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@node3 ~]# nginx -s reload

Test from the client: with source-address hashing, all requests from this client are sent to the same upstream server (Server2 here)

[root@node2 ~]# for i in {1..10}; do curl http://www.mylinux.com/index.html; done
<h1>Upsteam Server2</h1>
<h1>Upsteam Server2</h1>
<h1>Upsteam Server2</h1>
<h1>Upsteam Server2</h1>
<h1>Upsteam Server2</h1>
<h1>Upsteam Server2</h1>
<h1>Upsteam Server2</h1>
<h1>Upsteam Server2</h1>
<h1>Upsteam Server2</h1>
<h1>Upsteam Server2</h1>

Example 5: binding based on the client address using the generic hash directive (hash $remote_addr), so all requests from the same client address are assigned to the same server. (Note the key here is $remote_addr, not the URI; URI-based binding is shown in the next example.)

[root@node3 ~]# vi /etc/nginx/conf.d/nginx.conf
upstream backend1 {
    hash $remote_addr;
    server 192.168.10.11:80 weight=3; 
    server 192.168.10.12:80 max_fails=3 fail_timeout=10;
}
[root@node3 ~]# nginx -t
[root@node3 ~]# nginx -s reload

Test from the client: all requests from the same client address land on the same server.

[root@node2 ~]# for i in {1..10}; do curl http://www.mylinux.com/index.html; done
<h1>Upsteam Server1</h1>
<h1>Upsteam Server1</h1>
<h1>Upsteam Server1</h1>
<h1>Upsteam Server1</h1>
<h1>Upsteam Server1</h1>
<h1>Upsteam Server1</h1>
<h1>Upsteam Server1</h1>
<h1>Upsteam Server1</h1>
<h1>Upsteam Server1</h1>
<h1>Upsteam Server1</h1>

Example 6: bind requests for the same page (URI) to the same backend host

[root@node3 ~]# vi /etc/nginx/nginx.conf
upstream backend1 {
    hash $request_uri consistent;
    server 192.168.10.11:80 weight=3; 
    server 192.168.10.12:80 max_fails=3 fail_timeout=10;
    #server 127.0.0.1:80 backup;
}
[root@node3 ~]# nginx -t
[root@node3 ~]# nginx -s reload

 Upstream server1

配置默认测试页,以便于客户端区分

[root@node1 ~]# for i in {1..10}; do echo "test page $i on us1" > /var/www/html/test$i.html; done

 Upstream server2

配置默认测试页,以便于客户端区分

[root@node2 ~]# for i in {1..10}; do echo "test page $i on us2" > /var/www/html/test$i.html; done

Test from the client: requests for the same page are always bound to the same backend host.

[root@node4 ~]# for i in {1..10}; do for j in {1..3}; do curl http://www.mylinux.com/test$i.html; done; done
test page 1 on us1
test page 1 on us1
test page 1 on us1
test page 2 on us1
test page 2 on us1
test page 2 on us1
test page 3 on us1
test page 3 on us1
test page 3 on us1
test page 4 on us1
test page 4 on us1
test page 4 on us1
test page 5 on us2
test page 5 on us2
test page 5 on us2
test page 6 on us2
test page 6 on us2
test page 6 on us2
test page 7 on us2
test page 7 on us2
test page 7 on us2
test page 8 on us1
test page 8 on us1
test page 8 on us1
test page 9 on us1
test page 9 on us1
test page 9 on us1
test page 10 on us1
test page 10 on us1
test page 10 on us1

Example 7: configure nginx proxy to do round-robin scheduling for the ssh service

[root@node3 ~]# vi /etc/nginx/nginx.conf
stream {
   upstream sshsrvs {
        server 192.168.10.11:22;
        server 192.168.10.12:22;
        }
   server {
        listen 2223;
        proxy_pass sshsrvs;
        proxy_timeout 30;
        }
}
[root@node3 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@node3 ~]# 
[root@node3 ~]# systemctl restart nginx
[root@node3 ~]#
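Note that stream is a top-level context, a sibling of http, not nested inside it; if nginx -t complains about directive placement, the overall nginx.conf should follow this skeleton (the include path is the stock packaging default and may differ):

```nginx
# /etc/nginx/nginx.conf (skeleton)
http {
    include /etc/nginx/conf.d/*.conf;   # HTTP proxy config from the earlier examples
}
stream {
    upstream sshsrvs {
        server 192.168.10.11:22;
        server 192.168.10.12:22;
    }
    server {
        listen 2223;
        proxy_pass sshsrvs;
        proxy_timeout 30;
    }
}
```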

Client node4 tests connecting over ssh:

[root@node4 ~]# ssh -p 2223 [email protected]
[email protected]'s password: 
Last login: Sat Jan 12 10:06:09 2019 from 172.17.1.35
[root@node2 ~]# 
[root@node2 ~]# ifconfig
ens192: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.170.10  netmask 255.255.255.0  broadcast 192.168.170.255
        inet6 fe80::c248:63a1:8980:3a52  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::e4f9:9271:bee2:4b74  prefixlen 64  scopeid 0x20<link>
        ether 00:50:56:a3:d3:9d  txqueuelen 1000  (Ethernet)
        RX packets 114509  bytes 62537323 (59.6 MiB)
        RX errors 0  dropped 752  overruns 0  frame 0
        TX packets 26061  bytes 2084601 (1.9 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens224: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.10.12  netmask 255.255.255.0  broadcast 192.168.10.255
        inet6 fe80::7939:f2e4:4478:2370  prefixlen 64  scopeid 0x20<link>
        ether 00:50:56:a3:6b:da  txqueuelen 1000  (Ethernet)
        RX packets 76127  bytes 17390736 (16.5 MiB)
        RX errors 0  dropped 751  overruns 0  frame 0
        TX packets 2759  bytes 421565 (411.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Client node5 tests connecting over ssh:

[root@node5 ~]# ssh -p 2223 [email protected]
The authenticity of host '[192.168.170.9]:2223 ([192.168.170.9]:2223)' can't be established.
ECDSA key fingerprint is SHA256:ph7qUGHxmdPtYkXCbxolOLOERtICqxvsn5vNVWo/tGg.
ECDSA key fingerprint is MD5:5a:80:04:d4:8e:6d:f5:15:2f:da:18:4e:45:9a:f4:2f.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[192.168.170.9]:2223' (ECDSA) to the list of known hosts.
[email protected]'s password: 
Last login: Sun Aug 12 19:48:02 2018 from gateway
[root@node1 ~]# 
[root@node1 ~]# ifconfig
ens192: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.170.11  netmask 255.255.255.0  broadcast 192.168.170.255
        inet6 fe80::250:56ff:fea3:50cf  prefixlen 64  scopeid 0x20<link>
        ether 00:50:56:a3:50:cf  txqueuelen 1000  (Ethernet)
        RX packets 5282795  bytes 1506587961 (1.4 GiB)
        RX errors 0  dropped 1565  overruns 0  frame 0
        TX packets 4000414  bytes 4911241147 (4.5 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens224: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.10.11  netmask 255.255.255.0  broadcast 192.168.10.255
        inet6 fe80::250:56ff:fea3:da8c  prefixlen 64  scopeid 0x20<link>
        ether 00:50:56:a3:da:8c  txqueuelen 1000  (Ethernet)
        RX packets 215213  bytes 22356712 (21.3 MiB)
        RX errors 0  dropped 1561  overruns 0  frame 0
        TX packets 6206  bytes 260964 (254.8 KiB)

As the output shows, client ssh connections are scheduled round-robin across the backends. That covers the basic usage of nginx as a proxy.

Reprinted from blog.csdn.net/qq_22193519/article/details/90049540