Nginx + Keepalived Layer-7 Load Balancing

==================================================

Nginx + Keepalived layer-7 load balancing: lab guide

Nginx implements load balancing through its upstream module.

Load-balancing algorithms supported by upstream

Host list:

Hostname         IP              OS          Role
proxy-master     172.16.147.155  CentOS 7.5  primary load balancer
proxy-slave      172.16.147.156  CentOS 7.5  hot standby load balancer
real-server-1    172.16.147.153  CentOS 7.5  web1
real-server-2    172.16.147.154  CentOS 7.5  web2
VIP for proxies  172.16.147.100
Round-robin (default): a per-server weight can be set; the higher the weight, the more often that server is scheduled (see the sketch after this list).
ip_hash: provides session persistence by always scheduling the same client IP to the same backend server, which avoids session-sharing problems; it is not combined with weight.
fair: schedules by requested page size and load time; requires the third-party upstream_fair module.
url_hash: schedules by a hash of the requested URL so each URL always reaches the same server; requires the third-party url_hash module.
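As a minimal sketch (the upstream name "app" is hypothetical; the IPs are taken from the host list above), weighted round-robin looks like this, and switching to session persistence only means swapping the weights for ip_hash:

upstream app {
    # default round-robin: with these weights, .154 is scheduled
    # roughly twice as often as .153
    server 172.16.147.153:80 weight=1;
    server 172.16.147.154:80 weight=2;
    # for session persistence instead, drop the weights and add:
    # ip_hash;
}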
Install and configure nginx. On all machines, disable the firewall and SELinux first.
[root@proxy-master ~]# systemctl stop firewalld         # stop the firewall
[root@proxy-master ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux        # disable SELinux permanently (takes effect after reboot)
[root@proxy-master ~]# setenforce 0                # disable SELinux for the current session

Install nginx on all four machines.
[root@proxy-master ~]# cd /etc/yum.repos.d/
[root@proxy-master yum.repos.d]# vim nginx.repo
[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=0
enabled=1
[root@proxy-master yum.repos.d]# yum install yum-utils -y
[root@proxy-master yum.repos.d]# yum install nginx -y
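As a quick sanity check (not part of the original steps), one can start nginx and probe it locally on each machine:

[root@proxy-master yum.repos.d]# systemctl enable --now nginx
[root@proxy-master yum.repos.d]# curl -I http://localhost     # expect "HTTP/1.1 200 OK"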

Same type of service


I. Implementation process
1. Choose two nginx servers to act as proxy servers.
2. Install keepalived on both proxy servers to provide high availability and create the VIP.
3. Configure nginx load balancing.
# the configuration is identical on both proxies
[root@proxy-master ~]# vim /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
    worker_connections 1024;
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;
    include /etc/nginx/conf.d/*.conf;

    upstream backend {
        server 172.16.147.154:80 weight=1 max_fails=3 fail_timeout=20s;
        server 172.16.147.153:80 weight=1 max_fails=3 fail_timeout=20s;
    }

    server {
        listen       80;
        server_name  localhost;

        location / {
            proxy_pass http://backend;
            proxy_set_header Host $host:$proxy_port;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }
}
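A hedged way to validate the file and watch the round-robin in action, assuming the two real servers return distinguishable pages:

[root@proxy-master ~]# nginx -t                    # check the syntax first
[root@proxy-master ~]# systemctl restart nginx
[root@proxy-master ~]# for i in $(seq 4); do curl -s http://172.16.147.155/; done
# with weight=1 on both backends the responses should alternate between web1 and web2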

Different types of services

Scheduling to different groups of backend servers
Site-partition-based scheduling
=================================================================================

Topology

							[vip: 20.20.20.20]

						[LB1 Nginx]		[LB2 Nginx]
					    192.168.1.2		192.168.1.3

		[index]		[milis]		 [videos]	   [images]  	  [news]
		 1.11		 1.21		   1.31			  1.41		   1.51
		 1.12		 1.22		   1.32			  1.42		   1.52
		 1.13		 1.23		   1.33			  1.43		   1.53
		 ...		 ...		    ...			  ...		    ...
		 /web     /web/milis    /web/videos     /web/images   /web/news
	  index.html  index.html     index.html      index.html   index.html

I. Implementation process
Scheduling based on site partitions:
http {
    upstream index {
        server 192.168.1.11:80 weight=1 max_fails=2 fail_timeout=2;
        server 192.168.1.12:80 weight=2 max_fails=2 fail_timeout=2;
        server 192.168.1.13:80 weight=2 max_fails=2 fail_timeout=2;
    }

    upstream milis {
        server 192.168.1.21:80 weight=1 max_fails=2 fail_timeout=2;
        server 192.168.1.22:80 weight=2 max_fails=2 fail_timeout=2;
        server 192.168.1.23:80 weight=2 max_fails=2 fail_timeout=2;
    }

    upstream videos {
        server 192.168.1.31:80 weight=1 max_fails=2 fail_timeout=2;
        server 192.168.1.32:80 weight=2 max_fails=2 fail_timeout=2;
        server 192.168.1.33:80 weight=2 max_fails=2 fail_timeout=2;
    }

    upstream images {
        server 192.168.1.41:80 weight=1 max_fails=2 fail_timeout=2;
        server 192.168.1.42:80 weight=2 max_fails=2 fail_timeout=2;
        server 192.168.1.43:80 weight=2 max_fails=2 fail_timeout=2;
    }

    upstream news {
        server 192.168.1.51:80 weight=1 max_fails=2 fail_timeout=2;
        server 192.168.1.52:80 weight=2 max_fails=2 fail_timeout=2;
        server 192.168.1.53:80 weight=2 max_fails=2 fail_timeout=2;
    }

    server {
        location / {
            proxy_pass http://index;
        }

        location /news {
            proxy_pass http://news;
        }

        location /milis {
            proxy_pass http://milis;
        }

        location ~* \.(wmv|mp4|rmvb)$ {
            proxy_pass http://videos;
        }

        location ~* \.(png|gif|jpg)$ {
            proxy_pass http://images;
        }
    }
}
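To exercise each location block, one might request URLs that hit the different upstream groups (the file names are illustrative; the LB address 192.168.1.2 comes from the topology above):

curl http://192.168.1.2/                  # location /           -> index group
curl http://192.168.1.2/news/index.html   # location /news       -> news group
curl http://192.168.1.2/milis/index.html  # location /milis      -> milis group
curl http://192.168.1.2/demo.png          # regex on image types -> images group
curl http://192.168.1.2/demo.mp4          # regex on video types -> videos group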
 

Keepalived provides HA for the schedulers

Note: both the master and the backup scheduler must be able to schedule normally on their own.
1. Install the software on the master and backup schedulers
[root@proxy-master ~]# yum install -y keepalived
[root@proxy-slave ~]# yum install -y keepalived
[root@proxy-master ~]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
[root@proxy-master ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id directory1      # change to directory2 on the backup
}

vrrp_instance VI_1 {
    state MASTER             # master or backup role
    interface ens33          # interface the VIP binds to
    virtual_router_id 80     # must be identical across the cluster
    priority 100             # change to 50 on the backup
    advert_int 1             # advertisement interval, default 1s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.16.147.100/24    # the VIP
    }
}

2. Configure the backup scheduler
[root@proxy-slave ~]# cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
[root@proxy-slave ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id directory2
}

vrrp_instance VI_1 {
    state BACKUP         # this node is the backup
    interface ens33
    nopreempt            # configured on the backup: do not preempt the VIP
    virtual_router_id 80
    priority 50          # lower than the master's 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.16.147.100/24
    }
}
3. Start keepalived (on both master and backup)
[root@proxy-master ~]# systemctl enable keepalived
[root@proxy-master ~]# systemctl start keepalived
[root@proxy-slave ~]# systemctl start keepalived
[root@proxy-master ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:ec:8a:fe brd ff:ff:ff:ff:ff:ff
    inet 172.16.147.155/24 brd 172.16.147.255 scope global noprefixroute dynamic ens33
       valid_lft 1115sec preferred_lft 1115sec
    inet 172.16.147.100/24 scope global secondary ens33
       valid_lft forever preferred_lft forever

At this point:
keepalived can handle a heartbeat (node) failure,
but it cannot detect a failure of the Nginx service itself.
4. Extension: health-checking Nginx on the schedulers (optional; configure on both machines)
Idea:
Have keepalived run an external script at a fixed interval; when Nginx fails, the script stops keepalived on the local machine so the VIP fails over.
(1) The script
[root@proxy-master ~]# vim /etc/keepalived/check_nginx_status.sh
#!/bin/bash
# If the local nginx stops answering HTTP requests, stop keepalived
# so the VIP moves to the backup.
/usr/bin/curl -I http://localhost &>/dev/null
if [ $? -ne 0 ];then
#	/etc/init.d/keepalived stop
	systemctl stop keepalived
fi
[root@proxy-master ~]# chmod a+x /etc/keepalived/check_nginx_status.sh

(2) Use the script from keepalived
! Configuration File for keepalived

global_defs {
   router_id directory1
}

vrrp_script check_nginx {
   script "/etc/keepalived/check_nginx_status.sh"
   interval 5               # run the check every 5 seconds
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 80
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.16.147.100/24    # the lab's VIP
    }
    track_script {
        check_nginx
    }
}
Note: nginx must be started before keepalived.
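A simple failover drill under the configuration above (VIP and interface names as configured earlier):

[root@proxy-master ~]# systemctl stop nginx                      # simulate an nginx failure
# within ~5 seconds the check script stops keepalived on the master
[root@proxy-master ~]# ip a show ens33 | grep 172.16.147.100     # VIP should be gone here
[root@proxy-slave ~]# ip a show ens33 | grep 172.16.147.100      # ...and present on the backup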

Nginx + Keepalived layer-7 load balancing: lab walkthrough

All four servers need nginx installed. proxy-master and proxy-slave load-balance the real servers, and proxy-slave is the hot standby for the master.

1. Prepare the Docker containers

ops proxy-master proxy-master
ops proxy-slave proxy-slave
ops real-server-1 real-server-1
ops real-server-2 real-server-2

2. Configure real-server-1 and real-server-2

Just publish a different web page on each so the two backends can be told apart, e.g. as sketched below.
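The original screenshots are omitted here; a minimal stand-in (assuming the nginx.org package's default document root /usr/share/nginx/html) would be:

[root@real-server-1 /]# echo "real-server-1" > /usr/share/nginx/html/index.html
[root@real-server-2 /]# echo "real-server-2" > /usr/share/nginx/html/index.html
# nginx serves the updated static file immediately; no reload is needed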


3. Configure proxy-master and proxy-slave

Load-balancing configuration:
[root@proxy-master /]# vim /etc/nginx/nginx.conf
(the upstream/proxy_pass configuration is the same as shown in the first section above)

High-availability configuration
proxy-master
[root@proxy-master /]# yum -y install keepalived
[root@proxy-master /]# cat /etc/keepalived/keepalived.conf 
! Configuration File for keepalived

# Delete or back up (.bak) the entire original configuration file first. A good habit is to
# remove all comments once the configuration works, since not every config format uses "#"
# for comments; they are kept here as working notes.
# Global definitions: router_id is arbitrary, it only has to differ between the two HA nodes.
global_defs {
   router_id directory1      # change to directory2 on the backup
}

# VRRP instance
vrrp_instance VI_1 {
    state MASTER             # master or backup role
    interface ens33          # interface the VIP binds to
    virtual_router_id 80     # must be identical across the cluster
    priority 100             # master is 100; change to 50 on the backup
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.17.0.10/16       # the VIP (on the Docker bridge network)
    }
}

proxy-slave
[root@proxy-slave /]# yum -y install keepalived
[root@proxy-slave /]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id directory2
}

vrrp_instance VI_1 {
    state BACKUP         # this node is the backup
    interface ens33
    nopreempt            # configured on the backup: do not preempt the VIP
    virtual_router_id 80
    priority 50          # lower than the master's 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.17.0.10/16   # must match the master's VIP
    }
}

At this point the configuration is complete, but the Docker containers are missing the ipvs kernel module that this setup relies on, so the experiment has to fall back to KVM or VMware virtual machines.

4. Supplement: using an external script

Have keepalived execute an external script at a fixed interval. When the load-balancing Nginx fails (that is, when the proxied service fails), the script triggers the IP failover by stopping keepalived on the local node.

The script and keepalived configuration are identical to those shown in step 4 of the previous section: the check_nginx_status.sh health-check script plus the vrrp_script/track_script blocks in keepalived.conf. As before, nginx must be started before keepalived.


Reprinted from: blog.csdn.net/kakaops_qing/article/details/109134519