CentOS 7 load balancing configuration in detail: the DR (direct routing) mode

DR (direct routing) is one of the three LVS load balancing modes and the most commonly used one. For an introduction to this mode, see the earlier blog post: LVS load balancing cluster in detail.

The DR mode working diagram is as follows:

[Figure: DR (direct routing) mode working diagram]

The principle of this mode has already been covered in the blog post linked above, so let's go straight to building a DR-mode load balancing cluster.

The environment is as follows:

[Figure: environment topology: LVS scheduler 200.0.0.1, web nodes 200.0.0.2 and 200.0.0.3, cluster VIP 200.0.0.254]

In this environment, the following issues need to be addressed:

1. Configure the VIP on the scheduler and on all web nodes: clients access the VIP (the cluster's virtual IP address, 200.0.0.254). In DR mode the scheduler forwards a request to a web node, and the web node replies to the client directly. If the reply's source address were not 200.0.0.254, the client would see that the packet did not come from the address it contacted and would discard the web server's response. To solve this, the VIP 200.0.0.254 must be configured on a virtual interface on the scheduler and on every web node, and a route must be added so that access to the VIP is restricted to the local host, avoiding communication confusion.

2. Solve the ARP response problem on the web nodes: once 200.0.0.254 is configured on the scheduler and on all web nodes, every node owns that address, so when a client sends an ARP request for 200.0.0.254 each web node will answer it. The client could then reach a web node directly, bypassing the scheduler entirely; the scheduler would serve no purpose and load balancing could not be achieved. The web nodes' ARP responses therefore have to be suppressed, so that only the scheduler answers ARP broadcasts for 200.0.0.254 and the web nodes stay silent.

3. Solve the ICMP redirect optimization problem on the scheduler: the Linux kernel has an ICMP optimization. When a client first visits the scheduler and the scheduler forwards the request to a particular web node, the kernel notices that the client and that web node could communicate directly, and sends the client an ICMP redirect packet telling it to send all subsequent packets for 200.0.0.254 straight to that web node. From then on, every request would go directly to a single web node instead of through the scheduler, and load balancing could not be achieved. The kernel parameters controlling ICMP redirect responses therefore need to be turned off on the scheduler.

The configuration process is as follows:

First, configure the load scheduler:

1. Configure the virtual IP address (VIP):

[root@LVS network-scripts]# cp ifcfg-ens33 ifcfg-ens33:0               # configure the VIP on a virtual interface
[root@LVS network-scripts]# vim ifcfg-ens33:0           # change the following configuration items
           .............
IPADDR=200.0.0.254
NETMASK=255.255.255.0           # the netmask must be specified
NAME=ens33:0              # note: change the interface name
DEVICE=ens33:0
ONBOOT=yes
[root@LVS network-scripts]# systemctl restart network            # restart the network so the changes take effect
[root@LVS network-scripts]# ifconfig        # verify the relevant IPs are configured correctly
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 200.0.0.1  netmask 255.255.255.0  broadcast 200.0.0.255
        inet6 fe80::2e1e:d068:9c41:c688  prefixlen 64  scopeid 0x20<link>
                           ...........................

ens33:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 200.0.0.254  netmask 255.255.255.0  broadcast 200.0.0.255
        ether 00:0c:29:77:2c:03  txqueuelen 1000  (Ethernet)
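
For a quick test before touching the config files, the VIP can also be attached non-persistently with the ip tool; a minimal sketch using the addresses from this environment (the address is lost on reboot):

[root@LVS ~]# ip addr add 200.0.0.254/24 dev ens33 label ens33:0        # temporary VIP, does not survive a reboot
[root@LVS ~]# ip addr show dev ens33        # confirm both 200.0.0.1 and the VIP are present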

2. Adjust the corresponding /proc parameters:

[root@LVS ~]# vim /etc/sysctl.conf             # add the following three lines
                 ................
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens33.send_redirects = 0
[root@LVS ~]# sysctl -p              # reload the configuration
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens33.send_redirects = 0
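
Setting send_redirects to 0 stops the kernel from sending the ICMP redirects described in point 3 above. The live values can also be read back straight from /proc to confirm they took effect:

[root@LVS ~]# cat /proc/sys/net/ipv4/conf/all/send_redirects        # should print 0
0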

3. Configure the load distribution policy:

[root@LVS ~]# modprobe ip_vs         # load the ip_vs kernel module
[root@LVS ~]# yum -y install ipvsadm           # install the ipvsadm tool
[root@LVS ~]# ipvsadm -C              # clear any existing policy
[root@LVS ~]# ipvsadm -A -t 200.0.0.254:80 -s rr        # configure the cluster VIP, then add the node servers
[root@LVS ~]# ipvsadm -a -t 200.0.0.254:80 -r 200.0.0.2:80 -g -w 1
[root@LVS ~]# ipvsadm -a -t 200.0.0.254:80 -r 200.0.0.3:80 -g -w 1
[root@LVS ~]# ipvsadm-save                        # save the policy
[root@LVS ~]# ipvsadm-save > /etc/sysconfig/ipvsadm           # export the policy as a backup
[root@LVS ~]# ipvsadm -ln             # confirm the cluster's current policy
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  200.0.0.254:80 rr
  -> 200.0.0.2:80                 Route   1      0          0         
  -> 200.0.0.3:80                 Route   1      0          0         
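
Here -g selects DR (direct routing) forwarding, -s rr selects round-robin scheduling, and -w 1 gives both nodes equal weight. On CentOS 7 the ipvsadm package also ships a service unit that restores /etc/sysconfig/ipvsadm at boot, so the policy exported above can be made persistent; a sketch, assuming the package's stock unit file:

[root@LVS ~]# systemctl enable ipvsadm        # reload /etc/sysconfig/ipvsadm at boot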

Second, configure the web server nodes:

On a web server node, the VIP address is needed only as the source address of the response packets the node sends; the node does not have to listen for clients' access requests (listening and distribution are handled by the scheduler). The VIP is therefore carried on the virtual loopback interface lo:0, and a route record is added to restrict access to the VIP to the local host.

1. Configure the virtual IP address (VIP):

[root@web1 ~]# cd /etc/sysconfig/network-scripts/
[root@web1 network-scripts]# cp ifcfg-lo ifcfg-lo:0
[root@web1 network-scripts]# vim ifcfg-lo:0      # edit this file, keep only the following four lines, and configure the VIP
DEVICE=lo:0
IPADDR=200.0.0.254
NETMASK=255.255.255.255               # note: the netmask must be all ones, i.e. four 255s
ONBOOT=yes
[root@web1 network-scripts]# systemctl restart network            # restart the network so the changes take effect
[root@web1 network-scripts]# ifconfig        # verify the VIP is configured correctly
                ............................
lo:0: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 200.0.0.254  netmask 255.255.255.255
        loop  txqueuelen 1000  (Local Loopback)
[root@web1 ~]# route add -host 200.0.0.254 dev lo:0              # add a local route record for the VIP
[root@web1 ~]# vim /etc/rc.local               # add this route automatically at boot
                ................................
/sbin/route add -host 200.0.0.254 dev lo:0
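
Note that on CentOS 7, /etc/rc.d/rc.local is not executable by default, so the line added above will not run at boot unless the file is made executable; a one-time fix:

[root@web1 ~]# chmod +x /etc/rc.d/rc.local        # /etc/rc.local is a symlink to this file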

2. Adjust the corresponding /proc parameters:

[root@web1 ~]# vim /etc/sysctl.conf                 # adjust the /proc parameters, add the following six lines
                    ...................
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
[root@web1 ~]# sysctl -p                # reload the configuration
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
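
These are the standard LVS-DR ARP settings: arp_ignore = 1 makes the node answer an ARP request only when the target IP is configured on the interface the request arrived on (so the VIP on lo stays silent), and arp_announce = 2 makes the node use its best local address, never the VIP, as the source of its own ARP messages. A quick confirmation after sysctl -p:

[root@web1 ~]# sysctl net.ipv4.conf.all.arp_ignore net.ipv4.conf.all.arp_announce
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2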

3. Install and start the httpd service (Apache or Nginx can be chosen according to your needs):

[root@web1 ~]# yum -y install httpd             # install the httpd service
[root@web1 ~]# echo 1111111111111 > /var/www/html/index.html          
# prepare a test page; mount the shared storage device only after the load balancing effect has been confirmed
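
The transcript above installs httpd but never starts it; to match the heading, start and enable the service and check the test page locally (a minimal sketch):

[root@web1 ~]# systemctl start httpd          # start the web service
[root@web1 ~]# systemctl enable httpd         # start it automatically at boot
[root@web1 ~]# curl -s http://127.0.0.1/      # should print 1111111111111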

Repeat the above three steps to configure the other web server node (here I changed the other node's test page content to 2222222222222222).

Third, have a client access the VIP to test the LVS cluster:

[Screenshots: refreshing http://200.0.0.254 in the client's browser returns the two nodes' test pages in turn]

If you keep getting the same page even after ruling out configuration errors, try opening several browser windows or waiting a while before refreshing: the connection may be kept open for a time, so there can be a delay before requests are redistributed.
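
Browser caching and keep-alive can mask the round robin, so testing from the client with curl is more reliable. A minimal sketch (the client prompt is hypothetical, and curl is assumed to be installed); since each curl invocation opens a fresh connection, round-robin scheduling should hand each request to the next node in turn:

[root@client ~]# for i in 1 2 3 4; do curl -s http://200.0.0.254/; done
1111111111111
2222222222222222
1111111111111
2222222222222222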

Fourth, configure NFS shared storage:

After the cluster has been verified, you need to deploy shared storage so that all web hosts serve the same page files to clients. The configuration procedure has already been written up at the end of this post: CentOS 7 load balancing configuration in detail based on NAT (address translation) mode.
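
For reference, the web-node side of that procedure boils down to mounting the shared export over the document root; a minimal sketch, assuming a hypothetical NFS server at 192.168.1.1 that already exports /opt/wwwroot (follow the linked post for the real setup):

[root@web1 ~]# yum -y install nfs-utils                       # NFS client tools
[root@web1 ~]# showmount -e 192.168.1.1                       # list the exports (192.168.1.1 is hypothetical)
[root@web1 ~]# mount 192.168.1.1:/opt/wwwroot /var/www/html   # mount the shared web root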


Source: blog.51cto.com/14154700/2415944