Load balancing configuration in detail: DR (direct routing) mode

DR (direct routing) is one of the three load balancing modes and the most widely used one. For a description of this mode, see this earlier post: https://blog.51cto.com/14227204/2436891
The environment is as follows:
(environment topology diagram)
1. The VIP is configured on the scheduler and on every web node: clients access the VIP (the virtual cluster IP address). The scheduler forwards each request to a web node, and the web node sends its response directly back to the client; if the source address of that response were not 200.0.0.254, the client would discard the packet returned by the web server. To avoid this, a virtual interface carrying 200.0.0.254 is configured on the scheduler and on all web nodes, and a route is added so that traffic to the VIP stays on the local host, preventing communication confusion.
2. ARP responses on the web nodes: once 200.0.0.254 is configured on the scheduler and on all web nodes, every one of them holds that address, so when a client sends an ARP request for 200.0.0.254 they would all answer it. A client could then skip the scheduler and reach a web node directly, in which case the scheduler serves no purpose and the load balancing effect is naturally lost. The web nodes therefore must not answer ARP broadcasts for 200.0.0.254; only the scheduler should respond.

3. ICMP redirects on the scheduler (kernel tuning): the Linux kernel carries an ICMP optimization. When a client first reaches the scheduler and the request is forwarded to a particular web node, the kernel notices that the client and that web node can communicate directly and sends the client an ICMP redirect, telling it that all further packets for 200.0.0.254 can be sent straight to that web node. From then on every request would go directly to one web node instead of through the scheduler, and load balancing would certainly no longer be achieved. The kernel parameters that send ICMP redirect responses therefore have to be turned off on the scheduler.
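In short, the three issues above map to the settings applied in the sections below (summarized here for orientation):

# on the scheduler:    VIP on ens33:0  +  net.ipv4.conf.*.send_redirects = 0
# on every web node:   VIP on lo:0  +  route add -host 200.0.0.254 dev lo:0  +  arp_ignore = 1 / arp_announce = 2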
Preparation:
1. Apart from the VIP, configure the other IP addresses in advance.
2. Prepare the required software packages in advance.
The configuration procedure is as follows:
First, configure the load scheduler:
Bind the VIP address to NIC ens33 through a virtual interface (ens33:0) so that the scheduler can respond to cluster access.

[root@localhost /]# cd /etc/sysconfig/network-scripts/
[root@localhost network-scripts]# cp ifcfg-ens33 ifcfg-ens33:0          
[root@localhost network-scripts]# vim ifcfg-ens33:0              # edit ens33:0 and configure the VIP
....................
IPADDR=200.0.0.254                                # change the following four lines; keep the NIC name consistent
NETMASK=255.255.255.0
NAME=ens33:0
DEVICE=ens33:0
....................
[root@localhost network-scripts]# ifup ens33:0                 # bring up the virtual interface
[root@localhost network-scripts]# ifconfig ens33:0                 # check whether the configuration took effect
ens33:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 200.0.0.254  netmask 255.255.255.0  broadcast 200.0.0.255
        ether 00:0c:29:f1:61:28  txqueuelen 1000  (Ethernet)
[root@localhost network-scripts]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 200.0.0.1  netmask 255.255.255.0  broadcast 200.0.0.255
        inet6 fe80::595f:84d:a379:7b6e  prefixlen 64  scopeid 0x20<link>

Adjust the /proc response parameters (turn off the kernel's ICMP redirect responses):

[root@localhost /]# vim /etc/sysctl.conf                   # append the following three lines
................
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens33.send_redirects = 0
[root@localhost /]# sysctl -p                      # reload the configuration so it takes effect
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens33.send_redirects = 0
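An optional quick check: the new values can be read back directly from /proc or via sysctl to make sure they took effect.

[root@localhost /]# cat /proc/sys/net/ipv4/conf/all/send_redirects        # should print 0 now
0
[root@localhost /]# sysctl net.ipv4.conf.ens33.send_redirects             # same check through sysctl
net.ipv4.conf.ens33.send_redirects = 0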

Configure load balancing policies:

[root@localhost /]# modprobe ip_vs                         # load the ip_vs kernel module
[root@localhost /]# yum -y install ipvsadm                # install the ipvsadm tool
[root@localhost /]# ipvsadm -C                                  # clear any existing policy
[root@localhost /]# ipvsadm -A -t 200.0.0.254:80 -s rr           # define the cluster VIP (round robin); the node servers are added next
[root@localhost /]# ipvsadm -a -t 200.0.0.254:80 -r 200.0.0.2:80 -g -w 1
[root@localhost /]# ipvsadm -a -t 200.0.0.254:80 -r 200.0.0.3:80 -g -w 1
[root@localhost /]# ipvsadm-save                             # display/save the current policy
-A -t localhost.localdomain:http -s rr
-a -t localhost.localdomain:http -r 200.0.0.2:http -g -w 1
-a -t localhost.localdomain:http -r 200.0.0.3:http -g -w 1
[root@localhost /]# ipvsadm-save > /etc/sysconfig/ipvsadm             # export the policy as a backup
[root@localhost /]# ipvsadm -ln                                # confirm the cluster's current policy
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  200.0.0.254:80 rr
  -> 200.0.0.2:80                 Route   1      0          0         
  -> 200.0.0.3:80                 Route   1      0          0         
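If the policy should survive a reboot, the file exported above can be loaded automatically at startup. On CentOS/RHEL 7 the ipvsadm package usually ships a systemd unit that reads /etc/sysconfig/ipvsadm; a sketch, assuming that unit is available:

[root@localhost /]# systemctl enable ipvsadm               # have the rules saved in /etc/sysconfig/ipvsadm loaded at boot
[root@localhost /]# systemctl start ipvsadm                # or reload them right away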

Second, configure the web server nodes:
The web nodes also carry the VIP address, but only as the source address of the response packets they send; they do not need to listen for the clients' access requests (listening and distribution are handled by the scheduler). The VIP is therefore bound to the virtual loopback interface lo:0, and a route record is added so that traffic to the VIP stays on the local host.

[root@web1 /]# cd /etc/sysconfig/network-scripts/
[root@web1 network-scripts]# cp ifcfg-lo ifcfg-lo:0
[root@web1 network-scripts]# vim ifcfg-lo:0              # edit this file
.................
DEVICE=lo:0                             # be sure to change the device name
IPADDR=200.0.0.254               # configure the VIP
NETMASK=255.255.255.255                  # the subnet mask must be all 1s
ONBOOT=yes
[root@web1 network-scripts]# ifup lo:0                # bring up the virtual interface
[root@web1 network-scripts]# ifconfig lo:0           # confirm it took effect
lo:0: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 200.0.0.254  netmask 255.255.255.255
        loop  txqueuelen 1  (Local Loopback)
[root@web1 /]# route add -host 200.0.0.254 dev lo:0               # add the local route for the VIP
[root@web1 /]# route -n
200.0.0.254     0.0.0.0         255.255.255.255 UH    0      0        0 lo
[root@web1 /]# vim /etc/rc.local                         # have this route added automatically at boot
/sbin/route add -host 200.0.0.254 dev lo:0
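One caveat: on CentOS 7, /etc/rc.d/rc.local is not executable by default, so commands placed in it are silently skipped at boot; giving it execute permission is usually required for the route above to be added automatically.

[root@web1 /]# chmod +x /etc/rc.d/rc.local                # otherwise rc.local is not run at boot on CentOS 7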

Adjust the /proc response parameters:

[root@web1 /]# vim /etc/sysctl.conf 
................
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
[root@web1 /]# sysctl -p              # reload so the configuration takes effect
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
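For reference, the standard meaning of these two kernel parameters, plus an optional way to confirm that only the scheduler now answers ARP for the VIP (arping may need to be installed first, and the interface name is only an example):

# arp_ignore = 1   : reply to ARP requests only if the target IP is configured on the interface that received the request
#                    (the VIP lives on lo, so requests arriving on ens33 are ignored)
# arp_announce = 2 : use the best matching local address, never the VIP on lo, as the source of outgoing ARP requests
[root@client ~]# arping -c 3 -I ens33 200.0.0.254         # run from another host on the LAN; only the scheduler's MAC should reply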

Install httpd and create a test page:

[root@web1 /]# yum -y install httpd              # install httpd
[root@web1 /]# echo test1.com > /var/www/html/index.html         # create a test page
[root@web1 /]# systemctl start httpd
[root@web1 /]# systemctl enable httpd
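Before testing through the VIP, each node can optionally be checked on its real IP, for example with curl (install it with yum if it is missing); the address below follows the environment used here:

[root@web1 /]# curl http://200.0.0.2/                      # should return this node's own test page
test1.com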

Repeat the steps above to configure the other node server: the same virtual interface, the same /proc parameters, the same httpd (to make it easy to verify success, I set the other node's home page to test2.com instead).
Third, test the cluster:
(browser test screenshots)
Visit http://200.0.0.254 from a client and refresh; the returned page should alternate between test1.com and test2.com. If the same page keeps coming back and a configuration error has been ruled out, open several browser windows or wait a little before refreshing; connections may be held open for a while, so the switch can be delayed.
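The distribution can also be confirmed on the scheduler itself; ipvsadm can list per-node counters and the current connection table:

[root@localhost /]# ipvsadm -ln --stats                    # per-real-server connection/packet counters
[root@localhost /]# ipvsadm -lnc                           # current connections; entries should spread over both nodes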
Fourth, configure NFS shared storage:
To complete the cluster, shared storage still needs to be deployed so that all nodes can serve the same web page files to clients; the configuration procedure is written at the end of this post: https://blog.51cto.com/14227204/2437018
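For completeness, a minimal sketch of what the shared storage step looks like; the export path /opt/wwwroot, the export options and the NFS server address 200.0.0.4 are assumptions for illustration only, and the linked post above is the authoritative procedure:

# on the NFS server (address 200.0.0.4 assumed)
[root@nfs /]# yum -y install nfs-utils rpcbind
[root@nfs /]# mkdir -p /opt/wwwroot
[root@nfs /]# echo "/opt/wwwroot 200.0.0.0/24(rw,sync,no_root_squash)" >> /etc/exports
[root@nfs /]# systemctl start rpcbind nfs

# on each web node, mount the share over the local web root
[root@web1 /]# yum -y install nfs-utils
[root@web1 /]# mount -t nfs 200.0.0.4:/opt/wwwroot /var/www/html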
