Building an LVS Load-Balancing Cluster in DR Mode

For the concepts behind LVS load-balancing clusters, see the earlier post: LVS Load Balancing Cluster Explained in Detail.

I. Case Summary

LVS load balancing, DR mode: the LVS load balancer acts as the access entry of the cluster but not as its gateway; every node server in the pool has its own route to the Internet, and response packets sent back to the clients do not need to pass back through the LVS load balancer.

Pros and cons of LVS-DR mode:
Advantages: the load balancer is only responsible for forwarding request packets to the real servers, while the real servers send their response packets directly to the users. The load balancer can therefore handle a very large volume of requests; a single balancer can serve on the order of 100 real servers and is no longer the bottleneck of the system. With VS-DR, even if the load balancer only has a 100 Mbps full-duplex NIC, the virtual server as a whole can reach 1 Gbps of throughput or even more.
Shortcomings: this approach requires the DIP and all the RIPs to be in the same broadcast domain, and it does not support remote disaster recovery.
Summary: LVS-DR is the highest-performing of the three LVS modes; one director usually serves about 100 real servers, more than in LVS-NAT mode, but it places higher demands on the network environment. It is the most commonly used mode in day-to-day deployments.

II. Case Environment

Given the limits of the lab environment, there is no need for a large topology: two Web server nodes demonstrate exactly the same idea as ten, and the configuration method is identical, so the lab deploys two Web server nodes. The experimental topology is as follows:

[Figure: experimental topology with one LVS load scheduler and two Web server nodes]

Characteristics of LVS-DR mode:

  1. Every Web server node must be in the same network segment (the same broadcast domain) as the LVS load scheduler;
  2. The RIPs of the Web server nodes may use private IP addresses, or public addresses may be used to make configuration and management easier;
  3. Port mapping is not supported;
  4. The LVS load scheduler must run a Unix-like operating system; because the Web server nodes must not answer ARP for the director's VIP, the VIP has to be configured on their loopback interface;
  5. The LVS load scheduler only handles inbound requests; the Web server nodes send response packets directly to the clients;
  6. The Web server nodes must not point their gateway at the DIP; they use the front-end gateway directly when responding to request messages;

III. Key Points Explained

Q: Why must the LVS load balancer's VIP and the Web server nodes be in the same network segment?
A: Because DR mode only rewrites the destination MAC of the packet, and it finds a Web node's MAC address through an ARP broadcast, the director and the nodes must be on the same segment, i.e. the VIP and the Web nodes' IPs share a segment. When mounting a VIP, check the LVS operating mode first: if it is DR mode, confirm that the VIP can indeed be placed in the segment behind this LVS.
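
A quick way to sanity-check this from a Web server node, before the VIP is configured locally (a sketch; the VIP 192.168.1.254 is the one used later in this case): if ip route get reports the VIP as directly connected (a dev entry with no via hop), the node sits in the VIP's segment. The output should look similar to this:

[root@localhost ~]# ip route get 192.168.1.254
192.168.1.254 dev ens33 src 192.168.1.2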

Q: The VIP address has to be shared by the LVS load balancer and the nodes, and the Linux kernel's redirect response parameters should be turned off. Why?
A: A router sends an ICMP redirect to the source in two situations:

  • When the router receives a packet on an interface and the outgoing interface toward the destination is that same interface, it sends an ICMP redirect to the source, telling it to send such packets directly to the next hop rather than back through the router.
  • When the source IP of a packet being forwarded is in the same network segment as its next-hop IP, the router sends an ICMP redirect to the source, telling it to send the packets directly to the next hop.
    Note: while sending the ICMP redirect to the data source, the router still forwards the packet it received, so the network is not interrupted.

The diagram is as follows:
[Figure: server2 and server1 on the same segment, server1 forwarding to the router and sending an ICMP redirect back to server2]

The process works as follows:
1. If server2 wants to talk to the Internet, it first sends the packet to server1, because server2's gateway points to server1;
2. server1 receives the packet and checks its routing table, finding that the router is the next hop for this packet. When it forwards the packet to the router, server1 detects that the outgoing interface is the same one the packet arrived on, and the ICMP redirect is triggered;
3. server1 concludes that server2 should point its default route at the router, so it sends an ICMP redirect message to server2.
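
This behavior is controlled by the send_redirects kernel parameters. A quick runtime sketch for querying and disabling it (the persistent configuration is applied in section IV):

[root@localhost ~]# sysctl net.ipv4.conf.all.send_redirects                //query the current value
net.ipv4.conf.all.send_redirects = 1
[root@localhost ~]# sysctl -w net.ipv4.conf.all.send_redirects=0           //disable until reboot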

Q: On the real node servers, the VIP must be set on lo:0 and the kernel's ARP response parameters adjusted to keep the nodes from advertising their own MAC for the VIP, avoiding conflicts. How is this achieved?

The diagram is as follows:
[Figure: the VIP configured on lo:0 of each Web server node, with ARP responses for the VIP suppressed]
When configuring the LVS load-balancing architecture, ARP must be suppressed on the Web server nodes,
specifically with arp_ignore=1 and arp_announce=2.
arp_ignore (controls replies to ARP queries); the options are:

0: reply to ARP queries for any local IP address, received on any interface (default)
1: reply only if the target IP is an address configured on the receiving interface
2: reply only if the target IP is an address configured on the receiving interface and the sender IP is in the same subnet as that interface
4-7: reserved, unused
8: do not reply to any ARP query

arp_announce (controls the sender IP announced in ARP requests); the options are:

0: use the source IP of the packet being sent (or forwarded) as the sender IP of the ARP request (default); this can be verified with ping -I
1: if the destination IP of the packet belongs to a subnet of a local interface, use the packet's source IP as the sender IP; otherwise behave as option 2;
2: ignore the packet's source IP and use the best local address for talking to the target host as the sender IP, preferring the primary IP of the outgoing interface (loopback is not an outgoing interface);
Note: when the ARP table has no entry for the gateway, arp_announce is triggered before the IP packet is sent;
The sender MAC is unrelated to these settings: sender MAC = source MAC, which is determined by the physical address (barring spoofed traffic).
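
For a quick test, the two parameters can also be set at runtime on a Web server node (the persistent version through /etc/sysctl.conf appears in section IV):

[root@localhost ~]# sysctl -w net.ipv4.conf.all.arp_ignore=1
[root@localhost ~]# sysctl -w net.ipv4.conf.all.arp_announce=2
[root@localhost ~]# sysctl -w net.ipv4.conf.lo.arp_ignore=1
[root@localhost ~]# sysctl -w net.ipv4.conf.lo.arp_announce=2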

Some common questions about LVS-DR mode:
Q: How does an LVS-DR cluster handle request packets? Does it modify the contents of the IP packet?
A: In DR mode, the LVS director itself does not look at anything above the IP layer; even the port number is validated by the TCP/IP stack rather than by LVS. The DR-mode director mainly does three things:
① receive the client's request and pick one Web node's IP according to the configured load-balancing algorithm;
② take the MAC address corresponding to the chosen IP as the destination MAC, then re-encapsulate the IP packet into a frame and forward it to that Web node;
③ record the connection information in a hash table.
The DR-mode director does very little, and what it does is simple, so it is highly efficient, not much worse than a hardware load balancer. The rough flow of packets and frames is: client –> LVS –> Web node –> client.
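
The MAC rewriting can be watched on a Web server node with tcpdump (a sketch, assuming the node's NIC is ens33 and the VIP is 192.168.1.254): the -e option prints the Ethernet header, so each request frame forwarded by the director shows the node's own MAC as the destination while the destination IP is still the VIP:

[root@localhost ~]# tcpdump -e -n -i ens33 host 192.168.1.254 and tcp port 80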

Q: Why do the Web server nodes configure the VIP on the lo interface? Could the VIP be configured on the egress NIC instead?
A: For a Web node to process IP packets destined for the VIP, it must first be able to accept such packets. Configuring the VIP on the lo interface lets the node accept those packets and return the results to the client. The VIP must not be placed on the egress NIC, because the node would then answer clients' ARP requests, causing ARP confusion that prevents the entire LVS cluster environment from working properly.

Q: Why must the Web server nodes suppress ARP?
A: ARP suppression is set on all physical NICs so that client requests are handed to the LVS load scheduler as intended and the LVS environment is protected from ARP confusion; otherwise the whole cluster can easily stop working.

Q: Why must the DR-mode LVS load scheduler and the Web server nodes be in the same network segment?
A: DR mode works at the data link layer: the director resolves a Web node's MAC address with an ARP request before forwarding the frame. If the director and the node were in different segments, the ARP request would be blocked at the segment boundary and the frame could not be delivered to the designated Web node, so the LVS load scheduler and the Web server nodes must share a segment.

Q: Why does the ens33 interface on the LVS load scheduler need another IP (i.e., the DIP) besides the VIP?
A: When a tool such as keepalived is used for HA or LB, the health checks need a real interface address in addition to the VIP. HA or LB without a health-check mechanism has no practical value.

Q: Does IP routing/forwarding need to be enabled on the DR-mode LVS load scheduler?
A: No. The LVS load scheduler and the Web server nodes are in the same network segment, so forwarding is not needed.

Q: Must the netmask of the VIP on the LVS load scheduler be 255.255.255.255?
A: No. In DR mode the director's VIP is meant to communicate externally like an ordinary IP address, so nothing special is required.

The principle behind an LVS-DR load-balancing cluster is fairly involved, but the implementation process is very simple.

IV. Case Implementation

1. Configure the LVS load scheduler

(1) Configure the virtual IP address (VIP)

Bind the VIP address to the ens33 NIC through a virtual interface (ens33:0) so that the director can respond to access requests for the cluster. The configuration is as follows:

[root@localhost ~]# cd /etc/sysconfig/network-scripts/
[root@localhost network-scripts]# cp ifcfg-ens33 ifcfg-ens33:0
Edit the copied NIC configuration file yourself; a minimal sketch follows.
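The sketch below assumes the cluster VIP 192.168.1.254/24 used in this case; the remaining fields are inherited from the copied file, and the legacy network-scripts service is assumed for ifup:

[root@localhost network-scripts]# vim ifcfg-ens33:0
DEVICE=ens33:0
IPADDR=192.168.1.254
NETMASK=255.255.255.0
ONBOOT=yes
[root@localhost network-scripts]# ifup ens33:0                             //bring up the virtual interface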
[root@localhost ~]# ifconfig ens33
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.1  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::a5f4:262:9bbe:d04  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:00:11:89  txqueuelen 1000  (Ethernet)
        RX packets 2  bytes 501 (501.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 35  bytes 4894 (4.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@localhost ~]# ifconfig ens33:0
ens33:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.254  netmask 255.255.255.0  broadcast 192.168.1.255
        ether 00:0c:29:00:11:89  txqueuelen 1000  (Ethernet)

(2) Adjust the /proc response parameters

In DR cluster mode, since the LVS load scheduler and the nodes share the VIP address, ICMP redirect responses should be disabled in the Linux kernel parameters. The configuration is as follows:

[root@localhost ~]# vim /etc/sysctl.conf
         ………………                  //some content omitted; append the following
net.ipv4.conf.all.send_redirects  =  0     
net.ipv4.conf.default.send_redirects  =  0
net.ipv4.conf.ens33.send_redirects  = 0     
[root@localhost ~]# sysctl -p
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens33.send_redirects = 0

(3) Configure the load distribution policy

[root@localhost ~]# ipvsadm -C 
[root@localhost ~]# ipvsadm -A -t 192.168.1.254:80 -s rr
[root@localhost ~]# ipvsadm -a -t 192.168.1.254:80 -r 192.168.1.2 -g -w 1
[root@localhost ~]# ipvsadm -a -t 192.168.1.254:80 -r 192.168.1.3 -g -w 1
[root@localhost ~]# ipvsadm-save
-A -t localhost.localdomain:http -s rr
-a -t localhost.localdomain:http -r 192.168.1.2:http -g -w 1
-a -t localhost.localdomain:http -r 192.168.1.3:http -g -w 1
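
The hostnames above appear because ipvsadm-save resolves addresses by default; ipvsadm-save -n prints them numerically. To keep the policy across reboots, one option (a sketch, assuming the ipvsadm service scripts shipped with CentOS/RHEL are installed) is to dump the rules into the file the ipvsadm unit restores at boot:

[root@localhost ~]# ipvsadm-save -n > /etc/sysconfig/ipvsadm
[root@localhost ~]# systemctl enable ipvsadm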

2. Configure the Web server nodes

In DR mode, each node server needs the VIP address configured and its kernel ARP response parameters adjusted so that it does not answer ARP for the VIP with its own MAC address, avoiding conflicts. Apart from this, the Web node configuration is similar to that in NAT mode.

Both servers are configured with the same content; one Web server node is taken as the example.

(1) Configure the virtual IP address

Each Web server node also needs to carry the VIP address 192.168.1.254, but this address is used only as the source address of Web response packets; the node does not listen for client access requests (listening and distribution are done by the scheduler). Therefore the virtual interface lo:0 carries the VIP address, and a host route is added so that locally generated traffic to the VIP stays on the local machine, avoiding communication confusion.

[root@localhost ~]# cd /etc/sysconfig/network-scripts/
[root@localhost network-scripts]# cp ifcfg-lo ifcfg-lo:0
[root@localhost network-scripts]# vim ifcfg-lo:0
DEVICE=lo:0
IPADDR=192.168.1.254
NETMASK=255.255.255.255                                    //the mask must be all 1s (a host mask)
ONBOOT=yes
NAME=loopback:0
[root@localhost network-scripts]# ifdown lo;ifup lo
[root@localhost network-scripts]# ifconfig lo:0
lo:0: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 192.168.1.254  netmask 255.255.255.255
        loop  txqueuelen 1  (Local Loopback)
[root@localhost ~]# vim /etc/rc.local                                       //add the route permanently
                      ………………             //some content omitted; append the following
/sbin/route add -host 192.168.1.254 dev lo:0
[root@localhost ~]# route add -host 192.168.1.254 dev lo:0                //add the route for the current session
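
As a check, the routing table should now hold a host route for the VIP on lo (a sketch of the expected output):

[root@localhost ~]# route -n | grep 192.168.1.254
192.168.1.254   0.0.0.0         255.255.255.255 UH    0      0        0 lo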

(2) Adjust the /proc kernel parameters

[root@localhost ~]# vim /etc/sysctl.conf
                           ………………                 //some content omitted; append the following
net.ipv4.conf.all.arp_ignore  =  1
net.ipv4.conf.all.arp_announce  =  2
net.ipv4.conf.default.arp_ignore  =  1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_ignore  =  1
net.ipv4.conf.lo.arp_announce  = 2
[root@localhost ~]# sysctl -p
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
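
The suppression can be verified from any other host in the segment (a sketch, assuming the arping utility is available and the querying host's NIC is ens33): every reply for the VIP should now carry the director's MAC (00:0c:29:00:11:89 in this case), never a Web node's:

[root@localhost ~]# arping -I ens33 -c 3 192.168.1.254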

(3) Install httpd and set up the test page

[root@localhost ~]# yum -y install httpd
[root@localhost ~]# mount 192.168.1.4:/var/www/html /var/www/html          //mount the shared web content from the NFS storage server
[root@localhost ~]# systemctl start httpd

3. Test the LVS cluster

Have a client access 192.168.1.254, then check the effect on the LVS cluster server:
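
A few quick requests can first be issued from the client's command line (a sketch, assuming curl is installed; since both nodes serve the same NFS-mounted page, the distribution is confirmed by the connection counters below rather than by differing page content):

[root@localhost ~]# for i in 1 2 3 4; do curl -s http://192.168.1.254/ > /dev/null; done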

[root@localhost ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.1.254:80 rr
  -> 192.168.1.2:80               Route   1      1          0         
  -> 192.168.1.3:80               Route   1      1          0 

The results are as expected, and the experiment is complete!

-------- End of this article. Thanks for reading! --------

Origin: blog.51cto.com/14157628/2437805