Cluster architecture: the working principle and application of the LVS-DR direct-routing mode

1. LVS-DR working principle

1.1 Overview of DR mode

  • Direct Routing (DR) is one of the load-balancing cluster working modes. It uses a semi-open network structure similar to that of the TUN mode, except that the node servers are not scattered across different locations but sit on the same physical network as the scheduler;
  • The load scheduler reaches each node server over the local network, so there is no need to establish dedicated IP tunnels.

1.2 ARP problem in LVS-DR


  • In an LVS-DR load-balancing cluster, the load balancer and the node servers must all be configured with the same VIP address

1.2.1 ARP problem 1

Having the same IP address on the local area network inevitably disrupts ARP communication between the servers

  • When an ARP broadcast is sent into the LVS-DR cluster, the load balancer and the node servers are on the same network, so they all receive it;
  • Only the front-end load balancer should respond; the node servers must not respond to ARP broadcasts for the VIP

1.2.2 ARP problem 2

Configure the node servers so that they do not respond to ARP requests for the VIP

  • Use the virtual interface lo:0 to carry the VIP address;
  • Set the kernel parameter arp_ignore=1: the system replies only to ARP requests whose target IP is configured on the interface that received the request.

1.2.3 ARP problem 3

  • When a RealServer returns a reply packet (whose source IP is the VIP) that must be forwarded by the router, it first has to obtain the router's MAC address before re-encapsulating the frame;
  • When sending that ARP request, Linux by default uses the source IP of the outgoing IP packet (i.e. the VIP) as the source IP of the ARP request, rather than the IP address of the sending interface.

1.2.4 ARP problem 4

  • When the router receives this ARP request, it updates its ARP table;
  • The entry that originally mapped the VIP to the Director's MAC address is overwritten with the RealServer's MAC address.

1.2.5 ARP problem 5

  • Based on the now-stale ARP entry, the router forwards new request packets to the RealServer, so the Director's VIP effectively stops working (a quick check is sketched below).
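
To see which MAC address actually answers for the VIP, an ARP probe can be sent from any other host on the LAN. A minimal sketch follows; the interface name eth0 is an assumption and should be replaced with the local NIC.

# Sketch: send ARP requests for the VIP from another host on the LAN.
# When the cluster is configured correctly, only the Director's MAC replies.
arping -I eth0 -c 3 192.168.70.200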

1.3 How to solve the two ARP problems

Modify the /etc/sysctl.conf file on each node server (a combined sketch follows the list below)

  • Configure the node servers so that they do not respond to ARP requests for the VIP
    net.ipv4.conf.lo.arp_ignore = 1
    net.ipv4.conf.all.arp_ignore = 1
  • Make the system use the IP address of the sending interface, rather than the source address of the IP packet, as the source address of ARP requests
    net.ipv4.conf.lo.arp_announce = 2
    net.ipv4.conf.all.arp_announce = 2
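
The following is a minimal sketch of making these settings persistent on a RealServer: append them to /etc/sysctl.conf and reload them without rebooting.

# On each RealServer (sketch): persist the ARP parameters and apply them.
cat >> /etc/sysctl.conf <<'EOF'
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_announce = 2
EOF
sysctl -p        # apply and print the active values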

2. LVS-DR data packet flow analysis

  • The client sends a request to the target VIP, and the Director (load balancer) receives it;
  • The Director selects RealServer_1 according to the load-balancing algorithm. It neither modifies nor encapsulates the IP packet; it only rewrites the destination MAC address of the data frame to RealServer_1's MAC address and sends the frame out on the LAN;
  • RealServer_1 receives the frame, strips the Ethernet header, and finds that the destination IP matches a local address (the VIP is bound on the RealServer in advance), so it processes the request. It then re-encapsulates the reply (source IP = VIP) and sends it out on the LAN directly, without going back through the Director;
  • The client receives the reply and considers the service normal, without knowing which server actually handled the request.
    Note: If the client is on a different network segment, the reply is returned to it through the router and over the Internet (see the capture sketch below).
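
One way to observe this behaviour is a packet capture on the RealServer; a sketch is shown below, assuming the RealServer's NIC is ens33 as in the lab environment later in this article. Incoming requests keep the VIP as the destination IP, while the destination MAC has been rewritten by the Director.

# On RealServer_1 (sketch): -e prints the Ethernet header, -nn disables name resolution.
tcpdump -i ens33 -e -nn host 192.168.70.200 and tcp port 80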

3. Construct LVS-DR mode load balancing cluster

3.1 Case environment

LVS load balancing cluster—direct routing mode (LVS-DR) environment

(1) One LVS load scheduler (Director)

  • IP address: 192.168.70.10

(2) Two Web servers

  • IP address: 192.168.70.11 (SERVER AA)
  • IP address: 192.168.70.12 (SERVER AB)
    Note: The gateway of the web servers does not need to point to the scheduler's network interface
  • lo:0 (VIP) address: 192.168.70.200

(3) NFS shared server

  • IP address: 192.168.70.13

(4) One client terminal for testing and verification

  • IP address: 192.168.70.14
  • Note: In production the client is usually not on the same network segment as the LVS-DR cluster; for this test, the client and the cluster are placed on the same segment.

Note: Make sure that all hosts on the same network segment can communicate with each other

3.2 Experimental purpose

  • The Windows 10 client accesses the virtual IP address 192.168.70.200; requests are routed directly (DR mode) and distributed in round-robin fashion to the Apache1 and Apache2 hosts;
  • Build an NFS network file storage service.

3.3 Project steps

3.3.1 Configure NFS storage server

[root@nfs ~]# rpm -qa | grep rpcbind		# rpcbind is installed by default on the VM
rpcbind-0.2.0-42.el7.x86_64
[root@nfs ~]# yum -y install nfs-utils	# confirm whether the nfs-utils package is installed
Loaded plugins: fastestmirror, langpacks
base                                                     | 3.6 kB     00:00     
Loading mirror speeds from cached hostfile
 * base: 
Package 1:nfs-utils-1.3.0-0.48.el7.x86_64 already installed and latest version
Nothing to do
[root@nfs ~]# mkdir /opt/web1
[root@nfs ~]# mkdir /opt/web2
[root@nfs ~]# echo "<h1>this is web1.</h1>" > /opt/web1/index.html
[root@nfs ~]# echo "<h1>this is web2.</h1>" > /opt/web2/index.html
[root@nfs ~]# vi /etc/exports
/opt/web1 192.168.70.11/32(ro)
/opt/web2 192.168.70.12/32(ro)
[root@nfs ~]# systemctl restart rpcbind
[root@nfs ~]# systemctl restart nfs
[root@nfs ~]# showmount -e
Export list for nfs:
/opt/web2 (everyone)
/opt/web1 (everyone)
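
If /etc/exports is edited again later, the export list can be refreshed without restarting the services; a small optional sketch:

# Optional (sketch): re-export all directories after editing /etc/exports
# and list the currently active exports with their options.
exportfs -rv
exportfs -v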

3.3.2 Configure Web Site Server

  • Configuration on Web1
[root@web1 ~]# yum -y install httpd
[root@web1 ~]# showmount -e 192.168.70.13
Export list for 192.168.70.13:
/opt/web2 (everyone)
/opt/web1 (everyone)
[root@web1 ~]# mount 192.168.70.13:/opt/web1 /var/www/html
[root@web1 ~]# systemctl restart httpd
[root@web1 ~]# netstat -anpt | grep httpd
tcp6       0      0 :::80                   :::*                    LISTEN      55954/httpd   
[root@web1 ~]# vi web1.sh
#!/bin/bash
# LVS-DR mode, web1: bind the VIP to lo:0 and suppress ARP for it
ifconfig lo:0 192.168.70.200 broadcast 192.168.70.200 netmask 255.255.255.255 up
route add -host 192.168.70.200 dev lo:0            # route the VIP through lo:0
echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore    # reply only to ARP requests for IPs on the receiving interface
echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce  # use the sending interface's IP as the ARP source
echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
sysctl -p &>/dev/null

[root@web1 ~]# sh web1.sh
[root@web1 ~]# ifconfig		# check the virtual interface
[root@web1 ~]# route -n 	# check the routing table
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.70.10   0.0.0.0         UG    100    0        0 ens33
192.168.70.0    0.0.0.0         255.255.255.0   U     100    0        0 ens33
192.168.70.200  0.0.0.0         255.255.255.255 UH    0      0        0 lo
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
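
Neither the manual NFS mount nor the lo:0 address survives a reboot. A minimal sketch for making the mount persistent is shown below (the _netdev option delays mounting until the network is up); the web1.sh script would likewise need to be re-run or hooked into the boot process.

# Optional (sketch, on web1): mount the NFS share automatically at boot.
echo "192.168.70.13:/opt/web1  /var/www/html  nfs  defaults,_netdev  0 0" >> /etc/fstab
mount -a        # verify that the fstab entry mounts cleanly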


  • Configuration on Web2
[root@web2 ~]# yum -y install httpd
[root@web2 ~]# mount 192.168.70.13:/opt/web2 /var/www/html
[root@web2 ~]# systemctl start httpd
[root@web2 ~]# netstat -anpt | grep httpd
tcp6       0      0 :::80                   :::*                    LISTEN      54695/httpd 
[root@web2 ~]# vi web2.sh
#!/bin/bash
# LVS-DR mode, web2
ifconfig lo:0 192.168.70.200 broadcast 192.168.70.200 netmask 255.255.255.255 up
route add -host 192.168.70.200 dev lo:0
echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
sysctl -p &>/dev/null

[root@web2 ~]# sh web2.sh
[root@web2 ~]# ifconfig		# check the virtual interface
[root@web2 ~]# route -n 	# check the routing table
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.70.10   0.0.0.0         UG    100    0        0 ens33
192.168.70.0    0.0.0.0         255.255.255.0   U     100    0        0 ens33
192.168.70.200  0.0.0.0         255.255.255.255 UH    0      0        0 lo
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
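
A quick check (sketch) that the script took effect on a node: the two ARP parameters should read 1 and 2, and lo should carry the VIP in addition to 127.0.0.1.

# Verify (sketch) the ARP kernel parameters and the VIP bound to lo.
sysctl net.ipv4.conf.all.arp_ignore net.ipv4.conf.all.arp_announce
ip addr show lo        # should list 192.168.70.200/32 on lo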


3.3.3 LVS-DR scheduler configuration

  • Load the ip_vs module
[root@lvs_dr ~]# modprobe ip_vs              # load the ip_vs kernel module
[root@lvs_dr ~]# cat /proc/net/ip_vs         # check the ip_vs version information
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn
  • Install ipvsadm
[root@lvs_dr ~]# yum -y install ipvsadm
  • Write a script to configure LVS-DR
[root@lvs_dr ~]# vim lvs_dr.sh
#!/bin/bash
ifconfig ens33:0 192.168.70.200 broadcast 192.168.70.200 netmask 255.255.255.255 up   # bind the VIP to ens33:0
route add -host 192.168.70.200 dev ens33:0    # add a host route for the VIP via ens33:0
ipvsadm -C                                    # clear any existing IPVS rules
ipvsadm -A -t 192.168.70.200:80 -s rr         # create the virtual service with round-robin scheduling
ipvsadm -a -t 192.168.70.200:80 -r 192.168.70.11:80 -g   # add web1 as a real server in DR (gateway) mode
ipvsadm -a -t 192.168.70.200:80 -r 192.168.70.12:80 -g   # add web2 as a real server in DR (gateway) mode
ipvsadm -Ln                                   # list the configured virtual service and nodes

[root@lvs_dr ~]# sh lvs_dr.sh
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.70.200:80 rr
  -> 192.168.70.11:80             Route   1      0          0         
  -> 192.168.70.12:80             Route   1      0          0  
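
The rules created by the script live only in the kernel and are lost on reboot. A sketch of persisting them, assuming the stock CentOS 7 ipvsadm package and its service unit that restores rules from /etc/sysconfig/ipvsadm:

# Optional (sketch): save the current IPVS rules so they are restored at boot.
ipvsadm-save -n > /etc/sysconfig/ipvsadm
systemctl enable ipvsadm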

3.4 Verification results

  • The client can access the VIP 192.168.70.200 in a browser
  • View the actual scheduling details
[root@lvs_dr ~]# ipvsadm -lnc		# list the current IPVS connection entries
IPVS connection entries
pro expire state       source             virtual            destination
TCP 14:56  ESTABLISHED 192.168.70.1:62081 192.168.70.200:80  192.168.70.11:80
TCP 01:25  FIN_WAIT    192.168.70.1:62080 192.168.70.200:80  192.168.70.12:80
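
From a Linux client the round-robin behaviour can also be checked on the command line; a sketch (the test above uses a Windows 10 browser instead):

# Sketch: each new connection to the VIP should be answered by web1 and web2 in turn.
for i in $(seq 1 4); do curl -s http://192.168.70.200/; done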

Extension

Difference between the NAT and DR modes:
a NAT-mode cluster uses a single entry/exit point and needs only one public IP address,
while a DR-mode cluster uses a single entry point with multiple exit paths and therefore needs multiple public IP addresses.

Origin blog.csdn.net/weixin_42449832/article/details/110873125