Build an LVS load balancing cluster: direct routing mode (LVS-DR)


1. LVS-DR packet flow analysis

To simplify the analysis of the principle, place the Client and the cluster machines on the same network; the data packets then flow along the route 1-2-3-4

  1. The Client sends a request to the target VIP, and the Director (load balancer) receives it
    • At this point, the source MAC address is the Client's MAC address
    • The destination MAC address is the Director's MAC address
  2. The Director selects a RealServer according to the load balancing algorithm
    • It does not modify or encapsulate the IP packet; it only changes the data frame's destination MAC address to the RealServer's MAC address, then sends the frame on the LAN
    • At this point, the source MAC address is the Director's MAC address, and the destination MAC address is the RealServer's MAC address
  3. The RealServer receives the frame
    • After decapsulation, it finds that the destination IP matches the local machine (the RealServer is bound to the VIP in advance), so it processes the packet
    • It then re-encapsulates the response, passes it from the lo interface to the physical NIC, and sends it out
    • At this point, the source MAC address is the RealServer's MAC address, and the destination MAC address is the Client's MAC address
  4. The Client receives the reply
    • The Client believes it has received normal service, without knowing which server actually handled the request
    • Note: if the Client is on a different network segment, the reply is returned to it through a router over the Internet
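To observe this MAC rewriting in practice, the hedged sketch below uses tcpdump (assuming the interface name ens33 and the VIP 192.168.126.166 from the deployment section later in this article):

# On the Director: forwarded requests keep CIP -> VIP as source/destination IPs,
# but the destination MAC changes to the chosen RealServer's MAC
tcpdump -i ens33 -e -nn host 192.168.126.166 and tcp port 80

# On a RealServer: replies leave with source IP = VIP, addressed at
# Layer 2 directly to the Client (or to the router when crossing segments)
tcpdump -i ens33 -e -nn src host 192.168.126.166 and tcp port 80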

2. ARP issues in LVS-DR

In an LVS-DR load balancing cluster, the load balancer and the node servers must all be configured with the same VIP address

  • However, duplicate IP addresses on the same LAN inevitably disrupt ARP communication between the servers:
    • When an ARP broadcast is sent to the LVS-DR cluster, the load balancer and the node servers are connected to the same network, so they all receive it
    • Only the front-end load balancer should respond; the other node servers should not answer the ARP broadcast at all
  • We can therefore configure the node servers so that they do not respond to ARP requests for the VIP:
    • Use the virtual interface lo:0 to carry the VIP address
    • Set the kernel parameter arp_ignore=1 (the system only answers ARP requests whose destination IP is a local IP)
  • A second problem arises when a RealServer returns a packet (whose source IP is the VIP) that must be forwarded by a router: when re-encapsulating the packet, it first needs the router's MAC address, so it sends an ARP request
  • When sending an ARP request, Linux by default uses the source IP of the outgoing IP packet (i.e., the VIP) as the source IP of the ARP request, rather than the IP address of the sending interface
  • When the router receives this ARP request, it updates its ARP table entry:
    the VIP, originally mapped to the Director's MAC address, is remapped to the RealServer's MAC address. Following this ARP entry, the router then forwards new request packets straight to the RealServer, effectively invalidating the Director's VIP

The solution to this second problem: on the node servers, set the kernel parameter arp_announce=2 (the system does not take the ARP request's source address from the IP packet's source address, but instead uses the IP address of the sending interface)

  • Combined solution to both ARP problems:
# Edit the /etc/sysctl.conf file

net.ipv4.conf.lo.arp_ignore=1
net.ipv4.conf.lo.arp_announce=2
net.ipv4.conf.all.arp_ignore=1
net.ipv4.conf.all.arp_announce=2
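After editing the file, the settings can be applied and spot-checked; a minimal verification sketch:

sysctl -p
# Confirm the values took effect
sysctl -n net.ipv4.conf.all.arp_ignore
sysctl -n net.ipv4.conf.all.arp_announce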

3. LVS load balancing cluster: DR mode

1. Data packet flow analysis

  1. The client sends a request to the Director Server (load balancer); the request packet (source IP is CIP, destination IP is VIP) reaches kernel space
  2. The Director Server and the Real Servers are on the same network, so data is transmitted at Layer 2, the data link layer
  3. Kernel space determines that the packet's destination IP is the local VIP. IPVS (IP Virtual Server) then checks whether the requested service is a cluster service; if so, it re-encapsulates the frame: the source MAC address is changed to the Director Server's MAC address, and the destination MAC address to the Real Server's MAC address. The source and destination IP addresses are unchanged. The frame is then sent to the Real Server
  4. The destination MAC address of the request frame arriving at the Real Server is its own, so the Real Server accepts it, re-encapsulates the response (source IP is the VIP, destination IP is CIP), and sends the response from the lo interface to the physical NIC, which transmits it out
  5. The Real Server delivers the response directly to the client

2. Features of DR mode

  • The Director Server and the Real Servers must be on the same physical network
  • A Real Server may use a private address or a public address; with a public address, the RIP can be accessed directly over the Internet
  • The Director Server serves as the cluster's access entrance, but not as a gateway
  • All request packets pass through the Director Server, but reply packets must not pass back through it
  • The Real Servers' gateway is not allowed to point to the Director Server's IP, i.e., packets sent by a Real Server must not pass through the Director Server
  • Configure the VIP address on the lo interface of each Real Server
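A quick sanity check of these constraints on a Real Server (a sketch assuming the addresses used in the deployment section below):

ip addr show lo
# The VIP should appear here with a /32 mask once configured

ip route show default
# The default gateway must NOT be the Director's IP (192.168.126.11)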

4. Deploy an LVS-DR load balancing cluster

1. Overview

  • In a DR-mode cluster, the LVS load scheduler serves as the cluster's access entrance, but not as a gateway
  • All nodes in the server pool are connected to the Internet, and the Web response packets sent to clients do not need to pass through the LVS load scheduler
  • Inbound and outbound traffic are thus handled on separate paths, so the LVS load scheduler and all node servers must be configured with the VIP address in order to answer access to the whole cluster
  • For data security, the shared storage devices are placed on an internal private network

2. Environment

  • Host: Windows 10 Professional Workstation edition
  • VMware: 16 Pro (16.1.0)
  • CentOS 7
  • Network adapters: all in NAT mode
  • NIC configuration: static IP addresses
  • YUM repository: local
  • Windows client: 192.168.126.10
  • DR server (load scheduler) (CentOS 7-1): 192.168.126.11
  • Web server 1 (CentOS 7-2): 192.168.126.12
  • Web server 2 (CentOS 7-3): 192.168.126.13
  • NFS server (CentOS 7-4): 192.168.126.14
  • VIP: 192.168.126.166

3. Configure the load scheduler

CentOS 7-1 (192.168.126.11)

  1. Preparation
systemctl stop firewalld.service 
systemctl disable firewalld.service 
setenforce 0

modprobe ip_vs
# Load the ip_vs kernel module
cat /proc/net/ip_vs
# Check the ip_vs version information

yum install -y ipvsadm
# Install the ipvsadm package

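To have the ip_vs module load automatically after a reboot, one hedged option is systemd's standard modules-load.d mechanism (a sketch):

echo 'ip_vs' > /etc/modules-load.d/ip_vs.conf
# systemd-modules-load reads this file at boot and loads the module

lsmod | grep ip_vs
# Confirm the module is currently loaded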

  2. Configure the virtual IP address (VIP)

Bind the VIP address to the network card through a virtual interface so the scheduler can respond to cluster access

cd /etc/sysconfig/network-scripts/
cp ifcfg-ens33 ifcfg-ens33:0

vim ifcfg-ens33:0
# Clear the original configuration and add the following
DEVICE=ens33:0
ONBOOT=yes
IPADDR=192.168.126.166
NETMASK=255.255.255.255


ifup ens33:0
# Bring up the virtual IP

ifconfig ens33:0
# Check the virtual IP

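An equivalent, non-persistent alternative using the standard iproute2 tools (a sketch; unlike the ifcfg file above, the address is lost on reboot):

ip addr add 192.168.126.166/32 dev ens33 label ens33:0
# Verify the alias
ip addr show dev ens33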

  3. Adjust /proc response parameters

For a DR cluster, since the LVS load scheduler and the nodes share the VIP address, the Linux kernel's ICMP redirect responses should be turned off

vim /etc/sysctl.conf
# Add the following
net.ipv4.ip_forward = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens33.send_redirects = 0


sysctl -p

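To confirm the settings are active, a quick check sketch:

sysctl -n net.ipv4.ip_forward
# Should print 1
sysctl -n net.ipv4.conf.all.send_redirects
# Should print 0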

  4. Configure the load distribution policy
ipvsadm-save > /etc/sysconfig/ipvsadm
# Save the current (empty) rule set so the ipvsadm service has its config file
systemctl start ipvsadm

ipvsadm -C
# Clear any existing policies

ipvsadm -A -t 192.168.126.166:80 -s rr
ipvsadm -a -t 192.168.126.166:80 -r 192.168.126.12:80 -g
ipvsadm -a -t 192.168.126.166:80 -r 192.168.126.13:80 -g
# For tunnel (TUN) mode, replace the trailing -g with -i

ipvsadm

ipvsadm -ln
# Check node status; "Route" in the forwarding column indicates DR mode

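Once the rules look correct, they can be persisted so that they survive a reboot (a sketch using the same save mechanism as above):

ipvsadm-save -n > /etc/sysconfig/ipvsadm
# -n keeps numeric addresses instead of resolved names
systemctl enable ipvsadm
# Restore the saved rules automatically at boot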

4. Deploy the NFS shared storage

CentOS 7-4 (192.168.126.14)

systemctl stop firewalld.service 
systemctl disable firewalld.service 
setenforce 0

yum install -y nfs-utils rpcbind

systemctl start nfs.service 
systemctl start rpcbind.service
systemctl enable nfs.service 
systemctl enable rpcbind.service

mkdir /opt/xcf /opt/zxc
chmod 777 /opt/xcf/ /opt/zxc/

vim /etc/exports
/usr/share *(ro,sync)
/opt/xcf 192.168.126.0/24(rw,sync)
/opt/zxc 192.168.126.0/24(rw,sync)

exportfs -rv
showmount -e
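From a node server, the export list can be checked before mounting (using the standard showmount client):

showmount -e 192.168.126.14
# Should list /opt/xcf and /opt/zxc as exported to 192.168.126.0/24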

5. Configure the node servers

CentOS 7-2 (192.168.126.12) and CentOS 7-3 (192.168.126.13)

  • When using DR mode, the node servers also need the VIP address configured, and the kernel's ARP response parameters must be adjusted to prevent the VIP-to-MAC mapping from being updated, avoiding conflicts
  • Apart from that, the Web service configuration is similar to NAT mode
  1. Preparation
systemctl stop firewalld.service 
systemctl disable firewalld.service 
setenforce 0

# Comment out the gateway and DNS entries in both node servers' NIC configuration files, then restart the NIC
# If there is a gateway server, point the gateway at it instead
  2. Configure the virtual IP address (VIP)
# This address is used only as the source address of outgoing Web response packets; it does not need to listen for client requests (the scheduler listens and distributes those instead)
# Therefore, use the virtual interface lo:0 to carry the VIP address, and add a local route record confining traffic for the VIP to this host, to avoid communication disorder

cd /etc/sysconfig/network-scripts/
cp ifcfg-lo ifcfg-lo:0

vim ifcfg-lo:0
DEVICE=lo:0
IPADDR=192.168.126.166
NETMASK=255.255.255.255
# Note: the netmask here must be all ones (255.255.255.255)
#NETWORK=127.0.0.0
# If you're having problems with gated making 127.0.0.0/8 a martian,
# you can change this to something else (255.255.255.255, for example)
#BROADCAST=127.255.255.255
ONBOOT=yes
#NAME=loopback


ifup lo:0
ifconfig lo:0

route add -host 192.168.126.166 dev lo:0
# Pin the route: confine traffic for the VIP to the local lo:0 interface
route -n
# Check the routing table

vim /etc/rc.local
# Add the local VIP route so it is restored at boot
/sbin/route add -host 192.168.126.166 dev lo:0

chmod +x /etc/rc.d/rc.local


  3. Adjust the kernel's ARP response parameters to prevent the VIP's MAC address from being updated and avoid conflicts

Adjust /proc response parameters

vim /etc/sysctl.conf
......
net.ipv4.conf.lo.arp_ignore = 1
# The system only answers ARP requests whose destination IP is a local IP
net.ipv4.conf.lo.arp_announce = 2
# The system does not take the ARP request's source address from the IP packet's source address, but uses the sending interface's IP address
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2


sysctl -p

Or:
echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce

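To verify the ARP suppression, an arping test from another machine on the segment should show replies only from the Director's MAC (a sketch; arping ships with the iputils package, and the interface name is assumed):

arping -I ens33 -c 4 192.168.126.166
# Every reply should carry the Director's MAC address; if a RealServer's
# MAC appears, the arp_ignore/arp_announce settings are not in effect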

  4. Mount the shared directory
yum install -y nfs-utils rpcbind httpd
systemctl start rpcbind
systemctl start httpd

mount.nfs 192.168.126.14:/opt/xcf /var/www/html/
echo 'Hello xcf~' > /var/www/html/index.html

# Make the mount automatic at boot
vim /etc/fstab

192.168.126.14:/opt/xcf /var/www/html nfs defaults,_netdev 0 0

mount -a

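A quick local check that the share is mounted and the page is served (a sketch; assumes curl is installed):

df -h | grep /var/www/html
# Should show the NFS export 192.168.126.14:/opt/xcf
curl http://localhost
# Should print: Hello xcf~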

  5. Configure the other node server (Web2) in the same way, changing only the shared-directory parameters, as sketched below.
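On Web server 2 (192.168.126.13), the only differences are the export it mounts and its test page; a sketch following the pattern above (the 'Hello zxc~' text is just an illustrative page, any distinct content works):

mount.nfs 192.168.126.14:/opt/zxc /var/www/html/
echo 'Hello zxc~' > /var/www/html/index.html

vim /etc/fstab
192.168.126.14:/opt/zxc /var/www/html nfs defaults,_netdev 0 0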

6. Test the LVS cluster

  • Use the Win10 machine as the test client (with its default gateway pointing to 192.168.126.166) and access http://192.168.126.166 directly in a browser

  • The webpage content provided by one of the real servers should be displayed; if each node serves a different page, different requests will see different pages (refresh several times, waiting a moment between refreshes)

  • On the LVS load scheduler, the current load distribution can be observed by viewing node status; with the round-robin (rr) algorithm, the connection load obtained by each node should be roughly equal, as shown in the sketch below
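Commands for watching the distribution on the scheduler (a sketch):

ipvsadm -ln
# The ActiveConn/InActConn columns should be roughly even across the real servers
ipvsadm -ln --stats
# Cumulative per-real-server packet and byte counters
ipvsadm -lnc
# Entries in the current connection table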

Origin: blog.csdn.net/weixin_51486343/article/details/112914769