LVS-DR cluster
LVS-DR data flow analysis
The web nodes and the scheduler sit on the same network segment: requests enter through the scheduler, while replies leave directly from the web nodes.
Terminology: Client — the client; Director — the scheduler; Server — the real server.
The VIP bound to the scheduler's ens33:0 (virtual interface) must be the same address as the VIP bound to each server's lo:0 (loopback virtual interface).
1. The client sends a request to the target VIP, and the Director (load balancer) receives it
IP header and data frame header information
source MAC | dest MAC | … | source IP | dest IP | dest port | … |
---|---|---|---|---|---|---|
Client MAC | Scheduler MAC | … | Client IP | Scheduler VIP | 80 | … |
2. The Director selects Server according to the load balancing algorithm, does not modify or encapsulate the IP message, but changes the MAC address of the data frame to the MAC address of the Server, and then sends it on the LAN
IP header and data frame header information
source MAC | dest MAC | … | source IP | dest IP | dest port | … |
---|---|---|---|---|---|---|
Scheduler MAC | Server MAC | … | Client IP | Scheduler VIP | 80 | … |
3. The Server receives the frame and, after decapsulation, finds that the destination IP matches a local address (the VIP must be bound on the server, on lo:0), so it processes the request. It then builds the reply and sends it onto the LAN directly toward the client
IP header and data frame header information
source MAC | dest MAC | … | source IP | dest IP | source port | … |
---|---|---|---|---|---|---|
Server MAC | Client MAC | … | VIP | Client IP | 80 | … |
4. The Client receives the reply. Clients are served normally and never know which real server handled the request
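The four steps above boil down to one invariant: the Director rewrites only the Ethernet (MAC) header and never touches the IP header. A purely illustrative bash sketch, with frames modeled as strings and all names hypothetical:

```shell
#!/bin/bash
# Model a frame as "srcMAC|dstMAC|srcIP|dstIP" (illustrative only, no real packets).
frame="MAC_client|MAC_director|IP_client|VIP"

# The Director forwards by swapping in a new MAC pair; the IP pair is copied as-is.
src_ip=$(echo "$frame" | cut -d'|' -f3)
dst_ip=$(echo "$frame" | cut -d'|' -f4)
forwarded="MAC_director|MAC_server|${src_ip}|${dst_ip}"

echo "$forwarded"   # the IP header fields are unchanged
```

Because the destination IP is still the VIP, the frame is only deliverable if the real server also owns the VIP, which is exactly why it is bound on lo:0.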
LVS-DR ARP problem
Question 1
In an LVS-DR cluster, the load balancer and the node servers must all be configured with the same VIP address. Duplicate IP addresses on a LAN disrupt ARP: when an ARP broadcast for the VIP reaches the cluster, the load balancer and the node servers are on the same network, so all of them receive it and may all answer.
Solution
Process the node server so that it does not respond to the ARP request for the VIP.
Use the virtual interface lo:0 to carry the VIP address.
Set the kernel parameter arp_ignore=1.
With arp_ignore=1, an interface answers an ARP request only if the target IP is configured on that same interface. Since the VIP is bound to lo:0, ARP requests for the VIP arriving on ens33 are not answered.
Question 2
The node servers may send ARP traffic that announces the VIP with their own MAC. The router then updates its ARP table entry for the VIP and forwards new request messages straight to a real server, bypassing the Director and effectively invalidating its VIP.
Solution
Process the node servers: set the kernel parameter arp_announce=2. With arp_announce=2, the system ignores the IP packet's source address when composing ARP requests and always uses the best local address of the sending interface, so the VIP on lo:0 is never announced.
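Both fixes (arp_ignore=1 and arp_announce=2) are usually made persistent; a sketch of the matching /etc/sysctl.conf entries, assuming the same lo/all scopes used in the node scripts below:

```conf
# /etc/sysctl.conf — persist the ARP tuning across reboots
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_announce = 2
```

Apply without rebooting via `sysctl -p`.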
Attachment: how ARP works
ARP (Address Resolution Protocol) resolves a known target IP address into its unknown MAC address.
Example: PC1 wants to send a message to PC2 but only knows PC2's IP, not its MAC. PC1 sends an ARP broadcast frame; the switch floods the broadcast unconditionally, so every host connected to it receives the frame. Each host compares the target IP with its own address: hosts that do not match discard the frame, and the host that matches replies, including its own IP and MAC address in the reply. When PC1 receives the reply, it records the responder's IP and MAC together in its ARP cache.
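The caching step can be pictured as a key-value map from IP to MAC. A toy bash sketch (the MAC value is made up for illustration):

```shell
#!/bin/bash
# Toy ARP cache: the OS keeps an IP -> MAC map, filled in by ARP replies.
declare -A arp_cache

# PC1 broadcasts "who has 192.168.2.16?"; the matching host replies with its
# MAC, and PC1 stores the pair (hypothetical MAC shown).
arp_cache["192.168.2.16"]="00:0c:29:aa:bb:cc"

# Later frames to 192.168.2.16 skip the broadcast and reuse the cached MAC.
echo "${arp_cache[192.168.2.16]}"
```

On a real Linux host the live cache can be inspected with `ip neigh show`.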
LVS-DR cluster deployment
Cluster topology diagram
Deployment environment
IP settings
Virtual IP: 192.168.2.100
Device | IP address | Subnet mask | Gateway | Network card |
---|---|---|---|---|
LVS | 192.168.2.15 | 255.255.255.0 | — | ens33 |
web1 | 192.168.2.16 | 255.255.255.0 | 192.168.2.15 | ens33 |
web2 | 192.168.2.17 | 255.255.255.0 | 192.168.2.15 | ens33 |
nfs | 192.168.2.18 | 255.255.255.0 | — | ens33 |
NFS shared storage
1. Install the nfs-utils and rpcbind packages
yum -y install nfs-utils rpcbind
2. Set up the shared directories
(two different directories are used so the two web nodes can be told apart during testing)
mkdir /opt/web1
mkdir /opt/web2
echo "<html><title>web1</title><body><h1>This is web1</h1></body></html>" >> /opt/web1/index.html
echo "<html><title>web2</title><body><h1>This is web2</h1></body></html>" >> /opt/web2/index.html
vi /etc/exports
/opt/web1 192.168.2.16(ro)
/opt/web2 192.168.2.17(ro)
Restart the services
systemctl restart rpcbind nfs
View the NFS shared directory published by the machine
showmount -e
web1 node
Install httpd for testing
yum -y install httpd
View the shared directory of the NFS server
showmount -e 192.168.2.18
Mount the directory to the root directory of the website
mount 192.168.2.18:/opt/web1 /var/www/html/
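The mount above does not survive a reboot; a sketch of a matching /etc/fstab entry (web2 would use /opt/web2 instead):

```conf
# /etc/fstab — persistent NFS mount for web1
192.168.2.18:/opt/web1  /var/www/html  nfs  defaults,_netdev  0 0
```

The `_netdev` option delays the mount until the network is up.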
Check if the mount is successful
cat /var/www/html/index.html
Open httpd
systemctl start httpd
Turn off the firewall and SELinux
systemctl stop firewalld
setenforce 0
vi /etc/selinux/config
(set SELINUX=disabled so the change persists across reboots)
test
curl http://localhost
Edit script
vim web1.sh
#!/bin/bash
# LVS-DR mode: web1
ifconfig lo:0 192.168.2.100 broadcast 192.168.2.100 netmask 255.255.255.255 up
route add -host 192.168.2.100 dev lo:0
echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
sysctl -p &> /dev/null
Execute the script
chmod +x web1.sh
./web1.sh
web2 node
The web2 node is configured the same way as web1, apart from a few parameters
Install httpd for testing
yum -y install httpd
View the shared directory of the NFS server
showmount -e 192.168.2.18
Mount the directory to the root directory of the website
mount 192.168.2.18:/opt/web2 /var/www/html/
Check if the mount is successful
cat /var/www/html/index.html
Open httpd
systemctl start httpd
Turn off the firewall and SELinux
systemctl stop firewalld
setenforce 0
vi /etc/selinux/config
(set SELINUX=disabled so the change persists across reboots)
test
curl http://localhost
Edit script
vim web2.sh
#!/bin/bash
# LVS-DR mode: web2
ifconfig lo:0 192.168.2.100 broadcast 192.168.2.100 netmask 255.255.255.255 up
route add -host 192.168.2.100 dev lo:0
echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
sysctl -p &> /dev/null
Execute the script
chmod +x web2.sh
./web2.sh
LVS scheduler deployment
1. Manually load the ip_vs module
modprobe ip_vs
View
cat /proc/net/ip_vs
Install ipvsadm
yum -y install ipvsadm
2. Write the script
vim dr.sh
#!/bin/bash
#Lvs-dr
ifconfig ens33:0 192.168.2.100 broadcast 192.168.2.100 netmask 255.255.255.255 up
route add -host 192.168.2.100 dev ens33:0
ipvsadm -C
ipvsadm -A -t 192.168.2.100:80 -s rr
ipvsadm -a -t 192.168.2.100:80 -r 192.168.2.16:80 -g
ipvsadm -a -t 192.168.2.100:80 -r 192.168.2.17:80 -g
ipvsadm -Ln
ipvsadm options
Options | Description |
---|---|
-C | Clear all records in the kernel virtual server table |
-A | Add a virtual server |
-a | Add a real server |
-t | Used to specify the VIP address and TCP port |
-r | Used to specify RIP address and TCP port |
-s | Used to specify the load scheduling algorithm |
-m | Indicates the use of NAT cluster mode |
-g | Indicates the use of DR cluster mode |
-i | Indicates the use of TUN cluster mode |
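The `-s rr` scheduler configured above hands successive connections to the real servers in strict rotation. A minimal bash sketch of that rotation (a conceptual model, not ipvsadm itself):

```shell
#!/bin/bash
# Toy round-robin: cycle through the real-server pool the way -s rr does.
servers=(192.168.2.16 192.168.2.17)
i=0
pick() {
  # Print the next server in rotation, then advance the counter.
  echo "${servers[$((i % ${#servers[@]}))]}"
  i=$((i + 1))
}
pick   # 192.168.2.16
pick   # 192.168.2.17
pick   # 192.168.2.16
pick   # 192.168.2.17
```

Round-robin ignores server load entirely; ipvsadm's `-s wrr` (weighted round robin) and `-s lc` (least connections) are the usual alternatives when the pool is uneven.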
Execute the script
chmod +x dr.sh
./dr.sh
3. Access test
From a client machine, visit the VIP several times; with the rr algorithm the responses should alternate between web1 and web2
curl http://192.168.2.100
Check the connection details on the director
ipvsadm -Lnc