Article Directory
- Preface
- One: LVS-DR working principle
- Two: LVS-DR deployment
- 2.3 Configure two WEB nodes
- 2.4 Configure keepalived on the two LVS active/standby servers
- 2.5 Start the services, set all virtual machine NICs to host-only mode, and test
- 2.6 Test keepalived's automatic failover when an LVS director fails
- 2.7 Successful test: the client accesses the WEB node servers normally
Preface
One: LVS-DR working principle
1.1: Overview of DR mode
Direct Routing, abbreviated DR, is one of the working modes of a load balancing cluster. It adopts a semi-open network structure similar to that of TUN mode, but the node servers are not scattered across different locations: they sit on the same physical network as the scheduler.
The load scheduler reaches each node server over the local network, so there is no need to establish dedicated IP tunnels.
1.2: How to analyze the flow of LVS-DR packets?
To simplify the analysis, the client and the cluster machines are placed on the same network; a packet follows the route 1-2-3-4.
1. The client sends a request to the target VIP, which the Director (load balancer) receives. At this point the IP header and frame header information is:
2. The Director selects RealServer_1 according to the load balancing algorithm. It neither modifies nor re-encapsulates the IP packet; it only changes the destination MAC address of the frame to RealServer_1's MAC address, then sends the frame on the LAN. The IP header and frame header information is:
3. RealServer_1 receives the frame and, after decapsulation, finds that the destination IP matches the local machine (each RealServer is bound to the VIP in advance), so it processes the request. It then re-encapsulates the reply and sends it on the LAN. At this point the IP header and frame header information is:
4. The client receives the reply and takes it as a normal response; it cannot tell which server actually handled the request.
Note: if the reply crosses network segments, it is returned to the client via the router.
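The address changes in steps 1-3 can be summarized in one table. The snippet below just prints that summary (CIP stands for the client's IP; the key point is that the IP header stays CIP to VIP all the way to the real server, and only the frame's MAC addresses change in flight):

```shell
# Address summary for LVS-DR, reconstructed from steps 1-3 above (CIP = client IP).
table='Hop                         Src IP   Dst IP   Src MAC        Dst MAC
1 Client -> Director        CIP      VIP      client         Director
2 Director -> RealServer_1  CIP      VIP      Director       RealServer_1
3 RealServer_1 -> Client    VIP      CIP      RealServer_1   client/router'
echo "$table"
```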
1.3: ARP problems in LVS-DR
In an LVS-DR load balancing cluster, the load balancer and the node servers must all be configured with the same VIP address.
Duplicate IP addresses on a local area network inevitably disturb ARP communication between the servers:
when an ARP broadcast for the VIP is sent into the LVS-DR cluster, the load balancer and the node servers are connected to the same network, so all of them receive it.
Only the front-end load balancer should respond; the node servers must therefore be configured not to answer ARP requests for the VIP:
use the virtual interface lo:0 to carry the VIP address, and
set the kernel parameter arp_ignore=1 so the system only responds to ARP requests whose target IP is configured on the receiving interface.
When a RealServer sends a reply packet (whose source IP is the VIP) that must be forwarded by the router, it first needs to obtain the router's MAC address to encapsulate the frame.
When sending that ARP request, Linux by default uses the source IP of the outgoing IP packet (i.e. the VIP) as the source IP of the ARP request, instead of the IP address of the sending interface (such as ens33).
When the router receives this ARP request, it updates its ARP table:
the entry mapping the VIP to the Director's MAC address is overwritten with the RealServer's MAC address.
The router then forwards new request packets to the RealServer according to this ARP entry, and the Director's VIP effectively stops receiving traffic.
Solution
Configure the node servers with the kernel parameter arp_announce=2: the system does not use the IP packet's source address as the source address of ARP requests, but instead uses the IP address of the sending interface.
1.4: Methods to solve the above two ARP problems
Modify the /etc/sysctl.conf file.
Make the node servers not respond to ARP requests for the VIP:
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
Make the system use the sending interface's IP address, rather than the IP packet's source address, as the source of ARP requests:
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
Two: LVS-DR deployment
2.1: Case environment
In order to further improve the load capacity of the company's website, the company decided to expand the existing platform and build an LVS-based load balancing cluster. Considering access efficiency, the administrator plans to adopt the DR mode of the LVS cluster, with the shared storage devices kept on an internal private network.
LVS1:192.168.100.20 PC-2
LVS2:192.168.100.3 PC-3
Web1:192.168.100.4 PC-4
Web2:192.168.100.5 PC-5
VIP=192.168.100.10
Win 10 :192.168.100.150
2.2 First configure the two LVS directors
2.2.1 Install the related software: keepalived (dual-machine hot standby) and ipvsadm (the LVS management tool)
[root@pc-2 ~]# yum install keepalived ipvsadm -y
[root@pc-3 ~]# yum install keepalived ipvsadm -y
Modify the configuration file to enable routing and disable ICMP redirects:
[root@pc-2 ~]# vim /etc/sysctl.conf
net.ipv4.ip_forward=1
net.ipv4.conf.all.send_redirects=0
net.ipv4.conf.default.send_redirects=0
net.ipv4.conf.ens33.send_redirects=0
[root@pc-2 ~]# sysctl -p    // apply the changes
net.ipv4.ip_forward = 1    // enable routing
// disable ICMP redirects:
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens33.send_redirects = 0
2.2.2 Configure physical network card interface, virtual network card interface
[root@pc-2 network-scripts]# cp ifcfg-ens33 ifcfg-ens33:0
[root@pc-2 network-scripts]# vim ifcfg-ens33:0
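The contents of ifcfg-ens33:0 are not shown above; a minimal sketch consistent with this case's addressing (key names may vary between distributions, so treat this as an assumption) is:

```shell
# Hypothetical /etc/sysconfig/network-scripts/ifcfg-ens33:0 on the director
DEVICE=ens33:0
ONBOOT=yes
IPADDR=192.168.100.10      # the cluster VIP
NETMASK=255.255.255.255    # 32-bit mask so the VIP is a host address, not a subnet
```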
2.2.3 Create dr script file for easy operation
cd /etc/init.d/
vim dr.sh
//Script content:
#!/bin/bash
GW=192.168.100.1
VIP=192.168.100.10
RIP1=192.168.100.4
RIP2=192.168.100.5
case "$1" in
start)
/sbin/ipvsadm --save > /etc/sysconfig/ipvsadm
systemctl start ipvsadm
/sbin/ifconfig ens33:0 $VIP broadcast $VIP netmask 255.255.255.255 up
/sbin/route add -host $VIP dev ens33:0
/sbin/ipvsadm -A -t $VIP:80 -s rr
/sbin/ipvsadm -a -t $VIP:80 -r $RIP1:80 -g
/sbin/ipvsadm -a -t $VIP:80 -r $RIP2:80 -g
echo "ipvsadm starting --------------------[ok]"
;;
stop)
/sbin/ipvsadm -C
systemctl stop ipvsadm
ifconfig ens33:0 down
route del $VIP
echo "ipvsadm stopped --------------------[ok]"
;;
status)
if [ ! -e /var/lock/subsys/ipvsadm ]; then
echo "ipvsadm stopped --------------------"
exit 1
else
echo "ipvsadm running --------------------[ok]"
fi
;;
*)
echo "Usage: $0 {start|stop|status}"
exit 1
esac
exit 0
Add execute permission. Note: do not run the script yet!
chmod +x dr.sh
Configure the virtual interface on PC-3 in the same way, then:
ifup ens33:0    // bring up the virtual interface carrying the VIP
systemctl stop firewalld
setenforce 0
service dr.sh start    // start the LVS service
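After dr.sh start succeeds, the virtual service can be verified with standard ipvsadm output. The shape sketched in the comments follows from this case's configuration; it is not captured output:

```shell
ipvsadm -ln
# Expected shape (addresses from this case), roughly:
#   TCP  192.168.100.10:80 rr
#     -> 192.168.100.4:80    Route   1
#     -> 192.168.100.5:80    Route   1
```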
2.3 Configure two WEB nodes
Configure PC-4 and PC-5.
2.3.1 Install httpd software
[root@pc-5 ~]# yum install -y httpd
[root@pc-4 ~]# yum install -y httpd
Disable the firewall:
systemctl stop firewalld
setenforce 0
[root@pc-5 ]# vim /var/www/html/index.html
2.3.2 Configure the loopback interface lo:0
cd /etc/sysconfig/network-scripts/
cp ifcfg-lo ifcfg-lo:0
vim ifcfg-lo:0
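As on the director, the file contents are not shown; a minimal sketch for the node servers, under the same assumptions, is:

```shell
# Hypothetical /etc/sysconfig/network-scripts/ifcfg-lo:0 on each web node
DEVICE=lo:0
IPADDR=192.168.100.10      # the same VIP, carried silently on loopback
NETMASK=255.255.255.255    # must be a 32-bit mask
ONBOOT=yes
```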
2.3.3 Writing web service scripts
[root@pc-4 network-scripts]# cd /etc/init.d
[root@pc-4 init.d]# vim web.sh
#!/bin/bash
VIP=192.168.100.10
case "$1" in
start)
ifconfig lo:0 $VIP netmask 255.255.255.255 broadcast $VIP    # bind the VIP to lo:0
/sbin/route add -host $VIP dev lo:0
echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
sysctl -p >/dev/null 2>&1
echo "RealServer Start OK"
;;
stop)
ifconfig lo:0 down
route del $VIP >/dev/null 2>&1
echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore    # restore ARP responses
echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce
echo "RealServer Stopped"
;;
*)
echo "Usage: $0 {start|stop}"
exit 1
esac
exit 0
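A quick sanity check after web.sh start (hypothetical commands, run as root on a node):

```shell
ifconfig lo:0                                 # should show inet 192.168.100.10
cat /proc/sys/net/ipv4/conf/lo/arp_ignore     # should print 1
curl -s http://127.0.0.1/                     # should return this node's index.html
```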
2.4 Configure keepalived on the two LVS active/standby servers
2.4.1 Configure keepalived parameters
On both the PC-2 and PC-3 servers,
edit the keepalived configuration:
cd /etc/keepalived/
[root@pc-2 keepalived]# vim keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 127.0.0.1    // change 1: set to 127.0.0.1
smtp_connect_timeout 30
router_id LVS_01    // change 2: on the LVS2 server use LVS_02
vrrp_skip_check_adv_addr
vrrp_strict
vrrp_garp_interval 0
vrrp_gna_interval 0
}
vrrp_instance VI_1 {
state MASTER    // change 3: on the backup server LVS2, use BACKUP
interface ens33    // change 4: set to ens33
virtual_router_id 10    // change 5: the VRRP group; must be identical on both LVS servers
priority 100    // change 6: the backup LVS2 gets a priority below 100, e.g. 90
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.100.10    // change 7: only the virtual IP address
}
}
virtual_server 192.168.100.10 80 {
// change 8: the virtual IP address and port
delay_loop 6
lb_algo rr
lb_kind DR    // change 9: we are using LVS DR mode
persistence_timeout 50
protocol TCP
real_server 192.168.100.4 80 {
// change 10: the web1 server node
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 192.168.100.5 80 {
// change 11: the web2 server node
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
2.4.2 Copy the file to the other LVS server
[root@pc-2 keepalived]# scp keepalived.conf root@192.168.100.3:/etc/keepalived/
The authenticity of host '192.168.100.3 (192.168.100.3)' can't be established.
ECDSA key fingerprint is SHA256:90aPw0S3HsHIz87PlUJMnwuceFtLo+xqNo6qSzF4vME.
ECDSA key fingerprint is MD5:86:db:7f:af:8d:e0:7d:c0:a2:0c:b1:17:e9:9f:0b:5d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.100.3' (ECDSA) to the list of known hosts.
[email protected]'s password:
keepalived.conf
2.4.3 Modify the relevant parameters: make sure router_id differs between the two servers, virtual_router_id (the VRRP group) is the same, and the priorities differ
2.5 Start the services, set all virtual machine NICs to host-only mode, and test
systemctl restart keepalived
systemctl restart ipvsadm.service
service network restart
service dr.sh start
service web.sh start
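Note that persistence_timeout 50 in keepalived.conf pins each client to one real server for 50 seconds, so repeated requests from the Win 10 client will hit the same web node until the timeout expires; lower or remove it temporarily to observe the rr alternation. A simple check from any host on the 192.168.100.0/24 segment:

```shell
# Request the VIP several times; with persistence_timeout 50 active, all
# responses within the 50s window come from the same web node.
for i in 1 2 3 4; do
    curl -s http://192.168.100.10/
done
```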
2.6 Test keepalived's automatic failover when an LVS director fails
The keepalived tool provides automatic failover. Shut down the service on the LVS1 server to simulate a failure: if the client can still reach the virtual IP address and access the website normally, LVS2 has taken over from LVS1, preventing a single point of failure.
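One way to observe the takeover (hypothetical commands; keepalived adds the VIP as a secondary address on ens33 of whichever director is master):

```shell
# On PC-2 (the current master), simulate a director failure:
systemctl stop keepalived

# On PC-3 (the backup), the VIP should now be bound to ens33:
ip addr show ens33 | grep 192.168.100.10
```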
2.6.1 Manually stop the virtual interface to simulate failure test
[root@pc-2 ~]# ifdown ens33:0
[root@pc-2 ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.100.20 netmask 255.255.255.0 broadcast 192.168.100.255
inet6 fe80::20c:29ff:fe5c:722e prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:5c:72:2e txqueuelen 1000 (Ethernet)
RX packets 6353 bytes 497090 (485.4 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 12384 bytes 1197061 (1.1 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 53132 bytes 4193520 (3.9 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 53132 bytes 4193520 (3.9 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
virbr0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 192.168.122.1 netmask 255.255.255.0 broadcast 192.168.122.255
ether 52:54:00:bc:78:98 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
2.7 Successful test: the client accesses the WEB node servers normally