LVS-DR
The load-balancing cluster working mode Direct Routing is referred to as DR mode for short. It adopts a semi-open network structure similar to that of TUN mode, but the nodes are not scattered across different locations; they sit on the same physical network as the scheduler.
The load scheduler reaches each node server over the local network, so no dedicated IP tunnel is needed.
The LVS-DR data flow
1. The client sends a request to the target VIP and the Director (load balancer) receives it. At this point the source IP is the client's IP (CIP), the destination IP is the VIP, and the frame's destination MAC is the Director's MAC.
2. The Director selects RealServer_1 according to the load-balancing algorithm. It does not modify or re-encapsulate the IP packet; it only rewrites the frame's destination MAC to RealServer_1's MAC and sends the frame back onto the LAN. The source and destination IPs are still CIP and VIP.
3. RealServer_1 receives the frame and, after decapsulation, finds that the destination IP matches the local machine (the VIP is bound on the RealServer in advance), so it processes the packet. It then re-encapsulates the reply with source IP = VIP and destination IP = CIP and sends it onto the LAN, bypassing the Director.
4. The client receives the reply. The client believes it is getting normal service and never knows which server handled the request.
Note: if the client is on a different network segment, the reply is returned to the user via the router.
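The steps above can be condensed into a small header-state sketch; this is illustrative only, with CIP standing for the client's IP and the other names matching the steps above:

```shell
# Illustrative summary of how DR mode rewrites headers at each step.
# CIP = client IP, VIP = virtual IP; names only, no real packet capture.
flow=$(cat <<'EOF'
1. Client   -> Director:      src IP=CIP, dst IP=VIP, dst MAC=Director MAC
2. Director -> RealServer_1:  src IP=CIP, dst IP=VIP, dst MAC=RealServer_1 MAC (only the MAC changes)
3. RealServer_1 -> Client:    src IP=VIP, dst IP=CIP (reply bypasses the Director)
EOF
)
echo "$flow"
```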
Possible problems
In an LVS-DR load-balancing cluster, the load balancer and the node servers must all be configured with the same VIP address.
Having the same IP address on a LAN, however, disturbs ARP communication between the servers:
when an ARP broadcast for the VIP reaches the LVS-DR cluster, the load balancer and the node servers are all connected to the same network, so all of them receive it.
Only the front-end load balancer should answer; the other node servers must not respond to the ARP broadcast.
The node servers therefore have to be configured so that they do not respond to ARP requests for the VIP:
Use the virtual interface lo:0 to carry the VIP address.
Set the kernel parameter arp_ignore=1 so that the system only responds to ARP requests whose destination IP is configured on the receiving interface.
A related problem: packets returned by a RealServer carry the VIP as their source IP and are forwarded by the router, so the RealServer must first obtain the router's MAC address when re-encapsulating the frame. The ARP request it sends for that must not advertise the VIP as its source, or the router's ARP entry for the VIP would be polluted; this is what arp_announce addresses.
Solution to the above ARP problem
Modify the /etc/sysctl.conf file on the node servers so that they do not respond to ARP requests for the VIP:
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
arp_announce = 2 tells the system not to use the IP packet's source address when composing ARP requests, but to select the IP address of the sending interface instead.
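A quick way to confirm these parameters took effect on a node server is to read them back from /proc; a minimal sketch, assuming a Linux host:

```shell
# Read back the ARP-suppression parameters (no root needed for reading).
status=""
for dev in lo all; do
    ign=$(cat /proc/sys/net/ipv4/conf/$dev/arp_ignore)
    ann=$(cat /proc/sys/net/ipv4/conf/$dev/arp_announce)
    status="$status $dev:arp_ignore=$ign,arp_announce=$ann"
done
echo "$status"
```

On a correctly configured node server this reports arp_ignore=1 and arp_announce=2 for both lo and all.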
keepalived
Overview of the keepalived tool
A health-check tool designed specifically for LVS and HA
Supports automatic failover
Supports node health checking
Official website: http://www.keepalived.org
Principle analysis
Keepalived uses the VRRP hot-backup protocol to provide multi-machine hot backup for Linux servers.
VRRP (Virtual Router Redundancy Protocol) is a backup solution for routers.
Multiple routers form a hot-backup group and provide service through a shared virtual IP address.
At any moment only one master router in each hot-backup group provides service; the others remain in a redundant (standby) state.
If the currently active router fails, another router takes over the virtual IP address automatically according to the configured priorities and continues to provide service.
Keepalived deployment description
Keepalived supports multi-machine hot backup, and each hot-backup group can contain multiple servers. The most common setup is two-machine hot standby, whose failover is achieved by drifting the virtual IP address; it suits all kinds of application servers.
Keepalived installation description
When used in an LVS cluster environment, the ipvsadm management tool is also required.
Install Keepalived with YUM, then enable the Keepalived service.
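On a CentOS/RHEL host like the lab machines below, the installation amounts to the following (a sketch; run as root):

```shell
yum -y install keepalived ipvsadm    # ipvsadm is also needed in an LVS environment
systemctl start keepalived           # start the service now
systemctl enable keepalived          # and have it start automatically at boot
```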
Common configuration
The Keepalived configuration directory is /etc/keepalived/
keepalived.conf is the main configuration file
The global_defs {...} section specifies global parameters
The vrrp_instance <instance_name> {...} section specifies the VRRP hot-standby parameters
Comment text starts with the "!" symbol
The samples/ directory provides many configuration examples for reference
Common configuration options
router_id HA_TEST_R1: the name of the router (server)
vrrp_instance VI_1: defines a VRRP hot-standby instance
state MASTER: hot-standby state; MASTER denotes the master server
interface ens33: the physical interface carrying the VIP address
virtual_router_id 1: the virtual router's ID number, which must be identical within each hot-standby group
priority 100: priority; the larger the value, the higher the priority
advert_int 1: seconds between advertisements (heartbeat interval)
auth_type PASS: authentication type
auth_pass 123456: password string
virtual_ipaddress { vip }: specifies the drift address (VIP); there can be several, listed one per line
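Putting the options above together, a minimal hot-standby instance looks roughly like this (a sketch, not a full keepalived.conf; values match the option list above):

```
vrrp_instance VI_1 {
    state MASTER              ! MASTER on the primary, BACKUP on the standby
    interface ens33           ! physical interface carrying the VIP
    virtual_router_id 1       ! identical within the hot-standby group
    priority 100              ! higher value wins the election
    advert_int 1              ! heartbeat interval in seconds
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.10.10         ! the drift (virtual) IP address
    }
}
```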
Backup server configuration
The Keepalived backup server's configuration differs from the master's in three options:
router_id: set to a distinct name
state: set to BACKUP
priority: a value lower than the master server's
All other options are the same as on the master
LVS-DR! Come on!! Show!!!
Experiment description
As usual, Apache serves the demo pages!
LVS01: 192.168.10.20
LVS02: 192.168.10.21
WebServer01: 192.168.10.30
WebServer02: 192.168.10.31
Virtual IP (Vip): 192.168.10.10
Purpose
The client can successfully access the webpage by accessing the drifting IP address of the lvs scheduler
Start showing!
Configure LVS
Install ipvsadm on both LVS servers (keepalived is installed here as well, for the dual-machine hot backup later)
[root@lvs01 ~]# yum -y install ipvsadm keepalived
[root@lvs02 ~]# yum -y install ipvsadm keepalived
On both LVS servers, edit the sysctl configuration file to enable IPv4 forwarding and disable ICMP redirects
[root@lvs01 ~]# vim /etc/sysctl.conf
##append the following at the end of the file
##enable IPv4 forwarding
net.ipv4.ip_forward = 1
##disable all IPv4 redirects
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens33.send_redirects = 0
##reload the configuration so it takes effect
[root@lvs01 ~]# sysctl -p
net.ipv4.ip_forward = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
On both LVS servers, create a service script and place it in init.d for convenient service management
[root@lvs01 /]# vim /etc/init.d/DR.sh
#!/bin/bash
GW=192.168.10.1
VIP=192.168.10.10 ##virtual IP
RIP1=192.168.10.30 ##real web server IPs
RIP2=192.168.10.31
case "$1" in
start)
/sbin/ipvsadm --save > /etc/sysconfig/ipvsadm ##save the rule set
systemctl start ipvsadm ##start the service
/sbin/ifconfig ens33:0 $VIP broadcast $VIP netmask 255.255.255.255 up
##configure ens33:0 with the VIP, broadcast address and netmask, then bring it up
/sbin/route add -host $VIP dev ens33:0 ##add a host route for the VIP
/sbin/ipvsadm -A -t $VIP:80 -s rr ##define the virtual service entry with round-robin scheduling
/sbin/ipvsadm -a -t $VIP:80 -r $RIP1:80 -g ##add the real servers in DR mode (-g)
/sbin/ipvsadm -a -t $VIP:80 -r $RIP2:80 -g
echo "ipvsadm starting --------------------[ok]"
;;
stop)
/sbin/ipvsadm -C ##clear the rule table
systemctl stop ipvsadm ##stop the service
ifconfig ens33:0 down ##bring the interface down
route del $VIP ##delete the route entry
echo "ipvsadm stopped----------------------[ok]"
;;
status)
if [ ! -e /var/lock/subsys/ipvsadm ];then ##report status based on whether the lock file exists
echo "ipvsadm stopped---------------"
exit 1
else
echo "ipvsadm running ---------[ok]"
fi
;;
*)
echo "Usage: $0 {start|stop|status}"
exit 1
esac
exit 0
[root@lvs01 /]# chmod +x /etc/init.d/DR.sh
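Once the script has run, the rule table can be checked with ipvsadm -ln; output along these lines (illustrative, not an exact capture) shows the virtual service with both real servers in Route (DR) forwarding mode:

```
[root@lvs01 /]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.10.10:80 rr
  -> 192.168.10.30:80             Route   1      0          0
  -> 192.168.10.31:80             Route   1      0          0
```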
In the VM settings, attach both LVS machines to LAN segment 1.
Because ifconfig in the script only sets a temporary IP address, make it permanent in the interface configuration file as well:
[root@lvs01 /]# cd /etc/sysconfig/network-scripts/
[root@lvs01 network-scripts]# cp -p ifcfg-ens33 ifcfg-ens33:0
[root@lvs01 network-scripts]# vim ifcfg-ens33:0
DEVICE=ens33:0
ONBOOT=yes
IPADDR=192.168.10.10
NETMASK=255.255.255.0
[root@lvs01 network-scripts]# vim ifcfg-ens33
BOOTPROTO=static
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.10.20 ##use 192.168.10.21 on LVS02
PREFIX=24
GATEWAY=192.168.10.1
On both machines, restart the network service, bring up the virtual interface and run the shell script
[root@lvs01 network-scripts]# systemctl restart network
[root@lvs01 network-scripts]# ifup ens33:0
[root@lvs01 network-scripts]# service DR.sh start
ipvsadm starting --------------------[ok]
[root@lvs01 network-scripts]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.10.20 netmask 255.255.255.0 broadcast 192.168.10.255
ens33:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.10.10 netmask 255.255.255.255 broadcast 192.168.10.10
ether 00:0c:29:43:a9:53 txqueuelen 1000 (Ethernet)
[root@lvs01 network-scripts]# systemctl stop firewalld
[root@lvs01 network-scripts]# setenforce 0
Configure the web servers
Install httpd on both:
[root@web01 /]# yum -y install httpd
In the VM settings, attach web01 and web02 to LAN segment 1.
Change the IP address and configure lo:0:
[root@web01 network-scripts]# cp -p ifcfg-lo ifcfg-lo:0
[root@web01 network-scripts]# vim ifcfg-lo:0
DEVICE=lo:0
IPADDR=192.168.10.10
NETMASK=255.255.255.255
ONBOOT=yes
[root@web01 network-scripts]# vim ifcfg-ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.10.30 ##use 192.168.10.31 on web02
PREFIX=24
GATEWAY=192.168.10.1
[root@web01 network-scripts]# systemctl restart network
[root@web01 network-scripts]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.10.30 netmask 255.255.255.0 broadcast 192.168.10.255
lo:0: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 192.168.10.10 netmask 255.255.255.255
loop txqueuelen 1000 (Local Loopback)
Configure the ARP-suppression script
[root@web01 network-scripts]# vim /etc/init.d/apa.sh
#!/bin/bash
VIP=192.168.10.10
case "$1" in
start)
ifconfig lo:0 $VIP netmask 255.255.255.255 broadcast $VIP
/sbin/route add -host $VIP dev lo:0
echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore ##suppress ARP replies for the VIP
echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
sysctl -p >/dev/null 2>&1
echo "RealServer Start OK "
;;
stop)
ifconfig lo:0 down
route del $VIP >/dev/null 2>&1
echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore ##restore normal ARP behaviour
echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce
echo "RealServer Stopped"
;;
*)
echo "Usage: $0 {start|stop}"
exit 1
esac
exit 0
[root@web01 network-scripts]# chmod +x /etc/init.d/apa.sh
Configure the main page on both web servers
[root@web01 html]# echo "<h1>This is Web01 Server.</h1>" > index.html
[root@web01 html]# ls
index.html
[root@web02 html]# echo "<h1>This is Web02 Server.</h1>" > index.html
[root@web02 html]# ls
index.html
Bring up lo:0, run the script, and stop the firewall
[root@web01 html]# ifup lo:0
[root@web01 html]# service apa.sh start
RealServer Start OK
[root@web01 html]# systemctl stop firewalld
[root@web01 html]# setenforce 0
Verification experiment
Visit the floating IP: the web page loads successfully and round-robin scheduling alternates between the two servers
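The alternation seen in the browser is just the rr scheduler cycling through the real-server list; a pure-shell sketch of that behaviour (an illustration, not LVS code):

```shell
# Simulate what the rr (round-robin) scheduler does for four requests.
a=192.168.10.30
b=192.168.10.31
picks=""
for i in 1 2 3 4; do
    if [ $((i % 2)) -eq 1 ]; then
        picks="$picks$a "    # odd requests go to the first real server
    else
        picks="$picks$b "    # even requests go to the second
    fi
done
echo "$picks"
```

On the real cluster the same pattern appears with repeated `curl http://192.168.10.10/` requests; note that the `persistence_timeout 50` used later with keepalived pins a client to one server for 50 seconds, so lower it when you want to observe the alternation.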
Keepalived! Come on!! Show me again!!!
The experimental environment is based on the above DR deployment
Start showing!
Configure the two LVS servers
[root@lvs01 network-scripts]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
[email protected]
[email protected]
[email protected]
}
notification_email_from [email protected]
smtp_server 127.0.0.1 ##points to its own loopback address
smtp_connect_timeout 30
router_id LVS01 ##the id must differ between the two LVS machines; use LVS02 on the other
vrrp_skip_check_adv_addr
vrrp_strict
vrrp_garp_interval 0
vrrp_gna_interval 0
}
vrrp_instance VI_1 {
state MASTER
interface ens33 ##set to your own NIC name
virtual_router_id 10 ##must be identical on both machines
priority 100 ##higher value wins, so 02 can be set to 90
advert_int 1
authentication {
auth_type PASS
auth_pass 1111 ##this line and the one above are best left unchanged; if changed, both machines must match
}
virtual_ipaddress {
192.168.10.10
}
}
virtual_server 192.168.10.10 80 {
##the floating VIP, port 80, pointing at the http service
delay_loop 6
lb_algo rr ##round-robin scheduling
lb_kind DR ##DR mode
persistence_timeout 50 ##note: 50 s of persistence keeps a client on one server; lower it when testing round robin
protocol TCP
real_server 192.168.10.30 80 {
##points at web01, port 80
weight 1 ##delete the ~9 lines that originally follow here
TCP_CHECK { ##add the following:
connect_port 80 ##health-check connect port
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
##delete everything else below, then duplicate the real_server block above
real_server 192.168.10.31 80 {
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
For easy copying, the configuration file for LVS02 follows:
! Configuration File for keepalived
global_defs {
notification_email {
[email protected]
[email protected]
[email protected]
}
notification_email_from [email protected]
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id LVS02
vrrp_skip_check_adv_addr
vrrp_strict
vrrp_garp_interval 0
vrrp_gna_interval 0
}
vrrp_instance VI_1 {
state BACKUP
interface ens33
virtual_router_id 10
priority 90
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.10.10
}
}
virtual_server 192.168.10.10 80 {
delay_loop 6
lb_algo rr
lb_kind DR
persistence_timeout 50
protocol TCP
real_server 192.168.10.30 80 {
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 192.168.10.31 80 {
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
Start the services
[root@lvs01 network-scripts]# systemctl start keepalived
[root@lvs01 network-scripts]# systemctl restart keepalived
[root@lvs01 network-scripts]# systemctl restart network
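Before testing failover it helps to confirm which node currently holds the VIP; a diagnostic fragment for the lab hosts (keepalived adds the VIP to ens33 on whichever node is currently MASTER):

```shell
ip -4 addr show ens33 | grep -q 192.168.10.10 && echo "this node holds the VIP"
systemctl status keepalived --no-pager | head -n 5    # confirm keepalived is active
```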
Verification experiment
Visit the web page through the VIP, then shut down LVS01 to simulate a failure:
[root@lvs01 network-scripts]# systemctl stop network
Visit again.
The backup server takes over scheduling successfully and the experiment is complete