LVS-DR cluster deployment

1. Brief description of LVS-DR mode

1. The working principle of LVS-DR mode

  • In DR mode, traffic from the client to the server first passes through the scheduler and is then forwarded to the individual Web nodes;
  • Traffic from the Web nodes back to the client goes through the router directly, without passing through the scheduler.

2. LVS-DR data packet flow analysis


  • 1. The client sends a request to the target VIP, and the Director (load balancer) receives it.
  • 2. The Director selects RealServer_1 according to the load-balancing algorithm. It does not modify or re-encapsulate the IP packet; it only rewrites the destination MAC address of the data frame to RealServer_1's MAC address, then sends the frame out on the LAN.
  • 3. RealServer_1 receives the frame and, after stripping the Ethernet header, finds that the destination IP matches the local machine (each RealServer has the VIP bound in advance), so it processes the request, re-encapsulates the reply, and sends it onto the LAN addressed to the client.
  • 4. The client receives the reply and sees a normal response; it has no way of knowing which real server handled the request.
    Note: if the client is on a different network segment, the reply reaches it via the router and the Internet.
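Step 2 above is the defining trick of DR mode: only the frame's destination MAC address changes, while the IP header still carries the VIP. A toy shell sketch of that rewrite (all MAC labels here are made up for illustration):

```shell
# Model a frame as "dst_mac=... ip_dst=... ip_src=..." fields.
# The Director "forwards" by rewriting ONLY the destination MAC;
# the IP header (destination = VIP 192.168.1.200) is untouched.
frame="dst_mac=MAC_DIRECTOR ip_dst=192.168.1.200 ip_src=192.168.2.14"
forwarded=$(echo "$frame" | sed 's/dst_mac=MAC_DIRECTOR/dst_mac=MAC_RS1/')
echo "$forwarded"
# -> dst_mac=MAC_RS1 ip_dst=192.168.1.200 ip_src=192.168.2.14
```

This is why the real servers must have the VIP bound locally: the packet arrives still addressed to 192.168.1.200.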

3. ARP problem in LVS-DR

1. Problem

  • Configuring the same IP address (the VIP) on multiple machines in one LAN inevitably causes confusion in the ARP communication between them.
  • When an ARP broadcast for the VIP reaches the LVS-DR cluster, the load balancer and the node servers are all on the same network, so every one of them receives the broadcast (and could answer it).

1.1 How to solve the problem

Only the front-end load balancer should respond; the other node servers must not respond to ARP broadcasts for the VIP.

1.2 Solution

  • Configure the node servers so that they do not respond to ARP requests for the VIP (virtual IP)
  • Use the virtual interface lo:0 to carry the VIP
  • Set the kernel parameter arp_ignore=1, so the system only answers ARP requests whose target IP is configured on the receiving interface

2. New issues

  • The reply from a node server is forwarded by the router; to re-encapsulate the reply frame, the node server must first obtain the router's MAC address via ARP
  • When sending that ARP request, Linux by default uses the source IP of the IP packet (i.e. the VIP) as the source IP of the ARP request, rather than the IP address of the sending interface
  • When the router receives this ARP request, it updates its ARP table: the entry that mapped the VIP to the load balancer's MAC address is overwritten with the node server's MAC address
  • Following that entry, the router then sends new request packets straight to the node server, so the VIP on the load balancer effectively stops working

2.1 Solution

On the node servers, set the kernel parameter arp_announce=2, so the system does not use the IP packet's source address as the source address of ARP requests, but instead uses the IP address of the sending interface.
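The echo-based settings used in the node scripts later in this article only last until reboot. A persistent equivalent, assuming the usual /etc/sysctl.conf mechanism, would be a fragment like this on each node server, applied with `sysctl -p`:

```
# /etc/sysctl.conf fragment (hypothetical persistent form of the same settings)
net.ipv4.conf.lo.arp_ignore = 1      # answer ARP only for IPs on the receiving interface
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2    # use the sending interface's IP in ARP requests
net.ipv4.conf.all.arp_announce = 2
```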

2. LVS-DR deployment process

1. Basic equipment configuration architecture

One scheduler: lvs
IP: 192.168.1.10 (internal)
    192.168.2.10 (external)

Two Web servers:
IP: 192.168.1.11 (first)
IP: 192.168.1.12 (second)

One NFS shared-storage server:
IP: 192.168.1.13

One client, used for testing and verification:
IP: 192.168.2.14 (external)

2. Operation process

1. Check whether the NFS software (nfs-utils, rpcbind) is installed on the nfs storage server

[root@nfs ~]# rpm -qa | grep nfs
[root@nfs ~]# rpm -qa | grep rpcbind

2. Configure shared directories and create web pages

[root@nfs ~]# mkdir /web1
[root@nfs ~]# mkdir /web2
[root@nfs ~]# echo "<h1>This is Web1</h1>" > /web1/index.html
[root@nfs ~]# echo "<h1>This is Web2</h1>" > /web2/index.html

3. Edit configuration

[root@nfs ~]# vi /etc/exports

Add (format: directory  allowed-client  options):
/web1 192.168.1.11/32(ro)
/web2 192.168.1.12/32(ro)
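A classic /etc/exports pitfall: a space before the parenthesis (e.g. `/web1 192.168.1.11/32 (ro)`) silently exports to the world with those options. A small hypothetical helper (not part of NFS) to sanity-check the line format:

```shell
# A line must look like "<dir> <client>(<options>)" with NO space before "(".
check_export_line() {
  echo "$1" | grep -Eq '^/[^ ]+ [^ ]+\([a-zA-Z0-9_,=]+\)$'
}
check_export_line '/web1 192.168.1.11/32(ro)'  && echo "ok"
check_export_line '/web1 192.168.1.11/32 (ro)' || echo "bad: space before ("
```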

4. Start the services

[root@nfs ~]# systemctl start nfs      # start the NFS service so the web nodes can access the shares
[root@nfs ~]# systemctl start rpcbind  # start the RPC port-mapping service
[root@nfs ~]# showmount -e             # check what is being shared
Export list for nfs:
/web2 (everyone)
/web1 (everyone)
Note: the shares are allowed to all users


1. Install httpd software on web1 and web2

[root@web1 ~]# yum -y install httpd
[root@web2 ~]# yum -y install httpd

2. Mount the shared directories from the storage server

On web1:

[root@web1 ~]# mount 192.168.1.13:/web1 /var/www/html/
[root@web1 ~]# df -Th
[root@web1 ~]# systemctl start httpd   # start the web service on this node
[root@web1 ~]# curl http://localhost   # test the node's own page


On web2:

[root@web2 ~]# mount 192.168.1.13:/web2 /var/www/html/
[root@web2 ~]# df -Th
[root@web2 ~]# systemctl start httpd   # start the web service on this node
[root@web2 ~]# curl http://localhost   # test the node's own page
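The manual mounts above disappear after a reboot. To make them persistent, an /etc/fstab entry (hypothetical; shown for web2, with web1 analogous using /web1) could be added:

```
# /etc/fstab on web2 — mount the NFS share at boot, after the network is up
192.168.1.13:/web2  /var/www/html  nfs  defaults,_netdev  0 0
```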


3. Edit the virtual-interface (VIP) configuration script

On web1:

[root@web1 ~]# vi web1.sh
[root@web1 ~]# chmod +x web1.sh 
[root@web1 ~]# ./web1.sh 
[root@web1 ~]# ifconfig    # check the virtual interface
[root@web1 ~]# route -n    # check the added route

This creates a virtual interface whose address is in the same network segment as ens33.
Contents of web1.sh:
#!/bin/bash
# lvs web1
ifconfig lo:0 192.168.1.200 broadcast 192.168.1.200 netmask 255.255.255.255 up
route add -host 192.168.1.200 dev lo:0      # add a host route for the lo:0 virtual interface
echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
sysctl -p &> /dev/null    # apply the settings without printing output


web2 same as above

[root@web2 ~]# vi web2.sh
[root@web2 ~]# chmod +x web2.sh 
[root@web2 ~]# ./web2.sh 
[root@web2 ~]# ifconfig    # check the virtual interface
[root@web2 ~]# route -n    # check the added route

This creates a virtual interface whose address is in the same network segment as ens33.
Contents of web2.sh:
#!/bin/bash
# lvs web2
ifconfig lo:0 192.168.1.200 broadcast 192.168.1.200 netmask 255.255.255.255 up
route add -host 192.168.1.200 dev lo:0      # add a host route for the lo:0 virtual interface
echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
sysctl -p &> /dev/null    # apply the settings without printing output


1. Determine the kernel's support for LVS on the lvs scheduler

[root@lvs ~]# modprobe ip_vs        # manually load the ip_vs kernel module
[root@lvs ~]# cat /proc/net/ip_vs   # check basic information (version, virtual server table)
IP Virtual Server version 1.2.1 (size=4096)    # virtual server version information
Prot LocalAddress:Port Scheduler Flags
 -> RemoteAddress:Port Forward Weight ActiveConn InActConn
[root@lvs ~]# yum -y install ipvsadm

2. Configure the scheduler script

[root@lvs ~]# vi lvs.sh
[root@lvs ~]# chmod +x lvs.sh
[root@lvs ~]# ./lvs.sh
[root@lvs ~]# systemctl stop firewalld
[root@lvs ~]# setenforce 0

Contents of lvs.sh:
#!/bin/bash
# lvs
ifconfig ens33:0 192.168.1.200 broadcast 192.168.1.200 netmask 255.255.255.255 up   # virtual interface carrying the VIP
route add -host 192.168.1.200 dev ens33:0    # add a host route for the VIP
ipvsadm -C   # clear all records in the kernel virtual server table
ipvsadm -A -t 192.168.1.200:80 -s rr   # create the virtual server (round-robin scheduling)
ipvsadm -a -t 192.168.1.200:80 -r 192.168.1.11:80 -g   # add a real server (-g: DR mode)
ipvsadm -a -t 192.168.1.200:80 -r 192.168.1.12:80 -g
ipvsadm -Ln   # show node status; "-n" displays addresses and ports numerically
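Note that the ipvsadm rules created by lvs.sh live only in the kernel and are lost on reboot. On CentOS 7 the ipvsadm package can restore rules from /etc/sysconfig/ipvsadm at boot; a hypothetical saved rule set in `ipvsadm-save -n` format would look like:

```
# /etc/sysconfig/ipvsadm — restored by "systemctl enable --now ipvsadm"
-A -t 192.168.1.200:80 -s rr
-a -t 192.168.1.200:80 -r 192.168.1.11:80 -g -w 1
-a -t 192.168.1.200:80 -r 192.168.1.12:80 -g -w 1
```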


Test

From the client, visit http://192.168.1.200/ to verify that the VIP answers.
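Because the virtual server was created with `-s rr` (round-robin), repeated requests should alternate between web1 and web2. A toy sketch of the expected alternation (the request numbering is made up; a real check would repeatedly `curl http://192.168.1.200/` from the client and watch the page switch):

```shell
# Round-robin: successive requests go to the real servers in turn.
servers="192.168.1.11 192.168.1.12"
for req in 1 2 3 4; do
  pick=$(( (req - 1) % 2 + 1 ))
  rs=$(echo "$servers" | cut -d' ' -f"$pick")
  echo "request $req -> $rs"
done
# -> request 1 -> 192.168.1.11
#    request 2 -> 192.168.1.12
#    request 3 -> 192.168.1.11
#    request 4 -> 192.168.1.12
```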
3. Check the scheduling situation

[root@lvs ~]# ipvsadm -Lnc
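`ipvsadm -Lnc` prints the connection table; tallying the destination column shows how connections are spread across the real servers. Below, the output is a hypothetical sample (expire/state values will differ on a live system):

```shell
# Sample "ipvsadm -Lnc" output, tallied per real-server destination.
sample='IPVS connection entries
pro expire state       source             virtual            destination
TCP 01:57  FIN_WAIT    192.168.2.14:49152 192.168.1.200:80   192.168.1.11:80
TCP 01:58  FIN_WAIT    192.168.2.14:49154 192.168.1.200:80   192.168.1.12:80
TCP 14:59  ESTABLISHED 192.168.2.14:49156 192.168.1.200:80   192.168.1.11:80'
echo "$sample" | awk 'NR > 2 { split($6, d, ":"); count[d[1]]++ }
                      END { for (ip in count) print ip, count[ip] }'
```

With the sample above, the tally shows two connections on 192.168.1.11 and one on 192.168.1.12.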



Origin blog.csdn.net/F2001523/article/details/110943700