LVS Load Balancing Cluster (2): DR Mode Setup and keepalived + LVS

4. LVS DR mode construction

Why DR mode rather than NAT or IP TUNNEL?

In production, DR mode is used most often; NAT mode is less common because of the director bandwidth bottleneck we discussed earlier.

If there are fewer than about 10 nodes, traffic is modest, and the hardware and network environment permit, NAT mode is still a reasonable choice because it saves public IP addresses, which are relatively expensive.

Another option is to build LVS entirely on the intranet: every server uses an intranet IP, and one port of a single public IP is mapped to port 80 of the intranet VIP, saving public IP resources.
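As a rough sketch of that port mapping, assuming the public IP lives on a separate gateway box (the public address 203.0.113.10 here is a placeholder for illustration, not from this setup):

```shell
# On the gateway that owns the public IP: forward public port 80 to the intranet VIP.
# 203.0.113.10 is a placeholder public IP; 192.168.2.200 is the intranet VIP.
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A PREROUTING -d 203.0.113.10 -p tcp --dport 80 \
    -j DNAT --to-destination 192.168.2.200:80
# SNAT the forwarded traffic so replies return through this gateway
iptables -t nat -A POSTROUTING -d 192.168.2.200 -p tcp --dport 80 -j MASQUERADE
```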

4.1 Preparations

Three simulated servers

Host name   IP address       Role
yt-01       192.168.2.131    Director
yt-02       192.168.2.132    Real Server 1
yt-03       192.168.2.133    Real Server 2
(none)      192.168.2.200    VIP

4.2 Make sure ipvsadm is installed on every machine

[root@yt-01 ~]# yum install -y ipvsadm
[root@yt-02 ~]# yum install -y ipvsadm
[root@yt-03 ~]# yum install -y ipvsadm

4.3 Writing scripts on Director

[root@yt-01 ~]# vim /usr/local/sbin/lvs_dr.sh

#! /bin/bash
echo 1 > /proc/sys/net/ipv4/ip_forward
ipv=/usr/sbin/ipvsadm
vip=192.168.2.200
rs1=192.168.2.132
rs2=192.168.2.133
# Note: adjust the NIC name (ens33) to match your system
ifdown ens33
ifup ens33
ifconfig ens33:2 $vip broadcast $vip netmask 255.255.255.255 up
route add -host $vip dev ens33:2
$ipv -C
$ipv -A -t $vip:80 -s wrr
$ipv -a -t $vip:80 -r $rs1:80 -g -w 1
$ipv -a -t $vip:80 -r $rs2:80 -g -w 1

4.4 Run the lvs_dr script on the DR

[root@yt-01 ~]# sh /usr/local/sbin/lvs_dr.sh
Device 'ens33' successfully disconnected.
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/3)

4.5 Scripts are also written on each Real Server

[root@yt-02 ~]# vim /usr/local/sbin/lvs_rs.sh

#! /bin/bash
vip=192.168.2.200
# Bind the VIP to lo so the RS can return responses directly to the client
ifdown lo
ifup lo
ifconfig lo:0 $vip broadcast $vip netmask 255.255.255.255 up
route add -host $vip lo:0
# Adjust the ARP kernel parameters below so the RS does not answer ARP requests
# for the VIP and advertises the correct MAC address to clients
echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
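The echo commands above take effect immediately but do not survive a reboot; one way to persist them (a sketch, assuming the standard /etc/sysctl.conf layout on CentOS 7) is:

```shell
# Persist the ARP settings so they are restored at boot
cat >> /etc/sysctl.conf <<'EOF'
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
EOF
sysctl -p    # apply now without rebooting
```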

4.6 Running scripts on each Real Server

[root@yt-02 ~]# sh /usr/local/sbin/lvs_rs.sh
[root@yt-03 ~]# sh /usr/local/sbin/lvs_rs.sh


# Check route -n on each real server
[root@yt-02 ~]# route -n
Kernel IP routing table
Destination    Gateway        Genmask        Flags Metric Ref    Use Iface
0.0.0.0        192.168.2.2    0.0.0.0        UG    100    0        0 ens33
192.168.2.0    0.0.0.0        255.255.255.0  U    100    0        0 ens33
192.168.2.200  0.0.0.0        255.255.255.255 UH    0      0        0 lo

# Check whether the VIP is bound to the lo interface
[root@yt-02 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
      valid_lft forever preferred_lft forever
    inet 192.168.2.200/32 brd 192.168.2.200 scope global lo:0

4.7 Testing

  • Be sure to stop the firewall on all machines before testing
# systemctl stop firewalld
# systemctl disable firewalld
  • Modify the nginx homepage on the two RSs so their responses can be distinguished
[root@yt-02 ~]# echo "rs1rs1" >/usr/share/nginx/html/index.html
[root@yt-03 ~]# echo "rs2rs2" >/usr/share/nginx/html/index.html
  • Test the VIP in a browser, refreshing several times, then check the connection counts on the director
[root@yt-01 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port          Forward Weight ActiveConn InActConn
TCP  192.168.2.200:80 wrr
  -> 192.168.2.132:80            Route  1      1          9        
  -> 192.168.2.133:80            Route  1      1          8
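Browsers may reuse connections and mask the scheduling; querying the VIP from another machine on the LAN with curl shows the alternation more clearly (a sketch; with wrr and equal weights the two pages should appear in turn):

```shell
# Hit the VIP repeatedly; responses should alternate between rs1rs1 and rs2rs2
for i in $(seq 1 6); do
    curl -s http://192.168.2.200/
done
```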

5. keepalived + LVS

LVS has one fatal weakness: every request passes through the Director, so if the Director goes down, the whole service stops. Adding keepalived here gives the Director high availability and solves this problem neatly.

A complete architecture needs two servers in the director role, each running keepalived, to achieve high availability. Since keepalived itself also provides the LVS load-balancing function, this experiment installs keepalived on only one director.

5.1 Preparations

Host name   IP address       Role
yt-01       192.168.2.131    Director, with keepalived installed
yt-02       192.168.2.132    Real Server 1
yt-03       192.168.2.133    Real Server 2
(none)      192.168.2.200    VIP

5.2 Configuring the director

[root@yt-01 ~]# yum install -y keepalived

# Customize the keepalived configuration file
[root@yt-01 ~]# vim /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
    # on the backup server this is BACKUP
    state MASTER
    # NIC the VIP is bound to is ens33; yours may differ, change it if necessary
    interface ens33
    virtual_router_id 51
    # on the backup server this is 90
    priority 100
    # nopreempt: set only on the higher-priority machine, so that if it goes down
    # and later comes back it does not take the VIP back
    nopreempt    ## omit this line on the backup server
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.2.200
    }
}
virtual_server 192.168.2.200 80 {
    # query real server state every 10 seconds
    delay_loop 10
    # LVS scheduling algorithm
    lb_algo wlc
    # forwarding mode (DR)
    lb_kind DR
    # keep connections from the same client IP on the same real server
    # for this many seconds (0 disables persistence)
    persistence_timeout 0
    # use TCP to check real server state
    protocol TCP
    real_server 192.168.2.132 80 {
        # weight
        weight 100
        TCP_CHECK {
        # 10-second no-response timeout
        connect_timeout 10
        nb_get_retry 3
        delay_before_retry 3
        connect_port 80
        }
    }
    real_server 192.168.2.133 80 {
        weight 100
        TCP_CHECK {
        connect_timeout 10
        nb_get_retry 3
        delay_before_retry 3
        connect_port 80
        }
    }
}
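For reference, on a second (backup) director the file would be identical except for the lines called out in the comments above, roughly:

```
state BACKUP     # instead of MASTER
priority 90      # lower than the master's 100
                 # and no nopreempt line
```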

# Start the keepalived service
[root@yt-01 ~]# systemctl start keepalived

Check the NIC information:
[root@yt-01 ~]# ip add
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:be:0e:17 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.131/24 brd 192.168.2.255 scope global ens33
      valid_lft forever preferred_lft forever
    inet 192.168.2.200/32 scope global ens33
      valid_lft forever preferred_lft forever
    inet6 fe80::592f:39cc:1b50:1d07/64 scope link 
      valid_lft forever preferred_lft forever
# The virtual IP (VIP) is now on the ens33 NIC

# Check the ipvsadm rules
[root@yt-01 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port          Forward Weight ActiveConn InActConn
TCP  192.168.2.200:80 wlc
  -> 192.168.2.132:80            Route  100    0          0        
  -> 192.168.2.133:80            Route  100    0          0        

5.3 Configure Real Server

# Edit the RS script
[root@yt-02 ~]# vim /usr/local/sbin/lvs_rs.sh

#!/bin/bash
vip=192.168.2.200
# Bind the VIP to lo so the RS can return responses directly to the client
ifconfig lo:0 $vip broadcast $vip netmask 255.255.255.255 up
route add -host $vip lo:0
# Adjust the ARP kernel parameters below so the RS does not answer ARP requests for the VIP
# Reference: www.cnblogs.com/lgfeng/archive/2012/10/16/2726308.html
echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce

# Run the script
[root@yt-02 ~]# sh /usr/local/sbin/lvs_rs.sh

Do the same on yt-03.

Configuration complete

5.4 Testing

Access the VIP 192.168.2.200 in a browser and refresh the page: the responses alternate between RS1 and RS2. Stop either RS and the page remains available without interruption.

5.5 Function of Keepalived+LVS

  • Keepalived provides high availability so the service does not go down when the director in LVS fails (by running multiple directors)
  • With LVS alone, when a real server goes down the director keeps sending requests to it; with keepalived, the failed real server is automatically removed from the RS list
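The second point is easy to observe: stop the web service on one RS and watch keepalived's TCP_CHECK remove it from the ipvsadm table (commands only; output omitted):

```shell
# On yt-02: make the health check fail
systemctl stop nginx

# On the director: after delay_loop plus the retries, yt-02's entry disappears
ipvsadm -ln

# On yt-02: restore the service; keepalived re-adds it automatically
systemctl start nginx
```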
