Building an LVS + Keepalived + Nginx High-Availability Solution Based on DR Mode

Large sites generally run server clusters with a load balancer distributing the traffic. This spreads a large volume of requests across the servers and improves the site's response speed. Of course, to avoid a single point of failure, a hot-standby solution is also needed. This article demonstrates load balancing with LVS, uses Keepalived to keep it available, and builds an Nginx cluster on top of LVS in DR mode.

1. Environment preparation

The software and version information is as follows:

Software      Version
CentOS        Linux release 7.3.1611 (Core)
Nginx         1.16.0
LVS           ipvsadm-1.27-7.el7.x86_64
Keepalived    keepalived.x86_64 0:1.3.5-8.el7_6

Node allocation and roles are as follows:

Node               Role
192.168.208.154    LVS master
192.168.208.155    LVS slave
192.168.208.150    nginx1
192.168.208.151    nginx2

Pay particular attention to the VIP address used here: 192.168.208.100. The VIP is the virtual IP address, i.e. the IP address that external requests are sent to.

2. Deployment architecture

Based on the environment above, the deployment architecture is as follows:

Note:

Since DR mode is used, when a user request arrives at the VIP, LVS forwards it to one of the Nginx servers (the real servers) according to the configured load-balancing algorithm, and after handling the request that Nginx server returns the result directly to the user. Each Nginx server therefore needs the loopback IP configured, i.e. the VIP must be set on its lo interface.
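Once everything is set up, one way to sanity-check this ARP behavior (a sketch, not from the original article; it assumes the arping tool from iputils and the interface name ens33) is to confirm that only the active LVS director answers ARP requests for the VIP:

# Run from another machine on the 192.168.208.0/24 network;
# the reply should come from the LVS director's MAC, never from an Nginx server
arping -I ens33 -c 3 192.168.208.100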

3. Deployment

(1) First, install ipvsadm and Keepalived on both the LVS master and LVS slave nodes:

yum install ipvsadm
yum install keepalived
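To confirm the packages installed correctly (an optional check, not in the original article), query their versions:

rpm -q ipvsadm keepalived
keepalived -v    # keepalived also prints its version directly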

(2) Install Nginx on the nginx1 and nginx2 nodes:

# Add the nginx yum repository
rpm -ivh http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm

# Install nginx
yum install nginx
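The article accesses both servers over HTTP afterwards, so Nginx is presumably started and enabled on each node:

systemctl start nginx
systemctl enable nginx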

After installing Nginx, edit the default page and add identifying text so we can tell which Nginx instance actually handled a request:

vi /usr/share/nginx/html/index.html

For nginx1 add the text "150", and for nginx2 add the text "151":
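For example, a one-line way to append such a marker (the exact markup is a choice, not from the original) on nginx1:

# Use "151" instead on nginx2
echo "<h1>150</h1>" >> /usr/share/nginx/html/index.html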

Accessing nginx1 directly shows:

Accessing nginx2 directly shows:

(3) Configure Keepalived on the LVS master and LVS slave nodes:

First configure the LVS master node. Edit the following file:

vi /etc/keepalived/keepalived.conf

with the following content:

! Configuration File for keepalived

global_defs {
# The email notification settings are all commented out here
#   notification_email {
#     [email protected]
#     [email protected]
#     [email protected]
#   }
#   notification_email_from [email protected]
#   smtp_server 192.168.200.1
#   smtp_connect_timeout 30
# router_id identifies this keepalived instance; it is best to give each instance a different value
   router_id LVS_DEVEL
#   vrrp_skip_check_adv_addr
#   vrrp_strict
#   vrrp_garp_interval 0
#   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    # MASTER marks the master node; the backup node uses BACKUP
    state MASTER
    # Network interface name; this may differ between servers
    interface ens33
    # Virtual router ID; must be identical on the MASTER and BACKUP nodes
    virtual_router_id 51
    # Priority; the MASTER's value must be greater than the BACKUP's
    priority 100
    # Advertisement interval between MASTER and BACKUP, in seconds
    advert_int 1
    # The real IP of this LVS node
    mcast_src_ip 192.168.208.154
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # Virtual IP address
    virtual_ipaddress {
        192.168.208.100
    }
}

virtual_server 192.168.208.100 80 {
    # Health check interval, in seconds
    delay_loop 6
    # Load-balancing algorithm; rr is round robin
    lb_algo rr
    # Use DR mode
    lb_kind DR
    # Subnet mask of the virtual address
    nat_mask 255.255.255.0
    # Session persistence timeout, in seconds
    persistence_timeout 50
    protocol TCP

    # Real server configuration
    real_server 192.168.208.150 80 {
        # Weight of this node
        weight 1
        TCP_CHECK {
            # Connection timeout
            connect_timeout 3
            # Number of retries
            nb_get_retry 3
            # Delay between retries
            delay_before_retry 3
        }
    }

    real_server 192.168.208.151 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

Based on the configuration above, the LVS slave's configuration is as follows:

! Configuration File for keepalived

global_defs {
#   notification_email {
#     [email protected]
#     [email protected]
#     [email protected]
#   }
#   notification_email_from [email protected]
#   smtp_server 192.168.200.1
#   smtp_connect_timeout 30
   router_id LVS_DEVEL_SLAVE
#   vrrp_skip_check_adv_addr
#   vrrp_strict
#   vrrp_garp_interval 0
#   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 99
    advert_int 1
mcast_src_ip 192.168.208.155
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.208.100
    }
}

virtual_server 192.168.208.100 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP

    real_server 192.168.208.150 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.208.151 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

Start keepalived on both the LVS master and slave, and enable it to start on boot:

systemctl start keepalived
systemctl enable keepalived
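To verify that keepalived started cleanly (an optional check, not in the original article), inspect its status and logs:

systemctl status keepalived
# VRRP state transitions such as "Entering MASTER STATE" show up in the journal
journalctl -u keepalived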

Now check the IP addresses on the LVS master node:

ip a

The result is:

This shows that the VIP has been created on the ens33 interface of the master node.

Check the forwarding table on the LVS master node:

ipvsadm -Ln

The result is:

This matches what we expect.
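For reference, given the configuration above the forwarding table should look roughly like this (an illustrative sketch of the ipvsadm -Ln output format, not a captured screenshot):

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.208.100:80 rr persistent 50
  -> 192.168.208.150:80           Route   1      0          0
  -> 192.168.208.151:80           Route   1      0          0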

(4) Disable the firewall on the LVS master and slave nodes:

systemctl stop firewalld
systemctl disable firewalld

(5) Configure the loopback IP on the Nginx servers:

Since the loopback IP settings are lost when a server reboots, put them in a script, lvs-rs.sh, with the following content:

#!/bin/bash
# Suppress ARP for the VIP so that only the LVS director answers ARP requests
echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce

# Enable IP forwarding
sysctl -w net.ipv4.ip_forward=1
# Bind the VIP to the loopback interface with a /32 mask and route it locally
ifconfig lo:0 192.168.208.100 broadcast 192.168.208.100 netmask 255.255.255.255 up
route add -host 192.168.208.100 dev lo:0

After running the script, check the IP information; the VIP should now be visible on the lo interface:
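To apply these settings automatically after a reboot, one option (a sketch; the script path /root/lvs-rs.sh is a choice, not from the original) is to hook the script into rc.local:

chmod +x /root/lvs-rs.sh
echo "/root/lvs-rs.sh" >> /etc/rc.d/rc.local
# On CentOS 7, rc.local itself must be executable for it to run at boot
chmod +x /etc/rc.d/rc.local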

4. Testing

Open Chrome and IE and enter http://192.168.208.100 in both; the results are as follows:

The results are as expected.
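The same check can be done from the command line (a sketch; note that persistence_timeout 50 pins each client IP to one real server for 50 seconds, so repeated requests from a single machine will hit the same Nginx instance until the timeout expires):

curl http://192.168.208.100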

Now test failover: shut down the LVS master node, then check the IP and routing on the LVS slave node:

ip a

The result is:

The VIP has now floated over to the LVS slave node.

ipvsadm -Ln

The result is:

The LVS slave is now performing the address forwarding.

