Build an LVS (DR mode) + Keepalived high-availability cluster, you can do it now!!!

Keepalived was designed from the start for building highly available LVS load-balancing clusters: it calls the ipvsadm tool to create virtual servers and manage the server pool, rather than serving only as a hot-standby tool. Building an LVS cluster with Keepalived is far easier, and its main advantages are hot-standby failover for the LVS load balancer, which increases availability, and health checks on the server pool nodes, which automatically remove failed nodes and rejoin them once they recover.

An LVS (DR mode) + Keepalived cluster comprises at least two hot-standby load schedulers and two or more server nodes. This experiment starts from an LVS DR-mode cluster, adds a second load scheduler, and uses Keepalived to implement a master/backup hot-standby pair, producing an LVS site platform with both load-balancing and high-availability capabilities. The experiment topology is shown in the figure:
[Figure: LVS (DR mode) + Keepalived cluster topology]
Because of the limits of the experimental environment, only two Web servers are built, and the NFS shared storage sits on the same network segment as the schedulers and Web nodes, which simplifies the procedure. In a production environment, the shared storage normally must not share a segment with the schedulers and Web nodes; it should be isolated behind switches to improve read/write performance to the shared storage. In a real environment, the schedulers and Web nodes each need at least two network cards so that they can also communicate with the shared storage. The experimental environment simplifies this a little; please keep that in mind!

When building an LVS cluster with Keepalived, the ipvsadm management tool is still needed (to check the load-scheduling results), but most of the work is done automatically by Keepalived, and ipvsadm does not have to be run by hand except to view and monitor the cluster. As for NFS shared storage, setting it up is particularly simple! This experiment does not build it; readers unfamiliar with NFS can refer to the blog post: Build an LVS load-balancing cluster in NAT mode, you can do it now!!!
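Since NFS setup really is short, here is a minimal sketch of the server side for reference; the shared directory /opt/wwwroot and the export options are assumptions for illustration, not steps taken from the original post:

[root@localhost ~]# yum -y install nfs-utils rpcbind
// install the NFS server packages
[root@localhost ~]# mkdir -p /opt/wwwroot
[root@localhost ~]# vim /etc/exports
/opt/wwwroot 192.168.1.0/24(rw,sync,no_root_squash)
// export the shared directory to the 192.168.1.0/24 segment
[root@localhost ~]# systemctl start rpcbind nfs
[root@localhost ~]# showmount -e localhost
// verify that the directory is exported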

Experimental Procedure:

I. Configure the master scheduler

1. Global and hot-standby configuration

First, implement the master/backup hot-standby function between the schedulers; the drifting VIP address serves as the LVS cluster address.

[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# setenforce 0
// disable the firewall and SELinux
[root@localhost ~]# yum -y install keepalived ipvsadm
// install the required service and tools
[root@localhost ~]# vim /etc/keepalived/keepalived.conf 
// edit the Keepalived configuration file
  // many unnecessary items (such as the mail settings) have been removed from the file to make it easier to understand
global_defs {                                              
   router_id LVS_DEVEL1                     // name of the master scheduler
}

vrrp_instance VI_1 {
    state MASTER                               // hot-standby state of the master scheduler
    interface ens33
    virtual_router_id 1
    priority 100                                     // priority of the master scheduler
    advert_int 1
    authentication {                              // authentication info shared by master and backup
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {                        // cluster VIP address (more than one may be listed)
        192.168.1.254
    }
}

For details of the Keepalived configuration file, see the blog post: Detailed hot standby with Keepalived.

2. Web server pool configuration in the Keepalived file

Add "Virtual_server VIP ports {...}" in the spare area configuration based on Keepalived configure a virtual server, the load includes a scheduler algorithm, the cluster mode, setting the parameters of health check intervals, the real server address.

[root@localhost ~]# vim /etc/keepalived/keepalived.conf 
                                           ……………………………………
// the global and hot-standby parts of the Keepalived configuration are omitted here
virtual_server 192.168.1.254 80 {                                 // virtual server address (VIP) and port
    delay_loop 15                                                           // health-check interval (seconds)
    lb_algo rr                                                                  // round-robin (rr) scheduling algorithm
    lb_kind DR                                                               // direct routing (DR) cluster operating mode
    !  persistence_timeout 50                                        
        // connection persistence time (seconds); the "!" means the setting is disabled, which is recommended here so the scheduling effect can be observed
    protocol TCP                                                            // the application service uses the TCP protocol

    real_server 192.168.1.3 80 {                                  // address of the first Web node
        weight 1                                                             // weight of the node
        TCP_CHECK {                                                  // health-check method
            connect_port 80                                            // target port to check
            connect_timeout 3                                        // connection timeout (seconds)
            nb_get_retry 3                                              // number of retries
            delay_before_retry 4                                    // retry interval (seconds)
        }
    }

    real_server 192.168.1.4 80 {                              // address and port of the second Web node
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 4
        }
    }
        // if there are more Web node servers, just add them in the same way!
}
// mind the many "{}" braces in the configuration file; edit carefully
[root@localhost ~]# systemctl start keepalived
// start the Keepalived service
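Before moving on, it is worth confirming that the master really holds the VIP and that Keepalived has generated the LVS rules; a quick check might look like this:

[root@localhost ~]# ip addr show ens33
// the cluster VIP 192.168.1.254 should appear on ens33 while this node is the master
[root@localhost ~]# ipvsadm -ln
// the virtual server 192.168.1.254:80 and both real servers should be listed
[root@localhost ~]# systemctl enable keepalived
// optionally start Keepalived automatically at boot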

II. Configure the backup scheduler

The backup scheduler's configuration is substantially the same as the master's, comprising the global, hot-standby, and server pool sections; only three items need changing: the router name (router_id), the hot-standby state (state), and the priority (priority).
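Once Keepalived is installed on the backup, one convenient option (an assumption of this write-up, not a step from the original post) is to copy the master's file across and edit just those three items; the backup scheduler's IP 192.168.1.2 is assumed here:

[root@localhost ~]# scp /etc/keepalived/keepalived.conf root@192.168.1.2:/etc/keepalived/
// run on the master: copy the configuration to the backup scheduler
[root@localhost ~]# sed -i -e 's/LVS_DEVEL1/LVS_DEVEL2/' \
                           -e 's/state MASTER/state BACKUP/' \
                           -e 's/priority 100/priority 99/' /etc/keepalived/keepalived.conf
// run on the backup: adjust router_id, state, and priority

The full procedure on the backup is shown below for completeness.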

[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# setenforce 0
// disable the firewall and SELinux
[root@localhost ~]# yum -y install keepalived ipvsadm
// install the required services and tools
[root@localhost ~]# vim /etc/keepalived/keepalived.conf
// edit the Keepalived configuration file
global_defs {
   router_id LVS_DEVEL2                        // name of the backup scheduler
}

vrrp_instance VI_1 {
    state BACKUP                                    // hot-standby state (backup)
    interface ens33
    virtual_router_id 1
    priority 99                                           // priority of the backup scheduler
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.254
    }
}

virtual_server 192.168.1.254 80 {
    delay_loop 15
    lb_algo rr 
    lb_kind DR 
    !  persistence_timeout 50
    protocol TCP

    real_server 192.168.1.3 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 4
        }   
    }   

    real_server 192.168.1.4 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 4
        }   
    }   
}   

Except for the items noted above, everything else must be identical to the master scheduler!
[root@localhost ~]# systemctl start keepalived
// start the Keepalived service

III. Configure the Web server nodes

The server node configuration depends on the cluster operating mode chosen (DR or NAT). This experiment uses DR mode as the example: besides adjusting the ARP response parameters under /proc, the VIP address must be configured on a virtual loopback interface (lo:0) and a local route to the VIP must be added. For how to build the Web server nodes in DR mode, see the blog post: Build an LVS load-balancing cluster in DR mode, you can do it now!!!

The steps are outlined below:

[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# setenforce 0
// disable the firewall and SELinux
[root@localhost ~]# yum -y install httpd
// install the httpd service
[root@localhost ~]# echo qqqqq > /var/www/html/index.html
[root@localhost ~]# systemctl start httpd
// set the Web node's home page content and start the httpd service
[root@localhost ~]# vim /etc/sysctl.conf
// modify kernel parameters by adding the following lines
net.ipv4.conf.all.arp_ignore  =  1
net.ipv4.conf.all.arp_announce  =  2
net.ipv4.conf.default.arp_ignore  =  1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_ignore  =  1
net.ipv4.conf.lo.arp_announce  = 2
[root@localhost ~]# sysctl -p
// load the kernel parameters
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
[root@localhost ~]# cd /etc/sysconfig/network-scripts/
[root@localhost network-scripts]# cp ifcfg-lo ifcfg-lo:0
[root@localhost network-scripts]# vim ifcfg-lo:0
DEVICE=lo:0
IPADDR=192.168.1.254
NETMASK=255.255.255.255
ONBOOT=yes
NAME=loopback:0
[root@localhost network-scripts]# ifup lo:0
[root@localhost network-scripts]# ifconfig lo:0
lo:0: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 192.168.1.254  netmask 255.255.255.255
        loop  txqueuelen 1  (Local Loopback)
// add the virtual IP to the local loopback (lo) interface
[root@localhost ~]# route add -host 192.168.1.254 dev lo:0
// add a local route to the VIP
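Note that a route added with route add does not survive a reboot; a common way to make it persistent (an extra step assumed here, not part of the original procedure) is to append the command to /etc/rc.local:

[root@localhost ~]# echo "/sbin/route add -host 192.168.1.254 dev lo:0" >> /etc/rc.local
[root@localhost ~]# chmod +x /etc/rc.d/rc.local
// on CentOS 7, rc.local only runs at boot when it is executable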

No matter how many Web server nodes there are, the configuration is the same!
It is suggested that the two Web nodes' home pages differ, so the scheduling can be tested! In a real production environment, NFS shared storage would be set up (building NFS shared storage was mentioned at the beginning of this post) so that the Web nodes' home page content stays synchronized automatically.
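If the NFS shared storage is used, each Web node would mount the share over its document root; a minimal sketch, assuming the NFS server is at 192.168.1.5 and exports /opt/wwwroot (both are assumptions):

[root@localhost ~]# yum -y install nfs-utils
[root@localhost ~]# mount 192.168.1.5:/opt/wwwroot /var/www/html
// mount the shared directory over the Web document root
[root@localhost ~]# echo "192.168.1.5:/opt/wwwroot /var/www/html nfs defaults,_netdev 0 0" >> /etc/fstab
// optionally make the mount persistent across reboots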

IV. Test the LVS (DR mode) + Keepalived high-availability cluster

From a client browser, the Web content should be reachable through the VIP address of the Keepalived + LVS (DR mode) cluster (192.168.1.254). When either the master or the backup scheduler fails, the site remains accessible (refreshing or reopening the browser may be needed); and as long as two or more real Web servers in the pool are available, requests are load balanced across them (the load-balancing effect is essentially the same as with plain LVS)! Verify it for yourself!
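From a Linux client, the round-robin effect is easy to watch by requesting the VIP repeatedly; with different home pages on the two nodes, the responses should alternate:

[root@localhost ~]# for i in $(seq 1 4); do curl -s http://192.168.1.254/; done
// with rr scheduling and persistence disabled, the two home pages should alternate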

Failover between the master and backup schedulers can be tracked through the /var/log/messages log file. To see the load distribution, run the following command on the master scheduler:

[root@localhost ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.1.254:80 rr
  -> 192.168.1.3:80               Route   1      1          0         
  -> 192.168.1.4:80               Route   1      1          0   

This is why the ipvsadm tool was installed.
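Failover itself can be exercised by stopping Keepalived on the master and watching the backup take over; a quick sketch:

[root@localhost ~]# systemctl stop keepalived
// run on the master: the backup should claim the VIP within a few seconds
[root@localhost ~]# tail -f /var/log/messages
// run on the backup: look for the VRRP "Entering MASTER STATE" message
[root@localhost ~]# ip addr show ens33
// run on the backup: the VIP 192.168.1.254 should now appear here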
This ultimately verifies the robustness of the LVS (DR mode) + Keepalived high-availability load-balancing cluster!

Building NFS shared storage is recommended to keep the Web service nodes' content consistent (in a real environment this must be done)!

-------- End of this article, thanks for reading --------

Origin blog.51cto.com/14157628/2439181