Building an LVS + Keepalived High-Availability Web Service Cluster on CentOS 7

I. The LVS + Keepalived high-availability cluster

Keepalived was designed to build highly available LVS load-balancing clusters: it can call the ipvsadm tool to create virtual servers and manage the server pool, rather than serving only as a hot-standby tool. Building an LVS cluster with Keepalived is easier, and its main advantages are: hot-standby failover between LVS load balancers, which increases availability; and health checks on the server-pool nodes, which automatically remove failed nodes and re-add them once they recover.

A cluster built on LVS + Keepalived includes at least two hot-standby load schedulers and three or more server nodes. This article takes the DR-mode LVS cluster from an earlier post, adds a backup load balancer, and uses Keepalived to implement master/backup hot-standby schedulers, giving the site both the load-balancing and the high-availability capabilities of LVS.

Since this article builds on LVS, see the following posts for an overview of LVS and its configuration:

Overview of LVS load-balancing clusters on CentOS 7

Building a load-balancing cluster in address-translation (LVS-NAT) mode

Building a load-balancing cluster in direct-routing (LVS-DR) mode

1. Case environment

(figure: table of the case environment)

When building the LVS cluster with Keepalived, the ipvsadm management tool is still required, but most of the work is done automatically by Keepalived; ipvsadm rarely needs to be run manually (except for viewing and monitoring the cluster).

2. Environment notes

1) The two schedulers and the two web nodes share one network segment and communicate directly with external clients. For the security of the shared storage, the web servers and the storage node are usually planned onto a separate network segment, so each web node must have two or more network interface cards.

2) My resources here are limited, and for ease of configuration there are only two schedulers and two web nodes; that is enough when the volume of access requests is small. If the volume of requests is large, however, you should configure at least three schedulers and three web nodes each: with only two web nodes under heavy traffic, if one goes down, the lone survivor will certainly not withstand the surge of requests and will be overwhelmed.

3) Prepare the system image for installing the related services.

4) Configure firewall policies and all IP addresses other than the VIP yourself (here I simply turned the firewall off).

5) Keepalived automatically loads the ip_vs module; there is no need to load it manually.

3. Expected results

1) Clients access the cluster through the VIP and get the same page from multiple nodes.

2) If the master scheduler goes down, the cluster's VIP address automatically drifts to the backup scheduler, which then handles all scheduling. When the master scheduler comes back, the VIP automatically returns to it; the master resumes work and the backup returns to standby.

3) If a web node goes down, Keepalived's health-check function detects it and automatically removes the failed node from the node pool; once the node recovers, it is automatically re-added to the pool.
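The failover described in 2) comes from VRRP's election rule: among the schedulers that are alive, the one advertising the highest priority holds the VIP. As a toy illustration of that rule (not keepalived's actual code; the hostnames and priorities simply mirror the configuration used later in this article):

```shell
#!/bin/bash
# Toy sketch of VRRP's election rule: the live node with the highest
# priority wins the VIP. Each argument is a "name:priority" pair;
# the winner's name is printed.
elect_master() {
    printf '%s\n' "$@" | sort -t: -k2,2nr | head -n1 | cut -d: -f1
}

# Both schedulers alive: the master (priority 100) holds the VIP.
elect_master "centos04:100" "centos05:99"   # -> centos04
# Master down: only the backup is left, so it takes over.
elect_master "centos05:99"                  # -> centos05
```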

II. Configuring the LVS + Keepalived high-availability cluster

1. Deploy the first web server

[root@centos01 ~]# yum -y install httpd  <!--install the httpd service-->
[root@centos01 ~]# echo "www.benet.com" > /var/www/html/index.html   <!--create the site home page with test data-->
[root@centos01 ~]# systemctl start httpd   <!--start the httpd service-->
[root@centos01 ~]# systemctl enable httpd   <!--enable httpd at boot-->
[root@centos01 ~]# cp /etc/sysconfig/network-scripts/ifcfg-lo /etc/sysconfig/network-scripts/ifcfg-lo:0   <!--copy the lo interface config file to lo:0-->
[root@centos01 ~]# vim /etc/sysconfig/network-scripts/ifcfg-lo:0   <!--edit the lo:0 config file-->
DEVICE=lo:0   <!--change the device name-->
IPADDR=192.168.100.253   <!--configure the VIP address-->
NETMASK=255.255.255.255   <!--configure the netmask-->
ONBOOT=yes
<!--keep only the four lines above and delete any extra lines-->
[root@centos01 ~]# systemctl restart network   <!--restart the network service-->
[root@centos01 ~]# ifconfig    <!--verify the configuration took effect-->
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 488  bytes 39520 (38.5 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 488  bytes 39520 (38.5 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo:0: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 192.168.100.253  netmask 255.255.255.255
        loop  txqueuelen 1  (Local Loopback)
[root@centos01 ~]# vim /etc/sysctl.conf <!--adjust the web server's ARP responses-->
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
[root@centos01 ~]# sysctl -p   <!--reload the kernel parameters-->
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2

2. Deploy the second web server

[root@centos02 ~]# yum -y install httpd  <!--install the httpd service-->
[root@centos02 ~]# echo "www.accp.com" > /var/www/html/index.html   <!--create the site home page with test data-->
[root@centos02 ~]# systemctl start httpd  <!--start the httpd service-->
[root@centos02 ~]# systemctl enable httpd   <!--enable httpd at boot-->
[root@centos02 ~]# scp root@192.168.100.10:/etc/sysconfig/network-scripts/ifcfg-lo:0 /etc/sysconfig/network-scripts/  <!--copy the lo:0 config file from the first web server-->
The authenticity of host '192.168.100.10 (192.168.100.10)' can't be established.
ECDSA key fingerprint is SHA256:PUueT9fU9QbsyNB5NC5hbSXzaWxxQavBxXmfoknXl4I.
ECDSA key fingerprint is MD5:6d:f7:95:0e:51:1a:d8:9e:7b:b6:3f:58:51:51:4b:3b.
Are you sure you want to continue connecting (yes/no)? yes  <!--type yes-->
Warning: Permanently added '192.168.100.10' (ECDSA) to the list of known hosts.
root@192.168.100.10's password:   <!--enter the password-->
ifcfg-lo:0                                                          100%   70    53.3KB/s   00:00    
[root@centos02 ~]# scp root@192.168.100.10:/etc/sysctl.conf /etc/sysctl.conf   <!--copy the ARP kernel settings to the second web server-->
root@192.168.100.10's password:    <!--enter the password-->
sysctl.conf                                                         100%  660   304.3KB/s   00:00   
[root@centos02 ~]# systemctl restart network  <!--restart the network service-->
[root@centos02 ~]# ifconfig    <!--verify the configuration took effect-->
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 496  bytes 40064 (39.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 496  bytes 40064 (39.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo:0: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 192.168.100.253  netmask 255.255.255.255
        loop  txqueuelen 1  (Local Loopback)
[root@centos02 ~]# sysctl -p  <!--reload the kernel parameters-->
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2

3. Deploy the master scheduler

[root@centos04 ~]# yum -y install keepalived ipvsadm <!--install the required tools-->
[root@centos04 ~]# vim /etc/sysctl.conf <!--adjust kernel parameters; add the three lines below-->
            .....................
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens33.send_redirects = 0
[root@centos04 ~]# sysctl -p      <!--reload-->
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens33.send_redirects = 0
[root@centos04 ~]# cd /etc/keepalived/
[root@centos04 keepalived]# cp keepalived.conf keepalived.conf.bak   <!--back up the config file-->
[root@centos04 keepalived]# vim keepalived.conf <!--edit the keepalived config file-->

! Configuration File for keepalived

global_defs {
   router_id LVS_DEVEL1   <!--this server's name; it must be unique among all schedulers in the cluster-->
}

vrrp_instance VI_1 {         <!--define the VRRP hot-standby instance-->
    state MASTER             <!--set this one as the master scheduler-->
    interface ens32          <!--physical interface carrying the VIP; adjust to your system-->
    virtual_router_id 51     <!--virtual router ID; must match across the hot-standby group-->
    priority 100             <!--priority; the higher the number, the higher the priority-->
    advert_int 1             <!--advertisement interval in seconds (heartbeat rate)-->
    authentication {         <!--authentication info; must match across the hot-standby group-->
        auth_type PASS       <!--authentication type-->
        auth_pass 1111       <!--password string-->
    }
    virtual_ipaddress {
        192.168.100.253      <!--the floating VIP address; more than one may be listed-->
    }
}

virtual_server 192.168.100.253 80 { <!--change to the VIP address and the required port-->
    delay_loop 6             <!--health-check interval (seconds)-->
    lb_algo rr               <!--load-scheduling algorithm; rr is round robin; change as needed-->
    lb_kind DR               <!--forwarding mode: DR (direct routing)-->
    persistence_timeout 50   <!--connection persistence timeout-->
    protocol TCP             <!--the application service uses TCP-->

    real_server 192.168.100.10 80 {   <!--one web node; this block is a copy of the one below with only the node IP changed-->
        weight 1                 <!--node weight-->
        TCP_CHECK {              <!--health-check method-->
            connect_port 80      <!--target port to check-->
            connect_timeout 3    <!--connect timeout (seconds)-->
            nb_get_retry 3       <!--retry count-->
            delay_before_retry 3 <!--delay before retrying (seconds)-->
        }
    }

    real_server 192.168.100.20 80 {   <!--one web node; copy one real_server { ... } block per node above this line (pasting below it risks losing the closing braces)-->
        weight 1                 <!--node weight-->
        TCP_CHECK {              <!--health-check method-->
            connect_port 80      <!--target port to check-->
            connect_timeout 3    <!--connect timeout (seconds)-->
            nb_get_retry 3       <!--retry count-->
            delay_before_retry 3 <!--delay before retrying (seconds)-->
        }
    }
}

 <!--many more configuration lines follow (about 98 in my file); delete them all, or restarting the service may report errors-->
[root@centos04 ~]# systemctl restart keepalived <!--restart the service-->
[root@centos04 ~]# systemctl enable keepalived <!--enable at boot-->
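The TCP_CHECK sections in the file above make keepalived probe each node's port 80 and remove nodes whose connections fail. Conceptually the probe works like this small shell sketch (an illustration only, not keepalived's implementation; it relies on bash's /dev/tcp redirection):

```shell
#!/bin/bash
# Minimal sketch of a TCP health check: try to open a connection to
# host:port within a timeout, like keepalived's TCP_CHECK does.
check_node() {
    local host=$1 port=$2
    if timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
        echo "$host:$port up"
    else
        echo "$host:$port down"   # keepalived would drop this node from the pool
    fi
}

check_node 192.168.100.10 80
check_node 192.168.100.20 80
```

In this lab both probes would report up; stop httpd on a node and its probe reports down, just as keepalived would then remove that node from the pool.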

4. Configure the backup scheduler

[root@centos05 ~]# yum -y install ipvsadm keepalived   <!--install the ipvsadm and keepalived packages-->
[root@centos05 ~]# scp root@192.168.100.40:/etc/sysctl.conf /etc/   <!--copy the kernel parameter file from the master scheduler-->
root@192.168.100.40's password:            <!--enter the master scheduler's root password-->
sysctl.conf                                 100%  566   205.8KB/s   00:00    
[root@centos05 ~]# sysctl -p               <!--reload-->
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens33.send_redirects = 0
[root@centos05 ~]# scp root@192.168.100.40:/etc/keepalived/keepalived.conf /etc/keepalived/   <!--copy the master's keepalived config file to the backup server-->
root@192.168.100.40's password:       <!--enter the password-->
keepalived.conf                                                                                     100%  803     2.1MB/s   00:00  
[root@centos05 ~]# vim /etc/keepalived/keepalived.conf <!--edit the keepalived config file-->
<!--if both servers use the ens32 interface, only the following three items need changing (leave everything else at its defaults)-->
   router_id LVS_HA_Backup    <!--change router_id to a different value; it must be unique-->
    state BACKUP         <!--change the state to BACKUP (case matters)-->
    interface ens32
    priority 99      <!--must be lower than the master's priority and must not clash with other backup schedulers-->
[root@centos05 ~]# systemctl start keepalived     <!--start the keepalived service-->
[root@centos05 ~]# chkconfig --level 35 keepalived on   <!--enable at boot-->

At this point both the master and the backup scheduler are configured. If you need more backup schedulers, deploy each one following the backup-scheduler steps above.
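For example, if a third scheduler were added as a second backup, only these lines in its keepalived.conf would differ (a sketch; the router_id name and priority value here are assumptions, not part of the original setup):

```
   router_id LVS_HA_Backup2    <!--must be unique among all schedulers-->
    state BACKUP
    interface ens32
    priority 98      <!--lower than the master (100) and the first backup (99)-->
```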

5. Client access test

From a client, test access to the VIP address 192.168.100.253:

(figures: browser screenshots of the client access test, alternating between the two node pages)

For the test so far, each node was deliberately given a different page file in order to verify that load balancing works, and that effect has now been demonstrated. Next, a shared storage server is built so that all web nodes read their page files from it and serve the same document to clients.

Now configure the shared storage server.

6. Configure the NFS server

[root@centos03 ~]# yum -y install rpcbind nfs-utils   <!--install the required packages-->
[root@centos03 ~]# mkdir /web   <!--create the shared web root directory-->
[root@centos03 ~]# echo "www.nfs.com" > /web/index.html   <!--create the site home page with test data-->
[root@centos03 ~]# vim /etc/exports      <!--edit the NFS main config file to share the /web directory-->
/web    192.168.100.10(ro) 192.168.100.20(rw)  
[root@centos03 ~]# systemctl start rpcbind      <!--start the related services-->
[root@centos03 ~]# systemctl start nfs      <!--start the related services-->
[root@centos03 ~]# systemctl enable rpcbind     <!--enable at boot-->
[root@centos03 ~]# systemctl enable nfs   <!--enable at boot-->
[root@centos03 ~]# showmount -e 192.168.100.30   <!--view the shared directories-->
Export list for 192.168.100.30:
/web 192.168.100.20,192.168.100.10

7. Mount the shared storage on the web site root directories

1) Mount the shared directory on web node 1

[root@centos01 ~]# mount 192.168.100.30:/web /var/www/html/   <!--mount the shared directory onto the web server's document root-->
[root@centos01 ~]# cat /var/www/html/index.html  <!--check that the mount succeeded-->
www.nfs.com
[root@centos01 ~]# vim /etc/fstab        <!--configure automatic mounting at boot-->
192.168.100.30:/web     /var/www/html/                            nfs     defaults        0 0
[root@centos01 ~]# systemctl restart httpd     <!--restart the httpd service-->

2) Mount the shared directory on web node 2

[root@centos02 ~]# mount 192.168.100.30:/web /var/www/html/   <!--mount the shared directory onto the web server's document root-->
[root@centos02 ~]# cat /var/www/html/index.html        <!--check that the mount succeeded-->
www.nfs.com
[root@centos02 ~]# vim /etc/fstab   <!--configure automatic mounting at boot-->
192.168.100.30:/web     /var/www/html/                            nfs     defaults        0 0
[root@centos02 ~]# systemctl restart httpd       <!--restart the httpd service-->

8. Client access test again

This time, no matter how often the client refreshes, the page shown is always www.nfs.com.

(figure: screenshot of the client access test)

9. Useful query commands for this case

1) Which scheduler holds the VIP: on a scheduler, query the physical interface that carries the VIP address to see whether the VIP is present (while the master is up, the VIP cannot be found on the backup scheduler):

[root@centos04 ~]# ip a show dev ens32   <!--query the physical interface ens32 that carries the VIP; a plain "ip a" also shows the VIP address-->
ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:77:2c:03 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.40/24 brd 192.168.100.255 scope global noprefixroute ens32
       valid_lft forever preferred_lft forever
    inet 192.168.100.253/32 scope global ens32    <!--the VIP address-->
       valid_lft forever preferred_lft forever
    inet6 fe80::95f8:eeb7:2ed2:d13c/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever

2) Query the web nodes

[root@centos04 ~]# ipvsadm -ln     <!--query the web node pool and the VIP-->
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.100.253:80 rr persistent 50
  -> 192.168.100.10:80            Route   1      0          0         
  -> 192.168.100.20:80            Route   1      0          0         
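If you want to script around this output (for monitoring, say), the real-server lines can be extracted with awk. A small sketch, fed here from the sample output shown above rather than the live command (on a scheduler you would pipe `ipvsadm -ln` in directly):

```shell
#!/bin/bash
# Print the real-server address:port entries from `ipvsadm -ln` output.
# The header line also starts with "->", so require a numeric address.
list_real_servers() {
    awk '$1 == "->" && $2 ~ /^[0-9]/ {print $2}'
}

list_real_servers <<'EOF'
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.100.253:80 rr persistent 50
  -> 192.168.100.10:80            Route   1      0          0
  -> 192.168.100.20:80            Route   1      0          0
EOF
# -> prints 192.168.100.10:80 and 192.168.100.20:80
```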

3) Simulate failure of the master scheduler and the second web node, then query the VIP and the web nodes again on the backup scheduler

[root@centos05 ~]# ip a show dev ens32   <!--the VIP address has moved to the backup scheduler-->
ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:77:2c:03 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.40/24 brd 192.168.100.255 scope global noprefixroute ens32
       valid_lft forever preferred_lft forever
    inet 192.168.100.253/32 scope global ens32    <!--the VIP address-->
       valid_lft forever preferred_lft forever
    inet6 fe80::95f8:eeb7:2ed2:d13c/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
[root@centos05 ~]# ipvsadm -ln <!--after the web2 node goes down, it no longer appears-->
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.100.253:80 rr 
  -> 192.168.100.10:80            Route   1      0          0         

4) View the log messages for the scheduler failover

[root@centos05 ~]# tail -30 /var/log/messages

------ This concludes the article, thanks for reading ------

Origin blog.51cto.com/14156658/2457917