Build a highly available web service cluster with LVS + Keepalived

This setup involves quite a few technologies. For detailed explanations of each, you can refer to the following posts:

CentOS 7 load balancing in DR (direct routing) mode, explained in detail;

CentOS 7 load balancing in NAT (address translation) mode, explained in detail;

LVS load balancing clusters, explained in detail;

Combined with the posts above, Keepalived can be used to build a highly available web cluster in either DR or NAT mode. This post builds a highly available web service cluster with Keepalived + DR.

This post is mainly about the configuration itself; when configuring a production environment, it can be copied directly. The environment is as follows:

(Figure: cluster environment topology. LVS1: 200.0.0.1, LVS2: 200.0.0.2, Web1: 200.0.0.3, Web2: 200.0.0.4, cluster VIP: 200.0.0.100, NFS storage: 192.168.1.5)

First, environment analysis:

1. The two schedulers and the two web nodes use addresses in the same network segment and communicate directly with the external network. For the security of the shared storage, the storage server and the web nodes are normally planned on an internal network, so each web node and scheduler needs two or more NIC interfaces.

2. Because resources are limited here, and to keep the configuration simple, there are only two schedulers and two web nodes. This is adequate when the volume of web requests is small, but if the request volume is large, configure at least three schedulers and three web hosts. With only two web nodes under heavy traffic, if one goes down, the remaining node certainly cannot carry the resulting surge of requests and will be overwhelmed as well.

3. Have the system image ready for installing the related services.

4. Configure the firewall policies and the IP addresses other than the VIP yourself (I simply turned the firewall off here).

5. Keepalived calls the ip_vs module automatically, so there is no need to load it manually (see the quick check below).
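
If you want to confirm that the module really was loaded once keepalived is running, a minimal check (assuming a standard CentOS 7 kernel; run it on a scheduler after keepalived has been started) is:

[root@LVS1 ~]# lsmod | grep ip_vs              #List the loaded ip_vs modules; scheduler modules such as ip_vs_rr also appear here
[root@LVS1 ~]# cat /proc/net/ip_vs             #The IPVS virtual server table exists once the module is loaded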

Second, the expected results:

1. Multiple clients accessing the cluster VIP all get the same page.

2. After the master scheduler goes down, the cluster's VIP address automatically drifts to the backup scheduler, and all scheduling tasks are then handled by the backup scheduler. When the master scheduler resumes running, the cluster's VIP address automatically moves back to the master scheduler, which continues to operate, while the backup scheduler returns to backup status.

3. When a web node goes down, Keepalived's health check detects it and automatically removes the failed node from the web node pool; once the web node resumes operation, it is automatically added back to the pool.

Third, building the cluster:

1. Configure the master scheduler (LVS1):

[root@LVS1 ~]# yum -y install keepalived ipvsadm                #Install the required tools
[root@LVS1 ~]# vim /etc/sysctl.conf              #Adjust the kernel parameters; append the three lines below
            .....................
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens33.send_redirects = 0
[root@LVS1 ~]# sysctl -p                  #Apply the changes
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens33.send_redirects = 0
[root@LVS1 ~]# cd /etc/keepalived/
[root@LVS1 keepalived]# cp keepalived.conf keepalived.conf.bak          #Back up the configuration file
[root@LVS1 keepalived]# vim keepalived.conf               #Edit the keepalived configuration file

! Configuration File for keepalived

global_defs {
   notification_email {
     [email protected]            #收件人地址(没需要的话可以不做修改)
     [email protected]
     [email protected]
   }
   notification_email_from [email protected]       #Sender name and address (can be left unchanged)
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL1       #Change this to this server's own name; it must be unique among all schedulers.
}

vrrp_instance VI_1 {
    state MASTER             #Set this node as the master scheduler
    interface ens33            #Physical NIC interface carrying the VIP address; adjust to your environment
    virtual_router_id 51           
    priority 100                  
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        200.0.0.100                  #The floating IP address (VIP); there can be more than one.
    }
}

virtual_server 200.0.0.100 80 {                       #Change to the VIP address and the required port
    delay_loop 6
    lb_algo rr                             #Load scheduling algorithm; change as needed. rr means round robin.
    lb_kind DR                          #Set the working mode to DR (direct routing).
   ! persistence_timeout 50      #So that the test below shows the effect, comment out the connection persistence line by prefixing it with "!".
    protocol TCP

real_server 200.0.0.4 80 {       #Configuration for one web node. This real_server 200.0.0.4 80 { ..... } block was copied from the one below; after copying, just change the node's IP address.
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 200.0.0.3 80 {             #Configuration for one web node. After editing it, copy one real_server block above this line for each additional node; it is best not to paste below it, to avoid losing the closing braces.
        weight 1
        TCP_CHECK {
            connect_port 80                #Add this line to configure the connection port
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

#There are many more configuration entries below this point (about 98 lines here); delete them all, otherwise restarting the service may report errors.
[root@LVS1 ~]# systemctl restart keepalived              #Restart the service
[root@LVS1 ~]# systemctl enable keepalived              #Enable it at boot

At this point, the master scheduler configuration is complete.
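
Before moving on, it may be worth a quick sanity check that keepalived is running and has bound the VIP on the master (a minimal check under the configuration above, where the VIP 200.0.0.100 lives on ens33):

[root@LVS1 ~]# systemctl status keepalived                        #The service should be active (running)
[root@LVS1 ~]# ip addr show dev ens33 | grep 200.0.0.100          #The VIP should be listed on the master's ens33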

2. Configure the backup scheduler (LVS2):

[root@LVS2 ~]# yum -y install ipvsadm keepalived           #Install the required tools
[root@LVS2 ~]# scp [email protected]:/etc/sysctl.conf /etc/              
#Copy the kernel parameter file from the master scheduler
[email protected]'s password:            #Enter the master scheduler's user password
sysctl.conf                                 100%  566   205.8KB/s   00:00    
[root@LVS2 ~]# sysctl -p               #Apply the changes
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens33.send_redirects = 0
[root@LVS2 ~]# scp [email protected]:/etc/keepalived/keepalived.conf /etc/keepalived/   
#Copy the keepalived configuration file from the master scheduler; only minor changes are needed.
[email protected]'s password:                     #Enter the master scheduler's user password
keepalived.conf                             100% 1053     2.5MB/s   00:00    
[root@LVS2 ~]# vim /etc/keepalived/keepalived.conf             #Edit the copied configuration file
#If both servers use the ens33 NIC, only the following three items need to be changed (leave the rest at their defaults):
router_id LVS_DEVEL2        #Change router_id to a different value; router_id must be unique.
state BACKUP              #Change the state to BACKUP; mind the capitalization.
priority 90        #The priority must be lower than the master scheduler's and must not conflict with any other backup scheduler.
[root@LVS2 ~]# systemctl restart keepalived           #Start the service
[root@LVS2 ~]# systemctl enable keepalived          #Enable it at boot

At this point the backup scheduler is configured as well. If you need to deploy more backup schedulers, configure them following the backup scheduler steps above.

3. Configure the web node Web1:

[root@Web1 ~]# cd /etc/sysconfig/network-scripts/
[root@Web1 network-scripts]# cp ifcfg-lo ifcfg-lo:0           #Copy the loopback interface configuration file
[root@Web1 network-scripts]# vim ifcfg-lo:0           #Edit the loopback configuration so it carries the cluster VIP.
DEVICE=lo:0            #Change the interface name
IPADDR=200.0.0.100            #Configure the cluster VIP
NETMASK=255.255.255.255             #The netmask must be four 255s (255.255.255.255).
ONBOOT=yes
#Keep only the four configuration lines above and delete the rest.
[root@Web1 network-scripts]# ifup lo:0          #Bring up the loopback interface
[root@Web1 ~]# ifconfig lo:0                    #Check that the VIP is configured correctly
lo:0: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 200.0.0.100  netmask 255.255.255.255
        loop  txqueuelen 1  (Local Loopback)
[root@web1 ~]# route add -host 200.0.0.100 dev lo:0              #Add a local route for the VIP
[root@web1 ~]# vim /etc/rc.local               #Make it persistent at boot by adding this route entry              
                ................................
/sbin/route add -host 200.0.0.100 dev lo:0
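#Note: on CentOS 7, /etc/rc.local only runs at boot if /etc/rc.d/rc.local is executable, so you may also need:
[root@web1 ~]# chmod +x /etc/rc.d/rc.local          #Allow rc.local to run at boot so the route above is re-added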
[root@web1 ~]# vim /etc/sysctl.conf                  #Adjust the /proc (ARP) response parameters so the node neither answers nor announces ARP for the VIP, as required for LVS-DR; append the six lines below
                    ...................
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
[root@web1 ~]# sysctl -p                #Apply the changes
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
[root@web1 ~]# yum -y install httpd             #Install the httpd service
[root@Web1 ~]# echo 111111111111 > /var/www/html/index.html           #Prepare a test web page
[root@Web1 ~]# systemctl start httpd             #Start the httpd service
[root@Web1 ~]# systemctl enable httpd             #Enable it at boot

At this point, the first web node has been configured.

4. Configure the web node Web2:

[root@Web2 ~]# scp [email protected]:/etc/sysconfig/network-scripts/ifcfg-lo:0 \
/etc/sysconfig/network-scripts/                    #Copy the lo:0 configuration file from the Web1 node
The authenticity of host '200.0.0.3 (200.0.0.3)' can't be established.
ECDSA key fingerprint is b8:ca:d6:89:a2:42:90:97:02:0a:54:c1:4c:1e:c2:77.
Are you sure you want to continue connecting (yes/no)? yes           #Type yes
Warning: Permanently added '200.0.0.3' (ECDSA) to the list of known hosts.
[email protected]'s password:           #Enter the Web1 node's user password
ifcfg-lo:0                                  100%   66     0.1KB/s   00:00    
[root@Web2 ~]# ifup lo:0                  #Bring up lo:0
[root@Web2 ~]# ifconfig lo:0          #Confirm the VIP is correct
lo:0: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 200.0.0.100  netmask 255.255.255.255
        loop  txqueuelen 1  (Local Loopback)
[root@Web2 ~]# scp [email protected]:/etc/sysctl.conf /etc/       #Copy the kernel parameter file
[email protected]'s password:                     #Enter the Web1 node's user password
sysctl.conf                                 100%  659     0.6KB/s   00:00   
[root@Web2 ~]# sysctl -p                     #Apply the changes
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
[root@Web2 ~]# route add -host 200.0.0.100 dev lo:0            #Add a local route for the VIP
[root@Web2 ~]# vim /etc/rc.local                  #Make it persistent at boot by adding the same route entry as on Web1
[root@Web2 ~]# yum -y install httpd             #Install the httpd service
[root@Web2 ~]# echo 22222222222 > /var/www/html/index.html           #Prepare a test web page
[root@Web2 ~]# systemctl start httpd             #Start the httpd service
[root@Web2 ~]# systemctl enable httpd             #Enable it at boot

At this point Web2 is also configured. Now use a client to test whether the cluster works:

5. Client access test:

(Figures: a client browser accessing the cluster VIP, returning the two different test pages)

If you keep getting the same page, and configuration errors have been ruled out, open several browser windows or wait a while before refreshing, because connection persistence may still be in effect and there can be a delay before the other node's page appears.
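
From a Linux client, one way to see the round-robin effect without worrying about browser caching is to request the VIP several times in a row (a small sketch; the client prompt is just illustrative, and 200.0.0.100 is the VIP configured above):

[root@client ~]# for i in 1 2 3 4; do curl -s http://200.0.0.100/; done          #The two test pages (111... and 222...) should alternate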

For the test just now, each node was given a different web page so we could see whether load balancing was taking effect. Now that the effect has been confirmed, we build a shared storage server: all web nodes will read the page files they serve to clients from the shared storage server, so that clients receive the same document.

Here we set up a simple shared storage server. If you need to build a highly available storage server, you can follow my blog (warrent); I will write about building one in a future post.

6. Configure the NFS shared storage server:

[root@NFS /]# yum -y install nfs-utils rpcbind                   #Install the related packages
[root@NFS /]# systemctl enable nfs               #Enable at boot
[root@NFS /]# systemctl enable rpcbind          #Enable at boot
[root@NFS /]# mkdir -p /opt/wwwroot               #Prepare the shared directory
[root@NFS /]# echo www.baidu.com > /opt/wwwroot/index.html              #Create a web page file
[root@NFS /]# vim /etc/exports                         #Define the shared directory (this file is empty by default)
/opt/wwwroot   192.168.1.0/24(rw,sync,no_root_squash)           #Add this line
[root@NFS /]# systemctl restart rpcbind        #Restart the related services; note the order in which they are started
[root@NFS /]# systemctl restart nfs
[root@NFS /]# showmount -e               #Check the directories shared by this host
Export list for NFS:
/opt/wwwroot 192.168.1.0/24

7. Mount the shared storage on all web nodes:

1) Configure the Web1 node:

[root@Web1 ~]# showmount -e 192.168.1.5            #Check the directories shared by the storage server
Export list for 192.168.1.5:
/opt/wwwroot 192.168.1.0/24 
[root@Web1 ~]# mount 192.168.1.5:/opt/wwwroot /var/www/html/         #Mount it on the web root directory
[root@Web1 ~]# df -hT /var/www/html/                       #Confirm the mount succeeded
Filesystem               Type  Size  Used Avail Use% Mounted on
192.168.1.5:/opt/wwwroot nfs4   39G  5.5G   33G   15% /var/www/html
[root@Web1 ~]# vim /etc/fstab               #Set up automatic mounting at boot
                   .........................
192.168.1.5:/opt/wwwroot  /var/www/html   nfs   defaults,_netdev 0 0
#Add the line above
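
To make sure the fstab entry itself is valid without rebooting, you can unmount the share and let mount -a re-read /etc/fstab (an optional check):

[root@Web1 ~]# umount /var/www/html                 #Unmount the share temporarily
[root@Web1 ~]# mount -a                             #Mount everything listed in /etc/fstab
[root@Web1 ~]# df -hT /var/www/html                 #The NFS share should appear again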

2) Configure the Web2 node:

[root@Web2 ~]# showmount -e 192.168.1.5            #Check the directories shared by the storage server
Export list for 192.168.1.5:
/opt/wwwroot 192.168.1.0/24 
[root@Web2 ~]# mount 192.168.1.5:/opt/wwwroot /var/www/html/         #Mount it on the web root directory
[root@Web2 ~]# df -hT /var/www/html/                       #Confirm the mount succeeded
Filesystem               Type  Size  Used Avail Use% Mounted on
192.168.1.5:/opt/wwwroot nfs4   39G  5.5G   33G   15% /var/www/html
[root@Web2 ~]# vim /etc/fstab               #Set up automatic mounting at boot
                   .........................
192.168.1.5:/opt/wwwroot  /var/www/html   nfs   defaults,_netdev 0 0
#Add the line above

8. Client access test again:

This time, no matter how the client refreshes, it will see the same page, as follows:

(Figure: client access to the cluster VIP now returns the same shared page)

9. Some useful query commands:

1) To find out which scheduler currently holds the VIP, query the physical interface that carries the VIP on the scheduler and look for the VIP address (it will not be found on the backup scheduler):

[root@LVS1 ~]# ip a show dev ens33              #Query the physical NIC ens33 that carries the VIP address
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:77:2c:03 brd ff:ff:ff:ff:ff:ff
    inet 200.0.0.1/24 brd 200.0.0.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 200.0.0.100/32 scope global ens33                   #The VIP address.
       valid_lft forever preferred_lft forever
    inet6 fe80::95f8:eeb7:2ed2:d13c/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever

2) Query which web nodes are in the pool:

[root@LVS1 ~]# ipvsadm -ln                  #Query the web node pool and the VIP
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  200.0.0.100:80 rr
  -> 200.0.0.3:80                 Route   1      0          0         
  -> 200.0.0.4:80                 Route   1      0          0      

3) Simulate the Web2 node and the master scheduler going down, then query the VIP and the web nodes again on the backup scheduler:
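
A simple way to simulate the two failures (an assumption about how the test is performed here; powering the machines off works just as well) is to stop the relevant services:

[root@LVS1 ~]# systemctl stop keepalived            #Take the master scheduler out of the cluster
[root@Web2 ~]# systemctl stop httpd                 #Make the Web2 node fail its health check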

[root@LVS2 ~]# ip a show dev ens33       #The VIP address has already moved to the backup scheduler
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> 
    link/ether 00:0c:29:9a:09:98 brd ff:ff:ff:ff:ff:ff
    inet 200.0.0.2/24 brd 200.0.0.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 200.0.0.100/32 scope global ens33                      #The VIP address.
       valid_lft forever preferred_lft forever
    inet6 fe80::3050:1a9b:5956:5297/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
[root@LVS2 ~]# ipvsadm -ln                   #After the Web2 node goes down, it no longer shows up.
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  200.0.0.100:80 rr
  -> 200.0.0.3:80                 Route   1      0          0         

    #When the master scheduler or the Web2 node recovers, it is automatically added back to the cluster and runs normally.

4) Check the log messages for scheduler failover:

[root@LVS2 ~]# tail -30 /var/log/messages
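
On CentOS 7 the same keepalived messages can also be read from the systemd journal, which is easier to filter (an optional alternative):

[root@LVS2 ~]# journalctl -u keepalived -n 30        #Show the last 30 keepalived log entries on the backup scheduler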

All done!

Origin: blog.51cto.com/14154700/2416771