LVS+Keepalived high availability cluster (theory + actual deployment)

Preface

In this highly information-driven IT era, enterprises' production systems, business operations, sales and support, and daily management depend increasingly on computer information and services, which has greatly increased the demand for high-availability (HA) technologies that can provide continuous, uninterrupted computer system or network services.
This article uses Keepalived to implement dual-system hot backup, covering failover of IP addresses and hot-backup applications in LVS high-availability clusters.

1. Basics of Keepalived dual-system hot backup

1.1 Overview and installation of Keepalived

1.1.1, Keepalived hot backup method

Keepalived uses the VRRP hot-backup protocol to provide multi-machine hot backup for Linux servers

VRRP (Virtual Router Redundancy Protocol) is a backup solution for routers

Multiple routers form a hot-backup group and provide services externally through a shared virtual IP address

At any moment only one master router in each hot-backup group provides the service; the other routers remain in a redundant state

If the currently active router fails, another router automatically takes over the virtual IP address according to the configured priorities and continues to provide the service
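These advertisements can be observed directly on the wire. A minimal sketch, assuming the hot-backup interface is ens33 and tcpdump is installed:

```shell
# VRRP advertisements are IP protocol 112, sent to multicast 224.0.0.18;
# on a healthy group, only the current master should be transmitting them.
tcpdump -i ens33 -nn vrrp
```

When the master fails, the backup begins emitting these packets instead, which is how the takeover can be confirmed at the packet level.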

1.1.2, Keepalived installation and service control

When used in an LVS cluster environment, the ipvsadm management tool is also needed.
Install Keepalived with YUM, then enable the service:

[root@localhost ~]# yum -y install keepalived ipvsadm
[root@localhost ~]# systemctl enable keepalived

1.2. Use Keepalived to achieve dual-system hot backup

Keepalived supports multi-machine hot backup; each hot-standby group can contain multiple servers, and dual-machine hot backup is the most common setup

Failover in dual-system hot backup is achieved by moving ("drifting") the virtual IP address, which makes it suitable for all kinds of application servers

This deployment implements dual-machine hot backup based on web services

1.2.1, the configuration of the main server

The Keepalived configuration directory is /etc/keepalived/
keepalived.conf is the main configuration file

[root@localhost ~]# cd /etc/keepalived/
[root@localhost keepalived]# cp keepalived.conf keepalived.conf.bak
[root@localhost keepalived]# vi keepalived.conf
global_defs { ... } section: specifies global parameters
vrrp_instance <instance_name> { ... } section: specifies the VRRP hot-backup parameters
Comment lines begin with the "!" character
The samples/ directory provides many sample configurations for reference

Common configuration options:
router_id HA_TEST_R1: the name of this router (server)
vrrp_instance VI_1: defines a VRRP hot-backup instance
state MASTER: hot-backup state; MASTER denotes the master server
interface ens33: the physical interface carrying the VIP address
virtual_router_id 1: the virtual router ID, which must be identical within a hot-backup group
priority 100: the priority; the larger the value, the higher the priority
advert_int 1: the advertisement interval in seconds (heartbeat frequency)
auth_type PASS: the authentication type
auth_pass 123456: the password string
virtual_ipaddress { vip }: specifies the drift address(es) (VIP); there can be several, separated by commas
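Putting the options above together, a minimal master-side keepalived.conf for web-only hot backup might look like the following sketch (the VIP 192.168.10.72 is taken from the tests later in this section; adjust names and addresses to your environment):

```
! comments in keepalived.conf begin with "!"
global_defs {
   router_id HA_TEST_R1
}
vrrp_instance VI_1 {
   state MASTER
   interface ens33
   virtual_router_id 1
   priority 100
   advert_int 1
   authentication {
      auth_type PASS
      auth_pass 123456
   }
   virtual_ipaddress {
      192.168.10.72
   }
}
```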

After confirming that the configuration is correct, start the Keepalived service; the addresses can then be viewed with the ip command

[root@localhost keepalived]# systemctl start keepalived                  #### start keepalived
[root@localhost keepalived]# ip addr show dev ens33                      #### view the physical IP address and the drift address

1.2.2, the configuration of the standby server

The Keepalived backup server's configuration differs from the master's in three options:
router_id: set to a distinct name
state: set to BACKUP
priority: a value lower than the master's
The other options are the same as on the master
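For the web-only hot backup described here, the backup node's file is therefore identical except for three lines (a sketch, using the values from the LVS deployment later in this article):

```
! only these options differ from the master configuration:
router_id HA_TEST_R2   ! a distinct router name for this node
state BACKUP           ! standby role instead of MASTER
priority 99            ! lower than the master's 100
```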

1.2.3, test the dual-system hot backup function

To test the dual-system hot backup, enable the web service on both the master and backup machines with identical content.
Disable and then re-enable the master server's network card, and perform the following tests:

Test 1: Use ping to check connectivity to 192.168.10.72
Test 2: Visit http://192.168.10.72 to confirm availability and any change in content
Test 3: Check the changes in the log file /var/log/messages
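The three tests can be run as follows (a sketch; the VIP 192.168.10.72 is from the tests above, and the log check is run on the servers themselves):

```shell
ping -c 4 192.168.10.72                        # Test 1: VIP connectivity
curl http://192.168.10.72/                     # Test 2: page availability and content
tail -n 50 /var/log/messages | grep -i vrrp    # Test 3: recent VRRP state transitions
```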

2. LVS+Keepalived high-availability cluster actual deployment

2.1. Experimental environment

Five servers running in VMware
IP address planning:
Drift address (VIP): 192.168.100.100
Primary scheduler: 192.168.100.21
Secondary scheduler: 192.168.100.20
Web server 1: 192.168.100.22
Web server 2: 192.168.100.23
Storage server: 192.168.100.24

2.2, configure the main scheduler

2.2.1, adjust /proc response parameters

[root@localhost network-scripts]# vi /etc/sysctl.conf 
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens33.send_redirects = 0
[root@localhost network-scripts]# sysctl -p
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens33.send_redirects = 0 

2.2.2, adjust keepalived parameters

[root@localhost ~]# yum -y install keepalived ipvsadm
[root@localhost ~]# cd /etc/keepalived/
[root@localhost keepalived]# cp keepalived.conf keepalived.conf.bak
[root@localhost keepalived]# vi keepalived.conf
global_defs {
   router_id HA_TEST_R1
}
vrrp_instance VI_1 {
   state MASTER
   interface ens33
   virtual_router_id 1
   priority 100
   advert_int 1
   authentication {
      auth_type PASS
      auth_pass 123456
   }
   virtual_ipaddress {
      192.168.100.100
   }
}

virtual_server 192.168.100.100 80 {
    delay_loop 15
    lb_algo rr
    lb_kind DR
    persistence_timeout 60
    protocol TCP

    real_server 192.168.100.22 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 4
        }
    }
    real_server 192.168.100.23 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 4
        }
    }
}
[root@localhost keepalived]# systemctl start keepalived
[root@localhost keepalived]# ip addr show dev ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:11:0d:16 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.21/24 brd 192.168.100.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.100.100/32 brd 192.168.100.100 scope global noprefixroute ens33:0
       valid_lft forever preferred_lft forever
    inet6 fe80::3069:1a3d:774b:18f/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
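Keepalived programs the LVS virtual server from the virtual_server block itself, so no separate ipvsadm rule script is needed. The resulting table can be checked on the master (a sketch of the verification step, not captured from the original environment):

```shell
# list the current IPVS table, numeric addresses, no name resolution;
# expect 192.168.100.100:80 (rr, Route) with real servers .22 and .23
ipvsadm -ln
```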

2.3, configure the slave scheduler

2.3.1, adjust /proc response parameters

[root@localhost network-scripts]# vi /etc/sysctl.conf 
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens33.send_redirects = 0
[root@localhost network-scripts]# sysctl -p     
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens33.send_redirects = 0

2.3.2, adjust keepalived parameters

[root@localhost ~]# yum -y install keepalived ipvsadm
[root@localhost ~]# cd /etc/keepalived/
[root@localhost keepalived]# cp keepalived.conf keepalived.conf.bak
[root@localhost keepalived]# vi keepalived.conf
global_defs {
   router_id HA_TEST_R2
}
vrrp_instance VI_1 {
   state BACKUP
   interface ens33
   virtual_router_id 1
   priority 99
   advert_int 1
   authentication {
      auth_type PASS
      auth_pass 123456
   }
   virtual_ipaddress {
      192.168.100.100
   }
}

virtual_server 192.168.100.100 80 {
    delay_loop 15
    lb_algo rr
    lb_kind DR
    persistence_timeout 60
    protocol TCP

    real_server 192.168.100.22 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 4
        }
    }
    real_server 192.168.100.23 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 4
        }
    }
}
[root@localhost keepalived]# systemctl start keepalived
[root@localhost keepalived]# ip addr show dev ens33 
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:48:b8:83 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.20/24 brd 192.168.100.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::e438:b533:985e:cf94/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever

2.4, configure storage server

First check whether nfs-utils and rpcbind are installed; if they are not, install them with yum, then start the two services:

[root@localhost ~]# systemctl start nfs
[root@localhost ~]# systemctl start rpcbind
[root@localhost ~]# mkdir /opt/51xit /opt/52xit
[root@localhost ~]# vi /etc/exports
/opt/51xit 192.168.100.0/24(rw,sync)
/opt/52xit 192.168.100.0/24(rw,sync)
[root@localhost ~]# systemctl restart rpcbind
[root@localhost ~]# systemctl restart nfs
[root@localhost ~]# systemctl enable nfs
[root@localhost ~]# systemctl enable rpcbind
[root@localhost ~]# echo "this is www.51xit.top" > /opt/51xit/index.html
[root@localhost ~]# echo "this is www.52xit.top" > /opt/52xit/index.html
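Before the node servers mount anything, the exports can be verified locally on the storage server (a sketch of the check):

```shell
exportfs -v               # list the active NFS exports with their options
showmount -e localhost    # should list /opt/51xit and /opt/52xit
```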

2.5, configure node server

2.5.1, configure virtual IP address (VIP)

Turn off both the firewall and SELinux, and check whether nfs-utils is installed

[root@localhost ~]# cd /etc/sysconfig/network-scripts/
[root@localhost network-scripts]# cp ifcfg-lo ifcfg-lo:0
[root@localhost network-scripts]# vi ifcfg-lo:0
DEVICE=lo:0
IPADDR=192.168.100.100
NETMASK=255.255.255.255
ONBOOT=yes

[root@localhost network-scripts]# ifup lo:0
[root@localhost network-scripts]# ifconfig
        ...(output omitted)
lo:0: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 192.168.100.100  netmask 255.255.255.255
        loop  txqueuelen 1000  (Local Loopback)
        ...(output omitted)
[root@localhost network-scripts]# vi /etc/rc.local 
/sbin/route add -host 192.168.100.100 dev lo:0

[root@localhost network-scripts]# route add -host 192.168.100.100 dev lo:0
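A quick way to confirm that the VIP really landed on the loopback interface (a sketch; the helper name addr_present is illustrative and not part of the original procedure):

```shell
# addr_present reads `ip addr` output on stdin and checks for an address
addr_present() {
    grep -qF "inet $1/"    # $1 = the IPv4 address to look for
}

if ip -4 addr show dev lo | addr_present 192.168.100.100; then
    echo "VIP is configured on lo"
else
    echo "VIP not found on lo"
fi
```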

2.5.2, adjust /proc response parameters

[root@localhost network-scripts]# vi /etc/sysctl.conf 
# reply to ARP only when the target IP is configured on the incoming interface
net.ipv4.conf.all.arp_ignore = 1
# always use the most appropriate local source address in ARP announcements
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2

[root@localhost network-scripts]# sysctl -p

The configuration above is applied identically on both node servers

2.5.3, install httpd and mount the test pages

Below, mount the NFS exports on each of the two node servers in turn

[root@localhost ~]# showmount -e 192.168.100.24
Export list for 192.168.100.24:
/opt/52xit 192.168.100.0/24
/opt/51xit 192.168.100.0/24
[root@localhost ~]# yum -y install httpd
[root@localhost ~]# mount 192.168.100.24:/opt/51xit /var/www/html/
[root@localhost ~]# vi /etc/fstab 

#
# /etc/fstab
# Created by anaconda on Thu Aug  6 12:23:03 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=a1c935eb-f211-43a5-be35-2a9fef1f6a89 /boot                   xfs     defaults        0 0
/dev/mapper/centos-swap swap                    swap    defaults        0 0
/dev/cdrom /mnt iso9660 defaults 0 0
192.168.100.24:/opt/51xit/ /var/www/html/ nfs defaults,_netdev 0 0
[root@localhost ~]# systemctl start httpd

Test whether the page loads normally in a browser

[root@localhost ~]# showmount -e 192.168.100.24
Export list for 192.168.100.24:
/opt/52xit 192.168.100.0/24
/opt/51xit 192.168.100.0/24
[root@localhost ~]# yum -y install httpd
[root@localhost ~]# mount 192.168.100.24:/opt/52xit /var/www/html/
[root@localhost ~]# vi /etc/fstab 

#
# /etc/fstab
# Created by anaconda on Thu Aug  6 12:23:03 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=a1c935eb-f211-43a5-be35-2a9fef1f6a89 /boot                   xfs     defaults        0 0
/dev/mapper/centos-swap swap                    swap    defaults        0 0
/dev/cdrom /mnt iso9660 defaults 0 0
192.168.100.24:/opt/52xit/ /var/www/html/ nfs defaults,_netdev 0 0
[root@localhost ~]# systemctl start httpd

Test whether the page loads normally in a browser

2.6, experimental verification

2.6.1, test the main scheduler

Open a packet-capture tool and you will see that the master scheduler at 192.168.100.21 keeps sending VRRP advertisements.
Enter 192.168.100.100 in the browser of the host machine, wait about a minute, then refresh or re-enter the address.
The master scheduler works normally!

2.6.2, test slave scheduler

Stop keepalived on the main server

[root@localhost keepalived]# systemctl stop keepalived

Open the packet-capture tool again and you will see that the slave scheduler at 192.168.100.20 is now sending the VRRP advertisements.
Enter 192.168.100.100 in the browser of the host machine, wait about a minute, then refresh or re-enter the address.
The slave scheduler takes over normally!
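Failover can also be watched continuously from a client while keepalived is stopped and restarted on the master (a sketch; the 30-iteration bound and 2-second timeout are arbitrary choices):

```shell
# Poll the VIP; the page should keep answering across the failover,
# with content alternating between the two real servers under rr scheduling.
for i in $(seq 1 30); do
    curl -s --max-time 2 http://192.168.100.100/ || echo "no answer"
    sleep 2
done
```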

Origin blog.csdn.net/weixin_48191211/article/details/108749220