LVS (DR) + Keepalived high-availability cluster: dual-machine hot standby

The shortcomings of traditional LVS

  • In enterprise applications, a single server carries the risk of a single point of failure
  • Once that single point fails, the enterprise service is interrupted, causing serious damage

Keepalived tool introduction

1. A health-check tool designed specifically for LVS and HA

  • Supports automatic failover
  • Supports node health checking
  • Official website: https://www.keepalived.org
  • Versions 2.0 and later are currently in common use

2. Keepalived implementation principle

  • Keepalived adopts the VRRP hot-standby protocol
  • VRRP implements multi-machine hot standby for Linux servers

3. How VRRP works

  • VRRP (Virtual Router Redundancy Protocol) is a backup solution for routers
  • Multiple routers form a hot-standby group and provide service to the outside through a shared virtual IP address
  • At any moment, only the master router in a hot-standby group provides service; the other routers stay in a redundant state
  • If the currently active router fails, another router takes over the virtual IP address automatically, according to the configured priorities, and continues to provide service

4. Practical application of Keepalived


  1. Keepalived can provide multi-machine hot standby, and each hot-standby group can contain several servers

  2. Failover in dual-machine hot standby is achieved by drifting the virtual IP address, which makes it suitable for all kinds of application servers

  3. The example below builds dual-machine hot standby for a web service

  • Drift address: 192.168.10.72
  • Primary and standby servers: 192.168.10.73, 192.168.10.74
  • Service provided: Web

Keepalived installation and startup

1. Environment deployment

  1. When used in an LVS cluster environment, the ipvsadm management tool is also required
  2. Install Keepalived with YUM
  3. Enable the Keepalived service (a combined sketch follows)
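
A minimal sketch of these three steps on CentOS 7 (assuming the default YUM repositories provide both packages, as in the case study below):

yum -y install ipvsadm keepalived   # install the LVS management tool and Keepalived together
systemctl start keepalived          # start the service now
systemctl enable keepalived         # and start it automatically at boot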

2. Configure the Keepalived master server

The Keepalived configuration directory is located at /etc/keepalived/
keepalived.conf is the main configuration file

  • The global_defs {...} section specifies global parameters
  • A vrrp_instance <instance name> {...} section specifies the VRRP hot-standby parameters
  • Comment text starts with the "!" symbol
  • The samples directory provides many configuration examples for reference

2.1. Common configuration options

  1. router_id HA_TEST_R1: the name of the router (server)
  2. vrrp_instance VI_1: defines a VRRP hot-standby instance
  3. state MASTER: hot-standby state; MASTER denotes the master server
  4. interface ens33: the physical interface that carries the VIP address
  5. virtual_router_id 1: the virtual router's ID number, identical across a hot-standby group
  6. priority 100: priority; the larger the value, the higher the precedence
  7. advert_int 1: the number of seconds between advertisements (heartbeat frequency)
  8. auth_type PASS: authentication type
  9. auth_pass 123456: password string
  10. virtual_ipaddress {vip}: specifies the drift addresses (VIPs); there can be several (a combined sketch follows)
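
Putting these options together, a minimal master-side configuration might look like the following sketch (the drift address 192.168.10.72 is taken from the example above; a "!" starts a comment):

global_defs {
   router_id HA_TEST_R1        ! this server's name
}
vrrp_instance VI_1 {
   state MASTER                ! MASTER on the primary, BACKUP on the standby
   interface ens33             ! interface that will carry the VIP
   virtual_router_id 1         ! identical on every member of the group
   priority 100                ! the highest priority wins the master role
   advert_int 1                ! heartbeat interval in seconds
   authentication {
      auth_type PASS
      auth_pass 123456         ! identical on every member of the group
   }
   virtual_ipaddress {
      192.168.10.72            ! the drift (virtual) IP address
   }
}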

3. Configure the Keepalived backup server

The Keepalived backup server's configuration differs from the master's in three options, as shown in the sketch after this list.

  1. router_id: set to this server's own name
  2. state: set to BACKUP
  3. priority: a value lower than the master's
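
Relative to the master sketch above, the backup server's instance block would change only these lines (values as used in the case study later in this article):

router_id HA_TEST_R2   ! this server's own name
state BACKUP           ! standby role
priority 99            ! lower than the master's 100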

Introduction to the LVS + Keepalived cluster

  • Keepalived's design goal is to build a highly available LVS load-balancing cluster: it can call the ipvsadm tool to create virtual servers and manage server pools, so it is not limited to dual-machine hot standby
  • Building an LVS cluster with Keepalived is simpler and easier than doing so by hand

1. Main advantages

  1. Provides hot-standby failover for the LVS load scheduler, improving availability
  2. Performs health checks on the nodes in the server pool, automatically removes failed nodes, and rejoins them once they recover

2. Test the cluster

  • The /var/log/messages log files on the master and backup schedulers let you track the failover process
  • Commands such as "ipvsadm -ln" and "ipvsadm -lnc" show how the load is distributed; typical checks are sketched below
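
Typical checks, run on either scheduler (all three commands appear again in the walkthrough below):

tail -f /var/log/messages   # watch VRRP state transitions between MASTER and BACKUP in real time
ipvsadm -ln                 # list the virtual server and its real-server table
ipvsadm -lnc                # list current connections to see how requests are being distributed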

Case practice

Experimental topology

(Topology: VIP 192.168.30.100; master scheduler 192.168.30.10; backup scheduler 192.168.30.11; web1 192.168.30.33; web2 192.168.30.22; NFS shared storage 192.168.30.44)

Experimental operation

I. Configure the master scheduler

1. Adjust the /proc response parameters

Disabling ICMP redirects keeps the DR-mode scheduler from redirecting clients straight to the real servers.

[root@localhost ~]# vi /etc/sysctl.conf
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens33.send_redirects = 0
[root@localhost ~]# sysctl -p  // apply the adjusted settings
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens33.send_redirects = 0
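
The same keys can also be applied at runtime, without editing the file, using the standard sysctl -w form:

sysctl -w net.ipv4.conf.all.send_redirects=0   # takes effect immediately but does not survive a reboot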

2. Install ipvsadm and keepalived programs

[root@localhost ~]# yum -y install ipvsadm keepalived

3. Clear the load distribution strategy

[root@localhost ~]# ipvsadm -C

4. Adjust the keepalived parameters

[root@localhost keepalived]# cd /etc/keepalived/
[root@localhost keepalived]# cp keepalived.conf keepalived.confbak
[root@localhost keepalived]# vim keepalived.conf
global_defs {
   router_id HA_TEST_R1
}

vrrp_instance VI_1 {
   state MASTER
   interface ens33
   virtual_router_id 1
   priority 100
   advert_int 1
   authentication {
      auth_type PASS
      auth_pass 123456
   }
   virtual_ipaddress {
      192.168.30.100
   }
}

virtual_server 192.168.30.100 80 {
    delay_loop 15
    lb_algo rr
    lb_kind DR
    ! persistence_timeout 60
    protocol TCP

    real_server 192.168.30.22 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 4
        }
    }
    real_server 192.168.30.33 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 4
        }
    }
}

The following explains the directives in the configuration above:

  • router_id HA_TEST_R1: the router (server) name
  • vrrp_instance VI_1 { ... }: defines the VRRP hot-standby instance
  • state MASTER: hot-standby state; MASTER marks the master server
  • interface ens33: the physical interface that carries the VIP address
  • virtual_router_id 1: the virtual router's ID number, kept identical across the hot-standby group
  • priority 100: priority; the higher the value, the higher the precedence
  • advert_int 1: advertisement interval in seconds (the heartbeat frequency)
  • authentication { ... }: authentication settings, kept identical across the hot-standby group
  • auth_type PASS: authentication type
  • auth_pass 123456: authentication password
  • virtual_ipaddress { ... }: the drift (VIP) addresses; there can be more than one
  • virtual_server 192.168.30.100 80 { ... }: the virtual server address (VIP) and port
  • delay_loop 15: health-check interval (seconds)
  • lb_algo rr: round-robin scheduling algorithm
  • lb_kind DR: direct-routing (DR) cluster working mode
  • persistence_timeout 60: connection persistence time (seconds); commented out with "!" here so round-robin scheduling can be observed; remove the "!" to enable it
  • protocol TCP: the application service uses the TCP protocol
  • real_server 192.168.30.22 80 { ... }: a web node's address and port
  • weight 1: the node's weight
  • TCP_CHECK { ... }: the health-check method
  • connect_port 80: the port to check
  • connect_timeout 3: connection timeout (seconds)
  • nb_get_retry 3: number of retries
  • delay_before_retry 4: interval between retries (seconds)

5. Start the keepalived service

[root@localhost keepalived]# systemctl start keepalived
[root@localhost keepalived]# systemctl enable keepalived
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.
[root@localhost keepalived]# ip addr show dev ens33    // check ens33: the VIP is generated automatically once keepalived starts; no manual configuration is needed
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:2e:3b:31 brd ff:ff:ff:ff:ff:ff
    inet 192.168.30.10/24 brd 192.168.30.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.30.100/32 scope global ens33    ## the VIP address is now visible here
      ... (output truncated)
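
If the VIP does not appear, the service log usually explains why; both commands below are standard systemd tooling:

systemctl status keepalived     # confirm the service is active and see recent log lines
journalctl -u keepalived -e     # jump to the most recent keepalived journal entries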

6. View the load balancing strategy

[root@localhost ~]# ipvsadm -ln   // the policy was added automatically
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.30.100:80 rr
  -> 192.168.30.22:80             Route   1      0          0         
  -> 192.168.30.33:80             Route   1      0          0         

II. Configure the backup scheduler

1. Adjust the /proc response parameters

[root@localhost ~]# vi /etc/sysctl.conf
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens33.send_redirects = 0  
[root@localhost ~]# sysctl -p  // apply the adjusted settings
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens33.send_redirects = 0

2. Install ipvsadm and keepalived programs

[root@localhost ~]# yum -y install ipvsadm keepalived

3. Clear the load distribution strategy

[root@localhost ~]# ipvsadm -C

4. Adjust the keepalived parameters

[root@localhost keepalived]# cd /etc/keepalived/
[root@localhost keepalived]# cp keepalived.conf keepalived.confbak
[root@localhost keepalived]# vim keepalived.conf
global_defs {
   router_id HA_TEST_R2
}

vrrp_instance VI_1 {
   state BACKUP
   interface ens33
   virtual_router_id 1
   priority 99
   advert_int 1
   authentication {
      auth_type PASS
      auth_pass 123456
   }
   virtual_ipaddress {
      192.168.30.100
   }
}

virtual_server 192.168.30.100 80 {
    delay_loop 15
    lb_algo rr
    lb_kind DR
    ! persistence_timeout 60
    protocol TCP

    real_server 192.168.30.22 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 4
        }
    }
    real_server 192.168.30.33 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 4
        }
    }
}

The following explains the script above. The backup server's configuration is identical to the master's except for three values:

  • router_id HA_TEST_R2: this server's own name
  • state BACKUP: hot-standby state; BACKUP marks the standby server
  • priority 99: lower than the master's 100

All other settings (interface, virtual_router_id, advert_int, authentication, the VIP, and the entire virtual_server section) must match the master so that both schedulers manage the same hot-standby group and server pool.
5. Start the keepalived service

[root@localhost keepalived]# systemctl start keepalived
[root@localhost keepalived]# systemctl enable keepalived
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.
[root@localhost keepalived]# ip addr show dev ens33   // the VIP is not visible here, because this machine is currently the backup
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:e5:5e:bb brd ff:ff:ff:ff:ff:ff
    inet 192.168.30.11/24 brd 192.168.30.255 scope global noprefixroute ens33
     ... (output truncated)

6. View the load balancing strategy

[root@localhost ~]# ipvsadm -ln   // the policy was added automatically
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.30.100:80 rr
  -> 192.168.30.22:80             Route   1      0          0         
  -> 192.168.30.33:80             Route   1      0          0         
[root@localhost ~]# tail -f /var/log/messages  // watch this log to observe failover and load activity

III. Set up the NFS shared storage

[root@localhost ~]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.30.44  netmask 255.255.255.0  broadcast 192.168.30.255
        inet6 fe80::a52a:406e:6512:1c66  prefixlen 64  scopeid 0x20<link>
[root@localhost ~]# route -n   // check the routing table and the gateway
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.30.0    0.0.0.0         255.255.255.0   U     100    0        0 ens33
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
[root@localhost ~]# rpm -q nfs-utils  // check whether nfs-utils is installed
nfs-utils-1.3.0-0.61.el7.x86_64
[root@localhost ~]# rpm -q rpcbind  // check whether rpcbind is installed
rpcbind-0.2.0-47.el7.x86_64
[root@localhost ~]# yum -y install nfs-utils  // confirms it is already installed
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
Package 1:nfs-utils-1.3.0-0.61.el7.x86_64 already installed and latest version
Nothing to do
[root@localhost ~]# yum -y install rpcbind  // install the RPC service
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
Package rpcbind-0.2.0-47.el7.x86_64 already installed and latest version
Nothing to do
[root@localhost ~]# systemctl start nfs  // start nfs
[root@localhost ~]# systemctl enable nfs   // enable nfs at boot
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
[root@localhost ~]# systemctl start rpcbind
[root@localhost ~]# systemctl enable rpcbind
[root@localhost ~]# vi /etc/exports   // define the shared directories
/opt/web1 192.168.30.0/24(rw,sync)
/opt/web2 192.168.30.0/24(rw,sync)
[root@localhost ~]# systemctl restart nfs
[root@localhost ~]# systemctl restart rpcbind
[root@localhost ~]# showmount -e  // list the exported directories
Export list for localhost.localdomain:
/opt/web2 192.168.30.0/24
/opt/web1 192.168.30.0/24
[root@localhost web2]# exportfs -vr
exporting 192.168.30.0/24:/opt/web2
exporting 192.168.30.0/24:/opt/web1
[root@localhost ~]# mkdir /opt/web1/ /opt/web2/
[root@localhost ~]# vi /opt/web1/index.html   // create web1's test page
<html>
<head><title>I'm Web1</title></head>
<body><h1>I'm Web1</h1><img src="web1.jpg" /></body>
</html>
[root@localhost ~]# vi /opt/web2/index.html   // create web2's test page
<html>
<head><title>I'm Web2</title></head>
<body><h1>I'm Web2</h1><img src="web2.png" /></body>
</html>

IV. Configure the web1 server

1. Add the VIP address on the lo:0 virtual interface

[root@localhost ~]# cd /etc/sysconfig/network-scripts/
[root@localhost network-scripts]# cp ifcfg-lo ifcfg-lo:0
[root@localhost network-scripts]# vi ifcfg-lo:0
DEVICE=lo:0
IPADDR=192.168.30.100
NETMASK=255.255.255.255
ONBOOT=yes
[root@localhost network-scripts]# ifup lo:0   // bring up the lo:0 interface
[root@localhost network-scripts]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.30.33  netmask 255.255.255.0  broadcast 192.168.30.255
... (output truncated)
lo:0: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 192.168.30.100  netmask 255.255.255.255

2. Adjust the /proc response parameters

The arp_ignore/arp_announce settings stop the real server from answering or advertising ARP for the VIP configured on lo:0, so that only the active scheduler responds to ARP requests for the VIP.

[root@localhost network-scripts]# vi /etc/sysctl.conf
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
[root@mysql2 network-scripts]# sysctl -p   // apply the parameters
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2

3. Set up local routing

[root@localhost network-scripts]# vi /etc/rc.local  // make the route persistent at boot
/sbin/route add -host 192.168.30.100 dev lo:0  // add a host (direct) route for the VIP via lo:0
[root@localhost network-scripts]# route add -host 192.168.30.100 dev lo:0
[root@mysql2 network-scripts]# route -n  // check the routing table: the VIP route was added successfully
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.30.11   0.0.0.0         UG    100    0        0 ens33
192.168.30.0    0.0.0.0         255.255.255.0   U     100    0        0 ens33
192.168.30.100  0.0.0.0         255.255.255.255 UH    0      0        0 lo
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0

4. Mount the NFS shared storage

[root@localhost ~]# yum -y install nfs-utils
[root@localhost ~]# showmount -e 192.168.30.44   // if nothing is listed, the NFS exports may not be published; run exportfs -rv on the NFS server and try again
Export list for 192.168.30.44:
/opt/web2 192.168.30.0/24
/opt/web1 192.168.30.0/24
[root@mysql2 ~]# yum -y install httpd
[root@mysql2 ~]# systemctl start httpd
[root@mysql2 ~]# systemctl enable httpd
[root@localhost html]# vi /etc/fstab
192.168.30.44:/opt/web1 /var/www/html nfs defaults,_netdev 0 0
[root@localhost html]# mount 192.168.30.44:/opt/web1 /var/www/html/

5. Verify the mount
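
A quick local check that httpd is really serving the page from the NFS share (assuming curl is installed; a browser works just as well):

curl http://localhost/     # should print the "I'm Web1" page created on the NFS server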

V. Configure the web2 server

1. Add the VIP address on the lo:0 virtual interface

[root@localhost html]# cd /etc/sysconfig/network-scripts/
[root@localhost network-scripts]# cp ifcfg-lo ifcfg-lo:0
[root@localhost network-scripts]# vi ifcfg-lo:0
DEVICE=lo:0
IPADDR=192.168.30.100
NETMASK=255.255.255.255
ONBOOT=yes
[root@localhost network-scripts]# systemctl restart network
[root@localhost network-scripts]# ifconfig
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.30.22  netmask 255.255.255.0  broadcast 192.168.30.255
... (output truncated)
lo:0: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 192.168.30.100  netmask 255.255.255.255

2. Adjust /proc response parameters

[root@localhost network-scripts]# vi /etc/sysctl.conf
######## insert the settings below to resolve the ARP mapping problem
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
[root@mysql2 network-scripts]# sysctl -p   // apply the settings
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2

3. Set up local routing

[root@localhost network-scripts]# vi /etc/rc.local  // make the route persistent at boot
/sbin/route add -host 192.168.30.100 dev lo:0   // add a host (direct) route for the VIP via lo:0
[root@localhost network-scripts]# route add -host 192.168.30.100 dev lo:0
[root@mysql2 network-scripts]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.30.11   0.0.0.0         UG    100    0        0 ens33
192.168.30.0    0.0.0.0         255.255.255.0   U     100    0        0 ens33
192.168.30.100  0.0.0.0         255.255.255.255 UH    0      0        0 lo
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0

4. Mount the NFS shared storage

[root@localhost ~]# yum -y install nfs-utils
[root@localhost ~]# showmount -e 192.168.30.44   // if nothing is listed, re-publish the exports on the NFS server with exportfs -rv
Export list for 192.168.30.44:
/opt/web2 192.168.30.0/24
/opt/web1 192.168.30.0/24
[root@mysql2 ~]# yum -y install httpd
[root@mysql2 ~]# systemctl start httpd
[root@mysql2 ~]# systemctl enable httpd
[root@localhost html]# vi /etc/fstab
192.168.30.44:/opt/web2 /var/www/html nfs defaults,_netdev 0 0
[root@localhost html]# mount 192.168.30.44:/opt/web2 /var/www/html/

5. Verify the mount

The same curl check as on web1 should now return the "I'm Web2" page.

VI. Cluster test

1. Test LVS round-robin scheduling: access the VIP twice and check that the load is distributed normally; round-robin means successive requests return each web server's page in turn
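
For example, from any client on the 192.168.30.0/24 network (assuming curl is available; with persistence commented out as configured above, successive requests alternate):

curl http://192.168.30.100/   # returns one node's page, e.g. "I'm Web1"
curl http://192.168.30.100/   # returns the other node's page, confirming round-robin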

2. Test Keepalived failover

2.1. Access the web page while capturing packets. With both schedulers online, the VRRP advertisements are sent by the master server. Ping the VIP address and check the matching entry in the ARP table: at this point it holds the master's MAC address.

2.2. Stop the keepalived service on the master and test again: the backup server now sends the VRRP advertisements. Ping the VIP address once more and check the ARP table entry: it has switched to the backup server's MAC address.
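
A minimal failover drill under the same assumptions (root shells on both schedulers and a client on the same subnet):

systemctl stop keepalived        # on the master: simulate a scheduler failure
ip addr show dev ens33           # on the backup: the VIP 192.168.30.100 should appear within a few seconds
arp -n | grep 192.168.30.100     # on a client, after pinging the VIP: the MAC now belongs to the backup
systemctl start keepalived       # on the master: with priority 100 it preempts and reclaims the VIP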

Source: blog.csdn.net/CN_LiTianpeng/article/details/108749155