LVS+Keepalived cluster deployment

I. Keepalived

1. Case study

  • In an enterprise, a single application server is a single point of failure
  • If that single point fails, business services are interrupted, which poses a serious risk

2. Tool introduction

  • A health-check tool designed specifically for LVS and high availability (HA)
  • Supports automatic failover (Failover)
  • Supports node health checking (Health Checking) to determine the availability of the LVS load scheduler and the node servers. When the master host fails, the service is switched to the backup node in time so that business continues normally; when the failed master recovers, it rejoins the cluster and the service is switched back to the master node.
  • Official website: http://www.keepalived.org/

3. Anatomy of experimental principles

  • Keepalived uses the VRRP hot-backup protocol to provide multi-machine hot backup for Linux servers.
  • VRRP (Virtual Router Redundancy Protocol) is a backup (redundancy) solution for routers.
  • Multiple routers form a hot-standby group and provide services externally through a shared virtual IP address.
  • At any given time, only one master router in each hot-standby group provides services; the other routers remain in a redundant (standby) state.
  • If the currently active router fails, the other routers automatically take over the virtual IP address according to their configured priorities and continue to provide services (see the capture sketch below).
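
One way to see this mechanism in action is to capture the master's periodic VRRP advertisements, which are sent to the multicast address 224.0.0.18 (a sketch, assuming the hot-standby interface ens33 used in the deployment below):

tcpdump -i ens33 host 224.0.0.18    #the current MASTER advertises roughly once per second (advert_int 1)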

4. Installation and startup

  • When used in an LVS cluster environment, the ipvsadm management tool is also required.
  • Install Keepalived with YUM
  • Enable and start the Keepalived service (example commands below)
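
A minimal sketch of these two steps on CentOS 7 (the deployment walkthrough below repeats them with full configuration):

yum install -y keepalived             #install Keepalived from the YUM repository
systemctl start keepalived.service    #start the service
systemctl enable keepalived.service   #start automatically at boot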

II. LVS+Keepalived high-availability cluster deployment

1. Set up the environment:

Primary DR server (load scheduler) (centos7-5): 192.168.174.135
Standby DR server (load scheduler) (centos7-4): 192.168.174.136
Web server 1 (centos7-6): 192.168.174.133
Web server 2 (centos7-7): 192.168.174.134
VIP: 192.168.174.132
Windows10 client: 192.168.174.100

2. LVS deployment

1. Configure the load scheduler (identical on the primary and the backup)
systemctl stop firewalld.service
systemctl disable firewalld.service
setenforce 0

yum install -y ipvsadm
modprobe ip_vs #Load ip_vs module
cat /proc/net/ip_vs #View ip_vs version information

vim /etc/sysctl.conf
net.ipv4.ip_forward = 0                  #turn off IP forwarding
net.ipv4.conf.all.send_redirects = 0     #do not send ICMP redirect messages
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens33.send_redirects = 0

sysctl -p

cd /etc/sysconfig/network-scripts/
cp ifcfg-ens33 ifcfg-ens33:0

vim ifcfg-ens33:0
DEVICE=ens33:0
ONBOOT=yes
IPADDR=192.168.174.132
NETMASK=255.255.255.255

systemctl restart network
ifup ens33:0
ifconfig ens33:0

ipvsadm-save > /etc/sysconfig/ipvsadm
or
ipvsadm --save > /etc/sysconfig/ipvsadm

systemctl start ipvsadm.service

ipvsadm -C #Clear the original policy
ipvsadm -A -t 192.168.174.132:80 -s rr
ipvsadm -a -t 192.168.174.132:80 -r 192.168.174.133:80 -g #If it is tunnel mode, replace -g with -i
ipvsadm -a -t 192.168.174.132:80 -r 192.168.174.134:80 -g

ipvsadm -ln #View node status, Route represents DR mode

3. Configure the node server

systemctl stop firewalld.service
systemctl disable firewalld.service
setenforce 0
yum -y install httpd
systemctl start httpd

-----------192.168.174.133-----------
echo 'this is lmx web!' > /var/www/html/index.html

-----------192.168.174.134-----------
echo 'this is hqq web!' > /var/www/html/index.html
vim /etc/sysconfig/network-scripts/ifcfg-lo:0
DEVICE=lo:0
ONBOOT=yes
IPADDR=192.168.174.132
NETMASK=255.255.255.255

service network restart
or
systemctl restart network    #restart the network service
ifup lo:0
ifconfig lo:0
route add -host 192.168.174.132 dev lo:0

vim /etc/sysctl.conf
net.ipv4.conf.lo.arp_ignore = 1 #only reply to ARP requests whose target IP is a local address configured on the incoming interface
net.ipv4.conf.lo.arp_announce = 2 #do not use the IP packet's source address to choose the ARP source address; use the best local address for the sending interface instead
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2

sysctl -p
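
A few optional sanity checks on each node server (a sketch; ifconfig and route come from the net-tools package already used above):

ifconfig lo:0                        #the VIP 192.168.174.132 should be bound to lo:0 with netmask 255.255.255.255
route -n | grep 192.168.174.132      #the host route added above should point out through lo
curl http://127.0.0.1/               #httpd should return the node's own test page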

4. Configure Keepalived

  • Note:
    yum install -y keepalived
    cd /etc/keepalived/
    cp keepalived.conf keepalived.conf.bak
    vim keepalived.conf
    global_defs {                #define global parameters
    ----line 10, modify, point the mail service to the local host----
    smtp_server 127.0.0.1
    ----line 12, modify, specify the server (router) name; the primary and backup names must differ: the primary is LVS_01, the backup is LVS_02----
    router_id LVS_01
    }

vrrp_instance VI_1 {         #define VRRP hot-standby instance parameters
----line 20, modify, specify the hot-standby state: the master is MASTER, the backup is BACKUP----
state MASTER
----line 21, modify, specify the physical interface that carries the VIP address----
interface ens33
----line 22, modify, specify the virtual router ID; it must be the same within a hot-standby group----
virtual_router_id 10
----line 23, modify, specify the priority; the larger the value, the higher the priority. Set the master to 100 and the backup to 99----
priority 100
advert_int 1                 #advertisement interval in seconds (heartbeat frequency)
authentication {             #define authentication information; keep it the same within a hot-standby group
auth_type PASS               #authentication type
----line 27, modify, specify the authentication password; the primary and backup must match----
auth_pass abc123
}
virtual_ipaddress {          #specify the cluster VIP address
192.168.174.132
}
}

----line 36, modify, specify the virtual server address (VIP) and port; this block defines the virtual server and the web server pool parameters----
virtual_server 192.168.174.132 80 {
delay_loop 6                 #health check interval (seconds)
lb_algo rr                   #specify the scheduling algorithm: round robin (rr)
----line 39, modify, specify the cluster working mode: direct routing (DR)----
lb_kind DR
persistence_timeout 50       #connection persistence timeout (seconds)
protocol TCP                 #the application service uses the TCP protocol
----line 43, modify, specify the address and port of the first web node----
real_server 192.168.174.133 80 {
weight 1                     #node weight
----line 45, delete, and add the following health check method----
TCP_CHECK {
connect_port 80              #add the target port to check
connect_timeout 3            #add the connection timeout (seconds)
nb_get_retry 3               #add the number of retries
delay_before_retry 4         #add the retry interval (seconds)
}
}
----add the address and port of the second web node----
real_server 192.168.174.134 80 {
weight 1
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 4
}
}
}
###delete the redundant configuration that follows###
systemctl start keepalived.service
ip addr show dev ens33    #view the VIP; the ens33:0 alias appears as a labeled address on ens33
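
On the standby scheduler (192.168.174.136), the walkthrough above notes that only three lines differ; a minimal sketch of just those lines in its keepalived.conf:

router_id LVS_02     #line 12: a different router name on the backup
state BACKUP         #line 20: hot-standby state is BACKUP instead of MASTER
priority 99          #line 23: lower priority than the master's 100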

5. Test verification

On the Windows 10 client, with its default gateway pointing to 192.168.174.132, visit http://192.168.174.132/.
Then disable the virtual interface on the primary scheduler with ifdown ens33:0 and test again; the backup scheduler should take over the VIP and the site should remain reachable.
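
A sketch of one way to verify this from the command line (assuming curl is available on a Linux host in the same subnet; stopping keepalived is shown here as an alternative failure simulation to the ifdown test above):

curl http://192.168.174.132/          #the test page from one of the two web nodes is returned (persistence_timeout 50 keeps a client on the same node for 50 seconds)
ipvsadm -ln                           #on the active scheduler: connections should appear for both real servers
systemctl stop keepalived.service     #on the primary scheduler: simulate a failure
ip addr show dev ens33                #on the backup scheduler: the VIP 192.168.174.132 should be taken over within a few seconds
curl http://192.168.174.132/          #the site should still be reachable through the backup scheduler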


Origin blog.csdn.net/LI_MINGXUAN/article/details/113208942