LVS+keepalived load balancing + high availability cluster

1. Overview

1. Why Keepalived

In enterprise applications, a single server carries the risk of a single point of failure.

If that single point fails, the enterprise's services are interrupted, causing serious damage to the business.

2. Introduction to Keepalived tool

A health-check tool designed specifically for LVS and HA clusters

Support automatic failover (Failover)

Support node health check (Health Checking)

Official website: http://www.keepalived.org/

3. Analysis of Keepalived implementation principle

1. Keepalived uses the VRRP hot-standby protocol

It provides multi-machine hot backup for Linux servers

2. VRRP (Virtual Router Redundancy Protocol) is a redundancy solution for routers

A hot-standby group consists of multiple routers that provide service through one shared virtual IP (VIP) address

Within each hot-standby group, only one master router provides service at any time; the other routers stand by in a redundant state

If the current master fails, another router automatically takes over the VIP according to its configured priority and continues to provide service

3. Keepalived supports multi-machine hot backup, and each hot-standby group can contain multiple servers
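VRRP's takeover rule is simple: among the live routers in a hot-standby group, the one with the highest configured priority becomes the master. A toy shell sketch of that election, using the DR1/DR2 priorities from this article (illustrative only, not keepalived code):

```shell
# Toy illustration of VRRP master election: among the live routers,
# the one with the highest priority wins. Hypothetical helper, not keepalived code.
elect_master() {
  # args: "name:priority" pairs; prints the name with the highest priority
  printf '%s\n' "$@" | sort -t: -k2 -rn | head -n 1 | cut -d: -f1
}

elect_master DR1:100 DR2:90   # both alive: DR1 (priority 100) becomes master
elect_master DR2:90           # DR1 has failed: DR2 takes over the VIP
```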

2. Experiment

Experiment introduction

Case: load balancing + high-availability cluster
1. Keepalived's design goal is to build a highly available LVS load-balancing cluster: it can call the ipvsadm tool to create virtual servers and manage the server pool, not just provide dual-node hot backup.
2. Building an LVS cluster with Keepalived is simpler and easier than doing it by hand
3. Main advantages
Hot-standby failover of the LVS load scheduler, improving availability
Health checks on the nodes in the server pool: failed nodes are removed automatically and re-added after they recover
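The health check amounts to attempting a TCP connection to each node and giving up quickly on timeout, as the TCP_CHECK blocks later in this article configure. A minimal shell sketch of the idea, assuming bash's /dev/tcp feature and coreutils timeout:

```shell
# Minimal sketch in the spirit of keepalived's TCP_CHECK (not keepalived code).
# Assumes bash (/dev/tcp) and coreutils timeout; returns 0 if the port answers.
tcp_check() {
  local host=$1 port=$2
  timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null
}

# Example: tcp_check 192.168.100.201 80 && echo "node up" || echo "node down"
```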

Case topology
■ An LVS+Keepalived cluster structure includes at least two hot-standby load schedulers and three or more node servers


1. Experimental planning

All five machines use host-only network mode

DR1: 192.168.100.128

DR2: 192.168.100.129

WEB1: 192.168.100.201

WEB2: 192.168.100.202

client: 192.168.100.20

VIP (shared): 192.168.100.10

2. Install the packages and disable the firewall (DR1 and DR2)

setenforce 0 ##put SELinux into permissive mode

iptables -F ##flush the firewall rules

yum install keepalived ipvsadm -y

3. Edit configuration files (DR1 and DR2)

vim /etc/sysctl.conf

net.ipv4.ip_forward=1 ##enable IP forwarding
net.ipv4.conf.all.send_redirects=0 ##disable ICMP redirects
net.ipv4.conf.default.send_redirects=0
net.ipv4.conf.ens33.send_redirects=0

sysctl -p ##apply the settings

4. Edit the network card file (DR1)

##DR1 network card configuration files

cd /etc/sysconfig/network-scripts/

cp -p ifcfg-ens33 ifcfg-ens33:0

vim ifcfg-ens33


vim ifcfg-ens33:0

systemctl restart network ##Restart the network card after editing

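The screenshots of the network card files from the original post are not reproduced here. The ifcfg-ens33:0 file binds the VIP as a sub-interface; a sketch of what it plausibly contains (the device name and VIP come from this article, the rest is an assumption):

```
##Hypothetical reconstruction of ifcfg-ens33:0 on the schedulers (same on DR1 and DR2)
DEVICE=ens33:0
ONBOOT=yes
IPADDR=192.168.100.10        ##the shared VIP
NETMASK=255.255.255.255
```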

5. Edit the network card file (DR2)

cd /etc/sysconfig/network-scripts/

cp -p ifcfg-ens33 ifcfg-ens33:0

vim ifcfg-ens33


vim ifcfg-ens33:0

systemctl restart network ##Restart the network card after editing


6. Create the LVS service script (DR1 and DR2)

cd /etc/init.d/

vi dr.sh

#!/bin/bash
GW=192.168.100.1
VIP=192.168.100.10
RIP1=192.168.100.201
RIP2=192.168.100.202
case "$1" in
start)
    /sbin/ipvsadm --save > /etc/sysconfig/ipvsadm
    systemctl start ipvsadm
    /sbin/ifconfig ens33:0 $VIP netmask 255.255.255.255 broadcast $VIP up
    /sbin/route add -host $VIP dev ens33:0
    /sbin/ipvsadm -A -t $VIP:80 -s rr
    /sbin/ipvsadm -a -t $VIP:80 -r $RIP1:80 -g
    /sbin/ipvsadm -a -t $VIP:80 -r $RIP2:80 -g
    echo "ipvsadm starting-------------------[ok]"
    ;;
stop)
    /sbin/ipvsadm -C
    systemctl stop ipvsadm
    ifconfig ens33:0 down
    route del $VIP
    echo "ipvsadm stopped--------------------[ok]"
    ;;
status)
    if [ ! -e /var/lock/subsys/ipvsadm ]; then
        echo "ipvsadm stopped------------------------"
        exit 1
    else
        echo "ipvsadm running--------------------[ok]"
    fi
    ;;
*)
    echo "Usage: $0 {start|stop|status}"
    exit 1
esac
exit 0

chmod +x dr.sh ##make the script executable
service dr.sh start ##run the script
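The -s rr flag in the script selects round-robin scheduling: incoming requests are handed to the real servers in turn. A toy shell model of that rotation (illustrative only; the real scheduling happens inside the kernel's IPVS):

```shell
# Toy round-robin picker mimicking ipvsadm's "-s rr" scheduler (illustrative only).
RS_LIST="192.168.100.201 192.168.100.202"
RR_IDX=0
rr_pick() {                  # sets PICK to the next real server in rotation
  set -- $RS_LIST
  local i=$(( RR_IDX % $# + 1 ))
  RR_IDX=$(( RR_IDX + 1 ))
  eval "PICK=\${$i}"
}

rr_pick; echo "$PICK"        # 192.168.100.201
rr_pick; echo "$PICK"        # 192.168.100.202
rr_pick; echo "$PICK"        # 192.168.100.201 again
```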


7. Install http toolkit (WEB1 and WEB2)

yum install httpd -y

8. Edit network card information (WEB1 and WEB2)

cd /etc/sysconfig/network-scripts/

cp -p ifcfg-lo ifcfg-lo:0

vim ifcfg-lo:0

DEVICE=lo:0
IPADDR=192.168.100.10
NETMASK=255.255.255.255 ##/32 host mask, matching the web.sh script below, so the VIP is bound only to lo
ONBOOT=yes


vim ifcfg-ens33

TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
DEVICE="ens33"
UUID="c6759283-4ba1-4bb4-88c8-8edc352c2017"
ONBOOT="yes"
IPADDR="192.168.100.201" ##on WEB2 use 192.168.100.202
NETMASK="255.255.255.0"
GATEWAY="192.168.100.1"


systemctl restart network ##Restart the network card

9. Create the real-server script (WEB1 and WEB2)

cd /etc/init.d/

vim web.sh

#!/bin/bash
VIP=192.168.100.10
case "$1" in
start)
    ifconfig lo:0 $VIP netmask 255.255.255.255 broadcast $VIP
    /sbin/route add -host $VIP dev lo:0
    echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
    sysctl -p > /dev/null 2>&1
    echo "RealServer Start OK"
    ;;
stop)
    ifconfig lo:0 down
    route del $VIP > /dev/null 2>&1
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_announce
    echo "RealServer Stopped"
    ;;
*)
    echo "Usage: $0 {start|stop}"
    exit 1
esac
exit 0

chmod +x web.sh

service web.sh start
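The four /proc writes in web.sh implement the standard LVS-DR ARP suppression: arp_ignore=1 makes the node answer ARP requests only for addresses configured on the receiving interface (so it stays silent about the VIP bound to lo), and arp_announce=2 makes it announce using the best local source address. The same settings expressed persistently in sysctl.conf style (a sketch of the equivalent configuration):

```
##Equivalent persistent settings for /etc/sysctl.conf (sketch)
net.ipv4.conf.lo.arp_ignore = 1      ##reply to ARP only for addresses on the receiving interface
net.ipv4.conf.lo.arp_announce = 2    ##announce with the best local source address
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
```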

10. Create the web home pages (WEB1 and WEB2)

cd /var/www/html/

echo "<h1>this is kgc web!</h1>" > index.html ##home page content on WEB1

echo "<h1>this is test web!</h1>" > index.html ##home page content on WEB2

systemctl restart httpd ##restart the service


11. Edit the keepalived configuration file (DR1 and DR2)

cd /etc/keepalived/

vim keepalived.conf

global_defs {
    smtp_server 127.0.0.1          ##point to the local host
    router_id LVS_01               ##node name; use a different name on the backup scheduler
}

vrrp_instance VI_1 {
    state MASTER                   ##set to BACKUP on the backup scheduler
    virtual_router_id 10           ##same group ID on both schedulers
    priority 100                   ##the backup's priority must be lower than the master's
    ...
    authentication {
        auth_type PASS
        auth_pass 1111             ##password; no need to change
    }
    virtual_ipaddress {
        192.168.100.10             ##the virtual (VIP) address
    }
}

virtual_server 192.168.100.10 80 {
    lb_kind DR                     ##LVS mode

    real_server 192.168.100.201 80 {
        weight 1
        TCP_CHECK {
            connect_port 80        ##port to check
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.100.202 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

scp keepalived.conf [email protected]:/etc/keepalived/

vim keepalived.conf ##on the other scheduler, modify the copied file as follows

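The screenshot of DR2's edits is not reproduced here. Per the comments in the config above, the copy on DR2 needs only a different node name, the BACKUP role, and a lower priority; a sketch (the specific name and priority value are assumptions):

```
##On DR2, change only these lines in /etc/keepalived/keepalived.conf:
router_id LVS_02      ##a different node name (LVS_02 is an assumed name)
state BACKUP          ##hot-standby role
priority 90           ##lower than the master's 100 (90 is an assumed value)
```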

12. Open the keepalived service and restart the network card (DR1 and DR2)

systemctl start keepalived.service

service network restart

13. Test (WIN10)


3. Troubleshooting

Fixing the "Failed to start LSB: Bring up/down networking" problem

1. The following error occurs when executing systemctl restart network

Restarting network (via systemctl): Job for network.service failed. See 'systemctl status network.service' and 'journalctl -xn' for details.


2. According to the above prompt, execute systemctl status network.service to output the following similar information

[root@localhost ~]# systemctl status network.service

network.service - LSB: Bring up/down networking

Loaded: loaded (/etc/rc.d/init.d/network)

Active: failed (Result: exit-code) since Wed 2014-11-05 15:30:10 CST; 1min 5s ago

Nov 05 15:30:10 localhost.localdomain network[2920]: RTNETLINK answers: File exists

Nov 05 15:30:10 localhost.localdomain network[2920]: RTNETLINK answers: File exists

Nov 05 15:30:10 localhost.localdomain network[2920]: RTNETLINK answers: File exists

Nov 05 15:30:10 localhost.localdomain network[2920]: RTNETLINK answers: File exists

Nov 05 15:30:10 localhost.localdomain network[2920]: RTNETLINK answers: File exists

Nov 05 15:30:10 localhost.localdomain network[2920]: RTNETLINK answers: File exists

Nov 05 15:30:10 localhost.localdomain network[2920]: RTNETLINK answers: File exists

Nov 05 15:30:10 localhost.localdomain systemd[1]: network.service: control process exited, code=exited status=1

Nov 05 15:30:10 localhost.localdomain systemd[1]: *Failed to start LSB: Bring up/down networking.*

Nov 05 15:30:10 localhost.localdomain systemd[1]: Unit network.service entered failed state.

3. Solution: restart NetworkManager so the connection comes up automatically

systemctl stop NetworkManager

systemctl enable NetworkManager

systemctl start NetworkManager

service network restart

4. If the above does not solve the problem, modify the configuration file

##Comment out the duplicate-address check; search the file for "arping" to locate it quickly

vim /etc/sysconfig/network-scripts/ifup-eth


5. Save and exit, restart the network card

service network restart


Origin blog.csdn.net/weixin_39608791/article/details/108375488