LVS load balancing ------ DR mode + Keepalived

Article Directory

I. Keepalived

(1) What keepalived is

(2) How keepalived works

II. Configuration steps

Step 1: Configure the two DRs

Step 2: Configure the first node server, web1

Step 3: Configure the second node server, web2

Step 4: Client test

Step 5: Deploy keepalived

Step 6: Experimental results

I. Keepalived

(1) What keepalived is

keepalived is cluster-management software that keeps cluster services highly available. Its function is similar to heartbeat: it prevents single points of failure.

1. The three core modules of keepalived:

core: the core module

check: health checking

vrrp: the Virtual Router Redundancy Protocol implementation

2. The three important functions of the keepalived service:

Managing LVS

Health-checking the LVS cluster nodes

Providing high availability for system network services

(2) How keepalived works

1. keepalived is implemented on top of the VRRP protocol. VRRP stands for Virtual Router Redundancy Protocol.

2. VRRP can be thought of as a high-availability protocol for routers: N routers that provide the same function form a group, with one master and multiple backups. The master holds the externally visible VIP (the other machines on the LAN use this VIP as their default gateway) and periodically sends VRRP multicast advertisements. When the backups stop receiving these advertisements, they conclude that the master is down, and a new master is elected from the backups according to VRRP priority. This keeps the router highly available.

3. keepalived consists of three main modules: core, check, and vrrp. The core module is the heart of keepalived; it starts and maintains the main process and loads and parses the global configuration file. The check module performs health checks and includes a variety of common check methods. The vrrp module implements the VRRP protocol.
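As a concrete illustration of the master/backup election described above, a minimal vrrp_instance block in keepalived.conf looks roughly like this (the interface name, router ID, and password here are assumptions, not values from this article's later configuration):

```conf
vrrp_instance VI_1 {
    state MASTER            ! BACKUP on the standby router
    interface ens33         ! interface that carries VRRP advertisements
    virtual_router_id 51    ! must be identical on master and backups
    priority 100            ! highest priority wins the election
    advert_int 1            ! advertisement interval, in seconds
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.100.10      ! the VIP held by whichever router is master
    }
}
```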

II. Configuration steps

Description of the experimental environment:

(1) Prepare four virtual machines: two dispatcher (DR) servers and two node servers.

(2) Deploy LVS and keepalived on the dispatcher servers to provide load balancing and hot-standby failover.

(3) The client host accesses the back-end web server pages through a virtual IP address.

(4) Expected result: when one DR goes down, all services remain accessible as usual.

Role and IP address

DR1 dispatcher server (master) 192.168.100.201

DR2 dispatcher server (backup) 192.168.100.202

Node server web1 192.168.100.221

Node server web2 192.168.100.222

Virtual IP 192.168.100.10

Client test machine (Windows 7) 192.168.100.50

Step 1: Configure the two DRs

(1) Install the ipvsadm and keepalived packages

yum install ipvsadm keepalived -y

(2) Modify the /etc/sysctl.conf file and add the following:

net.ipv4.ip_forward=1
# disable ICMP redirect responses
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens33.send_redirects = 0

sysctl -p     # apply the settings above

(3) Configure the virtual NIC (ens33:0):

1. Note the path: /etc/sysconfig/network-scripts/

2. Copy an existing NIC configuration file and modify it:

cp ifcfg-ens33 ifcfg-ens33:0

vim ifcfg-ens33:0
Delete all the original content and add the following:
DEVICE=ens33:0
ONBOOT=yes
IPADDR=192.168.100.10
NETMASK=255.255.255.0

3. Bring up the virtual NIC:

ifup ens33:0


(4) Write the service startup script under /etc/init.d

1. vim dr.sh — the script reads as follows:

#!/bin/bash
GW=192.168.100.1
VIP=192.168.100.10
RIP1=192.168.100.221
RIP2=192.168.100.222
case "$1" in
start)
        # save current rules so the ipvsadm service can start
        /sbin/ipvsadm --save > /etc/sysconfig/ipvsadm
        systemctl start ipvsadm
        # bind the VIP to the virtual interface with a /32 mask
        /sbin/ifconfig ens33:0 $VIP broadcast $VIP netmask 255.255.255.255 up
        /sbin/route add -host $VIP dev ens33:0
        # create the virtual service (round robin) and add both real servers in DR mode (-g)
        /sbin/ipvsadm -A -t $VIP:80 -s rr
        /sbin/ipvsadm -a -t $VIP:80 -r $RIP1:80 -g
        /sbin/ipvsadm -a -t $VIP:80 -r $RIP2:80 -g
        echo "ipvsadm starting------------------[ok]"
        ;;
stop)
        /sbin/ipvsadm -C
        systemctl stop ipvsadm
        ifconfig ens33:0 down
        route del $VIP
        echo "ipvsadm stopped-------------------[ok]"
        ;;
status)
        if [ ! -e /var/lock/subsys/ipvsadm ]; then
                echo "ipvsadm stopped-------------------"
                exit 1
        else
                echo "ipvsadm running-------------------[ok]"
        fi
        ;;
*)
        echo "Usage: $0 {start|stop|status}"
        exit 1
esac
exit 0
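The `-s rr` option in the script above selects round-robin scheduling, so LVS hands successive new connections to the real servers in turn. A toy sketch of that rotation (this only illustrates the scheduling idea, it is not LVS itself):

```shell
#!/bin/bash
# Simulate how the rr scheduler rotates new connections
# across the two real servers defined in dr.sh.
RIP1=192.168.100.221
RIP2=192.168.100.222
servers=("$RIP1" "$RIP2")
for i in 1 2 3 4; do
    # connection i is assigned to server (i-1) mod 2
    echo "connection $i -> ${servers[$(( (i - 1) % 2 ))]}"
done
```

Refreshing the test page from the client therefore alternates between web1's and web2's pages, which is why the two test pages are given different contents below.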

2. Add execute permission and start the script:

chmod +x dr.sh
service dr.sh start


(5) The second DR is configured exactly the same as the first; simply repeat the steps above.

Step 2: Configure the first node server, web1

(1) Install httpd

yum install httpd -y

systemctl start httpd.service   # start the service

(2) Write a test page for the site, to make verifying the results easier later

Path: /var/www/html
echo "this is accp web" > index.html

(3) Create a virtual network interface

1. Path: /etc/sysconfig/network-scripts/

2. Copy the loopback configuration file and modify it:
cp ifcfg-lo ifcfg-lo:0

3. vim ifcfg-lo:0
Delete all the original content and add the following:

DEVICE=lo:0
IPADDR=192.168.100.10
# in DR mode the VIP on lo:0 must use a /32 mask so it is not treated as a local subnet
NETMASK=255.255.255.255
ONBOOT=yes

(4) Write the service startup script under /etc/init.d

1. vim web.sh — the script reads as follows:

#!/bin/bash
VIP=192.168.100.10
case "$1" in
start)
        ifconfig lo:0 $VIP netmask 255.255.255.255 broadcast $VIP
        /sbin/route add -host $VIP dev lo:0
        # suppress ARP for the VIP so only the DR answers ARP requests for it
        echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
        echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
        sysctl -p > /dev/null 2>&1
        echo "RealServer Start OK"
        ;;
stop)
        ifconfig lo:0 down
        route del $VIP > /dev/null 2>&1
        # restore default ARP behavior
        echo "0" > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo "0" > /proc/sys/net/ipv4/conf/lo/arp_announce
        echo "0" > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo "0" > /proc/sys/net/ipv4/conf/all/arp_announce
        echo "RealServer Stopped"
        ;;
*)
        echo "Usage: $0 {start|stop}"
        exit 1
esac
exit 0
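The four echo lines in the start branch are what stop the real servers from answering ARP requests for the VIP: arp_ignore=1 makes the kernel reply to an ARP request only if the target address is configured on the interface the request arrived on (so the VIP on lo:0 is never advertised), and arp_announce=2 makes outgoing ARP always use the sending interface's own address as the source. An equivalent persistent form in /etc/sysctl.conf would be (a sketch):

```conf
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
```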

2. Add execute permission and run:

chmod +x web.sh           # add execute permission
service web.sh start      # start the service

(5) Bring up the virtual interface

ifup lo:0

(6) Test whether the page displays correctly


Step 3: Configure the second node server, web2

The second web server is configured exactly like the first. The only difference, so that the two nodes can be told apart in the experiment, is the content of the test page on the second machine:

Path: /var/www/html
echo "this is benet web" > index.html

Verify that the test page displays normally.


Step 4: Client test

(1) Configure the client's IP address


(2) Test

1. Verify that the client can reach 192.168.100.10:


2. Verify that the web page is accessible through the VIP; refreshing should alternate between the two test pages.


Step 5: Deploy keepalived

1. Deploy on the first DR:

(1) Modify the keepalived.conf file in the directory /etc/keepalived/

Modify the router ID, the VRRP instance settings (state MASTER, interface, priority, and the virtual IP), and the virtual_server / real_server sections.
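The original post showed these changes as screenshots; a representative keepalived.conf for the master DR, matching this article's addresses, is sketched below. The router ID, auth password, and health-check timings are assumptions, not values confirmed by the article:

```conf
! Configuration File for keepalived

global_defs {
    router_id LVS_01                 ! unique name for this DR
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 10             ! must match DR2
    priority 100                     ! higher than DR2
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.100.10
    }
}

virtual_server 192.168.100.10 80 {
    delay_loop 6
    lb_algo rr                       ! round robin, matching dr.sh
    lb_kind DR                       ! direct routing mode
    protocol TCP

    real_server 192.168.100.221 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
        }
    }
    real_server 192.168.100.222 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
        }
    }
}
```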

(2) Start the service

systemctl start keepalived.service

2. Deploy on the second DR:

(1) Modify the keepalived.conf file in the same way; the only differences from DR1 are the router ID, the state (BACKUP instead of MASTER), and a lower priority.
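In other words, DR2's file differs from DR1's only in the lines below (again a sketch; every other setting stays identical to DR1):

```conf
global_defs {
    router_id LVS_02        ! different from DR1
}

vrrp_instance VI_1 {
    state BACKUP            ! MASTER on DR1
    priority 90             ! lower than DR1's 100
}
```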

(2) Start the service

systemctl start keepalived.service

Step 6: Experimental results

With LVS and keepalived deployed, we have achieved the goal: load balancing plus hot standby.

Now we simulate a failure by taking down DR1. If the client can still reach the virtual IP address and access the website, then DR2 has taken over DR1's work, and the single point of failure has been eliminated.

(1) Simulate the fault: take down DR1's virtual interface

ifdown ens33:0

(2) Verify the results

1. The client can still ping the virtual IP:


2. The website is still accessible:



Source: blog.51cto.com/14557584/2469220