Linux high availability cluster configuration: LVS + keepalived

########## Configure keepalived ##########
'So far we have fault handling, i.e. health checks'
'But if the dispatcher itself goes down, the whole cluster becomes inaccessible, so high availability is required'

# Bring up another virtual machine (server4) for high availability, and configure its yum source

1. Compile keepalived from source

tar zxf keepalived-2.0.6.tar.gz
yum install openssl-devel -y
yum install libnl -y
yum install libnl-devel -y
yum install libnfnetlink-devel-1.0.0-1.el6.x86_64.rpm -y

./configure --help ## shows --with-init=(upstart|systemd|SYSV|SUSE|openrc) for specifying the init type; systemd is for RHEL 7

./configure --prefix=/usr/local/keepalived --with-init=SYSV ## gcc and openssl-devel must be installed before configuring
# After configure finishes, check that the following option is Yes, indicating IPVS support is enabled
Use IPVS Framework : Yes

make && make install

2. scp the compiled directory to server4
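A minimal sketch of this step, assuming server4 resolves by hostname and the --prefix used above:

scp -r /usr/local/keepalived server4:/usr/local/ ## copy the whole compiled tree to the backup scheduler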

3. Configure the startup script and configuration files # do this on both schedulers (server1 and server4)
chmod +x /usr/local/keepalived/etc/rc.d/init.d/keepalived

ln -s /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/ ## create soft links for the config files and scripts

ln -s /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/

ln -s /usr/local/keepalived/etc/keepalived/ /etc/

ln -s /usr/local/keepalived/sbin/keepalived /sbin/
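A quick sanity check of the links (my own addition, not in the original post):

ls -l /etc/init.d/keepalived /etc/sysconfig/keepalived /etc/keepalived /sbin/keepalived
keepalived -v ## prints the version if the binary is linked correctly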

4. Configure keepalived

# Stop ldirectord, since keepalived provides its own health checks
/etc/init.d/ldirectord stop
chkconfig ldirectord off

1) Delete the VIP on the scheduler nodes (server1 and server4), because keepalived will add it itself

ip addr del 172.25.0.100/24 dev eth0
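To verify the VIP is gone before starting keepalived (a quick check, assuming the interface and VIP above):

ip addr show dev eth0 | grep 172.25.0.100 ## should print nothing once the VIP is removed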

2) Edit the keepalived configuration file on server1 (the master node): /etc/keepalived/keepalived.conf
global_defs {
notification_email {
root@localhost ## who receives mail when a node goes down
}
notification_email_from keepalived@localhost ## sender address
smtp_server 127.0.0.1 ## mail server (localhost)
smtp_connect_timeout 30
router_id LVS_DEVEL
vrrp_skip_check_adv_addr
#vrrp_strict ## comment this out, otherwise there will be problems
vrrp_garp_interval 0
vrrp_gna_interval 0
}

vrrp_instance VI_1 {
state MASTER ## master node
interface eth0
virtual_router_id 51 ## in a shared lab every student must use a different ID, or things will go wrong
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
172.25.0.100 ## VIP
}
}

virtual_server 172.25.0.100 80 {
delay_loop 3 ## interval in seconds between health checks
lb_algo rr
lb_kind DR ## DR mode
#persistence_timeout 50
protocol TCP

real_server 172.25.0.2 80 {
    weight 1
    TCP_CHECK {
        connect_timeout 3
        retry 3
        delay_before_retry 3
    }
}
real_server 172.25.0.3 80 {
    weight 1
    TCP_CHECK {
        connect_timeout 3
        retry 3
        delay_before_retry 3
    }
}

}
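Before starting, the file can be syntax-checked; keepalived 2.x provides a config-test mode (my addition, not in the original post):

keepalived -t -f /etc/keepalived/keepalived.conf ## prints errors and exits non-zero if the config is invalid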

yum install mailx -y ## install the mail command for reading notifications
3) Edit the keepalived configuration file on the backup node (server4)
state BACKUP
priority 50 ## only these two options differ from server1
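For reference, the resulting vrrp_instance block on server4 would look like this (identical to server1 apart from the two lines above):

vrrp_instance VI_1 {
state BACKUP ## backup node
interface eth0
virtual_router_id 51 ## must match the master
priority 50 ## lower than the master's 100
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
172.25.0.100
}
}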

4) Start keepalived (on both server1 and server4)
/etc/init.d/keepalived start
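To confirm keepalived took over (my own checks, assuming the addresses above):

ip addr show eth0 ## on server1 the VIP 172.25.0.100 should now appear
ipvsadm -ln ## the virtual server and real servers are added automatically, no manual ipvsadm rules needed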

5) Test
Access from the physical machine: curl 172.25.0.100
The responses alternate between the real servers (round-robin scheduling)
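A quick loop to watch the round-robin scheduling, assuming each real server serves a distinguishable page:

for i in $(seq 4); do curl -s 172.25.0.100; done ## responses should alternate between server2 and server3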

Check the logs on server1:
vim /var/log/messages
Dec 8 16:44:15 server1 Keepalived_vrrp[7656]: (VI_1) Entering MASTER STATE
Dec 8 16:44:15 server1 Keepalived_vrrp[7656]: (VI_1) Setting VIPs.

Check the logs on server4:
vim /var/log/messages
Dec 8 16:50:26 server4 Keepalived_vrrp[1090]: (VI_1) removing VIPs.
Dec 8 16:50:26 server4 Keepalived_vrrp[1090]: (VI_1) Entering BACKUP STATE (init)

Stop the httpd service on one real server, then access the VIP again from the physical machine (there may be a slight delay); no error is returned.
The failed real server is automatically removed from the ipvsadm rule set.
# In /var/log/messages you can see that after server2 goes down, the scheduler retries the check three times and rejects the server after three failures
# this is the effect of retry 3 in TCP_CHECK (delay_loop 3 sets the interval between checks)
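To watch the real server being removed and re-added in real time (my own addition):

watch -n 1 ipvsadm -ln ## the entry for the failed real server disappears once the retries are exhausted, and returns when httpd is restarted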

Run mail to see the notification message about the server that went down # (if the mail command is missing: yum install -y mailx)
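To verify local mail delivery works at all (a quick check, not from the original post):

echo test | mail -s "keepalived test" root@localhost ## then run mail as root; the message should appear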

# Test VIP failover
Stop the keepalived service on server1 (the master node).
The VIP is removed automatically and drifts to server4;
the service remains accessible, and the logs show the master/backup switchover.

When keepalived is started again on server1, it automatically takes back the VIP and re-enters MASTER state.
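The failover test as concrete commands (a sketch assuming the init scripts above):

/etc/init.d/keepalived stop ## on server1: the VIP drifts to server4
ip addr show eth0 ## on server4: the VIP 172.25.0.100 now appears here
curl 172.25.0.100 ## from the physical machine: the service is still reachable
/etc/init.d/keepalived start ## on server1: it preempts (higher priority) and becomes MASTER again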

Origin: blog.csdn.net/qq_36016375/article/details/94915741