Keepalived + LVS hot-standby high-availability cluster

Lab environment

This builds on an LVS load-balancing cluster. You don't strictly have to have one, but here I demonstrate on top of LVS in DR mode; I'll explain LVS itself in detail when I do the dedicated LVS experiments later.

This experiment starts from the setup in a previous blog post, which was already tested and accessible; the link is below:

https://blog.csdn.net/weixin_45308292/article/details/102485109

Since this is a lab environment and we only need to see the effect, NFS storage is unnecessary: just put a different page on each web host. In a real environment you would, of course, use NFS storage.

So you can restore the NFS storage host from that post to its original snapshot. VM 1, with IP 192.168.100.105, will serve as the backup scheduler.

1. Configure the master scheduler

Operate on 192.168.100.102.

1) Install keepalived and ipvsadm

Mount the installation DVD and set up the local yum repository, then:
[root@CentOS7-02 ~]# yum -y install keepalived ipvsadm
[root@CentOS7-02 ~]# systemctl enable keepalived

2) Configure the master's keepalived configuration file

[root@CentOS7-02 ~]# systemctl stop firewalld
[root@CentOS7-02 ~]# cd /etc/keepalived/
[root@CentOS7-02 keepalived]# cp keepalived.conf keepalived.conf.bak    # make a backup copy
[root@CentOS7-02 keepalived]# vim keepalived.conf
Be sure to edit it exactly as in my screenshot.
(screenshot: the edited section at the top of the master's keepalived.conf)
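Since the screenshot itself is gone, here is a minimal sketch of what that section typically looks like for the VRRP master; the router_id, virtual_router_id, priority, and auth_pass values are illustrative assumptions, while the interface ens33 and the VIP 192.168.100.222 come from this post:

! Configuration File for keepalived
global_defs {
    router_id HA_R1                  # this node's name; illustrative value
}
vrrp_instance VI_1 {
    state MASTER                     # this node starts as the master
    interface ens33                  # interface that carries VRRP traffic
    virtual_router_id 1              # must match on master and backup; illustrative
    priority 100                     # higher than the backup's priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111               # must match on both nodes; illustrative
    }
    virtual_ipaddress {
        192.168.100.222              # the floating VIP
    }
}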

Then scroll to the end of the file and append the new content shown below.

Note: the third line in my screenshot is wrong; that line matters a lot and must read lb_algo rr.

I've typed it out below so you can copy it directly; this version is correct.
virtual_server 192.168.100.222 80 {
    delay_loop 15                    # health-check interval in seconds
    lb_algo rr                       # round-robin scheduling
    lb_kind DR                       # direct-routing mode
    ! persistence_timeout 60         # the leading ! comments this out: persistence disabled so the page alternation is visible
    protocol TCP
    real_server 192.168.100.103 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 4
        }
    }
    real_server 192.168.100.104 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 4
        }
    }
}

Save and exit.

Be sure to restart the service, as follows:
[root@CentOS7-02 ~]# systemctl restart keepalived
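As an optional quick check, confirm the service came up; note the VIP will not actually serve requests until the conflicting LVS configuration is cleared in section 4 below:
[root@CentOS7-02 ~]# systemctl status keepalived
[root@CentOS7-02 ~]# ip addr show ens33    # 192.168.100.222 should be listed as a secondary address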

2. Configure the backup scheduler

Operate on 192.168.100.105.

1) Install the packages

[root@centos7-05 ~]# yum -y install keepalived ipvsadm
[root@centos7-05 ~]# systemctl enable keepalived

2) Configure the backup scheduler's keepalived configuration file

[root@centos7-05 ~]# vim /etc/keepalived/keepalived.conf
Change it as follows.
(screenshot: the backup's edited global_defs / vrrp_instance section)
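Again, with the screenshot gone, here is a sketch of the usual differences from the master (same illustrative caveats as before): a different router_id, state BACKUP, and a lower priority; everything else, including virtual_router_id, auth_pass, and the VIP, stays identical:

global_defs {
    router_id HA_R2                  # must differ from the master's; illustrative
}
vrrp_instance VI_1 {
    state BACKUP                     # this node is the hot standby
    interface ens33
    virtual_router_id 1              # same as on the master
    priority 99                      # lower than the master's 100; illustrative
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111               # same as on the master
    }
    virtual_ipaddress {
        192.168.100.222
    }
}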

Then scroll to the end of this file as well and append the marked part: you can copy the virtual_server block straight from the master scheduler (the block typed out above).

Because I copied it from the master scheduler, the third line here carries the same mistake: it must read lb_algo rr.

Be sure to correct it, or the configuration will be wrong.
Restart the service:
[root@centos7-05 ~]# systemctl restart keepalived

3. Configure the web server nodes

If you were building this keepalived hot-standby setup from scratch, you would have to perform the following three operations on each web node; all three are covered in the blog post linked above, and a sketch of them follows this list.

1. Adjust the /proc ARP response parameters of the system

2. Configure the VIP on the virtual interface lo:0

3. Add a local route to the VIP

Because I am building on top of my previous post, these are already in place here, so you can skip them.

4. My last post mounted NFS storage; here we skip NFS, so it is enough that the two pages differ.

5. On 103 and 104, replace index.html with different content on each node, then start the httpd service:
[root@centos7-03 html]# systemctl start httpd
[root@centos7-04 html]# systemctl start httpd
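For completeness, here is a sketch of the three operations from the list above, assuming VIP 192.168.100.222 and run on each web node; these are the standard ARP settings for LVS/DR real servers, and the exact parameter lines may differ slightly from the linked post:

# 1. ARP kernel parameters: append to /etc/sysctl.conf, then apply with sysctl -p
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2

# 2. VIP on the lo:0 virtual interface (a /32 so the node answers for the VIP without ARPing for it)
ifconfig lo:0 192.168.100.222 netmask 255.255.255.255

# 3. local route to the VIP
route add -host 192.168.100.222 dev lo:0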

4. Clear conflicting configuration items

If you try to access the VIP address now, it will definitely fail.

keepalived has its own built-in LVS clustering function, while the earlier setup assigned the LVS rules manually with ipvsadm, so the two conflict.

So I delete everything below. If you are building keepalived high availability directly from scratch, none of these items would have been added in the first place.
1) Remove the ens33:0 interface on the master scheduler

In plain LVS, the VIP is configured on an interface, but here the VIP is a floating (drift) address that keepalived moves between schedulers, so it must not be statically configured on any interface.

Go to 192.168.100.102:
[root@CentOS7-02 ~]# cd /etc/sysconfig/network-scripts/
[root@CentOS7-02 network-scripts]# rm -rf ifcfg-ens33:0
[root@CentOS7-02 network-scripts]# systemctl restart network
[root@CentOS7-02 ~]# ifconfig    # the ens33:0 device should no longer appear

2) Clear the ipvsadm policy on the master scheduler

keepalived's built-in clustering already covers this: the configuration sections added above on the two schedulers are in fact defining the DR cluster themselves.

[root@CentOS7-02 ~]# ipvsadm -C    # clear all policies

If you build keepalived high availability directly, do not create any ipvsadm policy at all.
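If you like, you can confirm the table is now empty; once keepalived is running again it rebuilds the LVS rules from keepalived.conf on its own:
[root@CentOS7-02 ~]# ipvsadm -ln    # no virtual servers should be listed until keepalived recreates them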

3) Clear the /proc response parameters on the master scheduler

[root@CentOS7-02 ~]# vim /etc/sysctl.conf
Delete the three parameter lines added in the earlier post, then apply the change:
[root@CentOS7-02 ~]# sysctl -p
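For reference, on an LVS/DR director those three lines are typically the ICMP redirect settings shown below; this is an assumption based on common DR setups, so check the linked post for the exact lines:

# assumed content of the three lines to delete from /etc/sysctl.conf
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens33.send_redirects = 0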

5. Verify keepalived high availability

Go to the master scheduler 192.168.100.102 and check where the floating address currently lives, as in the screenshot:
(screenshot: the floating VIP shown on the master scheduler)
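The exact command from the screenshot is not recoverable here; ip addr is the usual way to check, since the VIP that keepalived adds is a secondary address and does not show up as a separate device in plain ifconfig:
[root@CentOS7-02 ~]# ip addr show ens33    # 192.168.100.222 should appear while this node holds MASTER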
Open a Win7 client; the machine used for verification in the earlier post also works, at 192.168.100.66 on VMnet1.

Browser access http://192.168.100.222 (VIP address is drift address)

The 192.168.100.103 page is displayed first, as shown:
(screenshot: browser showing the 192.168.100.103 page at http://192.168.100.222)
Then shut down the master scheduler, 192.168.100.102.

Then close the browser, reopen it, and visit http://192.168.100.222 again.

The 192.168.100.104 page should now be displayed, and access still works normally.

This step verifies both load balancing and keepalived high availability: with the master scheduler shut down, the site is still reachable normally, and the two pages still alternate.

(screenshot: browser showing the 192.168.100.104 page)
Go to the backup scheduler 192.168.100.105 and look at the floating address: it has moved over.
(screenshot: the floating VIP now present on the backup scheduler)

Experiment completed
