This article streamlines the keepalived setup steps from my previous blog post; the practical steps follow directly.
1. Environment
Host: Win10 Pro for Workstations
VMware: 16 Pro (16.1.0)
CentOS 7
Network adapters: all in NAT mode
NIC configuration: static IP addresses
YUM repository: local
Master DR server (load scheduler) (CentOS 7-1): 192.168.126.11
Backup DR server (load scheduler) (CentOS 7-2): 192.168.126.12
Web server 1 (CentOS 7-3): 192.168.126.13
Web server 2 (CentOS 7-4): 192.168.126.14
NFS server (CentOS 7-5): 192.168.126.15
VIP: 192.168.126.166
Win10 client: 192.168.126.10
2. Setup steps
This experiment builds on the LVS-DR load balancing cluster prepared earlier; it only adds a backup scheduler whose configuration matches the master scheduler (apart from its VRRP state and priority).
If you have not built that yet, follow my previous blog post to set up LVS-DR first, then add a backup scheduler with the matching configuration. The link is posted below.
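For reference, a minimal keepalived.conf for the master scheduler might look like the sketch below. The interface name (ens33), router IDs, virtual_router_id, priorities, and password are assumptions for this environment, not values from the original post; adjust them to your own setup.

```
! /etc/keepalived/keepalived.conf on the master (backup differs as noted)
global_defs {
    router_id LVS_01            ! e.g. LVS_02 on the backup scheduler
}

vrrp_instance VI_1 {
    state MASTER                ! BACKUP on the backup scheduler
    interface ens33             ! assumed NIC name
    virtual_router_id 51
    priority 100                ! lower on the backup, e.g. 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111          ! placeholder password
    }
    virtual_ipaddress {
        192.168.126.166         ! the VIP from the environment above
    }
}
```

The only differences between master and backup are `router_id`, `state`, and `priority`; the priority gap is what drives the failover and preemption behavior demonstrated below.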
First, run the same address check on both the master and the backup.
As you can see, only the master holds the VIP.
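The check itself is just `ip addr` plus a grep for the VIP. A minimal sketch, assuming the NIC is named ens33; so it can be demonstrated anywhere, the grep below runs against captured sample output rather than a live interface:

```shell
# On a live node you would run the real check directly:
#   ip addr show dev ens33 | grep '192.168.126.166'

# Sample output captured from the master (assumed; trimmed for brevity):
sample='inet 192.168.126.11/24 brd 192.168.126.255 scope global ens33
inet 192.168.126.166/32 scope global ens33'

if echo "$sample" | grep -q '192.168.126.166'; then
    echo "VIP present on this node"
else
    echo "VIP absent"
fi
```

On the backup, the same grep would find nothing until a failover occurs.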
Then stop keepalived on the master and see what happens.
The VIP drifted to the backup scheduler. We then restarted keepalived on the master scheduler and found that the virtual IP came back; this follows from the master/backup priorities set earlier (with preemption, the higher-priority router reclaims the VIP).
This verifies that the virtual router's IP address (the VIP) can move between the routers in the hot-standby group.
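The preemption behavior can be sketched with a toy owner election (illustrative only, not the real VRRP protocol; the names and the 100/90 priorities are assumptions matching a typical master/backup setup):

```shell
# Toy model of VRRP owner election: among the routers currently alive,
# the one advertising the highest priority holds the VIP.
owner() {
    printf '%s\n' "$@" | sort -t: -k2 -rn | head -n1 | cut -d: -f1
}

owner "master:100" "backup:90"   # both alive: master holds the VIP
owner "backup:90"                # master's keepalived stopped: backup takes over
owner "master:100" "backup:90"   # master restarted: it preempts the VIP back
```

This mirrors the drill above: stopping keepalived on the master removes it from the election, and restarting it restores the original owner.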
Finally, verify access from a browser on the client; note that the client's default gateway does not need to point to the VIP.
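From the Win10 client, the real check is simply `curl http://192.168.126.166/` (or a browser refresh) a few times. Assuming the LVS-DR cluster uses round-robin (rr) scheduling, as is common in these demos, successive requests should alternate between the two web servers; the sketch below models that expected pattern (the `rr` helper is hypothetical, the addresses come from the environment above):

```shell
# rr N: print which real server the Nth request (0-indexed) lands on
# under round-robin scheduling across the two web servers.
rr() {
    idx=$(( $1 % 2 ))
    set -- 192.168.126.13 192.168.126.14
    shift "$idx"
    echo "$1"
}

# What the client should observe over four requests to the VIP:
for req in 0 1 2 3; do
    echo "request $req -> $(rr $req)"
done
```

If the responses alternate like this and keep working after the master scheduler is stopped, both load balancing and hot standby are confirmed.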