LVS + Keepalived installation and deployment

  1. Keepalived's design goal is to build a highly available LVS load-balancing cluster. It calls the ipvsadm tool to create virtual servers and manage the server pool, so it does far more than dual-machine hot standby. Advantages: Keepalived provides hot-standby failover for the LVS load scheduler, improving availability; it health-checks the nodes in the server pool, automatically removes failed nodes, and rejoins them after they recover.
  2. An LVS cluster built with lvs+keepalived contains at least two hot-standby load schedulers. The ipvsadm management tool is still required, but keepalived does most of the work automatically, so there is no need to run ipvsadm by hand except to view and monitor the cluster (a manual-equivalent sketch follows this list).
  3. LVS is a load-balancing project developed for the Linux kernel. It virtualizes a VIP on top of real IP addresses and offers an efficient solution for distributing requests by IP address and by content; it is managed with the ipvsadm tool. By pooling several relatively inexpensive commodity servers, the cluster provides the same service to the outside world at a single address.
  4. Direct routing, the DR working mode of load balancing: the load scheduler serves only as the clients' entry point. The node servers and the scheduler sit on the same physical network, and each node responds to the client directly instead of sending the reply back through the load scheduler.
  5. Keepalived is a powerful auxiliary tool designed specifically for LVS. It mainly provides failover and health checking: it monitors the availability of the LVS load schedulers and node servers, isolates failed hosts in time and replaces them with healthy ones, and rejoins a failed host to the cluster once it recovers.
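For orientation, this is roughly the manual ipvsadm equivalent of what keepalived will generate automatically from its configuration file (DR mode, round-robin, addresses from the table below); with keepalived in place you never need to type these:

ipvsadm -A -t 192.168.153.100:80 -s rr
ipvsadm -a -t 192.168.153.100:80 -r 192.168.153.139:80 -g -w 1
ipvsadm -a -t 192.168.153.100:80 -r 192.168.153.133:80 -g -w 1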

LVS (DR mode, rr scheduling) + Keepalived configuration

Configuration information:

Role               IP               Operating system
LVS-DR-MASTER      192.168.153.140  centos6.5_x64
LVS-DR-BACKUP      192.168.153.131  centos6.5_x64
LVS-DR-VIP         192.168.153.100  centos6.5_x64
WEB1-Realserver    192.168.153.139  centos6.5_x64
WEB2-Realserver    192.168.153.133  centos6.5_x64

Note: the IP addresses above are examples; use the ones from your own environment.
Remember: turn off the firewall and SELinux on every machine.
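On CentOS 6 that can be done like this (the sed line makes the SELinux change permanent; setenforce 0 only lasts until reboot):

service iptables stop
chkconfig iptables off
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config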

Operation begins:

LVS-DR operation:

Install the packages on both the LVS master and the backup:

[root@DB ~]# yum -y install keepalived ipvsadm

Configure the master keepalived:
enter the configuration directory and edit the file

[root@DB ~]# cd /etc/keepalived/
[root@DB keepalived]# vim keepalived.conf

The stock file's relevant content is lines 1-56; the example blocks after line 56 can simply be deleted. The original before/after screenshots are not reproduced here; the line numbers below refer to the file as it is being edited and may drift, so apply each operation with your own judgment:

1. In global_defs: delete the 8 lines starting at line 4 (the notification_email block and the smtp_* settings).

2. In vrrp_instance: under virtual_ipaddress, delete two lines starting at line 18, then change the single remaining line to the VIP address, 192.168.153.100.

3. In virtual_server:
Line 22: change the IP and port to the VIP, 192.168.153.100 80.
Line 25: change the lvs mode (lb_kind) to DR.
Line 27: delete it (in the stock file this is persistence_timeout, which would mask the round-robin effect during testing).
Line 30: change the real server IP and port to 192.168.153.139 80.
Line 31: change the health check to TCP_CHECK, then delete the 8 lines starting at line 32 (the sample url checks), and add one line inside the TCP_CHECK braces (typically connect_port 80).
Finally, copy the 9 lines of the real_server block, paste them just above the file's last closing brace, and change the IP to the second web server, 192.168.153.133.
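For reference, a sketch of the finished master configuration, assuming stock defaults for everything the steps above do not touch (interface eth0, virtual_router_id 51, priority 100, auth_pass 1111) and the addresses from the table:

! Configuration File for keepalived

global_defs {
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.153.100
    }
}

virtual_server 192.168.153.100 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    protocol TCP

    real_server 192.168.153.139 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }

    real_server 192.168.153.133 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}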
That completes the master keepalived configuration file.
Start the service and check whether the VIP came up:

[root@DB keepalived]# service keepalived start
[root@DB keepalived]# ip a

In the ip a output, the VIP 192.168.153.100 should appear as a secondary address on eth0.

Configure the backup keepalived:
simply scp the master's configuration file over to the backup, then change one or two values

[root@DB2 keepalived]# scp 192.168.153.140:/etc/keepalived/keepalived.conf ./

The screenshots showed two edits to the copied file: change state MASTER to state BACKUP, and lower priority so it is below the master's (for example 100 to 90). Everything else stays identical.
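As a one-line sketch of those edits (assuming the stock values above):

[root@DB2 keepalived]# sed -i 's/state MASTER/state BACKUP/; s/priority 100/priority 90/' keepalived.conf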
Finally, start keepalived on the backup:

[root@DB2 keepalived]# service keepalived start

Node web server configuration

Note: the WEB1 and WEB2 operations are identical except for the content of the test page.
The test pages differ only so that the load-balancing effect is visible.

  • In DR mode, each node server must also carry the VIP address, and the kernel's ARP response parameters must be adjusted so the nodes do not answer ARP for the VIP; otherwise the VIP's MAC address would be updated to a node's MAC and cause conflicts.
[root@THREE ~]# cd /etc/sysconfig/network-scripts/
[root@THREE network-scripts]# cp ifcfg-lo ifcfg-lo:0
[root@THREE network-scripts]# vim ifcfg-lo:0

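The screenshots showed the edited file; as a sketch, ifcfg-lo:0 typically ends up like this (the VIP with a host mask, so no extra network route is added):

DEVICE=lo:0
IPADDR=192.168.153.100
NETMASK=255.255.255.255
ONBOOT=yes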
Start and check

[root@THREE network-scripts]# ifup lo:0
[root@THREE network-scripts]# ifconfig lo:0

Success: ifconfig lo:0 reports the VIP 192.168.153.100 with mask 255.255.255.255.

  • Add a local route for the VIP (confine traffic addressed to the VIP to the local host to avoid communication disorder)
[root@THREE ~]# vim /etc/rc.local 

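The edit shown in the screenshots appends the route command to the end of /etc/rc.local so it survives a reboot; the line to add matches the command below:

/sbin/route add -host 192.168.153.100 dev lo:0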
Then run the same command by hand so it takes effect in the current session:

[root@THREE ~]# route add -host 192.168.153.100 dev lo:0
  • Modify kernel parameters
[root@THREE ~]# vim /etc/sysctl.conf

After opening the configuration file, append the following six lines at the bottom:
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2

note:

  • arp_announce = 2: when sending ARP messages on an interface, always use the best local address for the target network rather than the packet's source address, so the node never announces the VIP.
  • arp_ignore = 1: only answer ARP requests whose target IP is a local address configured on the receiving interface; this stops the nodes from responding to ARP requests for the VIP held on lo.
[root@THREE ~]# sysctl -p
  • Install httpd and create a test page for the LVS cluster
[root@THREE ~]# yum -y install httpd
[root@THREE ~]# echo web1 > /var/www/html/index.html
[root@THREE ~]# service httpd start
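On WEB2 the only difference is the test page content:

echo web2 > /var/www/html/index.html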

test:

Visiting the VIP address in a browser shows the load-balancing effect directly: with the rr algorithm, refreshes alternate between the web1 and web2 pages.

By accessing the virtual IP address from outside, you reach the website content on whichever node the scheduler selected; requests are distributed among the nodes according to their weights and the scheduling algorithm. When the master scheduler fails, service switches to the backup scheduler automatically.
To view the current connection distribution, run the following command on the master keepalived:

[root@DB ~]# ipvsadm -lnc
IPVS connection entries
pro expire state       source             virtual            destination
TCP 01:57  FIN_WAIT    192.168.153.1:64206 192.168.153.100:80 192.168.153.139:80
TCP 01:56  FIN_WAIT    192.168.153.1:64199 192.168.153.100:80 192.168.153.139:80
TCP 01:57  FIN_WAIT    192.168.153.1:64204 192.168.153.100:80 192.168.153.139:80
TCP 01:56  FIN_WAIT    192.168.153.1:64201 192.168.153.100:80 192.168.153.133:80
TCP 14:59  ESTABLISHED 192.168.153.1:64209 192.168.153.100:80 192.168.153.139:80
TCP 01:54  FIN_WAIT    192.168.153.1:64195 192.168.153.100:80 192.168.153.133:80
TCP 01:57  FIN_WAIT    192.168.153.1:64205 192.168.153.100:80 192.168.153.133:80
TCP 01:56  FIN_WAIT    192.168.153.1:64202 192.168.153.100:80 192.168.153.139:80
TCP 01:57  FIN_WAIT    192.168.153.1:64203 192.168.153.100:80 192.168.153.133:80
TCP 01:57  FIN_WAIT    192.168.153.1:64207 192.168.153.100:80 192.168.153.133:80
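To see the virtual-server table itself (forwarding mode, scheduler, and weights) rather than individual connections, ipvsadm -ln works too:

[root@DB ~]# ipvsadm -ln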

Test the scheduler:
stop keepalived on the master, then keep visiting http://192.168.153.100;
the page loading unaffected counts as success.
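To simulate the failure, stopping the service on the master is enough:

[root@DB ~]# service keepalived stop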

Check whether the VIP has moved to the backup:

[root@DB2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:c7:ce:80 brd ff:ff:ff:ff:ff:ff
    inet 192.168.153.131/24 brd 192.168.153.255 scope global eth0
    inet 192.168.153.100/32 scope global eth0
    inet6 fe80::20c:29ff:fec7:ce80/64 scope link 
       valid_lft forever preferred_lft forever

The VIP 192.168.153.100 now sits on the backup's eth0, confirming the failover. This is the end of our LVS-DR + Keepalived configuration.

(* ̄︶ ̄)

Origin blog.csdn.net/qq_49296785/article/details/108476658