LVS load balancing cluster notes

Chapter 5 LVS Load Balancing Cluster

Learning objectives
1. Understand the structure and working modes of a cluster
2. Learn to configure the NFS shared service
3. Learn to build an LVS load balancing cluster


Overview of cluster technology

*On the Internet, sites have high requirements for hardware performance, response speed, service stability, and data reliability, and a single server cannot bear the entire access load. Common solutions:
. Use a mainframe (expensive)
. Use dedicated load-sharing equipment
. Build a server cluster (integrate multiple relatively cheap ordinary servers to provide service at a single external address)

. LVS (Linux Virtual Server) - the cluster solution most commonly used in enterprises

1. The meaning of "cluster"
* A cluster consists of multiple hosts (at least two node servers), but it appears to the outside world as a single system and provides only one access entrance (a domain name or IP address).

2. Types of clusters
1) Load Balance cluster
* Improves the responsiveness of the application system and obtains high concurrency and high overall load (LB) performance. Examples: DNS round robin, application-layer switching, reverse proxy, etc.
2) High Availability (HA) cluster
* Improves the reliability of the application system. Examples: failover, dual-machine hot standby, multi-machine hot standby.
* Working modes: a. Duplex: all nodes are online at the same time
b. Master-slave: the master node is online, and a slave node automatically takes over when the master node fails.
3) High Performance Computing cluster [less commonly used]
* Improves the CPU computing power of the application system and expands hardware resources and analysis capability. Examples: cloud computing, grid computing; relies on "distributed computing" and "parallel computing".

Several cluster modes can be combined when necessary

3. Load balancing structure
First layer: the load scheduler (Load Balancer or Director), at least one
* The only entrance to the cluster. Externally it presents a VIP (virtual IP / cluster IP) address. Master/slave dual-machine hot standby is usually configured to ensure high availability.
Second layer: the server pool (Server Pool), a large number of real servers
* Each node has an independent RIP (real IP) and processes only the client requests distributed to it by the scheduler.
* When a node fails temporarily, the scheduler's fault-tolerance mechanism isolates it; after the fault is cleared, the node is added back to the server pool.
Third layer: shared storage (Share Storage)
* Provides stable and consistent file access for the server pool to ensure data consistency across the cluster.
* A NAS device or a dedicated server providing NFS shares can be used.

4. Load balancing working modes (requests can be distributed based on IP, port, content, etc.; IP-based distribution is the most efficient)

*Based on IP:

[Address Translation (NAT) mode]
* The scheduler is the gateway of the server nodes: it is both the access entrance for clients and the response exit for each node.
* The servers and the scheduler use private IPs and are on the same physical network. Security is better than in the other two modes.

[IP Tunnel (TUN) mode]
* Open network structure.
* The scheduler is only the entrance for client access.
* Node servers have independent public IPs, can be scattered in different locations, and respond to clients directly.
* They communicate with the scheduler through dedicated IP tunnels.

[Direct Routing (DR) mode]
* Semi-open network structure.
* The scheduler is only the access entrance for clients.
* Node servers are located together with the scheduler on the same physical network and communicate with it over the local network.
* Node servers respond to clients directly.

* NAT requires only one public IP address, is the easiest mode to use, and offers good security; many hardware load balancing devices use it.

* DR and TUN have stronger load-handling capacity and a wider range of application, but node security is slightly worse.


About the LVS virtual server

1. Linux Virtual Server
* Part of the Linux kernel.
* A load balancing solution implemented in the Linux kernel.
* Created in May 1998 by Dr. Zhang Wensong of China.
* Official website: http://www.linuxvirtualserver.org/

[Load module]
[root@localhost ~]# modprobe ip_vs

[Confirm kernel support for LVS]
[root@localhost ~]# cat /proc/net/ip_vs

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

2. LVS load scheduling algorithms
*Round Robin (rr)
. Requests are distributed to the nodes in turn, regardless of the actual number of connections or the system load.

*Weighted Round Robin (wrr)
. Requests are distributed in turn according to each node's processing capacity (weight); the scheduler can query the load of each node and adjust its weight dynamically.

*Least Connections (lc)
. Priority is given to the node with the fewest current connections.

*Weighted Least Connections (wlc)
. Used when the performance differences between server nodes are large; weights can be adjusted automatically, and nodes with higher weights bear a larger share of the active connections.
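The algorithm is selected with the -s option when the virtual server is created and can be changed later with -E, for example (a sketch reusing the 202.1.1.1:80 example address introduced in the next section):

ipvsadm -A -t 202.1.1.1:80 -s wlc    [create the virtual server with weighted least connections]

ipvsadm -E -t 202.1.1.1:80 -s wrr    [switch the same virtual server to weighted round robin]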


Using the ipvsadm tool (the LVS cluster management tool)

1. Create a virtual server

yum -y install ipvsadm

ipvsadm -v

[The VIP of the scheduler must be an IP address actually enabled on the local machine]

ipvsadm -A -t 202.1.1.1:80 -s rr

-A Add a virtual server
-t VIP address and TCP port
-s load scheduling algorithm (rr round robin, wrr weighted round robin, lc least connections, wlc weighted least connections)

2. Add and delete server nodes

ipvsadm -a -t 202.1.1.1:80 -r 192.168.10.1:80 -m -w 1

-a Add a real server
-t VIP address and TCP port
-r RIP address and port
-m Use NAT cluster mode (-g DR mode, -i TUN mode)
-w Set the weight (a weight of 0 pauses the node; see the sketch after the add commands below)

ipvsadm -a -t 202.1.1.1:80 -r 192.168.10.2:80 -m -w 1

ipvsadm -a -t 202.1.1.1:80 -r 192.168.10.3:80 -m -w 1

ipvsadm -a -t 202.1.1.1:80 -r 192.168.10.4:80 -m -w 1
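As noted for the -w option above, an existing node can be paused without deleting it by editing its weight to 0 (a sketch; -e edits an existing real-server entry, and a weight of 0 stops new connections from being scheduled to that node while existing ones finish):

ipvsadm -e -t 202.1.1.1:80 -r 192.168.10.4:80 -m -w 0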

[Delete a node]

ipvsadm -d -r 192.168.10.4:80 -t 202.1.1.1:80

[Delete the entire virtual server]

ipvsadm -D -t 202.1.1.1:80

3. Check the cluster and node status

ipvsadm -ln

*In the Forward column of the output, Masq indicates NAT mode and Route indicates DR mode.
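Illustrative output for the NAT example configured above (the connection counts are placeholders):

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  202.1.1.1:80 rr
  -> 192.168.10.1:80              Masq    1      0          0
  -> 192.168.10.2:80              Masq    1      0          0
  -> 192.168.10.3:80              Masq    1      0          0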

4. Save the load distribution strategy

ipvsadm-save > /etc/sysconfig/ipvsadm

cat /etc/sysconfig/ipvsadm

service ipvsadm stop

service ipvsadm start
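If the saved rules ever need to be reloaded by hand (on CentOS 6, service ipvsadm start already restores them from /etc/sysconfig/ipvsadm, so this is only an alternative):

ipvsadm-restore < /etc/sysconfig/ipvsadm

ipvsadm -ln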


NFS shared storage service (commonly used in load balancing clusters)

1. NFS (Network File System)
* Relies on RPC (Remote Procedure Call)
* The nfs-utils (NFS share publishing and access) and rpcbind (RPC support) packages need to be installed
* System services: nfs, rpcbind
* Shared configuration file: /etc/exports

2. Use NFS to publish shared resources
*Install nfs-utils and rpcbind software packages

yum -y install nfs-utils rpcbind

chkconfig nfs on

chkconfig rpcbind on

*Set up the shared directory
[Format: directory-location  client-address(options)]
/etc/exports

[Share the folder /opt/wwwroot to 172.16.16.0/24, allowing read and write operations]

vi /etc/exports

/opt/wwwroot 172.16.16.0/24(rw,sync,no_root_squash)
[rw: read/write; sync: synchronous writes; no_root_squash: grant local root privileges when the client accesses as root]

*Start NFS service program

[Start rpcbind first, then nfs]

service rpcbind start

service nfs start

netstat -anpt |grep rpcbind

*View the NFS shared directory published by this machine

showmount -e
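Illustrative output, assuming the /opt/wwwroot export configured above (the hostname will differ):

Export list for localhost.localdomain:
/opt/wwwroot 172.16.16.0/24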

*View the sharing status of the server from the client computer

showmount -e <server IP>

3. Access NFS shared resources from the client (for a cluster service, it is best to use a dedicated network connection between the NFS server and the clients to ensure stability)

*Install the rpcbind package and start the rpcbind service (client)

yum -y install rpcbind nfs-utils

chkconfig rpcbind on

service rpcbind start

showmount -e <server IP>

*Manually mount NFS shared directory

mount 172.16.16.172:/opt/wwwroot /var/www/html

tail -1 /etc/mtab

vi /var/www/html/index.html

*fstab automatic mount settings

vi /etc/fstab

172.16.16.172:/opt/wwwroot /var/www/html nfs defaults,_netdev 0 0
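To check the fstab entry without rebooting (assuming the share is not currently mounted):

umount /var/www/html    [unmount the manual mount first, if it is still active]

mount -a                [mount everything listed in /etc/fstab]

df -hT /var/www/html    [confirm that the NFS share is mounted]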

Please think about:
What are the common types of server clusters?
What is the basic process of setting up an LVS cluster using ipvsadm?
How to configure and use NFS shared directory?


Building an LVS-NAT cluster

[Case environment]
1. The LVS scheduler serves as the gateway of the Web server pool
2. Use the round-robin (rr) scheduling algorithm

[Configuration process]
1. Configure the LVS scheduler: SNAT policy and LVS-NAT policy (a command sketch follows this list)
2. Configure the Web node servers: httpd service
3. Visit http://172.16.16.172/ and verify that requests are distributed among the cluster nodes
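A minimal command sketch for the scheduler side of this case (the interface names eth0 for the public network and eth1 for the 192.168.10.0/24 internal segment, and the two node addresses, are assumptions based on the example addresses used earlier in these notes):

[Enable packet forwarding - the scheduler is the gateway of the web nodes]

vi /etc/sysctl.conf

net.ipv4.ip_forward = 1

sysctl -p

[SNAT policy: let the web nodes reach the outside through the scheduler's public IP]

iptables -t nat -A POSTROUTING -s 192.168.10.0/24 -o eth0 -j SNAT --to-source 172.16.16.172

[LVS-NAT policy: round-robin virtual server on the VIP, NAT (-m) real servers]

ipvsadm -A -t 172.16.16.172:80 -s rr
ipvsadm -a -t 172.16.16.172:80 -r 192.168.10.1:80 -m -w 1
ipvsadm -a -t 172.16.16.172:80 -r 192.168.10.2:80 -m -w 1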


Building an LVS-DR cluster

Principle analysis of LVS/DR mode (FAQ)

LVS has three modes: LVS-DR, LVS-NAT, and LVS-TUN. This part introduces the principles of LVS-DR mode in the form of a FAQ. In DR mode, the scheduler and the real servers each have a network card connected to the same physical network segment. vs/dr itself does not care about information above the IP layer; even the port number is left to the TCP/IP protocol stack to check.

  1. How does LVS/DR process request messages? Will it modify the IP packet content?

1.1 vs/dr itself does not care about information above the IP layer; even the port number is left to the TCP/IP protocol stack to check. vs/dr mainly does the following:

1) Receives the client's request and selects the IP of one realserver according to the load balancing algorithm you configured;

2) Uses the MAC address corresponding to the selected IP as the destination MAC, re-encapsulates the IP packet into a frame, and forwards it to that RS;

3) Records the connection information in a hash table.

vs/dr does very little and is very simple, so it is highly efficient, not much worse than hardware load balancing equipment.

The general flow direction of data packets and data frames is as follows: client --> VS --> RS --> client

1.2 As answered before, vs/dr will not modify the content of the IP packet.

  2. Why does the RealServer configure the VIP on the lo interface? Can the VIP be configured on the egress network card instead?

2.1 For the RS to process IP packets whose destination address is the VIP, the RS must first be able to receive such packets.

Configuring the VIP on lo allows the RS to receive these packets and return the results to the client.

2.2 No, the VIP cannot be set on the egress network card; otherwise the RS would answer the client's ARP requests for the VIP, confusing the ARP tables of the client/gateway and breaking the whole load balancer.

  3. Why does the RealServer suppress ARP?

This was already touched on in the previous question; here it is elaborated with the actual commands. During deployment we make the following adjustments:

   echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
   echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
   echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
   echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce

Many people do not understand what these settings do; they only know that they must be there. They are not discussed in full detail here; the following notes are offered as supplements.

3.1

echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
These two lines can actually be omitted, because ARP has no meaning for a logical interface such as lo.

3.2 If the external network interface of your RS is eth0, then

echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
what actually needs to take effect is:

echo "1" >/proc/sys/net/ipv4/conf/eth0/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/eth0/arp_announce
So I personally suggest also adding the two eth0 lines to your script, because if the system's default values for these parameters are not 0, there could be problems.

  4. Why must the LVS/DR load balancer (director) and the RS be on the same network segment?

From the first question you can see how vs/dr forwards requests to the RS: it is done at the data link layer (by MAC address), so the director must be on the same network segment as the RS.

  5. Why does the eth0 interface on the director need an IP of its own (the DIP) in addition to the VIP?

5.1 If you use keepalived or similar tools for HA or load balancing, the DIP is needed for health checks.

5.2 HA or load balancing without a health-check mechanism has no practical value.
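For illustration only, a minimal keepalived health-check fragment (a sketch, not part of the original deployment; the VIP 172.16.16.200 and RIP 172.16.16.10 are taken from the DR case below):

virtual_server 172.16.16.200 80 {
    delay_loop 6            # health-check interval in seconds
    lb_algo rr              # round-robin scheduling
    lb_kind DR              # LVS-DR forwarding
    protocol TCP

    real_server 172.16.16.10 80 {
        weight 1
        TCP_CHECK {         # check connections go out from the director's own address (DIP)
            connect_port 80
            connect_timeout 3
        }
    }
}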

  6. Does LVS/DR need ip_forward to be enabled?

No. Because the director and the realservers are on the same network segment, there is no need to enable forwarding.

  7. Does the director's VIP netmask have to be 255.255.255.255?

In lvs/dr there is no need to set the netmask of the director's VIP to 255.255.255.255, nor is there any need to add a host route such as:

route add -host $VIP dev eth0:0
The director's VIP is meant to be announced to the outside world like a normal IP address, so there is no need to make it special.

  8. How does LVS/DR perform the TCP three-way handshake?

All packets from the client to the VIP (the SYN and the final ACK) pass through the director, which forwards them to the selected RS at the data link layer; the RS sends the SYN+ACK, and all later responses, directly back to the client. The handshake is therefore completed between the client and the RS, with the director relaying only the client-to-server half.

[Case environment]
1. The LVS scheduler serves only as the entrance for Web access.
2. Web responses are returned to the clients directly by each node server.

[Configuration process - LVS scheduler]

1. External network interface eth1, cluster (VIP) interface eth1:0

[Turn off firewall and network management services]

service iptables stop

chkconfig iptables off

service NetworkManager stop

chkconfig NetworkManager off

setenforce 0

[Configure a virtual IP and put it in the same network segment as the public IP]

cp ifcfg-eth1 ifcfg-eth1:0

vi ifcfg-eth1:0

DEVICE=eth1:0
ONBOOT=yes
BOOTPROTO=static
IPADDR=172.16.16.200
NETMASK=255.255.255.0 [can be configured to 255.255.255.255 as needed]

ifup eth1:0

ifconfig eth1:0

eth1:0 Link encap:Ethernet HWaddr 00:0C:29:65:16:13

      inet addr:172.16.16.200  Bcast:172.16.16.255  Mask:255.255.255.0

      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

2. Adjust /proc kernel parameters and turn off redirection response

vi /etc/sysctl.conf

……
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.eth1.send_redirects = 0
……

sysctl -p

3. Configure LVS-DR cluster policy

service ipvsadm status

service ipvsadm stop

ipvsadm -A -t 172.16.16.200:80 -s rr

ipvsadm -a -t 172.16.16.200:80 -r 172.16.16.10 -g -w 1

ipvsadm -a -t 172.16.16.200:80 -r 172.16.16.20 -g -w 1

service ipvsadm save

chkconfig ipvsadm on

[Configuration process - Web node server]

1. External network interface eth0, cluster interface lo:0

[Turn off the firewall, NetworkManager, and SELinux, as on the scheduler; not repeated here]

setenforce 0

vi ifcfg-lo:0

DEVICE=lo:0
IPADDR=172.16.16.200 [the VIP (virtual IP)]
NETMASK=255.255.255.255 [must be the all-ones 32-bit mask]
ONBOOT=yes

ifup lo:0

ifconfig lo:0

2. Adjust /proc kernel parameters and turn off ARP response

vi /etc/sysctl.conf

……
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
……

sysctl -p

3. Add a local route record to the cluster IP address

vi /etc/rc.local

……
[Add the route record so it is restored at boot]
/sbin/route add -host 172.16.16.200 dev lo:0

[Apply it immediately]
route add -host 172.16.16.200 dev lo:0

4. Configure and enable httpd service

service rpcbind start

service nfs start

showmount -e 192.168.10.100

mount 192.168.10.100:/web1 /var/www/html

service httpd restart

[Test LVS-DR cluster]
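A quick verification sketch (it assumes the two web nodes serve distinguishable index.html pages and that curl is available on a test client in the 172.16.16.0/24 network):

[Request the VIP several times; with the rr algorithm the responses should alternate between the two nodes]

for i in 1 2 3 4; do curl -s http://172.16.16.200/; done

[On the director, the connection counters should be spread across both real servers]

ipvsadm -ln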

Origin blog.csdn.net/m0_57207884/article/details/119669060