Detailed explanation of the actual deployment of an LVS load balancing cluster (theory plus hands-on practice)

1. Overview of enterprise cluster applications

(1) The meaning of cluster

A cluster (group of hosts) is composed of multiple hosts, but externally it presents itself as a single whole.

(2) The problem

In Internet applications, as sites place ever higher demands on hardware performance, response speed, service stability, and data reliability, a single server can no longer keep up.

(3) Solutions

Use expensive minicomputers and mainframes
Use ordinary servers to build a service cluster

2. Enterprise cluster classification

(1) According to the cluster's target, clusters can be divided into three types

Load balancing cluster
High-availability cluster
High-performance computing cluster

(2) Load balancing cluster (Load Balance Cluster)

The goal is to improve the application system's responsiveness, handle as many access requests as possible, and reduce latency, achieving high-concurrency, high-load overall performance (LB).
The load distribution of an LB cluster depends on the scheduling algorithm of the master node.

(3) High-availability cluster (High Availability Cluster)

The goal is to improve the application system's reliability, reduce interruption time as much as possible, and ensure continuity of service, achieving the fault-tolerance effect of high availability (HA).
HA working modes include duplex (all nodes active) and master-slave (one active, one standby).

(4) High-performance computing cluster (High Performance Computing Cluster)

The goal is to increase the application system's CPU computing speed and expand its hardware resources and analysis capability, obtaining high-performance computing (HPC) power equivalent to that of large-scale computers and supercomputers.
High performance relies on "distributed computing" and "parallel computing": dedicated hardware and software integrate the CPUs, memory, and other resources of multiple servers to achieve computing power that otherwise only large-scale computers and supercomputers possess.

3. Load balancing cluster architecture

(1) The structure of load balancing

The first tier: load scheduler (Load Balancer or Director)
The second tier: server pool (Server Pool)
The third tier: shared storage (Share Storage)

(2) Load balancing clusters are currently the most widely used cluster type in enterprises

(3) There are three working modes of cluster load scheduling technology

Address translation (NAT mode)
IP tunnel (TUN mode)
Direct routing (DR mode)

1. NAT mode

Address translation:

Network Address Translation, NAT mode for short.
Similar to a firewall's private network structure, the load scheduler acts as the gateway of all the server nodes, i.e., both as the clients' access entrance and as each node's exit for responding to clients.
The server nodes use private IP addresses and sit on the same physical network as the load scheduler; security is better than in the other two modes.

2. IP tunnel

IP Tunnel, TUN mode for short:
Adopts an open network structure; the load scheduler serves only as the clients' access entrance, and each node responds to clients directly over its own Internet connection rather than replying through the load scheduler.
The server nodes are scattered at different locations across the Internet, each with an independent public IP address, and they communicate with the load scheduler through dedicated IP tunnels.

3. Direct routing

Direct Routing, DR mode for short:
Adopts a semi-open network structure, similar to the TUN mode structure, but the nodes are not scattered in various places; they sit on the same physical network as the scheduler.
The scheduler and the node servers are connected over the local network, so there is no need to establish dedicated IP tunnels.

4. About the LVS virtual server

(1) Linux Virtual Server

A load balancing solution for the Linux kernel,
created in May 1998 by Dr. Zhang Wensong of China.
Official website: http://www.linuxvirtualserver.org/

Confirm the kernel's support for LVS:

modprobe ip_vs					# load the ip_vs module
cat /proc/net/ip_vs				# view ip_vs version information
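
If the module loaded successfully, the second command prints the IPVS version banner and an (initially empty) rule table, roughly like this (the version and hash-table size depend on your kernel):

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn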

(2) LVS load scheduling algorithms

1. Round Robin

Received access requests are distributed in turn to each node (real server) in the cluster; every server is treated equally, regardless of its actual number of connections and system load.

2. Weighted Round Robin

Requests are distributed according to the weight values set by the scheduler. Nodes with higher weights receive tasks first and are allocated more requests, ensuring that servers with stronger performance bear more of the access traffic.

3. Least Connections

Requests are allocated according to the number of connections established to each real server; incoming access requests are directed preferentially to the node with the fewest connections.

4. Weighted Least Connections

When server nodes differ greatly in performance, the weights can be adjusted automatically for the real servers; nodes with higher performance will bear a larger share of the active connection load.
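
To make the weighting idea behind wrr and wlc concrete, here is a minimal shell sketch of weighted round robin (an illustration of the dispatch order only, not how the in-kernel IPVS scheduler is implemented; the IP addresses are placeholders):

#!/bin/bash
# A node with weight 2 appears twice per rotation; a node with weight 1 appears once.
nodes=(192.168.1.21 192.168.1.21 192.168.1.22)    # .21 has weight 2, .22 has weight 1
for i in $(seq 0 5); do
    echo "request $i -> ${nodes[$((i % ${#nodes[@]}))]}"
done

Running it sends requests 0, 1, 3, 4 to the weight-2 node and requests 2, 5 to the weight-1 node, which is the traffic ratio the wrr algorithm aims for.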

5. The ipvsadm tool

(1) Introduction to ipvsadm tool

Starting from version 2.4, the Linux kernel supports LVS by default. To use the capabilities of LVS, you only need to install the LVS management tool, ipvsadm.

The structure of LVS is mainly divided into two parts:

  • The IPVS module working in kernel space. The capability of LVS is actually realized by the IPVS module.
  • The ipvsadm management tool working in user space. Its function is to give users a command-line interface for passing configured virtual services and real services to the IPVS module.
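
A quick way to see this split on a running system: the kernel side shows up as loaded modules, while ipvsadm is an ordinary userland package:

lsmod | grep ip_vs			# kernel-space IPVS modules currently loaded
rpm -q ipvsadm				# the user-space management tool (on RPM-based systems)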

(2) Installing the ipvsadm tool

The ipvsadm tool can be installed with yum:

yum -y install ipvsadm

It can also be compiled and installed from source; download address:

http://www.linuxvirtualserver.org/software/ipvs.html

(3) Use of ipvsadm tool

Commonly used ipvsadm options:

-A: add a virtual server
-D: delete an entire virtual server
-E: edit a virtual server
-C: clear all virtual server rules
-R: restore virtual server rules
-s: specify the load scheduling algorithm (round robin: rr, weighted round robin: wrr, least connections: lc, weighted least connections: wlc)
-a: add a real server (node server)
-d: delete a particular node server
-e: edit a particular real server
-t: specify the VIP address and TCP port
-r: specify the RIP address and TCP port
-m: use NAT cluster mode
-g: use DR mode
-i: use TUN mode
-w: set the weight (a weight of 0 pauses the node)
-p 60: keep connections persistent for 60 seconds
-l: list the LVS virtual servers (all of them by default)
-n: display addresses, ports, etc. numerically; often combined with -l, as in ipvsadm -ln

Example:

1. Manage virtual services

- Add a virtual service 192.168.1.100:80 that uses the round-robin algorithm

  ipvsadm -A -t 192.168.1.100:80 -s rr
- Change the virtual service's algorithm to weighted round robin

  ipvsadm -E -t 192.168.1.100:80 -s wrr
- Delete the virtual service

  ipvsadm -D -t 192.168.1.100:80

2. Manage real services

- Add a real server 192.168.1.123 in DR mode with weight 2

  ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.123 -g -w 2
- Change the real server's weight (use -e to edit an existing real server; -a would attempt to add it again)

  ipvsadm -e -t 192.168.1.100:80 -r 192.168.1.123 -g -w 5
- Delete the real server

  ipvsadm -d -t 192.168.1.100:80 -r 192.168.1.123

3. View statistics

- View the currently configured virtual services and the weight of each RS

  ipvsadm -Ln
- View the connections currently tracked by the ipvs module (useful for observing forwarding)

  ipvsadm -lnc
- View the ipvs module's forwarding statistics (cumulative counters or per-second rates)

  ipvsadm -Ln --stats
  ipvsadm -Ln --rate
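
For orientation, the --stats view adds cumulative counters (connections, packets, and bytes in each direction) per virtual service and per real server; on an idle, freshly configured service the output looks roughly like this (exact column widths vary by version):

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port               Conns   InPkts  OutPkts  InBytes OutBytes
  -> RemoteAddress:Port
TCP  192.168.1.100:80                    0        0        0        0        0
  -> 192.168.1.123:80                    0        0        0        0        0

--rate shows the same quantities as per-second rates instead of cumulative totals.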

6. NAT mode LVS load balancing cluster deployment (hands-on simulation)

1. Experimental environment

Five Linux virtual machines:

Load scheduler: internal gateway ens33: 192.168.126.10, external gateway ens36: 12.0.0.1
Web node server 1: 192.168.126.20
Web node server 2: 192.168.126.30
NFS server: 192.168.126.40
Client: 12.0.0.12

2. Experimental steps

1. Deploy shared storage (NFS server: 192.168.126.40)

systemctl stop firewalld.service
systemctl disable firewalld.service
setenforce 0
yum install nfs-utils rpcbind -y
systemctl start nfs.service
systemctl start rpcbind.service
systemctl enable nfs.service
systemctl enable rpcbind.service
mkdir /opt/kgc /opt/benet
chmod 777 /opt/kgc /opt/benet


vim /etc/exports
/usr/share *(ro,sync)
/opt/kgc 192.168.126.0/24(rw,sync)
/opt/benet 192.168.126.0/24(rw,sync)
--- publish the shares ---
exportfs -rv
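
Before moving on to the nodes, it is worth confirming the shares are actually published (a quick local sanity check):

showmount -e localhost			# should list /usr/share, /opt/kgc, and /opt/benet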


2. Configure the node servers (192.168.126.20, 192.168.126.30)

systemctl stop firewalld.service
systemctl disable firewalld.service
setenforce 0
yum install httpd -y
systemctl start httpd.service
systemctl enable httpd.service
yum install nfs-utils rpcbind -y
showmount -e 192.168.126.40
systemctl start rpcbind
systemctl enable rpcbind
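
One NAT-mode detail that is easy to miss: each node server's default gateway must point at the scheduler's internal address (192.168.126.10 here), otherwise replies bypass the director and never reach the client. A minimal sketch, assuming the node's NIC is ens33:

vim /etc/sysconfig/network-scripts/ifcfg-ens33
GATEWAY=192.168.126.10			# default gateway = the scheduler's internal IP
systemctl restart network		# apply the change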


192.168.126.20

mount.nfs 192.168.126.40:/opt/kgc /var/www/html
echo 'this is kgc web!' > /var/www/html/index.html
vim /etc/fstab
192.168.126.40:/opt/kgc		/var/www/html	nfs	defaults,_netdev	0 0		# mount permanently

192.168.126.30

mount.nfs 192.168.126.40:/opt/benet /var/www/html
echo 'this is benet web!' > /var/www/html/index.html
vim /etc/fstab
192.168.126.40:/opt/benet		/var/www/html	nfs	defaults,_netdev	0 0		# mount permanently
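
On either node, the fstab entry and the live mount can be verified with:

mount -a					# remounts everything in fstab; syntax errors surface here
df -h /var/www/html			# the source should show as the 192.168.126.40 export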

3. Configure the load scheduler (internal gateway ens33: 192.168.126.10, external gateway ens36: 12.0.0.1)

systemctl stop firewalld.service
systemctl disable firewalld.service
setenforce 0


(1) Configure SNAT forwarding rules

vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
echo '1' > /proc/sys/net/ipv4/ip_forward
sysctl -p
iptables -t nat -F
iptables -F
iptables -t nat -A POSTROUTING -s 192.168.126.0/24 -o ens36 -j SNAT --to-source 12.0.0.1
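
An optional sanity check before continuing, to confirm the SNAT rule is in place:

iptables -t nat -nL POSTROUTING --line-numbers		# the SNAT rule should be listed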


(2) Load LVS kernel module

modprobe ip_vs					# load the ip_vs module
cat /proc/net/ip_vs				# view ip_vs version information
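
Each scheduling algorithm lives in its own kernel module (ip_vs_rr, ip_vs_wrr, ip_vs_lc, ip_vs_wlc in the mainline kernel). On most systems ipvsadm loads the right one on demand, but they can also be loaded explicitly:

for mod in ip_vs_rr ip_vs_wrr ip_vs_lc ip_vs_wlc; do
    modprobe $mod				# load each scheduler module
done
lsmod | grep ip_vs				# confirm what is loaded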

(3) Install ipvsadm management tool

yum -y install ipvsadm


--- the load distribution policy must be saved before the service is started ---
ipvsadm-save > /etc/sysconfig/ipvsadm
or: ipvsadm --save > /etc/sysconfig/ipvsadm
systemctl start ipvsadm.service


(4) Configure the load distribution policy (NAT mode is configured only on the scheduler; the node servers need no special configuration)

ipvsadm -C					# clear existing policies
ipvsadm -A -t 12.0.0.1:80 -s rr
ipvsadm -a -t 12.0.0.1:80 -r 192.168.126.20:80 -m [-w 1]
ipvsadm -a -t 12.0.0.1:80 -r 192.168.126.30:80 -m [-w 1]
ipvsadm						# activate the policy

ipvsadm -ln					# view node status; Masq indicates NAT mode
ipvsadm-save > /etc/sysconfig/ipvsadm						# save the policy

ipvsadm -d -t 12.0.0.1:80 -r 192.168.126.20:80				# delete one node server from the cluster
ipvsadm -D -t 12.0.0.1:80									# delete the entire virtual server
systemctl stop ipvsadm										# stop the service (clears the policy)
systemctl start ipvsadm										# start the service (rebuilds the rules)
ipvsadm-restore < /etc/sysconfig/ipvsadm					# restore the LVS policy
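
With the policy in place, ipvsadm -ln should print a table along these lines (the counters will differ; Masq in the Forward column confirms NAT mode):

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  12.0.0.1:80 rr
  -> 192.168.126.20:80            Masq    1      0          0
  -> 192.168.126.30:80            Masq    1      0          0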


4. Test results

From a client with IP 12.0.0.12, use a browser to visit http://12.0.0.1/ and refresh repeatedly to test the load balancing effect. Leave a longer interval between refreshes: the browser's keep-alive otherwise reuses the same TCP connection and keeps hitting the same node.
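
Without a browser, the alternation is easy to watch from the client with curl, since each invocation opens a fresh TCP connection and rr then alternates cleanly:

for i in $(seq 1 6); do curl -s http://12.0.0.1/; done
# expected: 'this is kgc web!' and 'this is benet web!' alternating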


Origin: blog.csdn.net/weixin_51573771/article/details/112840316