LVS Load Balancing Cluster: NAT-Mode LVS Deployment in Practice

1. Overview of enterprise cluster applications

1.1 The meaning of clusters

A cluster (or group) is composed of multiple hosts but appears externally as a single whole, providing only one access entry (a domain name or IP address). It is, in effect, one large computer.

1.2 Problem

In Internet applications, as sites have increasingly higher requirements for hardware performance, response speed, service stability, and data reliability, a single server can no longer meet the requirements for load balancing and high availability.

1.3 Solution

●Use expensive minicomputers and mainframes (vertical scaling)
●Build a service cluster from multiple relatively cheap commodity servers (horizontal scaling)
By integrating multiple servers, LVS achieves high availability and load balancing while providing the same service externally through a single IP address.
LVS (Linux Virtual Server) is a cluster technology commonly used in enterprises.

2. Enterprise cluster classification

2.1 According to their purpose, clusters can be divided into three types

●Load balancing cluster
●High availability cluster
●High performance computing cluster

2.2 Load Balance Cluster

●Goal: improve the application system's responsiveness, handle as many access requests as possible, and reduce latency, achieving high concurrency and high load-handling (LB) performance overall
●Load distribution in LB depends on the master node's distribution algorithm, which shares incoming client access requests across multiple server nodes, relieving the load pressure on the system as a whole. Examples include DNS round robin and reverse proxying.

2.3 High Availability Cluster

●Goal: improve the application system's reliability, reduce interruption time as much as possible, and ensure service continuity, achieving the fault-tolerance effect of high availability (HA)
●HA works in duplex or master/slave mode. In duplex mode all nodes are online at the same time; in master/slave mode only the master node is online, but a slave node can automatically take over as master when a failure occurs. Examples include failover and dual-machine hot standby.

2.4 High Performance Computing Cluster

●Goal: increase the application system's CPU computing speed and expand its hardware resources and analysis capability, obtaining high-performance computing (HPC) power comparable to a large-scale machine or supercomputer
●High performance relies on "distributed computing" and "parallel computing". Through dedicated hardware and software, the CPU, memory, and other resources of multiple servers are integrated to achieve computing power otherwise available only on large machines and supercomputers. Examples include cloud computing and grid computing.

3. Load balancing cluster architecture

3.1 The structure of load balancing

3.1.1 The first layer, load scheduler (Load Balancer or Director)

The sole entrance to the entire cluster system. Externally it uses the VIP address shared by all servers, also called the cluster IP address. Usually two schedulers are configured, a master and a backup, to implement hot standby; when the master scheduler fails, traffic smoothly fails over to the backup scheduler, ensuring high availability.

3.1.2 The second layer, Server Pool

The application services provided by the cluster are borne by the server pool. Each node has an independent RIP address (real IP), and only processes client requests distributed by the scheduler. When a node fails temporarily, the fault-tolerant mechanism of the load scheduler will isolate it and wait for the error to be eliminated before re-entering the server pool.

3.1.3 The third layer, shared storage (Shared Storage)

Provides stable, consistent file-access services for all nodes in the server pool, ensuring data consistency across the entire cluster. Shared storage can use NAS devices or dedicated servers providing NFS shares.


4. Load balancing cluster working modes

■Load balancing clusters are currently the most commonly used cluster type in enterprises.
■The cluster's load scheduling technology has three working modes:
●Address translation (NAT)
●IP tunnel (TUN)
●Direct routing (DR)

4.1 NAT mode

4.1.1 Address Translation

●Network Address Translation, NAT mode for short
●A private network structure similar to a firewall: the load scheduler acts as the gateway of all server nodes, serving both as the clients' access entry and as the access exit through which each node responds to clients. The server nodes use private IP addresses and sit on the same physical network as the load scheduler; security is better than in the other two modes.

4.2 TUN mode

4.2.1 IP tunnel

●IP Tunnel, TUN mode for short
●An open network structure: the load scheduler serves only as the clients' access entry, and each node responds to clients directly through its own Internet connection rather than back through the load scheduler
●Server nodes are scattered across different locations on the Internet, have independent public IP addresses, and communicate with the load scheduler through a dedicated IP tunnel

4.3 DR mode

4.3.1 Direct routing

●Direct Routing, DR mode for short
●A semi-open network structure, similar to TUN mode, except that the nodes are not scattered across the Internet but sit on the same physical network as the scheduler
●The load scheduler and the node servers are connected via the local network; no dedicated IP tunnel is needed

5. LVS virtual server

5.1 Linux Virtual Server

A load balancing solution developed for the Linux kernel, created in May 1998 by Dr. Wensong Zhang in China.
Official website: http://www.linuxvirtualserver.org/
●LVS is essentially a virtualization application based on IP addresses, and it offers an efficient load balancing solution based on IP address and content request distribution.
●LVS has become part of the Linux kernel, compiled by default as the ip_vs module and loaded automatically when needed. On a CentOS 7 system, the following commands manually load the ip_vs module and display its version information.

# Confirm kernel support for LVS
[root@localhost ~]# modprobe ip_vs
[root@localhost ~]# cat /proc/net/ip_vs
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
 -> RemoteAddress:Port Forward Weight ActiveConn InActConn


5.2 LVS load scheduling algorithm

5.2.1 Round Robin

◆Distributes received access requests to each node (real server) in the cluster in turn, treating every server equally regardless of its actual connection count and system load

5.2.2 Weighted Round Robin

◆Distributes requests according to the weight values set by the scheduler; nodes with higher weights receive tasks first and are allocated more requests
◆Ensures that servers with stronger performance bear more of the traffic

5.2.3 Least Connections

◆Distributes requests based on the number of connections each real server has established, directing new requests preferentially to the node with the fewest connections

5.2.4 Weighted Least Connections

◆When server nodes differ greatly in performance, the weights can be adjusted automatically for the real servers
◆Higher-performance nodes bear a larger share of the active connection load
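The round-robin idea behind the rr algorithm can be illustrated with a toy picker (an illustrative sketch only, not LVS code; the node IPs match the deployment later in this article):

```shell
# Toy round-robin scheduler: cycle through the node list, ignoring load
nodes=(192.168.238.100 192.168.238.101)
i=0
pick_node() {
    # Select the next node by index, then advance the counter
    echo "${nodes[$((i % ${#nodes[@]}))]}"
    i=$((i + 1))
}
pick_node    # 192.168.238.100
pick_node    # 192.168.238.101
pick_node    # 192.168.238.100 (wraps around)
```

Weighted round robin extends the same idea by repeating higher-weight nodes more often in the rotation.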

6. Using the ipvsadm tool

LVS cluster creation and management:

Create the virtual server -----> add/delete server nodes -----> view cluster and node status -----> save the load distribution policy

7. ipvsadm tool option description

-A: add a virtual server
-D: delete an entire virtual server
-s: specify the load scheduling algorithm (round robin: rr; weighted round robin: wrr; least connections: lc; weighted least connections: wlc)
-a: add a real server (node server)
-d: delete a single node
-t: specify the VIP address and TCP port
-r: specify the RIP address and TCP port
-m: use NAT cluster mode
-g: use DR mode
-i: use TUN mode
-w: set the weight (a weight of 0 pauses the node)
-p 60: keep persistent connections for 60 seconds
-l: list LVS virtual servers (all by default)
-n: display addresses, ports, etc. in numeric form; usually combined with "-l", as in ipvsadm -ln
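Putting these options together, a minimal NAT-mode policy might be sketched as follows (the VIP and node addresses match the deployment in the next section):

```shell
# Sketch: create a virtual server on the VIP, then register two real servers in NAT mode
ipvsadm -A -t 12.0.0.1:80 -s rr                          # -A new virtual server, rr scheduling
ipvsadm -a -t 12.0.0.1:80 -r 192.168.238.100:80 -m -w 1  # -a real server, -m NAT (Masq), -w weight
ipvsadm -a -t 12.0.0.1:80 -r 192.168.238.101:80 -m -w 1
ipvsadm -ln                                              # verify the policy in numeric form
```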

8. NAT mode LVS load balancing cluster deployment

Load scheduler: internal gateway ens33: 192.168.238.20, external gateway ens36: 12.0.0.1
Web node server 1: 192.168.238.100
Web node server 2: 192.168.238.101
NFS server: 192.168.238.10
Client: 192.168.238.200


1. Deploy shared storage (NFS server: 192.168.238.10)

# Disable the firewall
systemctl stop firewalld.service
systemctl disable firewalld.service
setenforce 0

yum install nfs-utils rpcbind -y    # rpcbind provides the remote procedure call service
systemctl start nfs.service
systemctl start rpcbind.service
systemctl enable nfs.service
systemctl enable rpcbind.service

mkdir /opt/jiedian1 /opt/jiedian2
chmod 777 /opt/jiedian1 /opt/jiedian2    # grant permissions

vim /etc/exports    # avoid granting write permission where possible
/usr/share *(ro,sync)    # typical for a production environment
/opt/jiedian1 192.168.238.0/24(rw,sync)
/opt/jiedian2 192.168.238.0/24(rw,sync)

# Restart the service
systemctl restart nfs.service
showmount -e

# Publish the shares
exportfs -rv
echo 'this is jiedian1!' > /opt/jiedian1/index.html
echo 'this is jiedian2!' > /opt/jiedian2/index.html
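Before touching the node servers, it's worth confirming on the NFS server itself that the shares are exported correctly (a quick sanity check):

```shell
# Verify the exports from the NFS server's own shell
exportfs -v                      # list active exports with their options
showmount -e 192.168.238.10      # query the export list over the network
cat /opt/jiedian1/index.html     # confirm the test page content
```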


2. Configure the node servers (192.168.238.100, 192.168.238.101)

systemctl stop firewalld.service 
systemctl disable firewalld.service
setenforce 0

yum install httpd -y
systemctl start httpd.service
systemctl enable httpd.service

yum install nfs-utils rpcbind -y
showmount -e 192.168.238.10
systemctl start rpcbind
systemctl enable rpcbind

--192.168.238.100---
mount.nfs 192.168.238.10:/opt/jiedian1 /var/www/html
#echo 'this is jiedian1!' > /var/www/html/index.html

vim /etc/fstab
192.168.238.10:/opt/jiedian1     /var/www/html  nfs defaults,_netdev   0   0

--192.168.238.101---
mount.nfs 192.168.238.10:/opt/jiedian2 /var/www/html
#echo 'this is jiedian2!' > /var/www/html/index.html
vim /etc/fstab
192.168.238.10:/opt/jiedian2  /var/www/html   nfs    defaults,_netdev    0   0
#  shared directory                    mount point    protocol    options
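After mounting, it helps to confirm on each node that the NFS share actually backs the web root (a quick check; shown here for node 1, adjust the path for node 2):

```shell
df -h /var/www/html              # should list 192.168.238.10:/opt/jiedian1 as the filesystem source
mount | grep nfs                 # list active NFS mounts
curl -s http://localhost/        # Apache should now serve the shared index.html
```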


3. Configure the load scheduler (internal gateway ens33: 192.168.238.20, external gateway ens36: 12.0.0.1)

vim /etc/sysconfig/network.....
systemctl stop firewalld.service
systemctl disable firewalld.service
setenforce 0

(1) Configure the SNAT forwarding rule
vim /etc/sysctl.conf
net.ipv4.ip_forward = 1

or: echo '1' > /proc/sys/net/ipv4/ip_forward
sysctl -p

iptables -t nat -F
iptables -F
iptables -t nat -A POSTROUTING -s 192.168.238.0/24 -o ens36 -j SNAT --to-source 12.0.0.1
# -t specify the nat table; -A append to the chain; -s source IP address or network; -o outbound interface; -j control type; --to-source address to translate to
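To check that forwarding and the SNAT rule are in place (run on the scheduler; the exact listing depends on your rule set):

```shell
sysctl net.ipv4.ip_forward          # should report a value of 1
iptables -t nat -nvL POSTROUTING    # the SNAT rule for 192.168.238.0/24 should appear here
```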
(2) Load the LVS kernel module
modprobe ip_vs           # load the ip_vs module
cat /proc/net/ip_vs      # view ip_vs version information
(3) Install the ipvsadm management tool
yum -y install ipvsadm

--- The load distribution policy must be saved before starting the service ---
ipvsadm-save > /etc/sysconfig/ipvsadm
or: ipvsadm --save > /etc/sysconfig/ipvsadm

systemctl start ipvsadm.service

(4) Configure the load distribution policy (NAT mode is configured only on the scheduler; the node servers need no special configuration)
ipvsadm -ln     # check whether a policy already exists
ipvsadm -C      # clear any existing policy
ipvsadm -ln     # check again
ipvsadm -A -t 12.0.0.1:80 -s rr    # -A add a virtual server; -t VIP address and port; -s scheduling algorithm (rr = round robin)
ipvsadm -a -t 12.0.0.1:80 -r 192.168.238.100:80 -m -w 1    # -a add a real server; -r RIP address and port; -m NAT cluster mode; [-w 1] weight
ipvsadm -a -t 12.0.0.1:80 -r 192.168.238.101:80 -m -w 1

----------------------------------------------------------------------------------------------------------------------------------
ipvsadm         # enable the policy
ipvsadm -ln     # view node status; Masq indicates NAT mode
ipvsadm-save > /etc/sysconfig/ipvsadm          # save the policy


ipvsadm -d -t 12.0.0.1:80 -r 192.168.238.100:80   # delete a single node server from the cluster
ipvsadm -D -t 12.0.0.1:80        # delete the entire virtual server
systemctl stop ipvsadm           # stop the service (clears the policy)
systemctl start ipvsadm          # start the service (rebuilds the rules)
ipvsadm-restore < /etc/sysconfig/ipvsadm       # restore the LVS policy
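With the policy loaded, a simple way to see NAT-mode round robin in action is to request the VIP repeatedly from the client (192.168.238.200), assuming the two test pages created on the NFS server are in place:

```shell
# Run on the client: responses should alternate between the two node pages
for i in 1 2 3 4; do
    curl -s http://12.0.0.1/
done
# Alternating 'this is jiedian1!' / 'this is jiedian2!' output indicates rr scheduling is working
```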



Origin blog.csdn.net/IvyXYW/article/details/112798872