LVS load balancing cluster and NAT mode

1. Overview of enterprise cluster applications

1. The meaning of cluster

(1) Also known as a cluster or group (Cluster).
(2) It is composed of multiple hosts but appears externally as a single whole, providing only one access entry (a domain name or IP address); it is equivalent to one large computer.

2. The problem

In Internet applications, as sites place ever higher demands on hardware performance, response speed, service stability, and data reliability, a single server can no longer meet the requirements for load balancing and high availability.

3. Solution

(1) Use expensive minicomputers and mainframes.
(2) Build a service cluster from multiple relatively cheap ordinary servers: integrate the servers, use LVS to achieve high availability and load balancing, and provide the same service externally through a single IP address.
The cluster technology most commonly used in enterprises for this is LVS (Linux Virtual Server).

2. Classification of enterprise clusters

1. Classified by target, clusters can be divided into three types

(1) Load balancing clusters
(2) High-availability clusters
(3) High-performance computing clusters

2. Load Balancing Cluster

(1) The goals are to improve the application system's responsiveness, handle as many access requests as possible, and reduce latency, obtaining high-concurrency, high-load overall performance (LB)

(2) LB's load distribution relies on the traffic-splitting algorithm of the master node, which shares client access requests among multiple server nodes, thereby relieving the load pressure on the whole system

3. High Availability Cluster

(1) The goals are to improve the application system's reliability and reduce interruption time as much as possible, ensuring service continuity and achieving the fault-tolerance effect of high availability (HA)

(2) HA works in duplex or master-slave mode. In duplex mode, all nodes are online at the same time; in master-slave mode, only the master node is online, but when a failure occurs a slave node automatically takes over as the master.
Examples: "failover", "dual-machine hot standby", etc.

4. High Performance Computing Cluster

(1) The goals are to increase the application system's CPU computing speed and expand its hardware resources and analysis capability, obtaining high-performance computing (HPC) power comparable to that of large machines and supercomputers

(2) High performance relies on "distributed computing" and "parallel computing": the CPU, memory, and other resources of multiple servers are integrated through dedicated hardware and software, achieving computing power that otherwise only large machines and supercomputers possess.

3. Load balancing cluster architecture

1. The structure of a load balancing cluster

(1) First layer: the load scheduler (Load Balancer or Director)
It is the sole access entrance to the entire cluster system and uses the VIP address shared by all the servers externally, also known as the cluster IP address. Usually a main scheduler and a backup scheduler are configured for hot standby; when the main scheduler fails, the backup scheduler takes over smoothly, ensuring high availability.
(2) Second layer: the server pool (Server Pool)
The application services provided by the cluster are borne by the server pool. Each node has an independent RIP (Real IP) address and handles only the client requests distributed by the scheduler. When a node fails temporarily, the load scheduler's fault-tolerance mechanism isolates it and waits for the error to be cleared before returning it to the server pool.
(3) Third layer: shared storage (Share Storage)
It provides stable and consistent file access services for all nodes in the server pool, ensuring the unity of the entire cluster. Shared storage can use NAS devices or dedicated servers that provide NFS sharing services.

2. Load balancing cluster working modes

1. The load balancing cluster is currently the most commonly used cluster type in enterprises.
2. Cluster load scheduling technology has three working modes:
● Address translation (NAT mode)
● IP tunnel (TUN mode)
● Direct routing (DR mode)

4. The three load scheduling working modes

1. NAT mode

Address translation
(1) Network Address Translation, referred to as NAT mode
(2) It uses a private network structure similar to a firewall's: the load scheduler acts as the gateway of all the server nodes, serving both as the clients' access entrance and as the exit through which every node responds to clients
(3) The server nodes use private IP addresses and are located on the same physical network as the load scheduler; security is better than in the other two modes

2. TUN mode

IP tunnel
● IP Tunnel, TUN mode for short
● It adopts an open network structure: the load scheduler serves only as the clients' access entrance, and each node responds to clients directly through its own Internet connection instead of going back through the load scheduler
● The server nodes are scattered at different locations on the Internet, each with an independent public IP address, and they communicate with the load scheduler through a dedicated IP tunnel

3. DR mode

Direct routing
● Direct Routing, referred to as DR mode
● It adopts a semi-open network structure, similar to that of TUN mode, but the nodes are not scattered in various places; they are located on the same physical network as the scheduler
● The load scheduler and the node servers are connected through the local network, so there is no need to establish a dedicated IP tunnel

5. About the LVS virtual server

1. Linux Virtual Server

● A load balancing solution developed for the Linux kernel
● Created by Dr. Zhang Wensong of China in May 1998
● Official website: http://www.linuxvirtualserver.org
● LVS is essentially a virtualization application based on IP addresses, offering an efficient solution for load balancing built on IP addresses and content-based request distribution.
Note: LVS is now part of the Linux kernel, compiled as the ip_vs module by default and loaded automatically when needed. On CentOS 7, the following commands manually load the ip_vs module and show the version of ip_vs in the current system.

modprobe ip_vs    # load the ip_vs module
cat /proc/net/ip_vs    # confirm kernel support for LVS
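
As an extra check, the loaded module can also be confirmed with lsmod (standard commands, nothing LVS-specific assumed; per-algorithm modules such as ip_vs_rr only appear after first use):

lsmod | grep ip_vs    # the base ip_vs module should be listed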

2. LVS load scheduling algorithms

(1) Round Robin

Incoming access requests are distributed in turn to each node (real server) in the cluster, treating every server equally regardless of its actual number of connections and its system load

(2) Weighted Round Robin

● Requests are distributed according to the weight values set by the scheduler: nodes with higher weights receive tasks first and are allocated more requests
● This ensures that servers with stronger performance bear more of the access traffic

(3) Least Connections

Requests are allocated according to the number of connections established on each real server; incoming access requests are sent preferentially to the node with the fewest connections

(4) Weighted Least Connections

● When server nodes differ greatly in performance, the weight of each real server can be adjusted automatically
● Nodes with higher performance bear a larger share of the active connection load
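
With the ipvsadm tool introduced below, the algorithm is selected per virtual server through the -s option (rr, wrr, lc, wlc). A minimal sketch using the addresses from the case later in this post:

ipvsadm -A -t 12.0.0.1:80 -s wrr                           # weighted round robin
ipvsadm -a -t 12.0.0.1:80 -r 192.168.177.11:80 -m -w 3     # weight 3: gets roughly three times the requests
ipvsadm -a -t 12.0.0.1:80 -r 192.168.177.6:80 -m -w 1      # weight 1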

6. LVS cluster creation and management

1. Steps

(1) Create the virtual server
(2) Add and delete server nodes
(3) View cluster and node status
(4) Save the load distribution policy

2. Description of ipvsadm tool options

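The original post shows the option table as a screenshot; the options actually used in this case are summarized below (descriptions follow standard ipvsadm usage):

● -A: add a virtual server
● -D: delete the entire virtual server
● -a: add a real (node) server
● -d: delete a node server
● -C: clear all policies
● -L / -l: list current rules (commonly combined as -ln for numeric output)
● -t: specify the VIP address and TCP port of the virtual server
● -r: specify the RIP address and TCP port of the node server
● -s: specify the load scheduling algorithm (rr, wrr, lc, wlc)
● -m: use NAT mode (masquerading)
● -w: set the node's weight
● -n: display addresses and ports in numeric form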

7. A practical NAT-mode case

Environment: the LVS scheduler serves as the gateway of the web server pool. It has two network cards connected to the internal and external networks respectively, and it uses the round-robin (rr) scheduling algorithm.
LVS load scheduler (centos7-8): ens33: 192.168.177.18, ens36: 12.0.0.1
Web node server 1 (centos7-1): 192.168.177.11
Web node server 2 (centos7-6): 192.168.177.6
NFS server (centos7-5): 192.168.177.8
Client (Windows7-1): 12.0.0.12
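
A rough sketch of the resulting topology, assembled from the addresses above:

Client 12.0.0.12
        |
  external network (12.0.0.0/24)
        |
ens36 12.0.0.1 -- LVS scheduler -- ens33 192.168.177.18
        |
  internal network (192.168.177.0/24)
        |-- Web node 1: 192.168.177.11
        |-- Web node 2: 192.168.177.6
        |-- NFS server: 192.168.177.8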

1. Deploy NFS shared storage (Centos7-5 192.168.177.8)

systemctl stop firewalld.service     # turn off the firewall
systemctl disable firewalld.service 
setenforce 0                         # put SELinux into permissive mode

yum install -y nfs-utils rpcbind

systemctl start rpcbind.service    # start rpcbind first so NFS can register with it
systemctl start nfs.service
systemctl enable rpcbind.service
systemctl enable nfs.service

mkdir /opt/yy /opt/benet
chmod 777 /opt/yy/ /opt/benet/

vim /etc/exports
/usr/share/ *(ro,sync)                       # read-only export to all hosts
/opt/yy 192.168.177.0/24(rw,sync)            # read-write for the internal network
/opt/benet 192.168.177.0/24(rw,sync)

exportfs -rv     # publish the shares
showmount -e     # verify the local export list
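
If the exports took effect, the listing should look roughly like this (a sketch based on the exports file above; exact formatting may differ):

Export list for 192.168.177.8:
/opt/benet 192.168.177.0/24
/opt/yy    192.168.177.0/24
/usr/share *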


2. Configure the node servers (centos7-1: 192.168.177.11, centos7-6: 192.168.177.6)

systemctl stop firewalld.service 
systemctl disable firewalld.service 
setenforce 0

yum install -y httpd
systemctl start httpd.service 
systemctl enable httpd.service

yum install -y nfs-utils rpcbind

systemctl start rpcbind.service
systemctl enable rpcbind.service

showmount -e 192.168.177.8    # check the shares exported by the NFS server


-------------centos7-1:192.168.177.11---------------------
mount.nfs 192.168.177.8:/opt/yy /var/www/html/

echo '171717171717' > /var/www/html/index.html

vim /etc/fstab 
# _netdev: wait until the network is up before mounting this NFS share at boot
192.168.177.8:/opt/yy /var/www/html nfs defaults,_netdev 0 0

mount -a

-------------centos7-6:192.168.177.6---------------------
mount.nfs 192.168.177.8:/opt/benet /var/www/html/

echo 'edgedgedg' > /var/www/html/index.html

vim /etc/fstab 
192.168.177.8:/opt/benet /var/www/html nfs defaults,_netdev 0 0

mount -a
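
Before adding the nodes to the pool, it is worth confirming that each node serves its own page, for example from the scheduler or another internal host (assuming curl is available):

curl http://192.168.177.11/    # should print 171717171717
curl http://192.168.177.6/     # should print edgedgedg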


3. Configure the load scheduler (ens33: 192.168.177.18, ens36: 12.0.0.1)

systemctl stop firewalld.service 
systemctl disable firewalld.service 
setenforce 0

ifconfig ens36 12.0.0.1    # assign the external IP to ens36 (takes effect immediately, not persistent)

------(1) Configure the SNAT forwarding rule-------
vim /etc/sysctl.conf
# enable IPv4 forwarding so the scheduler can route between the two networks
net.ipv4.ip_forward=1

sysctl -p    # apply the kernel parameter change

iptables -t nat -F     # flush existing NAT-table rules
iptables -F            # flush filter-table rules
iptables -t nat -nL    # confirm the NAT table is now empty

iptables -t nat -A POSTROUTING -s 192.168.177.0/24 -o ens36 -j SNAT --to-source 12.0.0.1
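
This rule rewrites the source address of the nodes' outbound traffic to the external address 12.0.0.1. It can be confirmed with:

iptables -t nat -nL POSTROUTING    # the SNAT rule should now be listed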

-------(2) Load the LVS kernel module-------
modprobe ip_vs   # load the ip_vs module
cat /proc/net/ip_vs    # view ip_vs version information

-------(3) Install the ipvsadm management tool-------
yum install -y ipvsadm

# Note: the load distribution policy must be saved before starting the service, otherwise starting it will report an error
ipvsadm-save > /etc/sysconfig/ipvsadm
# or
ipvsadm --save > /etc/sysconfig/ipvsadm

systemctl start ipvsadm.service


------(4) Configure the load distribution policy (in NAT mode only the scheduler needs configuration; the node servers need no special settings)-------
ipvsadm -C     # clear existing policies
ipvsadm -A -t 12.0.0.1:80 -s rr                            # create the virtual server on the VIP, round-robin scheduling
ipvsadm -a -t 12.0.0.1:80 -r 192.168.177.11:80 -m -w 1     # add node 1 in NAT (-m) mode with weight 1
ipvsadm -a -t 12.0.0.1:80 -r 192.168.177.6:80 -m -w 1      # add node 2

ipvsadm     # activate the policy

ipvsadm -ln    # view node status; Masq in the Forward column indicates NAT mode
ipvsadm-save > /etc/sysconfig/ipvsadm    # save the policy
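
The listing should look roughly like this (a sketch matching the policy above; the connection counters will vary):

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  12.0.0.1:80 rr
  -> 192.168.177.6:80             Masq    1      0          0
  -> 192.168.177.11:80            Masq    1      0          0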

-----------------------------------------------------------
ipvsadm -d -t 12.0.0.1:80 -r 192.168.177.11:80     # delete one node server from the cluster
ipvsadm -D -t 12.0.0.1:80     # delete the entire virtual server
systemctl stop ipvsadm     # stop the service (clears the policies)
systemctl start ipvsadm    # start the service (rebuilds the rules)
ipvsadm-restore < /etc/sysconfig/ipvsadm      # restore the saved LVS policy (read from the file)
------------------------------------------------------------

4. Test results

On the client with IP 12.0.0.12, use a browser to visit http://12.0.0.1/ and keep refreshing to test the load balancing effect. Refresh at longer intervals, because the browser's keep-alive connections can pin consecutive requests to the same node.
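
From a Linux host on the external network, the alternation can also be checked on the command line (a sketch assuming curl is installed):

for i in 1 2 3 4; do curl -s http://12.0.0.1/; done
# expected to alternate between 171717171717 and edgedgedg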
