LVS load balancing -- NAT mode

Table of contents

Enterprise Cluster Application Overview

Meaning of cluster

The problem

Clusters can be divided into three types according to their intended purpose

Load Balancing Cluster

High Availability Cluster

High Performance Computing Cluster

Load Balancing Cluster Architecture

Structure of Load Balancing

LVS load balancing cluster configuration

Description of ipvsadm tool options:

--------------------- NAT mode LVS load balancing cluster deployment ---------------------

1. Deploy shared storage (NFS server: 192.168.110.90)

2. Configure the node server (192.168.110.60, 192.168.110.70)

3. Configure the load scheduler (internal gateway ens32: 192.168.110.50, external gateway ens33: 12.0.0.1)

4. Test effect


Enterprise Cluster Application Overview

Meaning of cluster

A cluster is composed of multiple hosts but appears to the outside world as a single whole, providing only one access entry point (a domain name or IP address); to its clients it is equivalent to one large computer.

The problem

In Internet applications, as sites place ever higher demands on hardware performance, response speed, service stability, and data reliability, a single server can no longer meet the requirements for load capacity and high availability.

Clusters can be divided into three types according to their intended purpose

Load balancing cluster

High availability cluster

High performance computing cluster

Load Balancing Cluster

The goal is to improve the responsiveness of the application system, handle as many access requests as possible, and reduce latency, achieving high concurrency and high overall load capacity (LB). Load distribution in an LB cluster depends on the scheduling algorithm of the master node, which distributes access requests from clients across multiple server nodes, thereby relieving the load pressure on the system as a whole. Examples: "DNS round robin", "reverse proxy", etc.

High Availability Cluster

The goal is to improve the reliability of the application system, reduce interruption time as much as possible, ensure continuity of service, and achieve the fault-tolerant effect of high availability (HA). HA works in two modes, duplex and master-slave: in duplex mode all nodes are online at the same time, while in master-slave mode only the master node is online, and when a failure occurs a slave node automatically switches in to take over as master. Examples: "failover", "dual-machine hot standby", etc.

High Performance Computing Cluster

The goal is to improve the CPU computing speed of the application system, expand hardware resources and analysis capability, and obtain high-performance computing (HPC) power comparable to large-scale machines and supercomputers. High performance depends on "distributed computing" and "parallel computing": dedicated hardware and software integrate the CPU, memory, and other resources of multiple servers to achieve computing power that otherwise only large-scale machines and supercomputers have. Examples: "cloud computing", "grid computing", etc.
 

Load Balancing Cluster Architecture

Structure of Load Balancing

The first layer, load scheduler (Load Balancer or Director)

As the sole entry point to the entire cluster system, the scheduler uses the VIP (virtual IP) address shared by the schedulers, also known as the cluster IP address. Usually a primary and a backup scheduler are configured for hot standby; when the primary scheduler fails, the backup scheduler takes over smoothly, ensuring high availability.

The second layer, server pool (Server Pool)

The application services provided by the cluster are undertaken by the server pool; each node has an independent RIP (real IP) address and handles only the client requests distributed by the scheduler. When a node fails temporarily, the load scheduler's fault-tolerance mechanism isolates it, and it is reintroduced into the server pool after the error is resolved.

The third layer, shared storage (Share Storage)

Provide stable and consistent file access services for all nodes in the server pool to ensure the uniformity of the entire cluster. Shared storage can use NAS devices, or dedicated servers that provide NFS sharing services.

 

LVS load balancing cluster configuration

Description of ipvsadm tool options:

-A: add a virtual server
-D: delete the entire virtual server
-s: specify the load scheduling algorithm (round robin: rr, weighted round robin: wrr, least connections: lc, weighted least connections: wlc)
-a: add a real server (node server)
-d: delete a node
-t: specify the VIP address and TCP port
-r: specify the RIP address and TCP port
-m: use NAT cluster mode
-g: use DR mode
-i: use TUN mode
-w: set the weight (a weight of 0 suspends the node)
-p 60: keep persistent connections for 60 seconds (connection persistence is off by default)
-l: list the LVS virtual servers (the default is to list all)
-n: display addresses, ports, and other information in numeric form; often combined with the "-l" option, e.g. ipvsadm -ln
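
As an illustration of how these options combine (a sketch using the addresses from the deployment below, with weighted round robin instead of plain rr):

ipvsadm -A -t 12.0.0.1:80 -s wrr                          # add a virtual server using weighted round robin
ipvsadm -a -t 12.0.0.1:80 -r 192.168.110.60:80 -m -w 2    # add a NAT-mode real server with weight 2
ipvsadm -a -t 12.0.0.1:80 -r 192.168.110.70:80 -m -w 1    # add a second real server with weight 1
ipvsadm -ln                                               # list the virtual servers in numeric form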


--------------------- NAT mode LVS load balancing cluster deployment ---------------------

Load Scheduler: internal gateway ens32: 192.168.110.50, external gateway ens33: 12.0.0.1
Web Node Server 1: 192.168.110.60
Web Node Server 2: 192.168.110.70
NFS Server: 192.168.110.90
Client: 12.0.0.12

1. Deploy shared storage (NFS server: 192.168.110.90)

systemctl stop firewalld.service
systemctl disable firewalld.service
setenforce 0
yum install nfs-utils rpcbind -y
systemctl start rpcbind.service
systemctl start nfs.service
systemctl enable nfs.service
systemctl enable rpcbind.service
mkdir /opt/zxr /opt/yyds
chmod 777 /opt/zxr /opt/yyds
echo 'ni hen shuai!' > /opt/zxr/index.html
echo 'ni ye hen shuai!' > /opt/yyds/index.html
vim /etc/exports
/usr/share *(ro,sync)
/opt/zxr 192.168.110.0/24(rw,sync)
/opt/yyds 192.168.110.0/24(rw,sync)

--- publish the shares ---

exportfs -rv
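
To confirm the shares are published, a quick local check (not part of the original steps):

showmount -e localhost            # should list /usr/share, /opt/zxr and /opt/yyds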


2. Configure the node server (192.168.110.60, 192.168.110.70)

systemctl stop firewalld.service
systemctl disable firewalld.service
setenforce 0
yum install httpd -y
systemctl start httpd.service
systemctl enable httpd.service
yum install nfs-utils rpcbind -y
showmount -e 192.168.110.90
systemctl start rpcbind
systemctl enable rpcbind

--192.168.110.60---

mount.nfs 192.168.110.90:/opt/zxr /var/www/html
vim /etc/fstab
192.168.110.90:/opt/zxr        /var/www/html    nfs        defaults,_netdev    0  0

--192.168.110.70---

mount.nfs 192.168.110.90:/opt/yyds /var/www/html
echo 'ni ye hen shuai' > /var/www/html/index.html
vim /etc/fstab
192.168.110.90:/opt/yyds    /var/www/html    nfs     defaults,_netdev    0  0
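
On each node, a quick way to verify the mount and the page content (a verification sketch, not in the original steps):

mount -a                          # re-read /etc/fstab; any error indicates a bad entry
df -hT | grep nfs                 # the share should appear mounted on /var/www/html
curl -s http://localhost/         # should print the index.html stored on the NFS server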


3. Configure the load scheduler (internal gateway ens32: 192.168.110.50, external gateway ens33: 12.0.0.1)

systemctl stop firewalld.service
systemctl disable firewalld.service
setenforce 0

(1) Configure SNAT forwarding rules

vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
sysctl -p                                     # apply the setting from /etc/sysctl.conf (persistent)

or

echo '1' > /proc/sys/net/ipv4/ip_forward      # takes effect immediately but is not persistent
iptables -t nat -F
iptables -F
iptables -t nat -A POSTROUTING -s 192.168.110.0/24 -o ens33 -j SNAT --to-source 12.0.0.1
iptables -t filter -A FORWARD -p tcp --dport 80 -j ACCEPT
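
To confirm the forwarding and SNAT rules are in place (a verification step, not in the original):

sysctl net.ipv4.ip_forward        # should print: net.ipv4.ip_forward = 1
iptables -t nat -nvL POSTROUTING  # the SNAT rule to 12.0.0.1 should be listed
iptables -nvL FORWARD             # the ACCEPT rule for TCP port 80 should be listed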

(2) Load the LVS kernel module

modprobe ip_vs                    # load the ip_vs module
cat /proc/net/ip_vs               # view ip_vs version information
for i in $(ls /usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs | grep -o "^[^.]*"); do echo $i; /sbin/modinfo -F filename $i >/dev/null 2>&1 && /sbin/modprobe $i; done
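
A quick check that the modules actually loaded (not in the original steps):

lsmod | grep ip_vs                # should show ip_vs plus scheduler modules such as ip_vs_rr and ip_vs_wrr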

(3) Install the ipvsadm management tool

yum -y install ipvsadm

--- the load distribution policy must be saved to /etc/sysconfig/ipvsadm before the service can start ---

ipvsadm-save > /etc/sysconfig/ipvsadm

or

ipvsadm --save > /etc/sysconfig/ipvsadm
systemctl start ipvsadm.service

(4) Configure the load distribution policy (in NAT mode this only needs to be done on the scheduler; the node servers need no special configuration)

ipvsadm -C                                               # clear existing policies
ipvsadm -A -t 12.0.0.1:80 -s rr [-p 60]
ipvsadm -a -t 12.0.0.1:80 -r 192.168.110.60:80 -m [-w 1]
ipvsadm -a -t 12.0.0.1:80 -r 192.168.110.70:80 -m [-w 1]
ipvsadm                                                  # view the policy (rules take effect as soon as they are added)
ipvsadm -ln                                              # view node status; Masq indicates NAT mode
ipvsadm-save > /opt/ipvsadm                              # save the policy
ipvsadm-save > /etc/sysconfig/ipvsadm
ipvsadm -d -t 12.0.0.1:80 -r 192.168.110.60:80           # delete a single node server from the cluster
ipvsadm -D -t 12.0.0.1:80                                # delete the entire virtual server

systemctl stop ipvsadm                                   # stop the service (clears the policy); if SELinux is not disabled, /etc/sysconfig/ipvsadm is also emptied

systemctl start ipvsadm                                  # start the service (restores the policy from /etc/sysconfig/ipvsadm)

ipvsadm-restore < /opt/ipvsadm                           # restore the LVS policy
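
With the two real servers added, the output of ipvsadm -ln should look roughly like the following (illustrative; the connection counters will vary):

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  12.0.0.1:80 rr
  -> 192.168.110.60:80            Masq    1      0          0
  -> 192.168.110.70:80            Masq    1      0          0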

4. Test effect

On the client with IP 12.0.0.12, use a browser to visit http://12.0.0.1/ and keep refreshing to test the load balancing effect. The refresh interval needs to be fairly long (or turn off the Web service's connection keep-alive); otherwise persistent connections will keep routing you to the same node.
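
From the client's command line, a simple loop makes the alternation easier to see (a sketch; assumes curl is installed). Because each curl invocation opens a fresh connection, it sidesteps the keep-alive issue mentioned above:

for i in $(seq 1 6); do curl -s http://12.0.0.1/; done    # responses should alternate between the two pages (unless -p persistence was enabled)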
