Load balancing cluster and LVS-NAT

1. Enterprise cluster classification

  • Clusters are classified according to the goal they are designed for

1.1 Load balancing cluster

  • Improves the responsiveness of the application system, handles as many access requests as possible, and reduces latency, achieving high concurrency and high overall load-bearing (LB) performance. How the load is distributed depends on the traffic-distribution algorithm of the master node.

1.2 Highly available cluster

  • Improves the reliability of the application system, reduces interruption time as much as possible, and ensures continuity of service, achieving the fault-tolerance effect of high availability (HA)
  • HA works in either duplex (active-active) or master-slave (active-standby) mode

1.3 High-performance computing cluster

  • The goal is to increase the CPU computing speed of the application system and to expand hardware resources and analysis capability, obtaining high-performance computing (HPC) power comparable to mainframes and supercomputers
  • High performance relies on "distributed computing" and "parallel computing": through dedicated hardware and software, the CPU, memory, and other resources of multiple servers are integrated to achieve computing power otherwise found only in mainframes and supercomputers

2. Load balancing cluster architecture

  • The first layer: the load scheduler (Load Balancer or Director)
  • The second layer: the server pool (Server Pool)
  • The third layer: shared storage (Share Storage)


3. Load balancing cluster working mode

  • Load balancing clusters are currently the most commonly used cluster type in enterprises
  • The cluster load scheduling technology has three working modes:
    1. Address translation
    2. IP tunnel
    3. Direct routing

3.1 NAT mode

Address translation

  • Network Address Translation, NAT mode for short
  • Similar to a firewall's private-network structure: the load scheduler acts as the gateway for all server nodes, serving both as the clients' access entrance and as each node's exit when responding to clients
  • Server nodes use private IP addresses and sit on the same physical network as the load scheduler; security is better than in the other two modes
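
Two practical consequences of this design: the director must forward packets between its two networks, and every real server must use the director as its default gateway (both are done in the experiment in section 5). A minimal sketch of the director-side setting:

echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf    ## enable kernel IP forwarding persistently
sysctl -p                                             ## apply the setting now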

3.2 TUN mode

IP tunnel

  • IP Tunnel, TUN mode for short
  • Open network structure: the load scheduler serves only as the clients' access entrance; each node responds to clients directly through its own Internet connection rather than back through the load scheduler
  • Server nodes are scattered across different locations on the Internet, have independent public IP addresses, and communicate with the load scheduler through dedicated IP tunnels
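
TUN mode is not part of this article's experiment, but for orientation: on a TUN real server the VIP is typically bound to the IP-in-IP tunnel interface so the node accepts the packets the scheduler encapsulates toward it. A hedged sketch (the VIP 12.0.0.100 is a made-up placeholder):

modprobe ipip                                 ## load the IP-in-IP tunnel module
ip addr add 12.0.0.100/32 dev tunl0           ## bind the VIP to the tunnel interface
ip link set tunl0 up
sysctl -w net.ipv4.conf.tunl0.rp_filter=0     ## relax reverse-path filtering for decapsulated packets
sysctl -w net.ipv4.conf.all.rp_filter=0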

3.3 DR mode

Direct routing

  • Direct Routing, DR mode for short
  • Semi-open network structure, similar to TUN mode, except the nodes are not scattered across the Internet but sit on the same physical network as the scheduler
  • The load scheduler connects to the node servers over the local network, so no dedicated IP tunnels are needed
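
Likewise outside this article's experiment: on a DR real server the VIP is typically bound to the loopback interface, with ARP answers for the VIP suppressed so that only the scheduler responds to it on the shared network. A hedged sketch (VIP 12.0.0.100 again a placeholder):

ip addr add 12.0.0.100/32 dev lo              ## accept packets addressed to the VIP
sysctl -w net.ipv4.conf.lo.arp_ignore=1       ## do not answer ARP requests for the VIP
sysctl -w net.ipv4.conf.lo.arp_announce=2
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2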

4. LVS virtual server

  • A load balancing solution for the Linux kernel
  • Created by Dr. Zhang Wensong in China in May 1998

4.1 LVS load scheduling algorithm

Round Robin

Received access requests are distributed in turn to the nodes (real servers) in the cluster, treating every server equally regardless of its actual number of connections and system load.

Weighted Round Robin

  • Requests are distributed according to the weight values set by the scheduler; nodes with higher weights receive tasks first and are allocated more requests
  • Ensures that servers with stronger performance bear more of the access traffic

Least Connections

Requests are allocated according to the number of connections each real server has established; newly received access requests are given priority to the node with the fewest connections.

Weighted Least Connections

When performance differences among server nodes are large, the weights of the real servers can be adjusted automatically, so that nodes with higher performance bear a larger proportion of the active connection load.
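
All four algorithms are selected through ipvsadm's -s flag when the virtual service is defined. A sketch using the VIP and real servers from the experiment below:

ipvsadm -A -t 12.0.0.1:80 -s rr      ## Round Robin
ipvsadm -E -t 12.0.0.1:80 -s wrr     ## -E edits the existing service, here switching to Weighted Round Robin
ipvsadm -E -t 12.0.0.1:80 -s lc      ## Least Connections
ipvsadm -E -t 12.0.0.1:80 -s wlc     ## Weighted Least Connections
ipvsadm -a -t 12.0.0.1:80 -r 192.168.200.40:80 -m -w 2    ## per-server weights are set with -w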

5. LVS-NAT experiment

5.1 Experimental environment

  • Prepare four CentOS 7.6 virtual machines and one Windows 10 virtual machine
  • One CentOS VM is configured with dual network cards to act as the scheduler, two CentOS VMs run the Apache service as servers server1 and server2, the last CentOS VM provides the NFS service as the shared storage device, and the Windows 10 VM acts as the client accessing the service.

Experimental topology (diagram omitted)

5.2 Preparation

To prevent interference, turn off the firewall and core protection (SELinux) on all servers.
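
On each CentOS machine, for example:

systemctl stop firewalld
systemctl disable firewalld
setenforce 0        ## temporary; set SELINUX=disabled in /etc/selinux/config to persist across reboots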

  • NFS server (ens33: 192.168.200.60)
[root@nfs ~]# yum -y install nfs-utils
[root@nfs ~]# yum -y install rpcbind
  • Server 1 (ens33: 192.168.200.40)
[root@server1 ~]# yum -y install httpd
  • Server 2 (ens33: 192.168.200.50)
[root@server2 ~]# yum -y install httpd
  • Load balancing server (dual network cards: ens33 192.168.200.1, ens36 12.0.0.1)
[root@lvs ~]# yum -y install ipvsadm    ## LVS management tool
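
The second network card can be configured, for example, with a static ifcfg file (the file and device name ens36 are assumed from the topology above):

cat > /etc/sysconfig/network-scripts/ifcfg-ens36 <<EOF
TYPE=Ethernet
BOOTPROTO=static
DEVICE=ens36
NAME=ens36
ONBOOT=yes
IPADDR=12.0.0.1
NETMASK=255.255.255.0
EOF
systemctl restart network    ## reload the legacy network service on CentOS 7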

5.3 NFS server configuration

[root@nfs ~]# cd /opt/
[root@nfs opt]# mkdir accp  benet
[root@nfs opt]# chmod 777 accp benet
[root@nfs opt]# vim /etc/exports
/opt/accp  192.168.200.0/24(rw,sync)
/opt/benet  192.168.200.0/24(rw,sync)
[root@nfs opt]# systemctl restart rpcbind
[root@nfs opt]# systemctl restart nfs
[root@nfs opt]# exportfs -rv    ## publish the shares
exporting 192.168.200.0/24:/opt/benet
exporting 192.168.200.0/24:/opt/accp

5.4 Server configuration

  • server1
[root@server1 ~]# showmount -e 192.168.200.60
Export list for 192.168.200.60:
/opt/benet 192.168.200.0/24
/opt/accp  192.168.200.0/24
[root@server1 ~]# vim /etc/fstab 
192.168.200.60:/opt/accp  /var/www/html  nfs   defaults 0  0
[root@server1 ~]# mount -a
[root@server1 ~]# df -Th
192.168.200.60:/opt/accp nfs4       11G  8.6G  2.5G   78% /var/www/html
[root@server1 ~]# vim /var/www/html/index.html
<h1>this is accp </h1>
[root@server1 ~]# systemctl start httpd

  • server2
[root@server2 ~]# showmount -e 192.168.200.60
Export list for 192.168.200.60:
/opt/benet 192.168.200.0/24
/opt/accp  192.168.200.0/24
[root@server2 ~]# vim /etc/fstab 
192.168.200.60:/opt/benet /var/www/html  nfs defaults 0 0
[root@server2 ~]# mount -a
[root@server2 ~]# df -Th
192.168.200.60:/opt/benet nfs4       11G  8.6G  2.5G   78% /var/www/html
[root@server2 ~]# vim /var/www/html/index.html
<h1>this is benet</h1>
[root@server2 ~]# systemctl start httpd
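
One step worth making explicit: in NAT mode the scheduler must be each real server's default gateway (section 3.1), otherwise replies to clients bypass the scheduler and the address translation breaks. On both server1 and server2 (the ifcfg path assumes a typical CentOS 7 layout):

ip route replace default via 192.168.200.1                                    ## immediate, non-persistent
echo "GATEWAY=192.168.200.1" >> /etc/sysconfig/network-scripts/ifcfg-ens33    ## persist across reboots
systemctl restart network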

5.5 Load balancer configuration

[root@lvs ~]# vim nat.sh
#!/bin/bash
echo "1" > /proc/sys/net/ipv4/ip_forward    ## enable kernel IP forwarding
ipvsadm -C                                  ## clear any existing rules
ipvsadm -A -t 12.0.0.1:80 -s rr             ## -A adds the virtual server (VIP 12.0.0.1); -s rr selects the round-robin algorithm
ipvsadm -a -t 12.0.0.1:80 -r 192.168.200.40:80 -m    ## -a adds a real server (-r) to the virtual service (-t); -m means NAT mode
ipvsadm -a -t 12.0.0.1:80 -r 192.168.200.50:80 -m
ipvsadm                                     ## with no arguments, lists the rules to confirm they took effect

[root@lvs ~]# chmod 777 nat.sh
[root@lvs ~]# ./nat.sh
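
The rule table can be inspected at any time with ipvsadm's standard listing options:

[root@lvs ~]# ipvsadm -ln     ## numeric listing of virtual services and real servers
[root@lvs ~]# ipvsadm -lnc    ## current connection table, useful while clients are testing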

5.6 Client access verification

On the Windows 10 client, browse to http://12.0.0.1/ and refresh: with the rr algorithm the responses alternate between the accp and benet pages (screenshots omitted).
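
The same check can be scripted from any Linux host on the 12.0.0.0/24 side; curl opens a fresh TCP connection per request, so the alternation is visible:

for i in 1 2 3 4; do curl -s http://12.0.0.1/; done
## expected to alternate between:
## <h1>this is accp </h1>
## <h1>this is benet</h1>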

Origin: blog.csdn.net/weixin_47219725/article/details/108328469