LVS load balancing cluster theory plus LVS-NAT deployment practice!


The lab's mock storage project needed to map an NFS server through NAT mode and thereby achieve load balancing. After investigating a variety of load balancing mechanisms, the author finally chose the NAT mode of LVS to meet this requirement. This blog post records the configuration process of LVS-NAT mode.

A brief introduction to the LVS service:

LVS is the abbreviation of Linux Virtual Server. It is a virtual server cluster system, started by Mr. Zhang Wensong in May 1998. The LVS cluster implements IP load balancing technology and content-based request distribution technology. The scheduler distributes requests to different servers in a balanced manner and can shield failed servers in the background, thereby turning a group of servers into a high-performance, highly available server cluster. This structure is completely transparent to the client, so neither the client nor the server programs need to be modified.

One: Principle of Load Balancing Cluster

1.1: The meaning of clusters

  • A cluster (also called a server group or farm)

  • Consists of multiple hosts, but appears externally as a single whole

  • In Internet applications, as sites place higher and higher demands on hardware performance, response speed, service stability, and data reliability, a single server can no longer keep up

1.2: Solution

  • Use expensive minicomputers and mainframes

  • Use ordinary servers to build a service cluster

In Alibaba Cloud, SLB is a typical load balancing scheduler and ECS is a cloud host (virtual machine)

SLB schedules the ECS instances; multiple ECS instances form a resource pool, which is the basis of cloud computing

1.3: What is LVS?

LVS stands for Linux Virtual Server, that is, a Linux virtual server; the concept of a virtual host (shared host) will not be repeated here, as most readers are already familiar with it.

LVS is a virtual server cluster system that implements a high-performance, highly available virtual server. LVS has since been integrated into the Linux kernel as the ip_vs module.

1.4: The composition of LVS

Physically, the main components of LVS are:

  1. The load balancer (Director) is the front end of the entire cluster as seen from the outside. It is responsible for distributing client requests to a group of servers for execution, while the client believes the service comes from a single IP address (which we can call the virtual IP address, or VIP).
  2. The server pool (Real Servers) is the set of servers that actually handle client requests. The services provided generally include WEB, MAIL, FTP, and DNS.
  3. Shared storage provides a shared storage area for the server pool, making it easy for all servers in the pool to hold the same content and provide the same service


1.5: According to their target, clusters can be divided into three types

  • Load balancing clusters
  • High availability clusters
  • High performance computing clusters

1.5.1: Load Balancing Cluster

  • The goal is to improve the responsiveness of the application system, handle as many access requests as possible, and reduce latency, achieving high-concurrency, high-load (LB) overall performance
  • The load distribution of LB depends on the scheduling algorithm of the master node (a sketch of the common algorithms follows below)
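
As a hedged illustration of how the scheduling algorithm is chosen, the sketch below uses ipvsadm with the VIP 12.0.0.1:80 from the deployment example later in this post; only one of these lines would be used for a given virtual server:

# Select a scheduling algorithm with -s when creating the virtual server (pick one)
ipvsadm -A -t 12.0.0.1:80 -s rr     # rr  = round robin
ipvsadm -A -t 12.0.0.1:80 -s wrr    # wrr = weighted round robin
ipvsadm -A -t 12.0.0.1:80 -s lc     # lc  = least connections
ipvsadm -A -t 12.0.0.1:80 -s wlc    # wlc = weighted least connections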

1.5.2: High Availability Cluster

  • The goal is to improve the reliability of the application system, reduce interruption time as much as possible, and ensure the continuity of service, achieving a high availability (HA) fault-tolerance effect
  • The working modes of HA include duplex and master-slave
    • Duplex: two peers of equal status work together and can take over for each other at any time
    • Master-slave: one master and multiple slaves; this is called a centralized cluster
    • Decentralized mechanism: there is no real master (if there is one, it is only symbolic) and all nodes do the work (the Redis cluster is a typical decentralized mechanism)

1.5.3: High Performance Computing Cluster

  • The goal is to improve the CPU computing speed of the application system and to expand hardware resources and analysis capabilities, obtaining high-performance computing (HPC) capabilities comparable to those of large-scale computers and supercomputers
  • The high performance of an HPC cluster relies on "distributed computing" and "parallel computing": the CPU, memory, and other resources of multiple servers are integrated through dedicated hardware and software to achieve computing capabilities that normally only large computers and supercomputers possess

1.6: Three working modes of cluster load scheduling

1.6.1: NAT mode

Network Address Translation (NAT)

  • Referred to as NAT mode, it is similar to a firewall's private network structure. The load scheduler acts as the gateway for all server nodes, that is, it is both the access entrance for clients and the exit through which each node's responses return to clients (see the sketch below)
  • The server nodes use private IP addresses and are located on the same physical network as the load scheduler; security is better than in the other two modes

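To make the two key requirements of NAT mode concrete, here is a minimal, hedged sketch; the 192.168.100.0/24 network and the gateway address 192.168.100.1 are the ones used in the deployment section below, so adjust them to your own topology:

# On the load scheduler: enable IP forwarding so it can route between the two networks
echo 1 > /proc/sys/net/ipv4/ip_forward
# On every real server: point the default gateway at the scheduler's intranet address,
# so that replies to clients flow back out through the scheduler
ip route add default via 192.168.100.1      # or GATEWAY=192.168.100.1 in ifcfg-ensXX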

1.6.2: TUN mode

  • IP Tunnel
    • Referred to as TUN mode, it adopts an open network structure. The load scheduler is only used as the client's access entrance. Each node directly responds to the client through its own Internet connection without passing through the load scheduler.
    • The server nodes are scattered at different locations in the Internet, have independent public IP addresses, and communicate with the load scheduler through a dedicated IP tunnel


1.6.3: DR mode

  • Direct Routing

    • Referred to as DR mode, it adopts a semi-open network structure similar to that of TUN mode, but the nodes are not scattered across the Internet; instead they are located on the same physical network as the scheduler (see the sketch below)
    • The load scheduler is connected to each node server through the local network, so there is no need to establish dedicated IP tunnels

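As a hedged illustration of what DR mode typically requires on each real server (this is not part of the NAT deployment below), the VIP is bound to the loopback interface and ARP behaviour is restricted so that only the scheduler answers ARP requests for the VIP; 12.0.0.100 is a made-up example VIP:

# Bind the VIP to the loopback interface as a /32 host address
ip addr add 12.0.0.100/32 dev lo
# Stop the real server from answering ARP requests for the VIP
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
sysctl -w net.ipv4.conf.lo.arp_ignore=1
sysctl -w net.ipv4.conf.lo.arp_announce=2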

Differences between the working modes (from the real/node server's perspective):

  • Number of servers (nodes): NAT - low (10-20); TUN - high (around 100); DR - high (around 100)
  • Real server gateway: NAT - the load scheduler; TUN - its own router; DR - its own router
  • IP addresses: NAT - public + private; TUN - public; DR - private
  • Advantage: NAT - high security; TUN - data can be encrypted across WAN environments; DR - highest performance
  • Disadvantage: NAT - low efficiency, heavy pressure on the scheduler; TUN - requires tunnel support; DR - cannot cross LAN boundaries

Two: Deployment steps

1. Load the ip_vs module and install the ipvsadm tool

2. Enable IP routing forwarding

3. Create the LVS virtual server and add the node servers

4. Configure the node servers

5. Save the rules and test

BL host (load scheduler):

Use round-robin (rr) scheduling algorithm

  • LVS-master (BL host): intranet IP 192.168.100.1, Internet IP 12.0.0.1, software: ipvsadm
  • NFS: intranet IP 192.168.100.30, no Internet IP, software: nfs, rpcbind
  • WEB1: intranet IP 192.168.100.10, no Internet IP, software: httpd, nfs-utils
  • WEB2: intranet IP 192.168.100.20, no Internet IP, software: httpd, nfs-utils

2.1: Configure LVS server

# Load the ip_vs module and install the ipvsadm tool
[root@localhost ~]# modprobe ip_vs
[root@localhost ~]# cat /proc/net/ip_vs
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn
# Install the ipvsadm management tool
[root@localhost ~]# yum -y install ipvsadm
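
The modprobe above only loads ip_vs until the next reboot. As a hedged extra step (not in the original procedure), on a systemd-based system such as CentOS 7 the module can be loaded automatically at boot via a modules-load.d drop-in; the file name ip_vs.conf below is an arbitrary choice:

# Assumption: systemd reads /etc/modules-load.d/*.conf at boot
[root@localhost ~]# echo ip_vs > /etc/modules-load.d/ip_vs.conf
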
  • Configure dual network cards for the LVS server

  • Both network cards are set to host-only mode


# If the new network card is not displayed, start NetworkManager
[root@localhost ~]# systemctl start NetworkManager

[root@localhost ~]# cd /etc/sysconfig/network-scripts/
# Copy ifcfg-ens33 to create the new network card ens36
[root@localhost network-scripts]# cp -p ifcfg-ens33 ifcfg-ens36
# Edit ens33 (the intranet gateway address)

NAME=ens33
UUID=86503bd2-47b6-4518-8a5f-63e4de03d11e
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.100.1
PREFIX=24

# Configure ens36 as the external (Internet-facing) interface

NAME=ens36
DEVICE=ens36
ONBOOT=yes
IPADDR=12.0.0.1
NETMASK=255.255.255.0

# Restart the network service
[root@localhost network-scripts]# systemctl restart network
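
A quick, hedged sanity check that both interfaces came up with the expected addresses (the exact output depends on the machine):

# Confirm each interface carries the expected address
[root@localhost network-scripts]# ip addr show ens33    # expect 192.168.100.1/24
[root@localhost network-scripts]# ip addr show ens36    # expect 12.0.0.1/24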

2.2: web1 configuration

# Install the Apache service
[root@web1 ~]# yum -y install httpd

# The virtual machine's network mode is set to host-only
[root@web1 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33

DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.100.10
PREFIX=24
GATEWAY=192.168.100.1

# Restart the network service
[root@web1 ~]# systemctl restart network
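
Optionally (a hedged suggestion, not part of the original steps), enable httpd at boot and confirm that the default route points at the scheduler:

# Start httpd now and have it start at boot
[root@web1 ~]# systemctl start httpd
[root@web1 ~]# systemctl enable httpd
# The default route should go via the LVS intranet address 192.168.100.1
[root@web1 ~]# ip route show default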

2.3: web2 configuration

[root@web2 ~]# yum -y install httpd

# Configure the network card in the same way (IPADDR=192.168.100.20), then restart the network service
[root@web2 ~]# systemctl restart network

# Ping the gateway from both web servers to test connectivity
ping 192.168.100.1
# ... the connection is fine

2.4: Configure the NFS server

[root@nfs ~]# yum -y install rpcbind nfs-utils
  • Configure the network card in host-only mode
[root@nfs ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33

IPADDR=192.168.100.30
GATEWAY=192.168.100.1
NETMASK=255.255.255.0
PREFIX=24

# Restart the network service
[root@nfs ~]# systemctl restart network
  • Create the shared directories, start the services, and turn off the firewall
# Create two site directories
[root@nfs ~]# cd /opt
[root@nfs opt]# mkdir shuai tom
[root@nfs opt]# chmod 777 shuai/ tom/
# Edit the shared directory configuration
[root@nfs opt]# vim /etc/exports
# rw: read-write  sync: synchronous writes

/opt/shuai 192.168.100.0/24(rw,sync)
/opt/tom 192.168.100.0/24(rw,sync)

# Start the services and turn off the firewall
[root@nfs opt]# systemctl start rpcbind
[root@nfs opt]# systemctl start nfs
[root@nfs opt]# iptables -F
[root@nfs opt]# setenforce 0

# Publish the shares
[root@nfs opt]# exportfs -rv
exporting 192.168.100.0/24:/opt/tom
exporting 192.168.100.0/24:/opt/shuai
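
A hedged addition: to keep the NFS exports available after a reboot, enable the services as well (on CentOS 7 the NFS unit is nfs-server, for which nfs is an alias; service names may differ elsewhere):

# Enable the services at boot
[root@nfs opt]# systemctl enable rpcbind
[root@nfs opt]# systemctl enable nfs
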
  • Switch to web1 and web2 to mount the shared directory

# Check from web1
# Show the NFS export list
[root@web1 ~]# showmount -e 192.168.100.30
Export list for 192.168.100.30:
/opt/tom   192.168.100.0/24
/opt/shuai 192.168.100.0/24

# Check from web2
[root@web2 ~]# showmount -e 192.168.100.30
Export list for 192.168.100.30:
/opt/tom   192.168.100.0/24
/opt/shuai 192.168.100.0/24

# Mount the shared directory on web1
[root@web1 ~]# vim /etc/fstab 
# Add the following line
192.168.100.30:/opt/shuai       /var/www/html   nfs     defaults  0 0


# Mount everything listed in fstab
[root@web1 ~]# mount -a
# Check the mounts
[root@web1 ~]# df -Th
Filesystem                  Type      Size  Used Avail Use% Mounted on
..... (output omitted)
192.168.100.30:/opt/shuai   nfs4       50G  4.1G   46G    9% /var/www/html


# Mount the shared directory on web2

[root@web2 ~]# vim /etc/fstab 
# Add the following line
192.168.100.30:/opt/tom      /var/www/html   nfs     defaults        0 0
# Mount everything listed in fstab
[root@web2 ~]# mount -a

# Check the mounts
[root@web2 ~]# df -Th
Filesystem                  Type      Size  Used Avail Use% Mounted on
.... (output omitted)
192.168.100.30:/opt/tom     nfs4       50G  4.1G   46G    9% /var/www/html

Write home page information for web1 and web2

[root@web1 ~]# cd /var/www/html/
[root@web1 html]# vim index.html
# Write the home page content
<h1>hello boy</h1>

# Restart the service
[root@web1 html]# systemctl restart httpd

# Write the home page content for web2
[root@web2 html]# vim index.html

<h1>hello girl</h1>

# Restart the service
[root@web2 html]# systemctl restart httpd
  • View the home page information (a verification sketch follows below)

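As a hedged way to verify the pages without the original screenshots, each back end can be queried directly from the load scheduler (any host on the 192.168.100.0/24 network would do):

# Each real server should return its own page
[root@localhost ~]# curl -s http://192.168.100.10/
<h1>hello boy</h1>
[root@localhost ~]# curl -s http://192.168.100.20/
<h1>hello girl</h1>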

2.5: Configure NAT forwarding rules on the load scheduler

[root@localhost ~]# vim nat.sh

#!/bin/bash
echo "1" > /proc/sys/net/ipv4/ip_forward            # enable IP routing forwarding
ipvsadm -C                                          # clear the existing virtual server table
ipvsadm -A -t 12.0.0.1:80 -s rr                     # VIP (client access entry); -s selects the scheduling algorithm, rr = round robin
ipvsadm -a -t 12.0.0.1:80 -r 192.168.100.10:80 -m   # -m: NAT mode
ipvsadm -a -t 12.0.0.1:80 -r 192.168.100.20:80 -m
ipvsadm                                             # list the LVS rules now in effect

Explanation of the ipvsadm options:
-C: clear all records in the kernel virtual server table
-A: add a new virtual server
-t: the virtual server provides a TCP service
-s rr: use the round-robin scheduling algorithm
-a: add a new real server to a virtual server
-r: specify a real server
-m: set the LVS working mode to NAT (masquerading)
ipvsadm (no options): list the currently configured LVS rules
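
To cover the "save the rules" step from the deployment outline, a hedged sketch follows; on CentOS 7 the ipvsadm service restores rules from /etc/sysconfig/ipvsadm at boot, but paths and unit names may differ on other distributions:

# Dump the current rules so they can be restored after a reboot
[root@localhost ~]# ipvsadm-save > /etc/sysconfig/ipvsadm
# Enable the ipvsadm unit so the saved rules are reloaded at boot
[root@localhost ~]# systemctl enable ipvsadm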

# Grant execute permission
[root@localhost ~]# chmod +x nat.sh 
# Run the script
[root@localhost ~]# sh nat.sh 

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  12.0.0.1:80 rr
  -> 192.168.100.10:80            Masq    1      0          0         
  -> 192.168.100.20:80            Masq    1      0          0         
 
Note: the weight of each real server is automatically assigned as 1
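
If the back ends should not receive equal traffic, a hedged example of adjusting weights: switch the virtual server to weighted round robin with -E and edit an existing real server entry with -e and -w (the values below are illustrative):

# Change the virtual server to weighted round robin
[root@localhost ~]# ipvsadm -E -t 12.0.0.1:80 -s wrr
# Give web1 twice the weight of web2
[root@localhost ~]# ipvsadm -e -t 12.0.0.1:80 -r 192.168.100.10:80 -m -w 2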

2.6: Client access


Refreshing the page switches to the other back-end page in turn
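
To observe the round-robin behaviour from the scheduler side, a hedged check using ipvsadm's connection and statistics views:

# Numeric listing of the virtual server and its real servers
[root@localhost ~]# ipvsadm -Ln
# Per-connection view: requests should alternate between 192.168.100.10 and 192.168.100.20
[root@localhost ~]# ipvsadm -Lnc
# Packet and byte counters per real server
[root@localhost ~]# ipvsadm -Ln --stats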


Origin blog.csdn.net/weixin_47151643/article/details/108332532