LVS load balancing cluster theory + LVS-NAT deployment walkthrough

1. Understanding load balancing cluster principles

1.1 Overview of enterprise cluster applications

■ The meaning of cluster

  • A cluster (also called a server group or server farm)
  • Made up of multiple hosts, but presented to the outside as a single whole

■Problem

  • In Internet applications, as sites place ever higher demands on hardware performance, response speed, service stability, and data reliability, a single server can no longer keep up

■Solution

  • Use expensive minicomputers and mainframes
  • Use ordinary servers to build service clusters

1.2 Classification of enterprise clusters

■Based on the target the cluster is aimed at, clusters fall into three types

  • Load balancing cluster
  • Highly available cluster
  • High-performance computing cluster

■Load balancing cluster

  • The goal is to improve the application system's responsiveness, handle as many access requests as possible, and reduce latency, achieving high concurrency and high overall load-bearing (LB) performance.
  • LB load distribution depends on the scheduling algorithm of the master node.

■High availability cluster

  • The goal is to improve the application system's reliability, reduce interruption time as much as possible, and ensure service continuity, achieving the fault-tolerance effect of high availability (HA).
  • The working mode of HA includes duplex mode and master-slave mode.

■High-performance computing cluster

  • The goal is to increase the CPU computing speed of the application system and expand its hardware resources and analysis capabilities, obtaining high-performance computing (HPC) power comparable to mainframes and supercomputers.
  • High performance relies on "distributed computing" and "parallel computing": the CPU, memory, and other resources of multiple servers are integrated through dedicated hardware and software to achieve computing power that previously only mainframes and supercomputers had.

1.3 Load balancing cluster architecture

■Load balancing architecture

  • The first layer: the load scheduler (duplex or master-slave, chosen according to requirements)
  • The second layer: the server pool (selected according to customer needs)
  • The third layer: shared storage (provides consistent content to the server pool)

■Structure of load balancing

(Figure: structure of a load balancing cluster)

1.4 Analysis of the working mode of the load balancing cluster

■Load balancing cluster is currently the most commonly used cluster type in enterprises.
■The cluster's load scheduling technology has three working modes

  • Address translation
  • IP tunnel
  • Direct routing

1.4.1 NAT mode

■Address Translation

  • Network Address Translation, referred to as NAT mode.
  • Similar to a firewall's private network structure, the load scheduler acts as the gateway for all server nodes: it is the clients' access entrance and also the exit through which each node's responses return to the clients.
  • The server nodes use private IP addresses and sit on the same physical network as the load scheduler; this is more secure than the other two modes.

1.4.2 TUN mode

■IP tunnel

  • IP Tunnel, TUN mode for short.
  • With an open network structure, the load scheduler serves only as the clients' access entrance; each node responds to clients directly through its own Internet connection rather than passing back through the load scheduler.
  • The server nodes are scattered at different locations in the Internet, have independent public IP addresses, and communicate with the load scheduler through a dedicated IP tunnel.

1.4.3 DR mode

■Direct routing

  • Direct Routing, referred to as DR mode.
  • It adopts a semi-open network structure similar to TUN mode, but the nodes are not scattered across the Internet; they are located on the same physical network as the scheduler.
  • The load scheduler connects to each node server through the local network, so no dedicated IP tunnel is required. The forwarding mode is selected per real server with an ipvsadm flag, as sketched below.
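
The forwarding mode is not a global setting; it is chosen per real server when the node is added with ipvsadm. A minimal sketch, assuming a placeholder VIP of 20.0.0.1 and RIP of 192.168.0.10 (not part of the lab below):

ipvsadm -a -t 20.0.0.1:80 -r 192.168.0.10:80 -m   ## -m: NAT mode (masquerading)
ipvsadm -a -t 20.0.0.1:80 -r 192.168.0.10:80 -g   ## -g: DR mode (direct routing, the ipvsadm default)
ipvsadm -a -t 20.0.0.1:80 -r 192.168.0.10:80 -i   ## -i: TUN mode (IP tunneling)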

1.5 About LVS Virtual Server

■LVS load scheduling algorithms

  • Round Robin
    ◆Distributes incoming access requests in turn to each node (real server) in the cluster, treating every server equally regardless of its actual connection count and system load.
  • Weighted Round Robin
    ◆Distributes requests according to the weight values set by the scheduler; nodes with higher weights receive tasks first and are allocated more requests.
    ◆This ensures that servers with stronger performance carry more of the access traffic.
  • Least Connections
    ◆Distributes requests according to the number of connections established by each real server, directing incoming requests preferentially to the node with the fewest connections.
  • Weighted Least Connections
    ◆When server node performance differs widely, the weights can be adjusted automatically for the real servers.
    ◆Nodes with higher performance bear a larger share of the active connection load. The algorithm is selected with an ipvsadm option, as sketched below.
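
A minimal sketch of how the scheduling algorithm and node weights are chosen, using placeholder addresses (rr, wrr, lc and wlc map to the four algorithms above):

ipvsadm -A -t 20.0.0.1:80 -s rr                        ## create a virtual server using round robin
ipvsadm -E -t 20.0.0.1:80 -s wrr                       ## -E edits it, e.g. switch to weighted round robin
ipvsadm -E -t 20.0.0.1:80 -s lc                        ## least connections
ipvsadm -E -t 20.0.0.1:80 -s wlc                       ## weighted least connections
ipvsadm -a -t 20.0.0.1:80 -r 192.168.0.10:80 -m -w 3   ## -w gives this node weight 3 for the weighted algorithms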

1.6 Using the ipvsadm tool

■LVS clusters are created and managed in the following steps (sketched after the list)

  1. Create a virtual server
  2. Add and delete server nodes
  3. View cluster and node status
  4. Save load balancing strategy
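
A condensed, hedged sketch of these four tasks with placeholder addresses (the actual lab commands appear in section 2.3.1):

ipvsadm -A -t 20.0.0.1:80 -s rr                   ## 1. create a virtual server (VIP:port plus algorithm)
ipvsadm -a -t 20.0.0.1:80 -r 192.168.0.10:80 -m   ## 2. add a server node
ipvsadm -d -t 20.0.0.1:80 -r 192.168.0.10:80      ## 2. delete a server node
ipvsadm -ln                                       ## 3. view cluster and node status
ipvsadm-save -n > /etc/sysconfig/ipvsadm          ## 4. save the load balancing policy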

1.7 NFS shared storage service

■Network File System (NFS)

  • Relies on RPC (Remote Procedure Call)
  • Requires the nfs-utils and rpcbind packages
  • System services: nfs, rpcbind
  • Shared configuration file: /etc/exports

■Use NFS to publish shared resources

  • Install nfs-utils, rpcbind software packages
  • Set up a shared directory
  • Start the NFS service program
  • View the NFS shared directories published by the local machine (see the sketch below)
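
A condensed sketch of these steps, assuming a placeholder share /opt/share exported to the 192.168.100.0/24 network (the lab values are in section 2.3.2):

yum -y install nfs-utils rpcbind                               ## install the packages
mkdir -p /opt/share                                            ## set up the shared directory
echo '/opt/share 192.168.100.0/24(rw,sync)' >> /etc/exports    ## publish it in the exports file
systemctl start rpcbind nfs                                    ## start the services (rpcbind first)
showmount -e                                                   ## view the shares published by the local machine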

■Access NFS shared resources in the client

  • Install the rpcbind package and start the rpcbind service
  • Manually mount the NFS shared directory
  • Configure automatic mounting in fstab (see the sketch below)
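
And the client side, sketched with the same placeholder share and the lab's storage server address:

yum -y install nfs-utils rpcbind                               ## install the client packages
systemctl start rpcbind                                        ## start the rpcbind service
showmount -e 192.168.100.24                                    ## list the shares exported by the NFS server
mount 192.168.100.24:/opt/share /mnt                           ## manual (temporary) mount
echo '192.168.100.24:/opt/share /mnt nfs defaults,_netdev 0 0' >> /etc/fstab   ## auto mount at boot
mount -a                                                       ## apply fstab entries without rebooting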

2. LVS-NAT deployment

2.1 Case environment

The LVS scheduler acts as the gateway of the web server pool. It has two network cards, connected to the external and internal networks respectively, and uses the round-robin (rr) scheduling algorithm.

(Figure: experimental topology diagram)

2.2 Virtual machine configuration

Scheduler
External public network: 20.0.0.21 (NAT), service port 80, routing and forwarding enabled
Private network: 192.168.100.21 (VM1)

Web1
Private network: 192.168.100.22 (VM1), gateway 192.168.100.21

Web2
Private network: 192.168.100.23 (VM1), gateway 192.168.100.21

Storage
Private network: 192.168.100.24 (VM1), gateway 192.168.100.21

2.3 Experimental procedure

2.3.1 Scheduler configuration

  • The scheduler needs two network cards. After adding the second card to the virtual machine, configure it:
[root@localhost ~]# nmcli connection   ## check the UUID of the newly added NIC
NAME   UUID                                  TYPE      DEVICE 
ens33  b3fda006-674b-48c0-b672-594c67beffbd  ethernet  ens33  
ens37  f0f1e5c4-d10a-35b3-b6fe-1da1e9da397f  ethernet  ens37
[root@localhost ~]# cd /etc/sysconfig/network-scripts/
[root@localhost network-scripts]# cp ifcfg-ens33 ifcfg-ens37  ## copy the ens33 config to ens37
[root@localhost network-scripts]# vi ifcfg-ens37  ## change it to the following
NAME=ens37
UUID=f0f1e5c4-d10a-35b3-b6fe-1da1e9da397f
DEVICE=ens37
ONBOOT=yes
IPADDR=192.168.100.21
NETMASK=255.255.255.0
:wq  ## save and exit
[root@localhost network-scripts]# route -n
-bash: route: command not found     
  • Configure load scheduler SNAT forwarding rules
[root@localhost network-scripts]# yum install net-tools   ## install net-tools if the route command is missing
[root@localhost network-scripts]# route -n
Kernel IP routing table  
Destination          Gateway         Genmask         Flags   Metric   Ref  Use   Iface
20.0.0.0             0.0.0.0      255.255.255.0        U      100      0    0    ens33
192.168.100.0        0.0.0.0      255.255.255.0        U      101      0    0    ens37

Create the virtual server (note: NAT mode needs two network cards, and the scheduler address used here is the external interface address; if the ipvsadm tool is not installed yet, install it first with yum -y install ipvsadm).
The cluster VIP is 20.0.0.21, providing load distribution for TCP port 80 with the round-robin scheduling algorithm. For the load balancing scheduler, the VIP must be an IP address that is actually enabled on the local machine.
[root@localhost ~]# ipvsadm -A -t 20.0.0.21:80 -s rr 
[root@localhost ~]# ipvsadm -a -t 20.0.0.21:80 -r 192.168.100.22:80 -m  ## add a server node
[root@localhost ~]# ipvsadm -a -t 20.0.0.21:80 -r 192.168.100.23:80 -m
[root@localhost ~]# ipvsadm -ln   ## view node status; "-n" displays addresses and ports numerically (tested: -l and -L behave the same)
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  20.0.0.21:80 rr
  -> 192.168.100.22:80            Masq    1      0          0         
  -> 192.168.100.23:80            Masq    1      0          0  
[root@localhost ~]# ipvsadm-save > /opt/ipvsadm   ## save the policy
[root@localhost ~]# vi /etc/sysctl.conf   ## enable routing/forwarding on the scheduler
net.ipv4.ip_forward = 1  ## append as the last line
:wq  ## save and exit
[root@localhost ~]# sysctl -p   ## make the forwarding setting take effect
net.ipv4.ip_forward = 1

ipvsadm option reference:
-C: clear all records in the kernel virtual server table
-A: add a new virtual server
-t: the virtual server provides a TCP service (specify the VIP and port)
-s rr: use the round-robin scheduling algorithm
-a: add a new real server to a virtual server
-r: specify the RIP address and TCP port
-m: use NAT mode ("-g" is DR mode, "-i" is TUN mode)
ipvsadm: the tool that enables and manages the LVS function
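
The rules above live only in the kernel and are lost after a reboot. A hedged sketch of reloading them, assuming the policy was saved to /opt/ipvsadm as shown (on CentOS 7 the ipvsadm package also ships an ipvsadm.service unit that restores /etc/sysconfig/ipvsadm at boot):

[root@localhost ~]# ipvsadm-restore < /opt/ipvsadm             ## reload the saved policy manually
[root@localhost ~]# ipvsadm-save -n > /etc/sysconfig/ipvsadm   ## or save where the service unit expects it
[root@localhost ~]# systemctl enable ipvsadm                   ## so the rules come back automatically at boot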

2.3.2 Storage server configuration

[root@localhost ~]# rpm -q nfs-utils   ## this VM already has nfs-utils and rpcbind installed
nfs-utils-1.3.0-0.61.el7.x86_64
[root@localhost ~]# rpm -q rpcbind 
rpcbind-0.2.0-47.el7.x86_64
[root@localhost ~]# yum -y install nfs-utils rpcbind  ## install them if they are missing
[root@localhost ~]# mkdir /opt/51xit /opt/52xit    ## create the test page directories
[root@localhost ~]# ll /opt
total 0
drwxr-xr-x  2 root root 6 Sep 21 01:46 51xit
drwxr-xr-x  2 root root 6 Sep 21 01:46 52xit
drwxr-xr-x. 2 root root 6 Oct 30  2018 rh
[root@localhost ~]# echo 'this is 51xit.top' > /opt/51xit/index.html   ## write content into each test page
[root@localhost ~]# echo 'this is 52xit.top' > /opt/52xit/index.html
[root@localhost ~]# cat /opt/51xit/index.html    ## view the test pages
this is 51xit.top
[root@localhost ~]# cat /opt/52xit/index.html 
this is 52xit.top
[root@localhost ~]# vi /etc/exports    
/opt/51xit 192.168.100.0/24(rw,sync)   ## rw: read-write, sync: synchronous writes
/opt/52xit 192.168.100.0/24(rw,sync)
:wq  ## save and exit
[root@localhost ~]# systemctl restart nfs rpcbind   ## restart the services
[root@localhost ~]# systemctl enable nfs rpcbind   ## enable them at boot
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
[root@localhost ~]# showmount -e    ## the NFS shared directories are now exported
Export list for localhost.localdomain:
/opt/52xit 192.168.100.0/24
/opt/51xit 192.168.100.0/24
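
If /etc/exports is edited again later, the shares can be refreshed without restarting the services; a small aside:

[root@localhost ~]# exportfs -rv   ## re-export everything in /etc/exports and print the result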

2.3.3 Web server 1 configuration

[root@localhost ~]# yum -y install nfs-utils
[root@localhost ~]# showmount -e 192.168.100.24
Export list for 192.168.100.24:
/opt/52xit 192.168.100.0/24
/opt/51xit 192.168.100.0/24
[root@localhost ~]# yum -y install httpd   ## install httpd (the Apache web service)
[root@localhost ~]# systemctl start httpd   ## start httpd
[root@localhost ~]# systemctl enable httpd   ## enable httpd at boot
Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.
[root@localhost ~]# mount 192.168.100.24:/opt/51xit /var/www/html/  ## temporary mount (this node serves the 51xit page)
Enter 192.168.100.22 in a browser to test

Test success

[root@localhost ~]# vi /etc/fstab   ## configure the permanent mount
192.168.100.24:/opt/51xit /var/www/html nfs defaults,_netdev 0 0   ## _netdev marks the mount as requiring the network
[root@localhost ~]# init 6  ## reboot the VM
Now clear the browser cache and browse to 192.168.100.22 again to verify that the mount works
[root@localhost ~]# df -Th  ## df also shows whether the mount succeeded
192.168.100.24:/opt/51xit nfs4       82G  4.0G   78G   5% /var/www/html
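
Rebooting is not strictly required to verify the fstab entry; an alternative sketch:

[root@localhost ~]# umount /var/www/html   ## remove the temporary mount first
[root@localhost ~]# mount -a               ## mount everything listed in fstab, including the NFS entry
[root@localhost ~]# df -Th | grep 51xit    ## confirm the share is mounted on /var/www/html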

Test success

2.3.4 Web server 2 configuration

[root@localhost ~]# yum -y install nfs-utils
[root@localhost ~]# showmount -e 192.168.100.24
Export list for 192.168.100.24:
/opt/52xit 192.168.100.0/24
/opt/51xit 192.168.100.0/24
[root@localhost ~]# yum -y install httpd   ## install httpd
[root@localhost ~]# systemctl start httpd   ## start httpd
[root@localhost ~]# systemctl enable httpd   ## enable httpd at boot
Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.
[root@localhost ~]# mount 192.168.100.24:/opt/52xit /var/www/html/  ## temporary mount
Enter 192.168.100.23 in a browser to test

Test success


[root@localhost ~]# vi /etc/fstab   ## configure the permanent mount
192.168.100.24:/opt/52xit /var/www/html nfs defaults,_netdev 0 0   ## _netdev marks the mount as requiring the network
[root@localhost ~]# init 6  ## reboot the VM
Now clear the browser cache and browse to 192.168.100.23 again to verify that the mount works
[root@localhost ~]# df -Th  ## df also shows whether the mount succeeded
192.168.100.24:/opt/52xit nfs4       82G  4.0G   78G   5% /var/www/html

Test success


2.4 Verifying the experiment

  • Enter 20.0.0.21 in a browser; refreshing the page alternates between the two different web pages, showing that round-robin scheduling is working. The same check can be done from the command line, as sketched below.
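
The check can also be scripted from any host that can reach 20.0.0.21, assuming curl is installed; the responses should alternate between the two test pages:

[root@client ~]# for i in 1 2 3 4; do curl -s http://20.0.0.21/; done   ## hypothetical client host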

