Theory + experiment: LVS load balancing cluster

1. Principles of load balancing clusters

1.1 Overview of enterprise cluster applications

■ The meaning of cluster

  • Known as a cluster or server group; the terms are used interchangeably
  • Composed of multiple hosts, but presented externally as a single whole

■ Problem

  • In Internet applications, as sites demand ever more in hardware performance, response speed, service stability, and data reliability, a single server can no longer keep up

■ Solution

  • Use expensive minicomputers and mainframes
  • Use ordinary servers to build service clusters

1.2 Enterprise cluster classification-1

■ Depending on the cluster's objective, clusters fall into three types

  • Load balancing cluster
  • High-availability cluster (one active and one standby)
  • High performance computing cluster (ultra high performance cluster)

■ Load Balance Cluster

  • Aims to improve the application system's responsiveness and process as many access requests as possible with minimal latency, achieving high concurrency and high overall load-handling (LB) performance
  • How the load is distributed depends on the scheduling algorithm of the master (director) node

1.3 Enterprise cluster classification-2

■ High Availability Cluster

  • Aims to improve the reliability of the application system and reduce interruption time as much as possible, ensuring service continuity and achieving the fault-tolerance effect of high availability (HA)
  • HA working modes include duplex (active-active) and master-slave (active-standby)

■ High Performance Computing Cluster

  • Aims to improve the CPU computing speed of the application system and to expand its hardware resources and analysis capabilities, obtaining high-performance computing (HPC) power comparable to large-scale machines and supercomputers
  • The high performance of an HPC cluster relies on "distributed computing" and "parallel computing": the CPUs, memory, and other resources of multiple servers are integrated through dedicated hardware and software, achieving computing power that only mainframes and supercomputers used to have

1.4 Load balancing cluster architecture

■ Load balancing structure

  • The first layer, load scheduler (Load Balancer or Director)
  • The second layer, the server pool (Server Pool)
  • The third layer, shared storage (Share Storage)

2. Load balancing working modes

2.1 Analysis of load balancing cluster working mode

■ Load balancing clusters are currently the most commonly used cluster type in enterprises.
■ There are three working modes for cluster load scheduling technology

  • Address translation
  • IP tunnel
  • Direct routing

2.2 NAT mode

■ Address translation

  • Network Address Translation, referred to as NAT mode
  • Similar to a firewall's private network structure: the load scheduler acts as the gateway for all server nodes, serving both as the clients' access entrance and as each node's exit for responses to clients
  • The server nodes use private IP addresses and sit on the same physical network as the load scheduler; security is better than in the other two modes

2.3 TUN mode

■ IP tunnel

  • IP Tunnel, TUN mode for short
  • Uses an open network structure: the load scheduler serves only as the clients' access entrance, and each node responds to clients directly over its own Internet connection rather than passing back through the load scheduler
  • The server nodes are scattered across different locations on the Internet, have independent public IP addresses, and communicate with the load scheduler through dedicated IP tunnels

2.4 DR mode

■ Direct routing

  • Direct Routing, referred to as DR mode
  • Uses a semi-open network structure, similar in layout to TUN mode, except that the nodes are not scattered across the Internet but sit on the same physical network as the scheduler
  • The load scheduler connects to the node servers over the local network; no dedicated IP tunnels are needed

2.5 About LVS Virtual Server-1

■ LVS load scheduling algorithm

  • Round Robin
    • Distributes incoming access requests to the nodes (real servers) in the cluster in turn, treating every server equally regardless of its actual connection count and system load
  • Weighted Round Robin
    • Distributes requests according to the weight values set by the scheduler; nodes with higher weights receive requests first and are allocated more of them
    • Ensures that servers with stronger performance carry more of the access traffic
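
As a hedged illustration of how these weights are expressed in practice (the ipvsadm tool is introduced in 2.7; the addresses reuse the deployment plan from part 3, and the weights are hypothetical):

ipvsadm -A -t 20.0.0.6:80 -s wrr                          ### create a virtual service scheduled by weighted round robin
ipvsadm -a -t 20.0.0.6:80 -r 192.168.200.22:80 -m -w 3    ### stronger node, weight 3: gets roughly 3 of every 4 requests
ipvsadm -a -t 20.0.0.6:80 -r 192.168.200.23:80 -m -w 1    ### weaker node, weight 1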

2.6 About LVS Virtual Server-2

■ LVS load scheduling algorithm

  • Least Connections
    • Allocates requests based on the number of connections each real server has established, directing new requests to the node with the fewest current connections
  • Weighted Least Connections
    • When server nodes differ greatly in performance, the weights can be adjusted automatically for each real server
    • Nodes with higher performance carry a larger share of the active connection load
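
As a hedged sketch of the least-connections family, an existing virtual service can be switched to these algorithms with ipvsadm's edit option:

ipvsadm -E -t 20.0.0.6:80 -s lc     ### -E edits an existing virtual service; lc = least connections
ipvsadm -E -t 20.0.0.6:80 -s wlc    ### wlc = weighted least connections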

2.7 Using the ipvsadm tool

■ LVS cluster creation and management
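
The original figure is unavailable; as a rough substitute, these are the tool's common operations (a sketch based on ipvsadm's standard options, not an exhaustive list):

ipvsadm -A -t <VIP:port> -s <scheduler>     ### add a virtual service (-s: rr, wrr, lc, wlc, ...)
ipvsadm -E -t <VIP:port> -s <scheduler>     ### edit a virtual service
ipvsadm -D -t <VIP:port>                    ### delete a virtual service
ipvsadm -a -t <VIP:port> -r <RIP:port> -m   ### add a real server (-m NAT, -g direct routing, -i tunneling)
ipvsadm -d -t <VIP:port> -r <RIP:port>      ### delete a real server
ipvsadm -ln                                 ### list the current rules numerically
ipvsadm-save > file                         ### save the rules (restore with ipvsadm-restore < file)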

2.8 NFS Shared Storage Service-1

■ NFS (Network File System)

  • Relies on RPC (Remote Procedure Call)
  • Requires the nfs-utils and rpcbind packages
  • System services: nfs, rpcbind
  • Share configuration file: /etc/exports
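
For orientation, a minimal /etc/exports entry looks like this (the directory and subnet here are placeholders; the entries actually used in this article appear in part 3):

/opt/share 192.168.200.0/24(rw,sync)    ### share /opt/share read-write with the 192.168.200.0/24 subnet, committing writes synchronously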

2.9 NFS Shared Storage Service-2

■ Using NFS to publish shared resources (sketched below)

  • Install the nfs-utils and rpcbind packages
  • Set up the shared directories
  • Start the NFS service
  • View the NFS shares published by the local machine
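
The steps above map onto commands roughly as follows (a sketch; the exact sequence used in this article appears in part 3):

yum -y install nfs-utils rpcbind    ### step 1: install the packages
vi /etc/exports                     ### step 2: define the shared directories
systemctl restart rpcbind nfs       ### step 3: start (or restart) the services
showmount -e                        ### step 4: view the shares published by the local machine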

2.10 NFS Shared Storage Service-3

■ Accessing NFS shared resources from the client (sketched below)

  • Install the rpcbind package and start the rpcbind service
  • Manually mount the NFS shared directory
  • Configure automatic mounting in /etc/fstab
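
Sketched as commands (the server address 192.168.200.24 and the mount point are taken from the deployment in part 3):

yum -y install rpcbind nfs-utils                  ### install the client packages
systemctl start rpcbind                           ### start the rpcbind service
showmount -e 192.168.200.24                       ### list the shares the server publishes
mount 192.168.200.24:/opt/51xit /var/www/html/    ### manual (temporary) mount
### for automatic mounting at boot, add a line like this to /etc/fstab:
### 192.168.200.24:/opt/51xit /var/www/html nfs defaults,_netdev 0 0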

3. LVS-NAT deployment practice

IP address plan:
1. Scheduler
External public network: 20.0.0.6 (NAT), service port 80; enable routing/forwarding   ### no gateway needed
Private network: 192.168.200.21 (VM1)                                                 ### no gateway needed

2. WEB1
Private network: 192.168.200.22 (VM1), gateway 192.168.200.21

3. WEB2
Private network: 192.168.200.23 (VM1), gateway 192.168.200.21

4. Storage
Private network: 192.168.200.24 (VM1), gateway 192.168.200.21

Environment setup:
Disable the firewall, disable SELinux (core protection), and set up the yum repository.   ### All four machines need this environment; it was covered in an earlier post, so it is not repeated here.
Scheduler configuration:

### Add a second network adapter ###
[root@localhost ~]# nmcli connection                    ### look up the UUID of the newly added NIC
1b17e3d8-6882-3d70-8ab8-c14fa3796a5a                    ### note down the new UUID
[root@localhost ~]# cd /etc/sysconfig/network-scripts/  ### go to the network-scripts directory
[root@localhost network-scripts]# cp ifcfg-ens33 ifcfg-ens36    ### copy ifcfg-ens33 to create a new ifcfg-ens36
[root@localhost network-scripts]# vi ifcfg-ens36        ### edit the file: change every ens33 to ens36, set the UUID to the value noted above, and set the IP address to 192.168.200.21 plus a netmask; nothing else is needed
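
For reference, a hedged sketch of what the edited ifcfg-ens36 might contain (showing only the fields the step above changes; the remaining lines keep the values copied from ifcfg-ens33):

NAME=ens36
DEVICE=ens36
UUID=1b17e3d8-6882-3d70-8ab8-c14fa3796a5a    ### the UUID noted from nmcli connection
IPADDR=192.168.200.21
NETMASK=255.255.255.0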

[root@localhost ~]# yum -y install ipvsadm.x86_64   ### install the load-balancing rule management tool
[root@localhost ~]# modprobe ip_vs                  ### load the ip_vs kernel module
[root@localhost ~]# cat /proc/net/ip_vs             ### check that the module is loaded
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn

1. Create the virtual server
[root@localhost ~]# ipvsadm -A -t 20.0.0.6:80 -s rr   ### add a virtual service: -t takes the client-facing IP address and port, -s selects the scheduling algorithm, rr = round robin

2. Add the server nodes
[root@localhost ~]# ipvsadm -a -t 20.0.0.6:80 -r 192.168.200.22:80 -m  ### forward to the web1 server (-m = NAT mode)
[root@localhost ~]# ipvsadm -a -t 20.0.0.6:80 -r 192.168.200.23:80 -m  ### forward to the web2 server
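
A suggested check at this point (not part of the original steps): list the rules to confirm both nodes are registered; the output should look roughly like this:

[root@localhost ~]# ipvsadm -ln    ### list virtual services and real servers numerically
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  20.0.0.6:80 rr
  -> 192.168.200.22:80            Masq    1      0          0
  -> 192.168.200.23:80            Masq    1      0          0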

3. Save the LVS policy
[root@localhost ~]# ipvsadm-save > /opt/ipvsadm     ### save the current rules to /opt/ipvsadm
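
The saved file can later be fed back to the kernel, for example after a reboot (a usage sketch):

[root@localhost ~]# ipvsadm-restore < /opt/ipvsadm    ### reload the rules saved above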

4. Enable routing/forwarding on the scheduler
[root@localhost ~]# vi /etc/sysctl.conf
net.ipv4.ip_forward = 1                  ### append this line at the end
[root@localhost ~]# sysctl -p            ### reload so the change takes effect
net.ipv4.ip_forward = 1
Storage server configuration:
[root@localhost ~]# rpm -q nfs-utils           ### check whether nfs-utils is installed
package nfs-utils is not installed             ### not installed
[root@localhost ~]# yum -y install nfs-utils   ### install nfs-utils
[root@localhost ~]# rpm -q rpcbind             ### check whether rpcbind is installed
rpcbind-0.2.0-42.el7.x86_64                    ### already installed; if not, install it with: yum -y install rpcbind

[root@localhost ~]# mkdir /opt/51xit /opt/52xit                       ### create the two directories
[root@localhost opt]# echo 'this is 51xit' > /opt/51xit/index.html    ### create a test page in the 51xit directory
[root@localhost opt]# echo 'this is 52xit' > /opt/52xit/index.html    ### create a test page in the 52xit directory

[root@localhost opt]# vi /etc/exports      ### publish the shares
/opt/51xit 192.168.200.0/24(rw,sync)
/opt/52xit 192.168.200.0/24(rw,sync)

[root@localhost opt]# systemctl restart nfs rpcbind    ### restart the services
[root@localhost opt]# systemctl enable nfs rpcbind     ### enable them at boot

[root@localhost opt]# showmount -e      ### check the exports
Export list for localhost.localdomain:
/opt/52xit 192.168.200.0/24             ### these two are the directories just shared
/opt/51xit 192.168.200.0/24
WEB1 configuration:
[root@localhost ~]# rpm -q nfs-utils           ### check whether nfs-utils is installed
package nfs-utils is not installed             ### not installed
[root@localhost ~]# yum -y install nfs-utils   ### install nfs-utils
[root@localhost ~]# rpm -q rpcbind             ### check whether rpcbind is installed
rpcbind-0.2.0-42.el7.x86_64                    ### already installed; if not, install it with: yum -y install rpcbind

[root@localhost ~]# showmount -e 192.168.200.24   ### check that the server's shares are visible
Export list for 192.168.200.24:
/opt/52xit 192.168.200.0/24
/opt/51xit 192.168.200.0/24

[root@localhost ~]# yum -y install httpd          ### install the Apache service
[root@localhost ~]# systemctl restart httpd       ### restart Apache
[root@localhost ~]# systemctl enable httpd        ### enable Apache at boot

[root@localhost ~]# mount 192.168.200.24:/opt/51xit /var/www/html/   ### temporary mount
### Test: browsing to 192.168.200.22 should show the 51xit page ###

[root@localhost ~]# vi /etc/fstab    ### permanent mount
192.168.200.24:/opt/51xit /var/www/html nfs defaults,_netdev 0 0
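
A suggested check (not in the original): let mount process the new fstab entry right away; an error here means the line is malformed:

[root@localhost ~]# mount -a    ### mount everything listed in fstab that is not already mounted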
WEB2 configuration:
[root@localhost ~]# rpm -q nfs-utils           ### check whether nfs-utils is installed
package nfs-utils is not installed             ### not installed
[root@localhost ~]# yum -y install nfs-utils   ### install nfs-utils
[root@localhost ~]# rpm -q rpcbind             ### check whether rpcbind is installed
rpcbind-0.2.0-42.el7.x86_64                    ### already installed; if not, install it with: yum -y install rpcbind

[root@localhost ~]# showmount -e 192.168.200.24   ### check that the server's shares are visible
Export list for 192.168.200.24:
/opt/52xit 192.168.200.0/24
/opt/51xit 192.168.200.0/24

[root@localhost ~]# yum -y install httpd          ### install the Apache service
[root@localhost ~]# systemctl restart httpd       ### restart Apache
[root@localhost ~]# systemctl enable httpd        ### enable Apache at boot

[root@localhost ~]# mount 192.168.200.24:/opt/52xit /var/www/html/   ### temporary mount
### Test: browsing to 192.168.200.23 should show the 52xit page ###

[root@localhost ~]# vi /etc/fstab    ### permanent mount
192.168.200.24:/opt/52xit /var/www/html nfs defaults,_netdev 0 0
Test: enter 20.0.0.6 in a browser. The page shows 51xit; refresh after a moment and it changes to 52xit, which confirms that load balancing is working.
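
As an alternative to the browser test, a quick command-line check from a client on the 20.0.0.0/24 network (a sketch using curl):

for i in 1 2 3 4; do curl -s http://20.0.0.6/; done    ### with rr scheduling this should alternate between 'this is 51xit' and 'this is 52xit'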

