LVS Load Balancing Clusters: Hands-on Deployment of LVS-NAT!

1. Enterprise cluster classification

  • Clusters can be divided into three types according to their goals:
    ● Load balancing cluster
    ● High availability cluster
    ● High performance computing cluster

1.1 Load Balancing Cluster

  • The goal is to improve the responsiveness of the application system: handle as many access requests
    as possible with low latency, and obtain high concurrency and high overall load capacity (LB)
  • How an LB cluster distributes load depends on the scheduling algorithm of the master node

1.2 High Availability Cluster

  • The goal is to improve the reliability of the application system: minimize interruption time, ensure the
    continuity of the service, and achieve the fault tolerance of high availability (HA)
  • HA works in either duplex (active-active) or master-slave (active-standby) mode

1.3 High-Performance Computing Cluster

  • The goal is to increase the computing speed of the application system and to expand its hardware
    resources and analysis capability, obtaining high-performance computing (HPC) power comparable
    to that of mainframes and supercomputers
  • High performance relies on "distributed computing" and "parallel computing":
    the CPU, memory, and other resources of multiple servers are integrated through dedicated hardware
    and software to achieve computing power that once only mainframes and supercomputers had

2. Working modes of a load balancing cluster

  • Load balancing clusters are currently the most commonly used cluster type in enterprises
  • Cluster load scheduling technology has three working modes:
    ● Address translation (NAT mode)
    ● IP tunnel (TUN mode)
    ● Direct routing (DR mode)

NAT mode

● The structure resembles a firewalled private network: the load scheduler acts as the gateway of all
server nodes, serving both as the clients' access entrance and as the exit through which each node
responds to clients
● Server nodes use private IP addresses and sit on the same physical network as the load scheduler,
which makes this mode more secure than the other two

TUN mode

● IP Tunnel, TUN mode for short
● Uses an open network structure: the load scheduler serves only as the clients' access entrance,
and each node responds to clients directly through its own Internet connection rather than going
back through the load scheduler
● Server nodes are scattered at different locations on the Internet, have independent public IP
addresses, and communicate with the load scheduler through a dedicated IP tunnel

DR mode

● Direct Routing, DR mode for short
● Uses a semi-open network structure, similar to TUN mode, except that the nodes are not scattered
around the Internet but sit on the same physical network as the scheduler
● The load scheduler and the node servers are connected through the local network, so there is no
need to establish a dedicated IP tunnel

3. Hands-on deployment of LVS-NAT!

Prepare the environment:

Server                 System      Address                                            Role
Scheduler (ipvsadm)    CentOS 7.6  20.0.0.25 (public) / 192.168.100.25 (intranet)     Scheduling and NAT translation for load balancing
web1                   CentOS 7.6  192.168.100.26                                     Web service
web2                   CentOS 7.6  192.168.100.27                                     Web service
NFS storage            CentOS 7.6  192.168.100.28                                     NFS shared storage

Notes:
The basic environment of every server is already set up, so we can proceed (yum repositories
configured; firewalld and SELinux turned off).
Scheduler: no gateway configured; it acts as the default route!
web1: gateway set to the scheduler's intranet IP!
web2: gateway set to the scheduler's intranet IP!
NFS storage: gateway set to the scheduler's intranet IP!
After configuring the network, ping every host to make sure the network is fully connected!
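To script that connectivity check, a sweep like the following can be run from the scheduler (the backend addresses come from the environment table above; this is a sketch, adjust the list to your own topology):

```shell
# Ping each backend once from the scheduler; -W 1 caps each wait at one second
for ip in 192.168.100.26 192.168.100.27 192.168.100.28; do
    if ping -c 1 -W 1 "$ip" >/dev/null 2>&1; then
        echo "$ip reachable"
    else
        echo "$ip UNREACHABLE -- check NIC config and gateway"
    fi
done
```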

Scheduler configuration:

Note first:
The scheduler server has two network cards:
NAT: 20.0.0.25
VM1: 192.168.100.25
Add the second network card and configure both interfaces before anything else! (The dual-NIC
configuration itself is not shown here.)

1. Load the ip_vs module and install the ipvsadm tool

yum -y install ipvsadm              #install the ipvsadm tool

ipvsadm -v                          #check the installed ipvsadm version

modprobe ip_vs                      #load the ip_vs module

cat /proc/net/ip_vs                 #confirm the module is loaded

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn
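If you'd rather script the module check than eyeball `lsmod` output, a tiny helper along these lines works (`check_module` is a hypothetical function written for this tutorial, not part of ipvsadm; the sample lsmod lines below are illustrative, not captured from a real host):

```shell
# check_module NAME LSMOD_OUTPUT -> prints "loaded" or "missing"
check_module() {
    # lsmod lists one module per line, name first, so anchor on "name<space>"
    printf '%s\n' "$2" | grep -q "^$1 " && echo loaded || echo missing
}

# Illustrative lsmod output (made up for this example)
sample="ip_vs_rr 12600 1
ip_vs 145458 3 ip_vs_rr"

check_module ip_vs "$sample"      # prints "loaded"
```

On the scheduler itself you would call `check_module ip_vs "$(lsmod)"`.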

2. Create the LVS virtual server and add the node server entries

ipvsadm -A -t 20.0.0.25:80 -s rr                                          
ipvsadm -a -t 20.0.0.25:80 -r 192.168.100.26:80 -m         
ipvsadm -a -t 20.0.0.25:80 -r 192.168.100.27:80 -m

ipvsadm -ln             #view the rules just added

ipvsadm-save >/opt/ipvsadm      #save the rules to the chosen file, effectively publishing them; they can be reloaded later with ipvsadm-restore

(Notes:
# -A creates a new virtual server; -t specifies the VIP address and TCP port; -s specifies the load
   scheduling algorithm, here round robin (rr)!!
# -a adds a real server to the virtual server; -r specifies the RIP address and TCP port; -m selects the
   NAT cluster mode (-g is DR mode (direct routing), -i is TUN mode (IP tunnel))
# -m can also be followed by -w to set a weight, which is not done here (a weight of 0 pauses the node))
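As a sketch of the -w option mentioned in the notes (hypothetical weights, reusing the same VIP and RIPs as above), a weighted round-robin setup might look like this:

```shell
# Weighted round robin: -s wrr, then give web1 twice web2's share of requests
ipvsadm -A -t 20.0.0.25:80 -s wrr
ipvsadm -a -t 20.0.0.25:80 -r 192.168.100.26:80 -m -w 2
ipvsadm -a -t 20.0.0.25:80 -r 192.168.100.27:80 -m -w 1

# Setting the weight to 0 later pauses a node without deleting it:
#   ipvsadm -e -t 20.0.0.25:80 -r 192.168.100.27:80 -m -w 0
```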





3. Enable IP forwarding

vi /etc/sysctl.conf     #edit the config file,
add
net.ipv4.ip_forward = 1
and save

sysctl -p               #apply the change to enable IP forwarding
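If you want forwarding active immediately without editing the file first, `sysctl -w` sets it at runtime (the /etc/sysctl.conf entry is still needed for the setting to survive a reboot):

```shell
sysctl -w net.ipv4.ip_forward=1     # takes effect at once, but is not persistent
cat /proc/sys/net/ipv4/ip_forward   # reads back the live value; 1 = forwarding on
```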


Configure NFS storage sharing server

1. Install nfs-utils and rpcbind

yum -y install nfs-utils               #required, otherwise NFS filesystems cannot be recognized
yum -y install rpcbind

2. Create the shared test directories and web page files

mkdir  /opt/as1   /opt/as2

echo 'this is as1' >/opt/as1/index.html         #write some data identifying web1
echo 'this is as2' >/opt/as2/index.html         #write some data identifying web2

3. Publish the shared directories
vi /etc/exports                  #add the shared directories to this config file, which publishes them

/opt/as1 192.168.100.0/24(rw,sync)
/opt/as2 192.168.100.0/24(rw,sync)

                                         #restart the services and enable them at boot
systemctl restart nfs      
systemctl restart rpcbind
systemctl enable nfs
systemctl enable rpcbind


showmount -e                  #view the directories currently being shared
Export list for localhost.localdomain:
/opt/as2 192.168.100.0/24
/opt/as1 192.168.100.0/24

web1 server

1. Install the nfs-utils and rpcbind services

yum -y install nfs-utils               #required, otherwise NFS filesystems cannot be recognized
yum -y install rpcbind

2. View the NFS storage server's shares; the NFS server's address must be given
showmount -e 192.168.100.28
Export list for 192.168.100.28:
/opt/as2 192.168.100.0/24
/opt/as1 192.168.100.0/24

3. Install the Apache web server; we'll simply install it with yum

yum  -y install httpd

systemctl restart httpd           #start the httpd service
systemctl enable httpd 

4. Mount the test page from the NFS share onto Apache's html directory

mount 192.168.100.28:/opt/as1/   /var/www/html/

vi /etc/fstab           #add to the config file:
192.168.100.28:/opt/as1  /var/www/html/  nfs defaults,_netdev 0 0

init 6  #reboot to verify that the setup comes back correctly!
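If you'd rather not reboot just to test the fstab entry, it can be exercised in place (same paths as above; run as root on web1):

```shell
umount /var/www/html     # drop the manual mount first
mount -a                 # mount everything listed in /etc/fstab, including the NFS entry
df -h /var/www/html      # the filesystem shown should be 192.168.100.28:/opt/as1
```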

web2 server

1. Install the nfs-utils and rpcbind services

yum -y install nfs-utils               #required, otherwise NFS filesystems cannot be recognized
yum -y install rpcbind

2. View the NFS storage server's shares; the NFS server's address must be given
showmount -e 192.168.100.28
Export list for 192.168.100.28:
/opt/as2 192.168.100.0/24
/opt/as1 192.168.100.0/24

3. Install the Apache web server; we'll simply install it with yum

yum  -y install httpd

systemctl restart httpd           #start the httpd service
systemctl enable httpd 

4. Mount the test page from the NFS share onto Apache's html directory

mount 192.168.100.28:/opt/as2/   /var/www/html/

vi /etc/fstab           #add to the config file:
192.168.100.28:/opt/as2  /var/www/html/  nfs defaults,_netdev 0 0

init 6  #reboot to verify that the setup comes back correctly!

verification:

1. First verify that the test web pages and the shared NFS directories are correct.
2. Browse to the public address 20.0.0.25 and check whether the web pages are served in round-robin fashion.
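One way to script that round-robin check from a client on the public-side network (the VIP and the page contents come from the setup above):

```shell
# Hit the VIP several times; with rr scheduling the replies should alternate
# between "this is as1" and "this is as2"
responses=$(for i in 1 2 3 4; do curl -s http://20.0.0.25/; done)
echo "$responses"
echo "$responses" | grep -q 'this is as1' && \
echo "$responses" | grep -q 'this is as2' && \
echo "both backends answered: round robin OK"
```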


Origin blog.csdn.net/weixin_47320286/article/details/108712131