LVS Load Balancing Clusters Explained

This post covers the following topics:
1. Cluster types
2. The layered structure of a load balancing cluster
3. Load balancing operation modes
4. LVS load scheduling algorithms
5. Basic commands related to LVS
6. The ipvsadm management tool
7. Building an NFS shared storage server

The clustering functionality required differs from one production environment to another, and so do the technical details involved. The basic concepts of clustering technology are as follows:

1. Cluster types

Whatever its type, a cluster contains at least two node servers, appears to the outside world as a single whole, and exposes only one access entrance (a domain name or an IP address), as if it were one large computer. Depending on the goal a cluster is built for, clusters fall into the following three types:

  • Load balancing cluster (LB): aims to improve the responsiveness of the application system, handle as many access requests as possible with as little delay as possible, and achieve high concurrency and high overall throughput. For example, "DNS round robin", "application-layer switching" and "reverse proxying" can all be used to build load balancing clusters. Load distribution in an LB cluster depends on the scheduling algorithm of the master node, which spreads access requests from clients across multiple server nodes and thereby relieves the load pressure on the system as a whole.
  • High availability cluster (HA): aims to improve the reliability of the application system and reduce downtime as much as possible, ensuring continuity of service and achieving the fault-tolerance effect of high availability. "Failover", "hot standby" and "multi-machine hot standby" all belong to high availability clustering technology. HA works in two modes, duplex and master/slave. In duplex mode all nodes are online at the same time; in master/slave mode only the master node is online, and when it fails a slave node takes over automatically, similar in principle to HSRP on Cisco routers.

  • High-performance computing cluster (HPC): aims to increase the CPU processing speed of the application system and expand its hardware resources and analytical capability, obtaining computing power equivalent to that of a large supercomputer. For example, "cloud computing" and "grid computing" can be regarded as kinds of HPC. The high performance of an HPC cluster relies on "distributed computing" and "parallel computing": dedicated hardware and software integrate the CPU, memory and other resources of many servers, achieving computing power that previously only large supercomputers could provide.

Different types of clusters can be combined according to actual needs, for example a highly available load balancing cluster.

2. The layered structure of a load balancing cluster

[Figure: the three-layer structure of a typical load balancing cluster]

The figure above shows a typical load balancing cluster, which consists of three layers. The role of each layer is as follows:

  • First layer: the load scheduler. This is the sole entrance to the entire cluster system; externally, all servers share a common VIP (virtual IP) address, also known as the cluster IP. Two schedulers, primary and backup, are usually deployed in hot standby to ensure high availability.

  • Second layer: the server pool. The application services carried by the cluster (for example HTTP or FTP) are provided by the server pool. Each node has its own RIP (real IP) address and only processes the client requests distributed to it by the scheduler. When a node fails, the fault-tolerance mechanism of the load scheduler isolates it, and the node is put back into the pool once the fault is resolved.

  • Third layer: shared storage. It provides stable, consistent file access for all nodes in the server pool and keeps the whole cluster unified. In a Linux/UNIX environment, shared storage can be a NAS device or a dedicated server providing NFS (Network File System) shares.

3. Load balancing operation modes

[Figure: LVS load balancing cluster operation modes]

  • NAT mode: a structure similar to a firewall in front of a private network. The load scheduler acts as the gateway for all server nodes, serving both as the entrance through which clients access the cluster and as the exit through which the nodes respond to clients. The server nodes use private IP addresses and sit on the same physical network as the load scheduler. Security is better than in the other two modes, but the pressure on the load scheduler is high.

  • TUN mode: an open network structure. The load scheduler serves only as the clients' access entrance; each node responds to clients directly over its own Internet connection instead of going back through the scheduler. The server nodes are scattered across different locations on the Internet, each with its own public IP address, and communicate with the load scheduler through dedicated IP tunnels.

  • DR mode: a semi-open network structure similar to TUN mode, except that the nodes are not scattered across different locations; they sit on the same physical network as the scheduler. The load scheduler and the node servers are connected over the local network, so no dedicated IP tunnels are needed (a real-server configuration sketch follows this list).
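
In DR mode each real server must hold the VIP on a non-ARP interface so that it can accept packets addressed to the VIP while only the scheduler answers ARP for it. A minimal sketch of the usual real-server settings, assuming the VIP 200.0.0.1 used later in this post (run on each node, not on the scheduler):

[root@localhost ~]# ip addr add 200.0.0.1/32 dev lo label lo:0            #bind the VIP to the loopback interface
[root@localhost ~]# echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore       #answer ARP only for addresses configured on the receiving interface
[root@localhost ~]# echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce     #never announce the VIP as a source address in ARP requests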

LVS (Linux Virtual Server) is a load balancing project developed for the Linux kernel. Its official site is http://www.linuxvirtualserver.org/, where the related technical documentation can be found. LVS has since become part of the Linux kernel; it is compiled as the ip_vs module by default and can be loaded automatically when needed.

4. LVS load scheduling algorithms

Round robin (rr): distributes incoming access requests to the nodes in the cluster (the real servers) in turn, treating every server equally regardless of its actual number of connections or its system load.

Weighted round robin (wrr): distributes incoming requests in turn according to the processing capacity of the real servers. The scheduler can automatically query the load of each node and adjust its weight dynamically, so that servers with stronger processing capacity carry more of the traffic.

Least connections (lc): distributes requests according to the number of connections each real server has already established, giving new requests to the node with the fewest connections first. If all the server nodes have similar performance, this approach balances the load better.

Weighted least connections (wlc): when the performance of the server nodes differs significantly, the weight of each real server can be adjusted automatically, and nodes with higher weights take a larger share of the active connection load.
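
The algorithm is selected with the -s option of ipvsadm when the virtual server is defined (see section 6 below). As a small sketch, reusing the VIP and node addresses from the examples that follow, an existing entry can be switched to another algorithm or re-weighted in place:

[root@localhost ~]# ipvsadm -E -t 200.0.0.1:80 -s wlc                          #-E edits the virtual server, here switching it to weighted least connections
[root@localhost ~]# ipvsadm -e -t 200.0.0.1:80 -r 192.168.1.2:80 -m -w 3       #-e edits a real server entry, here raising its weight to 3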

5. Basic commands related to LVS

The ip_vs module is not loaded by default; it can be loaded with the following commands:

[root@localhost ~]# modprobe ip_vs                 #load the ip_vs module
[root@localhost ~]# lsmod | grep ip_vs             #check whether the ip_vs module is loaded
ip_vs                 141432  0 
nf_conntrack          133053  8 ip_vs,nf_nat,nf_nat_ipv4,......
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack
[root@localhost ~]# modprobe -r ip_vs             #remove the ip_vs module
[root@localhost ~]# lsmod | grep ip_vs
[root@localhost ~]# modprobe ip_vs
[root@localhost ~]# cat /proc/net/ip_vs            #view ip_vs version information
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn
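
In practice the module is also loaded automatically the first time ipvsadm installs a rule. If you prefer to load it explicitly at every boot, one option on systemd-based systems (a sketch, not required for the examples below) is to register it with modules-load.d:

[root@localhost ~]# echo "ip_vs" > /etc/modules-load.d/ip_vs.conf         #systemd-modules-load will then load ip_vs at boot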

6. Using the ipvsadm management tool

ipvsadm is the LVS cluster management tool used on the load scheduler. It works through the ip_vs module to add and remove server nodes and to view the running state of the cluster.

[root@localhost ~]# yum -y install ipvsadm               #install the ipvsadm tool
[root@localhost ~]# ipvsadm -v                           #check the ipvsadm version
ipvsadm v1.27 2008/5/15 (compiled with popt and IPVS v1.2.1)

1) Create a virtual server with ipvsadm:

If the cluster's VIP address is 200.0.0.1 and load distribution is provided for TCP port 80 with the round robin (rr) scheduling algorithm, the corresponding command is shown below. For the load scheduler, the VIP must be an IP address actually configured on the local machine:

[root@localhost ~]# ipvsadm -A -t 200.0.0.1:80 -s rr

<!--In the command above, the -A option adds a virtual server, -t specifies the VIP address and TCP port,
and -s specifies the load scheduling algorithm: round robin (rr), weighted round robin (wrr),
least connections (lc) or weighted least connections (wlc).
To use persistent connections, also add the "-p 60" option, where 60 is the persistence time in seconds.-->
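
For example, a short sketch that adds a 60-second persistence timeout to the virtual server created above, so that requests from the same client keep going to the same node during that window:

[root@localhost ~]# ipvsadm -E -t 200.0.0.1:80 -s rr -p 60            #-E edits the existing entry; -p 60 keeps each client on one real server for 60 s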

2) Add server nodes:

Add four server nodes, with IP addresses 192.168.1.2 through 192.168.1.5, to the virtual server 200.0.0.1:

[root@localhost ~]# ipvsadm -a -t 200.0.0.1:80 -r 192.168.1.2:80 -m -w 1
[root@localhost ~]# ipvsadm -a -t 200.0.0.1:80 -r 192.168.1.3:80 -m -w 1
[root@localhost ~]# ipvsadm -a -t 200.0.0.1:80 -r 192.168.1.4:80 -m -w 1
[root@localhost ~]# ipvsadm -a -t 200.0.0.1:80 -r 192.168.1.5:80 -m -w 1
<!--In the commands above, the -a option adds a real server, -t specifies the VIP address and TCP port,
-r specifies the RIP (real IP) address and TCP port, -m selects NAT cluster mode
(-g selects DR mode and -i selects TUN mode), and -w sets the weight (a weight of 0 suspends the node).-->
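
Because -m (NAT mode) is used here, the scheduler has to forward packets between the public and the private network, and the real servers' default gateway must point at the scheduler's private address. A minimal sketch of the director-side kernel setting (the gateway part depends on your own network layout):

[root@localhost ~]# echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf      #enable IP forwarding on the scheduler
[root@localhost ~]# sysctl -p                                               #apply the change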

[root@localhost ~]# ipvsadm -ln                #view node status
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  200.0.0.1:80 rr
  -> 192.168.1.2:80               Masq    1      0          0         
  -> 192.168.1.3:80               Masq    1      0          0         
  -> 192.168.1.4:80               Masq    1      0          0         
  -> 192.168.1.5:80               Masq    1      0          0         
<!--In the output above, Masq in the Forward column stands for masquerade (address masquerading),
which means the cluster mode is NAT; if it shows Route, the cluster mode is DR.-->
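
Besides -ln, ipvsadm can also show traffic and connection details, which is useful for confirming that requests really are being distributed. A brief sketch:

[root@localhost ~]# ipvsadm -ln --stats      #cumulative connection/packet/byte counters per virtual server and node
[root@localhost ~]# ipvsadm -ln --rate       #current connection, packet and byte rates
[root@localhost ~]# ipvsadm -lnc             #list the individual connection entries currently tracked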

3) Delete a server node:

To delete a node from the server pool, use the -d option. A delete operation must specify the target, including the node address and the virtual IP address. For example, the following command deletes node 192.168.1.5 from the LVS cluster 200.0.0.1:

[root@localhost ~]# ipvsadm -d -r 192.168.1.5:80 -t 200.0.0.1:80

To delete an entire virtual server, use the -D option and specify only the virtual IP address; no node needs to be given. For example, running "ipvsadm -D -t 200.0.0.1:80" deletes this virtual server.

4) Save the load distribution policy:

The export/import tools ipvsadm-save and ipvsadm-restore can save and restore the LVS policy (after a reboot the policy has to be imported again).

[root@localhost ~]# hostname lvs         #change the hostname
<!--If the hostname is the default localhost, the VIP address is written out as 127.0.0.1 when the policy is exported;
importing such a file later would leave the load scheduler unable to work properly.-->
[root@localhost ~]# bash                 #make the new hostname take effect immediately
[root@lvs ~]# ipvsadm-save > /etc/sysconfig/ipvsadm.bak                 #save the policy
[root@lvs ~]# cat /etc/sysconfig/ipvsadm.bak                            #confirm the saved result
-A -t 200.0.0.1:http -s rr
-a -t 200.0.0.1:http -r 192.168.1.2:http -m -w 1
-a -t 200.0.0.1:http -r 192.168.1.3:http -m -w 1
-a -t 200.0.0.1:http -r 192.168.1.4:http -m -w 1
[root@lvs ~]# ipvsadm -C                  #clear the current policy
[root@lvs ~]# ipvsadm -ln                 #confirm that the current cluster policy has been cleared
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
[root@lvs ~]# ipvsadm-restore < /etc/sysconfig/ipvsadm.bak     #import the policy just backed up
[root@lvs ~]# ipvsadm -ln                                      #verify that the cluster policy was imported successfully
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  200.0.0.1:80 rr
  -> 192.168.1.2:80               Masq    1      0          0         
  -> 192.168.1.3:80               Masq    1      0          0         
  -> 192.168.1.4:80               Masq    1      0          0         
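
On CentOS 7 the ipvsadm package also ships an ipvsadm.service unit that restores rules from /etc/sysconfig/ipvsadm at boot, so an alternative way to make the policy persistent (a sketch, assuming that package layout) is to save it to that exact path and enable the service:

[root@lvs ~]# ipvsadm-save -n > /etc/sysconfig/ipvsadm     #-n writes numeric addresses and ports instead of names
[root@lvs ~]# systemctl enable ipvsadm                     #the unit restores the saved rules at every boot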

7. Building an NFS shared storage server

NFS is a network file system protocol carried over TCP/IP. With NFS, a client machine can access resources on a remote server as if they were local directories. For most load balancing clusters, sharing data storage through NFS is common practice, and NFS is also a protocol that NAS storage devices invariably support.

Publishing shared resources with NFS:

1) Install the required packages:

[root@localhost ~]# yum -y install nfs-utils rpcbind                 #install the required packages
[root@localhost ~]# systemctl enable nfs                             #start NFS automatically at boot
[root@localhost ~]# systemctl enable rpcbind                         #start rpcbind automatically at boot

2) Set up the shared directory:

[root@localhost ~]# mkdir -p /opt/wwwroot                           #create the directory to be shared
[root@localhost ~]# vim /etc/exports                                #edit the NFS configuration file (empty by default)

/opt/wwwroot  192.168.1.0/24(rw,sync,no_root_squash)

<!--In the configuration above, "192.168.1.0/24" is the client address allowed to access the share;
it may be a host name, an IP address or a network segment, and the wildcards * and ? are allowed.
Among the permission options, rw allows read and write access (ro is read-only).
sync means synchronous writes: without it, data a client writes to the mounted share is first kept
in the client's cache rather than written straight to the shared directory; with sync there is no such
caching and the data goes directly to the shared directory.
no_root_squash grants local root privileges when the client accesses the share as root
(the default is root_squash, which treats root as the nfsnobody user); without no_root_squash,
access may be downgraded so that reading and writing (rw) are no longer possible.-->

To share the same directory with different clients under different permissions, simply list several "client(options)" entries separated by spaces, as follows:

[root@localhost ~]# vim /etc/exports   
/var/ftp/pub  192.168.2.1(ro,sync) 192.168.2.3(rw,sync)

3) Restart the NFS services:

[root@localhost ~]# systemctl restart rpcbind
[root@localhost ~]# systemctl restart nfs
[root@localhost ~]# netstat -anpt | grep rpc
tcp        0      0 0.0.0.0:43759           0.0.0.0:*               LISTEN      76336/rpc.statd     
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      76307/rpcbind       
tcp        0      0 0.0.0.0:20048           0.0.0.0:*               LISTEN      76350/rpc.mountd    
tcp6       0      0 :::111                  :::*                    LISTEN      76307/rpcbind       
tcp6       0      0 :::20048                :::*                    LISTEN      76350/rpc.mountd    
tcp6       0      0 :::38355                :::*                    LISTEN      76336/rpc.statd     
[root@localhost ~]# showmount -e                      #view the NFS shares published by this machine
Export list for localhost.localdomain:
/opt/wwwroot 192.168.1.0/24
/var/ftp/pub 192.168.2.3,192.168.2.1
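
When /etc/exports is modified later, the export list can also be refreshed without restarting the whole service. A quick sketch:

[root@localhost ~]# exportfs -rv     #re-export everything in /etc/exports and show what changed
[root@localhost ~]# exportfs -v      #list the currently exported directories and their options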

4) Access the NFS share from a client:

The goal of the NFS protocol is to provide a network file system, so an NFS share is also accessed with the mount command, using the file system type nfs. It can be mounted manually or added to the fstab configuration file so that it is mounted automatically at boot. For the sake of network stability in the cluster, it is best to connect the NFS server and its clients over a dedicated network.

1. Install the rpcbind package and start the rpcbind service. To be able to use the showmount query tool, install nfs-utils as well:

[root@localhost ~]# yum -y install nfs-utils rpcbind 
[root@localhost ~]# systemctl enable rpcbind
[root@localhost ~]# systemctl start rpcbind

2. Query which directories the NFS server shares:

[root@localhost ~]# showmount -e 192.168.1.1               #the address of the server to query must be specified
Export list for 192.168.1.1:
/opt/wwwroot 192.168.1.0/24
/var/ftp/pub 192.168.2.3,192.168.2.1

3. Mount the NFS share manually and configure it to mount automatically at boot:

[root@localhost ~]# mount 192.168.1.1:/opt/wwwroot /var/www/html      #mount the share locally
[root@localhost ~]# df -hT /var/www/html                              #check whether the mount succeeded
Filesystem               Type  Size  Used Avail Use% Mounted on
192.168.1.1:/opt/wwwroot nfs4   17G  6.2G   11G  37% /var/www/html
[root@localhost ~]# vim /etc/fstab                #configure automatic mounting
                 .........................
192.168.1.1:/opt/wwwroot    /var/www/html     nfs     defaults,_netdev    0    0
<!--the file system type is nfs; the _netdev mount option marks the device as requiring network access-->
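
To check the fstab entry without rebooting, unmount the share and let mount -a pick it up again (a quick sketch):

[root@localhost ~]# umount /var/www/html          #unmount the manually mounted share
[root@localhost ~]# mount -a                      #mount everything listed in /etc/fstab
[root@localhost ~]# df -hT /var/www/html          #confirm the NFS share is mounted again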

Once the mount is complete, accessing the /var/www/html folder on the client is equivalent to accessing the /opt/wwwroot folder on the NFS server; the network mapping is completely transparent to user programs.
