LVS Load Balancing Cluster (Theory + Hands-on NAT Deployment)

Preface

In today's Internet applications, sites place ever-higher demands on hardware performance, response speed, service stability, and data reliability, and a single server can hardly bear all of the access traffic. Besides buying expensive mainframes and dedicated load-balancing equipment, companies have another option: build a cluster, that is, combine multiple relatively inexpensive ordinary servers so that they provide the same service from a single external address.

1. LVS cluster application basics

The name comes from the English word "cluster", meaning a group or a bunch. In the server field it refers to a collection of many servers, as distinguished from a single server.

1.1 Overview of cluster technology

Depending on the actual environment of the enterprise, the functions a cluster provides differ, and the technical details used may each have their merits. From an overall point of view, however, it is necessary to first understand some common characteristics of clusters, so that the work of building and maintaining a cluster can be done with a clear picture rather than blindly.

1.1.1 Types of clusters

(1) Load Balance Cluster
• Aims to improve the responsiveness of the application system, handle as many access requests as possible, and reduce latency, so as to obtain high-concurrency, high-load (LB) overall performance
• The load distribution of an LB cluster depends on the distribution algorithm of the master node; this distribution algorithm is called scheduling

(2) High Availability Cluster (HA)
• Aims to improve the reliability of the application system and reduce interruption time as much as possible, ensuring continuity of service and achieving the fault-tolerance effect of high availability (HA).
HA clusters work in two modes: duplex and master-slave
• Duplex: two nodes work in parallel and can take over from each other at any time
• Master-slave: one master and multiple slaves, also called a centralized cluster
• Decentralized mechanism: there is no real master (if one exists, it is only symbolic) and all nodes do work (a Redis cluster is a typical decentralized mechanism)

(3) High Performance Computing Cluster (HPC)
• Aims to increase the CPU computing speed of the application system and expand its hardware resources and analysis capabilities, obtaining high performance computing (HPC) power equivalent to that of mainframes and supercomputers
• The high performance of such a cluster relies on "distributed computing" and "parallel computing": the CPU, memory, and other resources of multiple servers are integrated through dedicated hardware and software, achieving computing power that otherwise only mainframes and supercomputers possess.

1.1.2 The layered structure of load balancing

• First layer: load balancer (Load Balancer or Director)
• Second layer: server pool (Server Pool)
• Third layer: shared storage (Share Storage)

1.1.3 Load balancing working modes

Load balancing clusters are currently the most commonly used cluster type in enterprises. The cluster's load scheduling technology has three working modes:
• Address translation (NAT)
• IP tunneling (TUN)
• Direct routing (DR)

(1) NAT mode
Network Address Translation, abbreviated as NAT mode, is similar to the private network structure of a firewall. The load scheduler acts as the gateway of all server nodes, serving both as the access entrance for clients and as the egress through which each node's responses return to the client. The server nodes use private IP addresses and sit on the same physical network as the load scheduler, so security is better than with the other two modes.
(2) TUN mode
IP Tunnel, referred to as TUN mode, adopts an open network structure. The scheduler serves only as the client's access entrance, and each node responds to the client directly through its own Internet connection instead of going back through the load scheduler. The server nodes are scattered at different locations on the Internet, have independent public IP addresses, and communicate with the load scheduler through dedicated IP tunnels.
(3) DR mode
Direct Routing, referred to as DR mode, adopts a semi-open network structure similar to that of TUN mode, except that the nodes are not scattered in various places; they are located on the same physical network as the scheduler. The scheduler connects to each node server through the local network, so there is no need to establish dedicated IP tunnels.

1.2 LVS virtual server

• Linux Virtual Server
• A load balancing solution for the Linux kernel
• Created by Dr. Zhang Wensong in China in May 1998

[root@localhost~]# modprobe ip_vs   '//confirm kernel support for LVS'
[root@localhost~]# cat /proc/net/ip_vs
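
As an extra sanity check (not part of the original steps, just a common way to confirm the module is present), you can also list the loaded modules:

[root@localhost~]# lsmod | grep ip_vs          # shows ip_vs and any scheduler sub-modules currently loaded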

1.2.1 LVS load scheduling algorithms

(1) Round Robin (rr)
Distributes incoming access requests to each node (real server) in the cluster in turn, treating every server equally regardless of its actual number of connections and system load.

(2) Weighted Round Robin (wrr)
Distributes incoming access requests in turn according to the processing capacity of each real server. The scheduler can automatically query the load of each node and dynamically adjust its weight, so that servers with stronger processing capacity take a larger share of the traffic.

(3) Least Connections (lc)
Allocates requests according to the number of connections established on each real server, sending incoming access requests preferentially to the node with the fewest connections.

(4) Weighted Least Connections (wlc)
When the performance of server nodes varies greatly, the weights can be adjusted automatically for each real server; nodes with higher weights bear a greater proportion of the active connection load.
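
For reference, each algorithm corresponds to a scheduler name passed to ipvsadm with the -s option. A minimal sketch (ipvsadm is installed in section 2.3.1; the address 20.0.0.21:80 is just the example VIP used later in this article):

[root@localhost ~]# ipvsadm -A -t 20.0.0.21:80 -s rr    # create a virtual service using round robin
[root@localhost ~]# ipvsadm -E -t 20.0.0.21:80 -s wrr   # -E edits the existing service: weighted round robin
[root@localhost ~]# ipvsadm -E -t 20.0.0.21:80 -s lc    # least connections
[root@localhost ~]# ipvsadm -E -t 20.0.0.21:80 -s wlc   # weighted least connections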

1.2.2 Using the ipvsadm management tool

• Create a virtual server
• Add or delete server nodes
• View the status of the cluster and its nodes
• Save (export) the load distribution policy
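
A rough cheat sheet of the ipvsadm options that correspond to these tasks (the options are standard ipvsadm switches; VIP and RIP below are only placeholders):

[root@localhost ~]# ipvsadm -A -t VIP:80 -s rr           # create a virtual server
[root@localhost ~]# ipvsadm -a -t VIP:80 -r RIP:80 -m    # add a server node (-d removes one)
[root@localhost ~]# ipvsadm -ln                          # view cluster and node status
[root@localhost ~]# ipvsadm-save > /opt/ipvsadm          # save the load distribution policy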

2. LVS-NAT deployment in practice

2.1. Experimental environment

VMware software
One CentOS 7 machine as the LVS gateway / dispatch (scheduler) server with dual network cards; its IP addresses are 192.168.100.21 and 20.0.0.21

Two CentOS 7 machines as Apache servers, with IP addresses 192.168.100.22 and 192.168.100.23; the gateway on both must point to 192.168.100.21

One CentOS 7 machine as NFS storage, with IP address 192.168.100.24

Note: feel free to leave a comment if you run into problems with the IP address or network configuration

2.2. Experimental purpose

The real (host) machine visits the website at 20.0.0.21; through NAT address translation, the requests are polled in turn to the Apache1 and Apache2 hosts. An NFS network file storage service is built, and load balancing is tested.

2.3. Experimental process

2.3.1 Install ipvsadm (192.168.100.21)

[root@localhost ~]# yum -y install ipvsadm
[root@localhost ~]# ipvsadm -v
ipvsadm v1.27 2008/5/15 (compiled with popt and IPVS v1.2.1)
[root@localhost ~]# modprobe ip_vs

2.3.2 Create a virtual server

The cluster's VIP is the address clients access, 20.0.0.21, and a load-splitting service is provided for TCP port 80 using the round-robin (rr) scheduling algorithm. For the load balancing scheduler, the VIP must be an IP address actually enabled on the machine, here 20.0.0.21 (its other address, 192.168.100.21, faces the server nodes).

[root@localhost ~]# ipvsadm -A -t 20.0.0.21:80 -s rr

2.3.3 Add server nodes

[root@localhost ~]# ipvsadm -a -t 20.0.0.21:80 -r 192.168.100.22:80 -m
[root@localhost ~]# ipvsadm -a -t 20.0.0.21:80 -r 192.168.100.23:80 -m
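
To confirm that the virtual service and its two real servers were registered (a routine check, not part of the original write-up), list the rules with numeric addresses; the output should look roughly like this:

[root@localhost ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  20.0.0.21:80 rr
  -> 192.168.100.22:80            Masq    1      0          0
  -> 192.168.100.23:80            Masq    1      0          0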

2.3.4 Save the LVS policy

[root@localhost ~]# ipvsadm-save > /opt/ipvsadm
[root@localhost ~]# cat /opt/ipvsadm 
-A -t localhost.localdomain:http -s rr
-a -t localhost.localdomain:http -r 192.168.100.22:http -m -w 1
-a -t localhost.localdomain:http -r 192.168.100.23:http -m -w 1
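
The saved file can later be loaded back (for example after clearing the table or rebooting); a minimal sketch using the standard companion tool:

[root@localhost ~]# ipvsadm -C                         # clear the current virtual server table (optional)
[root@localhost ~]# ipvsadm-restore < /opt/ipvsadm     # reload the saved policy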

2.3.5 Enable packet forwarding on the scheduler

[root@localhost ~]# vi /etc/sysctl.conf 
net.ipv4.ip_forward = 1   '//add this line'
[root@localhost ~]# sysctl -p
net.ipv4.ip_forward = 1
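
A quick way to double-check that forwarding is really on (a simple sanity check, not in the original steps):

[root@localhost ~]# cat /proc/sys/net/ipv4/ip_forward
1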

2.3.6 Storage server configuration (192.168.100.24)

First check whether nfs-utils and rpcbind are installed; if not, install them with yum, then start the two services.

[root@localhost ~]# systemctl start nfs
[root@localhost ~]# systemctl start rpcbind
[root@localhost ~]# mkdir /opt/51xit /opt/52xit
[root@localhost ~]# vi /etc/exports
/opt/51xit 192.168.100.0/24(rw,sync)
/opt/52xit 192.168.100.0/24(rw,sync)
[root@localhost ~]# systemctl restart rpcbind
[root@localhost ~]# systemctl restart nfs
[root@localhost ~]# systemctl enable nfs
[root@localhost ~]# systemctl enable rpcbind
[root@localhost ~]# echo "this is www.51xit.top" > /opt/51xit/index.html
[root@localhost ~]# echo "this is www.52xit.top" > /opt/52xit/index.html

2.3.7 Configure 192.168.100.22 and 192.168.100.23

Turn off the firewall and SELinux (core protection) on both servers, and check whether nfs-utils is installed. The steps below are performed first on 192.168.100.22 (which mounts /opt/51xit) and then repeated on 192.168.100.23 (which mounts /opt/52xit).

[root@localhost ~]# showmount -e 192.168.100.24
Export list for 192.168.100.24:
/opt/52xit 192.168.100.0/24
/opt/51xit 192.168.100.0/24
[root@localhost ~]# yum -y install httpd
[root@localhost ~]# mount 192.168.100.24:/opt/51xit /var/www/html/
[root@localhost ~]# vi /etc/fstab 

#
# /etc/fstab
# Created by anaconda on Thu Aug  6 12:23:03 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=a1c935eb-f211-43a5-be35-2a9fef1f6a89 /boot                   xfs     defaults        0 0
/dev/mapper/centos-swap swap                    swap    defaults        0 0
/dev/cdrom /mnt iso9660 defaults 0 0
192.168.100.24:/opt/51xit/ /var/www/html/ nfs defaults,_netdev 0 0
[root@localhost ~]# systemctl start httpd

Test that the page loads correctly in a browser; it should display the content of /opt/51xit/index.html.
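
You can also check from the command line on the web node itself (assuming curl is available and httpd is already running):

[root@localhost ~]# curl http://localhost
this is www.51xit.top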

[root@localhost ~]# showmount -e 192.168.100.24
Export list for 192.168.100.24:
/opt/52xit 192.168.100.0/24
/opt/51xit 192.168.100.0/24
[root@localhost ~]# yum -y install httpd
[root@localhost ~]# mount 192.168.100.24:/opt/52xit /var/www/html/
[root@localhost ~]# vi /etc/fstab 

#
# /etc/fstab
# Created by anaconda on Thu Aug  6 12:23:03 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=a1c935eb-f211-43a5-be35-2a9fef1f6a89 /boot                   xfs     defaults        0 0
/dev/mapper/centos-swap swap                    swap    defaults        0 0
/dev/cdrom /mnt iso9660 defaults 0 0
192.168.100.24:/opt/52xit/ /var/www/html/ nfs defaults,_netdev 0 0
[root@localhost ~]# systemctl start httpd

Test that the page loads correctly in a browser; it should display the content of /opt/52xit/index.html.

2.3.8 Verification

Enter 20.0.0.21 in the browser of the real (host) machine and refresh the page. Because the round-robin (polling) algorithm is used, successive visits should alternate between the two websites; if refreshing does not switch between them, clear the browser cache and try again.
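
A quick command-line alternative (not from the original article) is to poll the VIP a few times and then watch the counters on the scheduler:

# from the real machine (or any client that can reach 20.0.0.21)
for i in 1 2 3 4; do curl -s http://20.0.0.21; done    # should alternate between the 51xit and 52xit pages

# on the scheduler
[root@localhost ~]# ipvsadm -ln                        # ActiveConn/InActConn counters show requests spread across both real servers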


Source: blog.csdn.net/weixin_48191211/article/details/108709612