LVS load balancing cluster [hands-on deployment of NAT-mode LVS load balancing]

1. The meaning of cluster

A cluster (also called a group) is composed of multiple hosts, but externally it appears as a single whole, providing only one access entry (a domain name or IP address). It is equivalent to one large computer.

1. The need for clusters

In Internet applications, as sites place ever higher demands on hardware performance, response speed, service stability, and data reliability, a single server can no longer meet the requirements for load balancing and high availability.

2. The solution

  • Use expensive minicomputers and mainframes.
  • Use multiple relatively inexpensive ordinary servers to build a service cluster.
    By integrating multiple servers, LVS achieves high availability and load balancing while providing the same services externally from a single IP address.
    This is the cluster technology most commonly used in enterprises: LVS (Linux Virtual Server).

2. Clusters can be divided into three types

Depending on its target, a cluster falls into one of three types:

  • Load balancing cluster
  • Highly available cluster
  • High performance cluster

Detailed explanation:

  1. Load Balance Cluster
    Aims to improve the responsiveness of the application system, handle as many access requests as possible, and reduce latency, achieving high concurrency and high load (LB) overall performance.
    Load distribution in an LB cluster relies on the distribution algorithm of the master node, which spreads client access requests across multiple server nodes, thereby relieving the load pressure of the system as a whole.

  2. High Availability Cluster
    Aims to improve the reliability of the application system, minimize interruption time, ensure service continuity, and achieve the fault-tolerance effect of high availability (HA).
    HA works in either duplex or master-slave mode. In duplex mode all nodes are online at the same time; in master-slave mode only the master node is online, but a slave node automatically takes over as master when a failure occurs.
    Examples: "failover", "dual-machine hot standby", and so on.

  3. High Performance Computing Cluster
    Aims to increase the CPU computing speed of the application system and expand its hardware resources and analysis capabilities, obtaining high performance computing (HPC) power equivalent to that of large computers and supercomputers.
    High performance relies on "distributed computing" and "parallel computing": the CPU, memory, and other resources of multiple servers are integrated through dedicated hardware and software to achieve computing capabilities that normally only large computers and supercomputers possess. Examples: "cloud computing", "grid computing", and so on.

3. Load balancing cluster architecture

The load balancing architecture consists of three layers:

The first layer, the load scheduler (Load Balancer or Director), is the single access entrance of the entire cluster system. Externally it uses a VIP address shared by all servers, also known as the cluster IP address. Usually a main scheduler and a backup scheduler are configured for hot backup; when the main scheduler fails, traffic is smoothly taken over by the backup scheduler to ensure high availability.

The second layer, the server pool (Server Pool), carries the application services provided by the cluster. Each node has an independent RIP (real IP) address and only processes the client requests distributed to it by the scheduler. When a node fails temporarily, the load scheduler's fault-tolerance mechanism isolates it; once the error is cleared, the node is added back into the server pool.

The third layer, shared storage (Share Storage), provides stable and consistent file access services for all nodes in the server pool, ensuring data consistency across the whole cluster. Shared storage can be a NAS device or a dedicated server providing NFS shares.

4. Analysis of load balancing cluster working modes

1. Load balancing clusters are currently the most commonly used cluster type in enterprises.
2. There are 3 working modes for cluster load scheduling technology.

  • Address translation (NAT mode)
  • IP tunnel (TUN mode)
  • Direct routing (DR mode)

5. The three load scheduling working modes

(1) NAT mode

Address translation
● Network Address Translation, NAT mode for short
● Uses a firewall-like private network structure. The load scheduler acts as the gateway of all server nodes, serving both as the clients' access entrance and as each node's exit when responding to clients
● The server nodes use private IP addresses and sit on the same physical network as the load scheduler; security is better than in the other two modes

(2) TUN mode

IP tunnel
● IP Tunnel, TUN mode for short
● Uses an open network structure. The load scheduler serves only as the clients' access entrance; each node responds to clients directly through its own Internet connection instead of going back through the load scheduler
● The server nodes are scattered across different locations on the Internet, have independent public IP addresses, and communicate with the load scheduler through dedicated IP tunnels

(3) DR mode

Direct routing
● Direct Routing, DR mode for short
● Uses a semi-open network structure, similar to the TUN mode structure, but the nodes are not scattered in various places; instead they sit on the same physical network as the scheduler
● The load scheduler and the node servers are connected through the local network, so there is no need to set up dedicated IP tunnels

6. LVS virtual server

1. Linux Virtual Server

  • A load balancing solution developed for the Linux kernel
  • Created by Dr. Zhang Wensong in China in May 1998
  • Official website: http://www.linuxvirtualserver.org/
  • LVS is essentially a virtualization application based on IP addresses; it offers an efficient load balancing solution based on IP address and content request distribution

2. LVS has become part of the Linux kernel, compiled by default as the ip_vs module and loaded automatically when needed. On CentOS 7, the following commands manually load the ip_vs module and show the version of ip_vs in the current kernel.

modprobe ip_vs
cat /proc/net/ip_vs    # confirm kernel support for LVS
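
Besides the core ip_vs module, each scheduling algorithm ships as its own kernel module (ip_vs_rr, ip_vs_wrr, and so on). A minimal sketch for loading all of them at once, assuming the standard CentOS 7 module path; skip it if you prefer to let the kernel load the algorithms on demand:

for mod in $(ls /usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs | grep -o "^[^.]*")
do
    /sbin/modprobe "$mod"      # loads ip_vs_rr, ip_vs_wrr, ip_vs_lc, ip_vs_wlc, etc.
done
lsmod | grep ip_vs             # confirm which ip_vs modules are now loaded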

7. LVS load scheduling algorithms

1. Round Robin
● Distributes incoming access requests to each node (real server) in the cluster in turn, treating every server equally, regardless of its actual number of connections and system load

2. Weighted Round Robin
● Distributes requests according to the weight values set by the scheduler. Nodes with higher weights receive tasks first and are allocated more requests, ensuring that servers with stronger performance bear more of the access traffic

3. Least Connections
● Allocates requests according to the number of connections established on each real server, giving priority to the node with the fewest connections

4. Weighted Least Connections
● When the performance of server nodes differs greatly, the weight of each real server can be adjusted automatically; nodes with higher performance bear a larger share of the active connection load
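
In ipvsadm these four algorithms are selected with the -s option as rr, wrr, lc, and wlc. A hedged sketch, reusing the VIP 12.0.0.1:80 from the experiment below; the weight value is purely illustrative:

ipvsadm -A -t 12.0.0.1:80 -s rr        # create a virtual service using round robin
ipvsadm -E -t 12.0.0.1:80 -s wrr       # switch the existing service to weighted round robin
ipvsadm -E -t 12.0.0.1:80 -s lc        # least connections
ipvsadm -E -t 12.0.0.1:80 -s wlc       # weighted least connections
ipvsadm -a -t 12.0.0.1:80 -r 192.168.90.20:80 -m -w 3     # give a stronger node weight 3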

8. Experimental configuration

(1) Configure NFS sharing service (server IP: 192.168.90.70)

systemctl stop firewalld.service
systemctl disable firewalld.service
setenforce 0

yum -y install nfs-utils rpcbind

systemctl start rpcbind.service
systemctl start nfs.service

systemctl enable nfs.service
systemctl enable rpcbind.service

mkdir /opt/lfp
mkdir /opt/accp

chmod 777 /opt/lfp 
chmod 777 /opt/accp

vim /etc/exports
/usr/share *(ro,sync)
/opt/lfp 192.168.90.0/24(rw,sync)
/opt/accp 192.168.90.0/24(rw,sync)

exportfs -rv
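
As an optional sanity check, the exported shares can be verified on the NFS server itself:

showmount -e localhost         # /opt/lfp and /opt/accp should appear in the export list
rpcinfo -p | grep nfs          # confirm the NFS service is registered with rpcbind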


(2) Web1 node configuration (server IP: 192.168.90.20)

systemctl stop firewalld.service
systemctl disable firewalld.service
setenforce 0

yum install httpd -y
systemctl start httpd.service
systemctl enable httpd.service

yum -y install nfs-utils rpcbind
showmount -e 192.168.90.70

systemctl start rpcbind
systemctl enable rpcbind

mount.nfs 192.168.90.70:/opt/lfp /var/www/html
echo 'this is lfp web!' > /var/www/html/index.html
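
Since in NAT mode the scheduler acts as the gateway of all node servers (see section 5), the web node's default gateway should point at the scheduler's internal address, and the NFS mount can be made persistent. A minimal sketch, assuming the node's internal NIC is ens33:

vim /etc/sysconfig/network-scripts/ifcfg-ens33
GATEWAY=192.168.90.10          # the scheduler's internal IP becomes the node's default gateway

systemctl restart network

vim /etc/fstab
192.168.90.70:/opt/lfp  /var/www/html  nfs  defaults,_netdev  0 0    # remount automatically after reboot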


(3) Web2 node configuration (server IP: 192.168.90.50)

systemctl stop firewalld.service
systemctl disable firewalld.service
setenforce 0

yum install httpd -y
systemctl start httpd.service
systemctl enable httpd.service

yum -y install nfs-utils rpcbind
showmount -e 192.168.90.70

systemctl start rpcbind
systemctl enable rpcbind

mount.nfs 192.168.90.70:/opt/accp /var/www/html
echo 'this is accp web!' > /var/www/html/index.html
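
An optional quick check that can be run on each web node:

curl http://localhost          # each node should return its own test page
df -h /var/www/html            # confirm the NFS share is mounted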

(4) Configure the load scheduler (internal gateway ens33: 192.168.90.10, external gateway ens37: 12.0.0.1)

Dual network card configuration

  • In the VM settings, add a second network adapter in NAT mode
  • Run ifconfig to find the name of the new network card (shown here as ensXX)
  • In /etc/sysconfig/network-scripts, copy the ifcfg-ens33 file and rename it to match the second network card
  • Finally, modify the configuration of the second network card and restart the network (see the sketch below)
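
A minimal sketch of what the second network card's file might contain, assuming it shows up as ens37 and carries the external address 12.0.0.1 from the topology above (adjust the device name to whatever ifconfig actually reports):

vim /etc/sysconfig/network-scripts/ifcfg-ens37
TYPE=Ethernet
BOOTPROTO=static
NAME=ens37
DEVICE=ens37
ONBOOT=yes
IPADDR=12.0.0.1
NETMASK=255.255.255.0

systemctl restart network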

Configure SNAT forwarding rules

systemctl stop firewalld.service
systemctl disable firewalld.service
setenforce 0

Method 1
vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
Method 2
echo '1' > /proc/sys/net/ipv4/ip_forward

sysctl -p
iptables -t nat -F
iptables -F
iptables -t nat -A POSTROUTING -s 192.168.90.0/24 -o ens37 -j SNAT --to-source 12.0.0.1
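
Both settings can be verified on the scheduler (optional):

sysctl net.ipv4.ip_forward                        # should print net.ipv4.ip_forward = 1
iptables -t nat -nL POSTROUTING --line-numbers    # the SNAT rule should be listed here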

Load LVS kernel module

modprobe ip_vs					# load the ip_vs module
cat /proc/net/ip_vs				# view ip_vs version information


Install the ipvsadm management tool

yum -y install ipvsadm

The load distribution policy must be saved before starting the service
ipvsadm-save > /etc/sysconfig/ipvsadm
or
ipvsadm --save > /etc/sysconfig/ipvsadm

systemctl start ipvsadm.service

Configure the load distribution policy (in NAT mode this only needs to be configured on the load scheduler; the node servers need no special configuration)

ipvsadm -C 					# clear the existing policy
ipvsadm -A -t 12.0.0.1:80 -s rr
ipvsadm -a -t 12.0.0.1:80 -r 192.168.90.20:80 -m
ipvsadm -a -t 12.0.0.1:80 -r 192.168.90.50:80 -m
ipvsadm						# enable the policy

ipvsadm -ln					# view node status; Masq indicates NAT mode
ipvsadm-save > /etc/sysconfig/ipvsadm						# save the policy
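
To verify that round robin scheduling works, a hedged sketch of a client-side test, assuming a client on the external 12.0.0.0/24 network that uses the scheduler's external address as its gateway:

curl http://12.0.0.1           # repeat several times; the two test pages should alternate
curl http://12.0.0.1

ipvsadm -ln --stats            # on the scheduler: per-real-server traffic counters
ipvsadm -lnc                   # on the scheduler: the current connection table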


Origin blog.csdn.net/weixin_51468875/article/details/112981063