LVS load balancing cluster (NAT mode)

LVS works at layer 4 of the network stack (the transport layer).

What is a cluster?

Meaning:

1. A cluster is also called a group or server farm.

2. It consists of multiple hosts, but externally it presents itself as a single whole with only one access entrance (one domain name or IP address), as if it were one large computer.

In today's Internet applications, sites place ever-higher demands on hardware performance, response speed, service stability, and data reliability, and a single server can no longer meet the requirements for load balancing and high availability. There are usually two solutions:

Vertical scaling: add CPUs and memory, or move to expensive minicomputers and mainframes.

Horizontal scaling: build a service cluster out of multiple relatively cheap commodity servers.

Vertical scaling always hits an upper limit, because the number of CPU and memory slots is finite, so horizontal scaling is used far more often: multiple servers are combined, LVS provides high availability and load balancing, and the cluster offers the same service externally through a single shared IP address (usually called the floating IP, or VIP for short).

Enterprise cluster taxonomy

Clusters can be divided into three types according to their goals:

1. Load-balancing cluster: reduces response latency and improves concurrent processing capacity.

The goal is to improve the responsiveness of the application system, handle as many access requests as possible, and minimize latency, achieving high concurrency and high load capacity overall (LB).

Load distribution in an LB cluster relies on the distribution algorithm running on the master node, which spreads client access requests across multiple server nodes and so relieves the load pressure on the whole system. Examples include DNS round robin and reverse proxies.

2. High-availability cluster: improves system stability, shortens service interruption time, and reduces losses.

The goal is to improve the reliability of the application system, keep interruption time as short as possible, and ensure service continuity, achieving the fault-tolerance effect of high availability (HA).

HA has two working modes: duplex and master/slave. In duplex mode all nodes are online at the same time; in master/slave mode only the master node serves traffic, but when a failure occurs a slave node automatically takes over as master. Examples include failover and dual-machine hot standby.

3. High-performance computing cluster

The goal is to increase the CPU computing speed of the application system and expand its hardware resources and analysis capability, obtaining high-performance computing (HPC) power comparable to mainframes and supercomputers.

High performance relies on distributed computing and parallel computing: through dedicated hardware and software, the CPU, memory, and other resources of multiple servers are combined to achieve computing power that previously only mainframes and supercomputers had. Examples include cloud computing and grid computing.

Structure of Load Balancing

Data flow: client --> VIP on the load scheduler --> server pool --> shared storage

The first layer, the load scheduler: forwards requests to the real servers' RIPs according to the scheduling algorithm.

It is the only entrance to the entire cluster system and holds the VIP address shared by all servers, also known as the cluster IP address. Usually two schedulers are configured, active and standby (with the VIP typically managed via VRRP), to achieve hot backup: when the primary scheduler fails, the standby scheduler takes over smoothly, ensuring high availability.
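The active/standby scheduler pair described above is commonly implemented with keepalived, which moves the VIP between schedulers over VRRP. A minimal sketch of the primary scheduler's configuration follows; the interface name, router ID, and addresses are made-up assumptions, not from the original article:

```shell
# /etc/keepalived/keepalived.conf on the primary scheduler (sketch)
cat > /etc/keepalived/keepalived.conf <<'EOF'
vrrp_instance VI_1 {
    state MASTER          # the standby scheduler would use BACKUP
    interface ens33       # assumed NIC name
    virtual_router_id 51
    priority 100          # standby uses a lower priority, e.g. 90
    virtual_ipaddress {
        12.0.0.1          # the cluster VIP (example address)
    }
}
EOF
```

When the MASTER instance stops advertising (scheduler failure), the BACKUP instance with the next-highest priority claims the VIP, so clients keep using the same entry address.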

The second layer, the server pool: the servers that actually carry the application services.

The application services provided by the cluster are handled by the server pool. Each node has an independent RIP (real IP) address and handles only the client requests distributed to it by the scheduler. When a node fails temporarily, the load scheduler's fault-tolerance mechanism isolates it, and it is reintroduced into the server pool once the error has been corrected.

The third layer, shared storage: provides website files and storage support.

It provides stable and consistent file access services for all nodes in the server pool, ensuring the consistency of the entire cluster. Shared storage can use a NAS device, or a dedicated server that provides NFS shares.
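As a minimal sketch of the NFS option (the paths, subnet, and storage-server address below are made-up examples, not from the original article), a dedicated storage server could export the web content like this:

```shell
# On the storage server: export the web root to the node subnet
mkdir -p /opt/wwwroot
echo '/opt/wwwroot 192.168.80.0/24(rw,sync,no_root_squash)' >> /etc/exports
systemctl restart nfs
exportfs -v    # verify the export is active

# On each node server: mount the share where the web server expects it
# (192.168.80.100 is an assumed storage-server address)
# mount 192.168.80.100:/opt/wwwroot /var/www/html
```

Because every node mounts the same export, all real servers serve identical content, which is what keeps the cluster consistent.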

Working modes of load balancing:

The load-balancing cluster is currently the most commonly used cluster type in enterprises.

Cluster load scheduling technology has three working modes:

1. NAT mode (address translation)

Similar to a firewall's private-network structure, the load scheduler acts as the gateway for all server nodes: it is both the client's access entrance and the exit through which each node's responses return to the client.

The server nodes use private IP addresses and sit on the same physical network as the load scheduler, which makes this mode more secure than the other two.
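Because the scheduler translates addresses in both directions in NAT mode, it must be allowed to forward packets, and the real servers must route their replies back through it. A hedged sketch of that setup (the gateway address is an assumed example):

```shell
# On the scheduler: enable IP forwarding so it can pass traffic
# between the public network and the private server network
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
sysctl -p

# On each real server: use the scheduler's private-side address as the
# default gateway, so responses flow back out through the scheduler
# (192.168.80.2 is an assumed address for the scheduler's inner NIC)
# ip route add default via 192.168.80.2
```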

2. IP tunnel (TUN mode)

Uses an open network structure: the load scheduler serves only as the client's access entrance, and each node responds to the client directly through its own Internet connection, without going back through the load scheduler.

The server nodes are scattered across different locations on the Internet, have independent public IP addresses, and communicate with the load scheduler through dedicated IP tunnels.

3. Direct routing (DR mode)

Uses a semi-open network structure, similar in layout to TUN mode, except that the nodes are not scattered across different places: they are located on the same physical network as the scheduler.

The load scheduler and the node servers are connected through the local network, so there is no need to establish dedicated IP tunnels.

LVS virtual server

Overview of LVS

Linux Virtual Server is a load-balancing solution developed for the Linux kernel, created in 1998 by Dr. Zhang Wensong in China. LVS is essentially a virtualization application based on IP addresses: an efficient solution for load balancing built on IP-address and content-based request distribution.

LVS has since become part of the Linux kernel, compiled as the ip_vs module by default and loaded automatically when needed. On CentOS 7, the following operations manually load the ip_vs module and view its version information.
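A sketch of those operations (run as root on CentOS 7):

```shell
# Load the ip_vs kernel module manually
modprobe ip_vs

# Confirm the module is loaded
lsmod | grep ip_vs

# View the ip_vs version information exposed by the kernel
cat /proc/net/ip_vs
```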

LVS Load Scheduling Algorithm

Round robin (rr)

Distributes incoming access requests to each node (real server) in the cluster in turn, treating every server equally regardless of its actual connection count or system load.

Weighted round robin (wrr)

Distributes requests according to weight values set by the scheduler: nodes with higher weights receive tasks first and are allocated more requests.

This ensures that servers with stronger performance carry more of the access traffic.
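As a toy illustration of the idea (not the actual ip_vs kernel implementation): if node A has weight 3 and node B has weight 1, A should receive 3 out of every 4 requests. Expanding the weights into a dispatch pool makes that visible:

```shell
#!/bin/bash
# Weighted pool for node A (weight 3) and node B (weight 1): A A A B
pool=(A A A B)

# Dispatch 8 requests round-robin over the weighted pool;
# A handles 6 of them, B handles 2
schedule=""
for ((req = 0; req < 8; req++)); do
  schedule+="${pool[req % ${#pool[@]}]} "
done
echo "dispatch order: $schedule"
```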

Least connections (lc)

Allocates according to the number of connections each real server has established, assigning incoming access requests preferentially to the node with the fewest connections.

Weighted least connections (wlc)

When server nodes differ greatly in performance, the weight of each real server can be adjusted automatically.

Nodes with higher performance then carry a larger share of the active connection load.

The ipvsadm tool

ipvsadm is the management tool for LVS virtual servers. It can:

Create a virtual server

Add and delete server nodes

View cluster and node status

Save load distribution strategy

ipvsadm tool options

-A: add a virtual server

-D: delete the entire virtual server

-s: specify the load scheduling algorithm (rr: round robin, wrr: weighted round robin, lc: least connections, wlc: weighted least connections)

-a: add a real server (node server)

-d: delete a node

-t: specify the VIP address and TCP port

-r: specify a RIP address and TCP port

-m: use NAT cluster mode

-g: use DR mode

-i: use TUN mode

-w: set the weight (a weight of 0 suspends the node)

-p 60: keep persistent connections for 60 seconds

-l: list LVS virtual servers (all of them by default)

-n: display addresses, ports, and other information in numeric form; often combined with the -l option (as -ln)
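Putting the options together, here is a hedged sketch of building a NAT-mode cluster (all addresses below are illustrative examples, not from the original article):

```shell
# VIP 12.0.0.1 is the public entry; 192.168.80.0/24 is the private
# network holding the real servers (example addresses).

# Create a virtual server on the VIP, port 80, weighted round robin
ipvsadm -A -t 12.0.0.1:80 -s wrr

# Add two real servers in NAT mode (-m), with different weights
ipvsadm -a -t 12.0.0.1:80 -r 192.168.80.11:80 -m -w 2
ipvsadm -a -t 12.0.0.1:80 -r 192.168.80.12:80 -m -w 1

# View the cluster and node status in numeric form
ipvsadm -ln

# Save the load distribution strategy (CentOS 7 location)
ipvsadm-save > /etc/sysconfig/ipvsadm
```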


Origin: blog.csdn.net/Breeze_nebula/article/details/132279776