LVS load balancing cluster concept

1. The meaning of cluster

A cluster (also called a server group) is composed of multiple hosts, but externally it appears as a single whole, providing only one access entry (a domain name or IP address); it is equivalent to one large computer.

1.1 Generation of clusters

  • In Internet applications, as sites place ever higher demands on hardware performance, response speed, service stability, and data reliability, a single server can no longer meet the needs of load balancing and high availability, so clusters have emerged to meet them.

1.2 Cluster composition scheme

  • Use expensive minicomputers and mainframes to form a cluster.
  • Use multiple relatively inexpensive ordinary servers to build a service cluster.

By integrating multiple servers, LVS achieves server high availability and load balancing, providing the same services externally from a single IP address. This is the cluster technology commonly used in enterprises: LVS (Linux Virtual Server).

2. Clusters can be divided into three types

Depending on their target, clusters fall into three types:

  • Load Balance Cluster

    • The goal is to improve the responsiveness of the application system, handle as many access requests as possible, and reduce latency, obtaining high-concurrency, high-load (LB) overall performance.
  • High Availability Cluster

    • The goal is to improve the reliability of the application system, reducing interruption time as much as possible to ensure service continuity and achieve the fault-tolerance effect of high availability (HA).
    • HA works in either duplex or master-slave mode. In duplex mode, all nodes are online at the same time; in master-slave mode, only the master node is online, and a slave node automatically takes over when the master fails.
    • Examples: "failover", "dual-machine hot standby", etc.
  • High Performance Computing Cluster

    • The goal is to increase the CPU computing speed of the application system and expand its hardware resources and analysis capability, obtaining high-performance computing (HPC) power equivalent to that of large-scale computers and supercomputers.
    • High performance relies on "distributed computing" and "parallel computing": the CPU, memory, and other resources of multiple servers are integrated through dedicated hardware and software, achieving computing power that previously only mainframes and supercomputers possessed.
    • Examples include "cloud computing" and "grid computing".

3. Load balancing cluster architecture

The load balancing architecture consists of three layers:

  • At the first layer, the load scheduler (Load Balancer or Director)
    is the single access entrance to the entire cluster system and uses the VIP address common to all servers, also known as the cluster IP address. Usually two schedulers, a primary and a backup, are configured for hot standby; when the primary scheduler fails, traffic is smoothly switched to the backup scheduler to ensure high availability.

  • At the second layer, the server pool (Server Pool)
    bears the application services provided by the cluster. Each node has an independent RIP (real IP) address and processes only the client requests distributed by the scheduler. When a node fails temporarily, the load scheduler's fault-tolerance mechanism isolates it, and it rejoins the server pool once the error has been cleared.

  • At the third layer, shared storage (Share Storage)
    provides stable and consistent file access services for all nodes in the server pool, ensuring the consistency of the entire cluster. Shared storage can use NAS devices or a dedicated server providing the NFS sharing service.
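
For instance, each real server might mount the same NFS export so that all nodes serve identical content. A minimal sketch, assuming a hypothetical storage server at 192.168.2.50 and a hypothetical export path /data/www:

# On the storage server, with /etc/exports containing:
#   /data/www  192.168.2.0/24(ro,sync)
exportfs -r                                 # re-read /etc/exports

# On each real server, mount the shared directory at the web root
showmount -e 192.168.2.50                   # list the exports the storage server offers
mount -t nfs 192.168.2.50:/data/www /var/www/html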

4. Load balancing cluster working mode analysis

  • Load balancing clusters are currently the most commonly used cluster type in enterprises
  • Cluster load scheduling technology has three working modes
    • Address translation (NAT mode)
    • IP tunnel (TUN mode)
    • Direct routing (DR mode)

5. The three load scheduling working modes

5.1 NAT mode

  • Network Address Translation, referred to as NAT mode
  • Similar to the private network structure of a firewall, the load scheduler serves as the gateway of all server nodes: it is the clients' access entrance and also each node's exit for responses to clients
  • The server nodes use private IP addresses and are located on the same physical network as the load scheduler; security is better than in the other two modes
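
A minimal NAT-mode sketch with ipvsadm, using hypothetical addresses (a VIP of 12.0.0.1 and real servers on 192.168.2.0/24). Because the scheduler is the nodes' gateway, IP forwarding must also be enabled on it:

echo 1 > /proc/sys/net/ipv4/ip_forward           # enable forwarding on the scheduler
ipvsadm -A -t 12.0.0.1:80 -s rr                  # create the virtual server on the VIP
ipvsadm -a -t 12.0.0.1:80 -r 192.168.2.10:80 -m  # add a real server in NAT mode (-m)
ipvsadm -a -t 12.0.0.1:80 -r 192.168.2.11:80 -m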

5.2 TUN mode

  • IP Tunnel, referred to as TUN mode
  • Adopts an open network structure: the load scheduler serves only as the clients' access entrance, and each node responds to clients directly through its own Internet connection rather than going back through the load scheduler
  • The server nodes are scattered across different locations on the Internet, have independent public IP addresses, and communicate with the load scheduler through dedicated IP tunnels
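
A minimal TUN-mode sketch on the scheduler, with hypothetical addresses; the -i option selects tunnel mode. On each real server, the VIP is typically also bound to a tunnel interface (tunl0) so the node can accept the encapsulated packets:

ipvsadm -A -t 12.0.0.1:80 -s rr               # create the virtual server on the VIP
ipvsadm -a -t 12.0.0.1:80 -r 101.0.0.10 -i    # real server reached through an IP tunnel (-i)
ipvsadm -a -t 12.0.0.1:80 -r 102.0.0.20 -i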

5.3 DR mode

  • Direct Routing, referred to as DR mode
  • Adopts a semi-open network structure, similar to that of the TUN mode, but the nodes are not scattered everywhere: they are located on the same physical network as the scheduler
  • The load scheduler connects to each node server over the local network; no dedicated IP tunnels are needed
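
A minimal DR-mode sketch with hypothetical addresses; the -g option selects direct routing. In practice, each real server also binds the VIP to its loopback interface and suppresses ARP replies for it, so that only the scheduler answers ARP requests for the VIP:

# On the scheduler
ipvsadm -A -t 192.168.1.100:80 -s rr
ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.10:80 -g   # direct routing (-g)
ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.11:80 -g

# On each real server
ip addr add 192.168.1.100/32 dev lo                    # bind the VIP on loopback
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore        # do not answer ARP for the VIP
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce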

6. LVS virtual server

LVS (Linux Virtual Server)

  • A load balancing solution developed for the Linux kernel
  • Created by Dr. Zhang Wensong in China in May 1998
  • Official website: http://www.linuxvirtualserver.org
  • LVS is effectively a virtualized application based on IP addresses, and it offers an efficient solution for load balancing based on IP addresses and content-based request distribution

LVS has now become part of the Linux kernel, compiled by default as the ip_vs module, which can be loaded automatically when needed. On a CentOS 7 system, the following commands manually load the ip_vs module and view its version information.

modprobe ip_vs
cat /proc/net/ip_vs       # confirm kernel support for LVS
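
As a further check, you can also confirm that the module is loaded:

lsmod | grep ip_vs        # verify that ip_vs is loaded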


7. LVS load scheduling algorithms

7.1 Round Robin

  • Incoming access requests are distributed in turn, in order, to each node (real server) in the cluster, treating every server equally regardless of its actual number of connections and system load

7.2 Weighted Round Robin

  • Requests are distributed according to the weight values set by the scheduler; nodes with higher weights receive tasks first and are allocated more requests
  • This ensures that servers with stronger performance bear more of the access traffic

7.3 Least Connections

  • Requests are assigned according to the number of connections established on each real server; incoming requests go preferentially to the node with the fewest connections

7.4 Weighted Least Connections

  • When the performance of server nodes differs greatly, the weights can be adjusted automatically for the real servers
  • Nodes with higher performance bear a greater share of the active connection load
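
The scheduling algorithm is selected with the -s option when the virtual server is created. A minimal sketch with hypothetical addresses, using weighted round robin with per-node weights:

ipvsadm -A -t 192.168.1.100:80 -s wrr                        # weighted round robin
ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.10:80 -g -w 3    # stronger node, weight 3
ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.11:80 -g -w 1    # weaker node, weight 1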

8. The ipvsadm tool

Description of ipvsadm options:

Option   Function
-A       Add a virtual server
-D       Delete the entire virtual server
-s       Specify the load scheduling algorithm (round robin: rr, weighted round robin: wrr, least connections: lc, weighted least connections: wlc)
-a       Add a real server (node server)
-d       Delete a node
-t       Specify the VIP address and TCP port
-r       Specify the RIP address and TCP port
-m       Use NAT cluster mode
-g       Use DR mode
-i       Use TUN mode
-w       Set the weight (a weight of 0 suspends the node)
-p 60    Keep a persistent connection for 60 seconds
-l       List the LVS virtual servers (all of them by default)
-n       Display addresses, ports, and other information in numeric form; often combined with "-l", e.g. ipvsadm -ln
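
Putting the options together, a typical session might look like the following hypothetical NAT-mode example (addresses are placeholders):

ipvsadm -A -t 12.0.0.1:80 -s wrr                          # create a virtual server with weighted round robin
ipvsadm -a -t 12.0.0.1:80 -r 192.168.2.10:80 -m -w 2      # add a node in NAT mode with weight 2
ipvsadm -a -t 12.0.0.1:80 -r 192.168.2.11:80 -m -w 1
ipvsadm -ln                                               # list the current rules in numeric form
ipvsadm -d -t 12.0.0.1:80 -r 192.168.2.11:80              # remove one node
ipvsadm -D -t 12.0.0.1:80                                 # delete the whole virtual server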

LVS load balancing cluster (NAT mode) deployment experiment
LVS load balancing cluster (DR mode) deployment experiment
