Linux LVS load balancing

Introduction to LVS: 

Linux Virtual Server (LVS) is an open source project built on the Linux kernel for constructing high-performance, high-availability server clusters. LVS achieves load balancing by distributing client requests across a set of backend server nodes, thereby improving the scalability and reliability of the system.

The core components of LVS:

  1. IPVS (IP Virtual Server): IPVS is the core module of LVS and implements the actual load balancing. Running in kernel space, it intercepts incoming traffic, stores and manages the load balancing configuration, and forwards packets to the real servers on the backend according to the schedule.
  2. Load balancer (LB): the central component of LVS, a device or piece of software that sits between the clients and the backend servers. It receives client requests and forwards them to the backend servers according to the configured load balancing strategy, providing load distribution and high availability.
  3. Backend server (RS, Real Server): the server nodes in the LVS cluster that actually process client requests. They share the load and generate the service responses.
  4. Load balancing scheduling algorithm: LVS supports multiple scheduling algorithms to decide which backend server a request is forwarded to. Common algorithms include Round Robin, Least Connection, and Source IP Hash; choosing an appropriate algorithm balances the load across the backend servers according to application requirements.
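
As a concrete illustration, the scheduling algorithm is selected per virtual service with ipvsadm's -s option; a minimal sketch, assuming a virtual service on 192.168.227.128:80 (the VIP used in the configuration section below):

# Create a virtual service using round robin
ipvsadm -A -t 192.168.227.128:80 -s rr
# Switch the existing service to least connection
ipvsadm -E -t 192.168.227.128:80 -s lc
# Switch it to source hashing, pinning each client IP to one RS
ipvsadm -E -t 192.168.227.128:80 -s sh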

How LVS works:

  1. The client sends a request:
    the client sends a request to the LVS load balancer; the destination of the request is the load balancer's virtual IP (VIP) address.

  2. The load balancer receives the request:
    the load balancer receives the request from the client, and selects a backend server to process the request according to the pre-configured load balancing algorithm.

  3. The load balancing algorithm selects a backend server:
    LVS supports a variety of scheduling algorithms, such as round robin (Round Robin), least connection (Least Connection), and source address hash (Source IP Hash). According to the configured algorithm, the load balancer picks the backend server the request will be forwarded to; the algorithm may take server load, availability, and performance metrics into account.

  4. Request forwarding to backend servers:
    The load balancer forwards client requests to selected backend servers. According to the configured scheduling algorithm, requests may be evenly distributed to multiple backend servers to achieve load balancing.

  5. Backend server processes the request:
    The backend server receives the request from the load balancer and processes the request. The backend server performs the tasks required by the request and generates a response.

  6. The response is returned to the client:
    in NAT mode the response traffic generated by the backend server passes back through the load balancer, which rewrites the source address of the response to the VIP before returning it to the client (in DR and TUN modes the response bypasses the load balancer, as described below).

  7. Session persistence (optional):
    to maintain session continuity, LVS can use its persistence feature so that requests from the same client are always distributed to the same backend server. Because LVS operates at layer 4, persistence is based on the client IP address; cookie-based stickiness requires a layer-7 proxy instead.
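
With ipvsadm, IP-based persistence is enabled per virtual service with the -p (persistent) option; a minimal sketch, reusing the VIP from the configuration below:

# Requests from the same client IP go to the same RS for 300 seconds
ipvsadm -A -t 192.168.227.128:80 -s rr -p 300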

LVS load balancing methods:

  • NAT (Network Address Translation) mode: the load balancer sits between the clients and the backend servers. The scheduler rewrites the destination IP address and port of the client's request (the VIP) to those of the selected real server and forwards the request to it; the response from the backend server travels back through the scheduler, which rewrites the source address back to the VIP before returning it to the client.
  • DR (Direct Routing) mode: the load balancer sits in the same network segment as the backend servers and does not modify the IP addresses of the traffic. The scheduler keeps the destination IP address and port requested by the client unchanged, rewrites only the destination MAC address, and forwards the request directly to the backend server. The response from the backend server is returned directly to the client without going through the scheduler.
  • TUN (IP Tunneling) mode: similar to DR in that responses bypass the load balancer, but requests are passed to the backend servers through IP tunnels. The load balancer encapsulates each request in a new IP packet and sends it to the chosen backend server; after receiving it, the backend server decapsulates the tunnel packet and responds to the client directly. In practice a virtual tunnel device forwards packets from the scheduler to the backend server.
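
In ipvsadm the three methods correspond to per-real-server forwarding flags; a minimal sketch, reusing the VIP/RIP addresses from the configuration below (only one flag would be used in a given deployment):

# -m masquerading (NAT), -g gatewaying (DR, the default), -i ipip (TUN)
ipvsadm -a -t 192.168.227.128:80 -r 192.168.253.143:80 -m
ipvsadm -a -t 192.168.227.128:80 -r 192.168.253.143:80 -g
ipvsadm -a -t 192.168.227.128:80 -r 192.168.253.143:80 -i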

LVS terminology:

  • VS: Virtual Server, usually the distributor; the IP + port that the load balancing cluster exposes externally.
  • RS: Real Server, the real server that actually provides the service; the DS can divide the RSs into one or more load balancing groups.
  • DS: Director Server, the load balancer, which distributes traffic to the real servers on the backend.
  • BDS: Backup Director Server, a backup of the load balancer kept to ensure its high availability.
  • CIP: Client IP, the IP of the client.
  • VIP: Virtual IP, the IP of the VS and the destination IP of client requests; it is configured on the DS, and the client or its gateway needs a route to it.
  • DIP: Director IP, the IP the distributor uses to communicate with the real servers.
  • RIP: Real Server IP, the IP of the real server.

CIP <-> VIP == DIP <-> RIP: the client accesses the VIP, and the DS forwards the request from the DIP to an RIP.

1. NAT (Network Address Translation):

NAT mode implements scheduling by means of network address translation.

   

  1. The client sends a request: the client sends a request to the LVS load balancer, and the target address of the request is VIP.

  2. The load balancer receives the request: the load balancer receives the request from the client; its destination address is the VIP and its source address is the CIP.

  3. Load balancing algorithm selects a backend server: the LVS load balancer selects a backend server to process the request according to the pre-configured load balancing algorithm.

  4. NAT translation: the load balancer performs destination NAT on the request packet, replacing the VIP with the RIP of the selected backend server. The source address remains the CIP, so the real servers must route their responses back through the load balancer (their default gateway is the DIP) so that the reverse translation can take place.

  5. Forward the request to the backend server: the load balancer forwards the NAT-translated request to the selected backend server, which handles it as an ordinary request addressed to its own RIP.

  6. The backend server processes the request and generates a response

  7. The response is returned to the client: the response passes back through the load balancer, which performs the reverse NAT translation, replacing the source RIP of the response with the VIP, and then returns the response to the client.
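
Putting the two translations together with the addresses used in the configuration below (CIP 192.168.100.200, VIP 192.168.227.128, RIP 192.168.253.143):

request from client:   192.168.100.200 -> 192.168.227.128  (CIP -> VIP)
after DNAT on the DS:  192.168.100.200 -> 192.168.253.143  (CIP -> RIP)
response from the RS:  192.168.253.143 -> 192.168.100.200  (RIP -> CIP)
after SNAT on the DS:  192.168.227.128 -> 192.168.100.200  (VIP -> CIP)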

2. DR (Direct Routing):

DR implements scheduling by modifying the destination MAC address. In DR mode, incoming traffic passes through the DS, but outgoing traffic does not. In order to respond to accesses to the cluster, both the DS and the RSs must be configured with the VIP address, and the DS and RSs must be in the same network segment.

 

  1. The client sends a request: the client sends a request to the LVS load balancer, and the target address of the request is VIP.

  2. A load balancer receives a request and selects a backend server to handle the request based on a pre-configured load balancing algorithm.

  3. The load balancer modifies the destination MAC address: in DR mode, the load balancer rewrites the destination MAC address of the request frame to the MAC address of the selected backend server and forwards the frame to it. The IP packet itself is left unmodified, so the backend server, which also holds the VIP, accepts it and can later reply to the client directly.

  4. The backend server receives the request from the load balancer and performs corresponding processing and calculation.

  5. The backend server generates a response and sends the response back to the client. In DR mode, response packets will be sent directly to the client, bypassing the load balancer.
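
Only NAT mode is configured later in this article; for comparison, a minimal sketch of what DR mode typically requires, with the same addresses used as placeholders (in a real DR setup the DS and RSs share one segment). On the DS, real servers are added with -g; on each RS, the VIP is bound to the loopback and ARP replies for it are suppressed so that only the DS answers ARP for the VIP:

# On the DS
ipvsadm -A -t 192.168.227.128:80 -s rr
ipvsadm -a -t 192.168.227.128:80 -r 192.168.253.143:80 -g
# On each RS: hold the VIP without announcing it via ARP
ip addr add 192.168.227.128/32 dev lo
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
sysctl -w net.ipv4.conf.lo.arp_ignore=1
sysctl -w net.ipv4.conf.lo.arp_announce=2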

3. TUN (IP Tunneling):

        In DR mode, the DS only modifies the data link layer MAC information of the packet and does not touch the IP information, so the DS locates the RS by MAC address, which restricts the DS and the RSs to the same network segment. If the DS could locate the RS without relying on MAC addresses, there would be no need to keep the RSs and the DS in the same network segment.

        In TUN mode, the load balancer establishes an IP tunnel and transmits requests to the backend server through the tunnel. An IP tunnel can be understood as IP in IP: the sender wraps an extra IP header around the original IP packet, and the receiver first strips off the outer IP header and then processes the remaining IP packet in the normal way.

 

  1. The client sends a request: the client sends a request to the LVS load balancer, and the target address of the request is VIP.
  2. The load balancer receives the request: the load balancer receives the request from the client, and selects a backend server to process the request according to the pre-configured load balancing algorithm.

  3. The load balancer establishes a tunnel: in TUN mode, the load balancer establishes an IP tunnel with the selected backend server and uses it to carry the request to that server.
  4. The load balancer forwards the request: the load balancer encapsulates the request from the client and sends it to the backend server through the tunnel.

  5. The RS finds that the destination of the outer IP header is its own eth0 address, strips off the IP tunnel header, and processes the inner packet, which is addressed to the VIP configured on its tunnel interface.

  6. The RS processes the request, generates a response, and sends it directly back to the client with the VIP as the source address, bypassing the load balancer.
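
A minimal sketch of the TUN-specific pieces, again with the same placeholder addresses: on the DS, real servers are added with -i; each RS loads the ipip module and holds the VIP on its tunl0 device so it can decapsulate and accept the tunneled packets:

# On the DS
ipvsadm -a -t 192.168.227.128:80 -r 192.168.253.143:80 -i
# On each RS: terminate the IP-in-IP tunnel and accept packets for the VIP
modprobe ipip
ip addr add 192.168.227.128/32 dev tunl0
ip link set tunl0 up
sysctl -w net.ipv4.conf.tunl0.arp_ignore=1
sysctl -w net.ipv4.conf.tunl0.arp_announce=2
sysctl -w net.ipv4.conf.tunl0.rp_filter=0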

        The LVS architecture provides a high-performance, high-availability load balancing solution. By distributing client requests across multiple backend server nodes, it balances the system load and provides fault tolerance. At the same time, LVS is open source, flexible, and customizable, and can be configured and extended according to actual needs.

        It should be noted that LVS only provides load balancing function and does not handle the logic of the application layer. Session maintenance and data synchronization at the application layer need to be implemented in other ways, such as using Session Persistence and database replication.

LVS NAT mode configuration: 

Environment description:

host name   network card information (ens160 is NAT, ens224 is host-only)   installed app   system
Client      CIP: 192.168.100.200 (ens160)                                   none            RHEL8
DS          DIP: 192.168.253.142 (ens160), VIP: 192.168.227.128 (ens224)    ipvsadm         RHEL8
RS1         RIP: 192.168.253.143 (ens160), gateway 192.168.253.142          httpd           RHEL8
RS2         RIP: 192.168.253.144 (ens160), gateway 192.168.253.142          httpd           RHEL8

1. The three hosts DS, RS1, and RS2 all turn off the firewall and SELinux

DS:
[root@DS ~]#  systemctl stop firewalld
[root@DS ~]# systemctl disable firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@DS ~]# vim /etc/sysconfig/selinux
SELINUX=disabled

RS1
[root@RS1 ~]# systemctl stop firewalld
[root@RS1 ~]# systemctl disable firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@RS1 ~]#  vi /etc/sysconfig/selinux
SELINUX=disabled

RS2
[root@RS2 ~]#  systemctl stop firewalld
[root@RS2 ~]# systemctl disable firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@RS2 ~]#  vi /etc/sysconfig/selinux
SELINUX=disabled
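
Editing /etc/sysconfig/selinux only takes effect after a reboot; to drop SELinux to permissive mode immediately as well, each host can additionally run:

setenforce 0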

2. Configure IP information

DS: add ens224 network card information

[root@DS ~]# nmcli connection add con-name ens224 ifname ens224 type ethernet
Connection 'ens224' (922bcff0-35fd-43c2-a608-edb0d58ccec3) successfully added.
[root@DS ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens160
IPADDR=192.168.253.142
PREFIX=24
DNS1=8.8.8.8
[root@DS ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens224
IPADDR=192.168.227.128
PREFIX=24
DNS1=8.8.8.8
[root@DS ~]# systemctl restart NetworkManager
[root@DS ~]# nmcli connection up ens160
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/3)
[root@DS ~]# nmcli connection up ens224
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/4)

RS1: 

[root@RS1 ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens160
IPADDR=192.168.253.143
PREFIX=24
GATEWAY=192.168.253.142
DNS1=8.8.8.8
[root@RS1 ~]# systemctl restart NetworkManager
[root@RS1 ~]# nmcli connection up ens160
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/2)

RS2:

[root@RS2 ~]#  vim /etc/sysconfig/network-scripts/ifcfg-ens160
IPADDR=192.168.253.144
PREFIX=24
GATEWAY=192.168.253.142
DNS1=8.8.8.8
[root@RS2 ~]# systemctl restart NetworkManager
[root@RS2 ~]# nmcli connection up ens160
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/2)

3. Deploy web servers on the backend RS1 and RS2

[root@RS1 ~]# yum -y install httpd
[root@RS1 ~]# echo RS1 > /var/www/html/index.html
[root@RS1 ~]# systemctl restart httpd
[root@RS1 ~]#  systemctl enable httpd
Created symlink /etc/systemd/system/multi-user.target.wants/httpd.service → /usr/lib/systemd/system/httpd.service.
[root@RS2 ~]# yum -y install httpd
[root@RS2 ~]# echo RS2 > /var/www/html/index.html
[root@RS2 ~]# systemctl restart httpd
[root@RS2 ~]# systemctl enable httpd

4. Configure DS

4.1. Enable IP forwarding function

[root@DS ~]# vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
[root@DS ~]# sysctl -p
net.ipv4.ip_forward = 1

4.2. Install ipvsadm and add rules

[root@DS ~]# yum -y install ipvsadm
[root@DS ~]# ipvsadm -A -t 192.168.227.128:80 -s rr
[root@DS ~]# ipvsadm -a -t 192.168.227.128:80 -r 192.168.253.143:80 -m
[root@DS ~]# ipvsadm -a -t 192.168.227.128:80 -r 192.168.253.144:80 -m
[root@DS ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.227.128:80 rr
  -> 192.168.253.143:80           Masq    1      0          0
  -> 192.168.253.144:80           Masq    1      0          0
[root@DS ~]# ipvsadm -Sn > /etc/sysconfig/ipvsadm
[root@DS ~]# systemctl restart ipvsadm.service
[root@DS ~]# systemctl enable ipvsadm.service
Created symlink /etc/systemd/system/multi-user.target.wants/ipvsadm.service → /usr/lib/systemd/system/ipvsadm.service.
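
Once requests start flowing, the scheduling can be observed on the DS with the standard ipvsadm listing options:

# Per-service and per-RS packet/byte counters
ipvsadm -Ln --stats
# The current connection table: which RS each client connection was mapped to
ipvsadm -Lnc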

5. Client testing: 

[root@Client ~]# curl http://192.168.227.128
RS2
[root@Client ~]# curl http://192.168.227.128
RS1
[root@Client ~]# curl http://192.168.227.128
RS2
[root@Client ~]# curl http://192.168.227.128
RS1

The configuration information is reproduced from: LVS Tutorial

Origin: blog.csdn.net/zhoushimiao1990/article/details/132037174