##### LVS load balancing #####

### LVS (Linux Virtual Server) ###

  • What is LVS? LVS stands for Linux Virtual Server. It is an open-source project started by Dr. Zhang Wensong in China. Since Linux kernel 2.6 it has been part of the mainline kernel; on earlier kernel versions you have to recompile the kernel to use it.

  • What can LVS do? LVS is mainly used to load-balance requests across multiple servers. It works at the network layer and provides high-performance, high-availability server clustering. It is inexpensive: several low-cost servers can be combined into one powerful virtual server. It is easy to use, with simple configuration and a choice of load-balancing methods. It is stable and reliable: even if one server in the cluster stops working, the overall service is not affected. Its scalability is also very good.

  • [1] Technical overview
    An LVS cluster uses IP load balancing together with content-based request distribution. The scheduler has good throughput, distributes requests evenly across the servers, and automatically masks failed servers, so that a group of servers forms a single high-performance, high-availability virtual server. The structure of the whole cluster is transparent to clients, and neither the client nor the server-side programs need to be modified. To achieve this, the design has to take transparency, scalability, high availability and manageability into account.

  • [2] Three-layer cluster structure
    In general, an LVS cluster has a three-layer structure with the following main components:
    A. The load balancer (load balancer) is the front end of the whole cluster. It distributes client requests to a group of servers, while to the clients the service appears to come from a single IP address (which we can call the virtual IP address, or VIP).
    B. The server pool (server pool) is the group of servers that actually execute the client requests; typical services are WEB, MAIL, FTP and DNS.
    C. Shared storage (shared storage) provides a shared storage area for the server pool, making it easy for all servers in the pool to hold the same content and provide the same service.

  • [3] Scheduler
    The scheduler is the single entry point of the cluster (Single Entry Point). It can use IP load balancing, content-based request distribution, or a combination of the two.
    With IP load balancing, the servers in the pool must hold the same content and provide the same service. When a client request arrives, the scheduler chooses a server from the pool according to the scheduling algorithm and each server's load, forwards the request to the selected server, and records the assignment; subsequent packets belonging to the same request are forwarded to the server selected earlier. With content-based request distribution, the servers can provide different services; when a client request arrives, the scheduler selects a server to execute it based on the content of the request. Because all of this is done in the kernel space of the Linux operating system, the scheduling overhead is very small and the throughput is high. The number of nodes in the server pool is variable: when the load on the whole system exceeds what the current nodes can handle, more servers can be added to the pool to meet the growing request load.
    For most network services there is no strong correlation between requests, so requests can be executed in parallel on different nodes and the performance of the whole system grows roughly linearly with the number of nodes in the server pool. Shared storage is usually a database, a network file system or a distributed file system. Dynamically updated data is usually stored in a database system, and the database guarantees the consistency of concurrent data access. Static data can be stored in a network file system (e.g. NFS/CIFS), but the scalability of a network file system is limited; in general one NFS/CIFS server can only support 3-6 busy server nodes. For large-scale cluster systems a distributed file system such as AFS, GFS, Coda or Intermezzo should be considered. A distributed file system provides every server with a shared storage area that they access just like a local file system, while also providing good scalability and availability.

  • [4] Distributed lock manager
    In addition, when applications on different servers concurrently read and write the same resources on a distributed file system, access conflicts have to be resolved so that the resources stay in a consistent state. This requires a distributed lock manager (Distributed Lock Manager), which may be provided internally by the distributed file system or be external to it. When writing applications, developers can use the distributed lock manager to guarantee the consistency of concurrent access from different nodes.
    The load balancer, the server pool and the shared storage system are connected by a high-speed network, such as 100 Mbps switched Ethernet, Myrinet or Gigabit Ethernet. A high-speed network is used mainly so that the internal network does not become the bottleneck of the whole system as it is scaled up.

  • [5] Monitor
    The Graphic Monitor lets system administrators watch the whole cluster; it can monitor the status of the system. The Graphic Monitor is browser-based, so administrators can monitor the system from a browser whether they are local or remote. For security, the browser connects over HTTPS (Secure HTTP) with identity authentication, after which the system can be monitored, configured and managed.
    Features:
    There are several architectures for scalable network services, and they all need a load balancer at the front end (master/backup or multiple backups). Analysing the main techniques for virtual network services shows that IP load balancing is the most efficient way to implement the load scheduler. Among the existing IP load balancing techniques, the main one uses network address translation (Network Address Translation) to combine a group of servers into a high-performance, highly available virtual server; this is called VS/NAT (Virtual Server via Network Address Translation). Based on an analysis of the shortcomings of VS/NAT and of the asymmetry of network services, the project also proposes virtual server via IP tunneling, VS/TUN (Virtual Server via IP Tunneling), and virtual server via direct routing, VS/DR (Virtual Server via Direct Routing), which greatly improve the scalability of the system. VS/NAT, VS/TUN and VS/DR are the three IP load balancing techniques implemented in an LVS cluster (the ipvsadm flag for each method is sketched just below).
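    For reference, each of the three methods corresponds to one of ipvsadm's packet-forwarding flags. A minimal sketch with placeholder addresses (only one forwarding flag is used per real server; the three -a lines show the alternatives and would not all be applied together):

    ipvsadm -A -t 10.0.0.100:80 -s rr               # define a virtual service on the VIP
    ipvsadm -a -t 10.0.0.100:80 -r 10.0.0.2:80 -m   # VS/NAT: masquerading (NAT)
    ipvsadm -a -t 10.0.0.100:80 -r 10.0.0.2:80 -i   # VS/TUN: IP-in-IP tunneling
    ipvsadm -a -t 10.0.0.100:80 -r 10.0.0.2:80 -g   # VS/DR: direct routing (gatewaying, the default)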

  • Basic working principle of LVS
    1. When a user sends a request to the load balancer (Director Server), the scheduler passes the request into kernel space.

  2. The PREROUTING chain receives the user request first, determines that the destination IP is a local IP, and passes the packet to the INPUT chain.

  3. IPVS works on the INPUT chain. When the user request reaches INPUT, IPVS compares it with the cluster services it has defined. If the request matches a defined cluster service, IPVS forcibly rewrites the destination IP address and port of the packet and sends the new packet to the POSTROUTING chain.

  4. The POSTROUTING chain finds that the destination IP address of the packet is one of its back-end real servers, and the packet is finally routed to that back-end server (see the quick ipvsadm queries sketched below).
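  The "cluster service" referred to in these steps is defined with ipvsadm (shown later in this post). On a running Director, the scheduling decisions described above can be observed with generic ipvsadm queries, for example:

  ipvsadm -Ln    # list the defined virtual services and their real servers (numeric addresses)
  ipvsadm -Lnc   # list the IPVS connection entries currently being tracked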

### LVS DR mode: principles and configuration ###
DR (Direct Routing) mode network structure:
The destination MAC address of the request packet is rewritten to the MAC address of the Real Server (RS) selected by the scheduler.
(a) When a user request reaches the Director Server, the packet first enters the PREROUTING chain in kernel space. At this point the source IP is CIP (the client IP) and the destination IP is VIP.
(b) PREROUTING finds that the destination IP of the packet is local and sends the packet to the INPUT chain.
(c) IPVS compares the packet with the defined cluster services. If the request matches a cluster service, IPVS rewrites the source MAC address of the packet to the MAC address of DIP (the Director's IP) and the destination MAC address to the MAC address of RIP (the Real Server's IP), then sends the packet to the POSTROUTING chain. The source and destination IP addresses are not modified; only the MAC addresses are changed.
(d) Since DS and RS are on the same network, the packet is transmitted at Layer 2. The POSTROUTING chain checks that the destination MAC address is the MAC address of RIP, and the packet is sent to the Real Server.
(e) The RS finds that the destination MAC address of the request packet is its own MAC address, so it accepts the packet. After processing is complete, the response packet is passed from the lo interface to the eth0 card and sent out directly. At this point the source IP is VIP and the destination IP is CIP.
(f) Finally, the response packet is delivered directly to the client (see the ARP-handling sketch after this list).
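Because the RS replies from the VIP it holds locally, every real server must own the VIP but must not answer ARP requests for it, otherwise it competes with the Director. This post achieves that with arptables (configured below); a commonly used alternative, sketched here for reference with the same addresses, is a /32 VIP on lo plus kernel ARP tuning:

ip addr add 172.25.46.100/32 dev lo         # VIP on loopback, host route only
sysctl -w net.ipv4.conf.all.arp_ignore=1    # only answer ARP for addresses configured on the receiving interface
sysctl -w net.ipv4.conf.lo.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2  # never use the VIP as the source address of outgoing ARP requests
sysctl -w net.ipv4.conf.lo.arp_announce=2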
Experimental environment:

Three machines:
Director node server1: eth0 172.25.46.1, VIP 172.25.46.100
Real server2: eth0 172.25.46.2, VIP 172.25.46.100
Real server3: eth0 172.25.46.3, VIP 172.25.46.100

  • On Real server2 and Real server3, install httpd (yum install -y httpd) and write a front page on each (with different content, so the experimental results can be told apart):
server2 host:
[root@server2 ~]# cd /var/www/html/
[root@server2 html]# vim index.html
server2
[root@server2 html]# systemctl start httpd

server3 host:
[root@server3 ~]# cd /var/www/html/
[root@server3 html]# vim index.html
server3
[root@server3 html]# systemctl start httpd

  • On the Director node server1, install ipvsadm (the scheduler): yum install -y ipvsadm
  • Add the policy. Commonly used ipvsadm options:
    -A  add a new virtual service
    -a  add a real server to a virtual service
    -t  TCP virtual service
    -s  scheduling algorithm (rr | wrr | lc | ...)
    -r  address of the real server
    -g  gatewaying (direct routing)
    rr scheduling algorithm: round-robin
[root@server1 ~]# ipvsadm -l   
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
[root@server1 ~]# ipvsadm -A -t 172.25.46.100:80 -s rr  ## -A add a new virtual service; -t TCP virtual service; -s scheduling algorithm for the back-end hosts: round-robin
[root@server1 ~]# ipvsadm -a -t 172.25.46.100:80 -r 172.25.46.2:80 -g  ## add server2 behind the VIP
[root@server1 ~]# ipvsadm -a -t 172.25.46.100:80 -r 172.25.46.3:80 -g  ## add server3 behind the VIP
[root@server1 ~]# ipvsadm -l  ## show the scheduling table
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.46.100:http rr
  -> 172.25.46.2:http             Route   1      0          0         
  -> 172.25.46.3:http             Route   1      0          0         

Add the VIP:
bind the VIP address to a network device on every server.

server1 host:

[root@server1 ~]# ip addr show 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:d9:67:4d brd ff:ff:ff:ff:ff:ff
    inet 172.25.46.1/24 brd 172.25.46.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fed9:674d/64 scope link 
       valid_lft forever preferred_lft forever
[root@server1 ~]# ip addr add 172.25.46.100/24 dev eth0
[root@server1 ~]# ip addr show 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:d9:67:4d brd ff:ff:ff:ff:ff:ff
    inet 172.25.46.1/24 brd 172.25.46.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 172.25.46.100/24 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fed9:674d/64 scope link 
       valid_lft forever preferred_lft forever
server2 host:
[root@server2 html]# ip addr add 172.25.46.100/24 dev eth0

server3 host:
[root@server3 html]# ip addr add 172.25.46.100/24 dev eth0
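Note that addresses added with ip addr add do not survive a reboot. If the VIP should persist, it also has to be written into the interface configuration; a sketch assuming RHEL-style ifcfg files (example values, not part of the original experiment):

# append to /etc/sysconfig/network-scripts/ifcfg-eth0
IPADDR1=172.25.46.100
PREFIX1=24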

Test:
Both server2 and server3 can be reached, but the round-robin policy is not in effect yet: when the client sends a request, the two real servers answer ARP for the VIP themselves and grab the request directly, bypassing the scheduler, as the ARP cache below shows.

[root@foundation46 ~]# arp -an | grep 172.25.46.100  ## check which MAC address the VIP is bound to
? (172.25.46.100) at 52:54:00:41:f0:9f [ether] on br0
[root@foundation46 ~]# curl 172.25.46.100
server3
[root@foundation46 ~]# curl 172.25.46.100
server3
[root@foundation46 ~]# arp -d 172.25.46.100  ## delete the cached ARP entry for the VIP
[root@foundation46 ~]# curl 172.25.46.100
server3
[root@foundation46 ~]# curl 172.25.46.100
server3
[root@foundation46 ~]# arp -an | grep 172.25.46.100
? (172.25.46.100) at 52:54:00:41:f0:9f [ether] on br0
[root@foundation46 ~]# arp -d 172.25.46.100
[root@foundation46 ~]# curl 172.25.46.100
server2

server1 host:
[root@server1 ~]# ipvsadm -l  ## show the scheduling table
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  server1:http rr
  -> 172.25.46.2:http             Route   1      0          0     ## no request went through the scheduler (counters are 0); server3 answered ARP and took the requests itself
  -> 172.25.46.3:http             Route   1      0          0         

To avoid the ARP preemption shown above, the client must resolve the VIP to the scheduler's MAC address so that requests actually pass through the round-robin scheduler. Install arptables.x86_64 on server2 and server3; arptables is a user-space tool that controls ARP packet filtering.

server2 host:
[root@server2 html]# yum install -y arptables.x86_64
[root@server2 html]# arptables -A INPUT -d 172.25.46.100 -j DROP  
[root@server2 html]# arptables -A OUTPUT -s 172.25.46.100 -j mangle --mangle-ip-s 172.25.46.2
[root@server2 html]# arptables -L
Chain INPUT (policy ACCEPT)
-j DROP -d server2 

Chain OUTPUT (policy ACCEPT)
-j mangle -s server2 --mangle-ip-s server2 

Chain FORWARD (policy ACCEPT)

server3 host:
[root@server3 html]# yum install -y arptables.x86_64
[root@server3 html]# arptables -A INPUT -d 172.25.46.100 -j DROP
[root@server3 html]#  arptables -A OUTPUT -s 172.25.46.100 -j mangle --mangle-ip-s 172.25.46.3
[root@server3 html]#  arptables -L
Chain INPUT (policy ACCEPT)
-j DROP -d server3 

Chain OUTPUT (policy ACCEPT)
-j mangle -s server3 --mangle-ip-s 172.25.46.2 
-j mangle -s server3 --mangle-ip-s server3 

Chain FORWARD (policy ACCEPT)
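These arptables rules also exist only in memory. To keep them across a reboot they can be saved and restored by the distribution's arptables service; a sketch, assuming the package provides arptables-save and an arptables unit that reads /etc/sysconfig/arptables:

arptables-save > /etc/sysconfig/arptables   # dump the current ARP filter rules
systemctl enable arptables                  # restore them at boot (if such a unit is shipped)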

Test:
Requests to 172.25.46.100 from the client now alternate between server2 and server3 in round-robin order.
Scheduling algorithm: wrr (weighted round-robin)
Edit the scheduler on server1; server2 and server3 remain unchanged.

[root@server1 ~]# ipvsadm -C  ## delete all policies
[root@server1 ~]# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
[root@server1 ~]# ipvsadm -A -t 172.25.46.100:80 -s wrr  ## weighted round-robin scheduling policy
[root@server1 ~]# ipvsadm -a -t 172.25.46.100:80 -r 172.25.46.2:80 -g -w 3  ## weight 3
[root@server1 ~]# ipvsadm -a -t 172.25.46.100:80 -r 172.25.46.3:80 -g -w 1  ## weight 1
[root@server1 ~]# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  server1:http wrr
  -> 172.25.46.2:http             Route   3      0          0         
  -> 172.25.46.3:http             Route   1      0          0  
[root@server1 ~]# ipvsadm -S
-A -t server1:http -s wrr
-a -t server1:http -r 172.25.46.2:http -g -w 3
-a -t server1:http -r 172.25.46.3:http -g -w 1
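Like the VIP, ipvsadm rules are not persistent by default. The -S output can be saved and restored at boot; a sketch assuming the distribution ships an ipvsadm service that reads /etc/sysconfig/ipvsadm:

ipvsadm -S -n > /etc/sysconfig/ipvsadm   # save the current rules in a restorable form
systemctl enable ipvsadm                 # restore them at boot via the ipvsadm unit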

Test:
With weights 3 and 1, roughly three out of every four requests are answered by server2 and one by server3.
### LVS health checks in DR mode (ldirectord) ###
Why do we need a health check?
If a back-end real server fails, the client gets an error page when testing; only by health-checking the back-end servers can we guarantee that a correct page is always returned. Without a health check, stopping httpd on server3 leaves the client with an error page.
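Conceptually, a health check just verifies the page each real server serves and removes servers that fail from the IPVS table. A rough hand-rolled sketch of the idea, using the addresses of this experiment (this is only an illustration, not how ldirectord is implemented):

#!/bin/bash
# naive health check: keep only reachable real servers behind the VIP
VIP=172.25.46.100
for RIP in 172.25.46.2 172.25.46.3; do
    if curl -s -o /dev/null --max-time 3 http://$RIP/index.html; then
        ipvsadm -a -t $VIP:80 -r $RIP:80 -g -w 1 2>/dev/null   # re-add if it was removed
    else
        ipvsadm -d -t $VIP:80 -r $RIP:80 2>/dev/null           # drop the failed server
    fi
done

ldirectord performs this kind of check continuously, driven by the ldirectord.cf file configured below.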
Install ldirectord:
configure a yum repository that provides the HighAvailability channel, then install ldirectord-3.9.5-3.1.x86_64.rpm.
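The screenshots in the original post show the repository configuration. A sketch of what such a .repo entry typically looks like; the baseurl is a placeholder for your local installation source, not a value taken from the original experiment:

# /etc/yum.repos.d/HighAvailability.repo  (example values, adjust to your mirror)
[HighAvailability]
name=HighAvailability
baseurl=http://172.25.46.250/rhel7.3/addons/HighAvailability
gpgcheck=0
enabled=1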

[root@server1 ~]# ls
ldirectord-3.9.5-3.1.x86_64.rpm
[root@server1 ~]# yum install -y ldirectord-3.9.5-3.1.x86_64.rpm 
[root@server1 ~]# rpm -qpl ldirectord-3.9.5-3.1.x86_64.rpm   # after installation the following configuration files are generated automatically
warning: ldirectord-3.9.5-3.1.x86_64.rpm: Header V3 DSA/SHA1 Signature, key ID 7b709911: NOKEY
/etc/ha.d
/etc/ha.d/resource.d
/etc/ha.d/resource.d/ldirectord
/etc/init.d/ldirectord
/etc/logrotate.d/ldirectord
/usr/lib/ocf/resource.d/heartbeat/ldirectord
/usr/sbin/ldirectord
/usr/share/doc/ldirectord-3.9.5
/usr/share/doc/ldirectord-3.9.5/COPYING
/usr/share/doc/ldirectord-3.9.5/ldirectord.cf
/usr/share/man/man8/ldirectord.8.gz

Edit the configuration file:

[root@server1 ~]# cd /etc/ha.d/
[root@server1 ha.d]# cp /usr/share/doc/ldirectord-3.9.5/ldirectord.cf /etc/ha.d/
[root@server1 ha.d]# ls
ldirectord.cf  resource.d  shellfuncs
[root@server1 ha.d]# ipvsadm -l
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  server1:http wrr
  -> 172.25.46.2:http             Route   3      0          0         
  -> 172.25.46.3:http             Route   1      0          0         
[root@server1 ha.d]# ipvsadm -C
[root@server1 ha.d]# ls
ldirectord.cf  resource.d  shellfuncs
[root@server1 ha.d]# vim ldirectord.cf 
#Global Directives
checktimeout=3
checkinterval=1
#fallback=127.0.0.1:80
#fallback6=[::1]:80
autoreload=yes
#logfile="/var/log/ldirectord.log"
#logfile="local0"
#emailalert="[email protected]"
#emailalertfreq=3600
#emailalertstatus=all
quiescent=no

 #Sample for an http virtual service
virtual=172.25.46.100:80
        real=172.25.46.2:80 gate
        real=172.25.46.3:80 gate
        fallback=127.0.0.1:80 gate
        service=http
        scheduler=rr
        #persistent=600
        #netmask=255.255.255.255
        protocol=tcp
        checktype=negotiate
        checkport=80
        request="index.html"
        #receive="Test Page"
        #virtualhost=www.x.y.z

[root@server1 ha.d]# systemctl start ldirectord  ## start the service
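After starting the service it is worth confirming that ldirectord has rebuilt the IPVS table from ldirectord.cf; a quick generic check (output omitted):

systemctl status ldirectord   # the daemon should be active (running)
ipvsadm -ln                   # the virtual service 172.25.46.100:80 and both real servers should be listed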
Test:
When all back-end real servers are healthy, requests are scheduled in round-robin order.
Stop httpd on server3:
[root@server3 html]# systemctl stop httpd
All requests are now dispatched to server2.
Install httpd on server1 as well and give it its own front page; when all real servers are down, the fallback (127.0.0.1:80) takes over and server1 answers the requests itself.
Start the service on server3 again and scheduling automatically returns to server3:
[root@server3 html]# systemctl start httpd

Origin: blog.csdn.net/weixin_44821839/article/details/92800827