Building a simple LVS with Docker

About LVS

LVS (Linux Virtual Server) is an open-source load-balancing project started by Dr. Wensong Zhang, and it has since been integrated into the Linux kernel as a kernel module. The project implements IP-based load balancing of incoming requests. In the architecture shown in Figure 1, end users on the external Internet reach the company's externally facing load balancer, and their requests arrive at the LVS scheduler in the Linux kernel. The scheduler decides which backend web server receives each request according to a preset algorithm; the round-robin algorithm, for example, spreads external requests evenly across all backends. What the end user actually reaches is the LVS scheduler, which forwards the traffic to the real backend servers; but as long as the real servers share the same storage and provide the same service, the user gets the same result no matter which real server handles the request, and the whole cluster is transparent to the user. Finally, depending on the LVS forwarding mode, the real servers return the requested data to the user in different ways; the LVS operating modes are NAT mode, TUN mode, and DR mode.
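
As a toy illustration of the round-robin idea only (this is not LVS code, just a sketch of the scheduling policy), dealing requests out to a fixed server pool looks like this:

# toy round-robin scheduler: requests are dealt out to the pool in turn
servers=(192.168.0.1 192.168.0.2 192.168.0.3)   # the backend pool
i=0
for request in req1 req2 req3 req4 req5; do
    idx=$((i % ${#servers[@]}))                 # cycle through the pool
    echo "$request -> ${servers[idx]}"
    i=$((i + 1))
done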

The three operating modes explained

NAT-based LVS load balancing

NAT (Network Address Translation) rewrites packet headers so that hosts with private IP addresses inside an enterprise can access the Internet, and so that external users can reach hosts that only hold private IPs inside the company. The VS/NAT topology is shown in Figure 2: the LVS load scheduler uses two NICs configured with different IP addresses, eth0 set to a private IP and connected to the internal network through a switch, and eth1 set to a public IP and connected to the external network.

In the first step, the user resolves the company's public address through a DNS server on the Internet and reaches the external IP of the load-balancing device. This external IP of LVS, which stands in for the real servers, is called the VIP (Virtual IP Address). By accessing the VIP, the user is connected to a backend real server, and all of this is transparent: the user believes they are visiting a real server, not knowing that the VIP they access belongs to a mere scheduler, nor where the real backend servers are or how many of them there are.

In the second step, the user sends a request to 124.126.147.168. LVS now selects one backend real server (192.168.0.1 ~ 192.168.0.3) according to the preset algorithm and forwards the request packet to it; before forwarding, LVS rewrites the packet's destination address and destination port to the IP address and corresponding port of the selected real server.

In the third step, the real server sends its response packet back to the LVS scheduler, which rewrites the source address and source port of the response packet to the VIP and the corresponding port; once the modification is complete, the scheduler returns the response packet to the end user. In addition, the LVS scheduler keeps a connection hash table that records each forwarded connection. When the next packet of the same connection arrives at the scheduler, the earlier record is looked up directly in the hash table, and the packet is sent to the same real server and port as recorded.
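
Once a cluster service is configured (as in the NAT walkthrough later in this article), you can watch this connection table on the scheduler:

sudo ipvsadm -L -n -c   # list current IPVS connections: client, virtual address (VIP), and chosen real server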

TUN-based LVS load balancing

In an LVS (NAT) cluster, every request packet and every response packet has to be forwarded through the LVS scheduler, so once the number of backend servers exceeds about 10, the scheduler becomes the bottleneck of the whole cluster. We also know that a request packet is always much smaller than a response packet, because the response packet carries the actual data the client asked for. The idea behind LVS (TUN) is therefore to separate requests from responses: the scheduler processes only the requests, while the real servers send their response packets straight back to the clients. The VS/TUN topology is shown in Figure 3. Here, IP tunneling (IP encapsulation) is a packet-wrapping technique: the original packet sent by the client to the scheduler is encapsulated and given a new header (with a new source address and port, destination address and port), so that a packet originally addressed to the scheduler's VIP can be forwarded through the tunnel to the selected backend real server; the new header's target IP address and port are those of the real server the dispatcher chose. LVS (TUN) mode requires that the real servers be directly connected to the external network, because each real server answers the client directly once it has received the request.
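
This article does not build a TUN cluster, but for reference, a minimal sketch of the real-server side of LVS/TUN (assuming the VIP were 10.2.10.10) would look roughly like this:

modprobe ipip                                          # load the IPIP tunnel module
ifconfig tunl0 10.2.10.10 netmask 255.255.255.255 up   # bind the VIP to the tunnel device
echo "1" > /proc/sys/net/ipv4/conf/tunl0/arp_ignore    # suppress ARP for the VIP, as in DR mode below
echo "2" > /proc/sys/net/ipv4/conf/tunl0/arp_announce
echo "0" > /proc/sys/net/ipv4/conf/tunl0/rp_filter     # accept decapsulated packets despite their "foreign" source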

DR-based LVS load balancing

LVS (TUN) mode requires a tunnel between the LVS dispatcher and the real servers, which adds load on the servers. DR mode, also called direct routing mode, is similar to LVS (TUN); its architecture is shown in Figure 4. In this mode LVS is still responsible for the inbound requests, but only for selecting a reasonable real server according to the algorithm; the selected backend real server sends the response packet back to the client itself.

Unlike tunnel mode, direct routing mode requires the dispatcher and the backend servers to be on the same LAN, and the VIP address must be shared by the scheduler and all backend servers: when a real server answers the client, it must set the response packet's source IP to the VIP and the destination IP to the client's IP, so that from the client's point of view the VIP it contacted is also the address the response comes from, and the backend servers remain imperceptible to it.

Since many machines now carry the same VIP, direct routing mode requires that only the scheduler's VIP be visible to the outside, so that client request packets reach the scheduler host. The VIP on every real server must be configured on a non-ARP network device, i.e. one that does not advertise its own MAC address for that IP, so the real servers' VIPs are invisible to the outside world; yet the real servers still accept network requests whose target address is the VIP and answer with the VIP as the source address. After the scheduler selects a real server, it does not modify the packet at all: it only rewrites the data frame's destination MAC address to the MAC address of the real server chosen by the algorithm, and sends the frame to that real server through the switch. Throughout the process, the real servers' VIP never needs to be visible externally.
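
One optional way to convince yourself that DR only rewrites the destination MAC (assuming the docker-based DR setup built later in this article, with VIP 172.17.0.10 on the docker0 bridge) is to watch the link-layer headers from the host:

sudo tcpdump -e -n -i docker0 host 172.17.0.10 and port 80   # -e prints MAC addresses; the IP header stays untouched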

LVS/NAT implementation

To use the IPVS kernel module, we first install ipvsadm on the host machine and try running it:

sudo apt-get update            # update the package sources
sudo apt-get install ipvsadm   # install the ipvsadm tool
sudo ipvsadm -L                # try running ipvsadm

Create the containers we need with docker:

docker run --privileged --name=RealServer1 -tdi ubuntu
docker run --privileged --name=RealServer2 -tdi ubuntu
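
The real-server addresses used below (172.17.0.6 and 172.17.0.7 in my environment) are simply whatever docker assigned on its default bridge; you can look yours up like this:

docker inspect -f '{{ .NetworkSettings.IPAddress }}' RealServer1   # prints e.g. 172.17.0.6
docker inspect -f '{{ .NetworkSettings.IPAddress }}' RealServer2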

Deploy Nginx on the RealServers to provide the web service; the procedure is the same for RealServer1 and RealServer2, and RealServer1 is shown here:

docker attach RealServer1
apt-get update
apt-get install vim -y 
apt-get install nginx -y
service nginx start

Still inside the RealServer1 container, edit the default page so that the two servers can be told apart (the path may differ between installations): vi /var/www/html/index.nginx-debian.html
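
If you prefer a one-liner over vi, something like this works too (the marker text is just an example; use anything that distinguishes the two servers):

sed -i 's/Welcome to nginx/RealServer1/' /var/www/html/index.nginx-debian.html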

So far we have configured both web servers. To verify the setup, open Firefox on the host and enter the two containers' IP addresses in the address bar.

The LoadBalancer (the host machine) uses an external IP address as the VIP, here 10.2.10.10 (this address is made up; my actual external network is 10.2.0.xx). Enable kernel IP forwarding on the LoadBalancer:

echo '1' | sudo tee /proc/sys/net/ipv4/ip_forward
cat /proc/sys/net/ipv4/ip_forward   # prints 1, meaning kernel IP forwarding is enabled
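
Equivalently (just an alternative to the echo above), sysctl can set the same kernel parameter:

sudo sysctl -w net.ipv4.ip_forward=1   # same effect as writing to /proc/sys/net/ipv4/ip_forward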

Use ipvsadm to add the IPVS rules and define the cluster service:

sudo ipvsadm -A -t 10.2.10.10:80 -s rr              # define the cluster service
sudo ipvsadm -a -t 10.2.10.10:80 -r 172.17.0.6 -m   # add RealServer1
sudo ipvsadm -a -t 10.2.10.10:80 -r 172.17.0.7 -m   # add RealServer2
sudo ipvsadm -L                                     # view the defined IPVS rules

# Defining the cluster service
 -A: add a new cluster service
 -t: use the TCP protocol
 -s: specify the load-balancing scheduling algorithm
 rr: round robin (one of the scheduling algorithms LVS implements)
 10.2.10.10:80: the IP address (VIP) and port of the cluster service

# Adding the Real Server rules
 -a: add a new RealServer rule
 -t: use the TCP protocol
 -r: specify the RealServer's IP address
 -m: use NAT mode
The two commands above add RealServer1 and RealServer2.

Test the result in the browser: requests to 10.2.10.10 should alternate between the two pages (if nothing seems to change, force a refresh and double-check the modified nginx pages).
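
A browser-free way to watch the round robin in action (assuming the VIP and pages configured above):

for i in 1 2 3 4; do curl -s http://10.2.10.10/; done   # each new connection should hit the other RealServer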

Building LVS/DR

Since I cannot test both setups in the same environment, we first need to tear down what was built above:

sudo ipvsadm -C   # clear the ipvsadm rules
docker stop RealServer1
docker stop RealServer2
docker rm RealServer1
docker rm RealServer2

Create three docker containers to simulate the member servers in the pool:

docker run --privileged --name=LoadBalancer -tid ubuntu
docker run --privileged --name=RealServer1 -tid ubuntu
docker run --privileged --name=RealServer2 -tid ubuntu

Configure the environment on both RealServers; RealServer1 is taken as the example:

docker attach RealServer1
apt-get update
apt install net-tools
apt-get install vim -y 
apt-get install nginx -y
#vi /usr/share/nginx/html/index.html
vi /var/www/html/index.nginx-debian.html
service nginx start

Modify the kernel parameters to suppress ARP. Taking RealServer1 as the example, log into the container and execute the following commands:

# Answer ARP queries only when the target IP is a local address configured on the interface the query arrived on
echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore

# Just to be safe, check that the change took effect
cat /proc/sys/net/ipv4/conf/lo/arp_ignore

# Always choose the most appropriate local address for ARP announcements: ignore the source address of the
# IP packet and prefer a local address on the outgoing interface that matches the target's subnet. If no
# suitable address is found, fall back to the address of the sending or receiving interface.
echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce

# The writes above take effect immediately; sysctl -p reloads any settings persisted in /etc/sysctl.conf
sysctl -p
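
Equivalently, the same four parameters can be set through sysctl directly (just an alternative to the echo commands above):

sysctl -w net.ipv4.conf.lo.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.lo.arp_announce=2
sysctl -w net.ipv4.conf.all.arp_announce=2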

Create a NIC alias and add a route

A packet is only processed when its destination IP is one of this machine's own addresses, so we need to add a NIC alias holding the VIP (172.17.0.10 is made up; my two docker containers were assigned 172.17.0.6 and 172.17.0.7 from docker's own range):

apt-get install network-manager
# configure the virtual IP
ifconfig lo:0 172.17.0.10 broadcast 172.17.0.10 netmask 255.255.255.255 up
# 172.17.0.x is on the same network segment as docker0
# add a route; since the VIP is on the same segment, this step can also be skipped
route add -host 172.17.0.10 dev lo:0
service network-manager restart
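
A quick sanity check on the alias and the route (assuming net-tools is installed as above):

ifconfig lo:0                 # should show 172.17.0.10
route -n | grep 172.17.0.10   # should show a host route out of lo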

Configure the LoadBalancer environment:

docker attach LoadBalancer
apt-get update
apt install net-tools
apt-get install ipvsadm
ifconfig eth0:0 172.17.0.10 netmask 255.255.255.0 up   # ifconfig takes no port, only the VIP
ipvsadm -A -t 172.17.0.10:80 -s rr              # define the cluster service
ipvsadm -a -t 172.17.0.10:80 -r 172.17.0.6 -g   # add RealServer1
ipvsadm -a -t 172.17.0.10:80 -r 172.17.0.7 -g   # add RealServer2
ipvsadm -L                                      # view the defined IPVS rules
# -g: use DR (direct routing) mode
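
You can confirm which forwarding method each real-server entry uses (assuming the rules above were added):

ipvsadm -L -n   # the Forward column shows "Route" for DR entries (NAT entries would show "Masq")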

Test the result in the browser again: requests to 172.17.0.10 should alternate between the two RealServers (if nothing seems to change, force a refresh and double-check the nginx pages).

