Ubuntu 12.04 LVS setup: some experience and notes

A recent project needed IPVS load balancing for HTTP requests coming in from outside, to spread the processing across multiple servers, so I looked into the topic and am recording it here.

LVS (also called IPVS) is a method of IP-layer load balancing and content-based request distribution. Load balancing simply means spreading the processing across multiple servers.

There are three common IPVS load-balancing modes: NAT, DR, and TUN. Each has its own advantages and disadvantages.

First, the official comparison table:

                    VS/NAT           VS/TUN           VS/DR
Server              any              tunneling        non-ARP device
Server network      private          LAN/WAN          LAN
Server number       low (10~20)      high (100)       high (100)
Server gateway      load balancer    own router       own router
Because this was built on an internal network, I created multiple virtual machines on a single host to simulate multiple machines for testing. This avoids the complications of an intermediate network and keeps the number of mistakes down while I am still getting familiar with the subject.

1. IPVS/NAT mode: in this mode the load-balancing server acts as a gateway. It forwards requests to several web servers in the background for processing; the back-end servers send their responses back to the load balancer, which then returns them to the individual users. The user therefore does not perceive the existence of the actual servers, similar to a reverse proxy. However, in this mode the load-balancing server easily becomes a bottleneck.

 

Here I used the load-balancing server itself as the gateway, so the default routes of the back-end servers must point to it.
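A rough sketch of that NAT setup (the virtual-service and real-server addresses are the ones that appear in the ipvsadm output further down; the internal gateway address 10.0.2.2 is an assumption, substitute your own):

# On the load balancer: enable forwarding and add a virtual service whose real servers use NAT (masquerading).
echo 1 > /proc/sys/net/ipv4/ip_forward
ipvsadm -A -t 10.69.142.28:80 -s rr
ipvsadm -a -t 10.69.142.28:80 -r 10.0.2.4:80 -m      # -m = masquerading (NAT) forwarding
ipvsadm -a -t 10.69.142.28:80 -r 10.0.2.5:80 -m
ipvsadm -a -t 10.69.142.28:80 -r 10.0.2.15:80 -m
# On each real server: point the default route at the load balancer so replies go back through it.
route add default gw 10.0.2.2                        # assumed internal address of the load balancer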

Pitfalls encountered here:

(1) Because the virtual machines use NAT networking, a default-gateway rule is automatically added to each VM's routing table. If the rules are added directly on top of that, you can get curl: (56) Recv failure: Connection reset by peer. In that situation, check the routing table carefully.

2. IPVS/DR mode: in this mode the load-balancing server and the real servers sit on the same physical network segment. After the load balancer receives a request, it rewrites the destination MAC address of the data frame to the MAC address of the selected real server. The real server then replies directly to the requester.

 

When setting this up, remember to turn off ARP discovery and announcement for the virtual IP on the real servers, otherwise requests may not be forwarded to the appropriate server.
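As a sketch of what that means in practice (the VIP 10.69.142.80 and the real-server address 10.69.142.73 are taken from the connection output below, but treat them as placeholders): the VIP is bound to the loopback interface on each real server and ARP for it is suppressed, so only the load balancer answers ARP requests for the VIP.

# On each real server (DR mode):
ifconfig lo:0 10.69.142.80 netmask 255.255.255.255 up
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
# On the load balancer: real servers are added with -g (gatewaying, i.e. direct routing).
ipvsadm -A -t 10.69.142.80:80 -s rr
ipvsadm -a -t 10.69.142.80:80 -r 10.69.142.73:80 -g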

3. IPVS/TUN mode uses IP tunneling: the original IP packet is encapsulated inside an outer IP packet and forwarded over the corresponding link to the selected real server, which decapsulates and processes the inner packet. The details of this can be observed by capturing the packets with Wireshark.

 

The packets captured in the middle with Wireshark are shown below.

 

The main pitfall here is on the web servers: remember to turn off the rp_filter parameter. rp_filter controls whether the system checks a packet's source address and verifies that the reverse path is the best route (strict mode) or merely a valid route (loose mode); packets that fail the check are dropped. If it is not turned off, you may see some requests forwarded to a real server with no corresponding response recorded on that real server.
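A sketch of the real-server side in TUN mode (the VIP 10.69.142.80 and the real-server address are placeholders): the IPIP tunnel interface carries the VIP, and rp_filter is relaxed so the decapsulated packets are not dropped.

# On each real server (TUN mode):
modprobe ipip
ifconfig tunl0 10.69.142.80 netmask 255.255.255.255 up
echo 0 > /proc/sys/net/ipv4/conf/tunl0/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/all/rp_filter
# On the load balancer: real servers are added with -i (IPIP tunneling).
ipvsadm -A -t 10.69.142.80:80 -s rr
ipvsadm -a -t 10.69.142.80:80 -r 10.0.2.4:80 -i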

Regarding the IPVS scheduling algorithms, I only tested rr / wrr / lc / wlc. The other algorithms, which balance load based on the destination address being accessed, I still could not really understand after reading a lot of material, so I am only noting them here for now.
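For reference, switching the scheduler of an existing virtual service is a one-liner (the VIP is the one from the stats output below):

ipvsadm -E -t 10.69.142.28:80 -s wlc     # rr / wrr / lc / wlc are the schedulers tested here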

Some IPVS debugging methods:

1. Use network tools such as ping and tcpdump to check connectivity and the relevant routes (see the sketch after this list).

2. Use the ipvsadm command to view the forwarding statistics and the TCP/UDP connection states:


root@roaddb-VirtualBox:/home/roaddb# ipvsadm -l --stats
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port               Conns   InPkts  OutPkts  InBytes OutBytes
  -> RemoteAddress:Port
TCP  10.69.142.28:http                 100      564      432    40110    83556
  -> 10.0.2.4:http                      33      231      165    16995    37290
  -> 10.0.2.5:http                      33      231      165    16995    37290
  -> 10.0.2.15:http                     34      102      102     6120     8976
root@roaddb-VirtualBox:/home/roaddb# ipvsadm -l -c
IPVS connection entries
pro expire state       source             virtual            destination
TCP 00:52  SYN_RECV    10.69.142.70:57004 10.69.142.80:http  10.69.142.73:http
3. Check the corresponding Nginx logs to confirm how requests are being forwarded (also covered in the sketch below).
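A couple of concrete commands for this kind of checking (the interface name and the Nginx log path are assumptions, adjust to your system):

# On the load balancer or a real server: watch the HTTP traffic actually arriving.
tcpdump -i eth0 -nn tcp port 80
# On each real server: watch the Nginx access log to confirm which requests it served.
tail -f /var/log/nginx/access.log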

About keepalived: keepalived is mainly used to provide a backup for the front-end load-balancing server, so that a standby machine automatically takes over when the load balancer goes down. It can also supervise the back-end real servers: when a real server goes down it is quickly taken out of rotation by setting its weight to 0. Keepalived works by holding an election among the routers (VRRP). Documentation: http://www.keepalived.org/doc/

Two main keepalived configuration items are worth noting:

(1) inhibit_on_failure: this parameter governs what happens when health checking finds a real server down; requests stop being forwarded to the failed server (its weight is set to 0), and it can be put back into the rules once it recovers.

(2) persistence_timeout: this parameter enables connection persistence, so that connections from the same client keep being forwarded to the same selected real server for a period of time. A configuration sketch follows below.
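A minimal keepalived.conf sketch showing where these two options sit (the addresses, weights and timeouts are assumptions, not the actual configuration used here):

virtual_server 10.69.142.80 80 {
    delay_loop 6
    lb_algo wrr
    lb_kind DR
    persistence_timeout 50            # keep a client on the same real server for 50 seconds
    protocol TCP

    real_server 10.69.142.73 80 {
        weight 1
        inhibit_on_failure            # on health-check failure, set weight to 0 instead of removing
        TCP_CHECK {
            connect_timeout 3
        }
    }
}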

There are also other failover tools such as heartbeat, said to be more powerful than keepalived. [still looking into this]

Reference link: https://www.cnblogs.com/kevingrace/p/6248941.html
----------------
Disclaimer: this article was originally written by CSDN blogger "skiworld" under the CC 4.0 BY-SA license; when reproducing it, please include the original source link and this statement.
Original link: https://blog.csdn.net/yanxiaobugyunsan/article/details/79265105
