Commonly used load balancing tools, explained and illustrated

Common load balancing tools

Nginx, LVS, HAProxy, F5

Load balancing means that a proxy server receives requests and distributes them evenly across a group of servers. It mainly addresses network congestion and improves server responsiveness: by serving requests from the nearest available node it provides better access quality and relieves the back-end servers of heavy concurrent load.

What is Nginx?

Nginx is a web server that can also act as a reverse proxy for the HTTP, HTTPS, SMTP, POP3 and IMAP protocols.

What is the principle behind Nginx load balancing?

The client sends a request to the reverse proxy, which forwards the request, according to a configured load-balancing policy, to one of the target servers (all running the same application) and returns the obtained content to the client. During this process the proxy may, depending on its configuration, send successive requests to different servers.
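To make the flow concrete, here is a minimal illustrative sketch in Python (not how Nginx itself is implemented): a proxy process accepts a client request, picks one backend from a fixed list in round-robin order, forwards the request, and relays the response back to the client. The backend addresses and the listening port are hypothetical placeholders.

```python
# Minimal reverse-proxy sketch: receive a request, pick a backend in
# round-robin order, forward the request, and relay the backend's response.
import itertools
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

BACKENDS = ["http://127.0.0.1:8001", "http://127.0.0.1:8002"]  # hypothetical upstreams
next_backend = itertools.cycle(BACKENDS)

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        upstream = next(next_backend)                # round-robin selection
        with urllib.request.urlopen(upstream + self.path) as resp:
            body = resp.read()
            status = resp.status
        self.send_response(status)                   # relay status and body to the client
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ProxyHandler).serve_forever()
```

In a real setup Nginx itself performs this selection (for example through an upstream block), and the policy can be weighted or connection-based rather than plain round robin.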

Nginx advantages:

  • High concurrency: Nginx officially claims support for 50,000 concurrent connections; in practice around 30,000 can be reached, handling hundreds of millions of requests a day. The reason is that Nginx uses the newer epoll (Linux 2.6 kernel) and kqueue (FreeBSD) network I/O models, whereas traditional Apache uses the select model.
  • Low memory consumption
  • Support for load balancing
  • Support for reverse proxying
  • Low cost

Nginx forward proxy

A forward proxy sits between the client and the origin server. To obtain content from the origin server, the client sends a request to the proxy and specifies the target (the origin server); the proxy then forwards the request to the origin server and returns the obtained content to the client. The client must be explicitly configured to use a forward proxy.

Forward proxy in one sentence: the proxy acts on behalf of the client.

Nginx Reverse Proxy

Reverse proxy (Reverse Proxy) mode means the proxy server accepts connection requests from the internet, forwards them to servers on the internal network, and returns the results from those servers to the client that requested the connection. In this case, the proxy server appears to the outside world as a reverse proxy server.

Reverse proxy in one sentence: the proxy acts on behalf of the server.

Nginx is typically used for layer-7 load balancing.
For some large sites, a multi-level scheme is usually used: DNS + layer-4 load balancing + layer-7 load balancing.

What is LVS?

LVS (Linux Virtual Server) is a free software project initiated by Dr. Zhang Wensong and was one of the earliest free software projects to appear in China.

What is the three-tier structure of an LVS cluster?

In general, an LVS cluster adopts a three-tier structure whose main components are:

A. Load balancer: the front-end machine of the whole cluster, responsible for distributing client requests to a group of servers, while the client sees the service as coming from a single IP address (which we can call the virtual IP address).

B. Server pool: a group of servers that actually execute the client requests; the services provided include WEB, MAIL, FTP and DNS.

C. Shared storage: provides a shared storage area for the server pool, making it easy for the servers to hold the same content and provide the same services.

LVS advantages:

  1. Open source and free;
  2. Plenty of related technical resources can be found on the internet;
  3. Has the typical advantages of software-based load balancing;

Which two parts make up the LVS software?

It consists of ipvs and ipvsadm.

  1. ipvs (ip virtual server): a piece of code that works in kernel space, called ipvs; it is the code that actually performs the scheduling.
  2. ipvsadm: another piece that works in user space, called ipvsadm; it is responsible for writing rules for the ipvs kernel framework, defining what the cluster service is and which machines are the real back-end servers (Real Server).
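To make this split concrete, here is a small illustrative sketch (Python, run as root on a host with the ip_vs kernel module available) that uses ipvsadm from user space to define a virtual service and register two real servers. The addresses are hypothetical, and the flags should be double-checked against your ipvsadm man page.

```python
# Illustrative only: user-space ipvsadm writes the rules, and the in-kernel
# ipvs code then applies them to real traffic. Addresses below are made up.
import subprocess

VIP = "192.168.10.100:80"                        # the cluster (virtual) service
REAL_SERVERS = ["10.0.0.11:80", "10.0.0.12:80"]  # back-end Real Servers

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Define the cluster service (TCP) and choose a scheduler, e.g. round robin.
run(["ipvsadm", "-A", "-t", VIP, "-s", "rr"])

# 2. Register each Real Server; "-m" selects NAT (masquerading) forwarding.
for rs in REAL_SERVERS:
    run(["ipvsadm", "-a", "-t", VIP, "-r", rs, "-m"])

# 3. List the resulting table kept by the kernel.
run(["ipvsadm", "-L", "-n"])
```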

What modes does LVS have?

NAT mode, TUN mode, DR mode

NAT mode advantages and disadvantages:

Because both requests and responses must pass through the LVS server, LVS becomes a bottleneck under heavy traffic; it is generally suitable for around 10-20 nodes.

The gateway address of each server node must be the internal network address of the LVS server.

NAT mode supports IP address and port translation, i.e. the real server's port and the port requested by the user can be different.

LVS-TUN mode:

Connection scheduling and management are the same as in VS/NAT, but packet forwarding uses IP tunneling: an outer IP tunnel header is added in front of the original client IP header, so the original request packet is not modified at all, only encapsulated with an extra IP header, and is then routed to the chosen RS. The requirement is that every server must support the "IP Tunneling" or "IP Encapsulation" protocol.

How LVS-DR mode works:

The Director Server acts as the access entry point of the cluster, but not as a gateway. The Real Servers in the back-end server pool are on the same physical network as the Director Server, and response packets sent to the client do not pass through the Director Server. To accept traffic for the whole cluster, the VIP address must be configured on both the DS and the RS.

Introduction to the ten LVS scheduling algorithms

1. Round Robin (rr)

The scheduler uses the "round-robin" algorithm to distribute external requests to the real servers in the cluster in turn. It treats every server equally, regardless of the actual number of connections or the load on each server.

2. Weighted Round Robin (wrr)

The scheduler uses the "weighted round-robin" algorithm to dispatch requests according to the processing capacity of each real server, ensuring that servers with higher capacity handle more traffic. The scheduler can automatically query the load of the real servers and adjust their weights dynamically.
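One simple way to picture weighted round robin (illustrative only; the real ipvs implementation interleaves picks more smoothly, but the proportions come out the same) is to expand each server into as many slots as its weight and cycle through the slots. The names and weights below are hypothetical.

```python
import itertools

# Hypothetical weights: a more powerful server gets a larger weight.
WEIGHTS = {"serverA": 1, "serverB": 2, "serverC": 3}

# Expand each server into `weight` slots, then cycle through the slots forever.
slots = [name for name, weight in WEIGHTS.items() for _ in range(weight)]
wrr = itertools.cycle(slots)

for _ in range(6):
    print(next(wrr))   # per full cycle: serverA once, serverB twice, serverC three times
```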

3. Least Connections (lc)

The scheduler uses the "least connections" algorithm to dynamically dispatch network requests to the server with the fewest established connections. If the real servers in the cluster have similar performance, the "least connections" algorithm balances the load well.

4. Weighted Least Connections (wlc)

When the servers in the cluster differ greatly in performance, the scheduler uses the "weighted least connections" algorithm to balance the load: a server with a higher weight bears a larger proportion of the active connections. The scheduler can automatically query the load of the real servers and adjust their weights dynamically.
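The core of weighted least connections can be written as a one-line selection: pick the server with the lowest active_connections / weight ratio (setting every weight to 1 gives the plain least-connections algorithm from point 3). A minimal sketch with hypothetical server names and counters:

```python
# Hypothetical snapshot of per-server state: active connection count and weight.
servers = {
    "serverA": {"active": 10, "weight": 1},
    "serverB": {"active": 15, "weight": 3},
    "serverC": {"active": 12, "weight": 2},
}

def wlc_pick(servers):
    # Weighted least connections: the lowest active/weight ratio wins.
    return min(servers, key=lambda name: servers[name]["active"] / servers[name]["weight"])

print(wlc_pick(servers))  # serverB: 15/3 = 5 beats 10/1 = 10 and 12/2 = 6
```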

5. Locality-Based Least Connections (lblc)

"Locality-Based Least Connections" scheduling algorithms for load balancing target IP address, the key for Cache cluster system. The algorithm to find the IP address of the target server to the request destination IP address of the most recently used, if the server is available and is not overloaded, send a request to the server; if the server does not exist, or if the server has a server is overloaded and half workload, then use the principle of "least Connections" is available to select a server, the request is sent to the server.

6. Locality-Based Least Connections with Replication (lblcr)

"Take copy of Locality-Based Least Connections" load balancing scheduling algorithm is also for the destination IP address, the key for Cache cluster system. It differs from the LBLC algorithm is that it wants to maintain the mapping from a target IP address to a group of servers, and LBLC algorithm maintains a mapping from a target IP address to a server. The algorithm to find the IP address of the target server group corresponding to the target IP address request, according to the "minimum connection" elected by a server from a server group, if the server is not overloaded, send a request to the server; if the server is overloaded , according to "minimum connection" principle chosen from a server in the cluster, and the server is added to the server group, sends a request to the server. Meanwhile, when the server group for some time not been modified, the busiest server is removed from the server group, in order to reduce the degree of replication.

7. Destination Hashing (dh)

"Target address hash" scheduling algorithm in accordance with the request destination IP address as the hash key (Hash Key) to find the corresponding server from the list of hash static allocation, if the server is overloaded and not available, the request is sent to the the server, otherwise empty.

8. Source Hashing (sh)

"Source address hash" scheduling algorithm according to the source IP address of the request, as the hash key (Hash Key) to find the corresponding server from the list of hash static allocation, if the server is overloaded and not available, the request is sent to the the server, otherwise empty.

9. Shortest Expected Delay (sed)

Based on the wlc algorithm. For example, suppose three machines A, B and C have weights 1, 2 and 3 and also have 1, 2 and 3 active connections respectively. With the wlc algorithm, a new request could go to any of A, B or C. The sed algorithm instead computes (active connections + 1) / weight for each server:
A: (1 + 1) / 1 = 2
B: (2 + 1) / 2 = 1.5
C: (3 + 1) / 3 ≈ 1.33
Based on this result, the new connection goes to C.
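The same calculation in a few lines of Python, reproducing the example above (the server state is hypothetical):

```python
# (active connections, weight) for the three example servers A, B and C.
servers = {"A": (1, 1), "B": (2, 2), "C": (3, 3)}

def sed_pick(servers):
    # Shortest expected delay: minimize (active + 1) / weight.
    return min(servers, key=lambda name: (servers[name][0] + 1) / servers[name][1])

for name, (active, weight) in servers.items():
    print(name, (active + 1) / weight)    # A: 2.0, B: 1.5, C: 1.333...

print("chosen:", sed_pick(servers))       # C
```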

10. Never Queue (nq)

No queuing: if there is a real server whose connection count is 0, the request is assigned to it directly, without performing the sed calculation.

Load balancing is commonly divided into layer 4 and layer 7; LVS is mainly used for layer-4 load balancing.

So-called layer-4 load balancing works on IP address + port; its main representative is LVS.

Layer-7 load balancing, also called content switching, works on application-layer information such as the URL; its main representative is Nginx.

What is HAProxy?

HAProxy is free and open-source software written in C [1] that provides high availability, load balancing, and proxying for TCP- and HTTP-based applications.

HAProxy is particularly suited to web sites under very heavy load that also need session persistence or layer-7 processing. Running on current hardware, HAProxy can support thousands of concurrent connections, and its mode of operation makes it easy to integrate into your existing architecture while keeping your web servers from being exposed on the network.

HAProxy implements an event-driven, single-process model, which supports a very large number of concurrent connections. Multi-process or multi-threaded models are constrained by memory limits, by system scheduler limits, and by ubiquitous locking, and can rarely handle thousands of concurrent connections well. The event-driven model does not have these problems, because it performs all of these tasks in user space (User-Space) with better resource and time management. The downside of this model is that such programs generally scale poorly on multi-core systems, which is why they must be optimized so that each CPU cycle does more work.
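The event-driven, single-process idea is easy to demonstrate with Python's selectors module. The sketch below is an illustrative echo server, not HAProxy's code: a single process registers all sockets with one selector and multiplexes them in one loop, so no thread or process is needed per connection.

```python
import selectors
import socket

# One process, one event loop: the selector reports which sockets are ready,
# so a single thread can service many connections without blocking on any one.
sel = selectors.DefaultSelector()

def accept(server_sock):
    conn, _ = server_sock.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, handle)

def handle(conn):
    data = conn.recv(4096)
    if data:
        conn.sendall(data)          # echo back; a proxy would forward to a backend instead
    else:
        sel.unregister(conn)
        conn.close()

server = socket.socket()
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 9000))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)

while True:
    for key, _ in sel.select():
        key.data(key.fileobj)       # dispatch to the callback registered for this socket
```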

What are the five parts of the HAProxy configuration?

  • global: sets process-wide configuration parameters, often related to the operating system.
  • defaults: default configuration parameters that can be used by the frontend, backend and listen sections;
  • frontend: the virtual front-end node that receives requests; a frontend can, according to rules, directly specify which backend to use;
  • backend: configuration of the back-end server cluster; one backend corresponds to one or more real servers;
  • listen: a combination of frontend and backend.

HAProxy is mainly used for layer-7 load balancing.

What is F5?

F5 Networks (NASDAQ: FFIV) is the global leader in the field of Application Delivery Networking (ADN). Founded in 1996 and headquartered in Seattle, F5 helps the world's largest enterprises, service providers, government agencies and consumer brands deliver applications faster, more securely and more intelligently through its cloud and security solutions, giving businesses the application architecture they need without giving up speed and manageability.

What are F5's load balancing features?

1. Multi-link load balancing and redundancy
2. Firewall load balancing
3. Server load balancing
4. High availability
5. High security
6. System management

The core of F5 is the Virtual Server.

Origin blog.csdn.net/chen_jimo_c/article/details/104949769