Common load balancing and high-availability technologies

Copyright: original work; please credit the source when reposting. https://blog.csdn.net/sa19861211/article/details/90441633

Table of Contents

First, load balancing technologies
    Comparison of load balancing technologies
    1. Hardware load balancing
    2. Software load balancing
    3. DNS load balancing
    Load balancing strategies
        LVS load balancing strategies
        HAProxy load balancing strategies
        Nginx load balancing strategies

Second, high availability technologies

Third, mature Linux load balancing and high availability architectures
    The three DRBD replication protocols


First, load balancing technologies

Comparison of load balancing technologies:

1. Hardware Load Balancing:

  • F5 BIG-IP (recommended)
  • Citrix NetScaler

2. Software Load Balancing:

| Technology | Virtual server implementation | Applicable scenario |
| --- | --- | --- |
| LVS | Virtual server via NAT (VS/NAT) | Layer-4 load balancing; the number of real servers should generally not exceed 20. |
| LVS | Virtual server via IP tunneling (VS/TUN) | Layer-4 load balancing; the scheduler only handles scheduling of requests, and the real servers return responses directly to the client. |
| LVS | Virtual server via direct routing (VS/DR) | Layer-2 load balancing; the scheduler and the real servers must be connected to the same LAN through a physical network adapter. |
| HAProxy | N/A | Supports load balancing at both layer 4 and layer 7. |
| Nginx | N/A | Supports load balancing at both layer 4 and layer 7. |
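
To make the difference between these forwarding modes concrete, here is a minimal, user-space sketch of the VS/NAT idea: every byte of both the request and the response passes through the scheduler, which is why NAT mode caps the number of real servers, whereas in VS/TUN and VS/DR the real servers answer the client directly. This is only a conceptual Python model (real LVS does this in the kernel at layer 4), and the backend addresses and ports are made up.

```python
# Conceptual sketch of VS/NAT: the director accepts the client connection,
# picks a real server, and relays traffic in BOTH directions, so every
# response byte flows back through the director (unlike VS/TUN and VS/DR,
# where real servers reply to the client directly).
# Hypothetical backend addresses; real LVS works in the kernel at layer 4.
import itertools
import socket
import threading

BACKENDS = [("10.0.0.11", 8080), ("10.0.0.12", 8080)]  # made-up real servers
rr = itertools.cycle(BACKENDS)                          # simple round-robin

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from src to dst until src closes."""
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    finally:
        dst.close()

def handle(client: socket.socket) -> None:
    backend = socket.create_connection(next(rr))
    # Two pumps: client -> real server (request) and real server -> client (response).
    threading.Thread(target=pipe, args=(client, backend), daemon=True).start()
    threading.Thread(target=pipe, args=(backend, client), daemon=True).start()

def main() -> None:
    director = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    director.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    director.bind(("0.0.0.0", 8000))  # the virtual service address clients connect to
    director.listen()
    while True:
        conn, _addr = director.accept()
        handle(conn)

if __name__ == "__main__":
    main()
```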

3. DNS load balancing:

  • DNS round-robin
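
DNS round-robin simply publishes several A records for one name and rotates the order in which they are returned, so clients that take the first address get spread across the servers. The sketch below shows only the client side; `www.example-shop.com` is a hypothetical name assumed to have multiple A records.

```python
# Client-side view of DNS round-robin: one hostname resolves to several
# addresses, and the authoritative DNS rotates the order of the records,
# so naive clients that take the first entry land on different servers.
# "www.example-shop.com" is a hypothetical name with multiple A records.
import socket

def pick_server(hostname: str) -> str:
    _name, _aliases, addresses = socket.gethostbyname_ex(hostname)
    print(f"{hostname} -> {addresses}")
    return addresses[0]  # naive client: just use the first address returned

if __name__ == "__main__":
    print("chosen:", pick_server("www.example-shop.com"))
```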

Load balancing strategies:

LVS load balancing strategies:

| Load balancing strategy | Description | Applicable scenario |
| --- | --- | --- |
| Round-robin (rr) | External requests are distributed to the real servers in the cluster in turn. | Servers with identical performance |
| Weighted round-robin (wrr) | Builds on simple round-robin and distributes requests according to each server's assigned weight. | Servers with differing performance |
| Least connections (lc) | Dynamically dispatches requests to the server with the fewest established connections. | Servers with differing performance |
| Weighted least connections (wlc) | Based on least connections; servers with higher weights take on a larger share of the active connections. | Servers with widely differing performance |
| Locality-based least connections (lblc) | Uses the request's destination IP address to find the server most recently used for that address; if that server is available and not overloaded, the request is sent to it. If the server does not exist, or it is overloaded and another server is at half its workload, a server is chosen by the "least connections" principle. | Mainly used in cache cluster systems |
| Locality-based least connections with replication (lblcr) | Maps the request's destination IP address to a group of servers and picks one from the group by "least connections"; if it is not overloaded, the request is sent to it. If it is overloaded, a server is chosen from the whole cluster by "least connections", added to the group, and the request is sent to it. | Mainly used in cache cluster systems |
| Destination hashing (dh) | Uses the request's destination IP address as the hash key to look up a server in a statically allocated hash table; if that server is available and not overloaded, the request is sent to it, otherwise nothing is returned. | |
| Source hashing (sh) | Uses the request's source IP address as the hash key to look up a server in a statically allocated hash table; if that server is available and not overloaded, the request is sent to it, otherwise nothing is returned. | |
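
The first few strategies in the table are easy to express as plain selection functions. The sketch below illustrates round-robin, weighted round-robin, and least connections in Python, with made-up server names, weights, and connection counts; it shows only the selection logic described in the table, not the kernel IPVS implementation.

```python
# User-space illustration of three LVS scheduling strategies from the table:
# round-robin, weighted round-robin, and least connections.
# Server names, weights, and connection counts are invented for the example.
import itertools
import random

SERVERS = ["rs1", "rs2", "rs3"]
WEIGHTS = {"rs1": 5, "rs2": 3, "rs3": 1}   # higher weight = larger share of requests
ACTIVE = {"rs1": 12, "rs2": 4, "rs3": 9}   # currently established connections

_rr = itertools.cycle(SERVERS)

def round_robin() -> str:
    """Hand out servers strictly in turn (servers with identical performance)."""
    return next(_rr)

def weighted_round_robin() -> str:
    """Pick in proportion to weight (servers with differing performance)."""
    # random.choices is a simple stand-in for the deterministic WRR sequence.
    return random.choices(SERVERS, weights=[WEIGHTS[s] for s in SERVERS])[0]

def least_connections() -> str:
    """Send the request to the server with the fewest active connections."""
    return min(SERVERS, key=lambda s: ACTIVE[s])

if __name__ == "__main__":
    print([round_robin() for _ in range(6)])
    print([weighted_round_robin() for _ in range(6)])
    print(least_connections())
```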

HAProxy load balancing strategies:

  • roundrobin: simple round-robin
  • static-rr: weighted round-robin
  • source: dispatches according to the request's source IP, so the same client always reaches the same backend; one way to deal with session synchronization (see the sketch below).
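
A minimal sketch of the `source` rule: hash the client's source IP so the same client always lands on the same backend, which avoids having to share session state between servers. The backend names and the use of MD5 are illustrative assumptions, not HAProxy's actual implementation.

```python
# "source"-style balancing: hash the client's source IP so a given client
# is always sent to the same backend, sidestepping session synchronization.
# Backend names and the choice of MD5 are illustrative, not HAProxy internals.
import hashlib

BACKENDS = ["app1", "app2", "app3"]

def pick_by_source_ip(client_ip: str) -> str:
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return BACKENDS[int(digest, 16) % len(BACKENDS)]

if __name__ == "__main__":
    for ip in ("203.0.113.7", "203.0.113.7", "198.51.100.42"):
        print(ip, "->", pick_by_source_ip(ip))  # same IP -> same backend
```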

Nginx load balancing strategies:

  • Round-robin
  • Weighted round-robin
  • ip_hash
  • url_hash
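
ip_hash works like HAProxy's `source` above (same client IP, same backend). url_hash keys on the requested URL instead, so repeated requests for the same resource always hit the same backend, which raises the cache hit ratio on cache servers. A sketch of that selection rule, with hypothetical cache node names (not Nginx's implementation):

```python
# url_hash idea: hash the request URL so the same resource is always fetched
# from the same backend cache node, improving the cache hit ratio.
# Cache node names are hypothetical; this is only the selection rule.
import hashlib

CACHE_NODES = ["cache1", "cache2", "cache3"]

def pick_by_url(url: str) -> str:
    digest = hashlib.sha1(url.encode()).hexdigest()
    return CACHE_NODES[int(digest, 16) % len(CACHE_NODES)]

if __name__ == "__main__":
    for url in ("/img/logo.png", "/img/logo.png", "/css/site.css"):
        print(url, "->", pick_by_url(url))  # same URL -> same cache node
```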

Second, high availability technologies

| Technology | Applicable scenario | Use case |
| --- | --- | --- |
| Keepalived | Generally deployed on the front-end load balancers in a dual-machine (active/standby) setup to keep the front-end load balancer highly available. | |
| Heartbeat | In production environments, Heartbeat combined with DRBD can provide a highly available file system. | Officially recommended by MySQL as one means of achieving MySQL high availability |
| DRBD | Block-device-level high availability. When data is written to the local file system, it is also sent over the network to another host and written there in the same form, keeping the local (primary) host and the remote host (peer node) synchronized in real time. | Listed in the official MySQL manual and documentation as one of the recommended high availability solutions |
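
The Keepalived row describes the usual dual-machine pattern: a standby node watches the active load balancer and takes over its virtual IP when it stops responding. The sketch below is only a conceptual model of that check-and-takeover loop; real Keepalived uses VRRP advertisements and moves the VIP at the kernel level, and the addresses, port, and threshold here are made-up assumptions.

```python
# Conceptual model of Keepalived-style active/standby failover: the backup
# node health-checks the active load balancer and claims the virtual IP
# after repeated failures. Real Keepalived does this with VRRP; addresses,
# port, and the failure threshold below are invented for illustration.
import socket
import time

ACTIVE_LB = ("192.0.2.10", 80)   # hypothetical active load balancer
VIRTUAL_IP = "192.0.2.100"       # hypothetical VIP that clients connect to
FAIL_LIMIT = 3                   # consecutive failed checks before takeover

def active_is_healthy() -> bool:
    try:
        with socket.create_connection(ACTIVE_LB, timeout=1):
            return True
    except OSError:
        return False

def main() -> None:
    failures = 0
    while True:
        failures = 0 if active_is_healthy() else failures + 1
        if failures >= FAIL_LIMIT:
            # With Keepalived this is the BACKUP -> MASTER transition;
            # here we only report the decision instead of moving the VIP.
            print(f"active load balancer down, claiming virtual IP {VIRTUAL_IP}")
            break
        time.sleep(1)

if __name__ == "__main__":
    main()
```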

Third, mature Linux load balancing and high availability architectures

  • LVS+Keepalived    
  • HAProxy+Keepalived    
  • Nginx+Keepalived    
  • DRBD+Heartbeat    
  • DRBD+Heartbeat+NFS: builds a highly available file server

The three DRBD replication protocols

  • Synchronous replication protocol (Protocol C): the primary considers a write complete only when both the local and the remote disk confirm that the write has finished.
  • Asynchronous replication protocol (Protocol A): the primary considers a write complete as soon as the local write has finished and the replication packet has been placed in the local TCP send buffer.
  • Memory-synchronous replication protocol (Protocol B): the primary considers a write complete when the local disk write has finished and the replication packet has reached the peer node.
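
The three protocols differ only in how far replication must have progressed before the primary acknowledges a write. The small sketch below encodes those three acknowledgement points; the stage names are invented for illustration and it models only the decision rule, not DRBD's kernel implementation.

```python
# When may the primary acknowledge a write under DRBD's replication protocols?
#   A (asynchronous):       local write done + packet in the local TCP send buffer
#   B (memory synchronous): local write done + packet has reached the peer node
#   C (synchronous):        local write done + peer has written it to its own disk
# Stage names are illustrative; this only models the acknowledgement rule.

REQUIRED_STAGE = {"A": "in_send_buffer", "B": "received_by_peer", "C": "on_peer_disk"}
ORDER = ["in_send_buffer", "received_by_peer", "on_peer_disk"]

def write_complete(protocol: str, local_write_done: bool, replication_stage: str) -> bool:
    """Return True if the primary may consider the write complete."""
    if not local_write_done:
        return False
    return ORDER.index(replication_stage) >= ORDER.index(REQUIRED_STAGE[protocol])

if __name__ == "__main__":
    # A write that has hit the local disk and sits in the TCP send buffer:
    for proto in ("A", "B", "C"):
        print(proto, write_complete(proto, True, "in_send_buffer"))
    # -> A True, B False, C False
```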
