From networking to distributed systems: load balancing


Network concurrency and load balancing

OSI seven-layer reference model

The OSI model is layered: each layer implements its own functions and protocols and communicates with the adjacent layers through defined interfaces. The OSI service definitions specify the services provided at each layer. A layer's service is a capability of that layer and the layers below it, offered to the layer above through the interface, and the services a layer provides are independent of how they are implemented.

  • Application layer: provides services to the various applications
  • Presentation layer: data format conversion and data encryption
  • Session layer: establishes, manages and maintains sessions
  • Transport layer: end-to-end data transfer; defines the transport protocols (TCP/UDP) and port numbers
  • Network layer: IP addressing and routing
  • Data link layer: MAC addressing, and encapsulation/decapsulation of frames
  • Physical layer: defines physical device standards, such as network cable and optical fiber connector types and the transmission rates of the various media; it transmits the raw bit stream
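
To make the layering concrete, here is a toy Python sketch (not from the original article) of how each layer wraps the data from the layer above with its own header before handing it down; the header formats and addresses are made up for illustration.

```python
# A toy sketch of layer-by-layer encapsulation; real stacks use binary
# headers, checksums, MTU handling, and so on.

def transport_wrap(app_data: bytes, src_port: int, dst_port: int) -> bytes:
    # Transport layer: add port numbers so the receiving host can demultiplex.
    return f"TCP {src_port}->{dst_port}|".encode() + app_data

def network_wrap(segment: bytes, src_ip: str, dst_ip: str) -> bytes:
    # Network layer: add IP addresses for routing between hosts.
    return f"IP {src_ip}->{dst_ip}|".encode() + segment

def link_wrap(packet: bytes, src_mac: str, dst_mac: str) -> bytes:
    # Data link layer: add MAC addresses for delivery on the local segment.
    return f"ETH {src_mac}->{dst_mac}|".encode() + packet

frame = link_wrap(
    network_wrap(
        transport_wrap(b"GET / HTTP/1.1", 52000, 80),
        "192.168.0.10", "10.0.0.5"),
    "aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02")
print(frame)  # the physical layer would then put this on the wire as a bit stream
```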


TCP/IP five-layer model

The TCP/IP model collapses OSI's seven layers into five: the application layer (covering OSI's application, presentation and session layers), the transport layer, the network layer, the data link layer and the physical layer.

Load balancing

Load balancing usually means distributing requests or data evenly across multiple processing units for execution; it is an application of the divide-and-conquer idea.

By scheduling work across a cluster, its purpose is to optimize resource usage, maximize throughput, minimize response time and avoid overloading any single point.

Load balancing algorithms

  • Static
    • RR: round robin (a minimal sketch of RR, WRR and LC follows this list)
    • WRR: weighted round robin
    • DH: destination hashing
    • SH: source hashing
  • Dynamic
    • LC: least connections
    • WLC: weighted least connections
    • SED: shortest expected delay
    • NQ: never queue
    • LBLC: locality-based least connections
    • Response-time balancing
    • Processing-capacity balancing
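
As a rough illustration of a few of these schedulers, here is a minimal Python sketch of RR, a naive WRR and LC; the server addresses and weights are invented, and real schedulers (such as those built into LVS) are more sophisticated.

```python
import itertools

# A minimal sketch of three schedulers from the list above; server
# addresses and weights are made up for illustration.

servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

# RR: hand requests to the servers in a fixed cycle.
rr = itertools.cycle(servers)

# WRR: naive version that repeats each server in proportion to its weight
# (real schedulers spread the weighted picks more smoothly).
weights = {"10.0.0.1": 3, "10.0.0.2": 2, "10.0.0.3": 1}
wrr = itertools.cycle([s for s, w in weights.items() for _ in range(w)])

# LC: pick the server with the fewest active connections right now.
active_connections = {s: 0 for s in servers}
def least_connections() -> str:
    return min(active_connections, key=active_connections.get)

for _ in range(6):
    print("RR ->", next(rr), " WRR ->", next(wrr), " LC ->", least_connections())
```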

Four-layer load balancing

Load balancing at the transport layer. The representative protocols are TCP and UDP; besides the IP address, the port number is also taken into account, so requests are forwarded based on IP + port.
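
As a rough user-space sketch of forwarding by IP + port, the following hypothetical Python relay accepts TCP connections and splices each one to a backend chosen round-robin; the backend addresses are made up, and real four-layer balancers such as LVS do this at the packet level in the kernel rather than relaying bytes in user space.

```python
import itertools
import socket
import threading

# Hypothetical backend addresses for illustration only.
BACKENDS = itertools.cycle([("10.0.0.1", 8080), ("10.0.0.2", 8080)])

def pipe(src: socket.socket, dst: socket.socket) -> None:
    # Relay bytes in one direction until the connection closes.
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    except OSError:
        pass
    finally:
        dst.close()

def serve(listen_port: int = 8000) -> None:
    lsock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    lsock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    lsock.bind(("0.0.0.0", listen_port))
    lsock.listen()
    while True:
        client, _ = lsock.accept()
        # Pick a backend by IP + port and splice the two connections together.
        backend = socket.create_connection(next(BACKENDS))
        threading.Thread(target=pipe, args=(client, backend), daemon=True).start()
        threading.Thread(target=pipe, args=(backend, client), daemon=True).start()

if __name__ == "__main__":
    serve()
```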


Four-layer load balancing server: LVS

  1. NAT: network address translation
  2. DR: direct routing
  3. TUN: IP tunneling

NAT address translation

Principle: IP address rewriting. The director rewrites the destination address of incoming packets to a real server and rewrites the source address of the replies back to the VIP, so traffic passes through the director in both directions.


DR direct routing

Principle: within the same LAN, the director rewrites only the destination MAC address to that of a real server; the real server replies to the client directly, so responses do not pass back through the director.


TUN tunnel technology

Principle: IP-in-IP encapsulation. The director wraps the original packet in a new IP header addressed to a real server, so real servers can be on different network segments; they reply to the client directly.
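
To contrast the three modes, here is a schematic Python sketch, purely an illustration under made-up addresses and not how LVS is implemented, of what each mode changes in a packet addressed to the VIP.

```python
from dataclasses import dataclass, replace

# A schematic comparison of the three forwarding modes. All addresses are
# made up; real LVS does this inside the kernel (ip_vs), not in Python.

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    dst_mac: str

CLIENT = "203.0.113.7"
VIP = "192.168.1.100"            # virtual IP the clients connect to
DIRECTOR_IP = "192.168.1.1"
REAL_SERVER_IP = "10.0.0.2"
REAL_SERVER_MAC = "aa:bb:cc:dd:ee:02"

def nat_forward(p: Packet) -> Packet:
    # NAT: rewrite the destination IP to a real server; replies must come
    # back through the director so their source can be rewritten to the VIP.
    return replace(p, dst_ip=REAL_SERVER_IP)

def dr_forward(p: Packet) -> Packet:
    # DR: leave the IP header untouched and only swap the destination MAC;
    # this works only when the director and real servers share a LAN.
    return replace(p, dst_mac=REAL_SERVER_MAC)

def tun_forward(p: Packet) -> tuple[Packet, Packet]:
    # TUN: wrap the original packet (inner) in a new IP header (outer)
    # addressed to the real server, which may be on another network segment.
    outer = Packet(src_ip=DIRECTOR_IP, dst_ip=REAL_SERVER_IP, dst_mac=REAL_SERVER_MAC)
    return outer, p

incoming = Packet(src_ip=CLIENT, dst_ip=VIP, dst_mac="aa:bb:cc:dd:ee:01")
print(nat_forward(incoming))
print(dr_forward(incoming))
print(tun_forward(incoming))
```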


keepalived

Keepalived provides high availability: it monitors the state of each node and handles single points of failure.

If the LVS director itself fails as a single point, the remedy is to run multiple nodes as a cluster (distributed):

  • Active/standby: only the primary serves traffic; the standby takes over when the primary fails.
  • Active/active: both nodes serve traffic at the same time.
  • Master/slave: the two cooperate with each other to complete the work.

keepalived

  1. Monitors the services on its own node.
  2. The Master advertises that it is still alive; the Backups watch the Master's state, and when the Master goes down the Backups elect a new Master (a toy sketch of this failover loop follows the list).
  3. Configuration: the VIP and the ipvs rules are defined in keepalived's configuration file.
  4. Monitors and health-checks the back-end servers.
  5. Keepalived is a general-purpose tool whose main job is HA:
    Nginx can serve as a company's load balancer; Nginx itself then becomes a single point of failure, which can also be solved with keepalived. Most situations involving a single point of failure can basically be handled with keepalived.
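
The following is a conceptual Python sketch of the heartbeat-and-election idea from item 2 above; it is not keepalived's actual implementation (keepalived speaks VRRP), and the node names, priorities and timeout are made up.

```python
import time

# A conceptual sketch of heartbeat monitoring and failover. Keepalived itself
# implements VRRP in C; names, priorities and the timeout here are invented.

HEARTBEAT_TIMEOUT = 3.0  # seconds of master silence before a backup takes over

class Node:
    def __init__(self, name: str, priority: int):
        self.name = name
        self.priority = priority
        self.is_master = False
        self.last_advertisement = time.monotonic()

    def receive_advertisement(self) -> None:
        # Called whenever the current master announces that it is still alive.
        self.last_advertisement = time.monotonic()

    def check_master(self, peers: list["Node"], timeout: float = HEARTBEAT_TIMEOUT) -> None:
        # If the master has been silent too long, the highest-priority node
        # promotes itself; claiming the VIP would happen at that point.
        silent_for = time.monotonic() - self.last_advertisement
        if not self.is_master and silent_for > timeout:
            if self.priority == max(n.priority for n in peers + [self]):
                self.is_master = True
                print(f"{self.name} becomes MASTER and claims the VIP")

backup1 = Node("lb-backup-1", priority=90)
backup2 = Node("lb-backup-2", priority=80)
time.sleep(0.1)                          # pretend the master has gone silent
backup1.check_master([backup2], timeout=0.05)
backup2.check_master([backup1], timeout=0.05)
```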

Seven-layer load balancing

Load balancing at the application layer. The representative protocols are HTTP, DNS and so on; traffic can be balanced according to the requested URL, which is more flexible. Nginx, which load-balances as a reverse proxy, is one typical example.

Ordinary four-layer load balancing software only forwards request packets: from the point of view of the back-end node, the received request still comes from the real client that accessed the load balancer. Reverse-proxy-based load balancing is different: after the reverse proxy receives the user's request, it re-initiates the request to a back-end node on the user's behalf and finally returns the response to the client. From the back-end node's point of view, the client visiting it is the reverse proxy server, not the real website visitor.
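
As a minimal sketch of that reverse-proxy behaviour using only Python's standard library (with made-up backend addresses; a production setup would use Nginx instead), each incoming request is re-issued to a backend chosen round-robin and the response is relayed back to the client.

```python
import itertools
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Made-up backend addresses; in practice this role is played by Nginx,
# not a hand-written Python server.
BACKENDS = itertools.cycle(["http://10.0.0.1:8080", "http://10.0.0.2:8080"])

class ReverseProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Re-initiate the request to a back-end node on the client's behalf.
        backend = next(BACKENDS)
        with urllib.request.urlopen(backend + self.path) as upstream:
            body = upstream.read()
        # Return the upstream response to the original client; the back end
        # only ever sees the proxy as its client, never the real visitor.
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), ReverseProxy).serve_forever()
```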

Author: ice_image

Source: https://www.cnblogs.com/ice-image/p/14524056.html
