Build a simple load balancing cluster by hand

Foreword:

Recently I deployed a simple load balancing cluster at my company. Its main job is to distribute traffic across the company's business servers, preventing any single server from being overloaded by excessive traffic, which can cause problems such as service downtime or slow response times. I am keeping a brief record here for later review, and for interested readers to learn from. If you spot any problems, please correct me.
After reading this series of articles, you will be able to build a simple load balancing cluster yourself, understand the important role load balancing plays under high traffic, and see how it helps keep business servers running stably.

Learn about load balancing:

Load balancing is built on top of the existing network structure. It provides a cheap, effective, and transparent way to expand the bandwidth of network devices and servers, increase throughput, strengthen network data processing capability, and improve network flexibility and availability. I won't go into a lengthy introduction here; instead I'll explain using the cluster I built myself:
[Figure: architecture diagram of the cluster, showing users, the load balancing server, and the business servers]

Description: the figure breaks down into three parts: users, the load balancing server, and the business servers:

  • User: sends requests (think of this as a large number of users sending a large number of requests).

  • Load balancing server: needs a public IP before it can be exposed to the outside world; it binds the public port and maintains its own communication channel with each business server.

  • Business server: maps its business port to the load balancing server and runs the business processes.

  • The overall flow is roughly as follows: a user sends a request, which reaches the load balancing server; HAProxy's load balancing algorithm forwards the request to the business server under the least pressure; that server handles the request, and the response travels back along the same path to the user. Imagine a large number of requests arriving at the load balancing server, each one routed to the least loaded business server: the business servers are protected, and the pressure on each of them is greatly reduced.
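To make the flow above concrete, here is a minimal sketch of what the relevant part of an HAProxy configuration could look like. All names, IP addresses, and ports are placeholders I made up for illustration; `leastconn` is the HAProxy balance mode that matches the "least pressure" behavior described above.

```
# Hypothetical haproxy.cfg fragment; server names, IPs and ports are placeholders
frontend www
    bind *:80                     # public port opened to the outside world
    default_backend business_servers

backend business_servers
    balance leastconn             # forward each request to the backend with the fewest connections
    server app1 192.168.1.11:8080 check   # "check" enables health checks
    server app2 192.168.1.12:8080 check
```

With this fragment, requests arriving on port 80 are spread across the two business servers, and a server that fails its health check is taken out of rotation automatically.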

  • Here is an example programmers will relate to: your team has 8 programmers (business servers) and a team leader (the load balancer). When the product manager (users) raises several requirements, someone has to implement them. Suppose the product manager knows programmer No. 1 best and hands every requirement to him: the other programmers sit idle, while No. 1 works overtime every day and still needs a month to finish. The result is bad: delivery is slow, and No. 1, worn down by sustained high-intensity work, eventually quits (downtime). So the process is changed: the product manager hands the requirements to the team leader, who reads the daily work reports of all 8 programmers and knows their workloads, and therefore distributes the requirements fairly evenly. Now the 8 programmers share roughly the same workload and finish everything in 10 days without overtime. Delivery speeds up and the programmers stay relaxed; that is the outcome the company most wants to see!

Load balancing algorithms:

Round robin (polling):

  • Requests are distributed to each server in turn; suitable when the servers have identical hardware.
    Advantages: each server receives the same number of requests;
    Disadvantages: actual load can differ between servers, so it is not suitable when server configurations differ;

Random:

  • Requests are distributed to the servers at random.
    Advantages: simple to implement;
    Disadvantages: not suitable when machine configurations differ;

Least connections:

  • Each request is sent to the server with the fewest active connections (the server currently handling the fewest requests).
    Advantages: allocation adapts dynamically to each server's current load;
    Disadvantages: relatively complex to implement, since the number of connections on each server must be monitored;
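The idea can be sketched as follows, using a plain dictionary of (hypothetical) per-server connection counts in place of real monitoring:

```python
# Track active connection counts per server (hypothetical names and numbers)
connections = {"app1": 0, "app2": 0, "app3": 0}

def pick_least_connections(connections):
    # Choose the server currently handling the fewest requests
    server = min(connections, key=connections.get)
    connections[server] += 1   # the chosen server gains a connection
    return server

# Suppose app1 and app3 are already busy while app2 has drained its queue
connections["app1"] = 5
connections["app3"] = 3
print(pick_least_connections(connections))  # app2
```

A real balancer must also decrement the count when a connection closes; that bookkeeping is the "monitoring" cost mentioned above.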

Hash (source address hash):

  • A hash of the client's source IP address determines which server receives the request.
    Advantages: requests from the same IP address go to the same server within a session, giving session stickiness;
    Disadvantages: if the target server goes down, its sessions are lost;
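A minimal sketch of source address hashing, with a made-up client IP and hypothetical server names:

```python
import hashlib

# Hypothetical backend server names
servers = ["app1", "app2", "app3"]

def pick_by_ip(client_ip, servers):
    # Hash the client IP so the same client always lands on the same server
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

# The same IP always maps to the same backend (session stickiness)
a = pick_by_ip("203.0.113.7", servers)
b = pick_by_ip("203.0.113.7", servers)
print(a == b)  # True
```

Note the downside: if the chosen server is removed, the modulo changes and clients are reshuffled, which is exactly the session-loss problem described above.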

Weighted:

  • On top of round robin, random, least connections, hash, and the other algorithms, each server is assigned a weight that influences how requests are distributed.
    Advantages: the share of requests forwarded to each server can be tuned through its weight;
    Disadvantages: somewhat more complicated to use;
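As one example, weighting can be layered onto the random algorithm; the weights below are hypothetical and assume app1 has twice the capacity of the others:

```python
import random

# Hypothetical servers and weights: app1 has twice the capacity of the others
servers = ["app1", "app2", "app3"]
weights = [2, 1, 1]

def pick_weighted(servers, weights):
    # Servers with higher weight receive proportionally more requests
    return random.choices(servers, weights=weights, k=1)[0]

counts = {s: 0 for s in servers}
for _ in range(10000):
    counts[pick_weighted(servers, weights)] += 1
print(counts)  # app1 receives roughly half of the traffic
```

The same weighting idea applies to round robin (a weight-2 server appears twice per rotation) and least connections (connection counts are divided by weight).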

These are the general algorithms; of course, they can be combined and used more flexibly. I found 8 common variants online.

Protocols supported by load balancing:

  • HTTP
  • HTTPS
  • TCP

Summary:

The above is my simple understanding of load balancing. I am not deeply familiar with it yet, so if anything is unclear or you disagree, please correct me. I have tried to keep the explanation vivid; if it still is not clear enough, there are other good introductory articles on the topic.
If you found this helpful, please like, favorite, and share; in the next article I'll walk through the detailed build process:


Origin blog.csdn.net/xiaoxin_OK/article/details/120733532