Server load balancing and client load balancing

Load balancing concept

Load balancing refers to scaling a system's throughput and performance horizontally with a server cluster once a single server has hit its performance limit. The first implementation that comes to mind is Nginx, and no one denies that Nginx's load balancing is rock solid. But in an interview, the interviewer often wants to see the whole factory through a single screw: if your entire answer is "Nginx", it may not be enough to get you through.
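For reference, this is roughly what the Nginx case looks like; a minimal configuration sketch with hypothetical backend addresses, not taken from the original article.

<pre>
# Hypothetical cluster of three application servers.
upstream app_cluster {
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
    server 10.0.0.13:8080;
}

server {
    listen 80;
    location / {
        # Nginx picks a backend for each request
        # (round robin by default) and forwards it.
        proxy_pass http://app_cluster;
    }
}
</pre>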


Server load balancing

Server load balancing is what we usually mean by "load balancing": requests are distributed to the back-end servers by something sitting upstream of them. There are roughly the following commonly used approaches:

<pre>
1. DNS load balancing. Several IP addresses are registered for one domain name; when the domain is requested, the DNS server resolves it to one of those addresses, and this 1:N mapping spreads the load. A DNS server only offers simple load-balancing algorithms, and when one of the servers fails it is hard to notify the DNS server in time to remove the failed IP. (A minimal client-side sketch of this 1:N resolution follows after this block.)

2. Reverse proxy load balancing. The reverse proxy accepts the request, applies a load-balancing algorithm, and forwards the request to a back-end server; the back-end returns the response to the proxy, and the proxy returns it to the client. The advantage of a reverse proxy is that it isolates the back-end servers from the client: with dual NICs the real server network can be shielded completely, so security is better. Compared with DNS-based load balancing it handles failures more flexibly and supports a richer range of algorithms, which is why it is so widely used today. Of course, a reverse proxy also raises issues of its own, such as single points of failure and cluster deployment.

3. IP load balancing. As we all know, a reverse proxy works at the HTTP layer, so its own overhead is relatively high and has some impact on performance. LVS-NAT is a transport-layer form of load balancing: it rewrites the destination address of the packets it receives. IPVS has been built into Linux since kernel 2.6.x specifically for IP-level load balancing, which is why IP load balancing is so widely used on Linux. LVS-DR works at the data link layer and is more aggressive than LVS-NAT: it directly rewrites the MAC address of the packet. LVS-TUN forwards requests through an IP tunneling mechanism: the scheduler encapsulates the IP packets it receives and forwards them to a server, and the server returns its response to the client directly rather than back through the scheduler; this mode also supports scheduling across networks. To summarize, LVS-DR and LVS-TUN both suit web servers whose requests and responses are asymmetric, since the (usually much larger) response does not pass back through the scheduler. How to choose between them depends on your network deployment: because LVS-TUN can span regions, pick LVS-TUN if you have that kind of need.
</pre>
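To make the DNS approach concrete, here is a minimal Java sketch (my own illustration, not from the original article; the domain name is just an example) showing the 1:N mapping from the client's point of view: resolving one hostname can return several A records, and picking one of them at random is load balancing in its crudest form.

<pre>
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.concurrent.ThreadLocalRandom;

public class DnsLoadBalancingDemo {
    public static void main(String[] args) throws UnknownHostException {
        // One domain name may resolve to several IP addresses (the 1:N mapping).
        InetAddress[] candidates = InetAddress.getAllByName("example.com");
        for (InetAddress candidate : candidates) {
            System.out.println("candidate: " + candidate.getHostAddress());
        }
        // Picking one of the returned addresses spreads requests across
        // the servers behind the domain.
        InetAddress chosen = candidates[ThreadLocalRandom.current().nextInt(candidates.length)];
        System.out.println("using: " + chosen.getHostAddress());
    }
}
</pre>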


Client load balancing

Compared with server load balancing, client load balancing is a rather niche concept, but interviewers who ask about load balancing sometimes bring it up deliberately to probe the breadth of a candidate's knowledge. Client load balancing is what Ribbon, a component of the Spring Cloud distributed framework, implements. When we use Spring Cloud, we very likely start several instances of the same service; when a request comes in, the strategy by which Ribbon decides which instance serves it is exactly client load balancing. In Spring Cloud this is transparent to the developer: just add the @LoadBalanced annotation. The core difference between client and server load balancing lies in who maintains the list of service instances: with client load balancing the client itself maintains the list, while with server load balancing the list is maintained by a dedicated intermediate service.
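Here is a minimal sketch of what that looks like in a Spring Cloud application; the configuration class name and the order-service name in the usage note below are hypothetical examples, not from the original article.

<pre>
import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class LoadBalancerConfig {

    // @LoadBalanced tells Spring Cloud to intercept this RestTemplate's
    // requests and resolve logical service names against the client-side
    // service list (via Ribbon in classic Spring Cloud).
    @Bean
    @LoadBalanced
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}
</pre>

A caller then uses the logical service name instead of a concrete host, e.g. restTemplate.getForObject("http://order-service/orders/1", String.class), and the client picks one of the running order-service instances for each request.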
With the above knowledge we can understand load balancing rather more comprehensively. Next, I would simply go over the common load-balancing algorithms with the interviewer:

Random: a server is picked at random to handle the request; this approach is generally used less often.
Round robin: the default load-balancing implementation; requests are handed to the servers in turn, in queue order (see the sketch after this list).
Weighted round robin: servers are weighted by capability, so high-spec, lightly loaded servers get a higher weight and the pressure on the servers evens out.
Address hash: the client's address is hashed and the hash value is mapped to a server modulo the number of servers, so the same client keeps landing on the same server.
Least connections: evenly distributed requests do not necessarily mean evenly distributed pressure; this method looks at each server's current connection count, request backlog and similar figures, and assigns the request to the server under the least pressure.
And a number of other approaches.
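As promised, here is a minimal, self-contained Java sketch of two of these strategies, round robin and address hash; the server addresses are hypothetical placeholders of my own, not from the original article.

<pre>
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class SelectionStrategies {
    // Hypothetical server pool, for illustration only.
    private static final List<String> SERVERS =
            List.of("10.0.0.11", "10.0.0.12", "10.0.0.13");

    private static final AtomicInteger NEXT = new AtomicInteger(0);

    // Round robin: hand out servers in turn, wrapping around at the end.
    static String roundRobin() {
        int index = Math.floorMod(NEXT.getAndIncrement(), SERVERS.size());
        return SERVERS.get(index);
    }

    // Address hash: the same client address always maps to the same server.
    static String addressHash(String clientAddress) {
        int index = Math.floorMod(clientAddress.hashCode(), SERVERS.size());
        return SERVERS.get(index);
    }

    public static void main(String[] args) {
        System.out.println(roundRobin());                // 10.0.0.11
        System.out.println(roundRobin());                // 10.0.0.12
        System.out.println(addressHash("192.168.1.7"));  // stable per client
    }
}
</pre>

Math.floorMod is used instead of % so that a negative hash code or counter overflow still yields a valid index.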

Reproduced from: https://blog.51cto.com/14046599/2409585

Origin: blog.csdn.net/weixin_34025151/article/details/92837635