Server-Side and Client-Side Load Balancing, Finally Made Clear

The Concept of Load Balancing

Load balancing refers to spreading requests across a server cluster so that, once a single server reaches its performance limit, the system's throughput and performance can be scaled out horizontally. The first thing most of us think of when load balancing comes up is Nginx, and there is no denying that Nginx is a rock-solid load balancing implementation. In an interview, however, the interviewer often wants to judge the whole factory from a single screw, so to speak; if Nginx is your only answer, the interviewer may feel it falls a bit short.

Server-Side Load Balancing

Server-side load balancing is what we usually mean when we say "load balancing": requests are distributed among the upstream servers. There are roughly the following commonly used approaches:

  • DNS name resolution load balancing: multiple IP addresses are configured for a single domain name, and when a request for that domain arrives, the DNS server translates the domain name into one of those IP addresses, achieving load balancing through the 1:N mapping. A DNS server offers only simple load balancing algorithms, and when one of the servers fails it is hard to notify the DNS server promptly so that the failed IP is removed.
  • Reverse proxy load balancing: a reverse proxy server accepts the client's request, forwards it to a back-end server according to a load balancing algorithm, and the back-end server returns its response to the proxy, which passes it back to the client. The advantage of a reverse proxy is that it isolates the back-end servers from clients; with dual NICs the real servers' network can be shielded from the outside, so security is better. Compared with DNS-based load balancing, a reverse proxy is more flexible when handling failures and supports a wider range of load balancing algorithms, and it is currently in very wide use (a minimal configuration sketch follows this list). Of course, a reverse proxy also raises issues of its own to consider, such as single points of failure and cluster deployment.
  • IP load balancing: as we all know, a reverse proxy works at the HTTP layer, so its own overhead is relatively large and has some impact on performance. LVS-NAT is a transport-layer technique that achieves load balancing by rewriting the destination address of the packets it accepts. Linux kernels since 2.6.x have IPVS built in, dedicated to load balancing at the IP layer, which is why IP load balancing is widely used on Linux. LVS-DR works at the data link layer and is even more forceful than LVS-NAT: it directly rewrites the MAC address of the packet. LVS-TUN forwards requests over an IP tunnel: the scheduler encapsulates the IP packet and passes it to a server, the server returns its data directly, and the scheduler only balances the incoming requests; this approach supports scheduling across networks. To summarize, LVS-DR and LVS-TUN both suit web servers whose requests and responses are asymmetric; which one to choose depends on your network deployment, and since LVS-TUN can work across regions, you should choose LVS-TUN if you have that kind of requirement.
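
As a concrete illustration of the reverse proxy approach above, here is a minimal sketch of an Nginx configuration for it; the upstream name, addresses, and weights are made up for the example:

    # Hypothetical pool of back-end servers; Nginx distributes requests
    # across them (round robin by default, biased here by weights).
    upstream backend_pool {
        server 192.168.0.11:8080 weight=3;
        server 192.168.0.12:8080 weight=1;
    }

    server {
        listen 80;

        location / {
            # Forward the client's request to one upstream server; the
            # response travels back through the proxy to the client.
            proxy_pass http://backend_pool;
        }
    }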

 


Client-Side Load Balancing

Compared with server-side load balancing, client-side load balancing is a far less familiar concept, but when load balancing comes up in an interview it is often asked about deliberately to probe the breadth of a candidate's knowledge. Client-side load balancing is embodied in Ribbon, a component of the Spring Cloud distributed framework. When we use Spring Cloud, a service is very likely to be started as multiple instances; when a request comes in, the strategy by which Ribbon decides which instance handles that request is exactly client-side load balancing. In Spring Cloud, client-side load balancing is transparent to the developer: adding the @LoadBalanced annotation is all it takes. The core difference between client-side and server-side load balancing lies in who maintains the service list: with client-side load balancing the client itself maintains the list of service instances, whereas with server-side load balancing the list is maintained separately by an intermediate service.
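
To make that concrete, here is a minimal sketch of enabling Ribbon-style client-side load balancing in a Spring Cloud application. It assumes a service registry client (such as Eureka) is on the classpath, and user-service below is a hypothetical service name registered there:

    import org.springframework.cloud.client.loadbalancer.LoadBalanced;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.web.client.RestTemplate;

    @Configuration
    public class RestClientConfig {

        // @LoadBalanced makes Spring Cloud intercept this RestTemplate's
        // calls, resolve the logical service name against the client-side
        // service list, and pick an instance according to the configured
        // strategy (round robin by default).
        @Bean
        @LoadBalanced
        public RestTemplate restTemplate() {
            return new RestTemplate();
        }
    }

A caller can then use the logical service name in place of a host, for example restTemplate.getForObject("http://user-service/users/1", String.class); the client picks one of the registered user-service instances itself, with no intermediate load balancer in the path.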

With the above knowledge we can form a fairly complete picture of load balancing. Next, let me briefly go through the common load balancing algorithms you might chat about with an interviewer:

  • Random: a server is picked at random to handle the request; this approach is rarely used;
  • Round robin: the default load balancing implementation; requests are handled in turn, as if taken from a queue;
  • Weighted round robin: based on server performance, high-spec, low-load servers are assigned a higher weight so that the pressure on each server evens out;
  • Address hash: a hash value computed from the client's address is mapped to a server by modulo arithmetic, so the same client is always scheduled to the same server;
  • Least connections: even if requests are balanced, the pressure is not necessarily balanced; this method looks at server state, such as the number of connections and the request backlog, and assigns the request to the server currently under the least pressure;
  • and a number of other approaches.
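
To make a couple of these concrete, here is a minimal Java sketch (assuming Java 17) of weighted round robin and address hash selection; the Server record, addresses, and weights are hypothetical:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.atomic.AtomicInteger;

    public class LoadBalancerSketch {

        // Hypothetical server entry: an address plus a performance weight.
        record Server(String address, int weight) {}

        private final List<Server> servers;        // original list, for hashing
        private final List<Server> weightedCycle = new ArrayList<>();
        private final AtomicInteger position = new AtomicInteger(0);

        public LoadBalancerSketch(List<Server> servers) {
            this.servers = servers;
            // Simple weighted round robin: repeat each server in the cycle
            // according to its weight, so higher-weight servers are picked
            // proportionally more often.
            for (Server s : servers) {
                for (int i = 0; i < s.weight(); i++) {
                    weightedCycle.add(s);
                }
            }
        }

        // Weighted round robin: walk the weighted cycle, wrapping around.
        public Server nextByWeightedRoundRobin() {
            int i = Math.floorMod(position.getAndIncrement(), weightedCycle.size());
            return weightedCycle.get(i);
        }

        // Address hash: the same client address always maps to the same server.
        public Server pickByAddressHash(String clientAddress) {
            int i = Math.floorMod(clientAddress.hashCode(), servers.size());
            return servers.get(i);
        }

        public static void main(String[] args) {
            LoadBalancerSketch lb = new LoadBalancerSketch(List.of(
                    new Server("10.0.0.1:8080", 3),   // high-spec server
                    new Server("10.0.0.2:8080", 1))); // low-spec server
            for (int i = 0; i < 4; i++) {
                System.out.println(lb.nextByWeightedRoundRobin().address());
            }
            System.out.println(lb.pickByAddressHash("192.168.1.42").address());
        }
    }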

Source: www.cnblogs.com/gucb/p/11237765.html