Several common solutions for load balancing

This article summarizes the common load balancing solutions and the scenarios each one fits.

Round Robin scheduling

Requests are dispatched to the different servers in turn, round-robin fashion; 
in practice each server is usually given a weight, which has two advantages:

  1. Servers with different performance can be assigned different shares of the load;
  2. To take a node out of rotation, simply set its weight to 0.

Advantages: simple and efficient to implement; easy to scale horizontally. 
Disadvantages: a given request may land on any node, so it is unsuitable for scenarios that involve writes (cache writes, database writes). 
Application scenarios: read-only workloads at the database or application service layer.
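
A minimal sketch of weighted round robin in Python, assuming a static server list; the addresses and the `weighted_round_robin` helper are illustrative, not part of any particular library:

```python
import itertools

# Hypothetical server list: (address, weight); a weight of 0 takes a node out of rotation.
SERVERS = [("10.0.0.1:8080", 3), ("10.0.0.2:8080", 1), ("10.0.0.3:8080", 0)]

def weighted_round_robin(servers):
    """Yield addresses in turn, repeating each server `weight` times per cycle."""
    expanded = [addr for addr, weight in servers for _ in range(weight)]
    return itertools.cycle(expanded)

scheduler = weighted_round_robin(SERVERS)
for _ in range(8):
    print(next(scheduler))  # 10.0.0.1 gets 3 of every 4 requests; 10.0.0.3 gets none
```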

Random selection

Requests are distributed to the nodes at random; with a large enough number of requests the distribution evens out. 
Advantages: simple to implement and easy to scale horizontally. 
Disadvantages: same as Round Robin, it cannot be used in scenarios that involve writes. 
Application scenarios: database load balancing, again read-only workloads.
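
For comparison, a random dispatcher is only a few lines; the server list here is again hypothetical:

```python
import random

SERVERS = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]

def pick_node():
    """Over many requests, each node receives roughly an equal share."""
    return random.choice(SERVERS)

print(pick_node())
```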

Hash method

The target node is computed from the key, which guarantees that the same key always lands on the same server. 
Advantages: because the same key always lands on the same node, it works for cache scenarios with both reads and writes. 
Disadvantages: after a node fails, keys are redistributed across the remaining nodes, causing a significant drop in the hit rate. 
Mitigation: use consistent hashing, or use keepalived to keep every node highly available so that a standby takes over when a node fails. 
Application scenarios: caches, both reads and writes.
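
A sketch of the plain hash approach, using md5 purely as an example hash function; the key and node names are made up:

```python
import hashlib

NODES = ["cache-0:6379", "cache-1:6379", "cache-2:6379"]

def node_for_key(key):
    """The same key always hashes to the same node, so reads find what writes stored."""
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return NODES[digest % len(NODES)]

print(node_for_key("user:42"))
# If a node is removed, len(NODES) changes and most keys map to a different node,
# which is the hit-rate drop described above.
```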

Consistent hashing

When one server node fails, only the keys on that node are affected, preserving the hit rate as much as possible; 
see, for example, the ketama scheme in twemproxy. 
In production you can also hash on a designated sub-key, so that keys sharing a local characteristic are placed on the same server. 
Advantages: the hit rate drops only slightly after a node failure. 
Application scenarios: caches.
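
A simplified consistent-hash ring with virtual nodes, in the spirit of (but not identical to) twemproxy's ketama scheme; the class and node names are illustrative:

```python
import bisect
import hashlib

class HashRing:
    """A minimal consistent-hash ring with virtual nodes."""

    def __init__(self, nodes, replicas=100):
        self.replicas = replicas
        self.ring = {}          # ring position -> physical node
        self.sorted_keys = []   # sorted ring positions
        for node in nodes:
            self.add_node(node)

    def _hash(self, value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def add_node(self, node):
        for i in range(self.replicas):
            pos = self._hash(f"{node}#{i}")
            self.ring[pos] = node
            bisect.insort(self.sorted_keys, pos)

    def remove_node(self, node):
        for i in range(self.replicas):
            pos = self._hash(f"{node}#{i}")
            del self.ring[pos]
            self.sorted_keys.remove(pos)

    def get_node(self, key):
        pos = self._hash(key)
        idx = bisect.bisect(self.sorted_keys, pos) % len(self.sorted_keys)
        return self.ring[self.sorted_keys[idx]]

ring = HashRing(["cache-0", "cache-1", "cache-2"])
print(ring.get_node("user:42"))
ring.remove_node("cache-1")   # only keys that mapped to cache-1 are reassigned
print(ring.get_node("user:42"))
```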

Load by key range

Load is distributed by key range: the first 100 million keys go to the first server, keys from 100 to 200 million go to the second node, and so on. 
Advantages: horizontal scaling is easy; when storage runs out, add a server to hold the newly added data. 
Disadvantages: uneven load and uneven data distribution across the databases (data has hot and cold parts; recently registered users are usually more active, so the later servers become very busy while the earlier nodes sit mostly idle). 
Application scenarios: database sharding and load balancing.
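
A sketch of range-based sharding, assuming keys are numeric ids; the range table and shard names are hypothetical:

```python
# Each shard owns a contiguous range of ids: [low, high).
RANGES = [
    (0, 100_000_000, "db-0"),            # first 100 million keys
    (100_000_000, 200_000_000, "db-1"),  # 100-200 million
    (200_000_000, 300_000_000, "db-2"),
]

def shard_for_id(user_id):
    for low, high, shard in RANGES:
        if low <= user_id < high:
            return shard
    raise KeyError(f"no shard covers id {user_id}; add a server and append a new range")

print(shard_for_id(150_000_000))  # db-1
# Scaling out only means appending a new range entry, but the newest (hottest) ids
# all land on the newest shard, hence the uneven load described above.
```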

Load by key modulo the number of server nodes

Load is distributed by taking the key modulo the number of server nodes; for example, with 4 servers, keys whose remainder is 0 land on the first node and those with remainder 1 on the second. 
Advantages: hot and cold data are spread evenly, so the database nodes carry a balanced load. 
Disadvantages: horizontal scaling is hard, because adding a node changes the modulus and reshuffles most keys. 
Application scenarios: database sharding and load balancing.
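
A quick illustration of why scaling out is hard with modulo sharding: growing from 4 to 5 nodes remaps roughly 80% of the keys (the numbers below come from a simulation over synthetic integer keys):

```python
def shard_mod(key, node_count):
    """Keys with the same remainder land on the same node."""
    return key % node_count

# How many of the first 1,000,000 integer keys move when we grow from 4 to 5 nodes?
moved = sum(1 for k in range(1_000_000) if shard_mod(k, 4) != shard_mod(k, 5))
print(f"{moved / 1_000_000:.0%} of keys change shards")  # ~80%
```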

Purely dynamic node load balancing

The next request is scheduled based on each node's CPU, I/O, and network capacity. 
Advantages: makes full use of server resources and keeps processing load balanced across nodes. 
Disadvantages: complex to implement and rarely used in practice.
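
A minimal sketch of metric-driven scheduling; the metrics, weights, and node names are purely illustrative assumptions, since a real system would collect these from monitoring agents:

```python
# Hypothetical live metrics, e.g. reported by an agent on each node (all in [0, 1]).
NODE_LOAD = {
    "app-0": {"cpu": 0.72, "io_wait": 0.10, "net_util": 0.30},
    "app-1": {"cpu": 0.35, "io_wait": 0.05, "net_util": 0.12},
    "app-2": {"cpu": 0.90, "io_wait": 0.30, "net_util": 0.80},
}

def least_loaded(nodes):
    """Pick the node with the lowest weighted load score (weights chosen arbitrarily)."""
    def score(m):
        return 0.5 * m["cpu"] + 0.3 * m["io_wait"] + 0.2 * m["net_util"]
    return min(nodes, key=lambda name: score(nodes[name]))

print(least_loaded(NODE_LOAD))  # app-1
```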

No active load balancing

Switch to an asynchronous model with a message queue, and the load balancing problem goes away. 
Load balancing is a push model: data keeps being pushed at the nodes. Instead, send all user requests to a message queue and let whichever downstream node is idle pull the next item to process; once converted to a pull model, the problem of balancing load across downstream nodes disappears. 
Advantages: the message queue acts as a buffer that protects the backend, so a surge of requests will not overwhelm the backend servers; 
horizontal scaling is easy: a newly added node simply starts consuming from the queue. 
Disadvantages: no real-time response. 
Application scenarios: cases that do not need an immediate result; 
for example, after you place an order on 12306 it immediately replies "your order has been queued..." and notifies you asynchronously once processing is done.
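
A toy pull model using Python's in-process `queue` as a stand-in for a real message broker; the order names and worker count are made up:

```python
import queue
import threading
import time

orders = queue.Queue(maxsize=10_000)  # the buffer that absorbs request spikes

def frontend():
    """Accept requests and return immediately; real processing happens later."""
    for i in range(5):
        orders.put(f"order-{i}")
        print(f"accepted order-{i} (queued)")

def worker(name):
    """Idle workers pull the next job when they are free (pull model)."""
    while True:
        order = orders.get()
        if order is None:            # shutdown signal
            break
        time.sleep(0.1)              # simulate processing, then notify asynchronously
        print(f"{name} processed {order}")
        orders.task_done()

workers = [threading.Thread(target=worker, args=(f"worker-{i}",)) for i in range(2)]
for w in workers:
    w.start()
frontend()
orders.join()                        # wait until all queued orders are processed
for _ in workers:
    orders.put(None)
for w in workers:
    w.join()
```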

Related open-source tools

  • HAProxy: can load balance multiple redis nodes as well as databases such as mysql; supports both layer-4 and layer-7 load balancing (usually paired with Keepalived for high availability).

  • Twemproxy: used to shard redis nodes. Redis storage is limited by the memory of a single node, and when the data grows large enough to need sharding, twemproxy makes the sharding transparent to the business layer. 
    Twemproxy itself uses a single-threaded reactor model, so no matter how many redis nodes sit behind one twemproxy instance, the TPS it can sustain will not exceed the processing capacity of a single redis node; in practice you need to run multiple twemproxy instances as query endpoints.

  • nginx: the current star open-source product; supports only layer-7 load balancing; besides reverse-proxy load balancing, it is even better known as a web server.

  • LVS: uses Linux kernel clustering to build a high-performance, highly available load balancing server, based on IP load balancing and content-based request dispatching. I have not used it myself; interested readers can check out this article: http://www.ha97.com/5646.html.
