No warm-up? Don't call it high concurrency

As we all know, high-concurrency systems have three trusty axes: caching, circuit breaking, and rate limiting. But there is a fourth axe, often forgotten and sulking in a corner: warm-up.

Examples of the phenomenon

Let me describe two phenomena. They only show up in highly concurrent systems.

And they have caused quite a few incidents.

First: the DB dies the moment it is restarted

A DB in a high-concurrency environment is restarted after its process dies. Because the business is at its peak, the upstream load-balancing strategy redistributes traffic, and the freshly started DB instantly receives a third of the load. Its load average then skyrockets until it stops responding altogether.

The reason: a freshly started DB has none of its caches ready, so its state is nothing like that of a normally running instance. Perhaps a tenth of the usual traffic is enough to push it into the grave.

Second: the service restarts and requests start failing

Another common problem: one of my servers runs into trouble, and thanks to load balancing the remaining machines immediately absorb its requests and keep running just fine. But when the recovered service rejoins the cluster, it is hit with a large number of very slow requests and, under heavy traffic, whole batches of them even fail.

The causes can roughly be attributed to:

1. After the service starts, the JVM is not fully ready and JIT compilation has not kicked in yet.

2. The various resources the application depends on are not ready yet.

3. The load balancer rebalances traffic onto the new instance.


Both problems come down to the same missing step: warm-up.

Warm Up, i.e. cold start / preheating. When a system has been sitting at a low water level for a long time and traffic suddenly surges, yanking it straight up to a high water level can crush it in an instant. With a "cold start", the allowed traffic grows slowly and ramps up to the configured ceiling over a set period of time, giving the cold system a chance to warm up instead of being overwhelmed.

This is the curve we want.

(figure: traffic ramping up gradually over the warm-up window until it reaches the threshold)

Not this one.

(figure: traffic jumping straight to full volume the moment the node comes up)

Reality is much more complex

Traffic is unpredictable. This is different from naturally growing traffic, or from a man-made attack: it is a from-zero-to-everything process. Even components that boast of being ultra fast, such as LMAX's Disruptor, can collapse under such a sudden flood peak.

The most natural place to cut in with warm-up is the gateway. As the figure shows: node4 has just started; the load-balancing component built into the gateway recognizes this newly added instance and gradually ramps traffic onto the machine, until it can genuinely withstand full-speed traffic.

(figure: the gateway gradually ramping traffic onto the newly started node4)

If every request went through the gateway, everything would be much easier, and components such as Sentinel could hook in there. But reality often fails to meet that condition. For example:

1. Your application fetches instance information directly from the registry and distributes traffic inside a client-side component.

2. Your application goes through a pile of complex middleware and routing rules before finally landing on one particular DB.

3. Your terminal devices may connect straight to the MQTT server over the MQTT protocol.

Let's abstract a little. All of this traffic-distribution logic, gateways included, can be called a client. In other words, all warm-up logic lives on the client side, tightly coupled with load balancing.

Solutions

Ramping up traffic gradually

Following the analysis above, the problem can be solved by taking control, in code, of every client-side call.

A simple round-robin-style approach:

1. I need to obtain the full set of resources to be called, along with their start-up times, cold-start configuration, and so on.

2. Assign weights to these resources. Say the maximum weight is 100 and the cold start is configured to finish after 100 seconds. If we are currently at second 15, the total weight across n nodes is 100*(n-1)+15.

3. Distribute traffic according to the computed weights (see the sketch after this list); the new node's share grows as time passes, until it matches the other nodes.

4. One extreme case: my backend has only a single instance, so there is nowhere to shed traffic and the node simply cannot climb out of the cold start.
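To make steps 2 and 3 concrete, here is a minimal sketch of the weighted pick. The `Node` type and its fields are hypothetical; the 100-weight / 100-second figures match the example above, so a node restarted 15 seconds ago receives 15/(100*(n-1)+15) of the traffic.

```java
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

// Illustrative only: Node is a hypothetical holder of an instance's start time.
record Node(String id, long startMillis) {}

public class WarmupChooser {
    static final int MAX_WEIGHT = 100;          // weight of a fully warmed node
    static final long WARMUP_MILLIS = 100_000;  // 100-second cold start, as in the text

    /** Weight grows linearly with uptime: 1..100 during warm-up, then stays at 100. */
    static int weightOf(Node node, long now) {
        long uptime = now - node.startMillis();
        if (uptime >= WARMUP_MILLIS) return MAX_WEIGHT;
        return Math.max(1, (int) (MAX_WEIGHT * uptime / WARMUP_MILLIS));
    }

    /** Weighted random pick: a cold node owns only a small slice of the total weight. */
    static Node choose(List<Node> nodes) {
        if (nodes.isEmpty()) throw new IllegalArgumentException("no nodes available");
        long now = System.currentTimeMillis();
        int total = nodes.stream().mapToInt(n -> weightOf(n, now)).sum();
        int ticket = ThreadLocalRandom.current().nextInt(total);
        for (Node n : nodes) {
            ticket -= weightOf(n, now);
            if (ticket < 0) return n;
        }
        return nodes.get(nodes.size() - 1); // fallback, normally unreachable
    }
}
```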

Take Spring Cloud as an example: we need to change the behavior of these components.

1. Ribbon's load-balancing strategy (see the sketch below).

2. The gateway's load-balancing strategy.

Fortunately, both are foundational components, so we don't have to copy the same code back and forth everywhere.
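As a rough sketch of where the Ribbon change lands, here is a hedged example of a custom rule extending Ribbon's `AbstractLoadBalancerRule`. How the instance start time is obtained (`startMillisOf`) is an assumption left as a placeholder; in practice it would have to come from registry metadata (Eureka, Nacos, etc.).

```java
import com.netflix.client.config.IClientConfig;
import com.netflix.loadbalancer.AbstractLoadBalancerRule;
import com.netflix.loadbalancer.Server;

import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

/** Sketch of a warm-up-aware Ribbon rule; not a drop-in, production-ready implementation. */
public class WarmupRule extends AbstractLoadBalancerRule {

    private static final long WARMUP_MILLIS = 100_000; // 100-second cold start

    @Override
    public Server choose(Object key) {
        List<Server> servers = getLoadBalancer().getReachableServers();
        if (servers.isEmpty()) return null;

        long now = System.currentTimeMillis();
        int[] weights = servers.stream()
                .mapToInt(s -> weightOf(startMillisOf(s), now))
                .toArray();
        int total = 0;
        for (int w : weights) total += w;

        int ticket = ThreadLocalRandom.current().nextInt(total);
        for (int i = 0; i < servers.size(); i++) {
            ticket -= weights[i];
            if (ticket < 0) return servers.get(i);
        }
        return servers.get(servers.size() - 1);
    }

    /** Linear ramp from 1 to 100 over the warm-up window. */
    private int weightOf(long startMillis, long now) {
        long uptime = now - startMillis;
        if (uptime >= WARMUP_MILLIS) return 100;
        return Math.max(1, (int) (100 * uptime / WARMUP_MILLIS));
    }

    /** Assumption: look up the instance's start time, e.g. from registry metadata. */
    private long startMillisOf(Server server) {
        return 0L; // placeholder only -- treats every server as fully warmed
    }

    @Override
    public void initWithNiwsConfig(IClientConfig config) {
        // no extra configuration needed for this sketch
    }
}
```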

A quick tour

As the name suggests, this means visiting every interface once ahead of time, so the system gets its resources ready in advance. For example, walk through all the HTTP endpoints and fire a request at each one. The approach is only partially effective: some lazily loaded resources do get pulled in during this phase, but not all of them. Mechanisms such as JIT compilation can stretch the real warm-up out considerably, so the quick tour helps only to a limited degree.
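A minimal sketch of such a tour, assuming a hand-maintained endpoint list (the URLs below are made up for illustration):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;

// Sketch of the "quick tour" warm-up: hit every known endpoint once before
// the node starts taking real traffic, to trigger lazy initialization.
public class EndpointTour {
    public static void main(String[] args) throws Exception {
        List<String> endpoints = List.of(
                "http://localhost:8080/health",
                "http://localhost:8080/api/orders?size=1");

        HttpClient client = HttpClient.newHttpClient();
        for (String url : endpoints) {
            HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(url + " -> " + response.statusCode());
        }
    }
}
```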

Another example: some DBs, right after starting, run a handful of very characteristic SQL statements so that the page cache is loaded with the hot data that will be needed most.

State preservation

Take a snapshot of the system as it dies, then restore it exactly as it was at startup.

This process is a bit magical, because with an abnormal shutdown the system gets no chance to leave any last words, so the only option is to snapshot the running system at regular intervals.

When a node starts, the snapshot is loaded back into memory. This is widely used in memory-based components.
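A minimal sketch of the idea, using plain Java serialization and a local snapshot file (both assumptions; real memory-based components use far more careful snapshot formats):

```java
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch: periodically snapshot an in-memory map to disk, and reload it on
// startup, so a restarted node comes back with warm data instead of an empty cache.
public class SnapshotCache {
    private static final Path SNAPSHOT = Paths.get("cache.snapshot"); // assumed location
    private final ConcurrentHashMap<String, String> data = new ConcurrentHashMap<>();
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    @SuppressWarnings("unchecked")
    public void start() throws IOException, ClassNotFoundException {
        if (Files.exists(SNAPSHOT)) {
            try (ObjectInputStream in = new ObjectInputStream(Files.newInputStream(SNAPSHOT))) {
                data.putAll((Map<String, String>) in.readObject()); // restore the last snapshot
            }
        }
        // A crash leaves no chance to dump state, so snapshot every 30 seconds while running.
        scheduler.scheduleAtFixedRate(this::snapshot, 30, 30, TimeUnit.SECONDS);
    }

    private void snapshot() {
        try (ObjectOutputStream out = new ObjectOutputStream(Files.newOutputStream(SNAPSHOT))) {
            out.writeObject(new ConcurrentHashMap<>(data)); // copy to avoid mutation mid-write
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
```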

End

Comparing the options, the most reliable approach is to write the warm-up logic in code and integrate it into the client. The work can be painful and drawn out, but the result is worth it.

Of course, you can also go the "take the node out of nginx -> adjust the weights -> reload nginx" route. It is sometimes effective, but not always; it usually feels safe, but not always.

It's all up to you. After all, heading straight for the main event without any foreplay is just called being reckless.
