Learning Message Queues (2): Why do we need a message queue?

Message queues are among the oldest kinds of middleware: as soon as systems needed to communicate with one another, message queues naturally emerged. Yet giving a precise definition of a message queue is not easy. We know that the main function of a message queue is to send and receive messages, but its role goes well beyond simply solving communication between applications.

Let's use an example to illustrate the role of a message queue. Suppose Xiaoyuan runs a chocolate workshop. Producing delicious chocolate takes three steps: first, cocoa beans are ground into cocoa powder; next, the cocoa powder is heated and sugar is added to make chocolate syrup; finally, the syrup is poured into molds and sprinkled with chopped nuts, and the chocolate is finished once it cools.

At the very beginning, every time a worker ground a bucket of cocoa powder, they would carry the bucket over to the workers making chocolate syrup, then come back to grind the next bucket. Xiaoyuan soon realized that the workers did not actually need to transport the semi-finished products themselves, so he added a conveyor belt between each pair of steps. Now a grinding worker only has to place a finished bucket of cocoa powder on the belt before moving on to grind the next one. The conveyor belt solves the "communication" problem between upstream and downstream steps.

Adding the conveyor belts did improve production efficiency, but it also introduced a new problem: each step runs at a different rate. When a bucket of cocoa powder arrives, the downstream worker may still be busy processing an earlier bucket and have no time to receive it. Workers at different steps therefore had to coordinate when to place semi-finished products on the belt, and whenever upstream and downstream speeds did not match, workers had to wait for each other to make sure nothing placed on the belt went unreceived.

To solve this problem, Xiaoyuan equipped each conveyor belt with a small warehouse for temporarily storing semi-finished products. Upstream workers no longer have to wait for downstream workers to become free: they can place semi-finished products on the belt at any time, anything that cannot be received right away is held in the warehouse, and downstream workers pick it up whenever they are ready. The warehouse attached to each conveyor belt effectively acts as a "cache" in the "communication" process.

 

The conveyor belts solve the transportation of semi-finished products, and the warehouses that buffer semi-finished products solve the mismatch between upstream and downstream production speeds. Without realizing it, Xiaoyuan has implemented a chocolate-factory version of a message queue.

What kinds of problems are message queues suited to solve?

Next, let's look at which problems in day-to-day development are well suited to a message queue.

1. Asynchronous processing

Most programmers have been asked in interviews (or will ask, as interviewers) a classic question with no single standard answer: how would you design a flash-sale system? There are easily a hundred reasonable answers, but most of them involve a message queue.

The core problem a flash-sale system must solve is how to use limited server resources to process as many requests as possible in a short time. Handling a flash-sale request involves many steps, for example:

  • Risk control;
  • Inventory locking;
  • Order creation;
  • SMS notification;
  • Statistics update.

Without any optimization, the normal flow is: the app sends a request to the gateway, the gateway calls these five services in turn, and the result is returned to the app.

Of these five steps, only two actually determine whether the flash sale succeeds: risk control and inventory locking. As soon as a request has passed risk control and the inventory has been locked on the server, we can return the flash-sale result to the user; the remaining steps, such as order creation, SMS notification, and statistics updates, do not have to be completed while handling the flash-sale request itself.

So once the service has finished the first two steps and determined the outcome of the request, it can immediately return the response to the user, then put the request data into a message queue and leave the remaining steps to be performed asynchronously by consumers of the queue.

Processing a flash-sale request thus shrinks from five steps to two. Not only is the response faster, but during the sale we can devote most server resources to handling flash-sale requests, and once the sale ends, the freed-up resources can work through the deferred steps, making full use of the server's limited capacity.
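The two-step synchronous path plus asynchronous follow-up can be sketched as below. This is a minimal in-process illustration, using Python's standard `queue` module in place of a real message queue; all function and variable names (`handle_flash_sale`, `risk_control`, and so on) are illustrative assumptions, not part of any real flash-sale service.

```python
import queue
import threading

task_queue = queue.Queue()  # stands in for the message queue

def risk_control(request):
    """Decisive step 1 (always passes in this sketch)."""
    return True

def lock_inventory(request):
    """Decisive step 2 (always succeeds in this sketch)."""
    return True

def handle_flash_sale(request):
    """Synchronous part: only the two steps that decide the outcome."""
    if not (risk_control(request) and lock_inventory(request)):
        return "flash sale failed"
    task_queue.put(request)        # defer order creation, SMS, statistics
    return "flash sale succeeded"  # respond to the user immediately

def background_worker():
    """Asynchronous part: drains the queue and runs the remaining steps."""
    while True:
        request = task_queue.get()
        if request is None:  # sentinel: stop the worker
            break
        # create_order(request); send_sms(request); update_statistics(request)
        task_queue.task_done()

threading.Thread(target=background_worker, daemon=True).start()
```

The caller gets its answer as soon as `handle_flash_sale` returns; the worker thread plays the role of the queue's consumers.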

As you can see, in this scenario the message queue is used to make parts of the service asynchronous. The benefits are:

  • Results can be returned to the user faster;
  • Less waiting: the later steps naturally run concurrently, improving overall system performance.

2. Flow control

Continuing with the flash-sale system: we have used the message queue to perform some of the work asynchronously, but we still face another problem: how do we keep too many requests from overwhelming the flash-sale system?

A well-designed program should be able to protect itself: faced with a deluge of requests, it should process as many as it can within its capacity, reject the ones it cannot handle, and keep itself running normally. Unfortunately, many real-world programs are not that robust, and simply returning an error for rejected requests also makes for a poor user experience.

Therefore, we need an architecture robust enough to protect the back-end services. The design idea is to use the message queue to isolate the gateway from the back-end services, achieving flow control and protecting the back end.

After adding the message queue, the flash-sale flow becomes:

  1. The gateway receives a request from the app and puts it into the request message queue;
  2. The back-end service takes requests from the queue, completes the subsequent flash-sale steps, and returns the result.

When the sale starts and a huge burst of requests arrives at the gateway in a short time, they do not slam directly into the back-end flash-sale service. Instead, they accumulate in the message queue, and the back-end service consumes requests from the queue at whatever rate its capacity allows.

Requests that time out can simply be discarded; the app treats a timed-out response as a failed flash sale. Operations staff can also scale the flash-sale service horizontally at any time by adding instances, without changing anything else in the system.
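The two steps above can be sketched with a bounded queue standing in for the message queue between the gateway and the back end. The names (`gateway_receive`, `backend_consume`) and the capacity of 1000 are illustrative assumptions.

```python
import queue

request_queue = queue.Queue(maxsize=1000)  # absorbs bursts of requests

def gateway_receive(request):
    """Gateway: enqueue and return immediately; never call the back end."""
    try:
        request_queue.put_nowait(request)
        return "accepted"
    except queue.Full:
        return "rejected"  # queue full: shed load instead of crashing

def backend_consume():
    """Back end: pull requests at its own pace, up to its real capacity."""
    try:
        request = request_queue.get_nowait()
    except queue.Empty:
        return None  # nothing to do right now
    # ... run the full flash-sale pipeline for `request` here ...
    return request
```

The gateway never blocks on a slow back end, and the back end never sees more load than it chooses to pull from the queue.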

The benefit of this design is that the flow automatically adapts to the processing capacity of the downstream service, playing a "peak-shaving and valley-filling" role. But it also has costs:

  • It lengthens the call chain, making overall response latency longer.
  • Both upstream and downstream must switch from synchronous calls to asynchronous messaging, which increases system complexity.

Is there a simpler way to do flow control? If we can estimate the processing capacity of the flash-sale service, we can implement a token bucket with a message queue and achieve flow control more simply.

The principle of token-bucket flow control is: a fixed number of tokens is issued into the token bucket per unit of time, and the service must take a token from the bucket before it may process a request; if the bucket has no tokens, the request is rejected. This guarantees that the number of requests processed per unit of time never exceeds the number of tokens issued, which achieves flow control.

 

This approach is also very simple and does not disturb the original call chain: the only change is that the gateway acquires a token before processing each request from the app.

The token bucket can be implemented with just a fixed-capacity message queue plus a "token generator": the generator produces tokens at a uniform rate matching the estimated capacity and puts them into the token queue (tokens are discarded when the queue is full); when the gateway receives a request, it consumes one token from the token queue; if it gets a token, it goes on to call the back-end flash-sale service, otherwise it directly returns a flash-sale failure.
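A minimal sketch of this token-bucket design, again using a standard-library queue in place of the message queue: a fixed-capacity queue holds the tokens, and a generator thread refills it at a uniform rate. The bucket capacity (100), the rate, and all names here are illustrative assumptions.

```python
import queue
import threading
import time

token_queue = queue.Queue(maxsize=100)  # the bucket

def token_generator(tokens_per_second):
    """Produce tokens uniformly; discard them when the bucket is full."""
    while True:
        try:
            token_queue.put_nowait("token")
        except queue.Full:
            pass  # bucket full: token discarded
        time.sleep(1.0 / tokens_per_second)

def call_backend(request):  # stand-in for the real flash-sale service
    return "flash sale succeeded"

def gateway_handle(request):
    """Only requests that obtain a token reach the back-end service."""
    try:
        token_queue.get_nowait()
    except queue.Empty:
        return "flash sale failed"  # no token: reject immediately
    return call_backend(request)

threading.Thread(target=token_generator, args=(50,), daemon=True).start()
```

Because `token_queue` is bounded, tokens never pile up beyond the bucket's capacity, so a long idle period cannot be "saved up" into an unbounded burst later.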

These are the two flow-control designs commonly built with message queues; you can choose between them based on their respective pros and cons and your application scenario.

3. Service decoupling

Another role of a message queue is to decouple the applications within a system. Let's use another e-commerce example to show why decoupling matters.

Orders are among the most central data in an e-commerce system. When a new order is created:

  1. The payment system needs to start the payment flow;
  2. The risk-control system needs to check the order's legitimacy;
  3. The customer-service system needs to notify the user by SMS;
  4. The business-analytics system needs to update its statistics;
  5. ……

All of these downstream systems need the order data in real time. As the business grows, downstream systems keep being added and changed, and each may need only a subset of the order data. The team responsible for the order service has to spend enormous effort coping with these ever-growing, ever-changing downstream systems, constantly modifying and debugging the interfaces between the order system and each of them. Any interface change in any downstream system forces a new deployment of the order module, which is almost unacceptable for a core e-commerce service.

Virtually every e-commerce company uses a message queue to solve this kind of over-tight coupling. With a message queue in place, the order service publishes a message to a topic named Order whenever an order changes, and every downstream system subscribes to the Order topic, so each of them receives a complete, real-time copy of the order data.

No matter how downstream systems are added, removed, or changed, the order service never needs to be modified: the order service is decoupled from its downstream services.
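The decoupling above can be sketched as a tiny in-process publish/subscribe mechanism. In a real system the message queue delivers these messages across processes; the handler names (payment, risk, sms) are illustrative stand-ins for the downstream systems.

```python
from collections import defaultdict

subscribers = defaultdict(list)  # topic -> list of handler callbacks

def subscribe(topic, handler):
    subscribers[topic].append(handler)

def publish(topic, message):
    """The order service only publishes; it is unaware of who listens."""
    for handler in subscribers[topic]:
        handler(message)

received = []
subscribe("Order", lambda order: received.append(("payment", order["id"])))
subscribe("Order", lambda order: received.append(("risk", order["id"])))
subscribe("Order", lambda order: received.append(("sms", order["id"])))

# A new order is created: a single publish reaches every downstream system.
publish("Order", {"id": 1001, "amount": 99})
```

Adding or removing a downstream system is just one more (or one fewer) `subscribe` call; `publish` and the order service itself never change.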

 

Summary

These are the three most common scenarios for message queues: asynchronous processing, flow control, and service decoupling. Of course, message queues are not limited to these scenarios; other uses include:

  • Serving as a publish/subscribe system that implements the observer pattern between microservice-level systems;
  • Connecting stream-computing tasks and data;
  • Broadcasting messages to large numbers of receivers.

Simply put, most of the problems we solve with a queue inside a monolithic application can be solved with a message queue in a distributed system.

At the same time, we should recognize that message queues have their own problems and limitations, including:

  • The latency introduced by adding a message queue;
  • Increased system complexity;
  • Possible data-inconsistency problems.

So we say there is no best architecture, only the most suitable one. Choosing the right architecture based on the characteristics of the target business and your own constraints is where an architect's skill shows.

Questions to think about

In the systems you work on or study, which problems could be solved by introducing a message queue? For message queues already in use in your system, which of the scenarios in this lecture do they correspond to? If none apply, what role does that message queue play, what problem does it solve, and does it introduce any new problems? Feel free to share your thoughts in the comments.

Origin www.cnblogs.com/wt645631686/p/11408452.html