Several common usage scenarios for the Kafka message queue

Disclaimer: This is an original article by the blogger, licensed under the CC 4.0 BY-SA agreement. Please include the original source link and this statement when reproducing it.
Original link: https://blog.csdn.net/huyunqiang111/article/details/99961460

I. Introduction

Message queue middleware is an important component of distributed systems. It is mainly used to solve problems such as application decoupling, asynchronous messaging, and traffic peak shaving, helping to build a high-performance, highly available, scalable architecture with eventual consistency. Widely used message queues include ActiveMQ, RabbitMQ, ZeroMQ, Kafka, MetaMQ, and RocketMQ.

II. Message queue usage scenarios

The following describes common usage scenarios for message queues in practice: asynchronous processing, application decoupling, traffic peak shaving, log processing, and message communication.

1. Asynchronous processing

Scenario: after a user registers, the system needs to send a registration email and a registration SMS. Traditionally there are two approaches: serial and parallel.

Serial mode: after the registration information is successfully written to the database, the registration email is sent, then the registration SMS. Only after all three tasks complete is a response returned to the client.

Parallel mode: after the registration information is successfully written to the database, the registration email and the registration SMS are sent at the same time. Once all three tasks complete, a response is returned to the client. The difference from the serial mode is that the parallel mode reduces the total processing time.

Assume each of the three tasks takes 50 ms and ignore network and other overhead: the serial mode takes 150 ms, while the parallel mode may take 100 ms.

Because the number of requests a CPU can process per unit time is fixed, assume the CPU's throughput is 100 requests per second. Then the serial mode can handle about 7 requests per second (1000/150), and the parallel mode about 10 (1000/100).

Summary: as the above case shows, with the traditional approaches the system's performance (concurrency, throughput, response time) becomes a bottleneck. How can this problem be solved?

By introducing a message queue, the non-essential business logic is handled asynchronously. After the change, the architecture is as follows:

Following the same assumptions, the user's response time is now the time to write the registration information to the database, i.e. 50 ms. The registration email and SMS messages are simply written to the message queue and the call returns immediately; since writing to the message queue is fast, its time can be ignored, so the user's response time may be about 50 ms. After the change, the system throughput rises to about 20 QPS per second: roughly 3 times the serial mode and twice the parallel mode.
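A minimal sketch of this asynchronous pattern, assuming the kafka-python client, a broker on localhost:9092, and illustrative topic names; save_to_database is a placeholder for the real persistence step, not part of the original article:

```python
# Sketch: registration returns right after the DB write, while the email/SMS
# work is handed off to Kafka for asynchronous processing.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def save_to_database(user):
    pass  # placeholder: the ~50 ms write of the registration record

def register_user(user):
    save_to_database(user)
    # Hand the follow-up work to the queue; both sends return quickly,
    # so the client sees roughly the database latency only.
    producer.send("user-register-email", {"user_id": user["id"]})
    producer.send("user-register-sms", {"user_id": user["id"]})
    return {"status": "registered"}
```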

2. Application decoupling

Scenario: after a user places an order, the order system needs to notify the inventory system. The traditional approach is for the order system to call the inventory system's interface directly. As shown below:

Disadvantages of the traditional model:

If the inventory system is unreachable, the inventory deduction fails and the order fails as well; the order system and the inventory system are tightly coupled.

How can this problem be solved? Introduce a message queue between the applications, as shown below:

Order system: after a user places an order, the order system finishes persisting the order, writes a message to the message queue, and returns success to the user.

Inventory system: subscribes to the order messages, obtains the order information in pull or push mode, and performs the inventory operations according to the order.

Even if the inventory system cannot work properly when the order is placed, the order itself is not affected: once the order system has written the message to the queue, it no longer cares about the follow-up operations. This decouples the order system from the inventory system.
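A sketch of the decoupled flow under the same assumptions (kafka-python, broker on localhost:9092); the "orders" topic and the persist_order / deduct_stock functions are illustrative placeholders:

```python
import json
from kafka import KafkaProducer, KafkaConsumer

# --- Order system side: persist the order, publish it, return success ---
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def persist_order(order):
    pass  # placeholder: the order system's own storage

def place_order(order):
    persist_order(order)
    producer.send("orders", order)   # notify downstream systems via the queue
    return "order placed"

# --- Inventory system side: subscribe to the same topic and deduct stock ---
def deduct_stock(order):
    pass  # placeholder: the inventory operation

def run_inventory_consumer():
    consumer = KafkaConsumer(
        "orders",
        bootstrap_servers="localhost:9092",
        group_id="inventory-service",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )
    for message in consumer:
        deduct_stock(message.value)
```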

3. Traffic peak shaving

Traffic peak shaving is another common message queue scenario, widely used in group-buying or flash-sale (seckill) promotions.

Scenario: in a flash sale, the huge number of requests causes traffic to surge and the application to hang. To solve this problem, a message queue is usually added in front of the application.

It can control the number of participants in the promotion and relieve the application from being overwhelmed by high traffic in a short period.

When the server receives a user request, it first writes the request to the message queue. If the queue length exceeds the maximum allowed, the request is discarded directly or the user is redirected to an error page.

The flash-sale service then reads the requests from the message queue and does the subsequent processing.
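A sketch of the front-end gate, assuming kafka-python; MAX_REQUESTS and the "seckill-requests" topic are illustrative, and the in-process counter is a simplification of a real shared counter or queue-length check:

```python
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

MAX_REQUESTS = 10000   # cap on how many seckill requests the event accepts
accepted = 0

def handle_seckill_request(request):
    global accepted
    if accepted >= MAX_REQUESTS:
        return "error page"                     # discard or redirect when over the limit
    accepted += 1
    producer.send("seckill-requests", request)  # the seckill service consumes at its own pace
    return "request queued"
```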

4. Log processing

Log processing refers to using a message queue such as Kafka in the log processing pipeline to solve the problem of transferring large volumes of log data. The simplified architecture is as follows:

Log collection client: responsible for collecting log data and periodically writing it to the Kafka queue;

Kafka message queue: responsible for receiving, storing, and forwarding the log data;

Log processing application: subscribes to and consumes the log data in the Kafka queue.
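A sketch of the log collection client, assuming kafka-python; the "app-logs" topic and the log file path are illustrative:

```python
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092")

def ship_log_file(path):
    # Read the application log and send each line to the Kafka topic.
    with open(path, "rb") as f:
        for line in f:
            producer.send("app-logs", line.rstrip(b"\n"))
    producer.flush()   # ensure buffered records reach the Kafka cluster

ship_log_file("/var/log/myapp/app.log")   # hypothetical log path
```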

The following is Sina's Kafka-based log processing application:

Kafka: the message queue that receives the user logs;

Logstash: parses the logs and converts them into unified JSON output for Elasticsearch;

Elasticsearch: the core of the real-time log analysis service; a schemaless, real-time data storage service that organizes data into indexes and provides powerful search and statistics functions;

Kibana: a data visualization component built on Elasticsearch; its powerful visualization capabilities are one of the important reasons many companies choose the ELK stack.

5. Message communication

Message communication means that message queues generally have efficient communication mechanisms built in, so they can also be used for pure messaging, such as point-to-point message queues or chat rooms.

Point-to-point communication:

Client A and client B use the same queue to exchange messages.

Chat-room communication:

Client A, client B, ..., client N subscribe to the same topic and publish and receive messages through it, achieving an effect similar to a chat room.

The above are in fact the two messaging models of message queues: point-to-point and publish/subscribe. The model diagrams are for reference.
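In Kafka these two models map onto consumer groups: consumers that share a group_id split a topic's messages between them (queue, point-to-point semantics), while consumers in different groups each receive every message (publish/subscribe). A minimal sketch, assuming kafka-python and an illustrative "chat-room" topic:

```python
from kafka import KafkaConsumer

# Point-to-point: A and B share one consumer group, so each message on the
# topic is delivered to only one of them.
consumer_a = KafkaConsumer("chat-room", bootstrap_servers="localhost:9092",
                           group_id="p2p-clients")
consumer_b = KafkaConsumer("chat-room", bootstrap_servers="localhost:9092",
                           group_id="p2p-clients")

# Publish/subscribe: each client uses its own group, so every client receives
# every message, like a chat-room broadcast.
consumer_n = KafkaConsumer("chat-room", bootstrap_servers="localhost:9092",
                           group_id="client-n")
```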

III. Messaging middleware examples

1. E-commerce system

The message queue should be highly available, persistent messaging middleware, such as ActiveMQ, RabbitMQ, or RocketMQ.

After the application finishes its core business logic, it writes a message to the queue. A send-confirmation mode can be enabled to make sure the message was sent successfully (the message queue returns a success status after receiving the message, and only then does the application return; this protects message integrity), as sketched below;

Extension processes (SMS sending, delivery processing) subscribe to the queue and obtain the messages in push or pull mode for processing;

Along with application decoupling, message queues also bring data consistency issues, which can be solved with eventual consistency. For example, the master data is written to the database first, and the extension applications do the follow-up processing based on the message queue combined with the database;
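A sketch of the send-confirmation step mentioned above, assuming kafka-python: acks="all" asks all in-sync replicas to acknowledge, and future.get() blocks until the broker confirms (or an error is raised):

```python
from kafka import KafkaProducer
from kafka.errors import KafkaError

producer = KafkaProducer(bootstrap_servers="localhost:9092", acks="all")

def publish_with_confirmation(topic, payload):
    future = producer.send(topic, payload)
    try:
        future.get(timeout=10)   # wait for the broker's acknowledgement
        return True              # safe to report success to the caller
    except KafkaError:
        return False             # caller can retry or run compensation logic
```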

2. Log collection system

It consists of four parts: the Zookeeper registry, the log collection client, the Kafka cluster, and the Storm cluster (OtherApp).

Zookeeper registry: provides load balancing and address lookup services;

Log collection client: collects application log data and pushes it to the Kafka queue;

Kafka cluster: receives, routes, stores, and forwards the messages;

Storm cluster: at the same level as OtherApp, consumes the data in the queue in pull mode.

