Four Application Scenarios of Message Queuing

Message queue middleware is an important component of distributed systems; it mainly solves problems such as application coupling, asynchronous processing, and traffic shaving (peak clipping).

It enables high-performance, highly available, scalable, and eventually consistent architectures.

The most widely used message queues are ActiveMQ, RabbitMQ, ZeroMQ, Kafka, MetaMQ, and RocketMQ.

2. Application Scenarios of Message Queuing

The following describes common usage scenarios of message queues in practice: asynchronous processing, application decoupling, traffic shaving, and message communication, along with a log-processing example.

2.1 Asynchronous processing

Scenario: after a user registers, the system needs to send a registration email and a registration SMS. There are two traditional approaches: (1) serial; (2) parallel.

(1) Serial mode: after the registration information is successfully written to the database, the registration email is sent, and then the registration SMS. Only after all three tasks complete is a response returned to the client.

 

(2) Parallel mode: after the registration information is successfully written to the database, the registration SMS is sent at the same time as the registration email. Once all three tasks complete, a response is returned to the client. The difference from serial mode is that the parallel approach reduces the overall processing time.

 

Assuming each of the three steps takes 50 milliseconds, and ignoring other overhead such as the network, the serial approach takes 150 milliseconds and the parallel approach takes about 100 milliseconds.

Because the number of requests a CPU can process per unit time is fixed, assume a throughput of 100 requests per second. In serial mode the system can then handle about 7 requests per second (1000/150 ≈ 6.7); in parallel mode, about 10 requests per second (1000/100).

Summary: as the case above shows, the traditional approach runs into performance bottlenecks (concurrency, throughput, response time). How can this be solved?

Introduce a message queue and handle the non-essential business logic asynchronously. The modified architecture is as follows:

 

Under the same assumptions, the user's response time is now essentially the time needed to write the registration information to the database, i.e. 50 milliseconds. The registration email and SMS tasks are written to the message queue and the request returns immediately; since writing to the message queue is very fast, its cost can basically be ignored, so the response time stays at about 50 milliseconds. After this architecture change the system throughput rises to about 20 QPS: roughly 3 times the serial approach and 2 times the parallel approach.
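As a minimal sketch of this asynchronous flow (assuming an ActiveMQ broker at tcp://localhost:61616 and a hypothetical user.registered queue; the JMS API shown is standard, but the names are illustrative and not from the original article), the registration handler only persists the user and enqueues an event, while email and SMS workers consume it later:

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class RegistrationProducer {
    public static void main(String[] args) throws JMSException {
        // Assumed broker URL and queue name; adjust to your environment.
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(session.createQueue("user.registered"));

            // saveUserToDatabase(user);          // hypothetical ~50 ms step: the only work on the request path
            TextMessage event = session.createTextMessage("{\"userId\": 42}");
            producer.send(event);                 // email and SMS workers pick this up asynchronously
        } finally {
            connection.close();
        }
    }
}
```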

2.2 Application Decoupling

Scenario: after a user places an order, the order system needs to notify the inventory system. Traditionally, the order system calls the inventory system's interface directly, as shown below.

 

Disadvantages of traditional mode:

  • If the inventory system is unreachable, the inventory deduction fails, causing the order itself to fail

  • The order system is coupled with the inventory system

How can these problems be solved? After introducing a message queue, the design looks like this:

 

  • Order system: after the user places an order, the order system completes its persistence, writes a message to the message queue, and returns an order-success response to the user.

  • Inventory system: subscribes to the order messages and obtains the order information in pull or push mode, then performs the inventory operations accordingly.

  • Even if the inventory system is unavailable when the order is placed, normal order placement is not affected: once the order is persisted, the order system only writes to the message queue and no longer cares about subsequent operations. This decouples the order system from the inventory system (a consumer-side sketch follows the list).
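A hedged sketch of the consumer side, assuming the same hypothetical broker and an order.created queue chosen for illustration: the inventory system registers a JMS MessageListener so the broker pushes each order message to it.

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class InventoryConsumer {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue("order.created"));

        // Push style: the broker delivers each order message to this callback.
        consumer.setMessageListener(message -> {
            try {
                String order = ((TextMessage) message).getText();
                System.out.println("deduct stock for " + order);  // stand-in for the real inventory operation
            } catch (JMSException e) {
                e.printStackTrace();
            }
        });
        connection.start();  // start delivery; the order system never calls the inventory system directly
    }
}
```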

2.3 Traffic Shaving

Traffic shaving is another common message queue scenario, widely used in flash-sale (seckill) and group-buying activities.

Application scenario: in a flash sale, traffic surges far beyond normal levels and the application can crash. To solve this, a message queue is generally placed in front of the application.

  • It can control the number of participants in the activity

  • It can relieve the application from short bursts of high traffic

 

  • After the server receives a user request, it first writes the request to the message queue. If the queue length already exceeds its maximum, the request is discarded directly or redirected to an error page

  • The flash-sale (seckill) service performs the follow-up processing based on the requests in the message queue (see the sketch below)
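The following sketch only illustrates the gatekeeping idea, using an in-process bounded queue as a stand-in for the broker; in a real deployment the check would be against the message queue's depth or a preset quota, and all names here are hypothetical.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class SeckillGate {
    // Stand-in for the message queue; a real system would check broker queue depth or a quota instead.
    private static final int MAX_PENDING = 10_000;
    private static final BlockingQueue<String> pending = new ArrayBlockingQueue<>(MAX_PENDING);

    /** Returns true if the request was accepted for asynchronous processing. */
    public static boolean accept(String requestId) {
        // offer() fails fast when the queue is full: discard the request or redirect to an error page.
        return pending.offer(requestId);
    }

    public static void main(String[] args) {
        System.out.println(accept("user-1") ? "queued" : "sold out / error page");
    }
}
```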

2.4 Log Processing

Log processing refers to using a message queue, typically Kafka, in the log pipeline to handle the transmission of large volumes of log data. The simplified architecture is as follows:

 

  • The log collection client gathers log data and periodically writes it to the Kafka queue (a minimal producer sketch follows the list).

  • Kafka message queue, responsible for receiving, storing and forwarding log data

  • Log processing application: subscribes to and consumes the log data in the Kafka queue
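A minimal log-producer sketch using the standard Kafka Java client, assuming a broker at localhost:9092 and a hypothetical app-logs topic:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class LogShipper {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // assumed broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            // In a real collector this line would come from tailing an application log file.
            producer.send(new ProducerRecord<>("app-logs", "2024-01-01T00:00:00 INFO user login"));
        }
    }
}
```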

The following is Sina's Kafka log processing case, from http://cloud.51cto.com/art/201507/484338.htm:

 

(1) Kafka: message queue for receiving user logs

(2) Logstash: parses the logs, normalizes them into JSON, and outputs them to Elasticsearch

(3) Elasticsearch: the core of the real-time log analysis service; a schemaless, real-time data store that organizes data through indexes and provides powerful search and statistics capabilities

(4) Kibana: a data visualization component based on Elasticsearch; its strong visualization capability is an important reason many companies choose the ELK stack

2.5 Messaging

Message queues generally have efficient communication mechanisms built in, so they can also be used for pure message communication, for example to implement point-to-point messaging or chat rooms.

Peer-to-peer communication:

 

Client A and Client B use the same queue for message communication.

Chat room communication:

 

Clients A, B, ..., N subscribe to the same topic to publish and receive messages, achieving a chat-room-like effect.

These are in fact the two messaging modes of message queues: point-to-point and publish-subscribe. The diagrams are schematic and for reference only.

3. Example of message middleware

3.1 E-commerce system

 

The message queue should be highly available, persistent message middleware such as ActiveMQ, RabbitMQ, or RocketMQ.

(1) After the application completes its main logic, it writes to the message queue. To know whether the message was sent successfully, the message confirmation mode can be enabled: the application returns only after the message queue has acknowledged receipt, which guarantees message integrity (see the sketch after this list).

(2) The extension processes (SMS sending, delivery handling) subscribe to the queue, obtain messages by push or pull, and process them.

(3) While messaging decouples the applications, it introduces a data consistency problem, which can be solved with eventual consistency: the main data is written to the database, and the extension applications carry out the follow-up processing based on the message queue combined with the database.
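As one concrete example of the confirmation mode mentioned in item (1), the RabbitMQ Java client offers publisher confirms. This is a hedged sketch with an assumed local broker and a hypothetical order.events queue, not the original system's code:

```java
import java.nio.charset.StandardCharsets;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class ConfirmedPublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");                      // assumed broker host
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            channel.confirmSelect();                       // enable publisher confirms
            channel.queueDeclare("order.events", true, false, false, null);
            channel.basicPublish("", "order.events", null,
                    "order-123".getBytes(StandardCharsets.UTF_8));
            // Block until the broker acknowledges receipt (or the 5-second timeout elapses).
            boolean confirmed = channel.waitForConfirms(5_000);
            System.out.println(confirmed ? "message stored by broker" : "not confirmed");
        }
    }
}
```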

3.2 Log Collection System

 

It is divided into four parts: Zookeeper registry, log collection client, Kafka cluster and Storm cluster (OtherApp).

  • Zookeeper registry: provides load balancing and address lookup services

  • Log collection client: collects the application system's logs and pushes the data to the Kafka queue

  • Kafka cluster: receiving, routing, storing, forwarding and other message processing

  • Storm cluster: at the same level as OtherApp, consumes data from the queue by pulling
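A hedged sketch of pull-style consumption with the standard Kafka Java client (the broker address, group id, and topic are assumptions for illustration):

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class LogPuller {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumed broker address
        props.put("group.id", "log-processors");            // hypothetical consumer group
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("app-logs"));
            while (true) {
                // Pull model: the consumer asks the cluster for new records at its own pace.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.value());      // hand off to real stream processing here
                }
            }
        }
    }
}
```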

4. JMS message service

Speaking of message queues, we have to mention JMS. The JMS (Java Message Service) API is a messaging standard/specification on the Java EE platform that allows application components to create, send, receive, and read messages. It makes distributed communication loosely coupled, and messaging reliable and asynchronous.

In the EJB architecture, message-driven beans integrate seamlessly with the JMS message service. In the J2EE architecture patterns, there is a message server pattern used to decouple messages from applications.

4.1 Message Model

The JMS standard defines two message models: P2P (Point-to-Point) and Publish/Subscribe (Pub/Sub).

4.1.1 P2P Mode

 

The P2P model includes three roles: message queue (Queue), sender (Sender), and receiver (Receiver). Each message is sent to a specific queue, and the receiver gets the message from the queue. Queues hold messages until they are consumed or time out.

Features of P2P

  • There is only one consumer per message (that is, once consumed, the message is no longer in the message queue)

  • Sender and receiver have no time dependency: whether or not the receiver is running when the sender sends a message does not affect the message being delivered to the queue

  • After successfully receiving a message, the receiver must acknowledge it to the queue

Use the P2P model when every message sent should be processed successfully by exactly one consumer.
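A minimal P2P sketch using the queue-specific JMS interfaces (the ActiveMQ broker URL and queue name are assumptions):

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class PointToPointDemo {
    public static void main(String[] args) throws JMSException {
        QueueConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        QueueConnection connection = factory.createQueueConnection();
        try {
            QueueSession session = connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("demo.p2p");

            QueueSender sender = session.createSender(queue);
            sender.send(session.createTextMessage("hello"));         // the queue holds the message

            QueueReceiver receiver = session.createReceiver(queue);
            connection.start();
            TextMessage msg = (TextMessage) receiver.receive(1000);  // exactly one consumer gets it
            System.out.println(msg.getText());
        } finally {
            connection.close();
        }
    }
}
```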

4.1.2 Pub/sub mode

 

The Pub/Sub model contains three roles: topic (Topic), publisher (Publisher), and subscriber (Subscriber). Multiple publishers send messages to a topic, and the system delivers these messages to multiple subscribers.

Features of Pub/Sub

  • Each message can have multiple consumers

  • There is a time dependency between publisher and subscriber: a subscriber to a topic (Topic) must create its subscription before it can consume the publisher's messages

  • In order to consume messages, subscribers must remain running

To relax this strict time dependency, JMS allows subscribers to create durable subscriptions: even when the subscriber is not running, it can still receive the publisher's messages.

If a sent message may be processed by zero, one, or many consumers, the Pub/Sub model can be used.
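A minimal durable-subscription sketch (the client ID, subscription name, topic, and broker URL are assumptions for illustration):

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class DurableSubscriberDemo {
    public static void main(String[] args) throws JMSException {
        TopicConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        TopicConnection connection = factory.createTopicConnection();
        connection.setClientID("inventory-service");          // required for durable subscriptions
        TopicSession session = connection.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("order.events");

        // Messages published while this subscriber is offline are kept and delivered later.
        TopicSubscriber subscriber = session.createDurableSubscriber(topic, "inventory-sub");
        subscriber.setMessageListener(message -> System.out.println("received: " + message));
        connection.start();
    }
}
```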

4.2 Message consumption

In JMS, message production and consumption are asynchronous with respect to each other. For consumption, JMS clients can consume messages in two ways.

(1) Synchronization

A subscriber or receiver fetches messages with the receive method, which blocks until a message arrives (or until the timeout expires).

(2) Asynchronous

A subscriber or receiver can register a message listener; when a message arrives, the provider automatically calls the listener's onMessage method.
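A small sketch showing both consumption styles on one consumer (the broker URL and queue name are assumptions; in practice a consumer normally uses one style or the other):

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class ConsumeTwoWays {
    public static void main(String[] args) throws JMSException {
        Connection connection = new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue("demo.queue"));
        connection.start();

        // (1) Synchronous: receive() blocks until a message arrives or the 5-second timeout expires.
        Message first = consumer.receive(5000);
        System.out.println("synchronously received: " + first);

        // (2) Asynchronous: register a listener; the provider calls onMessage for each new message.
        consumer.setMessageListener(message -> System.out.println("asynchronously received: " + message));
    }
}
```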

 

JNDI (Java Naming and Directory Interface) is the standard Java naming and directory API. It lets applications locate and access services on the network: given a resource name that corresponds to an entry in a directory or naming service, it returns the information needed to establish a connection to that resource.

In JMS, JNDI is used to look up and access the destinations or sources of messages.
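A hedged JNDI lookup sketch; the JNDI names used here are assumptions and must match whatever the administrator configured (for example in a jndi.properties file):

```java
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class JndiLookupDemo {
    public static void main(String[] args) throws NamingException {
        // Reads provider settings (initial context factory, broker URL) from the JNDI environment.
        Context ctx = new InitialContext();

        // The names below are assumptions; they must match the administered objects that were configured.
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("ConnectionFactory");
        Destination orders = (Destination) ctx.lookup("queue/orders");

        System.out.println("looked up " + factory + " and " + orders);
    }
}
```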

4.3 JMS programming model

(1) ConnectionFactory

The factory for creating Connection objects. For the two JMS message models there are QueueConnectionFactory and TopicConnectionFactory. A ConnectionFactory object can be looked up through JNDI.

(2) Destination

Destination is where a message producer sends messages, or where a message consumer receives them from. For a producer, the Destination is a queue (Queue) or a topic (Topic); for a consumer, the Destination is likewise a queue or topic (i.e. the message source).

Therefore, a Destination is actually one of two kinds of object, Queue or Topic, and it can be looked up through JNDI.

(3) Connection

Connection represents the link established between the client and the JMS system (a wrapper around a TCP/IP socket). Connection can generate one or more Sessions. Like ConnectionFactory, Connection also has two types: QueueConnection and TopicConnection.

(4) Session

Session is the interface for working with messages; producers, consumers, and messages are created through it. A Session also provides transaction support: when several messages need to be sent or received together, those sends/receives can be grouped into one transaction. Likewise, it comes in QueueSession and TopicSession variants.

(5) Message producer

Message producers are created by Session and used to send messages to Destinations. Likewise, there are two types of message producers: QueueSender and TopicPublisher. Messages can be sent by calling the methods of the message producer (send or publish).

(6) Message consumers

Message consumers are created by a Session to receive messages sent to a Destination. There are two types, QueueReceiver and TopicSubscriber, created with the session's createReceiver(Queue) and createSubscriber(Topic) methods respectively. The session's createDurableSubscriber method can also be used to create durable subscribers.

(7) MessageListener

A message listener. If a message listener is registered, the listener's onMessage method is called automatically whenever a message arrives. The MDB (Message-Driven Bean) in EJB is a kind of MessageListener.
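Putting the seven objects together in order, a compact sketch with the unified JMS 1.1 interfaces (the broker URL and queue name are assumptions):

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class JmsProgrammingModel {
    public static void main(String[] args) throws Exception {
        // (1) ConnectionFactory -> (3) Connection
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        try {
            // (4) Session: non-transacted, automatic acknowledgement
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            // (2) Destination
            Destination destination = session.createQueue("demo.model");
            // (5) Message producer and (6) message consumer
            MessageProducer producer = session.createProducer(destination);
            MessageConsumer consumer = session.createConsumer(destination);
            // (7) MessageListener, called automatically when a message arrives
            consumer.setMessageListener(message -> System.out.println("onMessage: " + message));

            connection.start();
            producer.send(session.createTextMessage("hello JMS"));
            Thread.sleep(1000);   // give the listener a moment to run before closing
        } finally {
            connection.close();
        }
    }
}
```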

 

Studying JMS in depth is very helpful for mastering Java and EJB architecture, and message middleware is a necessary component of large-scale distributed systems. This article only gives a general introduction; deeper understanding comes from further study, practice, and summarizing.

5. Commonly Used Message Queues

Commercial application servers such as WebLogic and JBoss support the JMS standard, which is very convenient for development, but free containers such as Tomcat and Jetty require third-party message middleware. This part introduces commonly used message middleware (ActiveMQ, RabbitMQ, ZeroMQ, Kafka) and their characteristics.

5.1 ActiveMQ

ActiveMQ is the most popular and powerful open source message bus produced by Apache. ActiveMQ is a JMS Provider implementation that fully supports the JMS 1.1 and J2EE 1.4 specifications. Although the JMS specification was published long ago, JMS still plays a special role in today's J2EE applications.

ActiveMQ features are as follows:

⒈ Clients can be written in multiple languages and protocols. Languages: Java, C, C++, C#, Ruby, Perl, Python, PHP. Application protocols: OpenWire, STOMP, REST, WS Notification, XMPP, AMQP

⒉ Fully supports the JMS 1.1 and J2EE 1.4 specifications (persistence, XA messages, transactions)

⒊ Support for Spring: ActiveMQ can easily be embedded into systems that use Spring
