ActiveMQ knowledge points (for review, to be sorted out)

Purpose:

1. Asynchrony: improves the overall throughput of the system;
2. Decoupling: new modules can be integrated with minimal code changes;
3. Peak clipping: acts as a traffic buffer pool so that back-end systems consume at their own pace and are not overwhelmed;

Two ways to consume from a Queue:

1. Synchronous blocking mode (receive()): the receiver calls MessageConsumer's receive() method; receive() blocks until a message arrives;
2. Asynchronous non-blocking mode (MessageListener / onMessage()): the receiver calls MessageConsumer's setMessageListener() to register a message listener; when a message arrives, the listener's onMessage() method is called automatically;
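
A minimal Java (JMS) sketch of both consumption modes, assuming an ActiveMQ broker at tcp://localhost:61616 and a placeholder queue name demo.queue:

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class QueueConsumerModes {

    // 1. Synchronous blocking mode: receive() blocks until a message arrives.
    static void consumeBlocking(Session session, Queue queue) throws JMSException {
        MessageConsumer consumer = session.createConsumer(queue);
        Message msg = consumer.receive();              // or receive(timeoutMillis)
        System.out.println("received: " + ((TextMessage) msg).getText());
    }

    // 2. Asynchronous non-blocking mode: register a MessageListener;
    //    onMessage() is invoked automatically when the broker delivers a message.
    static void consumeWithListener(Session session, Queue queue) throws JMSException {
        MessageConsumer consumer = session.createConsumer(queue);
        consumer.setMessageListener(message -> {
            try {
                System.out.println("listener got: " + ((TextMessage) message).getText());
            } catch (JMSException e) {
                e.printStackTrace();
            }
        });
    }

    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("demo.queue");
        consumeWithListener(session, queue);           // pick one of the two modes per consumer
    }
}
```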

1. Produce one message first, then start only consumer 1.
Question: can consumer 1 consume the message?
Yes.
2. Produce one message first, start consumer 1 and then start consumer 2.
Question: can consumer 2 consume the message?
Consumer 1 can consume it; consumer 2 cannot.
3. Start two consumers first, then produce six messages.
Question: how are they consumed?
Each consumer gets half: the broker load-balances the messages across the two consumers.

Topic (publish and subscribe):

1. A producer sends messages to a topic; each message can have multiple consumers;
2. There is a time dependency between producer and consumer: a subscriber to a topic can only consume messages published after it subscribed;
3. The topic does not store messages when the producer produces them; if a message is produced while there are no consumers, it is simply wasted. In general, start the consumer before the producer;
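
A minimal publish/subscribe sketch under the same assumptions (local broker, placeholder topic name); per point 3 above, the subscriber is registered before the publisher sends:

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class TopicExample {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();

        // Subscriber first: a non-durable subscriber only receives messages
        // published after it has subscribed.
        Session subSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = subSession.createTopic("demo.topic");
        MessageConsumer subscriber = subSession.createConsumer(topic);
        subscriber.setMessageListener(m -> System.out.println("subscriber received a message"));

        // Publisher: every currently active subscriber gets a copy of the message.
        Session pubSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer publisher = pubSession.createProducer(topic);
        publisher.send(pubSession.createTextMessage("hello topic"));
    }
}
```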

Differences between the queue model and the topic model:
1. Working mode:
queue is a load-balancing model: if there is currently no consumer, messages are not lost; if there are multiple consumers, each message is delivered to only one of them;
topic is a publish-subscribe model: if there is currently no consumer, the message is lost; if there are multiple subscribers, every subscriber receives the message;
2. State:
queue data is stored on the MQ server, as files by default, and can also be configured to use DB storage;
topic is stateless;
3. Delivery completeness:
queue messages are never discarded;
topic messages are discarded if there are no consumers;
4. How messages are obtained:
queue uses the pull model: the consumer first asks the broker whether there are messages and pulls them if there are;
topic uses the push model: the broker actively pushes messages to subscribers without the consumers having to ask;

Pull mode and push mode

a. Point-to-point: if no consumer is listening on the queue, messages remain in the queue until a consumer connects to it.
This messaging model is the traditional lazy model, or polling model.
In this model, messages are not pushed to consumers automatically; instead, consumers must request them from the queue (pull model).

b. The pub/sub messaging model is a push model. In this model, messages are broadcast automatically; consumers do not need to request or poll the topic to obtain new messages.

ActiveMQ message types

1. TextMessage: carries a java.lang.String as its payload; used to exchange string data;
2. ObjectMessage: carries a serializable Java object as its payload; used to exchange Java objects;
3. MapMessage: carries a set of key-value pairs as its payload; the values must be Java primitive types (or their wrapper classes) or String,
i.e. byte, short, int, long, float, double, char, boolean, String;
4. BytesMessage: carries a stream of raw bytes as its payload;
5. StreamMessage: carries a stream of primitive data types as its payload; it preserves the types written to the stream, so whatever type was written must be read back as the same type;
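
A short sketch of creating each of the five message types from an open javax.jms.Session (the payload values are illustrative):

```java
import javax.jms.*;

public class MessageTypes {
    static void createAll(Session session) throws JMSException {
        TextMessage text = session.createTextMessage("plain string payload");

        ObjectMessage object = session.createObjectMessage(new java.util.Date()); // any Serializable

        MapMessage map = session.createMapMessage();   // values: primitives, wrappers, String
        map.setString("name", "activemq");
        map.setInt("port", 61616);

        BytesMessage bytes = session.createBytesMessage();
        bytes.writeBytes("raw bytes".getBytes());

        StreamMessage stream = session.createStreamMessage();
        stream.writeInt(42);           // must be read back in the same order and with
        stream.writeString("typed");   // the same types: readInt(), then readString()
    }
}
```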

Persistence:

Queue persistence: messageProducer.setDeliveryMode(DeliveryMode.PERSISTENT); persistent delivery is the default.
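
For example (a sketch; DeliveryMode and setDeliveryMode are standard JMS API):

```java
import javax.jms.*;

public class DeliveryModeExample {
    static MessageProducer persistentProducer(Session session, Queue queue) throws JMSException {
        MessageProducer producer = session.createProducer(queue);
        producer.setDeliveryMode(DeliveryMode.PERSISTENT);        // the default: survives a broker restart
        // producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT); // faster, but lost if the broker restarts
        return producer;
    }
}
```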

Local transactions: transactions mainly concern the producer side, while acknowledgement mainly concerns the consumer side.
false (non-transacted): the message enters the queue as soon as send() is executed; with the transaction turned off, the second parameter of createSession (the acknowledge mode) takes effect.
true (transacted): after send(), the message is only actually delivered to the queue once commit() is called; suitable for sending messages in batches with buffering.
Transactional messages are acknowledged automatically, no matter which acknowledgement mode is set.

Note: producer -> broker and broker -> consumer are two completely separate operations.
Their transactions and acknowledgements are unrelated, because the producer's session and the consumer's session are independent of each other.

The producer's transaction ensures that a batch is sent to the broker atomically: it either succeeds or fails as a whole.
The consumer's transaction ensures that the messages sent by the broker reach the consumer.
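
A producer-side sketch of a local transaction, batching sends and committing them together (placeholder broker URL and queue name):

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class TransactedProducer {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();

        // First argument true = transacted; the acknowledge mode is then ignored.
        Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
        MessageProducer producer = session.createProducer(session.createQueue("demo.queue"));
        try {
            for (int i = 0; i < 10; i++) {
                producer.send(session.createTextMessage("msg-" + i));
            }
            session.commit();      // only now are the 10 messages actually delivered to the queue
        } catch (JMSException e) {
            session.rollback();    // none of the batch reaches the queue
        } finally {
            connection.close();
        }
    }
}
```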

Message redelivery mechanism:

Situations in which a message is redelivered:
1. The consumer uses a transaction and rollback() is called on the session;
2. The consumer uses a transaction and closes the session before calling commit();
3. The consumer uses CLIENT_ACKNOWLEDGE mode and recover() is called on the session;
4. The consumer's connection goes down while it is consuming, and the message is automatically redelivered to another consumer.
The default maximum number of redeliveries is 6, and the default redelivery interval is 1 second;
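
These defaults can be tuned on the client through ActiveMQ's RedeliveryPolicy; a sketch (the values shown are the documented defaults):

```java
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.RedeliveryPolicy;

public class RedeliveryConfig {
    public static ActiveMQConnectionFactory factoryWithRedelivery() {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        RedeliveryPolicy policy = new RedeliveryPolicy();
        policy.setMaximumRedeliveries(6);          // default: 6 attempts before the message is poisoned
        policy.setInitialRedeliveryDelay(1000);    // default: 1 second between redeliveries
        factory.setRedeliveryPolicy(policy);
        return factory;
    }
}
```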

Acknowledge (message confirmation) mechanism:

The successful consumption of a message consists of three stages:
1. The consumer receives the message from the broker; 2. the consumer processes the message; 3. the consumer acknowledges to the broker that the message has been consumed.
There are three strategies:
1. Automatic acknowledgement (Session.AUTO_ACKNOWLEDGE, the default): when the consumer returns successfully from MessageListener.onMessage(), the session automatically acknowledges the message and the broker deletes it.
2. Manual acknowledgement (Session.CLIENT_ACKNOWLEDGE): the consumer must call message.acknowledge() explicitly; only then does the broker delete the message.
3. Duplicates allowed (Session.DUPS_OK_ACKNOWLEDGE): no acknowledgement is required, and messages may be delivered more than once.
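
A consumer-side sketch of manual acknowledgement (CLIENT_ACKNOWLEDGE); if acknowledge() is never called, the broker keeps the message and redelivers it:

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class ClientAckConsumer {
    public static void main(String[] args) throws JMSException {
        Connection connection =
                new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue("demo.queue"));

        Message message = consumer.receive();
        // ... process the message ...
        message.acknowledge();   // without this call the broker keeps (and redelivers) the message
    }
}
```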

Transport protocols:

The default is the TCP protocol, and the default TCP port is 61616.
The NIO protocol is recommended: it extends and optimizes the TCP protocol and scales better.
Configuration change: in conf/activemq.xml, under the transportConnectors node, change the name attribute of the transportConnector to auto+nio and change its uri attribute to use auto+nio.

Persistent storage:

1. The default is KahaDB: log-file based. Configure the data directory in the persistenceAdapter node (the default is ${activemq}/data/kahadb/).
2. JDBC: based on a third-party database such as MySQL; in the persistenceAdapter node, switch to the jdbc configuration and configure a DataSource.
3. LevelDB.

KahaDB uses a transaction log plus index files for all of its storage.
Files: db-<num>.log, db.data, db.free, db.redo, lock
db-<num>.log: stores the data in files of a predefined size (32 MB per file by default); when a data file is full, a new one is created, and when no data in a file is referenced by the index any more, the file is deleted.
db.data: the message index file; it uses a BTree index pointing to the messages stored in the db-<num>.log files.
db.redo: used for message recovery; if KahaDB is restarted after a forced shutdown, it is used to rebuild the BTree index.
lock: the lock file; it indicates which broker currently has read and write access to this KahaDB store.

JDBC storage:

Three tables are used:
1. activemq_msgs: stores the messages;
2. activemq_acks: stores subscription relationships;
3. activemq_lock: used in a cluster environment; only one broker can be the master at a time;

For a queue, messages are stored in the activemq_msgs table while there is no consumer. As soon as any consumer has consumed a message, it is removed from the table.
For a topic, ordinary (non-durable) subscriptions do not persist messages. For a durable subscription, the subscriber must be registered before the producer produces the message; after that, whether or not the subscriber is online, it will eventually receive the message.
If it is offline, all unreceived messages are delivered and processed when it reconnects.
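
A sketch of a durable topic subscription in standard JMS; the client ID and subscription name are placeholders, and both are needed to identify the durable subscription:

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class DurableSubscriber {
    public static void main(String[] args) throws JMSException {
        Connection connection =
                new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        connection.setClientID("client-1");              // must be set before the connection is started
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("demo.topic");

        // The durable subscription is registered under (clientID, subscription name);
        // messages published while this subscriber is offline are kept and delivered later.
        TopicSubscriber subscriber = session.createDurableSubscriber(topic, "subscription-1");
        Message message = subscriber.receive();
        System.out.println("got: " + message);
    }
}
```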

Development notes:

  1. Put the database driver jar in ActiveMQ's lib directory;
  2. Remove the createTableOnStartup attribute after the first startup, or set it to false;
  3. Use a latin character set for the database encoding;

JDBC can be combined with journaling (jdbc with journal), which uses a write cache: messages are first written to the journal file and then to the database, greatly improving performance.

High availability:

How do we ensure high availability after introducing a message queue?
Use a master-slave cluster based on zookeeper + replicated-leveldb-store, i.e. an ActiveMQ cluster built on ZooKeeper and LevelDB.
The cluster provides high availability in master/standby mode and avoids a single point of failure.
Principle: all ActiveMQ brokers register with the ZooKeeper cluster, but only one broker provides service and acts as the master;
the other brokers stay in standby as slaves. If the master can no longer provide service, ZooKeeper elects one of the slaves to become the new master.
The slaves connect to the master and synchronize its storage state; slaves do not accept client requests.
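
On the client side, such a cluster is usually addressed through a failover URI listing the brokers; a sketch with placeholder host names:

```java
import org.apache.activemq.ActiveMQConnectionFactory;

public class FailoverClient {
    public static ActiveMQConnectionFactory clusterFactory() {
        // The failover transport connects to whichever broker is currently the master
        // and transparently reconnects when a new master is elected.
        return new ActiveMQConnectionFactory(
                "failover:(tcp://mq-node1:61616,tcp://mq-node2:61616,tcp://mq-node3:61616)");
    }
}
```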

Asynchronous delivery:

For slow consumers, sending messages synchronously can block the producer, so asynchronous delivery is suitable when consumers are slow.
ActiveMQ uses asynchronous sending by default, except in two cases that are synchronous: synchronous sending is explicitly specified, or persistent messages are sent outside a transaction.
Usage scenario: a small amount of data loss on failure is acceptable, and messages are sent at a relatively high rate.
Asynchronous sending can be enabled in three places (see the sketch below):
1. the connection URI;
2. the ConnectionFactory;
3. the Connection.
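
A sketch of the three configuration points (ActiveMQ 5.x client API; the cast to ActiveMQConnection is needed for the per-connection setter):

```java
import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnection;
import org.apache.activemq.ActiveMQConnectionFactory;

public class AsyncSendConfig {
    public static Connection connect() throws Exception {
        // 1. On the connection URI
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616?jms.useAsyncSend=true");

        // 2. On the ConnectionFactory
        factory.setUseAsyncSend(true);

        // 3. On the Connection
        Connection connection = factory.createConnection();
        ((ActiveMQConnection) connection).setUseAsyncSend(true);

        connection.start();
        return connection;
    }
}
```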

How do we determine whether an asynchronous send succeeded?
The asynchronous sending method needs to receive a callback.

The difference between synchronous and asynchronous sending:
with synchronous sending, send() blocks, and returning from it means the send succeeded;
with asynchronous sending, the client must receive the callback and determine for itself whether the send succeeded.
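
The ActiveMQ client exposes a send variant that takes a callback; a sketch (ActiveMQMessageProducer and AsyncCallback are ActiveMQ-specific classes, not part of plain JMS):

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.ActiveMQMessageProducer;
import org.apache.activemq.AsyncCallback;

public class AsyncSendWithCallback {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616?jms.useAsyncSend=true");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        ActiveMQMessageProducer producer =
                (ActiveMQMessageProducer) session.createProducer(session.createQueue("demo.queue"));
        TextMessage message = session.createTextMessage("async payload");

        producer.send(message, new AsyncCallback() {
            @Override
            public void onSuccess() {
                System.out.println("broker accepted the message");   // mark as sent
            }
            @Override
            public void onException(JMSException exception) {
                exception.printStackTrace();                          // not accepted: log / retry / compensate
            }
        });
    }
}
```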

Delayed delivery:

The broker node in conf/activemq.xml must have schedulerSupport="true";
there are four main parameters (see the sketch below):
AMQ_SCHEDULED_DELAY: the delay before delivery;
AMQ_SCHEDULED_PERIOD: the interval between repeated deliveries;
AMQ_SCHEDULED_REPEAT: the number of repeated deliveries;
AMQ_SCHEDULED_CRON: a cron expression;
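
With schedulerSupport enabled on the broker, these parameters are set as message properties; a sketch using the constants from org.apache.activemq.ScheduledMessage (the delay values are illustrative):

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.ScheduledMessage;

public class DelayedDelivery {
    public static void main(String[] args) throws JMSException {
        Connection connection =
                new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createQueue("demo.queue"));

        TextMessage message = session.createTextMessage("delayed payload");
        message.setLongProperty(ScheduledMessage.AMQ_SCHEDULED_DELAY, 10_000);  // deliver after 10 s
        message.setLongProperty(ScheduledMessage.AMQ_SCHEDULED_PERIOD, 5_000);  // then every 5 s
        message.setIntProperty(ScheduledMessage.AMQ_SCHEDULED_REPEAT, 3);       // repeat 3 more times
        // message.setStringProperty(ScheduledMessage.AMQ_SCHEDULED_CRON, "0 * * * *"); // or a cron expression

        producer.send(message);
        connection.close();
    }
}
```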

Dead letter queue:

Poison ACK:
when a message exceeds the maximum number of redeliveries (6 by default), the consumer sends a "poison ack" to the broker, indicating that the message is poisonous and telling the broker not to send it again.
The broker then puts the message into the dead letter queue.
All dead letters are stored in a shared queue, which defaults to ActiveMQ.DLQ.

How to ensure that the message is not repeatedly consumed?

Use a third-party service to record consumption, for example Redis: assign each message a global id, and once the message has been consumed, store <id, message> in Redis as a key-value pair. Before consuming a message, the consumer first checks Redis for an existing consumption record.
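
A minimal sketch of this idea with the Jedis client; the key prefix, the use of the JMS message id as the global id, and SETNX are illustrative assumptions rather than the only possible design:

```java
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.TextMessage;
import redis.clients.jedis.Jedis;

public class IdempotentConsumer {
    private final Jedis redis = new Jedis("localhost", 6379);

    // Returns true if this message id was processed for the first time.
    public boolean consumeOnce(Message message) throws JMSException {
        String id = message.getJMSMessageID();                 // or an application-level global id
        // SETNX only succeeds if the key does not exist yet, i.e. the message was not consumed before.
        long firstTime = redis.setnx("consumed:" + id, ((TextMessage) message).getText());
        if (firstTime == 0) {
            return false;                                      // already consumed, skip
        }
        // ... actual business processing goes here ...
        return true;
    }
}
```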
