Concurrent consumption of the same queue in RocketMQ's cluster mode: how messages are kept from being lost, and how consumption progress is maintained

RocketMQ guarantees that messages are not lost; understanding how requires studying how consumption progress is maintained. The same mechanism can lead to repeated consumption. This article does not consider cases where the broker goes down and cannot be restarted, hot standby, and so on.

 

First, the authoritative root data is kept in the ConsumerOffsetManager class on each broker:

 

```java
/**
 * Consumer consumption progress management
 *
 * @author shijia.wxr<[email protected]>
 * @since 2013-8-11
 */
public class ConsumerOffsetManager extends ConfigManager {
    private ConcurrentHashMap<String/* topic@group */, ConcurrentHashMap<Integer, Long>> offsetTable =
            new ConcurrentHashMap<String, ConcurrentHashMap<Integer, Long>>(512);
}
```

 

 
 

A cached copy lives in the RemoteBrokerOffsetStore of each consumer.
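Its core field looks roughly like this (a sketch based on the same code base; exact generics and modifiers vary by version):

```java
public class RemoteBrokerOffsetStore implements OffsetStore {
    // Local cache of consumption progress, keyed by message queue.
    // Seeded from the broker's offsetTable on startup/rebalance and
    // updated locally after each batch of messages is consumed.
    private ConcurrentHashMap<MessageQueue, AtomicLong> offsetTable =
            new ConcurrentHashMap<MessageQueue, AtomicLong>();
}
```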

 

 

 

The broker saves the progress using the commitOffset passed up by the consumer (I had originally assumed it was the broker-side read offsetMax). That commitOffset comes from the consumer's local RemoteBrokerOffsetStore, whose contents originally came from the broker's offsetTable.

* Broker offset update process:

 

1. The consumer periodically calls RemoteBrokerOffsetStore.persistAll to push its cached offsets to the broker (a sketch of this scheduled task appears after the note below).

2. Every PULL_MESSAGE request uploads a commitOffset taken from the consumer's local offsetTable; a broker-side sketch follows the code below. This commitOffset comes from:

   1. Initially, the consumer startup phase: when the load-balancing (rebalance) service starts, the offset fetched from the broker is set into the pullRequest (see http://blog.csdn.net/quhongwei_zhanqiu/article/details/39142693).

   2. Afterwards, the offset value maintained locally on the consumer (see the consumer offset update process below). The relevant code:

 

```java
// DefaultMQPushConsumerImpl.pullMessage(PullRequest)
// (com.alibaba.rocketmq.client.impl.consumer)
boolean commitOffsetEnable = false;
long commitOffsetValue = 0L;
if (MessageModel.CLUSTERING == this.defaultMQPushConsumer.getMessageModel()) {
    commitOffsetValue =
            this.offsetStore.readOffset(pullRequest.getMessageQueue(),
                ReadOffsetType.READ_FROM_MEMORY);
    if (commitOffsetValue > 0) {
        commitOffsetEnable = true;
    }
}
```
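On the broker side, the uploaded commitOffset is absorbed in the pull processor. A minimal sketch of that handling, following the shape of PullMessageProcessor in the same code base (names are simplified, and the commitOffset signature shown is the old one; newer versions add a client-host argument):

```java
// Sketch of the broker-side handling inside PullMessageProcessor (simplified).
// The offset is only persisted when the consumer set commitOffsetEnable and
// this broker is a master; slaves never store consumption progress.
boolean storeOffsetEnable = brokerAllowSuspend && hasCommitOffsetFlag
        && this.brokerController.getMessageStoreConfig().getBrokerRole() != BrokerRole.SLAVE;
if (storeOffsetEnable) {
    // Updates the in-memory offsetTable under the "topic@group" key;
    // ConfigManager flushes it to disk periodically.
    this.brokerController.getConsumerOffsetManager().commitOffset(
        requestHeader.getConsumerGroup(),
        requestHeader.getTopic(),
        requestHeader.getQueueId(),
        requestHeader.getCommitOffset());
}
```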

 

 

Note: in push mode, subsequent pulls keep fetching data via minOffset; they are not affected by the consumption progress offset.
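As for item 1 of the list above, the periodic persistAll call is driven by a scheduled task in the client instance. A minimal sketch of that wiring, modeled on MQClientInstance.startScheduledTask (the default persistConsumerOffsetInterval is 5000 ms):

```java
// Sketch: MQClientInstance schedules offset persistence for all consumers.
// In cluster mode this ends up in RemoteBrokerOffsetStore.persistAll, which
// sends each queue's cached offset to the broker.
this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {
    @Override
    public void run() {
        try {
            MQClientInstance.this.persistAllConsumerOffset();
        } catch (Exception e) {
            log.error("ScheduledTask persistAllConsumerOffset exception", e);
        }
    }
}, 1000 * 10, this.clientConfig.getPersistConsumerOffsetInterval(), TimeUnit.MILLISECONDS);
```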

Consumer offset update process (three paths in total):

1. After each successful consumption, the consumed messages are removed from the local queue and the smallest remaining offset ( result = msgTreeMap.firstKey(); ) is used for the update:

 

```java
// In ConsumeMessageConcurrentlyService.java
public void processConsumeResult(ConsumeConcurrentlyStatus status,
        ConsumeConcurrentlyContext context, ConsumeRequest consumeRequest) {
    // ...
    // Author's note: the returned value is the minimum offset left in the current queue
    long offset = consumeRequest.getProcessQueue().removeMessage(consumeRequest.getMsgs());
    if (offset >= 0) {
        this.defaultMQPushConsumerImpl.getOffsetStore().updateOffset(consumeRequest.getMessageQueue(),
            offset, true);
    }
}
```

 

Tip: removeMessage(consumeRequest.getMsgs()) returns the minimum offset still left in the local queue, not the offset of the message that was just consumed successfully.

 

Example: with multi-threaded consumption, suppose 10 messages with offsets 100 through 109 are pulled in one batch. If the last one (109) finishes first while message 100 is still being consumed, the local offset is updated to 100, not 109. This implementation guarantees that no message is lost, but it allows repeated consumption.
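The effect is easy to reproduce with a plain TreeMap, which is what ProcessQueue uses internally for msgTreeMap. A toy, self-contained illustration (class and variable names here are mine, not RocketMQ's):

```java
import java.util.TreeMap;

public class MinOffsetDemo {
    public static void main(String[] args) {
        // Pulled batch: offsets 100..109 waiting to be consumed.
        TreeMap<Long, String> msgTreeMap = new TreeMap<Long, String>();
        for (long offset = 100; offset <= 109; offset++) {
            msgTreeMap.put(offset, "msg-" + offset);
        }

        // Message 109 finishes first and is removed...
        msgTreeMap.remove(109L);

        // ...but the committable offset is still the smallest offset
        // left in the queue, because 100..108 are still in flight.
        long committable = msgTreeMap.firstKey();
        System.out.println(committable); // prints 100
    }
}
```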

PS:

    Whether or not consumption returns a success status, the two steps above are always performed. On failure, the message is sent back to the broker's retry queue; on success, it is deleted from the local queue. If sending it back to the broker fails (for example, for network reasons), the message is resubmitted to the local queue for later consumption, yet it is still deleted from the process queue and the consumption progress offset still moves forward.

   If the consumer restarts at that moment, the message is lost: the progress has advanced, but the message was neither consumed nor sent back to the broker. I consider this a bug.

 

 

 

        2. The RequestCode.RESET_CONSUMER_CLIENT_OFFSET command

         3.  DefaultMQPushConsumerImpl.resetOffsetByTimeStamp 

        Advanced usage, for example: collect messages from multiple MQs in memory and flush them asynchronously to a database. The committed offset may then run ahead of what has actually been persisted. By periodically saving the timestamp of the last persisted message, the offset can be rolled back automatically on every restart. This guarantees that messages are not lost, at the cost of repeated consumption; it suits high-frequency message streams.

This is similar to real-time computing frameworks, which buffer data in memory first and then aggregate.
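A hedged sketch of that pattern, assuming the old com.alibaba.rocketmq client API; loadLastPersistedTimestamp() is a hypothetical helper you would back with your own database, and the group/topic names are placeholders:

```java
import com.alibaba.rocketmq.client.consumer.DefaultMQPushConsumer;

public class RewindOnRestart {
    public static void main(String[] args) throws Exception {
        DefaultMQPushConsumer consumer = new DefaultMQPushConsumer("example_group"); // hypothetical group
        consumer.subscribe("example_topic", "*");                                    // hypothetical topic
        // ... register a MessageListenerConcurrently here that batches messages
        //     in memory and asynchronously flushes them to the database ...
        consumer.start();

        // Rewind consumption progress to the timestamp of the last batch known
        // to be safely in the database, so anything newer is re-consumed.
        long lastSafeTimestamp = loadLastPersistedTimestamp();
        consumer.getDefaultMQPushConsumerImpl().resetOffsetByTimeStamp(lastSafeTimestamp);
    }

    // Hypothetical helper: read the saved "safe" timestamp from your database.
    private static long loadLastPersistedTimestamp() {
        return System.currentTimeMillis() - 60 * 1000; // placeholder: one minute ago
    }
}
```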

References: see the RocketMQ source code on GitHub.

On how the store keeps messages from being lost: http://blog.csdn.net/azhao_dn/article/details/7008590
