Kafka consumer out-of-order consumption issue

Problem:

To reproduce:

Produce the data: an add followed by a delete:

// Use Kafka to simulate an add followed by a delete
Message message = new Message(UUID.randomUUID().toString(), "add product", new Date());
kafkaTemplate.send("product", JSON.toJSONString(message));
Message message1 = new Message(UUID.randomUUID().toString(), "delete product", new Date());
kafkaTemplate.send("product", JSON.toJSONString(message1));

In this experiment the Kafka cluster has three nodes, so the topic has three partitions. The producer selects a target partition for each record according to these rules:

(1) If a partition is specified, it is used directly;

(2) If no partition is specified but a key is given, the partition is derived from a hash of the key;

(3) If neither partition nor key is specified, a partition is chosen by round-robin polling.
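Rule (2) is what makes keys useful for ordering: the same key always hashes to the same partition. A minimal sketch of the idea (using String.hashCode() as a simplified stand-in for Kafka's actual murmur2 partitioner; the class and method names here are illustrative, not Kafka API):

```java
// Toy illustration of rule (2): with a key, the partition is derived from a
// hash of the key, so every record with the same key lands on the same
// partition. Real Kafka uses murmur2 over the serialized key bytes; this
// simplified version uses String.hashCode() instead.
public class KeyPartitionDemo {
    static int partitionFor(String key, int numPartitions) {
        // Mask off the sign bit so the result is non-negative, then take
        // the remainder to map the hash into [0, numPartitions).
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        int p1 = partitionFor("product-42", 3);
        int p2 = partitionFor("product-42", 3);
        // prints "same key -> same partition: true"
        System.out.println("same key -> same partition: " + (p1 == p2));
    }
}
```

Because one partition is consumed by exactly one consumer in a group, sending the add and the delete with the same key would also keep them in order, without hard-coding a partition number.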

Consumed data:

Consumer-A

2019-12-13 20:29:26.437 [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] INFO  c.i.k.spring_kafka.KafkaListeners - Consumed message persisted ---- Message{id='e819cb58-4ac2-485e-84dc-8a782c68f631', msg='add product', sendTime=Fri Dec 13 20:29:25 CST 2019}

2019-12-13 20:29:26.437---->add product

Consumer-A1

2019-12-13 20:29:26.384 [org.springframework.kafka.KafkaListenerEndpointContainer#0-9-C-1] INFO  c.i.k.spring_kafka.KafkaListeners - Consumed message persisted ---- Message{id='bc895e02-ce52-46a4-86de-259d40c16b82', msg='delete product', sendTime=Fri Dec 13 20:29:26 CST 2019}

2019-12-13 20:29:26.384----->delete product

The consumers processed the delete before the add, so dirty data appeared in the database!

Cause:

The producer sends records to the partitions in round-robin order, while the consumers in the group are each assigned different partitions (one consumer may consume one or two partitions). With related records spread across partitions, multiple consumers consume them concurrently, so the consumption order is no longer guaranteed and gets scrambled.

Solution:

Have the producer specify the partition when producing the data. Kafka's consumption model guarantees that, within a consumer group, each partition is consumed by exactly one consumer, so records written to the same partition are consumed in order.

Producing the data:

// Use Kafka to simulate an add followed by a delete, both sent to partition 1
// public ListenableFuture<SendResult<K, V>> send(String topic, int partition, V data)
Message message = new Message(UUID.randomUUID().toString(), "add product", new Date());
kafkaTemplate.send("product", 1, JSON.toJSONString(message));
Message message1 = new Message(UUID.randomUUID().toString(), "delete product", new Date());
kafkaTemplate.send("product", 1, JSON.toJSONString(message1));
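Why does a single partition fix the ordering? The reasoning can be sketched with a toy model in which a partition is just a FIFO queue drained by one consumer (plain Java, no broker involved; all names are made up for illustration):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Toy model of per-partition ordering: a partition behaves like a FIFO
// queue, and exactly one consumer in the group drains it. When "add" and
// "delete" go to the SAME partition, they are consumed in send order.
public class SinglePartitionOrderDemo {
    public static void main(String[] args) {
        Deque<String> partition1 = new ArrayDeque<>();
        // Producer: both messages explicitly sent to partition 1.
        partition1.addLast("add product");
        partition1.addLast("delete product");

        // Consumer: drains the partition strictly in FIFO order.
        List<String> consumed = new ArrayList<>();
        while (!partition1.isEmpty()) {
            consumed.add(partition1.pollFirst());
        }
        // prints "[add product, delete product]"
        System.out.println(consumed);
    }
}
```

In the round-robin case the two messages would sit in two different queues, each drained by a different consumer, and nothing constrains which consumer runs first; that is exactly the race observed above.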

Also, when I specified partition = 3, the error below occurred: the Kafka cluster has three nodes, so the partitions are 0/1/2 and there is no partition 3.

kafkaTemplate.send("product", 3, JSON.toJSONString(message));

2019-12-14 10:44:27.687 [http-nio-8080-exec-1] ERROR o.a.c.c.C.[.[.[.[dispatcherServlet] - Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.apache.kafka.common.KafkaException: Invalid partition given with record: 3 is not in the range [0...3).] with root cause
org.apache.kafka.common.KafkaException: Invalid partition given with record: 3 is not in the range [0...3).
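The range check behind this error can be sketched as follows (a hypothetical re-creation for illustration; the real validation lives inside the Kafka producer client and throws KafkaException rather than IllegalArgumentException):

```java
// Hypothetical stand-in for the producer's partition validation. A topic
// with N partitions accepts only partition indices in [0, N), which is why
// partition 3 fails on a three-partition topic (valid indices: 0/1/2).
public class PartitionRangeCheck {
    static void validatePartition(int partition, int numPartitions) {
        if (partition < 0 || partition >= numPartitions) {
            throw new IllegalArgumentException("Invalid partition given with record: "
                    + partition + " is not in the range [0..." + numPartitions + ").");
        }
    }

    public static void main(String[] args) {
        validatePartition(1, 3); // ok: partitions are 0/1/2
        try {
            validatePartition(3, 3); // rejected: 3 is not in [0...3)
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```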

Consuming the data:

2019-12-13 20:39:06.784 [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] INFO  c.i.k.spring_kafka.KafkaListeners - Consumed message persisted ---- Message{id='9aec3a47-dc5a-40de-ba93-2a60a1865d4c', msg='add product', sendTime=Fri Dec 13 20:39:06 CST 2019}
2019-12-13 20:39:06.784 [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] INFO  c.i.k.spring_kafka.KafkaListeners - Consumed message persisted ---- Message{id='bd918926-9981-4cba-b7c8-3a220a0219de', msg='delete product', sendTime=Fri Dec 13 20:39:06 CST 2019}

 

2019-12-13 20:39:22.702 [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] INFO  c.i.k.spring_kafka.KafkaListeners - Consumed message persisted ---- Message{id='3aac99d7-ce91-49ce-ab7e-80484c6702b7', msg='add product', sendTime=Fri Dec 13 20:39:22 CST 2019}
2019-12-13 20:39:22.702 [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] INFO  c.i.k.spring_kafka.KafkaListeners - Consumed message persisted ---- Message{id='cf44778d-31e2-4b73-a7a3-69b42a7581b8', msg='delete product', sendTime=Fri Dec 13 20:39:22 CST 2019}

 

2019-12-13 20:39:24.050 [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] INFO  c.i.k.spring_kafka.KafkaListeners - Consumed message persisted ---- Message{id='26705470-b9ad-46a4-8c10-754d9761537d', msg='add product', sendTime=Fri Dec 13 20:39:24 CST 2019}
2019-12-13 20:39:24.050 [org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1] INFO  c.i.k.spring_kafka.KafkaListeners - Consumed message persisted ---- Message{id='7369b9c8-cbed-416f-bbc3-73b814cc7c39', msg='delete product', sendTime=Fri Dec 13 20:39:24 CST 2019}
 

1) After repeated tests, the messages were always routed to Consumer-A1, which solves the consumption-ordering problem.

2) When the Consumer-A1 process was killed manually and the producer produced again, the messages were rerouted to Consumer-A, which then consumed the data.



Origin blog.csdn.net/nmjhehe/article/details/103533016