Kafka Development Summary

Disclaimer: This is an original article by the blogger; do not reproduce without the blogger's permission. https://blog.csdn.net/change_on/article/details/87451323

For the last two months our project has been using Kafka, and it is finally running through the production environment. We stepped on quite a few pits along the way, so here is a summary.

General requirements: build an image-processing tool that handles about 100,000 images per day. Because the machine rooms are in different locations, there are two Kafka cluster environments and several business systems.

Development approach:
1. Prepare the intranet environment beforehand
2. Set up a single-node Kafka
3. Develop the business systems
4. Run the business processes through end to end
5. Configure the two Kafka environments and the Kafka clusters
6. Run each subsystem on two machines to simulate a cluster
7. Migrate to the production environment

That is the overall picture. Here are some Kafka notes from the process.

1. The partition count should be an integer multiple of the consumer count; I used 2 times.
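To see why the multiple matters, here is a minimal self-contained sketch (plain Java, no Kafka dependency; the round-robin assignment below is only an illustration of how a consumer group divides partitions, not Kafka's exact assignor): when the partition count divides evenly by the consumer count, every consumer owns the same number of partitions and none sits idle or lags behind the others.

```java
import java.util.*;

public class PartitionAssignment {
    // Round-robin assignment of partition ids to consumers,
    // illustrating how a consumer group divides partitions.
    static Map<String, List<Integer>> assign(int partitions, List<String> consumers) {
        Map<String, List<Integer>> result = new LinkedHashMap<>();
        for (String c : consumers) result.put(c, new ArrayList<>());
        for (int p = 0; p < partitions; p++) {
            result.get(consumers.get(p % consumers.size())).add(p);
        }
        return result;
    }

    public static void main(String[] args) {
        // 4 partitions, 2 consumers: each consumer owns exactly 2 partitions
        System.out.println(assign(4, Arrays.asList("c1", "c2")));
        // 3 partitions, 2 consumers: uneven load, c1 gets 2 and c2 gets 1
        System.out.println(assign(3, Arrays.asList("c1", "c2")));
    }
}
```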

Kafka startup and shutdown commands (the stop script takes no config argument):

bin/kafka-server-start.sh config/server.properties
bin/kafka-server-stop.sh

2. Topic names should follow a convention: topic_<role>_<receiver>_<sender>

Creation command:

bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 2 --topic topic_info_receiver_sender

3. Check consumption status on the Kafka server

kafka-consumer-groups.sh --zookeeper xxx.xxx.xxx.xxx:2181 --group groupid --describe
GROUP                          TOPIC                          PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             OWNER
xxx                           xxx                            xxx            xxx          xxx             xxx

CURRENT-OFFSET is how far the consumer group has read, LOG-END-OFFSET is the total number of messages written to the partition, and LAG (LOG-END-OFFSET minus CURRENT-OFFSET) is the number of messages produced but not yet consumed.
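The relationship between the three offset columns can be written out directly. A tiny sketch (plain Java; the method and parameter names are mine, not part of the CLI output):

```java
public class ConsumerLag {
    // LOG-END-OFFSET: total messages ever written to the partition.
    // CURRENT-OFFSET: how far the consumer group has read.
    // LAG: messages produced but not yet consumed.
    static long lag(long logEndOffset, long currentOffset) {
        return logEndOffset - currentOffset;
    }

    public static void main(String[] args) {
        // 1000 messages written, 920 consumed: a backlog of 80 messages
        System.out.println(lag(1000, 920)); // prints 80
    }
}
```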

4. Delete the topic

First try the command:

bin/kafka-topics.sh --delete --zookeeper localhost:2181 --topic xxx

If the command has no effect (the broker must have delete.topic.enable=true for it to work), go into zookeeper/bin:

./zkCli.sh
ls /brokers/topics          (list all topics)
rmr /brokers/topics/xxx     (delete a topic)

5. Cluster configuration

Copy ZooKeeper and Kafka to the new machine and change a couple of config files; there are plenty of guides online, so I will not repeat them here.
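As a minimal sketch for the older ZooKeeper-based Kafka this article uses (all values are placeholders; adapt them to your hosts), the per-broker fields that typically have to change in Kafka's server.properties are:

```properties
# server.properties -- broker.id must be unique per broker in the cluster
broker.id=1
# point every broker at the same ZooKeeper ensemble
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181
# the address and port this broker listens on
port=9092
host.name=kafka1
```

On the ZooKeeper side, each node additionally needs its own myid file matching the server list in zoo.cfg.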

6. Code level

Because more than one business system operates on Kafka, the producer and consumer code is encapsulated in two separate projects, configured via annotations and pulled into the other projects through Maven:

<dependency>
    <groupId>com.xxx.kafka</groupId>
    <artifactId>kafka-producer</artifactId>
    <version>0.0.1</version>
</dependency>

<dependency>
    <groupId>com.xxx.kafka</groupId>
    <artifactId>kafka-consumer</artifactId>
    <version>0.0.1</version>
</dependency>

7. If a subsystem needs to connect to both Kafka clusters, i.e. it has two producers, watch out for caching: preferably update the brokerList in code right before sending.

<bean id="southKafkaProducer" class="com.xxx.kafka.producer.KafkaProducer" init-method="init" lazy-init="true">
    <property name="brokerlist" value="${metadata.broker.list}"></property>
    <property name="partion" value="${partion}"></property>
    <property name="bufferingMaxMs" value="${bufferingMaxMs}"></property>
    <property name="bufferingmaxmessages" value="${bufferingmaxmessages}"></property>
</bean>

<bean id="northKafkaProducer" class="com.xxx.kafka.producer.KafkaProducer" init-method="init" lazy-init="true">
    <property name="brokerlist" value="${metadata.broker.list2}"></property>
    <property name="partion" value="${partion}"></property>
    <property name="bufferingMaxMs" value="${bufferingMaxMs}"></property>
    <property name="bufferingmaxmessages" value="${bufferingmaxmessages}"></property>
</bean>
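The caching pitfall above can be shown concretely. This is a self-contained sketch with a stub class standing in for the project's wrapped producer (the class, field, and broker names here are all illustrative, not the real API): set the broker list immediately before each send so a value cached from the other cluster is never reused.

```java
import java.util.ArrayList;
import java.util.List;

public class DualClusterProducerDemo {
    // Stub standing in for the project's wrapped producer bean.
    static class WrappedProducer {
        String brokerlist;
        final List<String> log = new ArrayList<>();
        void send(String msg) {
            // record which cluster the message actually went to
            log.add(brokerlist + " <- " + msg);
        }
    }

    public static void main(String[] args) {
        WrappedProducer producer = new WrappedProducer();
        // refresh brokerlist right before sending, never rely on a stale value
        producer.brokerlist = "south-kafka:9092";
        producer.send("image-task-1");
        producer.brokerlist = "north-kafka:9092";
        producer.send("image-task-2");
        System.out.println(producer.log);
    }
}
```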

8. About Tuning

  • Memory: the more the better; for example, on a machine with 8G we gave Kafka 7.9G
  • Clean up logs regularly with a script
  • To prevent messages from accumulating on a topic, put a bounded queue in front of the producer and cap the number of messages waiting to be sent to Kafka; this is covered in detail in two earlier posts
  • Message size limits: business needs meant we could not prioritize throughput, so we set the limit large for stability

props.put("fetch.message.max.bytes", "2048576000");

  • Do not touch Kafka unless you have to! Do not touch Kafka unless you have to! Do not touch Kafka unless you have to!
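The queue-limit bullet above can be sketched with a bounded BlockingQueue standing between the business code and the producer (plain Java, no Kafka dependency; the capacity and method names are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BoundedSendBuffer {
    private final BlockingQueue<String> queue;

    BoundedSendBuffer(int capacity) {
        this.queue = new ArrayBlockingQueue<>(capacity);
    }

    // offer() returns false instead of blocking when the queue is full,
    // so a backlog is rejected at the edge rather than piling up in Kafka.
    boolean trySubmit(String message) {
        return queue.offer(message);
    }

    // a sender thread would drain this and hand messages to the producer
    String takeNext() throws InterruptedException {
        return queue.take();
    }

    public static void main(String[] args) {
        BoundedSendBuffer buffer = new BoundedSendBuffer(2);
        System.out.println(buffer.trySubmit("m1")); // true
        System.out.println(buffer.trySubmit("m2")); // true
        System.out.println(buffer.trySubmit("m3")); // false: over the limit, caller backs off
    }
}
```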
