Introduction to the use of bboss kafka components

The git address of the gradle source code project containing the examples used in this article:
http://git.oschina.net/bboss/bestpractice
testkafka sub-project address:
http://git.oschina.net/bboss/bestpractice/tree/master/testkafka
bboss kafka component roles:
  • Quickly configure kafka producers and consumers
  • Send data to kafka
  • Receive and process data from kafka (supports both batch and per-message processing)

1. Import the maven coordinates of the bboss kafka component

<dependency>
    <groupId>com.bbossgroups.plugins</groupId>
    <artifactId>bboss-plugin-kafka</artifactId>
    <version>5.0.5.7</version>
</dependency>

gradle coordinates
compile 'com.bbossgroups.plugins:bboss-plugin-kafka:5.0.5.7'



2. Use kafka producer to send messages
2.1 kafka producer configuration
Write a kafka.xml configuration file and put it under the classpath root path:
<properties>
	<property name="productorPropes">
		<propes>

			<property name="value.serializer" value="org.apache.kafka.common.serialization.StringSerializer">
				<description> <![CDATA[ Specifies the serializer class for message values ]]></description>
			</property>
			<property name="key.serializer" value="org.apache.kafka.common.serialization.LongSerializer">
				<description> <![CDATA[ Specifies the serializer class for message keys ]]></description>
			</property>

			<property name="compression.type" value="gzip">
				<description> <![CDATA[ Message compression type: none (the default), gzip, or snappy. Compressed messages carry a header indicating the compression type, so decompression on the consumer side is transparent and needs no extra configuration ]]></description>
			</property>
			<property name="bootstrap.servers" value="hadoop85:9092,hadoop86:9092,hadoop88:9092">
				<description> <![CDATA[ Specifies a list of kafka brokers used to bootstrap cluster metadata; it does not have to contain all brokers ]]></description>
			</property>
		</propes>
	</property>
	<property name="workerThreadSize" value="100"/>
	<property name="workerThreadQueueSize" value="10240"/>
	<property name="kafkaproductor"
		class="org.frameworkset.plugin.kafka.KafkaProductor"
		init-method="init"
		f:sendDatatoKafka="true"
		f:sendAsyn="true"
		f:productorPropes="attr:productorPropes"/>

</properties>


Related configuration instructions:

bootstrap.servers: kafka broker address list
value.serializer: serializer class for message values
key.serializer: serializer class for message keys
f:sendDatatoKafka="true": whether message sending is enabled (true to enable, false to disable)
f:sendAsyn="true": whether the component sends messages asynchronously, default true
workerThreadSize: size of the thread pool used for asynchronous sends, default 100
workerThreadQueueSize: size of the queue used for asynchronous sends, default 10240
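
For example, if message keys should be plain strings rather than longs, the serializer pair can be adjusted as in the sketch below (the rest of kafka.xml stays unchanged; the keys passed to send(...) must then be strings to match):

<property name="value.serializer" value="org.apache.kafka.common.serialization.StringSerializer">
	<description> <![CDATA[ String message values ]]></description>
</property>
<property name="key.serializer" value="org.apache.kafka.common.serialization.StringSerializer">
	<description> <![CDATA[ String message keys instead of longs ]]></description>
</property>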

2.2 Send kafka messages

Related components:
org.frameworkset.plugin.kafka.KafkaUtil
org.frameworkset.plugin.kafka.KafkaProductor

The KafkaUtil component loads the configuration file and obtains a KafkaProductor instance; kafka messages are then sent through the KafkaProductor:

KafkaProductor productor = KafkaUtil.getKafkaProductor("kafkaproductor");
		productor.send("blackcat",	//kafka topic
				1l,		//message key
				"aaa");		//message
		productor.send("blackcat",	//kafka topic
				"bbb");		//message


Send messages asynchronously

<property name="workerThreadSize" value="100"/>
<property name="workerThreadQueueSize" value="10240"/>

<property name="kafkaproductor"
	class="org.frameworkset.plugin.kafka.KafkaProductor"
	init-method="init"
	f:sendDatatoKafka="true"
	f:sendAsyn="true"
	f:productorPropes="attr:productorPropes"/>

Control whether messages are sent asynchronously through the api:

//Send messages asynchronously
productor.send("blackcat",3l,"aaa",true);
productor.send("blackcat",4l,"bbb",true);

//Send messages synchronously
productor.send("blackcat",5l,"aaa",false);
productor.send("blackcat",6l,"bbb",false);
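The boolean flag on each send call selects the delivery mode for that call, presumably taking precedence over the global f:sendAsyn setting; asynchronous sends are handed off to the worker thread pool configured by workerThreadSize and workerThreadQueueSize above.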

3. Receive and process kafka messages
3.1 kafka consumer configuration
Create a new kafkaconsumer.xml file and put it under the classpath root path
<properties>
	<property name="consumerPropes">
		<propes>

			<property name="group.id" value="test">
				<description> <![CDATA[ Specifies the kafka consumer group id ]]></description>
			</property>
			<property name="zookeeper.session.timeout.ms" value="30000">
				<description> <![CDATA[ Specifies the zookeeper session timeout ]]></description>
			</property>

			<property name="auto.commit.interval.ms" value="3000">
				<description> <![CDATA[ Specifies the interval at which kafka auto-commits consumer offsets ]]></description>
			</property>

			<property name="auto.offset.reset" value="smallest">
				<description> <![CDATA[ Where to start consuming when no initial offset exists: smallest starts from the earliest available offset ]]></description>
			</property>
			<property name="zookeeper.connect" value="hadoop85:2181,hadoop86:2181,hadoop88:2181">
				<description> <![CDATA[ Specifies the address list of the zookeeper cluster that coordinates the kafka cluster ]]></description>
			</property>

		</propes>
	</property>
	<property name="kafkaconsumer"
		class="org.frameworkset.plugin.kafka.KafkaBatchConsumer" init-method="init"
		f:batchsize="-1"
		f:checkinterval="10000"
		f:productorPropes="attr:consumerPropes" f:topic="blackcat"
		f:storeService="attr:storeService" f:partitions="4" />
	<property name="storeService"
		class="org.frameworkset.plugin.kafka.StoreServiceTest" />

</properties>

Configuration description:

storeService: the message processing component
zookeeper.connect: the address of the zookeeper cluster that manages the kafka servers
f:topic="blackcat": the kafka topic to consume
f:partitions="4": the number of partitions of the topic, which determines the number of worker threads processing messages in parallel
f:batchsize="-1": the number of messages per batch; -1 disables batch processing, and a value > 0 submits messages to the storeService component in batches of that size
f:checkinterval="10000": the maximum wait time for a batch of messages to arrive, in milliseconds

In batch mode, if more than checkinterval milliseconds have passed and the messages received so far have not reached batchsize, the current batch of data is forcibly submitted to the storeService component. A batch-mode variant of the consumer bean is sketched below.
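
For instance, the following sketch (the batchsize and checkinterval values are illustrative) delivers messages to storeService in batches of up to 100, or sooner when 5000 milliseconds elapse without the batch filling up:

<property name="kafkaconsumer"
	class="org.frameworkset.plugin.kafka.KafkaBatchConsumer" init-method="init"
	f:batchsize="100"
	f:checkinterval="5000"
	f:productorPropes="attr:consumerPropes" f:topic="blackcat"
	f:storeService="attr:storeService" f:partitions="4" />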
3.2 Write the message processing component

Related components:
org.frameworkset.plugin.kafka.KafkaConsumer
org.frameworkset.plugin.kafka.StoreService

Message processing components need to implement the interface
org.frameworkset.plugin.kafka.StoreService:

//Process messages one by one
public void store(MessageAndMetadata<byte[], byte[]> message) throws Exception;
public void closeService();
//Process messages in batches
public void store(List<MessageAndMetadata<byte[], byte[]>> messages) throws Exception;

StoreServiceTest implementation:
package org.frameworkset.plugin.kafka;

import org.apache.kafka.common.serialization.LongDeserializer;
import org.apache.kafka.common.serialization.StringDeserializer;

import kafka.message.MessageAndMetadata;

public class StoreServiceTest extends BaseStoreService {
	StringDeserializer sd = new StringDeserializer();
	LongDeserializer ld = new LongDeserializer();
	@Override
	public void store(List<MessageAndMetadata<byte[], byte[]>> messages) throws Exception {
		for(MessageAndMetadata<byte[], byte[]> message:messages){
			String data = sd.deserialize(null,message.message());
			long key = ld.deserialize(null, message.key());
			System.out.println("key="+key+",data="+data);
		}
	}

	@Override
	public void closeService() {
		sd.close();
		ld.close();
	}

	@Override
	public void store(MessageAndMetadata<byte[], byte[]> message) throws Exception {
		String data = sd.deserialize(null,message.message());
		long key = ld.deserialize(null, message.key());
		System.out.println("key="+key+",data="+data);
	}

}
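Note that StoreServiceTest deserializes keys with LongDeserializer and values with StringDeserializer, mirroring the key.serializer (LongSerializer) and value.serializer (StringSerializer) configured for the producer in kafka.xml; if you change the producer serializers, change the deserializers here to match.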


3.3 Load the kafka consumer configuration and start the message receiving thread

BaseApplicationContext context = DefaultApplicationContext.getApplicationContext("kafkaconsumer.xml");
		KafkaListener consumer = context.getTBeanObject("kafkaconsumer", KafkaListener.class);
		Thread t = new Thread(consumer);
		t.start();
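
Putting the pieces together, a minimal end-to-end test could look like the sketch below. The import packages for BaseApplicationContext/DefaultApplicationContext and the KafkaUtil accessor name follow the snippets above and are assumptions to be checked against the testkafka sub-project:

package org.frameworkset.plugin.kafka;

// assumed package for the bboss ioc container classes used above
import org.frameworkset.spi.BaseApplicationContext;
import org.frameworkset.spi.DefaultApplicationContext;

public class KafkaEndToEndTest {
	public static void main(String[] args) throws Exception {
		// Start the message receiving thread (configuration from kafkaconsumer.xml)
		BaseApplicationContext context = DefaultApplicationContext.getApplicationContext("kafkaconsumer.xml");
		KafkaListener consumer = context.getTBeanObject("kafkaconsumer", KafkaListener.class);
		Thread t = new Thread(consumer);
		t.start();

		// Send a few test messages (configuration from kafka.xml);
		// StoreServiceTest should print each key/data pair as it is consumed
		KafkaProductor productor = KafkaUtil.getKafkaProductor("kafkaproductor");
		productor.send("blackcat", 1l, "aaa");		// keyed message, global sendAsyn setting
		productor.send("blackcat", 2l, "bbb", false);	// keyed message, sent synchronously
	}
}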


