Kafka Getting Started Example (Java)

In the previous article we covered setting up Kafka in a Windows environment; in this article we will write a simple producer and consumer to test it.

Start zookeeper-start.bat and kafka-start.bat under bin/windows in turn (I wrote these two .bat files myself for convenient startup). Then let's start the test. Kafka jar package version: kafka_2.9.2-0.8.1.jar.

 

Producer side:

import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class KafkaProducerTest {

	static String topic = "test";

	public static void main(String[] args) {
		Properties props = new Properties();
//		props.put("zookeeper.connect", "10.16.0.200:2181");
		props.put("serializer.class", "kafka.serializer.StringEncoder");
		props.put("producer.type", "async"); // the default is sync
		props.put("compression.codec", "1");
		props.put("metadata.broker.list", "127.0.0.1:9092");
		ProducerConfig config = new ProducerConfig(props);

		Producer<String, Object> producer = new Producer<String, Object>(config);
		KeyedMessage<String, Object> message =
				new KeyedMessage<String, Object>(topic, "hello world");

		producer.send(message);
		producer.close();
	}
}

 

Here, ProducerConfig is the property configuration class for the Producer side. For more properties, refer to the kafka.producer.ProducerConfig class after decompiling the Kafka jar; its property definitions include the required and optional Producer-side properties. The properties set in the code above are the ones that must be configured.

Note that there are two Producer classes in the Kafka jar; we must use the one under the kafka.javaapi.producer package.

KeyedMessage represents the message. Its constructors (as decompiled) are as follows:

public KeyedMessage(String topic, K key, Object partKey, V message) {
    Product.class.$init$(this);
    if (topic == null)
        throw new IllegalArgumentException("Topic cannot be null.");
}

public KeyedMessage(String topic, V message) {
    this(topic, null, null, message);
}

public KeyedMessage(String topic, K key, V message) {
    this(topic, key, key, message);
}

In the two-argument constructor, the first parameter is the topic of the message and the second is the content of the message.

metadata.broker.list is the property configured on the Producer side to specify the metadata (broker) nodes. Multiple nodes are separated by commas (host1:port1,host2:port2). The configuration of clustered nodes will not be described here.
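As a side note, the comma-separated broker list for a small cluster can be assembled like this (a minimal sketch; the host addresses are hypothetical placeholders):

```java
import java.util.Arrays;
import java.util.List;

public class BrokerList {

    // Join broker "host:port" entries with commas -- the format
    // expected by the metadata.broker.list property.
    static String brokerList(List<String> brokers) {
        return String.join(",", brokers);
    }

    public static void main(String[] args) {
        // e.g. props.put("metadata.broker.list", brokerList(...));
        System.out.println(brokerList(Arrays.asList("10.16.0.200:9092", "10.16.0.201:9092")));
        // prints 10.16.0.200:9092,10.16.0.201:9092
    }
}
```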

 

Consumer side:

import java.io.UnsupportedEncodingException;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class KafkaConsumerTest {

	public static void main(String[] args) {
		// specify some consumer properties
		Properties props = new Properties();
		props.put("zookeeper.connect", "127.0.0.1:2181");
		props.put("zookeeper.connectiontimeout.ms", "1000000");
		props.put("group.id", "test_group");
		props.put("zookeeper.session.timeout.ms", "40000");
		props.put("zookeeper.sync.time.ms", "200");
		props.put("auto.commit.interval.ms", "1000");

		// create the connection to the cluster
		ConsumerConfig consumerConfig = new ConsumerConfig(props);
		ConsumerConnector consumerConnector = Consumer.createJavaConsumerConnector(consumerConfig);

		// create 4 partitions of the stream for topic "test", to allow 4 threads to consume
		Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
		topicCountMap.put("test", new Integer(4));
		// key -- topic
		Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap =
				consumerConnector.createMessageStreams(topicCountMap);
		KafkaStream<byte[], byte[]> stream = consumerMap.get("test").get(0);
		ConsumerIterator<byte[], byte[]> it = stream.iterator();
		while (it.hasNext()) {
			try {
				String msg = new String(it.next().message(), "utf-8").trim();
				System.out.println("receive:" + msg);
			} catch (UnsupportedEncodingException e) {
				e.printStackTrace();
			}
		}
	}
}

 

The while(it.hasNext()) loop can be further improved as follows:

while (it.hasNext()) {
	String msg = "";
	byte[] packs = it.next().message();
	InputStream is = new ByteArrayInputStream(packs);
	BufferedReader ioBuffRead = new BufferedReader(new InputStreamReader(is, "utf-8"));
	// if the stream is blocked, i.e. cannot be read, ready() returns false
	while (ioBuffRead.ready()) {
		msg = ioBuffRead.readLine();
		// the following code is omitted...
	}
}
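The decode step of that loop can be exercised on its own with plain JDK classes (a self-contained sketch; the sample bytes stand in for a real Kafka message payload):

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;

public class DecodeDemo {

    // Read a message payload line by line, the same way the
    // improved consumer loop does with ready()/readLine().
    static List<String> decode(byte[] packs) throws IOException {
        InputStream is = new ByteArrayInputStream(packs);
        BufferedReader ioBuffRead = new BufferedReader(new InputStreamReader(is, "utf-8"));
        List<String> lines = new ArrayList<String>();
        // for an in-memory stream, ready() stays true until the bytes are consumed
        while (ioBuffRead.ready()) {
            lines.add(ioBuffRead.readLine());
        }
        return lines;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(decode("hello world\nsecond line".getBytes("utf-8")));
        // prints [hello world, second line]
    }
}
```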

 

 

Also note that there are two Consumer classes in the Kafka jar; we must use the one under the kafka.javaapi.consumer package. Likewise, the specific Consumer-side properties can be found in the property definitions of the ConsumerConfig class.

The Producer-side code is relatively simple. What we need to understand is the Consumer-side code:

1) The ConsumerConnector class is the connection class created from the consumer's configuration.

2) topicCountMap is the topic map: the key is the topic, and the value is the number of partitions (streams) for that topic's message flow.

3) consumerMap is the consumer-side map: the key is the topic, and the value is the list of message queues (streams) for that topic, represented as a List collection whose head element is the head of the queue. That is why we call consumerMap.get("test").get(0). The size of each queue is dynamic, since elements are constantly entering and leaving it.

4) ConsumerIterator is the iterator over a topic message stream, used to iterate over the messages in it.
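The shape of consumerMap can be pictured with plain java.util types (this is an analogy only: here List<String> stands in for List<KafkaStream<byte[], byte[]>>, and the stream names are hypothetical):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ConsumerMapShape {

    // topic -> list of streams; get(topic).get(0) takes the head stream
    static String headStream(Map<String, List<String>> consumerMap, String topic) {
        return consumerMap.get(topic).get(0);
    }

    public static void main(String[] args) {
        Map<String, List<String>> consumerMap = new HashMap<String, List<String>>();
        // with topicCountMap.put("test", 4) there would be 4 streams for "test"
        consumerMap.put("test", Arrays.asList("stream-0", "stream-1", "stream-2", "stream-3"));
        System.out.println(headStream(consumerMap, "test"));
        // prints stream-0
    }
}
```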

 

After running, you will find that hello world is printed on the Consumer side. You can also run a command on the console to view the topic's messages.

 

At this point, an entry-level production-consumption test of Kafka is completed.

 

After successfully starting Kafka under Windows or Linux and running this simple demo, you may encounter the following exception:

java.nio.channels.UnresolvedAddressException
	at sun.nio.ch.Net.checkAddress(Net.java:29)
	at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:512)
	at kafka.network.BlockingChannel.connect(BlockingChannel.scala:57)
	at kafka.producer.SyncProducer.connect(SyncProducer.scala:141)
	at kafka.producer.SyncProducer.getOrMakeConnection(SyncProducer.scala:156)
	at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:68)

Resolving this exception is somewhat odd (thanks to Brother F for the help). It happens because the necessary address cannot be resolved by DNS. Edit the hosts file on the C drive and add a line:

<server IP address> <server hostname>

That's it. This exception is also likely to occur when you access the local Kafka on Windows; in that case, add 127.0.0.1 localhost.
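To check whether a hostname actually resolves before digging into Kafka itself, a quick JDK lookup can be used (a minimal sketch using only java.net):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class DnsCheck {

    // Returns the resolved IP, or null when the name cannot be resolved --
    // the situation that surfaces as UnresolvedAddressException inside Kafka.
    static String resolve(String host) {
        try {
            return InetAddress.getByName(host).getHostAddress();
        } catch (UnknownHostException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(resolve("localhost")); // typically 127.0.0.1, via the hosts file
    }
}
```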

 

Note: for each consumer group, every topic partition has its own offset (the position up to which content has been consumed); one group.id can correspond to multiple topics.

 

Finally, I would like to express my sincere thanks to Brother F for the help!
