Installing Kafka in a local Windows environment

1. Download the latest Kafka version and extract it:

Unzip kafka_2.13-3.4.0.tgz.
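On recent versions of Windows, a tar command is bundled with the OS, so the archive can be extracted directly from a terminal (on older systems a tool such as 7-Zip works as well):

tar -xzf kafka_2.13-3.4.0.tgz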

2. Start the Kafka service

Note: Java 8 or later must be installed in your local environment.
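You can check which version is installed from a terminal:

java -version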

1. Start the ZooKeeper service

cd kafka_2.13-3.4.0\bin\windows
zookeeper-server-start.bat ..\..\config\zookeeper.properties

2. Start the Kafka broker service

cd kafka_2.13-3.4.0\bin\windows
kafka-server-start.bat ..\..\config\server.properties

Once both services are running, you have a basic Kafka environment.
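If you want to double-check that the broker is reachable before moving on, the kafka-broker-api-versions tool that ships in the same directory can be used as a quick sanity check (this step is not part of the official quickstart):

kafka-broker-api-versions.bat --bootstrap-server localhost:9092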

3. Create a topic to store messages


Kafka is a distributed event streaming platform that lets you read, write, store, and process events (also called records or messages in the documentation) across many machines.

Example events are payment transactions, geolocation updates from mobile phones, shipping orders, sensor measurements from IoT devices or medical devices, and much more. These events are organized and stored in topics. Very simplified, a topic is like a folder in a filesystem, and events are the files in that folder.

Therefore, before you write your first event, you must create a topic. Open another terminal session and run:

kafka-topics.bat --create --topic quickstart-events --bootstrap-server localhost:9092

Result:

Created topic quickstart-events.
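The topic above is created with defaults (a single partition, replication factor 1). The --partitions and --replication-factor flags let you set these explicitly; the topic name my-events below is just an illustration, and on a single-broker setup the replication factor cannot exceed 1:

kafka-topics.bat --create --topic my-events --partitions 3 --replication-factor 1 --bootstrap-server localhost:9092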

All of Kafka's command-line tools have additional options: run the kafka-topics.bat command without any arguments to display usage information. For example, it can also show you details such as the partition count of the new topic:

kafka-topics.bat --describe --topic quickstart-events --bootstrap-server localhost:9092

Result:

Topic: quickstart-events        TopicId: iQ9QVKwkQ1epRjA-BnvxvA PartitionCount: 1       ReplicationFactor: 1    Configs:
        Topic: quickstart-events        Partition: 0    Leader: 0       Replicas: 0     Isr: 0

4. Write messages into the topic

Kafka clients communicate with the Kafka brokers over the network to write (or read) events. Once received, the brokers store the events in a durable and fault-tolerant manner for as long as you need them.

Run the console producer client to write a few events into your topic. By default, each line you enter results in a separate event being written to the topic.

kafka-console-producer.bat --topic quickstart-events --bootstrap-server localhost:9092
This is my first event
This is my second event

You can stop the producer client at any time with CTRL-C.
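The console producer can also attach a key to each message via its parse.key and key.separator properties; the colon separator below is just an illustrative choice:

kafka-console-producer.bat --topic quickstart-events --bootstrap-server localhost:9092 --property parse.key=true --property key.separator=:
key1:This is a keyed event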

Sending messages from Java

	import java.util.Properties;

	import org.apache.kafka.clients.producer.KafkaProducer;
	import org.apache.kafka.clients.producer.ProducerConfig;
	import org.apache.kafka.clients.producer.ProducerRecord;
	import org.apache.kafka.common.errors.SerializationException;
	import org.apache.kafka.common.serialization.StringSerializer;

	Properties props = new Properties();
	props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
	props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
	props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

	// Keys and values are plain strings, so type the producer accordingly.
	KafkaProducer<String, String> producer = new KafkaProducer<>(props);

	try {
	    for (int i = 0; i < 100; i++) {
	        String key = "key" + i;
	        String message = "message" + i;

	        ProducerRecord<String, String> record = new ProducerRecord<>("topic1", key, message);
	        try {
	            System.out.println("sending message" + i);
	            // send() is asynchronous: it buffers the record and returns immediately.
	            producer.send(record);
	        } catch (SerializationException e) {
	            e.printStackTrace();
	        }
	    }
	} finally {
	    // flush() blocks until all buffered records have actually been sent.
	    producer.flush();
	    producer.close();
	}

5. Read messages

Open another terminal session and run the console consumer client to read the events you just created:

kafka-console-consumer.bat --topic quickstart-events --from-beginning --bootstrap-server localhost:9092
This is my first event
This is my second event

You can stop the consumer client at any time with CTRL-C.

Feel free to experiment: for example, switch back to the producer terminal (previous step) to write additional events, and see how these show up immediately in your consumer terminal.

Since events are durably stored in Kafka, they can be read as many times and by as many consumers as you want. You can easily verify this by opening yet another terminal session and re-running the previous command.
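Consumers that share a group ID divide the topic's partitions among themselves instead of each receiving every event. The console consumer's --group flag lets you experiment with this; the group name demo-group below is just an illustration:

kafka-console-consumer.bat --topic quickstart-events --from-beginning --bootstrap-server localhost:9092 --group demo-group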

Reading messages from Java

	import java.time.Duration;
	import java.util.Arrays;
	import java.util.Properties;

	import org.apache.kafka.clients.consumer.Consumer;
	import org.apache.kafka.clients.consumer.ConsumerConfig;
	import org.apache.kafka.clients.consumer.ConsumerRecord;
	import org.apache.kafka.clients.consumer.ConsumerRecords;
	import org.apache.kafka.clients.consumer.KafkaConsumer;

	Properties props = new Properties();
	props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
	props.put(ConsumerConfig.GROUP_ID_CONFIG, "group1");
	props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
	props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
	// Start from the earliest offset when the group has no committed position yet.
	props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

	String topic = "topic1";
	// The records were produced as strings, so the consumer is typed accordingly.
	final Consumer<String, String> consumer = new KafkaConsumer<>(props);
	consumer.subscribe(Arrays.asList(topic));

	try {
	    while (true) {
	        // poll() waits up to the given duration for new records to arrive.
	        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
	        for (ConsumerRecord<String, String> record : records) {
	            System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
	        }
	    }
	} finally {
	    consumer.close();
	}
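The loop above never exits on its own. A common shutdown pattern (a sketch, not from the original post) is to call wakeup() from a JVM shutdown hook, which makes the blocked poll() throw a WakeupException:

	import org.apache.kafka.common.errors.WakeupException;

	// Registered before the poll loop starts; runs when the JVM is shutting down.
	Runtime.getRuntime().addShutdownHook(new Thread(consumer::wakeup));

	try {
	    while (true) {
	        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
	        // ... process records as above ...
	    }
	} catch (WakeupException e) {
	    // Expected during shutdown; fall through and close the consumer.
	} finally {
	    consumer.close();
	}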

References

https://kafka.apache.org/quickstart

https://codenotfound.com/spring-kafka-apache-avro-serializer-deserializer-example.html
