Study notes: Learning big data from scratch - 16. Kafka installation and usage

Kafka is an open-source, efficient, highly available messaging system. It can serve as a collection tool or as a data pipeline for big data.

1. Download: http://kafka.apache.org/downloads
To match my Scala version, I downloaded the Scala 2.12 build - kafka_2.12-2.1.0.tgz (asc, sha512)
2. Unpack
tar -zxvf kafka_2.12-2.1.0.tgz
3. Start
(1) Start the bundled ZooKeeper:
bin/zookeeper-server-start.sh config/zookeeper.properties &
(2) Start Kafka:
bin/kafka-server-start.sh config/server.properties &
Running jps confirms that both services are up:
[root@centos7 kafka_2.12-2.1.0]# jps
4477 QuorumPeerMain
8637 Jps
8062 Kafka
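
Note: the Java examples in step 5 connect to the broker as centos7:9092 (this machine's host name), while the shell commands use localhost. Which address a client must use is governed by the listeners / advertised.listeners settings in config/server.properties; if clients run on another machine, make sure the advertised host name resolves there (for example via an /etc/hosts entry for centos7).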
(3) Create a topic named "test" with one partition and one replica (this only needs to be done once; it survives Kafka restarts):
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
List the topics that exist:
bin/kafka-topics.sh --list --zookeeper localhost:2181
The same administration can also be done from Java; see the sketch below.
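
As a minimal sketch (the class name is mine; broker address, partition count, and replication factor mirror the shell commands above), the AdminClient that ships in kafka-clients can create and list topics programmatically:

package com.linbin.kafka;

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class TopicAdminSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // talks to the broker, not ZooKeeper
        try (AdminClient admin = AdminClient.create(props)) {
            // one partition, replication factor 1 (fails if the topic already exists)
            admin.createTopics(Collections.singleton(new NewTopic("test", 1, (short) 1))).all().get();
            // print existing topic names, like kafka-topics.sh --list
            System.out.println(admin.listTopics().names().get());
        }
    }
}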

4. Test
(1) In a first shell terminal, start a console consumer (note that --bootstrap-server must point at the Kafka broker on port 9092, not at ZooKeeper):
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning

(2) In a second shell terminal, start a console producer:
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test

Send some test messages:

[root@centos7 kafka_2.12-2.1.0]# bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
>hello
>is working?
>

The messages then appear in the first terminal, where the consumer is running.


5. Sending and receiving messages from Java

Create a Maven project and add the following dependencies to pom.xml:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.1.0</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-streams</artifactId>
    <version>2.1.0</version>
</dependency>

Two programs are needed: a producer that sends messages and a consumer that receives them.

The source code below is adapted, with some modifications, from the reference articles listed at the end.

MyKafkaProducer.java

package com.linbin.kafka;

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class MyKafkaProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "centos7:9092");  // broker address
        props.put("acks", "all");                        // wait for all in-sync replicas to acknowledge
        props.put("retries", 0);                         // do not retry failed sends
        props.put("batch.size", 16384);                  // per-partition batch buffer, in bytes
        props.put("linger.ms", 1);                       // wait up to 1 ms to fill a batch
        props.put("buffer.memory", 33554432);            // total buffering memory: 32 MB
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        Producer<String, String> producer = new KafkaProducer<String, String>(props);
        // send 100 messages to the "test" topic, using the loop index as both key and value
        for (int i = 0; i < 100; i++)
            producer.send(new ProducerRecord<String, String>("test", Integer.toString(i), Integer.toString(i)));
        producer.close();  // flush buffered records and release resources
    }
}
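
Note that send() is asynchronous: the loop above never learns whether delivery succeeded. When confirmation matters, send() accepts a callback; a minimal sketch (the class name is mine, broker and topic as above):

package com.linbin.kafka;

import java.util.Properties;
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class MyKafkaProducerWithCallback {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "centos7:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        Producer<String, String> producer = new KafkaProducer<String, String>(props);
        producer.send(new ProducerRecord<String, String>("test", "k", "v"), new Callback() {
            @Override
            public void onCompletion(RecordMetadata metadata, Exception exception) {
                if (exception != null)
                    exception.printStackTrace();  // the send failed
                else
                    System.out.printf("sent to partition %d, offset %d%n",
                            metadata.partition(), metadata.offset());
            }
        });
        producer.close();  // close() waits for in-flight sends to complete
    }
}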

 MyKafkaConsumer.java

package com.linbin.kafka;

import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class MyKafkaConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "centos7:9092");  // broker address
        props.put("group.id", "test");                   // consumer group id
        props.put("enable.auto.commit", "true");         // commit offsets automatically...
        props.put("auto.commit.interval.ms", "1000");    // ...once per second
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        final KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);

        // subscribe to the "test" topic; partition assignment is handled by the group coordinator
        consumer.subscribe(Arrays.asList("test"));

        while (true) {
            // poll(long) is deprecated since 2.0; pass a Duration instead
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
            for (ConsumerRecord<String, String> record : records)
                System.out.printf("offset = %d, key = %s, value = %s%n",
                        record.offset(), record.key(), record.value());
        }
    }
}
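
The consumer above auto-commits offsets every second, so a record can be marked as consumed even if processing crashes right after poll(). If that matters, offsets can be committed manually after processing; a minimal sketch (the class name is mine, everything else mirrors the consumer above):

package com.linbin.kafka;

import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class MyManualCommitConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "centos7:9092");
        props.put("group.id", "test");
        props.put("enable.auto.commit", "false");  // take over offset management
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
        consumer.subscribe(Arrays.asList("test"));
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
            for (ConsumerRecord<String, String> record : records)
                System.out.printf("offset = %d, key = %s, value = %s%n",
                        record.offset(), record.key(), record.value());
            consumer.commitSync();  // commit only after the whole batch has been processed
        }
    }
}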

In the console where MyKafkaConsumer is running, the received messages are printed:

SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
offset = 211, key = null, value = ddddd
offset = 212, key = null, value = how are you
offset = 213, key = 0, value = 0
offset = 214, key = 1, value = 1
offset = 215, key = 2, value = 2
offset = 216, key = 3, value = 3
offset = 217, key = 4, value = 4
offset = 218, key = 5, value = 5
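
The SLF4J lines at the top are warnings, not errors: no SLF4J backend is on the classpath, so the Kafka client's own log output is discarded. One way to get log output (an optional addition; the version here is just an example) is to add a simple binding to pom.xml:

<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-simple</artifactId>
    <version>1.7.25</version>
</dependency>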

Messages can be produced from either the shell console or the Java program and consumed by either side interchangeably.

References:
https://www.jianshu.com/p/0e378e51b442 - A simple Java Kafka application example
https://www.cnblogs.com/skying555/p/7903457.html - A classic introductory Kafka tutorial
https://www.cnblogs.com/hei12138/p/7805475.html - Kafka in practice
