Single-node Kafka setup and operation: a record for future reference

I have recently been working on integrating Spring Boot with Kafka, so I set up a single-node Kafka environment of my own for testing.

Environment setup

1. Download and extract kafka_2.11-1.1.0.tgz, then move the extracted directory to /usr/local/kafka (the path used throughout this guide):

wget http://archive.apache.org/dist/kafka/1.1.0/kafka_2.11-1.1.0.tgz

tar -xzvf kafka_2.11-1.1.0.tgz

mv kafka_2.11-1.1.0 /usr/local/kafka

2. Kafka depends on ZooKeeper, but the Kafka distribution bundles one, so for a single-node setup you can use it directly; you only need to edit "zookeeper.properties" under /usr/local/kafka/config.

The directories to create and the modified zookeeper.properties settings are as follows:

#Create the ZooKeeper data directory
mkdir /usr/local/kafka/zookeeper
#Create the ZooKeeper log directory
mkdir -p /usr/local/kafka/log/zookeeper
#Enter the config directory
cd /usr/local/kafka/config
vi zookeeper.properties


#ZooKeeper data directory
dataDir=/usr/local/kafka/zookeeper
#ZooKeeper transaction log directory
dataLogDir=/usr/local/kafka/log/zookeeper
clientPort=2181 
maxClientCnxns=100 
tickTime=2000 
initLimit=10
syncLimit=5

3. Edit "server.properties" under /usr/local/kafka/config, changing log.dirs and zookeeper.connect. The former is the directory where Kafka stores its message data; the latter is the ZooKeeper connection address (its port must match the clientPort above). Directory creation and the settings are as follows:

#Create the Kafka data directory
mkdir /usr/local/kafka/log/kafka
#Enter the config directory
cd /usr/local/kafka/config
#Edit the relevant Kafka settings
vi server.properties
 
broker.id=0 
#Allow topics to be deleted (the default is false)
delete.topic.enable=true
#Broker port (optional; the listeners setting below takes precedence)
port=9092
#Server IP address; replace with your own server's IP
host.name=192.254.64.128
#Where Kafka stores its message data (the directory created above)
log.dirs=/usr/local/kafka/log/kafka
#ZooKeeper address and port; must match clientPort (on a single node, localhost:2181 also works)
zookeeper.connect=192.254.64.128:2181
listeners=PLAINTEXT://192.254.64.128:9092

With that, the single-node Kafka environment is set up.

Startup commands

Start ZooKeeper:

/usr/local/kafka/bin/zookeeper-server-start.sh /usr/local/kafka/config/zookeeper.properties &

Start Kafka:

/usr/local/kafka/bin/kafka-server-start.sh  -daemon  /usr/local/kafka/config/server.properties

-daemon: this flag matters. It runs the broker as a daemon, so the process does not die as soon as the Xshell session is closed.
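If you would rather not rely on -daemon, the usual nohup pattern achieves the same survival across SSH disconnects. A sketch, assuming the paths from this setup (the log file location is just an example):

```shell
# Alternative to -daemon: detach from the terminal so the broker
# survives the SSH session, keeping startup output in a log file
nohup /usr/local/kafka/bin/kafka-server-start.sh \
  /usr/local/kafka/config/server.properties \
  > /usr/local/kafka/log/kafka-server.log 2>&1 &
```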

jps should now show two Java processes: QuorumPeerMain (the bundled ZooKeeper) and Kafka.

Shell commands

#Describe the topic named orderTopic
./kafka-topics.sh --zookeeper 192.254.64.128:2181 --describe --topic orderTopic
#List all topics
./kafka-topics.sh --zookeeper 192.254.64.128:2181 --list
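The commands above assume a topic already exists. As a quick end-to-end check, you can create a topic, produce one message, and read it back. A sketch, assuming the broker from this setup is running; the topic name "test" is a throwaway example:

```shell
# Create a topic; with a single broker the replication factor must be 1
./kafka-topics.sh --zookeeper 192.254.64.128:2181 --create \
  --topic test --partitions 1 --replication-factor 1

# Produce one message from stdin
echo "hello kafka" | ./kafka-console-producer.sh \
  --broker-list 192.254.64.128:9092 --topic test

# Consume it back from the beginning of the topic, then exit
./kafka-console-consumer.sh --bootstrap-server 192.254.64.128:9092 \
  --topic test --from-beginning --max-messages 1
```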

A problem worth recording: Kafka appearing to lose the earliest messages.
Properties props = new Properties();

props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, address);
// Manual offset management: offsets are committed explicitly after processing
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
props.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, 1000);
props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 100000);
props.put(ConsumerConfig.REQUEST_TIMEOUT_MS_CONFIG, 110000);
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
// A fresh group id on every run, so the consumer always starts from the earliest offset
props.put(ConsumerConfig.GROUP_ID_CONFIG, "testconsumer" + System.currentTimeMillis());
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 50);

public static Map<String, Object> consume(String topics, Map<String, Object> rtnMap) {
	if (consumer == null) {
		consumer = getConsumer();
	}

	consumer.subscribe(Collections.singletonList(topics));

	StringBuilder sb = new StringBuilder();
	// A single poll may return nothing if the group rebalance
	// triggered by subscribe() has not finished yet
	ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));

	for (ConsumerRecord<String, String> record : records) {
		sb.append(record.value());
	}
	if (StringUtils.isNotEmpty(sb.toString())) {
		// Commit offsets only after the records have actually been read
		consumer.commitSync();
		//consumer.close();
	}
	rtnMap.put("msg", sb.toString());
	return rtnMap;
}
records = consumer.poll(Duration.ofMillis(1000));

Keep an eye on this line: if the first poll() returns before the partition assignment triggered by subscribe() has completed, it comes back empty and the earliest messages appear to be lost. Treat each concrete case on its own terms.
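A common mitigation is to keep polling until records arrive or a deadline expires, instead of trusting a single poll(). The sketch below shows just that retry pattern; the real consumer.poll(...) is replaced by a hypothetical Supplier so the example runs standalone:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.function.Supplier;

public class PollUntilReady {

    // Poll repeatedly until some records arrive or the deadline expires,
    // instead of relying on a single poll() call.
    static List<String> pollWithDeadline(Supplier<List<String>> poll, long deadlineMs) {
        long end = System.currentTimeMillis() + deadlineMs;
        List<String> out = new ArrayList<>();
        while (System.currentTimeMillis() < end) {
            out.addAll(poll.get());   // stands in for consumer.poll(Duration...)
            if (!out.isEmpty()) {
                break;                // got data, stop retrying
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Simulate a consumer whose first two polls return nothing because
        // the group rebalance triggered by subscribe() has not finished.
        int[] calls = {0};
        Supplier<List<String>> fakePoll = () -> {
            calls[0]++;
            if (calls[0] < 3) {
                return Collections.emptyList();
            }
            return Collections.singletonList("first-message");
        };
        System.out.println(pollWithDeadline(fakePoll, 5000));  // prints [first-message]
    }
}
```

In real code the supplier body would be a call to consumer.poll(Duration.ofMillis(...)), and the deadline keeps the loop from spinning forever when the topic really is empty.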


Reposted from blog.csdn.net/Alex_81D/article/details/105220730