Kafka HA Deployment (Apache Version)

Download Kafka

Official site: http://kafka.apache.org/

[root@hadoop001 software]# wget https://archive.apache.org/dist/kafka/0.10.0.0/kafka_2.11-0.10.0.0.tgz

Download ZooKeeper

Download address (CDH archive): http://archive.cloudera.com/cdh5/cdh/5/

[root@hadoop001 software]# wget http://archive.cloudera.com/cdh5/cdh/5/zookeeper-3.4.5-cdh5.12.0.tar.gz

Then scp it to the other nodes:

[root@hadoop001 software]# scp zookeeper-3.4.5-cdh5.12.0.tar.gz hadoop002:/root/software/

[root@hadoop001 software]# scp zookeeper-3.4.5-cdh5.12.0.tar.gz hadoop003:/root/software/

Extract:

[root@hadoop001 software]# tar -zxf zookeeper-3.4.5-cdh5.12.0.tar.gz -C /opt/software/

[root@hadoop002 software]# tar -zxf zookeeper-3.4.5-cdh5.12.0.tar.gz -C /opt/software/

[root@hadoop003 software]# tar -zxf zookeeper-3.4.5-cdh5.12.0.tar.gz -C /opt/software/

Only the following needs to change in zoo.cfg:
 

dataDir=/opt/software/zookeeper/data 

server.1=hadoop001:2888:3888
server.2=hadoop002:2888:3888
server.3=hadoop003:2888:3888
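The later commands start ZooKeeper from /opt/software/zookeeper and write data under /opt/software/zookeeper/data, so each node presumably also needs a zookeeper symlink (like the kafka symlink created later) and the data directory. A minimal sketch, assuming those paths; run the same two commands on hadoop002 and hadoop003 as well:

[root@hadoop001 software]# ln -s /opt/software/zookeeper-3.4.5-cdh5.12.0 /opt/software/zookeeper
[root@hadoop001 software]# mkdir -p /opt/software/zookeeper/data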

Refer to the commands below:

[root@hadoop001 zookeeper-3.4.5-cdh5.12.0]# cd conf

[root@hadoop001 conf]# cp zoo_sample.cfg zoo.cfg

[root@hadoop001 conf]# vim zoo.cfg

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
#dataDir=/tmp/zookeeper
dataDir=/opt/software/zookeeper/data
# the port at which the clients will connect
clientPort=2181
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

server.1=hadoop001:2888:3888
server.2=hadoop002:2888:3888
server.3=hadoop003:2888:3888

[root@hadoop001 conf]# scp zoo.cfg hadoop002:/opt/software/zookeeper-3.4.5-cdh5.12.0/conf

[root@hadoop001 conf]# scp zoo.cfg hadoop003:/opt/software/zookeeper-3.4.5-cdh5.12.0/conf

Start ZooKeeper

[root@hadoop001 zookeeper]# bin/zkServer.sh start
JMX enabled by default
Using config: /opt/software/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

[root@hadoop001 zookeeper]# bin/zkServer.sh status
JMX enabled by default
Using config: /opt/software/zookeeper/bin/../conf/zoo.cfg
Mode: follower

[root@hadoop002 zookeeper]# bin/zkServer.sh start
JMX enabled by default
Using config: /opt/software/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

[root@hadoop002 zookeeper]# bin/zkServer.sh status
JMX enabled by default
Using config: /opt/software/zookeeper/bin/../conf/zoo.cfg
Mode: leader

[root@hadoop003 zookeeper]# bin/zkServer.sh start
JMX enabled by default
Using config: /opt/software/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

[root@hadoop003 zookeeper]# bin/zkServer.sh status
JMX enabled by default
Using config: /opt/software/zookeeper/bin/../conf/zoo.cfg
Mode: follower
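As an optional sanity check on top of zkServer.sh status, ZooKeeper's ruok four-letter command can be sent to each node (assuming nc is installed); a healthy server replies imok:

[root@hadoop001 zookeeper]# echo ruok | nc hadoop001 2181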

An error also came up along the way; checking the log:

[root@hadoop001 zookeeper]# tail -100  zookeeper.out

2019-05-28 11:08:45,309 [myid:] - INFO  [main:QuorumPeerConfig@111] - Reading configuration from: /opt/software/zookeeper/bin/../conf/zoo.cfg
2019-05-28 11:08:45,314 [myid:] - INFO  [main:QuorumPeerConfig@374] - Defaulting to majority quorums
2019-05-28 11:08:45,314 [myid:] - ERROR [main:QuorumPeerMain@86] - Invalid config, exiting abnormally
org.apache.zookeeper.server.quorum.QuorumPeerConfig$ConfigException: Error processing /opt/software/zookeeper/bin/../conf/zoo.cfg
	at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:131)
	at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:106)
	at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:79)
Caused by: java.lang.IllegalArgumentException: /opt/software/zookeeper/data/myid file is missing
	at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parseProperties(QuorumPeerConfig.java:384)
	at org.apache.zookeeper.server.quorum.QuorumPeerConfig.parse(QuorumPeerConfig.java:127)
	... 2 more
Invalid config, exiting abnormally

The error is obvious: the myid file is missing. It happened because I had put the file in the wrong place... Anyway, put the myid file into /opt/software/zookeeper/data, restart, and everything is fine.
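A sketch of the fix: the value written into myid must match that host's server.N entry in zoo.cfg, and ZooKeeper is then restarted on each node.

[root@hadoop001 zookeeper]# echo 1 > /opt/software/zookeeper/data/myid
[root@hadoop002 zookeeper]# echo 2 > /opt/software/zookeeper/data/myid
[root@hadoop003 zookeeper]# echo 3 > /opt/software/zookeeper/data/myid
[root@hadoop001 zookeeper]# bin/zkServer.sh restart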

Deploy Kafka

Preparation

[root@hadoop001 software]# scp kafka_2.11-0.10.0.0.tgz hadoop002:/root/software/

[root@hadoop001 software]# scp kafka_2.11-0.10.0.0.tgz hadoop003:/root/software/

Start the deployment

[root@hadoop001 software]# pwd
/opt/software
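The original does not show extracting the Kafka tarball; presumably it is unpacked into /opt/software on each node first, along the same lines as the ZooKeeper step:

[root@hadoop001 software]# tar -zxf kafka_2.11-0.10.0.0.tgz -C /opt/software/
[root@hadoop002 software]# tar -zxf kafka_2.11-0.10.0.0.tgz -C /opt/software/
[root@hadoop003 software]# tar -zxf kafka_2.11-0.10.0.0.tgz -C /opt/software/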

[root@hadoop001 software]# ln -s kafka_2.11-0.10.0.0 kafka

[root@hadoop002 software]# ln -s kafka_2.11-0.10.0.0 kafka

[root@hadoop003 software]# ln -s kafka_2.11-0.10.0.0 kafka

The server.properties file must be modified on all three nodes; the changes are as follows:

[root@hadoop001 software]# cd kafka

[root@hadoop001 kafka]# cd config

[root@hadoop001 config]# vim server.properties

broker.id=0

host.name=hadoop001
port=9092

log.dirs=/opt/software/kafka/kafka-logs

zookeeper.connect=hadoop001:2181,hadoop002:2181,hadoop003:2181/kafka

Then scp the server.properties file to the other nodes and change the following two values on each of them (hadoop002 shown; on hadoop003 use broker.id=2 and host.name=hadoop003):

broker.id=1

host.name=hadoop002

[root@hadoop001 config]# scp server.properties hadoop002:/opt/software/kafka/config/

[root@hadoop001 config]# scp server.properties hadoop003:/opt/software/kafka/config/

Start Kafka

[root@hadoop001 bin]# ./kafka-server-start.sh ../config/server.properties

[root@hadoop002 bin]# ./kafka-server-start.sh ../config/server.properties

[root@hadoop003 bin]# ./kafka-server-start.sh ../config/server.properties
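Started this way, each broker takes over its terminal; to run the brokers in the background, kafka-server-start.sh also accepts a -daemon flag, for example:

[root@hadoop001 bin]# ./kafka-server-start.sh -daemon ../config/server.properties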

Using Kafka

Create a topic

[root@hadoop003 kafka]# bin/kafka-topics.sh \
> --create \
> --zookeeper hadoop001:2181,hadoop002:2181,hadoop003:2181/kafka  \
> --replication-factor 1 \
> --partitions 3 \
> --topic hello
Created topic "hello"
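To check how the partitions and replicas were assigned, the same script can describe the topic:

[root@hadoop003 kafka]# bin/kafka-topics.sh \
> --describe \
> --zookeeper hadoop001:2181,hadoop002:2181,hadoop003:2181/kafka \
> --topic hello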

List all topics

[root@hadoop003 kafka]# bin/kafka-topics.sh \
> --list \
> --zookeeper hadoop001:2181,hadoop002:2181,hadoop003:2181/kafka
hello
test

Start a producer

[root@hadoop003 kafka]# bin/kafka-console-producer.sh \
> --broker-list hadoop001:9092,hadoop002:9092,hadoop003:9092 \
> --topic test

Start a consumer

[root@hadoop001 kafka]# bin/kafka-console-consumer.sh \
> --zookeeper hadoop001:2181,hadoop002:2181,hadoop003:2181/kafka \
> --topic test \
> --from-beginning

One problem came up: No brokers found in ZK

Solution: the topic test simply did not exist yet.
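If the topic really is missing, creating it the same way as hello fixes the consumer; a sketch:

[root@hadoop003 kafka]# bin/kafka-topics.sh \
> --create \
> --zookeeper hadoop001:2181,hadoop002:2181,hadoop003:2181/kafka \
> --replication-factor 1 \
> --partitions 3 \
> --topic test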

To test it, I started a producer on hadoop003 and a consumer on hadoop001, typed two lines (hello and hi) into the producer on hadoop003, and the same hello and hi showed up on the hadoop001 consumer almost immediately.

Next, go into the ZooKeeper client to look at some of Kafka's registration data:

[root@hadoop001 bin]# ./zkCli.sh

[zk: localhost:2181(CONNECTED) 0] ls /
[zookeeper, kafka]

[zk: localhost:2181(CONNECTED) 1] ls /kafka
[controller, controller_epoch, brokers, admin, isr_change_notification, consumers, config]

[zk: localhost:2181(CONNECTED) 2] ls /kafka/brokers
[ids, topics, seqid]

[zk: localhost:2181(CONNECTED) 3] ls /kafka/brokers/topics
[hello, test]
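The registered brokers themselves live under /kafka/brokers/ids; with the configuration above, ids 0, 1 and 2 should be listed, and reading an id shows that broker's host and port, for example:

[zk: localhost:2181(CONNECTED) 4] ls /kafka/brokers/ids
[zk: localhost:2181(CONNECTED) 5] get /kafka/brokers/ids/0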

And with that, the Apache Kafka HA deployment is done!


Reposted from blog.csdn.net/xiaoxiongaa0/article/details/90634581