Kafka environment setup


一、Versions used

kafka_2.10-0.10.2.1.tar 

zookeeper-3.4.5.tar


二、Environment setup

Since Kafka depends on ZooKeeper, set up ZooKeeper first.

1、ZooKeeper setup

First, create a directory to hold the archives, then extract the ZooKeeper archive.

root@VM-0-3-ubuntu:~# cd /wingcloud
root@VM-0-3-ubuntu:/wingcloud# ls
kafka_2.10-0.10.2.1.tar  zookeeper-3.4.5.tar
root@VM-0-3-ubuntu:/wingcloud# tar -xvf zookeeper-3.4.5.tar

After extracting, move the directory to /usr/local and rename it zk.

root@VM-0-3-ubuntu:/wingcloud# mv zookeeper-3.4.5 /usr/local/zk

Go to /usr/local/zk/conf, create zoo.cfg from the sample config, and open it:

root@VM-0-3-ubuntu:/usr/local/zk/conf# cp zoo_sample.cfg zoo.cfg
root@VM-0-3-ubuntu:/usr/local/zk/conf# vim zoo.cfg

In zoo.cfg, only dataDir needs to be modified.

Change it as follows, then save and exit.

dataDir=/usr/local/zk/data/
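
For reference, after this change a minimal standalone zoo.cfg typically looks like the following (the remaining values are the defaults from zoo_sample.cfg; only dataDir was changed):

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zk/data/
clientPort=2181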

Go back to the zk directory and create the data directory.

root@VM-0-3-ubuntu:/usr/local/zk# mkdir data

Add ZooKeeper to the environment variables.

root@VM-0-3-ubuntu:/usr/local/zk# cd ..
root@VM-0-3-ubuntu:/usr/local# ls
bin                  etc    include  logstash  qcloud  share  yd.socket.server
elasticsearch-2.4.6  games  lib      man       sbin    src    zk
root@VM-0-3-ubuntu:/usr/local# vim /etc/profile

export ZK_HOME=/usr/local/zk
export PATH=$ZK_HOME/bin:$PATH
root@VM-0-3-ubuntu:/usr/local# source /etc/profile
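
To double-check that the PATH change took effect, you can ask the shell which zkServer.sh it now resolves (the output below assumes the layout above):

root@VM-0-3-ubuntu:/usr/local# which zkServer.sh
/usr/local/zk/bin/zkServer.sh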

Finally, start ZooKeeper and check whether it started successfully.

root@VM-0-3-ubuntu:/usr/local# zkServer.sh start
JMX enabled by default
Using config: /usr/local/zk/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
root@VM-0-3-ubuntu:/usr/local# zkServer.sh status
JMX enabled by default
Using config: /usr/local/zk/bin/../conf/zoo.cfg
Mode: standalone
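
As an extra check, you can also connect with the bundled CLI client (assuming the default client port 2181):

root@VM-0-3-ubuntu:/usr/local# zkCli.sh -server 127.0.0.1:2181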

2、Kafka setup

Extract the Kafka archive and move the directory to /usr/local.

root@VM-0-3-ubuntu:/wingcloud# tar -xvf kafka_2.10-0.10.2.1.tar
root@VM-0-3-ubuntu:/wingcloud# ls
kafka_2.10-0.10.2.1  kafka_2.10-0.10.2.1.tar  zookeeper-3.4.5.tar
root@VM-0-3-ubuntu:/wingcloud# mv kafka_2.10-0.10.2.1 /usr/local
root@VM-0-3-ubuntu:/wingcloud# cd /usr/local
root@VM-0-3-ubuntu:/usr/local# ls
bin                  include              man     src
elasticsearch-2.4.6  kafka_2.10-0.10.2.1  qcloud  yd.socket.server
etc                  lib                  sbin    zk
games                logstash             share   zookeeper.out

Next, modify server.properties.

root@VM-0-3-ubuntu:/usr/local# cd kafka_2.10-0.10.2.1
root@VM-0-3-ubuntu:/usr/local/kafka_2.10-0.10.2.1# ls
bin  config  libs  LICENSE  NOTICE  site-docs
root@VM-0-3-ubuntu:/usr/local/kafka_2.10-0.10.2.1# cd config
root@VM-0-3-ubuntu:/usr/local/kafka_2.10-0.10.2.1/config# ls
connect-console-sink.properties    consumer.properties
connect-console-source.properties  log4j.properties
connect-distributed.properties     producer.properties
connect-file-sink.properties       server.properties
connect-file-source.properties     tools-log4j.properties
connect-log4j.properties           zookeeper.properties
connect-standalone.properties
root@VM-0-3-ubuntu:/usr/local/kafka_2.10-0.10.2.1/config# vim server.properties

In server.properties, find the Log Basics comment and change the log.dirs entry below it as follows:

log.dirs=/usr/local/kafka_2.10-0.10.2.1/data/kafka-logs

The ZooKeeper-related settings can also be changed here, but since ZooKeeper runs on the same machine, the defaults are fine. Save and exit.
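
For reference, the settings that matter on a single-node install typically end up looking like this (broker.id and zookeeper.connect keep their defaults; only log.dirs was changed):

broker.id=0
log.dirs=/usr/local/kafka_2.10-0.10.2.1/data/kafka-logs
zookeeper.connect=localhost:2181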

Start Kafka:

root@VM-0-3-ubuntu:/usr/local/kafka_2.10-0.10.2.1/config# cd ..
root@VM-0-3-ubuntu:/usr/local/kafka_2.10-0.10.2.1# ls
bin  config  libs  LICENSE  NOTICE  site-docs
root@VM-0-3-ubuntu:/usr/local/kafka_2.10-0.10.2.1# cd bin
root@VM-0-3-ubuntu:/usr/local/kafka_2.10-0.10.2.1/bin# ls
connect-distributed.sh               kafka-replica-verification.sh
connect-standalone.sh                kafka-run-class.sh
kafka-acls.sh                        kafka-server-start.sh
kafka-broker-api-versions.sh         kafka-server-stop.sh
kafka-configs.sh                     kafka-simple-consumer-shell.sh
kafka-console-consumer.sh            kafka-streams-application-reset.sh
kafka-console-producer.sh            kafka-topics.sh
kafka-consumer-groups.sh             kafka-verifiable-consumer.sh
kafka-consumer-offset-checker.sh     kafka-verifiable-producer.sh
kafka-consumer-perf-test.sh          windows
kafka-mirror-maker.sh                zookeeper-security-migration.sh
kafka-preferred-replica-election.sh  zookeeper-server-start.sh
kafka-producer-perf-test.sh          zookeeper-server-stop.sh
kafka-reassign-partitions.sh         zookeeper-shell.sh
kafka-replay-log-producer.sh
root@VM-0-3-ubuntu:/usr/local/kafka_2.10-0.10.2.1/bin# ./kafka-server-start.sh  ../config/server.properties
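
Note that this runs the broker in the foreground and occupies the terminal. If you prefer to run it in the background, kafka-server-start.sh also accepts a -daemon flag:

root@VM-0-3-ubuntu:/usr/local/kafka_2.10-0.10.2.1/bin# ./kafka-server-start.sh -daemon ../config/server.properties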

Once Kafka has started, the setup itself is complete. Next, let's test it.

Open another terminal and log in to the Linux server.

In the new terminal, create a topic; first go to the kafka/bin directory.

A quick explanation of the command: --zookeeper 127.0.0.1:2181 points at the ZooKeeper instance; --partitions 1 uses a single partition, since this is a single-node setup; --replication-factor 1 uses a single replica; --topic wingcloud names the topic wingcloud.

ubuntu@VM-0-3-ubuntu:/usr/local/kafka_2.10-0.10.2.1/bin$ ./kafka-topics.sh --create --zookeeper 127.0.0.1:2181 --partitions 1  --replication-factor 1 --topic wingcloud
Created topic "wingcloud".
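
You can confirm the topic was created with --list, or inspect its partition assignment with --describe:

ubuntu@VM-0-3-ubuntu:/usr/local/kafka_2.10-0.10.2.1/bin$ ./kafka-topics.sh --list --zookeeper 127.0.0.1:2181
ubuntu@VM-0-3-ubuntu:/usr/local/kafka_2.10-0.10.2.1/bin$ ./kafka-topics.sh --describe --zookeeper 127.0.0.1:2181 --topic wingcloud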

Then start a console producer. --broker-list 127.0.0.1:9092 points at the Kafka broker; for a cluster, list multiple broker addresses separated by commas.

ubuntu@VM-0-3-ubuntu:/usr/local/kafka_2.10-0.10.2.1/bin$ ./kafka-console-producer.sh --broker-list 127.0.0.1:9092 --topic wingcloud

Once the producer is running, open yet another terminal.

In this new terminal, start a console consumer.

ubuntu@VM-0-3-ubuntu:~$ cd /usr/local
ubuntu@VM-0-3-ubuntu:/usr/local$ ls
bin                  include              man     src
elasticsearch-2.4.6  kafka_2.10-0.10.2.1  qcloud  yd.socket.server
etc                  lib                  sbin    zk
games                logstash             share   zookeeper.out
ubuntu@VM-0-3-ubuntu:/usr/local$ cd kafka_2.10-0.10.2.1
ubuntu@VM-0-3-ubuntu:/usr/local/kafka_2.10-0.10.2.1$ ls
bin  config  data  libs  LICENSE  logs  NOTICE  site-docs
ubuntu@VM-0-3-ubuntu:/usr/local/kafka_2.10-0.10.2.1$ cd bin
ubuntu@VM-0-3-ubuntu:/usr/local/kafka_2.10-0.10.2.1/bin$ ./kafka-console-consumer.sh --zookeeper 127.0.0.1:2181 --topic wingcloud
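
The command above uses the old ZooKeeper-based consumer. With this Kafka version you can also use the new consumer by pointing at the broker instead; adding --from-beginning replays messages produced before the consumer started:

ubuntu@VM-0-3-ubuntu:/usr/local/kafka_2.10-0.10.2.1/bin$ ./kafka-console-consumer.sh --bootstrap-server 127.0.0.1:9092 --topic wingcloud --from-beginning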

With the consumer running, test the data flow.

Type some data in the producer terminal; if it also shows up in the consumer terminal, the test has succeeded.
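
For example, typing an arbitrary line such as the following in the producer terminal:

hello kafka

should cause the consumer terminal to print the same line.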

And that's it: the Kafka setup is complete.
