1. Download Kafka:
wget https://archive.apache.org/dist/kafka/0.8.1/kafka_2.9.2-0.8.1.tgz
Unpack: tar zxvf kafka_2.9.2-0.8.1.tgz
cd kafka_2.9.2-0.8.1
Kafka is written in Scala, so the Scala-related libraries must be downloaded.
2. Download and install sbt:
wget http://repo.scala-sbt.org/scalasbt/sbt-native-packages/org/scala-sbt/sbt/0.13.1/sbt.rpm
$ rpm -ivh sbt.rpm
3. Update the Scala environment:
sbt update
sbt package
#sbt assembly-package-dependency
4. Configure config/server.properties:
broker.id: assigned sequentially (0, 1, 2, 3, 4); must be unique within the cluster
log.dirs: set to a path on a large disk
num.network.threads: number of threads handling network requests
num.partitions: default number of partitions per topic
num.io.threads: recommended value is the number of cores on the machine
zookeeper.connect: the list of ZooKeeper servers, nodes separated by commas
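Putting step 4 together, a minimal server.properties for the first broker might look like the fragment below; the log path and thread counts are illustrative assumptions, and the ZooKeeper list is the one used elsewhere in these notes.

```properties
# Unique per broker: 0 on this node, 1 on the next, and so on.
broker.id=0
# Put partition logs on the large data disk (path is an example).
log.dirs=/data/kafka-logs
port=9092
num.network.threads=3
# Suggested: one thread per CPU core (example assumes 8 cores).
num.io.threads=8
# Default partition count for new topics.
num.partitions=2
# All ZooKeeper nodes, comma-separated.
zookeeper.connect=192.168.3.130:2181,192.168.3.140:2181,192.168.3.142:2181
```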
5. Start the brokers. In the Kafka deployment directory on each node, run:
$ nohup ./bin/kafka-server-start.sh ./config/server.properties &
or, with the scripts on the PATH and an absolute config path:
$ nohup kafka-server-start.sh /myhome/usr/kafka/config/server.properties &
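Since broker.id must increase node by node (step 4), a tiny helper can print which id each machine should get before you edit its server.properties; this is only a sketch, and NODES is an assumed placeholder list.

```shell
# Print a sequential broker.id for each cluster node.
# NODES is an assumption; replace with your real host list.
NODES="192.168.3.130 192.168.3.140 192.168.3.142"
id=0
for node in $NODES; do
  echo "$node -> broker.id=$id"
  id=$((id + 1))
done
```

The printed numbering must then be written into each node's server.properties by hand.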
Create a topic:
kafka-topics.sh --zookeeper 192.168.3.130:2181,192.168.3.140:2181,192.168.3.142:2181 --topic ordertrack --replication-factor 1 --partitions 2 --create
where --topic sets the topic name,
--replication-factor sets the number of replicas,
--partitions sets the number of partitions.
List all topics:
kafka-topics.sh --zookeeper 192.168.3.130:2181,192.168.3.140:2181,192.168.3.142:2181 --list
Delete a topic (requires delete.topic.enable=true in server.properties):
kafka-topics.sh --topic track --delete --zookeeper 192.168.3.130:2181,192.168.3.140:2181,192.168.3.142:2181
Describe a topic:
kafka-topics.sh --topic track --describe --zookeeper 192.168.3.130:2181,192.168.3.140:2181,192.168.3.142:2181
Producer:
kafka-console-producer.sh --broker-list 192.168.3.130:9092,192.168.3.140:9092,192.168.3.142:9092 --topic track
Consumer:
kafka-console-consumer.sh --zookeeper 192.168.3.130:2181,192.168.3.140:2181,192.168.3.142:2181 --topic topicName --from-beginning
server.properties (broker settings reference):
broker.id — unique numeric id of this broker
log.dirs — directories where partition logs are stored
port — port the broker listens on
zookeeper.connect — ZooKeeper connection string
message.max.bytes — largest message size the broker will accept
num.network.threads — threads handling network requests
num.io.threads — threads performing disk I/O
queued.max.requests — max requests queued before network threads block
host.name — hostname/IP the broker binds to
num.partitions — default partition count for new topics
log.retention.hours — how long log segments are kept before deletion
auto.create.topics.enable — auto-create topics on first use
default.replication.factor — replication factor for auto-created topics
num.replica.fetchers — threads used to fetch messages from partition leaders
delete.topic.enable=true — allow topics to actually be deleted
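For reference, the fragment below shows one plausible set of values for these broker settings; the numeric values are typical 0.8.x defaults given as assumptions, and paths/addresses are examples — verify everything against the defaults shipped with your exact release.

```properties
broker.id=0
# Example path; point at the large disk.
log.dirs=/data/kafka-logs
port=9092
zookeeper.connect=192.168.3.130:2181,192.168.3.140:2181,192.168.3.142:2181
# Largest message the broker will accept (about 1 MB).
message.max.bytes=1000000
num.network.threads=3
num.io.threads=8
queued.max.requests=500
# Example: this node's own address.
host.name=192.168.3.130
num.partitions=2
# Keep log segments for 7 days.
log.retention.hours=168
auto.create.topics.enable=true
default.replication.factor=1
num.replica.fetchers=1
delete.topic.enable=true
```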
Consumer configuration (consumer.properties):
group.id — consumer group this process belongs to
zookeeper.connect — ZooKeeper connection string
consumer.id — generated automatically if not set
socket.timeout.ms — socket timeout for network requests
socket.receive.buffer.bytes — socket receive buffer size
auto.commit.enable (default: true) — periodically commit consumed offsets to ZooKeeper
auto.commit.interval.ms (default: 60 * 1000) — interval between offset commits
auto.offset.reset — where to start when no committed offset exists (smallest/largest)
consumer.timeout.ms (default: -1) — raise a timeout if no message arrives within this interval; -1 blocks indefinitely
client.id (default: the group.id value) — identifier attached to requests for logging and metrics
zookeeper.session.timeout.ms (default: 6000)
zookeeper.connection.timeout.ms (default: 6000)
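A consumer.properties fragment using these keys might look like the sketch below; the group name is hypothetical, the defaults in comments come from the list above, and the socket values are typical 0.8.x defaults given as assumptions.

```properties
# Consumers sharing a group.id split the topic's partitions between them.
# "track-consumers" is a hypothetical group name.
group.id=track-consumers
zookeeper.connect=192.168.3.130:2181,192.168.3.140:2181,192.168.3.142:2181
# Typical 0.8.x socket defaults (assumptions; verify for your release):
socket.timeout.ms=30000
socket.receive.buffer.bytes=65536
# Defaults from the list above:
auto.commit.enable=true
auto.commit.interval.ms=60000
# smallest = start from earliest offset; largest = only new messages.
auto.offset.reset=largest
# -1 blocks indefinitely waiting for messages.
consumer.timeout.ms=-1
zookeeper.session.timeout.ms=6000
zookeeper.connection.timeout.ms=6000
```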