Installing ZooKeeper + Kafka

Kafka uses ZooKeeper to store cluster metadata and consumer information.

The installation directory is /usr/local/zookeeper-3.4.11.

Extract:

[root@U10-33 local]# tar -zxf zookeeper-3.4.11.tar.gz

Configuration file (zoo.cfg):

[root@U10-33 ~]# cd /usr/local/zookeeper-3.4.11/conf/
[root@U10-33 conf]# ll
total 16
-rw-r--r-- 1  502 games  535 Nov  2 02:47 configuration.xsl
-rw-r--r-- 1  502 games 2161 Nov  2 02:47 log4j.properties
-rw-r--r-- 1 root root  1005 Mar 23 17:21 zoo.cfg
-rw-r--r-- 1  502 games   57 Mar 23 14:42 zoo_sample.cfg
[root@U10-33 conf]# vi zoo.cfg 
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
# this directory must be created beforehand (see below)
dataDir=/var/lib/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

Create the data directory (/var/lib/zookeeper):

[root@U10-33 ~]# cd /var/lib
[root@U10-33 lib]# mkdir zookeeper

Configure environment variables:

[root@U10-33 ~]# vi  /etc/profile

Add the following:

export ZOOKEEPER_HOME=/usr/local/zookeeper-3.4.11
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$ZOOKEEPER_HOME/lib
export PATH=$ZOOKEEPER_HOME/bin:$PATH

Load the newly set variables:

[root@U10-33 ~]# source /etc/profile

Start:

[root@U10-33 ~]# cd /usr/local/zookeeper-3.4.11/bin/
[root@U10-33 bin]# ll
total 36
-rwxr-xr-x 1 502 games  232 Nov  2 02:47 README.txt
-rwxr-xr-x 1 502 games 1937 Nov  2 02:47 zkCleanup.sh
-rwxr-xr-x 1 502 games 1056 Nov  2 02:47 zkCli.cmd
-rwxr-xr-x 1 502 games 1534 Nov  2 02:47 zkCli.sh
-rwxr-xr-x 1 502 games 1628 Nov  2 02:47 zkEnv.cmd
-rwxr-xr-x 1 502 games 2696 Nov  2 02:47 zkEnv.sh
-rwxr-xr-x 1 502 games 1089 Nov  2 02:47 zkServer.cmd
-rwxr-xr-x 1 502 games 6773 Nov  2 02:47 zkServer.sh
[root@U10-33 bin]# ./zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper-3.4.11/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
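
You can also check the server's state with the bundled status command, for example:

[root@U10-33 bin]# ./zkServer.sh status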

Verify:

You can now connect to the ZooKeeper client port and verify that ZooKeeper is installed correctly by sending the four-letter command srvr:

[root@U10-33 ~]# telnet localhost 2181
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
srvr
Zookeeper version: 3.4.11-37e277162d567b55a07d1755f0b31c32e93c01a0, built on 11/01/2017 18:06 GMT
Latency min/avg/max: 0/0/0
Received: 1
Sent: 0
Connections: 1
Outstanding: 0
Zxid: 0x0
Mode: standalone
Node count: 4
Connection closed by foreign host.
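
The "Mode: standalone" line in the output confirms that this is a single, non-ensemble server.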

--------------------------------------------------------------------------------------

ZooKeeper Ensemble

A ZooKeeper cluster is called an ensemble. Because ZooKeeper uses a consensus protocol, it is recommended that each ensemble contain an odd number of nodes (for example, 3 or 5), since ZooKeeper can only respond to external requests when a majority of the ensemble (a quorum) is available. In other words, an ensemble of 3 nodes can tolerate 1 node failure, and an ensemble of 5 nodes can tolerate 2 node failures.
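
In general, an ensemble of N nodes needs a quorum of floor(N/2) + 1 nodes to stay available, so it can tolerate floor((N-1)/2) node failures.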


Suppose you have a 5-node ensemble and want to make configuration changes that include replacing a node; you will need to restart each node in turn. If your ensemble cannot tolerate more than one node being down at a time, this maintenance work carries risk. At the same time, it is not recommended to run more than 7 nodes in an ensemble, because ZooKeeper's consensus protocol causes performance to degrade as nodes are added.


The ensemble members need a common configuration that lists all of the servers, and each server also needs a myid file in its data directory that specifies that server's ID. If the hostnames of the servers in the ensemble are zoo1.example.com, zoo2.example.com, and zoo3.example.com, the configuration file might look like this:
tickTime=2000
dataDir=/var/lib/zookeeper
clientPort=2181
initLimit=20
syncLimit=5
server.1=zoo1.example.com:2888:3888
server.2=zoo2.example.com:2888:3888
server.3=zoo3.example.com:2888:3888
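
Each server also needs the myid file mentioned above. A minimal sketch, assuming the dataDir above; run the equivalent command with "2" and "3" on the other two hosts:

# on zoo1.example.com: this server's ID is 1
echo "1" > /var/lib/zookeeper/myid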

initLimit is the maximum amount of time allowed for the followers to establish an initial connection with the leader.

syncLimit is the maximum amount of time the followers are allowed to be out of sync with the leader.

Both values are expressed as multiples of tickTime, so initLimit here is 20 * 2000 ms, or 40 seconds.

Server addresses follow the format server.X=hostname:peerPort:leaderPort, where:

X             the server's ID; it must be an integer, but it does not need to start at 0 or be sequential;
hostname      the hostname or IP address of the server;
peerPort      the TCP port used for communication between servers in the ensemble;
leaderPort    the TCP port used for leader election.

Clients only need to use clientPort to connect to the ensemble, while communication between ensemble members uses all three ports (peerPort, leaderPort, and clientPort).
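
A client can connect through any member of the ensemble; for example, a sketch using the bundled CLI and the hostnames above:

[root@U10-33 bin]# ./zkCli.sh -server zoo1.example.com:2181,zoo2.example.com:2181,zoo3.example.com:2181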

----------------------------------------------------------------------------

Kafka installation:

Extract (into /usr/local/kafka_2.12-1.0.1):

[root@U10-33 local]# tar -zxf kafka_2.12-1.0.1.tgz

Configure (server.properties):

[root@U10-33 ~]# mkdir /tmp/kafka-logs   # directory for message logs


[root@U10-33 config]# pwd
/usr/local/kafka_2.12-1.0.1/config
[root@U10-33 config]# vi server.properties

Add:

listeners=PLAINTEXT://localhost:9092
log.dirs=/tmp/kafka-logs
broker.id=1
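
broker.id must be unique for every broker in a cluster, log.dirs points at the message log directory created above, and listeners here binds the broker to localhost, which is enough for a single-node test.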

Start Kafka:

[root@U10-33 config]# /usr/local/kafka_2.12-1.0.1/bin/kafka-server-start.sh -daemon  /usr/local/kafka_2.12-1.0.1/config/server.properties
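
To confirm that the broker started (the -daemon flag runs it in the background), you can check that port 9092 is listening or tail the broker log, for example:

[root@U10-33 ~]# netstat -lnpt | grep 9092
[root@U10-33 ~]# tail -f /usr/local/kafka_2.12-1.0.1/logs/server.log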

Once the Kafka broker is up, you can run a few simple operations against it to verify that it is installed correctly: create a test topic, publish some messages, and then read them back.

Create and verify a topic:

[root@U10-33 ~]# /usr/local/kafka_2.12-1.0.1/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
Created topic "test".

List all topics:
[root@U10-33 ~]# /usr/local/kafka_2.12-1.0.1/bin/kafka-topics.sh --list --zookeeper  localhost:2181
test

Describe a specific topic:
[root@U10-33 ~]# /usr/local/kafka_2.12-1.0.1/bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test
Topic:test      PartitionCount:1        ReplicationFactor:1     Configs:
        Topic: test     Partition: 0    Leader: 1       Replicas: 1     Isr: 1
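
In this output, Leader: 1 means broker 1 is the leader for partition 0, Replicas lists the brokers holding a copy of the partition, and Isr is the set of replicas currently in sync with the leader.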

Start the Kafka console producer and publish messages to the test topic:

[root@U10-33 ~]# /usr/local/kafka_2.12-1.0.1/bin/kafka-console-producer.sh --broker-list localhost:9092 --sync --topic test
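
Type one message per line and press Enter to send it; press Ctrl-C to exit the producer when you are done.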

Read messages from the test topic:

[root@U10-33 ~]# /usr/local/kafka_2.12-1.0.1/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
[2018-03-23 18:46:32,381] ERROR Unknown error when running consumer:  (kafka.tools.ConsoleConsumer$)
java.net.UnknownHostException: U10-33: U10-33: unknown error
        at java.net.InetAddress.getLocalHost(InetAddress.java:1505)
        at kafka.consumer.ZookeeperConsumerConnector.<init>(ZookeeperConsumerConnector.scala:135)
        at kafka.consumer.ZookeeperConsumerConnector.<init>(ZookeeperConsumerConnector.scala:159)
        at kafka.consumer.Consumer$.create(ConsumerConnector.scala:112)
        at kafka.consumer.OldConsumer.<init>(BaseConsumer.scala:130)
        at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:72)
        at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:54)
        at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
Caused by: java.net.UnknownHostException: U10-33: unknown error
        at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
        at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
        at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
        at java.net.InetAddress.getLocalHost(InetAddress.java:1500)
        ... 7 more
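
The UnknownHostException above means the machine's own hostname (U10-33) cannot be resolved; it is not a problem with Kafka itself. A likely fix, assuming the hostname should simply resolve to the loopback address, is to add an /etc/hosts entry and rerun the consumer:

[root@U10-33 ~]# echo "127.0.0.1   U10-33" >> /etc/hosts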


Reposted from blog.csdn.net/a0001aa/article/details/79667558