13: Kafka Distributed Deployment

1: Kafka Overview:

Kafka is essentially a message middleware; the official site now describes it as a distributed streaming platform:
A streaming platform has three key capabilities:

1.Publish and subscribe to streams of records, similar to a message queue or enterprise messaging system.
2.Store streams of records in a fault-tolerant durable way.
3.Process streams of records as they occur.

Kafka is generally used for two broad classes of applications:

1.Building real-time streaming data pipelines that reliably get data between systems or applications
2.Building real-time streaming applications that transform or react to the streams of data

2: Kafka vs. Flume:

Flume is a distributed, reliable, and available system for efficiently collecting, aggregating, and moving large amounts of log data from many different sources to a centralized data store.

Flume: a single agent process, containing Source, Channel, and Sink.
Kafka: three roles: the producer, the broker (the server process that stores the data and is the part that needs to be configured), and the consumer (e.g. Spark Streaming, Flink, Structured Streaming).
Kafka itself is written in Scala.
A few concepts: a Topic separates different business systems; on disk, each topic corresponds to its own directories.
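
As a rough illustration of this topic-to-directory mapping (a sketch only; it assumes a broker is already running with the log.dirs and ZooKeeper /kafka chroot configured in section 3, and order_topic is a made-up topic name):

bin/kafka-topics.sh --create --zookeeper 172.17.4.16:2181/kafka \
  --replication-factor 1 --partitions 3 --topic order_topic
# each partition of the topic becomes its own directory under log.dirs
ls /home/hadoop/app/kafka/logs
# expect directories such as order_topic-0, order_topic-1, order_topic-2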

3: Kafka Distributed Deployment:

3.1 Preparation for the Distributed Installation:

3.1.1: Install ZooKeeper (CDH 5.7.0) first

ZooKeeper was already installed during the earlier HA setup; its current state can be checked with: bin/zkServer.sh status
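
To confirm that all three ZooKeeper nodes are healthy, something like the following can be run (a sketch; the host names hadoop001-hadoop003 and the ZooKeeper install path are assumptions, replace them with your own):

for h in hadoop001 hadoop002 hadoop003; do    # hypothetical host names
  echo "== $h =="
  ssh "$h" "/home/hadoop/app/zookeeper/bin/zkServer.sh status"   # assumed install path
done

One node should report Mode: leader and the other two Mode: follower.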

3.1.2: Install Scala 2.11:

tar -xzvf scala-2.11.8.tgz -C ../app
chown -R hadoop:hadoop scala-2.11.8
ln -s scala-2.11.8 scala

Add the environment variables:
cat /etc/profile

export SCALA_HOME=/home/hadoop/app/scala-2.11.8
export PATH=$PATH:$SCALA_HOME/bin
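
After saving, reload the profile and verify the Scala installation:

source /etc/profile
scala -version    # should print the Scala 2.11.8 version string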

3.2 Kafka Installation and Deployment

3.2.1: Choosing a Kafka Version:

Kafka has no CDH 5.7.0 release; within CDH, Kafka is shipped as a separate component.
kafka_2.11-0.10.0.1.tgz
2.11 is the Scala version
0.10.0.1 is the Kafka version
http://mirror.bit.edu.cn/apache/kafka/
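
For example, the chosen tarball can be pulled from that mirror roughly like this (a sketch; the exact directory depends on which versions the mirror still keeps):

wget http://mirror.bit.edu.cn/apache/kafka/0.10.0.1/kafka_2.11-0.10.0.1.tgz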

3.2.2: Deploying Kafka:

tar -xzvf kafka_2.11-0.10.2.2.tgz -C ../app
ln -s kafka_2.11-0.10.2.2 kafka
First: Kafka data lands on the Linux disk, so start by creating a storage directory: mkdir logs
Next: configure the broker (config/server.properties):

# The id of the broker. This must be set to a unique integer for each broker.

broker.id=0 (assign a unique, sequential id on each machine)

Points that need attention on Alibaba Cloud:
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1/2/3
host.name=<Alibaba Cloud internal IP>
advertised.host.name=<Alibaba Cloud public IP>
advertised.port=9092

# A comma separated list of directories under which to store log files

log.dirs=/home/hadoop/app/kafka/logs

By default, the pseudo-distributed configuration points at a ZooKeeper running on the local machine.

# root directory for all kafka znodes.

zookeeper.connect=39.105.98.82:2181,39.105.123.53:2181,39.106.106.185:2181/kafka (registering under a /kafka chroot makes later cleanup easier)
zookeeper.connect=172.17.4.16:2181,172.17.4.17:2181,172.17.217.124:2181/kafka
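
Putting the pieces together, the per-broker part of config/server.properties would look roughly like this (a sketch for broker 1; the angle-bracket values are placeholders, broker.id must be unique on each machine, and the host.name/advertised.* lines are only needed on cloud hosts such as Alibaba Cloud):

# unique per broker: 1 / 2 / 3
broker.id=1
# only needed when the machine has separate internal and public addresses
host.name=<internal IP of this machine>
advertised.host.name=<public IP of this machine>
advertised.port=9092
# where Kafka writes its data
log.dirs=/home/hadoop/app/kafka/logs
# all brokers point at the same ZooKeeper ensemble, under the /kafka chroot
zookeeper.connect=172.17.4.16:2181,172.17.4.17:2181,172.17.217.124:2181/kafka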

Then check whether it starts and runs:

Check whether the startup script can be found on the PATH: which kafka-server-start.sh. Usually it cannot, so the path has to be given explicitly:
nohup bin/kafka-server-start.sh config/server.properties &
tail -F nohup.out
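
Once the broker is started on every node, a quick end-to-end check looks like this (a sketch; test_topic is a made-up topic name and the addresses reuse the internal IPs from the configuration above):

jps    # a "Kafka" process should be listed on each node
bin/kafka-topics.sh --create --zookeeper 172.17.4.16:2181/kafka \
  --replication-factor 3 --partitions 3 --topic test_topic
bin/kafka-topics.sh --list --zookeeper 172.17.4.16:2181/kafka
# type a few lines into the producer, then read them back with the consumer
bin/kafka-console-producer.sh --broker-list 172.17.4.16:9092 --topic test_topic
bin/kafka-console-consumer.sh --zookeeper 172.17.4.16:2181/kafka --topic test_topic --from-beginning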



Reposted from blog.csdn.net/weizhonggui/article/details/86701497