Big data components: Flume deployment, installation, and testing exercise

Environment

10.176.2.101 master
10.176.2.103 zjx03
10.176.2.105 zjx05

CentOS 6.5
ZooKeeper (CDH) 3.4.5
Hadoop (Apache) 2.7.7
MySQL 5.17
JDK 1.8.0_191
Sqoop 1.4.7 (2.x is unstable; use 1.x)
Flume 1.7.0 (installed on 10.176.2.105, zjx05)

Flume download

http://apache.mirrors.hoobly.com/flume/1.7.0/apache-flume-1.7.0-bin.tar.gz
https://mirrors.tuna.tsinghua.edu.cn/apache/flume/1.7.0/apache-flume-1.7.0-bin.tar.gz
The second (Tsinghua mirror) download is recommended:

mkdir -p /opt/softwares/
cd /opt/softwares/
# -c resumes a partial download; -P sets the target directory
wget -c https://mirrors.tuna.tsinghua.edu.cn/apache/flume/1.7.0/apache-flume-1.7.0-bin.tar.gz -P /opt/softwares/
tar -zxvf apache-flume-1.7.0-bin.tar.gz
rm -rf apache-flume-1.7.0-bin.tar.gz
Flume installation

Edit flume-env.sh

cd apache-flume-1.7.0-bin/
cd conf/
# copy the shell template (flume-env.sh.template), not the PowerShell one (flume-env.ps1.template)
cp flume-env.sh.template flume-env.sh
echo $JAVA_HOME
vim flume-env.sh
# Comment out the sample FLUME_CLASSPATH line and the export JAVA_OPTS line, then add JAVA_HOME and the variables below
export JAVA_HOME=/usr/lib/java/jdk1.8.0_191
export FLUME_HOME=/opt/softwares/apache-flume-1.7.0-bin
export FLUME_CLASSPATH=$FLUME_HOME/lib
export HADOOP_HOME=/opt/softwares/hadoop-2.7.7
export HADOOP_CLASSPATH=$HADOOP_HOME:$HADOOP_HOME/lib
# export JAVA_OPTS="-Xms100m -Xmx200m -Dcom.sun.management.jmxremote"
# FLUME_CLASSPATH=""   # Example:  "path1;path2;path3"

Environment variable configuration

[root@zjx05 bin]# pwd
/opt/softwares/apache-flume-1.7.0-bin/bin
[root@zjx05 bin]# vim /etc/profile
export FLUME_HOME=/opt/softwares/apache-flume-1.7.0-bin
export FLUME_CLASSPATH=$FLUME_HOME/lib:$CLASSPATH
export PATH=$FLUME_HOME/bin:$PATH
[root@zjx05 bin]# source /etc/profile
[root@zjx05 bin]# flume-ng version
Flume 1.7.0
Configure Flume

Create a new Flume configuration file, setting the source type to spooldir, the sink type to hdfs, and the channel type to memory.

[root@zjx05 softwares]# cd /mnt/
[root@zjx05 mnt]# touch flume.conf
[root@zjx05 mnt]# vim flume.conf 
# Name the agent's components
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Configure the source component
a1.sources.r1.type = spooldir
# Directory to monitor: while the agent is running, any file dropped
# into /mnt/zhoujixiang/flumeSpool is ingested and written to HDFS
a1.sources.r1.spoolDir = /mnt/zhoujixiang/flumeSpool/
a1.sources.r1.fileHeader = true
# Configure the sink component
a1.sinks.k1.type = hdfs
# Target path in HDFS
a1.sinks.k1.hdfs.path = /tmp/zhoujixiang/flume_data/
# Setting rollSize and rollCount to 0 disables size- and event-count-based
# rolling; files then roll only on hdfs.rollInterval (default: 30 seconds)
a1.sinks.k1.hdfs.rollSize = 0
a1.sinks.k1.hdfs.rollCount = 0
# Configure the channel component
a1.channels.c1.type = memory
a1.channels.c1.capacity = 5000
a1.channels.c1.transactionCapacity = 5000
# Bind the source and the sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
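The spooldir source treats the watched directory as a drop box: each file placed there is read once, its events are handed to the channel, and the file is renamed with a `.COMPLETED` suffix (Flume's default `fileSuffix`) so it is never re-read. A minimal Python sketch of that contract (a hypothetical simulation for illustration, not Flume's actual code):

```python
import os
import tempfile

COMPLETED_SUFFIX = ".COMPLETED"  # Flume's default spooldir fileSuffix

def ingest_spool_dir(spool_dir):
    """Read each unprocessed file once, then mark it completed,
    mimicking how Flume's spooldir source consumes a directory."""
    events = []
    for name in sorted(os.listdir(spool_dir)):
        if name.endswith(COMPLETED_SUFFIX):
            continue  # already ingested on an earlier pass
        path = os.path.join(spool_dir, name)
        with open(path) as f:
            events.extend(line.rstrip("\n") for line in f)
        os.rename(path, path + COMPLETED_SUFFIX)  # never re-read this file
    return events

if __name__ == "__main__":
    spool = tempfile.mkdtemp()
    with open(os.path.join(spool, "test_zjx_flume.log"), "w") as f:
        f.write("hello flume!\n")
    print(ingest_spool_dir(spool))  # events from the new file
    print(ingest_spool_dir(spool))  # second pass picks up nothing new
    print(os.listdir(spool))        # the file now carries the .COMPLETED suffix
```

One consequence of this rename-on-completion behavior: modifying or reusing a file name after it has been ingested causes the real spooldir source to fail, which is why the test below only ever drops new files into the directory.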
Create the Flume monitoring and storage directories

Create the local directory that Flume will monitor and copy a log file into it, then create the Flume storage directory in HDFS.

# Create the local directory Flume will monitor
[root@zjx05 mnt]# mkdir -p /mnt/zhoujixiang/flumeSpool
[root@zjx05 mnt]# cd /mnt/zhoujixiang/flumeSpool
# Create a test log file
[root@zjx05 flumeSpool]# cat /opt/softwares/hadoop-2.7.7/logs/hadoop-root-datanode-master.log > test_flume.log
# Create the Flume storage directory in HDFS and set its permissions to 777
[root@zjx05 flumeSpool]# hadoop fs -mkdir -p  /tmp/zhoujixiang/flume_data
[root@zjx05 flumeSpool]# hadoop fs -chmod -R 777  /tmp/zhoujixiang/flume_data
Run and test

Run the Flume agent to monitor the local directory. When a new file appears, its contents are collected into the HDFS storage directory.

[root@zjx05 bin]# cd /opt/softwares/apache-flume-1.7.0-bin/bin
[root@zjx05 bin]# ./flume-ng agent -n a1 -c $FLUME_HOME/conf -f /mnt/flume.conf
# Open a new session and drop a file into the flumeSpool directory
[root@zjx05 zhoujixiang]# cd flumeSpool/
[root@zjx05 flumeSpool]# vim test_zjx_flume.log
hello flume!
# Check whether Flume has collected the new file
[root@zjx03 ~]# hadoop fs -ls   /tmp/zhoujixiang/flume_data
-rw-r--r--   3 root supergroup        135 2019-01-21 21:53 /tmp/zhoujixiang/flume_data/FlumeData.1548078893343.tmp
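The trailing `.tmp` in the listing is Flume's default `hdfs.inUseSuffix`: it marks a file the sink is still writing, and is dropped when the file rolls. The name itself is the default `hdfs.filePrefix` (`FlumeData`) followed by a millisecond timestamp. A small Python check of that naming pattern (assumes the default prefix and suffix; illustrative only):

```python
import re

# Default naming used by the HDFS sink: <filePrefix>.<epoch-millis>[<inUseSuffix>]
IN_USE = re.compile(r"^FlumeData\.\d{13}\.tmp$")  # still being written
ROLLED = re.compile(r"^FlumeData\.\d{13}$")       # closed and renamed

def classify(name):
    """Label an HDFS-sink output file as in-use, rolled, or unrelated."""
    if IN_USE.match(name):
        return "in-use"
    if ROLLED.match(name):
        return "rolled"
    return "other"

if __name__ == "__main__":
    print(classify("FlumeData.1548078893343.tmp"))  # in-use
    print(classify("FlumeData.1548078893343"))      # rolled
```

So the `.tmp` file above is expected while the agent is running; with the rollSize/rollCount settings from flume.conf it should be renamed without the suffix once the roll interval elapses.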

References:
https://www.cnblogs.com/netbloomy/p/6666683.html
https://blog.csdn.net/qq_39160721/article/details/80255194

Reposted from blog.csdn.net/weixin_34380948/article/details/87052764