Flume Installation and Configuration

1. Download:

	Go to http://archive.cloudera.com/cdh5/cdh/5/ and locate flume-ng-1.6.0-cdh5.7.0.tar.gz.

On the Hadoop machine, cd to /home/hadoop/app and run wget http://archive.cloudera.com/cdh5/cdh/5/flume-ng-1.6.0-cdh5.7.0.tar.gz. After the download finishes, extract the archive, configure the FLUME_HOME environment variable, and run source ~/.bash_profile.
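The download-and-environment steps just described can be sketched as a short shell session. The download itself is shown as comments (it only runs once and needs network access to the archive); the FLUME_HOME path is the one used in step 1 below:

```shell
# One-time download and unpack (run on the Hadoop machine):
#   cd /home/hadoop/app
#   wget http://archive.cloudera.com/cdh5/cdh/5/flume-ng-1.6.0-cdh5.7.0.tar.gz
#   tar -xzf flume-ng-1.6.0-cdh5.7.0.tar.gz

# Persist FLUME_HOME and extend PATH, then reload the profile:
echo 'export FLUME_HOME=/home/hadoop/app/apache-flume-1.6.0-cdh5.7.0-bin' >> ~/.bash_profile
echo 'export PATH=$FLUME_HOME/bin:$PATH' >> ~/.bash_profile
. ~/.bash_profile   # same as: source ~/.bash_profile
```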

  1. export FLUME_HOME=/home/hadoop/app/apache-flume-1.6.0-cdh5.7.0-bin
    export PATH=$FLUME_HOME/bin:$PATH
  2. [hadoop@hadoop002 apache-flume-1.6.0-cdh5.7.0-bin]$ cd conf
    [hadoop@hadoop002 conf]$ ll
    -rw-r--r-- 1 hadoop hadoop 1253 Jun 11 10:34 flume-env.sh
  3. vi flume-env.sh
    # export JAVA_HOME=/usr/lib/jvm/java-6-sun
    export JAVA_HOME=/usr/java/jdk1.8.0_45  # new JAVA_HOME path
  4. vi examples.conf
    a1.sources = r1
    a1.sinks = k1
    a1.channels = c1
    #configure the netcat source
    a1.sources.r1.type=netcat
    a1.sources.r1.bind=localhost
    a1.sources.r1.port=44444
    #configure the memory channel
    a1.channels.c1.type=memory
    a1.channels.c1.capacity=10000
    a1.channels.c1.transactionCapacity=100
    #configure the logger sink
    a1.sinks.k1.type=logger
    #wire the source and sink to the channel
    a1.sources.r1.channels=c1
    a1.sinks.k1.channel=c1
  5. Start:
    flume-ng agent \
    --name a1 \
    --conf $FLUME_HOME/conf \
    --conf-file $FLUME_HOME/conf/examples.conf \
    -Dflume.root.logger=INFO,console
    Here --conf points to the $FLUME_HOME/conf configuration directory, --conf-file names the agent definition to start (examples.conf in $FLUME_HOME/conf), and the trailing option sets the root logger level to INFO and sends log messages to the console.
  6. After the agent starts, test it with telnet localhost 44444: type any characters and the agent replies OK. The configuration is complete.
    Note: the telnet service may need to be installed and configured first: https://www.cnblogs.com/zuochuang/p/6511285.html

2. Example:

1. Edit the flume-exec-hdfs.conf file:
vi flume-exec-hdfs.conf
exec-hdfs-agent.sources = exec-source
exec-hdfs-agent.sinks = hdfs-sink
exec-hdfs-agent.channels = memory-channel
#configure the exec source
exec-hdfs-agent.sources.exec-source.type=exec
exec-hdfs-agent.sources.exec-source.command=tail -F /home/hadoop/data/g6/data.log
exec-hdfs-agent.sources.exec-source.shell=/bin/sh -c
#describe the memory channel
exec-hdfs-agent.channels.memory-channel.type=memory
exec-hdfs-agent.channels.memory-channel.capacity=10000
exec-hdfs-agent.channels.memory-channel.transactionCapacity=100
#describe the hdfs sink
exec-hdfs-agent.sinks.hdfs-sink.type=hdfs
exec-hdfs-agent.sinks.hdfs-sink.hdfs.path=hdfs://hadoop002:9000/g6flume/tail
exec-hdfs-agent.sinks.hdfs-sink.hdfs.writeFormat=Text
exec-hdfs-agent.sinks.hdfs-sink.hdfs.fileType=DataStream
#wire the source and sink to the channel
exec-hdfs-agent.sources.exec-source.channels=memory-channel
exec-hdfs-agent.sinks.hdfs-sink.channel=memory-channel
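With only the settings above, the HDFS sink rolls files on Flume's defaults (every 30 seconds, 1024 bytes, or 10 events, whichever comes first), which tends to produce many small files under the tail directory. A sketch of the standard roll-control properties, with illustrative values not taken from the original (tune them for your workload):

```
#optional: control file rolling (values are examples)
exec-hdfs-agent.sinks.hdfs-sink.hdfs.rollInterval=60
exec-hdfs-agent.sinks.hdfs-sink.hdfs.rollSize=134217728
exec-hdfs-agent.sinks.hdfs-sink.hdfs.rollCount=0
exec-hdfs-agent.sinks.hdfs-sink.hdfs.filePrefix=events
```

Setting rollCount=0 disables event-count-based rolling, leaving time and size as the triggers.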
2. Start:
flume-ng agent \
--name exec-hdfs-agent \
--conf $FLUME_HOME/conf \
--conf-file $FLUME_HOME/conf/flume-exec-hdfs.conf \
-Dflume.root.logger=INFO,console
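To exercise the exec source, append lines to the tailed file from another terminal; tail -F picks them up and the sink writes them to HDFS. A minimal sketch, using /tmp/data.log as a stand-in path (the agent above tails /home/hadoop/data/g6/data.log):

```shell
# Append test records to the file watched by tail -F.
# DATA is a stand-in; on the real machine use /home/hadoop/data/g6/data.log.
DATA=/tmp/data.log
for i in 1 2 3; do
  echo "record $i $(date +%s)" >> "$DATA"
done
tail -3 "$DATA"   # confirm the new lines are there
```

Once the agent has run for a moment, the events should appear as DataStream files under /g6flume/tail (check with hdfs dfs -ls /g6flume/tail).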


Origin blog.csdn.net/zhikanjiani/article/details/91973663