Integrating Spark Streaming with Flume (Scala)

There are two ways to integrate Spark Streaming with Flume:

(1) Approach 1: Push

Steps:
1) Start the Spark Streaming job
2) Start the Flume agent
3) Send test data via telnet (e.g. telnet 01.server.bd 6666, the host and port the netcat source below listens on)

1. Write the Flume agent configuration:

$ vi $FLUME_HOME/conf/flume_push_streaming.conf

push-agent.sources = netcat-source
push-agent.sinks = avro-sink
push-agent.channels = memory-channel

push-agent.sources.netcat-source.type = netcat
push-agent.sources.netcat-source.bind = 01.server.bd
push-agent.sources.netcat-source.port = 6666

push-agent.sinks.avro-sink.type = avro
push-agent.sinks.avro-sink.hostname = 01.server.bd
push-agent.sinks.avro-sink.port = 5555

push-agent.channels.memory-channel.type = memory

push-agent.sources.netcat-source.channels = memory-channel
push-agent.sinks.avro-sink.channel = memory-channel

Starting the Flume agent:

1) Option 1: print the logs to the console (mostly used for testing)

flume-ng agent  \
--name push-agent   \
--conf $FLUME_HOME/conf    \
--conf-file $FLUME_HOME/conf/flume_push_streaming.conf  \
-Dflume.root.logger=INFO,console

2) Option 2: run in the background

nohup flume-ng agent \
--name push-agent \
--conf $FLUME_HOME/conf \
--conf-file $FLUME_HOME/conf/flume_push_streaming.conf \
> /dev/null 2>&1 &

Notes:

  • nohup + command + & explained

nohup: run the command immune to hangups, so it keeps running after the terminal is closed.

& (the trailing &): run the command in the background.

  • > /dev/null 2>&1 explained

/dev/null can be thought of as a "black hole": it behaves like a write-only file, anything written to it is discarded and cannot be read back.

> means redirect output to the given target.

1 is stdout (standard output); it is the default, so > /dev/null is equivalent to 1> /dev/null.

2 is stderr (standard error).

& here means "the same as": 2>&1 redirects stderr (2) to wherever stdout (1) is going.

2. Write the Spark Streaming program (FlumePushWordCount.scala)

package com.fyy.spark.streaming

import org.apache.spark.SparkConf
import org.apache.spark.streaming.flume.FlumeUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

/**
  * @Title: FlumePushWordCount
  * @ProjectName SparkStreamingProject
  * @Description: Spark Streaming + Flume integration, push approach
  * @author fanyanyan
  */
object FlumePushWordCount {
  def main(args: Array[String]): Unit = {
    if (args.length != 2) {
      System.err.println("Usage: FlumePushWordCount <hostname> <port>")
      System.exit(1)
    }

    val Array(hostname, port) = args
    val sparkConf = new SparkConf()
    val ssc = new StreamingContext(sparkConf, Seconds(5))

    // Integrate Spark Streaming with Flume (push approach: Flume's avro sink pushes events to this receiver)
    val flumeStream = FlumeUtils.createStream(ssc, hostname, port.toInt)
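    // The resulting DStream carries SparkFlumeEvent objects; event.getBody returns each payload as a ByteBuffer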

    flumeStream.map(x => new String(x.event.getBody.array()).trim)
      .flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _).print()

    ssc.start()
    ssc.awaitTermination()

  }

}
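
A side note: the SparkConf above does not set a master or app name, because spark-submit supplies them (see the command below). A minimal sketch of the equivalent setup for running the job directly from an IDE, assuming local mode (the app name is arbitrary):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Hypothetical local-mode setup for IDE testing. Push mode uses a receiver,
// so local[2] reserves one thread for the receiver and one for processing.
val sparkConf = new SparkConf()
  .setMaster("local[2]")
  .setAppName("FlumePushWordCount")
val ssc = new StreamingContext(sparkConf, Seconds(5))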

Run the program with spark-submit:

spark-submit \
--class com.fyy.spark.streaming.FlumePushWordCount \
--master local[2] \
--packages org.apache.spark:spark-streaming-flume_2.11:2.2.0 \
/home/hadoop/lib/SparkStreamingProject-1.0.jar \
01.server.bd 5555

(2) Approach 2: Pull

In this approach Flume pushes events into a custom SparkSink, where they stay buffered, and Spark Streaming uses a reliable receiver to pull the data from that sink.

1. Write the Flume agent configuration:

$ vi $FLUME_HOME/conf/flume_pull_streaming.conf

pull-agent.sources = netcat-source
pull-agent.sinks = spark-sink
pull-agent.channels = memory-channel

pull-agent.sources.netcat-source.type = netcat
pull-agent.sources.netcat-source.bind = 01.server.bd
pull-agent.sources.netcat-source.port = 6666

pull-agent.sinks.spark-sink.type = org.apache.spark.streaming.flume.sink.SparkSink
pull-agent.sinks.spark-sink.hostname = 01.server.bd
pull-agent.sinks.spark-sink.port = 5555

pull-agent.channels.memory-channel.type = memory

pull-agent.sources.netcat-source.channels = memory-channel
pull-agent.sinks.spark-sink.channel = memory-channel

Note: start Flume first, then start the Spark Streaming application. Also, per the official documentation, the spark-streaming-flume-sink JAR (and its Scala library dependency) must be on the Flume agent's classpath, otherwise the SparkSink class referenced above cannot be loaded.

Starting the Flume agent:

1) Option 1: print the logs to the console (mostly used for testing)

flume-ng agent  \
--name pull-agent   \
--conf $FLUME_HOME/conf    \
--conf-file $FLUME_HOME/conf/flume_pull_streaming.conf  \
-Dflume.root.logger=INFO,console

2) Option 2: run in the background

nohup flume-ng agent \
--name pull-agent \
--conf $FLUME_HOME/conf \
--conf-file $FLUME_HOME/conf/flume_pull_streaming.conf \
> /dev/null 2>&1 &

2. Write the Spark Streaming program (FlumePullWordCount.scala)

package com.fyy.spark.streaming

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.flume.FlumeUtils

/**
  * @Title: FlumePullWordCount
  * @ProjectName SparkStreamingProject
  * @Description: Spark Streaming + Flume integration, pull approach
  * @author fanyanyan
  */
object FlumePullWordCount {
  def main(args: Array[String]): Unit = {

    if (args.length != 2) {
      System.err.println("Usage: FlumePullWordCount <hostname> <port>")
      System.exit(1)
    }

    val Array(hostname, port) = args

    val sparkConf = new SparkConf()
    val ssc = new StreamingContext(sparkConf, Seconds(5))

    // Integrate Spark Streaming with Flume (pull approach: poll events from the SparkSink)
    val flumeStream = FlumeUtils.createPollingStream(ssc, hostname, port.toInt)

    flumeStream.map(x => new String(x.event.getBody.array()).trim)
      .flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _).print()

    ssc.start()
    ssc.awaitTermination()
  }

}
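
Besides the single hostname/port form used above, FlumeUtils.createPollingStream also has an overload that accepts a list of sink addresses, so one receiver can poll several SparkSinks. A minimal sketch, assuming a second (hypothetical) agent on 02.server.bd and reusing the ssc created above:

import java.net.InetSocketAddress
import org.apache.spark.storage.StorageLevel

// Poll two SparkSinks with a single receiver; the hosts and ports are placeholders
val addresses = Seq(
  new InetSocketAddress("01.server.bd", 5555),
  new InetSocketAddress("02.server.bd", 5555)
)
val flumeStream = FlumeUtils.createPollingStream(ssc, addresses, StorageLevel.MEMORY_AND_DISK_SER_2)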

Run the program with spark-submit:

spark-submit \
--class com.fyy.spark.streaming.FlumePullWordCount \
--master local[2] \
--packages org.apache.spark:spark-streaming-flume_2.11:2.2.0 \
/home/hadoop/lib/SparkStreamingProject-1.0.jar \
01.server.bd 5555

Note:

When running spark-submit, the machine must be able to reach the internet, because the library listed in --packages is downloaded at run time.
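
If internet access is not available, one alternative (a sketch, assuming the project is built with sbt and packaged as a fat/assembly jar) is to declare the dependency in build.sbt and drop the --packages flag:

// Hypothetical build.sbt entry; bundling the dependency into the application jar
// removes the need for --packages at submit time.
libraryDependencies += "org.apache.spark" %% "spark-streaming-flume" % "2.2.0"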

Official documentation:

http://spark.apache.org/docs/2.2.0/streaming-flume-integration.html


Reposted from blog.csdn.net/adayan_2015/article/details/88425496