Flume + Kafka + Spark Streaming demo

1. Prepare the Flume configuration
a1.sources = r1
a1.sinks = k1
a1.channels = c1

a1.sources.r1.type = spooldir
a1.sources.r1.channels = c1
a1.sources.r1.spoolDir = /var/log/test
a1.sources.r1.fileHeader = true

a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000
a1.channels.c1.transactionCapacity = 10000
a1.channels.c1.byteCapacityBufferPercentage = 20
a1.channels.c1.byteCapacity = 800000

a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.topic = spark
a1.sinks.k1.brokerList = master1:9092,master2:9092,slave3:9092
a1.sinks.k1.requiredAcks = 1
a1.sinks.k1.batchSize = 20
a1.sinks.k1.channel = c1

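Once the configuration is saved to a file (the name flume-kafka.conf below is just an example), the agent can be started with the standard flume-ng command:

flume-ng agent --conf conf --conf-file flume-kafka.conf --name a1 -Dflume.root.logger=INFO,console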

2. Spark code
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.{SparkConf, SparkContext}

object SparkStreamDemo {
  def main(args: Array[String]) {

    // local[2]: at least two threads are needed, one for the Kafka receiver and one for batch processing
    val conf = new SparkConf()
    conf.setAppName("spark_streaming")
    conf.setMaster("local[2]")

    val sc = new SparkContext(conf)
    sc.setLogLevel("ERROR")

    val ssc = new StreamingContext(sc, Seconds(5))
    // updateStateByKey below requires a checkpoint directory set on the StreamingContext
    ssc.checkpoint("D://checkpoints")

    // Receiver-based Kafka stream: ZooKeeper quorum, consumer group id, and (topic -> receiver threads)
    val topics = Map("spark" -> 2)
    val lines = KafkaUtils.createStream(ssc, "master2:2181,slave2:2181,slave4:2181", "spark", topics).map(_._2) // keep only the message value

    // Split each line into words and emit (word, 1) pairs
    val ds1 = lines.flatMap(_.split(" ")).map((_, 1))

    // Keep a running count per word across batches
    val ds2 = ds1.updateStateByKey[Int]((newCounts: Seq[Int], runningCount: Option[Int]) => {
      Some(newCounts.sum + runningCount.getOrElse(0))
    })

    ds2.print()

    ssc.start()
    ssc.awaitTermination()

  }
}
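The code above uses the receiver-based Kafka connector (KafkaUtils.createStream). A minimal build.sbt dependency sketch, assuming Spark 1.6.3 on Scala 2.10 (adjust the versions to whatever your cluster runs):

scalaVersion := "2.10.6"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"            % "1.6.3",
  "org.apache.spark" %% "spark-streaming"       % "1.6.3",
  "org.apache.spark" %% "spark-streaming-kafka" % "1.6.3"
)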

3. Notes
1. The Kafka topic is created automatically: if the topic named in the configuration does not exist at startup, a new one will be created.
2. Remember that Flume needs permission to read the spool directory: chown -R flume:flume /var/log/test
3. Drop a test file into the spool directory, e.g. echo "my my last test test test" > logs5 (see the sample output after this list).
4. The checkpoint directory in ssc.checkpoint("D://checkpoints") is a Windows-style path here; change it to a path that exists in your environment.
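With the test line from note 3 in place, each 5-second batch prints the cumulative word counts. The output of ds2.print() should look roughly like this (the batch time and the ordering of the tuples will differ):

-------------------------------------------
Time: <batch time> ms
-------------------------------------------
(my,2)
(test,3)
(last,1)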


Reposted from lakerhu.iteye.com/blog/2400377