Spark Series (XV) - Spark Streaming Integration with Flume

I. Introduction

Apache Flume is a distributed, highly available data collection system that can collect data from different sources, aggregate it, and deliver it to distributed computing frameworks or storage systems. Spark Streaming provides two ways to integrate with Flume.

II. The Push-Based Approach

In the push-based approach (Flume-style Push-based Approach), the Spark Streaming program listens on a port of a server, and Flume continuously pushes data to that port through an Avro sink. The following uses listening on a log file as an example; the integration steps are as follows:

2.1 Configure Flume for Log Collection

Create a new configuration file netcat-memory-avro.properties, which uses the tail command to monitor changes to a file and then sends the new content through an Avro sink to port 8888 of the hadoop001 server:

# Define the agent's sources, sinks, and channels
a1.sources = s1
a1.sinks = k1
a1.channels = c1

# Configure the source properties
a1.sources.s1.type = exec
a1.sources.s1.command = tail -F /tmp/log.txt
a1.sources.s1.shell = /bin/bash -c
a1.sources.s1.channels = c1

# Configure the sink
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = hadoop001
a1.sinks.k1.port = 8888
a1.sinks.k1.batch-size = 1
a1.sinks.k1.channel = c1

# Configure the channel type
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

2.2 Project Dependencies

The project is built with Maven; the main dependencies are spark-streaming and spark-streaming-flume.

<properties>
    <scala.version>2.11</scala.version>
    <spark.version>2.4.0</spark.version>
</properties>

<dependencies>
    <!-- Spark Streaming-->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming_${scala.version}</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <!-- Spark Streaming Flume integration dependency -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-streaming-flume_${scala.version}</artifactId>
        <version>2.4.3</version>
    </dependency>
</dependencies>

2.3 Receiving Log Data in Spark Streaming

Call the createStream method of the FlumeUtils utility class to listen on port 8888 of hadoop001, receive the incoming data stream, and print it:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.flume.FlumeUtils

object PushBasedWordCount {
    
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf()
    val ssc = new StreamingContext(sparkConf, Seconds(5))
    // 1. Get the input stream
    val flumeStream = FlumeUtils.createStream(ssc, "hadoop001", 8888)
    // 2. Print the data from the input stream
    flumeStream.map(line => new String(line.event.getBody.array()).trim).print()

    ssc.start()
    ssc.awaitTermination()
  }
}

2.4 Packaging the Project

Because the Spark installation directory does not include the spark-streaming-flume dependency, it must be provided when the job is submitted to a cluster. You can either use the --jars option of the submit command to point to dependency jars already uploaded to the server, or use --packages org.apache.spark:spark-streaming-flume_2.11:2.4.3 to specify the full coordinates of the dependency, in which case it is downloaded from the central repository at startup.
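As a rough sketch of the --packages option (the class name and application jar path below reuse the values from section 2.5 and should be adjusted to your own environment):

# Resolve spark-streaming-flume and its transitive dependencies
# from the central repository when the job starts
spark-submit \
--class com.heibaiying.flume.PushBasedWordCount \
--master local[4] \
--packages org.apache.spark:spark-streaming-flume_2.11:2.4.3 \
/usr/appjar/spark-streaming-flume-1.0.jar

The --jars option works in a similar way but takes a comma-separated list of jar paths that already exist on the machine and does not resolve transitive dependencies, which is why --packages or a shaded jar is usually more convenient here.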

Here a third approach is used: the maven-shade-plugin builds an all-in-one package that bundles all dependency jars into the final jar. Note that the spark-streaming package is already provided in the jars directory of the Spark installation, so it does not need to be bundled. The plugin configuration is as follows:

<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <configuration>
                <source>8</source>
                <target>8</target>
            </configuration>
        </plugin>
        <!-- Use the shade plugin for packaging -->
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-shade-plugin</artifactId>
            <configuration>
                <createDependencyReducedPom>true</createDependencyReducedPom>
                <filters>
                    <filter>
                        <artifact>*:*</artifact>
                        <excludes>
                            <exclude>META-INF/*.SF</exclude>
                            <exclude>META-INF/*.sf</exclude>
                            <exclude>META-INF/*.DSA</exclude>
                            <exclude>META-INF/*.dsa</exclude>
                            <exclude>META-INF/*.RSA</exclude>
                            <exclude>META-INF/*.rsa</exclude>
                            <exclude>META-INF/*.EC</exclude>
                            <exclude>META-INF/*.ec</exclude>
                            <exclude>META-INF/MSFTSIG.SF</exclude>
                            <exclude>META-INF/MSFTSIG.RSA</exclude>
                        </excludes>
                    </filter>
                </filters>
                <artifactSet>
                    <excludes>
                        <exclude>org.apache.spark:spark-streaming_${scala.version}</exclude>
                        <exclude>org.scala-lang:scala-library</exclude>
                        <exclude>org.apache.commons:commons-lang3</exclude>
                    </excludes>
                </artifactSet>
            </configuration>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>shade</goal>
                    </goals>
                    <configuration>
                        <transformers>
                            <transformer 
                              implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
                            <transformer 
                              implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                            </transformer>
                        </transformers>
                    </configuration>
                </execution>
            </executions>
        </plugin>
        <!-- This plugin is required to compile and package the .scala source files -->
        <plugin>
            <groupId>org.scala-tools</groupId>
            <artifactId>maven-scala-plugin</artifactId>
            <version>2.15.1</version>
            <executions>
                <execution>
                    <id>scala-compile</id>
                    <goals>
                        <goal>compile</goal>
                    </goals>
                    <configuration>
                        <includes>
                            <include>**/*.scala</include>
                        </includes>
                    </configuration>
                </execution>
                <execution>
                    <id>scala-test-compile</id>
                    <goals>
                        <goal>testCompile</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

See the full source code of the project: spark-streaming-flume

After packaging with the mvn clean package command, the following two jar files are produced; submit the one whose name does not start with original.
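Based on the artifact name and version used by this project and by the submit command below, the target directory looks roughly as follows (the names are an assumption; the original- prefix is added by the shade plugin to the unshaded jar):

ls target/
# original-spark-streaming-flume-1.0.jar    thin jar without bundled dependencies
# spark-streaming-flume-1.0.jar             shaded all-in-one jar; submit this one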

2.5 Start the Service and Submit the Job

Start the Flume service:

flume-ng agent \
--conf conf \
--conf-file /usr/app/apache-flume-1.6.0-cdh5.15.2-bin/examples/netcat-memory-avro.properties \
--name a1 -Dflume.root.logger=INFO,console

Submit the Spark Streaming job:

spark-submit \
--class com.heibaiying.flume.PushBasedWordCount \
--master local[4] \
/usr/appjar/spark-streaming-flume-1.0.jar

2.6 Test

Here the echo command is used to simulate log generation by appending data to the monitored log file, after which the program output is checked.
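For example, appending a couple of test lines to the file monitored by the exec source (the file path comes from the Flume configuration above; the text itself is arbitrary):

echo "hello spark streaming" >> /tmp/log.txt
echo "hello flume" >> /tmp/log.txt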

The Spark Streaming program successfully receives the data and prints it to the console.

2.7 Considerations

1. Startup order

Note that no matter whether the Flume agent or the Spark Streaming program is started first, because both take some time to start, the one started first will briefly throw connection-refused exceptions for the port. In that case no action is required; simply wait until both programs have finished starting.

2. Version consistency

It is best to keep the Scala version used for local development and compilation consistent with the Scala version of the Spark distribution, or at least to keep the major versions consistent; here both are 2.11.
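As a quick sanity check, the version banner printed by spark-submit shows which Scala version the installed Spark distribution was built against (the exact output varies between distributions):

# Prints the Spark version and a line such as "Using Scala version 2.11.12, ..."
spark-submit --version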


III. The Pull-Based Approach

In the pull-based approach (Pull-based Approach using a Custom Sink), Flume pushes data to a SparkSink receiver, where it stays buffered; Spark Streaming then periodically pulls the data from that receiver using a reliable receiver and transactions. With this approach, the buffered data is deleted only after Spark Streaming has received and replicated it, so compared with the first approach it provides stronger reliability and fault-tolerance guarantees. The integration steps are as follows:

3.1 Configure Flume for Log Collection

Create a new Flume configuration file netcat-memory-sparkSink.properties. The configuration is basically the same as above, except that the a1.sinks.k1.type property is changed to org.apache.spark.streaming.flume.sink.SparkSink, i.e. the Spark receiver sink is used.

# Define the agent's sources, sinks, and channels
a1.sources = s1
a1.sinks = k1
a1.channels = c1

# Configure the source properties
a1.sources.s1.type = exec
a1.sources.s1.command = tail -F /tmp/log.txt
a1.sources.s1.shell = /bin/bash -c
a1.sources.s1.channels = c1

# Configure the sink
a1.sinks.k1.type = org.apache.spark.streaming.flume.sink.SparkSink
a1.sinks.k1.hostname = hadoop001
a1.sinks.k1.port = 8888
a1.sinks.k1.batch-size = 1
a1.sinks.k1.channel = c1

# Configure the channel type
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

3.2 Add Dependencies

The pull-based approach requires the following two additional dependencies:

<dependency>
    <groupId>org.scala-lang</groupId>
    <artifactId>scala-library</artifactId>
    <version>2.11.12</version>
</dependency>
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
    <version>3.5</version>
</dependency>

Note: these two dependencies are added only for local testing. The Spark installation directory already provides both of them, so they must be excluded when building the final package.

3.3 Receiving Log Data in Spark Streaming

The code here is basically the same as in the push-based approach above, except that the method called is changed to createPollingStream:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.flume.FlumeUtils

object PullBasedWordCount {

  def main(args: Array[String]): Unit = {

    val sparkConf = new SparkConf()
    val ssc = new StreamingContext(sparkConf, Seconds(5))
    // 1. Get the input stream
    val flumeStream = FlumeUtils.createPollingStream(ssc, "hadoop001", 8888)
    // 2. Print the data from the input stream
    flumeStream.map(line => new String(line.event.getBody.array()).trim).print()
    ssc.start()
    ssc.awaitTermination()
  }
}

3.4 Start and Test

The startup and job-submission process is the same as above; the execution scripts are given here and the details are not repeated.

Start Flume for log collection:

flume-ng agent \
--conf conf \
--conf-file /usr/app/apache-flume-1.6.0-cdh5.15.2-bin/examples/netcat-memory-sparkSink.properties \
--name a1 -Dflume.root.logger=INFO,console

Submit the Spark Streaming job:

spark-submit \
--class com.heibaiying.flume.PullBasedWordCount \
--master local[4] \
/usr/appjar/spark-streaming-flume-1.0.jar

References

For more articles in the big data series, see the GitHub open source project 大数据入门指南 (Big Data Getting Started Guide).
