Copyright notice: https://blog.csdn.net/xianpanjia4616/article/details/82504001
In the previous post we finished setting up the Flink cluster, so now let's write a WordCount example. Let's go straight to the code.
The pom dependencies are as follows:
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-scala_2.11</artifactId>
    <version>1.6.0</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-clients_2.11</artifactId>
    <version>1.6.0</version>
</dependency>
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-scala_2.11</artifactId>
    <version>1.6.0</version>
</dependency>
The WordCount code is as follows:
package flink

import org.apache.flink.api.scala._

/**
 * A simple Flink WordCount (local execution, DataSet API).
 */
object flinkwordcount {
  def main(args: Array[String]): Unit = {
    // Local environment with parallelism 1.
    val env = ExecutionEnvironment.createLocalEnvironment(1)
    val text = env.readTextFile("D:/test.txt")
    val word = text
      .flatMap(_.split(" "))  // split each line into words
      .map((_, 1))            // pair each word with a count of 1
      .groupBy(0)             // group by the word (tuple field 0)
      .sum(1)                 // sum the counts (tuple field 1)
    // print() is an eager sink in the DataSet API: it triggers execution itself.
    word.print()
    word.writeAsText("D:/result.txt")
    // Runs the job again for the writeAsText sink registered above.
    env.execute("flink wordcount demo")
  }
}
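Semantically, the transformation chain above is the classic map/reduce word count: `groupBy(0).sum(1)` groups the (word, 1) tuples by the word field and sums the count field per group. As a sketch of that logic (not Flink's actual runtime behavior), the same computation in plain Scala collections, with no Flink dependency, looks like this; `WordCountLogic` and `wordCount` are names chosen here for illustration:

```scala
object WordCountLogic {
  // Mirrors the Flink pipeline: flatMap -> map to (word, 1) ->
  // group by the word -> sum the counts within each group.
  def wordCount(lines: Seq[String]): Map[String, Int] =
    lines
      .flatMap(_.split(" "))
      .map((_, 1))
      .groupBy(_._1)
      .map { case (word, pairs) => (word, pairs.map(_._2).sum) }

  def main(args: Array[String]): Unit =
    // Print the counts sorted by word for stable output.
    wordCount(Seq("a b a", "b c")).toSeq.sortBy(_._1).foreach(println)
}
```

For the input lines "a b a" and "b c", this yields a count of 2 for "a", 2 for "b", and 1 for "c", which is exactly what the Flink job would write per word.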
As you can see, Flink's WordCount is very similar to Spark's, almost identical. You can run it directly in IDEA.
If anything here is wrong, corrections are welcome. If you have questions, feel free to join QQ group 340297350. Thanks!