Big Data Learning Journey 69: Writing a Spark WordCount Program in Scala

First, create a Maven project.

So that the project can contain both Scala and Java code, we use the following pom.xml:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.xiaoniu</groupId>
    <artifactId>spark</artifactId>
    <version>1.0-SNAPSHOT</version>
    <properties>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
        <encoding>UTF-8</encoding>
    </properties>
    <dependencies>
        <!-- https://mvnrepository.com/artifact/org.scala-lang/scala-library -->
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>2.11.8</version>
        </dependency>

        <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-core -->
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
            <version>2.3.1</version>
        </dependency>

        <!-- https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-client -->
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>2.8.4</version>
        </dependency>
    </dependencies>

    <build>
        <pluginManagement>
            <plugins>
                <plugin>
                    <groupId>net.alchim31.maven</groupId>
                    <artifactId>scala-maven-plugin</artifactId>
                    <version>3.2.2</version>
                </plugin>

                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-compiler-plugin</artifactId>
                    <version>3.5.1</version>
                </plugin>

            </plugins>

        </pluginManagement>
        <plugins>
            <plugin>
                <groupId>net.alchim31.maven</groupId>
                <artifactId>scala-maven-plugin</artifactId>
                <executions>
                    <execution>
                        <id>scala-compile-first</id>
                        <phase>process-resources</phase>
                        <goals>
                            <goal>add-source</goal>
                            <goal>compile</goal>
                        </goals>
                    </execution>
                    <execution>
                        <id>scala-test-compile</id>
                        <phase>process-test-resources</phase>
                        <goals>
                            <goal>testCompile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>

            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <executions>
                    <execution>
                        <phase>compile</phase>
                        <goals>
                            <goal>compile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>

            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-shade-plugin</artifactId>
                <version>3.1.1</version>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>shade</goal>
                        </goals>
                        <configuration>
                            <filters>
                                <filter>
                                    <artifact>*:*</artifact>
                                    <excludes>
                                        <exclude>META-INF/*.SF</exclude>
                                        <exclude>META-INF/*.DSA</exclude>
                                        <exclude>META-INF/*.RSA</exclude>
                                    </excludes>
                                </filter>
                            </filters>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>


    </build>

</project>

If the plugins are underlined in red, you can delete all the libraries under Project Structure and re-import the Maven project.
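
The scala-maven-plugin's add-source goal registers src/main/scala as an additional source root, so a mixed Scala/Java project typically looks like this (layout shown for illustration, matching the package name used in the code below):

spark/
├── pom.xml
└── src/
    └── main/
        ├── java/                  (Java sources, optional)
        └── scala/
            └── com/test/day01/
                └── ScalaWordCount.scala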

When we start writing the program, two configuration calls are worth explaining:

setAppName("ScalaWordCount") gives the application a name (displayed in the Spark UI), and
setMaster("local") sets the master to local mode, so the job runs inside the current JVM rather than on a cluster.

The full code is below:

package com.test.day01

import org.apache.spark.{SparkConf, SparkContext}

object ScalaWordCount {
  def main(args: Array[String]): Unit = {
    // create a SparkConf and set the configuration
    val conf = new SparkConf().setAppName("ScalaWordCount").setMaster("local")
    // sc is the SparkContext, the entry point of every Spark program
    val sc = new SparkContext(conf)
    // the Spark job: read, split, count, sort descending, save
    sc.textFile(args(0)).flatMap(_.split(" ")).map((_, 1))
      .reduceByKey(_ + _).sortBy(_._2, false).saveAsTextFile(args(1))
    // release resources
    sc.stop()
  }
}
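
The program takes the input path as args(0) and the output path as args(1); note that the output directory must not already exist, or Spark will refuse to write. After packaging with mvn package (the shade plugin builds an executable fat jar), a submission might look like this (paths are placeholders):

spark-submit \
  --class com.test.day01.ScalaWordCount \
  --master local \
  target/spark-1.0-SNAPSHOT.jar \
  /path/to/input.txt /path/to/output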

However, a single chained expression like this is hard to read, so let's rewrite it step by step:

import org.apache.spark.rdd.RDD  // needed for the RDD type annotations below

// specify where to read the data from and create an RDD
val lines: RDD[String] = sc.textFile(args(0))

Here an RDD (Resilient Distributed Dataset) is a collection of data distributed across multiple machines, and the String element type means each element is one line of the file. In short, an RDD is a distributed dataset.

// split each line into words and flatten the result
val words: RDD[String] = lines.flatMap(_.split(" "))
// put each word and the number 1 into a tuple
val wordAndOne: RDD[(String, Int)] = words.map((_, 1))
// aggregate the counts per word
val reduced: RDD[(String, Int)] = wordAndOne.reduceByKey(_ + _)
// sort by count in descending order, as in the one-liner above
val sorted: RDD[(String, Int)] = reduced.sortBy(_._2, false)
// save the result
sorted.saveAsTextFile(args(1))
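
Putting the steps together, the refactored program (the same logic as the one-liner, now with named intermediate RDDs):

package com.test.day01

import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

object ScalaWordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("ScalaWordCount").setMaster("local")
    val sc = new SparkContext(conf)

    // read the input file; each element of the RDD is one line
    val lines: RDD[String] = sc.textFile(args(0))
    // split each line on spaces and flatten into words
    val words: RDD[String] = lines.flatMap(_.split(" "))
    // pair each word with the count 1
    val wordAndOne: RDD[(String, Int)] = words.map((_, 1))
    // sum the counts per word
    val reduced: RDD[(String, Int)] = wordAndOne.reduceByKey(_ + _)
    // sort by count in descending order
    val sorted: RDD[(String, Int)] = reduced.sortBy(_._2, false)
    // save; each output line is the tuple's string form
    sorted.saveAsTextFile(args(1))

    sc.stop()
  }
}

For an input containing "hello world hello", the output part files would contain lines like (hello,2) and (world,1).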
