Using Spark to Implement Word Count


1. Preparation for word count


  1. Word count is the classic introductory program for distributed computing and can be implemented in many ways, for example with MapReduce; using the RDD operators provided by Spark makes it even easier to implement.
  2. Create a Maven-managed Spark project in IntelliJ IDEA and write the Spark WordCount program in Scala in that project. The project can be run locally to check the results, or packaged and submitted to the Spark cluster (Standalone mode) to run.

(1) Version selection

The Spark cluster (Standalone mode) created earlier uses Spark 3.3.2.
Spark 3.3.2 uses the Scala 2.13 library, but the Scala version used inside spark-shell is 2.12.15.
So that the Spark project, once packaged as a jar, can be submitted to this Spark cluster and run, Scala 2.12.15 must be installed locally.

Since the Spark project requires that the Spark core version and the Scala library version (major.minor) be consistent, the project cannot be run locally otherwise. Starting from Spark 3.2.0 the Scala library version moves to 2.13, while Spark 3.1.3 still uses the Scala 2.12 library, so this Spark project uses Spark 3.1.3.

If the Spark project is based on JDK 11, it runs fine locally, but it reports an error when it is packaged as a jar and submitted to the cluster to run.
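
As a quick sanity check, the Spark and Scala versions of a running Spark build can be printed from spark-shell; a minimal sketch (the version values in the comments are examples, not taken from the screenshots):

// Run inside spark-shell on the cluster; sc is the SparkContext created automatically
println(sc.version)                          // the Spark version, for example 3.1.3
println(scala.util.Properties.versionString) // the Scala library version, for example "version 2.12.15"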

(2) Install Scala 2.12.15

Download Scala 2.12.15 from the Scala official website: https://www.scala-lang.org/download/2.12.15.html

(3) Start HDFS and Spark on the cluster

Start the HDFS service
Start the Spark cluster

(4) Prepare word files on HDFS

Create a word file, words.txt, under the /Txt directory on the master virtual machine.

Upload the word file to the designated HDFS directory /wordcount/input.
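
Once the file is uploaded, a quick way to confirm that Spark can read it is to count its lines from spark-shell; a small sketch that assumes the /wordcount/input directory used above:

// Run inside spark-shell; sc already exists
val lines = sc.textFile("hdfs://master:9000/wordcount/input/words.txt")
println(lines.count()) // the number of lines in words.txt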

2. Run the Spark project in local mode

(1) Create a new Maven project

Create a new Maven project. Note that it is based on JDK 8. Set the project information (project name, save location, group ID and artifact ID), then click the [Create] button.

Change the java directory to the scala directory.

The source program directory becomes scala

(2) Add project-related dependencies

Add the dependencies in the pom.xml file and declare that the source directory has been changed to scala:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
          http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>net.army.rdd</groupId>
    <artifactId>SparkRDDWordCount</artifactId>
    <version>1.0-SNAPSHOT</version>

    <dependencies>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>2.12.15</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.12</artifactId>
            <version>3.1.3</version>
        </dependency>
    </dependencies>
    <build>
        <sourceDirectory>src/main/scala</sourceDirectory>
    </build>
</project>

Since the source program directory has been changed to scala, a sourceDirectory sub-element must be added to the build element in pom.xml, specifying the directory src/main/scala.

(3) Create a log property file

Create a log properties file in the resources directory - log4j.properties


log4j.rootLogger=ERROR, stdout, logfile
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %p [%c] - %m%n
log4j.appender.logfile=org.apache.log4j.FileAppender
log4j.appender.logfile.File=target/spark.log
log4j.appender.logfile.layout=org.apache.log4j.PatternLayout
log4j.appender.logfile.layout.ConversionPattern=%d %p [%c] - %m%n

(4) Add Scala SDK

We installed and configured Scala 2.12.15 earlier.

Add Scala 2.12.15 to Global Libraries in the project structure window

(5) Create HDFS configuration file

Create an hdfs-site.xml file in the resources directory that tells the HDFS client to connect to the datanodes by hostname (needed because the local machine accesses the cluster on the private cloud from an external network).

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <property>
        <description>only config in clients</description>
        <name>dfs.client.use.datanode.hostname</name>
        <value>true</value>
    </property>
</configuration>

Without this configuration file, running the word count program fails with an error such as: Failed to connect to /192.168.1.102:9866 for file /wordcount/input/words.txt
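
If you prefer not to ship an hdfs-site.xml with the project, the same client-side setting can also be applied programmatically; the following is a minimal self-contained sketch (the object name and the test read are assumptions for illustration only):

import org.apache.spark.{SparkConf, SparkContext}

// Sketch: apply the HDFS client setting in code instead of shipping hdfs-site.xml
object DatanodeHostnameDemo {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("DatanodeHostnameDemo").setMaster("local[*]")
    val sc = new SparkContext(conf)
    // Same effect as dfs.client.use.datanode.hostname=true in hdfs-site.xml;
    // it must be set before the first HDFS file is read
    sc.hadoopConfiguration.set("dfs.client.use.datanode.hostname", "true")
    println(sc.textFile("hdfs://master:9000/wordcount/input/words.txt").count())
    sc.stop()
  }
}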

(6) Create the word count singleton object

Create a WordCount singleton object under the net.army.rdd package.

package net.army.rdd

import org.apache.spark.{SparkConf, SparkContext}
/**
 * Author: 梁辰兴
 * Date: 2023/4/26
 * Purpose: Implement word count with RDDs
 */
object WordCount {
  def main(args: Array[String]): Unit = {
    // Create the Spark configuration object
    val conf = new SparkConf()
      .setAppName("SparkRDDWordCount") // Set the application name
      .setMaster("local[*]") // Set the master URL (local debugging)
    // Create the Spark context from the configuration object
    val sc = new SparkContext(conf)
    // Define the input path
    val inputPath = "hdfs://master:9000/wordcount/input"
    // Define the output path
    val outputPath = "hdfs://master:9000/wordcount/output"
    // Perform the word count
    val wc = sc.textFile(inputPath) // Read the files into an RDD
      .flatMap(_.split(" ")) // Flat-map each line into an array of words
      .map((_, 1)) // Map each word to the pair (word, 1)
      .reduceByKey(_ + _) // Aggregate by key (values with the same key are summed)
      .sortBy(_._2, false) // Sort by word count in descending order
    // Print the word count results to the console
    wc.collect().foreach(println)
    // Write the word count results to the specified path
    wc.saveAsTextFile(outputPath)
    // Stop the Spark context and finish the job
    sc.stop()
  }
}
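
The split on a single space is enough for the sample file. As an assumed refinement that is not part of the original program, splitting on any run of whitespace and filtering out empty tokens makes the count more tolerant of tabs and repeated spaces; the snippet below is a drop-in replacement for the val wc = ... lines in WordCount above:

// Assumed variant of the word-count pipeline: tolerate arbitrary whitespace
val wc = sc.textFile(inputPath)
  .flatMap(_.split("\\s+"))        // split on any run of whitespace, not just a single space
  .filter(_.nonEmpty)              // drop empty tokens produced by leading whitespace
  .map((_, 1))                     // pair each word with 1
  .reduceByKey(_ + _)              // sum the counts per word
  .sortBy(_._2, ascending = false) // sort by count in descending order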

(7) Run the program and view the results

First, look at the console output.
Then check the result files on HDFS.

Display the contents of the result files.

There are two result files; we can view the contents of each.

Run the program again and it reports an error saying that the output directory already exists.

Execute the command hdfs dfs -rm -r /wordcount/output to delete the output directory.
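
To avoid deleting the output directory by hand before every run, the directory can also be removed from within the program through the Hadoop FileSystem API. This is an optional sketch, not part of the original program; it assumes it is placed inside WordCount.main before wc.saveAsTextFile(outputPath), where sc and outputPath are already defined:

import org.apache.hadoop.fs.{FileSystem, Path}

// Optional: recursively delete a stale output directory before writing new results
val fs = FileSystem.get(sc.hadoopConfiguration)
val out = new Path(outputPath)
if (fs.exists(out)) {
  fs.delete(out, true)
}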

Run the program again and check the results.

(8) Analyzing the program code

1. The Spark configuration object

The setMaster() method of the SparkConf object sets the master URL to which the Spark application is submitted. For the Standalone cluster mode it is the access address of the Master node; for local (single-machine) mode, the address is changed to local, local[N], or local[*], meaning that 1, N, or all available CPU cores are used, respectively. In local mode the program can be run directly in the IDE without a Spark cluster.

This setting can also be omitted. If it is omitted, the --master option must be used to specify the master when the program is submitted to the cluster with spark-submit.
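
A minimal sketch of the corresponding change in WordCount, with setMaster() left out (an assumed variant, not the listing above):

// Assumed variant of the configuration in WordCount with setMaster() omitted
val conf = new SparkConf()
  .setAppName("SparkRDDWordCount") // only the application name is set here
val sc = new SparkContext(conf)    // the master URL must now come from spark-submit

The program would then be submitted with the master given explicitly, for example: spark-submit --master spark://master:7077 --class net.army.rdd.WordCount SparkRDDWordCount.jar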

2. The Spark context object

The SparkContext object initializes the core components required to run a Spark application and is one of the most important objects in the entire application. The object named sc that is created by default when spark-shell starts is exactly this object.

3. The method for reading text files

The textFile() method takes the path of the data source. The data source can be an external data source (HDFS, S3, and so on) or the local file system (Windows or Linux), and the path can be given in the following three ways.

  • File path: for example textFile("/wordcount/input/words.txt"), which reads only the specified file.
  • Directory path: for example textFile("/wordcount/input/"), which reads all files directly under the input directory, excluding subdirectories.
  • Path with a wildcard: for example textFile("/wordcount/input/*.txt"), which reads all TXT files under the input directory.

This method splits the contents of the files it reads into lines and builds an RDD from them.
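
The three path forms side by side, as a short sketch that assumes the HDFS layout used in this article and an existing SparkContext named sc (for example in spark-shell):

// Single file: only words.txt is read
val singleFile = sc.textFile("hdfs://master:9000/wordcount/input/words.txt")
// Directory: every file directly under input is read (subdirectories excluded)
val wholeDir = sc.textFile("hdfs://master:9000/wordcount/input/")
// Wildcard: only the .txt files under input are read
val txtOnly = sc.textFile("hdfs://master:9000/wordcount/input/*.txt")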

(9) Modify the program to use command-line arguments

package net.army.rdd

import org.apache.spark.{SparkConf, SparkContext}
/**
 * Author: 梁辰兴
 * Date: 2023/4/26
 * Purpose: Implement word count with RDDs
 */
object WordCount {
  def main(args: Array[String]): Unit = {
    // Create the Spark configuration object
    val conf = new SparkConf()
      .setAppName("SparkRDDWordCount") // Set the application name
      .setMaster("local[*]") // Set the master URL (local debugging)
    // Create the Spark context from the configuration object
    val sc = new SparkContext(conf)
    // Declare the input and output paths
    var inputPath = ""
    var outputPath = ""
    // Check the number of command-line arguments
    if (args.length == 0) {
      inputPath = "hdfs://master:9000/wordcount/input"
      outputPath = "hdfs://master:9000/wordcount/output"
    } else if (args.length == 2) {
      inputPath = args(0)
      outputPath = args(1)
    } else {
      println("Note: the number of command-line arguments must be 0 or 2.")
      return
    }
    // Perform the word count
    val wc = sc.textFile(inputPath) // Read the files into an RDD
      .flatMap(_.split(" ")) // Flat-map each line into an array of words
      .map((_, 1)) // Map each word to the pair (word, 1)
      .reduceByKey(_ + _) // Aggregate by key (values with the same key are summed)
      .sortBy(_._2, false) // Sort by word count in descending order
    // Print the word count results to the console
    wc.collect().foreach(println)
    // Write the word count results to the specified path
    wc.saveAsTextFile(outputPath)
    // Stop the Spark context and finish the job
    sc.stop()
  }
}

Create the file /Txt/test1.txt and upload it to the designated HDFS directory.
Open the run configuration window and configure the command-line arguments; note that the two arguments must be separated by a space.
Run the program and check the results.
Set only one command-line argument.
Run the program and check the results.

3. Run the Spark project in cluster mode

(1) Package with Maven

Add the following content to the pom.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
          http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>net.army.rdd</groupId>
    <artifactId>SparkRDDWordCount</artifactId>
    <version>1.0-SNAPSHOT</version>

    <dependencies>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>2.12.15</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.12</artifactId>
            <version>3.1.3</version>
        </dependency>
    </dependencies>
    <build>
        <sourceDirectory>src/main/scala</sourceDirectory>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-assembly-plugin</artifactId>
                <version>3.3.0</version>
                <configuration>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                </configuration>
                <executions>
                    <execution>
                        <id>make-assembly</id>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>net.alchim31.maven</groupId>
                <artifactId>scala-maven-plugin</artifactId>
                <version>3.3.2</version>
                <executions>
                    <execution>
                        <id>scala-compile-first</id>
                        <phase>process-resources</phase>
                        <goals>
                            <goal>add-source</goal>
                            <goal>compile</goal>
                        </goals>
                    </execution>
                    <execution>
                        <id>scala-test-compile</id>
                        <phase>process-test-resources</phase>
                        <goals>
                            <goal>testCompile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>

Click the refresh button; the two plugins have been added.
Double-click the package command under Lifecycle in the Maven tool window.

(2) Package with IDEA

Delete the build plugins from the pom.xml file.
Click the refresh button; the two build plugins are now gone.
Open the Project Structure window and select the Artifacts section.
In the JAR submenu, choose the second item, From modules with dependencies…, and set the main class and the JAR file.
Click the [OK] button.
Change the name and remove all the dependency packages from the output layout.
Click the [OK] button.
Build the artifact.
After clicking [Build], an out directory appears in the project.
Because the dependency packages were not added to the generated jar, the jar is very small, only 5 KB. If all the dependencies were packed into the jar, it would be several tens of megabytes.
Upload the generated jar to the /home directory of the master virtual machine.
Check the uploaded jar.

(3) Execute the submit command

1. Run without arguments

(1) Submit in client mode

Execute the command: spark-submit --master spark://master:7077 --class net.army.rdd.WordCount SparkRDDWordCount.jar

Find the word count results among the output messages.
View the contents of the result files.
Delete the output directory.

(2) Submit in cluster mode

First, upload the word count jar to the designated HDFS directory.
Execute the command: spark-submit --master spark://master:7077 --deploy-mode cluster --class net.army.rdd.WordCount --driver-memory 512m --executor-memory 1g --executor-cores 2 hdfs://master:9000/park/SparkRDDWordCount.jar


View it in the Spark WebUI (Driver running on 192.168.1.102:45870, which indicates that the Driver is running on the slave1 node).
Click the stdout hyperlink.

2. Run with arguments

(1) Submit in client mode

Execute the command: spark-submit --master spark://master:7077 --class net.army.rdd.WordCount SparkRDDWordCount.jar hdfs://master:9000/wc/input hdfs://master:9000/wc/output
Find the word count results among the output messages.
Delete the output directory.
Execute the command: spark-submit --master spark://master:7077 --class net.army.rdd.WordCount SparkRDDWordCount.jar hdfs://master:9000/wc/input (only the input path argument is given; no output path argument is set)

(2) Submit in cluster mode

Execute the command: spark-submit --master spark://master:7077 --deploy-mode cluster --class net.army.rdd.WordCount --driver-memory 512m --executor-memory 1g --executor-cores 2 hdfs://master:9000/park/SparkRDDWordCount.jar hdfs://master:9000/wc/input hdfs://master:9000/wc/output
View in Spark WebUI
Click the stdout hyperlink

(3) Submit command parameter analysis

  • --master: the access URL of the Spark Master node. If the URL has already been specified with the setMaster() method in the WordCount program, this parameter can be omitted.

  • --class: the fully qualified name of the main class of the Spark WordCount program (package name.class name).

  • hdfs://master:9000/wc/input: the source path of the word data. All files under this path take part in the count.

  • hdfs://master:9000/wc/output: the output path of the results. As with MapReduce, this directory must not exist in advance; Spark creates it automatically.

(4) View application information on the Spark WebUI interface

While the application is running, you can also visit Spark's WebUI at http://master:4040/ to view the status information of the running job, including the job ID, job description, job running time, the number of stages that have run, the total number of stages, the number of tasks that have run, and so on. (This interface is no longer accessible once the job has finished running.)
