Using IDEA to develop a Spark case in Scala and Java: WordCount

Table of contents

1. Environment preparation

2. Scala code writing

3. Java code writing


1. Environment preparation

        Create a Maven project.

        Add the following dependencies to the pom.xml:

    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-core_2.12</artifactId>
      <version>${spark.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-sql_2.12</artifactId>
      <version>${spark.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-mllib_2.12</artifactId>
      <version>${spark.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-streaming_2.12</artifactId>
      <version>${spark.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-graphx_2.12</artifactId>
      <version>${spark.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-hive_2.12</artifactId>
      <version>${spark.version}</version>
    </dependency>
    <dependency>
      <groupId>mysql</groupId>
      <artifactId>mysql-connector-java</artifactId>
      <version>${mysql.version}</version>
    </dependency>
    <dependency>
      <groupId>com.alibaba</groupId>
      <artifactId>fastjson</artifactId>
      <version>1.2.62</version>
    </dependency>

        If you have downloaded these dependencies before, there is no need to download them again; the previously cached copies, such as fastjson and mysql-connector-java, will be reused. Note that the MySQL driver version used here targets MySQL 5; adjust it if your MySQL version differs.
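        The ${spark.version} and ${mysql.version} placeholders must be defined in the pom's <properties> section (or inherited from a parent pom). A minimal sketch, assuming a hypothetical Spark 3.1.2 built for Scala 2.12 and a MySQL 5.x driver; substitute the versions that match your cluster:

    <properties>
      <spark.version>3.1.2</spark.version>
      <mysql.version>5.1.49</mysql.version>
    </properties>

        To compile the Scala code below in a Maven project you will also need Scala build support, for example the scala-maven-plugin or IDEA's Scala plugin.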

        

2. Scala code writing

        First prepare the data: put some words into a .txt file, which can live on HDFS, locally, or elsewhere; just adjust the read path in the code accordingly. In this example the .txt file is on HDFS, so remember to change the address to your own.
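        For example, a hypothetical input file could look like this (any whitespace-separated words will do):

hello spark
hello hadoop
hello world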

        Create a new Scala object and write the code:

import org.apache.spark.rdd.RDD
import org.apache.spark.sql.SparkSession
import org.apache.spark.{SparkConf, SparkContext}

object WordCountDemo {
  def main(args: Array[String]): Unit = {
    val conf: SparkConf = new SparkConf().setMaster("local[*]").setAppName("wordCount")
    val sc: SparkContext = SparkContext.getOrCreate(conf)

    // A SparkSession is created here as well, although only the SparkContext is used below
    val spark: SparkSession = SparkSession.builder().config(conf).getOrCreate()

    // Step-by-step version:
//    val rdd1: RDD[String] = sc.textFile("hdfs://101.200.63.3:9000/kb23/tmp/*.txt")
//    val rdd2: RDD[String] = rdd1.flatMap(x => x.split(" "))
//    val rdd3: RDD[(String, Int)] = rdd2.map(x => (x, 1))
//    val result: RDD[(String, Int)] = rdd3.reduceByKey(_ + _)

    // Chained version: read the file, split lines into words, pair each word with 1, sum counts per word
    val result2: RDD[(String, Int)] = sc.textFile("hdfs://101.200.63.3:9000/kb23/tmp/*.txt")
      .flatMap(x => x.split(" "))
      .map(x => (x, 1))
      .reduceByKey((x, y) => x + y)

    // Print to the console
//    result2.glom().collect.foreach(x => println(x.toList))

    // Save to HDFS
    result2.saveAsTextFile("hdfs://101.200.63.3:9000/kb23/sparkoutput/wordcount")
  }
}
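        Note that saveAsTextFile will fail if the output directory already exists, so remove it first (for example, hdfs dfs -rm -r /kb23/sparkoutput/wordcount) before re-running the job.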

        Here is a brief explanation of some functions in the code:

        map: a transformation that applies the function we define once to each element of the dataset.

        flatMap: similar to map, but each input element maps to zero or more output elements.

        collect: returns all elements of the dataset to the driver as an array.

        glom: turns the data of each partition into an in-memory array of the same element type; the partitioning itself is unchanged.
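        A quick illustration of these operators (a minimal sketch assuming an existing SparkContext sc, for example in spark-shell; the values are hypothetical):

val rdd = sc.parallelize(Seq("a b", "c"), 2)   // an RDD with 2 partitions
rdd.map(_.toUpperCase).collect()               // Array("A B", "C")
rdd.flatMap(_.split(" ")).collect()            // Array("a", "b", "c")
rdd.glom().collect()                           // Array(Array("a b"), Array("c")), one array per partition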

 

        If you run this against a cloud server, you may see errors like the following:

22/05/03 05:48:53 WARN DFSClient: Failed to connect to /10.0.24.10:9866 for block, add to deadNodes and continue. org.apache.hadoop.net.ConnectTimeoutException: 60000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/10.0.24.10:9866]
org.apache.hadoop.net.ConnectTimeoutException: 60000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=/10.0.24.10:9866]

        When this error occurs, the literal meaning is easy to follow: when the local client talks to a DataNode, the NameNode returns the DataNode's intranet IP, which is unreachable from your machine.

        The solution is also very simple: configure the client so that the NameNode returns the DataNode's hostname instead of its IP.

        In IDEA, add a file named hdfs-site.xml to the resources folder.

        hdfs-site.xml content:

<configuration>
    <!-- Whether DataNode communication uses hostnames; the default is false, change it to true -->
    <property>
        <name>dfs.client.use.datanode.hostname</name>
        <value>true</value>
        <description>Whether datanodes should use datanode hostnames when connecting to other datanodes for data transfer.
        </description>
    </property>
</configuration>
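        Alternatively (an equivalent approach, not from the original post), the same flag can be set programmatically on the Hadoop configuration that Spark uses, before any file is read; make sure the DataNode hostnames are resolvable from your machine (for example via your hosts file):

sc.hadoopConfiguration.set("dfs.client.use.datanode.hostname", "true")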

3. Java code writing

        Here the input data is stored locally in a file named input.txt.

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

import java.util.Arrays;
import java.util.Map;

public class WordCount {
    public static void main(String[] args) {
        // Create the SparkConf object
        SparkConf conf = new SparkConf()
                .setAppName("WordCount")
                .setMaster("local");

        // Create the JavaSparkContext object
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Read the text file
        JavaRDD<String> lines = sc.textFile("input.txt");

        // Count word occurrences
        JavaRDD<String> words = lines.flatMap(line -> Arrays.asList(line.split(" ")).iterator());
        JavaRDD<String> filteredWords = words.filter(word -> !word.isEmpty());
        JavaPairRDD<String, Integer> wordCounts = filteredWords.mapToPair(word -> new Tuple2<>(word, 1))
                .reduceByKey((x, y) -> x + y);
        Map<String, Integer> wordCountsMap = wordCounts.collectAsMap();

        // Print the results
        for (Map.Entry<String, Integer> entry : wordCountsMap.entrySet()) {
            System.out.println(entry.getKey() + ": " + entry.getValue());
        }

        // Close the JavaSparkContext
        sc.close();

    }
}
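        For instance, if input.txt hypothetically contained the single line "hello spark hello", the program would print (map iteration order is not guaranteed):

hello: 2
spark: 1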
