Spark task execution

Spark-submit

(1) Modify the conf/slaves configuration file so that it lists the worker host:
hadoop1

(2) Start the pseudo-distributed Spark cluster

./sbin/start-all.sh

(3) Submit a task with spark-submit (using the Monte Carlo calculation of pi as an example)

spark-submit --master spark://hadoop1:7077 --class org.apache.spark.examples.SparkPi /usr/local/spark/spark-2.1.0-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.1.0.jar 100
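
The trailing argument (100) is the number of slices (partitions) used for the sampling. SparkPi estimates pi with a Monte Carlo method: it scatters random points over the square [-1, 1] x [-1, 1] and counts how many land inside the unit circle, so pi is roughly 4 * inside / total. A minimal sketch of the same idea that can be pasted into spark-shell (illustrative only, not the actual SparkPi source):

// Monte Carlo estimate of pi, the same idea as the SparkPi example
val slices = 100                        // plays the role of the "100" argument above
val n = 100000 * slices                 // total number of sample points
val inside = sc.parallelize(1 to n, slices).filter { _ =>
  val x = math.random * 2 - 1           // random point in [-1, 1]
  val y = math.random * 2 - 1
  x * x + y * y <= 1                    // keep points that fall inside the unit circle
}.count()
println(s"Pi is roughly ${4.0 * inside / n}")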

(4) spark-submit run results


Spark-shell

Local mode

(1) Modify the conf/slaves configuration file so that it lists the worker host:
hadoop1

(2) Start the pseudo-distributed Spark cluster

./sbin/start-all.sh

(3) Start spark-shell

spark-shell

(4) Submit the task

sc.textFile("spark_workCount.txt").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).collect
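
Here textFile loads spark_workCount.txt (with a relative path like this it is typically read from the directory spark-shell was started in), flatMap splits each line into words, map turns every word into a (word, 1) pair, reduceByKey sums the counts per word, and collect returns the resulting (word, count) pairs as an array, which the shell then prints.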


Cluster mode

(1) Modify the conf/slaves configuration file so that it lists the worker host:
hadoop1

(2) Start the pseudo-distributed Spark cluster

./sbin/start-all.sh

(3) Start spark-shell

spark-shell --master spark://hadoop1:7077

(4) Create the /spark/tmp directory in HDFS

hdfs dfs -mkdir -p /spark/tmp

(5) Upload the spark_workCount.txt file to HDFS

hdfs dfs -put spark_workCount.txt /spark/tmp

(6) Submit the task

sc.textFile("hdfs://hadoop1:9000/spark/tmp/spark_workCount.txt").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).saveAsTextFile("hdfs://hadoop1:9000/spark/output")
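
The job writes its result as part files under /spark/output. A quick way to inspect them (assuming the default output layout) is:

hdfs dfs -cat hdfs://hadoop1:9000/spark/output/part-*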


WordCount in Spark

Scala local mode

(1) Put the jar packages from Spark's jars folder under the resources of the IDEA project

Note: keep the Scala version consistent with the version of the jar packages

(2) Start spark-shell

./sbin/start-all.sh
spark-shell

(3) WordCount running in local mode

package spark

import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    // Create the Spark configuration
    val conf = new SparkConf().setAppName("Scala WordCount").setMaster("local")
    // Instantiate the SparkContext
    val sc = new SparkContext(conf)

    // Local mode: read the input file from HDFS
    val result = sc.textFile("hdfs://192.168.138.130:9000/spark/tmp/spark_workCount.txt")
      .flatMap(_.split(" "))
      .map((_,1))
      .reduceByKey(_+_)

    // Print the results
    result.foreach(println)
  }
}
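
With the master hard-coded to "local", the object can be run directly from IDEA: driver and executor share one JVM, so result.foreach(println) prints each (word, count) pair straight to the Run console.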


Scala cluster mode

(1) Write the Scala code

package spark

import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    // Create the Spark configuration (the master URL is supplied by spark-submit)
    val conf = new SparkConf().setAppName("Scala WordCount")
    // Instantiate the SparkContext
    val sc = new SparkContext(conf)

    // Cluster mode: read the input from args(0) and write the result to args(1)
    sc.textFile(args(0))
      .flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)
      .saveAsTextFile(args(1))

    // Shut down the SparkContext
    sc.stop()
  }
}
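
args(0) and args(1) are the HDFS input and output paths; they are supplied on the spark-submit command line in step (4) below. setMaster is deliberately left out here, since the master URL is passed with --master when the jar is submitted.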

(2) Package the code into a jar and copy it to the Linux machine

(3) Start Spark

./sbin/start-all.sh

(4) Submit the task

spark-submit --master spark://hadoop1:7077 --class spark.WordCount /root/Spark-1.0-SNAPSHOT.jar hdfs://192.168.138.130:9000/spark/tmp/spark_workCount.txt hdfs://192.168.138.130:9000/spark/wordcount


Java local mode

(1) Put the jar packages from Spark's jars folder under the resources of the IDEA project

Note: keep the Scala version consistent with the version of the jar packages

(2) Start spark-shell

./sbin/start-all.sh
spark-shell

(3) WordCount running in local mode

package com.spark.util;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;
import scala.Tuple2;

import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

/**
 * Spark WordCount
 * 
 * @author Jabin
 * @version 1.00 2019/
 */
public class WordCount {
    public static void main(String[] args) {
        // Create the Spark configuration
        SparkConf conf = new SparkConf().setAppName("Spark.WordCount").setMaster("local");
        // Load the Spark configuration into a JavaSparkContext
        JavaSparkContext sc = new JavaSparkContext(conf);
        // Local mode: read the input file from HDFS
        JavaRDD<String> textFile = sc.textFile("hdfs://192.168.138.130:9000/spark/tmp/spark_workCount.txt");

        JavaRDD<String> flatMap = textFile.flatMap(new FlatMapFunction<String, String>() {
            public Iterator<String> call(String s) {
                return Arrays.asList(s.split(" ")).iterator();
            }
        });

        JavaPairRDD<String, Integer> map = flatMap.mapToPair(new PairFunction<String, String, Integer>() {
            public Tuple2<String, Integer> call(String s) {
                return new Tuple2<String, Integer>(s, 1);
            }
        });

        JavaPairRDD<String, Integer> reduce = map.reduceByKey(new Function2<Integer, Integer, Integer>() {
            public Integer call(Integer a, Integer b) {
                return a + b;
            }
        });

        List<Tuple2<String, Integer>> list = reduce.collect();

        for (Tuple2<String, Integer> tuple: list){
            System.out.println(tuple._1+" : "+tuple._2);
        }
    }
}
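
As with the Scala version, running this class from IDEA with the master set to local prints each word and its count to the console. For a cluster run you would drop setMaster("local"), package the class into a jar, and submit it with spark-submit --class, as in the Scala cluster-mode example above.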


Spark's task scheduling architecture diagram

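In outline: the driver turns the RDD transformations into a DAG of stages, the DAGScheduler submits each stage as a set of tasks to the TaskScheduler, and the TaskScheduler launches those tasks on executors obtained from the cluster manager (the standalone Master started by start-all.sh in this setup).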

Origin: blog.csdn.net/JavaDestiny/article/details/94493861