Spark submit task parameter description

1. Parameter selection

When the code is finished and packaged into a jar, it can be submitted to the cluster through bin/spark-submit. The command is as follows:

./bin/spark-submit \
  --class <main-class> \
  --master <master-url> \
  --deploy-mode <deploy-mode> \
  --conf <key>=<value> \
  ... # other options
  <application-jar> \
  [application-arguments]
Generally, the few parameters above are enough. They are described below:

--class: The entry point for your application (e.g. org.apache.spark.examples.SparkPi)

--master: The master URL for the cluster (e.g. spark://23.195.26.187:7077)

--deploy-mode: Whether to deploy your driver on the worker nodes (cluster) or locally as an external client (client) (default: client)

--conf: Arbitrary Spark configuration property in key=value format. For values that contain spaces, wrap "key=value" in quotes (see the sketch after this list).

application-jar: Path to a bundled jar including your application and all dependencies. The URL must be globally visible inside of your cluster, for instance, an hdfs:// path or a file:// path that is present on all nodes.

application-arguments: Arguments passed to the main method of your main class, if any.
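To make the quoting rule for --conf concrete, here is a minimal sketch; the property and its value are illustrative assumptions rather than settings taken from this article:

# The value contains spaces, so the whole key=value pair is quoted
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master local[4] \
  --conf "spark.executor.extraJavaOptions=-XX:+PrintGCDetails -XX:+PrintGCTimeStamps" \
  /path/to/examples.jar \
  100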

Here are a few simple examples of spark-submit commands for the different cluster managers:

# Run application locally on 8 cores

./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master local[8] \
  /path/to/examples.jar \
  100

# Run on a Spark standalone cluster in client deploy mode

./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://207.184.161.138:7077 \
  --executor-memory 20G \
  --total-executor-cores 100 \
  /path/to/examples.jar \
  1000

# Run on a Spark standalone cluster in cluster deploy mode with supervise
# make sure that the driver is automatically restarted if it fails with non-zero exit code

./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://207.184.161.138:7077 \
  --deploy-mode cluster \
  --supervise \
  --executor-memory 20G \
  --total-executor-cores 100 \
  /path/to/examples.jar \
  1000
  
# Run on a YARN cluster
export HADOOP_CONF_DIR=XXX

# `yarn-cluster` can also be `yarn-client` for client mode
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn-cluster \
  --executor-memory 20G \
  --num-executors 50 \
  /path/to/examples.jar \
  1000

# Run a Python application on a Spark standalone cluster

./bin/spark-submit \
  --master spark://207.184.161.138:7077 \
  examples/src/main/python/pi.py \
  1000
2. Specific submission steps

The following code implements a simple statistic: it counts how many lines of a file contain "a" and how many contain "b".

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;

public class SimpleSample {
    public static void main(String[] args) {
        String logFile = "/home/bigdata/spark-1.5.1/README.md";
        SparkConf conf = new SparkConf().setAppName("Simple Application");
        JavaSparkContext sc = new JavaSparkContext(conf);
        // Cache the file, since it is traversed twice
        JavaRDD<String> logData = sc.textFile(logFile).cache();

        // Count the lines containing the letter "a"
        long numAs = logData.filter(new Function<String, Boolean>() {
            public Boolean call(String s) {
                return s.contains("a");
            }
        }).count();

        // Count the lines containing the letter "b"
        long numBs = logData.filter(new Function<String, Boolean>() {
            public Boolean call(String s) {
                return s.contains("b");
            }
        }).count();

        System.out.println("Lines with a: " + numAs + ", lines with b: " + numBs);

        // Release cluster resources
        sc.stop();
    }
}
Package it into a jar.
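How the jar is produced depends on your build tool; a minimal sketch, assuming a standard Maven project (the exact jar name comes from your own pom.xml):

# Package the application; assumes a standard Maven layout
mvn clean package
# The packaged jar then appears under target/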



Submission command:

./bin/spark-submit --class cs.spark.SimpleSample --master spark://spark1:7077 /home/jar/spark-test-0.0.1-SN
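
Before submitting to the cluster, the same class can be sanity-checked with a local master. A minimal sketch, assuming the full path to your packaged jar (the placeholder below is hypothetical):

# Local test run; <path-to-application-jar> is a hypothetical placeholder
# for the full path of the packaged jar
./bin/spark-submit \
  --class cs.spark.SimpleSample \
  --master local[2] \
  <path-to-application-jar>

Note that SimpleSample reads /home/bigdata/spark-1.5.1/README.md, so that file must exist on the machine running the test.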


This article is from: https://my.oschina.net/u/2529303/blog/541685
