Submission Modes for Spark Standalone and Spark on YARN

Spark: the differences and relationship between yarn-cluster and yarn-client

Submission modes for Spark Standalone

Don't forget to start the Spark cluster first!!!

spark-shell is used for debugging; spark-submit is used for production.

1. spark-shell client

guo@drguo1:/opt/spark-1.6.1-bin-hadoop2.6$ bin/spark-shell --master spark://drguo1:7077 --deploy-mode client --total-executor-cores 4 --executor-cores 1 --executor-memory 1g
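As a quick sanity check, something like the following can be run inside the shell that opens. This is only an illustrative sketch, not from the original post; sc is the SparkContext the shell creates automatically, and the numbers are arbitrary:

    // Distribute a small range across the executors and sum it; getting the
    // expected result (500500.0) confirms the executors registered with the master.
    val rdd = sc.parallelize(1 to 1000, 4)
    println(rdd.sum())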

2. spark-submit client/cluster

guo@drguo1:/opt/spark-1.6.1-bin-hadoop2.6$ bin/spark-submit --master spark://drguo1:7077 --deploy-mode client --name "test" --class org.apache.spark.examples.SparkPi /opt/spark-1.6.1-bin-hadoop2.6/lib/spark-examples-1.6.1-hadoop2.6.0.jar 10
16/04/26 19:48:21 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Pi is roughly 3.141968 


guo@drguo1:/opt/spark-1.6.1-bin-hadoop2.6$ bin/spark-submit --master spark://drguo1:7077 --deploy-mode cluster --name "test" --class org.apache.spark.examples.SparkPi /opt/spark-1.6.1-bin-hadoop2.6/lib/spark-examples-1.6.1-hadoop2.6.0.jar 10
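For reference, an application submitted this way is just an ordinary Spark program compiled into a jar. Below is a minimal sketch in the spirit of SparkPi; the object name MyPi and its details are illustrative, not the actual code shipped in the examples jar. Note that with --deploy-mode cluster the driver runs on one of the workers, so the "Pi is roughly ..." line appears in that driver's log rather than on the submitting console.

    import org.apache.spark.{SparkConf, SparkContext}
    import scala.math.random

    object MyPi {
      def main(args: Array[String]): Unit = {
        // The master and deploy mode come from spark-submit, so they are not set here.
        val conf = new SparkConf().setAppName("MyPi")
        val sc = new SparkContext(conf)
        val slices = if (args.length > 0) args(0).toInt else 2
        val n = 100000 * slices
        // Estimate Pi by counting random points that fall inside the unit circle.
        val count = sc.parallelize(1 to n, slices).map { _ =>
          val x = random * 2 - 1
          val y = random * 2 - 1
          if (x * x + y * y <= 1) 1 else 0
        }.reduce(_ + _)
        println(s"Pi is roughly ${4.0 * count / n}")
        sc.stop()
      }
    }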

Submission modes for Spark on YARN

Official documentation: http://spark.apache.org/docs/latest/running-on-yarn.html

Just add HADOOP_CONF_DIR=/opt/Hadoop/hadoop-2.7.2/etc/hadoop to spark-env.sh. Once this is set, relative file paths are read from HDFS under /user/guo/ by default.

guo@drguo1:/opt/spark-1.6.1-bin-hadoop2.6/conf$ cp spark-env.sh.template spark-env.sh
guo@drguo1:/opt/spark-1.6.1-bin-hadoop2.6/conf$ gedit spark-env.sh

Don't forget to start YARN and HDFS first!!!

1. spark-shell client

guo@drguo1:/opt/spark-1.6.1-bin-hadoop2.6$ ./bin/spark-shell --master yarn-client

or, equivalently, with the option-style syntax (later Spark releases deprecate the yarn-client/yarn-cluster master strings in favour of this form):

./bin/spark-shell --master yarn --deploy-mode client

Alternatively, set the executor memory and number of cores; if you don't, the defaults are used:

guo@drguo:~$ spark-shell --master yarn-client --executor-memory 2g --executor-cores 2

Then check the ResourceManager UI at drguo1:8088 to see whether the application shows up.
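Inside that shell you can also do a rough check that the requested settings took effect and that the HADOOP_CONF_DIR change described above makes relative paths resolve against HDFS. This is just a sketch; test.txt is a hypothetical file name assumed to exist under /user/guo/:

    // Confirm the executor memory requested on the command line took effect.
    println(sc.getConf.get("spark.executor.memory"))   // 2g
    // With HADOOP_CONF_DIR set, a relative path reads hdfs:///user/guo/test.txt,
    // not a local file.
    val lines = sc.textFile("test.txt")
    println(lines.count())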

2. spark-submit cluster

guo@drguo1:/opt/spark-1.6.1-bin-hadoop2.6$ ./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn-cluster --num-executors 3 --driver-memory 1g --executor-memory 1g --executor-cores 1 --queue thequeue lib/spark-examples-1.6.1-hadoop2.6.0.jar 10


 ./bin/spark-submit --class org.apache.spark.examples.SparkPi \
    --master yarn \
    --deploy-mode cluster \
    --driver-memory 4g \
    --executor-memory 2g \
    --executor-cores 1 \
    --queue thequeue \
    lib/spark-examples*.jar \
    10



Reposted from blog.csdn.net/Dr_Guo/article/details/51254396