A Complete Guide to Spark Run Modes

1. local mode: Hadoop is not required (unless your job uses it); no need to start Master or Worker processes.
spark-shell (spark-shell --master local[n])
spark-submit (spark-submit --master local[n])
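For reference, a fuller local-mode submission might look like the following sketch (the main class and jar path are hypothetical placeholders, not from the original post):

```shell
# Run a Spark application locally with 4 worker threads;
# local[n] uses n threads, local[*] uses all available cores.
# com.example.MyApp and the jar path are hypothetical placeholders.
spark-submit \
  --master local[4] \
  --class com.example.MyApp \
  /path/to/my-app.jar
```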

2. local-cluster mode: Hadoop is not required (unless your job uses it); no need to start Master or Worker.
spark-submit --master local-cluster[x,y,z]
x: number of executors; y: cores per executor; z: memory per executor (in MB)
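As a sketch, a local-cluster submission with 2 executors, 1 core each, and 1024 MB each (values chosen purely for illustration; class and jar names are placeholders) would be:

```shell
# local-cluster[x,y,z]: x executors, y cores per executor,
# z MB of memory per executor -- all simulated on one machine,
# which is handy for testing distributed behavior locally.
spark-submit \
  --master local-cluster[2,1,1024] \
  --class com.example.MyApp \
  /path/to/my-app.jar
```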

3. standalone mode, client deploy mode: uses Spark's built-in cluster manager; Hadoop is not required (unless your job uses it).
① Start Master and Worker
spark-submit --master spark://wl1:7077 --deploy-mode client
4. standalone mode, cluster deploy mode: uses Spark's built-in cluster manager; Hadoop is not required (unless your job uses it).
① Start Master and Worker
spark-submit --master spark://wl1:6066 --deploy-mode cluster
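Putting the two standalone variants side by side (the hostname wl1 comes from the examples above; the class and jar names are hypothetical placeholders):

```shell
# Client deploy mode: the driver runs in the submitting process,
# talking to the Master on its default RPC port 7077.
spark-submit \
  --master spark://wl1:7077 \
  --deploy-mode client \
  --class com.example.MyApp \
  /path/to/my-app.jar

# Cluster deploy mode: the driver runs on a Worker; submission
# goes through the Master's REST server, which defaults to port 6066.
spark-submit \
  --master spark://wl1:6066 \
  --deploy-mode cluster \
  --class com.example.MyApp \
  /path/to/my-app.jar
```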

5. YARN client mode: no need to start Master or Worker, because YARN acts as the cluster manager; Hadoop must be running.
spark-submit --master yarn --deploy-mode client

6. YARN cluster mode: no need to start Master or Worker; Hadoop must be running.
spark-submit --master yarn --deploy-mode cluster
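A YARN submission typically also sets executor resources explicitly; a minimal sketch (the resource values, class, and jar names are illustrative assumptions):

```shell
# YARN reads the cluster location from HADOOP_CONF_DIR / YARN_CONF_DIR,
# so no host:port appears in --master.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --num-executors 4 \
  --executor-cores 2 \
  --executor-memory 2g \
  --class com.example.MyApp \
  /path/to/my-app.jar
```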


Reposted from blog.csdn.net/java_soldier/article/details/80395138