Submitting Jobs from Eclipse to Spark on YARN / Standalone


In the past I would package the Scala or Java program into a JAR, so that deployment could follow the traditional ops Jenkins release process: ops only needs to click the release button, rather than deploying manually every time.

Recently I tried submitting jobs directly from Eclipse and ran into a few problems, which I am recording here:

1. Submitting a job from Eclipse to Spark on YARN

Below is a very simple program that counts the number of lines in a.sql:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class App {
	public static void main(String[] args) {
		//System.setProperty("HADOOP_USER_NAME", "hdfs");
		SparkConf sparkConf = new SparkConf().setAppName("JavaWordCount");
		// The driver runs locally in Eclipse; the AM and executors run on YARN.
		sparkConf.setMaster("yarn-client");
		//sparkConf.set("spark.yarn.jar", "hdfs:///tmp/spark-assembly_2.10-1.6.0-cdh5.10.2.jar");
		//sparkConf.set("spark.yarn.appMasterEnv.CLASSPATH", "$CLASSPATH:/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/*");
		JavaSparkContext ctx = new JavaSparkContext(sparkConf);
		// Count the lines of a file on HDFS.
		JavaRDD<String> lines = ctx.textFile("hdfs:///tmp/a.sql");
		System.out.println(lines.count());
		ctx.stop();
	}
}

After submitting, the YARN logs show an error about a missing MapReduce class:

18/12/18 08:48:46 INFO yarn.ApplicationMaster$AMEndpoint: Add WebUI Filter. AddWebUIFilter(org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter,Map(PROXY_HOSTS -> tsczbdnndev1.trinasolar.com, PROXY_URI_BASES -> http://tsczbdnndev1.trinasolar.com:8088/proxy/application_1542688914382_0086),/proxy/application_1542688914382_0086)
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/mapreduce/MRJobConfig
	at org.apache.spark.deploy.yarn.Client$$anonfun$21.apply(Client.scala:1206)
	at org.apache.spark.deploy.yarn.Client$$anonfun$21.apply(Client.scala:1205)
	at scala.util.Try$.apply(Try.scala:161)
	at org.apache.spark.deploy.yarn.Client$.getDefaultMRApplicationClasspath(Client.scala:1205)
	at org.apache.spark.deploy.yarn.Client$.getMRAppClasspath(Client.scala:1182)
	at org.apache.spark.deploy.yarn.Client$.populateHadoopClasspath(Client.scala:1167)
	at org.apache.spark.deploy.yarn.Client$.populateClasspath(Client.scala:1269)
	at org.apache.spark.deploy.yarn.ExecutorRunnable.prepareEnvironment(ExecutorRunnable.scala:284)
	at org.apache.spark.deploy.yarn.ExecutorRunnable.launchContextDebugInfo(ExecutorRunnable.scala:70)
	at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$registerAM$1.apply(ApplicationMaster.scala:297)
	at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$registerAM$1.apply(ApplicationMaster.scala:291)
	at org.apache.spark.Logging$class.logInfo(Logging.scala:58)
	at org.apache.spark.deploy.yarn.ApplicationMaster.logInfo(ApplicationMaster.scala:51)
	at org.apache.spark.deploy.yarn.ApplicationMaster.registerAM(ApplicationMaster.scala:291)
	at org.apache.spark.deploy.yarn.ApplicationMaster.runExecutorLauncher(ApplicationMaster.scala:377)
	at org.apache.spark.deploy.yarn.ApplicationMaster.run(ApplicationMaster.scala:199)
	at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$main$1.apply$mcV$sp(ApplicationMaster.scala:681)
	at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:69)
	at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:68)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
	at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:68)
	at org.apache.spark.deploy.yarn.ApplicationMaster$.main(ApplicationMaster.scala:679)
	at org.apache.spark.deploy.yarn.ExecutorLauncher$.main(ApplicationMaster.scala:698)
	at org.apache.spark.deploy.yarn.ExecutorLauncher.main(ApplicationMaster.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.mapreduce.MRJobConfig
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)

Checking the environment variables of the Spark submission shows that the MapReduce jars are missing from the ApplicationMaster's classpath. The fix is to set spark.yarn.appMasterEnv.CLASSPATH so that it includes the MapReduce lib directory: $CLASSPATH:/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/*.
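
Concretely, this means enabling the two settings that were commented out in the program above, before the SparkContext is created. A minimal sketch, using the CDH paths from this cluster (adjust them to your own parcel layout and Spark assembly location):

SparkConf sparkConf = new SparkConf().setAppName("JavaWordCount");
sparkConf.setMaster("yarn-client");
// Reference the Spark assembly already on HDFS instead of uploading it on every submit.
sparkConf.set("spark.yarn.jar",
		"hdfs:///tmp/spark-assembly_2.10-1.6.0-cdh5.10.2.jar");
// Add the MapReduce jars to the ApplicationMaster's classpath so that
// org.apache.hadoop.mapreduce.MRJobConfig can be resolved.
sparkConf.set("spark.yarn.appMasterEnv.CLASSPATH",
		"$CLASSPATH:/opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/*");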

2. Submitting from Eclipse to Spark Standalone

There is nothing special here; the one thing to watch is that the versions match exactly. The Spark JAR versions in your pom.xml must correspond to the version running on the server, otherwise you will easily hit errors where the SparkContext cannot be initialized. A minimal example is sketched below.
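
For reference, a sketch of the standalone variant. The master URL spark://master-host:7077 and the JAR path target/myapp-1.0.jar are placeholders, not values from the original cluster; setJars ships the packaged application JAR to the workers so that executors can load your classes:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class StandaloneApp {
	public static void main(String[] args) {
		SparkConf sparkConf = new SparkConf().setAppName("JavaWordCount");
		// Placeholder master URL: point it at your standalone master.
		sparkConf.setMaster("spark://master-host:7077");
		// Hypothetical path: ship the packaged application JAR to the executors.
		sparkConf.setJars(new String[] { "target/myapp-1.0.jar" });
		JavaSparkContext ctx = new JavaSparkContext(sparkConf);
		JavaRDD<String> lines = ctx.textFile("hdfs:///tmp/a.sql");
		System.out.println(lines.count());
		ctx.stop();
	}
}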
