Spark: Summary of Problems Encountered

Problem 1: Map output statuses were ... bytes which exceeds spark.akka.frameSize

17/10/12 00:15:38 INFO spark.MapOutputTrackerMasterActor: Asked to send map output locations for shuffle 1 to sparkExecutor@titannew134:46695
17/10/12 00:15:38 ERROR spark.MapOutputTrackerMasterActor: Map output statuses were 14371441 bytes which exceeds spark.akka.frameSize (10485760 bytes).
org.apache.spark.SparkException: Map output statuses were 14371441 bytes which exceeds spark.akka.frameSize (10485760 bytes).
    at org.apache.spark.MapOutputTrackerMasterActor$$anonfun$receiveWithLogging$1.applyOrElse(MapOutputTracker.scala:59)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
    at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:53)
    at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:42)
    at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
    at org.apache.spark.util.ActorLogReceive$$anon$1.applyOrElse(ActorLogReceive.scala:42)
    at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
    at org.apache.spark.MapOutputTrackerMasterActor.aroundReceive(MapOutputTracker.scala:42)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
    at akka.actor.ActorCell.invoke(ActorCell.scala:487)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
    at akka.dispatch.Mailbox.run(Mailbox.scala:220)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
17/10/12 00:15:38 INFO scheduler.TaskSetManager: Starting task 1.3 in stage 1.0 (TID 5653, titannew134, PROCESS_LOCAL, 1045 bytes)
17/10/12 00:15:38 WARN scheduler.TaskSetManager: Lost task 8.2 in stage 1.0 (TID 5649, titannew134): org.apache.spark.SparkException: Error communicating with MapOutputTracker
    at org.apache.spark.MapOutputTracker.askTracker(MapOutputTracker.scala:116)
    at org.apache.spark.MapOutputTracker.getServerStatuses(MapOutputTracker.scala:163)
    at org.apache.spark.shuffle.hash.BlockStoreShuffleFetcher$.fetch(BlockStoreShuffleFetcher.scala:42)
    at org.apache.spark.shuffle.hash.HashShuffleReader.read(HashShuffleReader.scala:40)
    at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:92)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:280)
    at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:61)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:245)
    at org.apache.spark.rdd.MappedValuesRDD.compute(MappedValuesRDD.scala:31)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:280)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:247)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:280)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:247)
    at org.apache.spark.rdd.FlatMappedRDD.compute(FlatMappedRDD.scala:33)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:280)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:247)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
    at org.apache.spark.scheduler.Task.run(Task.scala:56)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:200)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:748)

Analysis: spark.akka.frameSize is the size limit for each chunk of data exchanged between the workers and the driver; it controls the maximum size of a Spark communication message (such as a task's output), and the default is 10 MB. The error shows that our job actually needs a little over 14 MB, so we raised this parameter to 20 MB. (You can also diagnose this from the worker logs. Typically, after a task fails on a worker, the master's log prints a "Lost TID: " message; you can confirm the cause by checking whether the "Serialized size of result" recorded for that task in the failed worker's log file (under $SPARK_HOME/worker/) exceeds 10 MB.)

Solution: when submitting the job with the spark-submit script, add --conf spark.akka.frameSize=20 (the value is interpreted in MB by default) to raise the maximum frame size accordingly.
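For reference, here is a minimal sketch of setting the parameter programmatically instead of on the command line. It assumes a Spark 1.x cluster (where Akka is still the control-plane transport and spark.akka.frameSize is honored); the object name, app name, and the class/jar names in the comment are placeholders, not part of the original post. The setting must be applied before the SparkContext is created, which is why the --conf form on spark-submit is usually the safer route.

// Sketch only, assuming Spark 1.x. Command-line equivalent:
//   spark-submit --conf spark.akka.frameSize=20 --class com.example.MyJob my-job.jar
import org.apache.spark.{SparkConf, SparkContext}

object FrameSizeExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("frame-size-example")
      // Value is in MB; 20 MB comfortably covers the ~14 MB of map output statuses from the error above.
      .set("spark.akka.frameSize", "20")

    val sc = new SparkContext(conf)
    // Sanity check: print the effective setting seen by the driver.
    println("spark.akka.frameSize = " + sc.getConf.get("spark.akka.frameSize"))
    sc.stop()
  }
}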


Reposted from blog.csdn.net/weixin_38750084/article/details/82958491