【异常】理由:エグゼハートビートは140927ミリ秒後にタイムアウトになりました

1. Detailed exception

ERROR scheduler.JobScheduler: Error running job streaming job 1559791512000 ms.0
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 8, executor 5): ExecutorLostFailure (executor 5 exited caused by one of the running tasks) Reason: Executor heartbeat timed out after 140927 ms
Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1887)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1875)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1874)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1874)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2108)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2057)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2046)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2101)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
    at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:935)
    at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:933)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
    at org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:933)
    at com.wm.bigdata.phoenix.etl.WmPhoniexEtlToHbase$$anonfun$main$1.apply(WmPhoniexEtlToHbase.scala:108)
    at com.wm.bigdata.phoenix.etl.WmPhoniexEtlToHbase$$anonfun$main$1.apply(WmPhoniexEtlToHbase.scala:102)
    at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:628)
    at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:628)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:51)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
    at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:416)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:50)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
    at scala.util.Try$.apply(Try.scala:192)
    at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:257)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:257)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:257)
    at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:256)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

2. Searching Stack Overflow for related Q&A

 
3. Solution
When submitting the Spark job, increase the timeout-related settings:
--conf spark.network.timeout=10000000 --conf spark.executor.heartbeatInterval=10000000 --conf spark.driver.maxResultSize=4g
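These options can be passed directly on the spark-submit command line. A minimal sketch follows; the main class is taken from the stack trace above, while the JAR name, master, and resource flags are placeholders, not from the original post:

```shell
# Sketch: submit the streaming job with raised timeout settings.
# etl-job.jar is a placeholder for the actual application JAR.
spark-submit \
  --class com.wm.bigdata.phoenix.etl.WmPhoniexEtlToHbase \
  --master yarn \
  --conf spark.network.timeout=10000000 \
  --conf spark.executor.heartbeatInterval=10000000 \
  --conf spark.driver.maxResultSize=4g \
  etl-job.jar
```

Note that Spark requires `spark.executor.heartbeatInterval` to be less than `spark.network.timeout`; if you tune these values further, keep the heartbeat interval well below the network timeout or the job will fail validation at startup.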

 


Reprinted from: www.cnblogs.com/QuestionsZhang/p/10991582.html