Spark Exception Summary and Solutions

My original address: https://dongkelun.com/2019/01/09/sparkExceptions/

Foreword

This post summarizes the exceptions I have run into while developing with Spark, together with their solutions. I wrote up several of them as individual posts before, but I no longer write a separate post for each exception: now that I am more familiar with Spark, there is no need to document every one in such detail, so I collect them here in a single post, both as a memo for myself and to make them easier to look up.

1、Exceptions covered in earlier posts

2、spark.executor.memoryOverhead

Off-heap overhead memory (by default 10% of executor memory). When the data volume is large, keeping the default value leads to the exception below and crashes the job.

Exception

Container killed by YARN for exceeding memory limits. 1.8 GB of 1.8 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.

Solution

Set a specific value according to your actual workload.

New key (Spark 2.3 and later):

--conf spark.executor.memoryOverhead=2048

Old key:

--conf spark.yarn.executor.memoryOverhead=2048

If you use the old key on a newer version of Spark, you will see the following warning:

WARN SparkConf: The configuration key 'spark.yarn.executor.memoryOverhead' has been deprecated as of Spark 2.3 and may be removed in the future. Please use the new key 'spark.executor.memoryOverhead' instead.
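
For context, a full spark-submit invocation with this setting might look like the sketch below; the master, class and jar names are placeholders, not values from the original job. The default overhead is the larger of 10% of executor memory and 384 MB, so the container-kill message above means total physical usage went past executor memory plus that overhead.

# Illustrative only: class and jar names are placeholders
spark-submit \
  --master yarn \
  --class com.example.MyApp \
  --executor-memory 4G \
  --conf spark.executor.memoryOverhead=2048 \
  my-app.jar
# On Spark versions before 2.3, pass spark.yarn.executor.memoryOverhead instead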

3、No more replicas available for rdd_

Exception

19/01/08 12:36:46 WARN BlockManagerMasterEndpoint: No more replicas available for rdd_3250_73 !
19/01/08 12:36:46 WARN BlockManagerMasterEndpoint: No more replicas available for rdd_12_38 !
19/01/08 12:36:46 WARN BlockManagerMasterEndpoint: No more replicas available for rdd_3250_38 !
19/01/08 12:36:46 WARN BlockManagerMasterEndpoint: No more replicas available for rdd_3250_148 !
19/01/08 12:36:46 WARN BlockManagerMasterEndpoint: No more replicas available for rdd_3250_6 !
19/01/08 12:36:46 WARN BlockManagerMasterEndpoint: No more replicas available for rdd_3250_112 !
19/01/08 12:36:46 WARN BlockManagerMasterEndpoint: No more replicas available for rdd_12_100 !

Solution

Increase the executor memory:

--executor-memory 4G
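
This warning typically shows up when the executor holding the only cached copy of an RDD block is lost (or the block is evicted), so giving each executor more memory is the usual remedy. Keep in mind that on YARN the container request is roughly the executor memory plus the memory overhead from the previous section; the arithmetic below is only an illustration with the values used in this post.

# Rough YARN container sizing (illustrative numbers):
#   --executor-memory 4G                 -> 4096 MB of heap
#   spark.executor.memoryOverhead=2048   -> 2048 MB off-heap
#   container request                    -> about 6144 MB per executor
# yarn.scheduler.maximum-allocation-mb must be at least this large, or YARN rejects the container.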

4、Failed to allocate a page

Exception

19/01/09 09:12:39 WARN TaskMemoryManager: Failed to allocate a page (1048576 bytes), try again.
19/01/09 09:12:41 WARN TaskMemoryManager: Failed to allocate a page (1048576 bytes), try again.
19/01/09 09:12:41 WARN NioEventLoop: Unexpected exception in the selector loop.
java.lang.OutOfMemoryError: GC overhead limit exceeded
        at java.lang.Integer.valueOf(Integer.java:832)
        at sun.nio.ch.EPollSelectorImpl.updateSelectedKeys(EPollSelectorImpl.java:120)
        at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:98)
        at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
        at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
        at io.netty.channel.nio.SelectedSelectionKeySetSelector.select(SelectedSelectionKeySetSelector.java:62)
        at io.netty.channel.nio.NioEventLoop.select(NioEventLoop.java:753)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:409)
        at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
        at java.lang.Thread.run(Thread.java:748)
19/01/09 09:12:46 WARN TransportChannelHandler: Exception in connection from /172.16.29.236:47012
java.lang.OutOfMemoryError: GC overhead limit exceeded
19/01/09 09:12:44 WARN AbstractChannelHandlerContext: An exception 'java.lang.OutOfMemoryError: GC overhead limit exceeded' [enable DEBUG level for full stacktrace] was thrown by a user handler's exceptionCaught() method while handling the following exception:
java.lang.OutOfMemoryError: GC overhead limit exceeded
19/01/09 09:12:42 WARN TaskMemoryManager: Failed to allocate a page (1048576 bytes), try again.
Exception in thread "dispatcher-event-loop-11" java.lang.OutOfMemoryError: GC overhead limit exceeded
19/01/09 09:12:51 WARN TaskMemoryManager: Failed to allocate a page (1048576 bytes), try again.
19/01/09 09:12:53 WARN TransportChannelHandler: Exception in connection from /172.16.29.233:34226
java.lang.OutOfMemoryError: GC overhead limit exceeded

Solution

Increase the driver memory:

--driver-memory 6G 
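
Note that driver memory has to be fixed before the driver JVM starts, so set it on the command line or in spark-defaults.conf; setting spark.driver.memory through SparkConf inside the application is too late in client mode. A minimal sketch (the jar name is a placeholder):

# Driver memory must be set at submit time, not from inside the application in client mode.
spark-submit \
  --deploy-mode client \
  --driver-memory 6G \
  my-app.jar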

Reference

TaskMemoryManager: Failed to allocate a page, try again

5、Uncaught exception in thread task-result-getter-3

Exception

19/01/10 09:31:50 ERROR Utils: Uncaught exception in thread task-result-getter-3
java.lang.OutOfMemoryError: Java heap space
    at java.lang.reflect.Array.newArray(Native Method)
    at java.lang.reflect.Array.newInstance(Array.java:75)
    at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1938)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1566)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2286)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2210)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2068)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1572)
    at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1974)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1566)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:430)
    at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
    at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:108)
    at org.apache.spark.scheduler.DirectTaskResult.value(TaskResult.scala:88)
    at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply$mcV$sp(TaskResultGetter.scala:94)
    at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply(TaskResultGetter.scala:63)
    at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply(TaskResultGetter.scala:63)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1991)
    at org.apache.spark.scheduler.TaskResultGetter$$anon$3.run(TaskResultGetter.scala:62)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Exception in thread "task-result-getter-3" java.lang.OutOfMemoryError: Java heap space
    at java.lang.reflect.Array.newArray(Native Method)
    at java.lang.reflect.Array.newInstance(Array.java:75)
    at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1938)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1566)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2286)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2210)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2068)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1572)
    at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1974)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1566)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:430)
    at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
    at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:108)
    at org.apache.spark.scheduler.DirectTaskResult.value(TaskResult.scala:88)
    at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply$mcV$sp(TaskResultGetter.scala:94)
    at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply(TaskResultGetter.scala:63)
    at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply(TaskResultGetter.scala:63)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1991)
    at org.apache.spark.scheduler.TaskResultGetter$$anon$3.run(TaskResultGetter.scala:62)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
19/01/10 09:31:51 ERROR Utils: Uncaught exception in thread task-result-getter-0
java.lang.OutOfMemoryError: Java heap space
    at java.lang.reflect.Array.newArray(Native Method)
    at java.lang.reflect.Array.newInstance(Array.java:75)
    at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1938)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1566)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2286)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2210)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2068)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1572)
    at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1974)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1566)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:430)
    at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
    at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:108)
    at org.apache.spark.scheduler.DirectTaskResult.value(TaskResult.scala:88)
    at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply$mcV$sp(TaskResultGetter.scala:94)
    at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply(TaskResultGetter.scala:63)
    at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply(TaskResultGetter.scala:63)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1991)
    at org.apache.spark.scheduler.TaskResultGetter$$anon$3.run(TaskResultGetter.scala:62)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Exception in thread "task-result-getter-0" java.lang.OutOfMemoryError: Java heap space
    at java.lang.reflect.Array.newArray(Native Method)
    at java.lang.reflect.Array.newInstance(Array.java:75)
    at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1938)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1566)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2286)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2210)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2068)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1572)
    at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1974)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1566)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:430)
    at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
    at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:108)
    at org.apache.spark.scheduler.DirectTaskResult.value(TaskResult.scala:88)
    at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply$mcV$sp(TaskResultGetter.scala:94)
    at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply(TaskResultGetter.scala:63)
    at org.apache.spark.scheduler.TaskResultGetter$$anon$3$$anonfun$run$1.apply(TaskResultGetter.scala:63)
    at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1991)
    at org.apache.spark.scheduler.TaskResultGetter$$anon$3.run(TaskResultGetter.scala:62)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Exception in thread "main" java.util.concurrent.TimeoutException: Futures timed out after [300 seconds]
    at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
    at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
    at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:201)
    at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec.doExecuteBroadcast(BroadcastExchangeExec.scala:136)
    at org.apache.spark.sql.execution.InputAdapter.doExecuteBroadcast(WholeStageCodegenExec.scala:367)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeBroadcast$1.apply(SparkPlan.scala:149)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeBroadcast$1.apply(SparkPlan.scala:145)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:160)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:157)
    at org.apache.spark.sql.execution.SparkPlan.executeBroadcast(SparkPlan.scala:145)
    at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.prepareBroadcast(BroadcastHashJoinExec.scala:135)
    at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.codegenInner(BroadcastHashJoinExec.scala:232)
    at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.doConsume(BroadcastHashJoinExec.scala:102)
    at org.apache.spark.sql.execution.CodegenSupport$class.consume(WholeStageCodegenExec.scala:181)
    at org.apache.spark.sql.execution.ProjectExec.consume(basicPhysicalOperators.scala:36)
    at org.apache.spark.sql.execution.ProjectExec.doConsume(basicPhysicalOperators.scala:66)
    at org.apache.spark.sql.execution.CodegenSupport$class.consume(WholeStageCodegenExec.scala:181)
    at org.apache.spark.sql.execution.joins.SortMergeJoinExec.consume(SortMergeJoinExec.scala:36)
    at org.apache.spark.sql.execution.joins.SortMergeJoinExec.doProduce(SortMergeJoinExec.scala:633)
    at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:88)
    at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:83)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:160)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:157)
    at org.apache.spark.sql.execution.CodegenSupport$class.produce(WholeStageCodegenExec.scala:83)
    at org.apache.spark.sql.execution.joins.SortMergeJoinExec.produce(SortMergeJoinExec.scala:36)
    at org.apache.spark.sql.execution.ProjectExec.doProduce(basicPhysicalOperators.scala:46)
    at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:88)
    at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:83)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:160)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:157)
    at org.apache.spark.sql.execution.CodegenSupport$class.produce(WholeStageCodegenExec.scala:83)
    at org.apache.spark.sql.execution.ProjectExec.produce(basicPhysicalOperators.scala:36)
    at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.doProduce(BroadcastHashJoinExec.scala:97)
    at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:88)
    at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:83)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:160)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:157)
    at org.apache.spark.sql.execution.CodegenSupport$class.produce(WholeStageCodegenExec.scala:83)
    at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.produce(BroadcastHashJoinExec.scala:39)
    at org.apache.spark.sql.execution.ProjectExec.doProduce(basicPhysicalOperators.scala:46)
    at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:88)
    at org.apache.spark.sql.execution.CodegenSupport$$anonfun$produce$1.apply(WholeStageCodegenExec.scala:83)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:160)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:157)
    at org.apache.spark.sql.execution.CodegenSupport$class.produce(WholeStageCodegenExec.scala:83)
    at org.apache.spark.sql.execution.ProjectExec.produce(basicPhysicalOperators.scala:36)
    at org.apache.spark.sql.execution.WholeStageCodegenExec.doCodeGen(WholeStageCodegenExec.scala:524)
    at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:576)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:136)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:132)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:160)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:157)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:132)
    at org.apache.spark.sql.execution.DeserializeToObjectExec.doExecute(objects.scala:89)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:136)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:132)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:160)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:157)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:132)
    at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:81)
    at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:81)
    at org.apache.spark.sql.Dataset.rdd$lzycompute(Dataset.scala:2975)
    at org.apache.spark.sql.Dataset.rdd(Dataset.scala:2973)
    at com.hs.xlzf.task.route.ServiceAreaFreq$.save_service_freq(ServiceAreaFreq.scala:161)
    at com.hs.xlzf.task.route.ServiceAreaFreq$.main(ServiceAreaFreq.scala:36)
    at com.hs.xlzf.task.route.ServiceAreaFreq.main(ServiceAreaFreq.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:896)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

Solution

Increase the driver memory:

--driver-memory 6G 
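
The "Futures timed out after [300 seconds]" at BroadcastExchangeExec in the trace above comes from the broadcast side of a join failing to materialize within spark.sql.broadcastTimeout (300 seconds by default) while the driver was already short on memory. Raising driver memory is the primary fix; the settings below are optional extra knobs, not part of the original solution, and may help if the broadcast table itself is simply large.

# Optional extras alongside --driver-memory; standard Spark SQL settings, not from the original post.
--conf spark.sql.broadcastTimeout=600              # allow more time to build the broadcast table
--conf spark.sql.autoBroadcastJoinThreshold=-1     # or disable automatic broadcast joins entirely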

Reference

Driver memory error when a large amount of result data is returned to the driver

6、spark.driver.maxResultSize

Exception

ERROR TaskSetManager: Total size of serialized results of 30 tasks (1108.5 MB) is bigger than spark.driver.maxResultSize (1024.0 MB)

Solution

Increase spark.driver.maxResultSize:

--conf spark.driver.maxResultSize=2G 
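
Because serialized task results within this limit still land in driver memory, keep spark.driver.maxResultSize comfortably below --driver-memory; a value of 0 removes the limit, but then an oversized collect can OOM the driver instead of failing fast. A sketch pairing the two (values are illustrative, the jar name is a placeholder):

# Keep the result-size limit well under the driver heap.
spark-submit \
  --driver-memory 6G \
  --conf spark.driver.maxResultSize=2G \
  my-app.jar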

7、Dropping event from queue eventLog

Exception

19/05/20 11:49:54 ERROR AsyncEventQueue: Dropping event from queue eventLog. This likely means one of the listeners is too slow and cannot keep up with the rate at which tasks are being started by the scheduler.
19/05/20 11:49:54 WARN AsyncEventQueue: Dropped 1 events from eventLog since Thu Jan 01 08:00:00 CST 1970.

Solution

Increase spark.scheduler.listenerbus.eventqueue.capacity (the default is 10000):

--conf spark.scheduler.listenerbus.eventqueue.capacity=100000

Older versions use spark.scheduler.listenerbus.eventqueue.size instead. If you pass the deprecated key on a newer version, you will see:

19/05/21 14:38:15 WARN SparkConf: The configuration key 'spark.scheduler.listenerbus.eventqueue.size' has been deprecated as of Spark 2.3 and may be removed in the future. Please use the new key 'spark.scheduler.listenerbus.eventqueue.capacity' instead.
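
Like the other settings, this is easiest to pass at submit time. A larger queue only means the driver buffers more pending listener events before dropping them, at the cost of some extra driver memory. A minimal sketch (other flags omitted, jar name is a placeholder):

# Spark 2.3+ key; a bigger queue trades driver memory for fewer dropped events.
spark-submit \
  --conf spark.scheduler.listenerbus.eventqueue.capacity=100000 \
  my-app.jar
# Before Spark 2.3, use spark.scheduler.listenerbus.eventqueue.size instead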

Reference

Investigation of the Spark History Server losing Task, Job and Stage information
