The RDD partition 2GB limit

While processing a fairly large dataset with Spark, I ran into the 2GB partition limit.

Where the problem came from:

500GB of raw data is stored in HBase. Spark reads this data from HBase for processing, and the job's resources are allocated as follows:
--executor-memory 12G \
--executor-cores 6 \
--conf spark.yarn.driver.memoryOverhead=2048 \
--conf spark.yarn.executor.memoryOverhead=4096 \
--conf spark.shuffle.service.enabled=true \
--conf spark.dynamicAllocation.enabled=true \
--conf spark.dynamicAllocation.executorIdleTimeout=60s \
--conf spark.dynamicAllocation.initialExecutors=2 \
--conf spark.dynamicAllocation.maxExecutors=10 \
--conf spark.dynamicAllocation.minExecutors=0 \
--conf spark.memory.useLegacyMode=false \
--conf spark.memory.fraction=0.75 \
--conf spark.memory.storageFraction=0.5 

The log fragment thrown by Spark:

Caused by: java.lang.IllegalArgumentException: Size exceeds Integer.MAX_VALUE
         at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:828)
         at org.apache.spark.storage.DiskStore$$anonfun$getBytes$2.apply(DiskStore.scala:127)
         at org.apache.spark.storage.DiskStore$$anonfun$getBytes$2.apply(DiskStore.scala:115)
         at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1240)
         at org.apache.spark.storage.DiskStore.getBytes(DiskStore.scala:129)
         at org.apache.spark.storage.DiskStore.getBytes(DiskStore.scala:136)
         at org.apache.spark.storage.BlockManager.doGetLocal(BlockManager.scala:515)
         at org.apache.spark.storage.BlockManager.getLocal(BlockManager.scala:430)
         at org.apache.spark.storage.BlockManager.get(BlockManager.scala:669)
         at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:44)
         at org.apache.spark.rdd.RDD.iterator(RDD.scala:268)
         at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
         at org.apache.spark.scheduler.Task.run(Task.scala:89)
         at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:242)
         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
         at java.lang.Thread.run(Thread.java:745)

The Spark job exited immediately after hitting the exception above.

Solution

Increase the number of RDD partitions. The job originally ran with 60 partitions; after using repartition to raise the count to 500, the job completed and the exception no longer appeared.
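
As a rough sketch of the fix, assuming the data is read with newAPIHadoopRDD and HBase's TableInputFormat (the table name my_table and the mapping step are placeholders, not the original job's code), the essential change is the repartition(500) call:

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.spark.{SparkConf, SparkContext}

object RepartitionFix {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("hbase-repartition"))

    // Read the HBase table; the initial partition count follows the number of
    // HBase regions (60 in this job), which is too few for 500GB of data.
    val hbaseConf = HBaseConfiguration.create()
    hbaseConf.set(TableInputFormat.INPUT_TABLE, "my_table") // placeholder table name
    val raw = sc.newAPIHadoopRDD(
      hbaseConf,
      classOf[TableInputFormat],
      classOf[ImmutableBytesWritable],
      classOf[Result])
    println(s"partitions before: ${raw.partitions.length}") // 60 here

    // Map to plain, serializable values before shuffling (Result itself is not
    // Java-serializable), then spread the rows over many smaller partitions so
    // that no single cached block approaches the Integer.MAX_VALUE ceiling.
    val rows = raw.map { case (key, result) => (new String(key.copyBytes()), result.size()) }
    val repartitioned = rows.repartition(500)
    println(s"partitions after: ${repartitioned.partitions.length}") // 500

    // ... the rest of the processing runs on `repartitioned` ...
    sc.stop()
  }
}

Note that repartition triggers a full shuffle. When the goal is only to reduce the partition count, coalesce avoids that shuffle; here the point is the opposite, to increase the count so each partition stays well under 2GB.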

Why is there a 2GB limit?

I have not found an official statement on this. My own understanding is that a single partition holding too much data makes processing slow anyway, so there is little point in supporting it. As for why the cap is exactly 2GB: judging from the stack trace above, DiskStore reads a cached block back by memory-mapping its file with FileChannel.map, and the resulting MappedByteBuffer is indexed by a Java Int, so a single block cannot exceed Integer.MAX_VALUE (roughly 2GB) bytes; a small sketch of this appears after the reference list below. For further reading:

Address various 2G limits

create LargeByteBuffer abstraction for eliminating 2GB limit on blocks

Remote Shuffle Blocks cannot be more than 2 GB
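
To make the source of the number concrete, here is a small standalone sketch (plain JDK code, not Spark): the FileChannel.map call at the top of the stack trace accepts a long size but rejects anything above Integer.MAX_VALUE, because the MappedByteBuffer it returns is indexed by an Int. The temporary file below is only there to have something to map.

import java.io.RandomAccessFile
import java.nio.channels.FileChannel
import java.nio.file.Files

object MapLimitDemo {
  def main(args: Array[String]): Unit = {
    // Any readable file works: map() validates the requested size before it
    // touches the file contents, so an empty temp file is enough here.
    val tmp = Files.createTempFile("map-limit", ".bin").toFile
    val channel = new RandomAccessFile(tmp, "r").getChannel
    try {
      // map() takes a long size, but the MappedByteBuffer it returns is
      // Int-indexed, so sizes above Integer.MAX_VALUE are rejected with the
      // same "Size exceeds Integer.MAX_VALUE" seen in the Spark log.
      channel.map(FileChannel.MapMode.READ_ONLY, 0L, Integer.MAX_VALUE.toLong + 1)
    } catch {
      case e: IllegalArgumentException => println(e.getMessage)
    } finally {
      channel.close()
      tmp.delete()
    }
  }
}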

Reposted from blog.csdn.net/oitebody/article/details/80164742