Spark job: Connection reset by peer

I have a Spark job running on Hadoop. Some jobs succeed, but one keeps failing: the executors show "Connection reset by peer", and the log finally reports: Container killed by YARN for exceeding memory limits. 16.9 GB of 16 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.

At first I raised the executor memory from 25 GB to 30 GB, but the error only changed to: killed by YARN for exceeding memory limits. 30.7 GB of 30 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead. I then tried giving it even more memory and fewer cores, but the error was still there. After a few hours I came across another article and finally resolved the problem.
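For context, this is roughly what "boosting memoryOverhead" looks like instead of only growing executor memory. The values below are illustrative, not the ones from my job, and spark.yarn.executor.memoryOverhead is the older property name (newer Spark versions call it spark.executor.memoryOverhead). In practice these are usually passed as --conf flags to spark-submit; here is a sketch of setting them in code before the session starts:

```scala
import org.apache.spark.sql.SparkSession

// Sketch only: illustrative numbers, not the settings from the failing job.
// memoryOverhead is the off-heap headroom YARN adds per executor, in MB.
val spark = SparkSession.builder()
  .appName("overhead-sketch")
  .config("spark.executor.memory", "16g")
  .config("spark.yarn.executor.memoryOverhead", "4096") // roughly 25% of executor memory
  .config("spark.executor.cores", "2")
  .getOrCreate()
```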

If your data is not that large but each partition is very big (i.e. you have too few partitions), a single task can exceed the container's memory limit and trigger this error. Making the partitions smaller, that is, repartitioning into more partitions, is what solved it, as in the sketch below.
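A minimal sketch of that fix, assuming a Parquet input on HDFS; the paths and the partition count are placeholders, not the values from the original job:

```scala
import org.apache.spark.sql.SparkSession

// Split oversized partitions so no single task holds more data than the
// YARN container allows. More partitions => smaller partitions => lower
// per-task memory pressure.
object RepartitionSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("repartition-sketch").getOrCreate()

    val df = spark.read.parquet("hdfs:///path/to/input") // hypothetical input path

    df.repartition(400) // pick a count so each partition stays well under executor memory
      .write
      .parquet("hdfs:///path/to/output") // hypothetical output path

    spark.stop()
  }
}
```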


 


Reprinted from blog.csdn.net/Baron_ND/article/details/86645687