Hive Exception: GC overhead limit exceeded


FAILED: Execution Error, return code -101 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. GC overhead limit exceeded

2018-01-08 21:35:44     End of local task; Time Taken: 1.155 sec.
Execution completed successfully
MapredLocal task succeeded
2018-01-08 21:35:53     Starting to launch local task to process map join;      maximum memory = 477626368
2018-01-08 21:35:54     Processing rows:        200000  Hashtable size: 199999  Memory usage:   79776984        percentage:     0.167
2018-01-08 21:35:55     Dump the side-table for tag: 1 with group count: 282501 into file: file:/tmp/deploy/42dec95c-63ea-4dce-9562-e4f83ebb1ddc/hive_2018-01-08_20-57-38_238_2057102698935725505-1/-local-10016/HashTable-Stage-20/MapJoin-mapfile21--.hashtable
2018-01-08 21:35:55     Uploaded 1 File to: file:/tmp/deploy/42dec95c-63ea-4dce-9562-e4f83ebb1ddc/hive_2018-01-08_20-57-38_238_2057102698935725505-1/-local-10016/HashTable-Stage-20/MapJoin-mapfile21--.hashtable (8516673 bytes)
2018-01-08 21:35:55     End of local task; Time Taken: 2.459 sec.
Execution completed successfully
MapredLocal task succeeded
Launching Job 7 out of 15
Number of reduce tasks is set to 0 since there's no reduce operator
FAILED: Execution Error, return code -101 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. GC overhead limit exceeded
MapReduce Jobs Launched:
Stage-Stage-12: Map: 11  Reduce: 1   Cumulative CPU: 1927.1 sec   HDFS Read: 586012632 HDFS Write: 114 SUCCESS
Stage-Stage-13: Map: 10  Reduce: 1   Cumulative CPU: 281.09 sec   HDFS Read: 21697300 HDFS Write: 6158653 SUCCESS
Stage-Stage-14: Map: 11  Reduce: 5   Cumulative CPU: 1961.42 sec   HDFS Read: 364190214 HDFS Write: 113075110 SUCCESS
Stage-Stage-16: Map: 10  Reduce: 1   Cumulative CPU: 265.6 sec   HDFS Read: 15781064 HDFS Write: 8559446 SUCCESS
Total MapReduce CPU Time Spent: 0 days 1 hours 13 minutes 55 seconds 210 msec

Cause: the task does not have enough memory, so the JVM garbage-collects constantly. GC overhead limit exceeded is thrown when the JVM is spending the vast majority of its time in GC while reclaiming only a tiny fraction of the heap, i.e. it is making no real progress.
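Before changing anything, it can help to see what the job is currently allowed. In the Hive CLI, set with just a key echoes its current value; the two parameter names below are standard Hadoop ones, and their defaults vary by cluster:

set mapreduce.reduce.memory.mb;
set mapreduce.reduce.java.opts;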

There are two ways to fix this:
1. Add memory: increase the memory available to the reduce tasks, e.g.:

set mapreduce.reduce.memory.mb=7000;

2. Optimize the query itself, for example by reducing the amount of data it reads, or by restructuring its logic. A sketch combining both fixes follows below.
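As a minimal sketch of both fixes together: the memory values are illustrative, the orders/users tables and their columns are hypothetical, and note that mapreduce.reduce.memory.mb only raises the YARN container limit, while the JVM heap itself is set separately through mapreduce.reduce.java.opts and should stay comfortably below the container size:

-- Fix 1: more reduce-side memory (container limit plus JVM heap).
set mapreduce.reduce.memory.mb=7000;
set mapreduce.reduce.java.opts=-Xmx5600m;

-- Fix 2: read less data. Project only the needed columns and filter
-- early, so far less data flows into the join.
-- (orders, users, and the dt partition column are hypothetical.)
SELECT o.order_id, u.name
FROM (
  SELECT order_id, user_id
  FROM orders
  WHERE dt = '2018-01-08'
) o
JOIN users u ON o.user_id = u.user_id;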
