Hive Performance Tuning, Common Problems, and MR Job Tuning


Setting the job queue

Submit all of the session's MapReduce jobs to the queue queue1. This only applies to the current session; it has to be set again the next time Hive is started.

hive --hiveconf mapreduce.job.queuename=queue1
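If the session is already running, the same property can also be set from inside Hive; a minimal sketch (queue1 is just the example queue name used above):

set mapreduce.job.queuename=queue1;   -- effective for the current session only

To avoid retyping it, the same set statement can be placed in a user's .hiverc so it runs at every CLI start.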

Setting the execution engine

set hive.execution.engine=mr;
set hive.execution.engine=spark;
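To confirm which engine the current session will use, SET with no value prints the setting; a short sketch of a typical check-and-switch:

set hive.execution.engine;        -- prints the current value
set hive.execution.engine=mr;     -- switch this session to classic MapReduce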

Controlling the number of map tasks in Hive

hive.merge.mapfiles=true  // merge the small output files of map-only jobs
mapreduce.map.memory.mb=8192  // memory requested per map task (8 GB)
mapreduce.map.cpu.vcores=1  // vcores requested per map task (must be allowed by the YARN configuration)
set hive.input.format=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat;  -- combine small files before the map phase
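As a sketch, these map-side settings are usually applied together at the top of a session before a heavy query; the memory value here is purely illustrative and must fit your YARN container limits:

set hive.input.format=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat;  -- combine small files before the map phase
set hive.merge.mapfiles=true;          -- merge the small output files of map-only jobs
set mapreduce.map.memory.mb=4096;      -- container memory per map task (illustrative)
set mapreduce.map.cpu.vcores=1;        -- vcores per map task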

The number of map tasks is determined by three settings:
mapred.min.split.size.per.node   -- minimum split size kept on a single node
mapred.min.split.size.per.rack   -- minimum split size kept within a single rack
mapred.max.split.size            -- maximum size of a split

The goal is to cut large files in the input directory into several map inputs and to combine small files into a single map input. The procedure has three steps:
a. For each file in the input directory: if it is longer than mapred.max.split.size, it is cut on block boundaries into several splits (one split becomes one map's input); each of these splits is at least mapred.max.split.size long and, being made of whole blocks, also at least a block long. If the remaining tail of the file is larger than mapred.min.split.size.per.node it becomes its own split; otherwise it is set aside for now.
b. What is left are short fragments. Fragments within the same rack are merged: whenever the accumulated length exceeds mapred.max.split.size a split is emitted; if the final remainder for the rack is larger than mapred.min.split.size.per.rack it also becomes a split, otherwise it is set aside.
c. Fragments from different racks are merged: whenever the accumulated length exceeds mapred.max.split.size a split is emitted, and whatever is left at the end is merged into one final split regardless of its size.
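To put this into practice, the three split-size parameters are set before the query together with the combine input format; a minimal sketch (the 256 MB / 100 MB values are illustrative, not recommendations):

set mapred.max.split.size=256000000;           -- upper bound of a split, ~256 MB
set mapred.min.split.size.per.node=100000000;  -- minimum split size kept on one node, ~100 MB
set mapred.min.split.size.per.rack=100000000;  -- minimum split size kept within one rack, ~100 MB
set hive.input.format=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat;

Raising mapred.max.split.size gives fewer, larger map tasks; lowering it gives more, smaller ones.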

Controlling the number of reduce tasks in Hive

hive.merge.mapredfiles=true  // merge the small output files of the reduce side
mapred.reduce.tasks=3  (force the number of reduce tasks)
hive.exec.reducers.bytes.per.reducer  (bytes of input handled per reducer; default 256000000)
hive.exec.reducers.max  (maximum number of reducers per job; default 1009)
The reducer count follows a simple formula: N = min(hive.exec.reducers.max, total input size / hive.exec.reducers.bytes.per.reducer)
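As a worked example of the formula: with roughly 10 GB of input and the defaults above, N = min(1009, 10240 MB / 256 MB) = 40 reducers. Lowering the per-reducer byte count raises the reducer count; an illustrative override:

set hive.exec.reducers.bytes.per.reducer=128000000;  -- ~128 MB per reducer, about 80 reducers for 10 GB of input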

Cases that always run with a single reducer:
  a. aggregations without GROUP BY
  b. ORDER BY (a multi-reducer alternative is sketched below)
  c. Cartesian products (cross joins)
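For the ORDER BY case, when a global total order is not strictly required, a common workaround is DISTRIBUTE BY plus SORT BY, which sorts within each reducer and therefore scales across many reducers; a sketch with a hypothetical table t and column c:

select * from t distribute by c sort by c;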

Data skew

set hive.exec.reducers.max=200;
set mapred.reduce.tasks=200;  -- increase the number of reducers
set hive.groupby.mapaggr.checkinterval=100000;  -- if a GROUP BY key has more rows than this it is split; tune to your data volume
set hive.groupby.skewindata=true;  -- set to true when the skew occurs in a GROUP BY
set hive.skewjoin.key=100000;  -- if a JOIN key has more rows than this it is split; tune to your data volume
set hive.optimize.skewjoin=true;  -- set to true when the skew occurs in a JOIN
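A sketch of how these settings are typically combined ahead of a skewed query; the table and column names are hypothetical:

set hive.groupby.skewindata=true;
set hive.optimize.skewjoin=true;
set hive.skewjoin.key=100000;
-- hypothetical query where user_id is heavily skewed
select l.user_id, count(*)
from logs l join users u on l.user_id = u.user_id
group by l.user_id;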

Hive parallel execution

Hive does not run the stages of a query in parallel by default, but parallel execution can be enabled through the parameters below (in hive-site.xml or with SET) to improve efficiency.

hive.exec.parallel  -- whether independent MR stages may run in parallel (default false)
hive.exec.parallel.thread.number  -- maximum number of stages that may run in parallel
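A minimal sketch of turning this on for one session (8 is commonly cited as the default thread count; treat the value as illustrative):

set hive.exec.parallel=true;
set hive.exec.parallel.thread.number=8;  -- upper bound on stages running at the same time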

Error during an INSERT ... SELECT

Caused by: org.apache.hadoop.hive.ql.metadata.HiveFatalException: [Error 20004]: Fatal error occurred when node tried to create too many dynamic partitions. The maximum number of dynamic partitions is controlled by hive.exec.max.dynamic.partitions and hive.exec.max.dynamic.partitions.pernode. Maximum was set to: 100 
The error shows that the default limit on dynamic partitions is 100 and the query produces more partitions than that, so set the following before running the job:
## Hive dynamic-partition settings
## enable dynamic partitioning
set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;
## raise the maximum number of dynamic partitions
set hive.exec.max.dynamic.partitions.pernode=1000;
set hive.exec.max.dynamic.partitions=1000;
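Put together, a dynamic-partition insert that needs these limits looks roughly like this; src, dst and the partition column day_id are hypothetical names:

set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;
set hive.exec.max.dynamic.partitions=1000;
set hive.exec.max.dynamic.partitions.pernode=1000;
insert overwrite table dst partition (day_id)
select col1, col2, day_id from src;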

Error after submitting a Hive job

Invalid resource request, requested memory < 0, or requested memory > max configured, requestedMemory=1536, maxMemory=1024 
These two settings need to be increased:
yarn.scheduler.maximum-allocation-mb 
yarn.nodemanager.resource.memory-mb 
or give the (virtual) machines more memory.
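These are YARN settings (yarn-site.xml or the cluster manager), so they cannot be changed from inside a Hive session; a sketch of values that would satisfy the 1536 MB request in the error above (sizes are illustrative and must fit the actual nodes):

yarn.scheduler.maximum-allocation-mb=8192  // largest single container YARN will grant
yarn.nodemanager.resource.memory-mb=8192   // memory one NodeManager offers to containers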

Hive out of memory (map join)

2018-06-21 09:15:13     Starting to launch local task to process map join;      maximum memory = 3817865216
2018-06-21 09:15:29     Dump the side-table for tag: 0 with group count: 30673 into file: file:/tmp/data_m/753a5201-5454-431c-bcbb-d320d7cc0df6/hive_2018-06-21_09-03-15_808_550874019358135087-1/-local-10012/HashTable-Stage-11/MapJoin-mapfile20--.hashtable
2018-06-21 09:15:30     Uploaded 1 File to: file:/tmp/data_m/753a5201-5454-431c-bcbb-d320d7cc0df6/hive_2018-06-21_09-03-15_808_550874019358135087-1/-local-10012/HashTable-Stage-11/MapJoin-mapfile20--.hashtable (1769600 bytes)
2018-06-21 09:15:30     End of local task; Time Taken: 16.415 sec.
Execution completed successfully
MapredLocal task succeeded
Launching Job 4 out of 7
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1515910904620_2397048, Tracking URL = http://nma04-305-bigdata-030012222.ctc.local:8088/proxy/application_1515910904620_2397048/
Kill Command = /software/cloudera/parcels/CDH-5.8.3-1.cdh5.8.3.p0.2/lib/hadoop/bin/hadoop job  -kill job_1515910904620_2397048
Hadoop job information for Stage-11: number of mappers: 0; number of reducers: 0
2018-06-21 09:17:19,765 Stage-11 map = 0%,  reduce = 0%
Ended Job = job_1515910904620_2397048 with errors
Error during job, obtaining debugging information...
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
The log above shows maximum memory = 3817865216, and the failure is reported as FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask.
The cause was that one side of the join was too large: the task ran out of memory and the MR job was killed, which is why the log highlights maximum memory = 3817865216. After rewriting the Hive SQL to restrict the data range and reduce the input size, the log still prints maximum memory = 3817865216, but the MR job completes normally.
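Besides shrinking the data, a hedged alternative is to keep Hive from converting the join into a map join at all, or to lower the small-table threshold so only genuinely small tables are loaded into the local hash table; a sketch:

set hive.auto.convert.join=false;              -- fall back to a common (reduce-side) join
-- or keep auto map joins but cap the small-table size (~25 MB here, illustrative)
set hive.mapjoin.smalltable.filesize=25000000;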

Hive reports an LZO "Premature EOF from inputStream" error

insert overwrite table data_m.tmp_lxm20180624_baas_batl_dpiqixin_other_https_4g_ext
partition(prov_id,day_id,net_type)
select msisdn,sai,protocolid,starttime,prov_id,day_id,net_type
from dpiqixinbstl.qixin_dpi_info_4g
where day_id='20180624' and prov_id='811';
Error message:
Error: java.io.IOException: java.lang.reflect.InvocationTargetException  
    at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)  
    at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57)  
    at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:302)  
    at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.<init>(HadoopShimsSecure.java:249)  
    at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getRecordReader(HadoopShimsSecure.java:363)  
    at org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:591)  
    at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.<init>(MapTask.java:168)  
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:409)  
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)  
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)  
    at java.security.AccessController.doPrivileged(Native Method)  
    at javax.security.auth.Subject.doAs(Subject.java:396)  
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1550)  
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)  
Caused by: java.lang.reflect.InvocationTargetException  
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)  
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)  
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)  
    at java.lang.reflect.Constructor.newInstance(Constructor.java:513)  
    at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:288)  
    ... 11 more  
Caused by: java.io.EOFException: Premature EOF from inputStream  
    at com.hadoop.compression.lzo.LzopInputStream.readFully(LzopInputStream.java:75)  
    at com.hadoop.compression.lzo.LzopInputStream.readHeader(LzopInputStream.java:114)  
    at com.hadoop.compression.lzo.LzopInputStream.<init>(LzopInputStream.java:54)  
    at com.hadoop.compression.lzo.LzopCodec.createInputStream(LzopCodec.java:83)  
    at org.apache.hadoop.hive.ql.io.RCFile$ValueBuffer.<init>(RCFile.java:667)  
    at org.apache.hadoop.hive.ql.io.RCFile$Reader.<init>(RCFile.java:1431)  
    at org.apache.hadoop.hive.ql.io.RCFile$Reader.<init>(RCFile.java:1342)  
    at org.apache.hadoop.hive.ql.io.rcfile.merge.RCFileBlockMergeRecordReader.<init>(RCFileBlockMergeRecordReader.java:46)  
    at org.apache.hadoop.hive.ql.io.rcfile.merge.RCFileBlockMergeInputFormat.getRecordReader(RCFileBlockMergeInputFormat.java:38)  
    at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.<init>(CombineHiveRecordReader.java:65)  
    ... 16 more  
The log shows that a Premature EOF from inputStream error occurred while reading LZO-compressed data; the failure happened in stage-3.
Reference:
http://www.cnblogs.com/aprilrain/archive/2013/03/06/2946326.html
Cause:
If the output format is TextOutputFormat, use LzopCodec; the matching input format for reading that output is LzoTextInputFormat.
If the output format is SequenceFileOutputFormat, use LzoCodec; the matching input format is SequenceFileInputFormat.
If you write SequenceFile output with LzopCodec, then reading it back with SequenceFileInputFormat will fail with "java.io.EOFException: Premature EOF from inputStream".
Fix:
Check the current mapreduce.output.fileoutputformat.compress.codec setting:
set mapreduce.output.fileoutputformat.compress.codec;
## current value:
mapreduce.output.fileoutputformat.compress.codec=com.hadoop.compression.lzo.LzoCodec
## change it to:
set mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.DefaultCodec;
Re-run the HQL and the problem is resolved.
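For reference, combining the codec change with the original statement, the re-run looks roughly like this (whether output compression is enabled at all depends on your cluster defaults):

set mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.DefaultCodec;
insert overwrite table data_m.tmp_lxm20180624_baas_batl_dpiqixin_other_https_4g_ext
partition(prov_id,day_id,net_type)
select msisdn,sai,protocolid,starttime,prov_id,day_id,net_type
from dpiqixinbstl.qixin_dpi_info_4g
where day_id='20180624' and prov_id='811';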
