Advanced Hive: Data Compression

Hive's compression support comes from MapReduce, that is, from the installed Hadoop components; if they do not support a codec, you have to compile it in yourself. The most commonly used codec is Snappy, which depends on the operating system's snappy library, so Hadoop builds usually do not include it by default and must be recompiled to add support. Fortunately, the CDH distribution already ships with this support, so no recompilation is needed. We can use the hadoop checknative command to check whether the current Hadoop cluster supports Snappy compression:

[root@node3 ~]# hadoop  checknative -a
19/03/05 21:50:38 INFO bzip2.Bzip2Factory: Successfully loaded & initialized native-bzip2 library system-native
19/03/05 21:50:38 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
Native library checking:
hadoop:  true /opt/cloudera/parcels/CDH-5.15.2-1.cdh5.15.2.p0.3/lib/hadoop/lib/native/libhadoop.so.1.0.0
zlib:    true /lib64/libz.so.1
snappy:  true /opt/cloudera/parcels/CDH-5.15.2-1.cdh5.15.2.p0.3/lib/hadoop/lib/native/libsnappy.so.1
lz4:     true revision:10301
bzip2:   true /lib64/libbz2.so.1
openssl: true /lib64/libcrypto.so

In the checknative output, only the codecs whose value is shown as true are supported by the cluster; any codec marked false cannot be used from Hive.
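
Besides checknative, you can also confirm that the Snappy codec class is actually registered in the cluster configuration. Below is a minimal sketch of such a check; it assumes the standard Hadoop property io.compression.codecs is set in core-site.xml, as it normally is on a CDH cluster. Snappy is usable from Hive only if org.apache.hadoop.io.compress.SnappyCodec appears in the returned list.

[root@node3 ~]# hdfs getconf -confKey io.compression.codecs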

Using Compression

Let's run a wordcount job to verify whether compression is actually applied when MapReduce runs.

First, create a directory on HDFS. Since the cluster was built with CDH, directories on HDFS are permission-controlled, so we cannot create them directly as the root user; we have to run the command as the hdfs user via sudo -u.

[root@node3 ~]# sudo -u hdfs hdfs dfs -mkdir -p /user/test/wordcount/input
[root@node3 datas]# sudo -u hdfs hdfs dfs -put /opt/datas/TestHiveQuery.java /user/test/wordcount/input 
[root@node3 datas]# sudo -u hdfs hdfs dfs -ls /user/test/wordcount/input
Found 1 items
-rw-r--r--   3 hdfs supergroup       1027 2019-03-05 22:27 /user/test/wordcount/input/TestHiveQuery.java

Upload a Java source file written earlier to HDFS, then run the wordcount program against it. First, look at the result without compression:

[root@node3 ~]# sudo -u hdfs yarn jar /opt/cloudera/parcels/CDH-5.15.2-1.cdh5.15.2.p0.3/jars/hadoop-mapreduce-examples-2.6.0-cdh5.15.2.jar wordcount  -Dmapreduce.map.output.compress=false /user/test/wordcount/input /user/test/wordcount/output
19/03/05 22:49:56 INFO client.RMProxy: Connecting to ResourceManager at node1/192.168.246.160:8032
19/03/05 22:49:56 INFO input.FileInputFormat: Total input paths to process : 1
19/03/05 22:49:56 INFO mapreduce.JobSubmitter: number of splits:1
19/03/05 22:49:57 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1550913260110_0007
19/03/05 22:49:57 INFO impl.YarnClientImpl: Submitted application application_1550913260110_0007
19/03/05 22:49:57 INFO mapreduce.Job: The url to track the job: http://node1:8088/proxy/application_1550913260110_0007/
19/03/05 22:49:57 INFO mapreduce.Job: Running job: job_1550913260110_0007
19/03/05 22:50:06 INFO mapreduce.Job: Job job_1550913260110_0007 running in uber mode : false
19/03/05 22:50:06 INFO mapreduce.Job:  map 0% reduce 0%
19/03/05 22:50:58 INFO mapreduce.Job:  map 100% reduce 0%
19/03/05 22:51:15 INFO mapreduce.Job:  map 100% reduce 67%
19/03/05 22:51:19 INFO mapreduce.Job:  map 100% reduce 100%
19/03/05 22:51:19 INFO mapreduce.Job: Job job_1550913260110_0007 completed successfully
19/03/05 22:51:19 INFO mapreduce.Job: Counters: 49
        File System Counters
                FILE: Number of bytes read=1207
                FILE: Number of bytes written=599269
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=1154
                HDFS: Number of bytes written=929
                HDFS: Number of read operations=12
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=6
        Job Counters 
                Launched map tasks=1
                Launched reduce tasks=3
                Data-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=48725
                Total time spent by all reduces in occupied slots (ms)=30301
                Total time spent by all map tasks (ms)=48725
                Total time spent by all reduce tasks (ms)=30301
                Total vcore-milliseconds taken by all map tasks=48725
                Total vcore-milliseconds taken by all reduce tasks=30301
                Total megabyte-milliseconds taken by all map tasks=49894400
                Total megabyte-milliseconds taken by all reduce tasks=31028224
        Map-Reduce Framework
                Map input records=33
                Map output records=93
                Map output bytes=1272
                Map output materialized bytes=1207
                Input split bytes=127
                Combine input records=93
                Combine output records=65
                Reduce input groups=65
                Reduce shuffle bytes=1207
                Reduce input records=65
                Reduce output records=65
                Spilled Records=130
                Shuffled Maps =3
                Failed Shuffles=0
                Merged Map outputs=3
                GC time elapsed (ms)=250
                CPU time spent (ms)=45130
                Physical memory (bytes) snapshot=881377280
                Virtual memory (bytes) snapshot=6224773120
                Total committed heap usage (bytes)=648822784
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters 
                Bytes Read=1027
        File Output Format Counters 
                Bytes Written=929

Note one counter in particular: Map output materialized bytes=1207.
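
If you do not want to scan the whole console dump, the same counter can be queried from the finished job with the mapred CLI. This is a sketch only: it assumes the MapReduce JobHistory server is running, uses the job id printed in the log above, and reads the counter from the standard TaskCounter group.

[root@node3 ~]# sudo -u hdfs mapred job -counter job_1550913260110_0007 \
    org.apache.hadoop.mapreduce.TaskCounter MAP_OUTPUT_MATERIALIZED_BYTES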

Now look at the output with compression enabled:

[root@node3 datas]# sudo -u hdfs yarn jar /opt/cloudera/parcels/CDH-5.15.2-1.cdh5.15.2.p0.3/jars/hadoop-mapreduce-examples-2.6.0-cdh5.15.2.jar wordcount  -Dmapreduce.map.output.compress=true -Dmapreduce.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec /user/test/wordcount/input /user/test/wordcount/output3
19/03/05 22:33:46 INFO client.RMProxy: Connecting to ResourceManager at node1/192.168.246.160:8032
19/03/05 22:33:47 INFO input.FileInputFormat: Total input paths to process : 1
19/03/05 22:33:47 INFO mapreduce.JobSubmitter: number of splits:1
19/03/05 22:33:47 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1550913260110_0004
19/03/05 22:33:48 INFO impl.YarnClientImpl: Submitted application application_1550913260110_0004
19/03/05 22:33:49 INFO mapreduce.Job: The url to track the job: http://node1:8088/proxy/application_1550913260110_0004/
19/03/05 22:33:49 INFO mapreduce.Job: Running job: job_1550913260110_0004
19/03/05 22:38:42 INFO mapreduce.Job: Job job_1550913260110_0004 running in uber mode : false
19/03/05 22:38:42 INFO mapreduce.Job:  map 0% reduce 0%
19/03/05 22:40:43 INFO mapreduce.Job:  map 100% reduce 0%
19/03/05 22:41:02 INFO mapreduce.Job:  map 100% reduce 33%
19/03/05 22:41:16 INFO mapreduce.Job:  map 100% reduce 67%
19/03/05 22:41:33 INFO mapreduce.Job:  map 100% reduce 100%
19/03/05 22:41:35 INFO mapreduce.Job: Job job_1550913260110_0004 completed successfully
19/03/05 22:41:35 INFO mapreduce.Job: Counters: 49
        File System Counters
                FILE: Number of bytes read=1090
                FILE: Number of bytes written=599031
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=1154
                HDFS: Number of bytes written=929
                HDFS: Number of read operations=12
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=6
        Job Counters 
                Launched map tasks=1
                Launched reduce tasks=3
                Data-local map tasks=1
                Total time spent by all maps in occupied slots (ms)=115316
                Total time spent by all reduces in occupied slots (ms)=66650
                Total time spent by all map tasks (ms)=115316
                Total time spent by all reduce tasks (ms)=66650
                Total vcore-milliseconds taken by all map tasks=115316
                Total vcore-milliseconds taken by all reduce tasks=66650
                Total megabyte-milliseconds taken by all map tasks=118083584
                Total megabyte-milliseconds taken by all reduce tasks=68249600
        Map-Reduce Framework
                Map input records=33
                Map output records=93
                Map output bytes=1272
                Map output materialized bytes=1078
                Input split bytes=127
                Combine input records=93
                Combine output records=65
                Reduce input groups=65
                Reduce shuffle bytes=1078
                Reduce input records=65
                Reduce output records=65
                Spilled Records=130
                Shuffled Maps =3
                Failed Shuffles=0
                Merged Map outputs=3
                GC time elapsed (ms)=1681
                CPU time spent (ms)=11690
                Physical memory (bytes) snapshot=709476352
                Virtual memory (bytes) snapshot=6139576320
                Total committed heap usage (bytes)=335171584
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters 
                Bytes Read=1027
        File Output Format Counters 
                Bytes Written=929

The -Dmapreduce.map.output.compress parameter controls whether map-output compression is enabled. With compression on, Map output materialized bytes=1078, smaller than the 1207 bytes produced without compression. Because the sample input here is tiny, the compression ratio is not dramatic; on large data files the effect is much more noticeable.
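
To turn on map-output compression for every MapReduce job instead of passing -D options per job, the same two properties can be set in mapred-site.xml. The snippet below is a minimal sketch; on CDH this is normally configured through Cloudera Manager rather than by editing the file directly.

<!-- Compress intermediate map output with Snappy for all jobs -->
<property>
    <name>mapreduce.map.output.compress</name>
    <value>true</value>
</property>
<property>
    <name>mapreduce.map.output.compress.codec</name>
    <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>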

To check whether compression is enabled in Hive, inspect the mapreduce.map.output.compress parameter, for example:

[root@node3 datas]# sudo -u hive hive
Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.15.2-1.cdh5.15.2.p0.3/jars/hive-common-1.1.0-cdh5.15.2.jar!/hive-log4j.properties
WARNING: Hive CLI is deprecated and migration to Beeline is recommended.
hive> set mapreduce.map.output.compress ;
mapreduce.map.output.compress=true
hive> set mapreduce.map.output.compress.codec;
mapreduce.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec

As you can see, compression is already enabled in this Hive instance, and the codec in use is Snappy.
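
If these parameters were not already set, compression can also be enabled per session from the Hive CLI. The following is a minimal sketch using the same property names as above, plus hive.exec.compress.intermediate, Hive's own switch for compressing data passed between job stages:

hive> set hive.exec.compress.intermediate=true;
hive> set mapreduce.map.output.compress=true;
hive> set mapreduce.map.output.compress.codec=org.apache.hadoop.io.compress.SnappyCodec;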

Reposted from blog.csdn.net/xjjdlut/article/details/88359528