Hive on Tez on CDH

1. Preparing the Tez build environment

Replace x.y.z with the Tez release number that you are using, e.g. 0.5.0. For Tez versions 0.8.3 and higher, Tez needs Apache Hadoop to be of version 2.6.0 or higher. For Tez version 0.9.0 and higher, Tez needs Apache Hadoop to be version 2.7.0 or higher.
So Tez 0.9+ needs Hadoop 2.7+. My CDH version is 2.6.0-5.10.2 (Hadoop 2.6.0 on CDH 5.10.2), which means 0.8.5 is the newest Tez release I can use.
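The version gate above can be sketched as a small shell helper. This is only an illustration of the quoted compatibility rules; it assumes GNU `sort` (for the `-V` version-sort flag).

```shell
# Pick the newest Tez line a given Hadoop version can run, per the
# compatibility rules quoted above. Assumes GNU sort (for -V).
ver_lt() {  # true if version $1 sorts strictly before $2
  [ "$1" != "$2" ] && [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
max_tez_for_hadoop() {
  if ver_lt "$1" 2.6.0; then echo "0.8.2 or older"
  elif ver_lt "$1" 2.7.0; then echo "0.8.5"
  else echo "0.9.x or newer"
  fi
}
max_tez_for_hadoop 2.6.0   # CDH 5.10.2 ships Hadoop 2.6.0
```

For Hadoop 2.6.0 this prints 0.8.5, which is why that release is used throughout this guide.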

1) Maven 3+

2) JDK 1.8+

3) protobuf 2.5.0 (this exact version is required)

https://github.com/protocolbuffers/protobuf/releases/download/v2.5.0/protobuf-2.5.0.tar.gz

./configure
make && make install
protoc --version
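A small guard for this step, as a sketch: the function takes the output of `protoc --version` (format `libprotoc 2.5.0`) as an argument and checks for the exact version Tez's code generation needs, so the check itself can be exercised anywhere.

```shell
# Check that a `protoc --version` output string reports exactly 2.5.0.
protoc_is_250() {
  [ "$(printf '%s\n' "$1" | awk '{print $2}')" = "2.5.0" ]
}
# On the build host you would run:
#   protoc_is_250 "$(protoc --version)" || { echo "wrong protoc" >&2; exit 1; }
protoc_is_250 "libprotoc 2.5.0" && echo "protoc OK"
```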
4) Tez source: http://apache.claz.org/tez/0.8.5/apache-tez-0.8.5-src.tar.gz

5) OS dependency packages:

yum -y install gcc gcc-c++ libstdc++-devel make
2. Building Tez

The stock pom.xml targets upstream Apache Hadoop, so it needs a few changes to build against CDH.

1) Change the hadoop version to the CDH version.
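In the Tez pom.xml the Hadoop version lives in a single property near the top of the file. As a sketch (verify the property name against your copy of the 0.8.5 pom), pointing it at the CDH artifact version looks like:

```xml
<properties>
  ...
  <hadoop.version>2.6.0-cdh5.10.2</hadoop.version>
</properties>
```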

2) The default repositories only carry upstream Hadoop artifacts, so add Cloudera's Maven repositories:

  <repository>
    <id>cloudera</id>
    <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
  </repository>

  <pluginRepository>
    <id>cloudera</id>
    <name>Cloudera Repositories</name>
    <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
  </pluginRepository>

3) Comment out the module that breaks the build: tez-ui fails to compile against CDH. The failure is actually fixable, but the UI is optional and nothing else depends on it.
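Concretely, this means wrapping the offending entry in the top-level pom's &lt;modules&gt; list in a comment. The module name below is as it appears in the 0.8.5 source tree; check yours:

```xml
<modules>
  ...
  <!-- <module>tez-ui</module> -->
  ...
</modules>
```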

4) Patch the Tez source. CDH needs this change; in my testing, upstream Hadoop did not.

vi /data/apache-tez-0.8.5-src/tez-mapreduce/src/main/java/org/apache/tez/mapreduce/hadoop/mapreduce/JobContextImpl.java

Add the following method:

  @Override
  public boolean userClassesTakesPrecedence() {
    return getJobConf().getBoolean(MRJobConfig.MAPREDUCE_JOB_USER_CLASSPATH_FIRST, false);
  }

5) Run the build:

mvn clean package -DskipTests=true -Dmaven.javadoc.skip=true
6) Grab the built package, tez-0.8.5.tar.gz:

cd tez-dist/target/
ls
archive-tmp  maven-archiver  tez-0.8.5  tez-0.8.5-minimal  tez-0.8.5-minimal.tar.gz  tez-0.8.5.tar.gz  tez-dist-0.8.5-tests.jar
3. Installing Tez. Only the machines running Hive components need it, but it is best to keep a copy on the other machines too, since you may also run hive/beeline there and those clients need Tez's lib dependencies.

1) Extract tez-0.8.5.tar.gz locally and upload a copy to HDFS:

mkdir tez
mv tez-0.8.5.tar.gz tez/
cd tez
tar -zxvf tez-0.8.5.tar.gz
rm -f tez-0.8.5.tar.gz
cd ..

# upload to HDFS
hdfs dfs -mkdir -p /engine/tez-0.8.5
hdfs dfs -put tez /engine/tez-0.8.5
hdfs dfs -chmod -R 777 /engine

# the local install directory is /opt/cloudera/parcels/tez
2) Create a tez-site.xml file (I put it under /etc/tez/conf) with the following content:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <property>
    <name>tez.lib.uris</name>
    <value>hdfs://tsczbdnndev1.trinasolar.com:8020/engine/tez-0.8.5/tez,hdfs://tsczbdnndev1.trinasolar.com:8020/engine/tez-0.8.5/tez/lib</value>
  </property>
  <property>
    <name>tez.use.cluster.hadoop-libs</name>
    <value>true</value>
  </property>
  <property>
    <name>tez.task.launch.env</name>
    <value>LD_LIBRARY_PATH=/opt/cloudera/parcels/CDH/lib/hadoop/lib/native</value>
  </property>
  <property>
    <name>tez.am.launch.env</name>
    <value>LD_LIBRARY_PATH=/opt/cloudera/parcels/CDH/lib/hadoop/lib/native</value>
  </property>
  <property>
    <name>tez.history.logging.service.class</name>
    <value>org.apache.tez.dag.history.logging.ats.ATSHistoryLoggingService</value>
  </property>
  <property>
    <name>tez.am.acls.enabled</name>
    <value>false</value>
  </property>
</configuration>

Two points to watch out for here:

1) tez.lib.uris must use the fully qualified hdfs://namenode:8020/ form. I first wrote just /engine/tez-0.8.5/tez, and debugging showed the directory could not be found.
2) Some articles online point tez.lib.uris straight at the tarball under /engine/tez-0.8.5/. I cannot speak for upstream Hadoop, but on CDH I tested this and it simply does not work.
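To avoid the relative-path pitfall in note 1), the value can be assembled from the NameNode address, e.g. (the hostname below is this cluster's; substitute your own):

```shell
# Build the fully qualified tez.lib.uris value from the NameNode URI.
NN="hdfs://tsczbdnndev1.trinasolar.com:8020"
BASE="/engine/tez-0.8.5/tez"
TEZ_LIB_URIS="$NN$BASE,$NN$BASE/lib"
echo "$TEZ_LIB_URIS"
```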
3) Add the kryo jar to the HDFS lib directory (the local Tez copy does not need it). Without it, Tez jobs fail at launch because the kryo classes cannot be found:

hdfs dfs -put /opt/cloudera/parcels/CDH/jars/kryo-2.22.jar /engine/tez-0.8.5/tez/lib

One more note: kryo ships in two versions, 2.21 and 2.22; we want 2.22.

4. Enabling Tez for Hue: in the Hive service's advanced configuration snippet, add a HADOOP_CLASSPATH environment variable so that Tez's lib directory ends up on the classpath:

HADOOP_CLASSPATH=/opt/cloudera/parcels/tez/lib/*:/opt/cloudera/parcels/tez/*:/etc/tez/conf:`hadoop classpath`
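As a sketch of how that line decomposes, the string can be built by a helper that takes the `hadoop classpath` output as an argument, so its shape is visible without a cluster (the paths are this install's):

```shell
# Assemble the HADOOP_CLASSPATH value: Tez jars, Tez config, then the
# stock Hadoop classpath passed in as $1.
build_tez_classpath() {
  tez_home="/opt/cloudera/parcels/tez"
  printf '%s\n' "$tez_home/lib/*:$tez_home/*:/etc/tez/conf:$1"
}
# On a cluster node:
#   export HADOOP_CLASSPATH="$(build_tez_classpath "$(hadoop classpath)")"
build_tez_classpath "(hadoop classpath output)"
```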

With all of the above in place, setup is complete. On to testing:

1) OS-level test:
[root@tsczbdnndev1 ~]# export HADOOP_CLASSPATH=/opt/cloudera/parcels/tez/lib/*:/opt/cloudera/parcels/tez/*:/etc/tez/conf:`hadoop classpath`
 
[root@tsczbdnndev1 ~]# hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.10.2-1.cdh5.10.2.p0.5/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/tez/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
 
Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.10.2-1.cdh5.10.2.p0.5/jars/hive-common-1.1.0-cdh5.10.2.jar!/hive-log4j.properties
WARNING: Hive CLI is deprecated and migration to Beeline is recommended.
hive> 
    > set hive.execution.engine=tez;
hive> 
    > 
    > show tables;
OK
customers
jlwang2
jlwang4
jlwang5
jlwang6
jlwang7
sample_07
sample_08
src
test1
test2
test_table
web_logs
Time taken: 1.758 seconds, Fetched: 13 row(s)
hive> 
    > select count(*) from jlwang6;
Query ID = root_20190415181313_83dd68d7-1c15-4a2d-9add-7573d9b1f709
Total jobs = 1
Launching Job 1 out of 1
 
 
Status: Running (Executing on YARN cluster with App id application_1555311768924_0015)
 
--------------------------------------------------------------------------------
        VERTICES      STATUS  TOTAL  COMPLETED  RUNNING  PENDING  FAILED  KILLED
--------------------------------------------------------------------------------
Map 1 ..........   SUCCEEDED      1          1        0        0       0       0
Reducer 2 ......   SUCCEEDED      1          1        0        0       0       0
--------------------------------------------------------------------------------
VERTICES: 02/02  [==========================>>] 100%  ELAPSED TIME: 7.89 s     
--------------------------------------------------------------------------------
OK
223
Time taken: 15.616 seconds, Fetched: 1 row(s)
Beeline test:

0: jdbc:hive2://10.40.2.93:10000/default>  !connect jdbc:hive2://10.40.2.93:10000/default;user=jlwang2;password=wangjialong;
Connecting to jdbc:hive2://10.40.2.93:10000/default;user=jlwang2;password=wangjialong;
Connected to: Apache Hive (version 1.1.0-cdh5.10.2)
Driver: Hive JDBC (version 1.1.0-cdh5.10.2)
Transaction isolation: TRANSACTION_REPEATABLE_READ
1: jdbc:hive2://10.40.2.93:10000/default> 
1: jdbc:hive2://10.40.2.93:10000/default> set hive.execution.engine=tez;
No rows affected (0.01 seconds)
1: jdbc:hive2://10.40.2.93:10000/default> select count(*) from jlwang6;
INFO  : Compiling command(queryId=hive_20190415181616_1249c5fe-b4da-46f7-9f6d-270b0f0eed1e): select count(*) from jlwang6
INFO  : Semantic Analysis Completed
INFO  : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:_c0, type:bigint, comment:null)], properties:null)
INFO  : Completed compiling command(queryId=hive_20190415181616_1249c5fe-b4da-46f7-9f6d-270b0f0eed1e); Time taken: 0.118 seconds
INFO  : Executing command(queryId=hive_20190415181616_1249c5fe-b4da-46f7-9f6d-270b0f0eed1e): select count(*) from jlwang6
INFO  : Query ID = hive_20190415181616_1249c5fe-b4da-46f7-9f6d-270b0f0eed1e
INFO  : Total jobs = 1
INFO  : Launching Job 1 out of 1
INFO  : Starting task [Stage-1:MAPRED] in serial mode
INFO  : Tez session hasn't been created yet. Opening session
INFO  : 
INFO  : Status: Running (Executing on YARN cluster with App id application_1555311768924_0017)
INFO  : Map 1: -/-      Reducer 2: 0/1
INFO  : Map 1: 0/1      Reducer 2: 0/1
INFO  : Map 1: 0(+1)/1  Reducer 2: 0/1
INFO  : Map 1: 1/1      Reducer 2: 0/1
INFO  : Map 1: 1/1      Reducer 2: 0(+1)/1
INFO  : Map 1: 1/1      Reducer 2: 1/1
INFO  : Completed executing command(queryId=hive_20190415181616_1249c5fe-b4da-46f7-9f6d-270b0f0eed1e); Time taken: 10.322 seconds
INFO  : OK
+------+--+
| _c0  |
+------+--+
| 223  |
+------+--+
1 row selected (10.594 seconds)
Set HADOOP_CLASSPATH before starting hive/beeline. This is the same reason we added it to the Hive service configuration earlier: without it, Hue cannot find the Tez jars.

Hue test:



5. Tez troubleshooting

Make good use of hive --hiveconf hive.root.logger=DEBUG,console. Whenever something goes wrong, check the YARN logs first, then rerun in debug mode to narrow down where the problem is. There is no special trick here: read the logs and find the actual error.
Reposted from blog.csdn.net/qq_42913729/article/details/99654647