Installing Hive 2.3.4

Copyright notice: This is an original article by the author and may not be reposted without permission. https://blog.csdn.net/tszxlzc/article/details/88048128

Note: My Hadoop version is 2.7.3. For the Hadoop installation, see my earlier post: https://blog.csdn.net/tszxlzc/article/details/61635411

  1. Download Hive and extract the archive
[root@hadoop01 local]#  wget http://mirror.bit.edu.cn/apache/hive/hive-2.3.4/apache-hive-2.3.4-bin.tar.gz
[root@hadoop01 local]# tar -xzvf apache-hive-2.3.4-bin.tar.gz
  2. Add Hive to the environment variables
[root@hadoop01 apache-hive-2.3.4-bin]# export HIVE_HOME=/usr/local/apache-hive-2.3.4-bin
[root@hadoop01 apache-hive-2.3.4-bin]# export PATH=$HIVE_HOME/bin:$PATH
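The two `export` lines above only last for the current shell session and are lost on logout. One way to make them permanent (a sketch, assuming bash and the install path used above) is to append them to the shell profile:

```shell
# Persist HIVE_HOME and PATH across logins (assumes bash and the
# install path /usr/local/apache-hive-2.3.4-bin chosen above).
echo 'export HIVE_HOME=/usr/local/apache-hive-2.3.4-bin' >> ~/.bashrc
echo 'export PATH=$HIVE_HOME/bin:$PATH' >> ~/.bashrc
```

After re-logging in (or running `source ~/.bashrc`), the `hive` command should be on the PATH in every new session.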
  3. On distributed or pseudo-distributed Hadoop, Hive stores its data in HDFS, by default under /user/hive/warehouse, so create the directories first (`-mkdir -p` also creates missing parent directories such as /user/hive):
  $ $HADOOP_HOME/bin/hadoop fs -mkdir -p    /tmp
  $ $HADOOP_HOME/bin/hadoop fs -mkdir -p    /user/hive/warehouse
  $ $HADOOP_HOME/bin/hadoop fs -chmod g+w   /tmp
  $ $HADOOP_HOME/bin/hadoop fs -chmod g+w   /user/hive/warehouse
  4. Create a hive-site.xml file in the /usr/local/apache-hive-2.3.4-bin/conf directory with the following contents:
<configuration>
  <property>
    <name>datanucleus.schema.autoCreateAll</name>
    <value>true</value>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://192.168.152.128:3306/hive_db?createDatabaseIfNotExist=true</value>
    <description>JDBC connect string for a JDBC metastore</description>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
    <description>username to use against metastore database</description>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123456</value>
    <description>password to use against metastore database</description>
  </property>
</configuration>
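One caveat not covered in the original setup: if the metastore database runs on MySQL 5.7 or later, Connector/J 5.1.x may log SSL warnings or even refuse the connection. A common workaround (an assumption here, not something this walkthrough needed) is to disable SSL explicitly in the JDBC URL; note that `&` must be escaped as `&amp;` inside the XML value:

```xml
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://192.168.152.128:3306/hive_db?createDatabaseIfNotExist=true&amp;useSSL=false</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>
```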

  5. Download the MySQL JDBC driver into the /usr/local/apache-hive-2.3.4-bin/lib directory (note: the central.maven.org host has since been retired, so if this URL no longer resolves, the same artifact path is served from repo1.maven.org/maven2):
[root@hadoop01 lib]# wget http://central.maven.org/maven2/mysql/mysql-connector-java/5.1.37/mysql-connector-java-5.1.37.jar
  6. Initialize the Hive metastore schema in MySQL
[root@hadoop01 conf]# schematool -dbType mysql -initSchema
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/apache-hive-2.3.4-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL:	 jdbc:mysql://192.168.152.128:3306/hive_db?createDatabaseIfNotExist=true
Metastore Connection Driver :	 com.mysql.jdbc.Driver
Metastore connection User:	 root
Starting metastore schema initialization to 2.3.0
Initialization script hive-schema-2.3.0.mysql.sql
Initialization script completed
schemaTool completed

  7. Start Hive and create a table to verify the installation
[root@hadoop01 conf]# hive
which: no hbase in (/usr/local/apache-hive-2.3.4-bin/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/usr/local/jdk1.8.0_111/bin:/usr/local/hadoop-2.7.3/bin:/usr/local/hadoop-2.7.3/sbin:/root/bin)
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/apache-hive-2.3.4-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Logging initialized using configuration in jar:file:/usr/local/apache-hive-2.3.4-bin/lib/hive-common-2.3.4.jar!/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive> create table x (a int);
OK
Time taken: 7.151 seconds
hive> 

At this point Hive has successfully created a table; you can also check in HDFS that the table's directory now appears under the warehouse path.
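Beyond creating an empty table, a quick smoke test is to write a row and read it back from the Hive prompt (a hypothetical example using the table `x` created above; the insert will launch a MapReduce job on this Hive-on-MR setup):

```sql
-- Smoke test against the table created above
INSERT INTO x VALUES (1);   -- runs as a MapReduce job here
SELECT * FROM x;            -- should return the row just inserted
DROP TABLE x;               -- optional cleanup
```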

Troubleshooting notes:

  1. My Hive and MySQL instances ran on two different virtual machines. The Hive machine could not ping the MySQL machine, although the MySQL machine could ping the Hive machine. It turned out the Hive VM's network adapter was in bridged mode while the MySQL VM's was in NAT mode; after switching the Hive VM to NAT mode as well, the metastore initialization succeeded.
  2. Also, Hive's logs can be found under /tmp/<user> (in my case /tmp/root, since I ran everything as the root user).
