Install Hive under Ubuntu 16.04

 In the last blog we talked about how to install Hadoop, but remember that our real goal is Hive. So in this blog I will introduce how to install Hive.

1. Environmental preparation

  (1) VMware

  (2) Ubuntu 16.04

  (3) Hadoop

2. Install Hive

 (1) Install mysql-server and mysql-client

  $ su hadoop

  $ sudo apt-get install mysql-server mysql-client
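
  To confirm that the client installed correctly, you can optionally check its version:

  $ mysql --version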

 (2) Start the mysql service

  $ sudo /etc/init.d/mysql start
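
  If you want to make sure the service actually came up, a quick check (assuming the usual Ubuntu 16.04 service layout) is:

  $ sudo service mysql status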

  

  (3) Log in to MySQL

  $ mysql -u root -p

  Enter the MySQL root password that you set yourself.

  Now, inside the mysql prompt, execute the following commands:

  create user 'hive'@'%' identified by 'hive';

  grant all privileges on *.* to 'hive'@'%' with grant option;

  flush privileges;

  create database if not exists hive_metadata;

  grant all privileges on hive_metadata.* to 'hive'@'%' identified by 'hive';

  grant all privileges on hive_metadata.* to 'hive'@'localhost' identified by 'hive';

  flush privileges;

  exit;

  $ sudo /etc/init.d/mysql restart

  $ mysql -u hive -p

  Type password: hive

  show databases;

  If hive_metadata does not exist, execute create database hive_metadata;
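
  If you prefer to verify without logging in interactively, a one-liner such as the following (using the password hive assumed from the grants above) should list hive_metadata:

  $ mysql -u hive -phive -e "show databases;"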

  (4) Install hive

  $ su hadoop

  $ cd /usr/local

  $ wget http://mirrors.hust.edu.cn/apache/hive/hive-2.3.3/apache-hive-2.3.3-bin.tar.gz

  Check first that this file actually exists on the mirror; if not, find a working download link yourself.

  $ tar zxvf apache-hive-2.3.3-bin.tar.gz

  $ sudo mkdir hive

  $ sudo mv apache-hive-2.3.3-bin hive/hive-2.3.3

  $ cd hive/hive-2.3.3

  $ cd conf

  $ cp hive-default.xml.template hive-site.xml

  $ sudo vim hive-site.xml

  

<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hive</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>hive</value>
</property>
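
  Note: depending on the Hive release, the hive-site.xml copied from the template may still contain ${system:java.io.tmpdir} placeholders that Hive cannot resolve at startup. If you hit such errors, one possible workaround is to point the affected properties at a fixed local directory, for example:

<property>
<name>hive.exec.local.scratchdir</name>
<value>/tmp/hive</value>
</property>
<property>
<name>hive.downloaded.resources.dir</name>
<value>/tmp/hive/resources</value>
</property>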

  $ cp hive-env.sh.template hive-env.sh

  $ sudo vim hive-env.sh

export  HADOOP_HOME=/usr/local/hadoop
export HIVE_CONF_DIR=/usr/local/hive/hive-2.3.3/conf

  $ cd ../bin

  $ vim hive-config.sh

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export HIVE_HOME=/usr/local/hive/hive-2.3.3
export HADOOP_HOME=/usr/local/hadoop

  $ sudo vim /etc/profile

 

export HIVE_HOME=/usr/local/hive/hive-2.3.3
export PATH=$PATH:$HIVE_HOME/bin

  $ source /etc/profile
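
  You can confirm that the variable took effect with:

  $ echo $HIVE_HOME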

  $ cd /usr/local/hive/hive-2.3.3

  $ wget http://ftp.ntu.edu.tw/MySQL/Downloads/Connector-J/mysql-connector-java-5.1.45.tar.gz

  $ tar zxvf mysql-connector-java-5.1.45.tar.gz

  The extracted directory already ships the JDBC driver jar (named mysql-connector-java-5.1.45-bin.jar in this release), so there is no need to repackage anything; just copy it into Hive's lib directory:

  $ sudo cp mysql-connector-java-5.1.45/mysql-connector-java-5.1.45-bin.jar lib/
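
  With Hive 2.x, the metastore schema normally has to be initialized once before Hive is started for the first time. Assuming the MySQL connection settings configured above, the command looks like this:

  $ bin/schematool -dbType mysql -initSchema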

  (5) Test

  $ jps

  Check whether Hadoop's NameNode, DataNode, SecondaryNameNode, ResourceManager, and NodeManager processes are all running. If any are missing, stop Hadoop and start it again; for how to stop and restart Hadoop, see the previous blog about installing Hadoop.
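
  For reference, on a single-node setup the jps output usually looks something like this (the process IDs will differ):

  2305 NameNode
  2467 DataNode
  2689 SecondaryNameNode
  2843 ResourceManager
  3001 NodeManager
  3312 Jps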

  $ cd bin

  $ ./hive

  After executing this, you will enter the Hive shell:

  hive>
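
  As a quick sanity check that the metastore connection works, you can run a simple statement at the prompt; it should at least list the default database:

  hive> show databases;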

3. Error record

(1) If running sbin/start-all.sh reports an error like the following:

  

which: no hbase in (/opt/service/jdk1.7.0_67/bin:/opt/service/jdk1.7.0_67/jre/bin:/opt/mysql-5.6.24/bin:/opt/service/jdk1.7.0_67/bin:/opt/service/jdk1.7.0_67/jre/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hadoop/bin)

SLF4J: Class path contains multiple SLF4J bindings.

SLF4J: Found binding in [jar:file:/opt/apache/hive-2.1.0/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in [jar:file:/opt/apache/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

If the class path contains multiple SLF4J bindings like this, just delete one of the jar files listed after "Found binding in".
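
For example, based on the paths in the log above, removing Hive's copy of the binding would look like this (adjust the path to match your own installation):

$ rm /opt/apache/hive-2.1.0/lib/log4j-slf4j-impl-2.4.1.jar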

(2) If an error like the following is reported:

Call From wuyanjing-virtucal-machie/127.0.0.1 to localhost:9000 failed

When this error occurs, first run the jps command to check whether Hadoop is running properly. Restarting Hadoop generally solves the problem.
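
A typical restart sequence, assuming Hadoop's sbin directory is on the PATH, looks like this:

$ stop-all.sh
$ start-all.sh
$ jps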

 
