Hive with MySQL: installation and configuration

1. MySQL installation

  Hive's data lives in HDFS. In addition, information such as which databases Hive has and which tables each database contains is called Hive metadata.

  Metadata is not stored in HDFS but in a relational database; by default Hive uses an embedded Derby database for it. So for Hive to work, it depends not only on Hadoop but also on a relational database.

  Note: although we can see in HDFS which Hive databases and tables exist and what data they contain, that is not the metadata. HDFS is where Hive stores the table data itself.

  A problem encountered earlier: after exiting Hive and entering it again from a different directory, the previously created databases and tables were gone. The reason: the first time Hive was started, from its bin directory, it created a metastore.db directory and a derby.log file under bin and stored the metadata there. The metadata location is tied to the directory Hive was started from, so when Hive is started from a different directory it looks elsewhere and cannot find any of the metadata. This behavior comes from the Derby database itself, so we cannot keep using Derby. In addition, embedded Derby does not support concurrent access: while one person is using Hive, nobody else can connect. For these reasons we switch to MySQL. Hive currently supports both Derby and MySQL as metastore databases.

 For the MySQL installation process on Linux, see: MySQL Installation under Linux

2. Steps to configure Hive with MySQL

① Delete /user/hive from HDFS

    Execute: hadoop fs -rmr /user/hive (Hadoop must be running)

② Upload the MySQL driver jar to the lib directory under the Hive installation directory

    Here I used the rz command to upload mysql-connector-java-5.1.38-bin.jar

③ Add a configuration file named hive-site.xml under hive/conf

<configuration>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <!-- JDBC URL of the MySQL database -->
        <value>jdbc:mysql://hadoopalone:3306/hive?createDatabaseIfNotExist=true</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <!-- MySQL username -->
        <value>root</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <!-- MySQL password -->
        <value>root</value>
    </property>
</configuration>

  Add the configuration shown above, adjusting the URL, username, and password to match your own MySQL environment.

④ Go to the bin directory under the Hive installation directory and execute: sh hive

  If this error appears:
  Access denied for user 'root'@'hadoop01' (using password: YES)
  it means the current user does not have sufficient privileges on the MySQL database.

⑤ Enter the MySQL database and grant privileges

  Execute: grant all privileges on *.* to 'root'@'hadoopalone' identified by 'root' with grant option;

     grant all on *.* to 'root'@'%' identified by 'root';

  Then execute: flush privileges; (note the trailing semicolon)
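To confirm that the grant took effect, you can ask MySQL to list the privileges it now records for the account (SHOW GRANTS is a standard MySQL statement; 'root'@'%' matches connections from any host):

```sql
-- List the privileges MySQL has recorded for root connecting from any host
SHOW GRANTS FOR 'root'@'%';
```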

⑥ Manually create the hive database

  If you do not create the hive database in MySQL beforehand, MySQL will create it automatically the first time you enter Hive. Note, however, that because we previously configured MySQL's character set to be UTF-8, the automatically created hive database will also use UTF-8.
  But the database that stores Hive metadata must use the ISO-8859-1 (latin1) character set. If it does not, Hive will hang for a while and then throw an error when you create a table.

  Enter MySQL and execute: create database hive character set latin1;
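To double-check which character set the hive database actually got, you can query MySQL's information_schema (a quick sketch; SCHEMATA is the standard system view that holds each database's defaults):

```sql
-- Verify the default character set of the hive metastore database
SELECT SCHEMA_NAME, DEFAULT_CHARACTER_SET_NAME
FROM information_schema.SCHEMATA
WHERE SCHEMA_NAME = 'hive';
-- After the CREATE above, DEFAULT_CHARACTER_SET_NAME should be latin1
```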

⑦ After the steps above are done, re-enter Hive; the hive database in MySQL now contains the following tables:

  

⑧ Connect to the MySQL database via Navicat

  

⑨ The metadata can be viewed through three tables: DBS, TBLS, and COLUMNS_V2.

The DBS table stores metadata about databases

  

The TBLS table stores metadata about tables

  

The COLUMNS_V2 table stores metadata about columns (fields)

   

In addition, the SDS table records each table's storage location in HDFS
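Putting these tables together: a single join over the metastore tables lists every Hive table along with its database and HDFS location. A minimal sketch, run inside the hive database in MySQL (DB_ID, SD_ID, and LOCATION are the standard metastore columns that link the three tables):

```sql
-- For each Hive table, show its database name and HDFS storage path
SELECT d.NAME     AS db_name,
       t.TBL_NAME AS table_name,
       s.LOCATION AS hdfs_location
FROM TBLS t
JOIN DBS d ON t.DB_ID = d.DB_ID
JOIN SDS s ON t.SD_ID = s.SD_ID;
```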

  

 With that, configuring Hive to use MySQL is done! If you have any questions, feel free to discuss them in the comments.

 


Origin www.cnblogs.com/rmxd/p/11318609.html