Using Eclipse to connect to Hive remotely


Basic environment:
namenode 192.168.1.187 kafka3
datanode 192.168.1.188 kafka4
datanode 192.168.1.189 kafka1

This cluster was installed by hand from the hadoop-*.tar.gz packages, so the configuration files have to be edited manually; compared with Cloudera Manager it is a little more complicated.

hadoop 2.6.2
hive 2.0.1 -- only installed on 187

1. Start hadoop
./start-all.sh

2. Configure hive
[root@kafka3 conf]# cat hive-site.xml
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
<property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
</property>
<property>
    <name>hive.querylog.location</name>
    <value>/hadoop/hive/log</value>
    <description>Location of Hive run time structured log file</description>
</property>
  <property>
   <name>mapred.job.tracker</name>
   <value>http://192.168.1.187:9001</value>
  </property>
  <property>
   <name>mapreduce.framework.name</name>
   <value>yarn</value>
  </property>
  <property>
   <name>hive.server2.thrift.port</name>
   <value>10000</value>
  </property>
  <property>
   <name>hive.server2.thrift.bind.host</name>
   <value>192.168.1.187</value>
  </property>
<property>
  <name>hive.server2.enable.doAs</name>
  <value>true</value>
</property>
<property>
   <name>hive.hwi.listen.port</name>
   <value>9999</value>
   <description>This is the port the Hive Web Interface will listen on</description>
</property>
<property>
   <name>datanucleus.autoCreateSchema</name>
   <value>false</value>
</property>
<property>
   <name>datanucleus.fixedDatastore</name>
   <value>true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://192.168.1.189:3306/hive?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>root</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>kafka1,kafka4,kafka3</value>
</property>
</configuration>

3. Start the hiveserver2 service
Service mode:
[root@kafka3 bin]# ./hiveserver2
Command line mode:
[root@kafka3 bin]# hive --service hiveserver2





4. Test connection:
No need to write a JDBC program yet; just run bin/beeline:

[root@kafka3 bin]# ./beeline
ls: cannot access /opt/apache-hive-2.0.1-bin//lib/hive-jdbc-*-standalone.jar: No such file or directory
Beeline version 2.0.1 by Apache Hive
beeline> !connect jdbc:hive://192.168.1.187:10000 root root         
scan complete in 1ms
scan complete in 7577ms
No known driver to handle "jdbc:hive://192.168.1.187:10000"   -- use hive2 in the URL instead of hive
beeline>
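
The "No known driver" message makes sense once you know that each JDBC driver only claims its own URL scheme. A minimal sketch to see this from Java (the class name SchemeCheck is hypothetical; it assumes hive-jdbc-2.0.1-standalone.jar is on the classpath):

import java.sql.Driver;

public class SchemeCheck {
    public static void main(String[] args) throws Exception {
        // The HiveServer2 driver only accepts the jdbc:hive2:// scheme
        Driver d = (Driver) Class.forName("org.apache.hive.jdbc.HiveDriver").newInstance();
        System.out.println(d.acceptsURL("jdbc:hive://192.168.1.187:10000"));  // expected: false
        System.out.println(d.acceptsURL("jdbc:hive2://192.168.1.187:10000")); // expected: true
    }
}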

The missing jar lives under $HIVE_HOME/jdbc, so copy it into lib. (Note for later: all the jars under hive's lib directory need to be added to the Eclipse project.)

[root@kafka3 bin]# cp /opt/apache-hive-2.0.1-bin/jdbc/hive-jdbc-2.0.1-standalone.jar /opt/apache-hive-2.0.1-bin/lib/

beeline> !connect jdbc:hive2://192.168.1.187:10000
Connecting to jdbc:hive2://192.168.1.187:10000
Enter username for jdbc:hive2://192.168.1.187:10000: root
Enter password for jdbc:hive2://192.168.1.187:10000: root

beeline>  !connect jdbc:hive2://192.168.1.187:10000
Connecting to jdbc:hive2://192.168.1.187:10000
Enter username for jdbc:hive2://192.168.1.187:10000: root
Enter password for jdbc:hive2://192.168.1.187:10000:
Error: Failed to open new session: java.lang.RuntimeException:
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
User: root is not allowed to impersonate root (state=,code=0)        

After restarting hadoop it still failed, but the error message changed.
Wrong first attempt -- added to hadoop's core-site.xml:
<property>
        <name>hadoop.proxyuser.hadoop.hosts</name>   <!-- wrong: I wrote this at first without realizing it! I am connecting as root, not as a hadoop user -->
        <value>*</value>
    </property>
    <property>
            <name>hadoop.proxyuser.hadoop.groups</name>
            <value>root</value>
    </property>

Correct -- add to hadoop's core-site.xml:
<property>
        <name>hadoop.proxyuser.root.hosts</name>
        <value>*</value>
    </property>
    <property>
            <name>hadoop.proxyuser.root.groups</name>
            <value>root</value>
    </property>

beeline> !connect jdbc:hive2://192.168.1.187:10000
Connecting to jdbc:hive2://192.168.1.187:10000
Enter username for jdbc:hive2://192.168.1.187:10000: root
Enter password for jdbc:hive2://192.168.1.187:10000:                                                   
16/06/02 11:22:00 [main]: INFO jdbc.HiveConnection: Transport Used for JDBC connection: null
Error: Could not open client transport with JDBC Uri: jdbc:hive2://192.168.1.187:10000: java.net.ConnectException: Connection refused (state=08S01, code=0)
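
"Connection refused" is a TCP-level failure: nothing is listening on the port yet (here, hiveserver2 had not come up). A quick way to separate that from driver or configuration problems is a plain socket probe, sketched below (the class name PortCheck is hypothetical; host and port are the ones configured above):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    public static void main(String[] args) {
        // Try to open a raw TCP connection to the HiveServer2 thrift port
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress("192.168.1.187", 10000), 3000);
            System.out.println("HiveServer2 port is open");
        } catch (IOException e) {
            System.out.println("Cannot reach 192.168.1.187:10000: " + e.getMessage());
        }
    }
}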


Copy all the jar packages whose names start with hive under $HIVE_HOME/lib.
[root@kafka3 bin]# ./beeline
Beeline version 2.0.1 by Apache Hive   -- starts without the earlier error report
beeline>

Enable the hive log:
cd /opt/apache-hive-2.0.1-bin/conf
cp hive-log4j2.properties.template hive-log4j2.properties
vi hive-log4j2.properties
property.hive.log.dir = /hadoop/hive/log
property.hive.log.file = hive.log

[root@kafka3 log]# more hive.log

2016-06-03T10:20:16,883 INFO  [main]: service.AbstractService (AbstractService.java:init(89)) - Service:OperationManager is inited.
2016-06-03T10:20:16,884 INFO  [main]: service.AbstractService (AbstractService.java:init(89)) - Service:SessionManager is inited.
2016-06-03T10:20:16,884 INFO  [main]: service.AbstractService (AbstractService.java:init(89)) - Service:CLIService is inited.
2016-06-03T10:20:16,884 INFO  [main]: service.AbstractService (AbstractService.java:init(89)) - Service:ThriftBinaryCLIService is inited.
2016-06-03T10:20:16,884 INFO  [main]: service.AbstractService (AbstractService.java:init(89)) - Service:HiveServer2 is inited.
2016-06-03T10:20:17,022 INFO  [main]: service.AbstractService (AbstractService.java:start(104)) - Service:OperationManager is started.
2016-06-03T10:20:17,022 INFO  [main]: service.AbstractService (AbstractService.java:start(104)) - Service:SessionManager is started.
2016-06-03T10:20:17,023 INFO  [main]: service.AbstractService (AbstractService.java:start(104)) - Service:CLIService is started.
2016-06-03T10:20:17,023 INFO  [main]: service.AbstractService (AbstractService.java:start(104)) - Service:ThriftBinaryCLIService is started.
2016-06-03T10:20:17,023 INFO  [main]: service.AbstractService (AbstractService.java:start(104)) - Service:HiveServer2 is started.
2016-06-03T10:20:17,038 INFO  [main]: server.Server (Server.java:doStart(252)) - jetty-7.6.0.v20120127
2016-06-03T10:20:17,064 INFO  [main]: webapp.WebInfConfiguration (WebInfConfiguration.java:unpack(455)) - Extract jar:file:/opt/apache-hive-2.0.1-bin/lib/hive-jdbc-2.0.1-standalone.jar!/hive-webapps/hiveserver2/ to /tmp/jetty-0.0.0.0-10002-hiveserver2-_-any-/webapp
2016-06-03T10:20:17,582 INFO  [Thread-10]: thrift.ThriftCLIService (ThriftBinaryCLIService.java:run(100)) - Starting ThriftBinaryCLIService on port 10000 with 5...500 worker threads

2016-06-03T10:20:17,781 INFO  [main]: handler.ContextHandler (ContextHandler.java:startContext(737)) - started o.e.j.w.WebAppContext{/,file:/tmp/jetty-0.0.0.0-10002-hiveserver2-_-any-/webapp/},jar:file:/opt/apache-hive-2.0.1-bin/lib/hive-jdbc-2.0.1-standalone.jar!/hive-webapps/hiveserver2
2016-06-03T10:20:17,827 INFO  [main]: handler.ContextHandler (ContextHandler.java:startContext(737)) - started o.e.j.s.ServletContextHandler{/static,jar:file:/opt/apache-hive-2.0.1-bin/lib/hive-jdbc-2.0.1-standalone.jar!/hive-webapps/static}
2016-06-03T10:20:17,827 INFO  [main]: handler.ContextHandler (ContextHandler.java:startContext(737)) - started o.e.j.s.ServletContextHandler{/logs,file:/hadoop/hive/log/}
2016-06-03T10:20:17,841 INFO  [main]: server.AbstractConnector (AbstractConnector.java:doStart(333)) - Started SelectChannelConnector@0.0.0.0:10002
2016-06-03T10:20:17,841 INFO  [main]: server.HiveServer2 (HiveServer2.java:start(438)) - Web UI has started on port 10002


The web page opens and shows hiveserver2:
http://192.168.1.187:10002/hiveserver2.jsp


1. The log shows that hiveserver2 started normally, but connecting kept failing with: User: root is not allowed to impersonate root

Set hadoop's core-site.xml:
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/hadoop/tmp</value>
</property>
<property>
  <name>fs.default.name</name>
  <value>hdfs://192.168.1.187:9000</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>/hadoop/name</value>
</property>
<property>
  <name>hadoop.proxyuser.root.hosts</name>                                              
  <value>192.168.1.187</value>
</property>
<property>
  <name>hadoop.proxyuser.root.groups</name>
  <value>root</value>
</property>
<property>
  <name>fs.checkpoint.period</name>
  <value>3600</value>
  <description>The number of seconds between two periodic checkpoints.</description>
</property>
<property>
  <name>fs.checkpoint.size</name>
  <value>67108864</value>
</property>
<property>
     <name>fs.checkpoint.dir</name>
     <value>/hadoop/namesecondary</value>
</property>
</configuration>
It took a long time to find the cause: the changes made to hadoop's core-site.xml on 187 had not been copied to the other two nodes.


2. Set impersonation so that the hive server executes statements as the connecting user. If set to false, statements run as the user that started the hive server daemon.
<property> 
  <name>hive.server2.enable.doAs</name> 
  <value>true</value> 
</property>

3. In JDBC mode, the driver class name of Hive Server 1 is org.apache.hadoop.hive.jdbc.HiveDriver, while Hive Server 2 uses org.apache.hive.jdbc.HiveDriver; the two are easy to confuse.
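
A minimal sketch to keep the two straight (the class name DriverNames is hypothetical; it assumes the HiveServer2 standalone jar copied above is on the classpath):

public class DriverNames {
    public static void main(String[] args) throws ClassNotFoundException {
        // HiveServer1: org.apache.hadoop.hive.jdbc.HiveDriver -> URLs starting with jdbc:hive://
        // HiveServer2: org.apache.hive.jdbc.HiveDriver        -> URLs starting with jdbc:hive2://
        Class.forName("org.apache.hive.jdbc.HiveDriver"); // the one this article uses
        System.out.println("HiveServer2 driver loaded");
    }
}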


[root@kafka3 bin]# hiveserver2   -- finally successful!!
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/apache-hive-2.0.1-bin/lib/hive-jdbc-2.0.1-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-2.6.2/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
OK


[root@kafka3 hadoop]# cd /opt/apache-hive-2.0.1-bin/bin
[root@kafka3 bin]# ./beeline
Beeline version 2.0.1 by Apache Hive
beeline>  !connect jdbc:hive2://192.168.1.187:10000
Connecting to jdbc:hive2://192.168.1.187:10000
Enter username for jdbc:hive2://192.168.1.187:10000: root
Enter password for jdbc:hive2://192.168.1.187:10000:                                                   
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/apache-hive-2.0.1-bin/lib/hive-jdbc-2.0.1-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-2.6.2/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Connected to: Apache Hive (version 2.0.1)
Driver: Hive JDBC (version 2.0.1)
16/06/03 15:44:19 [main]: WARN jdbc.HiveConnection: Request to set autoCommit to false; Hive does not support autoCommit=false.
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://192.168.1.187:10000>
0: jdbc:hive2://192.168.1.187:10000> show tables;
INFO  : Compiling command(queryId=root_20160603154642_dd611020-8d3f-4abe-9bd5-7f2fda519007): show tables
INFO  : Semantic Analysis Completed
INFO  : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:tab_name, type:string, comment:from deserializer)], properties:null)
INFO  : Completed compiling command(queryId=root_20160603154642_dd611020-8d3f-4abe-9bd5-7f2fda519007); Time taken: 0.291 seconds
INFO  : Concurrency mode is disabled, not creating a lock manager
INFO  : Executing command(queryId=root_20160603154642_dd611020-8d3f-4abe-9bd5-7f2fda519007): show tables
INFO  : Starting task [Stage-0:DDL] in serial mode
INFO  : Completed executing command(queryId=root_20160603154642_dd611020-8d3f-4abe-9bd5-7f2fda519007); Time taken: 0.199 seconds
INFO  : OK
+---------------------------+--+
|         tab_name          |
+---------------------------+--+
| c2                        |
| hbase_runningrecord_temp  |
| rc_file                   |
| rc_file1                  |
| runningrecord_old         |
| sequence_file             |
| studentinfo               |
| t2                        |
| test_table                |
| test_table1               |
| tina                      |
+---------------------------+--+
11 rows selected (1.194 seconds)
0: jdbc:hive2://192.168.1.187:10000>



Create a project: hivecon
Create a package: hivecon
Create a class: testhive
package hivecon;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class testhive {
    public static void main(String[] args) throws Exception {
        // Register the HiveServer2 JDBC driver and open a connection
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        Connection conn = DriverManager.getConnection("jdbc:hive2://192.168.1.187:10000", "root", "");
        System.out.println("Connection: " + conn);
        Statement stmt = conn.createStatement();
        String query_sql = "select systemno from runningrecord_old limit 1";
        ResultSet rs = stmt.executeQuery(query_sql);
        // rs.next() is false when the query returns no rows
        System.out.println("Has data: " + rs.next());
    }
}
Run it directly:
ERROR StatusLogger Unrecognized format specifier [msg]
ERROR StatusLogger Unrecognized conversion specifier [msg] starting at position 54 in conversion pattern.
ERROR StatusLogger Unrecognized conversion specifier [n] starting at position 56 in conversion pattern.   -- ignore these log errors for now
ERROR StatusLogger Unrecognized format specifier [n]

Connection: org.apache.hive.jdbc.HiveConnection@64485a47
Has data: false   -- false only means this query returned no rows


--- Now add some more operations:
package hivecon;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class testhive {
    private static String sql = "";
    private static ResultSet res;

    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        Connection conn = DriverManager.getConnection("jdbc:hive2://192.168.1.187:10000", "root", "");
        System.out.println("Connection: " + conn);
        Statement stmt = conn.createStatement();
        String query_sql = "select systemno from runningrecord_old limit 1";
        ResultSet rs = stmt.executeQuery(query_sql);
        System.out.println("Has data: " + rs.next());

        // name of the table to create
        String tableName = "tinatest";

        /** Step 1: drop the table if it exists **/
        sql = "drop table " + tableName;
        stmt.execute(sql);

        /** Step 2: create it **/
        sql = "create table " + tableName + " (key int, value string) row format delimited fields terminated by ','";
        stmt.execute(sql);

        // run "show tables"
        sql = "show tables '" + tableName + "'";
        System.out.println("Running: " + sql);
        res = stmt.executeQuery(sql);
        System.out.println("Result of \"show tables\":");
        if (res.next()) {
            System.out.println(res.getString(1));
        }

        // run "describe table"
        sql = "describe " + tableName;
        System.out.println("Running: " + sql);
        res = stmt.executeQuery(sql);
        System.out.println("Result of \"describe table\":");
        while (res.next()) {
            System.out.println(res.getString(1) + "\t" + res.getString(2));
        }

        // run "load data into table"
        String filepath = "/tmp/test2.txt";
        sql = "load data local inpath '" + filepath + "' into table " + tableName;
        System.out.println("Running: " + sql);
        stmt.executeUpdate(sql);

        // run "select * query"
        sql = "select * from " + tableName;
        System.out.println("Running: " + sql);
        res = stmt.executeQuery(sql);
        System.out.println("Result of \"select * query\":");
        while (res.next()) {
            System.out.println(res.getInt(1) + "\t" + res.getString(2));
        }

        conn.close();
        conn = null;
    }
}

Execution result:
Connection: org.apache.hive.jdbc.HiveConnection@64485a47
Has data: true
Running: show tables 'tinatest'
Result of "show tables":
tinatest
Running: describe tinatest
Result of "describe table":
key	int
value	string
Running: load data local inpath '/tmp/test2.txt' into table tinatest
Running: select * from tinatest
Result of "select * query":
1	a
2	b
3	tina

Verify inside hive:
hive> show tables;
OK
c2
hbase_runningrecord_temp
rc_file
rc_file1
runningrecord_old
sequence_file
studentinfo
t2
test_table
test_table1
tina
tinatest
Time taken: 0.065 seconds, Fetched: 12 row(s)
hive> select * from tinatest;
OK
1 a
2 b
3 tina
Time taken: 3.065 seconds, Fetched: 3 row(s)
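
For reference, the same connection can be written with try-with-resources so every JDBC resource is always closed. This is a sketch only, reusing the host, credentials and tinatest table from above (the class name TestHiveClean is hypothetical):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TestHiveClean {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        // try-with-resources closes rs, stmt and conn even if a query fails
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:hive2://192.168.1.187:10000", "root", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("select * from tinatest")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1) + "\t" + rs.getString(2));
            }
        }
    }
}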
