Use Eclipse to connect to HBase remotely

Basic environment:
CDH 5.4.10
hadoop 2.6.0
hive 1.1.0
hbase 1.0.0
zookeeper 3.4.5
sqoop 1.4.5
jdk 1.7.0_67
OS CentOS 6.5

Since my Hive and HBase are tested together, I run the HBase connection test on top of the existing Hive connection. The following is the Hive connection setup:
[root@master01 ~]# cd /opt/cloudera/parcels/CDH-5.4.10-1.cdh5.4.10.p0.16/bin/
[root@master01 bin]# ./beeline
Beeline version 1.1.0-cdh5.4.10 by Apache Hive
beeline>

beeline> !connect jdbc:hive2://192.168.1.207:10000
Connecting to jdbc:hive2://192.168.1.207:10000
Enter username for jdbc:hive2://192.168.1.207:10000: root
Enter password for jdbc:hive2://192.168.1.207:10000: ****   (root/root connects directly!)
Connected to: Apache Hive (version 1.1.0-cdh5.4.10)
Driver: Hive JDBC (version 1.1.0-cdh5.4.10)
Transaction isolation: TRANSACTION_REPEATABLE_READ
0: jdbc:hive2://192.168.1.207:10000>


First, create a few tables and import some data so they are easy to work with later:
[root@master01 bin]# hive
Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.4.10-1.cdh5.4.10.p0.16/jars/hive-common-1.1.0-cdh5.4.10.jar!/hive-log4j.properties
WARNING: Hive CLI is deprecated and migration to Beeline is recommended.
hive>
hive> create table runningrecord_old(id int,systemno string,longitude string,latitude string,speed string,direction smallint,elevation string,acc string,islocation string,mileage string,oil string,currenttime timestamp,signalname string,currentvalue string) row format delimited fields terminated by ',';
OK
Time taken: 2.607 seconds
hive> load data local inpath '/tmp/rtest.txt' into table runningrecord_old;
Loading data to table default.runningrecord_old
Table default.runningrecord_old stats: [numFiles=1, totalSize=5480004]
OK
Time taken: 1.575 seconds


Next, copy the jar packages into the Eclipse project:
Create a new Java project: hiveconnect
Create a new class: hiveconnecttest
Create a new folder: lib/ (at the same level as src)

[root@master01 jars]# cd /opt/cloudera/parcels/CDH-5.4.10-1.cdh5.4.10.p0.16/jars
[root@master01 jars]# sz hive*.jar    (save into the lib directory just created: d:/workspace/hiveconnect/lib/)
[root@master01 jars]# sz hadoop*.jar
[root@master01 jars]# ll hive*.jar
hive-accumulo-handler-1.1.0-cdh5.4.10.jar
hive-ant-1.1.0-cdh5.4.10.jar
hive-beeline-1.1.0-cdh5.4.10.jar
hive-cli-1.1.0-cdh5.4.10.jar
hive-common-1.1.0-cdh5.4.10.jar
hive-contrib-1.1.0-cdh5.4.10.jar
hive-exec-1.1.0-cdh5.4.10.jar
hive-hbase-handler-1.1.0-cdh5.4.10.jar
hive-hcatalog-core-1.1.0-cdh5.4.10.jar
hive-hcatalog-pig-adapter-1.1.0-cdh5.4.10.jar
hive-hcatalog-server-extensions-1.1.0-cdh5.4.10.jar
hive-hcatalog-streaming-1.1.0-cdh5.4.10.jar
hive-hwi-1.1.0-cdh5.4.10.jar
hive-jdbc-1.1.0-cdh5.4.10.jar
hive-jdbc-1.1.0-cdh5.4.10-standalone.jar
hive-metastore-1.1.0-cdh5.4.10.jar
hive-serde-1.1.0-cdh5.4.10.jar
hive-service-1.1.0-cdh5.4.10.jar
hive-shims-0.23-1.1.0-cdh5.4.10.jar
hive-shims-1.1.0-cdh5.4.10.jar
hive-shims-common-1.1.0-cdh5.4.10.jar
hive-shims-scheduler-1.1.0-cdh5.4.10.jar
hive-testutils-1.1.0-cdh5.4.10.jar
hive-webhcat-1.1.0-cdh5.4.10.jar
hive-webhcat-java-client-1.1.0-cdh5.4.10.jar
hadoop-annotations-2.6.0-cdh5.4.10.jar
hadoop-ant-2.6.0-cdh5.4.10.jar
hadoop-ant-2.6.0-mr1-cdh5.4.10.jar
hadoop-archives-2.6.0-cdh5.4.10.jar
hadoop-auth-2.6.0-cdh5.4.10.jar
hadoop-aws-2.6.0-cdh5.4.10.jar
hadoop-azure-2.6.0-cdh5.4.10.jar
hadoop-capacity-scheduler-2.6.0-mr1-cdh5.4.10.jar
hadoop-common-2.6.0-cdh5.4.10.jar
hadoop-common-2.6.0-cdh5.4.10-tests.jar
hadoop-core-2.6.0-mr1-cdh5.4.10.jar
hadoop-datajoin-2.6.0-cdh5.4.10.jar
hadoop-distcp-2.6.0-cdh5.4.10.jar
hadoop-examples-2.6.0-mr1-cdh5.4.10.jar
hadoop-examples.jar
hadoop-extras-2.6.0-cdh5.4.10.jar
hadoop-fairscheduler-2.6.0-mr1-cdh5.4.10.jar
hadoop-gridmix-2.6.0-cdh5.4.10.jar
hadoop-gridmix-2.6.0-mr1-cdh5.4.10.jar
hadoop-hdfs-2.6.0-cdh5.4.10.jar
hadoop-hdfs-2.6.0-cdh5.4.10-tests.jar
hadoop-hdfs-nfs-2.6.0-cdh5.4.10.jar
hadoop-kms-2.6.0-cdh5.4.10.jar
hadoop-mapreduce-client-app-2.6.0-cdh5.4.10.jar
hadoop-mapreduce-client-common-2.6.0-cdh5.4.10.jar
hadoop-mapreduce-client-core-2.6.0-cdh5.4.10.jar
hadoop-mapreduce-client-hs-2.6.0-cdh5.4.10.jar
hadoop-mapreduce-client-hs-plugins-2.6.0-cdh5.4.10.jar
hadoop-mapreduce-client-jobclient-2.6.0-cdh5.4.10.jar
hadoop-mapreduce-client-jobclient-2.6.0-cdh5.4.10-tests.jar
hadoop-mapreduce-client-nativetask-2.6.0-cdh5.4.10.jar
hadoop-mapreduce-client-shuffle-2.6.0-cdh5.4.10.jar
hadoop-mapreduce-examples-2.6.0-cdh5.4.10.jar
hadoop-nfs-2.6.0-cdh5.4.10.jar
hadoop-rumen-2.6.0-cdh5.4.10.jar
hadoop-sls-2.6.0-cdh5.4.10.jar
hadoop-streaming-2.6.0-cdh5.4.10.jar
hadoop-streaming-2.6.0-mr1-cdh5.4.10.jar
hadoop-test-2.6.0-mr1-cdh5.4.10.jar
hadoop-tools-2.6.0-mr1-cdh5.4.10.jar
hadoop-yarn-api-2.6.0-cdh5.4.10.jar
hadoop-yarn-applications-distributedshell-2.6.0-cdh5.4.10.jar
hadoop-yarn-applications-unmanaged-am-launcher-2.6.0-cdh5.4.10.jar
hadoop-yarn-client-2.6.0-cdh5.4.10.jar
hadoop-yarn-common-2.6.0-cdh5.4.10.jar
hadoop-yarn-registry-2.6.0-cdh5.4.10.jar
hadoop-yarn-server-applicationhistoryservice-2.6.0-cdh5.4.10.jar
hadoop-yarn-server-common-2.6.0-cdh5.4.10.jar
hadoop-yarn-server-nodemanager-2.6.0-cdh5.4.10.jar
hadoop-yarn-server-resourcemanager-2.6.0-cdh5.4.10.jar
hadoop-yarn-server-tests-2.6.0-cdh5.4.10.jar
hadoop-yarn-server-web-proxy-2.6.0-cdh5.4.10.jar

Add all of these jars to the build path, and the test is ok. Test output:
Connection: org.apache.hive.jdbc.HiveConnection@43a25848
Whether there is data: true
Running: show tables 'tinatest'
executes "show tables" Run result:
tinatest
Running: describe tinatest
executes "describe table" results:
key int
value string
Running: load data local inpath '/tmp/test2.txt' into table tinatest
Running: select * from tinatest
Result of "select *":
1 a
2 b
3 tina
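
The hiveconnecttest class itself is not listed here, but the output above gives its shape. A minimal sketch of Hive JDBC code that would produce similar output (the URL and the root/root credentials come from the beeline test; everything else is an assumption, not the original class):

package hiveconnect;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class hiveconnecttest {

        public static void main(String[] args) throws Exception {
                // driver and URL match the beeline test above
                Class.forName("org.apache.hive.jdbc.HiveDriver");
                Connection conn = DriverManager.getConnection(
                                "jdbc:hive2://192.168.1.207:10000/default", "root", "root");
                System.out.println("Connection: " + conn);
                Statement stmt = conn.createStatement();

                System.out.println("Running: show tables 'tinatest'");
                ResultSet rs = stmt.executeQuery("show tables 'tinatest'");
                while (rs.next()) {
                        System.out.println(rs.getString(1));
                }

                // load data is a statement, not a query
                System.out.println("Running: load data local inpath '/tmp/test2.txt' into table tinatest");
                stmt.execute("load data local inpath '/tmp/test2.txt' into table tinatest");

                System.out.println("Running: select * from tinatest");
                rs = stmt.executeQuery("select * from tinatest");
                while (rs.next()) {
                        System.out.println(rs.getInt(1) + "\t" + rs.getString(2));
                }

                rs.close();
                stmt.close();
                conn.close();
        }
}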
======================================================================

Building on the Hive connection, let's create an HBase connection example:
New class: hbaseconnecttest

sz hbase*.jar
hbase-annotations-1.0.0-cdh5.4.10.jar
hbase-annotations-1.0.0-cdh5.4.10-tests.jar
hbase-checkstyle-1.0.0-cdh5.4.10.jar
hbase-client-1.0.0-cdh5.4.10.jar
hbase-client-1.0.0-cdh5.4.10-tests.jar
hbase-common-1.0.0-cdh5.4.10.jar
hbase-common-1.0.0-cdh5.4.10-tests.jar
hbase-examples-1.0.0-cdh5.4.10.jar
hbase-hadoop2-compat-1.0.0-cdh5.4.10.jar
hbase-hadoop2-compat-1.0.0-cdh5.4.10-tests.jar
hbase-hadoop-compat-1.0.0-cdh5.4.10.jar
hbase-hadoop-compat-1.0.0-cdh5.4.10-tests.jar
hbase-indexer-cli-1.5-cdh5.4.10.jar
hbase-indexer-common-1.5-cdh5.4.10.jar
hbase-indexer-demo-1.5-cdh5.4.10.jar
hbase-indexer-engine-1.5-cdh5.4.10.jar
hbase-indexer-model-1.5-cdh5.4.10.jar
hbase-indexer-morphlines-1.5-cdh5.4.10.jar
hbase-indexer-mr-1.5-cdh5.4.10.jar
hbase-indexer-mr-1.5-cdh5.4.10-job.jar
hbase-indexer-server-1.5-cdh5.4.10.jar
hbase-it-1.0.0-cdh5.4.10.jar
hbase-it-1.0.0-cdh5.4.10-tests.jar
hbase-prefix-tree-1.0.0-cdh5.4.10.jar
hbase-protocol-1.0.0-cdh5.4.10.jar
hbase-rest-1.0.0-cdh5.4.10.jar
hbase-sep-api-1.5-cdh5.4.10.jar
hbase-sep-impl-1.5-hbase1.0-cdh5.4.10.jar
hbase-sep-impl-common-1.5-cdh5.4.10.jar
hbase-sep-tools-1.5-cdh5.4.10.jar
hbase-server-1.0.0-cdh5.4.10.jar
hbase-server-1.0.0-cdh5.4.10-tests.jar
hbase-shell-1.0.0-cdh5.4.10.jar
hbase-testing-util-1.0.0-cdh5.4.10.jar
hbase-thrift-1.0.0-cdh5.4.10.jar

First run, an error is reported:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/zookeeper/KeeperException
at hiveconnect.hbaseconnecttest.main(hbaseconnecttest.java:17)

[root@master01 jars]# sz zookeeper-3.4.5-cdh5.4.10.jar

Second run, another error:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/log4j/Level
at org.apache.hadoop.mapred.JobConf.<clinit>(JobConf.java:348)


[root@master01 jars]# sz *log4j*.jar
apache-log4j-extras-1.1.jar
apache-log4j-extras-1.2.17.jar
flume-ng-log4jappender-1.5.0-cdh5.4.10.jar
flume-ng-log4jappender-1.5.0-cdh5.4.10-jar-with-dependencies.jar
log4j-1.2.16.jar
log4j-1.2.17.jar
slf4j-log4j12-1.7.5.jar


[root@master01 bin]# hbase shell
16/06/12 10:56:09 INFO Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.0.0-cdh5.4.10, rUnknown, Tue Apr 12 11:10:23 PDT 2016

hbase(main):001:0>

Third run, another error:
Caused by: java.lang.NoClassDefFoundError: org/apache/htrace/Trace
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:218)

[root@master01 jars]# ll *trace*.jar
-rw-r--r-- 1 root root  117409 Apr 13 02:42 accumulo-trace-1.6.0.jar
-rw-r--r-- 1 root root   30457 Apr 13 02:40 htrace-core-2.00.jar
-rw-r--r-- 1 root root   31212 Apr 13 02:40 htrace-core-3.0.4.jar
-rw-r--r-- 1 root root 1475955 Apr 13 02:39 htrace-core-3.1.0-incubating.jar
[root@master01 jars]# sz *trace*.jar


Next error:
16/06/12 13:35:24 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/06/12 13:35:24 ERROR util.Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.

Create a new folder in the project: conf/
The file we need is hbase-site.xml, e.g.:
/opt/cloudera/parcels/CDH-5.4.10-1.cdh5.4.10.p0.16/etc/hbase/conf.dist/hbase-site.xml
[root@master01 jars]# cd /opt/cloudera/parcels/CDH-5.4.10-1.cdh5.4.10.p0.16/etc/hbase/conf.dist/

[root@master01 conf.dist]# cd /etc/hbase/conf.cloudera.hbase/
[root@master01 conf.cloudera.hbase]# ll
total 28
-rw-r--r-- 1 root root   21 Jun  8 10:07 __cloudera_generation__
-rw-r--r-- 1 root root 3538 Jun  8 10:07 core-site.xml
-rw-r--r-- 1 root root  360 Jun  8 10:07 hbase-env.sh
-rw-r--r-- 1 root root 1984 Jun  8 10:07 hbase-site.xml
-rw-r--r-- 1 root root 1610 Jun  8 10:07 hdfs-site.xml
-rw-r--r-- 1 root root    0 Jun  8 10:07 jaas.conf
-rw-r--r-- 1 root root  312 Jun  8 10:07 log4j.properties
-rw-r--r-- 1 root root  315 Jun  8 10:07 ssl-client.xml
[root@master01 conf.cloudera.hbase]# sz hbase-site.xml
*B00000000000000
[root@master01 conf.cloudera.hbase]#

Select the project name hbaseconnect, right-click and choose Properties -> Java Build Path -> Libraries -> Add Class Folder (select conf to add the conf directory to this project).
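
Adding conf as a class folder puts hbase-site.xml on the classpath, so HBaseConfiguration.create() picks it up automatically. If you prefer not to touch the build path, the file can also be loaded explicitly. A small sketch, assuming a hypothetical local copy of the file (the class name hbaseconf and the path are illustrative):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class hbaseconf {
        public static Configuration load() {
                Configuration conf = HBaseConfiguration.create();
                // hypothetical path where the downloaded hbase-site.xml was saved
                conf.addResource(new Path("d:/workspace/hiveconnect/conf/hbase-site.xml"));
                return conf;
        }
}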

Error: host not recognized
16/06/12 14:11:25 WARN zookeeper.RecoverableZooKeeper: Unable to create ZooKeeper Connection
java.net.UnknownHostException: master01

Edit the file C:\WINDOWS\system32\drivers\etc\hosts and add:
192.168.1.207 master01


Still missing jars, so let's just copy everything under jars/ over!
Caused by: java.lang.NoClassDefFoundError: org/apache/commons/cli/ParseException
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:636)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:608)

sz protobuf*.jar  slf4j*.jar commons*.jar


The error still exists:
16/06/12 15:05:10 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/06/12 15:05:10 ERROR util.Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.

Under Hadoop 2.x, Eclipse on Windows reports this error because the Hadoop 2.x release does not ship winutils.exe.

A Windows build of winutils is available at: https://codeload.github.com/amihalik/hadoop-common-2.6.0-bin/zip/master
Unzip the package to D:\hadoop-common-2.6.0-bin-master

Set the environment variable and restart the computer:
HADOOP_HOME=D:\hadoop-common-2.6.0-bin-master

No more errors!
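
As an alternative to setting the environment variable and rebooting, the same location can be set from inside the JVM before the first Hadoop class initializes. A minimal sketch, assuming the same unzip directory (the class name winutilsfix is just for illustration):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class winutilsfix {
        public static Configuration create() {
                // equivalent to the HADOOP_HOME environment variable; must run
                // before any Hadoop/HBase class is initialized
                System.setProperty("hadoop.home.dir", "D:\\hadoop-common-2.6.0-bin-master");
                return HBaseConfiguration.create();
        }
}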



Tested Java code:
package hiveconnect;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.MasterNotRunningException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.ZooKeeperConnectionException;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class hbaseconnect {

        private static Configuration conf = null;
        static {
                conf = HBaseConfiguration.create();
                conf.set("hbase.zookeeper.quorum", "master01");// 使用eclipse时必须添加这个,否则无法定位master需要配置hosts
                conf.set("hbase.zookeeper.property.clientPort", "2181");
        }

        public static void main(String[] args) throws IOException {
                String[] cols = new String[1];
                String[] colsValue = new String[1];
                cols[0] = "title";
                colsValue[0] = "AboutYunArticle";

                // create table
                createTable();
                // Add value
                addData("www.aboutyun.com", "blog01", cols, colsValue);
        }

        private static void createTable() throws MasterNotRunningException,
                        ZooKeeperConnectionException, IOException {

                HBaseAdmin admin = new HBaseAdmin(conf); // database admin handle (new api)
                if (admin.tableExists(TableName.valueOf("blog01"))) {
                        System.out.println("table already exists!");
                        System.exit(0);
                } else {

                        HTableDescriptor desc = new HTableDescriptor(
                                        TableName.valueOf("blog01"));
                        desc.addFamily(new HColumnDescriptor("article"));
                        admin.createTable(desc);
                        admin.close();
                        System.out.println("create table Success!");
                }
        }
        private static void addData(String rowKey, String tableName, String[] column1, String[] value1) throws IOException {
                Put put = new Put(Bytes.toBytes(rowKey)); // set the rowkey
                HTable table = new HTable(conf, Bytes.toBytes(tableName)); // HTable handles record operations (put, delete, get, ...)
                HColumnDescriptor[] columnFamilies = table.getTableDescriptor()
                                .getColumnFamilies(); // get all column families

                for (int i = 0; i < columnFamilies.length; i++) {
                        String familyName = columnFamilies[i].getNameAsString(); // get the column family name
                        if (familyName.equals("article")) { // article column family put data
                                for (int j = 0; j < column1.length; j++) {
                                        put.add(Bytes.toBytes(familyName),
                                                        Bytes.toBytes(column1[j]), Bytes.toBytes(value1[j]));
                                }
                        }
                }
                table.put(put);
                table.close();
                System.out.println("add data Success!");
        }

}



Test result:
hbase(main):003:0> scan 'blog01'
ROW                                COLUMN+CELL                                                                                       
www.aboutyun.com column=article:title, timestamp=1465721042702, value=AboutYunArticle                              
1 row(s) in 0.1010 seconds
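
The same check can be done from Java with the 1.0-era client API used above. A minimal scan sketch (it assumes the blog01 table and article column family created by the code; the class name hbasescantest is illustrative):

package hiveconnect;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class hbasescantest {

        public static void main(String[] args) throws IOException {
                Configuration conf = HBaseConfiguration.create();
                conf.set("hbase.zookeeper.quorum", "master01");
                conf.set("hbase.zookeeper.property.clientPort", "2181");

                HTable table = new HTable(conf, "blog01");
                ResultScanner scanner = table.getScanner(new Scan());
                for (Result row : scanner) {
                        // print rowkey and the article:title cell, matching the shell scan
                        System.out.println(Bytes.toString(row.getRow()) + " article:title="
                                        + Bytes.toString(row.getValue(
                                                        Bytes.toBytes("article"), Bytes.toBytes("title"))));
                }
                scanner.close();
                table.close();
        }
}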


Package and export the project:
Select the project name hiveconnect -> Export -> Runnable JAR file -> launch configuration (hiveconnecttest - hiveconnect) -> C:\Users\Administrator\Desktop\hiveconnect.jar


Import the jar package:
Create a new project: testhive -> Build Path -> Configure Build Path -> Add External JARs -> select C:\Users\Administrator\Desktop\hiveconnect.jar
New class hivetest:

package testhive;

import hiveconnect.hiveconnecttest;

public class hivetest {

        public static void main(String[] args) throws Exception {
                hiveconnecttest test = new hiveconnecttest();
                test.main(null);
        }

}
It can be executed directly!


I am still in the learning stage; please correct me if anything is wrong.
QQ:906179271
tina
