Impala with HBase and Hive

Using complex types in Impala (via Hive):
    If a table created in Hive has columns of complex types (array, struct, map) and its storage format is text (stored as textfile) or left as the default, the table cannot be queried from Impala.
Workaround:
    Create another table with identical columns, replacing stored as textfile with stored as parquet, then copy the source table's data into it (insert into tablename2 select * from tablename1). The new table can then be queried from Impala.
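The workaround above can also be written as a single DDL statement; a sketch to run in Hive (tablename1 and tablename2 are the placeholder names used above):

```sql
-- one-statement form of the workaround: create the Parquet copy
-- and load it in the same step
CREATE TABLE tablename2 STORED AS PARQUET AS SELECT * FROM tablename1;
```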

Query syntax:
    Unlike Hive, Impala does not use explode for array, map, and struct columns. Instead, it queries them like this:
select order_id, rooms.room_id, days.day_id, days.price from test2, test2.rooms, test2.rooms.days;
    In effect, each complex-typed column is treated as a nested sub-table and joined with its parent.
Table structure:
test2 (
   order_id string,
   rooms array<struct<
         room_id:string,
         days:array<struct<day_id:string,price:int>>
         >
   >
)
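To illustrate the flattening, suppose test2 held one order with one room booked for two days (hypothetical data, not from the original post); the query above would return one row per innermost array element:

```
order_id  room_id  day_id  price
o1        r1       d1      100
o1        r1       d2      120
```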

Integrating Impala with HBase:
Integrating Impala with HBase means mapping the HBase RowKey and columns to the fields of an Impala table. Impala stores its metadata in the Hive Metastore, and, as with Hive, the integration with HBase is implemented through an external (EXTERNAL) table.

Create the table in HBase:

...
tname = TableName.valueOf("students");
HTableDescriptor tDescriptor = new HTableDescriptor(tname);
HColumnDescriptor family = new HColumnDescriptor("core");
tDescriptor.addFamily(family);
admin.createTable(tDescriptor);
// write rows:
...
HTable htable = (HTable) connection.getTable(tname);
// do not flush the client-side write buffer automatically
htable.setAutoFlush(false);
for (int i = 1; i < 50; i++) {
    Put put = new Put(Bytes.toBytes("lisi" + format.format(i)));
    // skip the write-ahead log for faster bulk loading
    put.setWriteToWAL(false);

    put.addColumn(Bytes.toBytes("core"), Bytes.toBytes("math"), Bytes.toBytes(format.format(i)));
    put.addColumn(Bytes.toBytes("core"), Bytes.toBytes("english"), Bytes.toBytes(format.format(Math.random() * i)));
    put.addColumn(Bytes.toBytes("core"), Bytes.toBytes("chinese"), Bytes.toBytes(format.format(Math.random() * i)));
    htable.put(put);
    if (i % 2000 == 0) {
        htable.flushCommits();
    }
}
(partial code)
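The setup elided above ("...") would typically look like the following sketch, assuming the HBase 1.x client API; the ZooKeeper quorum host and the two-digit DecimalFormat are assumptions, not taken from the original code:

```java
// sketch: HBase client setup assumed by the snippet above
Configuration conf = HBaseConfiguration.create();
conf.set("hbase.zookeeper.quorum", "datanode02");   // placeholder host
Connection connection = ConnectionFactory.createConnection(conf);
Admin admin = connection.getAdmin();
DecimalFormat format = new DecimalFormat("00");     // zero-pads the row-key suffix
```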


Create the external table in Hive:

...
        state.execute("create external table if not exists students (" +
                "user_name string, " +
                "core_math string, " +
                "core_english string, " +
                "core_chinese string) " +
                "stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' " +
                "with serdeproperties ('hbase.columns.mapping'=':key,core:math,core:english,core:chinese') " +
                "tblproperties('hbase.table.name'='students')");
...
(partial code)

In the DDL above, the WITH SERDEPROPERTIES clause maps the Hive external table's fields to HBase columns: ":key" corresponds to the HBase RowKey (the "lisi****" values written earlier), and the remaining entries are the columns of the column family core. Finally, TBLPROPERTIES names the HBase table being mapped. (Note that ROW FORMAT SERDE must not be combined with STORED BY; the storage handler already implies the HBase SerDe.)

Synchronizing metadata in Impala:
Impala shares Hive's Metastore, but its catalog cache must be synchronized before the new table becomes visible. In impala-shell, run:
INVALIDATE METADATA;
After that, the table mapped to HBase can be queried from Impala.
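INVALIDATE METADATA can also be scoped to a single table, and for later data-only changes REFRESH is the cheaper option; both are standard Impala statements:

```sql
-- reload metadata for the whole catalog (expensive on large catalogs)
INVALIDATE METADATA;

-- reload metadata for one table only
INVALIDATE METADATA students;

-- pick up new data files for a table Impala already knows about
REFRESH students;
```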

Note: Impala supports SELECT and INSERT, but not single-row DELETE or UPDATE statements; Impala cannot modify non-Kudu tables. Other operations are similar to Hive.
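Two practical consequences, sketched below with illustrative values only: for an HBase-backed table, re-inserting an existing row key overwrites that row, which effectively serves as an update; for HDFS-backed tables, the usual workaround is to rewrite the data with INSERT OVERWRITE (my_table and my_table_copy are hypothetical names):

```sql
-- HBase-backed table: re-inserting a row key overwrites the row
INSERT INTO students VALUES ('lisi01', '10', '8', '9');

-- HDFS-backed table: rewrite the data instead of deleting in place
INSERT OVERWRITE my_table SELECT * FROM my_table_copy WHERE id != 'o1';
```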

Accessing Impala from Java:
Maven dependencies:

        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.12</version>
        </dependency>
        <dependency>
            <groupId>com.cloudera.impala</groupId>
            <artifactId>jdbc</artifactId>
            <version>2.5.31</version>
        </dependency>

Java code:

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

import java.sql.*;

/**
 * @Author: Xavier
 * @Date: 2019-02-22 13:34
 **/


public class ImpalaOptionTest {
    private String driverName="com.cloudera.impala.jdbc41.Driver";
    private String url="jdbc:impala://datanode02:21050/xavierdb";
    private Connection conn=null;
    private Statement state=null;
    private ResultSet res=null;

    @Before
    public void init() throws ClassNotFoundException, SQLException {
        Class.forName(driverName);
        conn= DriverManager.getConnection(url,"impala","impala");
        state=conn.createStatement();
    }

    // run a query (show databases / show tables also work)
    @Test
    public void test() throws SQLException {
//        ResultSet res = state.executeQuery("show databases");
//        ResultSet res = state.executeQuery("show tables");
        res = state.executeQuery("select * from students");

        while (res.next()) {
            System.out.println(res.getString(1));
        }
    }

    // release resources
    @After
    public void destroy() throws SQLException {
        if (res != null) res.close();
        if (state != null) state.close();
        if (conn != null) conn.close();
    }


}
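A parameterized query is safer than concatenating values into SQL strings; a sketch using the standard JDBC PreparedStatement API (the column and row-key value are illustrative, and the same driver/connection setup as above is assumed):

```java
// sketch: parameterized query with try-with-resources
String sql = "select core_math from students where user_name = ?";
try (PreparedStatement ps = conn.prepareStatement(sql)) {
    ps.setString(1, "lisi01");                  // illustrative row key
    try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {
            System.out.println(rs.getString(1));
        }
    }
}
```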







   


Reprinted from www.cnblogs.com/xavier-xd/p/10419972.html