Part Two: HBase Integration — Operating HBase with SQL via Phoenix

I. Phoenix Overview

1. Introduction
Phoenix can be understood as a SQL query engine for HBase. It began as an open-source project at salesforce.com and was later donated to Apache. It acts as Java middleware that lets developers access the NoSQL database HBase the same way they would access a relational database through JDBC.

Phoenix stores its tables and data in HBase. Phoenix only needs to be associated with the corresponding HBase tables; you can then use its tools to read and write the data.

In fact, Phoenix should be seen as a tool on top of HBase rather than a new dialect of HBase itself. Although you can connect to Phoenix via JDBC from Java and then operate on HBase, it is not suitable for OLTP in a production environment: online transaction processing requires low latency, and while Phoenix applies some optimizations when querying HBase, the latency is still not small. It is mainly used in OLAP scenarios, where query results are computed and then stored.

II. Deployment

Basic environment:
hadoop
hbase
ZooKeeper

This builds on the HBase environment deployed earlier; refer to your own previous deployment, which will not be repeated here.
Here we use Phoenix as middleware to operate on the HBase cluster's data. The Phoenix version must match the HBase version; here it is apache-phoenix-4.14.2-HBase-1.3-bin.tar.gz, deployed on host bigdata121.
Extract the package:

tar zxf  apache-phoenix-4.14.2-HBase-1.3-bin.tar.gz -C /opt/modules/
mv /opt/modules/apache-phoenix-4.14.2-HBase-1.3-bin /opt/modules/phoenix-4.14.2-HBase-1.3-bin

Configure the environment variables:

vim /etc/profile.d/phoenix.sh
#!/bin/bash
export PHOENIX_HOME=/opt/modules/phoenix-4.14.2-HBase-1.3-bin
export PATH=$PATH:${PHOENIX_HOME}/bin

source /etc/profile.d/phoenix.sh

Copy HBase's conf/hbase-site.xml into /opt/modules/phoenix-4.14.2-HBase-1.3-bin/bin:

cp ${HBASE_HOME}/conf/hbase-site.xml /opt/modules/phoenix-4.14.2-HBase-1.3-bin/bin

Then copy the Phoenix jars that HBase depends on into HBase's lib directory. Note: they must be copied to every HBase node.

cd /opt/modules/phoenix-4.14.2-HBase-1.3-bin
cp phoenix-4.14.2-HBase-1.3-server.jar phoenix-core-4.14.2-HBase-1.3.jar ${HBASE_HOME}/lib/

Copy them to the other two HBase nodes:
scp phoenix-4.14.2-HBase-1.3-server.jar phoenix-core-4.14.2-HBase-1.3.jar bigdata122:${HBASE_HOME}/lib/
scp phoenix-4.14.2-HBase-1.3-server.jar phoenix-core-4.14.2-HBase-1.3.jar bigdata123:${HBASE_HOME}/lib/

Start the Phoenix command line to test whether it can connect to HBase:

sqlline.py <ZooKeeper quorum address>
For example:
sqlline.py bigdata121,bigdata122,bigdata123:2181

Note: Phoenix is essentially a plugin library for HBase rather than a standalone component. Once the plugin jars are in place and HBase has been restarted, you can use Phoenix to connect to HBase and operate on it.

III. Basic commands

List tables:

!table

Create a table

create table "student"(
id integer not null primary key,
name varchar);
If the table name is not quoted, it is folded to all upper case by default; quoting it preserves the case. The same applies wherever a table name appears in the commands below.

Delete table

drop table "test"

Insert data

upsert into test values(1,'Andy');
When inserting values, strings must use single quotes, not double quotes, or an error is raised. Also remember the command here is upsert, not insert.
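Conceptually, upsert is insert-or-update keyed on the primary key. A minimal sketch of that semantics in plain Java (a map standing in for the table; this is an illustration, not Phoenix internals):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class UpsertDemo {
    public static void main(String[] args) {
        // The map key plays the role of the primary key column.
        Map<Integer, String> table = new LinkedHashMap<>();
        table.put(1, "Andy");   // upsert into test values(1,'Andy')  -> row inserted
        table.put(1, "Andrew"); // upsert with the same key           -> row updated in place
        table.put(2, "Bob");    // new key                            -> row inserted
        System.out.println(table); // {1=Andrew, 2=Bob}
    }
}
```

This is why a repeated upsert with the same primary key never raises a duplicate-key error the way a SQL insert would.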

Query data

select * from "test";
Usage is basically the same as an ordinary SQL select.

Delete data

delete from "test" where id=2

Modify table structure

Add a column: alter table "student" add address varchar
Drop a column: alter table "student" drop column address

Create a mapping table

Create the table in HBase:
create 'fruit','info','account'
Insert some data:
put 'fruit','1001','info:name','apple'
put 'fruit','1001','info:color','red'
put 'fruit','1001','info:price','10'
put 'fruit','1001','account:sells','20'
put 'fruit','1002','info:name','orange'
put 'fruit','1002','info:color','orange'
put 'fruit','1002','info:price','8'
put 'fruit','1002','account:sells','100'

Create the mapping view in Phoenix; note the name must match the HBase table:
create view "fruit"(
"ROW" varchar primary key,
"info"."name" varchar,
"info"."color" varchar,
"info"."price" varchar,
"account"."sells" varchar,
);

You can then query the HBase data from Phoenix.

IV. Connecting to Phoenix via JDBC

Add the dependency to the Maven project's pom.xml:

<dependency>
    <groupId>org.apache.phoenix</groupId>
    <artifactId>phoenix-core</artifactId>
    <version>4.14.2-HBase-1.3</version>
</dependency>

Code:

package PhoenixTest;

import org.apache.phoenix.jdbc.PhoenixDriver;

import java.sql.*;

public class PhoenixConnTest {
    public static void main(String[] args) throws ClassNotFoundException, SQLException {
        // Load the Phoenix JDBC driver class
        Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");
        // Build the connection string (the ZooKeeper quorum)
        String url = "jdbc:phoenix:bigdata121,bigdata122,bigdata123:2181";
        // Create the connection
        Connection connection = DriverManager.getConnection(url);
        // Create a statement
        Statement statement = connection.createStatement();
        // Execute the SQL; the table name needs double quotes, so escape them with \
        boolean execute = statement.execute("select * from \"fruit\"");
        if (execute) {
            // Fetch the returned result set and print it
            ResultSet resultSet = statement.getResultSet();
            while (resultSet.next()) {
                System.out.println(resultSet.getString("name"));
            }
        }
        statement.close();
        connection.close();

    }
}

An issue that appeared when connecting to Phoenix via JDBC:

Exception in thread "main" com.google.common.util.concurrent.ExecutionError: java.lang.NoSuchMethodError: com.lmax.disruptor.dsl.Disruptor.<init>(Lcom/lmax/disruptor/EventFactory;ILjava/util/concurrent/ThreadFactory;Lcom/lmax/disruptor/dsl/ProducerType;Lcom/lmax/disruptor/WaitStrategy;)V
    at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2254)
    at com.google.common.cache.LocalCache.get(LocalCache.java:3985)
    at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4788)
    at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:241)
    at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:147)
    at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
    at java.sql.DriverManager.getConnection(DriverManager.java:664)
    at java.sql.DriverManager.getConnection(DriverManager.java:270)
    at PhoenixTest.PhoenixConnTest.main(PhoenixConnTest.java:11)
Caused by: java.lang.NoSuchMethodError: com.lmax.disruptor.dsl.Disruptor.<init>(Lcom/lmax/disruptor/EventFactory;ILjava/util/concurrent/ThreadFactory;Lcom/lmax/disruptor/dsl/ProducerType;Lcom/lmax/disruptor/WaitStrategy;)V
    at org.apache.phoenix.log.QueryLoggerDisruptor.<init>(QueryLoggerDisruptor.java:72)
    at org.apache.phoenix.query.ConnectionQueryServicesImpl.<init>(ConnectionQueryServicesImpl.java:414)
    at org.apache.phoenix.jdbc.PhoenixDriver$3.call(PhoenixDriver.java:248)
    at org.apache.phoenix.jdbc.PhoenixDriver$3.call(PhoenixDriver.java:241)
    at com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4791)
    at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3584)
    at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2372)
    at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2335)
    at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2250)
    ... 8 more

First, look at this line:

java.lang.NoSuchMethodError: com.lmax.disruptor.dsl.Disruptor

This says that a particular constructor of com.lmax.disruptor.dsl.Disruptor does not exist. Inspecting the jar shows the class itself is present, so by experience this is most likely a dependency version conflict: the disruptor package that Phoenix and HBase depend on is too old, and some of its methods are incompatible. So I tried a newer version of the disruptor package: searching Maven, I picked version 3.3.7 (3.3.0 was the default) and added it to pom.xml:

<dependency>
    <groupId>com.lmax</groupId>
    <artifactId>disruptor</artifactId>
    <version>3.3.7</version>
</dependency>

Then re-run the program, and it worked normally. So the problem was indeed the version of that package.
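When you suspect this kind of classpath version conflict, it helps to print which jar a class was actually loaded from. A small sketch (on the real classpath you would pass "com.lmax.disruptor.dsl.Disruptor"; here a JDK class is used so the snippet runs standalone):

```java
import java.security.CodeSource;

public class JarLocator {
    public static String locate(String className) throws ClassNotFoundException {
        Class<?> c = Class.forName(className);
        CodeSource src = c.getProtectionDomain().getCodeSource();
        // Classes loaded by the JVM's bootstrap loader report no code source
        return src == null ? "(bootstrap / JDK class)" : src.getLocation().toString();
    }

    public static void main(String[] args) throws ClassNotFoundException {
        // With Phoenix on the classpath: locate("com.lmax.disruptor.dsl.Disruptor")
        System.out.println(locate("java.lang.String"));
    }
}
```

If the printed jar path is an old disruptor version bundled by another dependency, that confirms the conflict before you start pinning versions in pom.xml.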

V. Quirks when using Phoenix together with HBase

First, HBase columns and column families have no concept of data types; everything is stored as raw bytes, and the HBase shell itself can only render strings. Phoenix tables, created with conventional SQL, do have typed columns. This leads to the following quirks.
1. Garbled display in HBase
This happens when the Phoenix table has non-string columns such as int or double. If you insert data from Phoenix with upsert and then query it from Phoenix with select, everything looks normal. But when you scan the table from the HBase shell, the non-string columns display as gibberish. This is expected: as noted above, the HBase shell cannot decode non-string types and simply renders the raw bytes directly. There is no real fix for this on the HBase side.
The best approach in this situation is to query the data through Phoenix rather than through HBase. Note that in this case the HBase shell may even show odd characters for the row key and column qualifiers.

2. Phoenix displays values incorrectly (not garbled)
This happens when a table with data already exists in HBase and a mapping table is then created in Phoenix. Viewing the data from the HBase side is normal, but through Phoenix the non-string columns (int, double, and so on) display as nonsensical numbers, like this:

select * from "fruit";
+-------+---------+---------+--------------+--------+
|  ROW  |  name   |  color  |    price     | sells  |
+-------+---------+---------+--------------+--------+
| 1001  | apple   | red     | -1322241488  | null   |
| 1002  | orange  | orange  | -1204735952  | null   |
+-------+---------+---------+--------------+--------+

The reason is simple: the HBase table stores no type information at all, but Phoenix forcibly interprets the raw bytes as its declared column types. The byte encodings do not match, so the displayed values are wrong.

price and sells are obviously numeric, yet they do not display properly. The solution in this case: when creating the mapping table in Phoenix, declare all such columns as the string type varchar.
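The mismatch can be demonstrated in plain Java. Assuming Phoenix's sort-order-preserving encoding, an INTEGER is stored as 4 big-endian bytes with the sign bit flipped, while the HBase shell `put` above stored the ASCII string "10". These are completely different byte sequences (a sketch of the encodings, not Phoenix's actual codec classes):

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class EncodingDemo {
    // Phoenix-style INTEGER encoding (assumption: big-endian with the sign bit
    // flipped so that byte-wise sort order matches numeric order)
    static byte[] phoenixInt(int v) {
        return ByteBuffer.allocate(4).putInt(v ^ 0x80000000).array();
    }

    public static void main(String[] args) {
        byte[] viaPhoenix = phoenixInt(10);  // what an upsert of INTEGER 10 would store
        byte[] viaShell   = "10".getBytes(); // what `put ... '10'` in the HBase shell stores
        System.out.println(Arrays.toString(viaPhoenix)); // [-128, 0, 0, 10]
        System.out.println(Arrays.toString(viaShell));   // [49, 48]
        // Phoenix reading [49, 48] back as an INTEGER cannot recover the number 10,
        // which is why the mapped column shows nonsense; declaring it varchar makes
        // Phoenix decode the bytes as the string they really are.
    }
}
```

The same reasoning explains the reverse quirk in point 1: bytes written by Phoenix's typed encoder look like gibberish when the HBase shell prints them as characters.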


Origin: blog.51cto.com/kinglab/2447715