Phoenix Operations Manual

 


 

Phoenix began as an open-source project at Salesforce and later became a top-level project of the Apache Software Foundation.

Phoenix is a SQL layer built on top of HBase. It lets us use standard JDBC APIs, rather than the HBase client APIs, to create tables, insert data, and query HBase data.

put the SQL back in NoSQL

Phoenix is written entirely in Java and ships as an embeddable JDBC driver for HBase. The Phoenix query engine transforms a SQL query into one or more HBase scans and orchestrates their execution to produce a standard JDBC result set. By using the HBase API, coprocessors, and custom filters directly, it delivers performance on the order of milliseconds for simple queries, and on the order of seconds for millions of rows.

There are many query tools for HBase, such as Hive, Tez, Impala, Spark SQL, and Phoenix.

Phoenix achieves this by:

compiling SQL into native HBase scans;

determining the optimal start and stop row keys for each scan;

running those scans in parallel (illustrated below).
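
For illustration (using the person table created later in section 2.4), EXPLAIN shows the plan a query is compiled into, including whether it is a full scan or a range scan and how the work is parallelized:

EXPLAIN SELECT * FROM person WHERE id >= 1 AND id < 100;   -- range scan over the primary key
EXPLAIN SELECT count(1) FROM person;                        -- full scan of the table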

 

 

HBase Architecture Diagram

(The HBase architecture diagram from the original document is not reproduced in this text version.)

Phoenix Architecture Diagram

(The Phoenix architecture diagram from the original document is not reproduced in this text version.)

 

Phoenix follows the philosophy of bringing the computation close to the data through:

Coprocessors
Operations are performed on the server side to minimize data transfer between client and server.

Custom filters
Data is pruned as close to its source as possible. In addition, to minimize startup cost, Phoenix uses the native HBase APIs rather than the Map/Reduce framework.

 

 

 

 

 

 

 

 

 

  1. Command-Line Connection (sqlline)

 

1.1. Go to the Phoenix bin directory

cd /usr/hdp/2.6.5.0-292/phoenix/bin

1.2. Establish a connection

Pass the ZooKeeper quorum of the HBase cluster (one or more hosts, with the port) to sqlline.py:

./sqlline.py bj-rack001-hadoop002:2181

./sqlline.py bj-rack001-hadoop002,bj-rack001-hadoop004,bj-rack001-hadoop003:2181

./sqlline.py 192.168.102.120,192.168.102.121,192.168.102.122:2181
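
Once connected, Phoenix SQL can be entered directly at the sqlline prompt. A minimal smoke test (the same statements used later in section 2.4; statements end with a semicolon):

create table if not exists person (id integer not null primary key, name varchar, age integer);
upsert into person values (1, 'zhangsan', 18);
select * from person;
drop table person;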

1.3. Common sqlline commands

No.  Command          Description
1    !autocommit      Toggle automatic transaction commit on/off
2    !batch           Execute a batch of statements
3    !close           Close the current connection
4    !closeall        Close all open connections
5    !columns         List all columns of the specified table
6    !commit          Commit the current transaction (when autocommit is off)
7    !connect         Open a new connection
8    !dbinfo          Show database information
9    !describe        Describe a table
10   !dropall         Drop all tables in the current database
11   !go              Select the current connection
12   !help            Show help
13   !history         Show the command history
14   !importedkeys    List all imported keys of the specified table
15   !indexes         Show indexes
16   !isolation       Set the transaction isolation level for this connection
17   !list            List the current connections
18   !manual          Show the SQLLine manual
19   !metadata        Obtain metadata information
20   !nativesql       Show the native SQL for the specified statement
21   !outputformat    Set the output format (table, vertical, csv, tsv, xmlattrs, xmlelements)
22   !primarykeys     List all primary keys of the table
23   !procedures      List all procedures
24   !properties      Connect to the database specified in a properties file
25   !quit            Exit sqlline
26   !reconnect       Reconnect to the database
27   !record          Record all output to the specified file
28   !rehash          Fetch table and column names for command completion
29   !rollback        Roll back the current transaction (when autocommit is off)
30   !run             Run a script file
31   !save            Save the current variables or aliases
32   !scan            Scan for installed JDBC drivers
33   !script          Start saving a script to a file
34   !set             Set a sqlline variable
35   !sql             Execute a SQL statement
36   !tables          List all tables in the database
37   !typeinfo        Show the type map for the current connection
38   !verbose         Turn verbose mode on

 

 

  2. Client Tool: SQuirreL SQL

2.1. Download

http://192.168.102.120/sofewares/squirrel-sql-3.8.1/

Client package: squirrel-sql-3.8.1-standard.jar

Driver: phoenix-4.7.0.2.6.5.0-292-client.jar

 

2.2. Install SQuirreL

From a cmd prompt, change to the directory containing squirrel-sql-3.8.1-standard.jar and run:

java -jar squirrel-sql-3.8.1-standard.jar

Then, other than choosing the installation path, set the remaining options as needed; otherwise simply click "Next" all the way through.

 

 

 

 

2.3. Configure the SQuirreL client

(1) Copy the phoenix-4.7.0.2.6.5.0-292-client.jar file from the Phoenix installation directory on the server into the lib folder of the squirrel-sql installation directory on Windows, then start SQuirreL with squirrel-sql.bat.

(2) Add a new driver (Drivers -> New Driver).

(3) In the "Add Driver" dialog, set the driver's

Name to: Phoenix

Example URL to: jdbc:phoenix:192.168.102.120:2181

Class Name to: org.apache.phoenix.jdbc.PhoenixDriver

 

(4) Click the "Aliases" tab next to "Drivers" and create a new alias (Aliases -> New Alias), just as the driver was created above. In the dialog, fill in Name, User Name, and Password (all three can be anything), select the newly added Phoenix driver from the Driver drop-down list, and click "Test" to test the connection. Then click "Connect"; a "Connection successful" message means the configuration worked.

 

 

 

 

(5) Double-click the newly created test alias and click "Connect" to open the SQL console; SQL statements can then be entered under the "SQL" tab.

2.4. Test

Create a person table for testing:

   create table person (id integer not null primary key,name varchar,age integer)

Insert one row:

   upsert into person values (1,'zhangsan' ,18)

Query the data:

   select * from person

Drop the table (do this after verifying the table in the HBase shell in step (6) below, otherwise there is nothing left to inspect):

   drop table person

  

 

(6) In the HBase shell, verify the data that was just added:

list

The newly created PERSON table appears in the list; at this point SQuirreL is installed and configured successfully.

 

scan 'PERSON', { LIMIT => 10 }

 

Note:

Tables created through SQuirreL with unquoted names are automatically converted to uppercase. Because Phoenix is case-sensitive, the table name must be written in uppercase in the HBase shell, e.g. desc 'PERSON'; desc 'person' will fail.
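
If a lowercase name really is needed, Phoenix preserves the exact case of double-quoted identifiers (a hypothetical sketch; the quoted name must then be written in double quotes everywhere it is used):

create table "person" ("id" integer not null primary key, "name" varchar);
-- stored in HBase as the lowercase table 'person'
select * from "person";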

 

2.5. Reference:

         https://blog.csdn.net/NDF923/article/details/77838267

 

  3. JDBC Connection (identical to ordinary JDBC)

3.1. Add pom.xml dependencies

In addition to the HBase dependencies below, the Phoenix client JAR (the phoenix-<version>-client.jar from section 2.3, or the corresponding phoenix-core Maven artifact) must be on the application classpath, since it provides org.apache.phoenix.jdbc.PhoenixDriver.

<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-client</artifactId>
    <version>${hbase.version}</version>
</dependency>

<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-common</artifactId>
    <version>${hbase.version}</version>
</dependency>

 

3.2. Example: inserting data

JdbcUtils and RandomUitl are project-local helper classes not shown in this manual; JdbcUtils.getConnection() simply obtains a JDBC connection using the Phoenix driver class and URL shown in section 3.3.

 

@Test
public void testInsertData() throws Throwable {
    try {
        Connection conn = JdbcUtils.getConnection();
        conn.setAutoCommit(false);

        PreparedStatement pstmt = conn.prepareStatement("upsert into \"employee\" values (?,?,?,?)");
        long time = System.currentTimeMillis();

        for (int i = 0; i < 1000000; i++) {
            pstmt.setString(1, "no_" + i);
            pstmt.setString(2, "name" + i);
            pstmt.setString(3, "position" + RandomUitl.getRandomInt(1, 10));
            pstmt.setString(4, "tel" + RandomUitl.getRandomInt(10000000, 99999999));
            pstmt.addBatch();                 // add to the current batch
            if (i % 1000 == 0) {
                System.out.println("now : " + i);
                pstmt.executeBatch();         // execute the batch
                conn.commit();                // commit sends the buffered upserts to HBase
            }
        }
        pstmt.executeBatch();                 // flush the remaining rows
        conn.commit();

        // close the statement and connection
        JdbcUtils.closeStatement(pstmt);
        JdbcUtils.closeConnection(conn);

        long timeUsed = System.currentTimeMillis() - time;
        System.out.println("time ===> " + timeUsed + " ms");
    } catch (Exception e) {
        e.printStackTrace();
    }
}

 

3.3. Example: reading data

 

@Test
public void testReadData() throws Throwable {
    try {
        String driver = "org.apache.phoenix.jdbc.PhoenixDriver";
        String url = "jdbc:phoenix:192.168.102.120:2181";

        Class.forName(driver);   // register the Phoenix driver (optional with JDBC 4+)
        Connection conn = DriverManager.getConnection(url);
        Statement statement = conn.createStatement();

        long time = System.currentTimeMillis();

        ResultSet rs = statement.executeQuery("select * from \"employee\" where \"position\" = 'position1'");

        long count = 0L;
        while (rs.next()) {
            String name = rs.getString("name");
            String position = rs.getString("position");
            String tel = rs.getString("tel");

            System.out.println("name=" + name);
            System.out.println("position=" + position);
            System.out.println("tel=" + tel);

            count++;
        }

        long timeUsed = System.currentTimeMillis() - time;
        System.out.println("time ===> " + timeUsed + " ms");
        System.out.println("count ===> " + count);

        // close result set, statement, and connection
        rs.close();
        statement.close();
        conn.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}

 

  4. Apache Spark Plugin

The phoenix-spark plugin extends Phoenix's MapReduce support, allowing Spark to load Phoenix tables as RDDs or DataFrames and to persist them back to Phoenix.

4.1. Spark setup

  • To ensure that all requisite Phoenix / HBase platform dependencies are available on the classpath for the Spark executors and drivers, set both ‘spark.executor.extraClassPath’ and ‘spark.driver.extraClassPath’ in spark-defaults.conf to include the ‘phoenix-<version>-client.jar’
  • Note that for Phoenix versions 4.7 and 4.8 you must use the 'phoenix-<version>-client-spark.jar'. As of Phoenix 4.10, the 'phoenix-<version>-client.jar' is compiled against Spark 2.x. If compatibility with Spark 1.x is needed, you must compile Phoenix with the spark16 maven profile.
  • To help your IDE, you can add the following provided dependency to your build:

 

<dependency>
  <groupId>org.apache.phoenix</groupId>
  <artifactId>phoenix-spark</artifactId>
  <version>${phoenix.version}</version>
  <scope>provided</scope>
</dependency>

4.2. Reading data from a table

Given a Phoenix table with the following DDL

CREATE TABLE TABLE1 (ID BIGINT NOT NULL PRIMARY KEY, COL1 VARCHAR);
UPSERT INTO TABLE1 (ID, COL1) VALUES (1, 'test_row_1');
UPSERT INTO TABLE1 (ID, COL1) VALUES (2, 'test_row_2');

Loading as a DataFrame

import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext
import org.apache.phoenix.spark._
 
val sc = new SparkContext("local", "phoenix-test")
val sqlContext = new SQLContext(sc)
 
val df = sqlContext.load(
  "org.apache.phoenix.spark",
  Map("table" -> "TABLE1", "zkUrl" -> "phoenix-server:2181")
)
 
df
  .filter(df("COL1") === "test_row_1" && df("ID") === 1L)
  .select(df("ID"))
  .show
 

 

Loading as a DataFrame directly, using a Configuration object

 

import org.apache.hadoop.conf.Configuration
import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext
import org.apache.phoenix.spark._
 
val configuration = new Configuration()
// Can set Phoenix-specific settings, requires 'hbase.zookeeper.quorum'
 
val sc = new SparkContext("local", "phoenix-test")
val sqlContext = new SQLContext(sc)
 
// Load the columns 'ID' and 'COL1' from TABLE1 as a DataFrame
val df = sqlContext.phoenixTableAsDataFrame(
  "TABLE1", Array("ID", "COL1"), conf = configuration
)
 
df.show

Loading data as an RDD, using a ZooKeeper URL

import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext
import org.apache.spark.rdd.RDD
import org.apache.phoenix.spark._
 
val sc = new SparkContext("local", "phoenix-test")
 
// Load the columns 'ID' and 'COL1' from TABLE1 as an RDD
val rdd: RDD[Map[String, AnyRef]] = sc.phoenixTableAsRDD(
  "TABLE1", Seq("ID", "COL1"), zkUrl = Some("phoenix-server:2181")
)
 
rdd.count()
 
val firstId = rdd.first()("ID").asInstanceOf[Long]
val firstCol = rdd.first()("COL1").asInstanceOf[String]

 

 

4.3. Saving to Phoenix

Prepare the target table:

CREATE TABLE OUTPUT_TEST_TABLE (id BIGINT NOT NULL PRIMARY KEY, col1 VARCHAR, col2 INTEGER);

Saving RDDs

The saveToPhoenix method is an implicit method on RDD[Product], or an RDD of Tuples. The data types must correspond to one of the Java types supported by Phoenix.

import org.apache.spark.SparkContext
import org.apache.phoenix.spark._
 
val sc = new SparkContext("local", "phoenix-test")
val dataSet = List((1L, "1", 1), (2L, "2", 2), (3L, "3", 3))
 
sc
  .parallelize(dataSet)
  .saveToPhoenix(
    "OUTPUT_TEST_TABLE",
    Seq("ID","COL1","COL2"),
    zkUrl = Some("phoenix-server:2181")
  )

Saving DataFrames

The save method on DataFrame allows passing in a data source type. You can use org.apache.phoenix.spark, and must also pass in table and zkUrl parameters to specify which table and server to persist the DataFrame to. The column names are derived from the DataFrame's schema field names, and must match the Phoenix column names.

The save method also takes a SaveMode option, for which only SaveMode.Overwrite is supported.

Given two Phoenix tables with the following DDL:

CREATE TABLE INPUT_TABLE (id BIGINT NOT NULL PRIMARY KEY, col1 VARCHAR, col2 INTEGER);
CREATE TABLE OUTPUT_TABLE (id BIGINT NOT NULL PRIMARY KEY, col1 VARCHAR, col2 INTEGER);
import org.apache.spark.SparkContext
import org.apache.spark.sql._
import org.apache.phoenix.spark._
 
// Load INPUT_TABLE
val sc = new SparkContext("local", "phoenix-test")
val sqlContext = new SQLContext(sc)
val df = sqlContext.load("org.apache.phoenix.spark", Map("table" -> "INPUT_TABLE",
  "zkUrl" -> hbaseConnectionString))
 
// Save to OUTPUT_TABLE
df.saveToPhoenix(Map("table" -> "OUTPUT_TABLE", "zkUrl" -> hbaseConnectionString))
 
or
 
df.write
  .format("org.apache.phoenix.spark")
  .mode("overwrite")
  .option("table", "OUTPUT_TABLE")
  .option("zkUrl", "localhost:2181")
  .save()

 

4.4. Reference

http://phoenix.apache.org/phoenix_spark.html

 

  • Secondary Indexes

 

In HBase there is only a single index: the rowkey, sorted lexicographically. Lookups by rowkey are fast, but a query that does not use the rowkey falls back to a filter over a full table scan, which greatly reduces retrieval performance. Phoenix provides secondary indexing to handle queries on conditions other than the rowkey.

 

 

 

  1. Covered Indexes

Phoenix is particularly powerful because it provides covered indexes: once the index entry has been found, there is no need to go back to the primary table. Instead, the data we care about is bundled into the index row itself, saving read-time overhead.

For example, the following creates an index on the v1 and v2 columns and includes the v3 column in the index, so that v3 never has to be fetched from the data table:

        CREATE INDEX my_index ON my_table (v1,v2) INCLUDE(v3)
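
With this index in place, a query that references only v1, v2, and v3 (hypothetical values below) can be answered entirely from my_index, without a trip back to my_table:

SELECT v1, v2, v3 FROM my_table WHERE v1 = 'foo' AND v2 = 'bar';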

 

  2. Functional Indexes

Functional indexes (available from Phoenix 4.3 onward) let you create an index not only on columns but on arbitrary expressions. When a query uses that expression, the index can be used to retrieve the results instead of the data table. For example, you could create an index on UPPER(FIRST_NAME||' '||LAST_NAME) to allow case-insensitive searches on a person's combined first and last name.

 

Example:

CREATE INDEX UPPER_NAME_IDX ON EMP (UPPER(FIRST_NAME||' '||LAST_NAME))

SELECT EMP_ID FROM EMP WHERE UPPER(FIRST_NAME||' '||LAST_NAME)='JOHN DOE'

 

  3. Global Indexes

Global indexes suit workloads with many reads and few writes. With global indexing, writes are expensive: every update to the data table (DELETE, UPSERT VALUES and UPSERT SELECT) also updates the index table, and because the index table is distributed across different region servers, the cross-node data transfer carries a significant performance cost. At read time, Phoenix chooses the index table to reduce query time. By default, if a queried column is not part of the index, the index table is not used, so there is no query speed-up.

Example:

create index my_index on company(name);

 

Global mutable index will not be used unless all of the columns referenced in the query are contained in the index.
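
As an illustration (the address column is hypothetical), the first query below references only indexed columns and is served by my_index; the second references a column that is not in the index and therefore scans the data table by default, unless the index is forced with a hint:

-- only name is referenced: my_index is used
SELECT name FROM company WHERE name = 'Apache';

-- address is not in the index: full scan of company by default
SELECT name, address FROM company WHERE name = 'Apache';

-- the hint forces the index; the missing column is then fetched from the data table
SELECT /*+ INDEX(company my_index) */ name, address FROM company WHERE name = 'Apache';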

 

 

Storage:

(The diagram of global index storage from the original document is not reproduced in this text version.)

Reference: https://blog.csdn.net/maomaosi2009/article/details/45600109

 

 

  4. Local Indexes

Local indexing suits write-heavy workloads. As with global indexing, Phoenix decides automatically at query time whether to use the index. With local indexing, the index data and the table data are stored on the same servers, which avoids the overhead of writing index entries to index tables on other servers during writes. With a local index, the index is used even when the queried columns are not index columns, which speeds up such queries; this differs from global indexing. All of a table's local index data is stored in a single, separate, shared table. At read time there is some performance loss, because the location of the region holding the data cannot be predicted in advance.

 

create local index my_index on company(name);
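
For comparison with the global-index case (again with a hypothetical address column), the following query can still use the local index on name even though address is not an indexed column; the missing column is read from the co-located data region:

SELECT name, address FROM company WHERE name = 'Apache';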

 

Storage:

(The diagram of local index storage from the original document is not reproduced in this text version.)

Reference: https://blog.csdn.net/maomaosi2009/article/details/45619047

 

  5. Performance Comparison

1. Global indexes vs. local indexes

Environment:
1. Results were obtained with JDBC batch inserts and query fetches.
2. Read sampling: 100,000 rows read out of 1,000,000 rows; 1,000,000 rows read out of 10,000,000 rows.
3. Versions: HBase 1.1.2, Phoenix 4.7.0.

Notes:
1. Splitting a composite index into separate single-column indexes leads to FULL SCANs and low write efficiency, and combined-condition queries are no faster than with a single composite index.
2. For a composite index built on three columns (A, B, C), the filter condition must include A; otherwise the query is a full scan (see the sketch below).
3. Local and global indexes differ in storage: a global index is stored separately, while a local index is stored together with the data.
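
A minimal sketch of note 2, with hypothetical table and column names; EXPLAIN confirms whether the plan uses the index or falls back to a full scan:

CREATE INDEX idx_abc ON t (a, b, c);

-- the leading column a appears in the filter: the index can be used
EXPLAIN SELECT a, b, c FROM t WHERE a = 1 AND b = 2;

-- a is missing from the filter: full scan by default (per note 2)
EXPLAIN SELECT a, b, c FROM t WHERE b = 2 AND c = 3;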

 
 
 
 
 
 

 

2. Performance comparison with Hive and Impala

Phoenix vs Hive (running over HDFS and HBase)

Query: select count(1) from table over 10M and 100M rows. Data is 5 narrow columns. Number of Region Servers: 4 (HBase heap: 10GB, Processor: 6 cores @ 3.3GHz Xeon)

 

Phoenix vs Impala (running over HBase)

Query: select count(1) from table over 1M and 5M rows. Data is 3 narrow columns. Number of Region Server: 1 (Virtual Machine, HBase heap: 2GB, Processor: 2 cores @ 3.3GHz Xeon)

 

 

Quick Reference

 

Commands

SELECT, DROP TABLE, DROP INDEX, UPSERT VALUES, CREATE FUNCTION, ALTER INDEX, UPSERT SELECT, DROP FUNCTION, EXPLAIN, DELETE, CREATE VIEW, UPDATE STATISTICS, DECLARE CURSOR, DROP VIEW, CREATE SCHEMA, OPEN CURSOR, CREATE SEQUENCE, USE, FETCH NEXT, DROP SEQUENCE, DROP SCHEMA, CLOSE, ALTER, GRANT, CREATE TABLE, CREATE INDEX, REVOKE

 

 

 

Other Grammar

Constraint, Join Type, Value, Options, Func Argument, Case, Hint, Class Name, Case When, Scan Hint, Jar Path, Name, Cache Hint, Order, Quoted Name, Index Hint, Expression, Alias, Small Hint, And Condition, Null, Seek To Column Hint, Boolean Condition, Data Type, Join Hint, Condition, SQL Data Type, Serial Hint, RHS Operand, HBase Data Type, Column Def, Operand, String, Table Ref, Summand, Boolean, Sequence Ref, Factor, Numeric, Column Ref, Term, Int, Select Expression, Array Constructor, Long, Select Statement, Sequence, Decimal, Split Point, Cast, Number, Table Spec, Row Value Constructor, Comments, Aliased Table Ref, Bind Parameter

 

 

See the official grammar reference:

http://phoenix.apache.org/language/index.html

 

SELECT

Example:

SELECT * FROM TEST LIMIT 1000;

SELECT * FROM TEST LIMIT 1000 OFFSET 100;

SELECT full_name FROM SALES_PERSON WHERE ranking >= 5.0 UNION ALL SELECT reviewer_name FROM CUSTOMER_REVIEW WHERE score >= 8.0

UPSERT VALUES

Example:

UPSERT INTO TEST VALUES('foo','bar',3);

UPSERT INTO TEST(NAME,ID) VALUES('foo',123);

UPSERT SELECT

Example:

UPSERT INTO test.targetTable(col1, col2) SELECT col3, col4 FROM test.sourceTable WHERE col5 < 100

UPSERT INTO foo SELECT * FROM bar;

DELETE

Example:

DELETE FROM TEST;

DELETE FROM TEST WHERE ID=123;

DELETE FROM TEST WHERE NAME LIKE 'foo%';

CREATE TABLE

CREATE TABLE my_schema.my_table ( id BIGINT not null primary key, date)

CREATE TABLE my_table ( id INTEGER not null primary key desc, date DATE not null,m.db_utilization DECIMAL, i.db_utilization) m.DATA_BLOCK_ENCODING='DIFF'

CREATE TABLE stats.prod_metrics ( host char(50) not null, created_date date not null,txn_count bigint CONSTRAINT pk PRIMARY KEY (host, created_date) )

CREATE TABLE IF NOT EXISTS "my_case_sensitive_table"

    ( "id" char(10) not null primary key, "value" integer)

    DATA_BLOCK_ENCODING='NONE',VERSIONS=5,MAX_FILESIZE=2000000 split on (?, ?, ?)

CREATE TABLE IF NOT EXISTS my_schema.my_table (org_id CHAR(15), entity_id CHAR(15), payload binary(1000),CONSTRAINT pk PRIMARY KEY (org_id, entity_id) )TTL=86400

DROP TABLE

Example:

DROP TABLE my_schema.my_table;

DROP TABLE IF EXISTS my_table;

DROP TABLE my_schema.my_table CASCADE;

JOIN TYPE

Example:


INNER

LEFT OUTER

RIGHT

        

This part describes how joins are executed:

  1. lhs INNER JOIN rhs

rhs will be built as hash table in server cache.

  2. lhs LEFT OUTER JOIN rhs

rhs will be built as hash table in server cache.

  3. lhs RIGHT OUTER JOIN rhs

lhs will be built as hash table in server cache.

 

Let's take the following query as an example:

SELECT O.OrderID, C.CustomerName, I.ItemName, I.Price, O.Quantity

FROM Orders AS O

INNER JOIN Customers AS C

ON O.CustomerID = C.CustomerID

INNER JOIN Items AS I

ON O.ItemID = I.ItemID;

 

The default join order (using star-join optimization) will be:

1. SCAN Customers --> BUILD HASH[0]

   SCAN Items --> BUILD HASH[1]

2. SCAN Orders JOIN HASH[0], HASH[1] --> Final Resultset

 

Alternatively, if we use hint “NO_STAR_JOIN”:

SELECT /*+ NO_STAR_JOIN*/ O.OrderID, C.CustomerName, I.ItemName, I.Price, O.Quantity

FROM Orders AS O

INNER JOIN Customers AS C

ON O.CustomerID = C.CustomerID

INNER JOIN Items AS I

ON O.ItemID = I.ItemID;

 

 

The join order will be:

1. SCAN Customers --> BUILD HASH[0]

2. SCAN Orders JOIN HASH[0]; CLOSE HASH[0] --> BUILD HASH[1]

3. SCAN Items JOIN HASH[1] --> Final Resultset

 

It is also worth mentioning that not the entire dataset of the table should be counted into the memory consumption. Instead, only the columns used by the query, and only the records that satisfy the predicates, will be built into the server-side hash table.

 

For the full join documentation, see the official site: http://phoenix.apache.org/joins.html

 

CREATE FUNCTION

Example:

CREATE FUNCTION my_reverse(varchar) returns varchar as 'com.mypackage.MyReverseFunction' using jar 'hdfs:/localhost:8080/hbase/lib/myjar.jar'

CREATE FUNCTION my_reverse(varchar) returns varchar as 'com.mypackage.MyReverseFunction'

CREATE FUNCTION my_increment(integer, integer constant defaultvalue='10') returns integer as 'com.mypackage.MyIncrementFunction' using jar '/hbase/lib/myincrement.jar'

CREATE TEMPORARY FUNCTION my_reverse(varchar) returns varchar as 'com.mypackage.MyReverseFunction' using jar 'hdfs:/localhost:8080/hbase/lib/myjar.jar'

DROP FUNCTION

Example:

 

DROP FUNCTION IF EXISTS my_reverse

DROP FUNCTION my_reverse

CREATE VIEW

Example:

CREATE VIEW "my_hbase_table"( k VARCHAR primary key, "v" UNSIGNED_LONG) default_column_family='a';

CREATE VIEW my_view ( new_col SMALLINT ) AS SELECT * FROM my_table WHERE k = 100;

CREATE VIEW my_view_on_view AS SELECT * FROM my_view WHERE new_col > 70;

DROP VIEW

Example:

 

DROP VIEW my_view

DROP VIEW IF EXISTS my_schema.my_view

DROP VIEW IF EXISTS my_schema.my_view CASCADE

CREATE SEQUENCE

Example:

CREATE SEQUENCE my_sequence;

CREATE SEQUENCE my_sequence START WITH -1000

CREATE SEQUENCE my_sequence INCREMENT BY 10

CREATE SEQUENCE my_schema.my_sequence START 0 CACHE 10

DROP SEQUENCE

Example:

DROP SEQUENCE my_sequence

DROP SEQUENCE IF EXISTS my_schema.my_sequence

ALTER

Example:

ALTER TABLE my_schema.my_table ADD d.dept_id char(10) VERSIONS=10

ALTER TABLE my_table ADD dept_name char(50), parent_id char(15) null primary key

ALTER TABLE my_table DROP COLUMN d.dept_id, parent_id;

ALTER VIEW my_view DROP COLUMN new_col;

ALTER TABLE my_table SET IMMUTABLE_ROWS=true,DISABLE_WAL=true;

CREATE INDEX

Example:

CREATE INDEX my_idx ON sales.opportunity(last_updated_date DESC)

CREATE INDEX my_idx ON log.event(created_date DESC) INCLUDE (name, payload) SALT_BUCKETS=10

CREATE INDEX IF NOT EXISTS my_comp_idx ON server_metrics ( gc_time DESC, created_date DESC ) DATA_BLOCK_ENCODING='NONE',VERSIONS=?,MAX_FILESIZE=2000000 split on (?, ?, ?)

CREATE INDEX my_idx ON sales.opportunity(UPPER(contact_name))

DROP INDEX

Example:

DROP INDEX my_idx ON sales.opportunity

DROP INDEX IF EXISTS my_idx ON server_metrics

ALTER INDEX

Example:

ALTER INDEX my_idx ON sales.opportunity DISABLE

ALTER INDEX IF EXISTS my_idx ON server_metrics REBUILD

EXPLAIN

Example:

EXPLAIN SELECT NAME, COUNT(*) FROM TEST GROUP BY NAME HAVING COUNT(*) > 2;

EXPLAIN SELECT entity_id FROM CORE.CUSTOM_ENTITY_DATA WHERE organization_id='00D300000000XHP' AND SUBSTR(entity_id,1,3) = '002' AND created_date < CURRENT_DATE()-1;

UPDATE STATISTICS

Example:

UPDATE STATISTICS my_table

UPDATE STATISTICS my_schema.my_table INDEX

UPDATE STATISTICS my_index

UPDATE STATISTICS my_table COLUMNS

UPDATE STATISTICS my_table SET phoenix.stats.guidepost.width=50000000

CREATE SCHEMA

Example:

CREATE SCHEMA IF NOT EXISTS my_schema

CREATE SCHEMA my_schema

USE

Example:

USE my_schema

USE DEFAULT

DROP SCHEMA

Example:

DROP SCHEMA IF EXISTS my_schema

DROP SCHEMA my_schema

 

 

 


Source (reposted from): blog.csdn.net/zhanglong_4444/article/details/84870279