SAP HANA Database SQL Syntax

1 Constraints
1.1 Comments
1.2 Identifiers
1.3 Single quotes
1.4 Double quotes
1.5 SQL reserved words
2 Data types
2.1 Date and time types
2.1.1 DATE (date)
2.1.2 TIME (time)
2.1.3 SECONDDATE (date + time)
2.1.4 TIMESTAMP (timestamp)
2.2 Number type
2.2.1 TINYINT
2.2.2 SMALLINT
2.2.3 INTEGER
2.2.4 BIGINT
2.2.5 DECIMAL (precision, scale) or DEC (p, s)
2.2.6 SMALLDECIMAL
2.2.7 REAL
2.2.8 DOUBLE
2.2.9 FLOAT( n )
2.3 Character type
2.3.1 VARCHAR
2.3.2 NVARCHAR
2.3.3 ALPHANUM
2.3.4 SHORTTEXT
2.4 Binary types

2.4.1 VARBINARY
2.5 Large object (LOB) types
2.5.1 BLOB
2.5.2 CLOB
2.5.3 NCLOB
2.5.4 TEXT
2.6 Mapping between SQL data types and column store data types

2.7 Data type conversion
2.7.1 Explicit type conversion
2.7.2 Implicit type conversion
2.7.3 Conversion rule table
2.7.4 Type conversion precedence
2.8 Typed constants
2.8.1 String constants
2.8.2 Numeric constants
2.8.3 Hexadecimal constants
2.8.4 Binary string constants
2.8.5 Date, time, and timestamp constants
3 Predicates
3.1 Comparison predicates
3.2 BETWEEN predicate
3.3 IN predicate
3.4 EXISTS predicate
3.5 LIKE predicate
3.6 NULL predicate
3.7 CONTAINS predicate
4 Operators
4.1 Unary and binary operators
4.2 Operator precedence
4.3 Arithmetic operators
4.4 String operators
4.5 Comparison operators
4.6 Logical operators
4.7 Set operators
5 Expressions
5.1 Case Expressions
5.2 Function Expressions
5.3 Aggregate Expressions
5.4 Subqueries in Expressions
6 SQL Functions
6.1 Data Type Conversion Functions
6.1.1 CAST
6.1.2 TO_ALPHANUM
6.1.3 TO_BIGINT
6.1.4 TO_BINARY
6.1.5 TO_BLOB
6.1.6 TO_CHAR
6.1.7 TO_CLOB
6.1.8 TO_DATE
6.1.9 TO_DATS
6.1.10 TO_DECIMAL
6.1.11 TO_DOUBLE
6.1.12 TO_INT
6.1.13 TO_INTEGER
6.1.14 TO_NCHAR
6.1.15 TO_NCLOB
6.1.16 TO_NVARCHAR
6.1.17 TO_REAL
6.1.18 TO_SECONDDATE
6.1.19      TO_SMALLDECIMAL
6.1.20      TO_SMALLINT
6.1.21      TO_TIME
6.1.22      TO_TIMESTAMP
6.1.23      TO_TINYINT
6.1.24      TO_VARCHAR
6.2 Date and time functions
6.2.1        ADD_DAYS
6.2.2        ADD_MONTHS
6.2.3        ADD_SECONDS
6.2.4        ADD_YEARS
6.2.5        CURRENT_DATE
6.2.6        CURRENT_TIME
6.2.7        CURRENT_TIMESTAMP
6.2.8 CURRENT_UTCDATE
6.2.9        CURRENT_UTCTIME
6.2.10      CURRENT_UTCTIMESTAMP
6.2.11      DAYNAME
6.2.12      DAYOFMONTH
6.2.13      DAYOFYEAR
6.2.14      DAYS_BETWEEN
6.2.15      EXTRACT
6.2.16      HOUR
6.2.17      ISOWEEK
6.2.18      LAST_DAY
6.2.19      LOCALTOUTC
6.2.20      MINUTE
6.2.21      MONTH
6.2.22      MONTHNAME
6.2.23      NEXT_DAY
6.2.24      NOW
6.2.25      QUARTER
6.2.26      SECOND
6.2.27      SECONDS_BETWEEN
6.2.28      UTCTOLOCAL
6.2.29      WEEK
6.2.30      WEEKDAY
6.2.31      YEAR
6.3 Numeric functions
6.3.1        ABS
6.3.2        ACOS
6.3.3        ASIN
6.3.4        ATAN
6.3.5        ATAN2
6.3.6        BINTOHEX
6.3.7        BITAND
6.3.8        CEIL
6.3.9        COS
6.3.10      COSH
6.3.11      COT
6.3.12      EXP
6.3.13      FLOOR
6.3.14      GREATEST
6.3.15      HEXTOBIN
6.3.16      LEAST
6.3.17      LN
6.3.18      LOG
6.3.19      MOD
6.3.20      POWER
6.3.21      ROUND
6.3.22      SIGN
6.3.23      SIN
6.3.24      SINH
6.3.25      SQRT
6.3.26      TAN
6.3.27      TANH
6.3.28      UMINUS
6.4 String functions
6.4.1        ASCII
6.4.2        CHAR
6.4.3        CONCAT
6.4.4        LCASE
6.4.5        LEFT
6.4.6        LENGTH
6.4.7        LOCATE
6.4.8        LOWER
6.4.9        LPAD
6.4.10      LTRIM
6.4.11      NCHAR
6.4.12      REPLACE
6.4.13      RIGHT
6.4.14      RPAD
6.4.15      RTRIM
6.4.16      SUBSTR_AFTER
6.4.17      SUBSTR_BEFORE
6.4.18      SUBSTRING
6.4.19      TRIM
6.4.20      UCASE
6.4.21      UNICODE
6.4.22      UPPER
6.5 Miscellaneous functions
6.5.1        COALESCE
6.5.2        CURRENT_CONNECTION
6.5.3 CURRENT_SCHEMA
6.5.4        CURRENT_USER
6.5.5        GROUPING_ID
6.5.6        IFNULL
6.5.7        MAP
6.5.8        NULLIF
6.5.9        SESSION_CONTEXT
6.5.10      SESSION_USER
6.5.11      SYSUUID
7 SQL statements
7.1 Data definition statements
7.1.1        ALTER AUDIT POLICY
7.1.2        ALTER FULLTEXT INDEX
7.1.3        ALTER INDEX
7.1.4        ALTER SEQUENCE
7.1.5        ALTER TABLE
7.1.6        CREATE AUDIT POLICY
7.1.7        CREATE FULLTEXT INDEX
7.1.8        CREATE INDEX
7.1.9        CREATE SCHEMA
7.1.10      CREATE SEQUENCE
7.1.11      CREATE SYNONYM
7.1.12      CREATE TABLE
7.1.13      CREATE TRIGGER
7.1.14      CREATE VIEW
7.1.15      DROP AUDIT POLICY
7.1.16      DROP FULLTEXT INDEX
7.1.17      DROP INDEX
7.1.18      DROP SCHEMA
7.1.19 DROP SEQUENCE
7.1.20 DROP SYNONYM
7.1.21 DROP TABLE
7.1.22 DROP TRIGGER
7.1.23 DROP VIEW
7.1.24 RENAME COLUMN
7.1.25 RENAME INDEX
7.1.26 RENAME TABLE
7.1.27 ALTER TABLE ALTER TYPE
7.1.28 TRUNCATE TABLE
7.2 Data manipulation statement
7.2.1 DELETE
7.2.2 EXPLAIN PLAN
7.2.3 INSERT
7.2.4 LOAD
7.2.5 MERGE DELTA
7.2.6 REPLACE | UPSERT
7.2.7 SELECT
7.2.8 UNLOAD
7.2.9 UPDATE
7.3 System management statement
7.3.1 SET SYSTEM LICENSE
7.3.2        ALTER SYSTEM ALTER CONFIGURATION
7.3.3        ALTER SYSTEM ALTER SESSION SET
7.3.4        ALTER SYSTEM ALTER SESSION UNSET
7.3.5        ALTER SYSTEM CANCEL [WORK IN] SESSION
7.3.6        ALTER SYSTEM CLEAR SQL PLAN CACHE
7.3.7        ALTER SYSTEM CLEAR TRACES
7.3.8        ALTER SYSTEM DISCONNECT SESSION
7.3.9        ALTER SYSTEM LOGGING
7.3.10      ALTER SYSTEM RECLAIM DATAVOLUME
7.3.11      ALTER SYSTEM RECLAIM LOG
7.3.12      ALTER SYSTEM RECLAIM VERSION SPACE
7.3.13      ALTER SYSTEM RECONFIGURE SERVICE
7.3.14      ALTER SYSTEM REMOVE TRACES
7.3.15      ALTER SYSTEM RESET MONITORING VIEW
7.3.16      ALTER SYSTEM SAVE PERFTRACE
7.3.17      ALTER SYSTEM SAVEPOINT
7.3.18      ALTER SYSTEM START PERFTRACE
7.3.19      ALTER SYSTEM STOP PERFTRACE
7.3.20      ALTER SYSTEM STOP SERVICE
7.3.21      UNSET SYSTEM LICENSE ALL
7.4 Session management statements
7.4.1        CONNECT
7.4.2        SET HISTORY SESSION
7.4.3        SET SCHEMA
7.4.4        SET [SESSION]
7.4.5        UNSET [SESSION]
7.5 Transaction management statements
7.5.1        COMMIT
7.5.2        LOCK TABLE
7.5.3        ROLLBACK
7.5.4        SET TRANSACTION
7.6 Access control statements
7.6.1        ALTER SAML PROVIDER
7.6.2        ALTER USER
7.6.3        CREATE ROLE
7.6.4 CREATE SAML PROVIDER
7.6.5 CREATE USER
7.6.6 DROP ROLE
7.6.7 DROP SAML PROVIDER
7.6.8 DROP USER
7.6.9 GRANT
7.6.10 REVOKE
7.7 Data import and export statement
7.7.1 EXPORT
7.7.2 IMPORT
7.7.3 IMPORT FROM


1 Constraints
1.1 Comments
You can add comments to your SQL statements to improve readability and maintainability. Comments are delimited in SQL statements as follows:

- Double hyphen "--". Everything after the double hyphen up to the end of the line is treated as a comment by the SQL parser.

l "/*" and "*/". This type of comment is used to comment multiline content. All text between the quotation mark "/*" and the closing character "*/" will be ignored by the SQL parser.

1.2 Identifiers
Identifiers are used to represent names in SQL statements, including table names, view names, synonyms, column names, index names, function names, stored procedure names, user names, role names, and so on. There are two types of identifiers: undelimited identifiers and delimited identifiers (identifiers enclosed in a delimiter, the double quote).

- Undelimited table and column names must start with a letter and may contain only letters, digits, and underscores.

- Delimited identifiers are enclosed in the delimiter, double quotes; the identifier can then contain any character, including special characters. For example, "AB$%CD" is a valid identifier.

- Restrictions:

o "_SYS_" is reserved for the database engine and is therefore not allowed at the beginning of schema object names.

o Role names and user names must be specified without delimiters.

o Identifiers have a maximum length of 127 characters.
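For example (a sketch; the table and column names are illustrative):

create table mytab1 (col_1 int); -- undelimited identifiers: letters, digits, underscores only

create table "AB$%CD" ("my col" int); -- delimited identifiers may contain any characters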

1.3 Single quotes
Single quotes delimit string literals. Two consecutive single quotes represent the single quote character itself.
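For example:

select 'Brian''s book' "single quoted" from dummy;--Brian's book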

1.4 Double quotes
Double quotes delimit identifiers. Two consecutive double quotes represent the double quote character itself.

1.5 SQL Reserved Words
Reserved words have special meaning to the SQL parser of the SAP HANA database and cannot be used as user-defined names. Reserved words cannot be used as schema object names in SQL statements. If necessary, you can work around this restriction by quoting table or column names with double quotes.
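For example (a sketch; the table name is illustrative), quoting lets the reserved word GROUP be used as a column name:

create table t_reserved ("GROUP" varchar(10));

select "GROUP" from t_reserved;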

The following table lists the current and future reserved words of the SAP HANA database:

2 Data types


2.1 Date and time types
2.1.1 DATE (date)
The DATE data type consists of year, month, and day information and represents a date value. The default format of the DATE type is 'YYYY-MM-DD', where YYYY is the year, MM the month, and DD the day. Date values range from 0001-01-01 to 9999-12-31.

select to_date('365','DDD') from dummy;

select to_date('2015/365','YYYY/ddd') from dummy;

select to_date('2015-january','YYYY-month') from dummy;

select to_date('2015-February/28','yyyy-moNth/dd') from dummy;

select to_date('2015-Jan/31','yyyy-mon/dd') from dummy;

select to_date('2015/2-1','yyyy/mM-dd') from dummy;

select to_date('2015/02-01','yyyy/mM-dd') from dummy;

select to_date('2015+02=01','yyyy+mM=dd') from dummy;

select to_date('20150201','yyyymmdd') from dummy;

2.1.2 TIME (time)
The TIME data type consists of hour, minute, and second information and represents a time value. The default format of the TIME type is 'HH24:MI:SS', where HH24 is the hour from 0 to 24, MI the minute from 0 to 59, and SS the second from 0 to 59.

select to_time('1:1:1 PM','HH:MI:SS PM') from dummy;

select to_time('1:1:1','HH:MI:SS') from dummy;

select to_time('1:1:1','HH24:MI:SS') from dummy;

2.1.3 SECONDDATE (date + time)
The SECONDDATE data type consists of year, month, day, hour, minute, and second and represents a combined date and time value.

The default format of the SECONDDATE type is 'YYYY-MM-DD HH24:MI:SS', where YYYY is the year, MM the month, DD the day, HH24 the hour, MI the minute, and SS the second. Values range from 0001-01-01 00:00:01 to 9999-12-31 24:00:00.
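A minimal example in the style of the TO_DATE examples above:

select to_seconddate('2015-06-12 16:30:00','YYYY-MM-DD HH24:MI:SS') from dummy;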

2.1.4 TIMESTAMP (timestamp)
The TIMESTAMP data type combines date and time information into a timestamp. The default format is 'YYYY-MM-DD HH24:MI:SS.FF7'. FFn stands for fractional seconds, where n is the number of digits in the fractional part. Timestamps range from 0001-01-01 00:00:00.0000000 to 9999-12-31 23:59:59.9999999.

select to_timestamp('2015/1/2 1:1:1','YYYY/MM/DD HH:MI:SS') from dummy;

select to_timestamp('2015/1/2 1:1:1.999','YYYY/MM/DD HH:MI:SS.FF3') from dummy;

select to_timestamp('2015/1/2 1:1:1.9999999','YYYY/MM/DD HH:MI:SS.FF7') from dummy;

select current_timestamp from dummy;--2015-6-12 16:50:26.349

select to_char(current_timestamp,'D') from dummy;--5 (note: this appears to be the day of the week)

select to_char(current_timestamp,'DD') from dummy;--12

select to_char(current_timestamp,'DDD') from dummy;--163

select to_char(current_timestamp,'Day') from dummy;--Friday

select to_char(current_timestamp,'Dy') from dummy;--Fri

select to_char(current_timestamp,'mon') from dummy;--jun

select to_char(current_timestamp,'month') from dummy;--june

select to_char(current_timestamp,'rm') from dummy;--vi

select to_char(current_timestamp,'q') from dummy;--2

select to_char(current_timestamp,'w') from dummy;--2

select to_char(current_timestamp,'ww') from dummy;--24

select to_char(current_timestamp,'FF7') from dummy;--1260000

select to_char(current_timestamp,'YY') from dummy;--15

2.2 Number Types
2.2.1 TINYINT
The TINYINT data type stores an 8-bit (1 byte) unsigned integer. The minimum value for TINYINT is 0 and the maximum value is 255.

2.2.2 SMALLINT
The SMALLINT data type stores a 16-bit (2-byte) signed integer. The minimum value of SMALLINT is -32,768 and the maximum is 32,767.

2.2.3 INTEGER
The INTEGER data type stores a 32-bit (4-byte) signed integer. The minimum value of INTEGER is -2,147,483,648 and the maximum is 2,147,483,647.

2.2.4 BIGINT
The BIGINT data type stores a 64-bit (8-byte) signed integer. The minimum value of BIGINT is -9,223,372,036,854,775,808 and the maximum is 9,223,372,036,854,775,807.

2.2.5 DECIMAL (precision, scale) or DEC (p, s)
The DECIMAL(p, s) data type specifies a fixed-point decimal with precision p and scale s. Precision is the total number of significant digits and ranges from 1 to 34.

Scale is the number of digits from the decimal point to the least significant digit and ranges from -6,111 to 6,176; that is, the scale specifies that the exponent of the decimal ranges from 10^-6111 to 10^6176. If no scale is specified, the default is 0.

Scale is positive when the significant digits lie to the right of (after) the decimal point, and negative when they lie to the left of (before) it.

Examples:

0.0000001234 (1234 x 10^-10) has precision 4 and scale 10.

1.0000001234 (10000001234 x 10^-10) has precision 11 and scale 10.

1234000000 (1234 x 10^6) has precision 4 and scale -6.

When precision and scale are not specified, DECIMAL becomes a floating-point decimal. In this case, precision and scale can vary within the ranges described above depending on the stored value: precision 1 to 34 and scale -6,111 to 6,176.

2.2.6 SMALLDECIMAL
SMALLDECIMAL is a floating-point decimal number. Its precision and scale can vary within fixed ranges depending on the stored value: precision 1 to 16 and scale -369 to 368. SMALLDECIMAL is supported only in column-store tables.

DECIMAL and SMALLDECIMAL are both floating-point decimals. For example, a DECIMAL column can store 3.14, 3.1415, and 3.141592 while preserving the precision of each value.

DECIMAL(p, s) is the SQL standard notation for fixed-point decimals. For example, 3.14, 3.1415, and 3.141592 stored in a DECIMAL(5, 4) column become 3.1400, 3.1415, and 3.1416, each kept at precision 5 and scale 4.
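The fixed-point behavior can be observed directly (a sketch; the table name is illustrative):

create table dec_test (n decimal(5,4));

insert into dec_test values (3.141592); -- stored as 3.1416: scale 4 forces rounding

select n from dec_test;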

2.2.7 REAL
The REAL data type defines a 32-bit (4-byte) single-precision floating-point number.

2.2.8 DOUBLE
The DOUBLE data type defines a 64-bit (8-byte) double-precision floating-point number with a minimum value of -1.79769 x 10^308 and a maximum value of 1.79769 x 10^308. The smallest positive DOUBLE value is 2.2207 x 10^-308 and the largest negative value is -2.2207 x 10^-308.

2.2.9 FLOAT (n)
The FLOAT data type defines a 32-bit or 64-bit real number, where n specifies the number of significant digits and can range from 1 to 53.

When you use the FLOAT(n) data type, if n is smaller than 25 it becomes a 32-bit REAL type; if n is greater than or equal to 25 it becomes a 64-bit DOUBLE type. If n is not declared, it defaults to the 64-bit DOUBLE data type.
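For example (a sketch; the table name is illustrative):

create table float_test (a float(10), b float(30), c float);

-- a is stored as a 32-bit REAL (n < 25); b and c become 64-bit DOUBLE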

2.3 Character types
Character types store string values. The VARCHAR type holds ASCII strings, while NVARCHAR stores Unicode strings.

2.3.1 VARCHAR
The VARCHAR(n) data type defines a variable-length ASCII string, where n is the maximum length, an integer from 1 to 5000.

2.3.2 NVARCHAR
The NVARCHAR(n) data type defines a variable-length Unicode string, where n is the maximum length, an integer from 1 to 5000.

2.3.3 ALPHANUM
The ALPHANUM(n) data type defines a variable-length alphanumeric string, where n is the maximum length, an integer from 1 to 127.

2.3.4 SHORTTEXT
The SHORTTEXT(n) data type defines a variable-length string that supports text search and string search features.

This is not a standard SQL type. Selecting a SHORTTEXT(n) column yields a column of type NVARCHAR(n).

2.4 Binary types

Binary types store bytes of binary data.

2.4.1 VARBINARY
The VARBINARY(n) data type stores binary data of a specified maximum length in bytes, where n is the maximum length, an integer from 1 to 5000.

2.5 Large object (LOB) types


LOB (Large Object) data types, CLOB, NCLOB and BLOB, are used to store large amounts of data such as text files and images. The maximum size of a LOB is 2GB.

2.5.1 BLOB
The BLOB data type is used to store large binary data.

2.5.2 CLOB
The CLOB data type is used to store large ASCII character data.

2.5.3 NCLOB
The NCLOB data type is used to store large Unicode character objects.

2.5.4 TEXT
The TEXT data type supports the text search function and is not an independent SQL type. Selecting a TEXT column yields a column of type NCLOB.

LOB types are used to store and retrieve large amounts of data. The LOB type supports the following operations:

- LENGTH() returns the length of the LOB in bytes.

- LIKE can be used to search LOB columns.

LOB types have the following restrictions:

- LOB columns cannot appear in an ORDER BY or GROUP BY clause.

- LOB columns cannot appear in the FROM clause as part of a join predicate.

- LOB columns cannot appear as predicates in a WHERE clause, except with LIKE, CONTAINS, = or <>.

- LOB columns cannot appear in the SELECT clause as an argument to an aggregate function.

- LOB columns cannot appear in a SELECT DISTINCT list.

- LOB columns cannot be used in set operations, with the exception of UNION ALL.

- LOB columns cannot be used as primary keys.

- LOB columns cannot be used with the CREATE INDEX statement.

- LOB columns cannot be used with statistics update statements.
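The supported operations can be sketched as follows (table and column names are illustrative):

create table doc_store (id int primary key, body clob);

select length(body) from doc_store; -- LENGTH returns the LOB length in bytes

select id from doc_store where body like '%HANA%'; -- LIKE may be used to search a LOB column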

2.6 Mapping between SQL data types and column store data types


2.7 Data type conversion


This section describes the type conversions allowed in SAP HANA database.

2.7.1 Explicit type conversion
The type of an expression result, such as a field, a function on a field, or a literal, can be converted using the following functions: CAST, TO_ALPHANUM, TO_BIGINT, TO_VARBINARY, TO_BLOB, TO_CLOB, TO_DATE, TO_DATS, TO_DECIMAL, TO_DOUBLE, TO_INTEGER, TO_INT, TO_NCLOB, TO_NVARCHAR, TO_REAL, TO_SECONDDATE, TO_SMALLINT, TO_TINYINT, TO_TIME, TO_TIMESTAMP, TO_VARCHAR.

2.7.2 Implicit type conversion
When a given operator/parameter type combination does not match the expected type, the SAP HANA database performs a type conversion. This conversion happens only when the relevant conversion is available and makes the operator/argument combination executable.

For example, comparisons between BIGINT and VARCHAR are performed by implicitly converting VARCHAR to BIGINT.

Wherever an implicit conversion is available, an explicit conversion can be used as well, except for the TIME and TIMESTAMP data types: TIME and TIMESTAMP can be converted to each other using TO_TIME(TIMESTAMP) and TO_TIMESTAMP(TIME).
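For example (a sketch; T and its BIGINT column are hypothetical):

select * from T where bigint_col = '123'; -- the VARCHAR literal is implicitly converted to BIGINT

select * from T where bigint_col = to_bigint('123'); -- the same comparison with an explicit conversion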

2.7.3 Conversion rule table
In the following table:

 "OK" in the box indicates that the data type conversion is allowed without any checks.

 "CHK" in the box indicates that the data type conversion will only be performed if the data is a valid target type.

 "-" in the box means that the data type conversion is not allowed.

The rules shown below apply to both implicit and explicit conversions, except for TIME-to-TIMESTAMP conversion: the TIME type can only be converted explicitly, using the TO_TIMESTAMP or CAST function.

2.7.4 Type conversion precedence
This section describes the data type precedence implemented by the SAP HANA database. Data type precedence specifies that the type with lower precedence is converted to the type with higher precedence.

2.8 Typed constants
A constant is a symbol that represents a specific fixed value.

2.8.1 String constants
String constants are enclosed in single quotes.

o        'Brian'

o        '100'

A Unicode string has a format similar to a character string but is preceded by an N identifier (N stands for National Language in the SQL-92 standard). The N prefix must be uppercase.

o        N'abc'

SELECT 'Brian' "character string 1", '100' "character string 2", N'abc' "unicode string" FROM DUMMY;

2.8.2 Numeric constants
Numeric constants are represented by a string of digits not enclosed in single quotes. Numbers may contain a decimal point or scientific notation.

o        123

o        123.4

o        1.234e2

2.8.3 Hexadecimal constants
A hexadecimal constant is a string of hexadecimal digits with the prefix 0x.

o        0x0abc

SELECT 123 "integer", 123.4 "decimal1", 1.234e2 "decimal2", 0x0abc "hexadecimal" FROM DUMMY;

2.8.4 Binary string constants
A binary string is prefixed with an X and is a string of hexadecimal digits enclosed in single quotes.

o        X'00abcd'

o        x'dcba00'

SELECT X'00abcd' "binary string 1", x'dcba00' "binary string 2" FROM DUMMY;

2.8.5 Date, time, and timestamp constants
Date, time, and timestamp have the following prefixes:

o        date'2010-01-01'

o        time'11:00:00.001'

o        timestamp'2011-12-31 23:59:59'

SELECT date'2010-01-01' "date", time'11:00:00.001' "time", timestamp'2011-12-31 23:59:59' "timestamp" FROM DUMMY;


3 Predicates
A predicate is specified by combining one or more expressions or logical operators and returns one of the following logical/truth values:

TRUE, FALSE, or UNKNOWN.

3.1 Comparison predicates
Two values are compared using comparison predicates, returning TRUE, FALSE, or UNKNOWN.

Syntax:

<comparison_predicate> ::=<expression> { = | != | <> | > | < | >= | <= } [ ANY | SOME| ALL ] { <expression_list> | <subquery> }

<expression_list> ::= <expression>, ...

The expression can be a simple expression such as a character, date, or number, or a scalar (single-result) subquery. The SELECT clause of such a subquery must have only one table column or aggregate column.

If the subquery returns only one row, the [ALL|ANY|SOME] option can be omitted.

If the subquery returns multiple rows, the [ALL|ANY|SOME] option may be required:

- ALL: true if all rows returned by the subquery satisfy the comparison condition.

- ANY|SOME: true if any one of the rows returned by the subquery satisfies the comparison condition.

- =: when the equal sign is used with ANY|SOME, it has the same effect as the IN operator.
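For example (a sketch; the employees table and its columns are illustrative):

select * from employees where salary >= ALL (select salary from employees); -- the top earners

select * from employees where dept_id = ANY (10, 20, 30); -- equivalent to dept_id IN (10, 20, 30)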

3.2 BETWEEN predicate
Values are compared within a given range.

Syntax:

<range_predicate> ::= <expression1> [NOT] BETWEEN <expression2> AND <expression3>

BETWEEN ... AND ...: when the range predicate is specified, it returns true if expression1 lies within the range bounded by expression2 and expression3; expression2 must be less than or equal to expression3 for the predicate to match.
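For example (a sketch; the table and column names are illustrative):

select * from employees where hire_date between '2009-01-01' and '2009-12-31'; -- rows hired during 2009

select * from employees where salary not between 1000 and 2000;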

3.3 IN predicate
A value is compared with a specified set of values. The result is true if the value of expression1 is in expression_list (or in the subquery result).

Syntax:

<in_predicate> ::= <expression> [NOT] IN { <expression_list> | <subquery> }

... WHERE CITY IN ('BERLIN', 'NEW YORK', 'LONDON').

Returns true if CITY equals any of the values in the list following IN.

IN can also be followed by a subquery (the following example uses ABAP Open SQL):

SELECT SINGLE city latitude longitude
  INTO (city, lati, longi)
  FROM sgeocity
  WHERE city IN ( SELECT cityfrom FROM spfli
                    WHERE carrid = carr_id
                    AND   connid = conn_id ).

3.4 EXISTS predicate
If the subquery returns a non-empty result set, the result is true; if it returns an empty result set, the result is false.

This kind of subquery returns no value and is not required to have a single select column; any number of columns may be selected. The WHERE or HAVING clause uses whether the subquery finds any data to decide which rows the outer query selects. The following example uses ABAP Open SQL:

DATA: name_tab TYPE TABLE OF scarr-carrname,
      name LIKE LINE OF name_tab.
SELECT carrname INTO TABLE name_tab FROM scarr
  WHERE EXISTS ( SELECT * FROM spfli
                    WHERE carrid = scarr~carrid
                    AND cityfrom = 'NEW YORK' ).
LOOP AT name_tab INTO name.
  WRITE: / name.
ENDLOOP.

This subquery is also a correlated subquery:

If a subquery's WHERE condition references columns of the outer query, it is called a correlated subquery. A correlated subquery is executed once for every record of the outer query's result set, so correlated subqueries should be used sparingly.

3.5 LIKE predicate
LIKE is used to compare strings: expression1 is compared with the pattern contained in expression2. The wildcards "%" and "_" can be used in the comparison string expression2.

"_" stands for a single character; "%" stands for any string, including the empty string.

The ESCAPE option can be used to specify an escape character <h>: if the wildcard "_" or "%" is preceded by <h>, the wildcard loses its pattern function and stands for the character itself:

... WHERE FUNCNAME LIKE 'EDIT#_%' ESCAPE '#';

This matches strings beginning with "EDIT_".

3.6 NULL predicate
When the predicate IS NULL is specified, the value can be compared with NULL. IS NULL returns true if the expression evaluates to NULL; if the predicate IS NOT NULL is specified, returns true if the value is not NULL.

Syntax:

<null_predicate> ::= <expression> IS [NOT] NULL
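For example (a sketch; the customers table is illustrative):

select * from customers where phone is null; -- rows with no phone number stored

select * from customers where phone is not null;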

3.7 CONTAINS predicate
The CONTAINS predicate is used to search columns for matches to a text search string.

grammar:

<contains_function> ::= CONTAINS '(' <contains_columns> ',' <search_string> ')' | CONTAINS '(' <contains_columns> ',' <search_string> ',' <search_specifier> ')'

<contains_columns> ::= '*' | <column_name> | '(' <columnlist> ')'

<search_string> ::= <string_const>

<search_specifier> ::= <search_type> <opt_search_specifier2_list>| <search_specifier2_list>

<opt_search_specifier2_list> ::= empty| <search_specifier2_list>

<search_type> ::= <exact_search> | <fuzzy_search> | <linguistic_search>

<search_specifier2_list> ::= <search_specifier2>| <search_specifier2_list> ',' <search_specifier2>

<search_specifier2> := <weights> | <language>

<exact_search> ::= EXACT

<fuzzy_search> ::= FUZZY| FUZZY '(' <float_const> ')' | FUZZY '(' <float_const> ',' <additional_params> ')'

<linguistic_search> ::= LINGUISTIC

<weights> ::= WEIGHT '(' <float_const_list> ')'

<language> ::= LANGUAGE '(' <string_const> ')'

<additional_params> ::= <string_const>

search_string: Use the freestyle string search format (for example, Peter "Palo Alto" or Berlin - "SAP LABS").

search_specifier: If no search_specifier is specified, EXACT is the default.

EXACT: EXACT returns true for records that exactly match the search terms in the search attributes.

FUZZY: FUZZY returns true for records that match the search terms approximately in the search attributes (for example, misspellings are tolerated to some extent).

float_const: If float_const is omitted, the default value is 0.8. The default can be overridden by defining the FUZZINESSTHRESHOLD parameter supported by column-store join views.

WEIGHT: If a weights list is defined, its length must equal the number of columns in <contains_columns>.

LANGUAGE: LANGUAGE is used in preprocessing of the search string and as a filter before searching. Only documents that match the search string and the specified language are returned.

LINGUISTIC: LINGUISTIC returns true for records in which word variants of the search terms appear in the search attribute.

Restriction: If multiple CONTAINS predicates are defined in the WHERE condition, only one of them may reference more than one column in its <contains_columns> list.

CONTAINS applies only to column tables (plain tables and join views).

Examples:

Exact search:

select * from T where contains(column1, 'dog OR cat') -- EXACT is implicit

select * from T where contains(column1, 'dog OR cat', EXACT)

select * from T where contains(column1, '"cats and dogs"') -- phrase search

Fuzzy search:

select * from T where contains(column1, 'catz', FUZZY(0.8))

Linguistic search:

select * from T where contains(column1, 'catz', LINGUISTIC)

Freestyle search: a freestyle search is a search over multiple columns.

select * from T where CONTAINS( (column1,column2,column3), 'cats OR dogz', FUZZY(0.7))


4 Operators
You can use operators in expressions to perform operations. Operators can be used to calculate, compare values, or assign values.

4.1 Unary and binary operators


4.2 Operator precedence
An expression can use multiple operators. If there is more than one operator, the SAP HANA database evaluates them according to operator precedence. You can change the order of evaluation with parentheses, since expressions inside parentheses are evaluated first.

If parentheses are not used, operator precedence follows the table below. Note that the SAP HANA database evaluates operators of the same precedence from left to right.

4.3 Arithmetic operators
You can use arithmetic operators to perform mathematical operations such as addition, subtraction, multiplication, and division, as well as negation.

4.4 String operators

For VARCHAR or NVARCHAR strings, leading and trailing spaces are preserved in concatenation. If either string is of type NVARCHAR, the result is also NVARCHAR and is limited to 5000 characters; the maximum length of a VARCHAR concatenation is likewise limited to 5000 characters.
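The concatenation operator || can be sketched as:

select 'SAP' || ' ' || 'HANA' "concat" from dummy;--SAP HANA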

4.5 Comparison operators
Syntax:

<comparison_operation> ::= <expression1> <comparison_operator> <expression2>

4.6 Logical Operators
Search conditions can be combined using AND or OR operators, and you can also use NOT operators to negate conditions.

4.7 Set operators
Performs a set operation on the results of two or more queries.

UNION: union, with duplicates removed

UNION ALL: union, with duplicates retained

INTERSECT: intersection

EXCEPT: difference
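For example (each operand is a complete SELECT):

select 1 from dummy union select 1 from dummy; -- one row: duplicates removed

select 1 from dummy union all select 1 from dummy; -- two rows: duplicates retained

select 1 from dummy intersect select 1 from dummy; -- one row

select 1 from dummy except select 1 from dummy; -- no rows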

5 Expressions
Expressions are clauses that can be used to calculate and return a value.

Syntax:

<expression> ::= <case_expression>
| <function_expression>
| <aggregate_expression>
| (<expression> )
| ( <subquery> )
| - <expression>
| <expression> <operator> <expression>
| <variable_name>
| <constant>
| [<correlation_name>.]<column_name>

5.1 CASE expressions
CASE expressions allow users to use IF ... THEN ... ELSE logic without invoking a stored procedure in the SQL statement.

Syntax:

<case_expression> ::=

CASE <expression>

WHEN <expression> THEN <expression>, ...

[ ELSE <expression>]

{ END | END CASE }

If the expression after CASE equals an expression after WHEN, the expression after the corresponding THEN is returned; otherwise the expression after ELSE, if present, is returned.

The expression after CASE can also be omitted, as follows:

BEGIN

 OUTTAB = SELECT CARRID, CONNID, FLDATE, BOOKID, CUSTOMID

         FROM "SFLIGHT"."SBOOK"

         WHERE (CASE WHEN CARRID = :IV_CARRID THEN '1'

         ELSE '2' END) = '2';

 END;

Because CASE WHEN performs poorly here, it can be rewritten as follows:

SELECT CARRID, CONNID, FLDATE, BOOKID, CUSTOMID

         FROM "SFLIGHT"."SBOOK"

         WHERE CARRID  <> :IV_CARRID or carrid is null

5.2 Function expressions
Built-in SQL functions can be used as expressions.

Syntax:

<function_expression> ::= <function_name> ( <expression>, ... )

5.3 Aggregate expressions
An aggregate expression applies an aggregate function to the values of a column across multiple rows.

Syntax:

<aggregate_expression> ::= COUNT(*) | <agg_name> ( [ ALL | DISTINCT ] <expression> )

<agg_name> ::= COUNT | MIN | MAX | SUM | AVG | STDDEV | VAR
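For example (a sketch; the employees table is illustrative):

select count(*), count(distinct dept_id), avg(salary), max(salary) from employees;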

5.4 Subqueries in expressions
A subquery is a SELECT statement in parentheses. The SELECT statement can contain one, and only one, select item. When used as an expression, a scalar (single-result) subquery may return zero or one value.

Syntax:

<scalar_subquery_expression> ::= (<subquery>)

In the SELECT list of a top-level SELECT, or in the SET clause of an UPDATE statement, you can use a scalar subquery anywhere a column name can be used. However, a scalar subquery cannot be used in a GROUP BY clause.

Example:

The following statement returns the number of employees in each department, grouped by department name:

SELECT DepartmentName, COUNT(*), 'out of', (SELECT COUNT(*) FROM Employees)

FROM Departments AS D, Employees AS E WHERE D.DepartmentID = E.DepartmentID

GROUP BY DepartmentName;

6 SQL functions
6.1 Data type conversion functions
Data type conversion functions are used to convert a parameter from one data type to another, or to test whether a conversion is feasible.

6.1.1 CAST
Syntax:

CAST (expression AS data_type)

Syntax elements:

expression – the expression to be converted.

data_type – the target data type: TINYINT | SMALLINT | INTEGER | BIGINT | DECIMAL | SMALLDECIMAL | REAL | DOUBLE | ALPHANUM | VARCHAR | NVARCHAR | DAYDATE | DATE |

Example:

SELECT CAST (7 AS VARCHAR) "cast" FROM DUMMY;--7

6.1.2 TO_ALPHANUM
Syntax:

TO_ALPHANUM (value)

Description:

Converts the given value to the ALPHANUM data type.

Example:

SELECT TO_ALPHANUM ('10') "to alphanum" FROM DUMMY;--10

6.1.3 TO_BIGINT
Syntax:

TO_BIGINT (value)

Description:

Converts value to the BIGINT type.

Example:

SELECT TO_BIGINT ('10') "to bigint" FROM DUMMY;--10

6.1.4 TO_BINARY
Syntax:

TO_BINARY (value)

Description:

Converts value to the BINARY type.

Example:

SELECT TO_BINARY ('abc') "to binary" FROM DUMMY;--616263 (note: the result is displayed in hexadecimal, not binary)

6.1.5 TO_BLOB
Syntax:

TO_BLOB (value)

Description:

Converts value to the BLOB type. The argument must be a binary string.

Example:

SELECT TO_BLOB (TO_BINARY('abcde')) "to blob" FROM DUMMY;--abcde

6.1.6 TO_CHAR
Syntax:

TO_CHAR (value [, format])

Description:

Converts value to the CHAR type. If the format keyword is omitted, the conversion uses the date format model described in Date Formats.

Example:

SELECT TO_CHAR (TO_DATE('2009-12-31'), 'YYYY/MM/DD') "to char" FROM DUMMY;--2009/12/31

SELECT TO_CHAR (TO_DATE('2009-12-31')) "to char" FROM DUMMY;--2009-12-31

6.1.7 TO_CLOB
Syntax:

TO_CLOB (value)

Description:

Converts value to the CLOB type.

Example:

SELECT TO_CLOB ('TO_CLOB converts the value to a CLOB data type') "to clob" FROM DUMMY;--TO_CLOB converts the value to a CLOB data type

6.1.8 TO_DATE
Syntax:

TO_DATE (d [, format])

Description:

Converts the date string d to the DATE data type. If the format keyword is omitted, the conversion uses the date format model described in Date Formats.

Example:

SELECT TO_DATE('2010-01-12', 'YYYY-MM-DD') "to date" FROM DUMMY;--2010-1-12

6.1.9 TO_DATS
Syntax:

TO_DATS (d)

Description:

Converts the string d to an ABAP date string in the format "YYYYMMDD".

Example:

SELECT TO_DATS ('2010-01-12') "abap date" FROM DUMMY;--20100112

6.1.10 TO_DECIMAL
Syntax:

TO_DECIMAL (value [, precision, scale])

Description:

Converts value to the DECIMAL type.

Precision is the total number of significant digits and ranges from 1 to 34. Scale is the number of digits from the decimal point to the least significant digit, ranging from -6,111 to 6,176; that is, the scale specifies that the exponent of the decimal ranges from 10^-6111 to 10^6176. If no scale is specified, the default is 0.

Scale is positive when a number has significant digits to the right of the decimal point, and negative when the significant digits are to the left of the decimal point.

When precision and scale are not specified, DECIMAL becomes a floating-point decimal. In this case, the precision and scale can vary within the ranges described above, with a precision of 1 to 34 and a scale of -6,111 to 6,176, depending on the stored value.

Example:

SELECT TO_DECIMAL(7654321.888888, 10, 3) "to decimal" FROM DUMMY;--7,654,321.888

6.1.11 TO_DOUBLE
Syntax:

TO_DOUBLE (value)

Description:

Converts value to the DOUBLE (double-precision) data type.

Example:

SELECT 3*TO_DOUBLE ('15.12') "to double" FROM DUMMY;--45.36

6.1.12 TO_INT
Syntax:

TO_INT (value)

Description:

Converts value to the INTEGER type.

Example:

SELECT TO_INT('10') "to int" FROM DUMMY;--10

6.1.13 TO_INTEGER
Syntax:

TO_INTEGER (value)

Description:

Converts value to the INTEGER type.

Example:

SELECT TO_INTEGER ('10') "to int" FROM DUMMY;--10

6.1.14 TO_NCHAR
Syntax:

TO_NCHAR (value [, format])

Description:

Converts value to the NCHAR Unicode character type. If the format keyword is omitted, the conversion uses the date format model described in Date Formats.

Example:

SELECT TO_NCHAR (TO_DATE('2009-12-31'), 'YYYY/MM/DD') "to nchar" FROM DUMMY;--2009/12/31

6.1.15 TO_NCLOB
Syntax:

TO_NCLOB (value)

Description:

Converts value to the NCLOB data type.

Example:

SELECT TO_NCLOB ('TO_NCLOB converts the value to a NCLOB data type') "to nclob" FROM DUMMY;--TO_NCLOB converts the value to a NCLOB data type

6.1.16 TO_NVARCHAR
Syntax:

TO_NVARCHAR (value [, format])

Description:

Converts value to the NVARCHAR Unicode character type. If the format keyword is omitted, the conversion uses the date format model described in Date Formats.

Example:

SELECT TO_NVARCHAR(TO_DATE('2009/12/31'), 'YY-MM-DD') "to nchar" FROM DUMMY;--09-12-31

6.1.17 TO_REAL
Syntax:

TO_REAL (value)

Description:

Converts value to the REAL (single-precision) data type.

Example:

SELECT 3*TO_REAL ('15.12') "to real" FROM DUMMY;--45.36000061035156

6.1.18 TO_SECONDDATE
Syntax:

TO_SECONDDATE (d [, format])

Description:

Converts d to the SECONDDATE type. If the format keyword is omitted, the conversion uses the date format model described in Date Formats.

Example:

SELECT TO_SECONDDATE ('2010-01-11 13:30:00', 'YYYY-MM-DD HH24:MI:SS') "to seconddate" FROM DUMMY;--2010-1-11 13:30:00.0

6.1.19 TO_SMALLDECIMAL
Syntax:

TO_SMALLDECIMAL (value)

Description:

Converts value to the SMALLDECIMAL type.

Example:

SELECT TO_SMALLDECIMAL(7654321.89) "to smalldecimal" FROM DUMMY;--7,654,321.89

6.1.20 TO_SMALLINT
Syntax:

TO_SMALLINT (value)

Description:

Converts value to the SMALLINT type.

Example:

SELECT TO_SMALLINT ('10') "to smallint" FROM DUMMY;--10

6.1.21 TO_TIME
Syntax:

TO_TIME (t [, format])

Description:

Converts the time string t to the TIME type. If the format keyword is omitted, the conversion uses the date format model described in Date Formats.

Example:

SELECT TO_TIME ('08:30 AM', 'HH:MI AM') "to time" FROM DUMMY;--8:30:00

6.1.22 TO_TIMESTAMP
Syntax:

TO_TIMESTAMP (d [, format])

describe:

Convert time string t to TIMESTAMP type. If the format keyword is omitted, the conversion will use the date format model described in Date Formats.

example:

SELECT TO_TIMESTAMP ('2010-01-11 13:30:00', 'YYYY-MM-DD HH24:MI:SS') "to timestamp"FROM DUMMY;--2010-1-11 13:30:00.0

6.1.23 TO_TINYINT
Syntax:

TO_TINYINT (value)

Description:

Converts value to the TINYINT type.

Example:

SELECT TO_TINYINT ('10') "to tinyint" FROM DUMMY; --10

6.1.24 TO_VARCHAR
Syntax:

TO_VARCHAR (value [, format])

Description:

Converts the given value to the VARCHAR string type. If the format keyword is omitted, the conversion will use the date format model described in Date Formats.

Example:

SELECT TO_VARCHAR (TO_DATE('2009-12-31'), 'YYYY/MM/DD') "to varchar" FROM DUMMY; --2009/12/31

6.2 Date and time functions
6.2.1 ADD_DAYS
Syntax:

ADD_DAYS (d, n)

Description:

Computes the date n days after date d.

Example:

SELECT ADD_DAYS (TO_DATE ('2009-12-05', 'YYYY-MM-DD'), 30) "add days" FROM DUMMY; --2010-1-4

6.2.2 ADD_MONTHS
Syntax:

ADD_MONTHS (d, n)

Description:

Computes the date n months after date d.

Example:

SELECT ADD_MONTHS (TO_DATE ('2009-12-05', 'YYYY-MM-DD'), 1) "add months" FROM DUMMY; --2010-1-5

6.2.3 ADD_SECONDS
Syntax:

ADD_SECONDS (t, n)

Description:

Computes the time n seconds after time t.

Example:

SELECT ADD_SECONDS (TO_TIMESTAMP ('2012-01-01 23:30:45'), 15) "add seconds" FROM DUMMY; --2012-1-1 23:31:00.0

6.2.4 ADD_YEARS
Syntax:

ADD_YEARS (d, n)

Description:

Computes the date n years after date d.

Example:

SELECT ADD_YEARS (TO_DATE ('2009-12-05', 'YYYY-MM-DD'), 1) "add years" FROM DUMMY; --2010-12-5

6.2.5 CURRENT_DATE
Syntax:

CURRENT_DATE

Description:

Returns the current local system date.

Example:

select current_date from dummy; --2015-6-12

6.2.6 CURRENT_TIME
Syntax:

CURRENT_TIME

Description:

Returns the current local system time.

Example:

select current_time from dummy; --16:58:11

6.2.7 CURRENT_TIMESTAMP
Syntax:

CURRENT_TIMESTAMP

Description:

Returns the current local system timestamp.

Example:

select current_timestamp from dummy; --2015-6-12 16:58:11.471

6.2.8 CURRENT_UTCDATE
Syntax:

CURRENT_UTCDATE

Description:

Returns the current UTC date. UTC stands for Coordinated Universal Time, formerly known as Greenwich Mean Time (GMT).

Example:

SELECT CURRENT_UTCDATE "Coordinated Universal Date" FROM DUMMY; --2015-6-12

6.2.9 CURRENT_UTCTIME
Syntax:

CURRENT_UTCTIME

Description:

Returns the current UTC time.

Example:

SELECT CURRENT_TIMESTAMP, CURRENT_UTCTIME "Coordinated Universal Time" FROM DUMMY; --2015-6-12 23:25:49.721; 15:25:49

6.2.10 CURRENT_UTCTIMESTAMP
Syntax:

CURRENT_UTCTIMESTAMP

Description:

Returns the current UTC timestamp.

Example:

SELECT CURRENT_TIMESTAMP, CURRENT_UTCTIMESTAMP "Coordinated Universal Timestamp" FROM DUMMY; --2015-6-12 23:28:07.62; 2015-6-12 15:28:07.62

6.2.11 DAYNAME
Syntax:

DAYNAME (d)

Description:

Returns the English name of the weekday of date d.

Example:

SELECT DAYNAME ('2011-05-30') "dayname" FROM DUMMY; --MONDAY

6.2.12 DAYOFMONTH
Syntax:

DAYOFMONTH (d)

Description:

Returns the day of the month of date d as an integer.

Example:

SELECT DAYOFMONTH ('2011-05-30') "dayofmonth" FROM DUMMY; --30

6.2.13 DAYOFYEAR
Syntax:

DAYOFYEAR (d)

Description:

Returns the day of the year of date d as an integer.

Example:

SELECT DAYOFYEAR ('2011-02-01') "dayofyear" FROM DUMMY; --32

6.2.14 DAYS_BETWEEN
Syntax:

DAYS_BETWEEN (d1, d2)

Description:

Computes the number of days between d1 and d2 (the interval includes only one endpoint: [d1, d2) or (d1, d2]).

Example:

SELECT DAYS_BETWEEN (TO_DATE ('2015-01-01', 'YYYY-MM-DD'), TO_DATE('2015-02-02', 'YYYY-MM-DD')) "days between" FROM DUMMY; --32

SELECT DAYS_BETWEEN ('2015-01-01','2015-02-02') "days between" FROM DUMMY; --32, implicit type conversion (character to date)

SELECT DAYS_BETWEEN ('2015-02-01','2015-03-01') "days between" FROM DUMMY; --28

6.2.15 EXTRACT
Syntax:

EXTRACT ({YEAR | MONTH | DAY | HOUR | MINUTE | SECOND} FROM d)

Description:

Returns the value of the specified datetime field (year, month, day, hour, minute, second) of date d.

Example:

SELECT EXTRACT(YEAR FROM TO_DATE('2010-01-04', 'YYYY-MM-DD')) "year", EXTRACT(MONTH FROM '2010-01-04') "month", EXTRACT(DAY FROM '2010-01-04') "day", EXTRACT(HOUR FROM '2010-01-04 05') "hour", EXTRACT(MINUTE FROM '2010-01-04 05') "minute", EXTRACT(SECOND FROM '2010-01-04 05:06:07') "second" FROM DUMMY;

6.2.16 HOUR
Syntax:

HOUR (t)

Description:

Returns the hour of time t as an integer.

Example:

SELECT HOUR ('12:34:56') "hour" FROM DUMMY; --12

6.2.17 ISOWEEK
Syntax:

ISOWEEK (d)

Description:

Returns the ISO year and week number of date d. The week number is prefixed with the letter W. See also WEEK.

Example:

SELECT ISOWEEK (TO_DATE('2011-05-30', 'YYYY-MM-DD')) "isoweek" FROM DUMMY; --2011-W22

6.2.18 LAST_DAY
Syntax:

LAST_DAY (d)

Description:

Returns the date of the last day of the month containing date d.

Example:

SELECT LAST_DAY (TO_DATE('2010-01-04', 'YYYY-MM-DD')) "last day" FROM DUMMY; --2010-1-31

6.2.19 LOCALTOUTC
Syntax:

LOCALTOUTC (t, timezone)

Description:

Converts the local time t of time zone timezone to UTC (Coordinated Universal Time). For example, Beijing is in the UTC+8 zone, 8 hours ahead of UTC; UTC + time zone offset = local time.

Example:

SELECT LOCALTOUTC (TO_TIMESTAMP('2012-01-01 01:00:00', 'YYYY-MM-DD HH24:MI:SS'), 'EST') "localtoutc" FROM DUMMY; --2012-1-1 6:00:00.0

6.2.20 MINUTE
Syntax:

MINUTE(t)

Description:

Returns the minute of time t as an integer.

Example:

SELECT MINUTE ('12:34:56') "minute" FROM DUMMY; --34

6.2.21 MONTH
Syntax:

MONTH(d)

Description:

Returns the month number of date d.

Example:

SELECT MONTH ('2011-05-30') "month" FROM DUMMY; --5

6.2.22 MONTHNAME
Syntax:

MONTHNAME(d)

Description:

Returns the English name of the month of date d.

Example:

SELECT MONTHNAME ('2011-05-30') "monthname" FROM DUMMY; --MAY

6.2.23 NEXT_DAY
Syntax:

NEXT_DAY (d)

Description:

Returns the day after date d.

Example:

SELECT NEXT_DAY (TO_DATE ('2009-12-31', 'YYYY-MM-DD')) "next day" FROM DUMMY; --2010-1-1

6.2.24 NOW
Syntax:

NOW ()

Description:

Returns the current timestamp.

Example:

SELECT NOW () "now" FROM DUMMY; --2015-6-12 17:23:01.773

6.2.25 QUARTER
Syntax:

QUARTER (d [, start_month])

Description:

Returns the year and quarter of date d. The first quarter starts in the month defined by start_month; if start_month is not specified, the first quarter is assumed to start in January.

Example:

SELECT QUARTER (TO_DATE('2012-01-01', 'YYYY-MM-DD'), 2) "quarter" FROM DUMMY; --2011-Q4

6.2.26 SECOND
Syntax:

SECOND (t)

Description:

Returns the seconds of time t.

Example:

SELECT SECOND ('12:34:56') "second" FROM DUMMY; --56

6.2.27 SECONDS_BETWEEN
Syntax:

SECONDS_BETWEEN (d1, d2)

Description:

Computes the number of seconds between date arguments d1 and d2, semantically equivalent to d2 - d1.

Example:

SELECT SECONDS_BETWEEN ('2015-01-01 01:01:01', '2015-01-01 02:01:01') "seconds between" FROM DUMMY; --3600

SELECT SECONDS_BETWEEN ('2015-01-01 01:01:01', '2015-01-01 01:02:02') "seconds between" FROM DUMMY; --61

SELECT SECONDS_BETWEEN ('2015-01-01 01:01:01', '2015-01-01 01:01:02') "seconds between" FROM DUMMY; --1

6.2.28 UTCTOLOCAL
Syntax:

UTCTOLOCAL (t, timezone)

Description:

Converts a UTC time value t to the local time of time zone timezone.

Example:

SELECT UTCTOLOCAL(TO_TIMESTAMP('2012-01-01 01:00:00', 'YYYY-MM-DD HH24:MI:SS'), 'EST') "utctolocal" FROM DUMMY; --2011-12-31 20:00:00.0

6.2.29 WEEK
Syntax:

WEEK (d)

Description:

Returns the week number of date d as an integer. See also ISOWEEK.

Example:

SELECT WEEK(TO_DATE('2011-05-30', 'YYYY-MM-DD')) "week" FROM DUMMY; --23

6.2.30 WEEKDAY
Syntax:

WEEKDAY (d)

Description:

Returns the day of the week of date d as an integer. The return value ranges from 0 to 6, representing Monday (0) through Sunday (6).

Example:

SELECT WEEKDAY (TO_DATE ('2011-01-02', 'YYYY-MM-DD')) "week day" FROM DUMMY; --6

SELECT WEEKDAY (TO_DATE ('2011-01-03', 'YYYY-MM-DD')) "week day" FROM DUMMY; --0

6.2.31 YEAR
Syntax:

YEAR (d)

Description:

Returns the year of date d.

Example:

SELECT YEAR (TO_DATE ('2011-05-30', 'YYYY-MM-DD')) "year" FROM DUMMY; --2011

6.3 Numeric functions
Numeric functions accept numbers, or strings containing numeric characters, as input and return numeric values. When a string of numeric characters is given as input, an implicit string-to-number conversion is performed automatically before the result is computed.
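
Because of this implicit conversion, a numeric string behaves like the number itself. For example (the second statement relies on the conversion rule just described):

SELECT ABS (-1) "absolute" FROM DUMMY; --1

SELECT ABS ('-1') "absolute" FROM DUMMY; --1, the string '-1' is converted to a number first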

6.3.1 ABS
Syntax:

ABS (n)

Description:

Returns the absolute value of the numeric argument n.

Example:

SELECT ABS (-1) "absolute" FROM DUMMY; --1

6.3.2 ACOS
Syntax:

ACOS (n)

Description:

Returns the arc cosine of the argument n, in radians, for n in the range -1 to 1.

Example:

SELECT ACOS (0.5) "acos" FROM DUMMY; --1.0471975511965979

6.3.3 ASIN
Syntax:

ASIN (n)

Description:

Returns the arc sine of the argument n, in radians, for n in the range -1 to 1.

Example:

SELECT ASIN (0.5) "asin" FROM DUMMY; --0.5235987755982989

6.3.4 ATAN
Syntax:

ATAN (n)

Description:

Returns the arc tangent of the argument n, in radians; the range of n is unlimited.

Example:

SELECT ATAN (0.5) "atan" FROM DUMMY; --0.4636476090008061

6.3.5 ATAN2
Syntax:

ATAN2 (n, m)

Description:

Returns the arc tangent of the ratio of the two numbers n and m, in radians. This is the same as ATAN(n/m).

Example:

SELECT ATAN2 (1.0, 2.0) "atan2" FROM DUMMY; --0.4636476090008061

6.3.6 BINTOHEX
Syntax:

BINTOHEX (expression)

Description:

Converts a binary value to hexadecimal.

Example:

SELECT BINTOHEX('AB') "bintohex" FROM DUMMY; --4142; the string 'AB' is first implicitly converted to binary

SELECT TO_BINARY ('AB') "to binary" FROM DUMMY; --4142; note that the binary value is displayed in hexadecimal, not binary

6.3.7 BITAND
Syntax:

BITAND (n, m)

Description:

Performs a bitwise AND on the bits of arguments n and m. Both n and m must be non-negative integers. BITAND returns a result of type BIGINT.

Example:

SELECT BITAND (255, 123) "bitand" FROM DUMMY; --123

6.3.8 CEIL
Syntax:

CEIL(n)

Description:

Returns the smallest integer greater than or equal to n.

Example:

SELECT CEIL (14.5) "ceiling" FROM DUMMY; --15

6.3.9 COS
Syntax:

COS (n)

Description:

Returns the cosine of the argument n, where n is in radians.

Example:

SELECT COS (0.0) "cos" FROM DUMMY; --1

6.3.10 COSH
Syntax:

COSH (n)

Description:

Returns the hyperbolic cosine of the argument n.

Example:

SELECT COSH (0.5) "cosh" FROM DUMMY; --1.1276259652063807

6.3.11 COT
Syntax:

COT (n)

Description:

Computes the cotangent of the argument n, where n is in radians.

Example:

SELECT COT (40) "cot" FROM DUMMY; -- -0.8950829176379128

6.3.12 EXP
Syntax:

EXP (n)

Description:

Returns e raised to the power n.

Example:

SELECT EXP (1.0) "exp" FROM DUMMY; --2.718281828459045

6.3.13 FLOOR
Syntax:

FLOOR (n)

Description:

Returns the largest integer not greater than the argument n.

Example:

SELECT FLOOR (14.5) "floor" FROM DUMMY; --14

6.3.14 GREATEST
Syntax:

GREATEST (n1 [, n2]...)

Description:

Returns the greatest of the arguments n1, n2, ....

Example:

SELECT GREATEST ('aa', 'ab', 'bb', 'ba') "greatest" FROM DUMMY; --bb

6.3.15 HEXTOBIN
Syntax:

HEXTOBIN (value)

Description:

Converts a hexadecimal value to binary.

Example:

SELECT HEXTOBIN ('1a') "hextobin" FROM DUMMY; --1A; note that the binary result is displayed in hexadecimal

6.3.16 LEAST
Syntax:

LEAST (n1 [, n2]...)

Description:

Returns the least of the arguments n1, n2, ....

Example:

SELECT LEAST('aa', 'ab', 'ba', 'bb') "least" FROM DUMMY; --aa

6.3.17 LN
Syntax:

LN (n)

Description:

Returns the natural logarithm of the argument n.

Example:

SELECT LN (9) "ln" FROM DUMMY; --2.1972245773362196

6.3.18 LOG
Syntax:

LOG (b, n)

Description:

Returns the logarithm of n to base b. The base b must be a positive number greater than 1, and n must be positive.

Example:

SELECT LOG (10, 2) "log" FROM DUMMY; --0.30102999566398114

6.3.19 MOD
Syntax:

MOD (n, d)

Description:

Returns the remainder of n divided by d.

When n is negative, this function behaves differently from the standard modulo operation.

The following rules describe the result returned by MOD:

If d is zero, n is returned.

If n is greater than zero and less than d, n is returned.

If n is less than zero and greater than -d, n is returned.

In all other cases, the remainder is computed from the absolute value of n divided by the absolute value of d. If n is less than zero, the remainder returned by MOD is negative; if n is greater than zero, the remainder is positive.

Example:

SELECT MOD (15, 4) "modulus" FROM DUMMY; --3

SELECT MOD (-15, 4) "modulus" FROM DUMMY; -- -3
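
Following the rules above, a zero divisor simply returns n unchanged (a sketch of the documented edge case):

SELECT MOD (15, 0) "modulus" FROM DUMMY; --15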

6.3.20 POWER
Syntax:

POWER (b, e)

Description:

Computes b raised to the power e.

Example:

SELECT POWER (2, 10) "power" FROM DUMMY; --1024

6.3.21 ROUND
Syntax:

ROUND (n [, pos])

Description:

Returns the argument n rounded to pos places after the decimal point.

Example:

SELECT ROUND (16.16, 1) "round" FROM DUMMY; --16.2

SELECT ROUND (16.16, -1) "round" FROM DUMMY; --20

6.3.22 SIGN
Syntax:

SIGN (n)

Description:

Returns the sign of n: 1 if n is positive, -1 if n is negative, and 0 if n is zero.

Example:

SELECT SIGN (-15) "sign" FROM DUMMY; -- -1

6.3.23 SIN
Syntax:

SIN (n)

Description:

Returns the sine of the argument n, where n is in radians.

Example:

SELECT SIN(3.141592653589793/2) "sine" FROM DUMMY; --1

6.3.24 SINH
Syntax:

SINH (n)

Description:

Returns the hyperbolic sine of the argument n.

Example:

SELECT SINH (0.0) "sinh" FROM DUMMY; --0

6.3.25 SQRT
Syntax:

SQRT (n)

Description:

Returns the square root of n.

Example:

SELECT SQRT (2) "sqrt" FROM DUMMY; --1.4142135623730951

6.3.26 TAN
Syntax:

TAN (n)

Description:

Returns the tangent of the argument n, where n is in radians.

Example:

SELECT TAN (0.0) "tan" FROM DUMMY; --0

6.3.27 TANH
Syntax:

TANH (n)

Description:

Returns the hyperbolic tangent of the argument n.

Example:

SELECT TANH (1.0) "tanh" FROM DUMMY; --0.7615941559557649

6.3.28 UMINUS
Syntax:

UMINUS (n)

Description:

Returns the negation of n.

Example:

SELECT UMINUS(-765) "uminus" FROM DUMMY; --765

SELECT UMINUS(765) "uminus" FROM DUMMY; -- -765

6.4 String functions
6.4.1 ASCII
Syntax:

ASCII(c)

Description:

Returns the ASCII value of the first byte of string c.

Example:

SELECT ASCII('Ant') "ascii" FROM DUMMY; --65

6.4.2 CHAR
Syntax:

CHAR (n)

Description:

Returns the character with ASCII value n.

Example:

SELECT CHAR (65) || CHAR (110) || CHAR (116) "character" FROM DUMMY; --Ant

6.4.3 CONCAT
Syntax:

CONCAT (str1, str2)

Description:

Returns the string consisting of str2 appended to str1. The concatenation operator (||) behaves identically to this function.

Example:

SELECT CONCAT ('C', 'at') "concat" FROM DUMMY; --Cat
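
Since the concatenation operator behaves the same way, the example above can equivalently be written with the operator:

SELECT 'C' || 'at' "concat" FROM DUMMY; --Cat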

6.4.4 LCASE
Syntax:

LCASE(str)

Description:

Converts all characters in string str to lowercase.

Note: LCASE behaves the same as LOWER.

Example:

SELECT LCASE ('TesT') "lcase" FROM DUMMY; --test

6.4.5 LEFT
Syntax:

LEFT (str, n)

Description:

Returns the first n characters/bytes of string str.

Example:

SELECT LEFT ('Hello', 3) "left" FROM DUMMY; --Hel

6.4.6 LENGTH
Syntax:

LENGTH(str)

Description:

Returns the number of characters in string str. For large object (LOB) types, this function returns the length of the object in bytes.

Example:

SELECT LENGTH ('length in char') "length" FROM DUMMY; --14

6.4.7 LOCATE
Syntax:

LOCATE (haystack, needle)

Description:

Returns the position of the substring needle within the string haystack; returns 0 if needle is not found.

Example:

SELECT LOCATE ('length in char', 'char') "locate" FROM DUMMY; --11

SELECT LOCATE ('length in char', 'length') "locate" FROM DUMMY; --1

SELECT LOCATE ('length in char', 'zchar') "locate" FROM DUMMY; --0

6.4.8 LOWER
Syntax:

LOWER (str)

Description:

Converts all characters in string str to lowercase.

Note: LOWER behaves the same as LCASE.

Example:

SELECT LOWER ('AnT') "lower" FROM DUMMY; --ant

6.4.9 LPAD
Syntax:

LPAD (str, n [, pattern])

Description:

Pads string str on the left with spaces up to the length specified by n. If the pattern argument is specified, str is padded by repeating pattern until the length n is reached.

Example:

SELECT LPAD ('end', 15, '12345') "lpad" FROM DUMMY; --123451234512end

6.4.10 LTRIM
Syntax:

LTRIM (str [, remove_set])

Description:

Returns string str with all leading spaces removed. If remove_set is specified, LTRIM removes from the beginning of str all characters contained in that set; removal continues until a character not in remove_set is reached.

Note: remove_set is treated as a set of characters, not as a search string.

Example:

SELECT LTRIM ('babababAabend','ab') "ltrim" FROM DUMMY; --Aabend

6.4.11 NCHAR
Syntax:

NCHAR (n)

Description:

Returns the Unicode character with code point n.

Example:

SELECT UNICODE ('江') "unicode" FROM DUMMY; --27743

SELECT NCHAR (27743) "nchar" FROM DUMMY; --江

6.4.12 REPLACE
Syntax:

REPLACE (original_string, search_string, replace_string)

Description:

Searches original_string for all occurrences of search_string and replaces them with replace_string.

If original_string is an empty string, the result is also an empty string.

If two overlapping substrings in original_string match search_string, only the first one is replaced:

SELECT REPLACE ('abcbcb','bcb', '') "replace" FROM DUMMY; --acb

If search_string does not occur in original_string, original_string is returned unchanged.

If original_string, search_string, or replace_string is NULL, the result is also NULL.

Example:

SELECT REPLACE ('DOWNGRADE DOWNWARD','DOWN', 'UP') "replace" FROM DUMMY; --UPGRADE UPWARD

6.4.13 RIGHT
Syntax:

RIGHT(str, n)

Description:

Returns the rightmost n characters/bytes of string str.

Example:

SELECT RIGHT('HI0123456789', 3) "right" FROM DUMMY; --789

6.4.14 RPAD
Syntax:

RPAD (str, n [, pattern])

Description:

Pads string str on the right with spaces up to the length specified by n. If the pattern argument is specified, str is padded by repeating pattern until the length n is reached.

Example:

SELECT RPAD ('end', 15, '12345') "right padded" FROM DUMMY; --end123451234512

6.4.15 RTRIM
Syntax:

RTRIM (str [, remove_set])

Description:

Returns string str with all trailing spaces removed. If remove_set is specified, RTRIM removes from the end of str all characters contained in that set; removal continues until a character not in remove_set is reached.

Note: remove_set is treated as a set of characters, not as a search string.

Example:

SELECT RTRIM ('views','ab') "rtrim" FROM DUMMY; --views

6.4.16 SUBSTR_AFTER
Syntax:

SUBSTR_AFTER (str, pattern)

Description:

Returns the substring of str following the first occurrence of pattern.

Returns the empty string if str does not contain pattern.

If pattern is an empty string, str is returned.

Returns NULL if str or pattern is NULL.

Example:

SELECT SUBSTR_AFTER ('Hello My Friend','My') "substr after" FROM DUMMY; --' Friend'

6.4.17 SUBSTR_BEFORE
Syntax:

SUBSTR_BEFORE (str, pattern)

Description:

Returns the substring of str preceding the first occurrence of pattern.

Returns the empty string if str does not contain pattern.

If pattern is an empty string, str is returned.

Returns NULL if str or pattern is NULL.

Example:

SELECT SUBSTR_BEFORE ('Hello My Friend','My') "substr before" FROM DUMMY; --'Hello '

6.4.18 SUBSTRING
Syntax:

SUBSTRING (str, start_position [, string_length])

Description:

Returns the substring of string str starting at start_position. SUBSTRING returns all remaining characters from start_position or, optionally, only the number of characters given by the string_length argument.

If start_position is less than 0, it is treated as 1.

If string_length is less than 1, an empty string is returned.

Example:

SELECT SUBSTRING ('1234567890',4,2) "substring" FROM DUMMY; --45

6.4.19 TRIM
Syntax:

TRIM ([[LEADING | TRAILING | BOTH] trim_char FROM] str)

Description:

Returns string str with leading and/or trailing trim characters removed. Trimming is performed from the start (LEADING), the end (TRAILING), or both ends (BOTH).

If str or trim_char is NULL, NULL is returned.

If no trim option is specified, TRIM removes the substring trim_char from both ends of str.

If trim_char is not specified, a single space character is used (i.e., spaces are trimmed).

Example:

SELECT TRIM ('a' FROM 'aaa123456789aa') "trim both" FROM DUMMY; --123456789

SELECT TRIM (LEADING 'a' FROM 'aaa123456789aa') "trim leading" FROM DUMMY; --123456789aa
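
The TRAILING option works analogously; for example, trimming only the trailing characters of the same string:

SELECT TRIM (TRAILING 'a' FROM 'aaa123456789aa') "trim trailing" FROM DUMMY; --aaa123456789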

6.4.20 UCASE
Syntax:

UCASE (str)

Description:

Converts all characters in string str to uppercase.

Note: UCASE behaves the same as UPPER.

Example:

SELECT UCASE ('Ant') "ucase" FROM DUMMY; --ANT

6.4.21 UNICODE
Syntax:

UNICODE(c)

Description:

Returns the Unicode code point of the first character of the string; returns NULL if the first character is not a valid encoding.

Example:

SELECT UNICODE ('江') "unicode" FROM DUMMY; --27743

SELECT NCHAR (27743) "nchar" FROM DUMMY; --江

6.4.22 UPPER
Syntax:

UPPER (str)

Description:

Converts all characters in string str to uppercase.

Note: UPPER behaves the same as UCASE.

Example:

SELECT UPPER ('Ant') "uppercase" FROM DUMMY; --ANT

6.5 Miscellaneous functions
6.5.1 COALESCE
Syntax:

COALESCE (expression_list)

Description:

Returns the first non-NULL expression in the list. expression_list must contain at least two expressions, and all expressions must be comparable with each other. If all arguments are NULL, the result is also NULL.

Example:

CREATE TABLE coalesce_example (ID INT PRIMARY KEY, A REAL, B REAL);

INSERT INTO coalesce_example VALUES(1, 100, 80);

INSERT INTO coalesce_example VALUES(2, NULL, 63);

INSERT INTO coalesce_example VALUES(3, NULL, NULL);

SELECT id, a, b, COALESCE (a, b*1.1, 50.0) "coalesce" FROM coalesce_example;

6.5.2 CURRENT_CONNECTION
Syntax:

CURRENT_CONNECTION

Description:

Returns the ID of the current connection.

Example:

SELECT CURRENT_CONNECTION "current connection" FROM DUMMY; --400,038

6.5.3 CURRENT_SCHEMA
Syntax:

CURRENT_SCHEMA

Description:

Returns the name of the current schema.

Example:

SELECT CURRENT_SCHEMA "current schema" FROM DUMMY; --SYSTEM

6.5.4 CURRENT_USER
Syntax:

CURRENT_USER

Description:

Returns the user name of the current statement context, i.e. the user name at the top of the current authorization stack.

Example:

-- Basic SQL executed as the SYSTEM user

SELECT CURRENT_USER "current user" FROM DUMMY; --SYSTEM

-- User USER_A creates a stored procedure

CREATE PROCEDURE USER_A.PROC1 LANGUAGE SQLSCRIPT SQL SECURITY DEFINER AS

BEGIN

       SELECT CURRENT_USER "current user" FROM DUMMY;

END;

-- Called by USER_A

CALL USER_A.PROC1; --USER_A

-- Called by USER_B after being granted the privilege to execute USER_A.PROC1

CALL USER_A.PROC1; --USER_A

-- After schema USER_A is granted to user USER_B, USER_B creates a stored procedure

CREATE PROCEDURE USER_A.PROC2 LANGUAGE SQLSCRIPT SQL SECURITY INVOKER AS

BEGIN

       SELECT CURRENT_USER "current user" FROM DUMMY;

END;

-- Called by USER_A

CALL USER_A.PROC2; --USER_A

-- Called by USER_B

CALL USER_A.PROC2; --USER_B

6.5.5 GROUPING_ID
Syntax:

GROUPING_ID(column_name_list)

Description:

The GROUPING_ID function can be used with GROUPING SETS to return multilevel aggregates in a single result set. GROUPING_ID returns an integer identifying the grouping set each row belongs to. Each column argument of GROUPING_ID must be an element of the GROUPING SETS.

The GROUPING_ID is assigned by converting the bit vector produced by the GROUPING SETS to a decimal number, treating the bit vector as a binary number. In forming the bit vector, a 0 is assigned to each column that is part of the row's grouping set and a 1 otherwise, in the order in which the columns appear in the GROUPING_ID argument list. Treating the bit vector as a binary number, the function returns an integer value as output.
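
For GROUPING_ID(customer, year, product), the bit-vector-to-integer mapping works out as follows (0 when the column is part of the grouping set, 1 when it is aggregated away; this listing is an illustration derived from the rule above):

(customer, year, product) -> 000 -> 0
(customer, year)          -> 001 -> 1
(customer, product)       -> 010 -> 2
(customer)                -> 011 -> 3
(year, product)           -> 100 -> 4
(year)                    -> 101 -> 5
(product)                 -> 110 -> 6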

Example:

SELECT customer, year, product, SUM(sales), GROUPING_ID(customer, year, product)

FROM guided_navi_tab

GROUP BY GROUPING SETS ((customer, year, product), (customer, year), (customer, product), (year, product), (customer), (year), (product));

CUSTOMER  YEAR  PRODUCT  SUM(SALES)  GROUPING_ID(CUSTOMER,YEAR,PRODUCT)

1   C1    2009  P1    100  0
2   C1    2010  P1    50   0
3   C2    2009  P1    200  0
4   C2    2010  P1    100  0
5   C1    2009  P2    200  0
6   C1    2010  P2    150  0
7   C2    2009  P2    300  0
8   C2    2010  P2    150  0
9   C1    2009  NULL  300  1
10  C1    2010  NULL  200  1
11  C2    2009  NULL  500  1
12  C2    2010  NULL  250  1
13  C1    NULL  P1    150  2
14  C2    NULL  P1    300  2
15  C1    NULL  P2    350  2
16  C2    NULL  P2    450  2
17  NULL  2009  P1    300  4
18  NULL  2010  P1    150  4
19  NULL  2009  P2    500  4
20  NULL  2010  P2    300  4
21  C1    NULL  NULL  500  3
22  C2    NULL  NULL  750  3
23  NULL  2009  NULL  800  5
24  NULL  2010  NULL  450  5
25  NULL  NULL  P1    450  6
26  NULL  NULL  P2    800  6

(NULL appears where a column is not part of the row's grouping set.)

6.5.6 IFNULL
Syntax:

IFNULL (expression1, expression2)

Description:

Returns the first input expression that is not NULL.

Returns expression1 if expression1 is not NULL.

Returns expression2 if expression1 is NULL.

Returns NULL if both input expressions are NULL.

Example:

SELECT IFNULL ('diff', 'same') "ifnull" FROM DUMMY; --diff

SELECT IFNULL (NULL, 'same') "ifnull" FROM DUMMY; --same

SELECT IFNULL (NULL, NULL) "ifnull" FROM DUMMY; --NULL

6.5.7 MAP
Syntax:

MAP (expression, search1, result1 [, search2, result2] ... [, default_result])

Description:

Searches for expression among the search values and returns the corresponding result.

If the expression value is not found and default_result is defined, MAP returns default_result.

If the expression value is not found and default_result is not defined, MAP returns NULL.

Note:

Search values and their corresponding results must always be provided in search-result pairs.

Example:

SELECT MAP(2, 0, 'Zero', 1, 'One', 2, 'Two', 3, 'Three', 'Default') "map" FROM DUMMY; --Two

SELECT MAP(99, 0, 'Zero', 1, 'One', 2, 'Two', 3, 'Three', 'Default') "map" FROM DUMMY; --Default

SELECT MAP(99, 0, 'Zero', 1, 'One', 2, 'Two', 3, 'Three') "map" FROM DUMMY; --NULL

6.5.8 NULLIF
Syntax:

NULLIF (expression1, expression2)

Description:

NULLIF compares the values of the two input expressions and returns NULL if the first expression equals the second.

If expression1 does not equal expression2, NULLIF returns expression1.

If expression2 is NULL, NULLIF returns expression1.

The first argument cannot be NULL.

Example:

SELECT NULLIF ('diff', 'same') "nullif" FROM DUMMY; --diff

SELECT NULLIF('same', 'same') "nullif" FROM DUMMY; --NULL

SELECT NULLIF('same', null) "nullif" FROM DUMMY; --same

6.5.9 SESSION_CONTEXT
Syntax:

SESSION_CONTEXT(session_variable)

Description:

Returns the value of session_variable assigned to the current user.

The session_variable accessed can be predefined or user-defined. The predefined session variables that can be set by the client are 'APPLICATION', 'APPLICATIONUSER' and 'TRACEPROFILE'.

Session variables can be defined or modified with the command SET [SESSION] <variable_name> = <value> and unset with UNSET [SESSION] <variable_name>.

SESSION_CONTEXT returns an NVARCHAR with a maximum length of 512 characters.

Example:

Reading a session variable:

SELECT SESSION_CONTEXT('APPLICATION') "session context" FROM DUMMY; --HDBStudio
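
A user-defined session variable can be set, read back, and unset the same way, following the SET/UNSET commands described above (the variable name 'MY_VAR' and its value are illustrative):

SET SESSION 'MY_VAR' = 'my value';

SELECT SESSION_CONTEXT('MY_VAR') "session context" FROM DUMMY; --my value

UNSET SESSION 'MY_VAR';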

6.5.10 SESSION_USER
Syntax:

SESSION_USER

Description:

Returns the user name of the current session.

Example:

-- Basic example executed as the SYSTEM user

SELECT SESSION_USER "session user" FROM DUMMY; --SYSTEM

6.5.11 SYSUUID
Syntax:

SYSUUID

Description:

Returns the SYSUUID of the connected SAP HANA instance.

Example:

SELECT SYSUUID FROM DUMMY; --557A323598FE12F4E20036775F49B32D

7 SQL Statements
This chapter describes the SQL statements supported by the SAP HANA database.

- Schema Definition and Manipulation Statements
- Data Manipulation Statements
- System Management Statements
- Session Management Statements
- Transaction Management Statements
- Access Control Statements
- Data Import Export Statements

7.1 Data definition statement
7.1.1 ALTER AUDIT POLICY
Syntax:

ALTER AUDIT POLICY <policy_name> <audit_mode>

Syntax elements:

<policy_name> ::= <identifier>

The name of the audit policy to be changed.

<audit_mode> ::= ENABLE | DISABLE

audit_mode enables or disables the audit policy.

ENABLE: enables the audit policy.

DISABLE: disables the audit policy.

Description:

The ALTER AUDIT POLICY statement enables or disables an audit policy. <policy_name> must be the name of an existing audit policy.

Only database users with the system privilege AUDIT ADMIN are allowed to change audit policies. Any database user with this privilege can modify any audit policy, whether or not that user created it.

Newly created audit policies are disabled by default, so no auditing takes place. An audit policy must therefore be enabled for auditing to be performed.

Audit policies can be disabled and enabled as needed.

Configuration parameters:

The following audit configuration parameters are stored in the file global.ini, in the auditing configuration section:

global_auditing_state ( 'true' / 'false' )

Regardless of how many audit policies are enabled, auditing only takes place if the configuration parameter global_auditing_state is set to true; the default is false.

default_audit_trail_type ( 'SYSLOGPROTOCOL' / 'CSVTEXTFILE' ) specifies how the audit results are stored.

SYSLOGPROTOCOL: uses the system syslog.

CSVTEXTFILE: audit information is stored comma-separated in a text file.

default_audit_trail_path

Specifies the file path where the CSVTEXTFILE is stored.

If the user has the required system privileges, these parameters can be selected from the monitoring view M_INIFILE_CONTENTS. They are only visible if they have been set explicitly.

System tables and monitoring views:

AUDIT_POLICY: shows all audit policies and their states.

M_INIFILE_CONTENTS: shows the database system configuration parameters.

Only users with the system privilege CATALOG READ, DATA ADMIN or INIFILE ADMIN can view the contents of the M_INIFILE_CONTENTS view; for all other users it is empty.

Example:

For this example you first need to create an audit policy named priv_audit with the following statement:

CREATE AUDIT POLICY priv_audit AUDITING SUCCESSFUL GRANT PRIVILEGE, REVOKE PRIVILEGE, GRANT ROLE, REVOKE ROLE LEVEL CRITICAL;

Now you can enable the audit policy:

ALTER AUDIT POLICY priv_audit ENABLE;

You can also disable the audit policy:

ALTER AUDIT POLICY priv_audit DISABLE;

7.1.2 ALTER FULLTEXT INDEX
Syntax:

ALTER FULLTEXT INDEX <index_name> <alter_fulltext_index_option>

Syntax elements:

<index_name> ::= <identifier>

The identifier of the fulltext index to be altered.

<alter_fulltext_index_option> ::= <fulltext_parameter_list> | <queue_command> QUEUE

Defines whether the parameters of the fulltext index or the state of the fulltext index queue should be changed. The latter is only available for asynchronous explicit fulltext indexes.

<fulltext_parameter_list> ::= <fulltext_parameter> [, ...]

The list of fulltext index parameters to be changed:

<fulltext_parameter> ::= FUZZY SEARCH INDEX <on_off>

| PHRASE INDEX RATIO <index_ratio>

| <change_tracking_elem>

<on_off> ::= ON | OFF

FUZZY SEARCH INDEX

Toggles use of the fuzzy search index.

PHRASE INDEX RATIO

Defines the phrase index ratio.

<index_ratio> ::= <float_literal>

Defines the percentage of the phrase index ratio; the value must be between 0.0 and 1.0.

SYNC[HRONOUS]

Changes the fulltext index to synchronous mode.

ASYNC[HRONOUS]

Changes the fulltext index to asynchronous mode.

<flush_queue_elem> ::= EVERY <integer_literal> MINUTES

| AFTER <integer_literal> DOCUMENTS

| EVERY <integer_literal> MINUTES OR AFTER <integer_literal> DOCUMENTS

When an asynchronous index is used, flush_queue_elem defines when the fulltext index is updated.

<queue_command> ::= FLUSH | SUSPEND | ACTIVATE

FLUSH

Updates the fulltext index with the documents processed so far in the queue.

SUSPEND

Suspends the processing queue of the fulltext index.

ACTIVATE

Activates the processing queue of the fulltext index.

Description:

With this command you can change the parameters of a fulltext index or the state of its processing queue. The queue is the mechanism that enables a fulltext index to work asynchronously, instead of, for example, blocking insert operations until document processing has finished.

The statement ALTER FULLTEXT INDEX <index_name> <fulltext_elem_list> changes the parameters of the fulltext index.

The statement ALTER FULLTEXT INDEX <index_name> <queue_parameters> changes the state of the processing queue of an asynchronous fulltext index.

Example:

ALTER FULLTEXT INDEX i1 PHRASE INDEX RATIO 0.3 FUZZY SEARCH INDEX ON

In the example above, the phrase index ratio of fulltext index 'i1' is set to 30% and the fuzzy search index is enabled.

ALTER FULLTEXT INDEX i2 SUSPEND QUEUE

Suspends the queue of fulltext index 'i2'.

ALTER FULLTEXT INDEX i2 FLUSH QUEUE

Updates fulltext index 'i2' with the documents already processed in the queue.

7.1.3 ALTER INDEX
Syntax:

ALTER INDEX <index_name> REBUILD

Syntax elements:

<index_name> ::= <identifier>

The name of the index to rebuild.

Description:

The ALTER INDEX statement rebuilds an index.

Example:

The following example rebuilds the index idx:

ALTER INDEX idx REBUILD;

7.1.4 ALTER SEQUENCE
Syntax:

ALTER SEQUENCE <sequence_name> [<alter_sequence_parameter_list>]

[RESET BY <reset_by_subquery>]

Syntax elements:

<sequence_name> ::= <identifier>

The name of the sequence to be altered.

<alter_sequence_parameter_list> ::= <alter_sequence_parameter>, ...

<alter_sequence_parameter> ::= <sequence_parameter_restart_with>

| <basic_sequence_parameter>

<sequence_parameter_restart_with> ::= RESTART WITH <restart_value>

<basic_sequence_parameter> ::= INCREMENT BY <increment_value>

| MAXVALUE <maximum_value>

| NO MAXVALUE

| MINVALUE <minimum_value>

| NO MINVALUE

| CYCLE

| NO CYCLE

RESTART WITH

The starting value of the sequence. If you do not specify a value for the RESTART WITH clause, the current sequence value will be used.

<restart_value> ::= <integer_literal>

The first value provided by the sequence generator is an integer between 0 and 4611686018427387903.

INCREMENT BY

The sequence increment value.

<increment_value> ::= <integer_literal>

Increments or decrements the value of a sequence by an integer.

MAXVALUE

Defines the maximum value generated by the sequence.

<maximum_value> ::= <integer_literal>

A positive integer defining the maximum value that the sequence can generate and must be between 0 and 4611686018427387903.

NO MAXVALUE

With the NO MAXVALUE directive, the maximum value of an ascending sequence is 4611686018427387903, and the maximum value of a descending sequence is -1.

MINVALUE

Defines the minimum value generated by the sequence.

<minimum_value> ::= <integer_literal>

A positive integer defining the minimum value that the sequence can generate and must be between 0 and 4611686018427387903.

NO MINVALUE

With the NO MINVALUE directive, the minimum value of an ascending sequence is 1, and the minimum value of a descending sequence is -4611686018427387903.

CYCLE

With the CYCLE instruction, the sequence will restart after reaching the maximum or minimum value.

NO CYCLE

With the NO CYCLE instruction, the sequence will not restart after reaching the maximum or minimum value.

<reset_by_subquery> ::= <subquery>

During a system restart, the system automatically executes the RESET BY statement and restarts the sequence with the value determined by the RESET BY subquery.

For details on subqueries, see Subquery.

describe:

The ALTER SEQUENCE statement is used to modify the parameters of the sequence generator.

example:

In the example below, you change the starting value of sequence seq to 2.

ALTER SEQUENCE seq RESTART WITH 2;

In the example below, you change sequence seq to have a maximum value of 100 and no minimum value.

ALTER SEQUENCE seq MAXVALUE 100 NO MINVALUE;

In the example below, you change the increment of sequence seq to 2 and specify "no cycle".

ALTER SEQUENCE seq INCREMENT BY 2 NO CYCLE;

In the following example, you first create table r with column a. Then you modify the RESET BY subquery of sequence seq to the maximum value contained in column a.

CREATE TABLE r (a INT);

ALTER SEQUENCE seq RESET BY SELECT MAX(a) FROM r;

7.1.5 ALTER TABLE
Syntax:

ALTER TABLE  [<schema_name>.]<table_name>{

<add_column_clause>

| <drop_column_clause>

| <alter_column_clause>

| <add_primary_key_clause>

| <drop_primary_key_clause>

| <preload_clause>

| <table_conversion_clause>

| <move_clause>

| <add_range_partition_clause>

| <move_partition_clause>

| <drop_range_partition_clause>

| <partition_by_clause>

| <disable_persistent_merge_clause>

| <enable_persistent_merge_clause>

| <enable_delta_log>

| <disable_delta_log>

| <enable_automerge>

| <disable_automerge>

}

Syntax elements:

<add_column_clause> ::= ADD ( <column_definition> [<column_constraint>], ...)

<drop_column_clause> ::= DROP ( <column_name>, ... )

<alter_column_clause> ::= ALTER ( <column_definition> [<column_constraint>], ... )

<column_definition> ::= <column_name> <data_type> [<column_store_data_type>][<ddic_data_type>] [DEFAULT <default_value>] [GENERATED ALWAYS AS <expression>]

<column_constraint> ::= NULL| NOT NULL| UNIQUE [BTREE | CPBTREE]| PRIMARY KEY [BTREE | CPBTREE]

<data_type> ::= DATE | TIME | SECONDDATE | TIMESTAMP | TINYINT | SMALLINT | INTEGER | BIGINT |SMALLDECIMAL | DECIMAL | REAL | DOUBLE| VARCHAR |

                                NVARCHAR | ALPHANUM | SHORTTEXT |VARBINARY | BLOB| CLOB | NCLOB | TEXT

<column_store_data_type> ::= CS_ALPHANUM | CS_INT | CS_FIXED | CS_FLOAT | CS_DOUBLE |CS_DECIMAL_FLOAT | CS_FIXED(p-s, s) | CS_SDFLOAT| CS_STRING |

CS_UNITEDECFLOAT | CS_DATE | CS_TIME| CS_FIXEDSTRING | CS_RAW | CS_DAYDATE | CS_SECONDTIME | CS_LONGDATE | CS_SECONDDATE

<ddic_data_type> ::= DDIC_ACCP | DDIC_ALNM | DDIC_CHAR | DDIC_CDAY | DDIC_CLNT | DDIC_CUKY| DDIC_CURR | DDIC_D16D | DDIC_D34D | DDIC_D16R |

DDIC_D34R | DDIC_D16S | DDIC_D34S| DDIC_DATS | DDIC_DAY | DDIC_DEC | DDIC_FLTP | DDIC_GUID| DDIC_INT1 | DDIC_INT2 | DDIC_INT4 | DDIC_INT8 | DDIC_LANG | DDIC_LCHR | DDIC_MIN|DDIC_MON| DDIC_LRAW | DDIC_NUMC | DDIC_PREC | DDIC_QUAN | DDIC_RAW| DDIC_RSTR | DDIC_SEC | DDIC_SRST | DDIC_SSTR | DDIC_STRG | DDIC_STXT | DDIC_TIMS| DDIC_UNIT| DDIC_UTCM | DDIC_UTCL | DDIC_UTCS | DDIC_TEXT | DDIC_VARC | DDIC_WEEK

<default_value> ::= NULL | <string_literal> | <signed_numeric_literal> | <unsigned_numeric_literal>

DEFAULT: DEFAULT defines the value that is assigned to the column if an INSERT statement does not provide one.

GENERATED ALWAYS AS: Specifies an expression that generates the column value at run time.

<column_constraint> ::= NULL| NOT NULL| UNIQUE [BTREE | CPBTREE]| PRIMARY KEY [BTREE | CPBTREE]

NULL | NOT NULL: NOT NULL prohibits NULL values in the column. If NULL is specified, it is not treated as a constant; it indicates that the column may contain NULL values. The default is NULL.

UNIQUE: Specifies the column as a unique key. A composite unique key specifies several columns together as a unique key. With a unique constraint, no two rows can have the same value in the specified columns.

PRIMARY KEY: A primary key constraint is a combination of a NOT NULL constraint and a UNIQUE constraint that prohibits multiple rows from having the same value in the same column.

BTREE | CPBTREE: Specifies the index type. When the data type of the column is a string, binary string, or decimal number, or when the constraint is a composite key or a non-unique key, the default index type is CPBTREE; otherwise BTREE is used.

In order to use a B+-tree index, the BTREE keyword must be used; for a CPB+-tree index, the CPBTREE keyword must be used.

A B+-tree is a tree that maintains sorted data for efficient insertion, deletion, and search of records.

The CPB+-tree represents the compressed prefix B+-tree, which is based on the pkB-tree. A CPB+ tree is a very small index because it uses "partial keys", which are just a part of the full key of an index node. For larger keys,

CPB+-trees exhibit better performance than B+-trees.

If the index type is omitted, SAP HANA Database will choose an appropriate index considering the data type of the column.

It is possible to increase the length of a column with ALTER. When a column definition in column storage is modified, no error is returned because the database performs no checks at that point. Errors can occur later, when columns are selected and the stored data does not conform to the newly defined data type. ALTER does not apply data type conversion rules.

It is possible to add a NOT NULL constraint to an existing column if the table is empty, or if the table contains data and the column defines a default value.
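As a minimal sketch of this rule (table t and its contents are hypothetical), a NOT NULL constraint can be added to a populated column when a default value is supplied:

```sql
-- Hypothetical table with existing rows
CREATE TABLE t (a INT, b INT);
INSERT INTO t VALUES (1, NULL);

-- Adding NOT NULL to column b succeeds because a default value is defined
ALTER TABLE t ALTER (b INT DEFAULT 0 NOT NULL);
```

Without the DEFAULT clause, the same statement would fail on a non-empty table, since existing NULLs could not be filled.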

<add_primary_key_clause> ::= ADD [CONSTRAINT <constraint_name>] PRIMARY KEY( <column_name>, ... )

ADD PRIMARY KEY: Add a primary key.

PRIMARY KEY: A primary key constraint is a combination of a NOT NULL constraint and a UNIQUE constraint that prohibits multiple rows from having the same value in the same column.

CONSTRAINT: Specifies the constraint name.

<drop_primary_key_clause> ::= DROP PRIMARY KEY

DROP PRIMARY KEY: Delete the primary key constraint.

<preload_clause> ::= PRELOAD ALL | PRELOAD ( <column_name> ) | PRELOAD NONE

PRELOAD: Sets or removes the preload flag for the given table or columns. PRELOAD ALL sets the preload flag for all columns of the table, PRELOAD ( <column_name> ) sets the flag for the specified column, and PRELOAD NONE removes the flag from all columns. As a result, the flagged tables are loaded into memory automatically after the index server starts. The current state of the preload flag is visible in the system table TABLES, column PRELOAD, with the possible values 'FULL', 'PARTIALLY' and 'NO', and in the system table TABLE_COLUMNS, column PRELOAD, with the possible values 'TRUE' and 'FALSE'.

<table_conversion_clause> ::= [ALTER TYPE] {ROW [THREADS <number_of_threads>] | COLUMN[THREADS <number_of_threads> [BATCH <batch_size>]]}

ALTER TYPE ROW | COLUMN: This command converts the table storage type from row to column or from column to row.

THREADS <number_of_threads>: Specifies the number of parallel threads to use for the table conversion. The optimal number of threads is the number of available CPU cores.

Default: The default value is param_sql_table_conversion_parallelism, i.e. the number of CPU cores, as defined in the indexserver.ini file.

BATCH <batch_size>: Specifies the number of rows inserted per batch; the default of 2,000,000 is the optimal value. The insert operation is committed after every <batch_size> records, which reduces memory consumption. The BATCH option can only be used when converting a table from row storage to column storage. A batch size greater than 2,000,000 can lead to high memory consumption, so changing this value is not recommended.

Table conversion creates a new table with a different storage type and copies the columns and data of the existing table. If the source table uses row storage, the newly created table uses column storage, and vice versa.

<move_clause> ::= MOVE [PARTITION <partition_number>] TO [LOCATION ]<host_port> [PHYSICAL] |MOVE [PARTITION <partition_number>] PHYSICAL

MOVE moves a table to another location in a distributed environment. The port number is the internal index server port, 3xx03. For a partitioned table, individual partitions can be moved by specifying the optional partition number.

When moving a partitioned table, omitting the partition number causes an error.

The PHYSICAL keyword applies only to column store tables; row store tables are always moved physically. If the optional keyword PHYSICAL is specified, the persistent storage is moved to the target host immediately. Otherwise the move creates a link in the persistence layer of the new host that points to the persistence layer of the old host; this link is removed by the next merge or move operation. A PHYSICAL move without the TO <host_port> part removes persistence-layer links that remain from a previous move.

LOCATION is supported only for backward compatibility.
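A minimal sketch of a physical move (table name, host, and port are hypothetical placeholders; the port is the internal index server port of the form 3xx03):

```sql
-- Move table t to the index server at myhost:30003 and relocate its
-- persistence immediately instead of leaving a persistence-layer link
ALTER TABLE t MOVE TO 'myhost:30003' PHYSICAL;
```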

<add_range_partition_clause> ::= ADD  PARTITION <lower_value> <= VALUES < <upper_value>| PARTITION <value_or_values> = <target_value>| PARTITION OTHERS

ADD PARTITION: Adds partitions to a table partitioned by RANGE, HASH-RANGE, or ROUNDROBIN-RANGE. When adding partitions to a range-partitioned table, the remaining partitions can optionally be repartitioned.

<drop_range_partition_clause> ::= DROP  PARTITION <lower_value> <= VALUES < <upper_value>| PARTITION <value_or_values> = <target_value>| PARTITION OTHERS

DROP PARTITION: Drop partitions of tables partitioned according to RANGE, HASH RANGE, ROUNDROBIN RANGE.

<partition_clause> ::= PARTITION BY <hash_partition> [,<range_partition> | ,<hash_partition>]| PARTITION BY <range_partition>| PARTITION BY <roundrobin_partition>

[,<range_partition>]

<hash_partition> ::=HASH (<partition_expression>[, ...]) PARTITIONS { <num_partitions> |GET_NUM_SERVERS() }

<range_partition> ::= RANGE ( <partition_expression> ) ( <range_spec> )

<roundrobin_partition> ::= ROUNDROBIN PARTITIONS {<num_partitions> |GET_NUM_SERVERS()}

<range_spec> ::= {<from_to_spec> | <single_spec>[,...] } [, PARTITION OTHERS]

<from_to_spec> ::= PARTITION <lower_value> <= VALUES < <upper_value>

<single_spec> ::= PARTITION VALUE <single_value>

<partition_expression> ::= <column_name>| YEAR(<column_name>) | MONTH(<column_name>)

PARTITION BY: Partitions the table using RANGE, HASH-RANGE, or ROUNDROBIN-RANGE. For details on the table partitioning clauses, see CREATE TABLE.

<merge_partition_clause> ::= MERGE PARTITIONS

MERGE PARTITIONS: Merges all parts of a partitioned table into a non-partitioned table.

<disable_persistent_merge_clause> ::= DISABLE PERSISTENT MERGE

DISABLE PERSISTENT MERGE: Instructs the merge manager to use in-memory merges instead of persistent merges for the given table.

<enable_persistent_merge_clause> ::= ENABLE PERSISTENT MERGE

ENABLE PERSISTENT MERGE: Instructs the merge manager to use persistent merges for the given table.

<enable_delta_log> ::= ENABLE DELTA LOG

ENABLE DELTA LOG: Enables logging for the table. After enabling, you must perform a savepoint to ensure that all data is persisted, and you must perform a data backup; otherwise the data cannot be recovered.

<disable_delta_log> ::= DISABLE DELTA LOG

DISABLE DELTA LOG: Disables logging for the table. If disabled, the table is not logged; changes to the table are written to the data area only when a savepoint is performed. In case of a crash, this can cause committed transactions to be lost.

Use this command on initial load only!

<enable_automerge> ::= ENABLE AUTOMERGE

ENABLE AUTOMERGE: Instructs the merge manager to handle the table automatically.

<disable_automerge> ::= DISABLE AUTOMERGE

DISABLE AUTOMERGE: Instructs the merge manager to ignore the table.

Example:

Table t is created, and the default value of column b is then set to 10.

CREATE TABLE t (a INT, b INT);

ALTER TABLE t ALTER (b INT DEFAULT 10);

Column c is added to table t.

ALTER TABLE t ADD (c NVARCHAR(10) DEFAULT 'NCHAR');

Create the primary key constraint prim_key for table t.

ALTER TABLE t ADD CONSTRAINT prim_key PRIMARY KEY (a, b);

The table t type is converted to columnar (COLUMN).

ALTER TABLE t COLUMN;

Set the preload flags for columns b and c

ALTER TABLE t PRELOAD (b, c);

Table t is partitioned using RANGE, and another partition is added.

ALTER TABLE t PARTITION BY RANGE (a) (PARTITION VALUE = 1, PARTITION OTHERS);

ALTER TABLE t ADD PARTITION 2 <= VALUES < 10;

The session type of table t is changed to HISTORY

ALTER TABLE t CREATE HISTORY;

Disable logging for table t.

ALTER TABLE t DISABLE DELTA LOG;

7.1.6 CREATE AUDIT POLICY
Syntax:

CREATE AUDIT POLICY <policy_name> AUDITING <audit_status_clause> <audit_action_list> LEVEL <audit_level>

Grammatical elements:

<audit_status_clause> ::= SUCCESSFUL | UNSUCCESSFUL | ALL

<audit_action_list> ::= <audit_action_name>[,<audit_action_name>]...

<audit_action_name> ::=GRANT PRIVILEGE | REVOKE PRIVILEGE| GRANT STRUCTURED PRIVILEGE | REVOKE STRUCTURED PRIVILEGE| GRANT ROLE | REVOKE ROLE| GRANT

ANY | REVOKE ANY| CREATE USER | DROP USER| CREATE ROLE | DROP ROLE| ENABLE AUDIT POLICY | DISABLE AUDIT POLICY| CREATE STRUCTURED PRIVILEGE| DROP STRUCTURED PRIVILEGE| ALTER STRUCTURED PRIVILEGE| CONNECT| SYSTEM CONFIGURATION CHANGE| SET SYSTEM LICENSE| UNSET SYSTEM LICENSE

<audit_level> ::=EMERGENCY| ALERT| CRITICAL| WARNING| INFO

Description:

The CREATE AUDIT POLICY statement creates a new audit policy. The policy can be enabled later and then causes the specified audit actions to be audited.

Only users with the system privilege AUDIT ADMIN can create audit policies.

The specified audit policy name must differ from the names of existing audit policies.

An audit policy defines the audit actions that are audited. An audit policy has to be enabled before any auditing takes place.

<audit_status_clause> defines whether successful, unsuccessful, or all executions of the specified audit actions are audited.

The following audit actions are available. They are grouped; audit actions of the same group can be combined into one audit policy, while audit actions of different groups cannot.

GRANT PRIVILEGE                1   Audits the granting of privileges to users or roles
REVOKE PRIVILEGE               1   Audits the revocation of privileges from users or roles
GRANT STRUCTURED PRIVILEGE     1   Audits the granting of structured/analytical privileges to users
REVOKE STRUCTURED PRIVILEGE    1   Audits the revocation of structured/analytical privileges from users
GRANT ROLE                     1   Audits the granting of roles to users or roles
REVOKE ROLE                    1   Audits the revocation of roles from users or roles
GRANT ANY                      1   Audits the granting of privileges, structured/analytical privileges, or roles to users or roles
REVOKE ANY                     1   Audits the revocation of privileges, structured/analytical privileges, or roles from users or roles
CREATE USER                    2   Audits the creation of users
DROP USER                      2   Audits the deletion of users
CREATE ROLE                    2   Audits the creation of roles
DROP ROLE                      2   Audits the deletion of roles
CONNECT                        3   Audits users connecting to the database
SYSTEM CONFIGURATION CHANGE    4   Audits changes to the system configuration (e.g. INIFILE)
ENABLE AUDIT POLICY            5   Audits the activation of audit policies
DISABLE AUDIT POLICY           5   Audits the deactivation of audit policies
CREATE STRUCTURED PRIVILEGE    6   Audits the creation of structured/analytical privileges
DROP STRUCTURED PRIVILEGE      6   Audits the deletion of structured/analytical privileges
ALTER STRUCTURED PRIVILEGE     6   Audits the modification of structured/analytical privileges
SET SYSTEM LICENSE             7   Audits the installation of licenses
UNSET SYSTEM LICENSE           7   Audits the deletion of licenses

Each audit action is assigned an audit level. The possible values, in decreasing order of importance, are: EMERGENCY, ALERT, CRITICAL, WARNING, INFO.

For auditing to take place, an audit policy must be created and enabled, and global_auditing_state (see below) must be set to true.

Configuration parameters:

Currently, the configuration parameters for auditing are stored in global.ini, in the auditing configuration section, as follows:

global_auditing_state ( 'true' / 'false' ) activates or deactivates auditing globally, regardless of how many audit policies exist and are enabled. The default is false, meaning that no auditing takes place.

default_audit_trail_type ( 'SYSLOGPROTOCOL' / 'CSVTEXTFILE' ) defines how the audit results are stored.

SYSLOGPROTOCOL is the default.

CSVTEXTFILE should be used for test purposes only.

default_audit_trail_path specifies where the file is stored if CSVTEXTFILE has been selected.

All of these configuration parameters can be selected from the view M_INIFILE_CONTENTS if the current user has the required privilege. Currently, the parameters are only visible after they have been explicitly set, which means they are not visible in a freshly installed database instance.

System and monitoring views:

AUDIT_POLICY: shows all audit policies and their states.

M_INIFILE_CONTENTS: shows the auditing-related configuration parameters.

Only database users with the CATALOG READ, DATA ADMIN, or INIFILE ADMIN privilege can see any information in the view M_INIFILE_CONTENTS; for other users the view is empty.

Example:

The audit policy priv_audit created below audits successful granting and revoking of privileges and roles; the policy has the medium audit level CRITICAL.

The policy must be enabled explicitly (see ALTER AUDIT POLICY) before any auditing defined by it takes place.

CREATE AUDIT POLICY priv_audit AUDITING SUCCESSFUL GRANT PRIVILEGE, REVOKE PRIVILEGE, GRANT ROLE, REVOKE ROLE LEVEL CRITICAL;

7.1.7     CREATE FULLTEXT INDEX
Syntax:

CREATE FULLTEXT INDEX <index_name> ON <tableref> '(' <column_name> ')' [<fulltext_parameter_list>]

Grammatical elements:

<fulltext_parameter_list> ::= <fulltext_parameter> [, ...]

<fulltext_parameter> ::= LANGUAGE COLUMN <column_name>

| LANGUAGE DETECTION '(' <string_literal_list> ')'

| MIME TYPE COLUMN <column_name>

| <change_tracking_elem>

| FUZZY SEARCH INDEX <on_off>

| PHRASE INDEX RATIO <on_off>

| CONFIGURATION <string_literal>

| SEARCH ONLY <on_off>

| FAST PREPROCESS <on_off>

<on_off> ::= ON | OFF

LANGUAGE COLUMN: specifies the column containing the language of the document.

LANGUAGE DETECTION: the set of languages used for language detection.

MIME TYPE COLUMN: specifies the column containing the mime-type of the document.

FUZZY SEARCH INDEX: specifies whether a fuzzy search index is used.

PHRASE INDEX RATIO: specifies the percentage of the phrase index ratio; the value must be between 0.0 and 1.0.

CONFIGURATION: path to a custom configuration file for text analysis.

SEARCH ONLY: if set to ON, the original document content is not stored.

FAST PREPROCESS: if set to ON, fast preprocessing is used; in that case, for example, linguistic search is not available.

<change_tracking_elem> ::= SYNC[HRONOUS]| ASYNC[HRONOUS] [FLUSH [QUEUE]<flush_queue_elem>]

SYNC: specifies that a synchronous full-text index is created.

ASYNC: specifies that an asynchronous full-text index is created.

<flush_queue_elem> ::= EVERY <integer_literal> MINUTES| AFTER <integer_literal> DOCUMENTS| EVERY <integer_literal> MINUTES OR AFTER <integer_literal> DOCUMENTS

Specifies when to update the full-text index if asynchronous indexing is used.

Description:

The CREATE FULLTEXT INDEX statement creates an explicit full-text index on a given table.

Example:

CREATE FULLTEXT INDEX i1 ON A(C) FUZZY SEARCH INDEX OFF

SYNC

LANGUAGE DETECTION ('EN','DE','KR')

The above example creates a full-text index named 'i1' in column C of table A, no fuzzy search index is used, and the language set of the language detection setting consists of 'EN', 'DE' and 'KR'.

7.1.8 CREATE INDEX
Syntax:

CREATE [UNIQUE] [BTREE | CPBTREE] INDEX  [<schema_name>.]<index_name> ON <table_name>(<column_name_order>, ...) [ASC | DESC]

Grammatical elements:

<column_name_order> ::= <column_name> [ASC | DESC]

UNIQUE: Used to create a unique index. Duplicate checks will be performed when indexes are created and records are added to the table.

BTREE | CPBTREE: used to select the type of index used. When the data type of the column is string, binary string, decimal number or the constraint is a composite key, or a non-unique key, the default index type is CPBTREE, otherwise BTREE is used.

In order to use a B+-tree index, the BTREE keyword must be used; for a CPB+-tree index, the CPBTREE keyword must be used.

A B+-tree is a tree that maintains sorted data for efficient insertion, deletion, and search of records.

The CPB+-tree represents the compressed prefix B+-tree, which is based on the pkB-tree. A CPB+ tree is a very small index because it uses "partial keys", which are just a part of the full key of an index node. For larger keys, CPB+-trees exhibit better performance than B+-trees.

If the index type is omitted, SAP HANA Database will choose an appropriate index considering the data type of the column.

ASC | DESC: Specifies to create the index in an ascending or descending manner.

These keywords can only be used in btree indexes, and can only be used once for each column.

Description:

The CREATE INDEX statement creates an index.

Example:

After table t is created, create a CPBTREE index idx on column b of table t in ascending order.

CREATE TABLE t (a INT, b NVARCHAR(10), c NVARCHAR(20));

CREATE INDEX idx ON t(b);

Create CPBTREE index idx1 on column a of table t in ascending order and on column b in descending order:

CREATE CPBTREE INDEX idx1 ON t(a, b DESC);

Creates the CPBTREE index idx2 on columns a and c of table t in descending order.

CREATE INDEX idx2 ON t(a, c) DESC;

Creates a unique CPBTREE index idx3 on columns b and c of table t in ascending order.

CREATE UNIQUE INDEX idx3 ON t(b, c);

Creates a unique BTREE index idx4 on column a of table t in ascending order.

CREATE UNIQUE INDEX idx4 ON t(a);

7.1.9     CREATE SCHEMA
Syntax:

CREATE SCHEMA <schema_name> [OWNED BY <user_name>]

OWNED BY: specifies the name of the schema owner. If omitted, the current user is the schema owner.

Description:

The CREATE SCHEMA statement creates a schema in the current database.

Example:

CREATE SCHEMA my_schema OWNED BY system;

7.1.10   CREATE SEQUENCE
Syntax:

CREATE SEQUENCE <sequence_name> [<common_sequence_parameter_list>] [RESET BY <subquery>]

Grammatical elements:

<common_sequence_parameter_list> ::= <common_sequence_parameter>, ...

<common_sequence_parameter> ::= START WITH n | <basic_sequence_parameter>

<basic_sequence_parameter> ::= INCREMENT BY n| MAXVALUE n| NO MAXVALUE| MINVALUE n| NO MINVALUE| CYCLE| NO CYCLE

INCREMENT BY: defines the amount by which the next sequence value is incremented from the last value assigned (i.e. the increment/decrement interval). The default is 1. Specify a negative value to generate a descending sequence. An INCREMENT BY value of 0 returns an error.

START WITH: defines the starting sequence value. If no START WITH value is defined, an ascending sequence starts with MINVALUE and a descending sequence starts with MAXVALUE.

MAXVALUE: Define the maximum value that the sequence can generate, must be between 0 and 4611686018427387903.

NO MAXVALUE: With the NO MAXVALUE instruction, the maximum value for the increment sequence will be 4611686018427387903, and the maximum value for the decrement sequence will be -1.

MINVALUE: Define the minimum value that the sequence can generate, must be between 0 and 4611686018427387903.

NO MINVALUE: With the NO MINVALUE instruction, the minimum value of the increment sequence will be 1, and the minimum value of the decrement sequence will be -4611686018427387903.

CYCLE: With the CYCLE command, the sequence will restart after reaching the maximum or minimum value.

NO CYCLE: With the NO CYCLE command, the sequence will not restart after reaching the maximum or minimum value.

RESET BY: During system restart, the system automatically executes the RESET BY statement, and will restart the sequence with the value determined by the RESET BY subquery.

If RESET BY is not specified, the sequence value will be stored persistently in the database. During a database restart, the next value of the sequence will be generated from the saved sequence value.

Description:

The CREATE SEQUENCE statement is used to create a sequence.

Sequences are used to generate unique integers for multiple users. CURRVAL is used to get the current value of the sequence, and NEXTVAL is used to get the next value of the sequence. CURRVAL is only valid if NEXTVAL is called within the session.

Example:

Example 1:

The sequence seq is created; CURRVAL and NEXTVAL are used to read values from the sequence.

CREATE SEQUENCE seq START WITH 11;

NEXTVAL returns 11:

SELECT seq.NEXTVAL FROM DUMMY;--11

CURRVAL returns 11:

SELECT seq.CURRVAL FROM DUMMY;--11

Example 2:

If the sequence is used to create unique key values for column a of table r, the maximum value of column a can be assigned to the sequence automatically after a database restart, so that key values remain unique. The statements are as follows:

CREATE TABLE r (a INT);

CREATE SEQUENCE s RESET BY SELECT IFNULL(MAX(a), 0) + 1 FROM r;

SELECT s.NEXTVAL FROM DUMMY;--1

7.1.11 CREATE SYNONYM
Syntax:

CREATE [PUBLIC] SYNONYM <synonym_name> FOR <object_name>

Grammatical elements:

<object_name> ::= <table_name>| <view_name>| <procedure_name>| <sequence_name>

Description:

CREATE SYNONYM creates an alternate name for a table, view, stored procedure, or sequence.

You can use synonyms to redirect functions and stored procedures to different tables, views, or sequences without rewriting the function or procedure.

The optional PUBLIC allows creating public synonyms. Any user can access public synonyms, but only users with the appropriate permissions on the base object can access the base object.

Example:

CREATE SYNONYM t_synonym FOR t;

7.1.12 CREATE TABLE
Syntax:

CREATE [<table_type>] TABLE [<schema_name>.]<table_name> <table_contents_source>[<logging_option> | <auto_merge_option> | <partition_clause> | <location_clause>]

Grammatical elements:

<table_type> ::= COLUMN| ROW| HISTORY COLUMN| GLOBAL TEMPORARY| LOCAL TEMPORARY

COLUMN, ROW: If most accesses process a large number of tuples but select only a few attributes, column-based storage should be used. If most accesses select all attributes of a few records, row-based storage is best. The SAP HANA database uses a combination that enables both kinds of storage and interpretation. The organization type can be specified per table; the default is ROW.

HISTORY COLUMN: creates a table with the special transaction session type 'HISTORY'. Tables with session type 'HISTORY' support "time travel": queries against a historical state of the database can be executed.

Time travel can be performed as follows:

Session-level time travel:

SET HISTORY SESSION TO UTCTIMESTAMP = <utc_timestamp>

SET HISTORY SESSION TO COMMIT ID = <commit_id>

<utc_timestamp> ::= <string_literal>

 <commit_id> ::= <unsigned_integer>

The database session can be set back to a certain point in time. The COMMIT ID variant of the statement accepts a commit ID as parameter. The value of the commit ID parameter must exist in the COMMIT_ID column of the system table SYS.TRANSACTION_HISTORY; otherwise an exception is thrown. COMMIT_ID is useful when user-defined snapshots are needed. A user-defined snapshot can be obtained by storing the commit ID that was assigned to a transaction during its commit phase. The commit ID can be read by executing the following query after the transaction has been committed:

SELECT LAST_COMMIT_ID FROM M_TRANSACTIONS WHERE CONNECTION_ID = CURRENT_CONNECTION;

The TIMESTAMP variant of the statement accepts a timestamp as parameter. Internally, the timestamp is used to look up a pair (commit_time, commit_id) in the system table SYS.TRANSACTION_HISTORY whose commit_time is close to the given timestamp; more precisely, the pair with the greatest COMMIT_TIME less than or equal to the given timestamp is selected. If no such pair is found, an exception is thrown. The session is then restored using the commit ID determined this way. To terminate the restored session and switch back to the current session, an explicit COMMIT or ROLLBACK must be executed on the database connection.

Statement-level time travel:

<subquery> AS OF UTCTIMESTAMP <utc_timestamp>

<subquery> AS OF COMMIT ID <commit_id>

To associate commit IDs with commit times, the system table SYS.TRANSACTION_HISTORY is maintained; it stores additional information on each transaction that committed data to a history table. For details on setting up session-level time travel, see SET HISTORY SESSION; for information about <subquery>, see Subquery.
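Statement-level time travel can be sketched as follows (history table h, the timestamp, and the commit ID are hypothetical):

```sql
-- Query the state of history table h as of the given UTC time
SELECT * FROM h AS OF UTCTIMESTAMP '2012-01-01 23:59:59';

-- The same query pinned to a specific commit ID
SELECT * FROM h AS OF COMMIT ID 12345;
```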

Note:

When a session is restored, autocommit must be switched off (otherwise an exception with a corresponding error message is thrown).

Non-history tables always show their current snapshot in a restored session.

Only data query statements (SELECT) can be used in a restored session.

History tables must have a primary key.

The session type can be checked in the SESSION_TYPE column of the system table SYS.TABLES.

GLOBAL TEMPORARY:

The table definition is globally visible, while the data is visible only in the current session. The table is truncated at the end of the session.

The metadata of a global temporary table is persistent, meaning it exists until the table is dropped, and the metadata is shared across sessions. The data in the temporary table is session-specific, meaning that only the owner of the global temporary table is allowed to insert, read, and delete data; the data exists for the duration of the session and is automatically deleted when the session ends. A global temporary table can only be dropped when it no longer contains any data. Operations supported by global temporary tables:

1. Create without a primary key

2. Rename table

3. Rename column

4. Truncate

5. Drop

6. Create or Drop view on top of global temporary table

7. Create synonym

8. Select

9. Select into or Insert

10. Delete

11. Update

12. Upsert or Replace

LOCAL TEMPORARY:

The table definition and data are only visible in the current session, and the table is truncated at the end of the session.

The metadata of a local temporary table is session-specific, meaning that only the owner of the local temporary table can see it. The data in the temporary table is likewise session-specific: only the owner of the local temporary table is allowed to insert, read, and delete data; the data exists for the duration of the session and is automatically deleted when the session ends.

Operations supported by local temporary tables:

1. Create without a primary key

2. Truncate

3. Drop

4. Select

5. Select into or Insert

6. Delete

7. Update

8. Upsert or Replace
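The two temporary table kinds can be sketched as follows (table names are hypothetical; the '#' prefix for local temporary table names is an SAP HANA naming convention):

```sql
-- Definition globally visible, data private to the current session
CREATE GLOBAL TEMPORARY TABLE gt (a INT);

-- Definition and data both private to the current session
CREATE LOCAL TEMPORARY TABLE #lt (a INT);
```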

<table_contents_source> ::= (<table_element>, ...)| <like_table_clause> [WITH [NO] DATA]| [(<column_name>, ...)] <as_table_subquery> [WITH [NO] DATA]

<table_element> ::= <column_definition> [<column_constraint>]| <table_constraint> (<column_name>, ... )

<column_definition> ::= <column_name> <data_type> [<column_store_data_type>] [<ddic_data_type>][DEFAULT <default_value>] [GENERATED ALWAYS AS

<expression>]

<data_type> ::= DATE | TIME | SECONDDATE | TIMESTAMP | TINYINT | SMALLINT | INTEGER | BIGINT |SMALLDECIMAL | DECIMAL | REAL | DOUBLE|

VARCHAR | NVARCHAR | ALPHANUM | SHORTTEXT |VARBINARY |BLOB | CLOB | NCLOB | TEXT

<column_store_data_type> ::= CS_ALPHANUM | CS_INT | CS_FIXED | CS_FLOAT| CS_DOUBLE |CS_DECIMAL_FLOAT | CS_FIXED(p-s, s) | CS_SDFLOAT|

CS_STRING | CS_UNITEDECFLOAT | CS_DATE |CS_TIME | CS_FIXEDSTRING | CS_RAW | CS_DAYDATE | CS_SECONDTIME | CS_LONGDATE |CS_SECONDDATE

<ddic_data_type> ::= DDIC_ACCP | DDIC_ALNM | DDIC_CHAR | DDIC_CDAY | DDIC_CLNT | DDIC_CUKY| DDIC_CURR | DDIC_D16D | DDIC_D34D | DDIC_D16R

| DDIC_D34R | DDIC_D16S | DDIC_D34S| DDIC_DATS | DDIC_DAY | DDIC_DEC | DDIC_FLTP | DDIC_GUID | DDIC_INT1 | DDIC_INT2 | DDIC_INT4| DDIC_INT8 | DDIC_LANG | DDIC_LCHR | DDIC_MIN | DDIC_MON| DDIC_LRAW | DDIC_NUMC |DDIC_PREC |DDIC_QUAN | DDIC_RAW | DDIC_RSTR | DDIC_SEC | DDIC_SRST | DDIC_SSTR |DDIC_STRG | DDIC_STXT | DDIC_TIMS | DDIC_UNIT| DDIC_UTCM | DDIC_UTCL | DDIC_UTCS |DDIC_TEXT | DDIC_VARC | DDIC_WEEK

<default_value> ::= NULL | <string_literal> | <signed_numeric_literal>| <unsigned_numeric_literal>

DEFAULT: DEFAULT defines the value that is assigned to the column if an INSERT statement does not provide one.

GENERATED ALWAYS AS: Specifies an expression that generates the column value at run time.

<column_constraint> ::= NULL| NOT NULL| UNIQUE [BTREE | CPBTREE]| PRIMARY KEY [BTREE | CPBTREE]

NULL | NOT NULL: NOT NULL prohibits NULL values in the column. If NULL is specified, it is not treated as a constant; it indicates that the column may contain NULL values. The default is NULL.

UNIQUE: Specifies the column as a unique key. A composite unique key specifies several columns together as a unique key. With a unique constraint, no two rows can have the same value in the specified columns.

PRIMARY KEY: A primary key constraint is a combination of a NOT NULL constraint and a UNIQUE constraint; it prohibits multiple rows from having the same value in the same columns.

BTREE | CPBTREE: Specifies the index type. When the data type of the column is a string, binary string, or decimal number, or when the constraint is a composite key or a non-unique key, the default index type is CPBTREE; otherwise BTREE is used. In order to use a B+-tree index, the BTREE keyword must be specified; for a CPB+-tree index, the CPBTREE keyword must be specified. A B+-tree is a tree that maintains sorted data for efficient insertion, deletion, and search of records. The CPB+-tree is the compressed prefix B+-tree, based on the pkB-tree. A CPB+-tree is a very small index because it uses "partial keys", which are only part of the full key of an index node. For larger keys, CPB+-trees show better performance than B+-trees. If the index type is omitted, the SAP HANA database chooses an appropriate index based on the data type of the column.

<table_constraint> ::= UNIQUE [BTREE | CPBTREE]| PRIMARY KEY [BTREE | CPBTREE]

Defines a table constraint that can be applied to one or more columns of the table.

<like_table_clause> ::= LIKE   <table_name>

Creates a table whose definition is identical to that of like_table_name. The definitions and default values of all columns are copied from like_table_name. If the optional WITH DATA is provided, the data is populated from the specified table; the default, however, is WITH NO DATA.

<as_table_subquery> ::= AS (<subquery>)

Creates a table and fills it with the data computed by <subquery>. Only NOT NULL constraints are copied when this clause is used. If column_names are specified, they override the column names from <subquery>. WITH [NO] DATA specifies whether data is copied from <subquery> or <like_table_clause>.

<logging_option> ::= LOGGING| NO LOGGING [RETENTION <retention_period>]

<retention_period> ::= <unsigned_integer>

LOGGING | NO LOGGING: LOGGING (the default) specifies that table logging is active. NO LOGGING specifies that table logging is deactivated. For a NO LOGGING table, the table definition is persistent and globally visible, while the data is temporary and global.

RETENTION: specifies the retention time in seconds for a NO LOGGING column table. When the specified retention period has passed, the table is dropped if the host reaches 80% usage of physical memory.
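As a sketch, a NO LOGGING column table whose data may be dropped one hour (3600 seconds) after the retention period has passed, once the host reaches 80% physical memory usage (table and column names are hypothetical):

```sql
CREATE COLUMN TABLE tmp_results (id INT, val DOUBLE) NO LOGGING RETENTION 3600;
```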

<auto_merge_option> ::= AUTO MERGE | NO AUTO MERGE

AUTO MERGE | NO AUTO MERGE: AUTO MERGE (the default) specifies that automatic delta merges are triggered.

<partition_clause> ::= PARTITION BY <hash_partition> [, <range_partition> | , <hash_partition>]| PARTITION BY <range_partition>| PARTITION BY <roundrobin_partition>

                                [,<range_partition>]

<hash_partition> ::= HASH (<partition_expression> [, ...]) PARTITIONS {<num_partitions> |GET_NUM_SERVERS()}

<range_partition> ::= RANGE (<partition_expression>) (<range_spec>, ...)

<roundrobin_partition> ::= ROUNDROBIN PARTITIONS {<num_partitions> | GET_NUM_SERVERS()} [,<range_partition>]

<range_spec> ::= {<from_to_spec> | <single_spec>} [, ...] [, PARTITION OTHERS]

<from_to_spec> ::= PARTITION <lower_value> <= VALUES < <upper_value>

<single_spec> ::= PARTITION VALUE <target_value>

<lower_value> ::= <string_literal> | <numeric_literal>

<upper_value> ::= <string_literal> | <numeric_literal>

<target_value> ::= <string_literal> | <numeric_literal>

<partition_expression> ::= <column_name> | YEAR(<column_name>) | MONTH(<column_name>)

<num_partitions> ::= <unsigned_integer>

The GET_NUM_SERVERS() function returns the number of servers.

PARTITION OTHERS means that all remaining values not covered by the partition definition form one partition.

It is possible to determine the index servers on which partitions are created. If LOCATION is specified, partitions are created on these instances in a round-robin fashion. Duplicates in the list are removed. If the number of instances specified exactly matches the number of partitions in the partition definition, each partition is assigned to the corresponding instance in the list. All index servers in the list must belong to the same instance. If no location is specified, partitions are created at random. If the number of partitions matches the number of servers, for example by using GET_NUM_SERVERS(), it is guaranteed that multiple CREATE TABLE calls distribute their partitions in the same way. With multi-level partitioning, this applies to the number of first-level partitions. This mechanism is useful when creating multiple tables that are semantically related to each other.

<location_clause> ::= AT [LOCATION] {'<host>:<port>' | ('<host>:<port>', ...)}

AT LOCATION: The table can be created at the location specified by host:port. A list of locations can be defined when creating a partitioned table distributed over several instances. If a location list is given without a <partition_clause>, the table is created at the first location specified. If no location information is provided, the table is assigned to a node automatically. This option can be used for row store and column store tables in a distributed environment.

Description:

CREATE TABLE creates a table. The table is not populated with data, except when <as_table_subquery> or <like_table_clause> are used with the WITH DATA option.

Example:

Create table A with integer columns A and B. Column A has a primary key constraint.

CREATE TABLE A (A INT PRIMARY KEY, B INT);

Create partition table P1 with date column U. Column U has a primary key constraint and is used as the RANGE partitioning column.

CREATE COLUMN TABLE P1 (U DATE PRIMARY KEY) PARTITION BY RANGE (U) (PARTITION '2010-02-03' <= VALUES < '2011-01-01', PARTITION VALUE = '2011-05-01');

Create partition table P2 with integer columns I, J and K. Columns I and J form the primary key and are used as hash partitioning columns. Column K is the sub-hash partitioning column.

CREATE COLUMN TABLE P2 (I INT, J INT, K INT, PRIMARY KEY(I, J)) PARTITION BY HASH (I, J) PARTITIONS 2, HASH (K) PARTITIONS 2;

Create table C1 with the same definition as table A and the same records.

CREATE COLUMN TABLE C1 LIKE A WITH DATA;

Create table C2 with the same column data types and NOT NULL constraints as table A. Table C2 does not have any data.

CREATE TABLE C2 AS (SELECT * FROM A) WITH NO DATA;

7.1.13 CREATE TRIGGER
Syntax:

CREATE TRIGGER <trigger_name> <trigger_action_time> <trigger_event> ON <subject_table_name> [REFERENCING <transition_list>][<for_each_row>]

BEGIN

[<trigger_decl_list>]

[<proc_handler_list>]

<trigger_stmt_list>

END

Grammatical elements:

<trigger_name> ::= <identifier> The name of the trigger you created.

<subject_table_name> ::= <identifier> The name of the table where the trigger you created is located.

<trigger_action_time> ::= BEFORE | AFTER Specifies when the trigger action will occur.

BEFORE: Execute the trigger before operating the subject table.

AFTER: Execute the trigger after operating the subject table.

<trigger_event> ::= INSERT | DELETE | UPDATE defines the data modification command that activates the trigger event

<transition_list> ::= <transition> | <transition_list> , <transition>

<transition> ::= <trigger_transition_old_or_new> <trigger_transition_var_or_table> <trans_var_name>|<trigger_transition_old_or_new> <trigger_transition_var_or_table> AS <trans_var_name>

When a trigger transition variable is declared, the trigger can access the record that the DML is modifying.

When a row-level trigger is executed, <trans_var_name>.<column_name> represents the corresponding column of the record that the trigger is modifying.

Here, <column_name> is the column name of the subject table. See Converting Variables for an example.

<trigger_transition_old_or_new> ::= OLD | NEW

<trigger_transition_var_or_table> ::= ROW

OLD

Allows access to the old record of the triggering DML, that is, the record before an UPDATE or the record being deleted.

UPDATE triggers and DELETE triggers can have OLD ROW transition variables.

NEW

Allows access to the new record of the triggering DML, that is, the record being inserted or the record after an UPDATE.

INSERT triggers and UPDATE triggers can have NEW ROW transition variables.

Only transition variables are supported.

Transition tables are not supported.

If you pass 'TABLE' as <trigger_transition_var_or_table>, you'll see a feature not supported error.

<for_each_row> ::= FOR EACH ROW

Triggers will be invoked row by row.

Row-level triggers are executed by default, even without the FOR EACH ROW syntax.

Currently, statement-by-statement triggering is not supported.
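A row-level trigger using these elements might be sketched as follows (tables src and audit_log, and their columns, are hypothetical):

```sql
-- Log every inserted value of hypothetical table src into audit_log
CREATE TRIGGER trg_src_insert AFTER INSERT ON src
REFERENCING NEW ROW mynewrow
FOR EACH ROW
BEGIN
    INSERT INTO audit_log VALUES (:mynewrow.a, CURRENT_TIMESTAMP);
END;
```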

<trigger_decl_list> ::= DECLARE <trigger_decl>| <trigger_decl_list> DECLARE <trigger_decl>

<trigger_decl> ::= <trigger_var_decl> | <trigger_condition_decl>

<trigger_var_decl> ::= <var_name> CONSTANT <data_type> [<not_null>] [<trigger_default_assign>] ;| <var_name> <data_type> [NOT NULL] [<trigger_default_assign>] ;

<data_type> ::= DATE | TIME | SECONDDATE | TIMESTAMP | TINYINT | SMALLINT | INTEGER| BIGINT |SMALLDECIMAL | DECIMAL | REAL | DOUBLE| VARCHAR | NVARCHAR | ALPHANUM | SHORTTEXT|VARBINARY | BLOB | CLOB| NCLOB | TEXT

<trigger_default_assign> ::= DEFAULT <expression>| := <expression>

<trigger_condition_decl> ::= <condition_name> CONDITION ;| <condition_name> CONDITION FOR <sql_error_code> ;

<sql_error_code> ::= SQL_ERROR_CODE <int_const>

trigger_decl_list

You can declare trigger variables or conditions.

Declared variables can be used in scalar assignments or referenced in SQL statements within the trigger.

The declared condition name can be referenced in exception handling.

CONSTANT

When the CONSTANT keyword is specified, you cannot modify the variable while the trigger is executing.

<proc_handler_list> ::= <proc_handler>| <proc_handler_list> <proc_handler>

<proc_handler> ::= DECLARE EXIT HANDLER FOR <proc_condition_value_list> <trigger_stmt>

<proc_condition_value_list> ::= <proc_condition_value>| <proc_condition_value_list> , <proc_condition_value>

<proc_condition_value> ::= SQLEXCEPTION
| SQLWARNING
| <sql_error_code>
| <condition_name>
Exception handlers can be declared to catch predefined SQL exceptions, specific error codes, or condition names declared as condition variables.
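An exit handler inside a trigger body can be sketched as follows (tables err_log and target_table are hypothetical; error code 301 is assumed to denote a unique-constraint violation):

```sql
BEGIN
    -- On a unique-constraint violation, record the event instead of failing
    DECLARE EXIT HANDLER FOR SQL_ERROR_CODE 301
        INSERT INTO err_log VALUES ('duplicate key ignored');
    INSERT INTO target_table VALUES (1);
END;
```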
<trigger_stmt_list> ::= <trigger_stmt>| <trigger_stmt_list> <trigger_stmt>
<trigger_stmt> ::= <proc_block>
| <proc_assign>
| <proc_if>
| <proc_loop>
| <proc_while>
| <proc_for>
| <proc_foreach>
| <proc_signal>
| <proc_resignal>
| <trigger_sql>
The syntax of the trigger body is a subset of the procedure body syntax.
See the CREATE PROCEDURE definition in the SAP HANA Database SQLScript guide.
The trigger body follows the procedure body syntax: nested blocks (proc_block), scalar variable assignment (proc_assign), if blocks
(proc_if), loop blocks (proc_loop), while blocks (proc_while), for blocks (proc_for), for-each blocks (proc_foreach), exception
signal (proc_signal), exception resignal (proc_resignal), and SQL statements (trigger_sql).
<proc_block> ::= BEGIN
[<trigger_decl_list>]
[<proc_handler_list>]
<trigger_stmt_list>
END ;
You can add additional 'BEGIN ... END;' blocks in a nested fashion.
<proc_assign> ::= <var_name> := <expression> ;
var_name is the variable name and should be declared in advance.
<proc_if> ::= IF <condition> THEN <trigger_stmt_list>
[<proc_elsif_list>]
[<proc_else>]
END IF ;
<proc_elsif_list> ::= ELSEIF <condition> THEN <trigger_stmt_list>

<proc_else> ::= ELSE <trigger_stmt_list>
For details about <condition>, see <condition> in SELECT.
You can use IF ... THEN ... ELSEIF ... END IF to control the flow of execution conditionally.
<proc_loop> ::= LOOP <trigger_stmt_list> END LOOP ;
<proc_while> ::= WHILE <condition> DO <trigger_stmt_list> END WHILE ;
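A sketch of a WHILE loop in a trigger body (v is assumed to be a previously declared integer variable):

       WHILE :v < 3 DO

              v := :v + 1;

       END WHILE;

A plain LOOP must be left through other means, such as a signaled exception, since trigger bodies do not support the exit/continue commands available to procedures.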
<proc_for> ::= FOR <column_name> IN [<reverse>] <expression> <DDOT_OP> <expression>
DO <trigger_stmt_list>
END FOR ;
<column_name> ::= <identifier>
<reverse> ::= REVERSE
<DDOT_OP> ::= ..
<proc_foreach> ::= FOR <column_name> AS <column_name> [<open_param_list>] DO
<trigger_stmt_list>
END FOR ;
<open_param_list> ::= ( <expr_list> )
<expr_list> ::= <expression> | <expr_list> , <expression>
<proc_signal> ::= SIGNAL <signal_value> [<set_signal_info>] ;
<proc_resignal> ::= RESIGNAL [<signal_value>] [<set_signal_info>] ;
<signal_value> ::= <signal_name> | <sql_error_code>
<signal_name> ::= <identifier>
<set_signal_info> ::= SET MESSAGE_TEXT = '<message_string>'
<message_string> ::= <identifier>
SET MESSAGE_TEXT
If you set SET MESSAGE_TEXT with your own message, that message is delivered to the user
when the specified error is thrown during trigger execution.
The SIGNAL statement explicitly raises an exception.
Error codes in the user-defined range (10000-19999) are allowed.
The RESIGNAL statement raises an exception on the action statement in exception handling.
If an error code is not specified, RESIGNAL raises the caught exception.
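A sketch of raising a user-defined error (the error code and message text are illustrative):

       SIGNAL SQL_ERROR_CODE 10001 SET MESSAGE_TEXT = 'custom trigger error';

Inside an exception handler, RESIGNAL; without arguments rethrows the exception that was caught.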

<trigger_sql> ::= <select_into_stmt>
| <insert_stmt>
| <delete_stmt>
| <update_stmt>
| <replace_stmt>
| <upsert_stmt>
For details about insert_stmt, see INSERT.
For details about delete_stmt, see DELETE.
For details about update_stmt, see UPDATE.
For details about replace_stmt and upsert_stmt, see REPLACE | UPSERT.
<select_into_stmt> ::= SELECT <select_list> INTO <var_name_list>
<from_clause >
[<where_clause>]
[<group_by_clause>]
[<having_clause>]
[{<set_operator> <subquery>, ... }]
[<order_by_clause>]
[<limit>]
<var_name_list> ::= <var_name> | <var_name_list> , <var_name>
<var_name> ::= <identifier>
For details about select_list, from_clause, where_clause, group_by_clause, having_clause, set_operator, subquery,
order_by_clause, and limit, see SELECT.
var_name is the name of a scalar variable declared in advance.
You can assign selected items to scalar variables.
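For example, several values can be selected into scalar variables in one statement (a sketch; v1 and v2 are assumed to be previously declared variables and SAMPLE an existing table):

       SELECT MIN(A), MAX(A) INTO v1, v2 FROM SAMPLE;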
Description:
The CREATE TRIGGER statement creates a trigger.
A trigger is a special kind of stored procedure that is executed automatically when an event occurs on a table.
The CREATE TRIGGER command defines a set of statements that are executed when a given operation (INSERT/UPDATE/DELETE)
occurs on a given object (the subject table).
Only database users with the TRIGGER privilege on the given <subject_table_name> are allowed to create a trigger for that table.
The current trigger limitations are described below:
- INSTEAD_OF triggers are not supported.
- Accessing the subject table that the trigger is defined on is not allowed in the trigger body; that is, any
insert/update/delete/replace/select statements on the trigger's own table are prohibited.

- Only row-level triggers are supported, not statement-level triggers. A row-level trigger means the triggered action
is executed once for each row that changes; a statement-level trigger would run its action once per statement. The syntax 'FOR EACH ROW' denotes
row-based trigger execution, which is the default mode; even if 'FOR EACH ROW' is not specified, the trigger is still a row-level trigger.
- Transition tables (OLD/NEW TABLE) are not supported. When SQL statements in the trigger body need to refer to the data being modified by the triggering event
(insert/update/delete), transition variables/tables are the
means by which those statements access the new data and the old data. Transition variables are used only in row-level triggers, while transition tables are used in statement-level triggers.
- Executing a trigger from one node across multiple hosts, or on a partitioned table, is not supported.
A table can have at most one trigger per DML operation: possibly an insert trigger, an update trigger, and a delete trigger, and
all three of them can be active together.
Therefore, a table can have up to three triggers in total.
- Trigger actions that are not supported (but are supported in stored procedures):
result set assignment (assigning a SELECT result set to a table type),
exit/continue commands (execution flow control),
cursor open/fetch/close (fetching each record of a search result through a cursor and accessing records in a loop),
procedure calls (calling another procedure),
dynamic SQL execution (building SQL statements dynamically at SQLScript runtime),
return (ending SQL statement execution)
System and monitoring views:
TRIGGERS is the system view for triggers.
The system view TRIGGERS displays:
SCHEMA_NAME, TRIGGER_NAME, TRIGGER_OID, OWNER_NAME,
OWNER_OID, SUBJECT_TABLE_SCHEMA, SUBJECT_TABLE_NAME, TRIGGER_ACTION_TIME,
TRIGGER_EVENT, TRIGGERED_ACTION_LEVEL, DEFINITION
Example:
First you need the table that the trigger is defined on:
CREATE TABLE TARGET ( A INT);
You also need a table for the trigger to access and modify:
CREATE TABLE SAMPLE ( A INT);
The following is an example of creating a trigger:

CREATE TRIGGER TEST_TRIGGER AFTER INSERT ON TARGET FOR EACH ROW

BEGIN

       DECLARE SAMPLE_COUNT INT;

       SELECT COUNT(*) INTO SAMPLE_COUNT FROM SAMPLE;

       IF :SAMPLE_COUNT = 0 THEN

              INSERT INTO SAMPLE VALUES(5);

       ELSEIF :SAMPLE_COUNT = 1 THEN

              INSERT INTO SAMPLE VALUES(6);

       END IF;

END;

The trigger TEST_TRIGGER is executed after any record is inserted into the TARGET table. Because table SAMPLE contains 0 records at the time of the first insert, TEST_TRIGGER inserts 5 into it.
On the second insert into TARGET, the trigger inserts 6, because the count is then 1.
INSERT INTO TARGET VALUES (1);

SELECT * FROM SAMPLE; -- 5

INSERT INTO TARGET VALUES (2);

SELECT * FROM SAMPLE; -- 5 6

7.1.14   CREATE VIEW
Syntax:

CREATE VIEW  [<schema_name>.]<view_name> [(<column_name>, ... )] AS <subquery>

Description:

CREATE VIEW effectively creates a virtual table based on the result of an SQL statement. It is not a real table, because it contains no data of its own.

When column names are specified together with the view name, the query result is displayed under those column names. If the column names are omitted, appropriate column names are derived automatically from the query result. The number of column names must equal the number of columns returned by <subquery>.

Update operations on a view are supported if the following conditions are met:

Each column of the view must map to a column of a single table.

If a column of the base table has a NOT NULL constraint and no default value, that column must be included in the columns of an insertable view. Update operations do not have this condition.

The SELECT list must not contain aggregate or analytic functions; for example, the following are not allowed:

. TOP, SET, DISTINCT operators in the SELECT list

. GROUP BY, ORDER BY clauses

The SELECT list must not contain subqueries.

It must not contain sequence values (CURRVAL, NEXTVAL).

Must not contain column views as base views.

If the base view or table is updatable, the view of the base view or table is updatable on the basis of meeting the above conditions.
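As an illustration of these rules (a sketch; t1 is a hypothetical base table with columns a and b):

CREATE VIEW v1 AS SELECT a, b FROM t1;

v1 is updatable: each view column maps to a single column of one table.

CREATE VIEW v2 AS SELECT a, COUNT(*) FROM t1 GROUP BY a;

v2 is not updatable: it contains an aggregate function and a GROUP BY clause.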

Example:

Select all data in table a to create view v:

CREATE VIEW v AS SELECT * FROM a;

7.1.15 DROP AUDIT POLICY
Syntax:

DROP AUDIT POLICY <policy_name>

Description:

The DROP AUDIT POLICY statement drops an audit policy. <policy_name> must specify an existing audit policy.

Only database users with the system privilege AUDIT ADMIN are allowed to delete audit policies. Each database user with this authority can delete any audit policy, whether created by the user or not.

Even after an audit policy is dropped, the activities it defined may still be audited if other enabled audit policies define the same audit activities.

An audit policy that is only temporarily not needed can be disabled instead of dropped.

System and Monitoring Views:

AUDIT_POLICY: Displays all audit policies and status.

M_INIFILE_CONTENTS: Displays database system configuration parameters.

Only users with system privileges CATALOG READ, DATA ADMIN or INIFILE ADMIN can see the content in the M_INIFILE_CONTENTS view, it is empty for other users.

Example:

Assume that the audit policy has been created with the following statement:

CREATE AUDIT POLICY priv_audit AUDITING SUCCESSFUL GRANT PRIVILEGE, REVOKE PRIVILEGE, GRANT ROLE, REVOKE ROLE LEVEL CRITICAL;

Now the audit policy can be dropped:

DROP AUDIT POLICY priv_audit;

7.1.16 DROP FULLTEXT INDEX
Syntax:

DROP FULLTEXT INDEX <fulltext_index_name>

Description:

The DROP FULLTEXT INDEX statement removes a full-text index.

Example:

DROP FULLTEXT INDEX idx;

7.1.17   DROP INDEX
Syntax:

DROP INDEX <index_name>

Description:

The DROP INDEX statement removes an index.

Example:

DROP INDEX idx;

7.1.18   DROP SCHEMA
Syntax:

DROP SCHEMA <schema_name> [<drop_option>]

Syntax elements:

<drop_option> ::= CASCADE | RESTRICT

Default = RESTRICT

RESTRICT drops the schema only when it has no dependent objects; if dependent objects exist, an error is thrown.

CASCADE drops the schema together with its dependent objects.

Description:

The DROP SCHEMA statement removes a schema.

Example:

Create schema my_schema and table my_schema.t, then drop my_schema with the CASCADE option.

CREATE SCHEMA my_schema;

CREATE TABLE my_schema.t (a INT);

DROP SCHEMA my_schema CASCADE;

7.1.19   DROP SEQUENCE
Syntax:

DROP SEQUENCE <sequence_name> [<drop_option>]

Syntax elements:

<drop_option> ::= CASCADE | RESTRICT

Default = RESTRICT

CASCADE drops the sequence together with its dependent objects. When the CASCADE option is not specified, a non-cascaded drop is performed: dependent objects (VIEW, PROCEDURE) are not dropped but are invalidated instead.

Invalidated objects can be revalidated when an object with the same schema and object name is created again. The object ID, schema name, and object name are retained in order to revalidate dependent objects.

RESTRICT drops the sequence only when it has no dependent objects; if dependent objects exist, an error is thrown.

Description:

The DROP SEQUENCE statement removes a sequence.

Example:

DROP SEQUENCE s;

7.1.20   DROP SYNONYM
Syntax:

DROP [PUBLIC] SYNONYM <synonym_name> [<drop_option>]

Syntax elements:

<drop_option> ::= CASCADE | RESTRICT

Default = RESTRICT

CASCADE drops the synonym together with its dependent objects. When the CASCADE option is not specified, a non-cascaded drop is performed: dependent objects (VIEW, PROCEDURE) are not dropped but are invalidated instead.

Invalidated objects can be revalidated when an object with the same schema and object name is created again. The object ID, schema name, and object name are retained in order to revalidate dependent objects.

RESTRICT drops the synonym only when it has no dependent objects; if dependent objects exist, an error is thrown.

Description:

DROP SYNONYM removes a synonym. The PUBLIC option allows dropping a public synonym.

Example:

Table a is created, then synonym a_synonym and public synonym pa_synonym are created for it:

CREATE TABLE a (c INT);

CREATE SYNONYM a_synonym FOR a;

CREATE PUBLIC SYNONYM pa_synonym FOR a;

Drop the synonym a_synonym and the public synonym pa_synonym:

DROP SYNONYM a_synonym;

DROP PUBLIC SYNONYM pa_synonym;

7.1.21   DROP TABLE
Syntax:

DROP TABLE  [<schema_name>.]<table_name> [<drop_option>]

Syntax elements:

<drop_option> ::= CASCADE | RESTRICT

Default = RESTRICT

CASCADE drops the table together with its dependent objects. When the CASCADE option is not specified, a non-cascaded drop is performed: dependent objects (VIEW, PROCEDURE) are not dropped but are invalidated instead.

Invalidated objects can be revalidated when an object with the same schema and object name is created again. The object ID, schema name, and object name are retained in order to revalidate dependent objects.

RESTRICT drops the table only when it has no dependent objects; if dependent objects exist, an error is thrown.

Description:

The DROP TABLE statement drops a table.

Example:

Create table A, then drop it.

CREATE TABLE A (C INT);

DROP TABLE A;

7.1.22   DROP TRIGGER
Syntax:

DROP TRIGGER <trigger_name>

Description:

The DROP TRIGGER statement drops a trigger.

Only database users who have the TRIGGER privilege on the table to which the trigger applies are allowed to drop a trigger on that table.

Example:

For this example, you need to create a trigger called TEST_TRIGGER first, as follows:

CREATE TABLE TARGET ( A INT);

CREATE TABLE SAMPLE ( A INT);

CREATE TRIGGER TEST_TRIGGER AFTER UPDATE ON TARGET

BEGIN

       INSERT INTO SAMPLE VALUES(3);

END;

Now you can drop the trigger:

DROP TRIGGER TEST_TRIGGER;

7.1.23 DROP VIEW
Syntax:

DROP VIEW  [<schema_name>.]<view_name> [<drop_option>]

Syntax elements:

<drop_option> ::= CASCADE | RESTRICT

Default = RESTRICT

CASCADE drops the view together with its dependent objects. When the CASCADE option is not specified, a non-cascaded drop is performed: dependent objects (VIEW, PROCEDURE) are not dropped but are invalidated instead.

Invalidated objects can be revalidated when an object with the same schema and object name is created again. The object ID, schema name, and object name are retained in order to revalidate dependent objects.

RESTRICT drops the view only when it has no dependent objects; if dependent objects exist, an error is thrown.

Description:

The DROP VIEW statement drops a view.

Example:

Table t is created, then view v is created selecting all data from t:

CREATE TABLE t (a INT);

CREATE VIEW v AS SELECT * FROM t;

Drop view v:

DROP VIEW v;

7.1.24   RENAME COLUMN
Syntax:

RENAME COLUMN <table_name>.<old_column_name> TO <new_column_name>

Description:

The RENAME COLUMN statement changes a column name.

Example:

Create table B:

CREATE TABLE B (A INT PRIMARY KEY, B INT);

Display the list of column names in table B:

SELECT COLUMN_NAME, POSITION FROM TABLE_COLUMNS WHERE SCHEMA_NAME = CURRENT_SCHEMA AND TABLE_NAME = 'B' ORDER BY POSITION;

Rename column A to C:

RENAME COLUMN B.A TO C;

7.1.25   RENAME INDEX
Syntax:

RENAME INDEX <old_index_name> TO <new_index_name>

Description:

The RENAME INDEX statement renames an index.

Example:

Table B is created, then index idx is built on column B of table B:

CREATE TABLE B (A INT PRIMARY KEY, B INT);

CREATE INDEX idx ON B(B);

Display the list of index names on table B:

SELECT INDEX_NAME FROM INDEXES WHERE SCHEMA_NAME = CURRENT_SCHEMA AND TABLE_NAME='B';

Rename index idx to new_idx:

RENAME INDEX idx TO new_idx;

7.1.26   RENAME TABLE
Syntax:

RENAME TABLE <old_table_name> TO <new_table_name>

Description:

The RENAME TABLE statement renames a table to new_table_name within the same schema.

Example:

Create table A in the current schema:

CREATE TABLE A (A INT PRIMARY KEY, B INT);

Display the list of table names in the current schema:

SELECT TABLE_NAME FROM TABLES WHERE SCHEMA_NAME = CURRENT_SCHEMA;

Rename table A to B:

RENAME TABLE A TO B;

Schema mySchema is created, then table mySchema.A is created:

CREATE SCHEMA mySchema;

CREATE TABLE mySchema.A (A INT PRIMARY KEY, B INT);

Display the list of table names in schema mySchema:

SELECT TABLE_NAME FROM TABLES WHERE SCHEMA_NAME = 'MYSCHEMA';

Rename table mySchema.A to B:

RENAME TABLE mySchema.A TO B; -- note: after the rename, B is still in mySchema

7.1.27   ALTER TABLE ALTER TYPE
Syntax:

<table_conversion_clause> ::= [ALTER TYPE] { ROW [THREADS <number_of_threads>] | COLUMN [THREADS <number_of_threads> [BATCH <batch_size>]] }

Syntax elements:

<number_of_threads> ::= <numeric_literal>

Specifies the number of parallel threads used for the table conversion. The optimal number of threads is the number of available CPU cores.

<batch_size> ::= <numeric_literal>

Specifies the number of rows inserted per batch; the default of 2,000,000 is the optimal value. The insert operation is committed after every <batch_size> records, which reduces memory consumption. The BATCH option can only be used when converting a table from row store to column store. Batch sizes larger than 2,000,000 can cause high memory consumption, so changing this value is not recommended.

Description:

A new table with a different storage type can be created from an existing table by copying the existing table's columns and data. This command converts a table from row store to column store or from column store to row store: if the source table is row store, the newly created table is column store, and vice versa.

Configuration parameters:

The default number of threads for table conversion is defined in the [sql] section of indexserver.ini, table_conversion_parallelism = <numeric_literal> (initial value 8).

Example:

For this example, first create the tables to be converted:

CREATE COLUMN TABLE col_to_row (col1 INT, col2 INT)

CREATE ROW TABLE row_to_col (col1 INT, col2 INT)

The table col_to_row is created in column store, and the table row_to_col in row store.

Now you can change the storage type of table col_to_row from column to row:

ALTER TABLE col_to_row ALTER TYPE ROW

You can also change the storage type of table row_to_col from row to column:

ALTER TABLE row_to_col ALTER TYPE COLUMN

To use batch conversion mode, add the BATCH option at the end of the statement:

ALTER TABLE row_to_col ALTER TYPE COLUMN BATCH 10000

7.1.28 TRUNCATE TABLE
Syntax:

TRUNCATE TABLE <table_name>

Description:

Deletes all records from a table. When deleting all data from a table, TRUNCATE is faster than DELETE FROM, but TRUNCATE cannot be rolled back. To be able to roll back the deleted records, use "DELETE FROM <table_name>" instead.

A HISTORY table can be truncated like a normal table by executing this statement. All parts of the history table (main, delta, history main, and history delta) are cleared and their content is lost.
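A minimal sketch contrasting the two statements (t is a hypothetical table):

DELETE FROM t;   -- removes all records; can be rolled back

TRUNCATE TABLE t;   -- faster, but cannot be rolled back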

7.2   Data Manipulation Statements
7.2.1     DELETE
Syntax:

DELETE [HISTORY] FROM  [<schema_name>.]<table_name> [WHERE <condition>]

<condition> ::= <condition> OR <condition>| <condition> AND <condition>| NOT <condition>| ( <condition> )| <predicate>

For details about predicates, see Predicates.

Description:

The DELETE statement deletes the records from the table that satisfy the condition. If the WHERE clause is omitted, all records in the table are deleted.

DELETE HISTORY

DELETE HISTORY marks the selected history records of a history table for deletion. This means that after the statement is executed, time-travel queries referring to the deleted records may still see that data. To physically delete those records, the following statements must be executed:

ALTER SYSTEM RECLAIM VERSION SPACE;  MERGE HISTORY DELTA of <table_name>;

Note: in some cases the records still cannot be physically deleted even after both statements have been executed.

To check whether the records have been physically deleted, the following statement can help:

SELECT * FROM <table_name> WHERE <condition> WITH PARAMETERS ('REQUEST_FLAGS'=('ALLCOMMITTED','HISTORYONLY'));

Note: The "WITH PARAMETERS ('REQUEST_FLAGS'= ('ALLCOMMITTED','HISTORYONLY'))" clause may only be suitable for verifying the execution result of the DELETE HISTORY statement.

Example:

CREATE TABLE T (KEY INT PRIMARY KEY, VAL INT);

INSERT INTO T VALUES (1, 1);

INSERT INTO T VALUES (2, 2);

INSERT INTO T VALUES (3, 3);

In the following example, one record is deleted:

DELETE FROM T WHERE KEY = 1;

7.2.2 EXPLAIN PLAN
Syntax:

EXPLAIN PLAN [SET STATEMENT_NAME = <statement_name>] FOR <sql_subquery>

Syntax elements:

<statement_name> ::= a string literal used to identify the name of a specific execution plan in the output table for a given SQL statement.

If SET STATEMENT_NAME is not specified, it is set to NULL.

Description:

The EXPLAIN PLAN statement is used to evaluate the execution plan that the SAP HANA database follows to execute an SQL statement. The result of the evaluation is stored in the view EXPLAIN_PLAN_TABLE for later inspection by the user.

The SQL statement must be a data manipulation statement; schema definition statements cannot be used with EXPLAIN PLAN.

You can read the SQL plan from the EXPLAIN_PLAN_TABLE view, which is shared by all users. Here is an example of reading the SQL plan from the view:

SELECT * FROM EXPLAIN_PLAN_TABLE;

Columns in the EXPLAIN_PLAN_TABLE view: Table 1: column names and descriptions.

The OPERATOR_NAME column of the EXPLAIN_PLAN_TABLE view. Table 2: list of column engine operators displayed in the OPERATOR_NAME column:

COLUMN SEARCH marks the starting position of column engine operators, and ROW SEARCH marks the starting position of row engine operators. In the following example, the intermediate result produced by COLUMN SEARCH (ID 10) is consumed by ROW SEARCH (ID 7), and ROW SEARCH (ID 7) is consumed by another COLUMN SEARCH (ID 1). The operators below COLUMN SEARCH (ID 10) explain how COLUMN SEARCH (ID 10) is executed. The operators between ROW SEARCH (ID 7) and COLUMN SEARCH (ID 10) explain how ROW SEARCH (ID 7) processes the intermediate result produced by COLUMN SEARCH (ID 10). The operators between COLUMN SEARCH (ID 1) and ROW SEARCH (ID 7) explain how the top-level COLUMN SEARCH (ID 1) processes the intermediate result produced by ROW SEARCH (ID 7).

Example of an SQL plan explanation.

The statement below comes from the TPC-H benchmark. All tables in the example are row store.

SET SCHEMA hana_tpch;

DELETE FROM explain_plan_table WHERE statement_name = 'TPC-H Q10';

EXPLAIN PLAN SET STATEMENT_NAME = 'TPC-H Q10' FOR

       SELECT TOP 20 c_custkey,c_name,SUM(l_extendedprice * (1 - l_discount)) AS revenue,

              c_acctbal,

              n_name,

              c_address,

              c_phone,

              c_comment

       FROM

              customer,

              orders ,

              lineitem,

              nation

       WHERE

              c_custkey = o_custkey

              AND l_orderkey = o_orderkey

              AND o_orderdate >= '1993-10-01'

              AND o_orderdate < ADD_MONTHS('1993-10-01',3)

              AND l_returnflag = 'R'

              AND c_nationkey = n_nationkey

       GROUP BY

       c_custkey,

       c_name,

       c_acctbal ,

       c_phone,

       n_name,

       c_address ,

       c_comment

       ORDER BY revenue DESC;


SELECT operator_name, operator_details , table_name FROM explain_plan_table WHERE statement_name = 'TPC-H Q10';

The following is the plan explanation for this query:

7.2.3     INSERT
Syntax:

INSERT INTO  [ <schema_name>. ]<table_name> [ <column_list_clause> ] { <value_list_clause> | <subquery> }

Syntax elements:

<column_list_clause> ::= ( <column_name>, ... )

<value_list_clause> ::= VALUES ( <expression>, ... )

Description:

The INSERT statement adds a record to a table. A subquery that returns records can be used as the source of the insert. If the subquery returns no result, the database does not insert any record. A list of columns can be specified in the INSERT statement; columns not in the list receive their default values. If the column list is omitted, the database inserts values into all columns of the table.

Example:

CREATE TABLE T (KEY INT PRIMARY KEY, VAL1 INT, VAL2 NVARCHAR(20));

In the following example, you insert values:

INSERT INTO T VALUES (1, 1, 'The first');

You can insert values into specified columns:

INSERT INTO T (KEY) VALUES (2);

You can also use a subquery:

INSERT INTO T SELECT 3, 3, 'The third' FROM DUMMY;

7.2.4     LOAD
Syntax:

LOAD <table_name> {DELTA | ALL | (<column_name>, ...)}

Description:

The LOAD statement explicitly loads the data of a column store table into memory, instead of loading it on first access. (Note: LOAD supports column store only; it cannot be used on row store tables.)

DELTA

With DELTA, only the delta part of the column store table is loaded into memory. Because column storage is optimized and compressed for read operations, the delta part is used to optimize inserts and updates; all insertions are passed to the delta first.

ALL

All data of the column store table, both main and delta parts, is loaded into memory.

Example:

The following example loads the whole table a_table into memory.

LOAD a_table all;

The following example loads the columns a_column and another_column of table a_table into memory.

LOAD a_table (a_column, another_column);

The load status of a table can be queried:

select loaded from m_cs_tables where table_name = '<table_name>'

7.2.5     MERGE DELTA
Syntax:

MERGE [HISTORY] DELTA OF [<schema_name>.]<table_name> [PART n] [WITH PARAMETERS (<parameter_key_value>, ...)]

Syntax elements:

WITH PARAMETERS (<parameter_list>):

<parameter_list> ::= <parameter>,<parameter_list>

<parameter> ::= <parameter_name> = <parameter_setting>

<parameter_name> ::= 'SMART_MERGE' | 'MEMORY_MERGE'

<parameter_setting> ::= 'ON' | 'OFF'

Current parameters: 'SMART_MERGE' = 'ON' | 'OFF'. When SMART_MERGE is ON, the database performs a smart merge: the database decides whether to merge based on the merge conditions defined in the indexserver configuration. 'MEMORY_MERGE' = 'ON' | 'OFF'. When MEMORY_MERGE is ON, the database merges the delta part of the table in memory only; the merge is not persisted.

Description:

A column store table has a delta part in addition to its main part. Because column storage is optimized and compressed for read operations, the delta part is used to optimize inserts and updates; all insertions are passed to the delta first. MERGE DELTA merges the delta part of a column store table into its main part.

HISTORY can be specified to merge the history delta part of a history table into its history main part.

PART n can be specified to merge the delta part of only partition n of a partitioned table.

Example:

MERGE DELTA OF A;

Merges the column store table delta part into its main part.

MERGE DELTA OF A WITH PARAMETERS('SMART_MERGE' = 'ON');

Smart-merges the column store table delta part into its main part.

MERGE DELTA OF A WITH PARAMETERS('SMART_MERGE' = 'ON', 'MEMORY_MERGE' = 'ON');

Smart-merges the column store table delta part into its main part in memory only, without persistence.

MERGE DELTA OF A PART 1;

Merges the delta part of partition no. 1 of table "A" into the main part of partition no. 1.

MERGE HISTORY DELTA OF A;

Merges the history delta part of table "A" into its history main part.

MERGE HISTORY DELTA OF A PART 1;

Merges the history delta part of partition no. 1 of table "A" into its history main part.

7.2.6     REPLACE | UPSERT
Syntax:

UPSERT  [ <schema_name>. ]<table_name> [ <column_list_clause> ] { <value_list_clause> [ WHERE <condition> | WITH PRIMARY KEY ] | <subquery> }

REPLACE  [ <schema_name>. ]<table_name> [ <column_list_clause> ] { <value_list_clause> [ WHERE <condition> | WITH PRIMARY KEY ] | <subquery> }

Syntax elements:

<column_list_clause> ::= ( <column_name>, ... )

<value_list_clause> ::= VALUES ( <expression>, ... )

<condition> ::= <condition> OR <condition>| <condition> AND <condition>| NOT <condition>| ( <condition> )| <predicate>

For details about predicates, see Predicates.

Description:

An UPSERT or REPLACE statement without a subquery is similar to UPDATE. The only difference is that when the WHERE clause is false (or there is no WHERE clause), the statement adds a new record to the table, like INSERT.

If the table has a PRIMARY KEY, the primary key columns must be included in the column list. Columns defined NOT NULL without a default value must also be included in the column list.

An UPSERT or REPLACE statement with a subquery works like INSERT, except that if an old record has the same primary key value as a new record returned by the subquery, the old record is updated with the returned record. Unless the table has a primary key, this becomes equivalent to a plain INSERT, because without a primary key there is no index to determine whether a new record duplicates an old one.

An UPSERT or REPLACE statement with 'WITH PRIMARY KEY' is the same as a statement with a subquery. It works on a PRIMARY KEY basis.

Example:

CREATE TABLE T (KEY INT PRIMARY KEY, VAL INT);

You can insert a new value:

UPSERT T VALUES (1, 1); -- no WHERE clause and no subquery: inserts directly

If the condition in the WHERE clause is false, a new value is inserted:

UPSERT T VALUES (2, 2) WHERE KEY = 2; -- no matching row: inserts

You can update the record whose KEY is 1, setting column VAL:

UPSERT T VALUES (1, 9) WHERE KEY = 1; -- row exists: updates

Or you can use the WITH PRIMARY KEY keywords:

UPSERT T VALUES (1, 8) WITH PRIMARY KEY; -- updates by primary key; inserts if not present

You can insert values using a subquery:

UPSERT T SELECT KEY + 2, VAL FROM T; -- inserts the subquery result; updates rows that already exist

UPSERT T VALUES (5, 1) WITH PRIMARY KEY; -- no row with this primary key: inserts

UPSERT T SELECT 5, 3 FROM dummy; -- row exists: updates

7.2.7 SELECT
Syntax:

<select_statement> ::= <subquery> [ <for_update> | <time_travel> ]| ( <subquery> ) [ <for_update> | <time_travel> ]

<subquery> ::= <select_clause> <from_clause> [<where_clause>] [<group_by_clause>] [<having_clause>] [{<set_operator> <subquery>, ... }] [<order_by_clause>] [<limit>]

Syntax elements:

SELECT clause:

The SELECT clause specifies the output to be returned to the user or to an outer select clause, if any.

<select_clause> ::= SELECT [TOP <integer>] [ ALL | DISTINCT ] <select_list>

<select_list> ::= <select_item>[, ...]

<select_item> ::= [<table_name>.] *| <expression> [ AS <column_alias> ]

TOP n: TOP n returns the first n records of the SQL statement result.

DISTINCT and ALL: DISTINCT returns only one copy of each set of duplicate records selected. ALL returns all selected records, including duplicates. The default is ALL.

select_list: the select_list lets users define the columns to select from a table.

*: selects all columns from the tables or views listed in the FROM clause. If the asterisk is qualified with a schema name and table name, or with a table name, the result set is restricted to the specified table.

column_alias: a column_alias can be used to give an expression a simpler name.

FROM: The FROM clause specifies the inputs, such as tables, views, and subqueries, to be used in the SELECT statement.

<from_clause> ::= FROM {<table>, ... }

<table> ::= <table_name> [ [AS] <table_alias> ] | <subquery> [ [AS] <table_alias> ] | <joined_table>

<joined_table> ::= <table> [<join_type>] JOIN <table> ON <predicate> | <table> CROSS JOIN <table> | <joined_table>

<join_type> ::= INNER | { LEFT | RIGHT | FULL } [OUTER]

table alias: a table alias can be used to refer to a table or subquery concisely.

join_type defines the type of join to perform: LEFT means left outer join, RIGHT means right outer join, and FULL means full outer join. The OUTER keyword is optional when performing an outer join.

ON <predicate>: the ON clause defines the join predicate.

CROSS JOIN: CROSS JOIN performs a cross join, which produces the cross product of the two tables.
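Hedged sketches of the join variants (t1 and t2 are hypothetical tables sharing an id column):

SELECT * FROM t1 INNER JOIN t2 ON t1.id = t2.id;

SELECT * FROM t1 LEFT OUTER JOIN t2 ON t1.id = t2.id;

SELECT * FROM t1 CROSS JOIN t2;

The last statement returns the cross product of t1 and t2.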

WHERE clause

The WHERE clause specifies a predicate over the input of the FROM clause so that the user can retrieve the required records.

<where_clause> ::= WHERE <condition>

<condition> ::=<condition> OR <condition> | <condition> AND <condition> | NOT <condition> | ( <condition> ) | <predicate>

<predicate> ::= <comparison_predicate> | <range_predicate> | <in_predicate> | <exist_predicate> | <like_predicate> | <null_predicate>

<comparison_predicate> ::= <expression> { = | != | <> | > | < | >= | <= } [ ANY | SOME | ALL ] ({<expression_list> | <subquery>})

<range_predicate> ::= <expression> [NOT] BETWEEN <expression> AND <expression>

<in_predicate> ::= <expression> [NOT] IN ( { <expression_list> | <subquery> } )

<exist_predicate> ::= [NOT] EXISTS ( <subquery> )

<like_predicate> ::= <expression> [NOT] LIKE <expression> [ESCAPE <expression>]

<null_predicate> ::= <expression> IS [NOT] NULL

<expression_list> ::= {<expression>, ... }
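For illustration, the predicate forms can be combined freely in a WHERE clause (t is a hypothetical table with columns a, b, and c):

SELECT * FROM t WHERE a BETWEEN 1 AND 10 AND b IN (1, 2, 3);

SELECT * FROM t WHERE c LIKE 'ab%' OR c IS NULL;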

GROUP BY clause

<group_by_clause> ::=GROUP BY {  { <expression>, ... } | <grouping_set> }

<grouping_set> ::= { GROUPING SETS | ROLLUP | CUBE }[BEST <integer>] [LIMIT <integer>[OFFSET <integer>] ] [WITH SUBTOTAL] [WITH BALANCE] [WITH TOTAL]

[TEXT_FILTER <filterspec> [FILL UP [SORT MATCHES TO TOP]]]  [STRUCTURED RESULT [WITH OVERVIEW] [PREFIX <string_literal>] | MULTIPLE RESULTSETS] ( <grouping_expression_list> )

<grouping_expression_list> ::= { <grouping_expression>, ... }

<grouping_expression> ::=<expression>| ( <expression>, ... ) | ( ( <expression>, ... ) <order_by_clause> )

GROUP BY is used to group selected rows based on specified column values.

GROUPING SETS

In one statement, generates multiple groupings of the data. If optional options such as BEST and LIMIT are not set, the result is the same as a UNION ALL of the aggregated values for each specified grouping set. For example:

"select col1, col2, col3, count(*) from t group by grouping sets ( (col1, col2), (col1, col3) )" is the same as "select col1, col2, NULL, count(*) from t group by col1, col2 union all select col1, NULL, col3, count(*) from t group by col1, col3". In the grouping-sets statement,

each of (col1, col2) and (col1, col3) defines a grouping.

ROLLUP

In one statement, generates multi-level aggregation results. For example, "rollup (col1, col2, col3)" is the same as "grouping sets ( (col1, col2, col3), (col1, col2), (col1) )" plus an additional overall aggregation with no grouping. Thus, without additional options, the number of grouping sets contained in the result is the number of columns in the ROLLUP list plus one for the final overall aggregation.

CUBE

In one statement, generates multi-level aggregation results. For example, "cube (col1, col2, col3)" is the same as "grouping sets ( (col1, col2, col3), (col1, col2), (col1, col3), (col2, col3), (col1), (col2), (col3) )" plus an additional overall aggregation with no grouping. Thus, without additional options, the result contains one grouping set for every possible combination of the columns in the CUBE list, plus one for the final overall aggregation.

BEST n

Returns only the first n grouping sets, ordered by the number of aggregated rows in descending order (all records of a returned grouping set are included, not just a few). n can be zero, positive, or negative. When n is zero, the effect is the same as omitting the BEST option. When n is negative, the ordering is ascending.
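A sketch of BEST (t is a hypothetical table): the statement below returns only the one grouping set, out of the two specified, with the larger number of aggregated rows:

SELECT col1, col2, COUNT(*) FROM t GROUP BY GROUPING SETS BEST 1 ((col1), (col2));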

LIMIT n1 [OFFSET n2]

Within each grouping set, returns the first n1 grouping records after skipping n2 records (that is, takes a slice of the rows in each group).

WITH SUBTOTAL

Within each grouping set, returns a subtotal row of the results restricted by OFFSET or LIMIT. Unless OFFSET or LIMIT is set, the value is the same as WITH TOTAL.

WITH BALANCE

Returns the remaining result values not returned because of OFFSET or LIMIT in each grouping set.

WITH TOTAL

Returns an additional row of aggregated totals in each grouping set. The OFFSET and LIMIT options cannot modify this value.

TEXT_FILTER <filterspec>

Performs text filtering on, or highlights, grouping columns using <filterspec>, where <filterspec> is a single-quoted string with the following syntax:

<filterspec> ::= '[<prefix>]<element>{<subsequent>, ...}'

<prefix> ::= + | - | NOT

<element> ::= <token> | <phrase>

<token> ::= !! Unicode letters or digits

<phrase> ::= !! double-quoted string that does not contain double quotations inside

<subsequent> ::= [<prefix_subsequent>]<element>

<prefix_subsequent> ::= + | - | NOT | AND | AND NOT | OR

The filter defined by <filterspec> consists of tokens or phrases connected with the logical operators AND, OR, and NOT. A token matches a string containing the corresponding word, case-insensitively: 'ab' matches 'ab cd' and 'cd Ab', but not 'abcd'. A token may contain the wildcard characters '*', matching any string, and '?', matching any single character; inside a phrase, however, '*' and '?' are not wildcards. The logical operators AND, OR, and NOT can be used with tokens and phrases. OR is the default operator, so 'ab cd' means the same as 'ab OR cd'. Note that logical operators must be uppercase. As logical operators, the prefixes '+' and '-' denote inclusion (AND) and exclusion (AND NOT) respectively. For example, 'ab -cd' means the same as 'ab AND NOT cd'. Without the FILL UP option, only grouping records containing matching values are returned. Note that a filter is applied only to the first grouping column of each grouping set.

FILL UP

Returns not only the matching grouping records but also the non-matching ones. The text_filter function is useful for identifying which records match. See 'Related functions' below.

SORT MATCHES TO TOP

Returns each grouping set with matching values sorted before non-matching values. This option cannot be used with SUBTOTAL, BALANCE, and TOTAL.

STRUCTURED RESULT

Results are returned as temporary tables. A temporary table is created for each grouping set. If the WITH OVERVIEW option is set, an additional temporary table will be created for the overview of the grouping set. The name of the temporary table is defined by the PREFIX option.

WITH OVERVIEW

Returns the overview in a separate additional table.

PREFIX value

Uses a prefix to name the temporary tables. It must start with "#", which marks a temporary table. If omitted, the default prefix is "#GN". The prefix value is then concatenated with a non-negative integer to form the temporary table name, such as "#GN0", "#GN1", and "#GN2".
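A sketch combining STRUCTURED RESULT with a prefix (t is a hypothetical table; the prefix name is illustrative):

SELECT col1, col2, COUNT(*) FROM t GROUP BY GROUPING SETS STRUCTURED RESULT WITH OVERVIEW PREFIX '#MYGRP' ((col1), (col2));

SELECT * FROM "#MYGRP1";

Within the same session, "#MYGRP0" would hold the overview and "#MYGRP1" the first grouping set.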

MULTIPLE RESULTSETS

Returns results from multiple result sets.

Related functions

The grouping_id ( <grouping_column_1>, ..., <grouping_column_n> ) function returns an integer identifying which grouping set each grouping record belongs to. The text_filter ( <grouping_column> ) function is used with TEXT_FILTER, FILL UP, and SORT MATCHES TO TOP to display matching values or NULL. When the FILL UP option is specified, non-matching values are displayed as NULL.

Return format

If neither STRUCTURED RESULT nor MULTIPLE RESULTSETS is set, returns the union of all grouping sets, with NULL values filled in for attributes not contained in the given grouping set. With STRUCTURED RESULT, additional temporary tables are created, which can be queried with "SELECT * FROM <table name>" in the same session. Table names follow the format:

<PREFIX>0: If WITH OVERVIEW is defined, the table will contain an overview.

<PREFIX>n: The nth grouping set reordered by the BEST parameter.

With MULTIPLE RESULTSETS, multiple result sets will be returned. The grouped records for each grouping set are in a single result set.

HAVING clause:

The HAVING clause is used to select specific groupings that satisfy a predicate. If this clause is omitted, all groups are selected.

<having_clause> ::= HAVING <condition>
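As a minimal sketch (a hypothetical query against the example table t1 defined later in this section), HAVING filters whole groups after aggregation:

select customer, sum(sales) from t1 group by customer having sum(sales) > 500;

Only customers whose total sales exceed 500 are returned; a WHERE clause could not express this, because WHERE filters rows before grouping.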

SET OPERATORS

Set operators combine the results of multiple SELECT statements into a single result set.

<set_operator> ::= UNION [ ALL | DISTINCT ] | INTERSECT [DISTINCT] | EXCEPT [DISTINCT]

UNION ALL

Selects all records (the union) of all SELECT statements. Duplicate records are not removed.

UNION [DISTINCT]

Selects the distinct records of all SELECT statements, removing duplicates across the different SELECT statements. UNION and UNION DISTINCT have the same effect.

INTERSECT [DISTINCT]

Selects unique records that are common (intersection) across all SELECT statements.

EXCEPT [DISTINCT]

Returns all distinct records of the first SELECT statement after removing (subtracting) the records returned by the subsequent SELECT statement.
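As a hypothetical sketch with two single-column tables s1 and s2 (names and contents assumed: s1 holds 1, 2, 2 and s2 holds 2, 3), the operators behave as follows:

select val from s1 union select val from s2; -- 1, 2, 3

select val from s1 union all select val from s2; -- 1, 2, 2, 2, 3

select val from s1 intersect select val from s2; -- 2

select val from s1 except select val from s2; -- 1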

ORDER BY clause

<order_by_clause> ::= ORDER BY { <order_by_expression>, ... }

<order_by_expression> ::= <expression> [ ASC | DESC ] | <position> [ ASC | DESC ]

<position> ::= <integer>

The ORDER BY clause sorts records based on expressions or positions. A position is an index into the select list: in "select col1, col2 from t order by 2", the 2 refers to the second expression of the select list, col2. ASC sorts records in ascending order, DESC in descending order. The default is ASC.

LIMIT

The LIMIT keyword defines the number of records to output.

<limit> ::= LIMIT <integer> [ OFFSET <integer> ]

LIMIT n1 [OFFSET n2]: Returns the first n1 records after skipping n2 records.
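As a minimal sketch (a hypothetical query against the example table t1 defined later in this section):

select id, sales from t1 order by id limit 3 offset 2;

This skips the first 2 records and returns the next 3; combining LIMIT with ORDER BY makes the selection deterministic.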

FOR UPDATE

The FOR UPDATE keyword locks the record so that other users cannot lock or modify the record until the end of the transaction.

<for_update> ::= FOR UPDATE
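As a minimal sketch (a hypothetical query against the example table t1 defined later in this section, assuming autocommit is disabled):

select * from t1 where customer = 'C1' for update;

The selected records stay locked against concurrent locking or modification until the transaction ends with COMMIT or ROLLBACK.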

TIME TRAVEL

This keyword is related to time travel and is used for statement-level time travel back to the snapshot specified by commit_id or time.

<time_travel> ::= AS OF { COMMIT ID <commit_id> | UTCTIMESTAMP <timestamp> }

Time travel is only available for history tables. The <commit_id> can be obtained from m_history_index_last_commit_id after each commit, and its related <timestamp> can be read from sys.m_transaction_history.

create history column table x ( a int, b int ); // after turning off autocommit

insert into x values (1, 1);

commit;

select last_commit_id from m_history_index_last_commit_id where session_id = current_connection; // e.g., 10

insert into x values (2, 2);

commit;

select last_commit_id from m_history_index_last_commit_id where session_id = current_connection; // e.g., 20

delete from x;

commit;

select last_commit_id from m_history_index_last_commit_id where session_id = current_connection; // e.g., 30

select * from x as of commit id 30; // returns nothing

select * from x as of commit id 20; // returns two records: (1,1) and (2,2)

select * from x as of commit id 10; // returns one record: (1,1)

select commit_time from sys.transaction_history where commit_id = 10; // e.g., '2012-01-01 01:11:11'

select commit_time from sys.transaction_history where commit_id = 20; // e.g., '2012-01-01 02:22:22'

select commit_time from sys.transaction_history where commit_id = 30; // e.g., '2012-01-01 03:33:33'

select * from x as of utctimestamp '2012-01-01 02:00:00'; // returns one record: (1,1)

select * from x as of utctimestamp '2012-01-01 03:00:00'; // returns two records: (1,1) and (2,2)

select * from x as of utctimestamp '2012-01-01 04:00:00'; // returns nothing

Examples:

Table t1:

drop table t1;

create column table t1 ( id int primary key, customer varchar(5), year int, product varchar(5), sales int );

insert into t1 values(1, 'C1', 2009, 'P1', 100);

insert into t1 values(2, 'C1', 2009, 'P2', 200);

insert into t1 values(3, 'C1', 2010, 'P1', 50);

insert into t1 values(4, 'C1', 2010, 'P2', 150);

insert into t1 values(5, 'C2', 2009, 'P1', 200);

insert into t1 values(6, 'C2', 2009, 'P2', 300);

insert into t1 values(7, 'C2', 2010, 'P1', 100);

insert into t1 values(8, 'C2', 2010, 'P2', 150);

The following GROUPING SETS statement is equivalent to the second query below. Note that the two groups specified in the grouping sets of the first query are exactly the groups specified by the GROUP BY clauses of the second query.

select customer, year, product, sum(sales) from t1 group by GROUPING SETS((customer, year),(customer, product));

select customer, year, NULL, sum(sales) from t1 group by customer, year

union all

select customer, NULL, product, sum(sales) from t1 group by customer, product;

Note: with UNION, the number of fields and the types of the corresponding fields must be the same in both SELECT statements. A MultiCube in BW inserts the records of multiple InfoProviders into the physical table of the MultiCube; this is not done with a UNION SQL statement, but by inserting the data of each InfoProvider into the MultiCube one by one, so the number of fields per InfoProvider can differ. When the report is displayed, the data is merged via GROUP BY.

ROLLUP and CUBE are compact notations for frequently used grouping sets. The following ROLLUP query is equivalent to the second query below.

select customer, year, sum(sales) from t1 group by ROLLUP(customer, year);

select customer, year, sum(sales) from t1 group by grouping sets((customer, year),(customer))

union all

select NULL, NULL, sum(sales) from t1;

The same result can be obtained with an explicit empty grouping set ():

select customer, year, sum(sales) from t1 group by grouping sets((customer, year),(customer),());

The following CUBE query is equivalent to the second group-by query.

select customer, year, sum(sales) from t1 group by CUBE(customer, year);

select customer, year, sum(sales) from t1 group by grouping sets((customer, year),(customer),(year))

union all

select NULL, NULL, sum(sales) from t1;

The same result can be obtained with an explicit empty grouping set ():

select customer, year, sum(sales) from t1 group by grouping sets((customer, year),(customer),(year),());

BEST 1 specifies that the following query returns only the top (best) grouping set. In this example, the (customer, year) grouping set has 4 records and the (product) grouping set has 2, so the first 4 records are returned. With 'BEST -1' instead of 'BEST 1', the last grouping set's 2 records are returned.

select customer, year, product, sum(sales) from t1 group by grouping sets ((customer, year),(product));

select customer, year, product, sum(sales) from t1 group by grouping sets BEST 1((customer, year),(product));

select customer, year, product, sum(sales) from t1 group by grouping sets BEST 2((customer, year),(product));

select customer, year, product, sum(sales) from t1 group by grouping sets BEST -1((customer, year),(product));

LIMIT 2 limits the number of records per grouping set to at most 2. The (customer, year) grouping set has 4 records, so only the first 2 are returned; the (product) grouping set has 2 records, so all of them are returned.

select customer, year, product, sum(sales) from t1 group by grouping sets LIMIT 2((customer, year),(product));

WITH SUBTOTAL generates an additional record for each grouping set, showing the subtotal of the returned results (records that are not returned are not counted, unlike WITH TOTAL; see WITH TOTAL below). In these subtotal records, the customer, year, and product columns are NULL, and sum(sales) contains the sum of the sum(sales) values in the select list.

select customer, year, product, sum(sales) from t1 group by grouping sets LIMIT 2 WITH SUBTOTAL((customer, year),(product));

WITH BALANCE generates an additional record for each grouping set, showing the subtotal of the results that were not returned (if no non-returned rows exist, the balance row is still displayed, with NULL (question mark) values, rather than being omitted).

select customer, year, product, sum(sales) from t1 group by grouping sets WITH BALANCE((customer, year),(product));

select customer, year, product, sum(sales) from t1 group by grouping sets LIMIT 2 WITH BALANCE((customer, year),(product));

select customer, year, product, sum(sales) from t1 group by grouping sets LIMIT 1 WITH BALANCE((customer, year),(product));

WITH TOTAL generates an additional record for each grouping set, showing the total over all records of the grouping set, whether or not they are returned (that is, data not displayed for the group is also totaled, e.g. 300 + 500 <> 1250 below: LIMIT restricts the number of records returned per group, but the suppressed records are still counted, which differs from WITH SUBTOTAL).

select customer, year, product, sum(sales) from t1 group by grouping sets LIMIT 2 WITH TOTAL((customer, year),(product))

TEXT_FILTER lets the user filter the first grouping column of each grouping set with the specified <filterspec>. The following query searches for values ending with '2': customers for the first grouping set, products for the second. Only the three matching records are returned. Including text_filter in the SELECT list is useful to see which values match.

select customer, year, product, sum(sales), text_filter(customer), text_filter(product) from t1

group by grouping sets TEXT_FILTER '*2'((customer, year),(product)); -- only the first column of each grouping set (customer and product here) is searched; the year column is not searched because it is not the first column of a grouping set

FILL UP returns both the records matching <filterspec> and the non-matching ones. The following query therefore returns 6 records, while the previous query returned 3.

select customer, year, product, sum(sales), text_filter(customer), text_filter(product)

from t1 group by grouping sets TEXT_FILTER '*2' FILL UP ((customer, year),(product));

SORT MATCHES TO TOP moves matching records to the top: within each grouping set, the grouped records are sorted so that matches precede non-matches.

select customer, year, product, sum(sales), text_filter(customer), text_filter(product)

from t1 group by grouping sets TEXT_FILTER '*2' FILL UP SORT MATCHES TO TOP((customer, year),(product));

STRUCTURED RESULT creates a temporary table for each grouping set and, optionally, an overview table. Table "#GN1" holds the (customer, year) grouping set and table "#GN2" the (product) grouping set. Note that each table contains only its relevant columns: table "#GN1" does not contain the column "product", and table "#GN2" does not contain the columns "customer" and "year".

select customer, year, product, sum(sales) from t1 group by grouping sets STRUCTURED RESULT((customer, year),(product));

select * from "#GN1";

select * from "#GN2";

WITH OVERVIEW creates a temporary table "#GN0" for the overview table.

select customer, year, product, sum(sales)

from t1 group by grouping sets STRUCTURED RESULT WITH OVERVIEW((customer, year),(product));

select * from "#GN0";

select * from "#GN1";

select * from "#GN2";

You can change the names of the temporary tables with the PREFIX keyword. Note that the name must begin with the temporary-table prefix '#'. The following returns the same results as above; only the temporary table names differ:

select customer, year, product, sum(sales)

from t1

group by grouping sets STRUCTURED RESULT WITH OVERVIEW PREFIX '#MYTAB'((customer, year),(product));

select * from "#MYTAB0";

select * from "#MYTAB1";

select * from "#MYTAB2";

The temporary tables are dropped when the corresponding session is closed or when the user executes a drop command. The temporary tables are listed in m_temporary_tables.

select * from m_temporary_tables;

MULTIPLE RESULTSETS returns the results in multiple result sets. In SAP HANA Studio, the following query returns three result sets: one for the overview and two for the grouping sets.

select customer, year, product, sum(sales) from t1 group by grouping sets MULTIPLE RESULTSETS((customer, year),(product));

7.2.8     UNLOAD
Syntax:

UNLOAD <table_name>

Description:

The UNLOAD statement unloads a column store table from memory to free up memory. The table is reloaded on the next access.

Example:

In the following example, the table a_table is unloaded from memory.

UNLOAD a_table;

The load status of the unloaded table can be queried with the following statement:

select loaded from m_cs_tables where table_name = 'a_table';

7.2.9     UPDATE
Syntax:

UPDATE [<schema_name>.]<table_name> [ AS <alias_name> ] <set_clause> [ WHERE <condition> ]

<set_clause> ::= SET {<column_name> = <expression>},...

For details on expressions, see Expressions.

<condition> ::= <condition> OR <condition> | <condition> AND <condition> | NOT <condition> | ( <condition> ) | <predicate>

For details on predicates, see Predicates.

Description:

The UPDATE statement changes the values of the records in a table that satisfy the condition. Each listed column is assigned the result of its expression for every record for which the WHERE condition is true. If the WHERE clause is omitted, the statement updates all records in the table.

Example:

CREATE TABLE T (KEY INT PRIMARY KEY, VAL INT);

INSERT INTO T VALUES (1, 1);

INSERT INTO T VALUES (2, 2);

The record is updated where the condition in the WHERE clause is true.

UPDATE T SET VAL = VAL + 1 WHERE KEY = 1;

If the WHERE clause is omitted, all records in the table are updated.

UPDATE T SET VAL = KEY + 10;

7.3 System management statements
7.3.1 SET SYSTEM LICENSE
Syntax:

SET SYSTEM LICENSE '<license key>'

Description:

Installs a license key for the database instance. The license key (<license key>) is copied and pasted from a license key file.

Executing this command requires the system privilege LICENSE ADMIN.

Example:

SET SYSTEM LICENSE '----- Begin SAP License -----

SAPSYSTEM=HD1

HARDWARE-KEY=K4150485960

INSTNO=0110008649

BEGIN=20110809

EXPIRATION=20151231

LKEY=...

SWPRODUCTNAME=SAP-HANA

SWPRODUCTLIMIT=2147483647

SYSTEM-NR=00000000031047460'

7.3.2     ALTER SYSTEM ALTER CONFIGURATION
Syntax:

ALTER CONFIGURATION (<filename>, <layer>[, <layer_name>]) SET | UNSET <parameter_key_value_list> [WITH RECONFIGURE]

Syntax elements:

<filename> ::= <string_literal>

For row store engine configuration, the file name is 'indexserver.ini'. The file name used must be an ini file located in the 'DEFAULT' layer. If a file with the chosen name does not exist in the desired layer, it is created by the SET command.

<layer> ::= <string_literal>

Sets the target layer for the configuration change. The parameter can be 'SYSTEM' or 'HOST'. The SYSTEM layer is the recommended layer for customer settings. The HOST layer should generally be used only for a small number of configurations, for example the parameters contained in daemon.ini.

<layer_name> ::= <string_literal>

If the layer above is set to 'HOST', layer_name is used to specify the target tenant name or target host name, for example 'selxeon12' for the host named 'selxeon12'.

SET

The SET command updates the value of a key if the key already exists, or inserts the key-value pair if required.

UNSET

The UNSET command removes a key and its associated value.

<parameter_key_value_list> ::= {(<section_name>, <parameter_name>) = <parameter_value>},...

Specifies the section, key, and value of the ini file to be modified, as follows:

<section_name> ::= <string_literal>

The section name of the parameter to be modified.

<parameter_name> ::= <string_literal>

The name of the parameter to be modified.

<parameter_value> ::= <string_literal>

The value of the parameter to be modified.

WITH RECONFIGURE

When WITH RECONFIGURE is specified, the configuration change is applied directly to the running SAP HANA database instance.

When WITH RECONFIGURE is not specified, the new configuration is written to the ini file, but the new value is not applied to the currently running system; it is applied only at the next startup of the database. This means that the contents of the ini file and the actual configuration values used by the SAP HANA database may be inconsistent.

Description:

Sets or removes configuration parameters in an ini file. Ini files are configured on the DEFAULT, SYSTEM, and HOST layers.

NOTE: The DEFAULT layer configuration cannot be changed or deleted with this command.

The following are examples of ini file locations:

DEFAULT: /usr/sap/<SYSTEMNAME>/HDB<INSTANCENUMBER>/exe/config/indexserver.ini

SYSTEM: /usr/sap/<SYSTEMNAME>/SYS/global/hdb/custom/config/indexserver.ini

HOST: /usr/sap/<SYSTEMNAME>/HDB<INSTANCENUMBER>/<HOSTNAME>/indexserver.ini

The priority of the configuration layer: DEFAULT < SYSTEM < HOST. This means that the HOST layer has the highest priority, followed by the SYSTEM layer, and finally the DEFAULT layer. The configuration with the highest priority will be applied to the running environment. If the configuration with the highest priority is deleted, the configuration with the next highest priority will be applied.

System and Monitoring Views:

The currently available ini files are listed in the system table M_INIFILES, and the current configuration is visible in the system table M_INIFILE_CONTENTS.

Example:

An example of modifying the system layer configuration is as follows:

ALTER SYSTEM ALTER CONFIGURATION ('filename', 'layer') SET ('section1', 'key1') = 'value1', ('section2', 'key2') = 'value2', ... [WITH RECONFIGURE];

ALTER SYSTEM ALTER CONFIGURATION ('filename', 'layer', 'layer_name') UNSET ('section1', 'key1'), ('section2'), ... [WITH RECONFIGURE];

7.3.3     ALTER SYSTEM ALTER SESSION SET
Syntax:

ALTER SYSTEM ALTER SESSION <session_id> SET <key> = <value>

Syntax elements:

<session_id> ::= <unsigned_integer>

The ID of the session for which the variable should be set.

<key> ::= <string_literal>

The key of the session variable; the maximum length is 32 characters.

<value> ::= <string_literal>

The desired value of the session variable; the maximum length is 512 characters.

Description:

With this command, you can set a session variable for a database session.

Note: there are several read-only session variables whose values cannot be modified with this command: APPLICATION, APPLICATIONUSER, TRACEPROFILE.

Session variables can be obtained using the SESSION_CONTEXT function and unset using the ALTER SYSTEM ALTER SESSION UNSET command.

Example:

In the following example, you set the variable 'MY_VAR' to 'dummy' in session 200006:

ALTER SYSTEM ALTER SESSION 200006 SET 'MY_VAR' = 'dummy';

7.3.4 ALTER SYSTEM ALTER SESSION UNSET
Syntax:

ALTER SYSTEM ALTER SESSION <session_id> UNSET <key>

Syntax elements:

<session_id> ::= <unsigned_integer>

The ID of the session for which the variable should be unset.

<key> ::= <string_literal>

The key of the session variable; the maximum length is 32 characters.

Description:

With this command, you can unset a session variable for a database session.

Session variables can be obtained through the SESSION_CONTEXT function.

Example:

Get the session variables for the current session:

SELECT * FROM M_SESSION_CONTEXT WHERE CONNECTION_ID = CURRENT_CONNECTION

Remove a session variable from a specific session:

ALTER SYSTEM ALTER SESSION 200001 UNSET 'MY_VAR';

7.3.5 ALTER SYSTEM CANCEL [WORK IN] SESSION
Syntax:

ALTER SYSTEM CANCEL [WORK IN] SESSION <session_id>

Syntax elements:

<session_id> ::= <string_literal>

The session ID of the session to cancel.

Description:

Cancels the currently running statement in the session specified by the session ID. The session's current transaction is rolled back after cancellation, and the executing statement returns error code 139 (current operation cancelled by request and transaction rolled back).

Example:

You can use the following query to get the current connection IDs and the statements they execute.

SELECT C.CONNECTION_ID, PS.STATEMENT_STRING

FROM M_CONNECTIONS C JOIN M_PREPARED_STATEMENTS PS

ON C.CONNECTION_ID = PS.CONNECTION_ID AND C.CURRENT_STATEMENT_ID = PS.STATEMENT_ID

WHERE C.CONNECTION_STATUS = 'RUNNING' AND C.CONNECTION_TYPE = 'Remote'

Using the connection ID obtained from the above query statement, you can now cancel a running query with the following statement:

ALTER SYSTEM CANCEL SESSION '400037';

7.3.6 ALTER SYSTEM CLEAR SQL PLAN CACHE
Syntax:

ALTER SYSTEM CLEAR SQL PLAN CACHE

Description:

The SQL plan cache stores the plans generated for previously executed SQL statements. The SAP HANA database uses this plan cache to speed up execution when the same SQL statement is executed again. The plan cache also collects statistics about plan preparation and execution.

You can find more information about the SQL plan cache in the following monitoring views:

M_SQL_PLAN_CACHE, M_SQL_PLAN_CACHE_OVERVIEW

The ALTER SYSTEM CLEAR SQL PLAN CACHE statement removes all SQL plans that are not currently being executed from the plan cache. The command also removes all plans with a reference count of 0 from the plan cache and resets the statistics of all remaining plans. Finally, the command also resets the contents of the monitoring view M_SQL_PLAN_CACHE_OVERVIEW.

Example:

ALTER SYSTEM CLEAR SQL PLAN CACHE

7.3.7     ALTER SYSTEM CLEAR TRACES
Syntax:

ALTER SYSTEM CLEAR TRACES (<trace_type_list>)

Syntax elements:

<trace_type_list> ::= <trace_type> [,...]

You can clear multiple traces at the same time by adding several trace_types in a comma-separated list.

<trace_type> ::= <string_literal>

You can selectively clear specific trace files by setting trace_type to one of the following types:

Description:

You can use ALTER SYSTEM CLEAR TRACES to remove the trace content from trace files. When you use this command, all open trace files of the SAP HANA database are removed or cleared. In a distributed system, the command clears all trace files on all hosts.

Use this command to reduce the disk space used by large trace files, for example when a trace component is set to INFO or DEBUG.

You can monitor trace files and their contents with the system tables M_TRACEFILES and M_TRACEFILE_CONTENTS, respectively.

Example:

To clear the alert trace files, use the following command:

ALTER SYSTEM CLEAR TRACES('ALERT');

To clear the alert and client trace files, use the following command:

ALTER SYSTEM CLEAR TRACES('ALERT', 'CLIENT');

7.3.8 ALTER SYSTEM DISCONNECT SESSION
Syntax:

ALTER SYSTEM DISCONNECT SESSION <session_id>

Syntax elements:

<session_id> ::= <string_literal>

The session ID to disconnect.

Description:

You use ALTER SYSTEM DISCONNECT SESSION to disconnect a specific database session. All running operations associated with the session are terminated before disconnecting.

Example:

You get the session ID of an idle session with the following command:

SELECT CONNECTION_ID, IDLE_TIME FROM M_CONNECTIONS WHERE CONNECTION_STATUS = 'IDLE' AND CONNECTION_TYPE = 'Remote' ORDER BY IDLE_TIME DESC

You disconnect the session with the following command:

ALTER SYSTEM DISCONNECT SESSION '400043'

7.3.9 ALTER SYSTEM LOGGING
Syntax:

ALTER SYSTEM LOGGING <on_off>

Syntax elements:

<on_off> ::= ON | OFF

Description:

Enables or disables logging.

While logging is disabled, no log entries are persisted; data is written to the data area only when a savepoint is completed.

This can result in the loss of committed transactions and termination of the indexserver while loading. In case of termination, you have to truncate the tables and insert all data again.

After enabling logging, you must perform a savepoint to ensure that all data is persisted, and you must perform a data backup, otherwise the data cannot be recovered.

Only use this command for an initial load!

You can do the same for a single table with ALTER TABLE ... ENABLE/DISABLE DELTA LOG.

7.3.10 ALTER SYSTEM RECLAIM DATAVOLUME
Syntax:

ALTER SYSTEM RECLAIM DATAVOLUME [SPACE] [<host_port>] <percentage_of_overload_size> <shrink_mode>

Syntax elements:

<host_port> ::= 'host_name:port_number'

Specifies the server whose persistence layer should be reduced in size:

<percentage_of_overload_size> ::= <int_const>

Specifies the percentage of the overload size to which the data volume should be reduced.

<shrink_mode> ::= DEFRAGMENT | SPARSIFY

Specifies the strategy for reducing the size of the persistence layer; the default is DEFRAGMENT. Note that SPARSIFY is not yet supported and is reserved for future use.

Description:

Use this command to free unused space in the persistence layer. It reduces the data volume to N% of the overload size. It works like hard disk defragmentation: data scattered across pages is moved to the front of the data volume, and the free space at the end of the data volume is truncated.

If <host_port> is omitted, the statement is applied to the persistence of all servers.

Example:

In the example below, the persistence layers of all servers in the landscape are defragmented and reduced to 120% of the overload size.

ALTER SYSTEM RECLAIM DATAVOLUME 120 DEFRAGMENT

7.3.11 ALTER SYSTEM RECLAIM LOG
Syntax:

ALTER SYSTEM RECLAIM LOG

Description:

When a large number of log segments have accumulated in the database, you can use this command to reclaim the disk space of currently unused log segments.

Log segments can accumulate for a number of reasons, for example when automatic log backup is not operational for a long time or when a log savepoint is blocked for a long time. When such problems occur, use the ALTER SYSTEM RECLAIM LOG command only after fixing the root cause of the log accumulation.

Example:

To reclaim the disk space of currently unused log segments, use the following command:

ALTER SYSTEM RECLAIM LOG

7.3.12 ALTER SYSTEM RECLAIM VERSION SPACE
Syntax:

ALTER SYSTEM RECLAIM VERSION SPACE

Description:

Perform MVCC version garbage collection to reuse resources.

7.3.13 ALTER SYSTEM RECONFIGURE SERVICE
Syntax:

ALTER SYSTEM RECONFIGURE SERVICE (<service_name>,<host>,<port>)

Syntax elements:

<service_name> ::= <string_literal>

The name of the service you wish to reconfigure. See the monitoring view M_SERVICE_TYPES for a list of available service types.

<host> ::= <string_literal>

<port> ::= <integer>

The host and port number of the service you wish to reconfigure.

Description:

You can use ALTER SYSTEM RECONFIGURE SERVICE to reconfigure a specified service by applying the current configuration parameters.

Use this command after modifying multiple configuration parameters with ALTER CONFIGURATION without the WITH RECONFIGURE option. See ALTER SYSTEM ALTER CONFIGURATION.

To reconfigure a specific service, specify values for <host> and <port> and leave <service_name> blank.

To reconfigure all services of one type, specify a value for <service_name> and leave <host> and <port> blank.

To reconfigure all services, leave all parameters blank.

Example:

You can reconfigure all services on host ld8520.sap.com that use port number 30303 with the following command:

ALTER SYSTEM RECONFIGURE SERVICE ('', 'ld8520.sap.com', 30303)

You can reconfigure all services of type indexserver with the following command:

ALTER SYSTEM RECONFIGURE SERVICE ('indexserver', '', 0)

See ALTER SYSTEM ALTER CONFIGURATION.

7.3.14   ALTER SYSTEM REMOVE TRACES
Syntax:

ALTER SYSTEM REMOVE TRACES (<host>, <trace_file_name_list>)

<trace_file_name_list> ::= <trace_file>,...

Syntax elements:

<host> ::= <string_literal>

The name of the host whose trace files are to be removed.

<trace_file_name_list> ::= <trace_file> [,...]

You can remove multiple trace files at the same time by adding several trace_file entries in a comma-separated list.

<trace_file> ::= see table below.

You can set trace_file to one of the following types:

Description:

You can use this command to remove trace files on the specified host and reduce the disk space occupied by large trace files. A trace file that is still open for a service cannot be removed; in that case, you can use the ALTER SYSTEM CLEAR TRACES command to clear the trace file instead.

Example:

You remove all ALERT trace files on host lu873.sap.com with the following command:

ALTER SYSTEM REMOVE TRACES ('lu873.sap.com', '*alert_*.trc');

See ALTER SYSTEM CLEAR TRACES.

7.3.15   ALTER SYSTEM RESET MONITORING VIEW
Syntax:

ALTER SYSTEM RESET MONITORING VIEW <view_name>

Syntax elements:

<view_name> ::= <identifier>

The name of the resettable monitoring view to reset.

Note: not all monitoring views can be reset with this command. Resettable views have names with the suffix "_RESET", so you can tell from the name whether a view can be reset.

Description:

You can use this command to reset the statistics data of the specified monitoring view.

You can use this command to define a starting point for measurements. First, you reset the monitoring view; then you execute an operation. When the operation has finished, you query the "_RESET" version of the monitoring view to obtain the statistics collected since the last reset.

Example:

In the following example, you reset the "SYS"."M_HEAP_MEMORY_RESET" monitoring view:

ALTER SYSTEM RESET MONITORING VIEW "SYS"."M_HEAP_MEMORY_RESET"

7.3.16   ALTER SYSTEM SAVE PERFTRACE
Syntax:

ALTER SYSTEM SAVE PERFTRACE [INTO FILE <file_name>]

Syntax elements:

<file_name> ::= <string_literal>

The file in which the raw performance data is saved.

Description:

You can use this command to collect the raw performance data in the .prf files and save the information to a .tpt file. The .tpt file is saved in the trace file directory of the SAP HANA database instance. If you do not specify a file name, the file is saved as 'perftrace.tpt'.

The performance trace data file (.tpt) can be downloaded from 'SAP HANA Computing Studio' -> Diagnosis Files, after which the performance trace can be loaded and analyzed with HDBAdmin on the SAP HANA instance.

Monitoring view:

The state of the performance trace can be monitored in M_PERFTRACE.

Example:

You can save the raw performance data to the file 'mytrace.tpt' with the following command:

ALTER SYSTEM SAVE PERFTRACE INTO FILE 'mytrace.tpt'

7.3.17   ALTER SYSTEM SAVEPOINT
Syntax:

ALTER SYSTEM SAVEPOINT

Description:

Executes a savepoint on the persistence layer manager. A savepoint is a point in time at which a complete, consistent image of the database is persisted to disk; this image can be used to restart the database.

Normally, savepoints are executed periodically, as configured by the parameter savepoint_interval_s in the [persistence] section. For special (usually testing) purposes, savepoints may be disabled; in that case, you can use this command to execute a savepoint manually.
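As a minimal sketch, you trigger a savepoint manually and then inspect it (the monitoring view name M_SAVEPOINTS is assumed here):

ALTER SYSTEM SAVEPOINT;

SELECT * FROM M_SAVEPOINTS;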

7.3.18   ALTER SYSTEM START PERFTRACE
Syntax:

ALTER SYSTEM START PERFTRACE [<user_name>] [<application_user_name>] [PLAN_EXECUTION] [FUNCTION_PROFILER] [DURATION <duration_seconds>]

Syntax elements:

<user_name> ::= <identifier>

Restricts the perftrace collection to the specified SQL user name.

<application_user_name> ::= <identifier>

Restricts the perftrace collection to the specified application user name; the application user can be defined with the session variable APPLICATIONUSER.

PLAN_EXECUTION

Collects plan execution details.

FUNCTION_PROFILER

Collects function-level details.

<duration_seconds> ::= <numeric_literal>

The perftrace stops automatically after duration_seconds have elapsed. If this parameter is not specified, the perftrace is stopped only with ALTER SYSTEM STOP PERFTRACE.

Description:

Starts a performance trace.

With 'Explain Plan' or 'Visualize Plan', you can view the execution of a statement on a logical level. With the performance trace, statement execution is recorded at the thread and function level.

Only one perftrace can be active at a time.

The state of the performance trace can be monitored in M_PERFTRACE.

Example:

ALTER SYSTEM START PERFTRACE sql_user app_user PLAN_EXECUTION FUNCTION_PROFILER

7.3.19   ALTER SYSTEM STOP PERFTRACE
Syntax:

ALTER SYSTEM STOP PERFTRACE

Description:

Stops a previously started performance trace. After stopping, use ALTER SYSTEM SAVE PERFTRACE to collect and save the performance trace data.

Example:

ALTER SYSTEM STOP PERFTRACE

7.3.20 ALTER SYSTEM STOP SERVICE
Syntax:

ALTER SYSTEM STOP SERVICE <host_port> [IMMEDIATE [WITH COREFILE]]

Syntax elements:

<host_port> ::= <host_name:port_number> | ('<host_name>', <port_number>)

The location of the service to be stopped.

IMMEDIATE

Immediately stops (aborts) the service without waiting for a graceful shutdown.

WITH COREFILE

Writes a core file.

Description:

Stops or terminates one or multiple services. Normally, the service is restarted by the daemon.

Use this command after modifying parameters that cannot be changed online.

Example:

ALTER SYSTEM STOP SERVICE 'ld8520:30303'

7.3.21 UNSET SYSTEM LICENSE ALL
Syntax:

UNSET SYSTEM LICENSE ALL

Description:

Deletes all installed license keys. Immediately after this command, the system is locked, and a new valid license key must be installed before the system can be used further. Executing this command requires the system privilege LICENSE ADMIN.

Example:

UNSET SYSTEM LICENSE ALL

7.4 Session management statements
7.4.1 CONNECT
Syntax:

CONNECT <connect_option>

Syntax elements:

<connect_option> ::= <user_name> PASSWORD <password> | WITH SAML ASSERTION '<xml>'

Description:

Connects to the database instance by specifying a user name and password, or by specifying a SAML assertion.

Example:

CONNECT my_user PASSWORD myUserPass1

7.4.2 SET HISTORY SESSION
Syntax:

SET HISTORY SESSION TO <when>

Syntax elements:

<when> ::= NOW | COMMIT ID <commit_id> | UTCTIMESTAMP <utc_timestamp>

The user specifies the exact point in time to which the session should travel.

Description:

SET HISTORY SESSION causes the current session to view a past version of the history tables. The user can specify the version in COMMIT ID or UTCTIMESTAMP format, or return to the current version by specifying NOW. After issuing SET HISTORY SESSION with COMMIT ID or UTCTIMESTAMP, the current session sees an old version of the history tables and cannot write anything to the system. If the NOW option is given, the current session reverts to a normal session, sees the current version of the history tables, and can write to the system. This command only applies to history tables; the visibility of ordinary tables is not affected.

Example:

SELECT CURRENT_UTCTIMESTAMP FROM SYS.DUMMY

SELECT LAST_COMMIT_ID FROM M_HISTORY_INDEX_LAST_COMMIT_ID WHERE SESSION_ID = CURRENT_CONNECTION

COMMIT

SET HISTORY SESSION TO UTCTIMESTAMP '2012-03-09 07:01:41.428'

SET HISTORY SESSION TO NOW

7.4.3     SET SCHEMA
Syntax:

SET SCHEMA <schema_name>

Description:

Changes the current schema of the session. The current user's schema is used for any table that is not qualified with a schema.
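As a minimal sketch (the schema and table names MY_SCHEMA and my_table are hypothetical):

SET SCHEMA MY_SCHEMA;

SELECT * FROM my_table; -- resolves to MY_SCHEMA.my_table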

7.4.4     SET [SESSION]
Syntax:

SET [SESSION] <key> = <value>

(The SESSION keyword can be omitted.)

Syntax elements:

<key> ::= <string_literal>

The key of the session variable; the maximum length is 32 characters.

<value> ::= <string_literal>

The desired value of the session variable; the maximum length is 512 characters.

Description:

You can use this command to set a session variable for your database session by providing a key-value pair.

Note: there are several read-only session variables whose values cannot be modified with this command: APPLICATION, APPLICATIONUSER, TRACEPROFILE.

Session variables can be obtained with the SESSION_CONTEXT function and unset with the UNSET [SESSION] command.

Example:

SET 'MY_VAR' = 'dummy';

SELECT SESSION_CONTEXT('MY_VAR') FROM dummy;

UNSET 'MY_VAR';

7.4.5     UNSET [SESSION]
Syntax:

UNSET [SESSION] <key>

Syntax elements:

<key> ::= <string_literal>

The key of the session variable; the maximum length is 32 characters.

Description:

You can use UNSET [SESSION] to unset a session variable for the current session.

Note: there are several read-only session variables that cannot be modified with this command: APPLICATION, APPLICATIONUSER, TRACEPROFILE.

Example:

SET 'MY_VAR' = 'dummy';

SELECT SESSION_CONTEXT('MY_VAR') FROM dummy;

UNSET 'MY_VAR';

7.5 Transaction management statements
7.5.1 COMMIT
Syntax:

COMMIT

Description:

The system supports transactional consistency: the current work is either applied to the system in full or discarded. If you want to apply the current work to the system permanently, use the COMMIT command. If the COMMIT command is issued and processed successfully, all changes are applied when the current transaction completes, and the changes also become visible to other work started afterwards. Work committed via the COMMIT command cannot be undone. A distributed system follows the standard two-phase commit protocol: in the first phase, the transaction coordinator asks each participant whether it is ready to commit; in the second phase, it sends the result to the participants. The COMMIT command is only available in sessions with 'autocommit' disabled.

Example:

COMMIT

7.5.2 LOCK TABLE
Syntax:

LOCK TABLE <table_name> IN EXCLUSIVE MODE [NOWAIT]

Description:

The LOCK TABLE command explicitly acquires an exclusive lock on a table. If the NOWAIT option is specified, the command only attempts to acquire the lock without waiting. If the lock cannot be acquired with NOWAIT specified, an error code is returned, but the current transaction is not rolled back.

Example:

LOCK TABLE t1 IN EXCLUSIVE MODE NOWAIT

7.5.3 ROLLBACK
Syntax:

ROLLBACK

describe:

该系统支持事务一致性,保证了当前作业是完全应用到系统中或者弃用。在事务的中间过程, 可以显式恢复,因为由于 ROLLBACK 命令,事务尚未执行。发布 ROLLBACK 命令后, 将完全恢复事务系统做的任何变化,当前会话将处于闲置状态。 ROLLBACK 命令只适用于'autocommit'的禁用会话。

Example:

ROLLBACK

7.5.4     SET TRANSACTION
Syntax:

SET TRANSACTION <isolation_level> | <transaction_access_mode>

Syntax elements:

<isolation_level> ::= ISOLATION LEVEL <level>

The isolation level sets statement-level read consistency for data in the database. If <isolation_level> is omitted, the default is READ COMMITTED.

<level> ::= READ COMMITTED | REPEATABLE READ | SERIALIZABLE

READ COMMITTED

The READ COMMITTED isolation level provides statement-level read consistency within a transaction. Each statement in the transaction sees the data in the committed state at the time the statement starts executing. This means that within the same transaction, each statement may see a different snapshot of the database, because data can be committed while the transaction is running.

REPEATABLE READ/SERIALIZABLE

The REPEATABLE READ and SERIALIZABLE isolation levels provide transaction-level snapshot isolation. All statements of a transaction share the same snapshot of the database. The snapshot contains all changes that were committed at the time the transaction started, plus the changes made by the transaction itself.

<transaction_access_mode> ::= READ ONLY | READ WRITE

The SQL transaction access mode controls whether a transaction may modify data during its execution. If <transaction_access_mode> is omitted, the default is READ ONLY.

READ ONLY

If the READ ONLY access mode is set, only read-only SELECT statements are allowed. An exception will be thrown if an update or insert operation is attempted in this mode.

READ WRITE

If the READ WRITE access mode is set, statements within a transaction are free to read or change database data as needed.
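As a sketch of the two access modes (the table name is illustrative, not from the original text):

```sql
SET TRANSACTION READ ONLY;
SELECT * FROM my_table;     -- allowed: read-only statement
-- An UPDATE or INSERT here would throw an exception in READ ONLY mode.

SET TRANSACTION READ WRITE;
UPDATE my_table SET c1 = 1; -- allowed: the transaction may now change data
```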

Description:

The SAP HANA database uses multi-version concurrency control (MVCC) to ensure consistent read operations (the read committed isolation level combined with MVCC solves the problem of non-repeatable reads). Concurrent read operations see a consistent view of the database data without blocking concurrent write operations. Update operations are performed by inserting a new version of the data rather than overwriting existing data.

The specified isolation level determines the type of locking operation that will be used. The system supports both statement-level snapshot isolation and transaction-level snapshot isolation.

 For statement-level snapshot isolation, use READ COMMITTED.

 For transaction-level snapshot isolation, use REPEATABLE READ or SERIALIZABLE.
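For instance, transaction-level snapshot isolation can be selected like this:

```sql
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
```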

In a transaction, when a record is inserted, updated, or deleted, the system holds an exclusive lock on the affected record for the duration of the transaction, and also sets a lock on the affected table. This guarantees that the table is not dropped or altered while records in the table are being updated. The database releases these locks at the end of the transaction.

Note: Read operations do not set any locks on tables or rows in the database, regardless of the isolation level used.

Data Definition Language and Transaction Isolation

Data Definition Language (DDL) statements (CREATE TABLE, DROP TABLE, CREATE VIEW, etc.) always take effect immediately for subsequent SQL statements, regardless of the isolation level used. For an example of this behavior, consider the following sequence:

1. A long-running SERIALIZABLE isolated transaction starts operating on table C.

2. Some DDL statements run outside a transaction, adding a new column to table C.

3. Within the SERIALIZABLE transaction, the newly created column becomes accessible as soon as the DDL statement has executed. This happens regardless of the isolation level used.

Example:

SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

The isolation levels of a database control the effects of concurrency:

Read Uncommitted: data can be read before it is committed (an INSERT has been issued but not yet committed, yet the row can already be read). Rarely used. At this isolation level, all transactions can see the results of other uncommitted transactions. It is seldom used in practice, because its performance is not much better than the other levels, and reading uncommitted data is known as a dirty read.

Read Committed: data can only be read after it has been committed; commonly used. This is the default isolation level of most database systems (but not of MySQL, which defaults to repeatable read). It satisfies the simple definition of isolation: a transaction can only see changes made by committed transactions. This solves the dirty read problem.

Repeatable Read: MySQL's default level; data must be committed to be visible, and rows are locked while being read. It ensures that multiple reads within the same transaction see the same rows. In theory this can still lead to another thorny problem: the phantom read. Simply put, a phantom read occurs when a user reads a range of rows and another transaction inserts a new row into that range; when the user reads the range again, a new "phantom" row appears. The InnoDB and Falcon storage engines solve this problem through the multiversion concurrency control (MVCC) mechanism. This level solves the non-repeatable read problem.

Serializable: the highest isolation level, fully serialized; one transaction can only operate after another has finished, so concurrency suffers badly. It solves phantom reads by enforcing an ordering of transactions so that they cannot conflict with each other. In short, it places a shared lock on every row that is read. At this level, large numbers of timeouts and lock contention can occur.

Isolation level   | Dirty read | Non-repeatable read                          | Phantom read
Read Uncommitted  | Y          | Y                                            | Y
Read Committed    | N          | Y (can be solved by pessimistic locking)     | Y
Repeatable Read   | N          | N                                            | Y
Serializable      | N          | N                                            | N

Problems that can occur with concurrent transactions: dirty reads, non-repeatable reads, phantom reads.

Dirty read: a transaction updates a piece of data and another transaction reads that same data before it is committed; if the first transaction then rolls back, the data read by the second transaction is incorrect.

Non-repeatable read: the same query within one transaction returns different data on two executions, typically because another transaction updated the original data between the two reads.

Phantom read: the number of rows returned by the same query within one transaction differs between two executions. For example, one transaction queries a set of rows while another transaction inserts new rows matching the query; in its next query, the first transaction finds rows that did not exist before.

Set the JDBC transaction isolation level (note that most databases do not support all isolation levels):

1  java.sql.Connection.TRANSACTION_READ_COMMITTED

2  java.sql.Connection.TRANSACTION_READ_UNCOMMITTED

4  java.sql.Connection.TRANSACTION_REPEATABLE_READ

8  java.sql.Connection.TRANSACTION_SERIALIZABLE

Generally we set the level to 1 (read committed) and then use application logic to avoid the "non-repeatable read" problem.

If we do not set the database isolation level in Hibernate, the default depends on the database, so it is best to set it explicitly.

A banking system needs to set the database isolation level to "repeatable read".

1. Pessimistic lock
Pessimistic lock: exclusive (after I lock the data, others cannot see it).

Pessimistic locking is generally implemented by the database mechanism: SELECT ... FOR UPDATE

1. Implementation of pessimistic locking:
it usually relies on the database mechanism. While the data is being modified it is locked, and no other user can read or modify it (for example: others can only modify it after I have finished modifying it).

2. Applicable scenarios for pessimistic locks:
pessimistic locks are generally suitable for short transactions (such as fetching a value, adding 1 to it, and releasing the lock immediately).

Long transactions hold the data too long (if one occupies the data for an hour, no one else can use it for that hour), so pessimistic locking is not commonly used for them.
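The fetch-increment-release pattern described above could look like this (table and column names are illustrative, not from the original text):

```sql
SELECT counter FROM my_counters WHERE id = 1 FOR UPDATE; -- lock the row
UPDATE my_counters SET counter = counter + 1 WHERE id = 1;
COMMIT;                                                   -- release the lock promptly
```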

3. Example:
User 1 and user 2 read the same account balance of 1000 at the same time. User 2 subtracts 200 first, so the database now holds 800. User 1 then also subtracts 200, but based on the value 1000 read earlier, so user 1 writes 1000 - 200 = 800 back, overwriting the 800 that already reflects user 2's update. Logically the result should be 800 - 200 = 600, but the database now holds 800: an update has been lost. Two methods can handle this situation: pessimistic locking and optimistic locking. With a pessimistic lock, after user 1 reads the data it is locked, so user 2 cannot read it until user 1 releases the lock; likewise the data read by user 2 is locked. This solves the lost-update problem.

2. Optimistic lock
Optimistic lock: not a real lock, but a conflict detection mechanism, as used for example by Hibernate.

Optimistic locking offers better concurrency, because others can still read the data while it is being modified.

Implementation of optimistic locking: the common approach is a version field. Each table row carries a version column; after a user updates the row, the version number is incremented by 1. When a user updates, the current version number in the database is compared with the version number read earlier; if they are inconsistent (the value read is less than the current database version), the update is rejected.
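A minimal sketch of the version-column technique (the table `accounts` and its columns are illustrative, not from the original text):

```sql
-- Read the row together with its current version.
SELECT balance, version FROM accounts WHERE id = 1;
-- Suppose this returned balance = 1000, version = 5.

-- Update only if nobody changed the row in the meantime;
-- the WHERE clause matches 0 rows if the version has moved on.
UPDATE accounts
   SET balance = 800, version = version + 1
 WHERE id = 1 AND version = 5;
```

If the UPDATE reports zero affected rows, the application re-reads the row and retries or reports a conflict.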

7.5.4.1 Concepts related to database locks
To guarantee correctness when concurrent users access the same database object (that is, no lost updates, repeatable reads, and no reading of "dirty" data), databases introduce a locking mechanism. There are two basic kinds of locks: exclusive locks (written as X locks) and shared locks (written as S locks). Locking is a very important technique for database concurrency control. Before a transaction operates on a data object, it first asks the system to lock it; once locked, the transaction has a certain degree of control over the object, and other transactions cannot update it until the lock is released.

Exclusive lock: if transaction T holds an X lock on data item D, no other transaction can place any kind of lock on D until T releases the X lock. Data is generally X-locked before being modified, so an exclusive lock is also called a write lock.

Shared lock: if transaction T holds an S lock on data item D, other transactions can also place S locks on D, but cannot place an X lock until T releases its S lock. Data is generally S-locked before being read, so a shared lock is also called a read lock.

7.5.4.1.1 Pessimistic blocking
The lock takes effect before the user modifies the data:
SELECT ... FOR UPDATE [NOWAIT]
SELECT * FROM tab1 FOR UPDATE
After the user issues this command, Oracle establishes row-level locks on the rows in the returned result set, to prevent modification by other users.
If other users then perform DML or DDL operations on the rows of the result set, an error message is returned or the operation blocks:
1: UPDATE or DELETE operations on the returned result set will block.
2: A DDL operation on the table will report: ORA-00054: resource busy and acquire with NOWAIT specified.

Cause analysis
Oracle has placed exclusive row-level locks on the returned result set, so any other modification or deletion of these rows must wait for the lock to be released; the visible symptom is that those operations block until the locking transaction commits or rolls back.
The same query's transaction also places a table-level lock on the table, disallowing any DDL on it; otherwise the error ORA-00054: resource busy and acquire with NOWAIT specified is raised.

7.5.4.1.2      Optimistic blocking
Optimistic locking assumes that between the SELECT and the subsequent UPDATE and commit, the data will not be changed. The hidden danger is that the selected result set is not locked, so it may still be changed by other users. For this reason Oracle recommends pessimistic locking, which is safer. Optimistic locks are generally implemented through version control in the application, as in Hibernate.

7.5.4.1.3      Blocking
Definition:
Blocking occurs when one session holds a lock on a resource that another session is requesting. The blocked session hangs until the session holding the lock releases the resource. Four common DML statements can cause blocking:
INSERT
UPDATE
DELETE
SELECT ... FOR UPDATE


INSERT

The only situation in which an INSERT blocks is when the table has a primary key constraint. When two sessions try to insert the same key value at the same time, one of them blocks until the other commits or rolls back. If the first session commits, the blocked session receives a duplicate-key error; if it rolls back, the blocked session continues.

UPDATE and DELETE: when the rows being updated or deleted are already locked by another session, the operation blocks until the other session commits or rolls back.

SELECT ... FOR UPDATE

When a user issues SELECT ... FOR UPDATE intending to modify the returned result set, blocking occurs if the result set is already locked by another session; execution can only continue after the other session ends. Blocking can be avoided by issuing SELECT ... FOR UPDATE NOWAIT instead: if the resource is already locked by another session, the following error is returned: ORA-00054: resource busy and acquire with NOWAIT specified.

7.5.4.1.4      Deadlock
Definition: a deadlock occurs when two users each want a resource held by the other.
That is, when two users wait for each other to release resources, Oracle detects a deadlock; in this case one user is sacrificed so that the other can continue, and the sacrificed user's transaction is rolled back.
Example:
1: User 1 updates table A without committing.
2: User 2 updates table B without committing.
At this point the two do not share any resources.
3: If user 2 now updates table A, it blocks and must wait for user 1's transaction to end.
4: If user 1 then updates table B, a deadlock arises. Oracle chooses one of the users to roll back so that the other can continue.
Cause:
Deadlocks are actually rare in Oracle; when they happen, they are almost always caused by incorrect application design, and after adjustment they can essentially be avoided.
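The four steps above can be sketched as two concurrent sessions (tables A and B and the rows are illustrative, not from the original text):

```sql
-- Session 1                           -- Session 2
UPDATE A SET c1 = 1 WHERE id = 1;
                                       UPDATE B SET c1 = 1 WHERE id = 1;
                                       UPDATE A SET c1 = 2 WHERE id = 1; -- blocks, waits for session 1
UPDATE B SET c1 = 2 WHERE id = 1;      -- deadlock detected: one session is rolled back
```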

7.6 Access control statements
7.6.1     ALTER SAML PROVIDER
Syntax:

ALTER SAML PROVIDER <saml_provider_name> WITH SUBJECT <subject_name> ISSUER <issuer_distinguished_name>

Syntax elements:

<subject_name> ::= <string_literal>

<issuer_distinguished_name> ::= <string_literal>

Description:

The ALTER SAML PROVIDER statement modifies the properties of a SAML provider known to the SAP HANA database.

<saml_provider_name> must be an existing SAML provider. Only database users with the system privilege USER ADMIN are allowed to modify SAML providers.

<subject_name> and <issuer_distinguished_name> are the corresponding names in the certificate of the SAML identity provider.

System and monitoring views:

SAML_PROVIDERS: displays the subject name and issuer name of all SAML providers.
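As an illustrative sketch (the provider name and distinguished names are hypothetical):

```sql
ALTER SAML PROVIDER our_provider
  WITH SUBJECT 'CN = example.corp, OU = Net, O = Example, C = EN'
       ISSUER  'CN = ExampleCA, OU = Net, O = Example, C = EN';
```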

7.6.2     ALTER USER
Syntax:

ALTER USER <user_name> <alter_user_option>

Syntax elements:

<alter_user_option> ::= PASSWORD <password> [<user_parameter_option>]

| <user_parameter_option>

| IDENTIFIED EXTERNALLY AS <external_identity> [<user_parameter_option>]

| RESET CONNECT ATTEMPTS

| DROP CONNECT ATTEMPTS

| DISABLE PASSWORD LIFETIME

| FORCE PASSWORD CHANGE

| DEACTIVATE [USER NOW]

| ACTIVATE [USER NOW]

| DISABLE <authentication_mechanism>

| ENABLE <authentication_mechanism>

| ADD IDENTITY <provider_identity>...

| ADD IDENTITY <external_identity> FOR KERBEROS

| DROP IDENTITY <provider_info>...

| DROP IDENTITY FOR KERBEROS

| <string_literal>

<authentication_mechanism> ::= PASSWORD | KERBEROS | SAML

<provider_identity> ::= <mapped_user_name> FOR SAML PROVIDER <saml_provider_name> | <external_identity> FOR KERBEROS

<mapped_user_name> ::= ANY | <string_literal>

<saml_provider_name> ::= <simple_identifier>

<provider_info> ::= FOR SAML PROVIDER <saml_provider_name>

<password> ::= <letter_or_digit>...

<user_parameter_option> ::= <set_user_parameters> [<clear_user_parameter_option>] | <clear_user_parameter_option>

<set_user_parameters> ::= SET PARAMETER CLIENT = <string_literal>

<clear_user_parameter_option> ::= CLEAR PARAMETER CLIENT | CLEAR ALL PARAMETERS

<external_identity> ::= <simple_identifier>

Description:

The ALTER USER statement modifies a database user. <user_name> must specify an existing database user.

Each user can execute ALTER USER for himself, but not every <alter_user_option> can be specified by users for themselves. To execute ALTER USER with any <alter_user_option> for other users, the system privilege USER ADMIN is required.

A user created with PASSWORD cannot be changed to IDENTIFIED EXTERNALLY and vice versa, but the <password> or <external_identity> itself can be modified.

You can use this command to change a user's password. Password changes must follow the rules defined for the current database, including minimum password length and which of the defined character types (uppercase, lowercase, digits, special characters) must be part of the password. Depending on the policy defined for the database instance, users must change their password periodically, and a user connecting to the database instance for the first time may have to change the password first.

You can change the external authentication. External users are authenticated using an external system, for example a Kerberos system. These users have no password, but a Kerberos principal name instead. For detailed information about external identities, contact your domain administrator.

<user_parameter_option> can be used to set, modify or clear the user parameter CLIENT.

<set_user_parameters> is used to set user parameters CLIENT for users in the database.

When using reports, the user parameter CLIENT can be used to limit user <user_name>'s access to information about a specific client.

<user_parameter_option> cannot be specified by the user.

If the number of failed attempts defined by the parameter MAXIMUM_INVALID_CONNECT_ATTEMPTS (see monitoring view M_PASSWORD_POLICY) is reached before a successful connection (correct user/password combination), the user is locked for several minutes before being allowed to connect again. A user with the system privilege USER ADMIN, or the user himself, can use the command ALTER USER <user_name> RESET CONNECT ATTEMPTS to remove the information about invalid connection attempts that have occurred.

Users with the system privilege USER ADMIN can use the command ALTER USER <user_name> DISABLE PASSWORD LIFETIME to exclude user <user_name> from all password lifetime checks. This should only be used for technical users, not for normal database users.

Users with the system privilege USER ADMIN can use the command ALTER USER <user_name> FORCE PASSWORD CHANGE to force user <user_name> to change the password immediately at the next connection before being able to work normally again.

A user with the system privilege USER ADMIN can use the command ALTER USER <user_name> DEACTIVATE USER NOW to deactivate/lock the account of user <user_name>. Afterwards, the user can no longer connect to the SAP HANA database. To reactivate/unlock user <user_name>, a user with the system privilege USER ADMIN uses the command ALTER USER <user_name> ACTIVATE USER NOW, or, if the user uses the PASSWORD authentication mechanism, resets the user's password with ALTER USER <user_name> PASSWORD <password>.

A user with system authority USER ADMIN can use the command ALTER USER <user_name> ACTIVATE USER NOW to reactivate/unlock the previously closed account of user <user_name>.

Configuration parameters:

For password configuration parameters, see the monitoring view M_PASSWORD_POLICY. These parameters are stored in indexserver.ini, in the 'password policy' section. Related parameter descriptions can be found in the SAP HANA Security Guide, appendix "Password Policy Parameters".

System and Monitoring Views:

USERS: Displays information about all users, who created them, when they were created, and their current status.

USER_PARAMETERS: Display the defined user_parameters, currently only CLIENT is provided.

INVALID_CONNECT_ATTEMPTS: Displays the number of invalid connection attempts per user.

LAST_USED_PASSWORDS: Shows the user's last password modification date.

M_PASSWORD_POLICY: Displays configuration parameters describing the allowed styles of passwords and their lifetimes.

Example:

A user with user name NEW_USER is created, who can connect to the database with the given password and with assertions from the existing SAML provider OUR_PROVIDER. Since the assertion will provide the database user name, <mapped_user_name> is set to ANY. This is done with the following statement:

CREATE USER new_user PASSWORD Password1 WITH IDENTITY ANY FOR SAML PROVIDER OUR_PROVIDER;

Now this user is forced to change the password, and the user is forbidden to use SAML.

ALTER USER new_user FORCE PASSWORD CHANGE;

ALTER USER new_user DISABLE SAML;

Assuming the user has tried a wrong password too often, an administrator resets the number of invalid connection attempts to zero.

ALTER USER new_user RESET CONNECT ATTEMPTS;

User new_user should be allowed to authenticate using the KERBEROS mechanism, so an external identity for this connection has to be defined.

ALTER USER new_user ADD IDENTITY 'testkerberosName' FOR KERBEROS;

ALTER USER new_user ENABLE KERBEROS;

On the other hand, user new_user gives up the possibility of using assertions from the SAML provider OUR_PROVIDER.

ALTER USER new_user DROP IDENTITY FOR SAML PROVIDER OUR_PROVIDER;

Finally, the administrator wants to forbid all connections of user new_user because of suspicious actions the user performed recently.

ALTER USER new_user DEACTIVATE;

7.6.3     CREATE ROLE
Syntax:

CREATE ROLE <role_name>

Syntax elements:

<role_name> ::= <identifier>

Description:

The CREATE ROLE statement creates a new role.

Only users with the system privilege ROLE ADMIN are allowed to create roles.

The specified role name must not be identical to the name of an existing user or role.

A role is a named collection of privileges that can be granted to a user or to another role. If you want to allow several database users to perform the same actions, you can create a role, grant the needed privileges to that role, and grant the role to the different database users.

Every user is allowed to grant privileges to an existing role, but only users with the system privilege ROLE ADMIN are allowed to grant roles to roles and users.

The SAP HANA database provides four roles:

PUBLIC: every database user is granted this role by default. It includes read-only access to the system views and monitoring views and execute privileges on some stored procedures. These privileges can be revoked, and further privileges can be granted to this role and revoked again later.

MODELING: this role contains the privileges required for using the information modeler of SAP HANA Studio.

CONTENT_ADMIN: this role contains the same privileges as the MODELING role, with the extension that holders of this role are allowed to grant these privileges to other users. In addition, it contains the repository privileges needed to work with imported objects.

MONITORING: this role contains read-only access to all metadata, the current system status, the monitoring views and the statistics of the server.

System and monitoring views:

ROLES: displays all roles, their creators and creation times.

GRANTED_ROLES: displays the roles granted to each user or role.

GRANTED_PRIVILEGES: displays the privileges granted to each user or role.

Example:

Create a role named role_for_work_on_my_schema.

CREATE ROLE role_for_work_on_my_schema;

7.6.4     CREATE SAML PROVIDER
Syntax:

CREATE SAML PROVIDER <saml_provider_name> WITH SUBJECT <subject_distinguished_name> ISSUER <issuer_distinguished_name>

Description:

The CREATE SAML PROVIDER statement defines a SAML provider known to the SAP HANA database. <saml_provider_name> must be different from any existing SAML provider.

Only users with the system privilege USER ADMIN are allowed to create SAML providers, and every user with that privilege is allowed to drop any SAML provider.

An existing SAML provider is needed to be able to specify SAML connections for users. <subject_distinguished_name> and <issuer_distinguished_name> are the X.500 distinguished names of the subject and issuer of the X.509 certificate used by the SAML provider. The syntax of these names can be found in ISO/IEC 9594-1.

Detailed information about SAML concepts can be found in Oasis SAML 2.0.

System and monitoring views:

SAML_PROVIDERS: displays the subject and issuer names of all SAML providers.

Example:

Create a SAML provider named gm_saml_provider, with the subject and issuer belonging to the company:

CREATE SAML PROVIDER gm_saml_provider WITH SUBJECT 'CN = wiki.detroit.generalmotors.corp,OU = GMNet,O = GeneralMotors,C = EN'

ISSUER 'E = [email protected],CN = GMNetCA,OU = GMNet,O = GeneralMotors,C = EN';

7.6.5     CREATE USER
Syntax:

CREATE USER <user_name> [PASSWORD <password>] [IDENTIFIED EXTERNALLY AS <external_identity>] [WITH IDENTITY <provider_identity>...] [<set_user_parameters>]

Syntax elements:

<external_identity> ::= <simple_identifier> | <string_literal>

<provider_identity> ::= <mapped_user_name> FOR SAML PROVIDER <saml_provider_name> | <external_identity> FOR KERBEROS

<mapped_user_name> ::= ANY | <string_literal>

<saml_provider_name> ::= <simple_identifier>

<set_user_parameters> ::= SET PARAMETER CLIENT = <string_literal>

Description:

CREATE USER creates a new database user.

Only users with the system privilege USER ADMIN are allowed to create another database user.

The specified user name must not be identical to the name of an existing user, role or schema.

The users provided by the SAP HANA database are: SYS, SYSTEM, _SYS_REPO and _SYS_STATISTICS.

Users in the database can be authenticated by different mechanisms: internally by the password mechanism, or externally by mechanisms such as Kerberos or SAML. A user can be authenticated by more than one mechanism at a time, but only one password and one external identity can be valid at any one time. In contrast, more than one <provider_identity> can exist for a user at the same time. At least one authentication mechanism has to be specified to allow the user to connect to and work with the database instance.

For compatibility reasons, the syntax IDENTIFIED EXTERNALLY AS <external_identity> as well as <external_identity> FOR KERBEROS remain in use.

The password must follow the rules defined for the current database, including minimum password length and which of the defined character types (uppercase, lowercase, digits, special characters) must be part of the password. Depending on the policy defined for the database instance, users must change their password periodically. The password provided during execution of the CREATE USER command is converted to uppercase, as <user_name> is, like every <simple_identifier>.

External users are authenticated using an external system, for example a Kerberos system. These users have no password, but a Kerberos principal name instead. For detailed information about external identities, contact your domain administrator.

If ANY is used as the mapped user name, the SAML assertion has to contain the name of the database user for which the assertion is valid. <saml_provider_name> must specify an existing SAML provider.

<set_user_parameters> can be used to set the user parameter CLIENT for the user in the database.

When reporting is used, the user parameter CLIENT can be used to restrict the access of user <user_name> to information belonging to a specific client.

<user_parameter_option> cannot be specified by the user himself.
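For illustration, a user with a password and the CLIENT parameter could be created like this (the user name and client value are hypothetical):

```sql
CREATE USER report_user PASSWORD Password1 SET PARAMETER CLIENT = '001';
```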

For each database user, a schema with the user's name is created. It cannot be dropped explicitly; when the user is dropped, the schema is dropped as well. The database user owns this schema and uses it as the default schema whenever a schema name is not specified explicitly.

Configuration parameters:

Password-related configuration parameters can be viewed in the monitoring view M_PASSWORD_POLICY. These parameters are stored in the 'password policy' section of indexserver.ini. Related parameter descriptions can be found in SAP HANA Security Guide, Appendix, Password Policy Parameters.

System and Monitoring Views:

USERS: Displays information about all users, who created them, when they were created, and their current status.

USER_PARAMETERS: Display the defined user_parameters, currently only CLIENT is provided.

INVALID_CONNECT_ATTEMPTS: Displays the number of invalid connection attempts per user.

LAST_USED_PASSWORDS: Shows the user's last password modification date.

M_PASSWORD_POLICY: Displays configuration parameters describing the allowed styles of passwords and their lifetimes.

SAML_PROVIDERS: displays the existing SAML providers.

SAML_USER_MAPPING: Displays the mapped username for each SAML provider.

Example:

A user with user name NEW_USER is created, who can connect to the database with the given password and with assertions from the existing SAML provider OUR_PROVIDER. Since the assertion will provide the database user name, <mapped_user_name> is set to ANY. This is done with the following statement:

CREATE USER new_user PASSWORD Password1 WITH IDENTITY ANY FOR SAML PROVIDER OUR_PROVIDER;

7.6.6 DROP ROLE
Syntax:

DROP ROLE <role_name>

Description:

The DROP ROLE statement drops a role. <role_name> must specify an existing role.

Only users with the system privilege ROLE ADMIN are allowed to drop roles, and any user with that privilege can drop any role.

The roles provided by SAP HANA cannot be dropped: PUBLIC, CONTENT_ADMIN, MODELING and MONITORING.

If a role was granted to users or roles, it is revoked from them when the role is dropped. Revoking the role can make some views inaccessible or stop some stored procedures from working; this happens if a view or stored procedure depends on any privilege contained in the role.

System and monitoring views:

ROLES: displays all roles, their creators and creation times.

GRANTED_ROLES: displays the roles granted to each user or role.

GRANTED_PRIVILEGES: displays the privileges granted to each user or role.

Example:

Create a role named role_for_work_on_my_schema and drop it again immediately afterwards.

CREATE ROLE role_for_work_on_my_schema;

DROP ROLE role_for_work_on_my_schema;

7.6.7     DROP SAML PROVIDER
Syntax:

DROP SAML PROVIDER <saml_provider_name>

Description:

The DROP SAML PROVIDER statement drops the specified SAML provider. <saml_provider_name> must be an existing SAML provider. A SAML provider that is in use by any SAP HANA user cannot be dropped.

Only users with the system privilege USER ADMIN are allowed to drop SAML providers.

System and monitoring views:

SAML_PROVIDERS: displays the subject and issuer names of all SAML providers.
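As an illustrative sketch, the provider created in the CREATE SAML PROVIDER example above could be dropped like this:

```sql
DROP SAML PROVIDER gm_saml_provider;
```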

7.6.8     DROP USER
Syntax:

DROP USER <user_name> [<drop_option>]

Syntax elements:

<drop_option> ::= CASCADE | RESTRICT

Default = RESTRICT

Description:

The DROP USER statement drops a database user. <user_name> must specify an existing database user.

Only users with the system privilege USER ADMIN are allowed to drop users, and a user with that privilege can drop any user. The users provided by the SAP HANA database cannot be dropped: SYS, SYSTEM, _SYS_REPO and _SYS_STATISTICS.

If <drop_option> RESTRICT is specified explicitly or implicitly, the user cannot be dropped if he owns schemas besides his own user schema, or if objects created by other users are stored in his schemas.

If <drop_option> CASCADE is specified, the schema with the user's name and the schemas owned by that user are dropped, together with all objects stored in them (even objects created by other users). Objects owned by the user, even if they are part of other schemas, are dropped. Objects that depend on the dropped objects are dropped as well, as are public synonyms owned by the dropped user.

Privileges on the dropped objects are revoked, and privileges granted to the dropped user are revoked too. Revoking these privileges can lead to further revocations if the privileges had been granted onwards.

Users created by the dropped user and roles created by them are not dropped. Audit policies created by the dropped user are not dropped either.

A user can still be dropped while he has an open session.

System and monitoring views:

The dropped user is removed from the following views:

USERS: displays information about all users, their creators, creation times and current status.

USER_PARAMETERS: displays the defined user parameters; currently only CLIENT is provided.

INVALID_CONNECT_ATTEMPTS: displays the number of invalid connection attempts per user.

LAST_USED_PASSWORDS: shows the date of the user's last password change.

M_PASSWORD_POLICY: displays configuration parameters describing the allowed password layout and password lifetimes.

Dropping the user's objects can affect all system views describing objects, for example TABLES, VIEWS, PROCEDURES, ...

Dropping the objects can also affect the views describing privileges, for example GRANTED_PRIVILEGES, and all monitoring views, for example M_RS_TABLES, M_TABLE_LOCATIONS, ...

Example:

A user named NEW_USER is created with this statement:

CREATE USER new_user PASSWORD Password1;

The existing user new_user is dropped, together with all of his objects:

DROP USER new_user CASCADE;

7.6.9     GRANT
Syntax:

GRANT <system_privilege>,... TO <grantee> [WITH ADMIN OPTION]
| GRANT <schema_privilege>,... ON SCHEMA <schema_name> TO <grantee> [WITH GRANT OPTION]
| GRANT <object_privilege>,... ON <object_name> TO <grantee> [WITH GRANT OPTION]
| GRANT <role_name>,... TO <grantee> [WITH ADMIN OPTION]
| GRANT STRUCTURED PRIVILEGE <privilege_name> TO <grantee>

Syntax elements:

<system_privilege> ::= AUDIT ADMIN | BACKUP ADMIN | CATALOG READ | CREATE SCENARIO | CREATE SCHEMA | CREATE STRUCTURED PRIVILEGE | DATA ADMIN | EXPORT | IMPORT | INIFILE ADMIN | LICENSE ADMIN | LOG ADMIN | MONITOR ADMIN | OPTIMIZER ADMIN | RESOURCE ADMIN | ROLE ADMIN | SAVEPOINT ADMIN | SCENARIO ADMIN | SERVICE ADMIN | SESSION ADMIN | STRUCTUREDPRIVILEGE ADMIN | TRACE ADMIN | USER ADMIN | VERSION ADMIN | <identifier>.<identifier>

System privileges are used to restrict administrative tasks. The following system privileges are defined:

AUDIT ADMIN

This privilege controls the execution of the following audit-related commands: CREATE AUDIT POLICY, DROP AUDIT POLICY and ALTER AUDIT POLICY.

BACKUP ADMIN

This authority authorizes the ALTER SYSTEM BACKUP command to define and start a backup process or to perform a restore process.

CATALOG READ

This privilege grants unfiltered read-only access to all system and monitoring views. Normally, the content of these views is filtered according to the privileges of the accessing user; the CATALOG READ privilege gives the user read-only access to the full content of all system and monitoring views.

CREATE SCENARIO

This permission controls the creation of calculation scenarios and cubes (database calculations).

CREATE SCHEMA

This privilege controls the creation of database schemas using the CREATE SCHEMA command. Each user owns one schema; with this privilege the user is allowed to create further schemas.

CREATE STRUCTURED PRIVILEGE

This privilege authorizes the creation of structured privileges (analytical privileges). Note that only the owner of an analytical privilege can grant it further to other users or roles, and revoke it.

DATA ADMIN

This powerful privilege authorizes reading all data in the system and monitoring views as well as executing all DDL (Data Definition Language) commands in the SAP HANA database. A user with this privilege cannot select or change data stored in other users' tables, but can change the table definition or even drop the table.

EXPORT

This privilege authorizes export activity in the database via the EXPORT TABLE command. Note that besides this privilege, the user also needs the SELECT privilege on the source tables to be exported.

IMPORT

This privilege authorizes import activity in the database via the IMPORT TABLE command. Note that besides this privilege, the user also needs the INSERT privilege on the target tables to be imported.

INIFILE ADMIN

This privilege authorizes changing system settings in different ways.

LICENSE ADMIN

This privilege authorizes the SET SYSTEM LICENSE command to install a new license.

LOG ADMIN

This privilege authorizes the ALTER SYSTEM LOGGING [ON|OFF] command to enable or disable the log flush mechanism.

MONITOR ADMIN

This privilege authorizes the ALTER SYSTEM commands concerning EVENTs.

OPTIMIZER ADMIN

This privilege authorizes the ALTER SYSTEM commands concerning the SQL PLAN CACHE and the ALTER SYSTEM UPDATE STATISTICS command, which influence the behavior of the query optimizer.

RESOURCE ADMIN

This privilege authorizes the commands concerning system resources, for example ALTER SYSTEM RECLAIM DATAVOLUME and ALTER SYSTEM RESET MONITORING VIEW, and it also authorizes many commands available in the Management Console.

ROLE ADMIN

This privilege authorizes the creation and deletion of roles using the CREATE ROLE and DROP ROLE commands. It also authorizes granting and revoking roles using the GRANT and REVOKE commands.

SAVEPOINT ADMIN

This authority authorizes the execution of the savepoint process using the ALTER SYSTEM SAVEPOINT command.

SCENARIO ADMIN

This privilege authorizes all activities related to calculation scenarios, including creating new ones.

SERVICE ADMIN

This authority authorizes the ALTER SYSTEM [START|CANCEL|RECONFIGURE] command to manage system services in the database.

SESSION ADMIN

This authority authorizes session-related ALTER SYSTEM commands to stop or reconnect user sessions or modify session parameters.

STRUCTUREDPRIVILEGE ADMIN

This permission authorizes the creation, reactivation, and deletion of structured permissions.

TRACE ADMIN

This authority authorizes the operation of the database trace file ALTER SYSTEM [CLEAR|REMOVE] TRACES command.

USER ADMIN

This privilege authorizes the creation and modification of users using the CREATE USER, ALTER USER, and DROP USER commands.

VERSION ADMIN

This authority authorizes the Multiversion Concurrency Control (MVCC) ALTER SYSTEM RECLAIM VERSION SPACE command.

<identifier>.<identifier>

SAP HANA database components can create the privileges they need. These privileges use the component name as the first identifier of the system privilege and the component privilege name as the second identifier. Currently the repository uses this feature; see the repository documentation for the privileges named REPO.<identifier>.

<schema_privilege> ::= CREATE ANY | DEBUG | DELETE | DROP | EXECUTE | INDEX | INSERT | SELECT | TRIGGER | UPDATE

Schema privileges are used to restrict access to and modification of a schema and the objects stored in it. Schema privileges are defined as follows:

CREATE ANY

This privilege allows the user to create various objects in the database, especially tables, views, sequences, synonyms, SQL scripts, or stored procedures.

DELETE, DROP, EXECUTE, INDEX, INSERT, SELECT, UPDATE

The specified privileges are granted on every object currently stored in the schema and on future objects. For details of these privileges, see the description of object privileges below and check which object types each privilege applies to.
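A hedged sketch of granting schema privileges (the schema and user names are hypothetical):

```sql
GRANT SELECT, INSERT ON SCHEMA my_schema TO worker_user;
```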

<object_privilege> ::= ALL PRIVILEGES | ALTER | DEBUG | DELETE | DROP | EXECUTE | INDEX | INSERT | SELECT | TRIGGER | UPDATE | <identifier>.<identifier>

Object privileges are used to restrict access to and modification of database objects such as tables, views, sequences or stored procedures. Not all of these privileges apply to all kinds of database objects; see the table below for which privileges are allowed for which object types.

Object privileges are defined as follows:

ALL PRIVILEGES

This privilege is a combination of all DDL (Data Definition Language) and DML (Data Manipulation Language) privileges that, on the one hand, the grantor currently holds and is allowed to grant further and that, on the other hand, can be granted on the particular object. The combination is evaluated dynamically for the given grantor and object. ALL PRIVILEGES applies to tables and views.

ALTER

This DDL privilege authorizes the ALTER command on the object.

DEBUG

This DML privilege authorizes debugging of stored procedures or calculation views.

DELETE

This DML privilege authorizes the DELETE and TRUNCATE commands on the object.

DROP

This DDL privilege authorizes the DROP command on the object.

EXECUTE

This DML privilege authorizes the execution of SQLScript functions or stored procedures using the CALLS or CALL command.

INDEX

This DDL privilege authorizes the creation, modification, or dropping of indexes on the object.

INSERT

This DML privilege authorizes the INSERT command on the object. The INSERT and UPDATE privileges together allow the REPLACE and UPSERT commands on the object.
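As an illustration (a sketch, assuming the table myschema.work_done from the examples below and a grantee holding both INSERT and UPDATE on it):

```sql
-- UPSERT updates the matching rows, or inserts a new row if none match;
-- it therefore requires both the INSERT and the UPDATE privilege.
UPSERT myschema.work_done (t, user, work_done)
VALUES (CURRENT_TIMESTAMP, 'worker', 'cleanup')
WHERE work_done = 'cleanup';
```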

SELECT

This DML privilege authorizes the SELECT command on the object or sequence.

TRIGGER

This DDL privilege authorizes the CREATE TRIGGER and DROP TRIGGER commands for the specified table, or for the tables in the specified schema.

UPDATE

This DML privilege authorizes the UPDATE command on the object. The INSERT and UPDATE privileges together allow the REPLACE and UPSERT commands on the object.

<identifier>.<identifier>

SAP HANA database components can create the privileges they need. These privileges use the component name as the first identifier of the privilege and the component privilege name as the second identifier. Currently, component libraries use this feature. See the repository documentation for privileges named REPO.<identifier>.

DELETE, INSERT, and UPDATE on views are only applicable to updatable views, that is, views that satisfy certain restrictions: no joins, no UNION, no aggregation, and some further restrictions.

DEBUG applies only to calculation views, not to other types of views.

The same restrictions apply to synonyms, according to the objects the synonyms represent.

<object_name> ::= <table_name> | <view_name> | <sequence_name> | <procedure_name> | <synonym_name>

Object permissions are used to restrict users from accessing and modifying database objects such as tables, views, sequences, stored procedures, and synonyms.

<grantee> ::= <user_name> | <role_name>

The grantee can be a user or a role. If a privilege or role is granted to a role, all users to whom that role is granted receive the specified privilege or role.

A role is a named collection of privileges that can be granted to a user or role.

If you want to allow multiple database users to perform the same operation, you can create a role, grant the required privileges to that role, and grant the role to different database users.

When roles are granted to roles, a role tree is built. When a role (R) is granted to another role or user (G), G obtains all the privileges and roles directly granted to R.
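The role tree can be sketched as follows (a hypothetical example, assuming a schema myschema and a user worker exist, as in the example below):

```sql
CREATE ROLE junior_role;
CREATE ROLE senior_role;
GRANT SELECT ON SCHEMA myschema TO junior_role;
-- Build the role tree: junior_role's privileges flow into senior_role
GRANT junior_role TO senior_role;
-- worker now holds every privilege reachable through senior_role,
-- including SELECT on schema myschema
GRANT senior_role TO worker;
```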

Description:

GRANT is used to grant privileges and structured privileges to users and roles, and to grant roles to users and other roles.

The specified users, roles, objects, and structured permissions must already exist before using the GRANT command.

Privileges can only be granted by users who hold the privilege themselves and are allowed to grant it further. Any user with the ROLE ADMIN privilege may grant roles to other roles and users.

Users cannot grant permissions to themselves.

The SYSTEM user has at least one system privilege and the PUBLIC role. All other users also have the PUBLIC role. These privileges and roles cannot be revoked from them.

Although the SYSTEM user has many privileges, it cannot select from or modify other users' tables unless it has been explicitly authorized to do so.

The SYSTEM user has the privilege to create objects in its own default schema, which has the same name as the user itself.

Users have all privileges on the tables they create and can grant those privileges to other users and roles.

For dependent objects, such as views based on tables, it can happen that a user who lacks privileges on the underlying object also lacks privileges on the dependent object, or that the user has the privilege but is not allowed to grant it further. Such a user cannot grant these privileges.

WITH ADMIN OPTION and WITH GRANT OPTION specify that the granted privileges or roles can be granted further by the grantee.
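A minimal sketch of the two options (assuming a user worker and the table myschema.work_done from the example below; CATALOG READ is used here only as an example of a system privilege):

```sql
-- Object and schema privileges are passed on with WITH GRANT OPTION ...
GRANT SELECT ON myschema.work_done TO worker WITH GRANT OPTION;
-- ... while system privileges and roles use WITH ADMIN OPTION
GRANT CATALOG READ TO worker WITH ADMIN OPTION;
```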

Using GRANT STRUCTURED PRIVILEGE <structured_privilege_name>, a previously defined analytic privilege (based on generic structured privileges) is granted to a user or role. Analytic privileges are used to restrict read access to data in analytic views, attribute views, and calculation views by filtering on attribute values.

System and Monitoring Views:

USERS: Displays information about all users, who created them, when they were created, and their current status.

ROLES: Displays all roles, who created them, and when they were created.

GRANTED_ROLES: Displays the roles granted to each user or role.

GRANTED_PRIVILEGES: Displays the privileges granted to each user or role.
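These views can be queried like ordinary tables. For example, a sketch that lists what the user WORKER holds (user names are stored in upper case; check the view definition on your system for the exact column layout):

```sql
SELECT grantee, object_type, schema_name, object_name, privilege, is_grantable
FROM granted_privileges
WHERE grantee = 'WORKER';
```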

Example:

Assume a user has been created with the privileges to create schemas, roles, and users. He creates a new schema:

CREATE SCHEMA myschema;

In addition, he creates a new table named work_done in this schema:

CREATE TABLE myschema.work_done (t TIMESTAMP, user NVARCHAR(256), work_done VARCHAR(256));

He creates a new user named worker, who can connect to the database with the given password, and a role named role_for_work_on_my_schema:

CREATE USER worker PASSWORD His_Password_1;

CREATE ROLE role_for_work_on_my_schema;

He grants the SELECT privilege on all objects in his schema to the role_for_work_on_my_schema role:

GRANT SELECT ON SCHEMA myschema TO role_for_work_on_my_schema;

Additionally, the user grants the INSERT privilege on the table work_done to the role_for_work_on_my_schema role:

GRANT INSERT ON myschema.work_done TO role_for_work_on_my_schema;

Next, he grants the role to the new user:

GRANT role_for_work_on_my_schema TO worker;

In addition, the worker user is directly granted the DELETE privilege on the table, with the option to grant this privilege further:

GRANT DELETE ON myschema.work_done TO worker WITH GRANT OPTION;

Now, the user grants permission to create objects of any type to the worker user:

GRANT CREATE ANY ON SCHEMA myschema TO worker;

As a result, the worker user has the SELECT privilege on all tables and views in the schema myschema, the INSERT and DELETE privileges on the table myschema.work_done, and the privilege to create objects in the schema myschema. In addition, the user is allowed to grant the DELETE privilege on table myschema.work_done to other users and roles.

In the second example, a user with the appropriate privileges, including permission to grant them further, grants the system privileges INIFILE ADMIN and TRACE ADMIN to the existing user worker. He allows worker to grant these privileges further.

GRANT INIFILE ADMIN, TRACE ADMIN TO worker WITH ADMIN OPTION;

7.6.10 REVOKE
Syntax:

REVOKE <system_privilege>,... FROM <grantee> | REVOKE <schema_privilege>,... ON SCHEMA <schema_name> FROM <grantee> | REVOKE <object_privilege>,... ON <object_name> FROM <grantee> | REVOKE <role_name>,... FROM <grantee> | REVOKE STRUCTURED PRIVILEGE <privilege_name> FROM <grantee>

Syntax elements:

For definitions of the syntax elements, see GRANT.

Description:

The REVOKE statement revokes the specified roles or structured privileges, or revokes the specified privileges on the specified objects, from the specified users or roles.

Only the user who granted a privilege can revoke it. The same applies to users with ROLE ADMIN revoking roles.

The SYSTEM user has at least one system privilege and the PUBLIC role. All other users also have the PUBLIC role. These privileges and roles cannot be revoked from them.

If a user has been granted a role, it is not possible to revoke only some of the privileges contained in that role. In that case the whole role must be revoked, and the privileges the user still needs must be granted individually.

If a role granted to a user or role is dropped, the grant is revoked automatically. Revoking a role can cause views to become inaccessible or stored procedures to stop working, which happens if the view or stored procedure depends on privileges contained in the role.

Revoking a privilege that was granted WITH GRANT OPTION or WITH ADMIN OPTION revokes it not only from the specified user, but also from all users and roles to whom that user granted it, directly or indirectly.

Since a privilege can be granted to a user or role by several different users, revoking it from one grantor does not necessarily mean the grantee loses it. See GRANT for details on the syntax elements.

System and Monitoring Views:

USERS: Displays information about all users, who created them, when they were created, and their current status.

ROLES: Displays all roles, who created them, and when they were created.

GRANTED_ROLES: Displays the roles granted to each user or role.

GRANTED_PRIVILEGES: Displays the privileges granted to each user or role.

Example:

Suppose the user has executed the following statement:

CREATE USER worker PASSWORD His_Password_1;

CREATE ROLE role_for_work_on_my_schema;

CREATE TABLE myschema.work_done (t TIMESTAMP, user NVARCHAR(256), work_done VARCHAR(256));

GRANT SELECT ON SCHEMA myschema TO role_for_work_on_my_schema;

GRANT INSERT ON myschema.work_done TO role_for_work_on_my_schema;

GRANT role_for_work_on_my_schema TO worker;

GRANT TRACE ADMIN TO worker WITH ADMIN OPTION;

GRANT DELETE ON myschema.work_done TO worker WITH GRANT OPTION;

The granting user is allowed to revoke these privileges. He revokes the SELECT privilege from the role and thereby, implicitly, from all users to whom the role was granted. The worker user also loses the TRACE ADMIN privilege; revoking it also revokes it from all users to whom worker had granted it.

REVOKE SELECT ON SCHEMA myschema FROM role_for_work_on_my_schema;

REVOKE TRACE ADMIN FROM worker;

7.7 Data import and export statement
7.7.1 EXPORT
Syntax:

EXPORT <object_name_list> AS <export_format> INTO <path> [WITH <export_option_list>]

Syntax elements:

WITH <export_option_list>:

EXPORT options can be passed in using the WITH clause.

<object_name_list> ::= <object_name>,... | ALL

<export_format> ::= BINARY | CSV

<path> ::= 'FULL_PATH'

<export_option_list> ::= <export_option> | <export_option_list> <export_option>

<export_option> ::= REPLACE | CATALOG ONLY | NO DEPENDENCIES | SCRAMBLE [BY <password>] | THREADS <number_of_threads>

Description:

The EXPORT command exports a table, view, column view, synonym, sequence, or stored procedure in the specified format BINARY or CSV. Temporary table data and "no logging" tables cannot be exported using EXPORT.

OBJECT_NAME

The SQL name of the object to export. To export all objects of all schemas, use the ALL keyword. To export the objects of a specific schema, use the schema name and an asterisk, such as "SYSTEM"."*".

BINARY

Table data will be exported in the internal BINARY format. Exporting data this way is orders of magnitude faster than in CSV format. Only column tables can be exported in binary format; row tables are always exported in CSV format, even if BINARY is specified.

CSV

Table data will be exported in CSV format. Data exported this way can be imported into other databases. Note that the order of the exported rows is not guaranteed. Both column and row tables can be exported in CSV format.

FULL_PATH

The server path to which the data will be exported.

Note: When using a distributed system, FULL_PATH must point to a shared disk. For security reasons, paths may not contain symlinks, and may not point within folders of the database instance, except for the 'backup' and 'work' subfolders. Examples of valid paths (assuming the database instance is located at /usr/sap/HDB/HDB00):

'/tmp'

'/usr/sap/HDB/HDB00/backup'

'/usr/sap/HDB/HDB00/work'

REPLACE

With the REPLACE option, any previously exported data in the target directory is deleted and replaced by the new export. If REPLACE is not specified, an error is raised if previously exported data exists in the specified directory.

CATALOG ONLY

Use the CATALOG ONLY option to export only the database catalog without data.

NO DEPENDENCIES

With the NO DEPENDENCIES option, dependent objects of an exported object will not be exported.

SCRAMBLE

When exporting in CSV format, SCRAMBLE [BY '<password>'] can be used to scramble sensitive customer data. If no password is specified, the default scramble password is used. Only string data can be scrambled. When the data is imported, it remains scrambled: it is unreadable to end users and cannot be restored to the original values.
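A sketch of an export with an explicit scramble password (the schema name, path, and password are placeholders):

```sql
-- String columns in the exported CSV files are scrambled with the given password
EXPORT "MYSCHEMA"."*" AS CSV INTO '/tmp/export' WITH REPLACE SCRAMBLE BY 'My_Password_1';
```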

THREADS

Indicates the number of threads used for parallel export.

How many threads to use:

The THREADS value specifies how many objects are exported in parallel; the default is 1. Increasing the number may reduce export time, but can also affect system performance.

The following should be considered:

- For a single table, THREADS has no effect.

- For views or stored procedures, use 2 or more threads (up to the number of objects).

- For an entire schema, consider using 10 or more threads, depending on the number of CPU cores.

- For the thousands of tables in a whole BW/ERP system (the ALL keyword), a large number of threads is reasonable (up to 256).

System and Monitoring Views:

You can monitor the progress of the export using the system view M_EXPORT_BINARY_STATUS.

You can terminate the export session from the corresponding view using the session ID in the following statement.

ALTER SYSTEM CANCEL [WORK IN] SESSION 'sessionId'

The detailed results of the export are stored in the local session temporary table #EXPORT_RESULT.
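For example, a monitoring session might check progress and inspect the results afterwards (a sketch; #EXPORT_RESULT is only visible in the session that ran the export):

```sql
-- While the export runs, from a monitoring session:
SELECT * FROM M_EXPORT_BINARY_STATUS;
-- In the exporting session, after completion:
SELECT * FROM #EXPORT_RESULT;
```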

Example:

EXPORT "SCHEMA"."*" AS CSV INTO '/tmp' WITH REPLACE SCRAMBLE THREADS 10

7.7.2 IMPORT
Syntax:

IMPORT <object_name_list> [AS <import_format>] FROM <path> [WITH <import_option_list>]

Syntax elements:

WITH <import_option_list>: IMPORT options can be passed in using the WITH clause.

<object_name_list> ::= <object_name>,... | ALL

<import_format> ::= BINARY | CSV

<path> ::= 'FULL_PATH'

<import_option_list> ::= <import_option> | <import_option_list> <import_option>

<import_option> ::=REPLACE |CATALOG ONLY |NO DEPENDENCIES |THREADS <number_of_threads>

Description:

The IMPORT command imports tables, views, column views, synonyms, sequences, or stored procedures. Temporary table data and "no logging" tables cannot be imported using IMPORT.

OBJECT_NAME

The SQL name of the object to import. To import all objects in the path, use the ALL keyword. To import the objects of a specific schema, use the schema name and an asterisk, such as "SYSTEM"."*".

BINARY | CSV

The import process may ignore the format definition, because the format is detected automatically during the import. The data is imported in the same format in which it was exported.

FULL_PATH

The server path to import from.

Note: When using a distributed system, FULL_PATH must point to a shared disk. If the REPLACE option is not specified, an error will be thrown if a table with the same name exists in the specified directory.

CATALOG ONLY

Use the CATALOG ONLY option to import only the database catalog without data.

NO DEPENDENCIES

With the NO DEPENDENCIES option, dependent objects of imported objects will not be imported.

THREADS

Indicates the number of threads used for parallel imports.

How many threads to use:

The THREADS value specifies how many objects are imported in parallel; the default is 1. Increasing the number may reduce import time, but can also affect system performance.

The following should be considered:

- For a single table, THREADS has no effect.

- For views or stored procedures, use 2 or more threads (up to the number of objects).

- For an entire schema, consider using 10 or more threads, depending on the number of CPU cores.

- For the thousands of tables in a whole BW/ERP system (the ALL keyword), a large number of threads is reasonable (up to 256).

System and Monitoring Views:

You can monitor the progress of the import using the system view M_IMPORT_BINARY_STATUS.

You can terminate the import session from the corresponding view using the session ID in the following statement.

ALTER SYSTEM CANCEL [WORK IN] SESSION 'sessionId'

The detailed results of the import are stored in the local session temporary table #IMPORT_RESULT.
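A sketch of a complete IMPORT statement, mirroring the EXPORT example above and re-importing a previous export of schema MYSCHEMA (the schema name and path are placeholders):

```sql
-- Re-import everything previously exported for the schema, using 10 parallel threads
IMPORT "MYSCHEMA"."*" FROM '/tmp' WITH REPLACE THREADS 10;
```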

7.7.3 IMPORT FROM
Syntax:

IMPORT FROM [<file_type>] <file_path> [INTO <table_name>] [WITH <import_from_option_list>]

Syntax elements:

WITH <import_from_option_list>:

IMPORT FROM options can be passed in using the WITH clause.

<file_path> ::= '<character>...'

<table_name> ::= [<schema_name>.]<identifier>

<import_from_option_list> ::= <import_from_option> | <import_from_option_list> <import_from_option>

<import_from_option> ::= THREADS <number_of_threads> | BATCH <number_of_records_of_each_commit> | TABLE LOCK | NO TYPE CHECK | SKIP FIRST <number_of_rows_to_skip> ROW | COLUMN LIST IN FIRST ROW | COLUMN LIST ( <column_name_list> ) | RECORD DELIMITED BY '<string_for_record_delimiter>' | FIELD DELIMITED BY '<string_for_field_delimiter>' | OPTIONALLY ENCLOSED BY '<character_for_optional_enclosure>' | DATE FORMAT '<string_for_date_format>' | TIME FORMAT '<string_for_time_format>' | TIMESTAMP FORMAT '<string_for_timestamp_format>'

Description:

The IMPORT FROM statement imports data from an external CSV file into an existing table.

THREADS: Indicates the number of threads that can be used for parallel import. The default value is 1 and the maximum is 256.

BATCH: Indicates the number of records that can be inserted in each commit.

THREADS and BATCH can achieve high performance loading by enabling parallel loading and committing multiple records at once. In general, 10 parallel load threads and a commit frequency of 10,000 records are good settings for columnar tables.

TABLE LOCK: Locks the table to import data faster into a column table. NO TYPE CHECK: Specifies that records are inserted without checking the type of each field.

SKIP FIRST <int> ROW: Skip inserting the first n records.

COLUMN LIST IN FIRST ROW: Indicates that the first row of the CSV file contains the column list.

COLUMN LIST ( <column_name_list> ): Indicates the list of fields to be inserted.

RECORD DELIMITED BY '<string>': Indicates the record delimiter in the CSV file.

FIELD DELIMITED BY '<string>': Indicates the field delimiter in the CSV file.

OPTIONALLY ENCLOSED BY '<character>': Indicates an optional closing character for field data.

DATE FORMAT '<string>': Indicates the date format of characters. If the CSV file has a date type, the specified format will be used for the date type field.

TIME FORMAT '<string>': Indicates the time format of characters. If the CSV file has a time type, the specified format will be used for the time type field.

TIMESTAMP FORMAT '<string>': Indicates the timestamp format of characters. If the CSV file has a timestamp type, the specified format will be used for timestamp type fields.

Example:

IMPORT FROM CSV FILE '/data/data.csv' INTO "MYSCHEMA"."MYTABLE" WITH RECORD DELIMITED BY '\n' FIELD DELIMITED BY ','
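Several of the options above can be combined. A sketch assuming a semicolon-delimited file whose first row holds the column names and whose dates use day-month-year order (the file and table names are placeholders):

```sql
IMPORT FROM CSV FILE '/data/data.csv' INTO "MYSCHEMA"."MYTABLE"
WITH COLUMN LIST IN FIRST ROW
     FIELD DELIMITED BY ';'
     DATE FORMAT 'DD-MM-YYYY'
     THREADS 4 BATCH 10000;
```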

Origin blog.csdn.net/weixin_45987577/article/details/126094151