Introduction to TimescaleDB Time-Series API Functions for the PostgreSQL Database
(To be continued)
Article Directory
- Introduction to TimescaleDB Time-Series API Functions for the PostgreSQL Database
- One show_chunks(): view chunks
- Two drop_chunks(): delete chunks
- Three create_hypertable(): create a hypertable
- Four add_dimension(): add an additional partitioning dimension
- Five set_chunk_time_interval(): change the chunk time interval
- Six set_number_partitions(): change the number of space partitions
- Seven compress_chunk(): compress a chunk
One show_chunks(): view chunks
Gets a list of the chunks associated with a hypertable.
Optional parameters
Name | Description |
---|---|
hypertable | Hypertable whose chunks are listed; if omitted, the chunks of all hypertables are shown |
older_than | Show only chunks entirely older than this timestamp |
newer_than | Show only chunks entirely newer than this timestamp |
SELECT show_chunks();
-- list all chunks
SELECT show_chunks('hypertable_name');
-- list all chunks of a given hypertable
SELECT show_chunks(older_than => INTERVAL '10 days', newer_than => INTERVAL '20 days');
-- list chunks between 10 and 20 days old
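The older_than and newer_than parameters also accept absolute timestamps, not just intervals. A minimal sketch, assuming a hypertable named conditions with a timestamptz time column:

```sql
-- chunks whose data lies entirely before January 1, 2020
SELECT show_chunks('conditions', older_than => '2020-01-01'::timestamptz);

-- chunks whose data lies entirely on or after January 1, 2020
SELECT show_chunks('conditions', newer_than => '2020-01-01'::timestamptz);
```

Mixing an interval for one bound and a timestamp for the other is also possible, since the two parameters are interpreted independently.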
Two drop_chunks(): delete chunks
Drops chunks whose time range falls completely before (or after) the specified time. The operation can span all hypertables or target a specific one, and it returns the list of dropped chunks in the same style as show_chunks.
Required parameters
The function requires at least one of the following parameters. They have the same semantics as in show_chunks.
Name | Description |
---|---|
hypertable | Hypertable or continuous aggregate from which chunks are dropped |
older_than | Drop chunks entirely older than this timestamp |
newer_than | Drop chunks entirely newer than this timestamp |
Optional parameters
Name | Description |
---|---|
schema_name | Schema of the hypertable from which chunks are dropped. Defaults to public |
cascade | Whether to drop chunks with CASCADE, also dropping objects that depend on them. Defaults to FALSE |
cascade_to_materializations | If TRUE, chunk data is also deleted from any associated continuous aggregates. If FALSE, only the raw chunks are dropped (the data in continuous aggregates is kept). Defaults to NULL, which raises an error if a continuous aggregate exists |
SELECT drop_chunks(newer_than => now() + INTERVAL '3 months', table_name => 'hypertable_name');
-- drop chunks more than 3 months in the future (e.g. rows inserted with erroneous clocks)
SELECT drop_chunks('2020-01-01'::DATE, 'conditions');
-- drop chunks older than January 1, 2020
SELECT drop_chunks(interval '3 months');
-- drop chunks older than 3 months across all hypertables
SELECT drop_chunks(older_than => interval '3 months', newer_than => interval '4 months', table_name => 'cs');
-- drop all chunks on hypertable cs older than 3 months and newer than 4 months
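As noted above, drop_chunks raises an error when a continuous aggregate is attached to the hypertable unless cascade_to_materializations is set explicitly. A sketch of the retention-only pattern, assuming a hypertable conditions that has a continuous aggregate defined on it:

```sql
-- drop raw chunks older than 6 months, while keeping the
-- already-aggregated data in the continuous aggregates intact
SELECT drop_chunks(older_than => INTERVAL '6 months',
                   table_name => 'conditions',
                   cascade_to_materializations => FALSE);
```

Setting the parameter to TRUE instead would remove the corresponding rows from the continuous aggregates as well.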
Three create_hypertable(): create a hypertable
Creates a hypertable. A hypertable is partitioned on a time column by default, and can additionally be partitioned on further columns. ALTER TABLE and SELECT work on hypertables just as on regular tables.
Required parameters
Name | Description |
---|---|
main_table | Table to convert into a hypertable |
time_column_name | Primary time-partitioning column |
Optional parameters
Name | Description |
---|---|
partitioning_column | Additional partitioning column; used together with number_partitions |
number_partitions | Number of partitions for partitioning_column; must be greater than 0 |
chunk_time_interval | Time range covered by each chunk; must be greater than 0. Defaults to 7 days |
create_default_indexes | Boolean; whether to create default indexes on the partitioning columns. Defaults to TRUE |
if_not_exists | Boolean; whether to only print a warning (instead of raising an error) when the hypertable already exists. Defaults to FALSE |
partitioning_func | Function used to compute a value's partition |
associated_schema_name | Schema name for the hypertable's internal tables |
associated_table_prefix | Prefix for the internal chunk table names. Defaults to "_hyper" |
migrate_data | Boolean; when TRUE, existing data in main_table is migrated into chunks of the new hypertable. Defaults to FALSE |
Return value
Column | Description |
---|---|
hypertable_id | ID of the hypertable in TimescaleDB's internal catalog |
schema_name | Schema name of the hypertable |
table_name | Table name of the hypertable |
created | TRUE if the hypertable was created; FALSE if it already existed and if_not_exists is TRUE |
Example
Convert the table conditions into a hypertable partitioned only on the time column:
SELECT create_hypertable('conditions', 'time');
Convert conditions into a hypertable whose chunks each cover 24 hours (the integer form is in microseconds):
SELECT create_hypertable('conditions', 'time', chunk_time_interval => 86400000000);
SELECT create_hypertable('conditions', 'time', chunk_time_interval => INTERVAL '1 day');
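Several of the optional parameters can be combined in a single call. A minimal sketch, assuming a plain table named sensors with time and device_id columns that already contains rows (both the table and its columns are illustrative, not from the text above):

```sql
-- convert sensors into a hypertable with 1-day chunks and
-- 4 hash partitions on device_id, migrating its existing rows
SELECT create_hypertable('sensors', 'time',
                         partitioning_column => 'device_id',
                         number_partitions   => 4,
                         chunk_time_interval => INTERVAL '1 day',
                         migrate_data        => TRUE);
```

Note that migrate_data can be slow on large tables, since every existing row has to be moved into the newly created chunks.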
Four add_dimension(): add an additional partitioning dimension
Adds an additional partitioning dimension to a hypertable. The new partitioning column can be partitioned either by interval or by hash.
Note: add_dimension can only be run after the hypertable has been created.
Required parameters
Name | Description |
---|---|
main_table | Hypertable to add the dimension to |
column_name | Column to partition by |
Optional parameters
Name | Description |
---|---|
number_partitions | Number of hash partitions for column_name; must be greater than 0 |
chunk_time_interval | Interval each chunk covers; must be greater than 0 |
partitioning_func | Function used to compute a value's partition (see create_hypertable) |
if_not_exists | Boolean; whether to only print a warning when the dimension already exists. Defaults to FALSE |
Return value
Column | Description |
---|---|
dimension_id | ID of the dimension in TimescaleDB's internal catalog |
schema_name | Schema name of the hypertable |
table_name | Table name of the hypertable |
column_name | Column name of the partitioning column |
created | TRUE if the dimension was created; FALSE if it already existed and if_not_exists is TRUE |
Example
Create a hypertable on the conditions table partitioned on the time column, then add an additional dimension hash-partitioned on the location column:
SELECT create_hypertable('conditions', 'time');
SELECT add_dimension('conditions', 'location', number_partitions => 4);
Create a hypertable on conditions partitioned on time and location. Add an additional time dimension on time_received with a 1-day interval, then an additional hash dimension on device_id:
SELECT create_hypertable('conditions', 'time', 'location', 2);
SELECT add_dimension('conditions', 'time_received', chunk_time_interval => INTERVAL '1 day');
SELECT add_dimension('conditions', 'device_id', number_partitions => 2);
SELECT add_dimension('conditions', 'device_id', number_partitions => 2, if_not_exists => true);
Five set_chunk_time_interval(): change the chunk time interval
Sets the chunk time interval of a hypertable. The new value applies only to chunks created afterwards; existing chunks are unaffected.
Required parameters
Name | Description |
---|---|
main_table | Hypertable name |
chunk_time_interval | Time interval each chunk covers; must be greater than 0 |
Optional parameters
Name | Description |
---|---|
dimension_name | Name of the time dimension to change; needed if and only if the hypertable has multiple time dimensions |
Change the hypertable's chunk interval to 24 hours:
SELECT set_chunk_time_interval('hypertable_name', interval '24 hours');
SELECT set_chunk_time_interval('conditions', 86400000000);
-- 24 hours in microseconds, for a TIMESTAMP time column
Six set_number_partitions(): change the number of space partitions
Sets the number of partitions (slices) of a space dimension on a hypertable. The new number affects only chunks created afterwards.
Required parameters
Name | Description |
---|---|
main_table | Hypertable name |
number_partitions | Number of partitions; must be greater than 0 and less than 32,768 |
Optional parameters
Name | Description |
---|---|
dimension_name | Name of a space dimension other than the primary time dimension; needed if and only if the hypertable has more than one space dimension |
Example
For a hypertable with a single space dimension:
SELECT set_number_partitions('conditions', 2);
For a hypertable with multiple space dimensions:
SELECT set_number_partitions('conditions', 2, 'device_id');
Seven compress_chunk(): compress a chunk
The compress_chunk function compresses a specific chunk. It is most often used instead of the add_compression_policy function when finer control over compression scheduling is needed.
Required parameters
Name | Description |
---|---|
chunk_name | Name of the chunk to compress |
Optional parameters
Name | Description |
---|---|
if_not_compressed | If TRUE, chunks that are already compressed are skipped. Defaults to FALSE |
Example
Compress a chunk:
SELECT compress_chunk('_timescaledb_internal._hyper_1_2_chunk');
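Instead of naming internal chunk tables by hand, compress_chunk can be combined with show_chunks to compress every chunk past a certain age, and if_not_compressed makes the call safe to repeat. A sketch, assuming a hypertable conditions on which compression has already been enabled:

```sql
-- compress all chunks of conditions older than one week,
-- skipping any chunks that are already compressed
SELECT compress_chunk(c, if_not_compressed => TRUE)
FROM show_chunks('conditions', older_than => INTERVAL '1 week') AS c;
```

This works because show_chunks returns the chunks as a set of regclass values, which compress_chunk accepts directly.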