Flink Key Points and Difficulties: Flink Table & SQL Must-Knows (1)

What are the Table API and Flink SQL

Flink itself is a unified framework for batch and stream processing, so the Table API and SQL are its top-level APIs that unify batch and streaming. At present their functionality is not yet complete and they are under active development.

The Table API is a set of query APIs embedded in the Java and Scala languages that lets us compose queries from relational operators (such as select, filter, and join) in a very intuitive way. With Flink SQL, you can write SQL directly in code to express the same kinds of queries. Flink's SQL support is based on Apache Calcite, an open-source SQL parsing and optimization framework, and implements the SQL standard.

Regardless of whether the input is batch or streaming, a query specified in either API has the same semantics and produces the same result.

Dependencies that need to be added

Which dependency you need depends on the programming language you use; here we choose the Scala API to build Table API and SQL programs:

<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-table-api-scala-bridge_2.11</artifactId>
  <version>1.11.0</version>
  <scope>provided</scope>
</dependency>

In addition, if you want to run your program locally in an IDE, you need to add one of the following modules, depending on which planner you use; here we choose the Blink planner:

<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-table-planner-blink_2.11</artifactId>
  <version>1.11.0</version>
  <scope>provided</scope>
</dependency>

If you want to implement a custom format for parsing Kafka data, or a custom function, use the following dependencies:

<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-table-common</artifactId>
  <version>1.11.0</version>
  <scope>provided</scope>
</dependency>
  • flink-table-planner-blink: the planner, the most important part of the Table API; it provides the runtime environment and the planner that generates the program's execution plan.
  • flink-table-api-scala-bridge: the bridge, responsible for connecting the Table API with the DataStream/DataSet API; there are separate Java and Scala variants.

These two dependencies need to be added to run in an IDE; in a production environment the planner is already present in the lib directory by default, and only the bridge is required.

Note that the Flink Table module ships with two planners. Since Flink 1.11, the Blink planner has been the default. If you want to learn about the old planner, refer to the official documentation.

The difference between the two planners (old & blink)

  • Batch-stream unification: the Blink planner treats batch jobs as a special case of stream processing. Therefore, it does not support conversion between Table and DataSet; batch jobs are not translated into DataSet programs but, like streaming jobs, into DataStream programs.
  • Because batch and streaming are unified, the Blink planner does not support BatchTableSource and uses a bounded StreamTableSource instead.
  • The Blink planner only supports the new Catalog and does not support the deprecated ExternalCatalog.
  • The FilterableTableSource implementations of the old planner and the Blink planner are incompatible: the old planner pushes PlannerExpressions down into the FilterableTableSource, while the Blink planner pushes down Expressions.
  • String-based key-value configuration options are only available for the Blink planner.
  • PlannerConfig is implemented differently in the two planners.
  • The Blink planner optimizes multiple sinks into one DAG (supported only on TableEnvironment, not on StreamTableEnvironment; see the sketch after this list), whereas the old planner always puts each sink into a separate DAG, and all DAGs are independent of each other.
  • The old planner does not support catalog statistics, while the Blink planner does.
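
As an illustration of the multi-sink case on the Blink planner, several INSERT statements can be grouped into a StatementSet so that they are planned together. The following is only a minimal sketch; the table names Orders, SinkA, and SinkB are hypothetical and would need to be registered beforehand:

import org.apache.flink.table.api.{EnvironmentSettings, TableEnvironment}

val settings = EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build()
val tableEnv = TableEnvironment.create(settings)

// group several INSERTs so the Blink planner can optimize them into a single DAG
val stmtSet = tableEnv.createStatementSet()
stmtSet.addInsertSql("INSERT INTO SinkA SELECT id, amount FROM Orders")
stmtSet.addInsertSql("INSERT INTO SinkB SELECT id, amount FROM Orders WHERE amount > 100")
stmtSet.execute()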

1 Basic program structure

The program structure of Table API and SQL programs is similar to that of a stream processing program; it can roughly be broken down into the same steps: first create an execution environment, then define the sources, the transformations, and the sinks.

The specific operation process is as follows:

val tableEnv = ... // create the table environment

// create an input table
tableEnv.connect(...).createTemporaryTable("table1")
// register an output table
tableEnv.connect(...).createTemporaryTable("outputTable")

// create a Table from a Table API query
val tapiResult = tableEnv.from("table1").select(...)
// create a Table from a SQL query
val sqlResult  = tableEnv.sqlQuery("SELECT ... FROM table1 ...")

// emit a result Table to a TableSink; the result of a SQL query is emitted the same way
// note: executeInsert() already submits the job, so no extra execute() call is needed
val tableResult = tapiResult.executeInsert("outputTable")
tableResult...

2 Create a table environment

TableEnvironment is the core concept for integrating the Table API & SQL in Flink. It is responsible for:

  • Registering Tables in the internal catalog
  • Registering external catalogs
  • Loading pluggable modules
  • Executing SQL queries
  • Registering user-defined functions (scalar, table, or aggregate)
  • Converting a DataStream or DataSet into a Table
  • Holding a reference to an ExecutionEnvironment or StreamExecutionEnvironment

When creating a TableEnvironment, you can additionally pass an EnvironmentSettings or TableConfig parameter, which can be used to configure certain features of the TableEnvironment.

Tables are always bound to a specific TableEnvironment. Tables from different TableEnvironments cannot be used in the same query, for example, by joining or unioning them.

A TableEnvironment can be created from a StreamExecutionEnvironment or an ExecutionEnvironment via the static methods StreamTableEnvironment.create() or BatchTableEnvironment.create(); the TableConfig is optional. The TableConfig can be used to configure the TableEnvironment or to customize the query optimization and translation process (see Query Optimization).

Make sure to choose the BatchTableEnvironment/StreamTableEnvironment that matches your programming language and planner.

If the jars for both planners are on the classpath (the default behavior), you should explicitly set the planner to be used in the current program.

Stream processing environment based on blink version (Blink-Streaming-Query):

import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.table.api.EnvironmentSettings
import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment

val bsEnv = StreamExecutionEnvironment.getExecutionEnvironment
val bsSettings = EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build()
val bsTableEnv = StreamTableEnvironment.create(bsEnv, bsSettings)

Only stream processing settings for blink planner are provided here. For the batch and stream processing settings of the old planner, and the batch processing settings of the blink planner, please refer to the official documentation.
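
For reference, a minimal sketch of the Blink-planner batch settings (a pure TableEnvironment without a DataStream bridge) might look like the following; please verify against the official documentation for your Flink version:

import org.apache.flink.table.api.{EnvironmentSettings, TableEnvironment}

// Blink planner, batch mode
val bbSettings = EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build()
val bbTableEnv = TableEnvironment.create(bbSettings)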

3 Registering tables in the Catalog

TableEnvironment maintains a map of catalogs of tables that are created with an identifier. An identifier consists of three parts: the catalog name, the database name, and the object name. If the catalog or database is not specified, the current default values are used.

Tables can be either virtual (VIEWS) or regular (TABLES). A view can be created from an existing Table object, usually the result of a Table API or SQL query. A table describes external data, such as a file, a database table, or a message queue.

Temporary tables and permanent tables

Tables can be temporary, tied to the lifetime of a single Flink session, or permanent, visible across multiple Flink sessions and clusters.

Permanent tables require a catalog (such as the Hive Metastore) to maintain the table's metadata. Once a permanent table is created, it will be visible to any Flink session connected to the catalog and will persist until explicitly deleted.

On the other hand, temporary tables are usually kept in memory and only exist for the duration of the Flink session in which they were created. These tables are not visible to other sessions. They are not tied to any catalog or database but can be created in a namespace. Temporary tables are not dropped even if their corresponding database is dropped.
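
As a sketch of how a catalog can be registered and made the current one (the in-memory catalog here only illustrates the API; a permanent table would normally require a persistent catalog such as the Hive catalog, and tableEnv is the TableEnvironment created earlier):

import org.apache.flink.table.catalog.GenericInMemoryCatalog

// register a catalog and make it (and its default database) the current namespace
val catalog = new GenericInMemoryCatalog("my_catalog")
tableEnv.registerCatalog("my_catalog", catalog)
tableEnv.useCatalog("my_catalog")
tableEnv.useDatabase("default")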

Creating a table

Virtual tables

In SQL terminology, the objects of the Table API correspond to views (virtual tables). A view encapsulates a logical query plan and can be created in the catalog as follows:

// get a TableEnvironment
val tableEnv = ... // see "Create a TableEnvironment" section

// table is the result of a simple projection query
val projTable: Table = tableEnv.from("X").select(...)

// register the Table projTable as table "projectedTable"
tableEnv.createTemporaryView("projectedTable", projTable)

Expanding table identifiers

Tables are always registered with a ternary identifier, including catalog name, database name, and table name.

Users can set one catalog and one database inside it as the "current catalog" and "current database". With that, the first two parts of the three-part identifier can be omitted; if they are not specified, the current catalog and current database are used. The current catalog and current database can be switched via the Table API or SQL.

Identifiers follow SQL requirements, which means they can be escaped with backticks when necessary (for example, when a name contains a dot or other special characters).

// get a TableEnvironment
val tableEnv: TableEnvironment = ...
tableEnv.useCatalog("custom_catalog")
tableEnv.useDatabase("custom_database")

val table: Table = ...

// register the view named 'exampleView' in the catalog named 'custom_catalog'
// in the database named 'custom_database' 
tableEnv.createTemporaryView("exampleView", table)

// register the view named 'exampleView' in the catalog named 'custom_catalog'
// in the database named 'other_database' 
tableEnv.createTemporaryView("other_database.exampleView", table)

// register the view named 'example.View' in the catalog named 'custom_catalog'
// in the database named 'custom_database' 
tableEnv.createTemporaryView("`example.View`", table)

// register the view named 'exampleView' in the catalog named 'other_catalog'
// in the database named 'other_database' 
tableEnv.createTemporaryView("other_catalog.other_database.exampleView", table)

4 Table query

Using connectors to external systems, we can read and write data and register the resulting tables in the environment's catalog. The registered tables can then be queried and transformed.
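
As a sketch of such a registration using the connect/descriptor API (deprecated in newer Flink versions in favor of DDL), where the file path and schema below are illustrative and the Csv format typically requires the flink-csv dependency on the classpath:

import org.apache.flink.table.api.DataTypes
import org.apache.flink.table.descriptors.{Csv, FileSystem, Schema}

// register a CSV file as the table "sensorInput"
tableEnv
  .connect(new FileSystem().path("sensor.txt"))
  .withFormat(new Csv())
  .withSchema(new Schema()
    .field("id", DataTypes.STRING())
    .field("timestamp", DataTypes.BIGINT())
    .field("temperature", DataTypes.DOUBLE()))
  .createTemporaryTable("sensorInput")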

Flink provides us with two query methods: Table API and SQL.

Table API call

The Table API is a query API integrated in the Scala and Java languages. Unlike SQL, Table API queries are not represented by strings, but are called step-by-step in the host language.

The Table API is based on the Table class, which represents a table and provides a complete set of methods for relational operations. These methods return a new Table object that represents the result of applying the operation to the input table. Some relational operations are composed of multiple chained method calls, for example table.select(...).filter(...), where select(...) selects the specified fields of the table and filter(...) applies the filter condition.

The implementation in the code is as follows:

// get the table environment
val tableEnv = ...

// register the Orders table

// scan the registered Orders table
val orders = tableEnv.from("Orders")
// compute the total revenue of customers from France
val revenue = orders
  .filter($"cCountry" === "FRANCE")
  .groupBy($"cID", $"cName")
  .select($"cID", $"cName", $"revenue".sum as "revSum")

// emit or convert the table
// execute the query

Note: the implicit conversions used above require the following imports:

import org.apache.flink.table.api._
import org.apache.flink.api.scala._
import org.apache.flink.table.api.bridge.scala._

SQL query

Flink's SQL integration is based on Apache Calcite, which implements the SQL standard. In Flink, regular strings are used to define SQL queries. The result of the SQL query is a new Table.

The code is implemented as follows:

// get a TableEnvironment
val tableEnv = ... // see "Create a TableEnvironment" section

// register Orders table

// compute revenue for all customers from France
val revenue = tableEnv.sqlQuery("""
  |SELECT cID, cName, SUM(revenue) AS revSum
  |FROM Orders
  |WHERE cCountry = 'FRANCE'
  |GROUP BY cID, cName
  """.stripMargin)

// emit or convert Table
// execute query

The following example shows how to specify an update query to insert the results of the query into a registered table.

// get a TableEnvironment
val tableEnv = ... // see "Create a TableEnvironment" section

// register "Orders" table
// register "RevenueFrance" output table

// compute revenue for all customers from France and emit to "RevenueFrance"
tableEnv.executeSql("""
  |INSERT INTO RevenueFrance
  |SELECT cID, cName, SUM(revenue) AS revSum
  |FROM Orders
  |WHERE cCountry = 'FRANCE'
  |GROUP BY cID, cName
  """.stripMargin)

5 Convert DataStream to Table

Flink lets us convert between Table and DataStream: we can read the source stream as a DataStream, map it into a case class, and then convert it into a Table. The column fields of the Table are the fields of the case class, so there is no need to define a schema separately.

Code example

The implementation in code is very simple: just call tableEnv.fromDataStream(). By default, the schema of the converted Table corresponds to the field definitions of the DataStream; the fields can also be specified explicitly.

This allows us to change the order of fields, rename them, or select only certain fields, which is equivalent to doing a map operation (or the select operation of the Table API).

The code is as follows:

val inputStream: DataStream[String] = env.readTextFile("sensor.txt")
val dataStream: DataStream[SensorReading] = inputStream
  .map(data => {
    val dataArray = data.split(",")
    SensorReading(dataArray(0), dataArray(1).toLong, dataArray(2).toDouble)
  })

val sensorTable: Table = tableEnv.fromDataStream(dataStream)

val sensorTable2 = tableEnv.fromDataStream(dataStream, $"id", $"timestamp" as "ts")

Correspondence between data types and Table schema

In the example of the previous section, the correspondence between the data type in the DataStream and the table schema is based on the field names of the case class (name-based mapping), which is also why as can be used for renaming.

The other way is to map fields by their position (position-based mapping). During this mapping, new field names can be specified directly.

Correspondence based on name:

val sensorTable = tableEnv
  .fromDataStream(dataStream, $"timestamp" as "ts", $"id" as "myId", $"temperature")

Position-based correspondence:

val sensorTable = tableEnv
  .fromDataStream(dataStream, $"myId", $"ts")

Flink's DataStream and DataSet APIs support multiple types.

Composite types, such as tuples (built-in Scala and Java tuples), POJOs, Scala case classes, and Flink's Row type, etc., allow nested data structures with multiple fields that can be accessed in Table expressions. Other types are treated as atomic types.

For tuple types and atomic types, position-based mapping is usually the better choice; name-based mapping is also possible if needed:

For tuple types, the default field names are "_1", "_2", and so on; for atomic types, the default field name is "f0".
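
A small sketch of both mappings for a tuple stream, using a made-up stream of (String, Long) pairs and the imports shown earlier:

import org.apache.flink.streaming.api.scala._

// a tuple stream used only for illustration
val tupleStream: DataStream[(String, Long)] =
  env.fromElements(("sensor_1", 1547718199L), ("sensor_2", 1547718201L))

// position-based mapping: the first tuple field becomes "id", the second "ts"
val tupleTable1 = tableEnv.fromDataStream(tupleStream, $"id", $"ts")

// name-based mapping: refer to the default tuple field names "_1"/"_2" and rename them
val tupleTable2 = tableEnv.fromDataStream(tupleStream, $"_2" as "ts", $"_1" as "id")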

6 Create a temporary view

The first way to create a temporary view is to convert directly from a DataStream. Similarly, you can directly correspond to the field conversion; you can also specify the corresponding field when converting.

The code is as follows:

tableEnv.createTemporaryView("sensorView", dataStream)
tableEnv.createTemporaryView("sensorView",
  dataStream, $"id", $"temperature", $"timestamp" as "ts")

In addition, of course, you can also create views based on Table:

tableEnv.createTemporaryView("sensorView", sensorTable)

The Schema of View and Table are exactly the same. In fact, in the Table API, View and Table can be considered equivalent.

7 Output table

Update Mode

In stream processing, writing out a table is not as straightforward as in the traditional, bounded case.

For streaming queries, you need to declare how to perform the conversion between the (dynamic) table and the external connector. The kind of messages exchanged with the external system is specified by the update mode.

There are three update modes in the Flink Table API:

  1. Append Mode

In append mode, the table (dynamic table) and the external connector only exchange Insert messages.

  2. Retract Mode

In retract mode, the table and the external connector exchange: Add and Retract messages.

  • Insert is encoded as an add message;
  • Delete is encoded as a retract message;
  • Update is encoded as a retract message for the previously reported row and an add message for the updated (new) row.

In this mode, keys cannot be defined, which is completely different from upsert mode.

  3. Upsert Mode

In upsert mode, the dynamic table and the external connector exchange Upsert and Delete messages.

This mode requires a unique key through which update messages can be delivered. In order to properly apply the message, the external connector needs to know the properties of this unique key.

  • Both Insert and Update are encoded as Upsert messages;
  • Delete is encoded as a Delete message.

The main difference between this mode and the Retract mode is that the Update operation is encoded with a single message, so it will be more efficient.

8 Convert the table to a DataStream

Tables can be converted to DataStream or DataSet. This way, custom stream processing or batch programs can continue to run on the results of Table API or SQL queries.

When converting a table into a DataStream or DataSet, you need to specify the resulting data type, i.e., the type each row of the table should be converted into. The most convenient type is usually Row; since all fields of the result have explicit types, tuple types are also commonly used.

A table that results from a streaming query is updated dynamically. Therefore, the data stream obtained from such a dynamic table must also encode the table's update operations, which gives rise to different conversion modes.

There are two modes of table to DataStream in the Table API:

  • Append Mode

Useful for scenarios where the table will only be changed by an Insert operation.

  • Retract Mode

Can be used in any scenario. It is similar to the Retract mode among the update modes; it has only two kinds of operations: Insert and Delete.

The records emitted in this mode carry an extra Boolean flag as the first field, indicating whether the record is newly inserted data (Insert, true) or retracted data (Delete, false).

The code is implemented as follows:

val resultStream: DataStream[Row] = tableEnv
  .toAppendStream[Row](resultTable)

val aggResultStream: DataStream[(Boolean, (String, Long))] = tableEnv
  .toRetractStream[(String, Long)](aggResultTable)

resultStream.print("result")
aggResultStream.print("aggResult")

Therefore, if there is no aggregation such as a group by, you can convert directly with toAppendStream; if the result can be updated after an aggregation, you generally have to use toRetractStream.

9 Query interpretation and execution

The Table API provides a mechanism to explain the logic used to compute a table, including the optimized query plan, via the TableEnvironment.explain(table) and TableEnvironment.explain() methods.

The explain method returns a string describing the three plans:

  • Unoptimized logical query plan
  • Optimized logical query plan
  • Actual execution plan

We can view the execution plan in the code:

val explanation: String = tableEnv.explain(resultTable)
println(explanation)

The interpretation and execution of a query is broadly the same for the old planner and the Blink planner, with some differences. Overall, a query is represented as a logical query plan and then translated in two steps:

  1. Optimize query plans
  2. Interpreted as DataStream or DataSet program

Since the Blink planner unifies batch and streaming, every query is translated only into a DataStream program. In addition, in the batch TableEnvironment, the Blink planner does not start the translation until tableEnv.execute() is called.

The Table API and SQL are essentially relational operations on tables, while relational tables, relational algebra, and SQL itself are generally bounded and better suited to batch scenarios. This makes their semantics a little harder to reason about in stream processing, and some special concepts need to be introduced.

1 The difference between stream processing and relational algebra (tables and SQL)

[Figure: comparison between relational algebra / SQL (batch processing) and stream processing]

As you can see, relational algebra (mainly tables in relational databases) and SQL were designed primarily for batch processing, which is fundamentally different from stream processing.

2 Dynamic tables

The data that stream processing works on arrives continuously, which is completely different from the "tables" stored in the relational databases we are familiar with. If we convert streaming data into a table and then run a table-like select on it, the result is not static; it is continuously updated as new data arrives.

We can keep updating the previous result as new data comes in. Tables obtained this way are called "Dynamic Tables" in the Flink Table API.

Dynamic tables are the core concept of Flink's Table API and SQL support for streaming data. Unlike static tables, which represent batch data, dynamic tables change over time. A dynamic table can be queried like a static batch table. Querying a dynamic table will result in a Continuous Query. The continuous query never terminates and another dynamic table is generated. A Query continuously updates its dynamic result table to reflect changes on its dynamic input table.

3 The process of streaming continuous query

The following diagram shows the relationship between streams, dynamic tables, and continuous queries:

[Figure: the relationship between streams, dynamic tables, and continuous queries]

The process of streaming continuous query is:

  1. Streams are converted to dynamic tables
  2. Calculate continuous queries on dynamic tables to generate new dynamic tables
  3. The generated dynamic table is converted back into a stream

3.1 Convert the stream to a table (Table)

In order to process a stream with relational queries, it must first be converted to a table.

Conceptually, each record of the stream is interpreted as an Insert modification to the result table, because the stream is continuous and previously emitted output cannot be changed. In essence, we are building a table from a stream of insert-only changes (an insert-only changelog).

To better illustrate the concepts of dynamic tables and continuous queries, let's look at a concrete example.

For example, our current input data is the user's access behavior on the website. The data type (Schema) is as follows:

{
  user:  VARCHAR,   // user name
  cTime: TIMESTAMP, // timestamp of the visit to a URL
  url:   VARCHAR    // the URL visited by the user
}

The following diagram shows how to convert the access URL event stream, or click event stream (on the left) into a table (on the right).

[Figure: converting a stream of click events into a continuously growing table]

The resulting table will keep growing as more access event stream records are inserted.

3.2 Continuous Query

Continuous query will perform calculation processing on the dynamic table, and generate a new dynamic table as a result. Unlike batch queries, continuous queries never terminate and update their result tables based on updates on the input tables.

At any point in time, the results of a continuous query are semantically equivalent to the results of the same query executed in batch mode on a snapshot of the input table.

In the example below, we show a continuous query on a stream of click events.

This query is very simple: a grouped count aggregation. It groups the clicks table on the user field and counts the number of URLs visited. The diagram shows how the query result is updated over time as more rows are appended to the clicks table.
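
As a sketch, the continuous query described here could be written as follows (the clicks table is assumed to be registered; user is escaped with backticks because it is a reserved keyword in SQL):

// grouped count over the dynamic clicks table; the result table is itself dynamic
val userUrlCounts = tableEnv.sqlQuery(
  """
    |SELECT `user`, COUNT(url) AS cnt
    |FROM clicks
    |GROUP BY `user`
  """.stripMargin)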

[Figure: the continuous count query on the clicks table, updated as new rows arrive]

3.3 Converting a dynamic table to a stream

Like regular database tables, dynamic tables can be modified continuously through Insert, Update, and Delete changes. These changes need to be encoded when converting a dynamic table to a stream or writing it to an external system. Flink's Table API and SQL support three ways to encode changes to dynamic tables:

  • Append-only stream

Dynamic tables that are modified by inserting changes only, can be converted directly to an "append-only" stream. The data emitted in this stream is each new row in the dynamic table.

  • Retract stream

A Retract stream is a stream that contains two types of messages, Add messages and Retract messages.

A dynamic table is converted into a retract stream by encoding INSERT as an add message, DELETE as a retract message, UPDATE as a retract message for the changed row (previous row), and an add message for the updated row (new row).

The following diagram shows the process of converting a dynamic table to a Retract stream.

[Figure: converting a dynamic table into a retract stream]

  • Upsert stream

Upsert streams contain two types of messages: Upsert messages and delete messages. A dynamic table converted to an upsert stream needs to have a unique key.

Dynamic tables with Unique Keys can be converted to streams by encoding INSERT and UPDATE changes as upsert messages and DELETE changes as DELETE messages.

The following diagram shows the process of converting a dynamic table to an upsert stream.

[Figure: converting a dynamic table into an upsert stream]

These concepts have come up before. Note that when converting a dynamic table into a DataStream in code, only append and retract streams are supported. The TableSink interface, which writes dynamic tables to external systems, can have different implementations; for example, the Elasticsearch sink mentioned earlier supports an upsert mode.

4 Time characteristics

Time-based operations (such as windows in the Table API and SQL) need to define the relevant time semantics and the source of the time information. A Table can therefore provide a logical time field, which is used in table programs to indicate time and to access the corresponding timestamps.

A time attribute can be part of every table schema. Once a time attribute is defined, it can be referenced as a field and used in time-based operations.

Time attributes behave like regular timestamps and can be accessed and used in computations.

4.1 Processing time

Processing time semantics allow table handlers to generate results based on the machine's local time. It is the simplest concept of time. It neither needs to extract timestamps nor generate watermarks.

There are three ways to define the processing time attribute: specify it directly when the DataStream is converted; specify it when defining the Table Schema; specify it in the DDL that creates the table.

  • Specify when converting DataStream to Table

When converting a DataStream into a table, field names can be specified to define the schema; while defining the schema, a processing-time field can be declared with .proctime.

Note that the proctime attribute can only extend the physical schema by appending a logical field, so it can only be defined at the end of the schema definition.

The code is as follows:

val stream = env.addSource(new SensorSource)
val sensorTable = tableEnv
  .fromDataStream(stream, $"id", $"timestamp", $"temperature", $"pt".proctime)

  • Specify in the DDL that creates the table

In the DDL that creates the table, add a computed column defined as PROCTIME(); this appends a field carrying the current processing time.

The code is as follows:

val sinkDDL: String =
  """
    |create table dataTable (
    |  id varchar(20) not null,
    |  ts bigint,
    |  temperature double,
    |  pt AS PROCTIME()
    |) with (
    |  'connector.type' = 'filesystem',
    |  'connector.path' = 'sensor.txt',
    |  'format.type' = 'csv'
    |)
  """.stripMargin

tableEnv.sqlUpdate(sinkDDL) // execute the DDL

Note: Blink Planner must be used to run this DDL.
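
Once a processing-time attribute such as pt has been defined, it can be referenced in time-based operations. As a small sketch only (using the sensorTable converted from the DataStream above; window details are beyond this part of the series), a 10-second tumbling window counting readings per sensor:

import org.apache.flink.table.api._

// count readings per sensor in 10-second tumbling windows over the processing-time attribute pt
val windowedCounts = sensorTable
  .window(Tumble over 10.seconds on $"pt" as "w")
  .groupBy($"id", $"w")
  .select($"id", $"id".count as "cnt", $"w".end as "windowEnd")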

4.2 Event Time

Event-time semantics allow table programs to produce results based on the time contained in each record. This yields correct results even in the presence of out-of-order or late events.

In order to handle out-of-order events and distinguish on-time from late events in the stream, Flink needs to extract a timestamp from the event data and use it to advance event time (via watermarks).

  • Specify when converting DataStream to Table

When defining the schema while converting a DataStream into a Table, use .rowtime to define the event-time attribute. Note that timestamps and watermarks must already have been assigned on the DataStream being converted.

When converting a data stream into a table, there are two ways to define the time attribute. Depending on whether the field referenced by .rowtime already exists in the schema of the data stream, the timestamp field can either:

  • Be appended to the schema as a new field
  • Replace an existing field

In both cases, the defined event timestamp field will hold the value of the event timestamp in the DataStream.

The code is as follows:

val stream = env
  .addSource(new SensorSource)
  .assignAscendingTimestamps(r => r.timestamp)
// convert the DataStream into a Table and specify the time field
val sensorTable = tableEnv
  .fromDataStream(stream, $"id", $"timestamp".rowtime, $"temperature")

  • Specify in the DDL that creates the table

Event-time attributes are defined with the WATERMARK statement in the CREATE TABLE DDL. The watermark statement defines a watermark-generation expression on an existing event-time column and marks that column as the event-time attribute.

The code is as follows:

val sinkDDL: String =
  """
    |create table dataTable (
    |  id varchar(20) not null,
    |  ts bigint,
    |  temperature double,
    |  rt AS TO_TIMESTAMP( FROM_UNIXTIME(ts) ),
    |  watermark for rt as rt - interval '1' second
    |) with (
    |  'connector.type' = 'filesystem',
    |  'connector.path' = 'file:///D:\\..\\sensor.txt',
    |  'format.type' = 'csv'
    |)
  """.stripMargin

tableEnv.sqlUpdate(sinkDDL) // execute the DDL

Here FROM_UNIXTIME is a built-in time function that converts an integer number of seconds into a date-time string, by default in the "YYYY-MM-DD hh:mm:ss" format (a different format string can be passed as a second argument); TO_TIMESTAMP then converts that string into a TIMESTAMP. For example, with a UTC session time zone, FROM_UNIXTIME(1547718199) returns '2019-01-17 09:43:19', which TO_TIMESTAMP turns into the corresponding timestamp value.

To be continued.
