Flink 1.10 supports SQL DDL features. This article walks through the entire process with an example that consumes data from Kafka and writes it to JDBC. The specific steps are as follows:
1. Download the Flink 1.10 binary package https://www.apache.org/dist/flink/flink-1.10.0/flink-1.10.0-bin-scala_2.11.tgz and unpack it. Then download the required connector dependencies into Flink's lib directory, either with wget or by downloading them locally and copying them over. The required dependencies are:
flink-json-1.10.0.jar, flink-sql-connector-kafka_2.11-1.10.0.jar,
flink-jdbc_2.11-1.10.0.jar, and mysql-connector-java-5.1.48.jar.
2. Execute ./bin/start-cluster.sh to start the Flink cluster. After a successful start, the Flink Web UI can be accessed at http://localhost:8081.
3. Execute ./bin/sql-client.sh embedded to start the SQL CLI. You will see the squirrel welcome screen.
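As an optional sanity check (not part of the original steps), you can list the tables registered in the current CLI session; on a fresh session the result is empty:
SHOW TABLES;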
4. Create the source table with the following DDL:
CREATE TABLE source_table (
id BIGINT,
name STRING,
score BIGINT
) WITH (
'connector.type' = 'kafka',          -- use the Kafka connector
'connector.version' = 'universal',   -- Kafka version; universal supports 0.11 and above
'connector.topic' = 'flink-ddl-test',                          -- topic
'connector.properties.zookeeper.connect' = 'localhost:2181',   -- ZooKeeper address
'connector.properties.bootstrap.servers' = 'localhost:6667',   -- Kafka broker address
'format.type' = 'json'               -- data format
);
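To confirm the source table is readable, you can run a query against it directly in the SQL CLI. This is a sketch that assumes JSON records such as {"id": 1, "name": "flink", "score": 100} are already being produced to the flink-ddl-test topic:
SELECT id, name, score FROM source_table;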
5. Create the sink table:
CREATE TABLE sink_table (
id BIGINT,
name STRING,
score BIGINT
) WITH (
'connector.type' = 'jdbc',                        -- use the JDBC connector
'connector.url' = 'jdbc:mysql://localhost/test',  -- JDBC URL
'connector.table' = 'my_test',                    -- table name
'connector.username' = 'root',                    -- username
'connector.password' = '123456',                  -- password
'connector.write.flush.max-rows' = '1'            -- flush after this many rows; the default is 5000
);
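The JDBC connector writes into an existing table, so my_test should be created in the MySQL test database beforehand. A minimal sketch; the column types here are assumptions and should match your actual schema:
CREATE TABLE my_test (
  id BIGINT,
  name VARCHAR(255),
  score BIGINT
);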
6. Execute the INSERT statement to submit the Flink job:
INSERT INTO sink_table SELECT id, name, score FROM source_table;
7. On the Flink Web UI you can see the submitted job, which consumes data from Kafka and writes it to JDBC.
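Once some records have been produced to the Kafka topic, the output can also be checked directly in MySQL, for example:
SELECT * FROM my_test;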