Visualizing a Flink DAG (using the Flink plan visualizer)

Following the official documentation [1], let's run a few experiments; the code comes from [2].

The steps are as follows:

mvn clean scala:compile compile package

nc -lk 9999

flink run -c wordcount_increstate /home/appleyuchi/桌面/Flink_Code/flink_state/checkpoint/Scala/target/datastream_api-1.0-SNAPSHOT.jar
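
The main class wordcount_increstate is the socket word-count job from [2]. For orientation only, here is a minimal sketch (an assumption, not the actual code from [2]) of a DataStream job whose shape matches the plan returned below: socket source, flatMap, map, keyBy, aggregation, print.

import org.apache.flink.streaming.api.scala._

object WordCountSketch {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    env.setParallelism(2)                       // matches the parallelism shown in the plan below

    env.socketTextStream("localhost", 9999)     // the source fed by `nc -lk 9999`
      .flatMap(_.toLowerCase.split("\\s+"))     // Flat Map operator
      .map(word => (word, 1))                   // Map operator
      .keyBy(_._1)                              // produces the HASH ship strategy in the plan
      .sum(1)                                   // aggregation operator
      .print()                                  // Sink: Print to Std. Out

    env.execute("wordcount sketch")
  }
}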

When the job is submitted to the cluster, the terminal returns a JSON execution plan:

{
  "nodes" : [ {
    "id" : 1,
    "type" : "Source: Socket Stream",
    "pact" : "Data Source",
    "contents" : "Source: Socket Stream",
    "parallelism" : 1
  }, {
    "id" : 2,
    "type" : "Flat Map",
    "pact" : "Operator",
    "contents" : "Flat Map",
    "parallelism" : 2,
    "predecessors" : [ {
      "id" : 1,
      "ship_strategy" : "REBALANCE",
      "side" : "second"
    } ]
  }, {
    "id" : 3,
    "type" : "Map",
    "pact" : "Operator",
    "contents" : "Map",
    "parallelism" : 2,
    "predecessors" : [ {
      "id" : 2,
      "ship_strategy" : "FORWARD",
      "side" : "second"
    } ]
  }, {
    "id" : 4,
    "type" : "Map",
    "pact" : "Operator",
    "contents" : "Map",
    "parallelism" : 2,
    "predecessors" : [ {
      "id" : 3,
      "ship_strategy" : "FORWARD",
      "side" : "second"
    } ]
  }, {
    "id" : 6,
    "type" : "aggregation",
    "pact" : "Operator",
    "contents" : "aggregation",
    "parallelism" : 2,
    "predecessors" : [ {
      "id" : 4,
      "ship_strategy" : "HASH",
      "side" : "second"
    } ]
  }, {
    "id" : 7,
    "type" : "Sink: Print to Std. Out",
    "pact" : "Data Sink",
    "contents" : "Sink: Print to Std. Out",
    "parallelism" : 2,
    "predecessors" : [ {
      "id" : 6,
      "ship_strategy" : "FORWARD",
      "side" : "second"
    } ]
  } ]
}

Paste this JSON into https://flink.apache.org/visualizer/ to render the job's DAG graph.
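
If you only want the plan JSON and don't need to submit the job, you can also print it from the program itself with getExecutionPlan. A minimal self-contained sketch (reusing the assumed pipeline from the sketch above, not the actual code from [2]):

import org.apache.flink.streaming.api.scala._

object PrintPlan {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    env.setParallelism(2)
    env.socketTextStream("localhost", 9999)
      .flatMap(_.toLowerCase.split("\\s+"))
      .map(word => (word, 1))
      .keyBy(_._1)
      .sum(1)
      .print()

    // getExecutionPlan returns the JSON plan of the pipeline built so far;
    // paste its output into the visualizer page instead of copying it from
    // the terminal after `flink run`.
    println(env.getExecutionPlan)
  }
}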

References:

[1] Execution Plans (Flink official documentation)

[2] Flink Incremental Checkpoint Experiment (Flink增量Checkpoint实验)

Reposted from blog.csdn.net/appleyuchi/article/details/109071632