Flink engine

Overview of Flink

  • What is big data

Big data refers to data collections that cannot be acquired, stored, managed, and processed with conventional software tools within an acceptable time frame.

The difference between batch computing and stream computing

  • Why stream computing is needed

Processing big data in real time unlocks greater value, e.g. real-time recommendation and real-time data monitoring.

  • Features of Flink

  • Exactly-Once

  • High throughput and low latency; results are produced in near real time

  • High fault tolerance

  • Stream-batch integration (unified stream and batch processing)

  • Streaming/Batch SQL
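
To make the Exactly-Once and fault-tolerance bullets above a little more concrete, here is a minimal Java sketch that turns on periodic checkpoints in exactly-once mode. The checkpoint interval and the placeholder pipeline are assumptions made for this illustration, not details from the original article.

```java
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.CheckpointingMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CheckpointingSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Take a checkpoint every 10 seconds with exactly-once guarantees;
        // the interval is an arbitrary value chosen for this sketch.
        env.enableCheckpointing(10_000, CheckpointingMode.EXACTLY_ONCE);

        // Trivial placeholder pipeline so the job graph is not empty.
        env.fromElements(1, 2, 3)
           .map(x -> x * 2)
           .returns(Types.INT) // lambda return types are erased, so declare it explicitly
           .print();

        env.execute("checkpointing-sketch");
    }
}
```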

Flink's overall architecture

  • Flink's layered architecture

  • SDK layer: supports the SQL/Table API, the DataStream API (Java), and Python (a minimal DataStream sketch follows this list)

  • Execution engine layer: provides a unified DAG (directed acyclic graph) to describe the data-processing pipeline; the scheduler converts the DAG into tasks that run in a distributed environment, and data is exchanged between tasks through Shuffle

  • State storage layer: stores the state of operators

  • Resource scheduling layer: Flink can be deployed on a variety of resource environments (e.g. standalone clusters, YARN, Kubernetes)
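
As a rough illustration of what the SDK layer looks like from the user's side (see the forward reference in the SDK layer item above), the Java DataStream sketch below counts words from a small in-memory input. Everything in it, from the sample sentences to the job name, is an assumption made for this example.

```java
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;

public class WordCountSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // A tiny bounded input keeps the sketch self-contained;
        // a real job would read from Kafka, files, sockets, etc.
        env.fromElements("flink unifies stream and batch", "flink is a streaming engine")
           // Split each line into (word, 1) pairs; returns(...) is needed because
           // the lambda's output type is erased at compile time.
           .flatMap((String line, Collector<Tuple2<String, Integer>> out) -> {
               for (String word : line.split(" ")) {
                   out.collect(Tuple2.of(word, 1));
               }
           })
           .returns(Types.TUPLE(Types.STRING, Types.INT))
           // Group by the word and sum the counts per key.
           .keyBy(value -> value.f0)
           .sum(1)
           .print();

        env.execute("wordcount-sketch");
    }
}
```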

  • Flink's overall architecture

A Flink cluster mainly consists of two core components: the JobManager (JM) and the TaskManager (TM).

  • The JM is responsible for coordinating the entire job, including scheduling tasks, triggering and coordinating checkpoints, and coordinating fault-tolerant recovery. It has the following three core components:

  • Dispatcher: receives submitted jobs, launches a JobMaster to execute each job, and recovers the job if its JobMaster fails;

  • JobMaster: manages the entire life cycle of a single job, requests slots from the ResourceManager, and schedules the job's tasks to the corresponding TMs;

  • ResourceManager: responsible for managing and allocating slot resources; each TaskManager registers with the RM after it starts up;

  • The TM is responsible for executing the tasks of the dataflow graph and for buffering and exchanging the data streams between tasks.
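
To make this division of labor a bit more concrete: a client program builds the job graph and submits it to the JM, whose Dispatcher starts a JobMaster; the JobMaster asks the ResourceManager for slots and deploys tasks to TMs. The sketch below points a program at a remote JobManager; the host name, port, and jar path are made-up placeholders, not values from the original article.

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RemoteSubmitSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical JobManager host, REST port, and user jar;
        // replace them with real values for an actual cluster.
        StreamExecutionEnvironment env = StreamExecutionEnvironment.createRemoteEnvironment(
                "jobmanager-host", 8081, "/path/to/user-job.jar");

        // The JobManager schedules these operators onto TaskManager slots;
        // the TaskManagers then execute them and exchange data between tasks.
        env.fromElements("a", "b", "c")
           .map(String::toUpperCase)
           .print();

        env.execute("remote-submit-sketch");
    }
}
```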

How Flink achieves stream-batch integration

  • Problems when stream computing and batch computing are built as two separate systems:

  • High development cost: the batch and streaming jobs share largely the same logic, yet it has to be developed twice;

  • Redundant data pipelines: the computation itself is identical, but two logically similar pipelines are maintained to run it, which wastes resources;

  • Inconsistent results: the two pipelines inevitably drift apart to some degree, which causes confusion for the business side.

  • Why stream-batch integration is possible:

  • From Flink's point of view, everything is a stream: an unbounded data set is a data stream that can be split along time into a series of bounded data sets;

  • Batch computing can therefore be seen as a special case of stream computing, and a bounded data set is just a special kind of data stream.

  • Consequently, Flink supports both unbounded and bounded data sets, and everything from the API down to the underlying processing is unified, which is what stream-batch integration means.

  • The unified Scheduler layer

  • The Scheduler is mainly responsible for converting a job's DAG into tasks that can be executed in a distributed environment;

  • EAGER mode (streaming scenarios): requests all the resources a job needs up front and then schedules all of the job's tasks at the same time; the tasks communicate with each other in a pipelined fashion;

  • LAZY mode (batch scenarios): schedules the upstream tasks first and only schedules the downstream tasks once the upstream has produced data or finished, similar to Spark's stage-by-stage execution model (a sketch of switching between the two modes follows this list).
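
Matching the EAGER/LAZY distinction above (see the note in the LAZY item), the same DataStream program can be executed in streaming or batch mode. The sketch below only demonstrates the switch; the bounded example source and job name are assumptions, and BATCH mode is only valid when all sources are bounded.

```java
import org.apache.flink.api.common.RuntimeExecutionMode;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ExecutionModeSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // BATCH runs the job stage by stage with blocking data exchanges (LAZY-style),
        // STREAMING schedules all tasks at once with pipelined exchanges (EAGER-style),
        // and AUTOMATIC lets Flink decide based on whether the sources are bounded.
        env.setRuntimeExecutionMode(RuntimeExecutionMode.BATCH);

        // A bounded source, so the job is valid in BATCH mode.
        env.fromElements("stream", "batch", "stream")
           .map(String::toUpperCase)
           .print();

        env.execute("execution-mode-sketch");
    }
}
```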

  • The unified Shuffle Service layer

  • Shuffle: in distributed computing, the process that connects upstream and downstream tasks and exchanges data between them is called Shuffle.

  • To unify the Shuffle architecture across the Streaming and Batch modes, Flink implements a pluggable Shuffle Service framework that abstracts out a set of common modules (see the configuration sketch below)
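
As a hedged illustration of where that pluggable framework is wired in: the shuffle implementation can be selected through configuration. The configuration key shown below and the idea of swapping in a custom ShuffleServiceFactory reflect my understanding of open-source Flink and should be verified against your version; the custom class name is purely hypothetical, and Flink's built-in Netty-based shuffle service is used when nothing is set.

```java
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ShuffleServiceSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Pluggable shuffle service hook: uncommenting the line below would swap in a
        // custom ShuffleServiceFactory implementation (the class name is hypothetical).
        // By default Flink uses its built-in Netty-based shuffle service.
        // conf.setString("shuffle-service-factory.class", "com.example.MyShuffleServiceFactory");

        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment(conf);

        env.fromElements(1, 2, 3).print();
        env.execute("shuffle-service-sketch");
    }
}
```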

With these changes and optimizations at the DataStream layer, the Scheduler layer, and the Shuffle Service layer, Flink can now conveniently handle both streaming and batch scenarios.

Origin blog.csdn.net/m0_51561690/article/details/128687112