Software Architecture and Design (4): Data Flow Architecture

Data Flow Architecture

In a data flow architecture, the entire software system is viewed as a series of transformations applied to successive sets of input data, with data and operations independent of each other. In this approach, data enters the system and then flows through the modules one at a time until it reaches some final destination (output or a data store).

Connections between components or modules can be implemented as I/O streams, I/O buffers, pipes, or other types of connections. Data can flow in a graph topology with cycles, in a linear structure without cycles, or in a tree-type structure.
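To make such connections concrete, below is a minimal sketch in Python, assuming that generators stand in for the stream/pipe connections between modules; the function names read_source, clean, enrich, and write_sink are illustrative only and not taken from any particular framework.

    from typing import Iterable, Iterator

    def read_source(lines: Iterable[str]) -> Iterator[str]:
        """Source module: feed raw records into the flow."""
        for line in lines:
            yield line

    def clean(records: Iterable[str]) -> Iterator[str]:
        """Filter 1: strip whitespace and drop empty records."""
        for record in records:
            record = record.strip()
            if record:
                yield record

    def enrich(records: Iterable[str]) -> Iterator[str]:
        """Filter 2: an independent transformation, unaware of the other modules."""
        for record in records:
            yield record.upper()

    def write_sink(records: Iterable[str]) -> None:
        """Sink module: deliver the transformed data to its final destination."""
        for record in records:
            print(record)

    if __name__ == "__main__":
        raw = ["  alpha ", "", "beta", "  gamma"]
        # Data flows through the modules one record at a time; each module
        # knows only its input and output streams, not the other modules.
        write_sink(enrich(clean(read_source(raw))))

Each module here could be replaced or reordered without touching the others, which is exactly the reuse and modifiability this style aims for.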

The main goal of this approach is to achieve the qualities of reuse and modifiability. It suits applications that involve a well-defined series of independent data transformations or computations on sequentially defined inputs and outputs, such as compilers and business data processing applications. There are three types of execution sequences between modules:

  • Batch sequential
  • Pipe and filter, or non-sequential pipeline mode
  • Process control

Batch Sequential

Batch sequential is a classical data processing model in which a data transformation subsystem can start processing only after its previous subsystem has completely finished.

  • Data flows as a whole from one subsystem to another.

  • Communication between modules takes place via temporary intermediate files, which can be removed by the succeeding subsystem.

  • It is suitable for applications whose data is batched, where each subsystem reads related input files and writes output files, as sketched after this list.

  • Typical applications of this architecture include business data processing such as banking and utility billing.
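The following is a minimal sketch of the batch sequential style, assuming two hypothetical subsystems, validate and summarize: each runs to completion over the whole batch before the next one starts, and they communicate only through a temporary intermediate file that is discarded afterwards.

    import os
    import tempfile

    def validate(input_path: str, intermediate_path: str) -> None:
        """Subsystem 1: read the whole input file, write only valid records."""
        with open(input_path) as src, open(intermediate_path, "w") as dst:
            for line in src:
                line = line.strip()
                if line:                               # keep non-empty records
                    dst.write(line + "\n")

    def summarize(intermediate_path: str, output_path: str) -> None:
        """Subsystem 2: starts only after subsystem 1 has completely finished."""
        with open(intermediate_path) as src:
            count = sum(1 for _ in src)
        with open(output_path, "w") as dst:
            dst.write(f"valid records: {count}\n")

    if __name__ == "__main__":
        with open("input.txt", "w") as f:              # sample batch input
            f.write("rec1\n\nrec2\nrec3\n")

        with tempfile.NamedTemporaryFile(suffix=".tmp", delete=False) as tmp:
            intermediate = tmp.name                    # temporary intermediate file

        validate("input.txt", intermediate)            # batch step 1 runs to completion...
        summarize(intermediate, "output.txt")          # ...before batch step 2 begins
        os.remove(intermediate)                        # the intermediate file is removed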

Advantages
In general, batch sequential provides simpler partitioning into subsystems. Each subsystem can be an independent program that processes input data and produces output data.
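As an illustration of that partitioning, a driver might simply run the two subsystems as separate programs, one after the other; the script names validate.py and summarize.py below are hypothetical.

    import subprocess

    # Each stage is an independent program; the driver only sequences them.
    subprocess.run(["python", "validate.py", "input.txt", "stage1.tmp"], check=True)
    subprocess.run(["python", "summarize.py", "stage1.tmp", "output.txt"], check=True)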

Disadvantages


Source: blog.csdn.net/LJX646566715/article/details/125807030