[Model-Driven Software Development] Process and Engineering: Versioning

    Effective versioning and configuration management are essential in software development projects.

1. The concept of versioning

    In an MDSD project, the following aspects must be managed and versioned:

  • Generator tools: if the tools themselves are still under development, it makes sense to keep them under version control as well.
  • Generator configuration: the generative part of the domain architecture. This includes DSL definitions (e.g., profiles or configuration files), metamodels, templates, and transformations.
  • The non-generated part of the domain architecture: the MDSD platform.
  • The application itself: models, specifications, and hand-developed code.

   Ideally, the generated code is not versioned, since it can be reproduced from the model at any time and thus does not constitute actual program source. Of course, this is only realistic if generated and hand-written code are structurally separated in the file system.
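With such a separation in place, the generated tree can simply be excluded from version control. For example, in a Git-based project (the paths below are purely illustrative), a `.gitignore` entry is enough:

```
# Generated code is reproducible from the model and is not versioned.
# Only the model and the hand-written code are the real sources.
# (Directory names are illustrative.)
/src/generated/
```

The same idea applies to any version control system; the important point is that the generated directory never mixes with hand-written sources.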

       One goal of model-driven development is to develop multiple applications based on the same domain architecture. It is therefore necessary to completely separate the platform and the generator configuration from any individual application.

2. Projects and dependencies

  The focus here is on managing the dependencies between the various projects, including their versions. An application project must specify which version of the domain architecture it is based on. If the underlying platform changes, the domain architecture, and possibly the application project as well, may need to change with it. A framework metaphor illustrates this: think of the domain architecture as a framework. If you change the framework, you may have to adapt its client applications. The same reasoning applies when the domain architecture evolves.
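Such a dependency could be recorded, for example, in a small configuration file checked in with the application project. The file name and keys below are purely illustrative; any build tool's native dependency mechanism would serve the same purpose:

```
# architecture.properties (hypothetical): pins the domain-architecture
# version this application project is built and generated against.
domainArchitecture.version=2.1.0
platform.version=2.1.0
```

Upgrading the application to a new domain-architecture version then becomes an explicit, reviewable change in version control.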

3. The structure of the application project

Projects, Artifacts, and Repositories

       The diagram above shows the top-level structure of an application project and how the generator and compiler make use of it.

     The application's model and its hand-written code reside in the application repository. The generator produces the generated code, driven by the generator configuration (DSL definitions, templates, and so on), which is located in the domain-architecture repository. A build script then builds the application from the generated code, the hand-written application code, and the platform code taken from the domain-architecture repository.
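The build flow described above can be sketched in a few lines. All names here are illustrative stand-ins for the real repositories, generator, and build script:

```python
# Sketch of the MDSD build flow: generate code from the model using the
# templates from the domain-architecture repository, then combine it with
# the hand-written application code and the platform code.

def generate(model, templates):
    """Stand-in for the generator: emit one artifact per model element."""
    return [templates["class"].format(name=entity)
            for entity in model["entities"]]

def build(app_repo, domain_arch_repo):
    """Stand-in for the build script: assemble all compile inputs."""
    generated = generate(app_repo["model"], domain_arch_repo["templates"])
    # Inputs: generated code + hand-written app code + platform code.
    return generated + app_repo["handwritten"] + domain_arch_repo["platform"]

# Hypothetical repository contents:
app = {"model": {"entities": ["Customer"]},
       "handwritten": ["CustomerLogic.java"]}
arch = {"templates": {"class": "{name}Base.java"},
        "platform": ["Persistence.jar"]}
```

Note that only `app` and `arch` are versioned; the output of `generate` is reproducible and stays out of the repository.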

4. Version management and build process for mixed files

         It's not always possible or practical to completely separate generated and non-generated code. Relevant examples include:

  • Custom code in the J2EE deployment descriptor;
  • Custom code in JSP files;
  • Custom code in properties files or other configuration files.

      In general, these are places where the target language does not provide an adequate delegation mechanism.

      Obviously, the use of such protected regions results in files in which generated and non-generated code is mixed. The problem is that these files can usually only be versioned as a whole. As a consequence, redundant code ends up in the repository: the generated code is not the real source; the source is the model from which the code is generated.
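A minimal sketch of how a generator can preserve hand-written code inside protected regions when regenerating such a mixed file. The marker syntax below is hypothetical; real generators define their own:

```python
import re

# Protected regions are delimited by markers of the (hypothetical) form
# "// PROTECTED <id> START" ... "// PROTECTED <id> END". On regeneration,
# the hand-written bodies are carried over from the previous file version.

REGION = re.compile(
    r"// PROTECTED (\w+) START\n(.*?)// PROTECTED \1 END",
    re.DOTALL,
)

def extract_regions(text):
    """Map each protected-region id to the hand-written body it contains."""
    return {m.group(1): m.group(2) for m in REGION.finditer(text)}

def merge_protected(previous, regenerated):
    """Re-insert hand-written region bodies into freshly generated output."""
    kept = extract_regions(previous)

    def fill(m):
        body = kept.get(m.group(1), m.group(2))
        return (f"// PROTECTED {m.group(1)} START\n"
                f"{body}"
                f"// PROTECTED {m.group(1)} END")

    return REGION.sub(fill, regenerated)
```

The merge step makes regeneration safe, but the mixed file itself still has to be versioned as a whole, which is exactly the redundancy problem described above.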

     This redundant code can lead to inconsistencies during team development, and the problem grows with the size of the team.

      Merge conflicts between developers should be detected and resolved exclusively in the application sources in the repository, not in generated artifacts.

      It should be emphasized again that, in our opinion, the separation between generated and non-generated code is the preferable approach and should be attempted in any case.

5. Modeling in a team and versioning of partial models

      Large systems must be partitioned: their constituent parts or subsystems are developed more or less independently of each other, and interfaces define how the parts interact. A subsequent integration step brings the pieces together. This approach is especially useful if the parts are developed at different locations or by different teams. Of course, this primarily affects the development process and communication within the team, and possibly the system architecture as well.

1. Partitions and subdomains

     First, it is important to point out the difference between partitioning and the use of subdomains.

Subdomains divide the system by aspect. Each subdomain has its own metamodel and DSL, and the different metamodels are conceptually unified through gateway metaclasses. In the context of enterprise systems, typical subdomains are business processes, persistence, and the GUI.
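The gateway-metaclass idea can be sketched as follows. The persistence and GUI "metamodels" below are hypothetical; the point is that they share only one metaclass, through which their models can refer to the same concept:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EntityRef:
    """Gateway metaclass: the only concept shared by both subdomains."""
    name: str

@dataclass
class Table:
    """Persistence-subdomain metaclass (illustrative)."""
    entity: EntityRef
    columns: list

@dataclass
class Form:
    """GUI-subdomain metaclass (illustrative)."""
    entity: EntityRef
    fields: list

# Both subdomain models refer to the same entity via the gateway metaclass:
customer = EntityRef("Customer")
table = Table(customer, ["id", "name"])
form = Form(customer, ["name"])
```

Neither subdomain metamodel knows anything about the other; consistency between them is established only through the shared `EntityRef`.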

Partitions, by contrast, divide the system into parts. For reasons of complexity or efficient project organization, a large number of technically similar requirements is decomposed into separate parts that can be integrated via interfaces.

2. Different versions of the generative architecture

      If different versions of the generative architecture are used in different partitions of a project, the question arises whether the generated artifacts will work together. In general, this means that integration has to happen at the level of the generated code. As an example, assume that different versions of the generative infrastructure are used, all of which produce parts of one comprehensive J2EE application.

3. Evolution of the DSL

      DSLs typically continue to be developed over the course of a project: they evolve as knowledge and understanding of the domain grow and deepen. To keep this manageable, the DSL should remain backward compatible as it evolves.
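One common way to achieve backward compatibility in practice is to version the models explicitly and migrate old ones step by step before generation. The field names and version numbers below are hypothetical:

```python
# Sketch: models carry a schemaVersion field, and the generator upgrades
# outdated models through a chain of single-step migrations.

def v1_to_v2(model):
    """Hypothetical migration: v1 of the DSL named the entity list 'types'."""
    model = dict(model)
    model["entities"] = model.pop("types")
    model["schemaVersion"] = 2
    return model

MIGRATIONS = {1: v1_to_v2}
CURRENT_VERSION = 2

def migrate(model):
    """Upgrade a model dict step by step to the current DSL version."""
    while model.get("schemaVersion", 1) < CURRENT_VERSION:
        model = MIGRATIONS[model.get("schemaVersion", 1)](model)
    return model
```

With such a chain in place, teams can upgrade the DSL without forcing every partition to update its models at the same moment.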

4. Partitioning and integration

    Suppose different teams need the same interface, perhaps because one team implements a component that uses code from another team.

     If development is model-driven, the shared interface must then appear in at least two of the models. This approach is not ideal, because duplicating the information in both models raises consistency issues. Depending on the tooling, other options exist.

1). Integration in the model

      If the modeling tool supports it, you should ensure that the interface exists in only one place and is referenced from both models. From the generator's point of view, this will produce a consistent model.

     Whether this approach is possible depends on the modeling tool. Among UML tools, a repository-based tool that supports distributed modeling is the ideal choice.

2). Integration via model synchronization in the generator

      Integration can also occur at the generator level if the modeling tool does not provide a suitable integration option. The generator reads several input models, each of which contains certain model elements.

     In this case, it is the generator's task to resolve any resulting consistency problems.
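Such a consistency check can be sketched as follows: the generator compares every interface that appears in more than one model and reports conflicting declarations. The model and interface names are illustrative:

```python
# Sketch: each model maps interface names to a declaration signature.
# Duplicated interfaces must be declared identically in every model.

def check_duplicates(models):
    """Return (interface, first_model, conflicting_model) for each mismatch."""
    seen = {}       # interface name -> (model it was first seen in, signature)
    conflicts = []
    for model_name, interfaces in models.items():
        for iface, signature in interfaces.items():
            if iface in seen and seen[iface][1] != signature:
                conflicts.append((iface, seen[iface][0], model_name))
            else:
                seen[iface] = (model_name, signature)
    return conflicts
```

The generator can then abort the build on any non-empty conflict list, so inconsistencies between duplicated interfaces are caught before code is generated.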

3). Integration by reference in the generator

     A further integration option is to use references: instead of duplicating the interface, one model merely references the interface defined in the other, and the generator resolves the reference when it processes the models.
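Reference resolution in the generator can be sketched as a simple lookup across all loaded models. The model structure and names below are hypothetical:

```python
# Sketch: a model holds only the name of an interface defined elsewhere;
# the generator resolves that name against all loaded models.

def resolve(reference, models):
    """Look up an interface definition by name across all loaded models."""
    for model in models:
        if reference in model["interfaces"]:
            return model["interfaces"][reference]
    raise KeyError(f"unresolved reference: {reference}")
```

Because only one model defines the interface, the duplication and consistency problems of the previous approaches do not arise; the cost is that the generator must load all referenced models.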


Origin blog.csdn.net/zhb15810357012/article/details/131273081