Application Architecture COLA 2.0

Colleagues have given me feedback more than once that our systems are messy, mainly in two areas:

  • The hierarchical structure of the application is chaotic: it is unclear how the application should be layered, which components it should contain, and how those components relate to each other;
  • There is a lack of normative guidance and constraints: when adding a new piece of business logic, it is unclear where it should go (which class, which package) and what it should be named.

Solving these problems was one of the original motivations for creating COLA: to explore a practical set of application architecture specifications. This specification is not abstract talk, but concrete guidance and constraints that can be copied, understood, and implemented, and that keep complexity under control.

Since the birth of COLA, I have received many opinions and suggestions. In my own practice I have also found many shortcomings in COLA 1.0: some designs are redundant and not really necessary, while some key elements are not covered. For example, my recent thinking about the application architecture core and the governance of complex business code is partly a reflection on COLA 1.0.

Combining this practical exploration with continued thinking about complexity governance, I decided to give COLA a comprehensive upgrade, and the result is COLA 2.0.
From 1.0 to 2.0 is not just a change of version number; it is an upgrade of the architecture and design concepts. The main changes include:

  • New architecture layering: the Domain layer no longer depends directly on the Infrastructure layer.
  • New component division: the components are redefined and re-divided; new components are added and some old ones (Validator, Converter, etc.) are removed.
  • New extension point design: new concepts are introduced to make extension more flexible.
  • New positioning of the second-party library: the second-party library is no longer just DTOs; it is also a lightweight expression and implementation of the Domain Model.

New architecture layering

In COLA 1.0, our layering was the classic layered structure shown in the figure below:
image.png

In COLA 2.0, these layers remain, but the dependencies have changed. The Domain layer no longer depends directly on the Infrastructure layer; instead, the concept of a Gateway is introduced, and DIP (the Dependency Inversion Principle) is used to invert the dependency between the Domain and Infrastructure layers, as shown in the following figure:
image.png

The advantage is that the Domain layer becomes purer: it is completely free of technical details (and the complexity they bring) and can focus solely on business logic.

In addition, there are two further benefits:
1. Parallel development: as long as the interface between Domain and Infrastructure is agreed upon, two developers can write the Domain and Infrastructure code in parallel.

2. Testability: the Domain, free of external dependencies, consists entirely of POJO classes. Unit testing becomes very convenient, which makes it well suited to TDD.
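To make the inversion concrete, here is a minimal, self-contained sketch. All names (CustomerGateway, Customer, InMemoryCustomerGateway) are illustrative, not taken from the COLA source: the Domain layer owns the gateway interface, and the Infrastructure layer implements it.

```java
// Domain layer: a pure gateway interface, no infrastructure dependency.
interface CustomerGateway {
    Customer getById(String customerId);
}

// Domain entity: a plain POJO, trivially unit-testable.
class Customer {
    private final String id;
    private final String name;

    Customer(String id, String name) {
        this.id = id;
        this.name = name;
    }

    public String getId() { return id; }
    public String getName() { return name; }
}

// Infrastructure layer: implements the Domain-owned interface, so the
// dependency arrow points from Infrastructure to Domain (DIP). A real
// implementation would wrap a database or RPC call; an in-memory map
// keeps this sketch self-contained.
class InMemoryCustomerGateway implements CustomerGateway {
    private final java.util.Map<String, Customer> store = new java.util.HashMap<>();

    public void save(Customer customer) {
        store.put(customer.getId(), customer);
    }

    @Override
    public Customer getById(String customerId) {
        return store.get(customerId);
    }
}
```

Because the Domain only sees the interface, a test (or a second developer) can substitute any implementation without touching domain code.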

New component division

Definition of modules and components

First, let's clarify what we mean by Component. In Java (and in this article), the scope of a component is a Java package.

There is also the word Module, and the two concepts are easily confused. For example, in "Implementing Domain-Driven Design", the author writes:

If you are using Java or C#, you are already familiar with Modules, though you know them by another name. Java calls them packages. C# calls them namespaces.

He considers a Module to be a Package, and I think this definition invites confusion, especially when using Maven: in Maven, a Module is an Artifact, usually a Jar rather than a Package. For example, the COLA framework includes the following four Modules:

<modules>
	<module>cola-common</module>
	<module>cola-core</module>
	<module>cola-extension</module>
	<module>cola-test</module>
</modules>

Indeed, Module and Component are very similar concepts and easily confused. For example, there is a question on StackOverflow about the difference between Module and Component, and the most upvoted answer distinguishes them by scope:

The terms are similar. I generally think of a “module” as being larger than a “component”. A component is a single part, usually relatively small in scope.

This answer matches my intuition that a Module is bigger than a Component. Based on the above, I define Module and Component as follows, and this article will follow this definition and notation.

  • Module: consistent with Maven's definition of a Module; simply put, a Jar. Represented by a cube.
  • Component: similar to the definition in UML; simply put, a Package. Represented by a UML component diagram.

A Module is usually composed of multiple Components; their relationship and representation are shown in the following figure:
image.png

COLA 2.0 components

In COLA 2.0, we redesigned the components, introducing some new ones and removing some old ones. The purpose of these changes is to make the application structure clearer and the component responsibilities more explicit, so as to provide better guidance and constraints for development.

The new component structure is shown below:
image.png

Each component has its own scope of responsibility. These responsibilities are an important part of COLA; they are the "guidance and constraints" mentioned above. In detail:

  1. Second-party library components

    • api: the application's externally exposed interfaces.
    • dto.domainmodel: lightweight domain objects used for data transfer.
    • dto.domainevent: domain events used for data transfer.
  2. Components in the Application layer

    • service: the facade for the exposed interfaces; contains no business logic, and may contain adapters for different client terminals.
    • eventhandler: handles domain events, both from this domain and from other domains.
    • executor: processes commands (Command) and queries (Query); for complex business, an executor may contain Phase and Step.
    • interceptor: COLA's AOP handling mechanism for all requests.
  3. Components in the Domain layer

    • domain: domain entities; may inherit from domainmodel (in the second-party library).
    • domainservice: domain services, providing coarser-grained domain capabilities.
    • gateway: gateway interfaces for external dependencies, including storage, RPC, search, etc.
  4. Components in the Infrastructure layer

    • config: configuration-related code.
    • message: message-handling-related code.
    • repository: storage-related code; a specialization of gateway, mainly used for CRUD operations on this domain's data.
    • gateway: implementations of the external-dependency gateway interfaces (the gateway in Domain).

When using COLA, please try to build your application within these component constraints. This keeps the application structure clear and rule-based, which greatly improves the maintainability and understandability of the code.
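As a concrete illustration of where a piece of logic lands under these constraints, here is a sketch of a command and its executor in the Application layer. All names (AddCustomerCmd, AddCustomerCmdExe) are hypothetical, not from the COLA source; the executor orchestrates the flow and delegates the real work to the Domain via a gateway.

```java
// Hypothetical command object: carries input data, no logic.
class AddCustomerCmd {
    private final String customerName;

    AddCustomerCmd(String customerName) {
        this.customerName = customerName;
    }

    public String getCustomerName() { return customerName; }
}

// Hypothetical executor in the Application layer: one class per command,
// which keeps responsibilities small and easy to locate. In a real COLA
// application the Domain gateway would be injected by Spring; a
// functional stand-in keeps this sketch self-contained.
class AddCustomerCmdExe {
    private final java.util.function.Consumer<String> customerGateway;

    AddCustomerCmdExe(java.util.function.Consumer<String> customerGateway) {
        this.customerGateway = customerGateway;
    }

    public String execute(AddCustomerCmd cmd) {
        // validate input, then delegate persistence to the gateway
        if (cmd.getCustomerName() == null || cmd.getCustomerName().isEmpty()) {
            return "FAIL: customerName is required";
        }
        customerGateway.accept(cmd.getCustomerName());
        return "SUCCESS";
    }
}
```

The point of the one-command-one-executor convention is that a newcomer can answer "where does this logic live?" mechanically, which is exactly the kind of constraint the component division aims for.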

New extension point design

Introduce new concepts

Before the discussion, let's clarify the new concepts introduced in the COLA 2.0 extension design: business, use case, and scenario.

  • Business: a unit that operates independently and is responsible for its own profit and loss. For example, Tmall, Taobao, and Retail are three different businesses.
  • Use Case: describes the interaction between the user and the system; each use case provides one or more scenarios. For example, paying for an order is a typical use case.
  • Scenario: a scenario, also called an instance of a use case, covers all possible situations (normal and abnormal) of a use case. For example, for the "order payment" use case, there are multiple scenarios such as "can use Huabei", "insufficient Alipay balance", and "insufficient bank account balance".

Simply put, a business is composed of multiple use cases, and a use case is composed of multiple scenarios. Taking Taobao as a simple example, the relationship between business, use case, and scenario is as follows:
image.png

Implementation of new extension points

In COLA 2.0, the implementation mechanism of extensions has not changed; the main change lies in the new concepts introduced above. Because COLA 1.0's extension design came from Transwarp, the original extension granularity also copied Transwarp's "business identity". The extension positioning of COLA 1.0 is shown in the figure below:
image.png

However, in real work, situations that must support multiple businesses, as Transwarp does, are not common. Much more common is differentiated support for different use cases, or for different scenarios within the same use case. For example, "create product" and "update product" are two use cases; most of their business code can be reused, and only a small part needs to differ.

To support this finer-grained extension, in addition to the previous "business identity" (BizId), I introduced the two concepts of Use Case and Scenario. The new extension positioning is shown below:
image.png

As you can see, where the old extension framework only supported extension by "business identity", the new one supports three-level extension by "business identity", "use case", and "scenario". This is undoubtedly more flexible, and also better than before in expressiveness and understandability.

For example, under the new extension framework, to extend user identity verification for the Tmall business, the place-order use case, and the 88VIP scenario (as shown in the figure above), we only need to declare an extension implementation (Extension) as follows:

@Extension(bizId = "tmall", useCase = "placeOrder", scenario = "88vip")
public class IdentityCheck88VipExt implements IdentityCheckExtPt {

    // 88VIP-specific identity verification logic goes here

}

New positioning of the second-party library

On the surface, the positioning of the second-party library looks like a simple problem: a service's second-party library merely exposes interfaces and transfers data (DTOs). Thinking deeper, though, it is not simple at all, because it involves collaboration between different bounded contexts. It is an important architectural question: how should different services (SOA, RPC, microservices: different names, same essence) collaborate in a distributed environment?

Cooperation between Bounded Context

There is a methodology for how different domains can collaborate while each preserves the integrity of its own concepts. Broadly, there are two approaches: the Shared Kernel and the Anticorruption Layer (ACL).

1. Shared Kernel

It’s possible that only one of the teams will maintain the code, build, and test for what is shared. A Shared Kernel is often very difficult to conceive in the first place, and difficult to maintain, because you must have open communication between teams and constant agreement on what constitutes the model to be shared.

image.png

The above quote on the Shared Kernel is from "Domain-Driven Design Distilled" by Vaughn Vernon. Its advantage is the sharing (less duplicated construction); its disadvantage is also the sharing (tight coupling between teams).

2. Anticorruption Layer (ACL)

An Anticorruption Layer is the most defensive Context Mapping relationship, where the downstream team creates a translation layer between its Ubiquitous Language (model) and the Ubiquitous Language (model) that is upstream to it.

image.png

Also from "Domain-Driven Design Distilled": the anticorruption layer is the most thorough form of isolation. Its advantage is that nothing is shared (complete decoupling and independence); its disadvantage is also that nothing is shared (there is a certain translation cost).

My view is similar to Vernon's: we both favor the anticorruption layer, because the added cost of semantic translation is well worth paying for the maintainability and understandability of the system.

Whenever possible, you should try to create an Anticorruption Layer between your downstream model and an upstream integration model, so that you can produce model concepts on your side of the integration that specifically fit your business needs and that keep you completely isolated from foreign concepts.

Repositioning the second-party library

In most cases, a second-party library is indeed used to define service interfaces and data protocols. But the difference between a second-party library and JSON is that it is not just a protocol: it is also Java objects, a Jar package.

Since they are Java objects, DTOs can carry more capability than getters and setters. This had not caught my attention before, but while thinking about the domain model recently, I realized that we can let the second-party library take on more responsibility and play a bigger role.

In fact, within Alibaba I found that some teams are already doing this, and I think the effect is quite good. For example, the category second-party library from the middle platform (Zhongtai) is a good demonstration. Category is among the more complex logic in Product, involving a lot of computation. Let's look at how the category second-party library's code is written:

public class DefaultStdCategoryDO implements StdCategoryDO {

    private int categoryId;
    private String name;
    private DefaultStdCategoryDO parent;
    private ArrayList<StdCategoryDO> children;

    @Override
    public boolean isRoot() {
        return this.parent == null;
    }

    @Override
    public boolean isLeaf() {
        return this.getChildren().isEmpty();
    }

    @Override
    public List<? extends StdCategoryDO> getChildren() {
        return this.children;
    }

    @Override
    public String getCategoryNamePath(String sep) {
        List<? extends DefaultStdCategoryDO> path = this.getPathList();
        StringBuilder sb = new StringBuilder();
        for (DefaultStdCategoryDO c : path) {
            if (sb.length() > 0) {
                sb.append(sep);
            }
            sb.append(c.getName());
        }
        return sb.toString();
    }

    // omitted...
}

From the code above, we can see that this goes far beyond the scope of a DTO: it is a Domain Model (with data, behavior, and inheritance). Is that appropriate? I think it is:

  • First, all the data DefaultStdCategoryDO uses is self-contained; its calculations need no outside help. For example, judging whether a category is the root or a leaf, or obtaining a category's name path, can all be done by the object itself.
  • Second, this is a kind of Shared Kernel: I expose my domain knowledge (language, data, and behavior) through the second-party library. If 100 applications need isRoot(), none of them has to implement it themselves.

What? Didn't I just say the Shared Kernel is not recommended? (Well, only children insist that everything is either right or wrong.) I think the shared kernel is positively valuable here, especially in a scenario like category, which is data-light and computation-heavy. However, the tight coupling that sharing brings is a real problem. So if I were a consumer of the category service, I would wrap Category in a Wrapper: that way I can reuse its domain capabilities while the Wrapper also isolates me from it (anticorruption).
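Here is a minimal sketch of such a Wrapper. UpstreamCategory is a stand-in for the StdCategoryDO interface above, and MyCategory is a hypothetical consumer-side class: the consumer reuses the upstream capability while exposing only its own vocabulary.

```java
// Stand-in for the upstream StdCategoryDO interface from the example above.
interface UpstreamCategory {
    boolean isRoot();
    String getCategoryNamePath(String sep);
}

// Hypothetical consumer-side wrapper: reuses the upstream domain capability
// while acting as an anticorruption layer, so upstream changes stay
// contained in this one class.
class MyCategory {
    private final UpstreamCategory delegate;

    MyCategory(UpstreamCategory delegate) {
        this.delegate = delegate;
    }

    // Expose only what this bounded context needs, in its own language.
    public boolean isTopLevel() {
        return delegate.isRoot();
    }

    public String breadcrumb() {
        return delegate.getCategoryNamePath(" > ");
    }
}
```

If the upstream team renames isRoot() or changes its semantics, only MyCategory needs to change; the rest of the consumer's code keeps speaking its own language.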

Two-party library in COLA

Having said all that, you should already understand my attitude toward the second-party library. Yes, a second-party library should be more than interfaces and DTOs: it is an important part of the domain, and an important means of implementing a Shared Kernel.

Therefore, in COLA 2.0 I intend to expand the scope of responsibility of the second-party library, mainly in two respects:

  1. The domain model in the second-party library is an important part of the domain; it is a "lightweight" expression of domain capability. "Lightweight" means the expression is self-contained and sufficiently cohesive, like the StdCategoryDO case above. Of course, the expression of capability still needs to follow the Ubiquitous Language.

  2. Collaboration between different Bounded Contexts should make full use of the second-party library as a bridge. The way of collaboration is shown in the figure below.
    image.png

Note that this is only a recommendation, not a standard. In fact, we always have to make a trade-off between sharing and coupling; there is no perfect architecture or perfect design. Whether this fits your case is something you must decide based on your actual situation.

COLA framework extension mechanism (easter egg)

At this point, I have covered the changes in COLA 2.0. As an easter egg, let me also explain how COLA itself, as a framework, supports extension.

A framework is integrated into a system as a component to complete specific tasks. For example, logback, as a logging framework, solves problems such as printing logs, log formats, and log storage. However, a framework cannot anticipate every application scenario, such as the log format or log archiving method you want. These places need an extension mechanism that lets users configure and extend the framework themselves.

As far as implementation is concerned, there are generally two ways to extend: extension based on interfaces, and extension based on data configuration.

Interface-based extension

Interface-based extension mainly uses object-oriented polymorphism: the framework defines an interface (or abstract method) and a template for processing it, and users provide their own implementations. The principle is shown in the figure below:
image.png

This extension style is widely used in frameworks. For example, with ApplicationListener in Spring, users can implement the listener to do special processing after container initialization. Likewise, with AppenderBase in logback, users can inherit from AppenderBase to implement a customized Appender (for instance, one that sends logs to a message queue).

COLA, as a framework, is no exception. For example, we have an ExceptionHandlerI interface, for which the framework provides a default implementation:

public class DefaultExceptionHandler implements ExceptionHandlerI {

    private Logger logger = LoggerFactory.getLogger(DefaultExceptionHandler.class);

    public static DefaultExceptionHandler singleton = new DefaultExceptionHandler();

    @Override
    public void handleException(Command cmd, Response response, Exception exception) {
        buildResponse(response, exception);
        printLog(cmd, response, exception);
    }

    private void printLog(Command cmd, Response response, Exception exception) {
        if (exception instanceof BaseException) {
            //biz exception is expected, only warn it
            logger.warn(buildErrorMsg(cmd, response));
        }
        else {
            //sys exception should be monitored, and pay attention to it
            logger.error(buildErrorMsg(cmd, response), exception);
        }
    }
}

However, not every application will want this arrangement, so we provide an extension point: when a user supplies their own ExceptionHandlerI implementation, the user's implementation is preferred; otherwise the default is used:

public class ExceptionHandlerFactory {

    public static ExceptionHandlerI getExceptionHandler() {
        try {
            return ApplicationContextHelper.getBean(ExceptionHandlerI.class);
        }
        catch (NoSuchBeanDefinitionException ex) {
            return DefaultExceptionHandler.singleton;
        }
    }
}

Extension based on data configuration

For extension based on configuration data, we must first agree on a data format, and then assemble the data provided by the user into an instance object. The user-provided data becomes attributes of the object (sometimes it may even be a class, such as StaticLoggerBinder in slf4j). The principle is shown in the figure below:
image.png

The KV configuration we typically use in applications belongs to this form, and frameworks use it in many places, for example the logback.xml configuration for log format and log size in logback, mentioned above.

In COLA, configuring extension points through the annotation @Extension(bizId = "tmall", useCase = "placeOrder", scenario = "88vip") is also a typical data-configuration-based extension.
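To show how such annotation-carried data can be read back at runtime, here is a minimal sketch. This @Extension declaration and the ExtensionReader are simplified stand-ins, not the real COLA annotation or registry.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Simplified stand-in for COLA's @Extension: the annotation carries the
// positioning data (bizId / useCase / scenario) as plain strings.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface Extension {
    String bizId();
    String useCase();
    String scenario();
}

@Extension(bizId = "tmall", useCase = "placeOrder", scenario = "88vip")
class IdentityCheck88VipExtDemo {}

class ExtensionReader {
    // The framework side: read the configuration data off the class at
    // runtime and build the coordinate used to register/locate the extension.
    static String coordinateOf(Class<?> clazz) {
        Extension ext = clazz.getAnnotation(Extension.class);
        return ext.bizId() + "." + ext.useCase() + "." + ext.scenario();
    }
}
```

The annotation is pure data; a registry built this way can look up the most specific extension for a given business, use case, and scenario at dispatch time.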

How to use COLA 2.0

Source code

The source code of COLA 2.0 is at: https://github.com/alibaba/COLA/

Generate COLA application

COLA 2.0 provides two Archetypes: one for a pure back-end application and one for a web back-end application. The only difference is that the web back-end application has an extra Controller module. I have uploaded the Archetypes to the Maven repository, and you can generate a COLA application with the following commands:

  1. Generate a pure back-end application (no Controller):
mvn archetype:generate  -DgroupId=com.alibaba.demo -DartifactId=demo -Dversion=1.0.0-SNAPSHOT -Dpackage=com.alibaba.demo -DarchetypeArtifactId=cola-framework-archetype-service -DarchetypeGroupId=com.alibaba.cola -DarchetypeVersion=2.0.1
  2. Generate a web back-end application (with Controller):
mvn archetype:generate  -DgroupId=com.alibaba.demo -DartifactId=demo -Dversion=1.0.0-SNAPSHOT -Dpackage=com.alibaba.demo -DarchetypeArtifactId=cola-framework-archetype-web -DarchetypeGroupId=com.alibaba.cola -DarchetypeVersion=2.0.1

Assume the newly created application is called demo. After running the command you will see the following module structure: the upper part is the application skeleton and the lower part is the COLA framework.
image.png

The generated application contains some demo code, which can be tested directly with "mvn test". If it is a web back-end application, you can run TestApplication to start the Spring Boot container, then access the service directly via the REST URL http://localhost:8080/customer?name=Alibaba.

COLA 2.0 overall architecture

Finally, following the usual convention, here are two overall architecture views, so that you can grasp COLA as a whole.

Note: COLA has two meanings. One is COLA the framework, which provides support for common components that applications need. The other is the COLA architecture, the architecture of the application skeleton generated by the COLA Archetype. The architecture views given here are application architecture views.

Dependent view

image.png

Call view

image.png


Origin blog.csdn.net/significantfrank/article/details/100074716