32 | Architecture: High-level design of the system

Our second chapter, “Desktop Development,” is almost over. Today we return to the topic of architecture.

Infrastructure and business architecture

For an architect, architecture work can be roughly divided into two parts: one is infrastructure, and the other is business architecture.

Infrastructure, simply put, is technology selection. Choosing an operating system to support, choosing a programming language, choosing a technology framework, choosing third-party libraries: all of these boil down to infrastructure work.

Infrastructure work tests the ability to choose. Behind it lie technological foresight and judgment, which is not simple. Most architects tend to focus more on business architecture, but infrastructure actually has a wider impact, and the cost of a wrong choice is higher.

The gap between architects shows even more in their attitude toward infrastructure and their ability to build it. A truly strong architect attaches great importance to the team's technology selection and the construction of its basic platform. The "big middle platform, small front office" advocated by Alibaba is essentially advocacy for building basic platforms, continuously reducing the cost of business development and enhancing the enterprise's capacity for innovation.

Business architecture, simply put, is the ability to decompose a business system. Infrastructure is itself a kind of decomposition of the business system, but it splits off the parts that are almost unrelated to business attributes, forming domain-independent infrastructure. Business architecture is more about decomposing the domain problem itself.

Once we talk about business architecture, we cannot avoid understanding the domain problem. The domain problem is the set of common needs faced by the user groups in that field, so we need to analyze user needs.

In Chapter 1, we have already talked about requirements analysis:

17 | Architecture: Requirements Analysis (Part 1)

18 | Architecture: Requirements Analysis (Part 2) - Practical Case

This is the first step in business architecture. Without requirements analysis there is no business architecture. In the process of business architecture, at least one third of the effort should be spent on requirements analysis.

Today, let’s talk about the second step of architecture: the outline design of the system, referred to as system design.

System design, simply put, is the ability to “decompose a system”. The core thing to do at this stage is to clarify the responsibility boundaries and interface protocols of the subsystems and build the overall framework of the entire system.

So how to decompose the system?

First of all, we need to clarify the criteria for judging the pros and cons of the decomposition system. In other words, we need to know what kind of system decomposition is good and what kind of decomposition is bad.

The simplest basis for judgment is the following two core points:

The usage interface of a function should match the natural expectations of the business need as closely as possible;

The implementation of functions should be highly cohesive and the coupling between functions should be as low as possible.

There are multiple levels of organizational units in a software system: subsystems, modules, classes, methods/functions. How are subsystems broken down into modules? How are modules broken down into more specific classes or functions? The decomposition method of each layer follows the same routine. That is the methodology of decomposing the system.

The interface should naturally reflect business needs

Let's first look at a function's usage interface.

What is a usage interface?

For a function, its usage interface is the function prototype.

package packageName

func FuncName(
  arg1 ArgType1, ..., argN ArgTypeN
  ) (ret1 RetType1, ..., retM RetTypeM)

It contains three parts of information.

Function name. Strictly speaking, it is the full name of the function including the namespace in which the function is located. For example, the above example is packageName.FuncName.

Input parameter list. Each parameter has a parameter name and a parameter type.

Output result list. Each output result has a result name and a result type. Of course, in many languages a function has a single return value, that is, only one output result. In this case the output result has no name, only a result type, also called the return value type.
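As a concrete illustration of these three parts, here is a small Go function; the name Divide and its signature are invented for this example, not taken from the column's code:

```go
package main

import (
	"errors"
	"fmt"
)

// Divide's prototype carries all three parts of a function's usage
// interface: the full name (main.Divide), the input parameter list
// (a, b), and the named output result list (quotient, err).
func Divide(a float64, b float64) (quotient float64, err error) {
	if b == 0 {
		return 0, errors.New("division by zero")
	}
	return a / b, nil
}

func main() {
	q, err := Divide(10, 4)
	fmt.Println(q, err) // 2.5 <nil>
}
```

Note that a caller can use Divide correctly from the prototype alone, without reading its body: that is what it means for the usage interface to reflect the business need.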

For a class, its usage interface is the public properties and methods of the class.

package packageName

type ClassName struct {
  Prop1 PropType1
  ...
  PropK PropTypeK
}

func (receiver *ClassName) MethodName1(
   arg11 ArgType11, ..., arg1N1 ArgType1N1
  ) (ret11 RetType11, ..., ret1M1 RetType1M1)

...

func (receiver *ClassName) MethodNameL(
   argL1 ArgTypeL1, ..., argLNL ArgTypeLNL
  ) (retL1 RetTypeL1, ..., retLML RetTypeLML)

It contains the following contents.

Type name. Strictly speaking, this is the full name including the namespace in which the type lives. In the example above, it is packageName.ClassName.

List of public properties. Each property has a property name and a property type. Go has limited support for properties, expressing them directly as member variables of the type. Some languages, such as JavaScript, have more advanced support, allowing get/set methods to be defined for a property; this makes three kinds of properties possible: read-only, write-only, and read-write.

List of public methods.

Methods and functions are essentially the same, with some differences in details. This is reflected in the following points.

The namespaces are different. The full function name of an ordinary function is packageName.FuncName, and the full method name of a method is in the form packageName.(*ClassName).MethodName.

Compared with a function, a method has one extra concept called the receiver, which is the object the method acts on. In Go the receiver is expressed explicitly, but in most languages it is hidden, usually named this or self.
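To make the receiver concrete, here is a minimal Go sketch; the Rect type and its Area method are hypothetical, invented for illustration:

```go
package main

import "fmt"

// Rect is an illustrative class: its usage interface is the public
// property list (Width, Height) plus the public method list (Area).
type Rect struct {
	Width  float64
	Height float64
}

// Area's full name is main.(*Rect).Area. The receiver r is the
// object the method acts on, which many languages hide behind a
// keyword such as this or self.
func (r *Rect) Area() float64 {
	return r.Width * r.Height
}

func main() {
	r := &Rect{Width: 3, Height: 4}
	fmt.Println(r.Area()) // 12
}
```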

For a module, the usage interface is quite diverse; it depends on the module type. Typical module types include the following.

Package. In some languages this is also called a static library.

Dynamic library. In Go this has a special name: plugin.

Executable program (application).

Packages and dynamic libraries are both forms of code release, but their standards are set differently. Packages are generally defined by the programming language and are friendlier to developers. Dynamic libraries are generally defined by the operating system and can be cross-language, but they are often not very developer-friendly. Why not? Because they define cross-language standards for symbols and types, which means they can only use the parts common to multiple programming languages.

For executable programs (applications), there are many situations. The most common types of executable programs are:

Network service program (service);

Command-line application;

Desktop application (GUI application).

For a network service program (service), the usage interface is the network protocol. We defined the drawing server's network protocol earlier, in part (4) of the "drawing" program practice.

For a command-line application, the usage interface includes:

Command line, including: command name, switch list, parameter list. For example: CommandName -Switch1 ... -SwitchN Arg1 ... ArgM.

Standard input (stdin).

Standard output (stdout).

For a desktop program (GUI application), the usage interface is the way users operate it. The visual appearance of a desktop program is of course important, but it is not the most important thing. What matters most is the interaction paradigm: the definition of the business flow through which users complete a function. The reason we specifically introduce a role like the product manager to define the product is precisely the importance of the usage interface.

All of the organizational units above exist physically. That leaves one last concept: the subsystem. In actual development, a subsystem has no corresponding physical entity; it exists only in architecture design documents.

So how do you understand subsystems?

Subsystem is a logical concept, which may physically correspond to one module (Module) or multiple modules. You can understand the subsystem as a logical big module (Big Module). We will also define its usage interface for this big module.

There are two common situations in which subsystems and modules correspond.

In one case, which is also the most common case, the subsystem consists of a root module (master control module) and several sub-modules. The usage interface of the subsystem is the usage interface of the root module.

In the other case, the subsystem is composed of multiple similar modules. For example, in an Office program, the IO subsystem is composed of many similar modules: Word document reading and writing, HTML document reading and writing, TXT document reading and writing, PDF document reading and writing, and so on. These modules often share a unified usage interface.
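A minimal Go sketch of such a unified usage interface might look like this; the DocReader interface and the txtReader module are hypothetical names, not the column's actual code:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// DocReader is a hypothetical unified usage interface for an Office
// IO subsystem: every format module (Word, HTML, TXT, PDF, ...)
// implements the same interface, so the modules stay interchangeable.
type DocReader interface {
	CanRead(filename string) bool
	Read(filename string) (text string, err error)
}

// txtReader is one such module; the others would follow the same shape.
type txtReader struct{}

func (txtReader) CanRead(filename string) bool {
	return strings.HasSuffix(filename, ".txt")
}

func (txtReader) Read(filename string) (string, error) {
	// Stub: real code would open and parse the file.
	return "contents of " + filename, nil
}

// readAny dispatches to whichever registered module accepts the file.
func readAny(readers []DocReader, filename string) (string, error) {
	for _, r := range readers {
		if r.CanRead(filename) {
			return r.Read(filename)
		}
	}
	return "", errors.New("unsupported format: " + filename)
}

func main() {
	readers := []DocReader{txtReader{}}
	text, err := readAny(readers, "a.txt")
	fmt.Println(text, err)
}
```

Adding a new format then means adding one module that satisfies DocReader, without touching the subsystem's callers.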

Through the above explanation of the usage interfaces of subsystems, modules, classes, and functions, you will find that they actually have something in common. They all define methods to complete business requirements, but the levels of how to meet the requirements are different. Classes and functions complete business through language-level function calls, network service programs complete business through network RPC requests, and desktop programs complete business through user interaction.

Once you understand this, you can easily see the meaning behind the sentence "the usage interface of a function should match the natural expectations of the business need as closely as possible."

Whether a programmer's system decomposition ability is strong can actually be seen at a glance. You do not need to look at implementation details; you only need to look at the usage interfaces of the modules, classes, and functions they define. If there are many functions with unclear business intent, or many modules and classes with unclear responsibilities, you know they are basically still at the brick-moving stage.

Whether it is a subsystem, module, class or function, it has its own business boundaries. Whether its responsibilities are single and clear enough, whether the interface is simple and clear enough, and whether it naturally embodies business needs (even without the need for additional documentation), these all reflect the power of the architecture.

Functional implementation guidelines: high cohesion and low coupling

In the system decomposition routine, in addition to the user interface of the function itself, we also pay attention to how functions are connected. Of course this involves the implementation of functions.

The basic criteria for function implementation are: the code of the function itself should be highly cohesive and the coupling between functions should be low.

What is high cohesion? Simply put, the code for a function should be written together as much as possible instead of scattered everywhere. The habits I have developed personally in the direction of high cohesion are:

The code of a function should be kept in a separate file as much as possible, and should not be mixed with other functions;

The code for several small functions may be placed together in the same file, but separated by comment lines such as "// ------------------" into logical "small files", each representing an independent small function.

The advantage of highly cohesive code is that team collaboration goes smoothly and code submissions rarely conflict.

So what is low coupling? To put it simply, it means that the implementation of a certain function relies on less external environment and is easy to build.

There are two types of external dependencies for function implementation. One is dependence on basic components that have nothing to do with the business, and the other is dependence on underlying business modules.

The basic components may be open source projects, or they may come from the company's basic platform department. Regarding the dependencies on basic components, our core focus is stability. Stability is reflected in the following two aspects.

One aspect is the component's maturity. How long has it existed? Has its interface stopped changing frequently? Are there relatively few functional defects (issues)?

The other aspect is the component's persistence. Who maintains it? Do they have a good enough reputation in the community? Is the project still active? How many people participate in it and contribute code?

Of course, from an architectural perspective, our focus is not on dependencies on basic components but on dependencies on other business modules, since the latter is closer to the original intent of business system decomposition.

Low coupling with underlying business modules shows up in the following ways:

Dependencies on the underlying business are on "general-purpose" interfaces; try not to have the underlying business module customize an interface specifically for you;

The number of dependent business interfaces is small and the frequency of calls is low.

How to do system decomposition?

With the criteria for judging the merits of system decomposition, how do we do it?

Generally speaking, system decomposition is a domain-specific problem that relies on your understanding of user needs. There is no one-size-fits-all approach.

System decomposition must start by summarizing requirements. It is important to analyze user needs clearly: identify the data (objects) and operation interfaces involved in each required function point, generalize them, and assign each function to some class. Then clarify the relationships between the classes and make them logically self-consistent, and a basic system framework takes shape.

In the outline design stage of the system, we generally use subsystems as dimensions to explain the relationships between the various roles in the system.

For key subsystems, we will further decompose it, and even determine the responsibilities and interfaces of all modules of the subsystem in detail. But our core intention at this stage is not to determine the complete module list of the system. Our focus is on how the entire system can be effectively connected. If a subsystem does not pose any risk to the project without further decomposition, then we do not need to refine it at this stage.

To reduce risk, the system's outline design phase should also have code output.

What is the purpose of the code at this stage?

It serves a twofold purpose. First, it is the initial framework code of the system; in other words, the general skeleton of the system is set up. Second, it is prototype code for verification: some core subsystems provide mock implementations at this stage.

The advantage of this is that we focus on the elimination of global systemic risks from the beginning and give the person in charge of each subsystem or module a more concrete and deterministic understanding.

Code is documentation. Code is a document that is more consistent in understanding.

Let’s talk about MVC again

In this chapter we mainly discuss desktop program development. Although the businesses of different desktop applications vary widely, the desktop itself is a very deterministic field, so it will form its own inherent system decomposition routine.

Everyone already knows that the decomposition routine for desktop program systems is the MVC architecture.

Although the interaction methods of desktop programs in different historical periods are different, some are based on keyboard + mouse, and some are based on touch screen, their framework structures are very consistent. They are all based on event dispatching for input and GDI for interface presentation.

So why did the Model-View-Controller (MVC for short) architecture form?

When we discussed demand analysis in the first chapter, we repeatedly emphasized one point: we must distinguish the stable points and changing points of demand. The stability point is the core capability of the system, while the change point requires an open design.

From this perspective, we can believe that the core logic of the business is stable unless a new technological revolution occurs that causes a qualitative change in the internal logic of the product. Therefore, our lowest layer generally organizes the core logic of the business in the form of classes and functions. This is the Model layer.

But user interaction is a change point. Take the same "drawing" program: whether on a PC desktop or a mobile phone, the Model layer is the same, but the user interaction methods differ, and the View and Controllers differ considerably.

Of course, the Model layer has its own change points, which lie in storage and the network. The Model layer must consider persistence and deal with storage, so it has its own IO subsystem. If the Model layer needs to go online, it must consider the B/S architecture and network protocols.

However, whether it is storage or network, changes are expected from an architectural perspective. Storage media will change, network technology will change, but only the implementation will change, and their usage interfaces will not change. This means that not only the core logic of the Model layer is stable, but the IO and network subsystems are also stable. Of course, this is also the reason why they are attributed to the Model layer. If they are mutable, they may be separated from the Model layer.

The changing point of user interaction is mainly reflected in two aspects. On the one hand, there are changes caused by screen size. Smaller screens mean that information on the interface needs to be organized more efficiently. On the other hand, there are changes in interaction. Mouse interaction and multi-touch interaction on a touch screen are completely different.

The View layer is mainly responsible for interface presentation. Of course, this also means that it also bears the change point of screen size.

The Controller layer is mainly responsible for interaction. Specifically, it responds to user input events and converts user operations into business requests to the Model layer.

The Controller layer has many Controllers. These Controllers are usually each responsible for different business function points.

In other words, the Model layer is a whole, responsible for the core logic of the business. The View layer is also a whole, though it may have different implementations for different screen sizes and platforms; the number will not be large, and the now-popular responsive layout likewise encourages sharing the same View across screen sizes and platforms as much as possible. The Controller layer, by contrast, is not a whole: it exists in the form of plug-ins, and different Controllers are highly independent.

The advantage of this is that it can quickly adapt to changes in interaction. For example, taking the function of creating a rectangle, there is a RectCreator Controller in the PC mouse + keyboard interaction mode, and a brand new RectCreator Controller in the touch screen interaction mode. Under different platforms, we can initialize different Controller instances to adapt to the interaction mode of the platform.
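The plug-in nature of Controllers can be sketched in Go as follows; all type and function names here are hypothetical (the column's actual case is in JavaScript):

```go
package main

import "fmt"

// Model is the stable core logic; here it is just a shape list.
type Model struct{ shapes []string }

func (m *Model) AddRect() { m.shapes = append(m.shapes, "rect") }

// Controller is the plug-in interface: each Controller handles one
// business function point for one interaction paradigm.
type Controller interface {
	Name() string
	OnEvent(m *Model)
}

// MouseRectCreator and TouchRectCreator implement the same function
// point (creating a rectangle) for different interaction modes.
type MouseRectCreator struct{}

func (MouseRectCreator) Name() string     { return "RectCreator(mouse)" }
func (MouseRectCreator) OnEvent(m *Model) { m.AddRect() }

type TouchRectCreator struct{}

func (TouchRectCreator) Name() string     { return "RectCreator(touch)" }
func (TouchRectCreator) OnEvent(m *Model) { m.AddRect() }

// newControllers picks Controller instances for the platform at
// startup; the Model stays untouched either way.
func newControllers(platform string) []Controller {
	if platform == "touch" {
		return []Controller{TouchRectCreator{}}
	}
	return []Controller{MouseRectCreator{}}
}

func main() {
	m := &Model{}
	for _, c := range newControllers("pc") {
		c.OnEvent(m)
		fmt.Println(c.Name(), m.shapes)
	}
}
```

The point of the sketch is that swapping interaction modes replaces only the Controller instances; the Model is reused unchanged.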

Of course, in the earlier lecture "22 | Architectural Suggestions for Desktop Programs", we also introduced some variants of the MVC structure, such as MVP (Model-View-Presenter). In MVP, after the Model's data is updated and a DataChanged event is issued, it is the Presenter that listens for the event and updates the View, instead of the View layer responding to DataChanged and updating itself.

The differences between these models are just trade-offs in detail and do not change the essence.

How should we view the hands-on practice?

In Chapter 1, "Basic Platform", we mainly studied infrastructure from an architectural perspective. Generally speaking, we approached it through the lens of history, and you mainly listened to stories.

But starting from the second chapter, our topic gradually transitioned to business architecture, and we also began to introduce practical cases: the "drawing" program.

Why is hands-on practice important?

In learning architecture, the concept I emphasize is "learning by doing".

First of all, you need to get hands-on. Then, combine that with this column to think through and sort out the principles behind it; that is how you make rapid progress.

We cannot turn architecture courses into theoretical courses. Computer science itself is a practical science, and architectural experience is the accumulation and summary of front-line practical experience.

In order to make it easier for everyone to see the evolution of the architecture more clearly, I implemented a non-MVC version (branch v01) of the drawing program with all the code together:

www/index.htm

Its functionality corresponds to the minimal drawing program from lecture "26 | Practice (1): How to Design a 'Drawing' Program?". This is the source code given at that time (branch v26):

www/*

As you can see, the total code of the v01 version, including HTML+JavaScript, is only about 470 lines. So this is a very small architectural practical case. If we further reduce the code size of the case, architectural thinking may be less necessary.

Let’s compare the differences between the two versions.

One of the most basic comparisons is code size. In the v26 version, we have split multiple files:

Model: dom.js (100 lines)

View: view.js (112 lines)

Controllers:

accel/menu.js (86 lines)

creator/path.js (90 lines)

creator/freepath.js (71 lines)

creator/rect.js (108 lines)

Entry: index.htm (18 lines)

The combined code size of these files is approximately 580 lines, which is 110 lines more than the v01 version.

This shows that the value of the MVC architecture is not to reduce the total number of lines of code for us. In fact, its focus is on how to let our team work together and work in parallel.

How to make work parallel? This requires that when we implement functions, the code of the function itself must be highly cohesive and the dependencies between functions must be low-coupled. In the v26 version, we split the function into 6 files (excluding the master control index.htm), which can be handed over to 6 team members. On average, each person writes about 100 lines of code.

Of course, for a program with less than 500 lines of code, this is somewhat overkill. But we have evolved and iterated through multiple versions since then, and the functions have become more and more complex, so the need for division of labor has become greater and greater.

In addition to code size, we might as well look at these points when comparing v01 and v26 versions.

Functional cohesion: into how many places is the code of a given function scattered?

Coupling between functions: of course, in v01 all the code is mixed together; starting from how to decompose a system, we can deduce the significance of adopting the MVC architecture in v26.

Global variables: how they are reduced and brought under control.

Conclusion

With this we have covered all the content of Chapter 2, "Desktop Development". Today we introduced the second step of architecture: the outline design of the system.

In the outline design stage, we generally use subsystems as dimensions to explain the relationships between the various roles in the system. For key subsystems, we will further decompose it, and even determine the responsibilities and interfaces of all modules of the subsystem in detail.

Our core intention at this stage is not to determine the complete module list of the system. Our focus is on how the entire system can be effectively connected. If a subsystem does not pose any risk to the project without further decomposition, then we do not need to refine it at this stage.

To reduce risk, the outline design phase should also have code output.

The advantage of this is that we focus on the elimination of global systemic risks from the beginning and give the person in charge of each subsystem or module a more concrete and deterministic understanding.

Code is documentation. Code is a document that is more consistent in understanding.

Origin blog.csdn.net/qq_37756660/article/details/134973384