30 | Hands-on practice (5): How do we design a "drawing" program?

Let's continue our conversation. This is the last lecture devoted to the drawing program itself, though we will keep returning to this hands-on example when we discuss other aspects of architecture later.

Macro system architecture

Since the last lecture, our drawing program has involved cross-team collaboration, because we now have two major pieces of software: paintdom and paintweb. paintdom listens on localhost:9999, while paintweb listens on localhost:8888.

Note that in a real business they would be separate programs. Indeed, our paintweb calls paintdom through a reverse-proxy mechanism, exactly as two cooperating processes would. In our drawing DEMO, however, they belong to the same process, with paintdom running as a goroutine inside paintweb. This is purely so that the two programs "live and die together", which makes starting and stopping them convenient during debugging.

The basis for the collaboration between paintdom and paintweb is the network protocol used between them.

When we talk about a network protocol, it usually carries two levels of meaning. The first is the carrier of the protocol, that is, the protocol stack (here we adopt HTTP, which in turn is based on TCP/IP). The second is the business logic the protocol carries.

When we talk about architecture, we also talk about both levels, but they sit in different dimensions. We care about the choice of protocol stack, whether it is based on HTTP or a custom binary protocol; that is a question of infrastructure. We also care about the business logic of the protocol and whether it naturally reflects the business requirements; that is a question of application architecture.

After clarifying the network protocol, we implemented a Mock version of the server program, paintdom. In real projects, Mock programs often greatly improve a team's development efficiency, because they achieve the following two core goals:

Let the team's R&D iterations proceed in parallel, with each side evolving independently.

Verify the soundness of the network protocol as early as possible, stabilizing the protocol in the shortest possible time through real use.

In the previous lecture, although we defined the network protocol between paintdom and paintweb and implemented a first version, we did not actually connect the two.

Today we will connect them.

Although paintweb was not yet connected to a server, its document-editing features were already quite complete. Our purpose in connecting paintdom and paintweb is not to add editing features, but to let documents be stored on the server, so that people can open them anywhere in the world with Internet access.

Strictly speaking, of course, it is incorrect to say that paintweb has no server. paintweb itself is a B/S (Browser/Server) application and has its own server, as follows:

package main

import "net/http"

var wwwServer = http.FileServer(http.Dir("www"))

func handleDefault(w http.ResponseWriter, req *http.Request) {
  if req.URL.Path == "/" {
    http.ServeFile(w, req, "www/index.htm")
    return
  }
  req.URL.RawQuery = "" // skip "?params"
  wwwServer.ServeHTTP(w, req)
}

func main() {
  http.HandleFunc("/", handleDefault)
  http.ListenAndServe(":8888", nil)
}

As you can see, paintweb's own server does essentially nothing: it is just an ordinary static-file server that provides the browser with content such as HTML + CSS + JavaScript.

The server side of paintweb is therefore completely "generic" and has nothing to do with the business. The actual business lives in the files under the www directory; those files are consumed by the front-end browser and merely "hosted" by the paintweb server.

So how does paintweb connect to paintdom?

The physical connection is relatively simple: it is just a reverse proxy. The code is as follows:

package main

import (
  "net/http"
  "net/http/httputil"
  "net/url"
)

func newReverseProxy(baseURL string) *httputil.ReverseProxy {
  rpURL, _ := url.Parse(baseURL) // baseURL is a trusted constant, so the error is ignored
  return httputil.NewSingleHostReverseProxy(rpURL)
}

var apiReverseProxy = newReverseProxy("http://localhost:9999")

func main() {
  http.Handle("/api/", http.StripPrefix("/api/", apiReverseProxy))
  http.ListenAndServe(":8888", nil)
}

As you can see, what paintweb's server does here is again "generic": it simply forwards requests for http://localhost:8888/api/xxx, untouched, to http://localhost:9999/xxx.
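This forwarding behavior is easy to verify end to end with Go's net/http/httptest package. The sketch below is illustrative and not part of the QPaint codebase; the echo handler standing in for paintdom is invented purely for the demonstration:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"net/http/httputil"
	"net/url"
)

// forwardedPath wires up a fake backend and the same StripPrefix + reverse
// proxy combination paintweb uses, then reports what the backend received.
func forwardedPath() string {
	// A stand-in for paintdom: it reports the path it actually receives.
	backend := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
		fmt.Fprintf(w, "paintdom saw %s", req.URL.Path)
	}))
	defer backend.Close()

	// The same wiring paintweb uses: strip "/api/" and reverse-proxy the rest.
	rpURL, err := url.Parse(backend.URL)
	if err != nil {
		return err.Error()
	}
	proxy := httputil.NewSingleHostReverseProxy(rpURL)
	front := httptest.NewServer(http.StripPrefix("/api/", proxy))
	defer front.Close()

	resp, err := http.Get(front.URL + "/api/drawings")
	if err != nil {
		return err.Error()
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return string(body)
}

func main() {
	fmt.Println(forwardedPath()) // prints: paintdom saw /drawings
}
```

Running it shows that the /api/ prefix is stripped before the request reaches the backend, which is exactly the forwarding rule described above.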

In reality, what paintweb's server does is a little more involved. Behind it sits not only the business server paintdom, but also an essential account service (Account Service) that supports user login/logout.

The account service is an infrastructure service and has nothing to do with any particular business. A company will most likely run not only QPaint but other businesses as well, and all of them can share the same account service. More precisely, they must share the same account service: a company that builds multiple independent account systems will rightly be criticized by its users.

When an account service is involved, the paintweb server no longer forwards business requests untouched; instead, it translates the protocol.
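What might such a protocol translation look like in practice? The following sketch is purely illustrative: the session table and the "/users/&lt;uid&gt;" path convention are assumptions made for the example, not QPaint's actual protocol.

```go
package main

import "fmt"

// Hypothetical session store: maps a session ID (e.g. from a cookie) to the
// logged-in user's tenant ID. A real account service would back this.
var sessionUID = map[string]int{"sess-abc": 42}

// translate turns a Session-based request path into a Multi-User one by
// resolving the session and prefixing the path with the tenant's uid.
// The "/users/<uid>" convention here is an invented example.
func translate(sessionID, path string) (string, error) {
	uid, ok := sessionUID[sessionID]
	if !ok {
		return "", fmt.Errorf("unauthorized: unknown session %q", sessionID)
	}
	return fmt.Sprintf("/users/%d%s", uid, path), nil
}

func main() {
	p, err := translate("sess-abc", "/drawings")
	fmt.Println(p, err) // prints: /users/42/drawings <nil>
}
```

The point of the sketch is only that the forwarding layer gains one duty: resolving "who is asking" from the session before handing the request to the multi-tenant backend.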

In the lecture "24 | Tips for Cross-Platform and Web Development" we mentioned:

When it comes to Web development, we also need a secondary development interface, but this interface is no longer provided on the client side; it is provided on the server side. The server supports direct API calls to serve automation needs.

Therefore, on the server side, the bottom layer is a multi-tenant Model layer (Multi-User Model), which implements the APIs required for automation (Automation).

On top of the Multi-User Model layer sits a Web layer. The Web layer and the Model layer make different assumptions: the Web layer is session-based, because it serves user access, and each user forms a session after logging in.

If we look more closely at the Web layer, it divides into a Model layer and a ViewModel layer. To distinguish them from their counterparts, we call the Model layer on the Web side the Session-based Model, and correspondingly the ViewModel layer the Session-based ViewModel.

On the server side, the Session-based Model and the Session-based ViewModel are not directly connected. Each controls its counterpart on the browser side, the Model and the ViewModel respectively, over the network, and together they respond to user interaction.

What does the Session-based Model look like? It is really a translation of the Multi-User Model layer: it translates the multi-tenant APIs into a single-tenant scenario. This layer therefore needs very little code; in theory it could even be generated automatically.

The Session-based ViewModel is a set of HTML + JavaScript + CSS files. It is the real entry point of the Web business. It delivers its data to the browser over the network; the browser renders the View based on the ViewModel, and the whole system works.

This paragraph is relatively abstract, but combined with the practical example of QPaint it becomes very clear:

paintdom is the Multi-User Model layer described here, responsible for the multi-tenant business server.

The paintweb server implements the Session-based Model layer, responsible for translating Session-based requests into Multi-User ones. Since our current example does not yet support multi-tenancy, the translation degenerates into simple forwarding. We will show how a real translation layer is built later, in the "Server-side Development" chapters.

So, as you can see, paintweb's own server really is business-independent. It does things such as:

Hosting of web front-end files (as a static file download server);

Support account services and realize user login on the Web;

Translate business protocols and convert Session-based API requests into Multi-User API requests.

Of course, we assume here that the business logic of the Web itself is implemented in JavaScript. In other words, we follow the "fat front-end" model.

But this need not be the case. Some companies follow a "fat back-end" model instead, in which most front-end user actions are supported by the back end, for example by implementing the Web back-end business code in PHP.

The advantage of the fat back-end model is that the Web code is safer. "Safer" here refers to protecting IT assets, not to business security: outsiders cannot see the complete business logic of the Web application.

The disadvantage of the fat back-end model is that it cannot support offline use. Most user interactions need a response from the Web back end, and once the network is disconnected, nothing works.

In the "fat back-end" mode, I would personally prefer to implement the Web back-end business code in a glue language such as PHP. Once we do that, the business logic of paintweb is stripped out again: the paintweb back end itself remains business-independent, with just one extra responsibility, namely supporting the PHP scripting language.

The real Web back-end business logic still lives in the www directory, now as PHP files. These files are no longer simple static resources; they are the business code of the "fat back end".

Since the paintweb back end is "generic" and business-independent, the entire business logic rests on the js files under www and the API interface provided by paintdom.

As we said above, even before being connected to paintdom, the paintweb program was complete when viewed on its own: it supported creating and editing documents offline and storing them in the browser's localStorage.

After connecting paintdom and paintweb, we do not give up the ability to edit offline; instead:

When disconnected, everything behaves as in the previous lecture, and you can continue to edit and save offline;

Once online, all offline edits are automatically saved to the paintdom server.

Calculate changes

Does that sound like a simple thing to do?

It is actually quite complicated. The first problem to solve is: after the network is disconnected, how do we know what content has been edited offline?

The first idea is to save the entire document every time, regardless of what changed. This is wasteful, because we not only save the document when the network recovers; we also automatically save the modified content on every edit.

The second idea is to record the complete history of editing operations, writing each operation to localStorage. This sounds more economical, but is in fact more wasteful in many cases, because:

If an object is edited multiple times, there will be many editing operation instructions to be saved;

After being disconnected for a long time, the accumulated editing operations may even exceed the document size.

Therefore, this solution lacks robustness and is unacceptable in its bad cases.

The third idea is to give each object a version number. We compare the base version of the whole document (baseVer, the version at the time the last synchronization completed) with the version ver of each object: if ver &gt; baseVer, the object has changed since the last completed synchronization. The full logic for computing the change information is as follows:

// A method of the document object (see dom.js): collect everything
// changed since baseVer.
prepareSync(baseVer) {
  let shapeIDs = []
  let changes = []
  let shapes = this._shapes
  for (let i in shapes) {
    let shape = shapes[i]
    if (shape.ver > baseVer) { // changed since the last completed sync
      changes.push(shape)
    }
    shapeIDs.push(shape.id) // the full ID list, so the server can tell which shapes still exist
  }
  let result = {
    shapes: shapeIDs,
    changes: changes,
    ver: this.ver
  }
  this.ver++
  return result
}

Sync changes

With the change information in hand, how do we synchronize it to the server?

One possible idea is to restore the changes into individual edit operations and send them to the server one by one. But this gets very complicated: what do we do when some of the operations succeed and others fail?

Such partially successful intermediate states are what challenge a programmer's skill the most; they are genuinely brain-burning.

An architectural principle I have always held is: don't burn your brain. Especially for the majority of business code that is not performance-sensitive, simplicity and ease of implementation come first.

So we chose to modify the network protocol and add a synchronization interface.
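As a rough illustration of what such a synchronization interface can look like, here is a sketch of a server applying one sync request. The struct fields mirror the payload that prepareSync computes (shapes, changes, ver), but the names and the in-memory document model are assumptions for this example, not the exact QPaint wire format:

```go
package main

import "fmt"

// Shape is a minimal stand-in for a drawing object; only the fields the
// sync algorithm needs are modeled here.
type Shape struct {
	ID  string
	Ver int
}

// SyncRequest mirrors the payload computed by prepareSync on the client:
// the full list of surviving shape IDs, the shapes changed since baseVer,
// and the new document version.
type SyncRequest struct {
	Shapes  []string
	Changes []Shape
	Ver     int
}

// apply replays one sync request onto the server-side copy of a document:
// deleted shapes vanish (they are absent from req.Shapes), changed shapes
// are overwritten, and untouched ones are kept as-is.
func apply(doc map[string]Shape, req SyncRequest) map[string]Shape {
	changed := make(map[string]Shape)
	for _, s := range req.Changes {
		changed[s.ID] = s
	}
	next := make(map[string]Shape)
	for _, id := range req.Shapes {
		if s, ok := changed[id]; ok {
			next[id] = s
		} else {
			next[id] = doc[id]
		}
	}
	return next
}

func main() {
	doc := map[string]Shape{"a": {ID: "a", Ver: 1}, "b": {ID: "b", Ver: 1}}
	req := SyncRequest{Shapes: []string{"a", "c"}, Changes: []Shape{{ID: "c", Ver: 2}}, Ver: 3}
	doc = apply(doc, req)
	fmt.Println(len(doc)) // "b" is gone, "c" was added: prints 2
}
```

A nice property of this style of protocol is idempotence: replaying the same sync request produces the same document state, so a repeated or retried upload is harmless, unlike replaying a sequence of individual edit operations.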

This is interesting. When we first discussed the interfaces for the two sides to cooperate, we respected the business logic carefully and defined a series of editing operations based on our understanding of the business. In the end, none of them turned out to be what we needed; what we wanted was a synchronization protocol.

Were we wrong at the start?

You cannot quite say that. The logic behind our initial protocol definition was not wrong; we simply had not yet considered the need to support offline editing.

Reviewing this episode, we can draw two lessons:

Anticipating demand is very important. If we fail to anticipate demand adequately, in most cases we end up paying for our lack of market insight;

Further, it pays to launch the Mock early, so that the front end can iterate quickly and the shortcomings of the originally defined network protocol are discovered early. The later a protocol adjustment comes, the harder and costlier it is.

With the synchronization protocol in place, we can synchronize the change information to the server. We delegate this work to the QSynchronizer class (see dom.js#L204 for details).

Load document

Having pushed the changes to the server, we can, in theory, see this document anywhere in the world.

How?

Next, let's talk about loading the document. The difficulty in this process is how to reconstruct the entire document from the JSON data returned by the server.

We already said in the previous lecture that the data format of a shape (Shape) in the network protocol differs from the one in localStorage. This means we would have to load two different representations of the graphics data.

This is quite unnecessary.

Moreover, from the perspective of anticipating change, one change we can easily foresee is that the drawing program will support more and more types of shapes (Shapes).

Looking at these two things together, we did a refactoring with the following goals:

Unify the graphics representation used in localStorage and in the network protocol;

Make it easy to add new shape types, with highly cohesive code, so that adding a type does not require changes scattered all over the codebase.

To this end, we added the global variable qshapes: QSerializer, which lets each shape type register its own creation method (creator). The schematic code is as follows:

qshapes.register("rect", function(json) {
  return new QRect(json)
})

To support the QSerializer class (see the code at dom.js#L89), each shape needs to implement two methods:

interface Shape {
  constructor(json: Object)
  toJSON(): Object
}

This way we can call qshapes.create(json) to create a shape instance.
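The registration pattern translates naturally to other languages. As an illustration, here is a sketch of such a serializer registry in Go; the "type" discriminator field and the Rect sample are assumptions made for the example, not QPaint's actual JSON layout:

```go
package main

import "fmt"

// Shape mirrors the two-method contract from the article: build from JSON,
// serialize back to JSON.
type Shape interface {
	ToJSON() map[string]interface{}
}

// Creator builds a concrete shape from its JSON representation.
type Creator func(json map[string]interface{}) Shape

// QSerializer is a sketch of the registry behind the qshapes global: shape
// types register a creator under a type name, and Create dispatches on a
// discriminator field in the JSON.
type QSerializer struct {
	creators map[string]Creator
}

func NewQSerializer() *QSerializer {
	return &QSerializer{creators: make(map[string]Creator)}
}

func (q *QSerializer) Register(name string, c Creator) {
	q.creators[name] = c
}

// Create dispatches on the "type" field; unknown types yield nil.
func (q *QSerializer) Create(json map[string]interface{}) Shape {
	name, _ := json["type"].(string)
	if c, ok := q.creators[name]; ok {
		return c(json)
	}
	return nil
}

// Rect is a sample registered shape.
type Rect struct{ X, Y float64 }

func (r *Rect) ToJSON() map[string]interface{} {
	return map[string]interface{}{"type": "rect", "x": r.X, "y": r.Y}
}

func main() {
	qshapes := NewQSerializer()
	qshapes.Register("rect", func(json map[string]interface{}) Shape {
		return &Rect{X: json["x"].(float64), Y: json["y"].(float64)}
	})
	s := qshapes.Create(map[string]interface{}{"type": "rect", "x": 1.0, "y": 2.0})
	fmt.Println(s.ToJSON()["type"]) // prints: rect
}
```

The key design point is the same in either language: adding a new shape type touches only that type's file plus one Register call, so the code stays cohesive.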

With this capability, we can load documents with ease. For the concrete code, see the _loadRemote(displayID) method of the QPaintDoc class (see dom.js#L690).

To be complete, the scenarios for loading a document fall into three categories:

_loadBlank, which loads a new document. When online, a new drawing is created on the server; when offline, a temporary document is created locally (its displayID starts with t).

_loadTempDoc, which loads a temporary document, that is, one that has been edited offline from the start. There are again two cases: if we are currently online, a new drawing is created on the server and the offline edits are synchronized to it; if offline, the offline data is loaded and editing continues offline.

_loadRemote, which loads a remote document. The document may also have been edited locally, so the locally cached offline edits are loaded first. If we are currently online, the remote document is then loaded asynchronously, and on success the local offline edits are discarded.
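The dispatch among these three paths can be summarized in a few lines. This sketch only reconstructs the selection rule described above; the "t" prefix convention comes from the article, while treating an empty displayID as "brand-new document" is an assumption made for the illustration:

```go
package main

import (
	"fmt"
	"strings"
)

// loadMode picks one of the three loading paths described in the article.
// Online/offline handling then happens inside each path, not here.
func loadMode(displayID string) string {
	switch {
	case displayID == "":
		return "_loadBlank" // assumption: no ID yet means a new document
	case strings.HasPrefix(displayID, "t"):
		return "_loadTempDoc" // temporary documents carry a "t" prefix
	default:
		return "_loadRemote" // everything else is a server-side document
	}
}

func main() {
	fmt.Println(loadMode(""), loadMode("t19"), loadMode("10001"))
}
```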

In addition, QPaintDoc emits an onload message after a document finishes loading. Currently this message is handled by QPaintView, which refreshes the interface. The code is as follows:

class QPaintView {
  constructor() {
    ...
    let view = this
    this.doc.onload = function() {
      view.invalidateRect(null)
    }
    ...
  }
}

The reason for the onload message is that it is hard to predict when the ajax request to the server will complete; the document is only ready once the asynchronous ajax finishes. Emitting an onload event at that point spares the Model layer from having to understand the business logic of the View layer.
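This decoupling can be shown in miniature in any language. The Go sketch below mirrors the handshake: the document knows nothing about views and only fires whatever callback was installed once loading finishes (the type and method names here are invented for the illustration):

```go
package main

import "fmt"

// Doc sketches the Model side of the onload handshake. It holds no
// reference to any View type; it only remembers a callback.
type Doc struct {
	OnLoad func() // installed by the View layer, as QPaintView does in its constructor
	shapes []string
}

// finishLoad is what runs when the asynchronous load (ajax in the real
// program) completes: store the data, then notify whoever is listening.
func (d *Doc) finishLoad(shapes []string) {
	d.shapes = shapes
	if d.OnLoad != nil {
		d.OnLoad() // Model -> View notification without knowing the listener
	}
}

func main() {
	refreshed := false
	doc := &Doc{}
	doc.OnLoad = func() { refreshed = true } // stand-in for invalidateRect(null)
	doc.finishLoad([]string{"rect"})
	fmt.Println(refreshed) // prints: true
}
```

The dependency points in one direction only: the View subscribes to the Model, never the reverse, which is exactly why the Model layer stays testable on its own.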

Model layer thickness

At this point, we have covered the main content of this iteration. We will not go into the other small changes in detail; for the complete code diff, see:

https://github.com/qiniu/qpaint/compare/v29...v30

The next topic I want to discuss is the thickness of the Model layer. We mentioned in "22 | Architectural Recommendations for Desktop Applications":

From the perspective of interface programming, the thicker the Model layer, the better. Why? Because it is the part that has nothing to do with the operating system's GUI framework, it is the easiest part to test, and it is also the easiest part to make cross-platform. The more logic we shift into the Model layer, the simpler the Controller layer becomes, which is extremely beneficial for cross-platform development.

Our philosophy is that the thicker the Model layer, the better, and we have in fact insisted on this throughout this hands-on "drawing" program. Let's look at two sets of data.

First, comparison of the Model layer (dom.js) of different versions (v26..v30):

dom.js in the MVP version (v26): about 120 lines.

dom.js in the latest version (v30): about 860 lines.

How many times did the Model layer's line count grow? Roughly 7x.

Second, the change history of different versions (v26..v30):

v27:https://github.com/qiniu/qpaint/compare/v26...v27

v28:https://github.com/qiniu/qpaint/compare/v27...v28

v29:https://github.com/qiniu/qpaint/compare/v28...v29

v30:https://github.com/qiniu/qpaint/compare/v29...v30

What do you see in them?

An interesting fact is that nearly every version iteration involves changes to the Model layer. The v29 change looks like an exception, since dom.js was not modified; but in fact the entire v29 change is a Model-layer change, because it adds the server-side Model (what we earlier called the Multi-User Model).

If we think deeply about this issue, we will have the following inference:

If we do not keep the Model-layer code together in a cohesive way, but let it scatter freely everywhere, the quality of our code changes becomes very hard to control.

Why? The Model layer is generally the easiest to test, because it has the fewest environmental dependencies. If its code were dispersed into the View and Controller layers, reading, maintaining, and testing it would become significantly harder.

Through several rounds of functional iterations, our understanding of the Model layer continues to deepen. We summarize its responsibilities as follows:

Implement the business logic and expose business interfaces to the outside; this is the Model's most important job.

Implement the onpaint event delegated by the View layer to complete the drawing function.

Implement the hitTest interface of the Controller layer to implement selection support.

Communicate with the server-side Multi-User Model layer, so that the View and Controllers components need not be aware of the server at all.

Implement offline editing of localStorage access.

Except for a few requirements serving the View (onpaint) and the Controllers (hitTest), most of these fall within the normal business scope of the Model layer.

With this many responsibilities, the Model layer naturally grows fat.

Conclusion

Today we connected the back end and front end of the drawing program, paintdom and paintweb. Because of the need to support offline editing, the integration work is relatively complex; if anything is unclear, I recommend reading the code carefully. We will, of course, discuss this case in detail again later.

Here is the source code for the latest version:

https://github.com/qiniu/qpaint/tree/v30

With this, our hands-on series comes to a close for now.

If you have any thoughts on today's content, please leave me a message and we can discuss it together. So far, we have walked through the business architecture of a complete desktop application (whether standalone or B/S).

In the next lecture, we will talk about the architectural design of auxiliary interface elements (custom controls), which is quite different from the business-architecture considerations of an application.

Origin blog.csdn.net/qq_37756660/article/details/134971917