Distributed systems and microservices are discussed every day, but do you really understand what a service is?

The technical architecture of the service

Services should be de-versioned, whether microservices or SOA

Any structural adjustment merely robs Peter to pay Paul and cannot solve the efficiency problem

First clarify the relationship between service governance and organizational structure, and then talk about microservices

Because my work has focused on the architectural transformation of traditional enterprises, I have never personally implemented a microservice architecture at an emerging Internet company. Before writing this chapter, I discussed the topic in our architecture group with architects from Internet companies that have implemented microservices, and came away deeply disappointed. I have seen the capabilities those companies gave up for the sake of speed, and the crudeness excused under the banner of "simplicity is beautiful".

For this reason I rewrote this chapter. The previous chapters took the architecture apart and analyzed each piece in isolation; in this chapter I do my best to integrate traditional service systems with microservices and construct an enterprise service architecture that is balanced, secure, easy to extend, easy to maintain, and efficient.

Integration Architecture within the Enterprise

Within an enterprise, new systems and existing systems have different technical requirements. Systems that are already built are expected to remain as stable as possible, with no major architectural or technical changes, while still delivering as much value as they can; newly built systems are expected to adopt advanced technologies and architectures, on a stable and reliable foundation, so that they suit future development and do not become obsolete quickly. Such a strategy inevitably makes the enterprise's internal systems heterogeneous. Therefore, in the long term we focus on the application integration architecture that spans heterogeneous systems, and in the short term we focus on a unified application development architecture for the new systems being built.

A decentralized architecture is not suitable for application integration

The requirement of an application integration architecture within the enterprise is to integrate all the existing heterogeneous business systems through non-intrusive adaptation techniques and to provide interfaces to service consumers on demand. This requirement is precisely what rules out a decentralized architecture.


A decentralized architecture has no practical significance in an integration architecture, because before integration, traditional applications already form the point-to-point mesh of a decentralized architecture. It is exactly this tangle of point-to-point connections between heterogeneous systems that gave rise to integration architectures in the first place. If the integration adapters were forcibly distributed to each application as its front end, it would amount to deploying an independent ESB for every application, which adds overhead without adding any practical value.

Even setting practical value aside, a decentralized integration architecture still needs a physical dispatch center to realize whatever service composition may be required, because it is unreasonable to implement composition on the adaptation front end of any single application (although an architect could force it onto one or a few front ends).

System security constraints on a decentralized architecture

When we discussed intrinsic services in Chapter 4, we gave a diagram of an intrinsic microservice architecture, as follows:

[Figure: intrinsic microservice architecture from Chapter 4]

I said then that all those thin blue lines were not elegant. Now let us look at a schematic of the deployment architecture of a traditional enterprise:

[Figure: deployment architecture of a traditional enterprise]

Frankly, I had to muster some courage to question this. Before writing this article I consulted friends who work at Internet companies, and I was told that Internet companies have no DMZ: all applications, including the databases, are mixed together in one zone (of course, since I have no work experience at an Internet company, and Internet companies usually do their own architecture design, I have never had the chance to participate in one, so this is only hearsay). I believe everyone understands the role of a DMZ; if not, the relevant material is easy to find. For security reasons, the WEBUI layer is generally deployed in the DMZ. I do not want to break this sound design for the sake of microservices, so the diagram from Chapter 4 becomes the following:

[Figure: the Chapter 4 microservice architecture adjusted for a DMZ]

In this diagram I put the gateway and the composite service container together for convenience; in fact they can be deployed separately (this is not important). What matters is that the architecture has returned to the ESB central exchange mode. Traditional SOA architecture inside the enterprise looks like this too, because the security of customer data always comes first.

So how do we handle the Taobao-scale transaction volume we are worried about? What I want to say is that it is absolutely unacceptable to sacrifice customer data security in exchange for efficiency. With thousands of applications deployed in a single zone, there is no way to guarantee that every one of them is solid, reliable, and invulnerable; disaster is only a matter of time, and a malicious programmer can even deliberately turn hosts into zombie machines, which is practically impossible to prevent.

If algorithmic efficiency cannot be improved, and efficiency must not be bought by weakening security, then the only remaining option is to trade resources for it. To protect customers' interests, the money must be spent willingly; otherwise, stay out of this business.

Reduce the central load by partitioning into multiple centers

[Figure: business partitioned by line, each partition with its own ESB cluster]

In the figure above, the business is divided into partitions by line of business, and each partition is integrated by an independent ESB cluster. When a front-end system calls back-end services, the access pressure is spread across the different sub-centers, so access efficiency is improved by adding resources.
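The partition-routing idea above can be sketched in a few lines. This is a minimal illustration with hypothetical names (the business lines and endpoints are invented for the example; in a real deployment the table would be maintained by the governance platform, not hard-coded):

```java
import java.util.Map;

// Minimal sketch: route each request to the ESB cluster that owns its
// business line, so load is spread across independent sub-centers.
public class PartitionRouter {
    // Business line -> ESB cluster endpoint (hypothetical values).
    private static final Map<String, String> CLUSTERS = Map.of(
        "life",     "esb-life.internal:8080",
        "property", "esb-property.internal:8080",
        "group",    "esb-group.internal:8080");

    public static String clusterFor(String businessLine) {
        String endpoint = CLUSTERS.get(businessLine);
        if (endpoint == null) {
            throw new IllegalArgumentException("Unknown business line: " + businessLine);
        }
        return endpoint;
    }
}
```

Because each partition has its own cluster, adding capacity to one busy line of business does not require scaling the whole center.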

Improve query service efficiency through data redundancy

Usually an excellent commercial ESB product introduces a latency of a few milliseconds to tens of milliseconds, and for most applications, whose business processing takes tens to hundreds of milliseconds, the impact is minimal. But when an application's processing time drops to a few milliseconds and massive concurrency is required (for example, simple queries), the latency introduced by the integration architecture becomes intolerable (unless technological progress one day makes a microsecond-level ESB a reality).

Under the traditional application architecture, data integration produces an ODS, a data warehouse, and data marts to absorb the pressure that real-time and non-real-time data requests, such as queries, reports, and online analysis, would otherwise place on the business systems; the Internet model instead uses read-write separation to solve similar real-time query problems. Therefore, in the architecture above, we also noted that the short-circuit approach can be replaced by a data integration architecture.

Compared with read-write separation, using an ODS to serve real-time data queries has obvious defects:

1. The range of data stored in an ODS is too large, whereas read-write separation targets only the data with massive query demand; the hit rate of the redundant data is therefore higher, making it more cost-effective than an ODS at trading redundancy for efficiency.

2. The ODS approach requires applications to change their query logic, which increases coupling between systems. Most applications only deal with their own database; if an application uses an ODS internally to speed up queries, it becomes dependent on an external database, reducing efficiency across the whole application life cycle, from development to operation.
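The contrast with read-write separation can be made concrete with a tiny routing sketch. This is an illustration only (the class and method names are invented): statements are routed by intent, so massive read traffic lands on a replica and never touches the primary, and the application keeps talking to what looks like its own database:

```java
// Minimal sketch of read-write separation: decide, per SQL statement,
// whether it should go to the primary database or a read replica.
public class ReadWriteRouter {
    public static String targetFor(String sql) {
        String s = sql.trim().toLowerCase();
        // Reads go to the replica; everything else mutates state and
        // must go to the primary.
        return s.startsWith("select") ? "replica" : "primary";
    }
}
```

Unlike the ODS approach, the application's query logic is unchanged; only the connection routing differs.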

The root cause of the large amount of front-end/back-end interaction is that "the front-end presentation system needs the back-end service system's data". Why is that? It is actually a misunderstanding that OOAD has given us. The traditional object-oriented method teaches us to encapsulate properties and methods into one object so that operations on the object are consistent, so we encapsulate both the creation and the query of the "customer" object together, which seems perfectly intuitive. But is it really reasonable?

From a service-oriented point of view, a service such as querying customer information does not have to be implemented by the customer information system; in fact, any system can implement it. In the real world, each person's information exists in different copies in various places: at the police station, at the talent center, and at your company. When someone needs to look up your personal information, they basically query the nearest copy. The object-oriented approach has misled us into binding all behavior to the entity, so when we implement services we also attach the behavior to the entity.

The practice in real society is that (information) behavior sits closer to those who demand and use it. In other words, we should create a copy of the data to be displayed in the front-end application and provide the query services in the front-end system, because the front-end system is the one that uses those services most frequently. Put simply, your company keeps a copy of your personal information; otherwise you would have to go to the household registration office every time you needed to check it. I assure you this is not a joke. As shown in the figure below, creating a cropped copy of the object in the front-end system eliminates the massive query traffic between systems.

[Figure: a cropped copy of the object maintained in the front-end system]

Note, however, that since the WEB layer sits entirely in the DMZ, placing the query database in the DMZ introduces the data security risk mentioned earlier. This method can therefore only solve the query efficiency problem for non-critical data.
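The cropped-copy idea can be sketched as follows. This is a minimal illustration under assumed names (the `CustomerView` fields and the change-event method are invented for the example): the front-end keeps only the fields it displays, refreshed by change events from the back-end system, so queries are served locally and never cross system boundaries:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: a trimmed local copy of the customer object in the front-end
// system, kept up to date by change events from the back-end system.
public class CustomerCache {
    // The copy holds only display fields, not the full back-end record.
    public record CustomerView(String id, String name, String phone) {}

    private final Map<String, CustomerView> localCopy = new HashMap<>();

    // Invoked when a change event arrives from the back-end service system.
    public void onCustomerChanged(String id, String name, String phone) {
        localCopy.put(id, new CustomerView(id, name, phone));
    }

    // Queries are served entirely from the local copy.
    public CustomerView query(String id) {
        return localCopy.get(id);
    }
}
```

The back-end pushes changes once; the front-end answers its own high-frequency queries from then on.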

Distributed multi-center architecture within the enterprise

[Figure: distributed multi-center architecture of a large insurance enterprise]

The figure above is a real case from a large enterprise in the insurance industry. During the consulting engagement for its architecture transformation, based on the customer's current situation and future direction, we proposed a distributed architecture whose main business goals were capability building and capability consumption.

Through the distributed service centers, the enterprise's internal business capabilities, the capabilities provided by traditional partners, data capabilities, and third-party Internet capabilities are unified to establish a new Internet ecosystem, so that internal, external, partner, and Internet developers can all easily understand and use these capabilities, supporting the rapid construction and expansion of the enterprise ecosystem.

At run time, the services in the originally isolated Internet zone, external zone, and intranet zone can be reached easily through global routing and logically become one whole; in terms of management and governance, all services are managed and governed with unified processes on a single management platform.

The basic logical structure of the capability center

[Figure: logical structure of the capability center]

Logically, the capability aggregation center is divided into three main parts, which communicate with each other asynchronously through queues:

Out: service (outbound) container

Out is the deployment container for service entities or service adapters and can be considered the implementer of the service. Globally, a service logically has only one implementer (multiple implementers can be regarded as a form of service clustering).

In: service (inbound) container

In is the deployment container for service APIs (adapters). To realize the service access-location and protocol transparency of S++, the service entities in the Out container cannot be accessed directly by consumers outside the center; instead, the In container publishes a variety of APIs to accommodate service consumers using different access protocols.

To make service access addresses transparent, each center's In container can (when necessary) publish both the APIs of services deployed in that center and the APIs of services deployed in other centers, so that no matter which center a consumer connects to, it can transparently access all global services.

Router: service router

To make service access addresses transparent to consumers, the capability center must ensure that consumers can transparently reach any global service no matter which channel they connect through. Therefore, the global router must maintain a routing table of all global services and correctly route the service requests arriving at any In container to the Out container where the service is deployed.
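The In/Router/Out interplay described above boils down to a global routing table. This is a minimal sketch with hypothetical names (service names and container identifiers are invented for the example): each center registers the services its Out container deploys, and any In container resolves a request through the router:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the global service router: maps each logical service to the
// Out container of the center where its single implementer is deployed.
public class GlobalRouter {
    private final Map<String, String> routes = new HashMap<>();

    // Each center registers the services deployed in its Out container.
    public void register(String serviceName, String outContainer) {
        routes.put(serviceName, outContainer);
    }

    // An In container forwards a request; the router resolves the target,
    // so the consumer never needs to know where the service lives.
    public String route(String serviceName) {
        String target = routes.get(serviceName);
        if (target == null) {
            throw new IllegalStateException("No route for " + serviceName);
        }
        return target;
    }
}
```

Because every In container consults the same table, a consumer entering through any channel transparently reaches any global service.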

Basically, each center runs on a platform built from these logical components as its base. On top of the basic platform, each center adds other components according to its own business needs. For example, the outreach platform has a complete set of security components, the Internet open platform provides a self-service developer portal, and the master data exchange center provides data standards, real-time synchronization capabilities, and more.

Internet open platform

[Figure: Internet open platform architecture]

The open platform opens the enterprise's internal services to Internet applications and introduces third-party services from the Internet. The system must withstand the various network attacks found on the Internet, establish application authorization/authentication and isolation mechanisms, and provide complete fault isolation to keep the system running smoothly. The open platform is built on a cloud architecture and mainly includes the following functional modules:

Developer portal: provides an interface for developer self-service, including developer registration, community, and application and service management;

Service gateway: provides service-oriented routing, protocol conversion, flow control, and logging;

OAuth authorization: provides the open authorization protocol through which users grant access to their resources;

Operation and maintenance monitoring platform: provides unified management and monitoring, covering the configuration of platform parameters, the review of applications, and the monitoring and statistical analysis of services and applications.

We have introduced microservices into the Internet open platform: microservice applications are deployed in a PaaS private cloud to achieve dynamic scaling, and all micro-applications attach to the API gateway, which provides internal and external routing for the microservices. This architecture is basically consistent with the theoretical architecture we proposed earlier.

In theory all applications could be deployed to a PaaS cloud, so why not in practice? Because traditional applications are too large, which hinders the dynamic responsiveness of PaaS; and because traditional applications cannot provide intrinsic services, the cloud scaling strategy would become too complicated and the reliability of the cloud environment would drop. We will discuss the PaaS cloud in the last chapter.

A brief introduction to the remaining centers

The outreach capability center mainly provides the ability to communicate with traditional partners, usually through dedicated communication channels, using various security protocols and message formats.

The master data capability center mainly provides master data publishing and synchronization for internal and external applications. It adopts a topic-subscription model to deliver data asynchronously to consuming systems, which form local data copies, thereby reducing the pressure on business systems and networks and improving query efficiency.
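The topic-subscription model just described can be sketched minimally as follows. This is an illustration with hypothetical names (topic and record formats are invented), and the direct callback stands in for what would be an asynchronous queue in production:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of topic-based master data publication: the master data center
// publishes changes to a topic; each subscribing system applies them to
// its own local copy.
public class MasterDataTopic {
    public interface Subscriber { void onUpdate(String record); }

    private final Map<String, List<Subscriber>> topics = new HashMap<>();

    public void subscribe(String topic, Subscriber s) {
        topics.computeIfAbsent(topic, t -> new ArrayList<>()).add(s);
    }

    public void publish(String topic, String record) {
        // In production delivery would go through an async message queue;
        // a direct call keeps the sketch self-contained.
        for (Subscriber s : topics.getOrDefault(topic, List.of())) {
            s.onUpdate(record);
        }
    }
}
```

Each subscriber builds its local copy from the stream of updates, so queries never load the master data source itself.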

The composition service center provides global and local composition of business services and publishes the composed process as a new service for channels to invoke. The composition service center is not necessarily a real physical center; it can be embedded in each physical center.

Summary

This chapter used a practical case to introduce the distributed multi-center architecture. Due to space limitations, many design and implementation details could not be expanded upon.

The distributed multi-center architecture is very flexible and can be tailored freely to a customer's actual situation. To fit different organizational structures, service governance adopts a governance process that can be customized on site, accommodating both registration-based and approval-based regimes. Moreover, combining S++ with the distributed multi-center architecture gives microservices new characteristics and broader prospects:

1. Making microservices intrinsic completely decouples micro-applications and greatly reduces the difficulty of microservice development and operations.

2. The S++ service composition container makes service-based process orchestration simpler and easier to maintain; even business staff can use it directly.

3. The complete separation of business and technology in S++ makes microservice governance simpler, achieving true business agility.

4. The S++ theory of parallel composition gives microservices higher performance and transaction-consistency guarantees, allowing them to be applied more widely in various fields.

5. Combining the distributed multi-center architecture with S++ avoids the ungovernability of a decentralized architecture while ensuring the security and efficiency of the system.


Finally, in the last chapter, we will discuss how S++-based microservices can help a PaaS environment achieve highly sensitive, quality-of-service-driven dynamic scaling, so that it can respond quickly to sudden spikes in network concurrency.
