What you need to know about building a microservice architecture quickly and correctly

1. Four characteristics of microservice architecture

What does a good microservice architecture look like? A well-built microservice architecture should have the following four characteristics:

  1. Service granularity should be divided according to business functions: complex services may be coarser-grained, and relatively simple services finer-grained. In general we prefer services to be as small as possible, but without introducing dependencies between them, so dividing granularity well is a real test of an architect's skill.

  2. Each microservice should do only one thing. This is what we usually call the "Single Responsibility Principle", and it is the guideline for dividing services.

  3. Services are isolated from each other and do not affect each other; that is, each service runs in its own process. Processes are isolated and safe, whereas threads within a process share resources, so running each service in its own process means that a problem in one service does not directly take down the others.

  4. As business functions keep growing, the number of services will gradually increase, so we need automatic deployment, monitoring, and alerting capabilities in order to manage the services efficiently.

 

2. Build a Microservice Architecture

Microservices have been popular for a long time, but few articles on the Internet explain the technology in a mature way while also looking after readers who are new to the field, starting from zero and using easy-to-understand language to walk through the microservice architecture. That is why we curated this article. It is reproduced from the InfoQ vertical account "Chat Architecture" (ID: archtime).

I remember an expert saying a long time ago: if you cannot build a monolithic architecture well, do not attempt a microservice architecture. At first glance this makes sense, but I later realized it is not quite right, because the purpose of the microservice architecture is to reduce the complexity of a system; a microservice architecture should therefore be simpler and easier to put into practice than a monolithic one.

In this article, we will share how to build the simple pattern of the microservice architecture.

What is the Simple Pattern of Microservice Architecture?

Compared with large Internet platforms that handle tens of thousands of concurrent visits or release new versions online several times a day, most enterprises and projects have no such requirements. They care more about improving development efficiency, delivering new requirements faster, and making operations and maintenance easier.

The simple pattern of the microservice architecture is a software architecture solution that meets these requirements.

Compared with a "complete" microservice architecture, the simple pattern can temporarily ignore: distributed transaction technology for keeping data consistent, a configuration center that makes it easy to move packages between environments (development, testing, production), call-chain components for monitoring API calls, circuit-breaker components that protect the system from overload, API documentation frameworks for API management and testing, and middleware such as ZooKeeper, Redis, and the various MQs. It focuses only on the pieces we talk about most often: the registry, service discovery, load balancing, and the service gateway.

How do we put it into practice?

When implementing a microservice architecture, the key is to play to its strengths and work around its weaknesses. Compared with the monolithic architecture, the biggest drawbacks of the microservice architecture are that it is harder to get started with and harder to operate and maintain. Let's look at how to implement the simple pattern from these two angles.

Difficult to get started

Compared with the traditional monolithic architecture, the microservice architecture introduces a lot of concepts at once, which can be overwhelming for newcomers. We therefore need to separate the wheat from the chaff and work out which pieces we really need and which are just folklore. Let's look at which components are necessary to develop a system with the microservice architecture.

First, let's talk about the four steps of developing with the simple pattern of microservices:

Step 1: Develop single-responsibility microservices using the existing technology system in the organization.

Step 2: The service provider registers the address information in the registry, and the caller pulls the service address from the registry.

Step 3: Expose the microservice API to the portal and mobile APP through the portal backend (service gateway).

Step 4: Integrate the management-side modules into a unified operations interface.

To implement these four steps, you need to master the following basic technologies (required components):

  • Registry, service discovery, load balancing: correspond to steps 1 and 2 above

  • Service gateway: corresponds to step 3 above

  • Management-side integration framework: corresponds to step 4 above

Registry, service discovery, load balancing

Unlike a monolithic architecture, a microservice architecture is a distributed mesh of fine-grained, single-responsibility services that communicate through lightweight mechanisms. This immediately raises the problem of service registration and discovery: a service provider needs to register its address somewhere (the service registry), and a service caller needs to be able to look up the address of the service it wants to call from that registry (service discovery). In addition, service providers usually run as clusters, which introduces the need for load balancing.

Depending on where the load balancer (LB) sits, there are currently three main solutions for service registration, discovery, and load balancing:

Centralized LB solution

The first is the centralized LB solution: an independent LB sits between service consumers and service providers. The LB is usually a dedicated hardware device such as F5, or software such as LVS or HAProxy.

When a service caller invokes a service, it sends the request to the LB, which routes it to a specific service instance according to some policy (round robin, random, least response time, least concurrency, and so on). The biggest problems with this solution are the extra hop between caller and provider and the fact that the LB can easily become the bottleneck of the whole system.
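
To make "some policy" concrete, here is a minimal round-robin sketch of the kind of instance selection an LB performs; the class and the plain host:port address list are invented for illustration and are not taken from any particular LB product.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal round-robin selection over a fixed list of backend addresses.
public class RoundRobinChooser {

    private final List<String> backends;            // e.g. ["10.0.0.1:8080", "10.0.0.2:8080"]
    private final AtomicInteger counter = new AtomicInteger();

    public RoundRobinChooser(List<String> backends) {
        this.backends = backends;
    }

    public String next() {
        // floorMod keeps the index non-negative even if the counter overflows.
        int index = Math.floorMod(counter.getAndIncrement(), backends.size());
        return backends.get(index);
    }
}
```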

In-process LB scheme

The second is the in-process LB solution. To address the shortcomings of centralized LB, it integrates the LB logic into the service consumer's process as a library. This approach is also called soft load balancing or client-side load balancing.

The principle is: each service provider registers its own address with the service registry and then sends a heartbeat to the registry at regular intervals, and the registry decides, based on these heartbeats, whether to remove a node from the registry. When a service caller invokes a service, it first pulls the registration information from the registry and then calls one of the service nodes according to some policy.
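
To make the register/heartbeat/evict/pull cycle concrete, here is a toy in-memory sketch of the principle just described; it is purely illustrative and is not how Eureka or any real registry is implemented.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.Collectors;

// Toy registry: providers register and heartbeat, stale nodes are evicted, callers pull the live list.
public class ToyRegistry {

    private static final long EXPIRY_MS = 30_000;    // assumed lease duration
    private final Map<String, Long> lastHeartbeat = new ConcurrentHashMap<>();

    // Provider side: called on startup and then periodically as the heartbeat.
    public void renew(String address) {
        lastHeartbeat.put(address, System.currentTimeMillis());
    }

    // Registry side: run periodically to drop nodes whose heartbeat is too old.
    public void evictStaleNodes() {
        long now = System.currentTimeMillis();
        lastHeartbeat.entrySet().removeIf(e -> now - e.getValue() > EXPIRY_MS);
    }

    // Caller side: pull the current list of live provider addresses, then pick one by some policy.
    public List<String> liveAddresses() {
        return lastHeartbeat.keySet().stream().sorted().collect(Collectors.toList());
    }
}
```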

With this approach, even if the registry goes down, the caller can still route requests to the correct service using the addresses it has already pulled into memory. The biggest drawback is that the service caller has to embed the registry's client library, so upgrading the registry server in the future may also require upgrading the registry client in every service.

Host-independent LB process scheme

The third is the host-independent LB process solution, a compromise proposed to address the shortcomings of the second one. The principle is basically the same as the second solution, except that the LB and service-discovery functions are moved out of the consumer's process and run as an independent process on the host. When one or more services on the host need to call a target service, they all go through this shared LB process on the same host for service discovery and load balancing. A typical example of this approach is Airbnb's SmartStack service discovery framework. The biggest drawback is that deployment, operations, and maintenance are more troublesome.

At present, with the rise and maturity of Netflix's microservice stack and Spring Cloud, the second solution has become our first choice. We recommend Eureka as the service registry and Ribbon for client-side service discovery and load balancing.

The biggest advantages of this choice are that it is simple, practical, and controllable. There is no need to introduce ZooKeeper or Etcd as an extra registry, deployment and operations are relatively simple, and from a code point of view it is also very easy to use.
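
For example, on the consumer side a @LoadBalanced RestTemplate is enough to call a provider by the logical name it registered in Eureka (the provider itself only needs the Eureka client dependency and a spring.application.name). A minimal sketch, assuming Spring Cloud Netflix with Eureka and Ribbon; the service name user-service and the endpoint path are assumptions for illustration:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

@SpringBootApplication
public class ConsumerApplication {

    public static void main(String[] args) {
        SpringApplication.run(ConsumerApplication.class, args);
    }

    // @LoadBalanced lets Ribbon resolve the logical service name from the Eureka registry
    // and pick an instance according to its load-balancing rule (round robin by default).
    @Bean
    @LoadBalanced
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}

@RestController
class HelloConsumerController {

    private final RestTemplate restTemplate;

    HelloConsumerController(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    @GetMapping("/hello")
    public String hello() {
        // "user-service" is the name the provider registered with; Ribbon replaces it with a real host:port.
        return restTemplate.getForObject("http://user-service/api/hello", String.class);
    }
}
```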

Note, however, that this solution is generally used for load balancing inside the local area network; for services exposed to the Internet, you can use Nginx upstream for load balancing.


The following are the most important parameter configurations of Eureka. From these parameters, you can also see how Eureka works.

Because of Eureka's registration and expiration mechanism, it takes nearly two minutes from startup before a service is fully available. Therefore, to speed up releases in the development and testing environments, we shorten the following parameters; they must be changed back for production.
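
The exact values from our configuration are not reproduced here, but the delay comes from several 30-second defaults stacking up (heartbeat interval, the server's response cache, the client's registry fetch interval, and Ribbon's server-list refresh), plus the server-side eviction timer and self-preservation mode. Below is a sketch of dev/test-only overrides, assuming Spring Cloud Netflix Eureka; in practice these knobs are normally set as eureka.instance.* and eureka.client.* properties in application.yml, and the profile names and values shown are assumptions for illustration.

```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.commons.util.InetUtils;
import org.springframework.cloud.netflix.eureka.EurekaClientConfigBean;
import org.springframework.cloud.netflix.eureka.EurekaInstanceConfigBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

// Dev/test-only tuning so that newly started services show up (and dead ones disappear) faster.
// Keep the Eureka defaults (30s renewal, 90s expiration, 30s registry fetch) in production.
@Configuration
@Profile({"dev", "test"})
public class DevEurekaTimingConfig {

    @Bean
    public EurekaInstanceConfigBean eurekaInstanceConfigBean(InetUtils inetUtils,
            @Value("${spring.application.name}") String appName) {
        EurekaInstanceConfigBean instance = new EurekaInstanceConfigBean(inetUtils);
        instance.setAppname(appName);
        instance.setVirtualHostName(appName);
        instance.setLeaseRenewalIntervalInSeconds(5);       // heartbeat more often (default 30s)
        instance.setLeaseExpirationDurationInSeconds(15);   // expire dead instances sooner (default 90s)
        return instance;
    }

    @Bean
    public EurekaClientConfigBean eurekaClientConfigBean() {
        EurekaClientConfigBean client = new EurekaClientConfigBean();
        client.setRegistryFetchIntervalSeconds(5);          // refresh the local registry cache sooner (default 30s)
        return client;
    }
}
```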

The Eureka registry dashboard looks like this:

For details, see https://github.com/Netflix/eureka and https://github.com/Netflix/ribbon.

  Service gateway

Usually, there are many single-responsibility microservices in a large system. If the portal system or the mobile app is to call the APIs of these microservices, at least two things must be taken care of:

  • A unified entry point for calling the microservice APIs

  • API authentication

This requires a service gateway. In 2015 we built a simple API gateway using RestTemplate + Ribbon. The principle: when the API gateway receives a request such as /service1/api1.do, it forwards the request to the api1 interface of the microservice corresponding to service1.
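
Our original gateway code is not shown here, but a stripped-down sketch of the same idea (GET only, no header or error handling, and assuming a @LoadBalanced RestTemplate bean like the one shown earlier) might look like this:

```java
import javax.servlet.http.HttpServletRequest;

import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

// Treat the first path segment as the target service name and forward the rest of the path as-is.
@RestController
public class SimpleGatewayController {

    private final RestTemplate restTemplate;

    public SimpleGatewayController(@LoadBalanced RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    @GetMapping("/{service}/**")
    public String forward(@PathVariable String service, HttpServletRequest request) {
        // e.g. /service1/api1.do -> http://service1/api1.do (Ribbon resolves service1 via Eureka)
        String remainder = request.getRequestURI().substring(service.length() + 1);
        return restTemplate.getForObject("http://" + service + remainder, String.class);
    }
}
```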

Later we found that Spring Cloud Zuul already did what we had implemented, and did it better, so we switched to Zuul. Zuul is a Java-based server-side API gateway and load balancer developed by Netflix.

In addition, Zuul can dynamically load, compile, and run filters. Most surprisingly, Zuul's forwarding performance is said to be comparable to Nginx's. Details can be found at https://github.com/Netflix/zuul.

In general, the API gateway (which you can think of as the portal's backend) handles reverse proxying, authentication and authorization, data tailoring, data aggregation, and so on.
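
As a small example of the authentication part, here is a minimal Zuul "pre" filter that rejects requests carrying no token; the header name, status code, and the simple presence check are assumptions for illustration rather than production logic (a real gateway would actually validate the token).

```java
import javax.servlet.http.HttpServletRequest;

import com.netflix.zuul.ZuulFilter;
import com.netflix.zuul.context.RequestContext;
import org.springframework.stereotype.Component;

// Runs before routing; blocks requests that do not carry an auth token.
@Component
public class AuthPreFilter extends ZuulFilter {

    @Override
    public String filterType() {
        return "pre";                                // run before the request is routed
    }

    @Override
    public int filterOrder() {
        return 1;
    }

    @Override
    public boolean shouldFilter() {
        return true;                                 // apply to every request
    }

    @Override
    public Object run() {
        RequestContext ctx = RequestContext.getCurrentContext();
        HttpServletRequest request = ctx.getRequest();
        String token = request.getHeader("X-Auth-Token");
        if (token == null || token.isEmpty()) {
            ctx.setSendZuulResponse(false);          // do not forward to the backend service
            ctx.setResponseStatusCode(401);
            ctx.setResponseBody("unauthorized");
        }
        return null;
    }
}
```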

  Management-side integration framework

After mastering the registry, service discovery, load balancing, and service gateway technologies, the microservices can already provide reliable services to the portal system and the mobile app. But how is the management side used by back-office operators implemented?

Since the load on the back-office management system is low, we use CAS and UPMS (UPMS is a user and permission management system developed by our team to fit the microservice architecture; we will share it on the Qingliuyun official website, so feel free to follow it) to integrate the separately developed microservices.

The basic three-step process for integrating a microservice is as follows:

  1. Introduce the Spring Boot-based security starter into the microservice; the starter includes the system's top banner and left-hand menu.

  2. Register the access address of the microservice in UPMS, and use this address as the entry menu (first-level menu) of the microservice.

  3. Configure the function menu and role permission information of the microservice in UPMS.

When a user opens a microservice in the browser, the security starter calls the UPMS API to pull the list of all microservices (first-level menu) and the function list of the current microservice (second-level menu), and then renders the current microservice's pages in the content area for the user.
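
UPMS is an in-house system, so its real API is not shown here. Purely as a hypothetical sketch, the menu pull performed by the security starter could look something like the client below; the endpoints, DTO, and field names are invented for illustration and do not reflect the real UPMS interface.

```java
import org.springframework.web.client.RestTemplate;

// Hypothetical client used by the security starter to fetch menu data from UPMS.
public class UpmsMenuClient {

    private final RestTemplate restTemplate = new RestTemplate();
    private final String upmsBaseUrl;

    public UpmsMenuClient(String upmsBaseUrl) {
        this.upmsBaseUrl = upmsBaseUrl;              // e.g. the UPMS address behind the gateway
    }

    // First-level menu: the list of registered microservices.
    public MenuItem[] fetchServiceMenu() {
        return restTemplate.getForObject(upmsBaseUrl + "/api/menus/services", MenuItem[].class);
    }

    // Second-level menu: the function list of one microservice, filtered by the user's roles.
    public MenuItem[] fetchFunctionMenu(String serviceId) {
        return restTemplate.getForObject(
                upmsBaseUrl + "/api/menus/services/" + serviceId + "/functions", MenuItem[].class);
    }

    // Minimal menu entry: an id, a display title, and the URL the menu links to.
    public static class MenuItem {
        public String id;
        public String title;
        public String url;
    }
}
```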

Application Architecture Diagram:

  

UPMS screenshot: the orange part is provided by the UPMS framework, and the area in the red box is the microservice's own page:

UPMS accesses new microservices through the "module" function:

So, in the end, a system based on the simple pattern of the microservice architecture looks roughly like this:

At this point, the basic microservice architecture has been set up. Next, let's talk about how to solve the operations and maintenance problems of microservices.


  Difficult to operate and maintain

The operations difficulty of the microservice architecture is mainly relative to the monolithic architecture: after adopting microservices, the system has many more modules than before, and more modules mean more deployment and maintenance work. So, to tackle the operations problem, we can start from the angle of automation.

Going further, if you want to make the most of the microservice architecture's advantages and avoid its shortcomings, it is recommended to prepare a reliable infrastructure covering automated build, automated deployment, a log center, health checks, performance monitoring, and so on.

Otherwise, the shortcomings of the microservice architecture may well cause the team to lose confidence in it and fall back onto the old road of the monolithic architecture. As the saying goes, a craftsman who wants to do good work must first sharpen his tools. This really matters.

  Continuous Integration

After a monolithic application is split into microservices, the original single package may well become 10, 20, or even more packages, so the first trouble we ran into was that the deployment workload grew directly by a factor of 10 to 20. At this point, continuous integration methods and tools become a prerequisite for implementing the microservice architecture. In practice, we use a Docker-based container service platform to deploy the microservices of the entire system automatically. The process is as follows:

  

  Configuration Center

Configuration items are named by project, environment, and microservice, so that the same package can be promoted between environments without repackaging, for example:

ProjectA_PRODUCTION_MicroService1_jdbc.connection.url

  Monitoring and alerting

In addition to the modules above, we also developed a module that checks application health and performance, and sends alerts to the operations staff when metrics such as host status, program health, or program performance become abnormal.
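
On the health-check side, if the services are Spring Boot applications, such a module can simply poll each service's Actuator health endpoint (/health, or /actuator/health on newer Boot versions). A minimal custom health check might look like the sketch below, assuming Spring Boot Actuator is on the classpath; the database probe is a placeholder assumption.

```java
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

// Contributes a "database" entry to the service's health endpoint.
@Component
public class DatabaseHealthIndicator implements HealthIndicator {

    @Override
    public Health health() {
        boolean reachable = pingDatabase();          // illustrative check; replace with a real probe
        if (reachable) {
            return Health.up().withDetail("database", "reachable").build();
        }
        return Health.down().withDetail("database", "unreachable").build();
    }

    private boolean pingDatabase() {
        // Placeholder: a real implementation might run "SELECT 1" against the service's DataSource.
        return true;
    }
}
```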

Finally

To wrap up, let's look back. At the development level we only need to understand the registry, service discovery, load balancing, the service gateway, and the management-side integration framework; at the operations level we need to prepare continuous integration tools, a configuration center, and monitoring and alerting tools. With these in place, we can implement the microservice architecture with relative ease.
