Microservice Architecture from 0 (2): How to Quickly Experience a Microservice Architecture?

I remember a guru saying, a long time ago: if you can't even build a monolithic architecture well, don't attempt a microservice architecture. At first glance this sounds reasonable, but I later realized it is not quite right: since the purpose of the microservice architecture is to reduce the complexity of the system, the microservice architecture should actually be simpler and easier to practice than the monolithic one.

In this article, we will share how to build the simple pattern of the microservice architecture.

What is the simple pattern of the microservice architecture?

Unlike the large Internet platforms that handle tens of thousands of concurrent visits or release new versions several times a day, most enterprises and projects have no such requirements. What they care about is how to improve development efficiency, implement new requirements faster, and operate and maintain the system more conveniently.

The simple pattern of the microservice architecture is a software architecture solution aimed at exactly these requirements.

Compared with the "perfect" microservice architecture solution, the simple mode of the microservice architecture can temporarily ignore the distributed transaction technology to ensure data consistency, the configuration center components that facilitate the migration of packages between environments (development, testing, production), monitoring Call chain components for API calls, circuit breaker components to avoid system overload, API documentation frameworks for API management and testing, Zookeeper, Redis, and various MQs. Just focus on the often talked about  registries , service discovery , load balancing  , and  service gateways  .

How to implement the simple pattern of the microservice architecture?

When implementing a microservice architecture, the key is to amplify its advantages and overcome its disadvantages. As the comparison in the previous article, "Microservice Architecture from 0 (1): Revisiting the Microservice Architecture", showed, compared with the monolithic architecture the biggest disadvantages of the microservice architecture are that it is difficult to get started with and difficult to operate and maintain.

Let's look at how to tackle these two aspects and implement the simple pattern of the microservice architecture.

Difficult to get started

Compared with the traditional monolithic architecture, the microservice architecture introduces a lot of concepts at once, which can overwhelm newcomers. We therefore need to separate the wheat from the chaff and clarify which components we actually need and which are merely legend. Let's take a look at which components are necessary to develop a system with the microservice architecture.

First, let's talk about the four steps of developing with the simple pattern of the microservice architecture:

Step 1: Develop single-responsibility microservices using the organization's existing technology stack.

Step 2: Service providers register their address information in the registry, and callers pull service addresses from the registry.

Step 3: Expose the microservice APIs to the portal and the mobile app through a portal backend (service gateway).

Step 4: Integrate the management-side modules into a unified operation interface.

To carry out these four steps, the following basic technologies (required components) must be mastered.

  • Registry, service discovery, load balancing: corresponding to steps 1 and 2 above

  • Service gateway: corresponding to step 3 above

  • Management-side integration framework: corresponding to step 4 above

Registry, service discovery, load balancing

Unlike a monolithic architecture, a microservice architecture is a distributed mesh of fine-grained services, each with a single responsibility, that communicate through lightweight mechanisms. This immediately raises the problem of service registration and discovery: a service provider must register its address somewhere (the service registry), and a service caller must be able to find the address of the service it wants to call from that registry (service discovery). At the same time, service providers generally run as clusters, which introduces the need for load balancing.

Depending on where the load balancer (LB) sits, there are three main solutions for service registration, discovery, and load balancing:

Centralized LB solution

The first is the centralized LB solution: an independent LB sits between service consumers and service providers. The LB is usually a dedicated hardware device such as F5, or software such as LVS or HAProxy.

(figure: centralized LB solution)

When a service caller invokes a service, it sends the request to the LB, and the LB routes the request to a specific service instance according to some policy (such as round-robin, random, least response time, or least concurrency). The biggest problem with this solution is that there is an extra hop between the caller and the provider, and the LB can easily become the bottleneck of the whole system.

In-process LB solution

The second is the in-process LB solution. To address the shortcomings of centralized LB, it integrates the LB functionality into the service consumer's process in the form of a library. This solution is also called soft load balancing or client-side load balancing.

(figure: in-process LB solution)

The principle: a service provider reports its own address to the service registry and sends heartbeats to the registry at regular intervals, and the registry decides, based on those heartbeats, whether to remove the node from the registry. When a service caller invokes a service, it first pulls the service registration information from the registry and then calls a service node according to some policy.

In this case, even if the registry goes down, the caller can still route requests to the correct service based on the service addresses already pulled into memory. The biggest problem with this solution is that the service caller has to integrate the registry's client, so upgrading the registry's server in the future may also require upgrading every registry client.

Independent LB process on the host

(figure: independent LB process on the host)

The third is the independent-LB-process-per-host solution, a compromise proposed to address the shortcomings of the second solution. The principle is basically the same as the second solution; the difference is that the LB and service discovery functionality is moved out of the consumer process and runs as an independent process on the host. When one or more services on the host want to access a target service, they all use this independent LB process on the same host for service discovery and load balancing. A typical example of this approach is Airbnb's SmartStack service discovery framework. The biggest problem with this solution is that deployment, operation, and maintenance are more troublesome.

The above three descriptions are excerpted, with some additions, from Mr. Yang Bo's article "What basic frameworks do we need to implement microservices?". If you want a more detailed discussion of these three solutions, I recommend reading his article.

At present, with the rise and maturation of Netflix's microservice stack and Spring Cloud, the second solution has become our first choice. We recommend using Eureka as the service registry and Ribbon for client-side service discovery and load balancing.

The biggest advantage of this choice is that it is simple, practical, and controllable. There is no need to introduce Zookeeper or Etcd as an additional registry, deployment and operation are relatively simple, and from the code point of view it is also very easy to use.
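As a minimal sketch of how simple the code side is (assuming Spring Cloud Netflix with the Eureka client starter on the classpath; the service name "service1" and its /api1 endpoint are made up for illustration):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.web.client.RestTemplate;

// A service consumer that discovers providers via Eureka and load-balances via Ribbon.
// The registry address is configured in application.yml, e.g.
//   eureka.client.service-url.defaultZone: http://localhost:8761/eureka/
@SpringBootApplication
@EnableDiscoveryClient
public class ConsumerApplication {

    @Bean
    @LoadBalanced   // lets RestTemplate resolve service names against Eureka and spread calls via Ribbon
    RestTemplate restTemplate() {
        return new RestTemplate();
    }

    public static void main(String[] args) {
        SpringApplication.run(ConsumerApplication.class, args);
    }
}

// Elsewhere in the consumer, "service1" is a logical service name, not a host:
//   String body = restTemplate.getForObject("http://service1/api1", String.class);
```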

However, note that this solution is generally used for load balancing inside the local area network. If you want to load-balance services exposed to the Internet, use Nginx upstream instead.

The following are Eureka's most important parameter configurations; from these parameters you can also see how Eureka works.

(figure: Eureka's key parameter configuration)

Because of Eureka's registration and expiration mechanism, it takes nearly 2 minutes from startup until a service is fully available. Therefore, to speed up releases in the development and test environments, we changed the following parameters; in production they must be changed back.

(figure: Eureka parameter overrides for the development and test environments)
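The exact values are project-specific, but as an assumed illustration of the kind of development/test overrides meant here, a Eureka server can be started with shortened eviction and cache-refresh intervals (production should keep the defaults):

```java
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

// Dev/test-only Eureka server: evict dead instances and refresh caches faster so a
// restarted service becomes visible in seconds instead of minutes. These are standard
// Spring Cloud Netflix properties, not necessarily the exact ones used by the author.
@SpringBootApplication
@EnableEurekaServer
public class DevEurekaServer {
    public static void main(String[] args) {
        new SpringApplicationBuilder(DevEurekaServer.class)
            .properties(
                // sweep for expired leases every 5s instead of the default 60s
                "eureka.server.eviction-interval-timer-in-ms=5000",
                // in dev, actually evict instances even when many go down at once
                "eureka.server.enable-self-preservation=false",
                // refresh the read-only response cache every 5s instead of 30s
                "eureka.server.response-cache-update-interval-ms=5000")
            .run(args);
    }
}

// On the client side, eureka.client.registry-fetch-interval-seconds and
// eureka.instance.lease-renewal-interval-in-seconds are the usual knobs to shorten as well.
```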

The interface of the Eureka registry is as follows:

(figure: the Eureka registry console)

Details can be found in link 1 and link 2 .

Service gateway

Usually, a large system has many single-responsibility microservices. If the portal system or the mobile app is to call the APIs of these microservices, at least two things must be done:

  • Provide a unified entry point for calling the microservice APIs

  • API authentication

This requires a service gateway. In 2015 we built a simple API gateway with RestTemplate + Ribbon. The principle is that when the API gateway receives the request /service1/api1.do, it forwards the request to the api1 interface of the microservice corresponding to service1.
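The principle can be sketched roughly like this (an illustration only, not our original 2015 code; it reuses the @LoadBalanced RestTemplate shown earlier so that service names resolve through Eureka):

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

// A hand-rolled gateway: /{service}/{api}.do is forwarded to the microservice
// registered under {service}, which Ribbon load-balances across its instances.
@RestController
public class SimpleGatewayController {

    @Autowired
    private RestTemplate restTemplate;   // the @LoadBalanced bean from the consumer example

    @RequestMapping("/{service}/{api}.do")
    public String forward(@PathVariable String service, @PathVariable String api) {
        // e.g. GET /service1/api1.do  ->  GET http://service1/api1.do
        return restTemplate.getForObject("http://" + service + "/" + api + ".do", String.class);
    }
}
```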

Later we found that Spring Cloud Zuul implemented what we had built, only better, so we switched to Zuul. Zuul is a Java-based server-side API gateway and load balancer from Netflix.

In addition, Zuul can dynamically load, compile, and run filters. Most surprisingly, Zuul's forwarding performance is said to be comparable to Nginx's. Details can be found at https://github.com/Netflix/zuul.

In general, the API gateway (which can also be called the portal backend) is used for reverse proxying, authentication and authorization, data tailoring, data aggregation, and so on.

(figure: responsibilities of the API gateway)
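For instance, the authentication responsibility mentioned above can be handled by a Zuul "pre" filter. The sketch below is an assumed, very naive check on the Authorization header, not our actual implementation; register it as a Spring bean for Zuul to pick it up:

```java
import com.netflix.zuul.ZuulFilter;
import com.netflix.zuul.context.RequestContext;
import javax.servlet.http.HttpServletRequest;

// Rejects any request that carries no Authorization header before it is routed
// to the backing microservice.
public class AuthPreFilter extends ZuulFilter {

    @Override
    public String filterType() {
        return "pre";              // run before the request is routed
    }

    @Override
    public int filterOrder() {
        return 1;
    }

    @Override
    public boolean shouldFilter() {
        return true;               // apply to every request
    }

    @Override
    public Object run() {
        RequestContext ctx = RequestContext.getCurrentContext();
        HttpServletRequest request = ctx.getRequest();
        if (request.getHeader("Authorization") == null) {
            ctx.setSendZuulResponse(false);   // do not forward to the backing service
            ctx.setResponseStatusCode(401);
        }
        return null;
    }
}
```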

Management-side integration framework

Having mastered the registry, service discovery, load balancing, and the service gateway, the microservices can already provide reliable services to the portal system and the mobile app. But how is the management side used by back-office operators implemented?

Because the load on back-office systems is low, we use CAS and UPMS to integrate the separately developed microservices. (UPMS is a user and permission management system our team developed to fit the microservice architecture; we will share it on the Qingliuyun official website, so stay tuned.)

The basic process of integrating a microservice takes three steps:

  1. Introduce the Spring Boot-based security starter into the microservice; the starter provides the system's top banner and left-hand menu.

  2. Register the microservice's access address in UPMS and use that address as the microservice's entry menu (first-level menu).

  3. Configure the microservice's function menus and role permission information in UPMS.

When a user opens a microservice in the browser, the security starter calls the UPMS API to pull the list of all microservices (first-level menus) and the function list of the current microservice (second-level menus), and displays the current microservice's pages to the user in the content area.

Application Architecture Diagram:

(figure: application architecture)

A UPMS screenshot; the orange part is provided by the UPMS framework, and the area in the red box is the microservice's own page:

(figure: UPMS screenshot)

UPMS brings new microservices in through its "module" function:

(figure: adding a microservice through the UPMS "module" function)

So, in the end, a system built on the simple pattern of the microservice architecture can look like this:

(figure: overall system architecture under the simple pattern)

At this point, the basic microservice architecture has been set up. Next, let's talk about how to solve the operation and maintenance problems of microservices.

Difficult to operate and maintain

The operation and maintenance difficulty of the microservice architecture is mainly relative to the monolithic architecture: once the microservice architecture is adopted, the system has far more modules than before, and more modules mean more deployment and maintenance work. Therefore, to solve the problem of difficult operation and maintenance, we can start from the angle of automation.

Going further, to better exploit the advantages of the microservice architecture and avoid its shortcomings, it is recommended to prepare a reliable infrastructure, including automatic build, automatic deployment, a log center, health checks, performance monitoring, and so on.

Otherwise, it is very likely that, dragged down by the shortcomings of the microservice architecture, the team will lose confidence in it and return to the old road of the monolithic architecture. As the saying goes, to do a good job a worker must first sharpen his tools. This really matters.

Continuous Integration

After a monolithic application is split into microservices, the original single package very likely becomes 10, 20, or even more packages. The first thing that causes us trouble, then, is that the deployment workload grows by a factor of 10 to 20. At this point, continuous integration methods and tools become a prerequisite for implementing a microservice architecture. In practice, we use a Docker-based container service platform to automatically deploy the microservices of the entire system. The process is as follows:

(figure: automated deployment process on the container platform)

If there is no microservice support platform, the Jenkins API and the Docker API can also be called directly from shell scripts (a rough sketch of the API calls follows the list below).

The main process is:

  1. Call Jenkins to pull the code from the code repository and build and package it.

  2. Call the Docker /build and /images/push APIs to build the image and push it to the private registry.

  3. Call the Docker /containers/create and /containers/start APIs to create and start the containers.
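As a rough, hedged sketch of step 3, here is what the calls to the Docker Engine remote API might look like, written in Java rather than shell purely for illustration. The daemon address (tcp://localhost:2375), image name, and container name are assumptions to adapt to your environment; in real code, parse the JSON responses with a proper JSON library.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Create a container from an already built and pushed image, then start it.
// Assumes the Docker daemon exposes its HTTP API on localhost:2375 without TLS.
public class DeployContainer {
    private static final String DOCKER = "http://localhost:2375";
    private static final HttpClient http = HttpClient.newHttpClient();

    public static void main(String[] args) throws Exception {
        // POST /containers/create — the body declares which image to run
        String body = "{\"Image\":\"registry.example.com/service1:1.0\"}";
        HttpRequest create = HttpRequest
                .newBuilder(URI.create(DOCKER + "/containers/create?name=service1"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> created = http.send(create, HttpResponse.BodyHandlers.ofString());

        // the response JSON looks like {"Id":"...","Warnings":[]}; use a JSON library in real code
        String id = created.body().replaceAll(".*\"Id\"\\s*:\\s*\"([0-9a-f]+)\".*", "$1");

        // POST /containers/{id}/start — an empty body starts the container
        HttpRequest start = HttpRequest
                .newBuilder(URI.create(DOCKER + "/containers/" + id + "/start"))
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
        http.send(start, HttpResponse.BodyHandlers.ofString());
    }
}
```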

Configuration Center

In the development/test environment, the program has already been packaged into a Docker image. If the image that passes testing can be pushed directly to the production environment, the repeated packaging and deployment work for production can be skipped entirely. Wouldn't that be nice?

To achieve this, the package must be environment-independent; that is, it must contain no environment-specific configuration. This is what introduces the configuration center component.

This component is very simple: it just returns the key-value pairs a microservice needs, looked up by project code, environment code, and microservice code. For example:

ProjectA_PRODUCTION_MicroService1_jdbc.connection.url
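A hypothetical client for this lookup might look as follows; the /config/{key} endpoint and the base URL are assumptions for illustration, not the interface of our actual configuration center:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Composes the key from project code, environment code, microservice code and
// property name, then fetches the value over HTTP from the configuration center.
public class ConfigCenterClient {
    private static final HttpClient http = HttpClient.newHttpClient();
    private final String baseUrl;     // e.g. http://config-center:8080 (placeholder)

    public ConfigCenterClient(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    public String get(String project, String env, String service, String key) throws Exception {
        // ProjectA + PRODUCTION + MicroService1 + jdbc.connection.url
        //   -> ProjectA_PRODUCTION_MicroService1_jdbc.connection.url
        String fullKey = project + "_" + env + "_" + service + "_" + key;
        HttpRequest request = HttpRequest.newBuilder(URI.create(baseUrl + "/config/" + fullKey)).GET().build();
        return http.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}
```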

Using a configuration center also has a very important added benefit: the configuration for different environments can be managed by different people, which strengthens the security of production configuration such as database accounts and passwords.

There are open-source projects to reference for this module, such as Baidu Disconf and Spring Cloud Config. We, however, carried forward the spirit of reinventing the wheel and developed our own configuration center microservice so that it integrates easily with the UPMS mentioned above.

Note: this component is not required for the simple pattern of the microservice architecture; it is only recommended.

Monitoring and alerting

After a monolithic application is split into many microservices, system health checks, performance monitoring, business indicator monitoring, file backup monitoring, database backup monitoring, and scheduled task execution monitoring all become more difficult.

Therefore, to let the operation and maintenance team sleep a little easier, it is best to build a monitoring platform. If you want to build one quickly, consider Nagios or Zabbix. If you want better scalability and customizability, consider assembling one from the following components:

(figure: monitoring platform components)

Collectd is a metrics collector for hosts, databases, networks, and storage. 1,653 stars on GitHub.

Metrics is a powerful JVM metrics library. It provides modules with auxiliary statistics for third-party libraries and applications such as Jetty, Logback, Log4j, Apache HttpClient, Ehcache, JDBI, and Jersey, and it can send metrics data to Ganglia and Graphite for graphical monitoring. 5,000+ stars on GitHub.

cAdvisor is a Docker container metrics collector from Google. 6,000 stars on GitHub.

Grafana is a beautiful open-source dashboard tool that supports multiple data sources such as Graphite, InfluxDB, MySQL, and OpenTSDB. 17,000 stars on GitHub.

InfluxDB is an excellent open-source distributed time-series database, currently ranked first among time-series databases. Among its features, retention policies, which automatically clean up unneeded historical data, are especially practical. 11,175 stars on GitHub.
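As a small taste of the Metrics library mentioned above, the following sketch times a piece of work and ships the numbers to Graphite (the Graphite address and metric name are placeholders):

```java
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Timer;
import com.codahale.metrics.graphite.Graphite;
import com.codahale.metrics.graphite.GraphiteReporter;
import java.net.InetSocketAddress;
import java.util.concurrent.TimeUnit;

// Requires metrics-core and metrics-graphite on the classpath.
public class MetricsExample {
    public static void main(String[] args) throws InterruptedException {
        MetricRegistry registry = new MetricRegistry();

        // push all registered metrics to Graphite every 10 seconds
        GraphiteReporter reporter = GraphiteReporter.forRegistry(registry)
                .convertRatesTo(TimeUnit.SECONDS)
                .convertDurationsTo(TimeUnit.MILLISECONDS)
                .build(new Graphite(new InetSocketAddress("graphite.example.com", 2003)));
        reporter.start(10, TimeUnit.SECONDS);

        // time a unit of work; rates and latency percentiles are computed automatically
        Timer timer = registry.timer("orders.create");
        try (Timer.Context ignored = timer.time()) {
            Thread.sleep(100);   // stand-in for real work, e.g. handling a request
        }

        reporter.report();   // flush once before the JVM exits
    }
}
```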

In addition to the above modules, we have also developed a module that checks application health and performance and sends alerts to operation and maintenance staff when host, application health, or application performance indicators become abnormal.

Summary

To wrap up, let's look back. At the development level we only need to understand the registry, service discovery, load balancing, the service gateway, and the management-side integration framework; at the operation and maintenance level we need to prepare continuous integration tools, a configuration center, and monitoring and alerting tools. With these, we can implement the microservice architecture with ease and enjoy the wonderful things it brings. Have a great time, everyone.
