Microservices: a microservice-style application in action with Angular + Node.js + Spanner + App Engine on GCP

1. Introduction

This is not really a microservice project! I didn't know what else to call it, so I had to abuse the term.
"The microservice architecture has an important rule: each microservice must have domain logic and data. Similar to the logic and data of a complete application, in the autonomous life cycle, microservices also have their own logic and data, and can be targeted for each Each microservice is deployed independently. " 

All services in this project share a single relational database, and there is no domain-driven design, so strictly speaking it cannot be regarded as a microservice project. Still, most of what a developer encounters is reinventing the wheel, and running into a project like this one, which contains multiple services and has actually gone into production, is at least a worthwhile experience.

This article refactors a traditional monolithic application into a modern multi-service cloud application, using microservice ideas to solve its problems. With a modest amount of refactoring, the existing monolith is split into multiple services. Each service can be developed, deployed, and scaled independently. They are deployed on Google's GCP cloud and together serve the application as a whole. This brings the following advantages:

(1) Maintainability: this kind of architecture provides long-term agility. The services are fine-grained and have independent life cycles, which gives complex, large, and highly scalable systems better maintainability;

(2) Cost savings: each service can be scaled out independently, so only the functions that really need more processing resources or network bandwidth are scaled, instead of scaling functional areas that don't need it along with them. Less hardware is used, which also means cost savings.

2. The monolithic application

This is a monolithic application with a separated front end and back end: the front end uses the Angular framework, and the back end uses Node.js + MySQL. This part is relatively simple and is not the focus of this article. The architecture diagram is as follows:

The web page itself is shown below.

As you can see, it is a very ordinary website. Now let's think about it: how do we split a large monolith, and do we need a comprehensive migration and refactoring?

3. Application Design

Microservices bring powerful advantages but also big new challenges. We will analyze the application using microservice concepts.

(1) How to design?

The microservice architecture pattern provides the basic support for creating microservice applications; in essence it is domain-driven design (DDD) plus container orchestration. In this project the front end was already separated, so we only need to split the back-end API into small microservices according to a few rules. There are currently more than 20 microservices, each a separate Node.js Web API application; they do not depend on each other and can be developed and deployed independently. Finally, these microservices are deployed to App Engine on GCP, which itself provides container orchestration and automatically scales services up and down.

  • Domain-driven design: we did not use domain-driven design (if anything, all services share one common domain model, which feels a bit odd).
  • Container orchestration: the mainstream choice today is Kubernetes (K8s), but we do not need to build it ourselves; the cloud vendors already provide good managed services such as Azure AKS and GCP GKE. They offer powerful container orchestration capabilities, but they also raise the cost of learning and operation. Here we use the more integrated PaaS service App Engine, which already includes container orchestration; we are only responsible for uploading the code, and everything else is handled by the cloud service.
  • Communication between services (synchronous / asynchronous): there is none in this project yet (the figure below shows no links between the services); other workarounds are used instead, and this will be improved later.

The services deployed in App Engine are roughly as shown in the figure below. As the App Engine best practices show, it naturally supports multiple services; the red part is what is deployed in App Engine. If you want to know more about App Engine, see the link here.
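
As a rough sketch (the service directories and names below are hypothetical, not the project's real ones), each service is deployed from its own directory with its own app.yaml, and App Engine keeps them side by side:

# deploy two services independently; each directory holds one Node.js service and its app.yaml
gcloud app deploy website-shell/app.yaml --quiet
gcloud app deploy reminder-service/app.yaml --quiet

# list the services currently running in this App Engine application
gcloud app services list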

 

(2) How to split the service?

If there is any real technical substance in this article, it is this part. Splitting services is all about finding the right "degree". Anyone who has studied Marxist philosophy knows that "degree" is a philosophical category, ha. Still, there are some guidelines we can use to measure it.

The size of a microservice is not the point. The granularity of the split should ensure that each microservice is independent and complete from a business point of view, with as few service dependencies and chained calls as possible, so that each service can be developed, deployed, and scaled independently. From a plain programming perspective, it is decoupling. For example, when the code your team submits frequently conflicts with code submitted by other teams, or requires frequent coordination, isn't it time to consider splitting the service?

The core of splitting services is identifying the boundary of each microservice's domain model, and microservice theory itself stems from the bounded context (BC) pattern of domain-driven design (DDD). DDD is a good choice when splitting services, because it can be used to identify the bounded contexts. In essence, the deeper our understanding of the domain, the better we can adjust the size of the microservices and find the right granularity.

Splitting services into appropriately sized microservices usually cannot be achieved overnight. In practice, we can make the granularity coarser at the beginning of the design while keeping scalability in mind, and then split further as the business evolves.

Services can be split either by business capability or by domain-driven design (DDD).

There is a lot to say about service splitting, and it would take more articles to cover it properly. The links I have collected here are very good.

In this project, more than 20 microservices have been split out so far, and they were extracted gradually and progressively. At the beginning the split simply and crudely followed the menu structure, which can also be understood as a split along business logic; some services that were split out separately later include:

  • Access to the website, split out as a service
  • The website's shell, i.e. the application frame plus its basic data, split out as a service
  • A particular query page, which covers queries across different dimensions and involves almost no write operations, split out as a service
  • The website's reminder function, split out as a service
  • CRON JOB for scheduled tasks, split out as a service

As you can see, as long as a piece of functionality is independent and complete from a business point of view and does not depend on other services, it can be split out. Before splitting services, your DevOps must be in place first; this is a prerequisite for microservices.
 

(3) The architecture of a single service

Microservices are an idea, or a logical architecture; building microservices does not require any particular technology. Docker containers, for example, are not mandatory, and this project does not use Docker. As shown in the figure below, a single service needs only an app.yaml configuration file to be deployed with the GCP SDK; it is not packaged as a Docker image.
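
For illustration only, a minimal app.yaml for one of these Node.js services might look like the sketch below; the runtime version, service name, and values are assumptions, not taken from the project:

# app.yaml - a hedged sketch for a single Node.js service on App Engine standard
runtime: nodejs10            # assumed runtime; use the version your project actually targets
service: reminder-service    # hypothetical service name; omit it for the "default" service
env_variables:
  PROJECT_URL: "https://example.com"   # placeholder value
automatic_scaling:
  max_instances: 2

# it is then deployed with the GCP SDK, e.g.: gcloud app deploy app.yaml --quiet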

 (4) Why not use multiple databases?

A single service should have its own domain model (data + logic + behavior) to be a complete microservice, which means each service has its own database. That brings no small number of challenges:

  1. How to split the services. The database design should roughly match your domain model, so split the services first.
  2. How to build queries that fetch data from multiple microservices. Usually one service cannot directly access another service's database. For example, you need to generate a report that aggregates data from tables in different databases. There are two approaches: (1) call the services' APIs and aggregate the data; (2) use CQRS across the databases and generate the report data in advance, i.e. pre-build read-only tables or views. If this kind of aggregation happens frequently, you need to reconsider whether the earlier split was right and whether the services should be merged.
  3. How to achieve consistency across multiple databases. We usually do not use strong consistency, because it makes high availability and high scalability hard to achieve; instead we rely on eventual consistency, implemented through event-driven, asynchronous communication. Commonly used cloud services are Azure Service Bus and GCP Pub/Sub (see the sketch right after this list).
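
This project does not use event-driven communication yet, but as a rough sketch of the infrastructure side, the Pub/Sub topic and subscription for such asynchronous messaging could be created like this (the names are made up for illustration):

# hedged sketch: a topic for domain events and a pull subscription for a consuming service
gcloud pubsub topics create order-events
gcloud pubsub subscriptions create report-service-sub --topic=order-events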

As you can see, solving these problems takes extra effort, and they are not easy to solve. This project does not use multiple databases, because it is not large and there is no domain model; it uses a single centralized database, GCP's Spanner, which is a distributed relational database. All of our project's services share this database.

Microservices are a double-edged sword!

If your project is more complex, a table may have dozens or even hundreds of columns, and some relatively independent business areas will not use all of them; in other words, different domain models use different fields. In that case you can consider switching to multiple databases.

If your project is not complex, or the team does not yet have the skills to use a microservice architecture, a monolithic application or a pseudo-microservice variant like this project is a better fit.

(5) How to store application data?

Cloud services can be used here; every cloud vendor provides a corresponding object storage service (a small GCP sketch follows the list below).

  • AWS: S3 storage can be used
  • Azure: Storage Account (Blob) storage can be used
  • GCP: Cloud Storage can be used
  • Alibaba Cloud: OSS storage can be used
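
For GCP, which this project uses, uploading application files to object storage is roughly as follows; the bucket name and file path are hypothetical:

# hedged sketch: create a bucket and upload a file with the Cloud SDK
gsutil mb -l us-central1 gs://my-app-uploads
gsutil cp ./report.pdf gs://my-app-uploads/reports/report.pdf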

(6) How to use API gateway?

An API gateway mainly provides a single entry point for multiple microservices. Cloud services can be used here too; the cloud vendors provide corresponding API gateway services.

  • AWS: API Gateway can be used
  • Azure: API Management can be used

In this project, since App Engine is used, routing and load balancing are built in; the dispatch.yaml file defines the routing rules. If you want to know more about App Engine, see the link here.
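
As an illustration only (the URL patterns and service names are assumptions, not the project's real rules), a dispatch.yaml looks roughly like this:

# dispatch.yaml - a hedged sketch of App Engine routing rules
dispatch:
  - url: "*/api/reminder/*"
    service: reminder-service
  - url: "*/api/query/*"
    service: query-service
# anything that matches no rule falls through to the default service
# the rules are deployed separately with: gcloud app deploy dispatch.yaml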

(7) Service communication and service governance?

If you want to use these, you actually need to spend considerable effort on such problems. To understand the principles of service-to-service communication, you can refer to the following link.

If you want to understand service governance, I suggest taking a look at Istio.

But this project uses App Engine, which has these capabilities built in as well. If you want to know more about App Engine, see the link here.

 

(8) Deployment with DevOps

This project uses Azure DevOps tools to implement CI / CD.

(1) Pipelines -> Library is mainly used to store the sensitive information the application needs, such as environment variables, parameters, and files: passwords, database connection strings, key files, and so on. This sensitive data should normally not be kept in the source code; it goes here instead. We first define a Variable group for each environment in the Library, holding the environment variables the application needs at run time. We then store the GCP service account's JSON file, which contains the private key, in Secure files; with it we can deploy the application to GCP through the GCP SDK.
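
As a sketch (the group and file names here are made up), a pipeline can pull in a Library variable group and download a secure file like this:

# hedged sketch: reference a Library variable group and a Secure file from a pipeline
variables:
  - group: gcp-prod-variables          # hypothetical Variable group defined in Pipelines -> Library

steps:
  - task: DownloadSecureFile@1         # downloads the GCP service account json stored in Secure files
    name: gcpKey
    inputs:
      secureFile: gcp-service-account.json   # hypothetical file name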

(2) Pipelines -> Pipelines is used for continuous integration (CI). We use it to package the program, run unit tests and test coverage, and so on, and finally build the Artifact we need. This project defines the pipeline with a YAML file kept in the source code; see azure-pipelines.yml.
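
The project's real pipeline lives in azure-pipelines.yml; a minimal sketch of such a CI pipeline for one Node.js service (the trigger branch, Node version, and paths are assumptions) could be:

# hedged sketch of a CI pipeline for a single Node.js service
trigger:
  - master

pool:
  vmImage: ubuntu-latest

steps:
  - task: NodeTool@0
    inputs:
      versionSpec: '10.x'              # assumed Node.js version
  - script: npm ci && npm test         # install dependencies and run unit tests
  - task: ArchiveFiles@2               # zip the build output into an Artifact
    inputs:
      rootFolderOrFile: '$(Build.SourcesDirectory)'
      archiveFile: '$(Build.ArtifactStagingDirectory)/service.zip'
  - task: PublishBuildArtifacts@1
    inputs:
      PathtoPublish: '$(Build.ArtifactStagingDirectory)'
      ArtifactName: 'release'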

(3) Pipelines -> Releases is used for deployment (CD): it deploys the Artifact produced by CI to the corresponding environment, for example to a virtual machine or to some PaaS service, so that our application can be reached by IP or domain name. In this project the Artifact is deployed to GCP's App Engine. The bash scripts are listed below.

# Step 1: In the release definition, when selecting the agent, choose the artifact zip package you want downloaded

# Step 2: In the release definition, create a task that downloads the service account file

# Step 3: Unzip the Artifact. If you don't know the actual path behind a variable like $AGENT_RELEASEDIRECTORY, run a Release first: the Initialize job stage lists all variables and their values. You can also check the official docs, which list the predefined variables.
if [ -f $AGENT_RELEASEDIRECTORY/Artifacts/release/api-gateway-$(ApiGateway_Srv_Name)-$(Release.Artifacts.Artifacts.BuildNumber).zip ]; then
    cd $AGENT_RELEASEDIRECTORY/Artifacts/release && unzip api-gateway-$(ApiGateway_Srv_Name)-$(Release.Artifacts.Artifacts.BuildNumber).zip
else
    echo "File not found to extract : $AGENT_RELEASEDIRECTORY/Artifacts/release/api-gateway-$(ApiGateway_Srv_Name)-$(Release.Artifacts.Artifacts.BuildNumber).zip"
    exit 1
fi

# Step 4: In the release definition, create a task that installs the GCP SDK

# Step 5: Deploying to App Engine requires the app.yaml file, which defines the runtime and the environment variables the service needs, so we must write the environment variables defined in the Library into app.yaml. We set a placeholder for each environment variable in app.yaml beforehand, and in this step we replace those placeholders with the real values. For example: PROJECT_URL is a variable defined in the Library, $(PROJECT_URL) is its value, and [PROJECT_URL] is the placeholder set in app.yaml.

cd $AGENT_RELEASEDIRECTORY/Artifacts/release/dist_archive

ls -ail && pwd

echo "************ Replace PROJECT_URL ************"
sed -i -e "s/\[PROJECT_URL\]/$(PROJECT_URL)/g" "./app.yaml"
echo "************ Replace CLIENT_ID************"
sed -i -e "s/\[CLIENT_ID\]/$(CLIENT_ID)/g" "./app.yaml"


# Step 6: Activate the service account, then deploy the artifact to App Engine.

cd $AGENT_RELEASEDIRECTORY/Artifacts/release/dist_archive

ls -ail && pwd

gcloud config set verbosity debug

cp $(Agent.TempDirectory)/$(GCP_CREDENTIAL_FILE) ./keyfile.json

# activate the service account
gcloud auth activate-service-account $(SR_ACCT_CLIENT_EMAIL) --key-file=./keyfile.json --project $(GCP_PROJECT_ID)

# set this service account as the current account
gcloud config set core/account $(SR_ACCT_CLIENT_EMAIL)

# $(SERVICE_FILE_PATH) is actually the app.yaml file; you can also hard-code it here.
SERVICE_FILE_PATH="$(SERVICE_FILE_PATH)"
if [ -z "$(CRON_SERVICE_FILE_PATH)" ]
then
    echo "SERVICE_FILE_PATH is: $SERVICE_FILE_PATH"
    # deploy the artifact to App Engine - this single line is what all the previous steps prepared for
    gcloud app deploy $SERVICE_FILE_PATH  --quiet
else
    SERVICE_FILE_PATH="$SERVICE_FILE_PATH $(CRON_SERVICE_FILE_PATH)"
    echo "SERVICE_FILE_PATH is: $SERVICE_FILE_PATH"
    gcloud app deploy $SERVICE_FILE_PATH  --quiet
fi
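
For reference, the placeholder section that step 5 rewrites might look like this in app.yaml (a sketch, not the project's actual file):

# app.yaml fragment with the placeholders that the sed commands above replace
env_variables:
  PROJECT_URL: "[PROJECT_URL]"
  CLIENT_ID: "[CLIENT_ID]"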

The three parts above define how to deploy a service to App Engine with one click using CI/CD. Every service needs to be set up this way, so if you have 20 services you need 20 such three-part definitions. For the third part, which contains 6 tasks, you can abstract those 6 tasks into a Task group.

As mentioned above, CD deploys with the GCP SDK. Terraform could also be used; it is an infrastructure-as-code automation and orchestration tool, and I will introduce it in the near future.

 

(9) How to deal with authentication and authorization

This project uses Azure's Azure AD authentication service; see here to learn more.

4. The code

The code is extracted from the actual project, with sensitive information removed and parts of it reworked to be more general so it can be reused next time. A detailed description can be found in the readme in the code, which covers:

  • The CI part of DevOps
  • How to run a single service locally
  • How to run all services locally
  • How to verify Azure AD token
  • How to set up CORS

 

5. Discussion and summary

1. Question: if the back-end API is split into more than 20 services, the front-end website depends on all of them to work properly. When developing locally, does a developer really have to run all 20-plus services? If you are doing .NET development and each service corresponds to a WebApi project, how do you run more than 20 WebApi projects at the same time?

Answer: this project uses Node.js, and a single project can run either independently or as a sub-project, so this problem is handled well; see the readme in the code for details. What about non-Node.js applications? If you have a better solution, please leave a comment.
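
This project handles it inside the Node.js code itself (see the readme); a cruder, framework-agnostic sketch is simply to start each service on its own port, where the paths and ports below are hypothetical:

# hedged sketch: run a few services side by side for local development
PORT=8081 node services/shell/server.js &
PORT=8082 node services/query/server.js &
PORT=8083 node services/reminder/server.js &
wait   # keep the shell attached until the background services exit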

2. Question: What cloud services have we used?

Answer: (1) AWS S3 hosts the front-end website, together with Lambda and CloudFront. (2) App Engine on GCP hosts all the back-end services, together with Cloud Storage, IAM, and the Log Viewer. (3) Azure AD is used for authentication, and Azure DevOps hosts the code and runs continuous integration. As you can see, we already use a great many cloud services almost without noticing. Building a modern application is inseparable from the cloud; like water, electricity, and gas, it has quietly seeped into our work and life.

Reference links

Microservice-based containerized application: eShopOnContainers

.NET architecture book series

Various clever tricks to teach you how to move to the cloud

 
