.NET Core Microservice Architecture Technology Stack

I. Introduction

Everyone has been talking about microservice architecture lately, and there are plenty of microservice articles around the community. A few days ago some friends asked me about the technologies behind a microservice architecture, so here I will share and discuss them with you, and give newcomers a deeper understanding of microservice-related technologies.

II. The technology stack

2.1 If a worker wants to do his job well, he must first sharpen his tools

In the Internet age, new Internet products emerge one after another, and every popular product has a strong technical team behind it. Here I share the .NET microservice architecture technology stack as follows:

As the saying goes, if a worker wants to do his job well, he must first sharpen his tools. A good engineer should be good at using frameworks and tools. Technology selection for microservices is not easy; it is refined over long and hard project experience.
Below I will walk through the main frameworks and tools in the stack and their usage scenarios one by one; this article will not cover hands-on examples for each.

2.2 Microservices

How "micro" should microservices be?

Microservices, of course, are all about the "micro": how do you split a system, and how small should the pieces be? When I first came to Hangzhou, I worked on an e-commerce system with a monolithic architecture. The system was fairly large, bundling every business scenario an e-commerce site needs, and the code was hard to maintain: many people had taken it over before me, the coupling was too high, and changing one piece of business logic touched everything. I shared the original architecture diagram of this e-commerce system last month in my article on the practical application of the IdentityServer4 authorization center in ASP.NET Core, as follows:

Now let's talk about how to "micro" this architecture.

In principle, we can break this monolithic architecture into "micro" services as follows:

  • Split by business, one business one service, so that each service is a reusable business module with high cohesion and low coupling between businesses. Later, when you want to change one business, you only need to change that business's microservice, and the other businesses are not affected.
  • One business module, one independent database; parallel services do not call each other directly.
  • The outer API gateway integrates the business logic.
  • One business database per microservice.
  • Combined with distributed services, you can iterate quickly and release smoothly at any moment, without waiting until midnight to deploy. (Painful releases drag on and on: for a while we had several painful releases every week, some finishing at 4 or 5 in the morning. Often, after a release went through rounds of testing, we finally found a problem that had to be fixed online, and by the time we went home our colleagues were already arriving for work. Our tech lead put it this way: "I haven't seen my son for a week. When I get home he has long been asleep; when I get up for work he has already left for school." Everyone has probably had a release experience like that.)

Following the principles above, the original e-commerce monolith becomes the microservices architecture shown in the diagram below.

The architecture diagram is a rough sketch to convey the idea; the Docker and k8s pieces around the microservices are only summarized, without specific detail.

Microservice cluster

Now that the microservices are "micro", you need a registry for service discovery, and Consul is used here. Consul is mainly used for service registration, service discovery, and service health checks. We can scale specific business services on demand, adding servers to expand a service cluster. If a service instance goes down, Consul automatically selects an available service node to connect to, so the overall stability of the e-commerce system is greatly improved.
For more detailed Consul features and setup, you can read the article Consul Features and Construction under Microservice Architecture in 5 Minutes.
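As a concrete illustration, here is a minimal sketch of a Consul agent service definition with an HTTP health check. The service name, address, and port are made up for this example; in practice they come from your own deployment.

```json
{
  "service": {
    "name": "order-service",
    "address": "10.0.0.12",
    "port": 5001,
    "tags": ["grpc", "v1"],
    "check": {
      "http": "http://10.0.0.12:5001/health",
      "interval": "10s",
      "timeout": "3s"
    }
  }
}
```

When the health endpoint stops responding, Consul marks the node unhealthy and discovery stops returning it, which is what lets callers fail over to the remaining instances.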

How do microservices ensure data consistency?

In the old monolithic application, coupled businesses guaranteed data consistency through database transactions, so how can microservices achieve it? As mentioned above, microservices should have no dependencies between services, with each business an independent service. How, then, do we keep data consistent across businesses? This is a real problem, and one of the industry's most debated topics around microservices: how do you guarantee data consistency?

Distributed systems are governed by the CAP theorem: any distributed system can satisfy only two of consistency, availability, and partition tolerance. For a distributed system, partition tolerance is a basic requirement, otherwise the system loses its value, so you can only choose between availability and consistency. If you choose consistency, you pay the price of blocking other concurrent accesses until consistency is satisfied, which may last an indefinite amount of time, especially when the system already shows high latency or a network failure has broken connections. Based on current industry experience, availability is generally the better choice, but keeping data consistent across services and databases is still a fundamental requirement, so microservice architectures choose to satisfy eventual consistency.

Eventual consistency means that all data replicas in the system reach a consistent state after a period of time.
The "period of time" here must also be acceptable to the user.

At its core, consistency means that all services involved in one piece of business logic either all succeed or all fail. How do we choose which to guarantee? That choice has to follow the business model. There are three common patterns for achieving eventual consistency: the reliable event pattern, the business compensation pattern, and the TCC pattern. I will not expand on them here; we will have a chance to study them later.
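To make the reliable event pattern less abstract, here is a minimal in-memory C# sketch of its core idea (often called an "outbox"): the business change and the event record are committed in the same local transaction, and a relay later publishes pending events, retrying until the broker accepts them. All type and member names here are illustrative, not from any framework; real implementations persist both lists in the service's own database.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class OutboxEvent
{
    public Guid Id { get; } = Guid.NewGuid();
    public string Payload { get; set; } = "";
    public bool Published { get; set; }
}

public class OrderService
{
    public List<string> Orders { get; } = new List<string>();
    public List<OutboxEvent> Outbox { get; } = new List<OutboxEvent>();

    public void PlaceOrder(string orderId)
    {
        // In a real system both writes happen inside one database transaction,
        // so the event is never lost and never published without the order.
        Orders.Add(orderId);
        Outbox.Add(new OutboxEvent { Payload = "OrderPlaced:" + orderId });
    }

    // The relay: publish pending events, marking each one only after
    // the broker call returns without throwing.
    public int PublishPending(Action<string> publish)
    {
        var pending = Outbox.Where(e => !e.Published).ToList();
        foreach (var e in pending)
        {
            publish(e.Payload); // may throw; the event stays pending and is retried
            e.Published = true;
        }
        return pending.Count;
    }
}
```

Because a crash between publishing and marking can deliver the same event twice, this gives at-least-once delivery, so downstream consumers must be idempotent.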

2.3 Microservices open source framework

For the microservice framework I use the open source framework core-grpc; the repository is at https://github.com/overtly/core-grpc
Earlier I shared an article about upgrading a .NET Core business platform with core-grpc, which briefly covers the basic concepts and the pros and cons of microservices, so I won't repeat that here. For the concrete application, please read [.net core] Microservice architecture application of e-commerce platform upgrade (core-grpc).

2.4 ORM framework

The ORM used in the microservices is Dapper, via the third-party open source component core-data, a wrapper the open source author built around Dapper.

core-data main advantages:

  • The author recommends developing with DDD (domain-driven design) ideas
  • Supports multiple databases; adding a connection only requires simple configuration
  • Supports table sharding, with customizable sharding strategies
  • Supports expression-based queries, reducing the mechanical work of hand-writing SQL statements
  • Extensible on top of Dapper
  • Performance depends on Dapper itself; Dapper is a lightweight ORM whose official benchmarks outperform other ORMs
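Since core-data wraps Dapper, at the bottom the queries look like ordinary Dapper calls. Here is a hedged sketch using Dapper's own extension methods (not core-data's API, which I won't guess at); the table, columns, and repository name are made up for illustration, and any ADO.NET `IDbConnection` works.

```csharp
using System.Data;
using System.Linq;
using Dapper; // adds Query<T> and friends as extension methods on IDbConnection

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public static class ProductRepository
{
    // A parameterized query: the anonymous object supplies the @MinPrice
    // parameter, so no SQL string concatenation is needed.
    public static Product[] FindExpensive(IDbConnection conn, decimal minPrice)
    {
        return conn.Query<Product>(
            "SELECT Id, Name, Price FROM Products WHERE Price >= @MinPrice",
            new { MinPrice = minPrice }).ToArray();
    }
}
```

Dapper maps the result columns onto `Product` properties by name, which is where most of the "reduce mechanical work" benefit comes from.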

2.5 Distributed tracking system

As microservice architecture becomes popular, some of its problems become more and more prominent. A single request may involve multiple services, and those services may in turn depend on other services, so the whole request path forms a mesh-like call chain; once any node in that chain misbehaves, the stability of the entire chain is affected. You come to feel deeply that there is no "silver bullet": every architecture has advantages and disadvantages.

For situations like this, we need tools that help us understand system behavior and analyze performance problems, so that when a failure occurs it can be located and resolved quickly. This is where APM (application performance management) tools come in.
Some of the main APM tools today are Cat, Zipkin, Pinpoint, and SkyWalking. Here we mainly introduce SkyWalking, an excellent domestically developed APM tool covering distributed tracing, performance metric analysis, and application and service dependency analysis.

2.6 System log integration

In a large system, a log system is needed to troubleshoot problems and record relevant sensitive information. The ExceptionLess log system is chosen here: logs are written to Elasticsearch, and a visual UI supports log management and queries, so commonly encountered problems can be checked directly through the log management backend.

2.7 Message Queue

Message queue middleware is an important component of a distributed system. It mainly solves application coupling, asynchronous messaging, and traffic peak shaving, helping achieve a high-performance, highly available, scalable, and eventually consistent architecture. The most widely used message queues are ActiveMQ, RabbitMQ, ZeroMQ, Kafka, MetaMQ, and RocketMQ.
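To show what "decoupling through a queue" looks like in code, here is a minimal sketch of publishing a message with the RabbitMQ .NET client (`RabbitMQ.Client`). The queue name, host, and message payload are illustrative; the publisher neither knows nor cares which service will consume the event.

```csharp
using System.Text;
using RabbitMQ.Client;

public static class OrderEventPublisher
{
    public static void Publish(string message)
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            // Durable queue: messages survive a broker restart.
            channel.QueueDeclare(queue: "order-events",
                                 durable: true,
                                 exclusive: false,
                                 autoDelete: false,
                                 arguments: null);

            var body = Encoding.UTF8.GetBytes(message);
            // Empty exchange + routing key = send straight to the named queue.
            channel.BasicPublish(exchange: "",
                                 routingKey: "order-events",
                                 basicProperties: null,
                                 body: body);
        }
    }
}
```

The consumer side subscribes to the same queue and processes messages at its own pace, which is exactly the asynchronous, peak-shaving behavior described above.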

2.8 Task scheduling

Quartz.Net is mainly used here for job scheduling. What is task scheduling for? For example, suppose we need to compute statistics over some data. Computing them in real time would require many multi-table join queries and hurt database performance, so instead we can use a scheduled job: at some point during the night, aggregate the previous day's data.
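A minimal Quartz.NET (3.x) sketch of the nightly-statistics scenario might look like this; the job name, cron time, and the statistics work itself are hypothetical placeholders.

```csharp
using System;
using System.Threading.Tasks;
using Quartz;
using Quartz.Impl;

// The job body: in a real system this would run the aggregation queries.
public class DailyStatsJob : IJob
{
    public Task Execute(IJobExecutionContext context)
    {
        Console.WriteLine("Aggregating yesterday's order statistics...");
        return Task.CompletedTask;
    }
}

public static class SchedulerSetup
{
    public static async Task StartAsync()
    {
        var scheduler = await new StdSchedulerFactory().GetScheduler();
        await scheduler.Start();

        var job = JobBuilder.Create<DailyStatsJob>()
            .WithIdentity("daily-stats")
            .Build();

        // Cron expression: fire at 02:00 every day, when database load is low.
        var trigger = TriggerBuilder.Create()
            .WithCronSchedule("0 0 2 * * ?")
            .Build();

        await scheduler.ScheduleJob(job, trigger);
    }
}
```

Quartz persists and fires the trigger itself, so the application code never needs its own timing loop.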

2.9 NoSql

NoSQL mainly means non-relational databases such as MongoDB, Redis, and Memcached. They can be used as a caching layer between the API gateway and the database, holding data that is not updated frequently: each incoming network request first reads from the distributed cache, reducing query pressure on the database and improving system throughput.
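The read path described above is the classic cache-aside pattern. Here is a self-contained C# sketch: the `ICache` interface and `loadFromDb` delegate are placeholders for Redis/Memcached and a real database query, and the in-memory dictionary only stands in for the cache so the logic can be demonstrated.

```csharp
using System;
using System.Collections.Generic;

public interface ICache
{
    bool TryGet(string key, out string value);
    void Set(string key, string value);
}

// Stand-in for Redis/Memcached, just to exercise the pattern.
public class InMemoryCache : ICache
{
    private readonly Dictionary<string, string> _store = new Dictionary<string, string>();
    public bool TryGet(string key, out string value) => _store.TryGetValue(key, out value);
    public void Set(string key, string value) => _store[key] = value;
}

public class ProductQuery
{
    private readonly ICache _cache;
    private readonly Func<string, string> _loadFromDb;
    public int DbHits { get; private set; }

    public ProductQuery(ICache cache, Func<string, string> loadFromDb)
    {
        _cache = cache;
        _loadFromDb = loadFromDb;
    }

    public string Get(string key)
    {
        // 1. Try the cache first.
        if (_cache.TryGet(key, out var cached)) return cached;

        // 2. Miss: hit the database once, then populate the cache.
        DbHits++;
        var value = _loadFromDb(key);
        _cache.Set(key, value);
        return value;
    }
}
```

With a shared cache in front of it, the database only sees the first read of each key until the entry expires or is invalidated.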

2.10 Visual data management and analysis (Kibana)

Kibana is an open source analysis and visualization platform designed for Elasticsearch. You can use Kibana to search, view, and interact with the data stored in Elasticsearch indexes, and easily perform advanced data analysis and visualization, displayed in the form of charts.
Kibana's usage scenarios focus on two aspects:

Real-time monitoring
Through histogram panels, multiple queries with different conditions can combine several dimensions of an event into different time-series trends. Time-series data is the most common basis for monitoring and alerting.

Problem analysis

For the purpose of ELK, you can refer to the scenarios of its commercial counterpart Splunk: the point of using Splunk is to make information collection and processing intelligent. Its operational intelligence shows in:
searching, troubleshooting problems by drilling down into the data, and solving them through root-cause analysis;
real-time visibility, combining system detection and alerts to make it easy to track SLA and performance issues;
historical analysis, finding trends and historical patterns, behavioral baselines and thresholds, and generating conformance reports.

2.11 Prometheus

Prometheus is an open source system monitoring and alerting framework. As a new-generation cloud-native monitoring system, Prometheus has the following advantages over traditional monitoring systems (Nagios or Zabbix):

Advantages
  • Easy to manage
  • Easy access to internal service state
  • Efficient and flexible query language
  • Supports local and remote storage
  • Uses the HTTP protocol and pulls data by default; data can also be pushed through an intermediate gateway
  • Supports auto discovery
  • Scalable
  • Easy to integrate
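The pull model and auto discovery from the list above can both be seen in a minimal `prometheus.yml`. The job names, targets, and Consul address here are illustrative.

```yaml
global:
  scrape_interval: 15s    # how often Prometheus pulls metrics

scrape_configs:
  # Statically listed target: Prometheus pulls /metrics from it.
  - job_name: "order-service"
    metrics_path: /metrics
    static_configs:
      - targets: ["order-service:5001"]

  # Auto discovery: pull scrape targets dynamically from the Consul
  # registry instead of listing every instance by hand.
  - job_name: "consul-services"
    consul_sd_configs:
      - server: "consul:8500"
```

Since this stack already registers every service in Consul, the discovery block means newly scaled-out instances get scraped with no config change.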

Well, that covers most of it; the remaining technologies everyone commonly uses will not be introduced one by one.

2.12 .Net Core virtualization

.NET Core is the new-generation cross-platform .NET development framework: it can be built and run on Linux and other platforms without a Windows environment. How? You can use the currently popular Docker containers: build the .NET Core project into an image and run it in a container, independent of any platform and environment; a single command is enough to build the image. You can then use k8s for multi-container application deployment, orchestration, updates, and so on.
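A typical Dockerfile for a .NET Core 3.1 service uses a multi-stage build: compile with the SDK image, then run on the smaller runtime image. The project name "ShopApi" is made up for this sketch.

```dockerfile
# Build stage: restore, compile, and publish with the full SDK image.
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /src
COPY . .
RUN dotnet publish ShopApi.csproj -c Release -o /app/publish

# Runtime stage: only the ASP.NET Core runtime, so the image stays small.
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "ShopApi.dll"]
```

`docker build -t shop-api .` then produces an image that runs identically on any Docker host, Linux or otherwise.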

What is k8s?

Kubernetes is an open source platform for managing containerized applications across multiple hosts in a cloud environment. Kubernetes' goal is to make deploying containerized applications simple and powerful; it provides mechanisms for application deployment, planning, updating, and maintenance.

A core feature of Kubernetes is autonomous container management: it ensures that containers in the cloud platform run according to the state the user desires (for example, if the user wants Apache to keep running, the user does not need to care how; Kubernetes automatically monitors it and restarts or recreates it, in short keeping Apache always serving). An administrator can load a microservice and let the scheduler find a suitable placement. Kubernetes also emphasizes upgrade tooling and user-friendliness, so users can easily deploy their applications (for instance with canary deployments).

Today Kubernetes focuses on continuously running service workloads (such as web servers or cache servers) and cloud-native applications (NoSQL); in the near future it will support all kinds of production workloads, such as batch jobs, workflows, and traditional databases.

In Kubernetes, all containers run inside Pods. A Pod can carry one or more related containers; containers in the same Pod are deployed on the same physical machine and can share resources. A Pod can also contain zero or more disk volumes, which are exposed as directories either to a single container or shared by all containers in the Pod. For each Pod the user creates, the system automatically selects a healthy machine with enough capacity and creates the containers there. When a container fails to start, it is automatically restarted by the node agent, called the kubelet; but if the Pod itself or its machine fails, it is not automatically moved and restarted unless the user has defined a replication controller.

Users can create and manage Pods themselves, but Kubernetes simplifies these operations to two: deploying multiple Pod replicas from the same Pod configuration file, and creating replacement Pods when a Pod or its machine goes down. The part of the Kubernetes API responsible for restarting and migration is called the "replication controller". It generates Pods from a template and then keeps as many replicas as the user requests; together these redundant Pods form an application, a service, or one layer of a service. Once a Pod is created, the system continuously monitors its health and the health of its host; if a Pod dies due to software failure or its machine goes down, the replication controller automatically creates an identical Pod on a healthy machine to maintain the desired replica count. Multiple Pods of one application can share a machine.
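In current Kubernetes this "Pod template plus desired replica count" idea is usually written as a Deployment. Here is a minimal sketch; the name, image, and port are illustrative.

```yaml
# Keep 3 replicas of the order-service Pod running at all times;
# if a Pod or its node dies, the controller recreates it elsewhere.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service   # the label other objects use to find these Pods
    spec:
      containers:
        - name: order-service
          image: registry.example.com/shop/order-service:1.0
          ports:
            - containerPort: 5001
```

`kubectl apply -f` this file and the cluster converges on three healthy replicas without further intervention.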

We often need to select a group of Pods, for example to restrict some operation to them or to query their status. As a basic mechanism in Kubernetes, users can attach key:value labels to any object in the Kubernetes API and then use those labels to select a group of related API objects and perform specific operations on them. In addition, each resource can carry an extra set of keys and values that external tools can use to retrieve objects; these maps are called annotations.

Kubernetes supports a distinctive network model: it creates a flat address space rather than allocating ports dynamically, letting users choose whatever ports they want. To make this possible, it assigns an IP address to each Pod.

Modern Internet applications usually consist of multiple layers of services, for example a web front end, an in-memory key-value server, and the corresponding storage service. To serve such architectures well, Kubernetes provides the service abstraction: a fixed IP address and DNS name dynamically associated with a set of Pods through the labels mentioned above, so we can associate whichever Pods we want. When a container in a Pod accesses this address, the request is forwarded to a local proxy (kube-proxy) running on every machine, which then forwards it to an appropriate backend container. Kubernetes selects backend containers with a round-robin mechanism and keeps tracking the Pods as they are replaced, so the service's IP address (and DNS name) never changes.
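That stable-address-plus-label-selection mechanism is declared as a Service object. A minimal sketch, with an illustrative name and label:

```yaml
# A stable virtual IP and DNS name ("order-service") that load-balances
# across every Pod carrying the label app=order-service.
apiVersion: v1
kind: Service
metadata:
  name: order-service
spec:
  selector:
    app: order-service   # ties the Service to matching Pods via labels
  ports:
    - port: 80           # port clients inside the cluster connect to
      targetPort: 5001   # port the container actually listens on
```

Other services then simply call `http://order-service/`, and kube-proxy routes each request to a live Pod no matter how many times the Pods behind it are replaced.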

All resources in Kubernetes, such as Pods, are identified by a URI and have a UID. The important components of the URI are the object type (such as pod), the object name, and the object namespace. For any given object type, all names within one namespace are distinct; when an object is given only a name and no namespace, the default namespace is assumed. The UID is unique across time and space.

2.13 Automated integrated deployment

Why do you need automated integrated deployment?

I will analyze why automated integrated deployment is needed from the following points:

  • Believe this: all manual deployment, release, and updating is unreliable, and automated, intelligent deployment reduces the accident rate.
  • Manually backing up and releasing updates is very inefficient.
  • If a project needs an update but the microservice runs on a dozen or so load-balanced servers, updating and publishing server by server is tedious, and far more likely to cause an accident.

What is automated integrated deployment?

Using jenkins, gitlab, docker, and other scripting tools, the pipeline watches for code and dependency commits, automatically builds the project image, pushes the image to the image repository, has Docker pull the image and start the project, and handles the rest through automated scripts, so a service can be stopped and updated smoothly. No operation requires human intervention, and if a problem appears you can even roll back with one click.
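The flow just described can be sketched as a declarative Jenkins pipeline. The registry address, image name, and deploy command below are all illustrative; a real pipeline would add tests, credentials, and rollback stages.

```groovy
pipeline {
  agent any
  stages {
    stage('Build image') {
      steps {
        // Build an image from the freshly pushed code, tagged per build.
        sh 'docker build -t registry.example.com/shop/order-service:${BUILD_NUMBER} .'
      }
    }
    stage('Push image') {
      steps {
        sh 'docker push registry.example.com/shop/order-service:${BUILD_NUMBER}'
      }
    }
    stage('Deploy') {
      steps {
        // Rolling update: Kubernetes swaps Pods one by one, so the
        // service stays up during the release.
        sh 'kubectl set image deployment/order-service ' +
           'order-service=registry.example.com/shop/order-service:${BUILD_NUMBER}'
      }
    }
  }
}
```

Because every release is an immutable tagged image, rolling back is just deploying the previous tag.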

What are the advantages of automated integrated deployment?

  • Everything is automated, without human intervention, which improves efficiency. Professionals do professional things: developers develop, operations operate.
  • Releases are traceable.
  • Roll back at any time with a single manual trigger (a script restores the project image from the previous automated backup).
  • Smooth releases that do not affect the user experience: servers are taken out of rotation and updated one by one.

III. Conclusion

I wrote a lot today; once I started drawing diagrams I couldn't stop. This article analyzed the usage scenarios and purposes of the series of technologies used in a .NET Core microservice architecture, without hands-on practice. The goal is to give beginners a clear technical direction so they can go on to master the technologies around microservice architecture, and also to let me sort out and summarize my own past experience so I can move toward bigger goals. I will keep bringing you more practical, substantial content later. Welcome to follow the [Dr. dotNET] WeChat public account.



Origin blog.csdn.net/a312586670/article/details/105375184