Ali architect's log: a quick guide to the microservice architecture and its core concepts

What are microservices

There is no official definition of microservices, which makes them difficult to describe directly. Instead, we can understand what microservices are by comparing them with traditional web applications.

The core of a traditional web application is divided into business logic, adapters, and an API or web interface accessed through the UI. The business logic defines business processes, business rules, and domain entities; adapters include database access components, messaging components, and access interfaces. The architecture of a taxi application looks like this:

[Figure: architecture of the monolithic taxi application]

Although such applications follow modular development, they are ultimately packaged and deployed as monolithic applications. For example, a Java application is packaged as a WAR and deployed on Tomcat or Jetty.

Such a monolithic application is better suited to small projects. Its advantages are:

  • Simple and straightforward development, centralized management

  • Basically no duplicated development

  • Functions are all local, without distributed management overhead and calling overhead

Of course, its shortcomings are also very obvious, especially for Internet companies:

  • Low development efficiency: All developers change code in one project, submit code and wait for each other, and code conflicts continue

  • Code maintenance is difficult: Code functions are coupled together, and newcomers do not know where to start

  • Inflexible deployment: builds are long, and any small change requires rebuilding the entire project, which is often a lengthy process

  • Insufficient stability: a trivial problem can bring down the entire application

  • Insufficient scalability: unable to meet business requirements in high concurrency situations

Therefore, the current mainstream design generally adopts the microservice architecture. The idea is not to develop one huge monolithic application, but to decompose the application into small, interconnected microservices. A microservice performs a specific function, such as passenger management and order management. Each microservice has its own business logic and adapters. Some microservices also provide API interfaces for other microservices and application clients to use.

For example, the system described earlier can be decomposed into:

Ali architect's log: take you to quickly understand the microservice architecture and understand the core of the microservice architecture

Each piece of business logic is decomposed into a microservice, and the microservices communicate with each other through REST APIs. Some microservices also expose APIs to end users or clients. Usually, however, these clients cannot access the backend microservices directly; instead, requests pass through an API Gateway, which is generally responsible for tasks such as service routing, load balancing, caching, access control, and authentication.
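The gateway's routing responsibility can be illustrated with a minimal sketch. The service names and addresses below are hypothetical, and real gateways (Zuul, Envoy, etc.) layer load balancing, caching, and authentication on top of this:

```python
# Minimal sketch of path-based API Gateway routing (illustrative only).
# Service names and addresses are hypothetical.

ROUTES = {
    "/passengers": "http://passenger-service:8080",
    "/orders": "http://order-service:8080",
}

def route(path: str) -> str:
    """Return the full backend URL for an incoming request path."""
    for prefix, backend in ROUTES.items():
        if path.startswith(prefix):
            return backend + path
    raise LookupError(f"no route for {path}")

print(route("/orders/42"))  # -> http://order-service:8080/orders/42
```

In a real gateway the matched backend would itself be resolved through service discovery rather than a static table.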

Advantages of Microservice Architecture

The microservice architecture has several important advantages.

First, it solves the problem of complexity. It decomposes a monolithic application into a set of services. While the total amount of functionality remains the same, the application has been broken down into manageable modules or services. These services define explicit RPC or message-driven API boundaries. Microservices architecture enforces a level of application modularity that is difficult to achieve with a monolithic codebase. As a result, microservices are much faster to develop and easier to understand and maintain.

Second, this architecture allows each service to be developed independently by a team dedicated to that service. Developers are free to choose the development technology as long as it conforms to the service API contract. This means that developers can write or refactor services using new technologies, and since the services are relatively small, this doesn't have much of an impact on the overall application.

Third, the microservice architecture enables each microservice to be deployed independently. Developers do not need to coordinate the deployment of service upgrades or changes. These changes can be deployed as soon as the tests pass. So the microservice architecture also makes CI/CD possible.

Finally, the microservices architecture allows each service to scale independently. We only need to define constraints such as configuration, capacity, and number of instances that meet the service deployment requirements. For example, we can deploy CPU-intensive services on EC2 compute-optimized instances and in-memory database services on EC2 memory-optimized instances.

Disadvantages and Challenges of Microservice Architecture

In fact, there is no silver bullet, and the microservice architecture brings new problems and challenges of its own. One of them is implied by the name: microservices emphasize service size, yet there is no uniform standard for it. By what rules business logic should be divided into microservices is itself a matter of experience. Some developers argue that 10-100 lines of code are enough to build a microservice. While building small services is what the microservice architecture celebrates, remember that microservices are a means to an end, not a goal. The goal of microservices is to decompose the application sufficiently to facilitate agile development and continuous integration and deployment.

Another major disadvantage of microservices is the complexity that comes with their distributed nature. Developers need to implement inter-service invocation and communication based on RPC or messaging, which makes service discovery, tracing of invocation chains, and monitoring of service quality quite tricky.

Another challenge of microservices is the partitioned database architecture and distributed transactions. Business transactions that update multiple business entities are fairly common. These transactions are very simple to implement in a monolithic application, because a monolithic application typically has a single database. Under the microservice architecture, however, different services may have different databases. Constrained by the CAP theorem, we have to give up traditional strong consistency and pursue eventual consistency instead, which is a challenge for developers.
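One common way to pursue eventual consistency is to commit a local transaction and then publish an event that other services consume to update their own stores. Below is a minimal in-memory sketch of the idea, with hypothetical service names and a queue standing in for a message broker:

```python
# Illustrative sketch of eventual consistency across two services'
# databases: the order service commits locally, then emits an event that
# the billing service consumes asynchronously. All names are hypothetical.

import queue

event_bus = queue.Queue()          # stands in for a message broker
orders_db, billing_db = {}, {}     # each service owns its own store

def create_order(order_id, amount):
    orders_db[order_id] = {"amount": amount, "status": "created"}
    event_bus.put(("OrderCreated", order_id, amount))  # publish after commit

def billing_consumer():
    # In production this runs continuously; here we drain the queue once.
    while not event_bus.empty():
        _, order_id, amount = event_bus.get()
        billing_db[order_id] = {"charged": amount}

create_order("o-1", 25.0)
# Until the consumer runs, the two stores are temporarily inconsistent.
assert "o-1" not in billing_db
billing_consumer()
assert billing_db["o-1"]["charged"] == 25.0
```

The window between the local commit and the event being consumed is exactly the "eventual" part; production systems also need to handle redelivery and idempotency.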

The microservice architecture also poses great challenges for testing. A traditional monolithic web application can be tested by exercising a single REST API, while testing a microservice requires starting all the other services it depends on. This complexity should not be underestimated.
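One standard way to tame this is to test a service against stubbed versions of its dependencies instead of starting the real ones. A minimal sketch, with hypothetical names:

```python
# Testing a microservice by stubbing the service it depends on,
# instead of starting the real dependency. Names are hypothetical.

def get_order_total(order_id, pricing_client):
    """Business logic under test; pricing_client is an injected dependency."""
    price = pricing_client.price_for(order_id)
    return round(price * 1.1, 2)  # e.g. add a hypothetical 10% service fee

class StubPricingClient:
    def price_for(self, order_id):
        return 100.0  # canned response replaces a network call

assert get_order_total("o-1", StubPricingClient()) == 110.0
```

Contract tests against the stubbed API are then needed to make sure the stub stays faithful to the real service.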

Another big challenge with microservices is making changes that span multiple services. Suppose three services A, B, and C need to change, where A depends on B and B depends on C. In a monolithic application, we simply change the corresponding modules and deploy everything in one go. In a microservice architecture, however, we need to carefully plan and coordinate the rollout of changes to each service: update C first, then B, and finally A.

Deploying microservice-based applications is also much more complicated. Monolithic applications can simply be deployed on the same set of servers, and then load balancing on the front end. Each application has the same address for underlying services, such as databases and message queues. Microservices, on the other hand, consist of a large number of different services. Each service may have its own configuration, number of application instances, and underlying service addresses. This is where different configuration, deployment, scaling and monitoring components are required. Additionally, we need a service discovery mechanism so that a service can discover the addresses of other services it communicates with. Therefore, successful deployment of microservice applications requires developers to have better deployment strategies and a high level of automation.

The above problems and challenges can be broadly summarized as:

  • API Gateway

  • Inter-service calls

  • Service discovery

  • Service fault tolerance

  • Service deployment

  • Data calls
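To make one of these concrete, the essence of service discovery — registering instances and picking one at call time — can be sketched as follows (an illustrative in-memory model; real registries such as Eureka, Consul, or ZooKeeper add health checks, TTLs, and replication):

```python
# Minimal in-memory service registry sketch (illustrative only).

import random

registry = {}  # service name -> list of instance addresses

def register(service, address):
    """Called by a service instance when it starts up."""
    registry.setdefault(service, []).append(address)

def discover(service):
    """Pick one instance; here, plain random client-side load balancing."""
    instances = registry.get(service)
    if not instances:
        raise LookupError(f"no instances for {service}")
    return random.choice(instances)

register("order-service", "10.0.0.1:8080")
register("order-service", "10.0.0.2:8080")
addr = discover("order-service")
assert addr in ("10.0.0.1:8080", "10.0.0.2:8080")
```

The frameworks discussed below differ mainly in where this lookup lives: in a client library (Spring Cloud, Dubbo) or in a transparent proxy (Service Mesh).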


Fortunately, there are many microservice frameworks that can solve the above problems.

The first generation of microservices framework

Spring Cloud

Spring Cloud provides developers with tools to quickly build common patterns of distributed systems (including configuration management, service discovery, circuit breakers, intelligent routing, micro-proxies, control bus, one-time tokens, global locks, leader election, distributed sessions, cluster state, etc.). Major projects include:

  • Spring Cloud Config : Centralized external configuration management backed by Git repositories. Configuration resources map directly to the Spring Environment, but can be used by non-Spring applications if needed.

  • Spring Cloud Netflix : Integration with various Netflix OSS components (Eureka, Hystrix, Zuul, Archaius, etc.).

  • Spring Cloud Bus : An event bus for connecting services and service instances with distributed messaging. Used to propagate state changes (such as configuration change events) across the cluster.

  • Spring Cloud for Cloudfoundry : Integrate your application with Pivotal Cloudfoundry. Provides a service discovery implementation, also makes it easy to secure resources via SSO and OAuth 2, and can create Cloudfoundry service proxies.

  • Spring Cloud - Cloud Foundry Service Broker : Provides a starting point for building service brokers that manage a service in Cloud Foundry.

  • Spring Cloud Cluster : Leader election and generic state model (based on abstractions and implementations of ZooKeeper, Redis, Hazelcast, Consul).

  • Spring Cloud Consul : Service discovery and configuration management combined with Hashicorp Consul

  • Spring Cloud Security : Provides support for a load-balanced OAuth 2 REST client and authentication header relay in a Zuul proxy.

  • Spring Cloud Sleuth : Distributed tracing for Spring Cloud applications, compatible with Zipkin, HTrace and log-based (e.g. ELK) tracing.

  • Spring Cloud Data Flow : A cloud-native orchestration service for modern runtime composable microservice applications. An easy-to-use DSL, drag-and-drop GUI, and REST-API together simplify the overall orchestration of microservice-based data pipelines.

  • Spring Cloud Stream : A lightweight event-driven microservices framework for rapidly building applications that can connect to external systems. Simple declarative model for sending and receiving messages between Spring Boot applications using Apache Kafka or RabbitMQ.

  • Spring Cloud Task Application Starters : Spring Boot applications that may be any process, including Spring Batch jobs that do not run forever, and that end/stop after a finite amount of data processing.

  • Spring Cloud ZooKeeper : Service discovery and configuration management for ZooKeeper.

  • Spring Cloud for Amazon Web Services : Easy integration with hosted Amazon Web Services. It integrates with AWS services such as caching and messaging APIs using Spring idioms and APIs, so developers can build applications around hosted services without having to care about infrastructure.

  • Spring Cloud Connectors : Enables PaaS applications on various platforms to easily connect to backend services such as databases and message brokers (a project formerly known as "Spring Cloud").

  • Spring Cloud Starters : Spring Boot-style starter projects that ease dependency management (no longer an independent project after Angel.SR2).

  • Spring Cloud CLI : A Spring Boot CLI plugin for quickly creating Spring Cloud component applications in Groovy.

Dubbo

Dubbo is a distributed service framework open sourced by Alibaba, dedicated to providing high-performance and transparent RPC remote service invocation solutions, as well as SOA service governance solutions. Its core parts include:

  • Remote communication : Provides abstract encapsulation of various long-connection-based NIO frameworks, including various thread models, serialization, and information exchange in the "request-response" mode.

  • Cluster fault tolerance : Provides transparent remote procedure calls based on interface methods, including multi-protocol support, and cluster support such as soft load balancing, failure tolerance, address routing, and dynamic configuration.

  • Automatic discovery : Based on the registry directory service, the service consumer can dynamically find the service provider, making the address transparent, so that the service provider can smoothly increase or decrease the machine.
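The "transparent remote procedure calls based on interface methods" idea can be sketched as a dynamic proxy. The transport below is a stand-in: where a real framework like Dubbo serializes the call and sends it over a long-lived NIO connection, this sketch just dispatches through a local function table:

```python
# Sketch of "transparent" RPC: the caller invokes a local-looking object,
# and a proxy forwards the call. The transport here is a plain function
# table standing in for the network; all names are hypothetical.

class RpcProxy:
    def __init__(self, transport):
        self._transport = transport  # stands in for the remote connection

    def __getattr__(self, method):
        def remote_call(*args):
            # A real framework would serialize, send, and await a response.
            return self._transport(method, args)
        return remote_call

# A hypothetical provider-side dispatch table:
def fake_transport(method, args):
    providers = {"add": lambda a, b: a + b}
    return providers[method](*args)

calculator = RpcProxy(fake_transport)
assert calculator.add(2, 3) == 5  # looks like a local call, routed "remotely"
```

The registry-based automatic discovery described above slots in at the transport layer: instead of a fixed table, the proxy would look up a provider address before each call.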


However, both Dubbo and Spring Cloud are only suitable for specific application scenarios and development environments; they were not designed for generality or multi-language support. They are also frameworks only at the Dev layer, lacking an overall DevOps solution, which is what a microservice architecture must address. This is the context in which Service Mesh rose to prominence.

The Next Generation of Microservices: Service Mesh?

Service Mesh

A Service Mesh is an infrastructure layer for service-to-service communication. To explain it in one sentence: it can be compared to TCP/IP between applications or microservices, responsible for network calls, rate limiting, circuit breaking, and monitoring between services. Just as applications generally do not need to care about the TCP/IP layer (for example, RESTful applications built on HTTP), applications using a Service Mesh no longer need to implement inter-service concerns themselves or rely on frameworks such as Spring Cloud or Netflix OSS; these responsibilities can be handed over to the Service Mesh.

Service Mesh has the following characteristics:

  • An intermediate layer for communication between applications

  • A lightweight network proxy

  • Application-agnostic

  • Decouples application retries/timeouts, monitoring, tracing, and service discovery

The architecture of Service Mesh is shown in the following figure:

[Figure: Service Mesh architecture]

The Service Mesh runs as a sidecar and is transparent to applications; all traffic between applications passes through it, so control of application traffic can be implemented in the Service Mesh.

Currently popular open source Service Mesh software includes Linkerd, Envoy, and Istio; recently Buoyant (the company that open-sourced Linkerd) released Conduit, an open source service mesh project based on Kubernetes.

Linkerd

Linkerd is an open source network agent designed to be deployed as a service mesh: a dedicated layer for managing, controlling and monitoring service-to-service communication within an application.

Linkerd was designed to solve the problems that companies like Twitter, Yahoo, Google, and Microsoft found when running large production systems. In practice, the source of the most complex, surprising, and emergent behavior is usually not the services themselves but the communication between them. Linkerd addresses these problems not just by controlling the communication mechanism, but by providing an abstraction layer on top of it.


Its main features are:

  • Load Balancing: Linkerd provides a variety of load balancing algorithms that use real-time performance metrics to distribute load and reduce tail latency across the application.

  • Circuit Breakers: Linkerd includes automatic circuit breakers that will stop sending traffic to instances deemed unhealthy, giving them a chance to recover and avoid a cascading failure.

  • Service Discovery: Linkerd integrates with various service discovery backends to help you reduce code complexity by removing ad-hoc service discovery implementations.

  • Dynamic request routing: Linkerd enables dynamic request routing and rerouting, allowing you to set up staging services, canary releases, blue-green deployments, cross-datacenter failover, and dark traffic with minimal configuration.

  • Retries and deadlines: Linkerd can automatically retry requests after certain failures and can time out requests after a specified period.

  • TLS: Linkerd can be configured to send and receive requests using TLS, which you can use to encrypt communications across host boundaries without modifying existing application code.

  • HTTP proxy integration: Linkerd can act as an HTTP proxy and is widely supported by almost all modern HTTP clients, making it easy to integrate into existing applications.

  • Transparent proxy: You can use iptables rules on the host to set up a transparent proxy through Linkerd.

  • gRPC: Linkerd supports HTTP/2 and TLS, allowing it to route gRPC requests, supporting advanced RPC mechanisms such as bidirectional streaming, flow control, and structured data payloads.

  • Distributed tracing: Linkerd supports distributed tracing and metrics, which can provide unified observability across all services.

  • Instrumentation: Linkerd exports fine-grained metrics such as success rates, latencies, and request volumes, providing unified operational visibility across all services.
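To make circuit breaking concrete, here is a greatly simplified sketch of the idea that Linkerd (and Envoy) implement transparently in the proxy; the threshold and recovery behavior here are illustrative only:

```python
# Simplified circuit-breaker sketch: after `max_failures` consecutive
# failures, stop sending traffic to the instance (the "open" state).
# Real meshes implement this in the proxy, outside application code.

class CircuitBreaker:
    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0

    @property
    def open(self):
        return self.failures >= self.max_failures

    def call(self, func):
        if self.open:
            raise RuntimeError("circuit open: instance marked unhealthy")
        try:
            result = func()
        except Exception:
            self.failures += 1
            raise
        self.failures = 0  # a success closes the circuit again
        return result

cb = CircuitBreaker(max_failures=2)

def flaky():
    raise ConnectionError("backend down")

for _ in range(2):
    try:
        cb.call(flaky)
    except ConnectionError:
        pass

assert cb.open  # further calls now fail fast without hitting the backend
```

Failing fast like this gives the unhealthy instance time to recover and prevents one slow backend from cascading into a system-wide outage.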

Envoy

Envoy is designed as an L7 proxy and communication bus for a service-oriented architecture. This project was born with the following goals:

The network should be transparent to applications, and when network or application failures do occur, it should be easy to locate the source of the problem.

Envoy provides the following features:

  • External process architecture: can work with applications developed in any language; can be quickly upgraded.

  • Modern C++11 codebase: enables efficient performance.

  • L3/L4 filters: At its core, Envoy is an L3/L4 network proxy with a pluggable filter chain for implementing different TCP proxy tasks. Filters can be written to support tasks such as raw TCP proxying, HTTP proxying, and TLS client certificate authentication.

  • HTTP L7 Filter: Supports an additional layer of HTTP L7 filtering. The HTTP filter acts as a plugin that plugs into the HTTP link management subsystem to perform different tasks such as buffering, rate limiting, routing/forwarding, sniffing Amazon's DynamoDB, and more.

  • HTTP/2 support: In HTTP mode, Envoy supports both HTTP/1.1 and HTTP/2 and can proxy bidirectionally between them, so HTTP/1.1 and HTTP/2 clients and target servers can be bridged in any combination.

  • HTTP L7 routing: When running in HTTP mode, Envoy supports routing and redirection based on path, content type, runtime values, and more. It can serve as a front-end/edge proxy for services.

  • Support for gRPC: gRPC is an RPC framework from Google that uses HTTP/2 as the underlying multiplexer. Both gRPC requests and responses carried by HTTP/2 can use Envoy's routing and LB capabilities.

  • Support MongoDB L7: Support for obtaining statistics and connection records and other information.

  • Support DynamoDB L7: Support for obtaining statistics and connection information.

  • Service Discovery: Supports multiple service discovery methods, including asynchronous DNS resolution and service discovery through REST requests.

  • Health check: Contains a health check subsystem that can actively check the upstream service cluster. Passive health checks are also supported.

  • Advanced load balancing: includes automatic retries, circuit breaking, global rate limiting, request shadowing, and outlier detection; support for request rate control is also planned.

  • Front-end proxy: Can act as a front-end proxy, including TLS, HTTP/1.1, HTTP/2, and HTTP L7 routing.

  • Excellent observability: Reliable statistical capabilities are provided for all subsystems. Currently supports statsd and compatible statistical libraries. Statistics can also be viewed through the management port, and third-party distributed tracing mechanisms are also supported.

  • Dynamic Configuration: Provides a layered dynamic configuration API that users can use to build complex centralized management deployments.

Istio

Istio is an open platform for connecting, managing, and securing microservices. Istio provides an easy way to create a network of deployed services with load balancing, service-to-service authentication, monitoring, and more, without requiring any changes to service code. To add Istio support to a service, you simply deploy a special sidecar proxy in your environment that intercepts all network communication between microservices; the proxies are configured and managed using Istio's control plane functionality.

Istio currently only supports service deployment on Kubernetes, but other environments will be supported in future releases.

Istio provides a complete solution to meet the diverse needs of microservice applications by providing behavioral insight and operational control across the entire service mesh. It uniformly provides many key functions across the mesh:

  • Traffic Management: Control the flow of traffic and API calls between services, making calls more reliable and making the network more robust in harsh conditions.

  • Observability: Understanding dependencies between services, and the nature and direction of traffic between them, provides the ability to quickly identify problems.

  • Policy Enforcement: Applying organizational policies to interactions between services ensures that access policies are enforced and resources are well allocated among consumers. Policy changes are made by configuring the grid rather than modifying the application code.

  • Service Identity and Security: Provides verifiable identities for services in the mesh and provides the ability to secure service traffic so that it can flow over networks of varying levels of trustworthiness.

The Istio service mesh is logically divided into a data plane and a control plane:

  • The data plane consists of a set of intelligent proxies (Envoy) deployed as sidecars that mediate and control all network communication between microservices.

  • The control plane is responsible for managing and configuring proxies to route traffic and enforce policies at runtime.

The following diagram shows the different components that make up each plane:

[Figure: components of the Istio data plane and control plane]

Conduit

Conduit is an ultra-light service mesh service designed for Kubernetes that transparently manages the runtime communication of services running on Kubernetes, making them more secure and reliable. Conduit provides visibility, reliability, and security without changing code.

The Conduit service mesh also consists of a data plane and a control plane. The data plane carries the application's actual network traffic; the control plane drives the data plane and exposes northbound interfaces.

Comparison

Linkerd and Envoy are similar in that both are network proxies for service communication, implementing functions such as service discovery, request routing, and load balancing. Their design goal is to solve the problem of communication between services so that applications are unaware of it, which is the core concept of Service Mesh. Linkerd and Envoy act as distributed sidecars: multiple such proxies, connected to one another, form a service mesh.

Istio, on the other hand, takes a higher-level view, dividing the Service Mesh into a Data Plane and a Control Plane: the Data Plane handles all network communication between microservices, while the Control Plane manages the Data Plane's proxies:

[Figure: Istio's Data Plane and Control Plane]

And Istio natively supports Kubernetes, which also bridges the gap between the application scheduling framework and Service Mesh.

There is less information about Conduit. From the official introduction, its positioning and functions are similar to Istio.

Kubernetes + Service Mesh = Complete Microservice Framework

Kubernetes has become the de facto standard for container scheduling and orchestration, and containers serve as the smallest unit of work for microservices, playing to the strengths of the microservice architecture. So I believe the future microservice architecture will revolve around Kubernetes. Service Meshes such as Istio and Conduit are designed for Kubernetes from the start, and their appearance fills the gap Kubernetes leaves in service-to-service communication between microservices. Although Dubbo, Spring Cloud, and the like are mature microservice frameworks, they are more or less tied to specific languages or application scenarios and only solve the Dev-level problems of microservices. To solve Ops problems as well, they must be combined with resource scheduling frameworks such as Cloud Foundry, Mesos, Docker Swarm, or Kubernetes.


However, because such combinations were not designed together from the start and their ecosystems differ, many applicability issues remain to be resolved.

Kubernetes is different. It is a language-independent, general-purpose container management platform that supports running cloud-native and traditional containerized applications. And it covers the Dev and Ops stages of microservices. Combined with Service Mesh, it can provide users with a complete end-to-end microservice experience.

So I think the future microservice architecture and technology stack may be in the form of:

[Figure: a microservice technology stack built on Kubernetes and Service Mesh]

The multi-cloud platform provides resource capabilities (compute, storage, networking, etc.) for the microservices. Containers, as the smallest unit of work, are scheduled and orchestrated by Kubernetes. The Service Mesh manages service-to-service communication, and finally the business interfaces of the microservices are exposed through an API Gateway.

I believe that in the future, with the prevalence of microservice frameworks based on Kubernetes and Service Mesh, the cost of implementing microservices will be greatly reduced, and ultimately provide a solid foundation and guarantee for the implementation and large-scale use of microservices.
