Microservice Architecture 2.0: The Cloud Native Era

Cloud Native

Cloud native is an approach and philosophy focused on building, deploying, and managing applications in a cloud environment. Cloud-native applications can take full advantage of cloud computing infrastructure: elasticity, automation, scalability, and high availability. The concept spans many areas, including architecture, development, deployment, operations, and team culture.

Cloud Native Features and Principles

  1. Containerization: Package applications and their dependencies into containers to achieve a consistent operating environment. Container technologies such as Docker offer the advantages of isolation, portability, and rapid deployment.
  2. Microservice Architecture: Split an application into small, independent service units, each of which focuses on a specific business function. This improves maintainability, scalability, and agility.
  3. Automation: Emphasizes automating development, testing, deployment, and operations processes, including continuous integration, continuous delivery, and automatic scaling, to improve efficiency and stability.
  4. Flexibility: Through microservice architecture and containerization, each service unit can be independently developed, deployed, and scaled, improving the ability to respond to change.
  5. Elasticity: Cloud-native applications can automatically scale up and down based on load to cope with changes in traffic.
  6. Service Governance: Emphasizes service discovery, load balancing, traffic management, and fault tolerance mechanisms to ensure reliable communication of services.
  7. Declarative APIs and infrastructure as code: Applications are deployed and managed automatically by describing the desired state through declarative APIs and infrastructure-as-code tooling.
  8. Observability: Emphasizes application monitoring, logging, tracing, and metrics to identify and resolve issues in a timely manner.
  9. Openness and diversity: Cloud native technology is not limited to a specific language, framework or cloud platform, and encourages the adoption of open standards and diverse technology stacks.
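The declarative principle above can be made concrete with a small sketch: the user declares desired state, and a controller repeatedly diffs it against observed state and emits corrective actions. This is the reconcile-loop pattern Kubernetes popularized; the function names and state shapes here are illustrative, not a real API.

```python
# Minimal sketch of a declarative reconcile loop: desired state is a mapping
# of service name -> replica count; the controller diffs it against actual
# state and applies corrective actions until the two converge.

def reconcile(desired: dict, actual: dict) -> list:
    """Compare desired replica counts with actual ones and emit actions."""
    actions = []
    for name, replicas in desired.items():
        current = actual.get(name, 0)
        if current < replicas:
            actions.append(("scale_up", name, replicas - current))
        elif current > replicas:
            actions.append(("scale_down", name, current - replicas))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name, actual[name]))
    return actions

def apply_actions(actual: dict, actions: list) -> dict:
    """Apply actions, converging actual state toward desired state."""
    state = dict(actual)
    for op, name, count in actions:
        if op == "scale_up":
            state[name] = state.get(name, 0) + count
        elif op == "scale_down":
            state[name] -= count
        elif op == "delete":
            del state[name]
    return state
```

Because the user only declares *what* should exist, re-running the loop is always safe: once actual state matches desired state, `reconcile` emits no actions.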

Cloud native is a comprehensive concept that aims to let applications take better advantage of cloud computing, providing higher reliability, scalability, and agility. Principles such as containerization, microservices, automation, and elasticity are the keys to building cloud-native applications.

Inherent Problems of Distributed Systems

Any microservice architecture must solve a set of recurring problems: service registration and discovery, distributed tracing, load balancing, inter-service communication, and so on. No distributed system can avoid them entirely. But step back and ask: must these problems be solved by the application itself?
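Service registration and discovery, the first problem listed above, can be sketched as a toy in-memory registry with lease-based liveness. Real systems (Eureka, Consul, etc.) add replication, health checks, and consistency guarantees; the class and TTL values here are purely illustrative.

```python
import time

# Toy service registry: instances register an address, renew it via
# heartbeats, and drop out of lookup results once their lease expires.

class ServiceRegistry:
    def __init__(self, ttl_seconds: float = 30.0):
        self.ttl = ttl_seconds
        self._instances = {}  # service name -> {address: last_heartbeat}

    def register(self, service: str, address: str) -> None:
        self._instances.setdefault(service, {})[address] = time.monotonic()

    def heartbeat(self, service: str, address: str) -> None:
        # Renewing the lease keeps the instance discoverable.
        self.register(service, address)

    def lookup(self, service: str) -> list:
        """Return only addresses whose lease has not expired."""
        now = time.monotonic()
        live = {a: t for a, t in self._instances.get(service, {}).items()
                if now - t <= self.ttl}
        self._instances[service] = live
        return sorted(live)
```

The TTL is what turns a crashed instance into a self-healing event: the instance simply stops heartbeating and disappears from discovery, with no explicit deregistration required.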

1. Spring Cloud and Kubernetes

For the same distributed-service problems, compare the application-level solutions provided by Spring Cloud with the infrastructure-level solutions provided by Kubernetes. Although Spring Cloud and Kubernetes start from different points, and their methods and effects differ as well, it cannot be ignored that Kubernetes offers a new and more promising way to solve these problems.

Technical issues unrelated to the business are likely to be pushed out of the software layer and resolved quietly within the infrastructure, so that the software can focus purely on the business and teams and products can truly be "built around business capabilities."

For the distributed-architecture problems that once could only be solved at the software level, there is now another option: integrate the application code with the infrastructure's software and hardware, and let them handle these problems together.

Limitations of Kubernetes
The infrastructure manages the container as a whole, so its granularity is relatively coarse.
Some problems sit on the boundary between the application and the infrastructure, and Kubernetes has difficulty solving them completely at the infrastructure level.

  1. Coupling between business logic and infrastructure: Some problems are closely tied to specific business logic and are hard to solve with generic infrastructure. In such cases a microservice framework (such as Spring Cloud) can more easily provide customized, business-aware solutions.
  2. Microservice governance and business rules: Some issues combine business rules with governance and may require a higher level of abstraction and customization.
  3. Business complexity: For some complex business processes, a purpose-built solution may fit better, while generic infrastructure (such as Kubernetes) may require more adaptation.

2. Service Mesh and Sidecar Proxy

Service Mesh is an infrastructure layer for managing and monitoring communication between services in a microservice architecture. It provides a set of tools and features that address common problems in microservice architectures, such as service discovery, load balancing, security, and fault handling.

**Sidecar Proxy** is a pattern used in microservice architectures to handle communication between services. It separates the communication logic from the application code and runs as a separate proxy process (the sidecar) in the same pod, on the same host, or in the same virtual machine as the main application, working alongside it to handle communication-related tasks such as routing, load balancing, security, and monitoring.

Main usage scenarios of sidecar proxy

  1. Service discovery and load balancing: The sidecar proxy can be responsible for service discovery and implement load balancing between services, dynamically assigning requests to different service instances.
  2. Security: A sidecar proxy can provide security features such as authentication, authorization, encryption, etc., to ensure that only authorized services can communicate.
  3. Monitoring and tracing: The sidecar proxy can collect and log request and response data for monitoring and tracing, helping with problem analysis and resolution.
  4. Retries and timeouts: The sidecar proxy can handle request retries and timeouts to ensure requests are answered within a bounded time.
  5. Circuit breaking and fallback: The sidecar proxy can implement circuit-breaker and fallback mechanisms that stop sending requests to a failing service, preventing the failure from affecting the entire system.
  6. Routing and flow control: The sidecar proxy can apply flow-control and routing policies, enabling features such as canary (gray) releases and A/B testing.
  7. Request and response transformation: The sidecar proxy can transform requests and responses, implementing protocol and data-format conversion.
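Scenarios 1 and 4 above can be sketched together: the proxy round-robins requests across discovered instances and retries on failure. The upstream call is stubbed out as a callable; a real sidecar (e.g. Envoy) does this at the network level, transparently to the application, so the class and method names here are only illustrative.

```python
import itertools

# Sketch of a sidecar's request path: pick the next instance round-robin,
# forward the request, and retry on the next instance if the call fails.

class SidecarProxy:
    def __init__(self, instances, max_retries: int = 2):
        self._rr = itertools.cycle(instances)  # round-robin over instances
        self.max_retries = max_retries

    def forward(self, request: str, send):
        """Try up to max_retries + 1 instances; return the first success."""
        last_error = None
        for _ in range(self.max_retries + 1):
            instance = next(self._rr)
            try:
                return send(instance, request)
            except ConnectionError as exc:
                last_error = exc  # retry against the next instance
        raise last_error
```

The key property is that the application only ever sees `forward(request)`: which instance served the request, and how many retries it took, stays inside the sidecar.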

Sidecar Proxy Advantages

  1. Decoupling communication logic: A service mesh decouples communication logic from applications, allowing developers to focus on the development of business logic without worrying about communication details.
  2. Observability: Service meshes typically provide rich monitoring, tracing, and logging capabilities to help developers and operations teams better understand and manage communication between services.
  3. Traffic management: The service mesh allows fine-grained control over traffic, including traffic splitting, A/B testing, and canary releases, providing greater flexibility.
  4. Security: The service mesh can provide security features such as authentication, authorization, and encryption to ensure that communication between services is secure.
  5. Failure recovery: The service mesh handles failure detection and recovery; when a service instance fails, traffic can automatically switch to other healthy instances.
  6. Cross-language support: Service meshes usually support multiple programming languages and technology stacks, suiting a diverse microservice ecosystem.
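The failure-recovery behavior described above typically relies on a circuit breaker applied per upstream service. A minimal sketch, with illustrative thresholds: after a run of consecutive failures the circuit opens and calls fail fast; after a cooldown, one trial call is allowed through to probe recovery (the "half-open" state).

```python
import time

# Minimal circuit breaker: closed -> open after `failure_threshold`
# consecutive failures; open -> half-open after `reset_seconds`; a
# successful trial call closes the circuit again.

class CircuitBreaker:
    def __init__(self, failure_threshold: int = 3, reset_seconds: float = 10.0):
        self.failure_threshold = failure_threshold
        self.reset_seconds = reset_seconds
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_seconds:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # success closes the circuit
        return result
```

While the circuit is open, the failing upstream receives no traffic at all, which is exactly what prevents one slow or dead service from dragging down its callers.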

Sidecar Proxy Costs

  1. Complexity: Introducing a service mesh adds complexity to the system; in small-scale projects it may be overkill.
  2. Performance overhead: Since the proxy of the service mesh needs to handle the communication logic, certain performance overhead may be introduced, especially in high-throughput scenarios.
  3. Learning curve: Learning and deploying a service mesh takes time, and the development team needs to be familiar with its concepts and configuration.
  4. Maintenance: Deployment and maintenance of a service mesh can require additional work, especially in complex environments involving multiple services.
  5. Applicability: Not all microservice architectures need a service mesh; some simple scenarios may not justify introducing such a complex communication layer.

Common Sidecar Proxies

Envoy: an open-source proxy maintained under the CNCF (Cloud Native Computing Foundation), with powerful routing, load-balancing, and fault-recovery capabilities.

Istio: a service mesh platform jointly open-sourced by Google, IBM, and Lyft, designed to simplify and improve communication, management, and monitoring between services in a microservice architecture. Istio provides a set of features and tools that address common problems in microservice architectures, such as traffic management, failure recovery, security, and observability, and it uses Envoy as its default sidecar data plane.

The service mesh is likely to become the mainstream mode of communication and interaction between microservices. It isolates technical questions such as "which communication protocol to use" and "how to handle authentication and authorization" from the application software, replacing most of the components in today's Spring Cloud suite.

Software Architecture Evolution Direction

Let developers focus on business logic: separate technical issues unrelated to the business from the software layer and handle them through infrastructure and tooling. This improves development efficiency, reduces maintenance costs, and lets the development team concentrate on innovating in core business functions.

With business and technology fully separated, and remote calls as transparent as local ones, this may well be the best era of distributed architecture.


Origin blog.csdn.net/FLGBgo/article/details/132364659