Analysis of Cloud Native

What is Cloud Native

In recent years, cloud computing, distributed architecture, and microservice architecture have been hot topics. Over the past two years, talk of distributed and microservice architectures has quieted considerably, while the terms "cloud native" and "cloud-native architecture" have appeared more and more frequently. What is cloud native, and how does cloud-native architecture relate to distributed architecture and microservice architecture? With these questions in mind, let us look for a definition of cloud native. At present there are differing views on what cloud native means, and no single standardized description. The most widely cited view is the latest definition of cloud native from the CNCF:

Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.

These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.


If this is your first time seeing this definition, it probably does not make cloud native much clearer. This article tries to explain the related concepts in plain language. "Native" here means that cloud applications should be at home in the cloud environment the way natives are: fluent in using the elasticity the cloud provides, and able to handle the complexity of various cloud environments with ease. Cloud-native architecture is a collection of architectural principles and best practices that help applications run better on the cloud in a cloud-native way. Its core idea is to strip as much non-business code as possible out of cloud applications and let the cloud infrastructure take over a large share of the applications' non-functional concerns, such as elastic scaling, high availability, failure self-healing, and observability, so that development and operations staff can focus on the business itself.

Why go to the cloud

Cloud-native architecture exists to help applications run better on the cloud, so we should first understand why applications move to the cloud in the first place.

From a technical perspective, there are two benefits.

  • The first benefit of moving to the cloud is elastic scaling. The cloud is built on a distributed architecture and is inherently capable of horizontal scaling. This ensures that when business volume grows rapidly, the application can still be supported effectively. It also improves infrastructure utilization: resources can be expanded at business peaks and released during troughs, reducing redundant resource planning and construction.

  • The second benefit is a change in the product delivery model. In traditional offline delivery, each product had its own structure and was shipped to the project site, where deployment staff followed the deployment documents to install it by hand or with tools. Lacking standardized delivery steps and automated delivery processes, the delivery cycle was long and efficiency low, and this model could not support deploying product instances at scale. After moving to the cloud, continuous delivery plus containerization packages software into standardized images through the CD pipeline, enabling rapid deployment across multiple cloud environments and making agile delivery a reality. Delivery lead time shrinks from months to days or even hours.

From a business perspective, moving to the cloud has enabled business-model innovation. Offline systems become online systems, and the efficiency of exchanging business value improves greatly. Moving to the cloud also releases cost dividends, helping enterprises shift from a CAPEX model to an OPEX model. Once on the cloud, a system connects more closely with its upstream and downstream customers, opening up possibilities for its ecosystem and enabling more business innovation.

What are the ways to migrate to the cloud

Given these benefits, an enterprise needs to choose a migration approach that suits its actual situation.

The first way: re-host

Migrate systems and data to the cloud without changing the application's operating environment. Re-hosting generally means migrating a physical machine to a virtual machine, or one virtual machine to another. For enterprises that want to move to the cloud quickly, or that have large fleets of applications to migrate, re-hosting is the most suitable and effective approach.

The second way: re-platform

Without changing the application's core architecture, apply some simple cloud-oriented optimizations while migrating data and systems, an approach sometimes called "tinker and migrate". For example, replace the original relational database with the database service provided by a cloud vendor, or replace self-built message middleware with the vendor's message queue service. Most enterprises use this approach to cut administrative costs and increase efficiency.

The third way: re-factor

Re-factoring means rebuilding the application architecture and development model to deliver cloud-native application services. This mode is chosen when the existing environment can no longer meet future use, or when performance and scale fall short of future needs. Re-factoring is the most expensive of the three approaches, but in the long run it best serves future business and system needs.

The following table shows the comparison of the factors considered in the three methods of accessing the cloud:

Introduction to technologies used in cloud native architecture

Early discussions of cloud-native architecture technology centered on a troika: microservices + containerization + DevOps. With these three technical pillars, an application can move to the cloud in re-factor mode. This is the technical route taken by most enterprises that have already migrated.

In addition, cloud-native architecture keeps evolving. On one hand, as business needs grow and IoT and 5G technologies see wide use, enterprise applications are no longer satisfied with running on a single cloud; they want to run well in multi-cloud, hybrid-cloud, cloud-edge collaboration, edge computing, and other forms. On the other hand, cloud-native technologies themselves keep innovating: technologies such as Service Mesh and Serverless have emerged to reduce the complexity and cost of running services on the cloud. The technologies in a cloud-native architecture are therefore constantly updated, and CNCF's view of cloud-native architecture adds three more: service mesh, immutable infrastructure, and declarative APIs.

We should therefore return to the core idea of cloud-native architecture: any technical practice that helps applications become more cloud native should count as cloud-native architecture technology. Next, we introduce the main technologies used in cloud-native architecture.

microservice

Microservice architecture is defined in contrast to monolithic architecture; the two are different architectural styles. In a microservice architecture, a service is a single, independently deployable software component that implements some useful functionality. The service's API encapsulates its internal implementation. Unlike in a monolithic architecture, developers cannot bypass the service's API to directly access its internal methods and data; the microservice architecture thus enforces modularity of the application.

The core feature of the microservice architecture is loose coupling between services. Services interact through APIs, which encapsulate each service's implementation details, so an implementation can be changed without affecting its clients.
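The API encapsulation described above can be sketched in a few lines. This is a minimal illustration with hypothetical `InventoryService` and `OrderService` names (not from the original text): the order service depends only on the inventory service's public API and never touches its internal state.

```python
# A toy sketch of API encapsulation between two services. The service
# names and methods are hypothetical, for illustration only.

class InventoryService:
    def __init__(self):
        self._stock = {"sku-1": 5}  # internal state, hidden behind the API

    def reserve(self, sku: str, qty: int) -> bool:
        """Public API: reserve stock if available."""
        if self._stock.get(sku, 0) >= qty:
            self._stock[sku] -= qty
            return True
        return False

class OrderService:
    def __init__(self, inventory: InventoryService):
        self._inventory = inventory  # depends on the API, not the internals

    def place_order(self, sku: str, qty: int) -> str:
        if self._inventory.reserve(sku, qty):
            return "confirmed"
        return "rejected"

orders = OrderService(InventoryService())
print(orders.place_order("sku-1", 3))  # confirmed
print(orders.place_order("sku-1", 3))  # rejected (only 2 units left)
```

Because `OrderService` only calls `reserve`, the inventory service could swap its dictionary for a database without its client noticing, which is exactly the loose coupling the text describes.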

The microservice architecture splits a large system according to the granularity of business services, and each service can be independently developed, tested, verified, and deployed. After such decomposition, the benefits brought are as follows:

  • Enables continuous delivery and continuous deployment of large, complex applications
  • Each service is relatively small and easy to maintain
  • Services can be deployed independently
  • Services can scale independently
  • Microservice architecture enables team autonomy
  • Easier to experiment and adopt new technologies
  • Better fault tolerance

But microservices are no silver bullet. Introducing them has costs and brings new technical challenges, such as harder problem localization, log analysis, application observability, and application high availability. Enterprises should introduce microservices sensibly according to the stage their business is in, rather than adopting microservices purely for their own sake.

In the technical field of microservices, Whale Technology provides ZDubbo, a microservice framework product, and SGP, a microservice governance platform product.

SGP product function diagram:

DevOps

DevOps spans the Dev and Ops domains. The Dev side implements agile development practices to achieve high-frequency continuous delivery of products; the Ops side provides the operations capabilities needed to keep the system observable.

Whale Technology provides the ZCM product in the DevOps field.

container

Container image technology packages the complete environment an application needs to run directly into an image. This enables container-based delivery of the application and improves delivery efficiency; at the same time, the lightweight nature of containers meets the need to spin applications up quickly when a cloud application scales elastically.
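The image idea above can be modeled very simply. This is a toy sketch, not real container tooling: the "image" bundles the application together with its runtime environment, and every "container" is a cheap, identical instantiation of that bundle, which is why scaled-out instances behave the same everywhere.

```python
from copy import deepcopy

# Toy model of container images, for illustration only (not real tooling).

def build_image(app: str, env: dict) -> dict:
    """Package the app and everything it needs into one artifact."""
    return {"app": app, "env": dict(env)}

def run_container(image: dict) -> dict:
    # Each container starts from an identical copy of the image.
    return deepcopy(image)

image = build_image("web-server", {"python": "3.11", "libs": ["flask"]})
c1, c2 = run_container(image), run_container(image)
print(c1 == c2)  # True: identical runtime environments
```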

Whale Technology provides container cloud platform products in the container field.

cloud native middleware

Cloud-native middleware provides the additional technical components applications need to run on the cloud: high-performance, highly stable, easy-to-use, and easy-to-operate components that shield business applications from technical complexity and enable technology reuse.

In the field of middleware, Whale Technology provides distributed cache middleware product ZCache, distributed message middleware product ZMQ, and distributed database middleware product ZDAAS.

ZCache product function diagram

ZMQ product function diagram

ZDAAS product function diagram

Service Mesh

A service mesh is a dedicated infrastructure layer for handling service-to-service communication, responsible for reliably delivering requests between microservices. It is typically implemented as a set of lightweight network proxies deployed alongside the application code, transparently to the application itself.

As scale and complexity grow, the service mesh takes on more and more functions. Its requirements include service discovery, load balancing, fault recovery, metrics collection and monitoring, and often more complex operational needs such as A/B testing, canary releases, rate limiting, access control, and end-to-end authentication. Its deployment structure is shown in the following figure:

A service mesh has the following characteristics:

  • A middle layer for inter-application communication
  • Lightweight network proxies
  • Application-agnostic
  • Decouples application retry/timeout, monitoring, tracing, and service discovery
  • Cross-language service communication capability
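The sidecar idea behind these characteristics can be sketched as a proxy that wraps any service call and adds fault recovery on its behalf. This is a minimal, hypothetical illustration (not a real mesh implementation): the application function stays unaware of retries, exactly as the "application-agnostic" point above describes.

```python
# Toy sidecar-style proxy, for illustration only: retries are handled
# outside the application code, as a mesh proxy would.

class SidecarProxy:
    def __init__(self, call, retries=3):
        self._call = call        # the wrapped service call (any callable)
        self._retries = retries

    def request(self, *args, **kwargs):
        last_error = None
        for _ in range(self._retries):
            try:
                return self._call(*args, **kwargs)
            except Exception as e:  # retry on failure, like mesh fault recovery
                last_error = e
        raise last_error

attempts = {"n": 0}
def flaky_service(payload):
    """An application-level call that fails transiently twice."""
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient failure")
    return f"ok: {payload}"

proxy = SidecarProxy(flaky_service)
print(proxy.request("hello"))  # succeeds on the third attempt: "ok: hello"
```

Real meshes such as Istio or Linkerd implement this with out-of-process network proxies rather than in-process wrappers, but the division of responsibility is the same.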

immutable infrastructure

A workload (such as a container or virtual machine) is never modified once deployed. When something needs to be updated, fixed, or changed, the old workload is simply replaced with a new, verified one.

The value of immutable infrastructure lies mainly in system stability. Once a traditional application is deployed onto a specific server, that server keeps changing: the operating system is upgraded, or new applications are installed, which can cause conflicts and force the application to keep changing along with its environment, with new problems surfacing all the while. Immutable infrastructure avoids all of these problems: because cloud-native applications are deployed on immutable infrastructure, the environment simply does not drift.
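The replace-don't-modify rule can be made concrete with a small sketch (hypothetical `Workload`/`Environment` types, for illustration only): the deployed workload is literally immutable, and an update means building a new versioned workload and swapping it in wholesale.

```python
from dataclasses import dataclass

# Toy model of immutable infrastructure, for illustration only.

@dataclass(frozen=True)  # frozen=True forbids in-place mutation
class Workload:
    image: str
    version: str

class Environment:
    def __init__(self):
        self.current = None

    def deploy(self, workload: Workload):
        # Replace the old workload wholesale; never patch the running one.
        self.current = workload

env = Environment()
env.deploy(Workload(image="app", version="1.0.0"))
env.deploy(Workload(image="app", version="1.1.0"))  # an update is a replacement
print(env.current.version)  # 1.1.0
```

Attempting `env.current.version = "2.0"` raises `FrozenInstanceError`, mirroring the rule that a deployed workload cannot be modified.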

Declarative API

A declarative API is a more advanced interface design than an imperative API. Simply put, an imperative API lets the user tell the system how to do something, step by step, while a declarative API lets the user describe what the desired end state is and leaves the system to achieve it. With declarative APIs, cloud applications can use infrastructure capabilities more easily, letting business development focus on the business itself.
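The declarative style is usually paired with a reconcile loop, as in the Kubernetes controller pattern. Here is a minimal sketch (hypothetical names, not any real API): the caller declares the desired state ("I want 3 replicas") and the loop derives the imperative steps needed to reach it.

```python
# Toy reconcile loop, loosely modeled on the Kubernetes controller
# pattern; names and shapes are hypothetical, for illustration only.

def reconcile(desired_replicas: int, actual: list) -> list:
    """Drive the actual state toward the declared desired state."""
    actual = list(actual)
    while len(actual) < desired_replicas:
        actual.append(f"replica-{len(actual)}")  # scale up
    while len(actual) > desired_replicas:
        actual.pop()                             # scale down
    return actual

# Declarative: the caller says "I want 3 replicas",
# not "start two more replicas".
state = reconcile(3, ["replica-0"])
print(state)  # ['replica-0', 'replica-1', 'replica-2']
```

The same call with `desired_replicas=1` would scale the list back down, which is why declarative APIs are idempotent: re-stating the same desired state never causes extra side effects.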

Evaluation System for Cloud Native Architecture Maturity

How do we evaluate how effectively an application uses the cloud? Alibaba Cloud has proposed a cloud-native architecture maturity model (SESORA), which measures cloud-nativeness along six dimensions: service capability (Service), elasticity (Elasticity), serverless degree (Serverless), observability (Observability), resilience (Resilience), and automation level (Automation). Each dimension is rated on four levels, ACNA-1 to ACNA-4, scored 0 to 3 in turn. The model also defines four overall maturity levels: zero, basic, development, and mature. The maturity model gives enterprises whose cloud-native status, capabilities, and development path are unclear a way to assess where they stand and where to optimize, helping them take the "shortest path" of digital transformation.
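The scoring scheme above can be sketched as follows. The six dimensions and the 0-3 scale per dimension come from the text; the thresholds that map a total score to one of the four maturity levels are an assumption made here for illustration, not Alibaba Cloud's official mapping.

```python
# Toy SESORA scoring sketch. Dimensions and per-dimension scale follow
# the text; the total-score thresholds are assumed, for illustration only.

DIMENSIONS = ["Service", "Elasticity", "Serverless",
              "Observability", "Resilience", "Automation"]

def maturity(scores: dict) -> str:
    assert set(scores) == set(DIMENSIONS)
    total = sum(scores.values())  # 0 .. 18
    if total == 0:
        return "zero level"
    if total <= 6:                # assumed threshold
        return "basic level"
    if total <= 12:               # assumed threshold
        return "development level"
    return "mature level"

print(maturity({d: 2 for d in DIMENSIONS}))  # 12 points -> development level
```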


Origin blog.csdn.net/whalecloud/article/details/127851203