What is Cloud Native?

Foreword

We start with a simple definition:

Cloud-native architecture and technologies are an approach to designing, constructing, and operating workloads that are built in the cloud and take full advantage of the cloud computing model.

The Cloud Native Computing Foundation provides an official definition:

Cloud-native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.

These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.

Cloud native is about speed and agility. Business systems are evolving from enabling business capabilities to weapons of strategic transformation that accelerate business velocity and growth. It's imperative to get new ideas to market immediately.

At the same time, business systems have become increasingly complex, and users demand more. They expect rapid responsiveness, innovative features, and zero downtime. Performance problems, recurring errors, and the inability to move fast are no longer acceptable. Your users will visit your competitor. Cloud-native systems are designed to embrace rapid change, large scale, and resilience.

Below are some companies that have implemented cloud-native techniques. Consider the speed, agility, and scalability of their implementations.

Company | Experience
Netflix | Has 600+ services in production. Deploys 100 times per day.
Uber | Has 1,000+ services in production. Deploys several thousand times each week.
WeChat | Has 3,000+ services in production. Deploys 1,000 times a day.

As you can see, Netflix, Uber, and WeChat field cloud-native systems that consist of many independent services. This architectural style enables them to rapidly respond to market conditions. They instantaneously update small areas of a live, complex application without a full redeployment, and they individually scale services as needed.

The pillars of cloud native

The speed and agility of cloud native derive from many factors. Foremost is cloud infrastructure. But there's more: five other foundational pillars, shown in Figure 1-3, also provide the bedrock for cloud-native systems.

Figure 1-3. Cloud-native foundational pillars

Let's take some time to better understand the importance of each pillar.

The cloud

Cloud-native systems take full advantage of the cloud service model.

These systems are designed to thrive in a dynamic, virtualized cloud environment, making extensive use of platform-as-a-service (PaaS) computing infrastructure and hosting services. They treat the underlying infrastructure as disposable: provisioned in minutes and resized, scaled, or destroyed (via automation) as needed.

Consider the widely accepted DevOps concepts of Pets vs. Cattle. In a traditional data center, servers are treated as pets: a physical machine, given a meaningful name, and cared for. You scale by adding more resources to the same machine (scaling up). If the server becomes sick, you nurse it back to health. If it becomes unavailable, everyone notices.

The "livestock" service model is different. You'll provision each instance as a virtual machine or container. They are identical and assigned system identifiers (eg service-01, service-02, etc.). You scale by creating more instances (scale out). When an instance is unavailable, no one notices.

The cattle model embraces immutable infrastructure. Servers aren't repaired or modified. If one fails or requires updating, it's destroyed and a new one is provisioned, all done via automation.

Cloud-native systems embrace the cattle service model. They continue to run as the infrastructure scales in or out, with no regard to the machines on which they're running.

The Azure cloud platform supports this type of highly elastic infrastructure with auto-scaling, self-healing, and monitoring capabilities.
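As a sketch of what that elasticity can look like in practice, the Azure CLI can attach an autoscale profile to a virtual machine scale set so instances are added and removed like cattle. The resource names below (cloud-rg, product-vmss) are hypothetical:

```bash
# Keep between 2 and 10 identical "cattle" instances, starting with 2.
az monitor autoscale create \
  --resource-group cloud-rg \
  --resource product-vmss \
  --resource-type Microsoft.Compute/virtualMachineScaleSets \
  --name product-autoscale \
  --min-count 2 --max-count 10 --count 2

# Scale out by 2 instances when average CPU exceeds 70% over a 5-minute window.
az monitor autoscale rule create \
  --resource-group cloud-rg \
  --autoscale-name product-autoscale \
  --condition "Percentage CPU > 70 avg 5m" \
  --scale out 2
```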

Modern design

How would you design a cloud-native application? What would your architecture look like? What principles, patterns, and best practices would you follow? What infrastructure and operational concerns would be important?

The Twelve-Factor Application

The Twelve-Factor Application is a widely accepted methodology for constructing cloud-based applications. It describes a set of principles and practices that developers follow when building applications optimized for modern cloud environments. Special attention is given to portability across environments and declarative automation.

While applicable to any web-based application, many practitioners consider Twelve-Factor a solid foundation for building cloud-native apps. Systems built upon these principles can deploy and scale rapidly and add features to react quickly to market changes.

The following table highlights the twelve-factor approach:

Factor | Explanation
1 - Code Base | A single code base for each microservice, stored in its own repository. Tracked with version control, it can deploy to multiple environments (QA, staging, production).
2 - Dependencies | Each microservice isolates and packages its own dependencies, embracing changes without impacting the entire system.
3 - Configurations | Configuration information is moved out of the microservice and externalized through a configuration management tool outside of the code. The same deployment can propagate across environments with the correct configuration applied.
4 - Backing Services | Ancillary resources (data stores, caches, message brokers) should be exposed via an addressable URL. Doing so decouples the resource from the application, enabling it to be interchangeable.
5 - Build, Release, Run | Each release must enforce a strict separation across the build, release, and run stages. Each should be tagged with a unique ID and support the ability to roll back. Modern CI/CD systems help fulfill this principle.
6 - Processes | Each microservice should execute in its own process, isolated from other running services. Externalize required state to a backing service such as a distributed cache or data store.
7 - Port Binding | Each microservice should be self-contained with its interfaces and functionality exposed on its own port. Doing so provides isolation from other microservices.
8 - Concurrency | When capacity needs to increase, scale out services horizontally across multiple identical processes (copies) as opposed to scaling up a single large instance on the most powerful machine available. Develop the application to be concurrent, making scaling out in cloud environments seamless.
9 - Disposability | Service instances should be disposable. Favor fast startups to increase scalability opportunities and graceful shutdowns to leave the system in a correct state. Docker containers along with an orchestrator inherently satisfy this requirement.
10 - Dev/Prod Parity | Keep environments across the application lifecycle as similar as possible, avoiding costly shortcuts. Here, the adoption of containers can greatly contribute by promoting the same execution environment.
11 - Logging | Treat logs generated by microservices as event streams. Process them with an event aggregator. Propagate log data to data-mining/log management tools like Azure Monitor or Splunk and eventually to long-term archival.
12 - Admin Processes | Run administrative/management tasks, such as data cleanup or computing analytics, as one-off processes. Use independent tools to invoke these tasks from the production environment, but separately from the application.
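To make factor 3 concrete, here is a minimal C# sketch (assuming the Microsoft.Extensions.Configuration, .Json, and .EnvironmentVariables packages; the setting names are hypothetical). The same binary reads its settings from external sources at run time, so a deployment can move between environments unchanged:

```csharp
using Microsoft.Extensions.Configuration;

// Factor 3: configuration lives outside the code. JSON supplies defaults;
// environment variables override them per environment (QA, staging, production).
var config = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", optional: true)
    .AddEnvironmentVariables()
    .Build();

// e.g. set Services__ProductsDb__Url in the container to repoint the service.
string? productsDbUrl = config["Services:ProductsDb:Url"];
Console.WriteLine($"Products database: {productsDbUrl}");
```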

In the book Beyond the Twelve-Factor App, author Kevin Hoffman details each of the original 12 factors (written in 2011). Additionally, he discusses three extra factors that reflect today's modern cloud application design.

New Factor | Explanation
13 - API First | Make everything a service. Assume your code will be consumed by a front-end client, a gateway, or another service.
14 - Telemetry | On a workstation, you have deep visibility into your application and its behavior. In the cloud, you don't. Make sure your design includes the collection of monitoring, domain-specific, and health/system data.
15 - Authentication/Authorization | Implement identity from the start. Consider the RBAC (role-based access control) features available in public clouds.

We'll refer to many of these 12+ factors in this chapter and throughout the book.

Azure Well-Architected Framework

Designing and deploying cloud-based workloads can be challenging, especially when implementing cloud-native architectures. Microsoft provides industry-standard best practices to help you and your team deliver reliable cloud solutions.

Microsoft's Azure Well-Architected Framework provides a set of guiding tenets that can be used to improve the quality of cloud-native workloads. The framework consists of five pillars of architectural excellence:

Pillar | Description
Cost management | Focus on generating incremental value early. Apply Build-Measure-Learn principles to accelerate time to market while avoiding capital-intensive solutions. Use a pay-as-you-go strategy, investing as you scale out rather than delivering a large investment up front.
Operational excellence | Automate the environment and operations to increase speed and reduce human error. Roll problem updates back or forward quickly. Implement monitoring and diagnostics from the start.
Performance efficiency | Efficiently meet the demands placed on your workloads. Favor horizontal scaling (scaling out) and design it into your systems. Continually perform performance and load testing to identify potential bottlenecks.
Reliability | Build workloads that are both resilient and available. Resiliency enables workloads to recover from failures and continue functioning. Availability ensures users have access to workloads at all times. Design applications to anticipate failures and recover from them.
Security | Implement security across the full lifecycle of an application, from design and implementation to deployment and operations. Pay close attention to identity management, infrastructure access, application security, and data sovereignty and encryption.

To get started, Microsoft offers a set of online assessments to help you evaluate your current cloud workloads based on five well-architected pillars.

Microservices

Cloud-native systems employ microservices, a common architectural style used to construct modern applications.

Microservices are built as a distributed set of small, independent services that interact through a shared fabric and share the following characteristics:

  • Each implements a specific business function in the context of a larger domain.
  • Each is independently developed and can be deployed independently.
  • Each is independent, encapsulating its own data storage technology, dependencies, and programming platform.
  • Each runs in its own process and communicates with other microservices using standard communication protocols such as HTTP/HTTPS, gRPC, WebSocket or AMQP.
  • Together they form an application.
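As a minimal sketch of these characteristics (the service name, route, and port are hypothetical), an ASP.NET Core microservice is self-contained, exposes one business capability over a standard protocol, and binds to its own port:

```csharp
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// One business capability within a larger domain, exposed over HTTP.
// State and data storage would be private to this service.
app.MapGet("/api/products/{id:int}", (int id) =>
    Results.Ok(new { Id = id, Name = $"Product {id}" }));

// Factor 7: the service binds to and listens on its own port.
app.Run("http://0.0.0.0:5010");
```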

Figure 1-4 contrasts the monolithic application approach with the microservices approach. Note how the monolith is composed of a layered architecture that executes in a single process, and how it typically consumes a relational database. The microservices approach, by contrast, segregates functionality into independent services, each with its own logic, state, and data. Each microservice hosts its own datastore.

Figure 1-4. Monolithic deployment versus microservices

Note how microservices facilitate the "process" principle in the twelve-factor application discussed earlier in this chapter.

Element #6 specifies that "Each microservice should execute in its own process, isolated from other running services."

Why use microservices?

Microservices provide agility.

Earlier in this chapter, we compared an e-commerce application built as a monolith to one constructed with microservices. In that example, we saw some clear benefits:

  • Each microservice has an autonomous lifecycle and can evolve independently and deploy frequently. You don't have to wait for a quarterly release to deploy a new feature or update. You can update a small area of a live application, reducing the risk of disrupting the entire system. The update can be made without a full redeployment of the application.
  • Each microservice can scale independently. Instead of scaling the entire application as a single unit, you scale out only those services requiring more processing power to meet desired performance levels and service-level agreements. Fine-grained scaling provides greater control of your system and helps reduce overall costs, as you scale portions of your system, not everything.

Developing microservices

Microservices can be created on any modern development platform.

The Microsoft .NET platform is a good choice. It is free and open source, with many built-in features that simplify microservice development. .NET is cross-platform. Applications can be built and run on Windows, macOS, and most flavors of Linux.

.NET is performant and scores well against Node.js and other competing platforms. Interestingly, TechEmpower conducted an extensive set of performance benchmarks across many web application platforms and frameworks. .NET scored in the top 10, well above Node.js and other competing platforms.

.NET is maintained by Microsoft and the .NET community on GitHub.

Microservice challenges

While distributed cloud-native microservices can provide tremendous agility and speed, they present a number of challenges:

Communication

How will front-end client applications communicate with backend core microservices? Will you allow direct communication? Or might you abstract the backend microservices with a gateway facade that provides flexibility, control, and security?

How will backend core microservices communicate with each other? Will you allow direct HTTP calls that can increase coupling and impact performance and agility? Or might you consider decoupled messaging with queue and topic technologies?

Resiliency

A microservices architecture moves your system from in-process to out-of-process network communication. In a distributed architecture, what happens when service B isn't responding to a network call from service A? Or what happens when service C becomes temporarily unavailable and other services calling it become blocked?

Distributed data

By design, each microservice encapsulates its own data, exposing operations via its public interface. If so, how do you query data or implement a transaction across multiple services?

Secrets

How will your microservices securely store and manage secrets and sensitive configuration data?
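One common answer on Azure is a managed secret store. A minimal sketch, assuming the Azure.Identity and Azure.Security.KeyVault.Secrets packages and a hypothetical vault name:

```csharp
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

// Secrets stay in the vault; the service authenticates with its ambient
// (managed) identity rather than an embedded credential.
var client = new SecretClient(
    new Uri("https://my-vault.vault.azure.net/"),   // hypothetical vault
    new DefaultAzureCredential());

KeyVaultSecret secret = await client.GetSecretAsync("ProductsDb-ConnectionString");
Console.WriteLine($"Loaded secret '{secret.Name}' from Key Vault.");
```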

Manage complexity with Dapr

Dapr is a distributed, open-source application runtime. Through an architecture of pluggable components, it greatly simplifies the plumbing behind distributed applications. It provides a dynamic glue that binds your application to pre-built infrastructure capabilities and components from the Dapr runtime. Figure 1-5 shows Dapr from 20,000 feet.

Figure 1-5. Dapr at 20,000 feet.

In the top row of the diagram, notice how Dapr provides language-specific SDKs for common development platforms. Dapr v1 includes support for .NET, Go, Node.js, Python, PHP, Java, and JavaScript.

While language-specific SDKs enhance the developer experience, Dapr is platform-agnostic. Behind the scenes, Dapr's programming model exposes functionality through standard HTTP/gRPC communication protocols. Any programming platform can call Dapr through its native HTTP and gRPC APIs.
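For instance, a service written in any language can persist state through its Dapr sidecar over plain HTTP. A minimal C# sketch, assuming a sidecar listening on the default port 3500 and a state-store component named statestore:

```csharp
using System.Net.Http.Json;

using var http = new HttpClient();

// Dapr state API: POST http://localhost:<dapr-http-port>/v1.0/state/<store-name>
var response = await http.PostAsJsonAsync(
    "http://localhost:3500/v1.0/state/statestore",
    new[] { new { key = "order-42", value = new { Status = "Created" } } });

response.EnsureSuccessStatusCode();
Console.WriteLine("State saved through the Dapr sidecar.");
```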

The blue boxes across the center of the figure represent the Dapr building blocks. Each exposes pre-built plumbing code for a distributed application capability that your application can consume.

The components row represents a large set of predefined infrastructure components that your application can consume. Think of components as infrastructure code you don't have to write.

The bottom row highlights the portability of Dapr and the diverse environments across which it can run.

Going forward, Dapr has the potential to have a profound impact on cloud-native application development.

Containers

In any cloud-native conversation, it's natural to hear the term "container." In the book Cloud Native Patterns, author Cornelia Davis observes that "containers are a great enabler of cloud-native software." The Cloud Native Computing Foundation places microservice containerization as the first step in its Cloud-Native Trail Map, guidance for enterprises beginning their cloud-native journey.

Containerizing a microservice is simple and straightforward. The code, its dependencies, and a runtime are packaged into a binary called a container image. Images are stored in a container registry, which acts as a repository or library for images. A registry can be located on your development computer, in your data center, or in a public cloud. Docker itself maintains a public registry via Docker Hub. The Azure cloud features a private container registry to store container images close to the cloud applications that will run them.

When an application starts or scales, you transform the container image into a running container instance. The instance runs on any computer that has a container runtime engine installed. You can have as many instances of the containerized service as needed.
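As a sketch (project and image names are hypothetical), a typical multi-stage Dockerfile packages a .NET microservice, its dependencies, and the runtime into a single image:

```dockerfile
# Build stage: restore, compile, and publish the service.
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish ProductService.csproj -c Release -o /app/publish

# Runtime stage: copy only the published output onto a slim runtime image.
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "ProductService.dll"]
```

Building and running it locally might look like docker build -t product-service:1.0 . followed by docker run -p 8080:8080 product-service:1.0.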

Note how each container maintains its own set of dependencies and runtime, which can differ from one another. Here we see different versions of the Product microservice running on the same host. Each container shares a slice of the underlying host operating system, memory, and processor, yet they're isolated from one another.

Note how the container model embraces the "dependencies" principle from the twelve-factor application.

Element #2 specifies that "Each microservice isolates and packages its own dependencies to make changes without affecting the entire system."

Containers support both Linux and Windows workloads, and the Azure cloud openly embraces both. Interestingly, it's Linux, not Windows Server, that has become the more popular operating system in Azure.

While multiple container vendors exist, Docker holds the largest market share. The company has been driving the software container movement. It has become the de facto standard for packaging, deploying and running cloud-native applications.

Why use containers?

Containers provide portability and ensure consistency between environments. By encapsulating everything into a single package, microservices and their dependencies are isolated from the underlying infrastructure.

You can deploy containers in any environment that hosts the Docker runtime engine. Containerized workloads also eliminate the expense of provisioning each environment with frameworks, software libraries, and runtime engines.

By sharing the underlying operating system and host resources, containers have a much smaller footprint than full virtual machines. A smaller size increases the density or number of microservices a given host can run at one time.

Container orchestration

While tools such as Docker create images and run containers, you also need tools to manage them. Container management is done with a special software program called a container orchestrator. Orchestration is essential when operating at scale with many independent running containers.

Figure 1-7 shows the administrative tasks that the container orchestrator automates.

Figure 1-7. What container orchestrators do

The following table describes common orchestration tasks.

Task | Explanation
Scheduling | Automatically provision container instances.
Affinity/anti-affinity | Provision containers near or far away from each other, helping availability and performance.
Health monitoring | Automatically detect and correct failures.
Failover | Automatically reprovision a failed instance to a healthy machine.
Scaling | Automatically add or remove container instances to meet demand.
Networking | Manage a networking overlay for container communication.
Service discovery | Enable containers to locate each other.
Rolling upgrades | Coordinate incremental upgrades with zero-downtime deployment. Automatically roll back problematic changes.

Note how container orchestrators embrace the "disposability" and "concurrency" principles from the twelve-factor application, discussed earlier in this chapter.

Element #9 specifies that "a service instance should be disposable, support fast startup to increase scalability opportunities, and support graceful shutdown to keep the system in the correct state." Docker containers, as well as orchestrators, inherently meet this requirement.

Element #8 specifies that "the service scales out across a large number of small identical processes (replicas), rather than scale up on a single large instance on the most powerful machine available."

Despite the existence of multiple container orchestrators, Kubernetes has become the de facto standard in the cloud-native world. It is a portable, extensible open source platform for managing containerized workloads.
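A minimal (hypothetical) Kubernetes Deployment manifest shows the declarative style: you state the desired replica count and upgrade strategy, and the orchestrator handles the scheduling, scaling, health monitoring, and rolling upgrades described above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-service            # hypothetical service name
spec:
  replicas: 3                      # scaling: three identical instances (factor 8)
  selector:
    matchLabels:
      app: product-service
  strategy:
    type: RollingUpdate            # incremental upgrades with zero downtime
  template:
    metadata:
      labels:
        app: product-service
    spec:
      containers:
        - name: product-service
          image: myregistry.azurecr.io/product-service:1.0   # hypothetical image
          ports:
            - containerPort: 8080
```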

You could host your own instance of Kubernetes, but then you'd be responsible for provisioning and managing its resources, which can be complex. The Azure cloud features Kubernetes as a managed service. Both Azure Kubernetes Service (AKS) and Azure Red Hat OpenShift (ARO) enable you to fully leverage the features and power of Kubernetes as a managed service, without having to install and maintain it.

Backing services

Cloud-native systems depend upon many different ancillary resources, such as data stores, message brokers, monitoring, and identity services. These services are known as backing services.
Figure 1-8 shows many common backing services that cloud-native systems consume.

Figure 1-8. Common backing services

You could host your own backing services, but then you'd be responsible for licensing, provisioning, and managing those resources.

Cloud providers offer a rich assortment of managed backing services. Instead of owning a service, you simply consume it. The provider operates the resource at scale and bears the responsibility for performance, security, and maintenance. Monitoring, redundancy, and availability are built into the service. Providers guarantee service-level performance and fully support their managed services: open a ticket and they fix your issue.

Cloud-native systems favor managed backing services from cloud vendors. The savings in time and labor can be significant. The operational risk of hosting your own services and experiencing trouble can get expensive fast.

A best practice is to treat a backing service as an attached resource, dynamically bound to a microservice, with configuration information (a URL and credentials) stored in external configuration.

Factor #4 specifies that backing services "should be exposed via an addressable URL. Doing so decouples the resource from the application, making it interchangeable."

Factor #3 specifies that "configuration information is moved out of the microservice and externalized through a configuration management tool outside of the code."

With this pattern, a backing service can be attached and detached without code changes. You might promote a microservice from QA to a staging environment. You update the microservice configuration to point to the backing services in staging and inject the settings into your container through environment variables.

Cloud vendors provide APIs for you to communicate with their proprietary backing services. These libraries encapsulate the proprietary plumbing and complexity. However, communicating directly with those APIs will tightly couple your code to that particular backing service. It's a widely accepted practice to insulate the implementation details of the vendor API. Introduce an intermediation layer, or intermediate API, that exposes generic operations to your service code and wraps the vendor code inside it. This loose coupling enables you to swap one backing service for another, or move your code to a different cloud environment, without having to make changes to the mainline service code. Dapr, discussed earlier, follows this model with its set of prebuilt building blocks.
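A small C# sketch of such an intermediation layer (all names are hypothetical; the vendor-specific plumbing is elided):

```csharp
using System;
using System.Threading.Tasks;

// Generic operation the mainline service code depends on; no vendor types leak out.
public interface IMessageBus
{
    Task PublishAsync(string topic, string payload);
}

// Hypothetical adapter that would wrap a vendor SDK (for example, a service-bus client).
// Swapping backing services means writing a new adapter, not changing mainline code.
public sealed class CloudMessageBus : IMessageBus
{
    private readonly string _connectionString;

    public CloudMessageBus(string connectionString) =>
        _connectionString = connectionString;

    public Task PublishAsync(string topic, string payload)
    {
        // Vendor-specific plumbing would live here, hidden behind the interface.
        Console.WriteLine($"Publishing to '{topic}': {payload}");
        return Task.CompletedTask;
    }
}
```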

As a final thought, backing services also promote the "statelessness" principle from the twelve-factor application, discussed earlier in this chapter.

Factor #6 specifies that "each microservice should execute in its own process, isolated from other running services. Externalize required state to a backing service such as a distributed cache or data store."

Automation

As you've seen, cloud-native systems embrace microservices, containers, and modern system design to achieve speed and agility. But that's only part of the story. How do you provision the cloud environments upon which these systems run? How do you rapidly deploy app features and updates? How do you round out the full picture?

Enter the widely accepted practice of Infrastructure as Code, or IaC.

With IaC, you automate platform provisioning and application deployment. You essentially apply software engineering practices, such as testing and versioning, to your DevOps practices. Your infrastructure and deployments are automated, consistent, and repeatable.

Automating infrastructure

Tools like Azure Resource Manager, Azure Bicep, Terraform from HashiCorp, and the Azure CLI enable you to declaratively script the cloud infrastructure you require. Resource names, locations, capacities, and secrets are parameterized and dynamic. The script is versioned and checked into source control as an artifact of your project. You invoke the script to provision a consistent and repeatable infrastructure across system environments, such as QA, staging, and production.

Under the hood, IaC is idempotent, meaning that you can run the same script over and over without side effects. If the team needs to make a change, they edit and rerun the script. Only the updated resources are affected.
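A minimal Bicep sketch (resource names are hypothetical) shows the declarative, idempotent style: the file states the desired resources, and rerunning the deployment changes nothing unless the declaration itself changes:

```bicep
param location string = resourceGroup().location

// Desired state: one storage account. Redeploying is a no-op until this changes.
resource storage 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  name: 'stproductdemo001'     // hypothetical; must be globally unique
  location: location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}
```

You might deploy it with az deployment group create --resource-group cloud-rg --template-file main.bicep.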

In the article What is Infrastructure as Code, author Sam Guckenheimer describes how "teams who implement IaC can deliver stable environments rapidly and at scale. They avoid manual configuration of environments and enforce consistency by representing the desired state of their environments via code. Infrastructure deployments with IaC are repeatable and prevent runtime issues caused by configuration drift or missing dependencies. DevOps teams can work together with a unified set of practices and tools to deliver applications and their supporting infrastructure rapidly, reliably, and at scale."

Automating deployments

The twelve-factor application, discussed earlier, calls for separate steps when transforming completed code into a running application.

Factor #5 specifies that "each release must enforce a strict separation across the build, release, and run stages. Each should be tagged with a unique ID and support the ability to roll back."

Modern CI/CD systems help fulfill this principle. They provide separate build and delivery steps that help ensure consistent and quality code that's readily available to users.

Figure 1-9 shows the separation across the deployment process.

Figure 1-9. Deployment steps in a CI/CD pipeline

In the previous figure, pay special attention to the separation of tasks:

  1. The developer constructs a feature in their development environment, iterating through what is called the "inner loop" of code, run, and debug.

  2. When complete, that code is pushed to a code repository, such as GitHub, Azure DevOps, or BitBucket.

  3. The push triggers a build stage that transforms the code into a binary artifact. The work is implemented with a continuous integration (CI) pipeline. It automatically builds, tests, and packages the application.

  4. The release stage picks up the binary artifact, applies external application and environment configuration information, and produces an immutable release. The release is deployed to a specified environment. The work is implemented with a continuous delivery (CD) pipeline. Each release should be identifiable. You can say, "This deployment is running Release 2.1.1 of the application."

  5. Finally, the released feature runs in the target execution environment. Releases are immutable, meaning that any change must create a new release.
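As a sketch of this separation in a GitHub Actions workflow (job names and project layout are hypothetical), the build stage produces an immutable artifact and a separate job releases it to an environment:

```yaml
name: ci-cd

on:
  push:
    branches: [main]

jobs:
  build:            # build stage: compile, test, and package an immutable artifact
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-dotnet@v4
        with:
          dotnet-version: '8.0.x'
      - run: dotnet test
      - run: dotnet publish -c Release -o ./publish
      - uses: actions/upload-artifact@v4
        with:
          name: app
          path: ./publish

  release:          # release stage: pick up the artifact, apply environment config, deploy
    needs: build
    runs-on: ubuntu-latest
    environment: staging
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: app
      # Deployment to the target environment (for example, an Azure Web App)
      # would go here, using the appropriate action or CLI command.
```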

By applying these practices, organizations have revolutionized the way they deliver software. Many have moved from quarterly releases to on-demand updates. The goal is to catch problems early in the development cycle, when they're less expensive to fix. The longer the duration between integrations, the more expensive problems become to resolve. With consistency in the integration process, teams can commit code changes more frequently, leading to better collaboration and software quality.
