Hard core science: what is cloud native?

This article is mainly a summary of the Bilibili course "What is cloud native?", supplemented by other reference articles.

With the popularity of cloud computing, the concept of cloud native (Cloud Native) came into being. However, after collecting relevant material online, most descriptions of what cloud native is remain vague. The reason is that cloud native has been continuously developing and changing; there is no exact definition, and no single person or organization owns the right to interpret it.

Next, I will explain cloud native as I understand it, based on materials collected online and my own experience from internship work.

Preliminary knowledge - some basic concepts of cloud

Before introducing cloud native, we first need to understand some cloud-related terminology.

1. IaaS, PaaS, SaaS

  • Infrastructure as a Service (IaaS)

    A service model in which IT infrastructure (servers, storage, and networks) is provided as a service over the network, with charges based on users' actual usage or occupancy of resources.

  • Platform as a Service (PaaS)

    A business model in which a server platform is provided as a service over the network.

    PaaS actually refers to providing a software development platform as a service, delivered to users in the SaaS model; PaaS is therefore itself an application of the SaaS model. However, the emergence of PaaS has accelerated the development of SaaS, especially the speed at which SaaS applications can be built. In 2007, SaaS vendors at home and abroad successively launched their own PaaS platforms.

  • Software as a Service (SaaS)

    Software services are provided over the network: the SaaS provider deploys the application software on its own servers, and customers order the application services they need from the vendor over the Internet according to their actual work requirements, pay based on the number of services ordered and the duration of use, and receive the services from the SaaS provider over the Internet.


2. Public cloud, private cloud, hybrid cloud

A cloud computing platform, also known as a cloud platform, provides computing, network, and storage capabilities for deploying cloud computing resources, based on underlying hardware and software resources. Cloud platforms usually come in three types: public cloud, private cloud, and hybrid cloud.

Recommended reading: Comic: What are public, private, and hybrid clouds? (Programming Blog, CSDN)




1. What is cloud native

The concept of cloud native (Cloud Native) was first proposed by Matt Stine of Pivotal in 2013. In 2015, when cloud native was first being promoted, Matt Stine defined several characteristics of cloud-native architecture in the book "Migrating to Cloud-Native Application Architectures": the twelve factors, microservices, self-service agile infrastructure, API-based collaboration, and antifragility. In the same year, the Cloud Native Computing Foundation (CNCF) was established, which initially defined cloud native as: containerized packaging + automated management + microservice-oriented. In 2017, Matt Stine revised his view in an interview with InfoQ, summarizing cloud-native architecture into six characteristics: modularity, observability, deployability, testability, disposability, and replaceability.

The latest Pivotal official website summarizes cloud native into four main points: DevOps + continuous delivery + microservices + containers. In 2018, CNCF updated its definition of cloud native, adding the service mesh and declarative APIs. The v1.0 version of the cloud-native definition describes the cloud-native technology system as follows:

  • Cloud-native technology enables organizations to build and run elastically scalable applications in new dynamic environments such as public cloud, private cloud, and hybrid cloud
  • Cloud-native representative technologies include containers, service meshes, microservices, immutable infrastructure, and declarative APIs
    • These techniques enable building loosely coupled systems that are more fault-tolerant, easier to manage, and easier to observe
    • Combined with robust automation, cloud-native technologies allow engineers to make high-impact changes to the system frequently and predictably with minimal toil

Today, we can understand cloud native as a method of building and running applications: a set of technologies and methodologies. Its main representative technologies/methodologies are: DevOps, containers and container orchestration, declarative APIs, microservices and microservice governance, the service mesh, and immutable infrastructure.

For a more detailed understanding, we can split the term into two parts: "cloud" and "native". "Native" refers to applications developed in any programming language, called native applications; "cloud" means that we deploy the developed applications to the cloud so that they run efficiently there, making full use of the cloud platform's elasticity and distributed advantages. In short, cloud native is the set of solutions covering the whole process of moving native applications onto the cloud and running them there. Combined with the relevant technology stack:

An application that conforms to cloud-native architecture should be: containerized using an open-source stack (K8s + Docker); built on a microservice architecture to improve flexibility and maintainability; supported by agile methods and DevOps for continuous iteration and automated operations; and able to leverage cloud platform facilities for elastic scaling, dynamic scheduling, and optimized resource utilization.



2. Background of the birth of cloud native

We now have a basic understanding of the concept of cloud native. Next, let's talk about its origin. Any trendy technology or methodology is proposed in response to actual needs; cloud native was proposed to adapt to the distributed architectures behind today's large and complex application systems.

From the 1980s to the present, software and information system technologies have iterated through different forms, each era with its own milestone technologies. The evolution is mainly reflected in four aspects: development process, application architecture, packaging and deployment, and infrastructure, as shown in the figure below:
(Figure: the evolution of development process, application architecture, packaging and deployment, and infrastructure)

  • Development process: from the waterfall model, through the agile model, to today's popular DevOps, GitOps, and other models
  • Application architecture: from monolithic architecture and layered architecture, through distributed architecture, to today's microservice architecture
  • Packaging and deployment: from bare-metal servers and virtual machines to today's mainstream containerization technology
  • Infrastructure: from self-built data centers and hosted data centers to today's cloud computing

It can be seen that DevOps, microservices, containerization, and cloud computing are today's mainstream models.

As business scale expands, moving from a monolithic to a distributed architecture, splitting one large application into multiple small ones by vertical or horizontal segmentation, is almost an inevitable choice to increase system capacity and enhance availability. A distributed architecture makes development and delivery easier, reduces the impact of failures, and greatly improves scalability. However, along with these advantages it brings many problems: higher release frequency, more complex deployment, much harder system architecture design, and, because network communication is introduced, response time becomes an important factor. In addition, the difficulty of operation and maintenance multiplies.


In order to solve the above problems, a series of solutions have been proposed:

  • Service governance (dependencies, call chains)
  • Architecture management (version management, lifecycle management, [orchestration, aggregation, scheduling])
  • DevOps
  • Automated operation and maintenance
  • Resource scheduling monitoring
  • Overall Architecture Monitoring
  • Traffic governance (load balancing, routing, circuit breakers...)

To sum up, the typical requirements for running a distributed system mainly include the following four aspects:

  1. Life cycle management
  2. Network management
  3. State storage management
  4. Binding and integration with applications inside and outside the distributed system

The following figure roughly shows some of the technical issues involved in each of the above requirements:

(Figure: the technical issues involved in each of the four requirements)

The mainstream system that used to meet the above requirements was the ESB (Enterprise Service Bus).


An ESB is a kind of middleware, such as message-oriented middleware or a lightweight integration framework, that supports service-, message-, and event-based interactions in heterogeneous environments under a service-oriented architecture, with appropriate service levels and manageability. It provides a good feature set, but its monolithic architecture and the tight technical coupling between business logic and the platform lead to technical and organizational centralization, which runs contrary to distributed systems. Analyzed against the four requirements above, the limitations of ESB systems for distributed support are as follows:

  • Life cycle: an ESB usually supports only one language runtime, which limits software packaging, available libraries, patch frequency, etc.
  • Network: only one language and its related technologies are supported, and network concerns and semantics are deeply embedded in the business logic; subsequent architecture upgrades therefore require large code changes
  • State: the libraries and interfaces that interact with state are not fully abstracted and not fully decoupled from the service runtime
  • Binding: code and design must be built around the ESB's message exchange patterns, which conflicts with the multi-protocol, multi-data-format, multi-pattern message exchange a distributed system needs to connect to other systems, limiting system expansion

Given these deficiencies of ESB systems, in today's cloud computing era cloud-native solutions have been proposed, based on technologies such as containerization, container orchestration, DevOps, microservices, and the typical governance system, the service mesh.

That is the background of the birth of cloud native. To put it bluntly, it was driven by the actual needs of development.



3. Cloud native series technologies/methodologies

Cloud native can be understood as a method of building and running applications: a set of technologies and methodologies. Its main representative technologies/methodologies include: DevOps, containers and container orchestration, microservices and microservice governance, the service mesh, and immutable infrastructure.

1. DevOps

DevOps is a collection of practices and tools that automate the processes between IT operations and software development teams. With the increasing popularity of agile software development, continuous integration (CI) and continuous delivery (CD) have become an ideal solution in this field. In a CI/CD workflow, every integration is verified through automated builds, including compiling, testing, and releasing, which helps developers catch integration errors early so that teams can deliver software to production quickly, safely, and reliably.


  • Continuous integration (CI): after code is written, the subsequent building, testing, and merging into the code repository are continuously automated.
  • Continuous delivery (CD): code is automatically packaged into images and published to an image repository.
  • Continuous deployment: images in the repository are automatically deployed to the k8s platform, where metrics and logs can be monitored while the code runs.

To sum up in one sentence: with DevOps, as soon as the code changes, a series of automated processes makes the effect of the change visible on the online cloud platform, replacing a large amount of manual handoff between developers and operations staff with an automated pipeline.

That is, a closed loop of writing code, building, analyzing, testing, merging into the code base, packaging into images, deploying, and online operation and maintenance analysis.
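This closed loop can be sketched as a sequence of automated stages where a failing stage stops the pipeline. This is a minimal illustration only; the stage names and the dict-based "artifact" are assumptions for the sketch, not any real CI system's API:

```python
# Toy CI/CD pipeline: each stage is an automated step operating on an
# artifact; the first failing stage stops the pipeline.

def build(artifact):
    """Compile/package the artifact."""
    artifact["built"] = True
    return True

def run_tests(artifact):
    """Only a built artifact can pass the test stage."""
    return artifact.get("built", False)

def deploy(artifact):
    """Ship the artifact to the target platform."""
    artifact["deployed"] = True
    return True

def run_pipeline(artifact, stages):
    """Run stages in order; stop at the first failure."""
    for stage in stages:
        if not stage(artifact):
            return f"failed at {stage.__name__}"
    return "delivered"

artifact = {"commit": "abc123"}
print(run_pipeline(artifact, [build, run_tests, deploy]))  # delivered
```

Skipping a stage breaks the chain: running `run_tests` without `build` fails immediately, which is exactly the early feedback CI is meant to provide.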

2. Containers & Container Orchestration

Container technology has a long history, but around 2013, when dotCloud invented the "container image" in the Docker project, it creatively solved the problem of application packaging and injected new vitality into container technology: the application container became popular all over the world.

The so-called application container stands in contrast to earlier container technology, which could be called the system container. A system container is used much like a lightweight virtual machine, with many applications running inside it, while an application container usually runs only the process of one specific application and its child processes, and nothing else.

With the introduction of Docker and the emergence of application containers, only a single application runs in each container. Given the distributed, or even microservice, model of application development, we must run all the applications of a system in containers and organize and orchestrate their relationships, operating logic, and communication mechanisms to ensure the whole system runs smoothly.

Therefore, it is difficult for a system that manages single containers to generate value by itself; container orchestration is fundamental. The most famous container orchestration system is Kubernetes, often abbreviated to k8s. Modern container technology and k8s have turned packaging, distribution, and application deployment into a programming-language-independent format.

k8s has the following key features:

  • It follows the declarative API programming paradigm: the declarative API is introduced into the cloud computing management platform and, combined with the controller pattern, supports the basic operating logic of the whole k8s system
  • It is "application-centric" modern application infrastructure: it manages various basic supporting services and exposes these infrastructure capabilities to upper-layer applications through declarative APIs. A single-machine operating system mainly exists to create, run, and schedule applications; k8s creates, starts, runs, and schedules applications in the larger environment of cloud computing
  • It is a "platform for platforms": a platform system provided for building other platforms. In most cases, to run applications more completely, we generally do not build and run applications directly against k8s's native API; instead, we add other platform systems on top of k8s to run them. The two most famous such systems are the service mesh and serverless computing
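The declarative API plus controller pattern mentioned above can be sketched as a reconciliation loop: the user declares the desired state, and a control loop compares it with the observed state and acts until the two converge. The dict-based "cluster" below is a toy stand-in for illustration, not the real Kubernetes API:

```python
# Toy reconciliation loop in the style of a k8s controller:
# converge observed state toward declared desired state.

def reconcile(desired, cluster):
    """One controller pass: create missing replicas, delete excess
    ones, until the observed count matches the desired count."""
    actions = []
    observed = cluster.get("replicas", 0)
    while observed != desired["replicas"]:
        if observed < desired["replicas"]:
            observed += 1
            actions.append("create-pod")
        else:
            observed -= 1
            actions.append("delete-pod")
    cluster["replicas"] = observed
    return actions

cluster = {"replicas": 1}
print(reconcile({"replicas": 3}, cluster))  # ['create-pod', 'create-pod']
print(cluster)                              # {'replicas': 3}
```

The key property is that the user never issues imperative "start two more pods" commands; they only state the desired end state, and the controller works out the actions.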

3. Microservices & Microservice Governance

Microservices are a popular architectural style for building applications that are resilient, highly scalable, independently deployable, and capable of rapid iteration. The essence is to split one large service into small services, each of which is self-contained and implements a single business capability within a bounded context.

Dynamism is a natural attribute of cloud-native applications, the microservice architecture is the key to supporting this goal, and service governance tools are the foundation that supports the operation of microservices. To facilitate developing, creating, maintaining, and operating microservice applications, users usually rely on a corresponding service governance framework, such as Dubbo, Spring Cloud Alibaba, or the service governance framework of the future, the service mesh.

4. Service mesh

In a distributed/microservice architecture, communication between services is crucial, so we need to ensure that the communication channel is fault-free, secure, highly available, and robust. These requirements are precisely why the service mesh emerged as an infrastructure component. It works by attaching a service proxy to the outside of each application instance to keep service-to-service communication under control. This proxy is called a sidecar and is mainly responsible for the communication-related functions between business unit instances (such as service discovery, load balancing, circuit breaking, timeouts, retries, etc.). The service mesh was not achieved overnight: it is a new generation of communication model that evolved from the earliest ESB through Dubbo and Spring Cloud.

(Figure: a service mesh, where each instance pairs business logic with a sidecar proxy)

In the earlier era of pure microservice frameworks such as Dubbo and Spring Cloud, almost every piece of business logic in a system needed to communicate with other modules of the same system. To implement advanced network functions such as service discovery, load balancing, circuit breaking and rate limiting, and service routing, these functions had to be embedded in the application, and in the early days they were loaded through an SDK. Considering that the modules of a distributed system may be developed in different programming languages, the SDK itself must expose its interface in multiple languages, which causes several problems:

  1. The SDK is available in only a limited set of language versions
  2. Updating the SDK forces updates to the business code

Therefore, in the service mesh era, each microservice application is divided into two parts. As shown in the figure above, the business logic only needs to send request calls through a specific lightweight SDK; the corresponding network functions are handled by a sidecar, a separately developed, independently running process that communicates only through standard protocols, with no strong binding or coupling to any programming language. Together the two form one application. The sidecar specializes in inter-service communication and has nothing to do with business logic, so the business logic can focus on itself without being aware of the network. Developers of each business unit no longer need to care which SDKs to call; they only need to implement their own business logic and talk to the sidecar's standard interface.

Thus, in the sidecar form, the service mesh separates service governance from business logic into an independent process, achieving unified governance and network security for heterogeneous systems.

The most typical representative products of the service mesh are Istio, Open Service Mesh, and Alibaba Cloud's ASM.
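What a sidecar does for service-to-service traffic can be sketched with a toy proxy that adds a simple circuit breaker around an upstream call, keeping that logic outside the business code. The class name, threshold, and messages below are assumptions for illustration, not any real mesh's API:

```python
# Toy sidecar proxy: the business logic just calls proxy.call(),
# while failure counting and circuit breaking live in the proxy.

class SidecarProxy:
    def __init__(self, upstream, max_failures=3):
        self.upstream = upstream          # the real service call
        self.max_failures = max_failures  # failures before the circuit trips
        self.failures = 0
        self.open = False                 # True = circuit tripped

    def call(self, request):
        if self.open:
            return "circuit open: fail fast"
        try:
            response = self.upstream(request)
            self.failures = 0             # success resets the counter
            return response
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.open = True          # stop hammering a dead service
            return "upstream error"

def flaky_service(request):
    raise ConnectionError("service down")

proxy = SidecarProxy(flaky_service)
for _ in range(4):
    print(proxy.call("GET /orders"))
# after 3 failures the circuit opens and the 4th call fails fast
```

Because the breaker lives in the proxy rather than in an SDK, the same behavior applies to services written in any language, which is exactly the coupling problem the mesh solves.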

5. Immutable Infrastructure

Immutable infrastructure was proposed by Chad Fowler in a 2013 blog post. The idea arose because, in the pre-container era, we usually deployed applications on bare-metal servers or virtual machines whose underlying layers carried a great deal of configuration. That makes it very difficult to rebuild exactly the same environment after a disaster, unless we ensure that every operator and developer submits a work order following the exact process and executes the change to the letter; there is also a risk of inconsistent state.

To change this situation, the best approach is to keep the underlying environment unchanged, or at least keep the system environment the application runs on unchanged. The container era solves this very well: in modern application containers, each container runs a single application. If the application's data and state are stored in a storage system outside the container, independent of the container's life cycle, then once started the container stores nothing locally except temporary data, keeping it close to a read-only state. If the container fails or must be rebuilt for any other reason, we only need to start a new container from the same image and attach it to the original storage system to restore exactly the same environment.

Therefore, the core idea of immutable infrastructure is that once an infrastructure instance is created, it becomes read-only. If it needs to be modified or upgraded, this is achieved not by reconfiguring the instance, but only by replacing it with a new one.
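The replace-rather-than-modify idea can be sketched in a few lines: an instance is frozen at creation, and an "upgrade" produces a brand-new instance from the changed definition instead of mutating the old one. The field names are illustrative only:

```python
# Immutable-infrastructure sketch: instances are read-only once
# created; an upgrade builds a replacement, never edits in place.
from dataclasses import dataclass, replace

@dataclass(frozen=True)  # frozen: attribute assignment raises an error
class Instance:
    image: str
    version: str

def upgrade(instance, new_version):
    """Return a brand-new instance instead of modifying the old one."""
    return replace(instance, version=new_version)

old = Instance(image="web", version="1.0")
new = upgrade(old, "2.0")
print(old.version, new.version)  # 1.0 2.0  (the old instance is untouched)
```

Any attempt to assign to a field of a frozen instance raises an exception, mirroring the rule that running infrastructure is never patched in place.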

In this way, in the era of public, private, and hybrid clouds, if the system runs on an IaaS environment, the data should live neither in the container nor on the host running the container, but in a unified storage system outside the machine. The life cycle of this storage system is independent of the host's life cycle, so even if the host fails and the containers fail with it, we can completely reproduce the pre-failure state by rebuilding the host and its containers. This avoids the risk of state inconsistencies.

In addition, if we follow immutable infrastructure, there is now a technology called IaC, Infrastructure as Code: by calling the APIs of the corresponding IaaS cloud, not only the containers but also the container network and the rest of the underlying system environment can be configured and reproduced from code, which makes overall system operation and maintenance simpler and easier to implement.



4. Features of cloud native system

As mentioned above, dynamism is a natural attribute of cloud-native applications and cloud-native infrastructure, and the microservice architecture is the key to supporting this goal. Therefore, cloud-native systems usually need to have the following functional characteristics:

1️⃣ When building a cloud native system, there should be a unified API gateway

Each service provides its functions through an API. Of course, we do not want the client to communicate with each microservice independently to obtain the functions it needs; instead, the APIs provided by the individual microservices should be aggregated into a composite API, that is, a unified external access interface. Therefore, when building a cloud-native system, there should also be a key component called the API gateway.
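The aggregation role of an API gateway can be sketched as prefix-based routing behind a single entry point. The routes and backing services below are hypothetical, chosen only to show the dispatch idea:

```python
# Toy API gateway: one external entry point dispatches requests
# to backing microservices by path prefix.

def users_service(path):
    return {"service": "users", "path": path}

def orders_service(path):
    return {"service": "orders", "path": path}

ROUTES = {
    "/api/users": users_service,
    "/api/orders": orders_service,
}

def gateway(path):
    """Dispatch by longest matching prefix; 404 if nothing matches."""
    for prefix in sorted(ROUTES, key=len, reverse=True):
        if path.startswith(prefix):
            return ROUTES[prefix](path)
    return {"error": 404}

print(gateway("/api/orders/42"))  # {'service': 'orders', 'path': '/api/orders/42'}
```

A real gateway adds authentication, rate limiting, and protocol translation at this same choke point, which is why it is a natural place for unified traffic management.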

2️⃣ Microservice governance

Service governance tools are the fundamental support for the operation of microservices. For example, Istio, Open Service Mesh, Linkerd, and similar tools are all microservice governance tools, especially in the service mesh era.

3️⃣ Serverless platform

Serverless is a movement driven by developers and businesses who recognize that software is eating the world, but that if you build and maintain all of it yourself, it will eat you too. The movement calls for abstracting away the most trivial parts of building applications so that developers can really spend their time delivering business value.

The purpose of this movement is to let developers single-handedly build applications that handle production-grade traffic. They don't have to manage the scaling of their infrastructure, they don't have to provision servers, and they don't have to pay for unused resources. They can simply focus on development.

The most famous serverless platform is Knative.

4️⃣ Cloud native orchestration platform

The cloud-native orchestration platform schedules and runs applications and performs health checks, monitoring, elastic scaling, and so on. The most famous is k8s.

5️⃣ Flexible deployment

Microservice governance can usually also support flexible deployment of the corresponding services, such as grayscale release, blue-green deployment, canary deployment, A/B testing, shadow deployment, etc.



5. Cloud native architecture model

To run cloud-native applications properly, the cloud-native architecture model needs to be organized into at least the following five layers:

1️⃣ Infrastructure layer

The first layer is the infrastructure layer, i.e., infrastructure. It consists mainly of hosts, storage, and networking, and it can also be built on a private, public, or hybrid cloud. We can then use the hosts in the cloud, the so-called compute, as well as its network communication and storage technologies.

2️⃣ Provisioning layer

The second layer is the provisioning layer. It prepares the resources provided by the underlying infrastructure: host creation, operating system installation, storage space allocation, network creation, and so on. It is a key intermediate layer that configures resources for the applications above it, so we can also call it the host management layer. It typically relies on the DevOps toolchain or other provisioning tools.

3️⃣ Runtime layer

The third layer is the runtime layer, which mainly comprises the key interfaces associated with the container runtime environment: the Container Runtime Interface (CRI), the Container Network Interface (CNI), and the Container Storage Interface (CSI). These standard interfaces ensure that an application running in containerized form on top of the runtime environment can call the computing, network, and storage resources provided by the underlying layer. This is the problem the runtime layer solves.

4️⃣ Container orchestration and management

The fourth layer is container orchestration and management. We have emphasized that a single container has little value; only by orchestrating multiple containers together in a unified way can containers exert their value. Therefore, on top of the container runtime environment we need a container orchestration and management system, the most famous of which is Kubernetes. But Kubernetes is a platform for platforms: it is not designed to directly organize, maintain, and run modern cloud-native applications. Instead, other platforms are added on top of it to run applications. For example, to run microservice applications we attach a service mesh system such as Istio to Kubernetes, and the mesh uses the underlying Kubernetes to maintain and run the corresponding microservices, mainly ensuring secure and reliable communication between them. Likewise, if we expect to run serverless applications, we add a serverless runtime environment on top of Kubernetes; the best-known open-source product there is Knative.

5️⃣ Cloud-native application definition and development layer

The fifth layer is the definition and development layer of cloud-native applications. On top of the four layers below it, we can easily develop and define cloud-native applications.

So the architecture diagram looks like this:

(Figure: cloud-native architecture diagram)

First of all, the underlying infrastructure is likely a private, public, or hybrid cloud. On top of the cloud environment sits the container orchestration platform, that is, k8s. If we need serverless computing, we additionally provide a FaaS platform, the serverless runtime. On top of the platform we can expose container-like and service-like interfaces, the CaaS interface. Next comes a platform for deploying and serving microservices: the service mesh system Istio, which runs the various business units in the form of microservices. These business units, and even the underlying platform itself, should be incorporated into the monitoring system; without monitoring there is almost no way to manage them. Our three-dimensional monitoring system therefore consists of logs, metrics monitoring, and the corresponding distributed tracing system: monitoring components that are nearly indispensable in modern cloud systems. There is also the API gateway, which provides unified traffic management for APIs, especially for traffic from external systems, including API governance, flow control, and related functions. In addition, for so many microservice applications we should, where necessary, provide a unified identity and access management (IAM) component. Add one more layer, and that is the development environment our developers should own and use.

Refining this further, the architecture diagram is as follows:

(Figure: refined cloud-native architecture diagram)

On the public and private clouds there should be Kubernetes, and on that basis we should provide a technology middle platform for microservices, usually a service mesh. On top of the service mesh we should provide development frameworks, various middleware systems, the three-dimensional monitoring system, and microservice governance tools to ensure service discovery, service monitoring, inter-service communication, and the various network functions. In addition, to ensure efficient delivery of each microservice application, we should follow the DevOps, or even GitOps, model to implement the application's CI/CD and related functions. Of course, for external access traffic there should be a unified service access layer, which is the API gateway.

Having read this far, I believe you have a basic understanding of this modern cloud-native architecture. Briefly summarized, the technical scope of cloud native can be roughly divided into the following six aspects:


  1. Cloud application definition and development process
  2. Cloud native underlying technology
  3. Cloud application orchestration and management
  4. Cloud-native toolset
  5. Monitoring and Observability
  6. Serverless


6. How to build a cloud native system

With all that in mind, how exactly do we build cloud native? As long as we follow the preceding paradigm and combine the corresponding systems, we can basically build a cloud-native base platform.

In this cloud-native platform, each layer has its own function:

  • The microservice architecture solves the application complexity caused by the monolithic architecture. In fact, after we split into microservices, that complexity is handed to the external governance system rather than truly removed from the services themselves.
  • The service governance framework and the three-dimensional monitoring solution solve problems related to coordination between services and abnormal invocations.
  • Container technology is used to solve problems related to application building, distribution and deployment.
  • k8s is used to address requirements such as service orchestration, scheduling, and elasticity.
  • Service Mesh is used to solve the intrusiveness and traffic governance problems of microservice frameworks. Running a Service Mesh on top of k8s gives it better support from the underlying container environment.
  • With the help of IaaS cloud and container technology, problems related to immutable infrastructure can be solved.


Origin blog.csdn.net/qq_45173404/article/details/121091843