[Cloud Native Technology] Interpretation of Cloud Native Technology

The cloud-native technology system may look sprawling and complicated, but from any perspective it follows a single thread in which each advance pulls the whole system forward. On the timeline, the development of container technology gave birth to the cloud-native trend and solved the problem of resource supply at the bottom layer. The open-source Kubernetes then became the standard for container orchestration, and as open application platforms built on Kubernetes's extensibility grew richer, Kubernetes became the most important cornerstone of the cloud-native ecosystem. Subsequently, the core ideas of Service Mesh and Serverless focused on realizing value on the business side: sinking more capabilities into the infrastructure and making lightweight applications and migration to the cloud possible.

From the perspective of technical requirements, the microservice architecture is the preferred way to tame the complexity of a monolith, but it substantially increases the overall complexity of the system. Container technology and Kubernetes respectively solved the deployment of the large number of applications a microservice architecture produces and the management and scheduling of those containers. Kubernetes in turn provides better underlying support for Service Mesh, and drives the serverless evolution of the underlying infrastructure and the further sinking of middleware capabilities.

1. Container

A container is a technology that partitions processes into independent spaces so as to balance resource-usage conflicts between those spaces. In essence, a container is a special kind of process whose core function is to create a "boundary" by constraining and modifying how the process behaves. This boundary, together with the resource-limiting ability of Cgroups and the "strong consistency" provided by images, makes container technology one of the most critical underlying technologies of cloud native.

Docker containers are often called "lightweight" virtualization because their isolation resembles that of a virtual machine, but the label is not rigorous. In a virtual machine, the hypervisor is the most important component: through hardware virtualization it simulates CPU, memory, I/O devices and other hardware, a new operating system, the Guest OS, is installed on this virtual hardware, and the application processes running inside that guest operating system are isolated from one another.

The difference between Docker and virtual machines lies in how processes are isolated. Docker isolates a process simply by attaching additional Namespace parameters when the process is created; no real "Docker container" runs on the host. This sleight of hand makes the process appear to run inside an isolated "container", which is why containers avoid the extra resource consumption and footprint of a guest OS and hold great advantages in agility and performance.
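
To make the mechanism concrete, here is a minimal sketch in Go (Linux only, and a drastic simplification of what Docker actually does): it starts a shell in new UTS, PID, and mount namespaces, so the shell sees itself as PID 1 inside its own "boundary".

```go
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// Launch /bin/sh with its own UTS, PID, and mount namespaces.
	// Inside, `echo $$` prints 1: the shell believes it is the only
	// process tree on the machine, yet no hypervisor is involved.
	cmd := exec.Command("/bin/sh")
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```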

In addition, the core functions of containers include Cgroups-based resource limits and images. Cgroups cap the resources a process group may use, including CPU, memory, disk I/O, and network bandwidth. Images give container technology its "strong consistency": an image downloaded anywhere has exactly the same content and fully reproduces the original environment of the image's maker. This links up every stage of the "development-test-deployment" pipeline and has made container images the mainstream way to release software.
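
Under the hood, the Cgroups "upper limit" is just files exposed by the kernel. A hedged sketch (assumes a Linux host with cgroup v2 mounted at /sys/fs/cgroup and root privileges; the "demo" group name is hypothetical, while memory.max, cpu.max, and cgroup.procs are the kernel's cgroup v2 interface):

```go
package main

import (
	"os"
	"path/filepath"
	"strconv"
)

func write(path, value string) {
	if err := os.WriteFile(path, []byte(value), 0644); err != nil {
		panic(err)
	}
}

func main() {
	cg := "/sys/fs/cgroup/demo" // hypothetical cgroup for this demo
	if err := os.MkdirAll(cg, 0755); err != nil {
		panic(err)
	}
	write(filepath.Join(cg, "memory.max"), "104857600") // 100 MiB memory cap
	write(filepath.Join(cg, "cpu.max"), "50000 100000") // 50ms of CPU per 100ms period
	// Move this process into the group; from now on the limits apply to it
	// and to every child it spawns.
	write(filepath.Join(cg, "cgroup.procs"), strconv.Itoa(os.Getpid()))
}
```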

2. Kubernetes

When container images became the industry standard for application distribution, "container orchestration", the technology that defines how containers are organized and managed, became the key value node in the container technology stack. The main orchestration contenders included Docker's Compose+Swarm combination, Mesosphere's Mesos+Marathon combination, the Kubernetes project jointly led by Google and Red Hat, and the OpenShift and Rancher projects built on Kubernetes. In the end Kubernetes, with its excellent openness, extensibility, and active developer community, stood out from the container-orchestration wars and became the de facto standard for distributed resource scheduling and automated operations.

The main design idea of the Kubernetes project is to define, from a more macro perspective and in a unified way, the various relationships between tasks, while leaving room to support more kinds of relationships in the future. Functionally, Kubernetes excels at automatically handling the relationships between containers according to the user's intent and the rules of the overall system, that is, container orchestration, which includes deployment, scheduling, and scaling across clusters of nodes. Projects such as Mesos and Swarm are instead good at placing a container on the best node according to certain rules, that is, container scheduling. This difference is an important reason why Kubernetes ultimately prevailed.

Kubernetes core capabilities:

  • Service discovery and load balancing: exposes applications through Service resources and, combined with DNS and multiple load-balancing mechanisms, lets containerized applications talk to one another.
  • Storage orchestration: supports many kinds of storage through plugins, such as local disks, NFS, Ceph, and public-cloud block storage.
  • Resource scheduling: schedules Pods according to their resource requests and limits, supports automated rollout and rollback of applications, and manages the related configuration.
  • Automatic repair: monitors every host in the cluster, automatically detects and handles anomalies, replaces Pods that need restarting, and keeps the container cluster in the state the user desires.
  • Secret and configuration management: stores sensitive information in Secrets and application configuration in ConfigMaps, avoiding baking configuration files into images and making orchestration more flexible.
  • Horizontal scaling: elastic scaling based on CPU utilization or platform-level metrics, such as automatically adding and removing instances (a minimal client-go sketch follows this list).
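
As one concrete taste of these capabilities, here is a hedged sketch using the official Go client, client-go (names such as demo-nginx and the namespace default are illustrative; it assumes a reachable cluster and a kubeconfig in the default location). It declares a two-replica Deployment with resource requests and submits it; the scheduler and controllers then reconcile toward that desired state.

```go
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// Assumes a reachable cluster and a kubeconfig at the default path.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	labels := map[string]string{"app": "demo-nginx"}
	deployment := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-nginx"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(2), // desired state; controllers reconcile toward it
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "nginx:1.25",
						// Requests feed the scheduler's resource-aware placement.
						Resources: corev1.ResourceRequirements{
							Requests: corev1.ResourceList{
								corev1.ResourceCPU:    resource.MustParse("100m"),
								corev1.ResourceMemory: resource.MustParse("128Mi"),
							},
						},
					}},
				},
			},
		},
	}

	_, err = clientset.AppsV1().Deployments("default").
		Create(context.TODO(), deployment, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
}
```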

A Kubernetes cluster consists of a control node, the Master, and compute nodes, the Nodes. The Master, as the control and management node, is composed of three closely coordinated independent components: kube-apiserver is responsible for API services, kube-scheduler for resource scheduling, and kube-controller-manager for container orchestration. The cluster's persistent data, such as Pod and Service objects, is processed by kube-apiserver and stored in etcd. The compute node Node carries the project's workloads, with the kubelet as its core component: it is responsible for creating, starting, and stopping the containers of each Pod, while working closely with the Master to implement the basic functions of cluster management.

Today, the Kubernetes project is not only the de facto standard for container technology, but also the cornerstone of the entire cloud-native system, redefining what is possible for application orchestration and management in the infrastructure field. In the cloud-native ecosystem, Kubernetes links the layers above and below it. Upward, it exposes formatted data abstractions of infrastructure capabilities, such as Service, Ingress, Pod, and Deployment, all offered to users by Kubernetes's native APIs. Downward, it provides standard interfaces for plugging infrastructure capabilities in, such as CNI, CSI, Device Plugin, and CRD, so that a cloud can act as a capability provider and attach its capabilities to the Kubernetes system in a standardized way. With the development of microservices, DevOps, and related ideas, open application platforms built on Kubernetes's extensibility will replace PaaS as the mainstream, the value of the cloud will return to the application itself, and more and more open-source projects will be developed, deployed, and operated according to cloud-native ideas, finally evolving directly into cloud services.

3. Microservices

Microservices are a product of the evolution of service architectures. After the monolithic architecture, the vertical architecture, and the service-oriented architecture (SOA), the microservice architecture (MSA) can be regarded as a distributed implementation of SOA. As the business develops and requirements keep growing, a monolithic application's functions become more and more complex, and iteration efficiency drops significantly under centralized R&D, testing, release, and communication.

The microservice architecture essentially trades higher operational complexity for better agility. Its strength lies in services that are small, governable, and decentralized, but it also causes a surge in the demands placed on the infrastructure and in its cost and complexity.

There is still no unified standard definition of microservices, but combining Martin Fowler's description: the microservice architecture is an architectural pattern or style that develops a single application as a suite of small services, each running in its own process and communicating through lightweight mechanisms such as HTTP APIs. These services are built around specific business capabilities, are independently deployable through fully automated machinery, can be written in different programming languages and use different data-storage technologies, and keep centralized management to a bare minimum.
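
The "small service in its own process with a lightweight HTTP API" idea fits in a few lines of Go. A minimal sketch (the port, path, and field names are illustrative, not from any particular system):

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// Order is this tiny service's entire data model.
type Order struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func main() {
	// One business capability, one process, one lightweight HTTP endpoint.
	http.HandleFunc("/orders/42", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(Order{ID: "42", Status: "shipped"})
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Run it and `curl localhost:8080/orders/42` returns the JSON document; another team's service, in another language, can consume it with nothing but an HTTP client.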

Dubbo and Spring Cloud are converging, and more of their functions will sink into the infrastructure.

  • Spring Cloud

Spring Cloud is the leader of the first generation of microservice frameworks. It provides a one-stop solution for implementing the microservice architecture: a whole-family technology stack that gives developers tools to quickly build common patterns of distributed systems, including configuration management, service discovery, circuit breakers, intelligent routing, micro-proxies, a control bus, one-time tokens, global locks, leader election, distributed sessions, and cluster state.

  • Dubbo

Dubbo, a distributed service framework open-sourced by Alibaba, is committed to providing a high-performance, transparent RPC remote-invocation solution and SOA service governance. Its core parts include remote communication, cluster fault tolerance, and automatic service discovery.

In recent years the Dubbo ecosystem has continued to improve. In May 2019, Dubbo-go officially joined the Dubbo ecosystem and subsequently gained REST and gRPC support, connecting the Spring Cloud and gRPC ecosystems and effectively solving interoperability between Go projects and Java/Dubbo projects. Today, thanks to Spring Cloud Alibaba, Dubbo integrates seamlessly with the peripheral products of the Spring Cloud ecosystem.

Whether Dubbo or Spring Cloud, both are more or less tied to specific application scenarios and development environments, lack generality and multi-language support, and solve only the Dev-side problems of microservices without an end-to-end DevOps solution. All of this created the conditions for the rise of Service Mesh.

As complete solutions for microservice governance and communication, Dubbo and Spring Cloud will coexist and converge for a long time, but some of the functions they provide will gradually be replaced by the infrastructure:

  • For example, for microservices deployed on a Kubernetes cluster, using Kubernetes's built-in service registration and discovery is simpler;
  • Likewise, under the Istio architecture, functions such as traffic management and circuit breaking move into the Envoy proxy, and more and more functions will be stripped out of the application and sunk into the infrastructure (a small discovery sketch follows).
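
A hedged sketch of what "Kubernetes does the discovery" means in practice: inside a cluster, a Service resolves through cluster DNS, so the client needs no registry SDK at all. The service and namespace names here ("orders", "shop") are hypothetical; the DNS suffix is the Kubernetes convention.

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Only resolvable from inside a cluster whose DNS serves this Service.
	addrs, err := net.LookupHost("orders.shop.svc.cluster.local")
	if err != nil {
		panic(err)
	}
	fmt.Println("service IPs:", addrs) // typically the Service's ClusterIP
}
```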

4. Service Mesh

In the complex service topology of a cloud-native application, the Service Mesh is the infrastructure layer responsible for delivering requests reliably through that topology. It inserts a Sidecar into the request path and sinks the complex functions originally performed by the client into the Sidecar, simplifying the client and transferring control over inter-service communication. When a system contains a large number of services, the invocation relationships between them form a mesh, which is where the name "service mesh" comes from.

We can summarize the definition of Service Mesh through the following characteristics:

  • Abstraction: the Service Mesh strips communication out of the application into a separate communication layer, which sinks into the infrastructure.
  • Function: the Service Mesh is responsible for delivering requests reliably; functionally it is no different from the traditional library approach.
  • Deployment: the Service Mesh appears as a lightweight network proxy, deployed one-to-one with the application in Sidecar mode; the two talk to each other over localhost (see the sketch after this list).
  • Transparency: the Service Mesh is implemented completely independently of the application; it can be deployed, upgraded, extended, and patched on its own, and the application need not know its implementation details, i.e., it is transparent to the application.
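
A hedged sketch of the Sidecar pattern from the application's point of view: all outbound traffic flows through a proxy on localhost. The port 15001 is illustrative (Istio's Envoy commonly listens there, though in Istio the redirect is done transparently by iptables rather than by configuring the client as done here).

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	// Route every request of this client through the local sidecar proxy.
	proxyURL, _ := url.Parse("http://127.0.0.1:15001")
	client := &http.Client{
		Transport: &http.Transport{Proxy: http.ProxyURL(proxyURL)},
	}
	// The app still addresses the logical service; routing, retries, mTLS,
	// and circuit breaking happen inside the sidecar, not in this code.
	resp, err := client.Get("http://orders.shop.svc.cluster.local/orders/42")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```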

The core value of Service Mesh lies not only in its functions and characteristics but in separating business logic from non-business logic. The non-business logic is stripped out of the client SDK and runs as an independent Proxy process, sinking the capabilities that used to live in SDKs down to the container-, Kubernetes-, or VM-based infrastructure. This enables cloud hosting and lightweight applications, helping applications become cloud native.

The mainstream open-source Service Mesh projects include Linkerd, Envoy, and Istio. Linkerd and Envoy both directly embody the core concept of Service Mesh and are similar in function: service discovery, request routing, load balancing, and so on, solving inter-service communication so that the application is unaware of it. Istio takes a higher vantage point and divides the Service Mesh into a Data Plane and a Control Plane: the Data Plane handles all network communication between microservices, while the Control Plane manages the Data Plane's proxies. Istio supports Kubernetes natively, bridging the gap between the application-scheduling framework and the Service Mesh.

Landing microservices requires a complete set of infrastructure. When containers become the smallest unit of work for microservices, Kubernetes, as a general-purpose container management platform, can bring out the greatest advantages of the microservice architecture and become a new generation of cloud-computing operating system. Kubernetes supports both cloud-native and traditional containerized applications and covers both the Dev and the Ops stages; combined with a Service Mesh, it can give users a complete end-to-end microservice experience.

5. Serverless

Serverless generalizes the application scenarios of Service Mesh: no longer limited to synchronous communication between services, it extends to any scenario with network access that was once realized through a client SDK, including compute, storage, database, and middleware services. For example, in Ant Financial's Serverless practice, the Mesh pattern extends to scenarios such as Database Mesh (database access), Message Mesh (messaging), and Cache Mesh (caching).

At present, Serverless is usually regarded as the union of FaaS (Function as a Service) and BaaS (Backend as a Service), but Serverless defines a user experience, not a specific technology: FaaS and BaaS are merely ways to realize it. As Serverless technology matures, more and more applications that use Kubernetes services will be transformed into Serverless applications.
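
To show how little code the FaaS slice of that experience demands, here is a minimal sketch using the AWS Lambda Go runtime, one concrete FaaS among many (the event shape is a hypothetical JSON payload; lambda.Start is the library's documented entry point):

```go
package main

import (
	"context"
	"fmt"

	"github.com/aws/aws-lambda-go/lambda"
)

// Event is the hypothetical JSON payload this function receives.
type Event struct {
	Name string `json:"name"`
}

// handler contains business logic only: servers, scaling, and billing
// are the platform's job, which is the Serverless user experience.
func handler(ctx context.Context, e Event) (string, error) {
	return fmt.Sprintf("Hello, %s!", e.Name), nil
}

func main() {
	lambda.Start(handler)
}
```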

6. Cloud Native Middleware

Traditional middleware is like the water pipes of a city: it moves and manages data flowing from one application to another, is highly coupled to the business, and brings no direct value to users. In the cloud era, the heterogeneity of software and the demand for interconnection have increased markedly, and middleware has been given a new definition: independent, loosely coupled, modular components that form the key building blocks of a distributed application-development architecture with high availability, high scalability, and eventual consistency.

Functionally, middleware is the class of software that connects software components and applications: a set of services that lets multiple programs running on one or more machines interact across a network, a reusable category of software. Cloud-native middleware includes API gateways, application servers, transaction processing (TP) monitors, RPC frameworks, and message-oriented middleware (MOM), and can also take on data-integration and application-integration roles; anything between the kernel and the user application can be understood as middleware.

With the rapid development of IoT and cloud computing, EDA (Event-Driven Architecture) is being adopted by more and more enterprises. By abstracting and asynchronizing events, it decouples business logic and accelerates business iteration. It is also shifting from supporting vertical industries toward serving as a general architecture for business-critical applications, used in packaged applications, development tools, and business process management and monitoring.

EDA is usually implemented with message middleware, which aims at platform-independent data exchange through an efficient, reliable message-passing mechanism. By providing message-delivery and message-queuing models, it extends inter-process communication to distributed environments and integrates distributed systems around data communication. Common message middleware includes ActiveMQ, RabbitMQ, RocketMQ, and Kafka, applied in scenarios such as cross-system data transfer, peak clipping for high-concurrency traffic, and asynchronous data processing.
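
A toy, in-process sketch of the message-queue model behind those scenarios: a bounded queue decouples a bursty producer from a slower consumer (peak clipping) and processes events asynchronously. A real system would use a broker such as Kafka or RocketMQ; this only illustrates the shape of the pattern.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	queue := make(chan string, 8) // bounded buffer absorbs bursts
	var wg sync.WaitGroup

	wg.Add(1)
	go func() { // consumer: drains events at its own pace
		defer wg.Done()
		for event := range queue {
			time.Sleep(50 * time.Millisecond) // simulate slow processing
			fmt.Println("handled:", event)
		}
	}()

	for i := 0; i < 20; i++ { // producer: a burst of 20 events
		queue <- fmt.Sprintf("order-%d", i)
	}
	close(queue)
	wg.Wait()
}
```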

In the cloud-computing era, cloud vendors provide packages closer to the business and mostly run event loads on their own Serverless services. Middleware capabilities such as event handling are thus easily obtained as cloud services, including Alibaba Cloud Function Compute, Azure Functions, and AWS Lambda.

In the future, application middleware will no longer be the provider of a capability but the standard interface for accessing capabilities. This interface will be built on the HTTP and gRPC protocols, with a Sidecar decoupling the access layer from the application's business logic, consistent with the idea of Service Mesh. Furthermore, the Sidecar model can be applied to all middleware scenarios, thereby "sinking" middleware capabilities into a part of Kubernetes's capabilities.
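
A hedged sketch of "middleware as a standard interface": the application publishes an event by calling a local sidecar over plain HTTP, in the style of Dapr's pub/sub building block (port 3500 and the /v1.0/publish path follow Dapr's conventions but should be treated as illustrative; the component name "pubsub" and topic "orders" are hypothetical).

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	body := bytes.NewBufferString(`{"orderId": "42"}`)
	resp, err := http.Post(
		"http://localhost:3500/v1.0/publish/pubsub/orders",
		"application/json", body)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	// The sidecar decides whether "pubsub" is backed by Kafka, RocketMQ,
	// or Redis; swapping the broker requires no change to this code.
	fmt.Println("publish status:", resp.Status)
}
```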

7. DevOps

With the continuous improvement of the cloud-native open-source ecosystem and the continuous sinking of complex functions into the cloud, the basic model of software deployment and operations has been broadly unified. Before DevOps, practitioners used the waterfall or agile models for software projects. DevOps, a portmanteau of Development and Operations, is defined as a set of practices for automating the processes between software development and IT operations teams. Built on a culture of collaboration between teams, these practices close the information gap between the development side and the operations side in order to build, test, and release software faster and more reliably, and DevOps has become the mainstream model of software development and delivery.

Overall, DevOps covers three parts: development, testing, and operations. Concretely, it consists of multiple stages: continuous development, continuous integration, continuous testing, continuous feedback, continuous monitoring, continuous deployment, and continuous operations, collectively called the DevOps life cycle.

The division and integration of DevOps functions is most visible at the level of information flow: across the development, delivery, testing, feedback, and release stages, the various producers and consumers of information rely on high-quality tools and systems to pass information smoothly and accurately and to carry out mechanical operations efficiently.

Seen through this lens, the idea of DevOps arose because the infrastructure layer was neither strong nor standardized enough, so the business side needed a set of tools to glue together developers, operators, and the corresponding infrastructure. As Kubernetes and the infrastructure grow ever more capable, the cloud-native ecosystem will build the corresponding abstractions and layers, with the role at each layer interacting only with its own data abstraction; that is, the concerns of the development side and the operations side separate. The ever-generalizing Serverless will also become an ideological orientation and a part of DevOps. On the capability side, "lightweight operations", "NoOps", and "self-service operations" will become the mainstream ways to operate applications; on the application side, application descriptions will be widely abstracted toward the user side, and event-driven and Serverless concepts will be split out and generalized to diverse scenarios beyond FaaS.
