HCIP Study Notes-Cloud Native Technology-7

The Concept and Background of Cloud Native

1.1 Background - "Three Stages and Two Transformations" of Enterprise IT Digital Transformation

image.png

  • Server stage: This stage is centered on hardware devices. Business applications are customized to the differences between vendors' equipment, operating systems, and virtualization software; device installation, debugging, application deployment, and O&M are largely manual, the degree of automation is low, and unified device and application management capabilities are lacking. Later, with the emergence of virtualization software, resource utilization and scaling flexibility improved to some extent, but the fundamental problems of infrastructure/software separation and complex O&M were not solved.
  • Cloudization stage: The distributed, discrete devices of the traditional model are unified so that compute, storage, and network resources can be pooled. A unified virtualization software platform provides a common resource management interface for upper-layer business software, automating resource management, shielding some infrastructure differences, and improving application portability. However, virtualization platforms differ greatly, especially where vendors add proprietary commercial enhancements, so capabilities cannot be shared across vendors; applications still cannot be built in a fully standardized way, and application deployment remains resource-centric.
  • Cloud-native stage: Enterprises shift their focus from resources to applications, including agile application delivery, rapid elasticity, smooth migration, lossless disaster recovery, and so on. They therefore begin to consider how to integrate the infrastructure with the business platform to provide a standard operation, monitoring, and governance platform for business applications, and to sink common business capabilities into the platform to better automate applications.

1.2 What is cloud native

image.png

  • In 2015 the Cloud Native Computing Foundation (CNCF) was established, marking the transformation of cloud native from a technical concept into an open source implementation. The CNCF was jointly founded by Google, Huawei, and other companies on July 21, 2015. HUAWEI CLOUD is the only Asian founding member of the CNCF and the only platinum member in China.
  • The Cloud Native Computing Foundation (CNCF) is committed to cultivating and maintaining a vendor-neutral open source ecosystem to promote cloud native technologies. We make these innovations accessible to the masses by democratizing the most cutting-edge models.
  • CNCF is committed to promoting the widespread adoption of cloud-native technologies by cultivating and maintaining an ecosystem of open-source, vendor-neutral projects, realizing the vision of making cloud native ubiquitous. CNCF's definition makes the concept of cloud native more concrete and easier for different industries to understand, laying the foundation for its broad adoption across the whole industry. In the past few years, key cloud-native technologies have been widely adopted: according to CNCF survey reports, more than 80% of users already use or plan to use a microservice architecture for business development and deployment. Users' awareness and use of cloud-native technologies have reached a new stage, and the technology ecosystem is changing rapidly.

1.3 Cloud Native Development

image.png

  • Starting from the basic container engine, cloud-native open source projects have continually expanded their application fields and deepened their ability to adapt to scenarios such as edge and heterogeneous computing. From the early open source container engine Docker; to Kubernetes, Swarm, and Mesos for efficient container orchestration; to Istio, based on Service Mesh technology, for better microservice governance; to KubeEdge and K3s (a lightweight Kubernetes distribution) for edge scenarios and Volcano for high-performance heterogeneous computing scenarios, these projects have all become boosters that accelerate the integration of cloud native with industry and drive innovation across sectors.
  • In 2020, based on in-depth analysis and research on cloud native and on the status and trends of the domestic cloud-native industry, the China Academy of Information and Communications Technology compiled the "Cloud Native Development White Paper (2020)". In the same year, HUAWEI CLOUD proposed the concept of Cloud Native 2.0 to the industry for the first time, hoping that through Cloud Native 2.0 every enterprise can become a "new cloud-native enterprise".

1.4 Cloud Native 2.0

image.png

  • In the initial stage of digital transformation, enterprises mainly move business from offline onto the cloud; at this stage they simply deploy and run the business on the cloud, which can be called ON CLOUD. In this form, resource pooling on the cloud solves the O&M, deployment, and scaling problems of the IDC era, but a series of application-level problems caused by traditional monolithic and siloed architectures remain unresolved. The value of the cloud to the business therefore stays mainly at the resource-supply stage, and the full value of the cloud cannot be realized.
  • In the cloud native 1.0 period, cloud native technology was concentrated on the infrastructure layer, with a single architecture and a resource-centered approach. The supported application ecology was relatively simple, and it was mainly applied in the Internet industry.
  • As enterprise digital transformation deepens, enterprises need to fully enjoy the dividends of cloud computing: business capabilities need to be born in the cloud and grow in the cloud, moving from today's ON CLOUD to IN CLOUD, while the new cloud-based capabilities and existing capabilities are organically coordinated, building the new without breaking the old. "Born in the cloud" means building enterprise applications on cloud-native technologies, architectures, and services; "growing in the cloud" means making full use of the advantages of the cloud to drive application and business development, bringing enterprise digitalization and intelligent business upgrades to a new stage. We call this the Cloud Native 2.0 era.
  • In Cloud Native 2.0, enterprise cloudification moves from "ON Cloud" to "IN Cloud": born in the cloud, grown in the cloud, building the new without breaking the old. The enterprise's new capabilities are built on a cloud-native foundation so that they are born in the cloud, and the whole life cycle of applications, data, and AI is completed on the cloud so that they grow in the cloud; at the same time, existing capabilities are inherited without being broken and are organically coordinated with the new capabilities, upgrading the enterprise to a new stage of intelligence and empowering "new cloud-native enterprises".
  • Cloud Native 2.0 is the enterprise's new stage of intelligent upgrade: enterprise cloudification moves from "ON Cloud" to "IN Cloud" and the enterprise becomes a "new cloud-native enterprise". New capabilities and existing capabilities coexist without conflict and coordinate organically, achieving resource efficiency, application agility, business intelligence, and security and trustworthiness.

1.5 Advantages of Cloud Native 2.0

image.png

  • In the cloud native 2.0 era:
    • Cloud-native technology needs to shift from resource-centric to application-centric. Cloud-native infrastructure must be able to perceive application characteristics, and applications can use cloud-native infrastructure more intelligently and efficiently.
    • Based on the cloud-native multi-cloud architecture, it supports the distributed trend of cloud-native applications, supports device-edge-cloud collaboration and multi-cloud collaboration, and supports complex full-scenario applications for government and enterprises.
    • Cloud Native 2.0 is an open system in which new capabilities and existing capabilities are organically coordinated, building the new without breaking the old.
    • Cloud native 2.0 is a full-stack capability, which extends to full-stack technologies such as applications, big data, databases, and AI.

1.6 Ten Architecture Models of Cloud Native 2.0

image.png

1.7 HUAWEI CLOUD Cloud Native 2.0 Architecture Panorama

image.png

  • Cloud-native hardware layer: Pursuing the ultimate cost-effectiveness of cloud computing power, this layer introduces cloud-infrastructure offload PCI cards (SDI / QingTian offload), self-developed general-purpose CPUs (Kunpeng), and heterogeneous NPUs (Ascend). Through hardware offloading and deep software-hardware collaboration for homogeneous and heterogeneous computing, it builds the most cost-effective computing platform for both container and virtual machine runtimes.
  • Cloud-native OS: In addition to standard operating system functions, the responsibility of this layer in the cloud context is to split physical server resources into multiple virtual machines and multiple containers (dividing the large into the small), to support resource scheduling and elastic computing, and to minimize the performance overhead of storage and network virtualization through hardware pass-through.
  • Cloud-native elastic resource layer: The responsibility of this layer is the opposite, aggregating the small into the large. It covers cloud-native computing, especially K8s container clusters and their extensions; the Yaoguang intelligent scheduling system, which performs global scheduling across cloud, edge, and regions; cloud-native network virtualization functions; and cloud-native distributed storage together with advanced storage capabilities such as disaster recovery and high reliability.
  • Cloud-native application and data enablement layer: covers cloud-native distributed middleware, blockchain, cloud-native edge, cloud security enablement, cloud-native databases, cloud-native big data, the ModelArts inclusive AI development platform, cloud-native video, and cloud-native IoT.
  • Cloud-native application life cycle: includes the DevSecOps pipeline, cloud-native service governance and orchestration, a CMDB for tenants to deploy their own business, monitoring and O&M services, and so on.
  • Cloud-native multi-tenancy framework: multi-tenant authentication and rights management for cloud services (identity authentication for accessing cloud services and cloud resources, and access rights management for cloud service object instances), cloud-native operations and billing, cloud service OpenAPI capabilities (the programmatic entry point for consuming cloud services), and the cloud-native Console (the graphical entry point for consuming cloud services).

Introduction to Open Source Container Technology

2.1 What is container technology

image.png

  • Docker was the first system to make containers portable between different machines. It not only simplifies packaging the application itself, but also packages the application's libraries and dependencies; even the file system of an entire operating system can be packaged into a simple portable package that can be used on any other machine running Docker. In the industry, "container technology" generally refers to the container engine technology represented by Docker plus container orchestration technology based on K8s. Many other container technologies follow or are compatible with the OCI standards, such as Kata secure containers.
  • Compared with using virtual machines, containers have the following advantages:
    • More efficient use of system resources: Because containers do not need hardware virtualization or a complete guest operating system, they use system resources more efficiently, in terms of application execution speed, memory overhead, and so on.
    • Faster startup time: Traditional virtual machines often take minutes to start an application service, while Docker containers run directly on the host kernel and do not need to boot a complete operating system, so they can start in seconds or even milliseconds.
    • Consistent runtime environment: A common problem in the development process is the environment consistency problem. Due to the inconsistency between the development environment, test environment, and production environment, some problems were not discovered during the development process. The Docker image provides a complete runtime environment except the kernel, ensuring the consistency of the application runtime environment
    • Easier migration: Since Docker ensures the consistency of the execution environment, it makes the migration of applications easier. And it can run on many platforms, whether it is a physical machine or a virtual machine, the running results are consistent
    • Easier maintenance and extension: Docker's layered storage and image technology make it easy to reuse the common parts of applications and make maintenance and updates simpler. In addition, the Docker team and various open source project teams maintain a large number of high-quality official images, which can be used directly in production or as a basis for further customization, greatly reducing the cost of building images for application services.

2.2 Introduction of key technologies

image.png

  • The key technologies used by Docker were not invented by Docker; they are mature Linux technologies that Docker integrated into a revolutionary result.
  • Namespaces are responsible for isolating the running environment: each container is an independent set of processes, invisible to other containers, isolated through namespace technology (process, network, and file system isolation).
  • Cgroups are responsible for isolating or dedicating running resources: they specify the amount of resources each container may use, so that containers do not encroach on each other.
  • Union file systems provide a standardized, miniaturized packaging of the application's runtime. A container image is the basis for running a container, but an image is not the same as a container: an image is a series of layered, read-only files managed by a storage driver, and when the image is run as a container, a writable layer (the container layer) is added on top of the image. All modifications made to the running container, such as writing new files or modifying existing files, are applied only to this container layer.
  • Because containers are a built-in capability of the operating system kernel, no hypervisor or additional kernel needs to run underneath them; isolation is essentially at the process level, so containers are more lightweight and their deployment, O&M, and performance overheads are smaller. At the same time, because the application is packaged together with its running environment, containers are highly portable and standardized, providing a good foundation for large-scale elastic scaling and application management.
  • Dockerd is a system process resident in the background in the docker architecture, called docker daemon
  • Containerd is an intermediate communication component between dockerd and runc. Docker's management and operation of containers are basically completed through containerd.
  • Containerd-shim is a carrier that actually runs containers. Every time a container is started, a new containerd-shim process will be started.
  • RunC is a command-line tool for running applications in the OCI standard format

2.3 Introduction to Kata Containers

image.png

  • Kata Containers is an open source container project initiated by Intel, Huawei, Red Hat, and other companies. It can run container management tools directly on bare metal and achieves strong security isolation of workloads, combining the security advantages of virtual machines with the speed and manageability of containers.

2.4 Typical use process of Docker container

image.png
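The typical use process (pull an image, run a container from it, check its state, then stop and remove it) can be sketched with the Docker SDK for Python. This is a minimal illustration rather than the course's own example; the nginx image tag and the host-port-8080-to-container-port-80 mapping are arbitrary assumptions.

```python
# Minimal sketch of the typical Docker usage flow via the Docker SDK for Python.
# Assumes the Docker daemon is running locally and the "docker" package is installed.
import docker

client = docker.from_env()                      # connect to the local dockerd

image = client.images.pull("nginx:1.25")        # 1. pull an image from the registry
print("pulled:", image.tags)

container = client.containers.run(              # 2. create and start a container
    "nginx:1.25",
    detach=True,
    name="demo-web",
    ports={"80/tcp": 8080},                     # host port 8080 -> container port 80
)

container.reload()                              # 3. inspect its runtime state
print("status:", container.status)

container.stop()                                # 4. stop and remove it
container.remove()
```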

2.5 Kubernetes

image.png

  • The word Kubernetes comes from Greek and means helmsman or pilot. Because there are 8 letters between the "K" and the "s", it is also abbreviated to K8s. Kubernetes is Google's contribution to the open source community: an open source product derived from its internal container platform Borg with Google-specific business attributes removed. Kubernetes is the recognized de facto standard for container orchestration, and almost all public cloud vendors' container services are implemented on Kubernetes.
  • The standard K8s architecture is cluster-based; a cluster is a complete K8s deployment. In most enterprises a management plane is encapsulated on top of the clusters for cluster-level management.
  • For application developers, Kubernetes can be regarded as a cluster operating system. It provides service discovery, scaling, load balancing, self-healing, and even leader election, freeing developers from infrastructure-related configuration.
  • Based on these technical characteristics, it has the following advantages: automatic deployment, restart, migration, and scaling driven by the declared application state; a plug-in mechanism that makes K8s compatible with various infrastructures (public cloud, private cloud); and a flexible isolation mechanism that can quickly build running environments for different teams.
  • A cluster has one or more master control nodes (Masters) responsible for managing the entire container cluster; in etcd high-availability scenarios at least 3 Masters are generally used. The cluster also has many worker nodes (Nodes) responsible for running container applications; the kubelet is installed on each Node as the agent through which the Master manages that Node.

2.5.1 Introduction to Kubernetes cluster architecture

image.png

  • Master node:
    • API Server: the hub through which components communicate with one another; it accepts external requests and writes state into etcd.
    • Controller Manager: performs cluster-level functions such as replica management, tracking Nodes, handling node failures, and more.
    • Scheduler: The component responsible for application scheduling, which schedules containers to run on Node according to various conditions (such as available resources, node affinity, etc.)
    • Etcd: A distributed data storage component used to save all network configurations and state information of objects in the cluster.
  • Node node:
    • Kubelet: mainly responsible for dealing with the container runtime and interacting with the API Server to manage the containers on its node. Through cAdvisor it monitors the node's resources and containers in real time and collects performance data.
    • Kube-proxy: the access proxy between application components, solving how applications on the nodes are accessed by implementing Service forwarding.
    • Container Runtime: the software responsible for running containers, such as Docker, containerd, CRI-O, or any other implementation of the Kubernetes CRI.
  • Kubectl is a command-line tool for Kubernetes clusters. You can install kubectl on any machine and operate Kubernetes clusters through kubectl commands.
  • When using K8s, the user declares the desired application services and other resource objects through the API Server on the Master. The Master's controllers and scheduler then create them on the Nodes according to the user's definitions and continuously monitor them, ensuring that their state always matches what the user declared. Container applications on the Nodes are exposed for access through kube-proxy (see the sketch below).
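As a concrete illustration of this declarative, API-Server-centred flow, the sketch below uses the official Kubernetes Python client to authenticate with a kubeconfig and list the cluster's Nodes and Pods. It is a minimal sketch assuming a reachable cluster, not part of the original notes.

```python
# Minimal sketch: talking to the Kubernetes API Server with the official Python client.
# Assumes a reachable cluster and a kubeconfig file (e.g. ~/.kube/config).
from kubernetes import client, config

config.load_kube_config()          # kubectl-style authentication against the API Server
v1 = client.CoreV1Api()

# The API Server is the single entry point: everything below is a REST call to it,
# backed by state stored in etcd.
for node in v1.list_node().items:
    print("node:", node.metadata.name)

for pod in v1.list_pod_for_all_namespaces().items:
    print("pod:", pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```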

2.5.2 Resource Management - Pod

image.png

  • The smallest unit of Kubernetes orchestration is not the container but the Pod. Pod means "pea pod": a pod can hold many peas, and each pea is a container instance.
  • In many cases containers host microservices (simply put, a microservice is a small, single-purpose service). Microservice design generally recommends one process per application; with containers as the carrier, that means one process per container. In reality, though, we often need to install related monitoring or data-handling software alongside the microservice, which would mean multiple pieces of software, and therefore multiple processes, in one container, breaking the one-container-one-process principle. The Pod concept was designed to satisfy the microservice design principles: a Pod generally contains one service container (providing the service) and several auxiliary containers (handling monitoring or data management for the service container). For example, a Pod might contain three containers: a web container, a monitoring container, and a log-reading container. Only the web software runs in the web container, exposing port 80. The monitoring software in the monitoring container only needs to watch 127.0.0.1:80 to monitor the web service, because containers within a Pod share the same IP address. The log-reading container only needs to read files under the relevant path and report them to the log management platform, because containers within a Pod can share data storage (a minimal sketch of such a Pod follows this list).
  • The Container Runtime Interface (CRI) defines the service interfaces for containers and images; because the life cycles of the container runtime and of images are independent of each other, two services need to be defined. CRI is the main protocol for communication between the kubelet and the container runtime.
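The web-plus-sidecar Pod described above can be written down with the official Kubernetes Python client as follows. This is a minimal sketch: the image names, the shared log path, and the emptyDir volume are illustrative assumptions.

```python
# Minimal sketch of the web-plus-sidecar Pod described above, using the official
# Kubernetes Python client. Image names, the shared log path, and the emptyDir
# volume are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()

log_mount = client.V1VolumeMount(name="logs", mount_path="/var/log/nginx")

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="web-with-sidecar", labels={"app": "web"}),
    spec=client.V1PodSpec(
        # All containers in the Pod share one IP address and can share volumes.
        volumes=[client.V1Volume(name="logs",
                                 empty_dir=client.V1EmptyDirVolumeSource())],
        containers=[
            # Service container: provides the web service on port 80.
            client.V1Container(
                name="web",
                image="nginx:1.25",
                ports=[client.V1ContainerPort(container_port=80)],
                volume_mounts=[log_mount],
            ),
            # Auxiliary container: reads the shared log files; it could also reach the
            # web container at 127.0.0.1:80 because the network namespace is shared.
            client.V1Container(
                name="log-reader",
                image="busybox:1.36",
                command=["sh", "-c", "tail -F /var/log/nginx/access.log"],
                volume_mounts=[log_mount],
            ),
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```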

2.5.3 Resource detection

image.png

  • Probe Type:
    • Liveness probe: indicates whether the container is running. If the liveness probe fails, the kubelet kills the container, and the container is handled according to its restart policy.
    • Readiness probe: indicates whether the container is ready to serve requests. If the readiness probe fails, the endpoint controller removes the Pod's IP address from the endpoints of all Services that match the Pod.
    • Startup probe: indicates whether the application inside the container has started. If a startup probe is provided, all other probes are disabled until it succeeds. If the startup probe fails, the kubelet kills the container, and the container is restarted according to its restart policy (a probe configuration sketch follows this list).
  • Some typical uses of a DaemonSet:
    • run a cluster daemon on each node;
    • run a log collection daemon on each node;
    • run a monitoring daemon on each node.
  • Kubernetes supports affinity and anti-affinity at both the node and Pod levels. By configuring affinity and anti-affinity rules, users can specify hard constraints or preferences, such as deploying frontend and backend Pods together, deploying certain types of applications to specific nodes, or deploying different applications to different nodes.
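A minimal sketch of how the three probe types are attached to a container, using the official Kubernetes Python client; the /healthz and /ready endpoints and the timing values are illustrative assumptions.

```python
# Minimal sketch of liveness, readiness, and startup probes on a container. This
# container spec would normally sit inside a Pod or Deployment template.
from kubernetes import client

probe_demo = client.V1Container(
    name="web",
    image="nginx:1.25",
    ports=[client.V1ContainerPort(container_port=80)],
    # Liveness: if this fails, the kubelet kills the container (restart policy applies).
    liveness_probe=client.V1Probe(
        http_get=client.V1HTTPGetAction(path="/healthz", port=80),
        initial_delay_seconds=10,
        period_seconds=10,
    ),
    # Readiness: if this fails, the Pod's IP is removed from matching Service endpoints.
    readiness_probe=client.V1Probe(
        http_get=client.V1HTTPGetAction(path="/ready", port=80),
        period_seconds=5,
    ),
    # Startup: while this has not yet succeeded, the other probes are disabled.
    startup_probe=client.V1Probe(
        http_get=client.V1HTTPGetAction(path="/healthz", port=80),
        failure_threshold=30,
        period_seconds=2,
    ),
)
```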

2.5.4 Resource Scheduling

image.png

  • Kubernetes provides Controllers to manage Pods. A Controller can create and manage multiple Pods, providing replica management, rolling upgrades, and self-healing.
  • Deployment: currently the most commonly used controller. Creating a Deployment automatically creates a ReplicaSet; a Deployment can manage one or more ReplicaSets and manages Pods through them (a minimal Deployment sketch follows this list).
  • The Pods under a Deployment controller share a common characteristic: every Pod is identical except for its name and IP address. When more are needed, the Deployment creates new Pods from the Pod template; when fewer are needed, it can delete any Pod. Usually a Pod contains one container, or a few containers that are very closely related; a ReplicaSet contains one or more identical Pods; and a Deployment contains one or several different ReplicaSets.
  • Kubernetes provides StatefulSet for workloads that need a stable identity. A StatefulSet gives each Pod a fixed name with a fixed suffix from 0 to N, and the Pod name and hostname remain unchanged after the Pod is rescheduled. Through a Headless Service, a StatefulSet provides each Pod with a fixed access domain name (the Service concept is introduced in detail in later chapters). By creating PVCs with fixed identifiers, a StatefulSet ensures that a Pod can still access the same persistent data after being rescheduled.
  • Job and CronJob are responsible for batch processing short-lived one-time tasks, that is, tasks that are executed only once, and it guarantees that one or more Pods of the batch task end successfully.
    • Job: a resource object used by Kubernetes to control batch tasks. The main difference between batch workloads and long-running services (Deployment, StatefulSet) is that a batch workload runs from start to finish, while a long-running service runs until the user stops it. A Pod managed by a Job automatically exits after the task completes successfully, according to the user's settings.
    • CronJob: It is a time-based job, similar to a line in the crontab file of the Linux system, which runs the specified job at a specified time period.
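A minimal Deployment sketch with the official Kubernetes Python client: three identical replicas selected by an app=web label and created through the AppsV1 API. The name, label, and image are illustrative assumptions.

```python
# Minimal sketch of a Deployment with 3 replicas, using the official Kubernetes
# Python client.
from kubernetes import client, config

config.load_kube_config()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web-deploy"),
    spec=client.V1DeploymentSpec(
        replicas=3,                                              # desired Pod count
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(                       # Pod template used by the ReplicaSet
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.25")]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
# Rolling upgrade: change the image in the template and patch the Deployment;
# the controller creates a new ReplicaSet and gradually replaces the old Pods.
```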

2.5.5 Resource Configuration

image.png

  • Secret is similar to ConfigMap in that it stores data as key-value pairs; the difference is that the values of a Secret must be Base64-encoded when the Secret is created (a minimal sketch follows this list).
  • Secret:
    • There is less risk of Secret exposure during the process of creating, viewing, and editing Pods.
    • The system takes extra precautions with Secret objects, such as avoiding writing them to disk where possible.
    • Only the Secret requested by the Pod is visible in its container, and one Pod cannot access the Secret of another Pod.
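A minimal sketch contrasting a ConfigMap with a Secret using the official Kubernetes Python client: the ConfigMap value is stored as plain text, while the Secret's data value is Base64-encoded before it is submitted. Object names and values are illustrative assumptions.

```python
# Minimal sketch: ConfigMap values are plain key-value pairs, while Secret "data"
# values must be Base64-encoded.
import base64
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

configmap = client.V1ConfigMap(
    metadata=client.V1ObjectMeta(name="app-config"),
    data={"LOG_LEVEL": "info"},                                  # stored as plain text
)

secret = client.V1Secret(
    metadata=client.V1ObjectMeta(name="app-secret"),
    type="Opaque",
    data={"password": base64.b64encode(b"s3cret").decode()},    # must be Base64
)

v1.create_namespaced_config_map(namespace="default", body=configmap)
v1.create_namespaced_secret(namespace="default", body=secret)
```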

2.5.6 Kubernetes network

image.png

  • There are many ways to connect bridges between different nodes, which are related to specific implementations. However, the cluster requires Pod addresses to be unique, so cross-node bridges usually use different address segments to prevent Pod IP addresses from being duplicated.
  • Inter-container communication within a Pod: Containers within a Pod share the same network namespace, which is usually provided by infrastructure containers. All containers running in the same Pod are similar to multiple processes on the same host, and can interact with each other through the loopback interface. There is no problem of port conflicts between different Pods, because each Pod has its own IP address.
  • Any other Pod or node in the cluster can communicate directly with a Pod via its IP, without network address translation, tunneling, or proxying. The same IP is used inside and outside the Pod, which also means standard naming and discovery mechanisms such as DNS can be used directly. The communication requirements of this model are exactly what K8s network plug-ins must solve; implementations include overlay network models and routed network models, with more than a dozen popular solutions such as Flannel.
  • Communication between Pods: Pods can communicate directly via IP addresses, provided they know each other's IPs. In a cluster, Pods may be destroyed and created frequently, so Pod IPs are not fixed. To solve this, a Service provides an abstraction layer for accessing Pods: no matter how the backend Pods change, the Service acts as a stable front end. A Service also provides high availability and load balancing, forwarding requests to the correct Pods.
  • Flannel is a network planning service designed by the CoreOS team for Kubernetes. Simply put, its function is to allow containers created by different node hosts in the cluster to have unique virtual IP addresses for the entire cluster.
  • Compared with the simplicity of Flannel, Calico is famous for its performance and flexibility. Calico's functions are more comprehensive, not only providing network connectivity between hosts and pods, but also involving network security and management.

2.5.7 Kubernetes network - Service

image.png

  • After the Pod is created, direct access to the Pod will have the following problems:
    • Pods will be deleted and rebuilt by controllers like Deployment at any time, and the results of accessing Pods will become unpredictable.
    • The IP address of the Pod is assigned after the Pod is started, and the IP address of the Pod is not known before it is started
    • Applications are often composed of multiple pods running the same image, and accessing pods one by one becomes unrealistic
  • RC, RS, and Deployment only guarantee the number of microservice Pods supporting services, but they do not solve the problem of how to access these services. A Pod is just an instance of a running service, which may stop on one node at any time, and start a new Pod with a new IP on another node, so services cannot be provided with a certain IP and port number. To provide services stably requires service discovery and load balancing capabilities. The work of service discovery is to find the corresponding backend service instance for the service accessed by the client. In the K8s cluster, the service that the client needs to access is the Service object. Each Service corresponds to a valid virtual IP within the cluster, and the cluster accesses a service through the virtual IP.
  • A Kubernetes Service defines an abstraction: a logical set of Pods, and a policy by which they can be accessed - often called a microservice. This set of Pods can be accessed by the Service, usually through LabelSelector.
  • The main implementation types of Service include the following (a minimal example follows this list):
    • ClusterIP: Provide a virtual IP address inside the cluster for Pod access (default mode).
    • NodePort: opens a static port on each Node for external access.
    • LoadBalancer: Accessed through an external load balancer.
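A minimal Service sketch with the official Kubernetes Python client: a stable virtual front end for all Pods labelled app=web. ClusterIP is the default type; changing the type field to NodePort or LoadBalancer switches the exposure mode. Names and ports are illustrative assumptions.

```python
# Minimal sketch of a Service selecting the Pods labelled app=web.
from kubernetes import client, config

config.load_kube_config()

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="web-svc"),
    spec=client.V1ServiceSpec(
        type="ClusterIP",                        # or "NodePort" / "LoadBalancer"
        selector={"app": "web"},                 # label selector for backend Pods
        ports=[client.V1ServicePort(port=80, target_port=80)],
    ),
)

client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```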

2.5.8 Kubernetes Network - Ingress

image.png

  • Ingress can provide load balancing, SSL termination and name-based virtual hosting. Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
  • To use the Ingress function, an Ingress Controller must be installed in the Kubernetes cluster. There are many Ingress Controller implementations, the most common being the NGINX Ingress Controller maintained by the Kubernetes community; vendors usually have their own implementations. For example, Huawei Cloud CCE uses the Huawei Cloud Elastic Load Balance (ELB) service to implement Layer-7 load balancing for Ingress (a minimal Ingress sketch follows).
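A minimal Ingress sketch with the official Kubernetes Python client, routing one host's HTTP traffic to the web-svc Service defined earlier. It assumes an Ingress Controller (for example the NGINX one) is already installed; the host name, path, and ingressClassName are illustrative assumptions.

```python
# Minimal sketch of an Ingress routing HTTP traffic for one host to a Service.
from kubernetes import client, config

config.load_kube_config()

ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(name="web-ingress"),
    spec=client.V1IngressSpec(
        ingress_class_name="nginx",              # which controller handles this Ingress
        rules=[
            client.V1IngressRule(
                host="web.example.com",
                http=client.V1HTTPIngressRuleValue(
                    paths=[
                        client.V1HTTPIngressPath(
                            path="/",
                            path_type="Prefix",
                            backend=client.V1IngressBackend(
                                service=client.V1IngressServiceBackend(
                                    name="web-svc",
                                    port=client.V1ServiceBackendPort(number=80),
                                )
                            ),
                        )
                    ]
                ),
            )
        ],
    ),
)

client.NetworkingV1Api().create_namespaced_ingress(namespace="default", body=ingress)
```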

2.5.9 Persistent storage

image.png

  • The life cycle of a Volume is the same as that of the Pod that mounted it, but the files inside the Volume may still exist after the Volume disappears, depending on the type of the Volume. All containers in the Pod can access the volume, but it must be mounted, and it can be mounted to any directory in the container
    • PV: It is a persistent storage volume, which mainly defines a directory that is persistently stored on the host machine, such as an NFS mounted directory
    • PVC: It is the attribute of the persistent storage that Pod wants to use, such as the size of Volume storage, read and write permissions, and so on.
  • Although the PV and PVC methods can shield the underlying storage, PV creation is more complicated and is usually managed by the cluster administrator. The way Kubernetes solves this problem is to provide a method for dynamically configuring PVs, which can automatically create PVs. Administrators can deploy a PV provisioner (provisioner), and then define the corresponding StorageClass, so that developers can choose the type of storage to be created when creating a PVC. The PVC will pass the StorageClass to the PV provisioner, and the provisioner will automatically create a PV.
  • StorageClass describes the storage type "classification" in the cluster, and StorageClass needs to be specified when creating PVC/PV.
  • The Kubernetes administrator sets up the type of network storage and provides the corresponding PV descriptor configuration to Kubernetes. When users need storage, they only need to create a PVC and associate the PVC with a Volume in the Pod, and the Pod can then use the storage resources (see the sketch below).
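A minimal dynamic-provisioning sketch with the official Kubernetes Python client: the PVC names a StorageClass, the provisioner behind that class creates a matching PV automatically, and the Pod consumes the claim as a volume. The StorageClass name "standard" and the mount path are illustrative assumptions.

```python
# Minimal sketch of dynamic provisioning: PVC -> StorageClass -> automatically
# provisioned PV, then the PVC is mounted into a Pod as a volume.
from kubernetes import client, config

config.load_kube_config()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="data-pvc"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="standard",           # provisioner creates a matching PV
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)

# In the Pod spec, the PVC is referenced as a volume and mounted into the container.
volume = client.V1Volume(
    name="data",
    persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
        claim_name="data-pvc"
    ),
)
mount = client.V1VolumeMount(name="data", mount_path="/var/lib/data")
```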

2.6 Introduction to the cloud migration process on the local K8S cluster

image.png

  • Cluster migration roughly includes the following six steps:
    • Target cluster resource planning. Understand the differences between CCE clusters and self-built clusters (see the key performance parameters in the target cluster resource planning) and plan resources as needed; it is recommended to keep the performance configuration of the migrated cluster roughly consistent with that of the original cluster.
    • Resource migration outside the cluster. If you need to migrate related resources outside the cluster, HUAWEI CLOUD provides a corresponding migration solution. Includes container image migration and database and storage migration
    • Migration tool installation. After the resource migration outside the cluster is completed, the application configuration can be backed up and restored respectively in the original cluster and the target cluster through the migration tool.
    • Resource migration within the cluster. You can use tools such as open source disaster recovery software Velero to back up resources in the original cluster to object storage and restore them in the target cluster.
      • Backup of the original cluster application: When the user performs a backup, first create a Backup object in the original cluster through the Velero tool, and query the data and resources related to the cluster for backup, and upload the data to the object storage compatible with the S3 protocol. Resources will be stored in JSON format files.
      • Target cluster application restore: when restoring in the target cluster, Velero is pointed at the temporary object bucket that holds the backup data, downloads the backup data into the new cluster, and redeploys the resources according to the JSON files (a minimal backup/restore sketch follows this list).
    • Resource update and adaptation. Migrated cluster resources may fail to deploy, and the failing resources need to be updated and adapted. Typical adaptation problems fall into the following categories: image update adaptation, access service update adaptation, StorageClass update adaptation, and database update adaptation.
    • Remaining work. After the cluster resources are deployed normally, verify the functions of the migrated applications and switch business traffic to the new cluster. After confirming that all services run normally, the original cluster can be taken offline.
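The in-cluster backup and restore step can be sketched by driving the Velero CLI from Python. This assumes Velero is installed and configured with an S3-compatible bucket in both clusters; the backup name, namespace, and kubeconfig paths are illustrative placeholders.

```python
# Minimal sketch of the backup/restore step with the Velero CLI, driven from Python.
import subprocess

# 1. In the original cluster: back up the application namespace to object storage.
subprocess.run(
    ["velero", "backup", "create", "app-backup",
     "--include-namespaces", "my-app",
     "--kubeconfig", "/path/to/old-cluster.kubeconfig"],
    check=True,
)

# 2. In the target (CCE) cluster: restore the resources from that backup.
subprocess.run(
    ["velero", "restore", "create",
     "--from-backup", "app-backup",
     "--kubeconfig", "/path/to/new-cluster.kubeconfig"],
    check=True,
)
```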

Introduction to HUAWEI CLOUD Container Service

3.1 Cloud Container Engine CCE

image.png

  • Cloud Container Engine deeply integrates Huawei Cloud's high-performance compute (ECS/BMS), network (VPC/EIP/ELB), and storage (EVS/OBS/SFS) services, supports heterogeneous computing architectures such as GPU and Arm, uses multi-availability-zone (AZ) and multi-region disaster recovery technologies to build highly available Kubernetes clusters, and provides high-performance, scalable container application management, simplifying cluster construction and expansion.

3.2 CCE Cluster Architecture and Features

image.png

  • The CCE team is the first team in China to invest in the Kubernetes community. It is the main contributor to the container open source community and the leader of the container ecosystem. CCE service is the earliest Kubernetes commercial product in the country and the first batch of products in the world to pass the CNCF-conformance certification. The main value of CCE lies in the open and open source ecology, enhanced commercialization features, flexible and easy-to-purchase infrastructure.
  • Volcano: Native K8s has weak support for batch computing workloads. Volcano enhances it in two ways: on the one hand, advanced job management, such as queuing, priority, eviction, backfill, and starvation prevention; on the other hand, intelligent scheduling, such as topology-aware affinity scheduling and dynamic driver/executor ratio adjustment, plus support for Gang Scheduling, PS-Worker, and other scheduling patterns used by distributed frameworks.
  • Users can use the cloud container engine service through the CCE console, Kubectl command line, and Kubernetes API

3.3 CCE node

image.png

  • Nodes are the basic elements of container clusters. In the cloud container engine CCE, high-performance elastic cloud servers (ECS) or bare metal servers (BMS) are mainly used as nodes to build highly available Kubernetes clusters.
  • Secure containers are defined mainly in contrast to ordinary containers. The main difference is that each container (more precisely, each Pod) runs in a separate micro virtual machine with its own operating system kernel and virtualization-layer security isolation. Because CCE is a shared public service, its container isolation requirements are stricter than those of a privately owned Kubernetes cluster. Secure containers isolate the kernel, compute resources, and network of different containers, protecting Pod resources and data from being seized or stolen by other Pods.
  • Workloads are applications running on Kubernetes. Whether a workload is a single component or multiple components working together, it can be run in a set of Pods on Kubernetes.
  • CCE provides container deployment and management capabilities based on Kubernetes native types, and supports life cycle management such as container workload deployment configuration, monitoring, expansion, upgrade, load control, service discovery, and load balancing.

3.4 Cluster network composition

image.png

  • Suggestions for network segment planning:
    • Network segments cannot overlap, otherwise conflicts will occur, and none of the subnets under the VPC where the cluster resides (including subnets in extended CIDR blocks) may conflict with the container network segment or the Service network segment.
    • Ensure that each network segment has enough IP addresses available. The IP address of the node network segment must match the size of the cluster; otherwise, the node cannot be created due to insufficient IP addresses. The IP address of the container network segment must match the business scale; otherwise, Pods cannot be created due to insufficient IP addresses.
  • In the Cloud Native Network 2.0 model, the container network segment and the node network segment share the VPC's address space, so it is recommended not to use the same subnet for the container subnet and the node subnet; otherwise IP resources can easily run out and container or node creation can fail.
  • The container network models supported by CCE are: container tunnel network, VPC network, cloud native 2.0 network
  • The container tunnel network builds a container network plane independent of the node network plane through tunnel encapsulation on the basis of the node network. The encapsulation protocol used by the CCE cluster container tunnel network is VXLAN; Network packets are encapsulated into UDP packets for tunnel transmission. The container tunnel network has the advantages of strong versatility and interoperability with a small amount of tunnel encapsulation performance loss, and can meet most scenarios with low performance requirements.

3.5 Cloud Native Network 2.0

image.png

  • Advantages: The VPC directly used by the container network is easy to troubleshoot network problems and has the highest performance. Supports direct communication between the external network in the VPC and the container IP. Capabilities such as load balancing, security groups, and elastic public network IP provided by VPC can be directly used.
  • Disadvantages: Since the VPC directly used by the container network will consume the address space of the VPC, it is necessary to plan the container network segment properly before creating the cluster.
  • Only CCE Turbo clusters support the use of Cloud Native Network 2.0.

3.6 CCE container storage

image.png

  • The container storage function of Cloud Container Engine (CCE) is based on the Kubernetes storage system; it deeply integrates cloud storage services while remaining fully compatible with Kubernetes native storage, such as EmptyDir, HostPath, Secret, and ConfigMap. CCE implements access to cloud storage services based on the Kubernetes community Container Storage Interface (CSI), which establishes a standard set of storage interfaces between the container orchestration engine and the storage system, through which the orchestration engine provides storage services.
  • CSI (Container Storage Interface): the container storage interface through which storage resources are provisioned; through CSI, Kubernetes can support various types of storage. For example, Huawei Cloud CCE can easily connect to Huawei Cloud block storage (EVS), file storage (SFS), and object storage (OBS).
  • The CSI storage plug-in in CCE clusters is called Everest. Everest is a cloud-native container storage system that, based on CSI, connects Kubernetes clusters to the storage capabilities of Huawei Cloud services such as Elastic Volume Service (EVS), Object Storage Service (OBS), Scalable File Service (SFS), and SFS Turbo (high-performance file storage). The plug-in is a system resource add-on and is installed by default when clusters of Kubernetes 1.15 or later are created (a minimal PVC sketch using an assumed CCE StorageClass follows).
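On CCE the same Kubernetes PVC interface is used; only the StorageClass selects the Huawei Cloud backend provisioned through Everest. In the minimal sketch below, the class name "csi-disk" (EVS block storage) is an assumption based on typical CCE setups and should be checked against the StorageClasses actually registered in the cluster.

```python
# Minimal sketch: on CCE, the StorageClass provided by the Everest CSI plug-in
# selects the Huawei Cloud storage backend. "csi-disk" is an assumed class name.
from kubernetes import client, config

config.load_kube_config()

evs_pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="evs-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="csi-disk",           # assumed Everest class for EVS disks
        resources=client.V1ResourceRequirements(requests={"storage": "20Gi"}),
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=evs_pvc
)

# Listing StorageClasses shows what Everest has actually registered (e.g. classes
# for EVS, SFS, OBS), so you can pick the right one instead of guessing.
for sc in client.StorageV1Api().list_storage_class().items:
    print(sc.metadata.name, sc.provisioner)
```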

3.7 Comparison of CCE and CCE Turbo Features

image.png

3.8 Comparison between self-built Kubernetes cluster and cloud container engine

image.png

3.9 Container Image Service SWR

image.png

  • Simple and easy to use:
    • Quickly push and pull container images without self-build and O&M
    • The management console of Container Image Service is easy to use and supports full life cycle management of images
  • Safe and reliable:
    • The container image service follows the HTTPS protocol to ensure the safe transmission of images, and provides multiple security isolation mechanisms between accounts and within accounts to ensure the security of user data access
    • The container image service relies on Huawei's professional storage services to ensure more reliable image storage
  • Image acceleration:
    • The container image service uses Huawei's patented P2P image download acceleration technology to enable CCE clusters to obtain faster download speeds while ensuring high concurrency.
    • The container image service intelligently schedules the global construction nodes, and automatically allocates them to the nearest host node for construction according to the image address used. It can pull foreign images and automatically allocate them to idle nodes according to the load, which can speed up the efficiency of image acquisition.

3.10 CCE Applicable Scenarios

image.png

  • Based on the practice of customers and partners, and the basic functions of CCE, take the following four scenarios as examples:
    • Gradual transformation of traditional IT architectures. The most traditional architecture is usually a single monolithic application. The monolith is decoupled and split into multiple lightweight modules, and each module is carried by a suitable K8s resource type, for example Deployment for stateless applications and StatefulSet for stateful applications, making upgrades and scaling of each module more flexible and easier to adapt to market changes.
    • Improve business rollout efficiency. Container images run through all stages from development to testing to O&M, ensuring a consistent runtime environment so that the business works out of the box and can be launched quickly.
    • Coping with scenarios where business load fluctuates significantly. The fast automatic elastic scaling of the container ensures that the business performance is still stable in the case of sudden surges, and the system automatically expands the capacity in seconds to quickly respond to concurrent peaks.
    • Save resources and reduce costs. Because containers can divide resources in a more fine-grained manner on virtual machines, applications can use resources more fully, thereby improving resource utilization

3.11 Cloud Container Instance CCI

image.png

  • Serverless is an architectural concept: there is no need to create and manage servers or worry about their running status; you only dynamically request the resources the application needs and leave the servers to dedicated maintenance staff, so that you can focus on application development, improve development efficiency, and reduce enterprise IT costs.
  • CCE provides semi-managed clusters in which the user still manages the cluster; HUAWEI CLOUD also provides a fully managed offering, Cloud Container Instance (CCI).
  • Product Features:
    • One-stop container lifecycle management: Using cloud container instances, you can run containers directly without creating and managing server clusters
    • Supports multiple types of computing resources: cloud container instances provide multiple types of computing resources to run containers, including CPU, GPU, and Ascend chips (Huawei's self-developed AI chips).
    • Supports multiple network access methods: cloud container instances provide a variety of network access methods, support Layer-4 and Layer-7 load balancing, and meet access requirements in different scenarios.
    • Supports multiple persistent storage volumes: cloud container instances support data storage on HUAWEI CLOUD cloud storage. Currently supported cloud storage includes: EVS, SFS, OBS, etc.
    • Supports extremely fast elastic scaling: Cloud container instances support user-defined elastic scaling policies, and can freely combine multiple elastic policies to cope with sudden traffic surges during business peaks.
    • Comprehensive container status monitoring: Cloud container instances support monitoring the resource usage of container running, including CPU, memory, GPU, and video memory usage.
    • Support dedicated container instances: cloud container instances provide dedicated container instances, run Kata containers based on high-performance physical servers, and achieve security isolation at the virtual machine level without loss of performance.

3.12 CCI Product Architecture

image.png

  • When users use CCI, they don't need to pay too much attention to the underlying hardware and resource utilization, but only need to focus on their own business. At the same time, CCI provides on-demand and second-by-second billing, which is convenient for customers to use.
  • Dedicated container instance tenants exclusively occupy physical servers and support multi-department business isolation. It runs Kata containers based on high-performance physical servers to achieve security isolation at the virtual machine level without loss of performance. The upgrade and maintenance of the server is undertaken by HUAWEI CLOUD, and users only need to focus on their own business.

3.13 High Security

image.png

  • Cloud container instances have both container-level startup speed and virtual machine-level security isolation capabilities, providing a better container experience.
    • Natively supports Kata Container.
    • Based on Kata's kernel virtualization technology, it provides comprehensive security isolation and protection
    • Uses its own hardware virtualization acceleration technology to deliver higher-performance secure containers.

3.14 Extreme Elasticity

image.png

  • For example, mainstream big data and AI training and inference applications (such as TensorFlow and Caffe) now run in containers and require large numbers of GPUs, high-performance network and storage hardware acceleration, and task-based computing: they need to request large amounts of resources quickly, obtain high-performance compute, network, and high-IO storage to meet intensive computing demands, and release the resources quickly when the task completes.
  • On-demand per-second billing: charges are based on the resources actually used, billed by the second, avoiding costs during idle periods and reducing user costs.
  • Volcano is a Kubernetes-based batch processing platform that provides a series of capabilities, currently missing from Kubernetes, that machine learning, deep learning, bioinformatics, genomics, and other big data applications require. Volcano provides general computing capabilities such as a high-performance task scheduling engine, high-performance heterogeneous chip management, and high-performance task run management (Volcano is open source on GitHub).

3.15 Operation and maintenance free

image.png

  • Users do not need to be aware of clusters and servers, which greatly simplifies operation and maintenance work and reduces operation and maintenance costs

3.16 Applicable scenarios

image.png

  • CCI is mainly suitable for task-based scenarios, including:
    • AI training and inference scenarios, based on support for heterogeneous hardware; training tasks can be hosted on CCI.
    • HPC scenarios, such as gene sequencing scenarios
    • Sudden expansion scenarios in a long-term stable operating environment, such as e-commerce flash sales, hot marketing, etc.
  • The main advantages are cost reduction due to on-demand use, free operation and maintenance due to full hosting, and consistency and scalability due to mirror standardization.
  • There are two billing modes: pay-as-you-go and resource packages. Fees are calculated in core-hours (number of cores x time); for example, 730 core-hours can be consumed as 730 cores for 1 hour or as 1 core for 730 hours (a small arithmetic sketch follows this list).
    • Pay-as-you-go mode: billing is per instance, charged by the second, with the hour as the billing cycle.
    • Resource package mode: within the validity period of a purchased resource package, usage is deducted from the package first and any excess is settled pay-as-you-go. Users can purchase resource packages repeatedly; when there are multiple packages, the one expiring earliest is deducted first.
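A tiny arithmetic sketch of the core-hour model: the charge depends only on cores multiplied by time, so 730 cores for 1 hour and 1 core for 730 hours consume the same 730 core-hours. The unit price used below is a placeholder, not a real CCI price.

```python
# Tiny arithmetic sketch of core-hour billing. PRICE_PER_CORE_SECOND is a placeholder.
PRICE_PER_CORE_SECOND = 0.0001   # placeholder unit price, not a real CCI price

def core_hours(cores: int, hours: float) -> float:
    return cores * hours

def cost(cores: int, seconds: float) -> float:
    # Pay-as-you-go: billed by the second for the resources actually used.
    return cores * seconds * PRICE_PER_CORE_SECOND

print(core_hours(730, 1))                        # 730 core-hours
print(core_hours(1, 730))                        # 730 core-hours -> same consumption
print(cost(730, 3600) == cost(1, 730 * 3600))    # True: same charge either way
```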

3.17 Application Orchestration Service AOS

image.png

  • To use the Application Orchestration Service, you only need to create a template describing the cloud resources and applications you need and define their dependencies and references in the template; AOS then creates and configures those cloud resources and applications according to the template. For example, to create an Elastic Cloud Server (together with a Virtual Private Cloud and subnet), you only need to write a template that defines the ECS, VPC, and subnet and the dependencies between the ECS and the VPC and subnet and between the subnet and the VPC, and then use the template to create a stack through AOS; the VPC, subnet, and ECS are then created automatically.
  • Product Features:
    • Support for automatic orchestration resources: AOS provides automatic orchestration capabilities and supports the orchestration of HUAWEI CLOUD mainstream cloud services. For details, see Cloud services that support orchestration. AOS also provides related services such as resource planning, application design, deployment, change and other life cycle management, and reduces operation and maintenance costs through automation.
    • Support hybrid orchestration of application and cloud service resources: standard language (YAML/JSON) can be used to describe the required basic resources, application systems, application upper-level supporting services and the relationship between the three. According to the unified description, resource provisioning, application deployment, and application loading can be automatically completed in accordance with the defined dependency sequence with one click. The deployed resources and applications can be managed in a unified manner.
    • Provide rich application templates: AOS template market provides a wealth of free resources, including basic resource templates, service combination templates, industry scene templates, etc., covering hot application scenarios. You can directly use the public template to create it with one click, and complete the second-level deployment of all cloud services.

3.18 Multi-cloud container platform MCP

image.png

  • Karmada (Kubernetes Armada) is a multi-cluster management system based on Kubernetes native API. In multi-cloud and hybrid cloud scenarios, Karmada provides pluggable, fully automated management of multi-cluster applications to achieve multi-cloud centralized management, high availability, fault recovery, and traffic scheduling.
  • Unified cluster management: The multi-cloud container platform realizes unified management of clusters of multiple cloud operators through cluster federation, supports dynamic cluster access and global cluster monitoring dashboard.
  • Global application management: Based on multi-cluster and Federation technology, the multi-cloud container platform can realize Kubernetes management in multiple different regions and different clouds, support unified global application management, and support the deployment of cross-cluster applications based on Kubernetes community cluster federation standardized interfaces, Full life cycle management such as deletion and upgrade.
  • Cross-cluster elastic scaling capability: The multi-cloud container platform supports cross-cluster application elastic scaling policies to balance the distribution of application instances in each cluster and achieve global load balancing.
  • Cross-cluster service discovery capability: The multi-cloud container platform supports the creation of federated services and a cross-cluster service discovery mechanism, which can realize regional affinity of services based on the principle of service proximity access and reduce network delay
  • Standard compatibility: the multi-cloud container platform is compatible with the latest Federation architecture of the Kubernetes community, providing the native Kubernetes API and the Karmada API.
  • Multi-cloudization of single-cluster applications: The multi-cloud container platform supports one-click conversion of single-cluster applications to multi-cloud applications, deploying application instances to multi-cloud and multi-clusters, and conveniently and quickly completing scenarios such as business multi-cloud disaster recovery and multi-cloud business traffic sharing.
  • Cross-cluster application cloning and migration capabilities: The multi-cloud container platform supports cloning or migrating the application of a certain cluster to other clusters. This capability can be used to complete the active migration of cross-cloud and cross-cluster applications, or the replication of cross-cloud and cross-region mirroring environments. application
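As an illustration of how a Karmada-style control plane distributes an application across member clusters, below is a minimal PropagationPolicy sketch using the open-source Karmada API. The Deployment name (nginx) and the member cluster names are placeholders.

```yaml
# Propagates an existing Deployment named "nginx" to two member clusters,
# splitting its replicas between them. Cluster names are placeholders.
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
    replicaScheduling:
      replicaSchedulingType: Divided        # split replicas across clusters
      replicaDivisionPreference: Weighted
      weightPreference:
        staticWeightList:
          - targetCluster:
              clusterNames: [member1]
            weight: 1
          - targetCluster:
              clusterNames: [member2]
            weight: 1
```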

3.19 Cloud Native Service Center OSC

image.png

  • Service release: the service provider uploads a service package; after the service's lifecycle management and business characteristics are verified in OSC, the service is published as a commodity that other tenants can subscribe to.
  • Service subscription: The service center includes Huawei's self-developed services, services released by ecosystem partners, and open source services. All services support user subscriptions, and instances can only be deployed after user subscriptions are successful.
  • Service unsubscription: users can unsubscribe from a service at any time; when they do, the system automatically deletes the deployed service and its instances.
  • Private service upload: services developed by users according to Helm, Operator Framework, or OSC service specifications can be uploaded to OSC and managed as private services (see the Chart.yaml sketch after this list).
  • Service upgrade: When a service provider releases a new version for a service, users who subscribe to this service will receive an upgrade prompt, and the user can choose whether to upgrade the service to the latest version.
  • Instance deployment: after subscribing to a service, users can deploy instances, specifying the target region, container cluster, and runtime parameters according to the service's capabilities.
  • Instance O&M: Provides the O&M view of the instance. You can view the O&M information such as instance monitoring and logs. If you need in-depth data analysis, you can jump from the O&M view to the corresponding cloud service.
  • Instance update: Users can modify the running configuration of the instance.
  • Instance deletion: When the service life cycle carried by the instance ends, the user can delete the instance to reclaim related resources.
  • Service publishing: service providers manage their commodities in OSC through service packages. The provider first uploads the service package to OSC, and the commodity can be officially released only after format verification and vulnerability scanning pass. After the first successful release, only new versions of the service package need to be uploaded; new users who subscribe to the service always get the latest version.
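For the Helm path mentioned above, a private service package is essentially a standard Helm chart. A minimal Chart.yaml for such a package might look like the sketch below; the name and versions are placeholders, and OSC's own packaging and acceptance checks (format verification, vulnerability scanning) still apply.

```yaml
# Chart.yaml of a minimal Helm chart that could be packaged (helm package .)
# and uploaded as a private service. Name and versions are placeholders.
apiVersion: v2                # Helm 3 chart API version
name: my-private-service
description: Example service packaged as a Helm chart
type: application
version: 0.1.0                # chart (package) version
appVersion: "1.0.0"           # version of the application being deployed
```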

Overview of Serverless

4.1 What is Serverless

image.png

  • Serverless computing doesn't mean that we no longer use servers to host and run code; it doesn't mean that operations engineers are no longer needed. It refers to the fact that consumers of serverless computing no longer need to spend time and resources on server configuration, maintenance, updates, expansion, and capacity planning. All these tasks and functions are handled by the serverless platform. Therefore, developers focus on writing the business logic of the application. Operation and maintenance engineers can focus more on key business tasks.
  • Serverless takes two forms:
    • Functions-as-a-Service (FaaS) typically provides event-driven computing. Developers run and manage application code as functions triggered by events or HTTP requests. Developers deploy small units of code to the FaaS platform, where the code is executed on demand as discrete actions and scales automatically, without the need to manage servers or any other underlying infrastructure.
    • Backend-as-a-Service (BaaS): API-based third-party services that replace a subset of an application's core functionality. Because these APIs are provided as services that scale automatically and operate transparently, they appear serverless to developers.
  • Simply put, FaaS is responsible for executing functions/code, while BaaS only provides the back-end services that applications rely on, in the form of APIs.

4.2 Value of Serverless

image.png

  • Serverless products or platforms bring the following benefits to developers:
    • Zero Server Operations: Serverless significantly changes the cost model of running software applications by eliminating the overhead involved in maintaining server resources.
      • No need to configure, update and manage server infrastructure. Managing servers, virtual machines and containers is a significant expense for companies, including staff, tools, training and time.
      • Flexible scalability: Serverless FaaS or BaaS offerings scale instantly and precisely to handle each individual incoming request. For developers, serverless platforms have no concept of "pre-planned capacity", nor do they need to configure "auto-scaling" triggers or rules.
    • No compute cost while idle: from a consumer perspective, one of the greatest benefits of serverless offerings is that idle capacity incurs no cost. For example, serverless compute services do not charge for idle virtual machines or containers, in other words for the time when code is not running or doing meaningful work. This of course excludes other costs, such as stateful storage or additional feature sets.
  • Generally, serverless is recommended when the workload is:
    • asynchronous, concurrent, and easy to parallelize into independent units of work;
    • infrequent or sporadic, with large, unpredictable variance in scaling requirements;
    • stateless and ephemeral, without a significant need for instant cold starts;
    • highly dynamic in terms of changing business requirements, where increased developer velocity is needed.

4.3 Huawei Serverless Service-FunctionGraph

image.png

  • When using FunctionGraph, users do not need to activate or pre-configure compute, storage, network, or other services. FunctionGraph provides and manages the underlying compute resources, including server CPU, memory, and network configuration, resource maintenance, code deployment, elastic scaling, load balancing, security upgrades, and resource monitoring. Users only need to provide program packages in one of the programming languages supported by FunctionGraph, upload them, and run them.
  • Key capabilities:
    • Runtimes: functions support Node.js, Java, Python, Go, C#, and other runtime languages.
    • Code submission: supports online code editing, importing files from OBS, uploading a ZIP package, uploading a JAR package, and so on.
    • Triggers: functions support various trigger types such as SMN, APIG, and OBS.
    • Observability: provides collection and display of monitoring metrics and run logs for function invocations, real-time graphical metric display, and online log query, making it easy to check function status and locate problems.
    • Function Flow: a tool for orchestrating FunctionGraph functions into a workflow that coordinates the execution of multiple distributed function tasks, with a unified plug-in for development and debugging (consistent on and off the cloud).
    • HTTP functions: optimized for web service scenarios; users can trigger function execution by sending HTTP requests to a URL directly.
    • Call chain: users enable the call chain in the function configuration page; once enabled, they can jump to the APM service page to view information such as the JVM and the call chain. Currently only Java functions are supported.
    • Container images: users can package and upload a container image directly, which the platform loads and starts.
  • FunctionGraph 2.0 is a new-generation function compute and orchestration service:
    • Deep integration with CloudIDE: multi-function parallel debugging, call-chain tracing, wizard-style construction of function applications, and full lifecycle management.
    • Six major programming languages plus custom runtimes, cold-start latency within 100 milliseconds, and millisecond-level elastic scaling.
    • The first in China to support stateful functions and visualized function orchestration.
    • Supports making web applications serverless with zero modification.

4.4 Serverless Lifecycle Management

image.png

  • Application development: an out-of-the-box cloud IDE environment enables full-link tracing and debugging of serverless applications across clusters, and supports code breakpoints, stack viewing, call topology, and hot code replacement.
  • CI/CD: continuous delivery tools deeply integrated with the serverless runtime services, together with application O&M observability tools, form a lightweight DevOps capability for serverless applications.
  • Function application hosting: a unified serverless application specification provides full application lifecycle management and supports rapid application experience and reuse through templates and a marketplace.
  • CAE (Cloud Application Engine) is an application-oriented serverless hosting service that provides a one-stop application hosting solution with extremely fast deployment, extremely low cost, and simplified O&M. It supports rapid release of applications from source code, software packages, or images, second-level elastic scaling, and pay-as-you-go billing. The infrastructure requires no O&M, and the application lifecycle can be managed based on observable runtime metrics.

4.5 Visualized function flow supports complex business scenarios

image.png

  • On the visual layout page, users connect event triggers, functions, and flow controllers into a flow chart; the output of each node is used as the input of the next node along the connection. The orchestrated flow is then executed in the order defined in the flow chart, as illustrated in the sketch below. After execution completes, the run records of the workflow can be viewed, which makes diagnosis and debugging easy.
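Conceptually, such a flow is just an ordered graph in which each node's output feeds the next node's input. The sketch below illustrates that idea in YAML; it is a hypothetical notation for explanation only, not the actual Function Flow definition format, and the trigger and function names are invented.

```yaml
# Hypothetical notation, for illustration only (not the Function Flow schema):
# an APIG trigger starts the flow, and each step's output becomes the input
# of the next step in the chain.
workflow:
  trigger:
    type: APIG                # an HTTP request starts the flow
  steps:
    - name: validate-order    # function node: checks the incoming payload
      function: order-validate
    - name: charge-payment    # receives validate-order's output as input
      function: payment-charge
    - name: notify-user       # receives charge-payment's output as input
      function: sms-notify
```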

4.6 The unified plug-in supports development and debugging on and off the cloud

image.png

  • CloudIDE support (on the cloud): create functions from templates, view functions on the cloud and download them into the IDE, debug them online in the IDE, and push functions to the cloud.
  • VSCode plug-in support (off the cloud): create functions from templates, view functions on the cloud and download them for local debugging, debug with the VSCode plug-in, and push local functions to the cloud.

4.7 Low retrofit cost

image.png

  • HTTP functions are optimized for web service scenarios: users can trigger function execution by sending an HTTP request to a URL directly. A new function type has been added to the function creation and editing interface. HTTP functions only allow APIG/APIC trigger types; other triggers are not supported.

4.8 Support container image

image.png

  • There are still many difficulties in switching from traditional application development to serverless function development
    • Runtime and deliverable formats are not uniform: some serverless vendors' runtime environments are based on Docker while others use microVMs, and the deliverable formats differ, which imposes significant learning costs.
    • Ecological immaturity: lack of support for popular open source tools (such as CI/CD pipelines)
  • On the other hand, the mature container ecosystem has solved portability and agile delivery very well, and container images have become the standard deliverable of the cloud-native era. However, containers by themselves do not solve problems such as the O&M burden or the cost of idle resources.
  • Developers can create custom images for either event functions or HTTP functions.

4.9 Applicable Scenarios

image.png

Thinking questions

image.png
image.png
image.png

The end.
