Seeing the subtle, knowing the significant: the development of cloud computing from the perspective of enterprise after-sales technical support

Author: Yu Kai

Subtle changes in after-sales business

As a member of Alibaba Cloud's enterprise container technical support team, I field all kinds of container questions from enterprise customers around the world every day. Over the past few years of support work, I have gradually noticed patterns among customers with container problems: which are heavy users, which are light users, roughly which industries these customers come from, and so on.

Through this gradual engagement, we found that among some heavy container users, the problem scenarios they raise are also gradually changing. For legal and regulatory reasons, the full data cannot be shared; only a brief description of the relevant issues is given below.

Vertical dimension

Since the end of last year, the number of work orders related to edge clusters has been rising steadily, and at a fairly high rate. Among the edge clusters involved, more than half of the customer clusters are relatively large, with node counts on the order of hundreds or even thousands.

Horizontal dimension

Customer 1:

This user is currently one of the domestic ToC personalized-recommendation service providers. The customer only began using ACK Edge, the edge version of Container Service, this year, and the number of nodes in its edge cluster has already quickly exceeded 100.

Customer 2:

This user is a pioneer in the domestic electric-vehicle industry and a regular on new-energy trending lists. The customer is using ACK Edge, the edge version of Container Service, for the first time; its edge cluster has already exceeded one thousand nodes, accounting for nearly half of the customer's container cluster nodes.

Customer 3:

This user is one of the world's leading providers of unmanned IoT devices. After it began using ACK Edge, the edge version of Container Service, last year, the proportion of ACK Edge grew rapidly; edge clusters now carry most of the customer's containerized business, and most of the customer's container work orders concern the edge cluster.

Customer 4:

This user is a leading enterprise in private-domain e-commerce. It began using ACK Edge, the edge version of Container Service, this year and rapidly expanded its cluster; the edge cluster has already exceeded a thousand nodes.

These customers' use of edge clusters matches my experience serving enterprise customers over the past few months: edge computing increasingly looks like a direction for cloud-native customer business, and its share will keep growing. These customers, who have used edge clusters for only a short time, do not share a unified industry profile; they span Internet e-commerce, manufacturing, new-energy vehicles and other transportation lines, and so on. They lack the strong industry attributes seen in the public cloud, where, for example, Internet and education-and-training customers prefer the public cloud while ToG and large transportation customers prefer private clouds. It seems that edge computing was born for complex terminal business scenarios.

This actually raises several questions:

  1. What is edge computing?

  2. Did edge computing products come first and then expand in scale with customers, or did large customer demand drive the development of edge computing products?

Point 1 is too big a question to answer properly here, so I will only touch on it and expand on it later when I have time. Regarding point 2, I personally don't think this is a chicken-and-egg problem. My understanding is that future cloud computing will fully unleash its potential, with more business applications born in the cloud and growing in the cloud; facing customers' increasingly complex and personalized needs, the advantages of cloud native must extend from the center to the edge; and the cloud-native landscape is becoming more complete and clear, from Kubernetes to service mesh to serverless. Cloud and business complement and reinforce each other. In a sentence: business drives the cloud's growth, and the cloud moves with the business.

Looking at edge cloud computing from the market perspective

Global market

According to Polaris Market Research's edge computing market analysis, the global edge computing market was worth US$74.1 billion in 2021 and is expected to grow to US$1.4 trillion by 2030, a compound annual growth rate of nearly 38.8%, with the Asia-Pacific market growing fastest. Clearly, edge computing will remain in a period of rapid development over the next ten years.
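As a quick sanity check on these figures (a sketch; the start and end values are simply the numbers quoted above, not additional data from the report), the implied compound annual growth rate can be recomputed:

```python
# Recompute the implied CAGR from the quoted market sizes:
# US$74.1B in 2021 growing to ~US$1.4T by 2030, i.e. over 9 years.
start, end, years = 74.1, 1400.0, 9

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~38.6%, consistent with the quoted 38.8%
```

The small gap is expected, since the quoted figures are rounded.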

A huge number of devices are now connected to the Internet, generating massive amounts of data that traditional data center infrastructure struggles to handle. The rapid adoption of emerging technologies and computing devices across many industries is expected to generate large volumes of dispersed data, and hauling all of it to central data centers introduces latency and inefficiency. Edge computing instead relocates data processing to the data source rather than relying on data centers to process and analyze the data, which helps optimize organizational efficiency.

The Asia-Pacific region is expected to witness faster growth over the forecast period owing to rapid digitization, technological innovation, and expansion of connected devices in countries such as China, India, Japan, and South Korea. Additionally, regional growth is supported by the presence of major telecom companies and rising IT investments to deploy edge computing.

Rising government funding for digitization and the growing need for many businesses to store and process data are aiding the market's growth. In addition, emerging IoT applications across smart cities generate large amounts of data, and the growing need to process and analyze that data near its source, at relatively low cost compared to cloud computing, to generate insights and enhance real-time decision-making, is driving growth in this market segment.

Domestic market

According to iResearch, less than 5% of Chinese companies used edge computing in 2020, but the proportion planning to use it was as high as 44.2%. Although edge cloud computing is still in its infancy, it clearly has very broad room for growth. iResearch estimates that the overall edge cloud market will grow at a compound annual rate of 44.0% to reach 55 billion yuan by 2025, with regional edge clouds leading that growth as the first scenarios mature, such as interactive live streaming, vCDN, and the Internet of Vehicles. By 2030, China's edge cloud computing market is expected to approach 250 billion yuan; the compound annual growth rate from 2025 to 2030 will be lower than in the preceding five years, and on-site edge cloud scenarios such as the industrial Internet, smart parks, and smart logistics will mature rapidly during this period.

Note: The figure above is from iResearch.

What is edge cloud computing

The Möbius strip of computing power

Since the birth of the general-purpose computer, the distribution of computing power has alternated in cycles between centralized and distributed architectures, with each cycle lasting about twenty years.

Mainframe mode: when general-purpose computers first appeared, a centralized computing model based on large mainframes served multiple terminals through time-sharing technology.

C/S mode: with advances in semiconductor technology and the consumer adoption of integrated circuits, personal computers became ever smaller while their performance improved significantly; the spread of the PC allowed end users to connect to servers over local area networks.

Cloud computing mode: built on virtualization, distributed computing, and related technologies, cloud computing provides on-demand access to a shared resource pool anytime, anywhere, and computing power returns to a centralized architecture.

Edge cloud computing mode: with the spread of 5G and the Internet of Things, terminal counts and data volumes are growing rapidly; centralized cloud computing has hit bottlenecks, and computing power has begun to migrate to the edge.

Rapid technological progress keeps improving cloud computing performance and reducing cost. Each new model breaks the existing cost-benefit balance and starts a new cycle of computing power distribution.

Concept and business form

Edge cloud computing is a new form of distributed computing built on edge infrastructure located between the central cloud and terminals. It is cloud computing capability sinking from the center to the edge: through cloud-edge integration and collaborative management, it serves business needs that the centralized cloud computing model cannot meet, bringing cloud computing closer to where data is generated.

"Edge" is a relative, not an absolute, concept. Edge services' differing requirements for network latency, bandwidth, data volume, economics, and other factors determine the optimal location for edge cloud deployment. Shared services such as autonomous driving and cloud gaming can be deployed on regional edge clouds at the regional or provincial/municipal level, while dedicated edge cloud services for factories, ports, and campuses can be deployed in edge data centers close to customer premises, or even on lighter-weight devices such as edge gateways. From a technical perspective, both regional edge clouds and on-site edge clouds are based on edge data centers and deliver edge cloud capabilities by sinking ICT infrastructure, while the IoT edge cloud performs cloud-native upgrades and transformations of on-site devices, as typified by industrial scenarios.

Positioning and core values

Edge cloud computing emerged to supplement the shortcomings of centralized cloud computing, not to replace it. Discussed in a broad sense, it should be placed within the overall cloud-edge-end framework, with the edge cloud seen as the central cloud sinking closer to end users. Edge cloud computing resembles an octopus: only 40% of an octopus's neurons are in its brain, while the remaining 60% are distributed across its arms, forming a "one brain + N small brains" neural computing structure. This closely resembles the central cloud + edge cloud + end user architecture: after end devices collect massive data, small-scale, local data that needs real-time processing is handled and fed back nearby on the edge cloud, while complex, large-scale, global data processing is handed to the central cloud for processing and publishing. The central cloud and edge cloud are managed, controlled, and intelligently scheduled in a unified way, yielding a sensible allocation of computing power and realization of business logic.

Compared with central cloud computing, edge cloud computing is closer to the end users who generate and consume data, and those users are very sensitive to network latency and transmission cost. Edge cloud computing sinks cloud capability to the edge and thereby meets low-latency, low-cost requirements. However, edge-side physical equipment and operating environments lack the unified standards of the central cloud, and hardware performance is uneven. The edge cloud therefore needs to work in concert with the central cloud, combining the central cloud's large-scale computing power with the edge cloud's low latency and low cost, to deliver ultra-low-latency interaction that the centralized model cannot achieve and to close the loop locally on part of the data processing and feedback.

Ultra-low latency

At this stage, the main motivation for adopting the edge cloud is latency, especially in scenarios requiring real-time interaction and feedback, such as intelligent terminal devices, the Internet of Vehicles, and autonomous driving. Under the traditional cloud computing model, network latency is hard to reduce further because of the hard physical-distance constraint between end users and the central cloud. Meanwhile, the order-of-magnitude growth in smart terminal devices inevitably brings massive data-processing requirements.

Transmission cost

Under central cloud computing, all data generated by end users must be sent back to the cloud for processing. Long-distance data transmission is relatively expensive, and much of the data sent back is useless. With the explosive growth of terminals, this imposes a heavy drain on the central cloud's computing power.

Data security

Some industries, because of national policy, industry characteristics, data privacy protection, and similar requirements, have extremely high data security demands and cannot transmit sensitive data back to the cloud. Yet these industries also need to move their business to the cloud.

Typical application scenarios

Ultra-low latency, massive data processing, edge intelligent scheduling, and data security compliance are the main factors driving enterprises to choose edge cloud computing. Currently, ultra-low latency and massive data processing are edge cloud computing's biggest advantages over central cloud computing. As the figure shows, scenarios such as the industrial Internet, Internet of Vehicles, smart transportation, cloud gaming, and VR/AR have huge demands on data transmission and computing power, which edge cloud computing can meet.

Note: The figure above is from iResearch.

Kubernetes: from the center to the edge

With the groundwork above, we can form a rough preliminary judgment about future cloud computing. So how should Kubernetes, the cornerstone of cloud native, develop in edge computing scenarios? Will it, like IOE, be gradually phased out by the times? Will it, like VMware, remain untouched within its own domain? Or will it, like today's large AI models, become the mainstream of the future? Let me state my personal view first: Kubernetes' plugin system and list-watch mechanism make it naturally suited to edge cloud computing.

Kubernetes is an application-centric architectural solution. As an orchestration tool, it shields the underlying infrastructure, enabling unified scheduling and management across different underlying resource architectures; it uses standard container images to support automated deployment and rapid recovery for many kinds of business applications; and its horizontal scalability breaks through the boundaries of central cloud computing, letting the underlying computing power transcend restrictions of region, cloud vendor, and physical equipment to form an integrated cloud-edge-end deployment scheme.
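The application-centric, declarative model can be illustrated with a minimal reconcile loop. This is a simplified sketch in plain Python, not Kubernetes code: `desired` and `actual` stand in for the cluster's declared and observed state.

```python
# Minimal sketch of Kubernetes-style declarative reconciliation:
# a controller repeatedly computes the actions that drive actual
# state toward desired state, regardless of the underlying resources.

def reconcile(desired: dict, actual: dict) -> dict:
    """Return the actions needed to move `actual` toward `desired`."""
    actions = {}
    for name, replicas in desired.items():
        have = actual.get(name, 0)
        if have != replicas:
            actions[name] = replicas - have   # +N = scale up, -N = scale down
    for name in actual:
        if name not in desired:
            actions[name] = -actual[name]     # remove undeclared workloads
    return actions

desired = {"web": 3, "api": 2}                # what the user declared
actual = {"web": 1, "batch": 4}               # what is actually running
print(reconcile(desired, actual))             # {'web': 2, 'api': 2, 'batch': -4}
```

Because only desired state is declared, the same loop works whether the nodes executing the actions sit in the central cloud or at the edge.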

Challenges of Kubernetes in edge cloud computing

Kubernetes is a distributed, cloud-native system that separates management from business workloads. Master nodes manage worker nodes, schedule Pods, and control the cluster's running state; worker nodes run containers, monitor container status, and report it promptly. In edge cloud computing scenarios, it mainly faces the following challenges:

  1. Kubernetes uses a strongly consistent, centralized storage architecture: the state of Kubernetes resources is recorded on the control plane, which schedules and manages those resources in a unified way. In edge scenarios, the network between edge and center is unstable, so this strong-consistency logic is challenged;

  2. Worker nodes communicate with the master node through the list-watch mechanism to synchronize the Kubernetes resources on the node; but when network bottlenecks occur between edge and center, worker nodes have no autonomy and cannot make decisions on their own;

  3. The kubelet implements many interfaces and policies, such as CRI, CSI, and CNI for compute, storage, and networking resources. On large nodes the kubelet can consume close to 1 GB of resources, which is a challenge for low-spec edge hardware.
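Challenges 1 and 2 both stem from how list-watch works: a node first LISTs the full state from the apiserver, then applies incremental WATCH events to keep a local view in sync, so a broken cloud-edge link cuts off the event stream entirely. The following is an illustrative sketch in plain Python; the event shapes are assumptions, not the real Kubernetes API.

```python
# Sketch of the list-watch mechanism: fold a stream of watch events
# into the node's local view of its resources.

def apply_events(state: dict, events: list) -> dict:
    """Apply incremental watch events on top of the initial LIST result."""
    for ev in events:
        kind, name = ev["type"], ev["name"]
        if kind in ("ADDED", "MODIFIED"):
            state[name] = ev["object"]
        elif kind == "DELETED":
            state.pop(name, None)
    return state

state = {"pod-a": "Running"}                  # from the initial LIST
events = [                                    # from the WATCH stream
    {"type": "ADDED", "name": "pod-b", "object": "Pending"},
    {"type": "MODIFIED", "name": "pod-b", "object": "Running"},
    {"type": "DELETED", "name": "pod-a"},
]
print(apply_events(state, events))            # {'pod-b': 'Running'}
```

When the watch connection breaks, no further events arrive; and without local persistence, a restarted edge node cannot rebuild `state` at all, which is exactly the autonomy gap described above.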

The mainline Kubernetes community does not ship a mainstream open-source edge version, and edge cloud computing involves more complex scenarios. Current CNCF edge cloud computing open-source projects mainly target the three challenges above: using Kubernetes' rich plugin support, they distribute control-plane tasks so that the Kubernetes master achieves unified management, control, and intelligent scheduling, while each edge cluster node pool governs itself, achieving edge autonomy and business synchronization, thus realizing cloud-edge-end collaboration with cloud-based management and edge autonomy.

Key points of Alibaba Cloud's container product strategy in the new era

OpenYurt: a non-intrusive solution

Edge autonomy capability

OpenYurt introduces a per-node proxy (YurtHub) with local storage to cache the state of the cloud apiserver, so that if a node is disconnected, the cached data can still be used by the kubelet, kube-proxy, or user Pods.

Cross-NodePool network communication capabilities

OpenYurt uses Raven to provide cross-NodePool network communication. A node daemon is installed on every node, and exactly one daemon in each node pool is selected as the Gateway, which establishes VPN tunnels between node pools. The other daemons in the pool configure cross-pool routing rules to ensure traffic passes through the Gateway node.
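The gateway-per-pool routing idea can be sketched as below. This is a simplified model, not Raven code; the election rule (lowest node name) is an illustrative assumption, since the real selection mechanism is not described here.

```python
# Sketch of Raven-style routing: one daemon per node pool acts as the
# Gateway, and cross-pool traffic from other nodes is routed through it.

def elect_gateways(pools: dict) -> dict:
    """Pick one gateway node per pool (illustrative rule: lowest name)."""
    return {pool: sorted(nodes)[0] for pool, nodes in pools.items()}

def next_hop(src_pool: str, dst_pool: str, gateways: dict) -> str:
    """Nodes reach other pools via their own pool's gateway tunnel."""
    if src_pool == dst_pool:
        return "direct"                       # same pool: no tunnel needed
    return gateways[src_pool]                 # leave via the local gateway

pools = {"beijing": ["node-b2", "node-b1"], "hangzhou": ["node-h1"]}
gw = elect_gateways(pools)
print(gw)                                     # one gateway per pool
print(next_hop("beijing", "hangzhou", gw))    # 'node-b1'
```

Concentrating tunnels on one gateway per pool keeps the number of VPN connections proportional to the number of pools rather than the number of nodes.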

Multiple NodePool management

To better support the cloud-edge collaboration architecture, OpenYurt pioneered the node pool concept, which encapsulates the management of node resources, applications, and workload traffic.

Advanced workload upgrade model

OpenYurt enhances the DaemonSet upgrade model with OTA (Over-The-Air) and Auto upgrade modes, supporting, for example, OTA upgrade scenarios for vehicles.

Programmable resource access control

The YurtHub component has a built-in programmable data-filtering framework: data returned from the cloud passes through a filter chain, which transparently transforms it on demand to meet the specific needs of cloud-edge collaboration scenarios.
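The filter-chain idea can be sketched as follows. This is an illustrative model, not YurtHub's actual framework; the example filter and the `pool` field on endpoints are assumptions for the sake of the sketch.

```python
# Sketch of a data filter chain: responses from the cloud pass through
# an ordered list of filters before reaching edge clients.

def make_chain(*filters):
    """Compose filters into a single response-transforming function."""
    def run(data):
        for f in filters:
            data = f(data)
        return data
    return run

def keep_local_endpoints(data):
    """Example filter (hypothetical): keep only endpoints in this node's
    own pool, so service traffic stays within the edge site."""
    data["endpoints"] = [e for e in data["endpoints"] if e["pool"] == "edge-1"]
    return data

chain = make_chain(keep_local_endpoints)
resp = {"endpoints": [{"ip": "10.0.0.1", "pool": "edge-1"},
                      {"ip": "10.0.0.2", "pool": "cloud"}]}
print(chain(resp))                            # only the edge-1 endpoint remains
```

Because the transformation happens in the proxy, clients on the node need no changes, which is consistent with OpenYurt's non-intrusive approach.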

Cloud-edge network bandwidth reduction

OpenYurt introduces the concept of Pool Scope Data: the YurtHubs in a pool obtain such data from the pool-coordinator instead of the cloud kube-apiserver, eliminating the public-network bandwidth each node would otherwise consume fetching it from the cloud.

Cloud native edge device management

OpenYurt abstracts and defines edge devices from a cloud-native perspective: their basic characteristics (what a device is), main capabilities (what it can do), and generated data (what information it can transmit). Through declarative, cloud-native APIs, it gives developers the ability to collect, process, and manage device data.

Address: https://openyurt.io

OpenYurt's corresponding commercial product is ACK Edge, the edge version of Container Service, which supports full lifecycle management of container applications and resources in edge computing scenarios.

  • Create a highly available edge Kubernetes cluster with one click from the console, with rich management and operations capabilities.
  • Support rich, heterogeneous edge node resources, including self-built IDC resources, ENS, IoT devices, and X86 and ARM architectures, with mixed scheduling of heterogeneous resources.
  • For weak-network edge scenarios, provide node autonomy and network autonomy to ensure highly reliable operation of edge nodes and edge services.
  • Provide a reverse operations and maintenance network tunnel.
  • Provide edge unit management, unit deployment, and unit traffic management.

https://help.aliyun.com/zh/ack/ack-edge/product-overview/ack-edge-overview

Summary

Kubernetes is a large and complex architecture with nearly a hundred mainstream components, and moving all of Kubernetes into edge scenarios poses enormous business and architectural challenges. Different scenarios and priorities call for different solutions, which is why current Kubernetes edge solutions are relatively fragmented and no single edge cloud computing solution is mainstream and dominant.

Breaking the boundaries of central cloud computing, extending Kubernetes from the center to the edge, and building an integrated cloud-edge-end infrastructure are the current directions of edge cloud computing projects, all aimed at better serving edge business scenarios. On the business side, this means centralized management of applications with cloud-edge-end collaboration at runtime on the edge; on the operations side, it means automated operations, high reliability, and fast recovery of edge services, reducing operational costs in edge scenarios. In edge cloud computing, cloud and business complement and reinforce each other: business drives the cloud's growth, and the cloud follows the business.

References:

Edge Computing Community: Top Ten Edge Computing Open Source Projects in 2022

Carrier Edge Computing Network Technology White Paper (2019)

"China's Edge Cloud Computing Industry Outlook Report"

"China Cloud Computing Development White Paper"

OpenYurt: In-depth analysis of edge metadata filtering framework

[OpenYurt in-depth analysis] Elegant implementation of reverse proxy with caching capabilities

An in-depth interpretation of OpenYurt: YurtHub’s scalability from the perspective of edge autonomy

Edge Computing Market Share, Size, Trends, Industry Analysis Report, 2022-2030

Market Guide for Edge Computing

Edge Computing Market: Market Size, Share, Growth, Trends and Forecast 2023 to 2032


Origin my.oschina.net/u/3874284/blog/10116631