CNStack Cloud-Edge Collaboration Platform: Making Cloud-Native Edge Computing Simple

Cloud Native and Edge

With the development and popularization of cloud technology, the cloud-native concept represented by K8s is increasingly accepted by enterprises and has become a solid foundation for enterprise digital transformation. The ideas it advocates, such as immutable infrastructure, resources as the object of management, declarative APIs, and eventual consistency, have become the industry's shared standard for thinking about infrastructure.

Edge computing is not a brand-new concept; it is an architectural pattern that was proposed long ago and has been implemented in different forms across different eras. Its purpose is to push computing power as close as possible to where data is generated, avoiding system instability caused by network or other hardware constraints and improving the response speed of system terminals.

Traditional edge computing products focus mainly on edge-side communication capabilities. In cloud-native scenarios, however, the pressing question for today's cloud-edge collaboration platforms is how to enjoy the features and advantages of cloud native while working around the inherent limitations and constraints of edge environments.

CNStack's cloud-edge collaboration platform, EdgePaaS, focuses on delivering edge computing capabilities on the cloud-native technology stack. The following sections introduce the product features of EdgePaaS through five collaborative capabilities: resource collaboration, application collaboration, service collaboration, data collaboration, and device collaboration, and show how EdgePaaS helps businesses bring their workloads to the edge.

Features

Resource Collaboration

K8s and sites

The cloud-edge collaboration platform is designed to build on K8s with minimal intrusion for users, achieving cloud-edge resource management collaboration through only a handful of new concepts.

The platform proposes the concept of a "site" to realize unified management of edge resources.

A site is somewhat similar to an availability zone; viewed purely as a way of grouping resources, the two do resemble each other. There is an essential difference, however. An availability zone divides resources from the perspective of availability, with business disaster recovery as its core goal. A "site" usually divides resources by geography or by business management relationships; in essence, it is a projection of the edge's own business management rules.

Through "sites", infrastructure resources such as compute, network, and storage can be maintained independently per site, and each site can even be regarded as a small, independent cluster.
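Since EdgePaaS builds on OpenYurt (mentioned under device collaboration below), a "site" maps naturally onto the node-pool idea. The following is a hedged sketch of grouping edge nodes, modeled on OpenYurt's upstream NodePool CRD; EdgePaaS's own site API may differ, and the names here are illustrative:

```yaml
# Sketch: modeling a "site" as an OpenYurt NodePool.
# The pool name is illustrative, not an EdgePaaS-specific API.
apiVersion: apps.openyurt.io/v1beta1
kind: NodePool
metadata:
  name: site-hangzhou
spec:
  type: Edge   # an edge pool, as opposed to a cloud pool
```

A node is then assigned to the pool via a label on its Node object (the exact label key varies by OpenYurt version, so treat it as an assumption).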

Application Collaboration

Application definition, distribution, deployment, operation and maintenance, and disconnection autonomy

Even under the cloud-edge collaboration architecture, "applications" are still first-class citizens.

An application is a unified abstraction of business workloads in the digital world. Even in edge scenarios, no matter how rich the devices or how varied the data they generate, the core business logic still lives in the "application": applications process and analyze the data and then trigger downstream logic.

EdgePaaS places no special requirements on business applications. Any standard K8s application can be used as-is, without modification, whether packaged directly as workloads or as a Helm chart, and it gains full life-cycle operation capabilities in the edge environment: management, distribution, deployment, operation and maintenance, monitoring, and so on.
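For instance, an ordinary Deployment with no edge-specific fields is deployable as-is (a minimal illustration; the names and image are arbitrary):

```yaml
# A plain K8s Deployment: nothing edge-specific is required.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo
        image: nginx:1.25
        ports:
        - containerPort: 80
```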

Under K8s, if a node becomes unreachable over the network, the platform evicts the node's workloads and reschedules them onto other reachable nodes. Within the K8s framework this behavior greatly improves application availability, but given the natural fragility of networks in edge scenarios, it would make edge applications highly unstable. EdgePaaS implements an edge-autonomy mechanism to solve this problem.

After a network interruption occurs at an edge site, the central control plane stops scheduling new workloads to the affected edge nodes, but it does not evict the existing workloads; instead, it waits for the nodes' network to recover. On the edge side, the control agent does not stop a node's workloads just because it has lost contact with the center; it keeps them running and keeps trying to reconnect until the network is restored.
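As a hedged sketch of how such autonomy is typically switched on in the OpenYurt ecosystem that EdgePaaS builds on, a node can be annotated so its workloads keep running while disconnected; the exact mechanism EdgePaaS exposes may differ:

```yaml
# Sketch based on OpenYurt's node-autonomy annotation;
# EdgePaaS may wrap this in its own console or API.
apiVersion: v1
kind: Node
metadata:
  name: edge-node-1
  annotations:
    # keep local workloads running when the node is disconnected
    node.beta.openyurt.io/autonomy: "true"
```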

Service Collaboration

Service topology

Under a traditional centralized architecture, services discover each other through a registry and communicate relying on the network connectivity of the service cluster. For applications deployed in edge environments, however, the services in each site are independent instances, and there are almost no calls between sites: service calls should occur within a site or between the cloud and the edge.

EdgePaaS allows a service to route traffic based on how nodes are distributed in the cluster. For example, a service can specify that traffic be preferentially routed to endpoints on the same node, or in the same node pool, as the client pod.
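A sketch of such topology-aware routing, using OpenYurt's service-topology annotation (EdgePaaS may expose this differently): `openyurt.io/nodepool` keeps traffic within the client's node pool, while `kubernetes.io/hostname` would pin it to the client's own node.

```yaml
# Sketch: restrict service traffic to the client's own node pool ("site").
apiVersion: v1
kind: Service
metadata:
  name: demo-svc
  annotations:
    openyurt.io/topologyKeys: openyurt.io/nodepool
spec:
  selector:
    app: demo-app
  ports:
  - port: 80
    targetPort: 80
```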

Data Collaboration

Traffic optimization, content distribution

The particular nature of edge environments imposes various restrictions on the network, mainly:

  • Limited cloud-edge network bandwidth
  • High cloud-edge traffic costs
  • One-way network visibility (the edge can reach the cloud, but not the reverse)
  • Poor network reliability

Under these constraints, cloud-edge data collaboration is no longer simple. Given the particularities of the edge network, EdgePaaS provides different collaboration support for operations data and business data.

Operation and maintenance data collaboration

Much of the management data exchanged between the cloud and the edge is highly repetitive. Under K8s in particular, the operation of various components depends on a large number of system resources, such as nodes, pods, and endpoints, and the traffic for this kind of data grows significantly with the number of nodes in a site.

Within a site, the internal network environment is comparatively unconstrained. EdgePaaS runs a proxy for the central apiserver at each site, which serves the resources that are repeatedly fetched within the site, so that edge-side control traffic no longer grows with the number of nodes.
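Conceptually, edge-side clients are pointed at the in-site proxy instead of the central apiserver, and the proxy answers repeated list/watch requests from its cache. A hypothetical kubeconfig fragment (the address and port are illustrative, not an EdgePaaS API):

```yaml
# Hypothetical: edge clients talk to the local site proxy,
# which deduplicates and caches traffic to the central apiserver.
apiVersion: v1
kind: Config
clusters:
- name: site-proxy
  cluster:
    server: https://127.0.0.1:10261   # local proxy, not the central apiserver
```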

Business data collaboration

Besides operations data, a large amount of business data also flows between the cloud and the edge: pictures, algorithm models, container images, and so on. This content has the following salient features:

  • Immutable content
  • Managed at the center, used at the edge
  • High bandwidth demands (a single large file, or many requests for small files)
  • Low-latency access required

For this scenario, EdgePaaS provides a complete non-intrusive solution:

On the edge site side, EdgePaaS provides a site-level access proxy; data consumers only need standard protocols (HTTP, FTP, SFTP) to fetch the data they care about. Combined with the nearby-access feature from service collaboration, users need no separate configuration per site: applications in all sites share one service for content retrieval.

With this solution, access traffic for static resources drops significantly, effectively reducing the demands on cloud-edge bandwidth. The proxy can also continue to serve resources during a network interruption, further strengthening disconnection autonomy.

This solution also supports pre-warming. Because content distribution is decoupled from content consumption, the data a consumer will need can be pushed to the edge site before the consumption actually happens, realizing data pre-warming.

Device Collaboration

Device twins

EdgeX Foundry is a plug-and-play, open software platform for edge IoT, driven by its community. It is highly flexible and scalable and can greatly reduce the complexity of applications interoperating with edge devices, sensors, and other hardware. EdgeX Foundry adopts a layered service design comprising, from bottom to top, device services, core services, supporting services, and application services, plus two auxiliary capabilities: security and management. These layers and services form a bidirectional translation engine between edge devices/nodes and cloud/enterprise applications: sensor and node data can be delivered to applications in a specific format, and application instructions can be sent down to edge devices.

EdgePaaS integrates the EdgeX framework through OpenYurt, bringing the complete ecosystem to the edge site. Users can deploy the EdgeX suite at an edge site with one click, and can access, manage, and operate IoT devices by operating on custom resources (CRs), realizing device-twin capabilities. For device protocols, EdgePaaS has built-in drivers for a variety of common protocols: Modbus, MQTT, ONVIF, GPIO, REST, and so on.
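As an illustration, device access via a CR might look like the following, modeled on OpenYurt's device-management CRDs (`device.openyurt.io`); every field value here is hypothetical, and the schema may vary by version:

```yaml
# Illustrative sketch of an IoT device represented as a CR (a "device twin").
apiVersion: device.openyurt.io/v1alpha1
kind: Device
metadata:
  name: thermometer-01
spec:
  serviceName: modbus-device-service   # hypothetical device service
  protocols:
    modbus-tcp:
      Address: 192.168.1.10            # illustrative device address
      Port: "502"
```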

Using device twins in cloud-edge scenarios also brings the following advantages:

  • Reduced complexity of connecting devices
  • Improved system response speed
  • Decoupling of device development from application development, improving integration

Author: good name


This article is the original content of Alibaba Cloud and may not be reproduced without permission.


Origin blog.csdn.net/yunqiinsight/article/details/129671188