Cloud Native Accelerates the Development of Cloud-Edge-End Integration


The "14th Five-Year Plan" clearly calls for "coordinating the development of cloud services and edge computing services." The State Council's 14th Five-Year Plan for Digital Economy Development likewise points out the need to "strengthen edge computing capabilities for specific scenarios." As cloud computing in China enters a period of inclusive development, demand for edge computing has surged, and cloud-edge-device integration has become an important direction of future evolution.


The wide application and deep integration of cloud-native and edge computing technologies will further accelerate the adoption of cloud-edge-end integration.

Cloud Native Edge Computing Architecture


The following introduces a complete edge computing architecture, based on Lingqueyun's edge computing solution. The architecture has three parts: terminal, edge, and cloud. The terminal part can include cameras, sensors, AGVs, robots, and similar devices. The edge, the second part, is further divided into three computing environments according to distance from the devices: edge gateway, near edge, and far edge.


The edge gateway is part A, sitting closest to the devices. It is the first hop for data traveling from the edge to the cloud and usually connects to devices directly, by wire or wirelessly. Edge gateways must host many types of applications and many device interfaces, and the key value of container technology here is consolidation: merging multiple kinds of gateways into one. In one edge computing project I worked on, the customer initially matched each gateway to its devices by interface, using more than 20 ARM boards to manage different devices. After introducing containers we consolidated resources, and the final solution used only 9 ARM boards, cutting resource usage by more than half.
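The consolidation described above can be thought of as a packing problem: many small per-interface workloads are squeezed onto as few boards as possible. The sketch below is a hypothetical illustration using first-fit bin packing; board capacity and workload sizes are invented for the example, not figures from the actual project.

```python
# Hypothetical sketch: consolidating per-interface gateway workloads onto
# fewer ARM boards, modeled as first-fit bin packing. Capacity and load
# values are illustrative assumptions.

def pack_first_fit(workloads, board_capacity):
    """Assign each workload (a CPU share) to the first board with room."""
    boards = []  # each entry is the remaining capacity on one board
    for load in workloads:
        for i, free in enumerate(boards):
            if load <= free:
                boards[i] = free - load
                break
        else:
            boards.append(board_capacity - load)  # provision a new board
    return len(boards)

# One board per device interface: 20 boards, each mostly idle.
workloads = [0.3] * 20
print(pack_first_fit(workloads, board_capacity=1.0))  # → 7
```

With one workload per board the customer needed 20 boards; packing three 0.3-share workloads per board brings the count down to 7 in this toy model, the same order of saving the project achieved.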


The near edge is part B, usually deployed per edge site: a supermarket, a gas station, a highway toll station, or an airplane each counts as one near-edge computing environment. Network latency between this environment and the devices needs to be held to roughly 2 ms. The platform typically consists of three or more physical server nodes and is a core part of edge computing. It carries important edge services and therefore needs richer support, such as storage, networking, GPUs, middleware, and microservice frameworks, yet overall resources remain tight. A container platform suits this environment well: it provides basic runtime support for the business while also delivering elastic scaling, fault self-healing, batch releases, and other added value.


The next computing environment is the far edge, part C. It is deployed per region, with network latency between endpoints held to roughly 10 ms. I place this part between edge computing and cloud computing because in some customers' computing models it does not exist, while in others it evolves into an enterprise private cloud; which way it goes depends entirely on the business architecture. From the infrastructure perspective, the far edge is a private cloud with complete capabilities. Because business forms vary widely, it generally needs to support two computing models: virtual machines and containers. Fortunately, Kubernetes has the KubeVirt project, which lets us manage virtual machines in a cloud-native way; this is no longer a new technology and is already widely used, for example by telecom operators. In the far-edge environment we adopt the idea of the software-defined data center, using Kubernetes as the base to carry all the compute, network, storage, load-balancing, security, and other capabilities a data center requires. The far edge also provides unified management of the near edge and the edge gateways, covering both the business side and the platform side.
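To make "managing virtual machines in a cloud-native way" concrete, the sketch below builds a minimal KubeVirt `VirtualMachine` manifest as a Python dict, the shape one would submit to the Kubernetes API. The name, image, and sizes are illustrative assumptions, not values from the solution described here.

```python
# Minimal sketch of a KubeVirt VirtualMachine manifest built as a Python
# dict. Field layout follows the kubevirt.io/v1 API; concrete values are
# illustrative.

def kubevirt_vm(name, cores, memory, container_disk_image):
    return {
        "apiVersion": "kubevirt.io/v1",
        "kind": "VirtualMachine",
        "metadata": {"name": name},
        "spec": {
            "running": True,  # start the VM as soon as the object is created
            "template": {
                "spec": {
                    "domain": {
                        "cpu": {"cores": cores},
                        "resources": {"requests": {"memory": memory}},
                        "devices": {"disks": [
                            {"name": "rootdisk", "disk": {"bus": "virtio"}},
                        ]},
                    },
                    "volumes": [
                        {"name": "rootdisk",
                         "containerDisk": {"image": container_disk_image}},
                    ],
                }
            },
        },
    }

vm = kubevirt_vm("edge-vm", 2, "2Gi", "quay.io/containerdisks/fedora:latest")
print(vm["spec"]["template"]["spec"]["domain"]["cpu"]["cores"])  # → 2
```

Because the VM is just another Kubernetes object, the same tooling that deploys containers to the far edge can create, watch, and delete it.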


The last is the cloud computing environment, part D, which is what we generally understand as IaaS, PaaS, and related capabilities. For unified management of the far-edge environments, we adopt a distributed cloud architecture, so that from the cloud we can push applications down to the far edge, operate and maintain it centrally, and upgrade its platforms.


It is worth mentioning that in practice, not every customer needs such a large and complex architecture. Usually a customer needs only a subset of it, determined entirely by the customer's business architecture.


In Lingqueyun's edge computing architecture, we do not build this complexity from scratch. Instead, we start from a mature product architecture that has run stably and been verified by hundreds of leading enterprise customers, and extend and enhance it modestly to fit the characteristics of edge computing environments, ensuring the solution is both rich in functionality and sufficiently stable.

K8s container + edge computing = edge-native


The essence of a platform is to better support the business; business flexibility and stability are the ultimate goals we pursue. In edge computing we use the term edge-native to describe workloads that fully exploit edge capabilities, much as cloud-native describes new workloads built for the cloud. Containers not only provide a good infrastructure for edge computing but also effectively support the development and operation of edge-native services. Here we break down the seven characteristics of edge-native workloads one by one:


First, the business is usually hierarchical. From the edge gateway to the near edge to the far edge, and finally to the cloud, different parts of the business run in different tiers and together form a complete system. The multi-level edge management that containers provide matches the actual construction of edge computing, and containers improve deployment flexibility across tiers: a workload can quickly sink from the cloud to the edge, or migrate from the edge back to the cloud.


Second, the business needs caching, driven by the large volume of data processed at the edge. Edge computing by itself cannot solve this; traditionally it had to be handled inside the business. With containers, middleware and database services can easily be pushed down to edge gateways or near-edge environments, whether on x86 servers or ARM boxes.
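As a minimal sketch of the caching idea, the code below implements a tiny LRU cache such as an edge service might keep in front of a cloud database; in a real deployment this role would be played by containerized middleware (for example Redis) pushed down to the gateway or near edge. The sensor keys and values are invented for the example.

```python
# Illustrative sketch: a small LRU cache at the edge, holding recent data
# so requests need not travel back to the cloud.
from collections import OrderedDict

class EdgeCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)  # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict the least recently used

cache = EdgeCache(capacity=2)
cache.put("sensor-1", 21.5)
cache.put("sensor-2", 19.8)
cache.get("sensor-1")          # touch sensor-1 so it stays warm
cache.put("sensor-3", 22.1)    # evicts sensor-2, the least recently used
print(cache.get("sensor-2"))   # → None
```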


Third, the business needs elastic scaling. Precisely because edge resources are limited, a flexible allocation mechanism like elastic scaling is all the more valuable. In the traditional edge computing model, the infrastructure cannot help with this, so it has to be solved at the business level, which creates considerable trouble. Elastic scaling, however, is a strength of containers: standard Kubernetes includes the HPA (Horizontal Pod Autoscaler), and with simple configuration a workload can scale out and in based on CPU, memory, or custom monitoring metrics, making more intensive use of resources.
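The HPA's core decision can be written in one line. The documented Kubernetes rule is `desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)`, clamped to the configured minimum and maximum; the sketch below reproduces it with illustrative bounds.

```python
# Sketch of the Kubernetes HPA scaling rule:
#   desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)
# clamped to [min_replicas, max_replicas]. Bounds here are illustrative.
import math

def hpa_desired_replicas(current_replicas, current_metric, target_metric,
                         min_replicas=1, max_replicas=10):
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

# 3 replicas averaging 90% CPU against a 60% target -> scale out to 5.
print(hpa_desired_replicas(3, 90, 60))  # → 5
```

The same formula drives scale-in when utilization drops, which is what lets tight edge resources be handed back to other workloads automatically.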


Fourth, the business should be composed of multiple small services. This borrows from the cloud microservices concept: it emphasizes that services should be as small as possible, fit on more compact devices, and minimize dependencies between services so that services can be assembled quickly. This is exactly where containers excel, since the container itself is the best vehicle for small services. In addition, technologies such as the service mesh bring service governance to C, C++, Java, and other services, speeding up development and troubleshooting.


Fifth, edge-native is a near-site service. Equipment, data, and interactions all live on the edge side; think of scenarios such as gas stations, supermarkets, and highway toll stations. Container service-routing technology enables flexible release: services can be served locally, and in extreme cases routed across sites.


Sixth, fault self-healing. Failures are common in edge computing: the edge side lacks the stable cooling and shock protection of a data center, so it fails more often, which greatly raises enterprises' operation and maintenance costs. Some of my customers have to hire several people in each city just to keep local services running. Container pod replica management enables fast self-healing: once a probe finds that a workload can no longer serve, the container platform quickly restarts or migrates it, ensuring enough replicas keep running in the cluster.
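The self-healing behavior above can be sketched as a reconcile loop that compares observed healthy replicas with the desired count and schedules replacements, which is essentially what a Kubernetes controller does for a Deployment. The replica names and health states below are invented for illustration.

```python
# Illustrative sketch of replica-based self-healing: reconcile the observed
# state toward the desired replica count, as a Kubernetes controller does.

def reconcile(desired, replicas):
    """replicas: dict of name -> healthy?  Returns names to (re)start."""
    healthy = [n for n, ok in replicas.items() if ok]
    missing = desired - len(healthy)
    if missing <= 0:
        return []  # enough healthy replicas; nothing to do
    # restart failed replicas first, then create brand-new ones if needed
    failed = [n for n, ok in replicas.items() if not ok]
    to_start = failed[:missing]
    for i in range(missing - len(to_start)):
        to_start.append(f"replica-new-{i}")
    return to_start

state = {"replica-0": True, "replica-1": False, "replica-2": False}
print(reconcile(desired=3, replicas=state))  # → ['replica-1', 'replica-2']
```

Running this comparison continuously, rather than waiting for an on-site operator, is what turns an edge failure from a field trip into a restart.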


Finally, security requirements are also a typical characteristic of edge-native, because edge computing enlarges the attack surface. Containers already solve isolation in compute, network, and storage, and DevSecOps helps the business improve code security and image security. More importantly, when a vulnerability appears in the software supply chain, such as the recent Nginx 0-day, the business must be upgraded to fix it. In a container environment we can resolve the vulnerability quickly through batched rolling updates of the business; even problems in the platform itself can be fixed with one-click platform upgrades.
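A batched, zero-downtime vulnerability fix comes down to a Deployment rolling-update policy. The fragment below shows the relevant settings as a Python dict; the field names follow the Kubernetes Deployment API, while the specific values (surge one pod, allow none unavailable) are one conservative choice, not the only one.

```python
# Sketch of the Deployment rolling-update settings behind a safe batched
# vulnerability fix: add one replacement pod at a time and never fall
# below the desired replica count.

rolling_update_strategy = {
    "strategy": {
        "type": "RollingUpdate",
        "rollingUpdate": {
            "maxSurge": 1,        # at most one extra pod during the rollout
            "maxUnavailable": 0,  # keep full capacity while replacing pods
        },
    }
}

print(rolling_update_strategy["strategy"]["type"])  # → RollingUpdate
```

Patching then means changing only the image tag: the platform replaces pods batch by batch, and a bad patch can be rolled back the same way.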


Edge-native is a goal for the business, but it is not solely the responsibility of the business development team. As with cloud-native, the infrastructure and platform side must provide sufficient support, and containers are a key means of realizing edge-native.

Container Edge vs. Hyper-Converged Edge


 

In conversations with customers, some hesitate over whether to use hyper-convergence to solve the edge platform problem. Here we briefly compare, from a technical perspective, how well hyper-convergence and containers each fit edge computing.


Here we abstract the hyper-converged and container architectures. On the left is hyper-convergence: several hyper-converged servers are deployed at the site, and the edge side is managed centrally through a cloud management platform in the cloud. On the right is container technology: Kubernetes is deployed at the site to manage the physical servers, and the edge side is managed centrally through container cloud management in the cloud.


We can compare the two solutions layer by layer, from cloud to edge. First, in the cloud, the hyper-converged solution manages virtual machines; it essentially manages resources and cannot perceive the running state of the applications. The container solution manages containers, that is, the business itself, which is far friendlier for business operations: we care more about how the business is doing than about resources alone.


At the cloud-edge network level, besides the usual management and monitoring traffic, a major consumer of bandwidth is the virtual machine image or container image. A container image is close in size to the service it packages, typically 1%-10% the size of a virtual machine image. So when delivering an image to the edge side, container technology greatly conserves the already scarce cloud-edge network bandwidth.
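A back-of-envelope calculation makes the bandwidth gap concrete. The sizes below are illustrative assumptions, with the container image set at 5% of the VM image, inside the 1%-10% range quoted above.

```python
# Back-of-envelope sketch: total data pushed when delivering one update
# to many edge sites, VM image versus container image. Sizes are
# illustrative assumptions.

def total_transfer_gb(image_gb, sites):
    return image_gb * sites

vm_image_gb = 10.0          # a typical VM image, assumed for illustration
container_image_gb = 0.5    # ~5% of the VM image
sites = 100

print(total_transfer_gb(vm_image_gb, sites))         # → 1000.0
print(total_transfer_gb(container_image_gb, sites))  # → 50.0
```

Over a constrained cloud-edge link, shipping 50 GB instead of 1 TB per release is often the difference between an overnight rollout and a multi-day one; layered image pulls shrink the container number further still.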


On the edge side, hyper-convergence runs services in virtual machines, and each running virtual machine needs its own operating system, so its runtime overhead is relatively high. Containers share the host operating system: the resources a container occupies are essentially those of the business inside it, with no extra overhead. Container technology therefore has a large advantage in resource usage, leaving more CPU, memory, and other resources for the business.


Most important of all is platform support for the business. Containers are leaner and more flexible: fault self-healing, elastic scaling, and grayscale releases are container strengths, while implementing them with virtualization is cumbersome.
After discussing the options, most customers therefore prefer a pure-container edge solution, using containers for near-edge and edge-gateway construction, while the far edge and cloud combine virtualization, hyper-convergence, and containers.

Cloud-Native Edge Computing Empowers ISV Delivery and O&M


 

Traditional edge computing has clear usage scenarios: it moves the business closer to the data. Today, more and more ISVs are considering edge-computing-style solutions to improve the efficiency of service delivery and O&M, across industries such as education, healthcare, and broadcasting.


Traditionally, an ISV had to deploy and develop on site for each customer, and a project could require on-site service lasting weeks, months, or even years, driving up the ISV's labor costs.


To solve this, ISVs have begun adopting edge computing solutions to deliver, operate, and maintain remote services centrally. The approach is simple: deploy a container management platform, namely Kubernetes, in each customer environment; deploy an edge management module in the cloud; and connect the customer environments to the cloud network, so that customer workloads are operated and maintained uniformly from the cloud. This architecture improves customer-service response times while reducing the cost of on-site project work. Today this is still an innovative model, but in the future it may help companies form new business models, improve operational capability, and achieve sustainable revenue.
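The cloud-side half of this model reduces to fanning one release out to every registered customer cluster. The sketch below is hypothetical: the cluster names, endpoints, manifest fields, and the `deploy` callable are all invented stand-ins for a real per-cluster API client.

```python
# Hypothetical sketch of the ISV delivery model: the cloud-side management
# module applies one release to every registered customer cluster. The
# `deploy` callable stands in for a real per-cluster Kubernetes client.

def push_release(clusters, manifest, deploy):
    """Apply `manifest` to each customer cluster; collect per-site results."""
    results = {}
    for name, endpoint in clusters.items():
        results[name] = deploy(endpoint, manifest)
    return results

clusters = {"school-a": "https://school-a.example:6443",
            "hospital-b": "https://hospital-b.example:6443"}
manifest = {"app": "report-service", "image": "registry.example/report:1.4.2"}

# Stub deploy for illustration: pretend every apply succeeds.
results = push_release(clusters, manifest, deploy=lambda ep, m: "ok")
print(results)  # → {'school-a': 'ok', 'hospital-b': 'ok'}
```

Collecting per-site results rather than failing on the first error matters here: customer links go down independently, and the ISV needs to know exactly which sites still await the release.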


Start Your Cloud-Edge Collaboration Experience Now


If you would like to become a Lingqueyun ISV partner, or have enterprise-level consulting or trial needs, please contact us to explore best practices in cloud-native edge computing.

Origin blog.csdn.net/alauda_andy/article/details/126649066