Edge Computing Architecture Analysis

The top ten keywords of cloud computing in 2021 are: cloud native, high performance, chaos engineering, hybrid cloud, edge computing, zero trust, optimized governance, digital government, low-carbon cloud, and enterprise digital transformation.
This article mainly refers to the following articles
https://mp.weixin.qq.com/s/sULsk-JNNaLPa9NG69fb3w
https://mp.weixin.qq.com/s/LaqElDQjmr0xpZ7S9sNxwA
https://mp.weixin.qq.com/s/Jcy_zv4xI7giktRpA4y3-A
1. Cloud native: cloud computing architecture is accelerating its reconstruction with cloud native as the technical core
With China's accelerated deployment of "new infrastructure", cloud computing has ushered in new development opportunities, and the accelerating digital transformation of enterprises across every industry places new demands on the efficiency of cloud computing. With its unique technical characteristics, cloud native fits the essential needs of cloud computing development well, and is becoming the technical core driving the "qualitative change" of cloud computing.
In the future, it will be the general trend to reconstruct IT architecture with cloud native as the technical core.
2. High performance: cloud high-performance computing drives the development of the digital economy
At present, computing power is driving cloud computing, big data, artificial intelligence and smart applications from concept to reality, and has become the focus of "computing power-dependent" industries such as big data and cloud computing.
With the continuous development of cloud computing, computing power on the cloud has been continuously enriched and enhanced along the three dimensions of computing, network and storage resources, making task scheduling more transparent and opening access to more applications. With this advantage, the high-performance cloud market has risen against the trend.
3. Chaos engineering: escorting the stability of complex systems
The difficulty of guaranteeing the stability of complex systems is becoming a pain point for the industry. The emergence and rise of chaos engineering escorts the stability of complex systems, ensuring that distributed systems in production remain resilient even under out-of-control conditions.
At present, although chaos engineering has gradually been put into practice in industries such as the Internet, finance, communications, and industry, it is still at an early exploratory stage, and standards and norms are urgently needed to promote its healthy development. The China Academy of Information and Communications Technology has compiled standards such as "Chaos Engineering Platform Capability Requirements", "Chaos Engineering Maturity Model" and "Software System Stability Measurement Model", has carried out related evaluation work on chaos engineering, and will also set up a chaos engineering laboratory.
4. Hybrid cloud: becoming the mainstream model for enterprises moving to the cloud
With the further clarification of the 14th Five-Year Plan, hybrid cloud has become one of the focuses of future domestic cloud computing development. In recent years, the rapid development of hybrid cloud technologies and solutions has deepened their application across industries, making hybrid cloud the mainstream model for enterprises migrating to the cloud.
From the perspective of market acceptance, 82% of users worldwide have adopted a hybrid cloud deployment model. From the perspective of industry supply, many vendors, including public cloud service providers, private cloud providers, telecom operators, traditional IT service providers, and cloud management service providers, are attracted by the broad prospects of hybrid cloud and have launched their own solutions. From the perspective of industry applications, the landing practices and application scenarios of hybrid cloud are becoming ever richer.
5. Edge computing: on the rise
Edge computing is on the verge of taking off, with increasing industry attention, an increasingly mature technology system, increasingly rich application scenarios, and evolving standards.
Throughout the entire edge computing industry ecosystem, companies and organizations such as chip equipment, cloud service providers, operators, software and solution providers, and open source organizations have launched related products and services, and the entire ecosystem has become increasingly complete.
The "Top Ten Cases of Cloud-Edge Collaboration in 2021" released by the China Academy of Information and Communications Technology shows that edge computing has been applied in key fields such as industry and transportation, where it plays an important role in digital transformation.
6. Zero trust: continuous integration with cloud-native security
As enterprises accelerate their move to the cloud, the traditional perimeter-centered security protection system has hit bottlenecks, and concepts such as zero trust and cloud-native security have emerged, providing enterprises with guidelines for a new generation of security systems.
At present, zero trust and cloud-native security are accelerating toward integration. First, at the operation stage, zero trust is increasingly delivered as a cloud-native security product: zero trust has evolved from privatized deployment to SaaS services; SD-WAN integrates zero trust to realize secure access service edge (SASE); and zero trust delivered on the cloud gains elastic expansion of security capacity to cope with massive access requests. At the same time, micro-segmentation, a key zero-trust technique, controls east-west traffic within the cloud, making up for the shortcomings of traditional security mechanisms in cloud environments. Second, cloud-native security emphasizes building security in from the development stage: more and more enterprises are designing application systems on zero-trust principles, so cloud services and applications on the cloud achieve native zero trust and their security capabilities improve greatly.
7. Optimized governance: enterprise cloud adoption accelerates demands for optimization and governance
As enterprises deepen their use of the cloud, their focus has gradually shifted from cloud consulting and migration to post-migration optimization, and a cloud optimization governance system has gradually taken shape.
The cloud optimization governance system optimizes and improves the full life cycle of cloud strategy formulation, route planning, adoption and implementation, and on-cloud optimization, helping enterprises understand and use the cloud better and providing new impetus for their digital transformation.
8. Digital government: digital technology enables innovation in government governance
Improving the level of digital government construction is an important chapter of the 14th Five-Year Plan. As digital government ushers in a blue-ocean market, companies have accelerated their deployments. Giving full play to the enabling role of digital technologies such as cloud computing, promoting the reengineering of government governance processes and the optimization of governance models, and continuously improving the scientific quality of decision-making and the efficiency of services is the future trend of digital government.
In the future, the level of digital government construction and the maturity of operational effects will become the focus of the industry.
9. Low-carbon cloud: a technology engine for enterprise digitalization and energy saving and carbon reduction
With the accelerated development of the digital economy, enterprise data centers have become major energy consumers, severely restricting the green development of enterprises and society as a whole. The low-carbon cloud can improve resource efficiency and empower society to save energy and reduce carbon.
"Low-carbon cloud" refers to using cloud computing to improve the utilization of computing, storage, network and other resources, comprehensively improving the resource efficiency of the whole society, and integrating cloud computing with big data, artificial intelligence and other technologies to help enterprises and society achieve energy saving and carbon reduction goals.
10. Digital transformation of enterprises: implementation from macro to micro
The digital transformation of enterprises is an important strategic means for the country to promote economic and social development. The 2017 government work report put forward the concept of the "digital economy" for the first time, and the concept has been written into the government work report four times so far. The 14th Five-Year Plan explicitly proposes a series of important goals such as "driving the overall transformation of production methods, lifestyles and governance methods through digital transformation". The concept of digitalization is gradually being implemented, from the macro level down to the micro level, in every aspect of the enterprise.
Overall Architecture of Edge Computing
The overall edge computing system is divided into three parts: cloud, edge, and end, as shown in Figure 2-1.


▲Figure 2-1 The overall architecture of edge computing
01 Cloud
The CPU supports X86 and ARM architectures; the operating system supports Linux, Windows and macOS; the container runtime supports Docker, Containerd and Cri-o; cluster orchestration uses Kubernetes, which includes control nodes, compute nodes, and cluster storage.
The core components of the control node are Kube-apiserver, Kube-controller-manager and Kube-scheduler; the compute node components are Kubelet and Kube-proxy; and the cluster storage component is Etcd.
Loads on the cloud run as Pods. A Pod is composed of Containers, and a Container is an independent space isolated on top of the operating system by namespace and cgroup mechanisms.
02 Edge
The CPU supports X86 and ARM architectures; the operating system supports Linux; the container runtime supports Docker; edge cluster orchestration uses KubeEdge, which includes CloudCore for the cloud part, EdgeCore for the edge part, and SQLite for edge cluster storage. Loads on the edge run in Pods.
03 End
The end consists of EdgeX Foundry, a service framework running on the edge cluster to manage end devices, and the end devices themselves. From bottom to top, EdgeX Foundry comprises the device service layer, the core service layer, the support service layer, and the export service layer, which is also the order in which data is processed from the physical domain to the information domain.
The device service layer is responsible for interacting with southbound devices; the core service layer sits between northbound and southbound, serving as a message pipeline and handling data storage; the support service layer comprises a range of microservices, mainly providing edge analysis and intelligent analysis services; the export service layer is the gateway layer of the entire EdgeX Foundry service framework.
Detailed explanation of edge computing with multiple diagrams
This part explains the edge computing system from two aspects: components and concept analysis.

  1. Components: The edge computing system consists of three parts: cloud, edge, and end, and each part has more than one solution. Choose Kubernetes for the cloud component, KubeEdge for the edge component, and EdgeX Foundry for the end component.
  2. Concept Analysis: Explain the related concepts involved in the cloud, edge, and end components that make up the edge computing system.
    01 Composition of edge computing system
  1. Cloud - Kubernetes
    Kubernetes is Google's open source large-scale container orchestration solution. The entire solution consists of core components, third-party components, and container runtimes, as shown below.
    1) Core components
    - Kube-apiserver: the message bus for communication between Kubernetes internal components, and the only gateway for exposing cluster API resources
    - Kube-scheduler: matches scheduled loads to the best available resources
    - Kube-proxy: acts as a proxy for load access within a node and between nodes
    - Kubelet: operates the corresponding loads according to Kube-scheduler's scheduling results
    2) Third-party components
    - Etcd: stores the cluster's metadata and status data
    - Network plugin: a pure layer-3 network solution with no additional encapsulation or decapsulation and low performance loss
    - CoreDNS: responsible for domain name resolution within the cluster
    - Containerd: a container runtime comparable to Docker in footprint and stability
    - Cri-o: a lightweight container runtime whose stability is currently not guaranteed
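The Kube-scheduler's job of matching loads to the best available resources can be sketched as a filter-and-score loop. This is a hypothetical miniature for illustration only, not the real Kube-scheduler, which runs many pluggable filter and score stages; the node names and resource figures below are made up.

```python
# Toy sketch of a scheduler scoring loop: pick the node with the most
# free CPU/memory that still fits the pod's request.

def schedule(pod_request, nodes):
    """Return the name of the best node for the request, or None."""
    best_node, best_score = None, -1.0
    for name, free in nodes.items():
        # Filter: the node must have enough free CPU (cores) and memory (MiB).
        if free["cpu"] < pod_request["cpu"] or free["mem"] < pod_request["mem"]:
            continue
        # Score: prefer the node with the most remaining headroom.
        score = (free["cpu"] - pod_request["cpu"]) + (free["mem"] - pod_request["mem"]) / 1024
        if score > best_score:
            best_node, best_score = name, score
    return best_node

nodes = {
    "edge-1": {"cpu": 2.0, "mem": 2048},
    "edge-2": {"cpu": 4.0, "mem": 8192},
}
print(schedule({"cpu": 1.0, "mem": 1024}, nodes))  # edge-2 has more headroom
```

The filter step mirrors the "available resources" check, and the score step mirrors the "best match" choice described above.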

  2. Edge - KubeEdge
    KubeEdge is a Kubernetes-based edge computing platform open-sourced by Huawei. It extends containerized application orchestration from the cloud to edge nodes and devices, and provides infrastructure support for networking, application deployment, and metadata synchronization between cloud and edge. KubeEdge is licensed under Apache 2.0 and is free for personal or commercial use.
    KubeEdge consists of a cloud part, an edge part, and a container runtime, as shown below.
    - Cloud part | CloudCore: responsible for delivering the cloud part's events and instructions to the edge part, and for receiving the status and event information reported by the edge part
    - Edge part | EdgeCore: responsible for receiving the events and instructions delivered by the cloud part, and for reporting the edge's status and event information to the cloud part
    - Container runtime | Docker: KubeEdge currently supports Docker by default; officially, container runtimes such as Containerd and Cri-o will be supported in the future
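The division of labor between CloudCore and EdgeCore can be sketched as a two-way exchange: instructions flow down, status flows up. This toy model is illustrative only; real KubeEdge uses a WebSocket/QUIC tunnel with reliable delivery, and the class and method names here are invented.

```python
# Toy sketch of the CloudCore/EdgeCore exchange: the cloud pushes
# instructions down, the edge applies them and reports status back up.
from collections import deque

class CloudCore:
    def __init__(self):
        self.downstream = deque()   # instructions waiting for the edge
        self.reported = []          # status received from the edge
    def send_instruction(self, inst):
        self.downstream.append(inst)
    def receive_status(self, status):
        self.reported.append(status)

class EdgeCore:
    def __init__(self, cloud):
        self.cloud = cloud
        self.applied = []
    def sync(self):
        # Pull every pending instruction, apply it, and report back.
        while self.cloud.downstream:
            inst = self.cloud.downstream.popleft()
            self.applied.append(inst)
            self.cloud.receive_status({"instruction": inst, "state": "Running"})

cloud = CloudCore()
edge = EdgeCore(cloud)
cloud.send_instruction("deploy nginx pod")
edge.sync()
print(cloud.reported)  # the edge's status made it back to the cloud
```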
  3. End - EdgeX Foundry
    EdgeX Foundry is an open source edge computing IoT software framework project run by the Linux Foundation. The core of the project is an interoperable framework based on a reference software platform that is completely independent of hardware and operating systems, building a plug-and-play component ecosystem and accelerating the deployment of IoT solutions. EdgeX Foundry enables interested parties to collaborate freely in open and interoperable IoT scenarios, whether using open standards or proprietary solutions.
    The EdgeX Foundry microservice collection constitutes four microservice layers and two enhanced base system services. The four microservice layers cover the path from physical-domain data collection to information-domain data processing, while the two enhanced base system services provide support for those four layers.
    From the physical layer to the application layer, the four microservice layers are the Device Service layer, the Core Service layer, the Supporting Service layer, and the Export Service layer. The two enhanced base system services cover security and system management, as described below.
    1) Device service layer
    - Device-modbus-go: a Go implementation of the service for connecting devices that use the Modbus protocol
    - Device-mqtt-go: a Go implementation of the service for connecting devices that use the MQTT protocol
    - Device-sdk-go: a Go SDK for connecting other kinds of devices, providing greater flexibility for device access
    2) Core service layer
    - Core-command: responsible for sending commands from northbound services down to devices
    - Core-metadata: responsible for describing the capabilities of devices, providing functions to configure new devices and pair them with their device services
    - Core-data: responsible for collecting data from the southbound device layer and providing data services to northbound services
    - Registry & Config: responsible for service registration and discovery, providing information about EdgeX Foundry services, including their configuration properties, to other EdgeX Foundry microservices
    3) Support service layer
    - Support-logging: responsible for logging
    - Support-notification: responsible for event notification
    - Support-scheduler: responsible for data scheduling
    4) Export service layer
    - Export-client: the client for exporting data
    - Export-distro: the application for exporting data
    5) Two enhanced base system services
    - Sys-mgmt-agent: provides the API to start and stop all microservices
    - Sys-mgmt-executor: responsible for the final execution of starting and stopping all microservices
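The four microservice layers can be sketched as a bottom-up pipeline that a single device reading travels through, from the physical domain to the information domain. Function names, the alert threshold, and the data shapes below are invented for illustration and are not the real EdgeX APIs.

```python
# Toy sketch of a reading's bottom-up journey through EdgeX Foundry's
# four layers: device service -> core service -> support -> export.

def device_service_layer(raw):
    # Southbound: translate a device-protocol value into a reading.
    return {"device": raw["device"], "value": raw["value"]}

def core_service_layer(reading, store):
    # Core-data role: persist the reading and pass it northbound.
    store.append(reading)
    return reading

def supporting_service_layer(reading):
    # Edge analytics: flag abnormal values (threshold is made up).
    reading["alert"] = reading["value"] > 80
    return reading

def export_service_layer(reading):
    # Gateway: hand the enriched reading to northbound consumers.
    return {"exported": True, **reading}

store = []
raw = {"device": "thermo-1", "value": 86}
out = export_service_layer(
    supporting_service_layer(core_service_layer(device_service_layer(raw), store)))
print(out)
```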
    02 Concept analysis
    The related concepts of cloud, edge, and end that make up the edge computing system are as follows.
    - Cloud: the concepts involved include Container, Pod, ReplicaSet, Service, Deployment, DaemonSet, Job, Volume, ConfigMap, NameSpace, Ingress, etc.
    - Edge: the current edge implementation trims the cloud's original components and sinks them to the edge, so the concepts involved at the edge are a subset of the cloud's, consistent with the cloud
    - End: a set of microservices deployed on the end; no new concepts are currently introduced
    At present, both the edge and the end use the cloud's concepts, so this section mainly analyzes the cloud's concepts, explained below in the form of diagrams. As can be seen from Figure 1-1, a Container is an environment isolation technology on top of the operating system. The independent space isolated by the container contains the runtime environment and dependent libraries required by the application, and containers on the same host share the operating system kernel.
    ▲Figure 1-1 Container analysis
    As can be seen from Figure 1-2, a Pod is composed of a group of containers, and the containers in the same Pod share storage and network namespaces. In an edge computing system, Pod is the smallest schedulable unit and the final carrier of application load.
    ▲Figure 1-2 Pod analysis
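The shared-namespace idea in Figure 1-2 can be sketched with a toy Pod model: containers in the same Pod see the same localhost ports. The class, port, and container names are invented; real Pods are built from Linux namespaces and cgroups.

```python
# Toy model of a Pod: a group of containers sharing one network
# namespace, so they reach each other via localhost ports.

class Pod:
    def __init__(self, name):
        self.name = name
        self.shared_ports = {}      # one network namespace for the whole pod
        self.containers = []
    def add_container(self, cname, port=None, serves=None):
        self.containers.append(cname)
        if port is not None:
            self.shared_ports[port] = serves
    def localhost_lookup(self, port):
        # Any container in the pod sees the same localhost ports.
        return self.shared_ports.get(port)

pod = Pod("web")
pod.add_container("nginx", port=80, serves="hello")
pod.add_container("log-sidecar")   # joins the same shared namespace
print(pod.localhost_lookup(80))    # the sidecar reaches nginx on localhost:80
```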
    As can be seen from Figure 1-3, ReplicaSet is used to manage Pods and is responsible for keeping the expected number of Pods consistent with the actual number of Pods. In an edge computing system, ReplicaSet is responsible for maintaining multiple instances of applications and self-healing failures.
    ▲Figure 1-3 ReplicaSet analysis
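The keep-desired-equal-to-actual behavior of a ReplicaSet can be sketched as a single reconciliation pass. This is a simplification: the real controller watches the API server and reacts to events, and the pod-naming scheme here is invented.

```python
# Toy sketch of ReplicaSet reconciliation: compare the desired replica
# count with the pods that actually exist, then create or delete pods
# to close the gap.

def reconcile(desired, actual_pods):
    """Return the pod list after one reconciliation pass."""
    pods = list(actual_pods)
    while len(pods) < desired:                 # self-heal: too few pods
        pods.append(f"pod-{len(pods)}")
    while len(pods) > desired:                 # scale down: too many pods
        pods.pop()
    return pods

pods = reconcile(3, ["pod-0"])   # two replicas crashed earlier
print(pods)                      # back to three instances
pods = reconcile(3, pods)        # steady state: nothing to change
```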
    As can be seen from Figure 1-4, Service acts as an access proxy for a group of Pods and performs load balancing among multiple Pods. The life cycle of a Pod is relatively short and changes frequently. In addition to serving as an access proxy and load balancing for the related Pods, the Service also maintains the corresponding relationship with the Pods.
    ▲Figure 1-4 Service analysis
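The Service's proxy-and-balance role can be sketched as label selection plus round-robin dispatch. kube-proxy actually implements this with iptables/IPVS rules; the IPs and labels below are made up.

```python
# Toy sketch of a Service: select pods by label, then round-robin
# requests across whichever matching pods currently exist.
import itertools

class Service:
    def __init__(self, selector):
        self.selector = selector
    def endpoints(self, pods):
        # Re-evaluated on every call, so short-lived pods come and go freely.
        return [p["ip"] for p in pods if p["labels"].get("app") == self.selector]
    def balance(self, pods, n_requests):
        eps = itertools.cycle(self.endpoints(pods))
        return [next(eps) for _ in range(n_requests)]

pods = [
    {"ip": "10.0.0.1", "labels": {"app": "web"}},
    {"ip": "10.0.0.2", "labels": {"app": "web"}},
    {"ip": "10.0.0.9", "labels": {"app": "db"}},
]
svc = Service("web")
print(svc.balance(pods, 4))  # ['10.0.0.1', '10.0.0.2', '10.0.0.1', '10.0.0.2']
```

Because the endpoint list is recomputed from labels on each call, the Service keeps its correspondence with Pods even as they are replaced.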
    As can be seen from Figure 1-5, Deployment is an abstraction over ReplicaSet that adds advanced functions, such as rolling updates, on top of it; otherwise its role and application scenarios are the same as ReplicaSet's.
    ▲Figure 1-5 Deployment analysis
    As can be seen from Figure 1-6, DaemonSet is responsible for starting an instance of the specified Pod on each node. This function is generally used in scenarios where network plug-ins, monitoring plug-ins, and log plug-ins are deployed.
    ▲Figure 1-6 DaemonSet analysis
    As can be seen from Figure 1-7, a Job is used to manage Pods that run as batch tasks, and Pods of this type are triggered in batches, possibly on a schedule. Unlike Pods managed by a Deployment, Pods managed by a Job exit after executing their task and do not run permanently. In edge computing systems, AI models are generally trained in Pods managed by Jobs.
    ▲Figure 1-7 Job analysis
    As can be seen from Figure 1-8, Volume is used to provide storage for Pods and is associated with the corresponding Pods by mounting. Volumes are divided into temporary storage and persistent storage. Volumes of temporary storage type will be deleted when Pods are deleted, and Volumes of persistent storage type will not be deleted when Pods are deleted.
    ▲Figure 1-8 Volume analysis
    As can be seen from Figure 1-9, a ConfigMap is the carrier for the configuration a Pod uses, and is associated with the Pod through environment variables (env) and file volumes. In an edge computing system, managing configuration information through ConfigMaps is more convenient. Sensitive configuration belongs in the related Secret resource, which Kubernetes stores separately from ordinary configuration, keeping such information more secure.
    ▲Figure 1-9 ConfigMap analysis
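The two association paths in Figure 1-9, environment variables and file volumes, can be sketched as two projections of the same key-value data. The keys, values, and mount path below are invented; the kubelet performs the real injection.

```python
# Toy sketch of how a ConfigMap's keys reach a pod: either injected as
# environment variables or rendered as files in a mounted volume.

configmap = {"LOG_LEVEL": "debug", "REGION": "edge-zone-a"}

def as_env(cm):
    # env-style injection: every key becomes an environment variable.
    return dict(cm)

def as_files(cm, mount_path="/etc/config"):
    # volume-style injection: every key becomes a file holding the value.
    return {f"{mount_path}/{key}": value for key, value in cm.items()}

env = as_env(configmap)
files = as_files(configmap)
print(env["LOG_LEVEL"], files["/etc/config/REGION"])
```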
    As can be seen from Figure 1-10, NameSpace is a mechanism for isolating resources such as Pod, Service, ConfigMap, Deployment, and DaemonSet. It is generally used when different teams in the same company need to isolate resources. Edge computing systems use NameSpace to limit the resources (CPU, memory) a team can use and the resources available for creating loads.
    ▲Figure 1-10 NameSpace Analysis
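The per-team resource limiting described above can be sketched as a quota check at admission time. The quota numbers are invented; Kubernetes implements this mechanism with the ResourceQuota object bound to a namespace.

```python
# Toy sketch of NameSpace resource limiting: a new pod is admitted only
# if the namespace's remaining quota still covers its request.

quotas = {"team-a": {"cpu": 4.0, "mem": 4096}}   # per-namespace limits
usage = {"team-a": {"cpu": 3.5, "mem": 1024}}    # what is already in use

def admit(namespace, request):
    """Allow the pod only if the namespace's quota still covers it."""
    q, u = quotas[namespace], usage[namespace]
    if u["cpu"] + request["cpu"] > q["cpu"] or u["mem"] + request["mem"] > q["mem"]:
        return False
    u["cpu"] += request["cpu"]
    u["mem"] += request["mem"]
    return True

print(admit("team-a", {"cpu": 0.5, "mem": 512}))   # fits: True
print(admit("team-a", {"cpu": 1.0, "mem": 512}))   # over CPU quota: False
```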
    As can be seen from Figure 1-11, Ingress can act as a bridge between the cluster and the outside world: it exposes services in the cluster to the outside while reasonably managing and controlling the traffic entering the cluster. In an edge computing system, Ingress is a resource object that works together with an Ingress Controller and a reverse proxy.
    ▲Figure 1-11 Ingress analysis
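The bridge role of Ingress can be sketched as host-and-path rule matching. The hostnames and service names below are invented; a real Ingress Controller (a reverse proxy) applies such rules to live traffic.

```python
# Toy sketch of Ingress routing: match a request's host and path prefix
# against the rule list and forward it to the backing Service.

rules = [
    {"host": "api.example.com", "path": "/v1", "service": "api-svc"},
    {"host": "www.example.com", "path": "/",   "service": "web-svc"},
]

def route(host, path):
    """Return the Service a request should reach, or None if unmatched."""
    for rule in rules:
        if rule["host"] == host and path.startswith(rule["path"]):
            return rule["service"]
    return None

print(route("api.example.com", "/v1/users"))  # api-svc
print(route("shop.example.com", "/"))         # None: traffic is not admitted
```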



Origin blog.csdn.net/wujianing_110117/article/details/123948080