4 things you need to understand about the true state of edge computing

About the Author

James Falkoff is an investor at Converge, a Boston-based venture capital firm.

Edge computing has occupied a place in the technological zeitgeist as an innovative, cutting-edge capability. For several years, conventional wisdom has held that edge computing is the future of computing. Until recently, though, the discussion remained largely hypothetical, because the infrastructure required to support edge computing still had a long way to go.

Now that is changing, as a variety of edge computing resources (from micro data centers to specialized processors to the necessary software abstractions) make their way into the hands of application developers, entrepreneurs, and large enterprises. We no longer have to rely on speculation when asking what edge computing is good for and what it means. So what does real-world development tell us? In particular, does the hype around edge computing match the reality?

In this article, I outline the current state of the edge computing market. In short, the edge computing trend is real: driven by cost and performance, demand for decentralizing applications continues to grow. Some aspects of edge computing have been over-hyped, while others have flown under the radar. The four key points below are meant to give decision-makers a practical understanding of what edge computing can do now and in the future.


1. Edge computing is more than just low latency

Edge computing is a paradigm that brings computing and data storage closer to where they are needed. It stands in sharp contrast to the traditional cloud model, in which computation is concentrated in a handful of hyperscale data centers. The edge can be anywhere closer to the end user or device than a traditional cloud data center: 100 miles away, one mile away, on-premises, or on the device itself. Whatever the approach, the traditional edge computing narrative emphasizes minimizing latency, either to improve user experience or to enable new latency-sensitive applications. That framing sells edge computing short. While latency reduction is an important use case, it is not necessarily the most valuable one. Another use case, minimizing traffic to and from the cloud, sometimes called "cloud offload," will likely deliver at least as much economic value as latency reduction.
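
To see why distance matters, and why it is only part of the story, consider a back-of-envelope calculation. The Python sketch below (with illustrative distances of my own choosing) computes the theoretical lower bound on round-trip time imposed by the speed of light in fiber; real-world latency adds routing, queuing, and processing on top.

```python
# Back-of-envelope check: propagation delay alone, ignoring routing,
# queuing, and processing. Light in fiber travels at roughly 2/3 the
# speed of light, i.e. about 200 km per millisecond.
FIBER_SPEED_KM_PER_MS = 200.0

def min_round_trip_ms(distance_km: float) -> float:
    """Theoretical lower bound on round-trip time over fiber."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS

# Illustrative distances, not measurements of any real deployment.
for label, km in [("on-device edge", 0.001),
                  ("metro edge (~100 miles)", 160),
                  ("regional cloud (~1000 miles)", 1600)]:
    print(f"{label:28s} >= {min_round_trip_ms(km):6.2f} ms RTT")
```

Even at 1,000 miles the physical floor is only around 16 ms, which is why latency alone does not justify the edge for every application; the economics of moving data often matter more.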

The fundamental driver of cloud offload is the enormous growth in the amount of data generated by users, devices, and sensors. "Fundamentally, the edge is a data problem," Chetan Venkatesh, CEO of Macrometa, a startup tackling data challenges in edge computing, told me. Cloud offload arises because it costs real money to move all that data, so many companies would rather not move it at all. Edge computing offers a way to extract value from data where it is generated, without ever moving it beyond the edge. If necessary, the data can be reduced to a much smaller subset that is economical to send to the cloud for storage or further analysis.
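
As an illustration of this data-reduction pattern, here is a minimal Python sketch that summarizes a window of raw sensor readings at the edge and ships only the compact summary upstream. The ingest endpoint is a hypothetical placeholder, not any real service.

```python
import json
import statistics
import urllib.request

# Hypothetical cloud ingest endpoint, used only for illustration.
CLOUD_ENDPOINT = "https://example.com/ingest"

def summarize(readings: list[float]) -> dict:
    """Reduce a window of raw readings to a small statistical summary."""
    return {
        "count": len(readings),
        "mean": statistics.fmean(readings),
        "min": min(readings),
        "max": max(readings),
    }

def upload(summary: dict) -> None:
    # Send kilobytes of summary instead of gigabytes of raw data.
    req = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=json.dumps(summary).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```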

A classic use case for cloud offload is processing video or audio data, two of the most bandwidth-hungry data types. A retailer with stores in more than 10,000 locations in Asia is using edge computing to process in-store video surveillance and to run language translation services, according to a contact of mine who was recently involved in the deployment. But video and audio are not the only data sources that are expensive to move. Another contact told me that a large IT software vendor analyzes real-time data from its customers' on-premises IT infrastructure to anticipate problems and optimize performance, and uses edge computing to avoid sending all of that data back to AWS. Industrial equipment, too, generates enormous amounts of data and is a prime candidate for cloud offload.

2. Edge computing is an extension of the cloud

Although the early messaging suggested the edge would displace the cloud, it is more accurate to say that the edge expands the reach of the cloud. It will not put a dent in the ongoing trend of companies migrating workloads to the cloud. But a set of efforts is underway to extend the cloud formula of on-demand resource availability and abstracted physical infrastructure to locations increasingly distant from traditional cloud data centers. These edge locations will be managed with tools and approaches that evolved from the cloud, and as edge and cloud continue to develop, the line between them will blur.

In fact, the edge and the cloud are part of the same continuum. You can see this in the edge computing initiatives of public cloud providers such as AWS and Microsoft Azure. If your company wants to do on-premises edge computing, Amazon will send you an AWS Outpost, a pre-assembled rack of compute and storage that mimics the hardware design of Amazon's own data centers. It is installed in the customer's own data center and monitored, maintained, and upgraded by Amazon. Importantly, Outposts run many of the services AWS users rely on, such as the EC2 compute service, making the edge operationally similar to the cloud. Other major vendors offer products with similar goals. The signal is clear: cloud providers want to unify cloud and edge infrastructure under one umbrella.
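
To make the "edge that behaves like the cloud" idea concrete: with Outposts, the edge rack is driven through the standard EC2 API, by launching instances into a subnet that lives on the Outpost. Below is a minimal boto3 sketch, assuming an Outpost-backed subnet already exists; the AMI and subnet IDs are placeholders, not real resources.

```python
import boto3

# Standard EC2 client; the Outpost is addressed via its parent region.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder AMI ID
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",  # placeholder: subnet on the Outpost
)
print(response["Instances"][0]["InstanceId"])
```

The point of the sketch is that nothing here is edge-specific: the same API call that targets a cloud data center targets the rack in your building.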

3. Edge infrastructure is arriving in phases

Although some applications are best run on-premises, in many cases application owners would like to benefit from edge computing without supporting any local footprint. This requires access to a new kind of infrastructure: something that looks like the cloud, but is far more geographically distributed than the few dozen hyperscale data centers that make up the cloud today. This kind of infrastructure is gradually becoming available, and it is likely to evolve in three phases, each extending the edge's reach into a wider geographic area.

Phase 1: Multi-region and multi-cloud

The first step toward edge computing is one that many people might not think of as edge computing at all: using the multiple regions offered by public cloud providers. AWS, for example, has data centers in 22 geographic regions; an AWS customer serving users in North America and Europe can run its application in the Northern California and Frankfurt regions. Moving from one region to multiple regions can cut latency dramatically, and for a large class of applications this is all it takes to deliver a good user experience.
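
As a rough illustration of how an application might choose among regions, here is a Python sketch that measures TCP connect time to each regional endpoint and routes to the fastest. The hostnames are hypothetical; production systems typically rely on DNS-based latency routing or anycast rather than client-side probing.

```python
import socket
import time

# Hypothetical regional endpoints for the same service.
REGIONS = {
    "us-west-1": "service-us-west.example.com",
    "eu-central-1": "service-eu-central.example.com",
}

def connect_time_ms(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Time a TCP handshake as a crude proxy for network proximity."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000

fastest = min(REGIONS, key=lambda r: connect_time_ms(REGIONS[r]))
print(f"Routing traffic to {fastest}")
```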

At the same time, there is a trend toward multi-cloud, driven by considerations including cost efficiency, risk mitigation, avoiding vendor lock-in, and the desire to access best-in-class services from different providers. "Pursuing a multi-cloud strategy is a very important strategy and architecture today," Mark Weiner, CMO of the distributed cloud computing company Volterra, told me. Like the multi-region approach, multi-cloud marks a first step toward distributed workloads, on a path that leads to increasingly decentralized edge computing.

Phase 2: Regional Edge Computing

The second phase of the edge's evolution extends the edge one layer deeper, tapping infrastructure in hundreds or thousands of locations rather than hyperscale data centers in just a few dozen cities. It turns out one group of players already has such infrastructure: content delivery networks (CDNs). For 20 years, CDNs have been pioneers of edge computing, caching static content closer to end users to improve performance. While AWS has 22 regions, a typical CDN such as Cloudflare has 194 locations.

What is different now is that these CDNs have begun opening their infrastructure to general-purpose workloads, not just static content caching. Today, CDNs such as Cloudflare, Fastly, Limelight, StackPath, and Zenlayer offer some combination of containers as a service, VMs as a service, bare metal as a service, and serverless functions. In other words, they are starting to look more like cloud providers. Forward-looking cloud providers are building out such infrastructure too: AWS, having already taken the first step with multi-region infrastructure, has introduced the first of its so-called Local Zones, in Los Angeles, and has promised more.
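
To give a flavor of what "general workloads on CDN infrastructure" means, here is a sketch of the cache-aside logic a programmable edge function might run: serve from a local cache when possible, otherwise fetch from the origin and cache the result. It is written in Python for consistency with the other sketches here (real edge platforms such as Cloudflare Workers typically run JavaScript or WASM), and the origin URL is a placeholder.

```python
import time
import urllib.request

ORIGIN = "https://origin.example.com"  # hypothetical origin server
TTL_SECONDS = 60.0
_cache: dict[str, tuple[float, bytes]] = {}  # path -> (fetched_at, body)

def handle_request(path: str) -> bytes:
    """Serve a response at the edge, falling back to the origin on a miss."""
    now = time.time()
    hit = _cache.get(path)
    if hit and now - hit[0] < TTL_SECONDS:
        return hit[1]                        # served entirely at the edge
    body = urllib.request.urlopen(ORIGIN + path).read()
    _cache[path] = (now, body)               # populate the edge cache
    return body
```

The shift the article describes is precisely that this logic is no longer fixed by the CDN vendor; customers can now deploy arbitrary code of this shape at the edge.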

Phase 3: Access Edge

The third phase of the edge's evolution pushes the edge farther out still, to within one or two network hops of the end user or device. In traditional telecommunications terms this is the access portion of the network, so this type of architecture has been labeled the access edge. The typical form factor of the access edge is a micro data center, which can range in size from a single rack to roughly half a trailer, deployed by the side of a road or at the base of a cell tower. Behind the scenes, innovations in power and cooling are enabling ever-denser infrastructure in these small footprints.

New entrants such as Vapor IO, EdgeMicro, and EdgePresence have begun building these micro data centers in a handful of US cities. 2019 was the first year of this build-out, and from 2020 to 2021 capital will continue to pour into these expansion projects. By 2022, the returns on edge data centers will become a focus for investors, and ultimately those returns will answer the question: are there enough killer applications that need the edge this close to the end user or device?

We are only beginning to be able to answer that question. Many practitioners I have spoken with recently are skeptical that micro data centers at the access edge offer enough incremental benefit over regional data centers at the regional edge. Early adopters already use the regional edge in many ways, both for cloud offload and to reduce latency and optimize user experience (in online gaming, ad serving, and e-commerce, for example). By contrast, the applications that need the access edge's ultra-low latency and very short network routes sound further off: autonomous driving, drones, AR/VR, smart cities, remote surgery, and so on. More importantly, those applications must weigh the benefits of the access edge against simply doing the computation locally, on-premises or on-device. But a killer application for the access edge may well emerge, perhaps one nobody is paying attention to today. We will know more in a few years.

4. New software is needed to manage the edge

Above, I sketched the several architectures that fall under edge computing and noted that the "edge" can be located in many places. The industry's ultimate direction, however, is unification and standardization: no matter where the edge sits, the same tools and processes should manage both cloud and edge workloads. This will require evolving the software used to deploy, scale, and manage applications in the cloud, software that was historically designed with a single data center architecture in mind.

Startups such as Ori, Rancher, and Volterra, along with big-company offerings such as Google's Anthos and Microsoft's Azure Arc, are evolving cloud infrastructure software in exactly this direction. Virtually all of these products share a common foundation: Kubernetes, which has become the dominant way to manage containerized applications. But they go beyond Kubernetes's original design to support distributed fleets of Kubernetes clusters. Those clusters may sit atop a heterogeneous pool of infrastructure spanning the edge, on-premises environments, and the public cloud, yet thanks to these products they can all be managed uniformly.
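
As a hedged sketch of what multi-cluster management looks like underneath these products, the Python snippet below uses the official Kubernetes client library to run one loop across several clusters addressed by kubeconfig contexts. The context names are hypothetical, and products like Anthos and Azure Arc wrap this basic pattern in a managed control plane rather than a simple script.

```python
from kubernetes import client, config

# Hypothetical kubeconfig contexts: one cloud cluster, one in-store edge
# cluster, one on-premises cluster.
CONTEXTS = ["cloud-us-east", "edge-store-0042", "on-prem-dc1"]

for ctx in CONTEXTS:
    # Build a separate API client per cluster from the local kubeconfig.
    api_client = config.new_client_from_config(context=ctx)
    core = client.CoreV1Api(api_client=api_client)
    nodes = core.list_node().items
    print(f"{ctx}: {len(nodes)} nodes")
```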

Initially, the biggest opportunity for these products is to support phase one of the edge's evolution: moderately distributed deployments that use a handful of regions across one or more clouds. But that puts them in a good position to support the more distributed edge computing architectures on the horizon. "Solve the multi-cluster management and operations problem today, and you'll be in an advantageous position when the broader edge computing use cases arrive," Rafay Systems CEO Haseeb Budhani told me.

The edge's moment is coming

With resources to support edge computing now proliferating, edge-oriented thinking will become more common among those who design applications. After an era in which resources were centralized in a small number of cloud data centers, a countervailing force is now pushing toward decentralization. Edge computing is still in its very early days, but it has moved from the theoretical to the practical, and the industry is moving fast. The cloud itself is only about 14 years old, so there is every reason to believe that edge computing will leave its own mark on the computing landscape before long.
