Berkeley: serverless is the next-generation computing paradigm

Abstract: Serverless technology is the choice cloud vendors make based on economies of scale.

Introduction

At the recent HC2020, facing an era of diversified computing power, Huawei released three development kits for data center (DC) distributed computing, one of which is the Yuanrong component. Yuanrong is a distributed parallel application development framework built on function computing, and it aims to help developers define both the development model and the operating model of DC distributed computing. Regarding the function computing mentioned here, colleagues keep asking: what is the relationship, or the difference, between this and Serverless?

I have spent the past two years promoting the use of serverless technology in different scenarios within the company, so I would like to take this opportunity to share my understanding.

1. The essence of serverless

The most formal current definition of serverless (the CNCF white paper) describes it as a further development of cloud computing that, compared with cloud computing as we know it today, brings two key benefits: NoOps and Pay-as-You-Run. At this stage, serverless technology is represented by AWS Lambda, along with Microsoft Azure Functions, Google Cloud Functions, and others. In 2019, Berkeley published its outlook "Cloud Programming Simplified" and defined Serverless as the next-generation computing paradigm of cloud computing. Cloud computing has evolved from microservice technology toward serverless technology. By looking at the nature of cloud computing we can better understand the logic behind these technologies, and also understand why Berkeley, having successfully predicted the rise of cloud computing, has now turned its attention to serverless technology.

Figure 1: The current stage and form of serverless technology

1.1 From the rise of cloud computing to the evolution of the cloud-native ecosystem

The rise of cloud computing, following the vigorous growth of CPU hardware capabilities, has benefited from the OS+ISV software ecosystem and the maturity of virtualization technology. Cloud computing cleverly carries the OS+ISV ecosystem forward, so ISV software can be migrated to the cloud almost seamlessly. Cloud vendors use virtualization technology to provide IaaS services to customers, which satisfies two customer needs: 1. the operating environment of the application software does not change; 2. there is no longer any need to maintain the physical hosts, only the application software itself.

First, looking at the service form of cloud computing, enterprise applications and their infrastructure are now split across two roles: users and infrastructure providers, as shown in the figure below. This logical division is very important. In the original software ecosystem, both the infrastructure platform and the application software were managed and maintained by users themselves; now the role of a professional platform provider has emerged to supply the infrastructure.

Figure 2: Cloud computing brings the concept of infrastructure providers

Second, let us return to the process by which cloud computing rose. As shown in Figure 3, cloud vendors leveraged mature virtualization technology to offer IaaS services without changing the original OS+ISV way of working, so that users' software could migrate to the cloud vendor's infrastructure almost seamlessly. In this way, cloud vendors quickly attracted a group of enterprise users to the cloud. After this stage, cloud vendors such as AWS innovated rapidly: on top of IaaS they integrated cloud middleware, cloud security, third-party services, and a large amount of operational and business logic for cloud applications, gradually building the environment that the cloud-native ecosystem requires. After this first phase, container technology continued to evolve, and a cloud-native software ecosystem began to take shape. Clearly, the interface of the software ecosystem rose from the guest OS to the container level; application deployment is now also handled by the platform provider, and users no longer care what operating system runs on the infrastructure. In this software stack, the scope covered by the cloud vendor, the platform provider, has moved up one more step. This change is driven not only by the cloud-native ecosystem but also by the business logic of cloud vendors.

Figure 3: Schematic diagram of the emergence and evolution of cloud computing

Why do I say that? See the next section.

1.2 The business logic of cloud computing is built on economies of scale

At present, cloud computing is concentrated among a few cloud vendors, and the successful ones started from their own businesses that consume large amounts of infrastructure, from which their cloud business gradually expanded. For example, AWS and Alibaba Cloud grew out of their own e-commerce service platforms; Google Cloud and Azure, building on the scale of their own mobile user services and SaaS services respectively, have also gradually captured market share.

Observing how cloud computing has developed, we can say that cloud vendors follow a development model of economies of scale. There are two important phenomena, or laws, associated with economies of scale, and understanding them helps us understand the direction in which cloud technology is evolving.

The first is economies of scale itself. Simply put, as the scale of production (cloud computing) expands, the average cost (infrastructure cost) per unit of output (service revenue) tends to decrease. The British-born physicist Geoffrey West studied the growth laws of urban populations and industries and concluded that under economies of scale output grows super-linearly while cost grows sub-linearly, as shown in the figure below.
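
In symbols, and purely as an illustrative sketch (assuming both output and cost follow simple power laws in the scale N; the exponents are assumptions, not measured values):

```latex
\[
Y \propto N^{\beta},\ \beta > 1 \ (\text{super-linear output}),
\qquad
C \propto N^{\alpha},\ \alpha < 1 \ (\text{sub-linear cost}),
\]
\[
\text{average unit cost} \;=\; \frac{C}{Y} \;\propto\; N^{\alpha-\beta},
\qquad \alpha - \beta < 0 .
\]
```

So the average cost per unit of output keeps falling as the scale N grows, which is the shape sketched in Figure 4.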

Knowing this phenomenon, we can understand why cloud vendors strive for scale. AWS launched in 2002 and kept promoting its cloud services, but it was only around its 2013 financial reports that it entered the profitable phase of economies of scale. AWS now invests roughly 10 billion US dollars in CAPEX every year to keep building out its cloud, with a global fleet of more than 5 million servers. Based on this cost-of-scale advantage, it has built a virtuous cycle driven by long-term value, cost, and the technology ecosystem, and has taken control of cloud-service pricing strategy. At re:Invent 2019, AWS claimed to have made more than 70 price reductions while still earning an operating margin of over 20% on its cloud computing services.

Figure 4: Cloud computing follows the phenomenon of economies of scale

The second is the limit of scale effects. As the scale of production keeps expanding, the unit infrastructure cost eventually reaches its minimum, which marks the optimal production scale. If production technology does not change and the scale keeps expanding beyond this point, the average cost per unit of output gradually rises again. AWS, which has entered the virtuous cycle of scale effects, currently keeps its capex/revenue ratio at roughly 40-50%; although this is relatively stable, it still needs to look for room to keep cutting costs.

Figure 5: The long-run average cost (LAC) curve of economies of scale

At the same time, today's cloud vendors mainly sell IaaS, providing virtual-machine resources to tenants, and they all face the problem of low resource utilization, both CPU and memory. Industry data suggests that CPU utilization in cloud vendors' data centers is no higher than 20-30%. Tenants purchase virtual machines with fixed vCPU and memory configurations, and the vendor's platform actually runs a packing algorithm that places them into the free capacity of the data center according to tenant demand. Because tenants size their purchases for peak business load, a large share of tenant resources sits idle outside peak hours for long stretches, and cloud vendors can do little about the resulting utilization. Meanwhile, for their self-operated businesses, cloud vendors use techniques such as co-locating different workloads and SLA-aware scheduling; Google, for example, has long claimed that an improved version of Borg can reach 90% CPU utilization in its data centers. This status quo is also why cloud vendors have introduced shared compute instances, such as the AWS T instance family: through an SLA policy, and with the user's knowledge, the vendor gains shared control of the vCPU in order to achieve high CPU utilization.
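
To make the utilization problem concrete, here is a minimal hypothetical sketch in Python, not any vendor's real placement algorithm: tenants buy VMs sized for peak load, the platform first-fit packs them onto hosts, and the resulting average CPU utilization stays low. All VM sizes and load figures are assumptions chosen to land in the 20-30% range cited above.

```python
# Illustrative only: first-fit packing of fixed-size tenant VMs onto hosts,
# then the average utilization that results when each VM is sized for peak
# load but runs well below that peak most of the time.
from dataclasses import dataclass, field

@dataclass
class Host:
    capacity_vcpu: int
    vms: list = field(default_factory=list)

    def free_vcpu(self) -> int:
        return self.capacity_vcpu - sum(vm["vcpu"] for vm in self.vms)

def first_fit(vms, host_capacity=64):
    """Place each VM on the first host with enough free vCPU; open a new host if none fits."""
    hosts = []
    for vm in vms:
        for host in hosts:
            if host.free_vcpu() >= vm["vcpu"]:
                host.vms.append(vm)
                break
        else:
            hosts.append(Host(host_capacity, [vm]))
    return hosts

# Hypothetical tenants: each buys 8 vCPUs for its peak but averages ~2 vCPUs of real use.
tenant_vms = [{"vcpu": 8, "avg_used_vcpu": 2.0} for _ in range(40)]

hosts = first_fit(tenant_vms)
provisioned = sum(h.capacity_vcpu for h in hosts)
actually_used = sum(vm["avg_used_vcpu"] for h in hosts for vm in h.vms)
print(f"hosts: {len(hosts)}, datacenter CPU utilization: {actually_used / provisioned:.0%}")
# With these assumed numbers the fleet is fully packed, yet utilization is only ~25%.
```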

Let us go back to the two perspectives mentioned earlier, users and platform providers. On one side, cloud vendors want more control over resources, so that ultra-large-scale cloud computing can keep enjoying economies of scale and unit resource costs can keep falling. On the other side, tenants care about the stability of their business operations and, at the same time, want to focus more on the business itself. From this we can read the direction of cloud computing technology: the level of the software stack managed by cloud vendors will keep rising, and cloud technology must deliver elasticity and high scalability for user workloads. Cloud vendors gain maximum control over the resources on which applications run, pursuing high resource utilization and low cost, while tenants get applications backed by business SLA guarantees.

Serverless technology is the choice cloud vendors make based on economies of scale.

1.3 Serverless technology is the choice that matches cloud-native economies of scale

As shown in Figure 3, Serverless takes the computing abstraction one step beyond the container runtime, and the software stack managed by cloud vendors moves up to the runtime level. Here the author distinguishes function computing from serverless technology. Function computing is an abstraction of the computing paradigm: computation is split into two levels, the function (code logic) and the function runtime (the resources, libraries, and so on required to run the function), namely

Function computing = function + function runtime
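
A minimal sketch of these two levels, assuming a toy runtime and handler signature of my own invention rather than any vendor's actual FaaS interface:

```python
# Level 1: the function (code logic) -- all the user writes and deploys.
import json
import time

def handler(event: dict, context: dict) -> dict:
    name = event.get("name", "world")
    return {"statusCode": 200, "body": json.dumps({"greeting": f"hello, {name}"})}

# Level 2: the function runtime -- provisioned and managed by the platform:
# it prepares resources, sets time/memory limits, and feeds events to the function.
def toy_runtime(fn, events, timeout_s: float = 1.0):
    results = []
    for event in events:
        context = {"deadline": time.time() + timeout_s, "memory_limit_mb": 128}
        results.append(fn(event, context))
    return results

if __name__ == "__main__":
    print(toy_runtime(handler, [{"name": "serverless"}, {}]))
```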

Serverless computing adopts this function-computing abstraction. In the cloud-native ecosystem, users can focus even more narrowly on business code logic and directly use the runtime provided by the cloud vendor; compared with containers, the software stack managed by the cloud vendor moves up yet another level. The author classifies serverless as a cloud-native technology, because serving tenants with serverless necessarily relies on a large number of back-end services and runtimes provided by cloud vendors, namely

Serverless = FaaS + BaaS

Function computing is a fine-grained level of abstraction. For end-user-facing or event-driven applications such as IoT, the code is separated from its runtime, and the cloud vendor provides the runtime for the function code together with its physical resources. As shown in the figure below, the platform provider obtains the largest controllable portion of the software stack, while the user only needs to care about the code. Applications at function granularity therefore give platform providers the largest technology space, and within that space the scale and cost of cloud computing can be driven down further. This is why serverless technology is a must-choose option for cloud vendors.

Figure 6: Serverless enables platform providers to obtain the largest controllable technology space in the software stack

However, at the current stage of Serverless, its scope of application is mainly event-driven, short-running tasks. The function a user writes for such a task is constrained in execution time and resources; in exchange, the platform provider gains maximum scheduling authority and can therefore offer a pricing strategy of charging by execution time and on demand. This works well for services driven by large numbers of terminals, where serverless's on-demand elasticity and on-demand billing pay off fully. Obviously, such a range of applications is not enough to meet cloud vendors' expectations, so starting from Serverless = FaaS + BaaS, and in particular from the BaaS side, cloud vendors must push serverless computing to evolve rapidly.
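
A sketch of how pay-as-you-run billing works out arithmetically; the rate constants and the example workload below are placeholders for illustration, not any vendor's published prices:

```python
# Hypothetical pay-as-you-run bill: a request charge plus a compute charge
# for the GB-seconds actually consumed; idle time between events costs nothing.
def monthly_bill(invocations: int, avg_duration_ms: float, memory_gb: float,
                 price_per_million_requests: float = 0.20,
                 price_per_gb_second: float = 0.0000167) -> float:
    request_charge = invocations / 1_000_000 * price_per_million_requests
    gb_seconds = invocations * (avg_duration_ms / 1000.0) * memory_gb
    compute_charge = gb_seconds * price_per_gb_second
    return request_charge + compute_charge

# Example: 5 million invocations a month, 120 ms each, 256 MB of memory.
print(f"${monthly_bill(5_000_000, 120, 0.25):.2f}")  # a few dollars under these assumed rates
```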

The two key features, seen from the user's perspective, in the formal definition of Serverless are NoOps and Pay-as-You-Run. Any form that satisfies these two characteristics is also serverless technology, so Serverless is a broader concept than function computing and does not have to be built on the function-computing abstraction: as long as a service offers users NoOps and Pay-as-You-Run, it can be classified as serverless. Cloud vendors therefore keep pushing forward on two fronts.

The first is making BaaS serverless; we have all seen this, with serverless cloud database and cloud storage products already launched. The second is pushing existing applications toward serverless: users still ship their own application services together with their runtime, but the autoscaling and parallelization templates for those services are provided by the cloud vendor. This can be seen clearly in Google's product strategy.
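
As a sketch of what such a platform-provided autoscaling template does on the user's behalf, here is a simple concurrency-based rule; the target values and function names are illustrative assumptions, not Knative's or any vendor's actual algorithm:

```python
# Scale replicas to match observed concurrency, including scale-to-zero when
# there is no traffic, so the platform (not the user) adjusts capacity.
import math

def desired_replicas(observed_concurrency: float,
                     target_concurrency_per_replica: float = 10.0,
                     min_replicas: int = 0,
                     max_replicas: int = 100) -> int:
    """Replicas needed so each instance handles roughly the target concurrency."""
    if observed_concurrency <= 0:
        return min_replicas  # scale to zero: an idle service consumes no resources
    needed = math.ceil(observed_concurrency / target_concurrency_per_replica)
    return max(min_replicas, min(max_replicas, needed))

# Traffic ramps up, peaks, then dies away.
for concurrency in [0, 3, 42, 180, 57, 0]:
    print(f"concurrency={concurrency:>4} -> replicas={desired_replicas(concurrency)}")
```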

Google has two clear technical lines for Serverless. One is the Cloud Run product, a serverless platform evolved from the K8s container platform, essentially a serverless application platform that pushes existing microservice applications to evolve toward serverless. The other is Cloud Functions plus MBaaS products for mobile applications. Together these two lines drive the evolution of Serverless technology. AWS, of course, will not fall behind: although it does not control the runtime-platform ecosystem as strongly as Google does, it also directly provides services such as Auto Scaling and ASM to guide application services toward serverless.

Given the business logic behind serverless technology described above, serverless has become a must for cloud vendors, and this is where Berkeley's confidence lies when it asserts that serverless is the next-generation computing paradigm of the cloud era.

 

Click to follow and be the first to learn about Huawei Cloud's latest technologies~
