Serverless Application Hosting Helps Enterprises Accelerate Innovation

Author: Xiong Feng

Serverless Application Hosting Architecture in the Cloud Native Era

Looking back over the past decade, digital transformation has continuously integrated and reconstructed technological innovation and business elements, redefining growth under new business models. Business is evolving from the rigid paradigms of the industrial era toward entirely new models for innovative business organizations and new business species. As digital transformation spreads broadly and deeply across industries in China, both industry giants and small, medium, and micro enterprises must face the unknown opportunities and challenges it brings.

— "Alibaba Cloud Cloud Native Architecture White Paper"


In recent years, traditional enterprises have been moving to the cloud at an accelerating pace; cloud migration has gradually become an inevitable choice for enterprise development. In this process, cloud native helps enterprises make the most of cloud capabilities and maximize the value of the cloud through open, standard technical systems and the agile construction and operation of highly elastic, fault-tolerant, easy-to-manage systems. As a result, more and more enterprises and industries have begun to embrace cloud native. It is fair to say that cloud native reconstructs not only the entire software technology stack and life cycle, but also the way enterprises move to the cloud.


The concept of cloud native has evolved continuously from its introduction, through its growth, to its popularity today. Under the cloud-native system, the hosting form of applications keeps evolving along with enterprise application architecture.

The earliest applications were mostly centralized and monolithic; through careful layering, a single application could share a domain model while being split into finer-grained modules. With the explosive development of the Internet, distributed architectures gradually replaced centralized ones.

The emergence and popularization of containers made large-scale, cross-language distributed applications a reality: container images standardized packaging, development, and operations. Cloud-native application hosting thus saw its first leap: containerization.

After that, the microservice architecture became popular on a larger scale, and with it came increasingly complex infrastructure for enterprises to operate and a geometrically growing number of containers to manage. On the one hand, Kubernetes shields the differences of the IaaS layer and, with its excellent portability, helps applications run consistently across environments including data centers, clouds, and the edge.

On the other hand, with its openness, extensibility, and active developer community, Kubernetes stood out in the large-scale container-orchestration battle and became the de facto standard for distributed resource scheduling and automated operations. Cloud-native application hosting thus saw its second evolution: fully embracing Kubernetes.

Although Kubernetes brings many benefits, running it in production and continuously ensuring the system's stability, security, and growth at scale remains challenging for most enterprises. Against this backdrop, Nodeless Kubernetes came into view: it retains the complete Kubernetes capabilities while pushing the complex operations and capacity-management work down into the cloud infrastructure. Cloud-native application hosting thus entered its third stage: Serverless Kubernetes.

Although Serverless Kubernetes greatly reduces the burden of operating Kubernetes, the complexity and steep learning curve of Kubernetes itself are still daunting. How to let applications enjoy the many technical dividends of running on Kubernetes while requiring as close to zero modification as possible became the next problem to solve. Cloud-native application hosting thus entered its fourth stage: serverless application hosting.

The latter two stages are the Serverless architectures and forms we will focus on today. So what exactly is Serverless? Different organizations have given different definitions from different perspectives; here we pick the two most influential:


The Berkeley serverless paper argues that Serverless Computing = FaaS + BaaS, and that to be considered serverless, an application must scale automatically and be billed based on usage.

The CNCF holds that serverless computing means building and running applications without server operations and management. It describes a finer-grained deployment model in which an application is packaged as one or more functional modules, uploaded to a platform, and then executed, scaled, and billed according to the exact demand of the moment.

Although the perspectives and wording differ, a careful reader can quickly extract the common keywords: on-demand usage and pay-as-you-go (cost), freedom from operations (efficiency), and automatic scaling (elasticity). The essence is to free an enterprise's limited resources and energy from tedious infrastructure operations and focus them on its core business logic.


Buying a car, renting a car, and ride-hailing make a useful analogy for what Serverless is.

An enterprise maintaining its own servers is like owning a private car: it pays a large resource cost (buying the car) and ongoing operating costs (insurance, upkeep), its carrying capacity is fixed (limited seats), and it sits idle much of the time (the costs continue even when the car is not being driven).

An enterprise buying cloud hosts to build its own business systems is like renting a car: leases are relatively flexible and long-term, but scaling out and in is still troublesome, and idle costs, though reduced, still exist.

The serverless era is like ride-hailing: you pay per use, capacity scales elastically with load, and idle costs essentially disappear.


Now that we know what serverless is, how does serverless application hosting make application operations easier, improve resource utilization, and help enterprises cut costs and increase efficiency? We can look at this question from three perspectives:

  • The operations model has evolved from manual operations, to a model in which the cloud platform takes primary responsibility, and finally to an operations-free model in which the cloud platform is fully responsible.
  • Resource utilization has evolved from the extremely low utilization of peak-based procurement, to the moderate improvement brought by node scaling, and finally to on-demand usage that fully matches the fluctuations of business peaks.
  • Resource costs have evolved from fixed expenditures, to flexible payment based on resource usage levels, and finally to a pay-per-request model.
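To build intuition for the cost evolution above, here is a minimal sketch comparing the three procurement models on the same daily load curve. All numbers (the load profile, the unit price, the 30% scaling buffer) are hypothetical, chosen only for illustration:

```python
# Hypothetical hourly load profile: low at night, a sharp business peak by day.
# Each value is the number of instances actually needed in that hour.
hourly_load = [10] * 8 + [40] * 4 + [100] * 4 + [40] * 8

PRICE_PER_INSTANCE_HOUR = 0.05  # illustrative unit price

# 1. Peak-based procurement: fixed capacity sized for the daily peak.
peak_cost = max(hourly_load) * 24 * PRICE_PER_INSTANCE_HOUR

# 2. Node scaling: capacity follows load, but coarsely -- a 30% buffer
#    is kept warm to absorb bursts while nodes scale.
node_cost = sum(load * 1.3 for load in hourly_load) * PRICE_PER_INSTANCE_HOUR

# 3. Serverless on-demand: billed only for what the load actually uses.
serverless_cost = sum(hourly_load) * PRICE_PER_INSTANCE_HOUR

print(f"peak procurement: {peak_cost:.2f}")   # fixed cost, mostly idle
print(f"node scaling:     {node_cost:.2f}")   # better, buffer still idles
print(f"on-demand:        {serverless_cost:.2f}")
```

The ordering, not the absolute numbers, is the point: the on-demand model never pays for the gap between provisioned capacity and actual load.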

New Upgrade of Serverless Application Engine (SAE) 2.0

Matching the architecture and capability requirements discussed above against Alibaba Cloud's cloud-native serverless product matrix: Alibaba Cloud Serverless Application Engine (SAE) is a one-stop, fully managed, operations-free, and highly elastic serverless application hosting platform. It hosts applications with no code changes required, simple and convenient operation, and adaptive elasticity.

On the SAE platform, users no longer need to worry about complex infrastructure: they only upload code packages or container images to get fully managed online services. SAE automatically takes care of running the application and adjusting the number of elastic instances, and also provides supporting capabilities such as networking, load balancing, and monitoring.


On top of Kubernetes infrastructure, SAE is application-centric and has a built-in microservice engine (MSE) agent, providing a complete set of microservice capabilities and forming the serverless best practice represented by SAE + MSE. At the same time, it fully embraces open source and gives back to it. Based on this cloud-native serverless microservice practice, development efficiency can be increased by 70% and costs can be reduced by 60%.


SAE provides rich elasticity metrics and flexible elasticity policies:

  • Metric-based elasticity: in addition to the traditional CPU and memory metrics, business-oriented metrics such as QPS, response time (RT), and the number of TCP connections are supported.
  • Scheduled elasticity: scale-out/scale-in times and target instance counts can be configured through the console.
  • Hybrid elasticity: a policy that combines scheduled and metric-based elasticity. On top of metric-based elasticity, scheduled elasticity is superimposed during fixed traffic-peak windows as an enhancement, meeting fine-grained elasticity needs in different time periods.
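The hybrid policy above can be sketched as a small decision function: the metric-based target forms the baseline, and scheduled windows raise the instance floor during known peaks. This is an illustrative model of the idea, not SAE's actual scaling algorithm; all names, thresholds, and window formats are assumptions:

```python
from datetime import time

def desired_instances(now, qps, qps_per_instance=50,
                      min_instances=2, max_instances=100,
                      scheduled_windows=None):
    """Hybrid autoscaling sketch (illustrative, not SAE's real algorithm).

    Metric-based elasticity sets the baseline; scheduled_windows is a list
    of (start, end, floor) tuples that keep at least `floor` instances
    warm during known traffic peaks.
    """
    # Metric-based target: enough instances to serve the current QPS.
    target = -(-qps // qps_per_instance)  # ceiling division

    # Scheduled elasticity: raise the floor inside configured windows.
    floor = min_instances
    for start, end, window_floor in (scheduled_windows or []):
        if start <= now <= end:
            floor = max(floor, window_floor)

    # The scheduled floor wins during its window; the metric target
    # (clamped to max_instances) wins everywhere else.
    return max(floor, min(target, max_instances))
```

For example, with an evening-peak window of 19:00-21:00 and a floor of 20, a lull at 20:00 still keeps 20 instances warm, while a burst at 03:00 is handled purely by the metric-based target.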


SAE provides an efficient, closed-loop DevOps system that covers the entire process from development through deployment to operations:

  • Seamless integration with open-source Jenkins: through the built-in Maven plug-in, the full pipeline from source code to build to deployment can be completed, supporting WAR-package, JAR-package, and image deployment modes.
  • The most complete CI/CD solution on the cloud: unlike the Jenkins route, code can be hosted directly on the cloud, with code hosting handled by the cloud-based DevOps service (Yunxiao). It also supports security management on the code side, customizable pipelines, and a complete, consistent environment for building and running. Its functionality is relatively complete, and it generally suits medium-sized enterprises.
  • The lightest and easiest-to-use CI/CD solution: deploying to SAE through the container image service. Its lightness lies in connecting the code repository via a WebHook: image build rules and triggers are configured on the container image service, and images are automatically built and deployed when code is pushed. With the enterprise-level container image service, image security scanning, vulnerability protection, and global multi-region distribution are also available.
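The WebHook-driven flow in the last option can be sketched as a small handler: verify the push notification, then trigger a build and deploy. Everything here is illustrative, not any vendor's real API; the signature scheme, payload fields, registry URL, and the `build_image`/`deploy_to_sae` helpers are all assumptions for the sketch:

```python
import hashlib
import hmac
import json

def build_image(repo, tag):
    # Placeholder for the image-build step triggered by the registry.
    print(f"building {tag} from {repo}")

def deploy_to_sae(tag):
    # Placeholder for rolling out the new image to the hosting platform.
    print(f"deploying {tag}")

def handle_push_webhook(payload: bytes, signature: str, secret: bytes) -> str:
    """Sketch of a push-triggered build-and-deploy WebHook handler."""
    # 1. Verify the HMAC signature so only the code repository can
    #    trigger builds (a common WebHook convention, assumed here).
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return "rejected"

    event = json.loads(payload)
    # 2. Only pushes to the release branch trigger an image build.
    if event.get("ref") != "refs/heads/main":
        return "ignored"

    # 3. Tag the image with the pushed commit, then build and deploy.
    image_tag = f"registry.example.com/app:{event['after'][:7]}"
    build_image(event["repository"], image_tag)
    deploy_to_sae(image_tag)
    return "deployed"
```

The design point is that the pipeline lives entirely in configuration (repository hook plus registry trigger); a push is the only manual action.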


SAE provides a series of simple and efficient operations capabilities such as WebShell, log collection, and events, along with all-round observability and enterprise-grade capabilities such as comprehensive permission management and account distribution.


SAE recently released version 2.0, which brings three major upgrades:

  • First, the product is simpler to use: applications can go live without modification, there is no learning curve, and creating and publishing an application takes only a few seconds. In addition, pay-per-use can cut application costs by more than 40%.
  • Second, SAE 2.0 is more open: it is built on container standards, its core capabilities will be open sourced, and it provides rich platform-engineering capabilities that can improve R&D and operations efficiency by 50%.
  • Finally, elasticity has been further strengthened: SAE 2.0 achieves scaling at the 100-millisecond level, automatically adjusts resource usage with traffic, optimizes application cold starts, and supports scaling down to zero instances, so that no fees are incurred when there is no business traffic. These characteristics make it especially friendly to emerging businesses and innovative startups.

SAE 2.0 is now in full public beta, and everyone is welcome to try it.

References:

1. https://developer.aliyun.com/ebook/6958?spm=a2c6h.14164896.0.0.149460cexwsCk2

2. https://zhuanlan.zhihu.com/p/137215790

3. https://github.com/cncf/wg-serverless/tree/master/whitepapers/serverless-overview

4. https://developer.aliyun.com/article/1136342

5. https://developer.aliyun.com/article/933307?utm_content=m_1000345005



Origin: blog.csdn.net/alisystemsoftware/article/details/132366643