Alibaba Cloud Function Compute releases a new feature: container image support to accelerate serverless applications

Watch the video to see how Function Compute and containers perform together in a video transcoding scenario. Click to watch the video >>

The barriers to FaaS adoption

Serverless cloud services take over a large number of complex responsibilities for developers, such as scaling, operations and maintenance, capacity planning, and integration with other cloud products, allowing developers to focus on business logic, improve delivery speed (time-to-market), and continuously optimize costs. Function-as-a-Service (FaaS), the earliest and most widely adopted form of serverless computing on the cloud, has attracted a large number of developers within just a few years and has gradually established itself as the default choice for serverless. However, migrating traditional applications to FaaS still faces many challenges in developer experience:

  • Inconsistent environments: the deliverable formats defined by different vendors, and the compatibility and richness of their runtime environments, vary widely, forcing developers to adapt or even recompile;
  • Learning cost: packaging dependency libraries and building compressed code packages differ from the development and deployment methods developers are familiar with;
  • Service limits: for example, a code package limited to 100 MB forces code and its dependencies to be delivered separately, making management and release harder;
  • Deliverables lack version management: formats are non-standard, best practices are inconsistent, and developers are left to manage it themselves;
  • Immature ecosystem: lack of support for and integration with popular open source tools (such as CI/CD pipelines).

On the other hand, containers have brought disruptive innovation in portability and delivery agility. The ecosystem around containers is rich and mature, widely accepted and used, and application containerization is rapidly becoming the de facto standard for development and deployment. However, containers by themselves do not relieve the burden of operations and maintenance, scaling, idle cost, or cloud service integration. Bringing FaaS and the container ecosystem together therefore helps developers gain more of the technical dividends of both.


Function Compute supports container images

Alibaba Cloud Function Compute (FC) now supports container images as function deliverables, combining the excellent development, deployment, and ecosystem of containers with Function Compute's own strengths, such as zero operations and maintenance, no idle cost, and cloud service integration, to fully upgrade the developer experience:

  • Simpler application migration to serverless: no need to modify code or recompile binaries and shared objects (*.so); debug locally and keep development and production environments consistent;
  • Larger function size limit: images of up to 1 GB before decompression are supported (compared with a 50 MB limit for code packages before decompression), so code and dependencies no longer need to be separated, simplifying distribution and deployment;
  • Container image layer caching: incremental code upload and pull improves development efficiency and reduces cold start latency;
  • Image sharing and reuse: logic can be ported, reducing repeated development and builds;
  • Hybrid deployment: the same application can mix serverful (ECS, container service ACK) and serverless (FC, ASK, SAE) deployments, different applications can be mixed, or a single application can switch between services, balancing consistent performance, guaranteed resource delivery, fast scaling, and minimal operations;
  • CI/CD: continuous builds, integration testing, code upload, storage, and standard version management; the rich open source CI/CD tool ecosystem can be reused.


Typical customer scenarios

A. Event-driven audio and video processing

Audio and video processing is characterized by large traffic fluctuations, high demands on elastic compute resources, event-driven triggers on video uploads, and reliance on workflows and queues, which makes FaaS a natural first choice when moving self-built audio and video services to the cloud. However, FFmpeg, the most commonly used tool in such scenarios, often has to be custom-compiled for different needs. The compiled binary depends on the shared objects (*.so) and glibc version of the build environment, so it may be incompatible with the FaaS runtime environment and fail to run. Recompiling not only adds extra work; differing dependencies and versions also threaten business stability. As shown in the figure below, the existing Dockerfile keeps the transcoding logic, its dependencies, and the existing installation method inside a fully isolated container sandbox, which greatly reduces the migration cost, the stability risk, and the learning cost of FaaS development and deployment.

[Figure: example Dockerfile packaging FFmpeg and the transcoding logic for Function Compute]
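
The original figure is not reproduced here; below is a minimal, hypothetical sketch of the same idea: packaging FFmpeg and a small HTTP handler into an image for Function Compute. The base image, the handler file name (transcode_server.py), and the listening port are assumptions for illustration, not the official sample.

```dockerfile
# Hypothetical sketch (not the official sample): bundle FFmpeg and an HTTP handler.
FROM python:3.9-slim

# Install FFmpeg from the distribution repository; a custom-compiled binary
# could be COPY-ed in instead, together with its shared objects (*.so).
RUN apt-get update && apt-get install -y --no-install-recommends ffmpeg \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
# transcode_server.py is an assumed handler that accepts HTTP requests
# and shells out to ffmpeg for each transcoding job.
COPY transcode_server.py .

# The server is assumed to listen on the port configured for the function.
EXPOSE 9000
CMD ["python", "transcode_server.py"]
```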

B. Serverless AI/ML model serving and inference

AI/ML inference and prediction services can also enjoy the FaaS benefits of zero operations, automatic scaling, and low cost. However, popular community frameworks such as TensorFlow are shared and reused as container images by default: the official images provide complete version coverage, and the community ecosystem built on them is very active. In the offline model training phase, workloads are deployed as container images on ECS or ACK/ASK GPU clusters. In the serving (inference/prediction) phase, CPU is often the more cost-effective choice. Serving is demand-driven: it must respond quickly to traffic bursts and release resources during troughs, even scaling down to zero to save costs. These requirements are exactly what Function Compute is good at.

Before container image support, deploying a TensorFlow Serving example on Function Compute was not easy: TensorFlow's own libraries far exceed the 50 MB code package limit. Packing the dependencies into NAS works around the limit, but it raises the barrier to getting started and migrating, and unstructured dependency and version management introduces stability risks on every change. With a container image and Function Compute's HTTP server programming model, a Dockerfile of just a few lines can run TensorFlow Serving on FC. TensorFlow Serving example:

[Figure: Dockerfile for the TensorFlow Serving example on Function Compute]
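
As a hedged illustration of the "few lines of Dockerfile" mentioned above (the model name, path, and version tag are placeholders, and the listening port would need to match the function's configuration), such a Dockerfile might look like this:

```dockerfile
# Hypothetical sketch: wrap the official TensorFlow Serving image with a bundled model.
FROM tensorflow/serving:2.4.1

# Copy an exported SavedModel into the directory TensorFlow Serving scans by default.
COPY ./saved_model /models/demo_model

# Tell the entrypoint which model to serve; the REST API listens on port 8501 by default.
ENV MODEL_NAME=demo_model
EXPOSE 8501
```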

Function Compute's container image support helps AI/ML scenarios smoothly mix containers and functions, unifying CI/CD tools, processes, and best practices. Function Compute's zero operations, high concurrency, instance scale-out within hundreds of milliseconds, and 100% resource utilization further optimize service quality and cost.


C. Serverless evolution of traditional monolithic Web (HTTP) applications

Modernizing traditional monolithic Web applications has three main drivers: splitting responsibilities, reducing operational pressure (resource planning, system upgrades, security patches, and other maintenance burdens), and optimizing cost. Although adopting single-responsibility functions is a best practice, splitting responsibilities often takes a long time to design and refactor. With Function Compute's image support, monolithic applications can be migrated to FaaS easily, gaining zero operations, elastic horizontal scaling, and 100% cost efficiency.

For historical reasons or because of business complexity, the runtime environment (container image) and business logic of traditional Web applications are often tightly coupled and expensive to decouple. A serverless transformation sometimes requires upgrading the operating system and dependency library versions and recompiling in the environment provided by the FaaS vendor, which carries both time cost and stability risk. Function Compute's container image support lets traditional containerized Web applications enjoy the value of serverless faster and without modification, so teams can focus their time and energy on business logic innovation and iteration rather than on repetitive, tedious environment and dependency version management, upgrade maintenance, capacity planning, and scaling.

D. Hybrid deployment on and off the cloud and across cloud vendors

Enterprises are moving to the cloud at an accelerating pace, but because of business characteristics, running private and public clouds together will remain the norm for a long time, and enterprises may even rely on multiple cloud vendors for migration, disaster recovery, and guaranteed resource delivery. Container images are the default choice for unified software deliverables both on and off the cloud. Function Compute's custom runtime chose the standard HTTP server interaction model, so the function programming model is not tied to any vendor, which eases enterprises' concerns about vendor lock-in: a function that runs on the cloud can also be deployed off the cloud, or on other cloud vendors, as an independent HTTP Web application to serve requests. Functions packaged as container images can run on other clouds' container services or on self-built IaaS, achieving multi-cloud disaster recovery and guaranteed elastic resources.

E. Best Practices for Cold Start

FaaS code-package deliverables separate business logic from the execution environment, minimizing the amount of data that must be loaded to run the logic and thereby optimizing cold start speed as much as possible. A container image bundles the runtime environment with the business logic, trading some cold start speed for portability: introducing a custom runtime environment inevitably adds extra cold start latency. We therefore recommend the following cold start optimization best practices:

  • Use a container image address in the same region as Function Compute and pull it over the VPC endpoint, for example registry-vpc.cn-hangzhou.aliyuncs.com/fc-demo/helloworld:v1beta1, to get the best image pull latency and stability;
  • Minimize the image: use tools such as docker-slim to keep only the necessary dependencies and code, avoiding extra latency caused by unneeded documents, data, or other files (see the multi-stage build sketch after this list);
  • Where resources allow and the code is thread-safe, combine container images with single-instance multi-concurrency to avoid unnecessary cold starts and reduce cost;
  • Combine container images with reserved instances to eliminate cold starts.
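
As a complement to docker-slim, a multi-stage build is another common way to minimize an image. The sketch below is a generic example with placeholder names, not tied to any specific Function Compute sample; it compiles in a full build image and ships only the resulting binary, so there is less to pull on a cold start.

```dockerfile
# Stage 1: build in a full-featured image (placeholder application and paths).
FROM golang:1.17 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app .

# Stage 2: the final image carries only the compiled binary,
# keeping the layers small and the cold start pull fast.
FROM alpine:3.15
COPY --from=build /out/app /app
CMD ["/app"]
```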

F. DevOps/GitOps best practices

Container image support standardizes the build steps and function deliverables, making CI/CD tools reusable. Function Compute integrates with Alibaba Cloud's Yunxiao DevOps service to provide a CI/CD pipeline. As shown in the figure below, when new code is pushed to the master branch of the code repository (GitHub/GitLab), the build pipeline is triggered; the container image is built according to the Dockerfile in the repository and pushed to Alibaba Cloud Container Registry, and the last step of the pipeline deploys and releases the new function version, completing an automated release.

[Figure: CI/CD pipeline from code push to automated function release]

Besides the fully automated continuous integration and delivery experience of Yunxiao DevOps, Alibaba Cloud Container Registry and self-built open source CI/CD pipelines can also automate function releases, as shown in the figure below. Standardizing the function release process lets enterprises continuously deliver many different services with a unified toolchain, reduces the learning cost of deployment tools for developers and operators, and increases release success rates and delivery speed (time-to-market) through automation.

[Figure: automating function releases with Container Registry or self-built open source CI/CD pipelines]

Similarities and differences with Custom Runtime

Function Compute launched the custom runtime in 2019, so what are the similarities and differences between the custom-container runtime released now and the existing runtime?

  • Same programming model and same interaction protocol with the Function Compute system: both use the same HTTP server protocol, so existing custom runtime functions can be ported directly to an environment-compatible custom container without any code changes (see the Dockerfile sketch after this list);
  • The two runtimes suit different scenarios and trade-offs:
    1. For applications that are not containerized, continue to use the custom runtime;
    2. For scenarios with low tolerance for cold start latency, the custom runtime is recommended, as it saves the image pull time;
    3. For asynchronous, offline, containerized tasks (job type), the custom-container runtime is recommended;
    4. Applications that use Function Compute reserved instances, or whose deployment environment and business logic are tightly coupled, should consider the custom-container runtime first.
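
To make the first point concrete, here is a hedged sketch of how an existing custom runtime function, which is essentially just an HTTP server, might be packaged as a custom container image. The base image, file names, and listening port are assumptions for illustration, not an official sample.

```dockerfile
# Hypothetical sketch: the same HTTP server that ran as a custom runtime function
# now starts as the container's command, with no change to the business code.
FROM node:14-slim
WORKDIR /app
COPY package*.json ./
RUN npm install --production
COPY server.js ./

# The server is assumed to listen on the port configured for the function.
EXPOSE 9000
CMD ["node", "server.js"]
```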

Future plans

As containers gradually become the standard way to deliver and deploy applications, FaaS will integrate more closely with the container ecosystem, helping containerized applications go serverless at lower cost. This includes integration with the surrounding ecosystem, such as declarative deployment methods, Kubernetes-like application abstractions, and cloud native observability software.

With container image pull acceleration, Function Compute can deliver both portability and fast startup. The original intent of both container technology and serverless is to help users deliver faster (time-to-market) and continuously optimize costs, eliminating the waste caused by idle resources and increasing enterprise competitiveness. Ultimately, the two major cloud native fields, serverless and container technology, will grow ever closer; the differences in development, deployment, and operations will keep shrinking, so that developers can choose the right technical solution for each workload with hardly any change to business logic, and use open, standard, unified cloud native technology to keep innovating and create more value for customers.

