SAE 2.0 makes containerized application development easier

Author: Shao Dan

The evolution of cloud-native containerized application hosting models

Since it was first proposed, the concept of cloud native has continuously evolved and expanded, and today it is extremely popular. The way applications are hosted under the cloud-native system has evolved along with enterprise application architecture. The earliest applications were mostly centralized monoliths, which achieved domain-model sharing and finer-grained module splitting through clean layering. With the explosive growth of the Internet, distributed architecture gradually replaced centralized architecture, and cloud-native application hosting has gone through four stages of evolution.

Phase 1: Containerization

The emergence and enormous popularity of Docker standardized development and operations through container-based packaging, making large-scale, cross-language distributed applications a reality.

Phase 2: Fully embracing Kubernetes

Since then, microservice architecture has been adopted on an even larger scale. Along with it, the infrastructure that enterprises need to operate has become increasingly complex, and the number of containers to manage has grown exponentially. On the one hand, Kubernetes shields the differences between IaaS-layer infrastructures and, with its excellent portability, helps applications run consistently across environments such as data centers, clouds, and edge computing; on the other hand, with its openness, extensibility, and active developer community, Kubernetes stood out in the battle for large-scale container orchestration and became the de facto standard for distributed resource scheduling and automated operations.

Phase 3: Serverless Kubernetes

Although Kubernetes brings many benefits, for most enterprises it remains challenging to run Kubernetes in production while continuously ensuring stability, security, and scalable growth. Against this background, Serverless (Nodeless) Kubernetes came into view: it retains the complete Kubernetes capabilities while shifting the complex operations and capacity-management work down to the cloud infrastructure.

Phase 4: Serverless containerized application hosting

Although Serverless Kubernetes greatly reduces the burden of operating Kubernetes, the complexity and steep learning curve of Kubernetes itself are still daunting. How to let users run applications on Kubernetes and enjoy its technology dividends with as little code modification as possible became the next problem to be solved urgently.

Serverless App Engine (SAE) is born

Serverless App Engine (SAE) was born against this background. It is a fully managed application platform that requires zero code modification, is extremely easy to use, and scales adaptively. With SAE there is no IaaS or Kubernetes to operate: online applications in any language (Web applications, microservices, Job tasks, and so on) can be deployed from source code, a code package, or a Docker image in seconds; instances scale automatically; billing is based on actual usage; and supporting capabilities such as logging, monitoring, and load balancing are available out of the box.

SAE solves the problem faced by many enterprises that want to use Kubernetes but find it hard to get started: it lets them enjoy the technology dividends of Kubernetes at a very low threshold, with an on-demand, pay-as-you-go billing model and adaptive elasticity, providing a strong boost for reducing costs and increasing efficiency.

Serverless App Engine SAE 2.0: a comprehensive upgrade

This year, Serverless App Engine (SAE) entered the 2.0 era with a comprehensive upgrade. The first area is elasticity:

Scale faster

While remaining fully compatible with existing development habits, SAE 2.0 greatly improves elastic efficiency: scale-out latency drops from seconds to hundreds of milliseconds, and scale-to-zero is now supported. Scaling to zero means no charges are incurred when there is no business traffic, bringing resource utilization infinitely close to the actual resource demand of the requests.

Scale more economically

After extensive user research, we found that many enterprise applications have no need to hold on to large amounts of resources when there are no requests, or once request processing has completed. After a request is processed, we can therefore release the CPU, or retain only a very small amount of it, while keeping the memory state, so that the instance stays alive at minimal cost. This is idle billing.

The main purpose of idle billing is to save CPU cost by releasing the CPU, while retaining memory so that the instance can resume in milliseconds when the next request arrives. This saves resources to the greatest extent while still guaranteeing very low latency.
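As a rough illustration of why releasing CPU while retaining memory saves money, consider a simple cost model. All unit prices, resource sizes, and function names below are made-up assumptions for illustration, not actual SAE pricing or APIs:

```python
# Illustrative cost model for idle billing: CPU is billed only while
# instances are actively serving requests; memory is billed for the whole
# period so the instance stays warm. All numbers here are invented for
# illustration and are NOT real SAE prices.

CPU_PRICE_PER_CORE_HOUR = 0.05   # assumed unit price
MEM_PRICE_PER_GIB_HOUR = 0.01    # assumed unit price

def hourly_cost(cores, gib, active_fraction, idle_billing):
    """Cost of one instance-hour.

    active_fraction: share of the hour spent handling requests (0.0..1.0).
    With idle billing, CPU cost accrues only during the active fraction;
    without it, CPU is billed for the full hour regardless of load.
    """
    cpu_hours = cores * (active_fraction if idle_billing else 1.0)
    mem_hours = gib  # memory stays allocated to keep instance state alive
    return cpu_hours * CPU_PRICE_PER_CORE_HOUR + mem_hours * MEM_PRICE_PER_GIB_HOUR

# An app busy only 10% of the hour: idle billing charges 10% of the CPU
# but 100% of the memory, so recovery stays fast when traffic returns.
always_on = hourly_cost(cores=2, gib=4, active_fraction=0.1, idle_billing=False)
idle      = hourly_cost(cores=2, gib=4, active_fraction=0.1, idle_billing=True)
print(f"always-on: {always_on:.3f}, idle billing: {idle:.3f}")
```

In this toy model, the memory line item is unchanged by idle billing; only the CPU line shrinks with the active fraction, which matches the trade-off described above: cheaper idle periods, fast warm restarts.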

Scale more stably

Through full-link optimization on the platform side, latency is reduced by 45% and runtime performance fluctuation is reduced to 7%. While scaling becomes faster and more economical, stability has also been optimized.

SAE 2.0 has a built-in traffic gateway, and a single-instance concurrency limit can be configured per instance, similar to the concurrency we usually talk about. When concurrency increases, instances are scaled out based on the actual number of in-flight requests.

When there are no requests, the CPU is not billed, which is the idle billing described above. When a request arrives, it is first routed to one instance based on the actual concurrency; when that instance is full, the next instance is scaled out. This provides the ability to automatically scale out and back in as traffic fluctuates.
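The scale-out rule described above — fill one instance up to its concurrency limit before adding the next — amounts to a ceiling division of in-flight requests by the per-instance concurrency. A minimal sketch, where the function name and the instance cap are our own illustration rather than an SAE API:

```python
import math

def required_instances(in_flight_requests, per_instance_concurrency, max_instances):
    """Instances needed so no instance exceeds its concurrency limit.

    Zero in-flight requests scale the app down to zero instances (the idle
    case); otherwise the count is the ceiling of requests over the
    per-instance concurrency, capped at the configured instance upper limit.
    """
    if in_flight_requests <= 0:
        return 0
    needed = math.ceil(in_flight_requests / per_instance_concurrency)
    return min(needed, max_instances)
```

For example, with a per-instance concurrency of 10, 25 in-flight requests need 3 instances: two full ones and a third carrying the remaining 5.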

SAE 2.0 provides multi-version traffic configuration for web applications, enabling independent network configuration for each version. According to business needs, the traffic ratio across multiple versions can be configured dynamically, without specifying the number of instances for each version: instance counts are determined by automatic elastic scaling within the configured instance upper limit and traffic ratio, so multiple versions can coexist.
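Splitting traffic by ratio can be sketched as a weighted random choice per request. The version names and weights below are hypothetical, and the gateway's real routing logic is not documented here:

```python
import random

def pick_version(weights):
    """Pick a version key according to configured traffic ratios.

    weights: e.g. {"v1": 90, "v2": 10}. The values need not sum to 100;
    only their relative proportions matter for the split.
    """
    versions = list(weights)
    return random.choices(versions, weights=[weights[v] for v in versions], k=1)[0]

# A hypothetical 90/10 canary split: roughly 9 in 10 requests go to v1.
split = {"v1": 90, "v2": 10}
sample = [pick_version(split) for _ in range(10)]
print(sample)
```

Each version then scales independently under its own share of the traffic, which is why no fixed instance count per version needs to be configured.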

In addition, in terms of development experience, SAE 2.0 can upgrade a traditional monolithic or microservice architecture to a Serverless application architecture without any code changes. With one-click deployment and second-level application creation, it enables efficient application releases. SAE 2.0 also offers platform-engineering capabilities such as a CLI and S2A, which greatly improve R&D efficiency, and a Knative Adapter that lets Knative applications be published on SAE 2.0 very smoothly.


Origin: my.oschina.net/u/3874284/blog/10352256