The Right Way to Do DevOps in the Serverless Era


Author | Xu Chengming (Jingxiao)
Source | Alibaba Cloud Native Official Account

A brief analysis of DevOps

In the traditional software development process, development and operations are sharply separated: operations staff do not care how the code works, and developers do not know how the code is run in production.

For Internet companies, business evolves rapidly. They must ship updates quickly to meet differentiated user needs or respond to competitors' product strategies, iterating on the product in small, fast, agile steps.

In a scenario where releases happen several times a week, or even several times a day, an efficient collaboration culture becomes essential. DevOps emerged from exactly this need, breaking down the barrier between developers and operations staff.

1.jpg

DevOps is a culture, movement, or practice that values communication and collaboration between software developers (Dev) and IT operations staff (Ops). By automating the software delivery and infrastructure change processes, it enables software to be built, tested, and released more quickly, frequently, and reliably.

1-1.jpg

The figure above shows a complete software development life cycle. The defining feature of the DevOps movement is its advocacy for managing the entire life cycle of building software end to end.

Responsibilities of a DevOps engineer:

  • Manage the entire application life cycle: requirements, design, development, QA, release, and operations;
  • Focus on improving efficiency across the whole process, finding bottlenecks and resolving them;
  • Solve problems with standardized, automated, platform-based tooling.

DevOps focuses on shortening the development cycle, increasing deployment frequency, and making releases more reliable. Applying the DevOps mindset across the whole system's development process significantly improves development efficiency, shortens delivery cycles, and makes teams better adapted to today's fast-moving Internet landscape.

A brief analysis of Serverless

2.jpg

The left side of the figure above shows Google Trends data comparing the keywords Serverless and Microservices. Over time, the popularity of Serverless has gradually overtaken that of microservices, which shows how much attention developers and companies around the world are paying to Serverless.

So what exactly is Serverless? The right side of the figure is a logical software architecture diagram: at the top are the applications written by development engineers; underneath are the servers the applications are deployed on, plus all the work of maintaining those servers, such as resource provisioning, environment setup, load balancing, scaling, monitoring, logging, alerting, disaster recovery, security, and permissions. Serverless shields developers from this server maintenance work: the server layer becomes a black box supported by the platform, and the business only needs to focus on its core logic.

In short, Serverless is a "server-free" architectural model of the cloud computing era: developers no longer need to worry about acquiring and operating compute resources while building applications, which lowers operating costs and shortens time to market.

DevOps changes in the serverless era

1. Features of Serverless

3.jpg

The left side of the figure above shows domestic adoption of Serverless technology according to the 2020 China Cloud Native User Survey Report: nearly 30% of users have applied Serverless in production environments, with 16% using it in the production environment of their core business and 12% in non-core production environments. Domestic acceptance of Serverless is therefore already fairly high.

The right side of the figure shows the results of an O'Reilly survey covering companies across industries and regions worldwide. It indicates that DevOps practitioners are the earliest adopters of Serverless architecture.

So what happens when Serverless meets DevOps? First, let's look at how the Cloud Native Architecture White Paper summarizes the features of Serverless:

  • Fully managed computing services  

Users only need to write code to build their applications; there is no need to develop, operate, or maintain the undifferentiated yet complex infrastructure underneath.

  • Versatility

All kinds of common application types can be built on the cloud with Serverless.

  • Automatic elastic scaling

Users do not need to plan capacity in advance. When the business has pronounced traffic peaks and valleys, or temporary capacity needs, the Serverless platform supplies the required resources promptly and reliably (a simplified sketch of this idea follows below).

  • Pay-as-you-go

Companies can make cost management more effective without having to pay for idle resources.

Serverless makes operations transparent to development: developers only need to focus on core business logic, which streamlines the entire product development process and lets teams adapt quickly to market changes. These characteristics of Serverless are a natural fit for the culture and goals of DevOps.
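To make the elasticity point concrete, here is a deliberately simplified, purely illustrative sketch of metric-driven scaling. It is not SAE's scaling algorithm; it only shows the basic idea that the platform derives the instance count from an observed load metric instead of asking users to plan capacity in advance.

```java
// Illustrative only: a toy metric-driven scaler, NOT SAE's internal algorithm.
public final class ToyAutoScaler {

    private final int minInstances;
    private final int maxInstances;
    private final double targetCpuUtilization; // e.g. 0.6 = 60%

    public ToyAutoScaler(int minInstances, int maxInstances, double targetCpuUtilization) {
        this.minInstances = minInstances;
        this.maxInstances = maxInstances;
        this.targetCpuUtilization = targetCpuUtilization;
    }

    /** Proportional rule: scale so that average CPU moves toward the target. */
    public int desiredInstances(int currentInstances, double observedCpuUtilization) {
        double raw = currentInstances * (observedCpuUtilization / targetCpuUtilization);
        int desired = (int) Math.ceil(raw);
        return Math.max(minInstances, Math.min(maxInstances, desired));
    }

    public static void main(String[] args) {
        ToyAutoScaler scaler = new ToyAutoScaler(1, 10, 0.6);
        System.out.println(scaler.desiredInstances(2, 0.9)); // traffic peak  -> 3
        System.out.println(scaler.desiredInstances(4, 0.2)); // traffic valley -> 2
    }
}
```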

2. The Serverless development and operations experience

4.jpg

When building applications the traditional way, DevOps engineers have to manage many steps across the whole life cycle:

  • In the resource preparation stage, ECS instances have to be purchased, machines initialized, and a series of similar operations performed;
  • In the development and deployment stage, the business application plus supporting systems such as monitoring and logging have to be deployed onto ECS;
  • In the operations stage, you must operate and maintain not only your own application but also the IaaS layer and the surrounding monitoring, logging, and alerting components.

If you move to Serverless instead, what does the experience look like?

  • In the resource preparation phase, nothing needs to be prepared: Serverless is used on demand and billed by usage, so there is no underlying server to care about;
  • In the development and deployment phase, you only deploy your own business code to the Serverless platform;
  • In the operations phase, there is essentially nothing to operate and maintain.

As you can see, the IaaS layer and the monitoring, logging, and alerting work of the traditional process simply disappear under Serverless: everything is presented to users as fully managed and operations-free.

Best Practices of DevOps in the Serverless Era

The experience described above is in fact based on SAE, a Serverless product from Alibaba Cloud. Serverless Application Engine (SAE) is the DevOps best practice within Alibaba Cloud's Serverless product matrix. Let me briefly introduce it.

1. Serverless Application Engine (SAE)

SAE is an application-oriented Serverless PaaS platform that supports mainstream development frameworks such as Spring Cloud, Dubbo, and HSF. Applications can be deployed to SAE with zero code changes, used on demand, billed by usage, and scaled elastically within seconds, which brings the full benefit of Serverless and saves the cost of idle resources.

5.jpg

In terms of experience, SAE is fully managed and operations-free: users focus on developing core business logic, while SAE handles the application's whole life-cycle management, including monitoring, logging, and alerting. SAE thus offers a cost-effective, efficient, one-stop application hosting solution: users need no special expertise, no code changes, and no container background to enjoy the technical dividends of Serverless.

Three characteristics of Serverless Application Engine (SAE):

  • Zero code changes: seamless migration of microservices, usable out of the box, with automatic image building from War/Jar packages (a minimal example of such an application follows below);
  • 15-second elasticity: end-to-end rapid scaling of applications to absorb traffic bursts;
  • 57% cost reduction: environments can be started and stopped on demand, cutting cost while improving efficiency.
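As an illustration of what "zero code changes" means in practice, here is a minimal, made-up Spring Boot application (assuming spring-boot-starter-web is on the classpath). An application like this is packaged as an ordinary executable Jar and can be uploaded to an application PaaS such as SAE without adding any platform-specific code.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// A plain Spring Boot service with no SAE-specific code; names are illustrative.
@SpringBootApplication
@RestController
public class DemoApplication {

    @GetMapping("/hello")
    public String hello() {
        return "Hello from a plain Spring Boot app";
    }

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}
```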

2. Build an efficient closed-loop DevOps system

SAE has built an efficient, closed-loop DevOps system covering the entire development, deployment, and operations process.

6.jpg

Medium and large enterprises generally use enterprise-grade CI/CD tools (such as Jenkins or Yunxiao, Alibaba Cloud's DevOps platform) to deploy to SAE, completing the whole pipeline from source code through build to deployment.

Individual developers and small or medium-sized enterprises tend to prefer the Maven plug-in or IDEA plug-in to deploy to the cloud with one click, which also makes local debugging easier and improves the overall experience.

Once deployed on SAE, applications gain visual, intelligent operations capabilities, such as high-availability operations (service governance, performance load testing, throttling and degradation, etc.), application diagnostics (thread diagnostics, log diagnostics, database diagnostics, etc.), and data-driven operations. All of these are ready-made features available out of the box after deployment (a small conceptual sketch of throttling and degradation follows below).
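These capabilities are delivered as console features and require no coding. Purely to illustrate the concept behind "throttling and degradation", here is a toy in-process sketch, not SAE's implementation: cap the concurrency of a call and fall back to a degraded response when the cap is exceeded.

```java
import java.util.concurrent.Semaphore;
import java.util.function.Supplier;

// Illustrative only: a tiny throttling-and-degradation helper.
public final class ToyThrottle {

    private final Semaphore permits;

    public ToyThrottle(int maxConcurrentCalls) {
        this.permits = new Semaphore(maxConcurrentCalls);
    }

    public <T> T call(Supplier<T> action, Supplier<T> fallback) {
        if (!permits.tryAcquire()) {
            return fallback.get();          // degrade instead of overloading
        }
        try {
            return action.get();
        } finally {
            permits.release();
        }
    }

    public static void main(String[] args) {
        ToyThrottle throttle = new ToyThrottle(2);
        System.out.println(throttle.call(() -> "real result", () -> "degraded default"));
    }
}
```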

With SAE, users can run the whole development and operations process with ease and experience the all-round efficiency gains brought by Serverless. The following are some SAE best practices.

3. Best practices in deployment mode: CI/CD

7.jpg

SAE currently supports three deployment artifact types: War packages, Jar packages, and container images.

For Spring Cloud, Dubbo, or HSF applications, users can upload the package directly or fill in its URL and deploy straight to SAE. Non-Java workloads can be deployed as container images. In the future, packages for other languages will also be deployable in the same automated way.

8.jpg

In addition to deploying directly from the console, SAE supports three further approaches: deployment from the local IDE, deployment through Yunxiao, and deployment from a self-built CI/CD system.

Local deployment relies on the CloudToolkit plug-in, which supports IDEA and Eclipse. Users can deploy to SAE from IDEA with one click, without logging in to the console, which makes automation easy.

Yunxiao is Alibaba Cloud's enterprise-grade, integrated CI/CD platform. It watches the code repository: a push triggers the whole Yunxiao release pipeline, which runs code inspection and unit tests, then compiles, packages, and builds the code. Once the build finishes and the artifact is produced, the pipeline calls the SAE API to carry out the deployment. This entire flow is available out of the box; users only need to wire it together visually in the Yunxiao console.
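To give a feel for that final step of the pipeline, the sketch below shows a CI job calling a deployment API over HTTP after the build has produced an artifact. The endpoint, token, and payload are hypothetical placeholders rather than the real SAE OpenAPI; in practice the Yunxiao integration or the Alibaba Cloud SDK makes this call for you.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Illustrative only: a generic "deploy the built artifact" CI step.
// The URL, token, and JSON fields are hypothetical placeholders.
public final class DeployStep {

    public static void main(String[] args) throws Exception {
        String artifactUrl = args.length > 0 ? args[0] : "https://example.com/build/app.jar";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://paas.example.com/api/apps/demo-app/deployments"))
                .header("Authorization", "Bearer <token>")
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "{\"packageUrl\":\"" + artifactUrl + "\"}"))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Deploy request status: " + response.statusCode());
    }
}
```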

Self-built deployment means that a company already running its own CI/CD, for example on Jenkins, can still deploy to SAE directly. Jenkins is the most widely used open source CI/CD platform and we provide strong support for it; many users already deploy to SAE through Jenkins.

4. Best Practices in Deployment Mode: Application Release

Application release follows Alibaba's "three axes" of safe changes: gray release, observability, and rollback capability. Every change inside Alibaba must strictly follow these three principles, and SAE, as a cloud product, packages these internal best practices for external users.

  • Gray release: multiple release strategies are supported, including single-batch, batched, and canary releases, along with traffic-based grayscale, automatic or manual progression between batches, batch intervals, and other release options;
  • Observability: the basic and application monitoring metrics of different batches can be compared clearly during the release, so problems are exposed in time and change risks are pinpointed;
  • Rollback: manual intervention is allowed during the release, such as aborting it or rolling back with one click.

9.jpg

10.jpg

The screenshots above are from the console: single-batch, batched, and canary releases are all supported when deploying.

Each release is executed through a release order with explicit steps: first the image is built, then the environment is initialized, and then the deployment configuration is created or updated. Users can clearly see the progress and status of the release order, which makes troubleshooting easy. A simplified sketch of such a batched release flow follows.
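The sketch below captures that batched flow in a few lines of code. It is only an illustration of the idea described above (release batch by batch, pause between batches, stop when a batch looks unhealthy), not SAE's actual release engine.

```java
import java.util.List;
import java.util.concurrent.TimeUnit;

// Illustrative only: a simplified batched-release loop.
public final class ToyBatchRelease {

    public static void release(List<List<String>> batches, long batchIntervalSeconds)
            throws InterruptedException {
        for (int i = 0; i < batches.size(); i++) {
            List<String> batch = batches.get(i);
            System.out.printf("Releasing batch %d/%d: %s%n", i + 1, batches.size(), batch);

            boolean healthy = deployAndCheck(batch);   // update instances, then verify metrics
            if (!healthy) {
                System.out.println("Batch failed; stopping release for manual rollback");
                return;
            }
            if (i < batches.size() - 1) {
                TimeUnit.SECONDS.sleep(batchIntervalSeconds); // inter-batch pause
            }
        }
        System.out.println("Release finished");
    }

    private static boolean deployAndCheck(List<String> instances) {
        // Placeholder: a real platform would update these instances in place
        // and compare their monitoring metrics with the previous batch.
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        release(List.of(List.of("instance-1"), List.of("instance-2", "instance-3")), 5);
    }
}
```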

5. Best practice for operations: all-round observability

SAE provides all-round observability into what is happening in a distributed system. When something goes wrong, problems can be located, investigated, and analyzed easily; when the system runs smoothly, risks can be surfaced early and potential problems predicted. With SAE, users truly know their applications.

11.jpg

Here are three aspects of observability: Metrics, Logging, and Tracing.

  • Metrics

Metrics represent aggregated data. SAE provides the following monitoring capabilities:

1) Basic monitoring: CPU, memory, load, network, disk, I/O;
2) Application monitoring: QPS, RT (response time), exception counts, HTTP status codes, JVM metrics;
3) Alerting: a rich set of alert sources, alert aggregation, and delivery over multiple channels (email, SMS, phone calls, etc.).

12.jpg
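On SAE these application metrics are collected automatically, with no code changes. Purely as an illustration of what QPS, RT, and exception-count metrics correspond to in code, here is a small sketch using Micrometer (assuming the micrometer-core library is on the classpath; metric names are made up).

```java
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

// Illustrative only: request counters and a latency timer, not SAE's agent.
public final class RequestMetrics {

    private final Counter requests;
    private final Counter errors;
    private final Timer responseTime;

    public RequestMetrics(MeterRegistry registry) {
        this.requests = Counter.builder("http.requests.total").register(registry);
        this.errors = Counter.builder("http.requests.errors").register(registry);
        this.responseTime = Timer.builder("http.requests.rt").register(registry);
    }

    public void handle(Runnable businessLogic) {
        requests.increment();                       // contributes to QPS
        try {
            responseTime.record(businessLogic);     // contributes to RT
        } catch (RuntimeException e) {
            errors.increment();                     // contributes to exception count
            throw e;
        }
    }

    public static void main(String[] args) {
        RequestMetrics metrics = new RequestMetrics(new SimpleMeterRegistry());
        metrics.handle(() -> System.out.println("handling request"));
    }
}
```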

  • Logging

Logging represents discrete event data. SAE provides the following functions:

1) Real-time logs: view stdout and stderr in real time;
2) File logs: custom collection rules, persistent storage, and efficient querying;
3) Events: release order change events, application life-cycle events, and an event notification callback mechanism.

13.jpg
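The following small sketch (assuming SLF4J with a console-logging binding such as Logback on the classpath) shows the kind of output this feature works with: whatever the application writes to standard output and standard error is what appears in the real-time log view, while file logs are picked up according to the configured collection rules. Class and message names are made up.

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative only: ordinary application logging, nothing SAE-specific.
public final class OrderService {

    private static final Logger log = LoggerFactory.getLogger(OrderService.class);

    public void createOrder(String orderId) {
        log.info("creating order {}", orderId);
        try {
            // ... business logic ...
        } catch (Exception e) {
            // With a typical console appender this ends up in the console stream
            // that the real-time log view captures, alongside direct stderr output.
            log.error("failed to create order {}", orderId, e);
        }
    }

    public static void main(String[] args) {
        new OrderService().createOrder("order-1001");
    }
}
```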

  • Tracing

Tracing lets you investigate from the perspective of a single request. The following features work out of the box:

1) Query of request call-chain stacks;
2) Automatic discovery of the application topology;
3) Metric drill-down analysis for common diagnostic scenarios;
4) Transaction snapshot queries;
5) Capture of abnormal and slow transactions.

14.jpg
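On SAE the call chain is collected by an agent and requires no code changes. As a generic illustration of what one entry in such a call chain represents, here is a sketch using the OpenTelemetry API (assuming opentelemetry-api is on the classpath; names are made up and this is not SAE's internal mechanism).

```java
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;

// Illustrative only: one span of a request call chain.
public final class TracedHandler {

    private static final Tracer tracer = GlobalOpenTelemetry.getTracer("demo-app");

    public void handleRequest(String orderId) {
        Span span = tracer.spanBuilder("handleRequest").startSpan();
        try (Scope ignored = span.makeCurrent()) {
            span.setAttribute("order.id", orderId);
            // ... call downstream services; their spans join the same trace ...
        } catch (RuntimeException e) {
            span.recordException(e);   // would show up as an abnormal transaction
            throw e;
        } finally {
            span.end();
        }
    }

    public static void main(String[] args) {
        new TracedHandler().handleRequest("order-1001");
    }
}
```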

6. Best practice for operations: online debugging

15.jpg

SAE's online debugging lets you reach a target port on a single instance, which effectively means you can access a specific instance of a cloud application directly from your local machine. Under the hood, an SLB (load balancer) with port mapping is provisioned for that instance. With this capability, users can do the following:

  • SSH / SFTP access to an instance

You can connect to a specific application instance over SSH from your local machine, or upload and download files over SFTP.

  • Java remote debug

Set breakpoints in IDEA and attach remotely to the corresponding SAE instance; at the breakpoint you can inspect the full call stack and method context and diagnose the application while it is running online.

  • Connecting other diagnostic tools to an instance

Other diagnostic tools can also attach to an SAE instance through online debugging to inspect JVM information such as stacks and threads. Applicable scenarios: real-time observation, operations, and troubleshooting of online applications at runtime. A small sketch of the remote-debug setting and thread inspection follows.
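Two details above can be illustrated with plain JDK APIs. First, Java remote debugging relies on the standard JDWP agent being enabled on the JVM (for example -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005, where the port is only an example). Second, the stack and thread information that diagnostic tools read can also be dumped programmatically. The following sketch checks whether the debug agent is on and prints a simple thread dump.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Illustrative only: local JVM checks related to remote debugging and thread diagnostics.
public final class JvmDiagnostics {

    public static void main(String[] args) {
        boolean debugAgentEnabled = ManagementFactory.getRuntimeMXBean()
                .getInputArguments().stream()
                .anyMatch(arg -> arg.contains("jdwp"));
        System.out.println("JDWP remote debug agent enabled: " + debugAgentEnabled);

        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        for (ThreadInfo info : threads.dumpAllThreads(false, false)) {
            System.out.println(info.getThreadName() + " -> " + info.getThreadState());
        }
    }
}
```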

7. Best practice for development: end-cloud joint debugging

For microservice scenarios, we provide a very useful capability: end-cloud joint debugging. Based on the CloudToolkit plug-in plus a jump server, it can:

1) Register locally started services into the built-in registry of SAE in the cloud;
2) Let local services and cloud SAE services call each other.

Applicable scenarios:

1) Joint debugging while microservice applications are being migrated to SAE in the cloud;
2) Local development, testing, and verification.

16.jpg

The principle is as follows: an ECS instance inside the user's VPC acts as a jump server (proxy). Because it sits in the same VPC, this ECS can call the SAE applications directly, and it is connected back to the local machine through a reverse proxy.

When the local application starts, the CloudToolkit plug-in injects the address of the corresponding SAE registry and the relevant microservice context parameters, so the local application can reach the SAE applications through the jump server and the whole end-cloud joint debugging workflow can proceed.
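The sketch below shows why no code change is needed on either side. It is a generic Spring Cloud example (assuming a discovery-client starter on the classpath and a registry address supplied at startup, which is roughly what the CloudToolkit plug-in injects); the caller looks up "inventory-service" by name, so the provider may be a local process or an application running on SAE. All names here are made up.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

// Illustrative only: a consumer that resolves its provider through the shared registry.
@SpringBootApplication
public class OrderApplication {

    @Bean
    @LoadBalanced  // resolve service names through the registry instead of fixed IPs
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }

    public static void main(String[] args) {
        SpringApplication.run(OrderApplication.class, args);
    }
}

@RestController
class OrderController {

    private final RestTemplate restTemplate;

    OrderController(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    @GetMapping("/order/check")
    public String check() {
        // Same call whether inventory-service runs locally or on SAE.
        return restTemplate.getForObject("http://inventory-service/stock", String.class);
    }
}
```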

About the Author

Xu Chengming (alias Jingxiao) has worked on back-end R&D for EDAS and SAE in the aPaaS field and has lived through the shifts in cloud native and Serverless technology trends.


Origin blog.51cto.com/13778063/2664059