Changing the Future of Development | Exploring the Synergy of Serverless and AI

In recent years, serverless computing and artificial intelligence have profoundly changed the way applications are developed.

Serverless computing lets applications be built and run without managing the underlying infrastructure, while artificial intelligence lets applications make intelligent decisions based on data and examples. Together with cloud computing, these technologies have opened the door to a whole new world of application development: developers can now build intelligent, scalable applications faster and more efficiently than ever before.

Application development begins with computer programming. Programming is often seen as a technical field that demands logic and problem-solving skills, but it is also a creative process: programmers can approach coding the way an artist does, with a creative spirit and a desire to produce beautiful things.

When art and computer science come together, something unique and beautiful is created.

For example, digital art is an emerging field that uses computer programs to create stunning visual effects and animations. Game development is another field where art and programming converge, with programmers working to create engaging worlds and experiences for players. Art and computer science have never really been two separate fields; in fact, the two complement each other in unique and exciting ways.


Abstraction is a basic concept of computer science: it simplifies complex systems by deliberately hiding unnecessary details and focusing on essential features. Using abstractions allows us to build more efficient, scalable, and reliable systems. At its core, computer science is problem solving, and abstraction gives us a way to break a problem down into smaller, more manageable pieces, so we can focus on solving specific aspects of it without getting bogged down in unnecessary details.

In computer programming, an interface is an example of an abstraction.

An interface defines the boundary of a module: it specifies the module's functions and responsibilities while hiding its internal implementation details. Programmers only need to care about the stable interface, not how the module changes internally. Developers build different implementations behind the interface; the details are abstracted away so that callers can focus on the behavior it provides. This abstraction makes code more flexible and adaptable to changing needs. Beyond interfaces, cloud computing now offers developers even higher levels of abstraction.
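As a minimal sketch of this idea (the `Storage` interface and its in-memory implementation are hypothetical, invented purely for illustration), a caller can depend on the interface alone:

```python
from abc import ABC, abstractmethod

class Storage(ABC):
    """A hypothetical storage interface: callers depend only on this
    contract, never on how a particular implementation stores data."""

    @abstractmethod
    def put(self, key: str, value: str) -> None: ...

    @abstractmethod
    def get(self, key: str) -> str: ...

class InMemoryStorage(Storage):
    """One possible implementation; it can be swapped for another
    (e.g. a database-backed class) without touching any caller."""

    def __init__(self) -> None:
        self._data: dict[str, str] = {}

    def put(self, key: str, value: str) -> None:
        self._data[key] = value

    def get(self, key: str) -> str:
        return self._data[key]

def save_user(store: Storage, name: str) -> str:
    # This function knows only the interface, not the implementation.
    store.put("user", name)
    return store.get("user")
```

Swapping `InMemoryStorage` for a different implementation would require no change to `save_user`, which is exactly the flexibility the abstraction buys.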

Serverless computing is a higher level computing abstraction.

So why did this computing abstraction emerge, and what problems does it solve for developers?

Let's explain it through the evolution of computing abstraction at Amazon Web Services:

Since the release of Amazon EC2 (a virtual machine-based computing instance) in 2006, Amazon Web Services has been raising the level of cloud computing abstraction to keep pace with increasingly decomposed applications and increasingly complex microservices.

In 2011, Amazon Web Services released Elastic Beanstalk, which automated application deployment, including capacity planning, load balancing, and application health monitoring. This automated approach reduced the complexity of infrastructure deployment.

In 2014, Amazon Web Services released two important services: Amazon ECS, a container service, and AWS Lambda, a serverless computing service.

Since then, developers have been able to invoke cloud computing resources through three layers: virtual machines, containers, and serverless. We believe serverless computing provides a higher level of computing abstraction than virtual machines and containers, because it further simplifies the provisioning and management of infrastructure resources, enabling automatic scaling and on-demand resource delivery.

Serverless computing achieved a higher level of abstraction in cloud computing, further reducing the complexity of infrastructure deployment and helping developers focus on writing business code. Even so, many developers, especially those focused on operations, still spend a lot of time and effort configuring and managing services such as databases, big data, machine learning, and security. They want to automate the entire lifecycle of infrastructure provisioning. Amazon Web Services has therefore continued to invest in deepening the abstraction of other cloud resources as well.

In 2011, Amazon Web Services released CloudFormation, which enables developers to model and manage infrastructure resources in an automated and secure manner.

It lets developers define and provision infrastructure resources using Infrastructure-as-Code (IaC) templates in JSON or YAML format. Amazon Web Services then successively launched the CLI, IDE tools, and SDKs. These services abstract over the raw API operations and allow developers to manage infrastructure resources using familiar programming languages and development tools.
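For instance, a minimal CloudFormation template in YAML might declare a single S3 bucket as code (the bucket name below is illustrative; S3 bucket names must be globally unique):

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal IaC example - provision one S3 bucket
Resources:
  ExampleBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-example-bucket   # illustrative name only
      VersioningConfiguration:
        Status: Enabled
```

Deploying this template has CloudFormation create, update, or delete the bucket as a managed unit, rather than the developer clicking through the console or scripting raw API calls.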

In 2019, Amazon Web Services released the Cloud Development Kit (CDK). From this point on, developers could define infrastructure resources not only with templates but also with familiar programming languages, development tools, commands, and code.

We believe the CDK implements a still higher level of cloud resource abstraction. It provides three levels of resource abstraction: Cfn resources, higher-level abstract resources, and patterns:

Cfn resources map to the basic cloud resources that Amazon Web Services provides, similar to the resource definitions and configuration in CloudFormation. Higher-level abstract resources are built on top of Cfn resources and provide more concise definitions.

For example, ec2.Instance in the CDK provides a more concise definition than the corresponding Cfn instance resource. Patterns combine multiple resources into a complete solution: the AutoScalingGroup pattern in the CDK, for example, defines a complete architecture with an elastic load balancer, an auto scaling group, a launch configuration, and associated alarms. By providing these three levels of resource abstraction, the CDK makes infrastructure as code simpler than hand-writing CloudFormation YAML templates or scripts, enabling a new level of cloud resource abstraction.

Let's use a concrete example to see in detail how the CDK abstracts resources:

First, developers can create an S3 bucket directly with the CDK, which is an abstraction over the Cfn resource. They can also use the CDK to add event handling to that bucket, an operational abstraction that feels like objects and methods in a high-level programming language. Finally, developers can use the CDK to define a typical scenario that performs a specific function and implements a specific pattern.
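A sketch of those first two steps using the CDK's Python bindings might look like the following (this is an illustration, not a deployable project: it assumes the `aws-cdk-lib` v2 package is installed, and the construct names are invented):

```python
from aws_cdk import Stack, aws_s3 as s3, aws_s3_notifications as s3n, aws_lambda as _lambda
from constructs import Construct

class ExampleStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Higher-level abstract resource: a bucket with sensible defaults
        # in one line, versus many lines of raw Cfn properties.
        bucket = s3.Bucket(self, "ExampleBucket", versioned=True)

        handler = _lambda.Function(
            self, "ExampleHandler",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="index.handler",
            code=_lambda.Code.from_inline("def handler(event, ctx): return event"),
        )

        # Event handling added like a method call on an object: the CDK
        # wires up the notification configuration and permissions for us.
        bucket.add_event_notification(
            s3.EventType.OBJECT_CREATED, s3n.LambdaDestination(handler)
        )
```

Synthesizing this stack would expand those few object-and-method calls into the full CloudFormation template, which is the abstraction at work.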

The continuous evolution of computing abstraction, together with the deepening abstraction of other resources, turns application development from complex to simple and from heavy to light, helping developers build modern applications. At the same time, building modern applications also depends on process optimization and best practices.

Based on the Amazon Web Services development team's own experience, here are best practices developers can draw on across the three phases of building a modern application:

1. Construction phase

  • Build resilient application architectures. Depending on the business scenario, either containerize the existing application or build a new serverless application;
  • Automate resource management through infrastructure as code for more efficient operations and maintenance;
  • Set up an automated release pipeline during development, use App Mesh to control network traffic for automated application delivery, and ensure the safe, reliable release of new features.

2. Process Governance Phase

  • Improve service observability to enable microservice governance. You can use the managed Prometheus service and add ADOT (AWS Distro for OpenTelemetry) for full monitoring of system operations;
  • Opt for a cloud-native database when the application is decomposed into multiple microservices, which helps achieve elasticity and agility across the overall application architecture;
  • DevSecOps is important to application governance. Security teams should be integrated into development and operations teams so that security does not become a bottleneck in the pipeline.

3. Platform Building Phase

As applications continue to iterate and be optimized, we find that the platform is the foundation of cloud native, because we rarely see a development team start an application entirely from scratch. Developers can use more platform-level services, such as databases, messaging, API gateways, and security, as shared services for coordinated development. They can also use the easy-to-use, enterprise-grade platform services available in the cloud: the Shared Services Platform (SSP), Amazon Web Services Application Composer, and Amazon CodeCatalyst.

  • Application Composer is a visual builder that lets developers compose and configure serverless applications from Amazon Web Services, with support for deployment-ready infrastructure as code. It also lets users create custom applications by combining pre-built components or building new components from scratch.
  • Amazon CodeCatalyst is an integrated service for software development teams adopting continuous integration and deployment practices in their development process. CodeCatalyst brings together all the tools a developer needs in one place. Developers can plan work, collaborate on code, and use CI/CD pipelines and tools to build, test, and deploy applications, helping teams get started quickly and letting developers focus on writing business code.

Programming languages are the foundation of computer science, and their evolution is a fascinating journey for application development.

Fortran, the first high-level programming language, was developed by IBM in the 1950s for scientific and engineering applications. Dennis Ritchie developed the C language at Bell Labs in the 1970s; C became a widely used systems programming language and is still used today in operating systems, device drivers, and embedded systems. Object-oriented languages such as C++ were developed in the 1980s, and scripting languages designed in the 1990s made it easy to learn programming and write small programs quickly. Today, languages such as Java, C#, and Swift are widely used to develop desktop and mobile applications, while Python is popular in data science and machine learning.

In recent years, artificial intelligence (AI) has transformed many industries, and now it is changing the way we write code. AI can help developers write better code faster and with fewer errors. One way AI helps programmers is through code completion; it can also help detect bugs and optimize code. Another way AI is changing coding is through code generation.

At Amazon Web Services, Amazon CodeWhisperer is an AI coding tool that generates whole-line and full-function code suggestions in the IDE in real time, based on natural language comments and surrounding code, helping developers write secure code quickly.

For example, CodeWhisperer can automatically suggest appropriate code and unit tests from the prompt "parse the csv file to extract the second field and sort it in descending order". CodeWhisperer is available as an extension for VS Code and IntelliJ, and works natively in the Amazon Cloud9 and AWS Lambda consoles.
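A plausible result of that prompt might look like the hand-written sketch below (CodeWhisperer's actual suggestion may of course differ; the function name is invented):

```python
import csv
import io

def second_fields_desc(csv_text: str) -> list[str]:
    """Parse CSV text, extract the second field of each row,
    and return those fields sorted in descending order."""
    reader = csv.reader(io.StringIO(csv_text))
    fields = [row[1] for row in reader if len(row) > 1]
    return sorted(fields, reverse=True)
```

For input `"a,2\nb,9\nc,5"` this returns `["9", "5", "2"]`.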

Today, CodeWhisperer supports Python, Java, JavaScript, TypeScript, C#, and ten other programming languages. It is a general-purpose coding tool, but it is optimized for the cloud services Amazon Web Services provides and can offer high-quality suggestions for them.

AI coding tools belong to generative AI, one of the most popular technologies today. Generative AI is AI that can create new content and ideas, including conversations, stories, images, videos, and music. Like all AI, generative AI is driven by machine learning models: very large models pre-trained on massive datasets. Recent advances in machine learning (specifically the invention of Transformer-based neural network architectures) have produced models containing billions of parameters, or variables. The road to generative AI has been long, with many breakthroughs and challenges along the way.

In 2019, cutting-edge machine learning models had around 300 million parameters; today's state-of-the-art models have over 500 billion. In other words, in just a few years the scale of these models has grown by a factor of more than 1,600. Few developers will build the underlying foundation models themselves; most of us will instead be involved in tuning those base models and doing prompt engineering.

Training large language and foundation models is very expensive, because it requires large-scale clusters of dedicated hardware with high-speed interconnects. That training cost is incurred in a few centralized locations, but inference happens close to us, so we have to look at how to optimize these models and use dedicated inference hardware to bring the cost down further.

The scale and generality of the underlying models set them apart from traditional machine learning models.

Traditional machine learning models typically perform specific tasks, such as analyzing text sentiment, classifying images, or predicting trends, and for each task users must collect labeled data, train a model, and deploy it. With a foundation model, users do not need to collect labeled data and train a separate model for each task; the same pre-trained foundation model can be adapted to a variety of tasks. A foundation model can also be customized to perform domain-specific functions for a particular business, using a fraction of the data and computation required to train a model from scratch. The potential of foundation models is very exciting, but we are still at a very early stage. While ChatGPT has been the first broad generative AI experience to catch customers' attention, most people working on generative AI quickly realize that several companies have been working on foundation models for years.

Prompt engineering is the process of designing and optimizing prompts, the input queries, for generative AI models. It is an important step in developing a generative AI application because it shapes the kind of content the model will generate. In recent years, research and development in prompt engineering has advanced considerably, and we now see more sophisticated techniques being used to generate high-quality content. One approach uses human feedback to improve the quality of generated content, allowing the model to learn from its mistakes and produce more accurate and relevant content over time. Prompt engineering is especially important for generative AI models used in specific domains, such as healthcare or finance. By designing prompts tailored to these domains, developers can ensure that models produce accurate, domain-relevant content.

Amazon Web Services has a full set of generative AI products, including:

  • Amazon Bedrock - the easiest way to build and scale generative AI applications with foundation models (FMs) from AI21 Labs, Anthropic, Stability AI, and Amazon. Bedrock gives developers a serverless generative AI experience: they can easily find suitable models, get started quickly, privately customize FMs with their own data, and use familiar Amazon Web Services tools and capabilities to integrate and deploy them into applications without managing any infrastructure. Bedrock users can choose from some of the most cutting-edge FMs available today, including AI21 Labs' Jurassic-2, Anthropic's Claude, Stability AI's Stable Diffusion, and Amazon's Titan.

  • Amazon EC2 Trn1n and Amazon EC2 Inf2 - the best price-performance infrastructure for training and inference in the cloud. Amazon Web Services has also worked closely with NVIDIA to offer developers H100 and V100 GPU products.

  • Amazon Titan - Amazon's own foundation models, pre-trained on datasets containing vast amounts of information from diverse sources, enabling customers to incorporate broader context and generalize across many domains and tasks. There will initially be two models: the first is a generative LLM for tasks such as summarization, text generation (e.g. creating blog posts), classification, open-ended question answering, and information extraction. The second is an embeddings LLM that converts text input (words, phrases, and possibly even large chunks of text) into numerical representations, called embeddings, that capture the semantic meaning of the text. While this LLM does not generate text, it is useful for applications such as personalization and search, because by comparing embeddings the model produces responses that are more relevant and contextual than word matching.

  • Services with generative AI built in, such as Amazon CodeWhisperer.
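To see why comparing embeddings beats word matching, here is a minimal sketch with made-up three-dimensional vectors (real embeddings come from a model such as the Titan embeddings LLM and have far more dimensions; every number below is invented):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Similarity of two embedding vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Invented embeddings: "car" and "automobile" share no characters,
# but a good embedding model places them close together in vector space.
emb = {
    "car":        [0.90, 0.10, 0.00],
    "automobile": [0.85, 0.15, 0.05],
    "banana":     [0.00, 0.20, 0.95],
}

def nearest(query: str) -> str:
    """Return the stored term most similar to the query term."""
    return max((k for k in emb if k != query),
               key=lambda k: cosine_similarity(emb[query], emb[k]))
```

Plain word matching would find nothing in common between "car" and "automobile", while the embedding comparison ranks them as closest, which is the property search and personalization systems exploit.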

Together, we explored the evolution of application development, as well as technological innovations in serverless and artificial intelligence. Whether you're a software developer, solutions architect, data scientist, or DevOps engineer, let's leverage new technologies to create the future of development.

Please continue to follow the Build On Cloud WeChat official account to learn more about technology sharing and cloud development trends for developers!


Author: Zheng Yubin

Senior developer evangelist at Amazon Web Services, with 20 years of experience in the ICT industry and digital transformation practice, focusing on Amazon Web Services' cloud native and cloud security technologies. With 18 years of experience as an architect, he has provided consulting and technical implementation for data center construction and software-defined data center solutions to finance, education, manufacturing, and Fortune 500 enterprise users.

Article source: https://dev.amazoncloud.cn/column/article/646482761dcde2352204b1a6?sc_medium=regulartraffic&sc_campaign=crossplatform&sc_channel=CSDN 


Origin blog.csdn.net/u012365585/article/details/130894266