Six innovative AI applications built on Amazon Cloud Technology's large language models

Table of contents

Preface

AI innovative applications of Amazon Cloud Technology

Amazon CodeWhisperer

Advantages of Amazon CodeWhisperer

Get more done faster

Code with confidence

Enhance code security

Use your favorite tools

Customize CodeWhisperer for better suggestions

How to use Amazon CodeWhisperer

Step 1

Step 2

Video tutorial

Amazon SageMaker Neo

How it works

Advantages

Main features

Amazon Bedrock

How it works

Choose from a range of leading base models

Build agents that can dynamically call APIs to perform complex business tasks

Use RAG to extend FM’s capabilities by connecting FM to company-specific data sources

Support data security and compliance standards

Advantages

Why choose Amazon Bedrock?

Amazon OpenSearch Serverless

Advantages

Use Cases

Amazon QuickSight

Main features

Built for everyone

Amazon HealthScribe

AWS Health Services

AWS Healthcare and Life Sciences Industry Solutions

AWS Healthcare & Life Sciences Solutions Program

AWS Powering Healthcare and Life Sciences


Preface

        With the emergence of ChatGPT, generative AI (AI-generated content, also known as AIGC) has swept the world with unstoppable momentum. From business leaders in every industry to millions of programmers and developers, people are thinking about how to use generative AI to improve work efficiency, drive business innovation, and gain a competitive advantage.

AI innovative applications of Amazon Cloud Technology

        Amazon Cloud Technology has been deeply engaged in artificial intelligence and machine learning for more than 20 years and has served more than 100,000 customers worldwide. In practice, it is committed to continuously exploring breakthroughs with its own product and technology advantages, using the power of AI to create new momentum for industry development.

Amazon CodeWhisperer

Advantages of Amazon CodeWhisperer

Get more done faster

        Trained on billions of lines of code, CodeWhisperer can generate code suggestions in real time, from code snippets to full functions, based on your comments and existing code. Bypass time-consuming coding tasks and speed up builds that use unfamiliar APIs.

Code with confidence

        CodeWhisperer can flag or filter code suggestions similar to open source training data. Get the repository URL and license for relevant open source projects so you can more easily view them and add attribution.

Enhance code security

        Scan your code to detect hard-to-find vulnerabilities and get code recommendations to fix them immediately. Detect vulnerabilities such as those outlined by the Open Worldwide Application Security Project (OWASP), as well as code that does not follow best practices for cryptographic libraries and other similar security standards.

Use your favorite tools

        CodeWhisperer fits the way you work. Choose from 15 programming languages, including Python, Java, and JavaScript, and your favorite integrated development environments (IDEs), including VS Code, IntelliJ IDEA, AWS Cloud9, the AWS Lambda console, JupyterLab, and Amazon SageMaker Studio.

Customize CodeWhisperer for better suggestions

        You can customize CodeWhisperer to understand your internal libraries, APIs, packages, classes, and methods to generate more relevant recommendations and significantly speed up development.

How to use Amazon CodeWhisperer

Step 1

        Install the latest AWS Toolkit plugin in your integrated development environment (IDE). Supported IDEs include Visual Studio (VS) Code and the JetBrains IDEs (IntelliJ IDEA, PyCharm, CLion, GoLand, WebStorm, Rider, PhpStorm, RubyMine, and DataGrip). Then install the latest CodeWhisperer extension. CodeWhisperer is built into the AWS Cloud9 and AWS Lambda consoles.

Step 2

        For VS Code and the JetBrains IDEs, open the AWS extension panel and select the Get Started button under Developer Tools > CodeWhisperer. In the popup that appears, select the "Log in with Builder ID" option. Sign up with your email address and log in with your AWS Builder ID.
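Once signed in, CodeWhisperer suggests code inline as you type. The snippet below illustrates the comment-driven workflow: the comment describes the intent, and the function body is the kind of completion CodeWhisperer might propose. It is an illustrative sketch, not a captured CodeWhisperer output.

```python
# Illustrative only: in a supported IDE, you would type the comment below,
# CodeWhisperer would propose a completion inline, and pressing Tab accepts it.

# function to check whether a string is a valid IPv4 address
def is_valid_ipv4(address: str) -> bool:
    parts = address.split(".")
    if len(parts) != 4:
        return False
    for part in parts:
        # reject empty parts, non-digits, and leading zeros like "01"
        if not part.isdigit() or (len(part) > 1 and part[0] == "0"):
            return False
        if int(part) > 255:
            return False
    return True
```

Reviewing and adjusting the suggestion before accepting it remains the developer's responsibility, exactly as with any generated code.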

Video tutorial

Gather more information about Amazon CodeWhisperer using the many resources available on AWS: https://aws.amazon.com/cn/codewhisperer/resources/#Getting_started

Amazon SageMaker Neo

        Amazon SageMaker Neo enables developers to optimize machine learning (ML) models for inference on SageMaker in the cloud and on supported devices at the edge.

        ML inference is the process of using a trained machine learning model to make predictions. After training a model with high accuracy, developers often spend a lot of time and effort tuning the model to achieve high performance. When inferring in the cloud, developers often turn to large instances with high memory and processing power at a higher cost to achieve better throughput. To perform inference on edge devices with limited compute and memory, developers often spend months manually tuning models to achieve acceptable performance within the device hardware limitations.

        Amazon SageMaker Neo automatically optimizes machine learning models for inference on cloud instances and edge devices, running faster without losing accuracy. First, choose a machine learning model that has been built using DarkNet, Keras, MXNet, PyTorch, TensorFlow, TensorFlow-Lite, ONNX, or XGBoost and trained in Amazon SageMaker or anywhere else. Then, select your target hardware platform, which can be a SageMaker managed instance or an edge device based on an Ambarella, Apple, ARM, Intel, MediaTek, Nvidia, NXP, Qualcomm, RockChip, or Texas Instruments processor. With just one click, SageMaker Neo optimizes the trained model and compiles it into an executable file. The compiler uses machine learning models to apply performance optimizations to extract the best available performance for your model on cloud instances or edge devices. You can then deploy the model as a SageMaker endpoint or on a supported edge device and start making predictions.
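The compile step described above maps to the CreateCompilationJob API. The sketch below builds an illustrative request for a PyTorch model; the S3 locations, role ARN, and input shape are placeholder assumptions, and the actual boto3 call is shown commented out because it requires AWS credentials.

```python
import json

# Sketch of a SageMaker Neo compilation job request, assuming a PyTorch
# model archive already uploaded to S3. All ARNs, buckets, and shapes
# below are placeholders, not real resources.
def build_neo_compilation_job(job_name: str) -> dict:
    return {
        "CompilationJobName": job_name,
        "RoleArn": "arn:aws:iam::123456789012:role/SageMakerNeoRole",  # placeholder
        "InputConfig": {
            "S3Uri": "s3://my-bucket/model.tar.gz",                    # placeholder
            "DataInputConfig": json.dumps({"input0": [1, 3, 224, 224]}),
            "Framework": "PYTORCH",
        },
        "OutputConfig": {
            "S3OutputLocation": "s3://my-bucket/compiled/",            # placeholder
            "TargetDevice": "jetson_nano",  # or a cloud target such as an ml_c5 instance family
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 900},
    }

# To submit the job (requires AWS credentials):
# import boto3
# sm = boto3.client("sagemaker")
# sm.create_compilation_job(**build_neo_compilation_job("neo-demo"))
```

After the job completes, the compiled artifact in the output location can be deployed to a SageMaker endpoint or copied to the edge device.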

        When inferring in the cloud, SageMaker Neo speeds up inference and saves costs by creating inference-optimized containers in SageMaker hosting. SageMaker Neo saves developers months of manual tuning time by automatically tuning models for selected operating systems and processor hardware when inferring at the edge.

        Amazon SageMaker Neo uses Apache TVM and partner-provided compilers and acceleration libraries to provide the best available performance for a given model and hardware target. Under the Apache Software License, AWS contributes compiler code to the Apache TVM project and runtime code to the Neo-AI open source project to enable processor vendors and device manufacturers to rapidly innovate on a common, compact runtime.​​    

How it works

Advantages

  • Improve performance up to 25x

        Amazon SageMaker Neo automatically optimizes machine learning models to speed up processing by up to 25x without sacrificing accuracy. SageMaker Neo uses the toolchain best suited for your model and target hardware platform, while providing a simple, standard API for model compilation.

  • Less than 1/10 the runtime footprint

        The Amazon SageMaker Neo runtime consumes only about 1/10 of the footprint of deep learning frameworks such as TensorFlow or PyTorch. Instead of installing the framework on the target hardware, the compact Neo runtime library is loaded into the ML application. Unlike compact frameworks such as TensorFlow-Lite, the Neo runtime can run models trained in any framework supported by the Neo compiler.

  • Faster time to production

        Amazon SageMaker Neo makes it easy to prepare models for deployment on virtually any hardware platform with just a few clicks in the Amazon SageMaker console. You get all the advantages of manual tuning without any effort.

Main features

  • Optimize inference without compromising accuracy

        Amazon SageMaker Neo uses research-driven machine learning compiler techniques to optimize models for the target hardware. SageMaker Neo automatically applies these optimization techniques to speed up your models without sacrificing accuracy.

  • Supports commonly used machine learning frameworks

        Amazon SageMaker Neo converts models from framework-specific formats such as DarkNet, Keras, MXNet, PyTorch, TensorFlow, TensorFlow-Lite, ONNX, or XGBoost into a common representation, optimizes the computation, and generates a hardware-specific executable file for the target SageMaker managed instance or edge device.

  • Provides a compact runtime via a standard API

        Amazon SageMaker Neo runs on 1MB of storage and 2MB of memory, many times smaller than the storage and memory footprint of a framework, while providing a simple, universal API to run compiled models from any framework.

  • Supports common target platforms

        The Amazon SageMaker Neo runtime is supported on Android, Linux, and Windows operating systems and on processors from Ambarella, ARM, Intel, Nvidia, NXP, Qualcomm, and Texas Instruments. SageMaker Neo can also convert PyTorch and TensorFlow models into Core ML format for deployment on Apple devices on macOS, iOS, iPadOS, watchOS, and tvOS.

  • Inference-optimized containers for Amazon SageMaker managed instances

        When inferring in the cloud, Amazon SageMaker Neo provides inference-optimized containers for MXNet, PyTorch, and TensorFlow integrated with the Neo runtime. Previously, SageMaker Neo could fail to compile models that used unsupported operators. SageMaker Neo now optimizes every model: the compiler compiles the parts of the model whose operators it supports and uses the framework to run the rest. As a result, you can run any MXNet, PyTorch, or TensorFlow model in an inference-optimized container while getting better performance from the parts that can be compiled.

  • Model Partitioning for Heterogeneous Hardware

        Amazon SageMaker Neo leverages partner-provided acceleration libraries to provide the best available performance for deep learning models on heterogeneous hardware platforms with both hardware accelerators and CPUs. Acceleration libraries such as Ambarella CV Tools, Nvidia TensorRT, and Texas Instruments TIDL support a specific set of functions and operators. SageMaker Neo automatically partitions the model so that the parts with accelerator-supported operators run on the accelerator while the rest of the model runs on the CPU. In this way, SageMaker Neo takes full advantage of hardware accelerators, expanding the range of models that can run on the hardware and improving the performance of models whose operators the accelerator supports.

  • Support for Amazon SageMaker INF1 instances

        Amazon SageMaker Neo can now compile models for the Amazon SageMaker INF1 instance target. SageMaker Hosting provides managed services for inference on INF1 instances (based on AWS Inferentia chips). SageMaker Neo uses the Inferentia processor-specific Neuron compiler behind the scenes while providing a standard model compilation API, simplifying the task of preparing models for deployment on SageMaker INF1 instances while delivering the best available performance and cost savings benefits of INF1 instances.

Amazon Bedrock

        AWS has significantly expanded Amazon Bedrock, its fully managed foundation model service: it added Cohere as a foundation model provider, added the latest foundation models from Anthropic and Stability AI, and released a new Amazon Bedrock agents feature.

        Cohere builds enterprise AI platforms and cutting-edge foundation models that can generate, retrieve, and summarize information more intuitively. Claude 2, the latest language model from AI company Anthropic, is now available in Amazon Bedrock. Stability AI plans to release the latest version of its text-to-image model suite, Stable Diffusion XL 1.0 (SDXL 1.0), on Amazon Bedrock.

        The Amazon Bedrock agents feature helps developers create fully managed artificial intelligence agents (AI agents) and helps enterprises accelerate the delivery of generative AI applications. These applications can manage and perform tasks by making API calls to company systems. Amazon Bedrock agents can extend the underlying model to understand user requests, break complex tasks into multiple steps, gather more information through conversation, and take actions to fulfill user requests. With agents, applications can automate tasks for internal or external customers, such as managing retail orders and processing insurance claims. Generative AI applications serving e-commerce can not only answer simple questions but also help users complete complex tasks such as updating orders and managing transactions.

How it works

Choose from a range of leading base models

  • Choose from a variety of base models

        You can access a variety of foundation models from Amazon and other leading AI companies, such as AI21 Labs, Anthropic, Cohere, Meta, and Stability AI, and quickly experiment with them in a playground environment. This lineup includes Amazon Titan, Jurassic-2, Claude 2, Command, Llama 2, and Stable Diffusion XL, which respectively support different modalities such as text, embeddings, and images.

  • Exclusive customized basic model

        Using the Amazon Bedrock console, you can use your data to fine-tune your models to complete your company's specific tasks without writing code. Simply select training and validation datasets stored in Amazon Simple Storage Service (Amazon S3) and adjust hyperparameters if needed to achieve the best possible model performance.

  • Single API

        No matter which model you choose, you can use a single API for inference. A single API gives you the flexibility to use different models from different model providers and keep up with the latest model versions with minimal changes to your code.
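As a sketch of what "a single API" means in practice: the InvokeModel call stays the same across providers, and only the model ID and the provider-specific request body change. The model IDs and body shapes below are illustrative assumptions; check the Bedrock documentation for the exact format each provider expects.

```python
import json

# Build an InvokeModel request for different providers behind one call shape.
# Model IDs are examples and may differ by region and model version.
def build_invoke_request(model_id: str, prompt: str) -> dict:
    if model_id.startswith("anthropic."):
        body = {"prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
                "max_tokens_to_sample": 300}
    elif model_id.startswith("amazon.titan"):
        body = {"inputText": prompt}
    else:
        body = {"prompt": prompt}
    return {"modelId": model_id,
            "contentType": "application/json",
            "body": json.dumps(body)}

# To actually invoke (requires AWS credentials and model access):
# import boto3
# rt = boto3.client("bedrock-runtime")
# resp = rt.invoke_model(**build_invoke_request("anthropic.claude-v2", "Hello"))
```

Switching providers then amounts to changing the model ID and the body builder, while the calling code stays untouched.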

Build agents that can dynamically call APIs to perform complex business tasks

  • Automatically create prompts

        Amazon Bedrock automatically creates prompts based on instructions provided by developers (such as "You are an insurance agent who specializes in handling open claims"), the API schema needed to complete the task, and details of company data sources held in knowledge bases (such as the Amazon OpenSearch Serverless vector engine, Pinecone, or Redis Enterprise Cloud). Automatic prompt creation can save you weeks of experimenting with prompts for different foundation models.

  • Planning

        Amazon Bedrock agents can orchestrate user-requested tasks by breaking them down into smaller subtasks. For example, for "Send reminders to all policyholders with pending files," the agent breaks the task down into: get the claims for a specific time period, identify the required paperwork, and send the reminders. The agent determines the correct order of the tasks and handles any error conditions that arise along the way.
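The planning behavior described above can be sketched with a toy orchestrator: a request maps to an ordered list of subtasks, each runs in turn, and a failure in any handler surfaces instead of being silently ignored. All request strings, subtask names, and handlers here are invented for illustration.

```python
# Toy plan-and-execute loop, illustrative of agent planning only.
def run_agent_plan(request: str, handlers: dict) -> list:
    plans = {
        "send reminders for pending files": [
            "get_claims_for_period",
            "identify_missing_paperwork",
            "send_reminders",
        ],
    }
    steps = plans.get(request.lower())
    if steps is None:
        raise ValueError(f"no plan for request: {request}")
    results = []
    for step in steps:
        results.append(handlers[step]())  # a handler exception stops the plan
    return results

# Stub handlers standing in for real API calls to company systems.
handlers = {
    "get_claims_for_period": lambda: ["claim-1", "claim-2"],
    "identify_missing_paperwork": lambda: {"claim-1": ["accident report"]},
    "send_reminders": lambda: 1,  # number of reminders sent
}
```

In Amazon Bedrock the plan is produced by the foundation model rather than a lookup table, but the execute-in-order, stop-on-error shape is the same.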

  • Retrieval-augmented generation

        Amazon Bedrock agents securely connect to your company's data sources, automatically convert your data into numerical form, and enhance user requests with relevant information to generate more accurate and relevant responses. For example, if a user asks about the documents required for a claim, the agent looks up the information in the knowledge base of your choice (such as the Amazon OpenSearch Serverless vector engine, Pinecone, or Redis Enterprise Cloud) and provides the correct answer: "You need a driver's license, car photos, and an accident report."
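A minimal sketch of the retrieval step, assuming toy 3-dimensional vectors in place of a real embedding model and vector store: the query vector is compared to stored document vectors by cosine similarity, and the best match is prepended to the prompt as context.

```python
import math

# Cosine similarity between two equal-length vectors.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Retrieve the most similar document and augment the prompt with it.
def retrieve_and_augment(query_vec, question, store):
    best = max(store, key=lambda doc: cosine(query_vec, doc["vec"]))
    return f"Context: {best['text']}\n\nQuestion: {question}"

# Toy document store; a real system would embed these with a model such as
# Amazon Titan Embeddings and keep them in a vector database.
store = [
    {"text": "Claims require a driver's license, car photos, and an accident report.",
     "vec": [1.0, 0.1, 0.0]},
    {"text": "Premiums are due on the first of each month.",
     "vec": [0.0, 1.0, 0.2]},
]
```

The augmented prompt is then sent to the foundation model, which answers from the retrieved context rather than from its parameters alone.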

Use RAG to extend FM’s capabilities by connecting FM to company-specific data sources

  • Connect the base model to the data source

        With Amazon Bedrock knowledge bases, you can integrate foundation models with your organization's data sources to provide more accurate and relevant responses. Specify a data source (such as Amazon Simple Storage Service (Amazon S3)) from which to extract the data, an embedding foundation model (such as Amazon Titan Embeddings) to convert the data to vector format, and a target vector database (such as the Amazon OpenSearch Serverless vector engine, Pinecone, or Redis Enterprise Cloud) to store the vector data, and you can quickly add a knowledge base.

  • Implement automatic detection of data sources

        Amazon Bedrock can identify the appropriate data sources, retrieve relevant information based on user input, and integrate the retrieved information into the user's query as context to provide more accurate responses.

  • Provide source attribution

        All information retrieved from Amazon Bedrock knowledge bases comes with source attribution, increasing transparency and minimizing AI hallucinations.

Support data security and compliance standards

  • Built with comprehensive data protection and privacy protection

        Amazon Bedrock offers a variety of features that meet security and privacy requirements and is compatible with common compliance standards such as GDPR and HIPAA. In Amazon Bedrock, your content is not used to improve the base model, nor is it shared with third-party model providers. You can use Amazon PrivateLink with Amazon Bedrock to establish a private connection between your base model and your on-premises network without exposing traffic to the internet.

  • Protect your generative AI applications

        Amazon Bedrock supports encryption. Your data is always encrypted, both in transit and at rest, and you can also encrypt your data with your own keys. Using AWS Key Management Service (AWS KMS) keys, developers can create, own, and manage encryption keys, giving them complete control over how the data used for customized base models is encrypted.

  • Implement governance and audit

        Amazon Bedrock provides comprehensive monitoring and logging capabilities to support your governance and auditing needs. You can use Amazon CloudWatch to track usage metrics and build custom dashboards with the metrics you need for auditing. Additionally, you can use Amazon CloudTrail to monitor API activity and troubleshoot issues when integrating other systems into your generative AI applications. You can also choose to store metadata, requests, and responses in your Amazon Simple Storage Service (Amazon S3) bucket. Finally, to prevent potential abuse, Amazon Bedrock has implemented an automatic abuse detection mechanism.

Advantages

  • Choose a leading base model

        Amazon Bedrock provides an easy-to-use developer experience for working with a variety of high-performance FMs from leading AI companies such as AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon. Whichever model you choose, you can quickly try out a variety of FMs in the playground and use a single API for inference, which gives you the flexibility to use FMs from different providers and keep up with the latest model versions with minimal code changes.

  • Customize with your own data

        With a visual interface, you can privately customize FMs with your own data without writing any code. Simply select training and validation datasets stored in Amazon Simple Storage Service (Amazon S3) and adjust hyperparameters if needed to achieve the best possible model performance.

  • Fully managed agents that dynamically call APIs to perform tasks

        Build agents capable of performing complex business tasks, from booking travel and processing insurance claims to creating ad campaigns, preparing tax returns, and managing inventory, by dynamically calling company systems and APIs. Amazon Bedrock's fully managed agents extend FMs' inference capabilities to break down tasks, create an orchestration plan, and execute that plan.

  • Provides native support for RAG, extending FM’s capabilities with proprietary data

        With native support for retrieval-augmented generation (RAG), Amazon Bedrock can connect FMs to your company's data sources, enriching user requests with relevant proprietary data to generate more accurate and relevant responses.

  • Data Security and Compliance Certification

        Amazon Bedrock offers a variety of features that support security and privacy requirements, is HIPAA eligible, and is GDPR compliant. In Amazon Bedrock, your content is not used to improve the base model, nor is it shared with third-party model providers. Your data in Amazon Bedrock is always encrypted in transit and at rest, and you can encrypt your data using your own keys. You can use AWS PrivateLink with Amazon Bedrock to establish a private connection between FM and your Amazon Virtual Private Cloud (Amazon VPC) without exposing traffic to the internet.

Why choose Amazon Bedrock?

        Amazon Bedrock is a fully managed service that offers high-performance foundation models (FMs) from leading AI companies such as AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon through a single API, along with a broad set of capabilities needed to build generative AI applications, simplifying development while maintaining privacy and security. With Amazon Bedrock's comprehensive capabilities, you can easily experiment with a variety of popular FMs, customize them privately with your data using techniques such as fine-tuning and retrieval-augmented generation (RAG), and create managed agents that perform complex business tasks, from booking travel and processing insurance claims to creating ad campaigns and managing inventory, all without writing any code. Because Amazon Bedrock is serverless, you don't have to manage any infrastructure, and you can securely integrate and deploy generative AI capabilities into your applications using the AWS services you already know.

Amazon OpenSearch Serverless

        Amazon OpenSearch Serverless is the serverless option in Amazon OpenSearch Service. As a developer, you can use OpenSearch Serverless to run petabyte-scale workloads without having to configure, manage, and scale an OpenSearch cluster. You get the interactive millisecond response times of OpenSearch Service with the simplicity of a serverless environment.

Advantages

  • Get started in seconds    

Use familiar open source ingestion and pipelines without application changes.

  • Scale on demand

OpenSearch Serverless can automatically provision and continuously adjust to quickly ingest data and respond in milliseconds as usage patterns and needs change.

  • Cut costs

Automatically scale resources to provide the capacity your application needs, paying only for what you use without impacting data ingestion.

  • Store and search vector embeddings

Power your generative artificial intelligence (AI) applications with simple, scalable, and performant vector searches.​ 
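As a sketch of what a vector workload looks like, the mapping below is the kind of k-NN index definition used with OpenSearch; the field names, dimension, and index name are placeholders, and creating the index against a serverless vector collection would be done with the opensearch-py client.

```python
# Sketch of an OpenSearch k-NN index body for storing embeddings
# alongside the source text. The "embedding" field name and the
# dimension are placeholders chosen for illustration.
def knn_index_body(dimension: int) -> dict:
    return {
        "settings": {"index.knn": True},
        "mappings": {
            "properties": {
                "embedding": {"type": "knn_vector", "dimension": dimension},
                "text": {"type": "text"},
            }
        },
    }

# Example (requires opensearch-py and a serverless collection endpoint):
# from opensearchpy import OpenSearch
# client = OpenSearch(hosts=[...], ...)
# client.indices.create(index="docs", body=knn_index_body(1536))
```

Queries then send a query vector against the `embedding` field and get back the nearest documents, which is the retrieval half of a RAG pipeline.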

Use Cases

  • Achieve flexibility with variable workloads

Seamlessly scale application resources without provisioning required compute power and memory.

  • Meet strict service level agreements (SLAs)

Pre-initialize application resources and achieve response times in seconds.

  • Create development and testing environments

Quickly create development and test environments that automatically scale based on unpredictable usage and reduce time to market.

  • Build ML-enhanced search experiences

Power generative AI applications by hosting vector and text searches to generate more precise, accurate search results.

Amazon QuickSight

        Amazon QuickSight allows everyone in your enterprise to understand your data by asking questions using natural language, exploring through interactive dashboards, or automatically finding patterns and outliers powered by machine learning.        

        QuickSight supports millions of dashboard views for customers every week, allowing its end users to make better data-driven decisions.

Main features

  • Enable BI for everyone with QuickSight Q

Ask conversational questions about your data and use Q's machine learning (ML)-powered engine to get relevant visualizations, eliminating the need for time-consuming data preparation by authors and administrators.​ 

  • Perform advanced analytics with ML Insights

Discover hidden insights in your data, perform accurate predictions and what-if analysis, or add easy-to-understand natural language narratives to your dashboards by leveraging AWS’s machine learning expertise.

  • Embed analytics to make your app stand out

Easily embed interactive visualizations and dashboards, complex dashboard authoring, or natural language query capabilities into your applications to differentiate your user experience and unlock new monetization opportunities.

Built for everyone

  • End users across the enterprise can ask business questions about their data in natural language and get accurate answers through relevant visualizations. QuickSight Q uses machine learning to interpret the intent of the question and analyze the right data to quickly provide accurate answers to business questions.

  • Business analysts can seamlessly create serverless, pixel-perfect dashboards in minutes: securely connect to petabytes of data in Amazon S3, query it using Amazon Athena, and share dashboards simultaneously with tens of thousands of users in Amazon QuickSight, all without any client software or server infrastructure.

  • Developers can use robust AWS APIs to deploy and scale embedded analytics to hundreds of thousands of users in their applications. Share data visualizations and insights with everyone in your enterprise, whether via the web, mobile, email, or embedded apps.

  • Because QuickSight automatically scales with workloads, administrators can deliver consistent performance. QuickSight ships updates every two weeks, ensuring all users have the latest functionality without the downtime, version conflicts, or compatibility issues of traditional BI solutions. QuickSight is also the first BI service to offer pay-per-session pricing, making deployment cost-effective at scale.
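For developers embedding dashboards, the sketch below assembles illustrative parameters for the QuickSight GenerateEmbedUrlForRegisteredUser API; the account ID, user ARN, and dashboard ID are placeholders, and the actual call (commented out) requires AWS credentials.

```python
# Sketch of the request parameters for embedding a QuickSight dashboard
# for a registered user. All identifiers below are placeholders.
def embed_url_params(dashboard_id: str) -> dict:
    return {
        "AwsAccountId": "123456789012",  # placeholder
        "UserArn": "arn:aws:quicksight:us-east-1:123456789012:user/default/analyst",  # placeholder
        "ExperienceConfiguration": {
            "Dashboard": {"InitialDashboardId": dashboard_id}
        },
        "SessionLifetimeInMinutes": 60,
    }

# To obtain the embeddable URL (requires AWS credentials):
# import boto3
# qs = boto3.client("quicksight")
# url = qs.generate_embed_url_for_registered_user(
#     **embed_url_params("my-dashboard-id"))["EmbedUrl"]
```

The returned URL is short-lived and is typically placed in an iframe by the host application.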

Amazon HealthScribe

        AWS Healthcare and Life Sciences Industry Solutions

        From bench to bedside, accelerating innovation at every stage

        Healthcare and life sciences organizations are reimagining how they collaborate, make data-driven clinical and surgical decisions, support precision medicine, and reduce the cost of care. To help healthcare and life sciences organizations achieve their business and technology goals, AWS Healthcare and Life Sciences Industry Solutions offers a set of AWS services and AWS Partner solutions used by thousands of customers around the world.

AWS Health Services

  • AWS HealthScribe

Automatically generate clinical notes within your application by analyzing patient-clinician conversations.

  • AWS HealthLake

Provides a complete view of health data for an individual or patient population.

  • AWS HealthImaging

Store, transform and analyze medical images at petabyte scale in the cloud.

  • AWS HealthOmics

Transform genomic, transcriptomic, and other omics data into insights.

AWS Healthcare and Life Sciences Industry Solutions

  • Healthcare Solutions

Transform the healthcare industry with purpose-built solutions.

  • Life Sciences Solutions

Discover life sciences solutions that can help you bring therapies to market faster.

  • Genomics Solutions

Achieve breakthroughs with genomics solutions that unlock profound insights.

AWS Healthcare & Life Sciences Solutions Program

  • AWS Health Equity Program

Provide AWS credits and technical expertise to selected organizations to help them address health inequities affecting underserved or underrepresented communities around the world.

  • AWS Diagnostics R&D Program

Provide AWS credits and technical expertise to selected organizations in four program areas: early disease screening, diagnostics, prognosis, and public health genomics.

  • AWS Healthcare Accelerator

A four-week technology, business, and mentorship accelerator opportunity open to healthcare startups seeking to leverage AWS to help solve the most significant challenges facing the healthcare industry.

AWS Powering Healthcare and Life Sciences

  • Purpose-built health services and solutions

Learn new ways to leverage purpose-built services and solutions to reduce costs, increase operational and clinical efficiencies, and ultimately improve patient care, supporting healthcare and life sciences organizations of all sizes.

  • A network of trusted health partners

Innovate faster by making it easy for customers to start building on AWS, leveraging the vast network of industry-leading AWS Partners and the AWS Marketplace, a comprehensive digital catalog of third-party software, services, and data.

  • Experts at the intersection of health and technology

Work with a dedicated team of healthcare and life sciences industry experts to support your organization's digital transformation and innovation initiatives. AWS health experts have an average of 18 years of experience.

  • Industry-leading security and reliability

Improve security and simplify compliance with more than 130 HIPAA-eligible services. AWS lets customers enjoy the scale and reliability of a truly global cloud infrastructure.

Source: blog.csdn.net/lbcyllqj/article/details/134221123