Explore the unknown and start building AI-native applications now! The WAVE SUMMIT workshops await your participation

Do you want to master the secrets of large model development? Are you eager for hands-on practice? If your heart is full of enthusiasm and anticipation, then the workshops specially organized for WAVE SUMMIT 2023 will be your launch pad. Focused on AI development and large model applications, the workshops invite outstanding AI software engineers and product experts to share their technical experience and guide every participant through the latest developments in AI. Here you can learn proven approaches to AI development and master the key technologies of large model applications. The WAVE SUMMIT 2023 workshops offer a unique opportunity to learn, exchange ideas, and explore, helping you accelerate the development and application of AI technology.

Course Details

End-to-end practice of large model application development and deployment

Activity time

13:00-13:45

Event Location

Workshop zone in the exhibition area

Session overview

The AI Studio Application Center is a platform for showcasing and experiencing model capabilities: with a few simple code operations, developers can deploy a model as an application that visitors can try directly. On the AI Studio platform, developers can build large model applications end to end. The platform covers model and content resources across the AI field, offers a one-click online development environment backed by powerful cloud computing power, and supports deploying models online as applications with a visual, interactive experience. In this workshop, an AI Studio product manager will take you from 0 to 1 through the entire process of getting started with large model application development and building innovative applications.

  • Introduction to AI Studio project development
  • Project development based on the AI Studio SDK and PaddlePaddle open-source toolkits
  • Gradio-based application interface development and deployment (see the sketch below)
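
As a taste of the Gradio portion, here is a minimal sketch of the kind of text-in, text-out interface the workshop builds and deploys. The `answer` function is a hypothetical placeholder; in the actual session it would call a model through the AI Studio SDK or a PaddlePaddle toolkit.

```python
# Minimal Gradio sketch; the model call inside answer() is a placeholder.
import gradio as gr

def answer(prompt: str) -> str:
    # Placeholder: swap in a real call via the AI Studio SDK / PaddlePaddle toolkit.
    return f"Model reply to: {prompt}"

demo = gr.Interface(fn=answer, inputs="text", outputs="text",
                    title="Large model demo")

if __name__ == "__main__":
    demo.launch()  # AI Studio can then host the app for visual experience
```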

Hands-on experience: try out selected AI Studio applications
Hands-on operation:

  • Create a new AI Studio project
  • Quickly install the AI Studio SDK and PaddlePaddle model toolkits
  • Rapid notebook development of prompt chaining and model inference (a sketch follows this list)
  • Gradio-based interface development and debugging
  • Application deployment
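
The prompt-chaining step might look like the notebook-style sketch below, where the output of a first prompt is fed into a second one. `call_llm` is a hypothetical stand-in for the actual model call (for example via the AI Studio SDK); the exact API is not shown here.

```python
# Notebook-style prompt chaining sketch; call_llm is a hypothetical helper
# standing in for the real AI Studio SDK / PaddlePaddle model call.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("Replace with the SDK call used in the workshop")

def summarize_then_translate(document: str) -> str:
    # Step 1: ask the model for a short summary of the document.
    summary = call_llm(f"Summarize the following text in three sentences:\n{document}")
    # Step 2: chain the first answer into a second prompt.
    return call_llm(f"Translate this summary into English:\n{summary}")
```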

Speaker introduction

Xiong Siyan

Baidu PaddlePaddle AI Studio Product Lead

Shi Yixin

Baidu PaddlePaddle Product Manager

AI Studio zero-code large model development practice

Activity time

15:40-16:25

Event Location

Workshop zone in the exhibition area

Session overview

The newly launched AI Studio large model community lets you chat in real time with other large model enthusiasts and experience and create large-model-powered applications. Developers can create a large model application in just three simple steps: no coding background is required, and anyone with an idea and the relevant data can build an application painlessly. In this workshop, an AI Studio product manager will introduce the large model community and demonstrate how to create a generative dialogue application based on the Wenxin (ERNIE) large model.

  • Introduction to the AI Studio large model community
  • Advanced prompt development paradigms
  • Common mistakes in Q&A and role-playing scenarios and how to handle them

Hands-on experience: explore the AI Studio large model community

Hands-on operation:

  • Quickly build AI-native applications
  • Prompt engineering quick start
  • Iterative optimization of application quality
  • Data preprocessing and mounting
  • Common pitfalls and how to handle them

Speaker introduction

Xiong Siyan

Baidu PaddlePaddle AI Studio Product Lead

Shi Yixin

Baidu PaddlePaddle Product Manager

Hands-on with a voice AI development tool: NVIDIA NeMo coding practice

Activity time

13:00-13:45

Event Location

"Joining Flying Paddle Innovation Acceleration" sub-forum

Session overview

NVIDIA NeMo is an open-source toolkit for building conversational AI applications. It takes speech and text data as input and uses AI and natural language processing models to understand semantics, enabling conversion between speech and text. The NeMo toolkit can be used to build conversational solutions for voice and text interaction between humans and machines, powering application scenarios such as intelligent voice assistants, chatbots, speech translation, voice-controlled smart homes, and voice command interaction for autonomous vehicles. This session focuses on how to use NeMo to train a custom voice AI model, along with hands-on inference of a voice AI model on the Jetson NX edge computing device.

  • Introduction to NVIDIA NeMo
  • Building a speech dataset for NeMo (see the manifest sketch below)
  • Training an ASR (speech recognition) model with NeMo
  • Training a TTS (speech synthesis) model with NeMo
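
For the speech-dataset step, NeMo's ASR recipes consume JSON-lines manifest files; a minimal sketch of writing one is below. The file paths, durations, and transcripts are made-up placeholders for illustration.

```python
# Sketch: write a NeMo-style JSON-lines manifest for ASR training.
# Paths, durations, and transcripts are illustrative placeholders.
import json

samples = [
    {"audio_filepath": "data/clip_0001.wav", "duration": 3.2, "text": "turn on the light"},
    {"audio_filepath": "data/clip_0002.wav", "duration": 2.7, "text": "what is the weather today"},
]

with open("train_manifest.json", "w", encoding="utf-8") as f:
    for sample in samples:
        f.write(json.dumps(sample, ensure_ascii=False) + "\n")
```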

Hands-on experience: run voice AI model inference on a Jetson NX edge computing device

Hands-on operation:

  • Jetson NX as the experimental hardware platform
  • JupyterLab as the development environment
  • Remote login over the network via the assigned IP and port
  • Hands-on practice with NeMo code (an inference sketch follows this list)
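
On the Jetson side, the inference portion could look roughly like the sketch below: load a pretrained NeMo ASR checkpoint and transcribe a WAV file. The model name and audio path are placeholders, not necessarily what the workshop uses.

```python
# Sketch: pretrained NeMo ASR inference; model name and audio path are placeholders.
import nemo.collections.asr as nemo_asr

asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained(
    model_name="QuartzNet15x5Base-En"
)
transcripts = asr_model.transcribe(["sample.wav"])
print(transcripts[0])
```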

Speaker introduction

picture

Yipeng Li

NVIDIA Enterprise Developer Community Manager

He has many years of experience in data analysis and modeling and in natural language processing development for AI, with extensive practical experience and insight in conversational AI technologies such as automatic speech recognition, natural language processing, and speech synthesis. He has built intelligent question-answering systems based on entity extraction from legal, financial, and insurance documents, as well as an intelligent retrieval system for scientific literature based on NLP knowledge extraction and knowledge graph (KG) construction.

Hands-on practice with a high-speed LLM inference engine on Intel® Xeon® CPUs

Activity time

13:00-13:45

Event Location

"Industry-Education Integration Talent Co-education" sub-forum

Session overview

xFasterTransformer is a large language model (LLM) inference engine highly optimized for the Intel Xeon CPU platform. It is compatible with mainstream model formats such as Hugging Face and NVIDIA FasterTransformer, supports inference for mainstream large language models such as OPT, LLaMA, ChatGLM, and Falcon in FP16/BF16/INT8 data formats, makes full use of the hardware features of the Xeon CPU platform for acceleration, and scales well across nodes. This session introduces the core architecture and technical implementation of xFasterTransformer, demonstrates its inference performance on the Intel Xeon SPR platform, and walks through how to use xFasterTransformer to accelerate large model inference on the Xeon CPU platform.

  • Introduction to the xFasterTransformer architecture and implementation
  • Introduction to the Intel SPR/HBM hardware platform

Hands-on experience: accelerate LLM inference with xFasterTransformer on a remote Intel Xeon SPR-HBM server used as the experimental hardware platform (a usage sketch follows).
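
The hands-on portion plausibly follows the pattern from the xFasterTransformer examples: convert a Hugging Face checkpoint into the engine's format, then pair a Hugging Face tokenizer with xfastertransformer.AutoModel. The sketch below assumes that pattern; the paths are placeholders, and the exact API and arguments should be treated as an assumption rather than a definitive recipe.

```python
# Sketch of BF16 LLM inference with xFasterTransformer on a Xeon host.
# Paths are placeholders; the AutoModel/generate usage follows the pattern in the
# project's examples and should be treated as an assumption here.
import xfastertransformer
from transformers import AutoTokenizer

TOKENIZER_PATH = "/models/llama-2-7b-hf"   # original Hugging Face checkpoint
MODEL_PATH = "/models/llama-2-7b-xft"      # checkpoint converted for xFasterTransformer

tokenizer = AutoTokenizer.from_pretrained(TOKENIZER_PATH, trust_remote_code=True)
model = xfastertransformer.AutoModel.from_pretrained(MODEL_PATH, dtype="bf16")

input_ids = tokenizer("What is WAVE SUMMIT?", return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_length=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```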

Speaker introduction

picture

Meng Chen

Intel Senior AI Software Engineer

He works on the research and development of deep learning frameworks and high-performance computing and has deep AI expertise. As a PaddlePaddle contributor, he has extensive practical experience in developing and optimizing the performance of distributed training and inference.

Friendly reminders

  • Seats are limited, so it is recommended to arrive 5-10 minutes early.
  • Please bring your laptop to the venue; if its battery life is short, bring a charger as well.
  • Please register a PaddlePaddle AI Studio account and complete your profile information in advance.

Workshop schedule

[Image: workshop schedule]

Please follow the WAVE SUMMIT 2023 official website and social media channels for the latest summit information and registration notices. Let's launch the future of AI together and explore the infinite possibilities of large models. We look forward to seeing you there, starting your AI journey, and sharing this feast of knowledge and innovation!

Source: blog.csdn.net/PaddlePaddle/article/details/132229655