Microsoft AIGC in a Day - Exploring Artificial Intelligence and Industry Application Practice Salon - Event Recap

Let’s take a look at the promotional poster first

Insert image description here

Event overview

  • Event theme: Exploring Artificial Intelligence and Industry Application Practice Salon

Microsoft Power Platform joins hands with GPT to explore AI + low-code development at the application level. A technology feast on artificial intelligence and industry application practice is coming!

On September 16, at the "Exploring Artificial Intelligence and Industry Application Practice Salon", technical experts from the AI and low-code fields will jointly explore the unlimited potential of low-code development through technical talks, hands-on case studies, and other formats, bringing the latest developments and practical techniques in AI + low-code development and showing how to use GPT technology to improve application development efficiency and user experience!

The event is divided into 2 venues:

  • From 10 a.m. to 5:30 p.m., Venue A hosts a series of technical talks by well-known speakers;
  • Venue B hosts a Microsoft & NVIDIA hands-on workshop from 1:30 to 5:30 p.m., where two experts lead a hands-on AIGC experience;

Event recap

Since it was my first time visiting Microsoft's Shanghai office, I accidentally followed the navigation to the back door and was wondering how to get in. After waiting a few minutes, a cleaning lady came by, and I followed her inside.

Insert image description here

It turns out there is a front entrance, where sign-in and so on take place.

Insert image description here
The front entrance is quite fun: there is a photo check-in spot, sign-in, and giveaways.
Insert image description here

Follow the arrows to Venue A.
Insert image description here

I arrived more than half an hour early, but the staff were still setting up.
Insert image description here

Venue A notes

Because there was a lot of content and many key points, I will only write down the parts that I personally found most important.

Post-GPT era: Prompt is code:

  • GPT-1: the first to use pre-training to achieve effective language-understanding training;
  • GPT-2: uses transfer learning to apply the pre-trained knowledge to a variety of tasks, improving language-understanding capability;
  • DALL·E: extends the approach to another modality (images);
  • GPT-3: mainly focuses on generalization ability and few-shot generalization;
  • GPT-3.5: instruction following and tuning are the biggest breakthroughs;
  • GPT-4: the engineering era begins;
  • Plugin: Plugins (March 2023) are about the ecosystem (first stage);
  • Function: Function Calling (June 2023) is about the ecosystem (advanced stage), and prompt is code (see the sketch below);
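
A minimal sketch of what Function Calling looks like in code (the get_weather function, its schema, and the model name are illustrative assumptions; the exact call shape depends on the SDK version):

```python
from openai import OpenAI

client = OpenAI()  # assumes the API key is configured in the environment

# Describe a callable function to the model; instead of free text, the model can
# return structured arguments for it -- in this sense, "prompt is code".
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical function exposed to the model
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What's the weather like in Shanghai?"}],
    tools=tools,
)
print(response.choices[0].message.tool_calls)  # structured call(s) chosen by the model
```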

Three paradigms of human-machine collaboration:

  • Embedding mode: humans and AI work together, with AI embedded as a tool in the workflow
  • Co-pilot mode: humans do most of the work, with AI assisting
  • AI Agents (autonomous agent) mode: AI completes most of the work

Insert image description here

The biggest problem with LLMs:

Problem: at present, the biggest problem with LLMs is the lack of up-to-date knowledge and domain-specific knowledge.

Solution: for this problem, the industry has two main approaches: fine-tuning and retrieval-augmented generation (RAG).

Technical routes for fine-tuning large models:

  • Full fine-tuning, FFT (Full Fine Tuning)
  • Parameter-efficient fine-tuning, PEFT (Parameter-Efficient Fine Tuning) [this is the commonly used approach]

Parameter-efficient fine-tuning, PEFT (Parameter-Efficient Fine Tuning):
1. Prompt Tuning

The parameters of the base model stay frozen; for each specific task, a small set of additional parameters is trained and invoked as needed when that task is performed.
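
A minimal sketch of the soft-prompt idea behind Prompt Tuning (the token count and embedding dimension are illustrative assumptions; PyTorch is used only for illustration):

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Prompt Tuning sketch: the base model stays frozen; only these few
    'virtual token' embeddings are trained for a specific task."""
    def __init__(self, n_tokens=20, dim=768):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(n_tokens, dim) * 0.02)

    def forward(self, input_embeds):
        # Prepend the learned prompt embeddings to the frozen model's input embeddings.
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)

# Usage: pass the concatenated embeddings into the frozen base model;
# during training, only SoftPrompt.prompt receives gradients.
```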

2. Prefix Tuning

Without changing the large model itself, appropriate conditions are prepended to the prompt context to guide the model to perform better on the specific task.

3. LoRA

Hypothesis: the large language models we see today are over-parameterized, and behind the over-parameterization lies a low-dimensional essential model; to adapt to a specific downstream task, it is enough to train a small, task-specific low-rank update.
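
A minimal sketch of the LoRA idea (the dimensions, rank r, and scaling alpha are illustrative assumptions): the pre-trained weight W stays frozen, and only the low-rank update B·A is trained.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """LoRA sketch: y = W x + (alpha / r) * B A x, with the pre-trained W frozen."""
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad = False                      # freeze the pre-trained weight
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)   # low-rank factor A (r x d_in)
        self.B = nn.Parameter(torch.zeros(out_features, r))         # low-rank factor B (d_out x r), init to zero
        self.scale = alpha / r

    def forward(self, x):
        # Frozen base projection plus the trainable low-rank correction.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```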

4. QLoRA

QLoRA is a quantized version of LoRA: on top of LoRA, it further quantizes the weights originally stored in 16 bits down to 4 bits while largely preserving model quality, which greatly cuts cost.
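
A minimal sketch of how QLoRA is typically used with the Hugging Face transformers and peft libraries (the model name and LoRA hyperparameters are illustrative assumptions):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Load the base model with its weights quantized to 4 bits (NF4).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",           # illustrative model name
    quantization_config=bnb_config,
)

# Attach trainable LoRA adapters on top of the frozen 4-bit base model.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # illustrative target modules
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```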

What is more interesting is the comparison between RAG and FT:

  • RAG : This approach integrates retrieval (or search) capabilities into LLM text generation. It combines a retrieval system (obtaining relevant document fragments from a large corpus) and LLM (using the information in these fragments to generate answers). Essentially, RAG helps the model "find" external information to improve its response.

  • Fine-tuning : This is the process of taking a pre-trained LLM and training it further on a smaller, task-specific dataset to adapt it to a specific task or improve its performance. Through fine-tuning, we adjust the model's weights based on the data, making it better suited to the application's unique needs.
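
A toy sketch of the RAG flow (the corpus and the character-frequency "embedding" are stand-ins; a real system would use an embedding model and a vector store):

```python
import numpy as np

# Toy corpus standing in for a document store.
docs = [
    "RAG retrieves relevant passages and feeds them to the LLM.",
    "Fine-tuning further trains a pre-trained model on a task-specific dataset.",
    "QLoRA quantizes the base model to 4 bits before LoRA fine-tuning.",
]

def embed(text):
    # Stand-in embedding: character-frequency vector. A real system would call
    # an embedding model here instead.
    vec = np.zeros(256)
    for ch in text.lower():
        vec[ord(ch) % 256] += 1
    return vec / (np.linalg.norm(vec) + 1e-9)

def retrieve(query, k=2):
    # Rank documents by cosine similarity to the query and keep the top k.
    q = embed(query)
    scores = [float(q @ embed(d)) for d in docs]
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

question = "How does RAG differ from fine-tuning?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
print(prompt)  # this prompt would then be sent to the LLM for the final answer
```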

Another development tip that was shared: you can set the OpenAI key as an environment variable and then read it in code:
Insert image description here

Insert image description here

This will also improve development efficiency
Insert image description here
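
Since the screenshots are not reproduced here, a minimal sketch of the tip in Python (assuming the key is stored in an environment variable named OPENAI_API_KEY and the official openai SDK is used; exact calls depend on the SDK version):

```python
import os
from openai import OpenAI

# Read the key from the environment instead of hard-coding it in source control.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```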

AIGC technical classification


  • Text: summary, Q&A, content completion;
  • Image: image editing, image generation, especially AI drawing;
  • Audio: conversion of speech to text, as well as imitation and automatic generation;
  • Video: generation, imitation, editing, post-processing;
  • Programming: generating code, debugging, and answering programming questions;
  • Chatbot: automated intelligent customer service;
  • Learning platform: a platform or ecosystem that provides computing power, environment, and model framework;
  • Search: further refining results and automatically casting a wider net;
  • Games: Illustration design and character prototype generation lower the entry barrier;
  • Data: structural design, original collection, summary and refinement, and discovery of patterns;
  • Vertical industries: medical, engineering, law, education, individual entrepreneurship, etc.;

AIGC becomes a special assistant in financial copywriting work:

  • Preface to the new book
  • Cooperation plan
  • Business notification
  • Research outline
  • Event promotion
  • Exam questions

Which kinds of people can make better use of it?

Content generation

  • Bank copywriting work
  • Automatically generating responses to customer questions at call centers, customer service centers, and the like
  • Generating personalized UI for websites

Summary ability

  • Customer Service: Summary of customer conversation logs
  • Financial reports, analyst articles, etc.
  • Public opinion monitoring, social media, trend summary

Code generation

  • Converting between natural language and SQL
  • Code comments
  • Documentation
  • Simple logic questions
  • Syntax bugs

Semantic retrieval

  • Search for reviews of a specific product or service
  • Information Development & Knowledge Mining


Reprinted from: blog.csdn.net/qq_17623363/article/details/132940599