LLM use cases and tasks

You might think that LLMs and generative AI are mainly concerned with chat. After all, chatbots are very much in the spotlight. But next-word prediction, the basic concept behind chat, underlies many different capabilities, starting with a basic chatbot.
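
Below is a minimal sketch of what next-word prediction looks like in code. The Hugging Face transformers library and the small gpt2 checkpoint are assumptions for illustration; the lecture itself does not name a specific model or library.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small pretrained causal language model (assumption: gpt2).
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Score every vocabulary token as a possible continuation of the prompt.
inputs = tokenizer("The best thing about machine learning is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# The single most likely next token -- the core operation behind chat.
next_id = int(logits[0, -1].argmax())
print(tokenizer.decode([next_id]))
```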

However, you can use this conceptually simple technique to perform a variety of other text generation tasks. For example, you can ask the model to write an article based on a prompt.
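
Repeating next-word prediction many times yields free-form generation. Here is a sketch using the transformers text-generation pipeline; the sampling settings and the gpt2 model are assumptions, not part of the lecture.

```python
from transformers import pipeline

# Generate a continuation of a prompt (assumption: gpt2 as the model).
generator = pipeline("text-generation", model="gpt2")
prompt = "Write a short article about renewable energy:"
result = generator(prompt, max_new_tokens=100, do_sample=True)
print(result[0]["generated_text"])
```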

Or you can ask it to summarize a dialogue you provide as the prompt; the model uses this data, along with its understanding of natural language, to generate a summary.
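
A sketch of dialogue summarization follows; the pipeline's default checkpoint, the example dialogue, and the length limits are all assumptions for illustration.

```python
from transformers import pipeline

# Summarize a short dialogue (assumption: the pipeline's default checkpoint).
summarizer = pipeline("summarization")
dialogue = (
    "Customer: My package never arrived and I need it for a trip next week. "
    "Agent: I'm sorry about that. I'll ship a replacement with express delivery today."
)
summary = summarizer(dialogue, max_length=30, min_length=5)
print(summary[0]["summary_text"])
```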

You can use the model to perform a variety of translation tasks, such as traditional translation between two different languages, for example French and German, or English and Spanish.
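
A translation sketch in the same style; the task name follows transformers conventions, and the default model it loads is an assumption, not something the lecture specifies.

```python
from transformers import pipeline

# Translate English to French (assumption: the pipeline's default model).
translator = pipeline("translation_en_to_fr")
result = translator("Machine learning is transforming how we work.")
print(result[0]["translation_text"])
```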

You can also translate natural language into machine code. For example, you can ask the model to write Python code that returns the mean of each column in a DataFrame, and the model will generate code that you can pass to an interpreter.
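
The code the model returns for this request might look something like the sketch below; the example DataFrame is made up for illustration.

```python
import pandas as pd

# A hypothetical DataFrame standing in for the user's data.
df = pd.DataFrame({"height": [1.6, 1.7, 1.8], "weight": [60, 70, 80]})

# mean() is column-wise by default, returning one mean per column.
print(df.mean())
```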

You can use LLMs for small, focused tasks like information retrieval. In this example, you ask the model to identify all the people and places mentioned in a news article. This is called named entity recognition, a word-level classification task. The knowledge encoded in the model's parameters allows it to carry out this task correctly and return the requested information to you.
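
A minimal named entity recognition sketch; the default checkpoint, the aggregation strategy, and the example sentence are assumptions added for illustration.

```python
from transformers import pipeline

# Tag people, places, and organizations in a sentence
# (assumptions: default checkpoint, simple aggregation of word pieces).
ner = pipeline("ner", aggregation_strategy="simple")
sentence = "Angela Merkel visited Paris to meet officials from the United Nations."
for entity in ner(sentence):
    print(f'{entity["entity_group"]:6} {entity["word"]}')
```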

Finally, an active area of development is augmenting LLMs by connecting them to external data sources or using them to invoke external APIs. This capability lets you provide the model with information it did not learn during pre-training and enables it to interact with the real world.
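
One common pattern is to fetch external data and place it in the prompt. The sketch below is purely illustrative: fetch_weather and call_llm are hypothetical stand-ins for a real weather API client and a real model endpoint, neither of which the lecture names.

```python
# `fetch_weather` and `call_llm` are hypothetical placeholders.
def fetch_weather(city: str) -> str:
    # In practice this would call an external weather API.
    return f"Current weather in {city}: 18C, light rain."

def call_llm(prompt: str) -> str:
    # In practice this would send the prompt to a hosted LLM.
    return "Yes, take an umbrella: light rain is expected."

# Inject fresh, external information the model could not have seen in pre-training.
context = fetch_weather("London")
prompt = f"Using this information:\n{context}\n\nShould I bring an umbrella today?"
print(call_llm(prompt))
```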

You will learn more about how to do this in week 3 of the course. Developers have found that as the scale of the underlying models grows from hundreds of millions of parameters to billions, and even hundreds of billions, the language understanding a model possesses also increases. This language understanding, stored in the model's parameters, is what processes, reasons about, and ultimately solves the tasks you give it.

But it is also true that smaller models can be fine-tuned to perform well on specific, focused tasks. You will learn more about how to do this in week 2 of the course. The rapid growth in the capabilities that LLMs have demonstrated over the past few years is largely due to the architecture that powers them. Let's move on to the next video for a closer look.

Reference

https://www.coursera.org/learn/generative-ai-with-llms/lecture/7zFPm/llm-use-cases-and-tasks
