LangChain, a powerful tool for LLM (large language model) application development, takes you into the AI world


(Figure: LangChain component diagram)

What is LangChain?

First of all, LangChain is a framework that lets developers build applications on top of LLMs (large language models).

It can be understood as scaffolding built around various LLMs: it encapsulates the components related to LLMs and "chains" them together, which simplifies the development of LLM applications and lets developers build complex LLM applications quickly.

To use a rough analogy from a Java engineer's perspective, LangChain is something like Spring or Spring Boot: a framework that helps developers build applications faster.

LangChain framework components

Models (I/O): integrations with various types of models.

Outline

    · Prompts: templatize, dynamically select, and manage model inputs

    · Language models: Call language models through common interfaces

    · Output parsers: Extract information from model output


Prompts component: Contains Prompt templates and Example selectors.


Prompt templates:

  · Instructions for language models

  · A set of few-shot examples to help the language model generate better responses

  · A question for the language model

Examples, in order: TemplateFormat, MessageTemplate, FewShotPromptTemplate, and Example selectors (a sketch for each follows below).

TemplateFormat:

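The original code screenshot is not preserved here; as a stand-in, a minimal PromptTemplate sketch using the classic langchain 0.0.x API (the template text and variable names are illustrative):

```python
from langchain.prompts import PromptTemplate

# A reusable template with a single input variable.
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)

print(prompt.format(product="colorful socks"))
# -> What is a good name for a company that makes colorful socks?
```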

MessageTemplate:

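Again the screenshot is missing; a small sketch of a chat message template, assuming ChatPromptTemplate is what the original demonstrated (the style/text values are made up):

```python
from langchain.prompts import ChatPromptTemplate

template = (
    "Translate the text delimited by triple backticks into a style that is {style}. "
    "text: ```{text}```"
)
chat_prompt = ChatPromptTemplate.from_template(template)

# format_messages returns a list of chat messages ready to send to a chat model.
messages = chat_prompt.format_messages(
    style="calm and polite English",
    text="Arrr, me blender lid flew off and splattered me kitchen walls!",
)
print(messages[0].content)
```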

FewShotPromptTemplate:

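A minimal FewShotPromptTemplate sketch (the antonym examples are illustrative, not from the original article):

```python
from langchain.prompts import FewShotPromptTemplate, PromptTemplate

# A couple of worked examples the model should imitate.
examples = [
    {"word": "happy", "antonym": "sad"},
    {"word": "tall", "antonym": "short"},
]

example_prompt = PromptTemplate(
    input_variables=["word", "antonym"],
    template="Word: {word}\nAntonym: {antonym}",
)

few_shot_prompt = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    prefix="Give the antonym of every input.",
    suffix="Word: {input}\nAntonym:",
    input_variables=["input"],
)

print(few_shot_prompt.format(input="big"))
```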

Example selectors:

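The original example selector screenshot is not preserved; a sketch using LengthBasedExampleSelector, which keeps only as many few-shot examples as fit a length budget (SemanticSimilarityExampleSelector is another common choice):

```python
from langchain.prompts import FewShotPromptTemplate, PromptTemplate
from langchain.prompts.example_selector import LengthBasedExampleSelector

examples = [
    {"word": "happy", "antonym": "sad"},
    {"word": "tall", "antonym": "short"},
    {"word": "energetic", "antonym": "lethargic"},
    {"word": "sunny", "antonym": "gloomy"},
]
example_prompt = PromptTemplate(
    input_variables=["word", "antonym"],
    template="Word: {word}\nAntonym: {antonym}",
)

# Select only as many examples as fit within the length budget.
example_selector = LengthBasedExampleSelector(
    examples=examples,
    example_prompt=example_prompt,
    max_length=25,
)

dynamic_prompt = FewShotPromptTemplate(
    example_selector=example_selector,
    example_prompt=example_prompt,
    prefix="Give the antonym of every input.",
    suffix="Word: {input}\nAntonym:",
    input_variables=["input"],
)
print(dynamic_prompt.format(input="big"))
```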

Language models:

  ·   LLMs

  ·   Chat models

LLMs: Models that take text strings as input and return text strings.

gpt-3.5-turbo:

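The screenshot is not preserved; since gpt-3.5-turbo is an OpenAI chat-completions model, a minimal sketch (assuming OPENAI_API_KEY is set in the environment) uses the ChatOpenAI wrapper and shows both the plain-string and the message-based call styles:

```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage

# Assumes OPENAI_API_KEY is set in the environment.
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)

# String in, string out.
print(llm.predict("What is 1 + 1?"))

# The same model called through the chat-message interface.
messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="Translate 'hello' into French."),
]
print(llm(messages).content)
```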

Streaming:

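A minimal streaming sketch: with streaming enabled, tokens arrive as they are generated and the stdout callback prints each one immediately (the prompt text is illustrative):

```python
from langchain.llms import OpenAI
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

# streaming=True makes tokens arrive incrementally;
# the callback prints each token to stdout as it comes in.
llm = OpenAI(
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
    temperature=0,
)
llm("Write me a short poem about sparkling water.")
```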

Chat models: Chat models are a variant of language models: instead of a plain text string, they take a list of chat messages as input and return a chat message (the gpt-3.5-turbo sketch above shows both call styles).

Caching:

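A minimal caching sketch with the in-memory cache (SQLiteCache and others also exist); repeated identical calls are answered from the cache instead of hitting the API:

```python
import langchain
from langchain.cache import InMemoryCache
from langchain.llms import OpenAI

# Enable a process-wide in-memory cache for LLM calls.
langchain.llm_cache = InMemoryCache()

llm = OpenAI(temperature=0)
llm("Tell me a joke")  # first call goes to the API
llm("Tell me a joke")  # identical call is served from the cache, much faster
```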

Output parsers:

  · Get format instructions

  · Parse

  · Parse with prompt

Examples, in order: DateTimeParser, EnumParser, ListParser, and OutputParser (a sketch for each follows below).

DateTimeParser:

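A sketch with DatetimeOutputParser (the class name in langchain differs slightly from the label above); the date string stands in for a model answer:

```python
from langchain.output_parsers import DatetimeOutputParser

parser = DatetimeOutputParser()
# Instructions to append to the prompt so the model answers in the expected format.
print(parser.get_format_instructions())
# Parse a model answer into a Python datetime object.
print(parser.parse("2023-06-30T18:53:55.000000Z"))
```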

EnumParser:

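Assuming the original used langchain's enum output parser, a small sketch (the Color enum is made up for illustration):

```python
from enum import Enum
from langchain.output_parsers.enum import EnumOutputParser

class Color(Enum):
    RED = "red"
    GREEN = "green"
    BLUE = "blue"

parser = EnumOutputParser(enum=Color)
print(parser.parse("red"))   # -> Color.RED
```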

ListParser:

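A sketch with CommaSeparatedListOutputParser, which turns a comma-separated model answer into a Python list:

```python
from langchain.output_parsers import CommaSeparatedListOutputParser

parser = CommaSeparatedListOutputParser()
print(parser.get_format_instructions())
print(parser.parse("red, green, blue"))   # -> ['red', 'green', 'blue']
```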

OutputParser:

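The generic OutputParser example is not preserved; a sketch with StructuredOutputParser, which parses a JSON-style answer into named fields (the schema fields and the sample reply are illustrative):

```python
from langchain.output_parsers import ResponseSchema, StructuredOutputParser

schemas = [
    ResponseSchema(name="gift", description="Was the item bought as a gift? true or false."),
    ResponseSchema(name="delivery_days", description="How many days did delivery take?"),
]
parser = StructuredOutputParser.from_response_schemas(schemas)

# Add these instructions to the prompt so the model replies with a JSON block.
format_instructions = parser.get_format_instructions()

# Parse a model reply of that shape into a dict.
reply = '```json\n{"gift": "true", "delivery_days": "2"}\n```'
print(parser.parse(reply))   # -> {'gift': 'true', 'delivery_days': '2'}
```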

Memory: Memory is about retaining state across a user's interactions with the language model. Those interactions are captured as ChatMessages, so memory boils down to ingesting, capturing, transforming, and extracting knowledge from a sequence of chat messages. Generally, each type of memory can be used in two ways: as standalone functions that extract information from a sequence of messages, or as a component plugged into a chain. Memory can return multiple pieces of information (for example, the last N messages plus a summary of all earlier messages).


Outline

    · ConversationBufferMemory

    · ConversationBufferWindowMemory

    · ConversationTokenBufferMemory

    · ConversationSummaryMemory

ConversationBufferMemory:

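A minimal sketch of ConversationBufferMemory plugged into a ConversationChain (the dialogue lines are illustrative):

```python
from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

llm = ChatOpenAI(temperature=0)
memory = ConversationBufferMemory()   # keeps the full conversation history
conversation = ConversationChain(llm=llm, memory=memory, verbose=True)

conversation.predict(input="Hi, my name is Andrew.")
conversation.predict(input="What is 1 + 1?")
conversation.predict(input="What is my name?")   # answered from the buffered history
print(memory.buffer)
```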

ConversationBufferWindowMemory:

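A sketch of the window variant, which keeps only the last k exchanges instead of the whole history:

```python
from langchain.memory import ConversationBufferWindowMemory

memory = ConversationBufferWindowMemory(k=1)   # remember only the most recent exchange
memory.save_context({"input": "Hi"}, {"output": "What's up?"})
memory.save_context({"input": "Not much, just hanging"}, {"output": "Cool"})

# Only the last exchange is returned.
print(memory.load_memory_variables({}))
```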

ConversationTokenBufferMemory:

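A sketch of the token-limited variant, which keeps only as much recent history as fits within a token budget (an LLM is passed in for token counting; tiktoken must be installed):

```python
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationTokenBufferMemory

llm = ChatOpenAI(temperature=0)   # used to count tokens
memory = ConversationTokenBufferMemory(llm=llm, max_token_limit=50)
memory.save_context({"input": "AI is what?!"}, {"output": "Amazing!"})
memory.save_context({"input": "Backpropagation is what?"}, {"output": "Beautiful!"})
print(memory.load_memory_variables({}))
```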

ConversationSummaryMemory:

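A sketch of the summary variant, where an LLM keeps a running summary of the conversation instead of the raw messages:

```python
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationSummaryMemory

llm = ChatOpenAI(temperature=0)
memory = ConversationSummaryMemory(llm=llm)
memory.save_context({"input": "Hi, I'm Andrew"}, {"output": "Hello Andrew, how can I help?"})
memory.save_context(
    {"input": "I need to prepare a demo of LangChain for tomorrow"},
    {"output": "Sounds good, a short end-to-end example usually works well."},
)
print(memory.load_memory_variables({}))   # {'history': 'The human introduces ...'}
```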

Chains:


Outline

    · LLMChain

    · SequentialChain

               · SimpleSequentialChain

               · SequentialChain

    · RouterChain

LLMChain:

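A minimal LLMChain sketch: the chain simply glues a prompt template and a model together (the prompt text is illustrative):

```python
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

llm = ChatOpenAI(temperature=0.9)
prompt = ChatPromptTemplate.from_template(
    "What is the best name to describe a company that makes {product}?"
)

chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run(product="queen-size sheet sets"))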

SimpleSequentialChain: A simple sequential chain feeds the output of each chain in as the input of the next chain. A simple sequential chain has exactly one input variable and one output variable.

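A sketch of two LLMChains wired in sequence, where the single output of the first becomes the single input of the second (prompts are illustrative):

```python
from langchain.chains import LLMChain, SimpleSequentialChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

llm = ChatOpenAI(temperature=0.9)

# Chain 1: product -> company name.
chain_one = LLMChain(llm=llm, prompt=ChatPromptTemplate.from_template(
    "What is the best name to describe a company that makes {product}?"))

# Chain 2: company name -> 20-word description.
chain_two = LLMChain(llm=llm, prompt=ChatPromptTemplate.from_template(
    "Write a 20-word description for the following company: {company_name}"))

overall_chain = SimpleSequentialChain(chains=[chain_one, chain_two], verbose=True)
print(overall_chain.run("queen-size sheet sets"))
```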

SequentialChain: A sequential chain contains multiple chains, and the outputs of some chains can be used as inputs to others. Sequential chains support multiple input and output variables.

(Figure: SequentialChain flow chart)

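A sketch with named inputs and outputs, so a later chain can consume the output of an earlier one (the review-translation scenario is illustrative):

```python
from langchain.chains import LLMChain, SequentialChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

llm = ChatOpenAI(temperature=0.9)

chain_one = LLMChain(
    llm=llm,
    prompt=ChatPromptTemplate.from_template(
        "Translate the following review to English:\n\n{Review}"),
    output_key="English_Review",
)
chain_two = LLMChain(
    llm=llm,
    prompt=ChatPromptTemplate.from_template(
        "Summarize the following review in one sentence:\n\n{English_Review}"),
    output_key="summary",
)

overall_chain = SequentialChain(
    chains=[chain_one, chain_two],
    input_variables=["Review"],
    output_variables=["English_Review", "summary"],
    verbose=True,
)
print(overall_chain({"Review": "Je trouve le goût médiocre, la mousse ne tient pas."}))
```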

RouterChain: A router chain behaves like an if/else branch: based on the input, it selects the matching route (destination chain) for the next step. A router chain generally has one input and one output.

(Figure: RouterChain flow chart)

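The two code screenshots are not preserved; a sketch of a router setup using MultiPromptChain.from_prompts, where a router LLM picks the destination prompt that best matches the input (the prompt names, descriptions, and templates are made up for illustration):

```python
from langchain.chains.router import MultiPromptChain
from langchain.chat_models import ChatOpenAI

physics_template = """You are a very smart physics professor. \
Answer the question concisely.

Question: {input}"""

math_template = """You are a very good mathematician. \
Answer the question step by step.

Question: {input}"""

prompt_infos = [
    {"name": "physics", "description": "Good for physics questions", "prompt_template": physics_template},
    {"name": "math", "description": "Good for math questions", "prompt_template": math_template},
]

chain = MultiPromptChain.from_prompts(ChatOpenAI(temperature=0), prompt_infos, verbose=True)
print(chain.run("What is black body radiation?"))
```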

Agents: Some applications need to call LLMs and other tools in a flexible sequence that depends on user input. The agent interface provides that flexibility: an agent has access to a set of tools and decides which of them to use based on the user's input. An agent can use multiple tools and can feed the output of one tool into the next.


Outline

    · Action agents: at each time step, decide the next action using the outputs of all previous actions

    · Plan-and-execute agents: decide the complete sequence of actions up front, then execute them all without updating the plan

Examples, in order: MathAndWikiAgent, PythonREPLAgent, and MultiFunctionsAgent (a sketch for each follows below).

MathAndWikiAgent:

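A sketch assuming the built-in llm-math and wikipedia tools (the wikipedia tool needs `pip install wikipedia`; the questions are illustrative):

```python
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(temperature=0)

# "llm-math" wraps a calculator chain; "wikipedia" searches Wikipedia.
tools = load_tools(["llm-math", "wikipedia"], llm=llm)

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    handle_parsing_errors=True,
    verbose=True,
)
agent.run("What is 25% of 300?")
agent.run("Which book did Tom M. Mitchell, the CMU professor, write?")
```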

PythonREPLAgent:

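A sketch of the Python agent, which writes Python code and runs it in a REPL tool to answer the question (the task text is illustrative):

```python
from langchain.agents.agent_toolkits import create_python_agent
from langchain.tools.python.tool import PythonREPLTool
from langchain.chat_models import ChatOpenAI

agent = create_python_agent(
    ChatOpenAI(temperature=0),
    tool=PythonREPLTool(),   # the agent writes Python and executes it here
    verbose=True,
)
agent.run(
    "Sort these customers by last name and then first name "
    "and print the result: [['Harrison', 'Chase'], ['Lang', 'Chain'], ['Dolly', 'Too']]"
)
```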

MultiFunctionsAgent:

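The original screenshot is not preserved, so what "MultiFunctionsAgent" showed exactly is a guess; a sketch of an agent given several custom function tools via the OpenAI-functions agent type (the tool names and the 0613 model are illustrative):

```python
from datetime import date
from langchain.agents import AgentType, initialize_agent, tool
from langchain.chat_models import ChatOpenAI

@tool
def today(text: str) -> str:
    """Returns today's date; the input string is ignored."""
    return str(date.today())

@tool
def word_length(word: str) -> str:
    """Returns the number of characters in the given word."""
    return str(len(word))

llm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo-0613")
agent = initialize_agent(
    [today, word_length],
    llm,
    agent=AgentType.OPENAI_FUNCTIONS,   # lets the model call the tools as functions
    verbose=True,
)
agent.run("How many characters are in the word 'LangChain', and what is today's date?")
```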

Index: Indexing refers to structuring documents so that language models (LLMs) can interact with them as effectively as possible. This module contains utility functions for working with documents.


Outline

    · Embeddings: Embeddings are numerical representations of information (such as text, documents, images, audio, etc.). Embedding allows information to be converted into vector form so that computers can better understand and process it.

    · Text Splitters: When you need to process longer text, you have to split it into chunks; a text splitter is a tool for splitting long text into smaller pieces.

    · Vectorstores: Vectorstores store and index vector embeddings from natural language processing models, used to understand the meaning and context of text strings, sentences, and entire documents, resulting in more accurate and relevant search results. See available vector databases.

The code example is as follows:

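The code screenshot is not preserved; a minimal end-to-end sketch (the file name, model, and vector store choices are illustrative) that loads a document, splits it, embeds the chunks into a vector store, and answers a question over it:

```python
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import TextLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS

# 1. Load and split the document (the file name is a placeholder).
docs = TextLoader("product_catalog.txt").load()
splits = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50).split_documents(docs)

# 2. Embed the chunks and index them in a vector store (FAISS here; Chroma also works).
db = FAISS.from_documents(splits, OpenAIEmbeddings())

# 3. Wire the retriever into a question-answering chain.
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(temperature=0),
    chain_type="stuff",
    retriever=db.as_retriever(),
)
print(qa.run("Which products in the catalog are waterproof?"))
```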

The application example langchain-chatglm-6B flow chart is as follows:

(Figures: langchain-chatglm-6B flow chart, parts 1 and 2)

Evaluation:

Outline

    · Example generation

    · Manual evaluation (and debugging)

    · LLM-assisted evaluation

Example generation:

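The two screenshots are not preserved; a sketch that uses QAGenerateChain to generate question/answer examples from document chunks (continuing from the Index sketch above, so `splits` is assumed to exist):

```python
from langchain.chat_models import ChatOpenAI
from langchain.evaluation.qa import QAGenerateChain

# `splits` are the document chunks built in the Index example above.
example_gen_chain = QAGenerateChain.from_llm(ChatOpenAI(temperature=0))
new_examples = example_gen_chain.apply_and_parse([{"doc": d} for d in splits[:5]])
print(new_examples[0])   # a generated question/answer pair about the document
```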

Manual evaluation (and debugging):

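A short sketch: flipping langchain.debug prints every intermediate chain and LLM call with its inputs and outputs (`qa` and `new_examples` come from the sketches above; the parsed key name may vary slightly across versions):

```python
import langchain

langchain.debug = True                      # dump every chain/LLM call for inspection
qa.run(new_examples[0]["query"])            # `qa` is the RetrievalQA chain from the Index example
langchain.debug = False
```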

LLM-assisted evaluation:

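A sketch that runs the QA chain on the generated examples and then lets an LLM grade the answers with QAEvalChain (again continuing from the sketches above):

```python
from langchain.chat_models import ChatOpenAI
from langchain.evaluation.qa import QAEvalChain

predictions = qa.apply(new_examples)
eval_chain = QAEvalChain.from_llm(ChatOpenAI(temperature=0))
graded_outputs = eval_chain.evaluate(new_examples, predictions)

for i, graded in enumerate(graded_outputs):
    print(i, graded["text"])   # typically "CORRECT" / "INCORRECT"
```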

Last edited on: 2023.06.30 18:53:55


Source: blog.csdn.net/javastart/article/details/134482642