The main modules of LangChain (3): Chains


1. Concept

What is LangChain?

Origin: LangChain came about when Harrison Chase was talking with people in the field who were building complex LLM applications, and he noticed parts of the development process that could be abstracted. An application may need to prompt an LLM multiple times and parse its output, which otherwise requires a lot of copy-and-paste boilerplate.

LangChain makes this development process easier. Once released, it was widely adopted by the open-source community, attracting not only many users but also many contributors.

There is also a limitation of large models themselves: they cannot perceive real-time data and cannot interact with the outside world.

LangChain is a framework for developing applications powered by large language models.

Main features:

1. Data awareness: the ability to connect a language model to other data sources.

2. Agency: allows a language model to interact with its environment. By writing tools, you can have it perform various actions, including writing and updating data.

Main values:

1. It componentizes the functionality needed to develop LLM applications and provides many easy-to-use tools.

2. It ships with ready-made chains that accomplish specific tasks, which can also be seen as improving the ease of use of the tools.

2. Main modules


LangChain provides standard, extensible interfaces and external integrations for the following modules, listed from least to most complex:

• Model I/O: interfaces with language models

• Data connection: interfaces with application-specific data

• Chains: construct sequences of calls

• Agents: let chains choose which tools to use based on high-level directives

• Memory: persists application state across multiple runs of a chain

• Callbacks: log and stream the intermediate steps of any chain

3. Chains

Chains allow us to combine multiple components into a single, coherent application. For example, we can create a chain that accepts user input, formats it with a PromptTemplate, and then passes the formatted prompt to an LLM. More complex chains can be built by combining multiple chains together, or by combining chains with other components.

Several commonly used Chains are currently built in:

• LLMChain:

This is a simple chain consisting of a PromptTemplate and an LLM: it formats the prompt template with the provided input values, passes the formatted string to the LLM, and returns the LLM's output.

1. Model configuration

from langchain.chat_models import ChatOpenAI
from langchain import LLMChain
from langchain.prompts.chat import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
)

api_base_url = "http://192.168.175.6:8000/v1"  
api_key= "EMPTY"
LLM_MODEL = "Baichuan-13b-Chat"
model = ChatOpenAI(
    streaming=False,
    verbose=True,
    # callbacks=[callback],
    openai_api_key=api_key,
    openai_api_base=api_base_url,
    model_name=LLM_MODEL
)

2. Set up the template

from langchain import PromptTemplate

template = """\
You are a naming consultant for new companies.
What are some good names for a company that makes {product}? Give at least 5 names.
"""

chat_prompt = PromptTemplate.from_template(template)
chain = LLMChain(prompt=chat_prompt, llm=model)
print(chain.run("colorful socks"))
Output:

1. Rainbow Socks Shop 2. Colorful Sock Art 3. Colorful Socks Language 4. Colorful Sock Fashion 5. Brilliant Socks

• SimpleSequentialChain:

Each step has a single input/output, the output of one step is the input of the next step.

from langchain.prompts import ChatPromptTemplate
from langchain.chains import SimpleSequentialChain

# First prompt and chain
first_prompt = ChatPromptTemplate.from_template(
    "You are a naming consultant for new companies. Suggest 2 good names for a company that makes {product}."
)
chain_one = LLMChain(llm=model, prompt=first_prompt)

# Second prompt and chain
second_prompt = ChatPromptTemplate.from_template(
    "Write a short 20-word description for the following company: {company_name}"
)
chain_two = LLMChain(llm=model, prompt=second_prompt)
# Combine the first and second chains into one
overall_simple_chain = SimpleSequentialChain(chains=[chain_one, chain_two],
                                             verbose=True
                                            )
overall_simple_chain.run("smartphones")


• SequentialChain:

Not all chains have a single fixed input and output. Sometimes intermediate chains require multiple inputs, or the chain ultimately produces multiple outputs. In such cases, consider using SequentialChain.

# This is an LLMChain that, given a play's title and the era it is set in, writes a synopsis.
template = """You are a playwright. Given the title of a play and the era it is set in, your task is to write a synopsis for that title.

Title: {title}
Era: {era}
Playwright: Here is a synopsis of the play above:"""
prompt_template = PromptTemplate(input_variables=["title", "era"], template=template)
synopsis_chain = LLMChain(llm=model, prompt=prompt_template, output_key="synopsis")

# This is an LLMChain that, given a play's synopsis, writes a review of the play.
template = """You are a professional play critic. Given the synopsis of a play, your task is to write a review of that play.

Play synopsis:
{synopsis}
Your review of the play above:"""
prompt_template = PromptTemplate(input_variables=["synopsis"], template=template)
review_chain = LLMChain(llm=model, prompt=prompt_template, output_key="review")


# This is the overall chain, which runs the two chains in sequence.
from langchain.chains import SequentialChain
overall_chain = SequentialChain(
    chains=[synopsis_chain, review_chain],
    input_variables=["era", "title"],
    # Return multiple output variables here
    output_variables=["synopsis", "review"],
    verbose=True)

overall_chain({"title": "Tragedy at Sunset on the Beach", "era": "Victorian England"})


• RouterChain:

Sometimes a single sequential chain cannot meet our needs; in that case, consider using RouterChain.

It dynamically selects the next chain to execute from a set of chains. This pattern is typically used to handle complex logic flows where the next step depends on the current input or state.

# For example, if you are building a question-answering system, you might have several chains,
# each specialized in one type of question (e.g., one for physics questions, one for math questions).
# A "RouterChain" can then inspect each question and route it to the chain best suited to handle it.
from langchain.chains.router import MultiPromptChain
from langchain.chains import ConversationChain
from langchain.chains.llm import LLMChain
from langchain.prompts import PromptTemplate

physics_template = """You are a very smart physics professor. \
You are great at answering physics questions in a concise and easy-to-understand manner. \
When you don't know the answer to a question, you admit that you don't know.

Here is a question:
{input}"""

math_template = """You are a very good mathematician. You are great at answering math questions. \
You are so good because you are able to break hard problems down into their component parts, \
answer the component parts, and then put them together to answer the broader question.

Here is a question:
{input}"""

prompt_infos = [
    {"name": "physics", "description": "Good for answering physics questions", "prompt_template": physics_template},
    {"name": "math", "description": "Good for answering math questions", "prompt_template": math_template},
]


destination_chains = {}
for p_info in prompt_infos:
    name = p_info["name"]
    prompt_template = p_info["prompt_template"]
    prompt = PromptTemplate(template=prompt_template, input_variables=["input"])
    chain = LLMChain(llm=model, prompt=prompt)
    destination_chains[name] = chain

# The default chain
default_chain = ConversationChain(llm=model, output_key="text")

destinations = [f"{p['name']}: {p['description']}" for p in prompt_infos]
print(destination_chains.keys())
print(destinations)

dict_keys(['physics', 'math'])
['physics: Good for answering physics questions', 'math: Good for answering math questions']

from langchain.chains.router.llm_router import LLMRouterChain, RouterOutputParser
# e.g. 'physics: Good for answering physics questions', 'math: Good for answering math questions'
destinations = [f"{p['name']}: {p['description']}" for p in prompt_infos]
destinations_str = "\n".join(destinations)

router_prompt_template = """\
Given a raw text input to a language model, select the model prompt best suited for the input.
You will be given the names of the available prompts and a description of what each prompt is best suited for. You may also revise the original input if you think that revising it will ultimately lead to a better response from the language model.

<< FORMATTING >>
Return a markdown code snippet with a JSON object formatted to look like:
```json
{{{{
    "destination": string \\ name of the prompt to use or "DEFAULT"
    "next_inputs": string \\ a potentially modified version of the original input
}}}}
```

REMEMBER: "destination" MUST be one of the candidate prompt names specified below, OR it can be "DEFAULT" if the input is not well suited for any of the candidate prompts.
REMEMBER: "next_inputs" can just be the original input if you don't think any modifications are needed.

<< CANDIDATE PROMPTS >>
{destinations}

<< INPUT >>
{{input}}

<< OUTPUT >>
"""
router_template = router_prompt_template.format(destinations=destinations_str)
router_prompt = PromptTemplate(
    template=router_template,
    input_variables=["input"],
    output_parser=RouterOutputParser(),
)
router_chain = LLMRouterChain.from_llm(model, router_prompt)

# Build the MultiPromptChain
chain = MultiPromptChain(
    router_chain=router_chain,
    destination_chains=destination_chains,
    default_chain=default_chain,
    verbose=True,
)
print(chain.run("What is black body radiation?"))


print(chain.run("What is 7 times 24, multiplied by 60?"))


print(chain.run("What is a rainbow?"))

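A note on the quadruple braces in the router template above: the template is formatted twice, once by `str.format` (which injects `{destinations}`) and once by the PromptTemplate (which injects `{input}`), and each pass halves the doubled braces. A plain-Python sketch of how the escaping resolves (the template text here is a shortened stand-in, not the full router prompt):

```python
# Literal braces must be doubled once per formatting pass:
# "{{{{" survives pass 1 as "{{" and pass 2 as "{".
template = (
    '<< FORMATTING >>\n'
    '{{{{\n'
    '    "destination": string\n'
    '}}}}\n'
    '<< CANDIDATE PROMPTS >>\n'
    '{destinations}\n'
    '<< INPUT >>\n'
    '{{input}}\n'
)

# Pass 1: str.format injects the destinations; "{{input}}" collapses to
# "{input}", leaving a valid placeholder for the PromptTemplate to fill.
after_first = template.format(destinations="physics: Good for answering physics questions")

# Pass 2: the PromptTemplate performs the equivalent of this second format
# call with the user's question.
after_second = after_first.format(input="What is black body radiation?")
print(after_second)
```

After the second pass the JSON skeleton contains single braces, which is exactly what the router LLM is asked to emit.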


Origin: blog.csdn.net/qq128252/article/details/132846929