TruLens LangChain Integration Example

This walkthrough shows how to build a simple LLM chain, record its execution with TruLens, and collect feedback on the LLM's responses.

1. Install dependencies

pip install trulens_eval==0.18.3 openai==1.3.7

2. Set API keys

This example requires OpenAI and Hugging Face API keys.

import os

import openai
from dotenv import load_dotenv, find_dotenv

_ = load_dotenv(find_dotenv())

openai.api_key = os.environ['OPENAI_API_KEY']
openai.base_url = os.environ['OPENAI_BASE_URL']
# The Hugging Face key only needs to be present in the environment; the
# trulens_eval Huggingface provider reads it from there. This line also
# fails fast with a KeyError if the key is missing.
os.environ["HUGGINGFACE_API_KEY"] = os.environ['HUGGINGFACE_API_KEY']
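The code above uses python-dotenv to load the keys, so it assumes a .env file in the working directory. A sketch of its contents (the values are placeholders, not real keys):

```
OPENAI_API_KEY=sk-...
OPENAI_BASE_URL=https://api.openai.com/v1
HUGGINGFACE_API_KEY=hf_...
```

load_dotenv(find_dotenv()) searches upward from the current directory for this file and exports each entry into os.environ.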

3. Import LangChain and TruLens

from IPython.display import JSON

# Imports main tools:
from trulens_eval import TruChain, Feedback, Huggingface, Tru
from trulens_eval.schema import FeedbackResult
tru = Tru()
tru.reset_database()

# Imports from langchain to build app. You may need to install langchain first
# with the following:
# ! pip install langchain>=0.0.170
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import ChatPromptTemplate, PromptTemplate
from langchain.prompts import HumanMessagePromptTemplate

4. Create a simple LLM application

This example uses the LangChain framework with an OpenAI LLM.

full_prompt = HumanMessagePromptTemplate(
    prompt=PromptTemplate(
        template=
        "Provide a helpful response with relevant background information for the following: {prompt}",
        input_variables=["prompt"],
    )
)

chat_prompt_template = ChatPromptTemplate.from_messages([full_prompt])

llm = OpenAI(temperature=0.9, max_tokens=128)

chain = LLMChain(llm=llm, prompt=chat_prompt_template, verbose=True)

5. Send your first request

prompt_input = '现在几点了?'  # Chinese for "What time is it?" -- the Chinese input will matter for the language-match feedback later
llm_response = chain(prompt_input)

display(llm_response)

6. Initialize feedback functions

Define a language-match feedback function using Hugging Face.

# Initialize Huggingface-based feedback function collection class:
hugs = Huggingface()

# Define a language match feedback function using HuggingFace.
f_lang_match = Feedback(hugs.language_match).on_input_output()
# By default this will check language match on the main app input and main app
# output.

7. Instrument and log the chain with TruLens

tru_recorder = TruChain(chain,
    app_id='Chain1_ChatApplication',
    feedbacks=[f_lang_match])
with tru_recorder as recording:
    llm_response = chain(prompt_input)

display(llm_response)

8. Retrieve records and feedback

# The record of the app invocation can be retrieved from the `recording`:

rec = recording.get() # use .get if only one record
# recs = recording.records # use .records if multiple

display(rec)
# The results of the feedback functions can be retrieved from the record. These
# are `Future` instances (see `concurrent.futures`). You can use `as_completed`
# to wait until they have finished evaluating.

from concurrent.futures import as_completed

for feedback_future in as_completed(rec.feedback_results):
    feedback, feedback_result = feedback_future.result()

    feedback: Feedback
    feedback_result: FeedbackResult

    display(feedback.name, feedback_result.result)
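The loop above relies on concurrent.futures.as_completed, which yields each future as soon as it finishes rather than in submission order. A minimal self-contained sketch of the same pattern (the evaluate function and its scores are placeholders, not TruLens API):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def evaluate(name: str) -> tuple:
    # Stand-in for a feedback evaluation; returns (name, score).
    return name, 1.0

with ThreadPoolExecutor() as pool:
    futures = [pool.submit(evaluate, n) for n in ["language_match", "relevance"]]
    # as_completed yields futures in completion order, so results may
    # arrive in any order; collecting into a dict makes that harmless.
    results = dict(f.result() for f in as_completed(futures))

print(results)
```

In the TruLens loop, each future resolves to a (Feedback, FeedbackResult) pair instead of a (name, score) tuple, but the waiting mechanism is identical.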

9. Explore in the dashboard

tru.run_dashboard() # open a local streamlit app to explore

# tru.stop_dashboard() # stop if needed

10. Launch the dashboard from the command line

Alternatively, you can launch the dashboard by running trulens-eval from the command line in the same folder.

trulens-eval

11. (Optional) View results directly in the notebook

tru.get_records_and_feedback(app_ids=[])[0] # an empty app_ids list returns records for all apps; [0] selects the records DataFrame

Done!


Reprinted from blog.csdn.net/engchina/article/details/134897934