Try out the LangChain and Hugging Face integration on Colab

Environment preparation:

Obtain a `HUGGINGFACEHUB_API_TOKEN`: register and log in on the Hugging Face official website, then create a token.

Official website: https://huggingface.co/

Get token: https://huggingface.co/settings/tokens

Install the required packages on Colab (the code below also imports `langchain`, so install it as well):

!pip install -q langchain huggingface_hub
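With the packages installed, the token can be exported as an environment variable so that LangChain picks it up automatically (the value below is a placeholder, not a real token):

```python
import os

# Make the token available to LangChain via the environment
# ("hf_your_token_here" is a placeholder -- paste your own token)
os.environ["HUGGINGFACEHUB_API_TOKEN"] = "hf_your_token_here"
```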

【1】Creating a Question-Answering Prompt Template

from langchain import PromptTemplate

template = """Question: {question}

Answer: """
prompt = PromptTemplate(
    template=template,
    input_variables=['question']
)

# user question
question = "What is the capital city of China?"
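Filling the `{question}` slot yields the final prompt string sent to the model; `PromptTemplate.format` performs the same substitution as plain `str.format`, sketched here without LangChain:

```python
template = """Question: {question}

Answer: """

question = "What is the capital city of China?"

# Equivalent to prompt.format(question=question) on the template above
final_prompt = template.format(question=question)
print(final_prompt)
```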

【2】Use the Hugging Face Hub model "google/flan-t5-large" to answer the question. The HuggingFaceHub class connects to Hugging Face's Inference API and loads the specified model.

from langchain import HuggingFaceHub, LLMChain

# initialize Hub LLM
hub_llm = HuggingFaceHub(
    repo_id='google/flan-t5-large',
    model_kwargs={'temperature': 0},
    huggingfacehub_api_token='your huggingfacehub_api_token'
)

# create prompt template > LLM chain
llm_chain = LLMChain(
    prompt=prompt,
    llm=hub_llm
)

# ask the user question about the capital of China
print(llm_chain.run(question))

【3】Output results

beijing
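For several questions at once, `LLMChain` also exposes a `generate` method that takes a list of input dicts; a sketch of the expected input shape (the chain itself is the `llm_chain` built above):

```python
# Each dict maps the template's input variable name to a value; passing
# this list to llm_chain.generate(qa_inputs) would answer all questions
# in one batch (not run here, to avoid a live API call)
qa_inputs = [
    {"question": "What is the capital city of China?"},
    {"question": "What is the capital city of France?"},
]
```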

 

Origin blog.csdn.net/qq_23938507/article/details/131323125