LangChain installation and getting-started example

1. Introduction

LangChain is a framework for developing applications powered by language models.

Official website

https://www.langchain.com/

Chinese official website

https://www.langchain.com.cn/

Python LangChain documentation

https://python.langchain.com.cn/docs/get_started/introduction

https://python.langchain.com/docs/get_started/introduction

2. Module

There are six core modules in LangChain:

  1. Model input and output (Model I/O): the interface with the language model

    Three different types of models can be used in LangChain:

    LLMs

    Large language models (LLMs) take a text string as input and return a text string as output.

    Chat models

    A chat model takes a list of chat messages as input and returns a chat message.

    Text embedding models

    A text embedding model takes text as input and returns a list of floats.

  2. Retrieval: the interface with application-specific data
  3. Chains: build sequences of calls
  4. Agents: let a chain choose which tools to use based on high-level directives
  5. Memory: persist application state between runs of a chain
  6. Callbacks: log and stream the intermediate steps of any chain

3. Installation

1. Permanently set pip to use a mirror in mainland China (optional; useful if PyPI is slow)

pip config set global.index-url https://mirrors.aliyun.com/pypi/simple

2. Install LangChain. If you do not want to change the pip source permanently, you can specify the mirror for a single install:

pip install langchain -i https://mirrors.aliyun.com/pypi/simple

4. Implement a simple program based on an LLM

1. What is a large language model (LLM)?

Large language models (LLMs) are deep learning models trained on large amounts of text data; they can generate natural language text and understand its meaning. LLMs can handle a variety of natural language tasks, such as text classification, question answering, and dialogue, and are an important path toward artificial intelligence.


2. Examples of current large language models (LLMs)

  • GPT-3/3.5/4 (MoE), from OpenAI
  • LLaMA, from Meta (its leaked weights set off a wave of open-source community activity)
  • ChatGLM (trained on a Chinese corpus), open-sourced by Tsinghua University

3. A brief development history of large language models

GPT-3 was released on June 11, 2020.

In November 2022, OpenAI released the GPT-3.5 API (text-davinci-003).

In December 2022, ChatGPT exploded in popularity across the internet.

In late 2022, the first version of LangChain was released.

4. Several problems are encountered when using large language models directly

A raw LLM API only turns text into text; the application developer still has to manage prompts, chain multiple calls together, connect the model to application-specific data, and maintain conversation state (exactly the modules listed above). This is why LangChain appeared.

5. A simple LLM application

Now that we have LangChain installed and our environment set up, we can start building our first LLM application.

We can run an open-source LLM locally, but the better-performing models require huge GPU resources that an ordinary home computer cannot provide. Of course, if you have an A100 graphics card, you can.

Since we cannot run one locally, we can instead use models that third parties have already built and exposed through convenient APIs, such as:

  • Baidu's ERNIE Bot (Wenxin Yiyan)
  • Alibaba's Tongyi Qianwen
  • OpenAI
  • Replicate

These companies expose APIs through which we can call the models they host.

Here we use OpenAI's API, so we first need to install their SDK:

pip install openai -i https://mirrors.aliyun.com/pypi/simple

For how to obtain an OpenAI API key, see:
https://zhuanlan.zhihu.com/p/626463325

We need to set environment variables in the terminal.

export OPENAI_API_KEY="xxxx"


Alternatively, you can set it inside a Jupyter notebook (or Python script):

import os
os.environ["OPENAI_API_KEY"] = "xxxx"

For example, suppose we want to build a service that generates a company name based on what the company makes. First, import the LLM wrapper:

from langchain.llms import OpenAI


The wrapper can then be initialized with any supported parameters. In this example, we want the output to be more random, so we initialize it with a high temperature:

llm = OpenAI(temperature=0.9)

If you do not want to set an environment variable, you can pass the key directly via the openai_api_key named parameter when initializing the OpenAI LLM class:

from langchain.llms import OpenAI

llm = OpenAI(openai_api_key="...")


Then call the LLM with the input:

text = "What would be a good company name for a company that makes colorful socks?"
print(llm(text)) # Feetful of Fun

Complete code

import os
from langchain.llms import OpenAI

os.environ["OPENAI_API_KEY"] = "xxxx"

llm = OpenAI(temperature=0.9)
text = "What would be a good company name for a company that makes colorful socks?"
print(llm(text))

The above example demonstrates the most basic capability of an LLM: generating new text from input text.

The llm object also provides a predict method dedicated to generating text from input text.

So the above code can also be written as follows:

import os
from langchain.llms import OpenAI

os.environ["OPENAI_API_KEY"] = "xxxx"

llm = OpenAI()

# Per-call parameters such as temperature can be passed to predict directly.
print(llm.predict("What would be a good company name for a company that makes colorful socks?", temperature=0.9))


Origin blog.csdn.net/qq_34491508/article/details/134175347