Building an AI chatbot with ChatGPT

Background

In the ChatGPT era, the barrier to building new AI applications has dropped dramatically. You no longer need to study machine learning or deep learning models, or provision GPU hardware. With the emergence of pre-trained large foundation models such as GPT-3 and Stable Diffusion, whose capabilities are exposed through open APIs, even someone with no theoretical background in machine learning can build an AI application that solves a real problem in a day or two.

API

The interface to a large language model is actually very simple. OpenAI, for example, provides just two core interfaces: Completion and Embedding. Completion lets the model automatically continue writing from your input, while Embedding converts your input text into a vector.
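As a sketch, the two interfaces look like this in the openai-python 0.x client (the calls need a valid API key, so they are shown as comments; the model names are the ones commonly used at the time of writing). Embedding vectors are typically compared with cosine similarity, included here as a small helper:

```python
import math

# Completion: the model continues writing from your prompt.
#   completion = openai.Completion.create(engine="text-davinci-003",
#                                         prompt="Hello", max_tokens=64)
#   text = completion.choices[0].text
#
# Embedding: convert input text into a vector.
#   embedding = openai.Embedding.create(model="text-embedding-ada-002",
#                                       input="Hello")
#   vector = embedding.data[0].embedding   # a list of floats

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

Identical vectors score 1.0 and orthogonal vectors score 0.0, which is how embeddings are used to measure how similar two pieces of text are.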

Use case

In the past, implementing a chatbot required answer templates. The downside is that the replies are exactly the same every time. We can of course design several templates that express the same meaning and rotate among them, but with only three or four templates at most, the overall experience still feels quite dull.
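The template-rotation approach can be sketched as follows (the template strings here are purely illustrative):

```python
import random

# A handful of canned reply templates; rotating among them avoids identical
# answers, but the variety is limited to however many templates we write.
TEMPLATES = [
    "Thanks for reaching out, {name}! We'll get back to you soon.",
    "Hi {name}, we've received your message and will reply shortly.",
    "Hello {name}! Your request has been logged; expect an answer soon.",
]

def templated_reply(name: str) -> str:
    """Pick a template at random and fill in the user's name."""
    return random.choice(TEMPLATES).format(name=name)
```

Every reply is one of exactly three fixed sentences, which is the dullness a generative model eliminates.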

With a generative language model like GPT, we can let the AI write the copy for us based on our requirements. We simply submit the request to the Completion interface provided by OpenAI, and it writes the text automatically.

Completion parameters

The Completion interface provided by OpenAI takes the following parameters:

  • engine — which OpenAI engine to use; here we select text-davinci-003.
  • prompt — the input prompt.
  • max_tokens — the maximum number of tokens the call may generate; a token is one unit of the character sequence after tokenization.
  • n — how many completions the AI generates for you to choose from; for automatically generated replies we set it to 1.
  • stop — the model stops generating when it encounters this sequence.
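Putting the parameters above together, a minimal call might look like the sketch below. The `build_completion_kwargs` helper is not part of the OpenAI library; it simply collects the parameters from the list, and the actual API call (which needs a valid key) is shown as a comment:

```python
def build_completion_kwargs(prompt, max_tokens=128, n=1, stop=None):
    """Assemble the Completion parameters described above.

    The engine is pinned to text-davinci-003, matching the bot code below.
    """
    return {
        "engine": "text-davinci-003",
        "prompt": prompt,
        "max_tokens": max_tokens,
        "n": n,
        "stop": stop,
    }

# With a key configured (openai.api_key = "..."), the call would be:
# response = openai.Completion.create(**build_completion_kwargs("Write a greeting."))
# print(response.choices[0].text)
```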

Implementation


import logging
import openai
from telegram import Update
from telegram.ext import filters, MessageHandler, ApplicationBuilder, CommandHandler, ContextTypes

# Configure logging
logging.basicConfig(
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    level=logging.INFO)

# Set your OpenAI API key
openai.api_key = "your OpenAI API key"

# Handle the "/start" command
async def start(update: Update, context: ContextTypes.DEFAULT_TYPE):
    await context.bot.send_message(chat_id=update.effective_chat.id, text="I'm a bot, come chat with me!")

# Generate a reply with OpenAI
async def generate_response(update: Update, context: ContextTypes.DEFAULT_TYPE):
    # Get the user's message
    message = update.message.text

    # Ask the Completion API to continue from the user's message
    response = openai.Completion.create(
        engine="text-davinci-003",
        prompt=f"{message}\n",
        max_tokens=128,
        n=1,
        stop=None,
        temperature=0.5,
    ).choices[0].text

    # Send the reply back to the user
    await context.bot.send_message(chat_id=update.effective_chat.id, text=response)

# Handle unknown commands
async def unknown(update: Update, context: ContextTypes.DEFAULT_TYPE):
    await context.bot.send_message(chat_id=update.effective_chat.id, text="Sorry, I don't understand that command.")

if __name__ == '__main__':
    # Set up the Telegram bot
    application = ApplicationBuilder().token('your Telegram bot token').build()

    # Register the "/start" command handler
    start_handler = CommandHandler('start', start)
    application.add_handler(start_handler)

    # Register the message handler that generates replies with OpenAI
    generate_response_handler = MessageHandler(filters.TEXT & (~filters.COMMAND), generate_response)
    application.add_handler(generate_response_handler)

    # Register the unknown-command handler
    unknown_handler = MessageHandler(filters.COMMAND, unknown)
    application.add_handler(unknown_handler)

    # Start the bot and poll for incoming messages
    application.run_polling()


Origin blog.csdn.net/qq_19968255/article/details/130496226