Contextual question and answer with OpenAI in Python, including an HTML online version

This article uses OpenAI's GPT (Generative Pre-Training) model to implement a chat feature that automatically answers questions while keeping track of the conversation context.

Code Explanation

First, we import the required libraries: openai, Path (from pathlib), time, and configparser.

Next, for the model to work, we set openai.api_key and initialize a few variables, text, turns, and last_result, which record the chat history.

Then we define a function chatgpt that takes the question entered by the user and returns the answer generated by the GPT model. Inside the function, besides specifying the text-davinci-003 model, we set parameters such as temperature, max_tokens, frequency_penalty, and presence_penalty to control the randomness and length of the result.

Finally, under if __name__ == '__main__': we initialize two lists to store the questions entered by the user and the answers generated by the model. Inside a while loop we read the user's question, call the chatgpt function, and append the question and answer to the corresponding lists; when the loop ends, the conversation is saved to a file.
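The "contextual" part works through a rolling window: every question and answer is appended to turns, and only the most recent ten entries are joined back into text, which is prepended to the next prompt. Below is a minimal, standalone sketch of that mechanism (no API call is made; the answers are hard-coded purely for illustration):

# Minimal sketch of the rolling context window described above.
# No API call; the answers are hard-coded so the example runs on its own.
turns = []   # alternating question/answer strings
text = ""    # context string that will be prepended to the next prompt

def remember(question, answer, window=10):
    """Store one Q/A pair and rebuild the context from the last `window` entries."""
    global text
    turns.extend([question, answer])
    text = " ".join(turns[-window:])

remember("What is the capital of France?", "Paris.")
remember("How many people live there?", "About two million.")
# The next request would send: text + "\nHuman: " + <new question>
print(text)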

Code Usage Instructions

  • To use this code, first apply for an OpenAI api_key, put it into the configuration file, and then run the program (see the example run after this list);
  • Type your question and the answer from the GPT model is printed;
  • Entering exit ends the current dialogue;
  • When the program ends, the questions and answers are written to a file so they can be reviewed later.
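For example, assuming the dialog-mode script below is saved as chat.py (the file name is an assumption), a session looks roughly like this. Note that the code uses the pre-1.0 openai package interface (openai.Completion), so you may need to pin an older release such as openai==0.28:

pip install "openai==0.28"
python chat.py

Enter your question (type exit to quit)
<your question>
AI: <answer generated by the model>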

INI Configuration File

Create a file named config.ini in the same directory with the following content:

[openai]

ai_account_key = sk-AsqirFnBSHKvalmEe1AnT3BlbkFJe2rX0xxxxxxxxxxx

Dialog Mode Code

import openai
from pathlib import Path
import time
import configparser

ANSI_COLOR_GREEN = "\x1b[32m"
ANSI_COLOR_RESET = "\x1b[0m"


# Read the api_key from the ini file
config = configparser.ConfigParser()
config.read('config.ini')
openai.api_key = config['openai']['ai_account_key']


text = ""  # 设置一个字符串变量
turns = []  # 设置一个列表变量,turn指对话时的话轮
last_result = ""


def chatgpt(question):
    global text
    global turns
    global last_result

    prompt = text + "\nHuman: " + question

    try:
        response = openai.Completion.create(
            model="text-davinci-003",  # 这里我们使用的是davinci-003的模型,准确度更高。
            prompt=prompt,  # 你输入的问题
            temperature=0.9,  # 控制结果的随机性,如果希望结果更有创意可以尝试 0.9,或者希望有固定结果可以尝试 0.0
            max_tokens=2048,  # 这里限制的是回答的长度,你可以可以限制字数,如:写一个300字作文等。
            top_p=1,
            # [控制字符的重复度] -2.0 ~ 2.0 之间的数字,正值会根据新 tokens 在文本中的现有频率对其进行惩罚,从而降低模型逐字重复同一行的可能性
            frequency_penalty=0,
            # [控制主题的重复度] -2.0 ~ 2.0 之间的数字,正值会根据到目前为止是否出现在文本中来惩罚新 tokens,从而增加模型谈论新主题的可能性
            presence_penalty=0
        )

        result = response["choices"][0]["text"].strip()
        last_result = result
        turns += [question] + [result]  # accumulate turns so follow-up questions keep the context

        if len(turns) <= 10:  # submit at most the last 10 turns of context to avoid exceeding the token limit
            text = " ".join(turns)
        else:
            text = " ".join(turns[-10:])

        return result
    except Exception as exc:  # print the exception and return an empty answer so the caller does not crash
        print(exc)
        return ""


if __name__ == '__main__':

    # record the questions and answers, then save them to a file at the end
    question_list = []
    answer_list = []
    while True:
        question = input(ANSI_COLOR_GREEN +
                         "\nEnter your question (type exit to quit)\n" + ANSI_COLOR_RESET)
        question_list.append(question)
        if question == "exit":
            break
        answer = chatgpt(question)
        answer_list.append(answer)
        print("AI: " + answer)
    # save the conversation to a file
    timestamp = time.strftime("%Y%m%d-%H%M-%S", time.localtime())
    file_name = 'output/chat ' + timestamp + '.md'
    file_path = Path(file_name)
    file_path.parent.mkdir(parents=True, exist_ok=True)
    with open(file_name, "w", encoding="utf-8") as f:
        for q, a in zip(question_list, answer_list):
            f.write(f"question: {q}\nanswer: {a}\n\n")
    print(ANSI_COLOR_GREEN + "Conversation saved to file: " + file_name + ANSI_COLOR_RESET)


Single Question-and-Answer Mode Code

import openai
from pathlib import Path
import time
import configparser


ANSI_COLOR_GREEN = "\x1b[32m"
ANSI_COLOR_RESET = "\x1b[0m"



def get_ai_answer(prompt, save=True):
    # strip leading/trailing whitespace
    prompt = prompt.strip()
    # send the request only if the prompt is not empty
    if len(prompt) != 0:
        print(f'Request sent, the question is {len(prompt)} characters long, please wait...')
        # read the api_key from the ini file
        config = configparser.ConfigParser()
        config.read('config.ini')
        openai.api_key = config['openai']['ai_account_key']
        # Get my answer
        response = openai.Completion.create(
            prompt=prompt,
            model="text-davinci-003",
            temperature=0.9,
            max_tokens=2048,  # length limit of the returned answer
            top_p=1,
            frequency_penalty=0.0,
            presence_penalty=0.0)

        # Print my answer
        # print(response)
        answer = response["choices"][0]["text"].strip()
        print(answer)

        # write the conversation to a markdown file named with a timestamp
        if save:
            timestamp = time.strftime("%Y%m%d-%H%M-%S", time.localtime())
            file_name = 'output/' + timestamp + '.md'
            f = Path(file_name)
            f.parent.mkdir(parents=True, exist_ok=True)
            text = f'# Q\n{prompt}\n# A\n{answer}\n'
            f.write_text(text, encoding='utf-8')
            print(ANSI_COLOR_GREEN + "Conversation saved to file: " + file_name + ANSI_COLOR_RESET)
        return answer


if __name__ == '__main__':
    prompt = '''
How old are you this year?
    '''
    get_ai_answer(prompt)
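
Besides running the script directly, get_ai_answer can also be imported and called from other code. A small hypothetical example, assuming the file above is saved as single_qa.py (the module name is an assumption, not part of the original article):

# Hypothetical usage from another script; assumes the code above is saved as single_qa.py
from single_qa import get_ai_answer

# save=False skips the markdown export and only prints and returns the answer
answer = get_ai_answer("Explain what an API key is in one sentence.", save=False)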


Gitee Online Version

In addition, I wrote an HTML online version based on the OpenAI GPT-3 API that you can talk to directly. To use this page, you need to prepare your OpenAI apikey in advance.

Project source code https://gitee.com/x223222981/chat-gpt.js
