How to stop ChatGPT content output in real time?

        When we develop ChatGPT applications in streaming mode, we usually need a way to stop ChatGPT's output while it is still being generated.

        If we use the OpenAI Python module to develop such a feature, we would typically write the implementation like this:

import threading
import time
import openai

# Set the OpenAI API access key
openai.api_key = "YOUR_API_KEY"

class ChatGPTStream:
    def __init__(self):
        self.stop_flag = False

    def callback(self, text):
        # Handle a chunk of ChatGPT's response.
        # Here we simply print it; adapt this to your own needs.
        print(text, end="", flush=True)

    def generate(self):
        # Create a streaming completion; tokens arrive incrementally
        completion = openai.Completion.create(
            engine="text-davinci-003",
            prompt="What is your question?",
            temperature=0.7,
            max_tokens=100,
            stream=True,
        )

        # Read chunks until the stop flag is set
        for chunk in completion:
            if self.stop_flag:
                break
            self.callback(chunk["choices"][0]["text"])

    def start(self):
        self.stop_flag = False
        # Run in a background thread so stop() can be called from the UI
        threading.Thread(target=self.generate, daemon=True).start()

    def stop(self):
        self.stop_flag = True

# Example usage
stream = ChatGPTStream()
stream.start()

# Click a button to stop the generation
time.sleep(3)
stream.stop()

        As you can see, this approach is not a real-time stop in the true sense. We merely stop fetching and displaying the output on the client side, while OpenAI keeps generating and sending content, so tokens are still consumed.
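The waste is easy to reproduce without touching the API at all. In this self-contained simulation (all names are illustrative; a thread stands in for OpenAI's server), pressing "stop" only flips the client's flag — every token has still been generated, and in the real service would still be billed:

```python
import queue
import threading

class PollingClient:
    """Reads tokens from a queue; stop() only stops the reading."""
    def __init__(self, q):
        self.q = q
        self.stop_flag = False
        self.received = []

    def run(self):
        while not self.stop_flag:
            try:
                self.received.append(self.q.get(timeout=0.1))
            except queue.Empty:
                break

    def stop(self):
        self.stop_flag = True

def fake_server(q, generated, n=20):
    # Stands in for OpenAI: it generates every token it was asked for,
    # whether or not anyone is still reading them.
    for i in range(n):
        q.put(f"tok{i}")
        generated.append(i)

q = queue.Queue()
generated = []
server = threading.Thread(target=fake_server, args=(q, generated))
client = PollingClient(q)

server.start()
server.join()   # the "server" has produced all 20 tokens
client.stop()   # "press the stop button" before reading anything
client.run()
print(len(generated), len(client.received))  # prints: 20 0
```

Twenty tokens were generated even though the client displayed none of them — the flag never reached the server side.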

        So, how can we stop ChatGPT's content output in real time?

        Here are two ideas:

  • If we connect to OpenAI directly over the EventSource (SSE) protocol, we can call eventSource.close() to truly interrupt the content output in real time.
  • If we use the WebSocket protocol, as the Bmob AI SDK does, we can simply close the WebSocket connection, which likewise interrupts the content output in real time.
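Both ideas boil down to the same move: close the underlying connection instead of merely ignoring its data. Here is a rough Python sketch of the EventSource/SSE idea using only the standard library (the endpoint and request body follow OpenAI's completions API; the class and helper names are mine, and error handling is simplified). Closing the HTTP response is the server-visible equivalent of `eventSource.close()`:

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder, not a real key

def parse_sse_line(line: bytes):
    """Extract the text chunk from one SSE 'data:' line, or None."""
    if not line.startswith(b"data: "):
        return None
    payload = line[len(b"data: "):]
    if payload == b"[DONE]":  # OpenAI's end-of-stream marker
        return None
    chunk = json.loads(payload)
    return chunk["choices"][0]["text"]

class SSEStream:
    def __init__(self):
        self.response = None

    def generate(self):
        # OpenAI serves streaming completions over SSE -- the same
        # protocol EventSource speaks in the browser
        req = urllib.request.Request(
            "https://api.openai.com/v1/completions",
            data=json.dumps({
                "model": "text-davinci-003",
                "prompt": "What is your question?",
                "max_tokens": 100,
                "stream": True,
            }).encode(),
            headers={
                "Authorization": f"Bearer {API_KEY}",
                "Content-Type": "application/json",
            },
        )
        self.response = urllib.request.urlopen(req)
        try:
            for raw in self.response:  # one SSE line at a time
                text = parse_sse_line(raw.strip())
                if text is not None:
                    print(text, end="", flush=True)
        except Exception:
            # Reading from a connection closed by stop() raises;
            # treat it as a normal interruption.
            pass

    def stop(self):
        # Closing the connection means the server stops sending --
        # and stops generating, so no further tokens are consumed.
        if self.response is not None:
            self.response.close()
```

In practice you would run `generate()` in a background thread and wire `stop()` to the UI's stop button, just like the flag-based version — the difference is that here the stop actually propagates to the server.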

        Of course, whichever method you choose, feasibility comes first: weigh the advantages and disadvantages of the two protocols and pick the implementation that works best for you. It doesn't matter whether a cat is black or white; as long as it catches mice, it is a good cat.

        Friends who are interested in AI applications are welcome to contact me (WeChat: xiaowon12) to exchange ideas about AI.

 


Reprinted from: blog.csdn.net/m0_74037076/article/details/131944676