ChatGPT (GPT-3.5) official OpenAI API released

        An email sent by OpenAI at 4 am today announced the release of the official ChatGPT API. The official introduction can be found in the "OpenAI API" guide and reference pages.

        The official ChatGPT (GPT-3.5) API model names are "gpt-3.5-turbo" and "gpt-3.5-turbo-0301". The call price is one tenth that of the GPT-3 text-davinci-003 model: US$0.002 per 1,000 tokens, or roughly 0.1 yuan for 4,000~5,000 Chinese characters. This token count covers both the question and the returned result.

1 API call method

1.1 Call parameters

        The ChatGPT (GPT-3.5) official API is called in essentially the same way as the GPT-3 models, with the following eight input parameters. These two models are expected to be integrated into the RdFast smart creation robot applet and the RdChat desktop program this evening, so stay tuned.

  1. model: the model name, "gpt-3.5-turbo" or "gpt-3.5-turbo-0301".
  2. messages: the question or content to be completed, described in detail below.
  3. temperature: controls the randomness of the result; 0.0 makes the output essentially fixed, while a higher value such as 0.9 makes it more varied.
  4. max_tokens: the maximum number of tokens to generate in the reply; a Chinese character usually takes about two tokens. The model's context window is 4,096 tokens and covers both the question and the answer, so max_tokens can be at most 4,096 minus the number of tokens in the question. For example, a prompt of 40 Chinese characters is roughly 80 tokens, so max_tokens can be set no higher than about 4,016.
  5. top_p: set it to 1.
  6. frequency_penalty: set it to 0.
  7. presence_penalty: set it to 0.
  8. stream: whether to stream the output, described below.

        It should be noted that, compared with the GPT-3 interface, the input parameters add stream, which controls whether the output is streamed.
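The max_tokens budget described above can be checked with a small helper. This is only an illustrative sketch: the function name and the two-tokens-per-Chinese-character estimate are assumptions for demonstration, not part of the official API.

```python
# gpt-3.5-turbo context window: prompt tokens + completion tokens <= 4096.
CONTEXT_LIMIT = 4096


def max_completion_tokens(prompt_tokens, context_limit=CONTEXT_LIMIT):
    """Largest value max_tokens may take for a prompt of the given size."""
    return max(context_limit - prompt_tokens, 0)


# A 40-character Chinese prompt is roughly 40 * 2 = 80 tokens,
# leaving room for up to 4016 completion tokens.
print(max_completion_tokens(80))  # 4016
```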

        If the value of stream is False, all text is returned at once and can be read through response["choices"][0]["message"]["content"] (the GPT-3 completion interface in Section 1 used response["choices"][0]["text"] instead). However, the more words generated, the longer the wait for the response; as a reference, streamed reading produces about 4 characters per second.

        If the value of stream is True, the returned result is a Python generator, which must be iterated to obtain the text, at an average of about 4 characters per second (134 characters in 33 seconds, 157 characters in 39 seconds). The reading program is as follows. The end of the stream is marked by "<|im_end|>".
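A minimal sketch of such a reading program, assuming the openai 0.x Python library; the collect_stream helper name is my own illustration. Each streamed chunk carries an incremental "delta" dict, and the "content" pieces are concatenated to form the full reply:

```python
def collect_stream(chunks):
    """Concatenate the incremental text pieces of a streamed reply.

    Works on the chunk dicts yielded by
    openai.ChatCompletion.create(..., stream=True): each chunk holds
    choices[0]["delta"], which contains a "content" piece (or nothing,
    for the opening role chunk and the final end-of-stream chunk).
    """
    pieces = []
    for chunk in chunks:
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:
            pieces.append(delta["content"])
    return "".join(pieces)


# Example call (requires a valid API key):
# import openai
# openai.api_key = "your APIKEY"
# response = openai.ChatCompletion.create(
#     model="gpt-3.5-turbo",
#     messages=[{"role": "user", "content": "Hello"}],
#     stream=True,
# )
# print(collect_stream(response))
```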

1.2 messages

        Each element of messages has two fields, role and content, as in the following example:

  model="gpt-3.5-turbo",
  messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
        {"role": "user", "content": "Where was it played?"}
    ]

        In the gpt-3.5-turbo model, role takes one of three values: system, assistant, and user. The system role tells ChatGPT what persona to answer as, with the specific persona and instructions given in content. The main difference of gpt-3.5-turbo-0301 is that it pays more attention to the question content and less to the role part. The gpt-3.5-turbo-0301 snapshot is supported until June 1st, while gpt-3.5-turbo will continue to be updated.

        The assistant and user roles specify who is speaking, and the question of interest can be written directly into content.
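To make the role structure concrete, here is a small sketch that assembles a multi-turn messages list. The build_chat helper is my own illustration, not part of the openai library:

```python
def build_chat(system_prompt, turns, new_question):
    """Assemble a ChatGPT messages list: one system message, the previous
    (user, assistant) turns in order, then the new user question."""
    messages = [{"role": "system", "content": system_prompt}]
    for user_text, assistant_text in turns:
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": assistant_text})
    messages.append({"role": "user", "content": new_question})
    return messages


# Reproduces the example above:
msgs = build_chat(
    "You are a helpful assistant.",
    [("Who won the world series in 2020?",
      "The Los Angeles Dodgers won the World Series in 2020.")],
    "Where was it played?",
)
```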

2 Reference program

        A sample reference program is as follows:

# -*- coding: utf-8 -*-
"""
Created on Wed Dec 21 21:58:59 2022

@author: Administrator
"""

import openai


def openai_reply(content, apikey):
    """Send a single-turn question to the ChatGPT API and return the answer text."""
    openai.api_key = apikey
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0301",  # or "gpt-3.5-turbo"
        messages=[
            {"role": "user", "content": content}
        ],
        temperature=0.5,
        max_tokens=1000,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0,
    )
    # print(response)
    return response.choices[0].message.content


if __name__ == '__main__':
    content = 'Who are you?'
    ans = openai_reply(content, 'your APIKEY')
    print(ans)


3 API call effect


Origin blog.csdn.net/suiyingy/article/details/129293288