Form: input a question and the model generates an answer; a one-question, one-answer form.
Function: create a chat completion. Address: POST https://api.openai.com/v1/chat/completions (Beta)
Request parameter (Request body):
model: string, required
The model to use; only two values are supported: gpt-3.5-turbo and gpt-3.5-turbo-0301.
messages: array, required
The conversation content to pass in. Each message includes two fields, role and content, for example:
JSON
"messages": [{"role": "user", "content": "Hello!"}]
temperature: number, optional, default 1
A number between 0 and 2. The larger the number, the more random and open the answer (for example, 1.8); the smaller the number, the more fixed and focused the answer (for example, 0.2). It is recommended not to modify this at the same time as top_p.
top_p: number, optional, default 1
Similar to temperature: the larger the number, the more random and open the answer; the smaller the number, the more fixed the answer. It is recommended not to modify this at the same time as temperature.
n: integer, optional, default 1
The number of results to generate.
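As a sketch, the sampling parameters above slot into the request body like this; the prompt and the specific values are illustrative assumptions, not recommendations from this document:

```python
# Hypothetical request bodies; per the notes above, adjust temperature OR
# top_p, not both at once.
creative_request = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Write a slogan."}],
    "temperature": 1.8,  # high -> more random, open answers
    "n": 3,              # ask for three alternative completions
}
focused_request = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Write a slogan."}],
    "temperature": 0.2,  # low -> more fixed, focused answers
}

# temperature must stay within the documented 0-2 range.
print(0 <= creative_request["temperature"] <= 2)
```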
stream: boolean, optional, default false
If set to true, the result is returned as a data stream: like ChatGPT on the official website, each character is returned as soon as it is generated. The client must be able to handle server-sent events.
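When stream is true, each server-sent event carries a JSON chunk holding a small delta of the answer. A minimal parsing sketch follows; the sample event lines are hypothetical illustrations of the wire format, not captured output:

```python
import json

# Hypothetical SSE lines as the API might emit them when stream=true;
# the sentinel "data: [DONE]" marks the end of the stream.
sse_lines = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo!"}}]}',
    "data: [DONE]",
]

answer = ""
for line in sse_lines:
    payload = line[len("data: "):]
    if payload == "[DONE]":
        break  # stream finished
    chunk = json.loads(payload)
    # Each chunk's delta may or may not contain new content.
    answer += chunk["choices"][0]["delta"].get("content", "")

print(answer)  # the answer accumulates a few characters at a time
```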
stop: string or array, optional, default null
Up to 4 sequences at which the API will stop generating further tokens.
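A sketch of the stop parameter in a request body; the prompt and stop sequences are made-up examples:

```python
# Hypothetical request illustrating stop sequences (illustrative values).
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "List three colors."}],
    "stop": ["\n\n", "4."],  # generation halts at the first matching sequence
}

# The API accepts at most 4 stop sequences.
print(len(payload["stop"]) <= 4)
```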
max_tokens: integer, optional, default inf
The maximum number of tokens that can be generated in the result. At most (4096 - input tokens) can be returned, since the model's context length is 4096.
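The available output budget follows from the context length by simple subtraction; a sketch assuming the 4096-token context mentioned above:

```python
CONTEXT_LENGTH = 4096  # gpt-3.5-turbo context window, per the note above

def max_output_tokens(prompt_tokens: int) -> int:
    """Largest max_tokens value that still fits in the context window."""
    return CONTEXT_LENGTH - prompt_tokens

# With a 9-token prompt (as in the usage block of the sample response below),
# up to 4096 - 9 = 4087 output tokens remain.
print(max_output_tokens(9))
```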
presence_penalty: number, optional, default 0
Numbers between -2.0 and 2.0. Positive values penalize new tokens based on whether they have appeared in the text so far, increasing the likelihood that the model talks about new topics.
frequency_penalty: number, optional, default 0
A number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text, making the model less likely to repeat the same line verbatim.
logit_bias: map, optional, default null
Modifies the likelihood of specified tokens appearing in the completion.
Accepts a JSON object that maps tokens (specified by token IDs) to associated bias values between -100 and 100. Mathematically, the bias is added to the logits generated by the model before sampling. The exact effect varies per model, but values between -1 and 1 should decrease or increase the likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the associated token.
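A sketch of a logit_bias map in a request; the token IDs used here are made-up placeholders, since real IDs depend on the model's tokenizer:

```python
# Hypothetical bias map: keys are token IDs (placeholder values here),
# values are biases in [-100, 100] added to the logits before sampling.
request = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello!"}],
    "logit_bias": {
        "12345": -100,  # effectively ban this token
        "67890": 5,     # nudge this token's likelihood upward
    },
}

# All biases must fall inside the documented range.
print(all(-100 <= v <= 100 for v in request["logit_bias"].values()))
```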
user: string, optional
A unique end-user identifier, which can help OpenAI detect abuse.
To call the official API, you need to obtain an API key; it can be obtained at: chat.xingtupai.com
Request example:
curl:
curl https://api.openai.com/v1/chat/completions \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer YOUR_API_KEY' \
-d '{
"model": "gpt-3.5-turbo",
"messages": [{"role": "user", "content": "Hello!"}]
}'
python:
Source code: https://github.com/openai/openai-python
Call example:
Python
import os
import openai
openai.api_key = os.getenv("OPENAI_API_KEY")
completion = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=[
{"role": "user", "content": "Hello!"}
]
)
print(completion.choices[0].message)
node.js:
Source code: https://github.com/openai/openai-node
Call example:
JavaScript
// Call example for the openai-node library (v3-style Configuration/OpenAIApi API).
const { Configuration, OpenAIApi } = require("openai");

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

const completion = await openai.createChatCompletion({
  model: "gpt-3.5-turbo",
  messages: [{ role: "user", content: "Hello!" }],
});
console.log(completion.data.choices[0].message);
Example parameters:
{
"model": "gpt-3.5-turbo",
"messages": [{"role": "user", "content": "Hello!"}]
}
Return result:
{
"id": "chatcmpl-123",
"object": "chat.completion",
"created": 1677652288,
"choices": [{
"index": 0,
"message": {
"role": "assistant",
"content": "\n\nHello there, how may I assist you today?",
},
"finish_reason": "stop"
}],
"usage": {
"prompt_tokens": 9,
"completion_tokens": 12,
"total_tokens": 21
}
}
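Reading the fields out of the sample response above, as a sketch:

```python
# The sample response from the documentation above, as a Python dict.
response = {
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "created": 1677652288,
    "choices": [{
        "index": 0,
        "message": {
            "role": "assistant",
            "content": "\n\nHello there, how may I assist you today?",
        },
        "finish_reason": "stop",
    }],
    "usage": {"prompt_tokens": 9, "completion_tokens": 12, "total_tokens": 21},
}

# The reply text lives under choices[0].message.content; token accounting
# lives under usage.
reply = response["choices"][0]["message"]["content"]
total = response["usage"]["total_tokens"]
print(reply.strip())
print(total)  # prompt_tokens + completion_tokens
```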
Error result:
{
"error": {
"message": "'doctor' is not one of ['system', 'assistant', 'user'] - 'messages.0.role'",
"type": "invalid_request_error",
"param": null,
"code": null
}
}
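A client can branch on the error object's type field; a minimal sketch using the sample error above (the helper function is a hypothetical convenience, not part of the API):

```python
# The sample error body from above, as a Python dict.
error_body = {
    "error": {
        "message": "'doctor' is not one of ['system', 'assistant', 'user'] - 'messages.0.role'",
        "type": "invalid_request_error",
        "param": None,
        "code": None,
    }
}

def describe_error(body: dict) -> str:
    """Return a short human-readable summary of an API error payload."""
    err = body["error"]
    return f"{err['type']}: {err['message']}"

print(describe_error(error_body))
```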