Communicating with ChatGPT using Python's requests library

Preface

In the field of artificial intelligence, natural language processing models such as OpenAI GPT-3.5 Turbo have a wide range of applications. Although an official Python library is provided for interacting with these models, some people prefer to use the requests library so they can customize requests and process responses themselves. For example, many third-party LLM services now expose HTTP request formats similar to ChatGPT's, so with only minor adjustments the same code can be reused against them. This article introduces how to use Python's requests library to communicate with OpenAI GPT-3.5 Turbo.
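Because those compatible services follow OpenAI's URL convention, often only the base URL (and the API key) needs to change. The helper below is a hypothetical sketch assuming the `/v1/chat/completions` path suffix; the function name is illustrative, not part of any library.

```python
# Hypothetical helper: build a Chat-Completions-style endpoint URL
# from a service's base address (assumes the OpenAI path convention).
OPENAI_BASE = "https://api.openai.com"

def build_url(base: str) -> str:
    # strip a trailing slash, then append the conventional path
    return base.rstrip("/") + "/v1/chat/completions"

print(build_url(OPENAI_BASE))  # → https://api.openai.com/v1/chat/completions
```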


Step 1: Get API Key

First, you need to register on the OpenAI official website and obtain an API key. This key will be used for authentication, ensuring that only authorized users can access OpenAI's services.
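Rather than hard-coding the key in source files, a common practice is to read it from an environment variable. The variable name `OPENAI_API_KEY` below is a widely used convention, not something the API requires; the fallback value is a placeholder.

```python
import os

# Read the API key from the environment; the fallback is a placeholder
# you would replace (OPENAI_API_KEY is a conventional name, an assumption).
api_key = os.environ.get("OPENAI_API_KEY", "sk-xxxxxxxxxx")
```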

Step 2: Prepare the request

Before sending a request, you need to construct an HTTP request containing the necessary information: the API endpoint URL, request headers, request data, and so on. Make sure your request headers include the appropriate authorization information so that OpenAI can verify your identity.

Step 3: Send a request

Use the requests library to send a POST request to the OpenAI GPT-3.5 Turbo API endpoint. The data portion of the request should contain your prompt and other parameters that tell the model what task you want it to perform and what you want it to generate. Make sure to set appropriate parameters such as max_tokens, as recommended by OpenAI, to limit the length of the generated text.
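A minimal sketch of the sending step, wrapped in a helper (the function name is illustrative). Passing a timeout is good practice so the call cannot hang indefinitely on network problems:

```python
import requests

def post_chat(url, headers, payload, timeout=30):
    """Send the chat request and return the parsed JSON body.
    raise_for_status() turns HTTP error codes into exceptions."""
    response = requests.post(url, headers=headers, json=payload, timeout=timeout)
    response.raise_for_status()
    return response.json()
```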

Step 4: Process the response

Once you send the request, you will receive a response from the OpenAI server. First check the response's HTTP status code to ensure the request succeeded, then use response.json() to parse the body and extract the generated text.
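Extracting the reply from a parsed body can be sketched as below; the sample dictionary mirrors the shape of OpenAI's Chat Completions JSON (the helper name is illustrative).

```python
# Pull the assistant's text out of a parsed Chat Completions body.
def extract_reply(body: dict) -> str:
    return body["choices"][0]["message"]["content"]

# A sample shaped like an actual API response:
sample = {"choices": [{"message": {"role": "assistant", "content": "Hi!"}}]}
print(extract_reply(sample))  # → Hi!
```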

Sample code

import requests

# The default endpoint may not be reachable from mainland China
url = 'https://api.openai.com/v1/chat/completions'

# Replace with your own API key
api_key = 'sk-xxxxxxxxxx'


def send_message(message):
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

    data = {
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": message}
        ]
    }
    # Keep TLS verification enabled; the timeout avoids hanging indefinitely
    response = requests.post(url, headers=headers, json=data, timeout=30)
    if response.status_code == 200:
        return response.json()["choices"][0]["message"]["content"]
    else:
        print(f"Error: {response.status_code} - {response.text}")
        return None


resp = send_message('hello')
print(resp)

This example captures the core of the article: using the requests library to communicate with OpenAI GPT-3.5 Turbo.

Conclusion

Using Python's requests library to communicate with OpenAI GPT-3.5 Turbo is a flexible way to customize requests and handle responses based on your needs. This gives developers more control, allowing them to better integrate natural language processing models into their applications.

This article has introduced the basic steps for communicating with OpenAI GPT-3.5 Turbo. We hope it provides useful information to developers who prefer the requests library, so that you can quickly test other third-party platforms with similar interface formats.

Origin blog.csdn.net/u012960155/article/details/132637063