Calling the OpenAI API from Golang and Python: a detailed guide

Learning objectives:

  • Introduction to OpenAI APIs

  • Learn how to use OpenAI's API with Golang

  • OpenAI's common parameters and their descriptions

  • Understanding Tokens in OpenAI API

  • The OpenAI API provides several different endpoints and modes

  • Examples of complex and practical applications


Learning Content:

  "OpenAI API introduction:

The OpenAI API is a service provided by OpenAI that allows developers to programmatically interact with OpenAI's powerful natural language processing models. These models are based on deep learning techniques and can be used for various tasks including text generation, automatic summarization, translation, dialogue generation, etc.

Here are some important notes about the OpenAI API:

  1. Model: The OpenAI API is based on the GPT (Generative Pre-trained Transformer) architecture and provides multiple pre-trained language models, such as GPT-3, GPT-2, etc. These models are trained on large-scale datasets to provide powerful text generation and understanding capabilities.

  2. Access method: You interact with the models by sending HTTP requests to the OpenAI API. The API can be used from any programming language and development environment: you use an HTTP client library to send requests, and perform text generation and other tasks through the endpoints the API exposes.

  3. Authorization and Authentication: Before using the OpenAI API, you need to obtain an API key and use it as a credential for authentication. You pass the API key to the OpenAI API by adding an `Authorization` header to each request, in the form `Authorization: Bearer YOUR_API_KEY`.

  4. Request and Parameters: You specify the desired task and model behavior by providing appropriate parameters in the request body. For example, you can provide a `prompt` to start text generation, or set parameters such as the maximum generation length and the temperature.

  5. Pricing: The OpenAI API is a paid service. Text generation and other operations consume resources and are billed according to OpenAI's pricing model. Make sure you understand the relevant pricing and fee information before using the API.

Please note that OpenAI may update the functionality, pricing and availability of its APIs and may have specific usage rules and restrictions. It is recommended that you consult OpenAI's official documentation for the latest API descriptions and related details.

"Learn how to use OpenAI's API through Golang:

To call OpenAI's API in Golang, you can follow the steps below:

1. Install the required libraries:

First, decide which HTTP client to use. Go's standard `net/http` package ships with the language and needs no installation. If you prefer a third-party client such as `github.com/go-resty/resty`, install it with:

   
   go get github.com/go-resty/resty/v2
   

2. Get the OpenAI API key:

To get an OpenAI API key, you need to visit OpenAI's official website and register an account. Once registered, you can obtain an API key by following these steps:

  1. Log in to the OpenAI official website: Open the OpenAI official website and log in with your registered account.

  2. Navigate to the API section: After logging in, navigate to the API section of the OpenAI website. You can find and access relevant documentation and information about the API.

  3. Create an API key: Follow the instructions to create a new API key. This usually involves providing your identity verification information and relevant development project information.

  4. Obtain an API key: Once you have created an API key, OpenAI will provide you with a unique API key. This key is usually a string that you need to use to authenticate when calling the OpenAI API.

Note that the steps above are general guidelines, and the actual process may change as the OpenAI website is updated. If you are at CRRC or another specific organization, you may need to follow its internal process or contact the relevant team to obtain an OpenAI API key.

3. Create an HTTP request:

In Golang, you can create HTTP requests using one of the libraries above. Here is sample code using the standard `net/http` library:

package main

import (
    "bytes"
    "fmt"
    "io/ioutil"
    "net/http"
)

func main() {
    apiKey := "YOUR_OPENAI_API_KEY"
    url := "https://api.openai.com/v1/completions"

    reqBody := `{
        "model": "text-davinci-003",
        "prompt": "Hello, world!",
        "max_tokens": 10
    }`

    req, err := http.NewRequest("POST", url, bytes.NewBuffer([]byte(reqBody)))
    if err != nil {
        fmt.Println("Error creating request:", err)
        return
    }

    req.Header.Set("Authorization", "Bearer "+apiKey)
    req.Header.Set("Content-Type", "application/json")

    client := &http.Client{}
    resp, err := client.Do(req)
    if err != nil {
        fmt.Println("Error sending request:", err)
        return
    }

    defer resp.Body.Close()
    respBody, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        fmt.Println("Error reading response:", err)
        return
    }

    fmt.Println("Response:", string(respBody))
}

In the above code, replace `YOUR_OPENAI_API_KEY` with your own OpenAI API key. `reqBody` defines the JSON request body, which includes the prompt and the maximum number of tokens to generate. Adjust it to suit your needs.

4. Send the request and process the response:

Send the request by calling `client.Do(req)` and read the response body with `ioutil.ReadAll`. You can then parse and process the result as needed.

Please note that this is just a basic sample code for you to understand how to call OpenAI's API in Golang. Depending on the specific requirements and functions of the OpenAI API, you may need to perform more configuration and parameter settings. Make sure to consult the documentation of the OpenAI API for more details, and make appropriate adjustments as needed.

》Common parameters and descriptions of OpenAI

1. Some common parameters of OpenAI API and their descriptions:

1. `model` (string): Specifies the model to use, such as "text-davinci-003" or "gpt-3.5-turbo". (Older SDKs and the deprecated engines endpoint called this `engine`.) Different models have different performance and capabilities.

2. `prompt` (string): The starting text from which the model generates its continuation. You can provide one or more sentences as the prompt.

3. `max_tokens` (int): Limits the maximum number of tokens in the generated text. Tokens are the basic unit of text that the model processes; by controlling the number of tokens, you control the length of the generated text.

4. `temperature` (float): Controls the diversity of the generated text (the allowed range is 0 to 2). Higher values make the output more random and varied, while lower values make it more deterministic and repeatable.

5. `top_p` (float): Nucleus sampling. The model considers only the tokens whose cumulative probability mass reaches `top_p`; for example, 0.1 restricts sampling to the top 10% of probability mass. It is an alternative to `temperature`, and OpenAI recommends adjusting one or the other, not both.

6. `stop` (string or array): Specify one or more strings as stop tags. When the model generates text containing stop tags, it will stop generating and return the result.

7. `n` (int): Specifies the number of completions to generate. The model returns `n` candidate responses for the prompt. By default, `n` is 1, so a single response is returned.

These are some of the commonly used parameters in the OpenAI API to control the behavior of the model and the resulting text output. You can set parameters according to your specific needs and tasks. Note that different models may support different parameters, and specific parameter sets and default values may vary. It is recommended to consult OpenAI's official documentation for the latest parameter descriptions and related details.
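To see how the `stop` parameter behaves, here is a local Go sketch of the truncation rule the API applies on the server: generation halts at the first occurrence of any stop string, and the stop string itself is not included in the returned text. The helper is hypothetical, for illustration only:

```go
package main

import (
    "fmt"
    "strings"
)

// truncateAtStop mimics how a stop sequence cuts off generated text:
// the output ends just before the earliest stop string found.
func truncateAtStop(text string, stops []string) string {
    cut := len(text)
    for _, s := range stops {
        if i := strings.Index(text, s); i >= 0 && i < cut {
            cut = i
        }
    }
    return text[:cut]
}

func main() {
    out := truncateAtStop("Once upon a time.\nThe end", []string{"\n", "The end"})
    fmt.Printf("%q\n", out) // "Once upon a time."
}
```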

2. Golang calls the OpenAI API and sets the common parameters:

Here is how to do it using a community Go SDK, `github.com/sashabaranov/go-openai` (a widely used third-party client; OpenAI itself documents only the raw HTTP API):

package main

import (
    "context"
    "fmt"
    "log"

    openai "github.com/sashabaranov/go-openai"
)

func main() {
    apiKey := "YOUR_API_KEY"
    client := openai.NewClient(apiKey)

    // Build the completion request with the parameters described above.
    req := openai.CompletionRequest{
        Model:       "text-davinci-003",
        Prompt:      "Once upon a time",
        MaxTokens:   50,
        Temperature: 0.8,
        Stop:        []string{"\n", "The end"},
        N:           5,
    }

    // Send the request and wait for the response.
    resp, err := client.CreateCompletion(context.Background(), req)
    if err != nil {
        log.Fatal(err)
    }

    // Extract the generated text from each returned choice.
    generatedTexts := make([]string, len(resp.Choices))
    for i, choice := range resp.Choices {
        generatedTexts[i] = choice.Text
    }

    // Print the generated texts.
    fmt.Println(generatedTexts)
}

In the above sample code, replace `YOUR_API_KEY` with your own OpenAI API key. You can then set the `Model`, `Prompt`, `MaxTokens`, `Temperature`, `Stop` and `N` fields according to your needs.

Using this SDK, we create a `CompletionRequest` value and set the parameters as fields of that value. Then we send the request and get the response by calling the `CreateCompletion()` method.

Finally, we parse the response, extract the generated text into the `generatedTexts` slice, and print the result.

Please make sure you have installed the package with `go get github.com/sashabaranov/go-openai` before running the sample code.

"Understand tokens in OpenAI API:

In the OpenAI API, "token" refers to the smallest processing unit of text. In a language model, text is split into a series of tokens, where each token can be a word, punctuation mark, space, or other character. When the model generates text, it actually processes and predicts token by token.

Understanding tokens is important for using the OpenAI API, because some API parameters (such as `max_tokens`) are calculated in terms of tokens. By controlling the number of tokens, you can control the length of the generated text.

You can use OpenAI's `tiktoken` Python package to count the number of tokens in text. It is installed separately with `pip install tiktoken` and does not require an API key. Here is a simple sample code:

import tiktoken

# Pick the encoding used by the model you are targeting.
encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")

text = "Hello, how are you?"

token_count = len(encoding.encode(text))
print("Token count:", token_count)

In the above example, we use `encoding_for_model()` to get the tokenizer for a given model, encode the text with it, and count the resulting tokens.

In addition, OpenAI provides an online Tokenizer tool on its website that shows how a piece of text is split into tokens and reports the token count for arbitrary text. This tool is very useful for debugging and sanity-checking token counts.

1. In the OpenAI API, the number of tokens is usually tied to the cost of an API call:

Each time you call the API, the number of tokens you will be billed depends on the text input you request and the text output generated by the model.

The cost of an API request is based on two factors:

1. Input tokens: the number of tokens in the text input you send to the API. In general, longer input texts consume more tokens and therefore incur higher fees.

2. Output tokens: the number of tokens in the response text generated by the model. The longer the generated text, the more tokens it consumes and the higher the fee.

You can use OpenAI's `tiktoken` Python package to count tokens in text and get an idea of how many tokens your input and output consume. Based on the number of tokens used in an API request, the corresponding fee details can be found on OpenAI's pricing page.

Note that fees may vary for different models and endpoints. In addition, OpenAI may adjust pricing and billing based on demand and market conditions. Therefore, it is recommended to consult OpenAI's official documentation and pricing information for the most accurate and up-to-date billing details.
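The two-factor billing above reduces to simple arithmetic, sketched below in Go. The per-1K-token prices are placeholders, not actual OpenAI prices; take real numbers from the pricing page:

```go
package main

import "fmt"

// estimateCostUSD computes a request cost from token counts and
// per-1K-token prices. The prices passed in here are examples only.
func estimateCostUSD(inputTokens, outputTokens int, inPricePer1K, outPricePer1K float64) float64 {
    return float64(inputTokens)/1000*inPricePer1K +
        float64(outputTokens)/1000*outPricePer1K
}

func main() {
    // Hypothetical prices: $0.0015 per 1K input tokens, $0.002 per 1K output tokens.
    cost := estimateCostUSD(500, 1000, 0.0015, 0.002)
    fmt.Printf("estimated cost: $%.4f\n", cost)
}
```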

》The OpenAI API provides several different endpoints and modes:

OpenAI API provides several different endpoints and modes to meet different natural language processing tasks and needs. Here are some common endpoints and patterns:

1. Completions endpoint:

`davinci`, `curie` and `babbage` are the three main models in the OpenAI API, each of which can be used to generate text completions. You can send a request to the Completions endpoint, providing a text snippet as a prompt, and the model will generate the continuation or completion text.

2. Chat endpoint:

Chat mode is suitable for dialogue generation tasks. You can use the Chat endpoint to build a multi-turn dialogue system that interacts with models and generates continuous dialogue responses. You can provide a list of historical messages with previous conversation content and the model's responses so that the model understands the context and generates an appropriate reply.
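The multi-turn mechanics can be sketched locally: every request carries the entire history, and after each reply you append the assistant's message before adding the next user turn. The `role`/`content` shape below follows the chat/completions request format:

```go
package main

import "fmt"

// chatMessage follows the role/content shape used by the Chat endpoint.
type chatMessage struct {
    Role    string `json:"role"`
    Content string `json:"content"`
}

func main() {
    history := []chatMessage{
        {Role: "system", Content: "You are a helpful assistant."},
        {Role: "user", Content: "Hello!"},
    }

    // After receiving a reply, append it so the model keeps context.
    history = append(history, chatMessage{Role: "assistant", Content: "Hi! How can I help?"})

    // The next user turn is appended on top of the full history.
    history = append(history, chatMessage{Role: "user", Content: "Tell me a joke."})

    for _, m := range history {
        fmt.Printf("%s: %s\n", m.Role, m.Content)
    }
}
```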

The Chat endpoint of the OpenAI API is used with dedicated chat models:

1). `gpt-3.5-turbo`: the standard chat model at the time of writing, based on the GPT-3.5 series. It has strong dialogue generation capabilities, performs well across many tasks and applications, and generates coherent and accurate responses.

2). `gpt-4` (where access is available): a more capable successor, served through the same Chat endpoint.

Note that models such as `davinci` and `curie` are completion models used with the Completions endpoint rather than the Chat endpoint, although they can also be prompted to produce dialogue-style text.

These chat models have a wide range of applications in different dialogue generation tasks, and you can choose the appropriate model according to your specific needs. Note that as OpenAI continues to improve and release new models, new models may become available. It is recommended to consult OpenAI's official documentation for the latest list of models and related details.

3. Audio endpoints:

The OpenAI API also provides audio endpoints; for example, `/v1/audio/translations` uses the Whisper model to translate speech into English text. For text-to-text translation there is no dedicated endpoint: you prompt a completion or chat model with the text to translate, specifying the source and target languages, and the model returns the translated text.

These endpoints and modes are some common options provided by the OpenAI API for different natural language processing tasks. You can choose the appropriate endpoint and mode according to your needs, and follow the guidelines provided by the API documentation to construct the request and parse the response.

Note that OpenAI may update and add more endpoints and modes in the future to provide more functionality and flexibility. Make sure to check out OpenAI's official documentation for the latest information on endpoints and modes, and how to use them properly.

4. Golang calls different endpoints and modes of the OpenAI API:

You need to construct the appropriate HTTP request and send it to the corresponding endpoint. The following sample code demonstrates how to call the Completions endpoint and the Chat endpoint of the OpenAI API in Golang.

First, make sure you have installed Golang's HTTP request library, such as `net/http`.

1. Completions endpoint sample code:
package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "io/ioutil"
    "net/http"
)

func main() {
    apiKey := "YOUR_OPENAI_API_KEY"
    url := "https://api.openai.com/v1/completions"

    reqBody := map[string]interface{}{
        "model":      "text-davinci-003",
        "prompt":     "Once upon a time",
        "max_tokens": 50,
    }

    reqJSON, err := json.Marshal(reqBody)
    if err != nil {
        fmt.Println("Error marshaling request:", err)
        return
    }

    req, err := http.NewRequest("POST", url, bytes.NewBuffer(reqJSON))
    if err != nil {
        fmt.Println("Error creating request:", err)
        return
    }

    req.Header.Set("Authorization", "Bearer "+apiKey)
    req.Header.Set("Content-Type", "application/json")

    client := &http.Client{}
    resp, err := client.Do(req)
    if err != nil {
        fmt.Println("Error sending request:", err)
        return
    }

    defer resp.Body.Close()
    respBody, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        fmt.Println("Error reading response:", err)
        return
    }

    fmt.Println("Response:", string(respBody))
}

In the above code, replace `YOUR_OPENAI_API_KEY` with your own OpenAI API key. `reqBody` defines the JSON request body, including the prompt (`prompt`) and the maximum number of tokens to generate (`max_tokens`). Adjust it to suit your needs.

2. Chat endpoint sample code:
package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "io/ioutil"
    "net/http"
)

func main() {
    apiKey := "YOUR_OPENAI_API_KEY"
    url := "https://api.openai.com/v1/chat/completions"

    reqBody := map[string]interface{}{
        "model": "gpt-3.5-turbo",
        "messages": []map[string]string{
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who won the world series in 2020?"},
        },
    }

    reqJSON, err := json.Marshal(reqBody)
    if err != nil {
        fmt.Println("Error marshaling request:", err)
        return
    }

    req, err := http.NewRequest("POST", url, bytes.NewBuffer(reqJSON))
    if err != nil {
        fmt.Println("Error creating request:", err)
        return
    }

    req.Header.Set("Authorization", "Bearer "+apiKey)
    req.Header.Set("Content-Type", "application/json")

    client := &http.Client{}
    resp, err := client.Do(req)
    if err != nil {
        fmt.Println("Error sending request:", err)
        return
    }

    defer resp.Body.Close()
    respBody, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        fmt.Println("Error reading response:", err)
        return
    }

    fmt.Println("Response:", string(respBody))
}

In the above code, you also need to replace `YOUR_OPENAI_API_KEY` with your own OpenAI API key. `reqBody` defines the JSON request body, and the `messages` list contains the conversation history. You can add more messages as needed.

These sample codes demonstrate how to use Golang to call the different endpoints and modes of the OpenAI API. You can modify these codes according to your specific needs and add appropriate error handling and other logic. Remember to consult the official documentation of the OpenAI API for a detailed description of requests and responses.

》 Examples of complex and practical applications:

Example 1:

If you want to analyze the sentiment of a specified review and print both the review content and the sentiment analysis result, you can write the code as follows:

import openai

def analyze_sentiment(comment):
    # Set the OpenAI API key
    openai.api_key = 'YOUR_API_KEY'

    # Call an OpenAI GPT-3.5 model to perform sentiment analysis
    response = openai.Completion.create(
        engine='text-davinci-003',
        prompt=f"Analyze the sentiment of the following review: {comment}\nSentiment analysis result:",
        max_tokens=10,  # one token is rarely enough for a full sentiment label
        temperature=0,
        n=1,
        stop=None
    )

    # Parse the API response to get the sentiment analysis result
    sentiment = response.choices[0].text.strip()

    return sentiment

# The review to analyze
comment = "This movie is amazing!"

# Run sentiment analysis
result = analyze_sentiment(comment)
print(f"Review: {comment}")
print(f"Sentiment analysis result: {result}")

With this code, the specified review is analyzed for sentiment, and both the review content and the sentiment analysis result are printed. Make sure to replace `YOUR_API_KEY` with your OpenAI API key.

Example 2:

When multiple `#` symbols are used, they can be used to divide the input text into different parts, thus affecting the generation behavior of the model. Here is an example:

import openai

def generate_text(prompt):
    openai.api_key = 'YOUR_API_KEY'

    response = openai.Completion.create(
        engine='text-davinci-003',
        prompt=prompt,
        max_tokens=50,
        temperature=0.8,
        n=1,
        stop=None,
        logprobs=None
    )

    generated_text = response.choices[0].text.strip()

    return generated_text

prompt = """
输入一句话,让模型继续生成下一句:
#在一个遥远的星系中,
#在宇宙的尽头,
#人类探险家发现了一颗神秘的行星。
"""

output = generate_text(prompt)
print(output)

In the above example, we used three `#` symbols to divide the input text into three parts: "In a distant galaxy,", "At the edge of the universe," and "Human explorers discovered a mysterious planet." The model continues generating text after the last `#` section. For example, the generated text might be:


Human explorers have discovered a mysterious planet. There are strange creatures on the planet, they have extraordinary intelligence and power. Humans began to explore this mysterious planet, hoping to uncover its secrets.
 

By using multiple `#` symbols, we can control and guide the generation behavior of the model to a certain extent, so as to generate text that meets our expectations. Please note that the generated results in the examples are estimated based on the behavior of the model, and the actual results may vary due to factors such as model training and data.
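This delimiter technique can be wrapped in a small helper (hypothetical, for illustration only): join the instruction, the data, and an answer label with marker lines so the section boundaries are explicit to the model:

```go
package main

import (
    "fmt"
    "strings"
)

// buildDelimitedPrompt separates instruction, data, and answer label
// with ### marker lines so the model can tell the sections apart.
func buildDelimitedPrompt(instruction, data, answerLabel string) string {
    return strings.Join([]string{instruction, "###", data, "###", answerLabel}, "\n")
}

func main() {
    p := buildDelimitedPrompt(
        "Analyze the sentiment of the following review:",
        "This movie is amazing!",
        "Sentiment analysis result:",
    )
    fmt.Println(p)
}
```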

Example 3:

In the example you provided, three `#` symbols are used to divide the input text into three parts. The first section is "Analyze the sentiment of the following review:", the second section is the content of the review "This movie is amazing!", and the third section is "Sentiment Analysis Results:".

This segmented structure helps the model understand your intent and how your input is organized. When performing sentiment analysis, the model will generate sentiment-related text results based on the given comment content.

Here is a sample code that does sentiment analysis using the `prompt` you provided:

import openai

def analyze_sentiment(prompt):
    openai.api_key = 'YOUR_API_KEY'

    response = openai.Completion.create(
        engine='text-davinci-003',
        prompt=prompt,
        max_tokens=10,  # leave room for a short sentiment label
        temperature=0,
        n=1,
        stop=None
    )

    sentiment = response.choices[0].text.strip()

    return sentiment

prompt = """
Analyze the sentiment of the following review:
###
This movie is amazing!
###
Sentiment analysis result:
"""

result = analyze_sentiment(prompt)
print(result)

In the above code, we use the `prompt` you provided as the input for sentiment analysis and print the result. Make sure to replace `YOUR_API_KEY` with a valid OpenAI API key.

Note: In this example, the model will directly perform sentiment analysis on the entire `prompt` and generate a sentiment-related text result. The specific sentiment analysis results will depend on the training and data of the model, and possible post-processing steps.


Original content is not easy to produce; I hope you will like and support this post. Your support is my motivation.

Unauthorized reprinting is prohibited


Origin blog.csdn.net/canduecho/article/details/131382102