OpenAI Components | Accelerate the integration of OpenAI APIs in projects

In today's fast-paced technology landscape, integrating OpenAI's capabilities into projects has become a clear trend. OpenAI provides a series of powerful natural language processing APIs, but integrating them is complex and time-consuming, often demanding significant effort and expertise. To simplify this process, we developed the OpenAI component, which streamlines the integration of OpenAI APIs into projects.

Figure 1: Theme image

This article introduces how developers can easily add OpenAI features to their projects with the help of the OpenAI component, without having to worry about implementation details.

The article consists of three parts. The first part, "ESP Component Registry", describes how to add the appropriate components to an ESP-IDF project. The second part covers the details of the OpenAI component. The last part introduces the updated ESP-BOX ChatGPT example.

ESP Component Registry

The ESP Component Registry is an open-source component platform hosting a large collection of components that can give your IoT projects a powerful boost. With a quick search and a single command, you can fetch the components you need and integrate them into your ESP-IDF project. This efficient workflow shortens development cycles, letting you focus on building groundbreaking IoT solutions instead of wrestling with complex setup steps.

Figure 2: ESP Component Registry

The steps are as follows: 

1. Find the component you need in the ESP Component Registry.

2. Read the documentation and changelog to determine the required component version.

3. Run the following command in the terminal to add the component to your existing project (note: replace the component name and version before running the command).

idf.py add-dependency "espressif/component_name^version" 
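As a concrete example, the command below adds the OpenAI component discussed in the next section. The `^1.1` version constraint is illustrative only; check the component's page in the registry for the latest release before running it.

```shell
# Add the espressif/openai component to the current ESP-IDF project.
# The version constraint "^1.1" is an assumed example; use the version
# listed in the ESP Component Registry.
idf.py add-dependency "espressif/openai^1.1"
```

The dependency is recorded in the project's `idf_component.yml` manifest, and the component is downloaded automatically at the next build.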

OpenAI components 

To provide developers with as many OpenAI API features as possible, we developed a simple but powerful ESP-IDF component. It supports most OpenAI capabilities (file operations and fine-tuning are the exceptions), and its detailed API documentation helps developers get started quickly.

Example of use 

The first step is to instantiate the object, passing your secret API key as a parameter. OpenAI API keys are obtained through the OpenAI website: to access OpenAI services, you must first create an account, purchase tokens, and obtain a unique API key.

openai = OpenAICreate(key); 

After creating the OpenAI object, the code below calls the chat completion API: it sets the necessary parameters, sends a message (the boolean flag indicates that this is not the last message), and retrieves the resulting response for further use or processing.

chatCompletion = openai->chatCreate(openai); 
chatCompletion->setModel(chatCompletion, "gpt-3.5-turbo"); 
chatCompletion->setSystem(chatCompletion, "Code geek"); 
chatCompletion->setMaxTokens(chatCompletion, CONFIG_MAX_TOKEN); 
chatCompletion->setTemperature(chatCompletion, 0.2); 
chatCompletion->setStop(chatCompletion, "\r"); 
chatCompletion->setPresencePenalty(chatCompletion, 0); 
chatCompletion->setFrequencyPenalty(chatCompletion, 0); 
chatCompletion->setUser(chatCompletion, "OpenAI-ESP32"); 
OpenAI_StringResponse_t *result = chatCompletion->message(chatCompletion, "Hello!, World", false); // call the chatCompletion API 
char *response = result->getData(result, 0); 

Similarly, after creating the OpenAI object, the code below calls the audio transcription API: it sets the necessary parameters, such as the response format and language, starts transcribing the audio, and finally retrieves the transcription result for further use or processing.

audioTranscription = openai->audioTranscriptionCreate(openai); 
audioTranscription->setResponseFormat(audioTranscription, OPENAI_AUDIO_RESPONSE_FORMAT_JSON); 
audioTranscription->setLanguage(audioTranscription,"en"); 
audioTranscription->setTemperature(audioTranscription, 0.2); 
char *text = audioTranscription->file(audioTranscription, (uint8_t *)audio, audio_len, OPENAI_AUDIO_INPUT_FORMAT_WAV); // call the transcription API 

To explore more of the API and its capabilities, see the documentation.

ESP-BOX ChatGPT example 

Compared with the old version, the updated ESP-BOX ChatGPT example integrates the OpenAI component. For specific development details, read the blog. Note that the new version uses the esp_tinyuf2 component to store the Wi-Fi credentials and OpenAI key in non-volatile storage (NVS), which is more secure.

During the initial boot phase, after the first binary runs, the user enters security credentials such as the Wi-Fi password and OpenAI key. Once the credentials are entered, the system reboots and the ChatGPT binary takes over. It then operates with the credentials provided during the initial startup phase. The figure below shows the general flow.
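The hand-off described above can be sketched with ESP-IDF's standard NVS API. The namespace `"storage"` and key name `"openai_key"` below are assumptions for illustration (the actual example defines its own names), and error handling is abbreviated; this is a sketch for a device target, not host-runnable code.

```c
#include <stdlib.h>
#include "nvs_flash.h"
#include "nvs.h"

/* Sketch: read the OpenAI key that the first-boot binary stored in NVS.
 * Namespace "storage" and key "openai_key" are hypothetical names. */
static char *read_openai_key(void)
{
    nvs_handle_t nvs;
    if (nvs_open("storage", NVS_READONLY, &nvs) != ESP_OK) {
        return NULL;
    }
    size_t len = 0;
    char *key = NULL;
    /* First call gets the required length, second call fills the buffer. */
    if (nvs_get_str(nvs, "openai_key", NULL, &len) == ESP_OK) {
        key = malloc(len);
        if (key && nvs_get_str(nvs, "openai_key", key, &len) != ESP_OK) {
            free(key);
            key = NULL;
        }
    }
    nvs_close(nvs);
    return key; /* caller frees */
}
```

Keeping the key in NVS rather than compiling it into the firmware means the same binary can be flashed to many devices, with each user supplying their own credentials on first boot.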

Figure 3: ChatGPT_demo example flow and simple authentication

In addition, users can try the new ESP-BOX ChatGPT example with ESP-Launchpad. This method requires no local compilation, making it a convenient way to experience the new features.


Origin blog.csdn.net/espressif/article/details/132449193