OpenAI Development Series (5): Implementing OpenAI API calls in Jupyter's local environment

The full text is over 2,000 words; expected reading time is about 10 minutes. Packed with practical content, so consider bookmarking it!

The goal of this article: provide a detailed walkthrough of calling the OpenAI API from a local Jupyter environment, along with an overview of the structure of OpenAI's official website.


As the pioneer of this round of advances in large language model technology, OpenAI's models have consistently led in quality. Its lineup, including text, dialogue, embedding, code, image, and speech models, forms a comprehensive and rich model ecosystem.

In addition, OpenAI's large language models support online inference and fine-tuning. Compared with open-source models that must be deployed locally, they have clear advantages in hardware requirements, operational difficulty, and maintenance cost. In many practical development scenarios, OpenAI's online models have become the preferred solution.

This article walks through the process of calling the OpenAI API locally!

1. The structure of the official website

The OpenAI official website is the most authoritative reference. It provides detailed parameter explanations and application examples for each model, plus pages for calling models online and testing parameters. Because billing is based on actual API usage, you also need to check the API billing rules, monitor your account balance, and top up in time!

OpenAI API official website address

1.1 Documentation

[screenshot: Documentation page]

Key point:

For large models, the cumulative input text across multiple rounds of dialogue is limited; once the limit is exceeded, earlier text is gradually forgotten. On the Documentation page, make sure you know each model's MAX TOKENS, which indicates its maximum text limit.

[screenshot: model list with MAX TOKENS column]

Most models have a MAX TOKENS of 4,096 tokens; these are called 4K models. Some models have a MAX TOKENS of 16,384 tokens; these are the newly updated models (released June 13) whose names carry a 16k suffix.

A rough estimation rule is: 4K tokens are approximately equal to 8000 words
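Since the MAX TOKENS limit covers the prompt and completion together, it helps to estimate token counts before sending a request. Below is a minimal sketch using a rough character-based heuristic; the 4-characters-per-token ratio is only an approximation for English text (for exact counts, OpenAI's tiktoken library can be used instead):

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: about 4 characters of English text per token.
    # For exact counts, use OpenAI's tiktoken library instead.
    return max(1, len(text) // 4)

MAX_TOKENS_4K = 4096

prompt = "this message is a test"
print(estimate_tokens(prompt))                    # → 5
print(estimate_tokens(prompt) <= MAX_TOKENS_4K)   # → True
```

This kind of pre-check is useful for deciding whether a long multi-round dialogue still fits in a 4K model or needs the 16k variant.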

1.2 API reference

This page explains each model's API parameters in detail and is an indispensable manual for developers.

[screenshot: API reference page]

1.3 Playground

The Playground page is essentially a web-based model-calling application. There you can select different model types, adjust parameters, and enter prompts to test the model's output. The page can also show the code corresponding to your on-page actions. The whole workflow is very convenient and well suited for beginners to test large models with zero code.

[screenshot: Playground page]

1.4 Billing Rules

OpenAI's online models are called by submitting requests authenticated with your personal API key and receiving the results in real time. Calls are therefore billed according to the model used, the number of calls, and the amount of input and output text, so keep an eye on spending. The specific billing rules can be viewed on the Pricing page.

[screenshots: Pricing page]

Here, 4K Model means a model with MAX TOKENS = 4,096 tokens, and 16K Model means one with MAX TOKENS = 16,384 tokens.

The 16K model costs twice as much as the 4K model: it needs a larger context capacity than the 4K model, is more complex, and is therefore more expensive to call. Unless you are running very large multi-round dialogues, prefer the 4K model.
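The cost of a call can be estimated from the per-1,000-token prices listed on the Pricing page. A minimal sketch follows; the prices below are illustrative 2023-era values only (actual rates change over time and must be checked on the Pricing page):

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price: float, output_price: float) -> float:
    """Estimate one request's cost in USD; prices are per 1,000 tokens."""
    return (input_tokens / 1000 * input_price
            + output_tokens / 1000 * output_price)

# Illustrative prices for a 4K model vs. a 16K model (check the Pricing page):
cost_4k = estimate_cost(1000, 500, input_price=0.0015, output_price=0.002)
cost_16k = estimate_cost(1000, 500, input_price=0.003, output_price=0.004)

print(cost_4k)                 # → 0.0025
print(cost_16k / cost_4k)      # → 2.0, matching the "twice the cost" rule above
```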

1.5 Call Restrictions

To protect its computing resources, OpenAI limits each model API's maximum requests per minute (RPM: requests-per-minute) and maximum token throughput per minute (TPM: tokens-per-minute). These limits can be viewed on the Rate limits page of your account.

[screenshot: Rate limits page]

If you need higher limits, you can fill out the application form to request an increase from OpenAI.

Application address

1.6 Account Balance

Keep an eye on your account balance and current usage. Both the balance and recent consumption can be viewed on the Usage page of your account.

[screenshot: Usage page]

Each newly registered account is granted a $5 credit by default, valid for roughly 4 months.

For commercial development, you can set a maximum total monthly spend on the Billing -> Usage limits page; the default is $120. If monthly API usage exceeds the limit, OpenAI stops responding to calls from that API key. This setting effectively prevents cost overruns caused by API abuse.

[screenshot: Usage limits settings]


Here, the soft limit means that when API spending exceeds a preset amount, a reminder email is sent.

1.7 Account top-up

Topping up an OpenAI account works much like upgrading ChatGPT to Plus: bind a bank card first, then fees are deducted from it. Simply bind a card that can be used for payment on the Billing -> Payment methods page, and OpenAI will deduct fees based on your monthly consumption. If you are unsure how to pay, see the content at the end of the article.


2. Call the OpenAI API locally in Jupyter

2.1 Environment configuration

First configure the environment variables, as covered in the previous article (see its first three sections):

OpenAI Development Series (4): Master OpenAI API Calling Method

Note that after configuring the environment variables, you need to restart the computer. Once they take effect, start Jupyter and test whether the OpenAI API can be called from the Jupyter environment.
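A quick way to confirm the environment variable is visible to the current Python process is to read it back. A minimal sketch, assuming the key was stored under the name OPENAI_API_KEY as in the previous article:

```python
import os

# Returns the key string if the environment variable took effect, else None
api_key = os.getenv("OPENAI_API_KEY")
print(api_key is not None)
```

If this prints False inside Jupyter even though the variable is set, the kernel was probably started before the variable existed; restart and try again.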

Users in mainland China cannot access OpenAI directly, so Jupyter needs to be started in a proxy environment, i.e., Jupyter must be able to access the network through a proxy. The setup is as follows:

  • Step 1: Start your proxy client and enable global proxy mode

Taking my own proxy client as an example, the proxy port is as follows:

[screenshot: proxy client port settings]

Most proxy clients listen on the local loopback address, 127.0.0.1, so here the proxy address and port are 127.0.0.1:15732.

  • Step 2: Configure the Jupyter agent

The simplest way to let Jupyter access the network through a proxy is to start Jupyter from the cmd command line:

[screenshot: opening the cmd command line]

Then start Jupyter from the command line, entering the proxy environment settings before each startup, i.e., start Jupyter in the following order:

[screenshot: setting the proxy and starting Jupyter]

If an error is reported:

[screenshot: error message]

Solution:

Newer versions of Anaconda do not automatically add environment variables during installation. Open the environment variable configuration page, select Path under system variables, and click Edit:

[screenshot: editing the Path variable]

Then add two entries, C:\ProgramData\anaconda3\condabin and C:\ProgramData\anaconda3\Scripts (adjust these paths to wherever you installed Anaconda):

[screenshot: adding the Anaconda paths]

Test again:

[screenshot: successful Jupyter startup]
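As an alternative to setting the proxy in cmd before every launch, the proxy environment variables can also be set inside the notebook itself, before any openai call is made. A minimal sketch, assuming a local proxy client listening on 127.0.0.1:15732 (replace the port with your own client's):

```python
import os

# Route HTTP(S) traffic from this Python process through the local proxy.
# This must run before the first request to api.openai.com is sent.
proxy = "http://127.0.0.1:15732"
os.environ["HTTP_PROXY"] = proxy
os.environ["HTTPS_PROXY"] = proxy
print(os.environ["HTTPS_PROXY"])  # → http://127.0.0.1:15732
```

Setting the variables in code saves retyping them in cmd, at the cost of having to run this cell first in every notebook.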


2.2 Call test

After Jupyter starts successfully, you can test whether the OpenAI model can be called. The test code is as follows:

import openai
import os

# The key is read from the OPENAI_API_KEY environment variable configured earlier
openai.api_key = os.getenv("OPENAI_API_KEY")

completions_response = openai.Completion.create(
    model="text-davinci-003",
    prompt="this message is a test",
)
print(completions_response)

The result is as follows: success.

[screenshot: API response output]

3. Summary

This article is fairly short. It mainly explains how OpenAI's official website is organized and how to call OpenAI's online models from a local Jupyter environment. Two points to note:

  • The OpenAI API key environment variable takes effect only after restarting the computer
  • The proxy must stay on throughout development

Finally, thank you for reading this article! If you found it helpful, don't forget to like, bookmark, and follow me; that is what keeps me writing. If you have questions or suggestions, leave a comment and I will do my best to respond. If there is a particular topic you'd like covered, let me know and I'd be happy to write about it. Thank you for your support, and I look forward to growing together with you!


Origin blog.csdn.net/Lvbaby_/article/details/131775615