OpenAI drops a midnight treat: new GPT-4 models released, GPT-3.5 gains 16K context, and prices cut by up to 75%

When I woke up, I found that OpenAI's Twitter had been updated, and the updates are substantial. Talk about a midnight treat.


Let's take a look at the major updates from OpenAI this time.

Function calling capability

  1. A new function calling capability has been introduced in the Chat Completions API.

  2. The gpt-4-0613 and gpt-3.5-turbo-0613 versions have been updated and improved.

  3. Developers can now describe functions to the models, and the models can intelligently choose to output a JSON object containing the arguments to call those functions.

  4. This new approach reliably connects GPT's capabilities with external tools and APIs, enabling more reliable retrieval of structured data from the model (a minimal sketch follows this list).
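Here is a minimal sketch of how that flow might look, assuming the pre-1.0 openai Python package (e.g. 0.27.x) with an API key in the environment; the get_current_weather function and its schema are hypothetical examples, not part of the API itself.

```python
# A sketch of the new function calling flow with gpt-3.5-turbo-0613.
# Assumes the pre-1.0 openai Python package and OPENAI_API_KEY set.
import json
import openai

def get_current_weather(location, unit="celsius"):
    # Stand-in for a real external tool or API call.
    return json.dumps({"location": location, "temperature": 22, "unit": unit})

functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City name, e.g. Beijing"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    }
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "What's the weather in Beijing?"}],
    functions=functions,
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    # The model returns the function name plus a JSON string of arguments.
    args = json.loads(message["function_call"]["arguments"])
    print(get_current_weather(**args))
```

The key point is that the model does not execute anything itself; it only emits the structured JSON arguments, and your code decides whether and how to call the tool.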

More new models released

  1. gpt-4-0613: an updated and improved model with function calling support.

  2. gpt-4-32k-0613: the same improvements as gpt-4-0613, plus a longer context length for better comprehension of large texts.

  3. gpt-3.5-turbo-0613: includes the same function calling as GPT-4, plus more reliable steerability of model replies via the system message.

  4. gpt-3.5-turbo-16k: offers 4 times the context length of the standard version, supporting roughly 20 pages of text in a single request (see the example after this list).
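As an illustration of where the 16K context helps, here is a hedged sketch of summarizing a long document in a single request; it assumes the pre-1.0 openai Python package, and report.txt is a hypothetical input file of your own.

```python
# Sending a long document (up to roughly 20 pages) to the new 16k model.
# Assumes the pre-1.0 openai Python package and OPENAI_API_KEY set.
import openai

long_document = open("report.txt").read()  # hypothetical input file

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-16k",
    messages=[
        {"role": "system", "content": "You summarize documents concisely."},
        {"role": "user", "content": f"Summarize the following document:\n\n{long_document}"},
    ],
)
print(response["choices"][0]["message"]["content"])
```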

Older models are phased out

The upgrade and deprecation process now begins for the initial versions of gpt-4 and gpt-3.5-turbo released in March.

Applications using the stable model names (gpt-3.5-turbo, gpt-4, and gpt-4-32k) will be automatically upgraded to the new models above on June 27th.

Developers who need more time to transition can continue to use the older models by specifying gpt-3.5-turbo-0301, gpt-4-0314, or gpt-4-32k-0314 in the "model" parameter of their API requests.

Requests specifying these old model names will not work starting September 13th.
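In practice, pinning during the transition window just means passing the dated snapshot name instead of the stable alias; a minimal sketch, again assuming the pre-1.0 openai Python package:

```python
# Pinning an older dated snapshot instead of the stable "gpt-3.5-turbo"
# alias keeps the March behavior until the September 13th cutoff.
import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0301",  # pinned dated snapshot, not "gpt-3.5-turbo"
    messages=[{"role": "user", "content": "Hello"}],
)
print(response["choices"][0]["message"]["content"])
```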

Significant price reductions, up to 75% off

Embedding models: the cost of text-embedding-ada-002 has been reduced by 75%, to $0.0001 per 1,000 tokens.

GPT-3.5 Turbo: the input token price of gpt-3.5-turbo has been reduced by 25%. Developers now pay $0.0015 per 1,000 input tokens and $0.002 per 1,000 output tokens, which works out to roughly 700 pages of text per dollar.

gpt-3.5-turbo-16k is priced at $0.003 per 1,000 input tokens and $0.004 per 1,000 output tokens.
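To make the new rates concrete, here is a back-of-the-envelope cost check using only the prices quoted above; the token counts in the example are illustrative assumptions, not measurements.

```python
# Rough cost estimate at the new prices (USD per 1,000 tokens: input, output).
PRICES = {
    "gpt-3.5-turbo": (0.0015, 0.002),
    "gpt-3.5-turbo-16k": (0.003, 0.004),
    "text-embedding-ada-002": (0.0001, 0.0),
}

def cost(model, input_tokens, output_tokens=0):
    inp, out = PRICES[model]
    return input_tokens / 1000 * inp + output_tokens / 1000 * out

# e.g. a request with 10,000 input tokens and 1,000 output tokens:
print(f"${cost('gpt-3.5-turbo', 10_000, 1_000):.4f}")      # $0.0170
print(f"${cost('gpt-3.5-turbo-16k', 10_000, 1_000):.4f}")  # $0.0340
```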



Source: blog.csdn.net/flysnow_org/article/details/131218903