GPT-4 Turbo is out: a cheaper API and a 128K context window usher in a new era

GPT-4 Turbo has a cheaper API and a longer 128K context window.

Search and follow "Python Learning and Research Basecamp" on WeChat, join the reader group, and share more exciting things


1. Introduction

Just eight months after releasing GPT-4, OpenAI has launched an updated model, GPT-4 Turbo, whose context window can fit the equivalent of a 300-page book in a single prompt and whose API access is cheaper.


【GPT-4 Turbo】:https://openai.com/blog/new-models-and-developer-products-announced-at-devday


2. What are the new features of GPT-4 Turbo?

The following are the main features of GPT-4 Turbo:

  • 128K context window (16x larger than GPT-4).

  • Input tokens are 3× cheaper and output tokens 2× cheaper than GPT-4.

  • Knowledge as of April 2023 (GPT-4's knowledge cutoff was September 2021).

【GPT-4 Turbo Price】: https://openai.com/pricing#gpt-4-turbo
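A quick back-of-the-envelope check of what a 128K-token window can hold, using two common rules of thumb (roughly 0.75 English words per token and about 250 words per printed page — both approximations, not official figures):

```python
# Back-of-the-envelope: how much text fits in a 128K-token window?
# Assumptions: ~0.75 English words per token, ~250 words per printed page.
CONTEXT_TOKENS = 128_000
WORDS_PER_TOKEN = 0.75
WORDS_PER_PAGE = 250  # typical for a printed book

words = CONTEXT_TOKENS * WORDS_PER_TOKEN  # 96,000 words
pages = words / WORDS_PER_PAGE            # 384 pages

print(f"~{words:,.0f} words, ~{pages:.0f} pages")
```

With denser pages (300+ words each) the estimate lands close to the "300-page book" figure quoted above.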

3. How to access?

For paying users, GPT-4 Turbo is now the default model in ChatGPT. If you have an OpenAI account and already have GPT-4 access, you can try the new model by selecting gpt-4-1106-preview from the model list in the Playground.

[Image: Screenshot of the OpenAI Playground]

GPT-4 Turbo is available to all paying developers and can be tried by passing gpt-4-1106-preview as the model name in the API. Here is an example chat completion request in JavaScript:

import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function main() {
  const completion = await openai.chat.completions.create({
    model: "gpt-4-1106-preview",
    messages: [
      { role: "system", content: "You are a helpful assistant." },
      { role: "user", content: "Hello!" },
    ],
  });

  console.log(completion.choices[0].message);
}

main();

Here's how to do it using Python:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

completion = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
)

print(completion.choices[0].message)

4. API Pricing

As a developer, the lower pricing is one of the most exciting updates: input tokens now cost a third, and output tokens half, of what GPT-4 charged. This makes the new model more accessible to smaller developers and startups.

API price for GPT-4 Turbo:

[Image: OpenAI GPT-4 Turbo pricing]

Previous GPT-4 API prices:

[Image: OpenAI GPT-4 pricing]

Tokens are word fragments used in natural language processing. For English text, 1 token is approximately equal to 4 characters or 0.75 words.
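Using the 4-characters-per-token rule of thumb above, you can roughly estimate what a request will cost before sending it. The per-token prices below are the GPT-4 Turbo launch prices ($0.01 per 1K input tokens, $0.03 per 1K output tokens); treat them as assumptions and check OpenAI's pricing page for current values:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate for English text: ~4 characters per token."""
    return max(1, len(text) // 4)

def estimate_cost(prompt: str, expected_output_tokens: int,
                  input_price_per_1k: float = 0.01,
                  output_price_per_1k: float = 0.03) -> float:
    """Approximate USD cost of one request at GPT-4 Turbo launch prices."""
    input_tokens = estimate_tokens(prompt)
    return (input_tokens * input_price_per_1k
            + expected_output_tokens * output_price_per_1k) / 1000

prompt = "Summarize the key features of GPT-4 Turbo." * 100  # ~4,200 chars
print(estimate_tokens(prompt), "tokens (estimated)")
print(f"${estimate_cost(prompt, expected_output_tokens=500):.4f} (estimated)")
```

For exact counts, OpenAI's tiktoken tokenizer gives the real token count instead of this character-based approximation.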

In addition, API usage is billed separately from the ChatGPT Plus subscription. Users can monitor their consumption on the usage page of their OpenAI account.

[Image: OpenAI usage interface]

5. Automatic switching tools

In the latest ChatGPT interface, the model drop-down menu is gone, replaced by just three options: GPT-4, GPT-3.5, and Plugins.

[Image: ChatGPT model selection]

GPT-4 Turbo now automatically selects the right tool for users.

“We heard the feedback from our users. That model selector is really annoying.” — Sam Altman

For example, if the user asks for an image, ChatGPT now automatically invokes DALL·E 3 to generate it.
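Note that this automatic tool routing applies only to the ChatGPT interface; through the API you still name the model explicitly. A minimal sketch of an explicit DALL·E 3 request with the openai Python client (the prompt text is an arbitrary example; running the call requires a valid OPENAI_API_KEY):

```python
# Request parameters for an explicit DALL-E 3 call through the API.
# (In ChatGPT the tool router picks DALL-E 3 automatically;
# via the API you must name the model yourself.)
image_request = {
    "model": "dall-e-3",
    "prompt": "A watercolor robot reading a 300-page book",
    "size": "1024x1024",
    "n": 1,
}

def generate_image() -> str:
    """Send the request and return the URL of the generated image.

    Requires the openai package and a valid OPENAI_API_KEY.
    """
    from openai import OpenAI
    client = OpenAI()
    response = client.images.generate(**image_request)
    return response.data[0].url

# print(generate_image())  # uncomment with a valid API key
```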

6. Final Thoughts

Overall, it’s great to see OpenAI’s rapid innovation in language models. They are undoubtedly exciting and offer a wide range of possibilities for innovative GPT-based applications.

However, it’s also interesting to consider OpenAI’s strategic approach. Initially, OpenAI released their API and let developers build and innovate, effectively taking on the risk of early adoption and user engagement. This proved to be a smart move, as it not only fostered a diverse ecosystem of applications but also gave OpenAI insight into the most requested features.

Now, OpenAI appears to be selectively integrating these popular features directly into their platform, effectively curating the best products and services developed by the community.

Recommended book list

"Deep Learning for Mobile Devices—Based on TensorFlow Lite, ML Kit and Flutter"

"Deep Learning for Mobile Devices—Based on TensorFlow Lite, ML Kit and Flutter" details practical solutions for on-device deep learning, including face detection with built-in device models, building intelligent chatbots, identifying plant species, generating real-time captions, building AI authentication systems, generating music with AI, chess engines based on reinforcement-learning neural networks, and building super-resolution image applications. The book also provides examples and code to help readers understand how these solutions are implemented.

This book is suitable as a textbook and teaching reference book for computer and related majors in colleges and universities, and can also be used as a self-study book and reference manual for relevant developers.

"Deep Learning for Mobile Devices—Based on TensorFlow Lite, ML Kit and Flutter": https://item.jd.com/14001258.html

Highlights

"Using Ray to create efficient deep learning data pipelines"

"Using Ray to Easily Perform Python Distributed Computing"

"AI Programming, Detailed Comparison of GitHub Copilot vs. Amazon CodeWhisperer"

"Understand the new deep learning library Rust Burn in one article"

"Teach you step by step how to build a perceptron from scratch using Python"

"5 Ways to Use the ChatGPT Code Interpreter in Data Science"




Origin blog.csdn.net/weixin_39915649/article/details/135355561