Building an AI Website with ChatGPT in Practice

1. Overview
ChatGPT is a large language model based on the GPT-3.5 architecture that can perform tasks such as natural language processing and dialogue generation. As an intelligent chatbot, ChatGPT has a wide range of application scenarios, such as online customer service, intelligent assistants, and personalized recommendations. In this article I will share how to use ChatGPT's API to quickly build an AI website.

2. Content
In practice, I found that ChatGPT's biggest advantage lies in its natural, fluent conversational ability. ChatGPT can automatically understand a user's intent and questions and give targeted answers and suggestions. It can also generate richer responses based on the existing context, achieving a more natural, human-like interaction.

Below I will share some practical experience with using ChatGPT. The general workflow is described below.


Before using ChatGPT, we need to preprocess the data. The purpose of preprocessing is to transform raw text into a format the model can understand. The typical workflow is:

1. Preprocessing: word segmentation, tokenization, vectorization, and so on. These steps can all be implemented with common NLP libraries such as NLTK, spaCy, or transformers.
2. Training: use the training data to train the model, usually with a deep learning framework such as PyTorch or TensorFlow. During training we need to set hyperparameters such as the learning rate, batch size, and model depth.
3. Evaluation: after training, evaluate the model to understand its performance and decide whether further tuning and optimization are needed. Common evaluation metrics include accuracy, recall, and F1 score.
4. Deployment: finally, apply ChatGPT to a real scenario by integrating it into an application such as online customer service or a smart assistant. During deployment we need to consider issues such as performance, reliability, and security.
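To make the preprocessing step above concrete, here is a toy sketch in JavaScript (the language used by the rest of this project). The regex tokenizer and tiny vocabulary are illustrative assumptions only; a real pipeline would use a library such as NLTK, spaCy, or transformers as mentioned above.

```javascript
// Toy preprocessing sketch: split text into tokens, map each token to a
// vocabulary id, and produce an id vector (illustrative only).
function tokenize(text) {
  // Lowercase and keep alphanumeric runs as "words"
  return text.toLowerCase().match(/[a-z0-9]+/g) || [];
}

function buildVocab(texts) {
  const vocab = new Map();
  for (const text of texts) {
    for (const token of tokenize(text)) {
      if (!vocab.has(token)) vocab.set(token, vocab.size);
    }
  }
  return vocab;
}

function vectorize(text, vocab) {
  // Unknown tokens map to -1
  return tokenize(text).map((t) => (vocab.has(t) ? vocab.get(t) : -1));
}

const vocab = buildVocab(["Hello world", "Hello ChatGPT"]);
console.log(vectorize("hello chatgpt world", vocab)); // [ 0, 2, 1 ]
```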

3. How to use ChatGPT to quickly implement an AI website?
The general steps for implementing an AI website with ChatGPT are described below.


First, you need to determine the purpose and audience of your AI website. Your goal may be to provide online customer service, intelligent question answering, speech recognition, automatic translation, or other functions, and your audience could be your customers, readers, or visitors. Clarifying your goals and audience helps you plan the website's architecture and design.

To build the website, you need to choose a web development framework. Commonly used frameworks include Django, Flask, and Express. These frameworks provide many common functions and templates that help you develop more quickly and improve efficiency.

Integrating ChatGPT is the key step in realizing the AI website. You can use a language such as Python or JavaScript to call the ChatGPT API and embed it in your web application. This way, your website can provide a better user experience and service through ChatGPT; for example, users can interact with ChatGPT to get answers to questions, perform voice interactions, and more.
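As a minimal sketch of that integration step, the snippet below builds a request for OpenAI's chat completions endpoint and sends it from a backend. It assumes Node 18+ (for the global `fetch`) and an `OPENAI_API_KEY` environment variable; the system prompt text is a placeholder, and error handling is elided.

```javascript
// Build the JSON payload for the OpenAI chat completions API.
function buildChatRequest(userMessage) {
  return {
    model: "gpt-3.5-turbo",
    messages: [
      { role: "system", content: "You are a helpful website assistant." },
      { role: "user", content: userMessage },
    ],
    temperature: 0.7,
  };
}

// Send the request from a backend and return the assistant's reply text.
async function askChatGpt(userMessage) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify(buildChatRequest(userMessage)),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Keeping the API call on the backend (rather than in the browser) avoids exposing the API key to users.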

For users to interact smoothly with ChatGPT, you need to create a user-friendly interface. You can use HTML, CSS, JavaScript, and related technologies to design it, taking the user's needs and experience into account and making sure the interface is clean, easy to use, and attractive. For ChatGPT to answer users' questions accurately, you also need to train it: you can use natural language processing techniques, open-source datasets, and algorithms to train ChatGPT so that it understands and responds to user questions, and optimize the model to improve accuracy and efficiency.

Before deploying your website to production, you need to test and optimize it. Review all features and make sure they work properly, and optimize performance and user experience to increase user satisfaction. You can use automated testing tools to test the site, use performance analysis tools to identify bottlenecks and optimization points, and collect user feedback to keep improving.
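One small piece of that automated testing is checking the shape of the chat endpoint's JSON before asserting on its content. The helper below is an illustrative sketch (the function name is my own); it matches the `{ reply: "..." }` shape returned by the `/api/chatgpt` route shown later in this article.

```javascript
// Validate the JSON shape returned by the chat endpoint:
// a successful response should carry a non-empty string `reply`.
function isValidChatReply(json) {
  return (
    typeof json === "object" &&
    json !== null &&
    typeof json.reply === "string" &&
    json.reply.length > 0
  );
}

console.log(isValidChatReply({ reply: "Hello!" })); // true
console.log(isValidChatReply({ error: "Message is required" })); // false
```

In a test runner you would fetch `/api/chatgpt` and feed the parsed body to this check before making any deeper assertions.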

When you're ready to deploy to production, you need to choose a suitable web server and database. Commonly used web servers include Apache and Nginx; commonly used databases include MySQL and PostgreSQL. You also need to choose a cloud provider, such as AWS or Google Cloud, and deploy your application on a cloud server. Once the website is in production, perform regular maintenance and upgrades: back up your data and update your applications regularly to ensure security and stability, and continuously optimize the user experience and functionality to meet users' needs and expectations.

4. Rapid implementation based on Promptable
If you are not familiar with the underlying algorithms, you can quickly create a prompt in Promptable, deploy it to generate a PromptID, and then call the OpenAI interface through this PromptID to get the model's output. The steps are as follows.

4.1 Write a hook module
Write a hook module that calls the OpenAI interface and obtains the output results. The implementation code is as follows:

import { addMessage, getHistoryString } from "@/utils/chatHistory";
import React, { useEffect } from "react";

export const useChatGpt = (message, promptId, chatHistory) => {
  // Send the user message to the API (message and prompt in the body),
  // then update state with the response.
  const [data, setData] = React.useState("");
  const [isLoading, setIsLoading] = React.useState(false);
  const [isError, setIsError] = React.useState(false);
  const [history, setHistory] = React.useState(chatHistory);
  const [isSuccess, setIsSuccess] = React.useState(false);

  const fetchData = async () => {
    setIsLoading(true);
    try {
      const response = await fetch("/api/chatgpt", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          message,
          promptId,
          chatHistory: getHistoryString(chatHistory),
        }),
      }).then((res) => res.json());
      if (response.reply) {
        console.log("Hook api call response", response.reply);
        setData(response.reply);
        setIsSuccess(true);
        setHistory(addMessage(chatHistory, response.reply, "agent"));
      } else {
        setIsError(true);
      }
    } catch (error) {
      setIsError(true);
    }
    setIsLoading(false);
  };

  // Fire a request whenever a new message arrives
  useEffect(() => {
    setIsError(false);
    setIsSuccess(false);
    if (message) {
      fetchData();
    }
  }, [message]);

  useEffect(() => {
    setHistory(chatHistory);
  }, [chatHistory]);

  // Reset state when the prompt changes
  useEffect(() => {
    if (promptId) {
      setIsError(false);
      setIsSuccess(false);
      setHistory([]);
    }
  }, [promptId]);

  return {
    data,
    isLoading,
    isError,
    history,
    isSuccess,
  };
};

4.2 Write a page component
Write a page component to interact with the backend service; it calls the model and renders the output. The implementation code is as follows:

import { useChatGpt } from "@/hook/useChatGpt";
import { addMessage } from "@/utils/chatHistory";
import { Button, TextField } from "@mui/material";
import React, { useEffect } from "react";
import { ChatHistoryFrame } from "./ChatHistoryFrame";

const promptId = "xxxxxx"; // ID generated automatically when the Prompt is deployed

export const ChatContainer = () => {
  const [pendingMessage, setPendingMessage] = React.useState("");
  const [message, setMessage] = React.useState("");
  const [chatHistory, setChatHistory] = React.useState([]);
  const { isLoading, history, isSuccess, isError } = useChatGpt(
    message,
    promptId,
    chatHistory
  );

  // Clear the in-flight message once a request finishes
  useEffect(() => {
    if (isSuccess || isError) {
      setMessage("");
    }
  }, [isSuccess, isError]);

  return (
    <div id="chat-container">
      <h1>MOVIE to emoji</h1>
      <ChatHistoryFrame chatHistory={chatHistory} isLoading={isLoading} />
      <div id="chat-input">
        <TextField
          type="text"
          onChange={(e) => {
            setPendingMessage(e.target.value);
          }}
        />
        <Button
          style={{
            backgroundColor: "black",
            width: "80px",
          }}
          variant="contained"
          onClick={() => {
            setMessage(pendingMessage);
            setChatHistory(addMessage(history || [], pendingMessage, "user"));
          }}
        >
          Send
        </Button>
        <Button
          style={{
            color: "black",
            width: "80px",
            borderColor: "black",
          }}
          variant="outlined"
          onClick={() => {
            setMessage("");
            setChatHistory([]);
          }}
        >
          Clear
        </Button>
      </div>
    </div>
  );
};

4.3 ChatGPT core module
Write a core module that implements the ChatGPT interface logic, interacting with the API to obtain the output results. The implementation is as follows:

import { PromptableApi } from "promptable";
import { Configuration, OpenAIApi } from "openai";
import GPT3Tokenizer from "gpt3-tokenizer";

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);
const tokenizer = new GPT3Tokenizer({ type: "gpt3" });

const chatgpt = async (req, res) => {
  const { message, promptId, chatHistory } = req.body;
  console.log("api call entry", message, promptId);
  if (!message) {
    res.status(400).json({ error: "Message is required" });
    return;
  }
  if (!promptId) {
    res.status(400).json({ error: "Prompt ID is required" });
    return;
  }
  // Call the Promptable API and the OpenAI API
  const reply = await getReply(message, promptId, chatHistory || "");
  res.status(200).json({ reply });
  return;
};

const getReply = async (message, promptId, chatHistory) => {
  // Fetch the active prompt deployment from Promptable by promptId
  if (!promptId) {
    throw new Error("Prompt ID is required");
  }
  const promptDeployment = await PromptableApi.getActiveDeployment({
    promptId: promptId,
  });
  console.log("prompt deployment", promptDeployment);
  if (!promptDeployment) {
    throw new Error("Prompt deployment not found");
  }
  // Substitute the user message into the prompt template
  const beforeChatHistory = promptDeployment.text.replace("{{input}}", message);

  const numTokens = countBPETokens(beforeChatHistory);
  const afterChatHistory = beforeChatHistory.replace(
    "{{chat history}}",
    chatHistory
  );

  // Keep the prompt within the model's context window
  const finalPromptText = leftTruncateTranscript(
    afterChatHistory,
    4000 - numTokens
  );

  const revisedPrompt = {
    ...promptDeployment,
    text: finalPromptText,
  };

  console.log("revised prompt", revisedPrompt);
  // Call the OpenAI completions API
  const response = await openai.createCompletion({
    model: revisedPrompt.config.model,
    prompt: revisedPrompt.text,
    temperature: revisedPrompt.config.temperature,
    max_tokens: revisedPrompt.config.max_tokens,
    top_p: 1.0,
    frequency_penalty: 0.0,
    presence_penalty: 0.0,
    stop: revisedPrompt.config.stop,
  });
  console.log("openai response", response.data);
  if (response.data.choices && response.data.choices.length > 0) {
    return response.data.choices[0].text;
  } else {
    return "I'm sorry, I don't understand.";
  }
};

function countBPETokens(text) {
  const encoded = tokenizer.encode(text);
  return encoded.bpe.length;
}

function leftTruncateTranscript(text, maxTokens) {
  const encoded = tokenizer.encode(text);
  const numTokens = encoded.bpe.length;
  const truncated = encoded.bpe.slice(numTokens - maxTokens);
  const decoded = tokenizer.decode(truncated);
  return decoded;
}

export default chatgpt;
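The left-truncation step deserves a quick illustration: it keeps only the most recent tokens of the transcript so the prompt fits the context window. The toy version below uses whitespace "tokens" instead of BPE tokens purely for readability; note the `Math.max(0, ...)` guard, which keeps the whole text when it is already short enough.

```javascript
// Toy illustration of left-truncation: keep only the last maxTokens
// whitespace-separated tokens of the transcript.
function leftTruncateWords(text, maxTokens) {
  const tokens = text.trim().split(/\s+/);
  return tokens.slice(Math.max(0, tokens.length - maxTokens)).join(" ");
}

console.log(leftTruncateWords("a b c d e", 3)); // "c d e"
console.log(leftTruncateWords("a b", 5)); // "a b" (already short enough)
```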

4.4 Dependencies
Finally, the dependency packages used by the project are as follows:

"dependencies": {
    "@emotion/react": "^11.10.5",
    "@emotion/styled": "^11.10.5",
    "@mui/material": "^5.11.6",
    "@next/font": "13.1.6",
    "eslint": "8.32.0",
    "eslint-config-next": "13.1.6",
    "gpt3-tokenizer": "^1.1.5",
    "next": "13.1.6",
    "openai": "^3.2.1",
    "promptable": "^0.0.5",
    "react": "18.2.0",
    "react-dom": "18.2.0"
  }

4.5 Writing the Prompt
After completing the backend logic of the core module, you can open the Promptable console and write a Prompt to obtain an ID.

4.6 Deploying the Prompt
After the Prompt is written, we can deploy it. A successful deployment generates a PromptID.


The deployment page also shows a reference implementation for calling a deployed prompt, as follows:

import axios from 'axios'

// The xxxxxxx below is the ID generated automatically when the Prompt
// was deployed; it is masked here with xxxxxxx.
const { data } = await axios.get('https://promptable.ai/api/prompt/xxxxxxx/deployment/active')

// Fill each {{name}} placeholder in the prompt template with its value
const prompt = data.inputs?.reduce((acc, input) => {
  // Replace input.value with your value!
  return acc.replaceAll(`{{${input.name}}}`, input.value)
}, data.text)

const res = await axios.post(
  'https://api.openai.com/v1/completions',
  {
    // your prompt
    prompt,
    // your model configs from promptable
    ...data.config,
  },
  {
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
  }
)

// Your completion!
console.log(res.data.choices[0].text)

4.7 Preview of the AI website built with ChatGPT
Finally, we developed the AI website based on OpenAI's latest gpt-3.5-turbo model.


Here, to save on token fees, output can be stopped early with the "Stop Dialogue" button, because the OpenAI interface is billed per token. Roughly speaking, a common English word maps to one or a few tokens, and a Chinese character typically maps to one or more tokens; see OpenAI's pricing page for the exact rates.
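Since billing is per token, a rough cost estimate is just token count times rate. The helper below is an illustrative sketch; the default rate of $0.002 per 1K tokens is an assumption based on gpt-3.5-turbo's price at the time of writing, so check OpenAI's pricing page for current numbers.

```javascript
// Rough per-request cost estimator (illustrative). The default rate is
// an assumption (gpt-3.5-turbo, ~$0.002 per 1K tokens at time of writing).
function estimateCostUSD(totalTokens, ratePer1kTokens = 0.002) {
  return (totalTokens / 1000) * ratePer1kTokens;
}

console.log(estimateCostUSD(1000)); // 0.002
```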

5. Summary
This article introduced how to build an AI website with ChatGPT. By choosing a suitable web development framework, integrating ChatGPT, creating a user interface, training ChatGPT, testing and optimizing, deploying to production, and performing ongoing maintenance and upgrades, you can build a powerful AI website and provide a better user experience and service.

Origin blog.csdn.net/weixin_47059371/article/details/130407943