This article walks you step by step, with pictures and code, through using the OpenAI interface with React + md (Markdown) to achieve the ChatGPT typewriter effect.

Preliminary preparation

  • Front-end project
  • Backend interface (the OpenAI interface is enough)

Start a new React project

  • If you already have a project, you can skip this step and go straight to the next one~
  • Next.js is a full-stack React framework. It's versatile and allows you to create React apps of any size - from static blogs to complex dynamic apps. To create a new Next.js project, run:
npx create-next-app@latest

Download dependencies

cd xiaojin-react-chatgpt

npm i

Run the project
npm run dev


Add antd

Install and import antd
npm install antd --save

Basic page preparation

  • Let’s first use simple code to achieve the effect
  • Modify the src\app\page.js code as follows
"use client";
import { useState } from "react";
import { Input, Button } from "antd";

const { TextArea } = Input;
export default function Home() {
  let [outputValue, setOutputValue] = useState("");
  return (
    <main className="flex min-h-screen text-black flex-col items-center justify-between p-24">
      <h2>Chat GPT 打字机效果</h2>
      <TextArea rows={17} value={outputValue} />
      <Button>发送请求</Button>
    </main>
  );
}

The page effect is as follows

Interface preparation

  • Register an OpenAI account (or use other interfaces)
Interface documentation example

Chat Completion Object: Request Parameter Description
export interface RequestModel {
    /**
     * Defaults to 0. A number between -2.0 and 2.0. Positive values penalize new tokens based on
     * their existing frequency in the text so far, decreasing the model's likelihood of repeating
     * the same line verbatim. See more information about frequency and presence penalties.
     */
    frequency_penalty?: number;
    /**
     * Modify the likelihood of specified tokens appearing in the completion.
     *
     * Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an
     * associated bias value from -100 to 100. Mathematically, the bias is added to the logits
     * generated by the model prior to sampling. The exact effect varies per model, but values
     * between -1 and 1 should decrease or increase the likelihood of selection; values like -100
     * or 100 should result in a ban or exclusive selection of the relevant token.
     */
    logit_bias?: null;
    /**
     * Defaults to inf.
     * The maximum number of tokens to generate in the chat completion.
     *
     * The total length of input tokens and generated tokens is limited by the model's context
     * length. Python code example for counting tokens.
     */
    max_tokens?: number;
    /**
     * A list of messages comprising the conversation so far. Python code example.
     */
    messages: Message[];
    /**
     * ID of the model to use. See the model endpoint compatibility table for details on which
     * models work with the Chat API.
     */
    model: string;
    /**
     * Defaults to 1.
     * How many chat completion choices to generate for each input message.
     */
    n?: number;
    /**
     * A number between -2.0 and 2.0. Positive values penalize new tokens based on whether they
     * appear in the text so far, increasing the model's likelihood of talking about new topics.
     * [See more information about frequency and presence penalties.](https://platform.openai.com/docs/api-reference/parameter-details)
     */
    presence_penalty?: number;
    /**
     * An object specifying the format the model must output. Setting { "type": "json_object" }
     * enables JSON mode, which ensures the message the model generates is valid JSON.
     * Important: when using JSON mode, you must also instruct the model to produce JSON via a
     * system or user message. Without this, the model may generate an unending stream of
     * whitespace until the generation reaches the token limit, resulting in increased latency and
     * the appearance of a "stuck" request. Also note that the message content may be partially
     * cut off if finish_reason="length", which indicates the generation exceeded max_tokens or
     * the conversation exceeded the maximum context length.
     */
    response_format?: { [key: string]: any };
    /**
     * This feature is in beta. If specified, our system will make a best effort to sample
     * deterministically, so that repeated requests with the same seed and parameters should
     * return the same result. Determinism is not guaranteed; refer to the system_fingerprint
     * response parameter to monitor changes in the backend.
     */
    seed?: number;
    /**
     * Defaults to null. Up to 4 sequences where the API will stop generating further tokens.
     */
    stop?: string;
    /**
     * Defaults to false. If set, partial message deltas will be sent, as in ChatGPT. Tokens will
     * be sent as data-only server-sent events as they become available, with the stream
     * terminated by a data: [DONE] message. Python code example.
     */
    stream?: boolean;
    /**
     * The sampling temperature to use, between 0 and 2. Higher values like 0.8 make the output
     * more random, while lower values like 0.2 make it more focused and deterministic.
     * We generally recommend altering this or `top_p` but not both.
     */
    temperature?: number;
    /**
     * Controls which (if any) function is called by the model. none means the model will not
     * call a function and instead generates a message. auto means the model can pick between
     * generating a message or calling a function. Passing {"type": "function", "function":
     * {"name": "my_function"}} forces the model to call that function. Defaults to none when no
     * functions are present, and to auto if functions are present.
     */
    tool_choice?: { [key: string]: any };
    /**
     * A list of tools the model may call. Currently, only functions are supported as tools. Use
     * this to provide the list of functions the model may generate JSON inputs for.
     */
    tools?: string[];
    /**
     * An alternative to sampling with temperature, called nucleus sampling, where the model
     * considers the results of the tokens with top_p probability mass. So 0.1 means only the
     * tokens comprising the top 10% probability mass are considered.
     * We generally recommend altering this or `temperature` but not both.
     */
    top_p?: number;
    /**
     * A unique identifier representing your end user, which can help OpenAI monitor and detect
     * abuse. [Learn more](https://platform.openai.com/docs/guides/safety-best-practices/end-user-ids).
     */
    user?: string;
    [property: string]: any;
}

export interface Message {
    content?: string;
    role?: string;
    [property: string]: any;
}
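The Message shape above boils down to role/content pairs. As a small illustration (the system prompt below is just a placeholder, not something required by the interface), a conversation history could look like this:

// Illustration only: a conversation history mixing a system prompt with a user message.
const messages = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "写一篇1000字关于春天的作文" },
];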
Prepare interface parameters
const data = {
  model: "XXX",
  messages: [
    {
      role: "user",
      content: "写一篇1000字关于春天的作文",
    },
  ],
  prompt: "写一篇1000字关于春天的作文",
  temperature: 0.75,
  stream: true,
};

Solution 1: Use fetch to process the stream to achieve the typewriter effect

Processing the fetch response as a stream
  • MDN documentation

  • The Fetch API allows you to fetch resources across the network, providing a modern API to replace XHR. It has a number of advantages, and the really nice thing is that browsers have recently added the ability to consume a fetch response as a readable stream.

  • Likewise, the Request.body and Response.body properties expose the body contents as a ReadableStream via a getter; a minimal sketch follows below.
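Concretely, once you have a Response you can grab a reader from its body and pull chunks from it in a loop. Here is a minimal sketch; the URL is only a placeholder for any streaming endpoint:

// Minimal sketch: read a fetch response body as a stream of Uint8Array chunks.
async function readStream() {
  const res = await fetch("/some-streaming-endpoint"); // placeholder URL
  const reader = res.body.getReader();
  while (true) {
    const { done, value } = await reader.read(); // value is a Uint8Array (undefined once done)
    if (done) break;
    console.log(value); // raw bytes; decode with TextDecoder if you need text
  }
}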

Chat Completion Object: Response Parameter Description

Parameter                Type     Description
id                       string   Unique identifier for the chat completion
choices                  array    List of chat completion choices (can be more than one if n is greater than 1)
created                  integer  Unix timestamp (in seconds) of when the chat completion was created
model                    string   The model used for the chat completion
system_fingerprint       string   Fingerprint representing the backend configuration the model runs with
object                   string   The object type, which is always chat.completion
usage                    object   Usage statistics for the completion request
usage.completion_tokens  integer  Number of tokens in the generated completion
usage.prompt_tokens      integer  Number of tokens in the prompt
usage.total_tokens       integer  Total number of tokens used in the request (prompt + completion)
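For reference, here is a minimal sketch of what a non-streaming chat.completion object looks like based on the fields above; all values are placeholders rather than real output:

// Placeholder values for illustration only; the shape follows the parameter table above.
const exampleResponse = {
  id: "chatcmpl-xxxxxx",
  object: "chat.completion",
  created: 1700000000,
  model: "chatglm2-6b",
  system_fingerprint: "fp_xxxxxx",
  choices: [
    {
      index: 0,
      message: { role: "assistant", content: "..." },
      finish_reason: "stop",
    },
  ],
  usage: { prompt_tokens: 20, completion_tokens: 100, total_tokens: 120 },
};

When stream: true is set, each chunk instead carries choices[].delta with the incremental content, as we will see in the printed output below.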
Call code example
    const response = await fetch(url, {
      method: "POST",
      body: JSON.stringify(data),
      headers: {
        "Content-Type": "application/json",
      },
    });

    const reader = response.body.getReader();
    while (true) {
      const { done, value } = await reader.read();
      if (done) {
        console.log("***********************done");
        console.log(value);
        break;
      }
      console.log("--------------------value");
      console.log(value);
    }
  • In the code above, we call response.body.getReader() to lock a reader to the stream, and then follow the familiar pattern: read a chunk with read(), check whether done is true, and if so stop; otherwise process the chunk and call read() again for the next one.
  • The transferred data is collected chunk by chunk in this loop
Write page logic code
  • We temporarily use fixed parameters to simulate
  • Write a simple demo to demonstrate
"use client";
import { useState } from "react";
import { Input, Button } from "antd";
const { TextArea } = Input;
export default function Home() {
  let [outputValue, setOutputValue] = useState("");
  const send = async () => {
    const url = "http://xxxxxx/v1/chat/completions";
    const data = {
      model: "chatglm2-6b",
      messages: [
        {
          role: "user",
          content: "写一篇1000字关于春天的作文",
        },
      ],
      prompt: "写一篇1000字关于春天的作文",
      temperature: 0.75,
      stream: true,
    };
    const response = await fetch(url, {
      method: "POST",
      body: JSON.stringify(data),
      headers: {
        "Content-Type": "application/json",
      },
    });

    const reader = response.body.getReader();
    while (true) {
      const { done, value } = await reader.read();
      if (done) {
        console.log("***********************done");
        console.log(value);
        break;
      }
      console.log("--------------------value");
      console.log(value);
    }
  };
  return (
    <main className="flex min-h-screen text-black flex-col items-center justify-between p-24">
      <h2>Chat GPT 打字机效果</h2>
      <TextArea rows={17} value={outputValue} />
      <Button onClick={send}>发送请求</Button>
    </main>
  );
}

Click the button to view the print results

  • We can see that what gets printed are raw Uint8Array buffers, so we need to decode and parse them before we can read the final result.
Parsing the buffer

const encode = new TextDecoder("utf-8");
const reader = response.body.getReader();
while (true) {
  const { done, value } = await reader.read();
  const text = encode.decode(value);
  if (done) {
    console.log("***********************done");
    console.log(text);
    break;
  }
  console.log("--------------------value");
  console.log(text);
}
View the parsed output

We can see that the parsing result format is as follows

data: {"id": "chatcmpl-3zmRJUd4TTpm9xP9NbQVHw", "model": "chatglm2-6b", "choices": [{"index": 0, "delta": {"content": "希望"}, "finish_reason": null}]}
Observe the returned data
  • We can see that the returned data arrives as strings whose count varies from read to read, but whose structure is fixed. We can use a regular expression to extract each data: {...} payload into an array, parse each payload as JSON, and then concatenate the content strings~~
  • If other friends have a better method, please leave a message~

Use regular expressions to parse data

We write a function~~ and then print the data

 const getReaderText = (str) => {
    let matchStr = "";
    try {
      // A single chunk may contain several "data: {...}" lines; collect them all.
      let result = str.match(/data:\s*({.*?})\s*\n/g);
      result &&
        result.forEach((_) => {
          const matchStrItem = _.match(/data:\s*({.*?})\s*\n/)[1];
          const data = JSON.parse(matchStrItem);
          // Append the incremental content carried by each delta.
          matchStr += data?.choices[0].delta?.content || "";
        });
    } catch (e) {
      console.log(e);
    }
    return matchStr;
  };
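As a quick sanity check, you can feed this function the sample line shown earlier; note that the trailing newline matters, because the regular expression anchors each JSON payload to the end of its line:

// Sanity check against the sample "data: {...}" line from the section above.
const sample =
  'data: {"id": "chatcmpl-3zmRJUd4TTpm9xP9NbQVHw", "model": "chatglm2-6b", "choices": [{"index": 0, "delta": {"content": "希望"}, "finish_reason": null}]}\n';
console.log(getReaderText(sample)); // -> 希望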

Assign data to text box


A first simple version of the typewriter effect

Basic version of typewriter effect code (almost no dependencies)
"use client";
import { useState } from "react";
import { Input, Button } from "antd";
const { TextArea } = Input;
export default function Home() {
  let [outputValue, setOutputValue] = useState("");
  const getReaderText = (str) => {
    let matchStr = "";
    try {
      let result = str.match(/data:\s*({.*?})\s*\n/g);
      result &&
        result.forEach((_) => {
          const matchStrItem = _.match(/data:\s*({.*?})\s*\n/)[1];
          const data = JSON.parse(matchStrItem);
          matchStr += data?.choices[0].delta?.content || "";
        });
    } catch (e) {
      console.log(e);
    }
    return matchStr;
  };

  const send = async () => {
    const url = "http://xxx.xxx.xxx.xxx:xxx/v1/chat/completions";
    const data = {
      model: "chatglm2-6b",
      messages: [
        {
          role: "user",
          content: "帮我写一篇2000字关于春天的英文文章",
        },
      ],
      temperature: 0.75,
      stream: true,
    };
    const response = await fetch(url, {
      method: "POST",
      body: JSON.stringify(data),
      headers: {
        "Content-Type": "application/json",
      },
    });
    const encode = new TextDecoder("utf-8");
    const reader = response.body.getReader();
    while (true) {
      const { done, value } = await reader.read();
      const decodeText = encode.decode(value);
      if (done) {
        console.log(decodeText);
        break;
      }
      setOutputValue((str) => (str += getReaderText(decodeText)));
    }
  };
  return (
    <main className="flex min-h-screen text-black flex-col items-center justify-between p-24">
      <h2>Chat GPT 打字机效果</h2>
      <TextArea rows={24} value={outputValue} />
      <Button onClick={send}>发送请求</Button>
    </main>
  );
}

Auto-scroll
import { useRef } from "react";


const ref = useRef();

// Add this after each assignment to the text box:
ref.current &&
      (ref.current.resizableTextArea.textArea.scrollTop =
        ref.current.resizableTextArea.textArea.scrollHeight);


html

<TextArea rows={24} value={outputValue} ref={ref}/>
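If you prefer to keep the scroll logic in one place, you could wrap it in a small helper. The scrollToBottom function below is just a sketch; it still relies on antd's internal resizableTextArea.textArea structure shown above, which may change between antd versions:

// Sketch of a helper that pins the antd TextArea to its bottom edge.
// It uses the same internal ref path (resizableTextArea.textArea) as above.
const scrollToBottom = (textAreaRef) => {
  const textArea = textAreaRef.current?.resizableTextArea?.textArea;
  if (textArea) {
    textArea.scrollTop = textArea.scrollHeight;
  }
};

// Usage: call it right after each setOutputValue(...) update.
scrollToBottom(ref);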

What if you want a slower typewriter effect?
  • Because several characters are parsed from each chunk at once, the output sometimes doesn't look like it is being typed one character at a time. We can use the following approach to smooth it out.
  • Approach: keep appending the received data to a string, split that string into individual characters, and use setTimeout to append one character to the DOM at a fixed interval (the demo below uses 100 ms).
  • The code below is only a demonstration; it is not a recommended way to write this, and I removed it in the end~~~

Complete typewriter code with customizable speed

"use client";
import { useState, useRef, useEffect } from "react";
import { Input, Button } from "antd";
import "./index.css";
const { TextArea } = Input;
let testDataString = "";
export default function Home() {
  const ref = useRef();
  let [outputValue, setOutputValue] = useState("");

  const getReaderText = (str) => {
    let matchStr = "";
    try {
      let result = str.match(/data:\s*({.*?})\s*\n/g);
      result &&
        result.forEach((_) => {
          const matchStrItem = _.match(/data:\s*({.*?})\s*\n/)[1];
          const data = JSON.parse(matchStrItem);
          matchStr += (data?.choices[0].delta?.content || '');
        });
    } catch (e) {
      console.log(e);
    }
    return matchStr;
  };
  const writing = (index) => {
    const data = testDataString.split("");
    // Append one character per tick; testDataString keeps growing while the stream arrives,
    // so only advance the index when a character was actually consumed.
    if (index < data.length) {
      setOutputValue((str) => str + data[index]);
      index++;
    }
    ref.current &&
      (ref.current.resizableTextArea.textArea.scrollTop =
        ref.current.resizableTextArea.textArea.scrollHeight);
    // Demo only: the timer keeps polling for new characters and never stops.
    setTimeout(writing, 100, index);
  };
  const send = async () => {
    setOutputValue("");
    const url = "http://xxx.xxx.xxx.xxx:xxx/v1/chat/completions";
    const data = {
      model: "chatglm2-6b",
      messages: [
        {
          role: "user",
          content: "hello",
        },
      ],
      temperature: 0.75,
      stream: true,
    };

    testDataString = "";
    const response = await fetch(url, {
      method: "POST",
      body: JSON.stringify(data),
      headers: {
        "Content-Type": "application/json",
      },
    });

    const encode = new TextDecoder("utf-8");
    const reader = response.body.getReader();
    while (true) {
      const { done, value } = await reader.read();
      const decodeText = encode.decode(value);
      if (testDataString.length === 0) {
        testDataString += getReaderText(decodeText);
        writing(0);
      } else {
        testDataString += getReaderText(decodeText);
      }
      if (done) {
        console.log(decodeText);
        break;
      }
    }
  };
  return (
    <main className="chat-container flex min-h-screen text-black flex-col items-center justify-between p-24">
      <h2>Chat GPT 打字机效果</h2>
      <TextArea rows={3} value={outputValue} ref={ref} />
      <Button onClick={send}>发送请求</Button>
    </main>
  );
}

Code block support (to be added)

Download dependencies
npm i @uiw/react-md-editor
Add key code
import MDEditor from '@uiw/react-md-editor';

html

<MDEditor.Markdown source={outputValue}  className="markdown-body" ref={ref}/>
Configure the styles
  • I found a stylesheet example online and pasted it straight into the project. You can use it as a reference~~
  • Click here to go directly: github-markdown-css
Check the effect

The core code is as follows
"use client";
import { useState, useRef, useEffect } from "react";
import MDEditor from '@uiw/react-md-editor';
import { Input, Button } from "antd";
import "./index.css";
import './md.css'
const { TextArea } = Input;
let testDataString = "";
export default function Home() {
  const ref = useRef();
  let [outputValue, setOutputValue] = useState("");

  const getReaderText = (str) => {
    let matchStr = "";
    try {
      let resultList = str.match(/data:\s*({.*?})\s*\n/g);
      resultList &&
      resultList.forEach((_) => {
          const matchStrItem = _.match(/data:\s*({.*?})\s*\n/)[1];
          const data = JSON.parse(matchStrItem);
          matchStr += (data?.choices[0].delta?.content || '');
        });
    } catch (e) {
      console.log(e);
    }
    return matchStr;
  };

  const send = async () => {
    setOutputValue("");
    const url = "http://xxx.xxx.xxx.xxx:xxx/v1/chat/completions";
    const data = {
      model: "chatglm2-6b",
      messages: [
        {
          role: "user",
          content: "请实现一个登陆功能",
        },
      ],
      temperature: 0.75,
      stream: true,
    };

    testDataString = "";
    const response = await fetch(url, {
      method: "POST",
      body: JSON.stringify(data),
      headers: {
        "Content-Type": "application/json",
      },
    });

    const encode = new TextDecoder("utf-8");
    const reader = response.body.getReader();
    while (true) {
      const { done, value } = await reader.read();
      const decodeText = encode.decode(value);
      if (done) {
        console.log(decodeText);
        break;
      }
      setOutputValue((str) => (str += getReaderText(decodeText)));
      console.log(ref.current.mdp.current)
       ref.current &&
        (ref.current.mdp.current.scrollTop =
          ref.current.mdp.current.scrollHeight);
    }
  };
  return (
    <main className="chat-container flex min-h-screen text-black flex-col items-center justify-between ">
      <h1>Chat GPT 打字机效果</h1>
      <MDEditor.Markdown source={outputValue}  className="markdown-body" ref={ref}/>
      <Button onClick={send}>发送请求</Button>
    </main>
  );
}

Solution 2: Use axios to process the stream (not suitable for the browser; usable in Node.js code)

  • When you issue a stream-type request with axios in the browser, axios implements the request with the XMLHttpRequest object, and XMLHttpRequestResponseType does not support stream, so the following warning is reported:
The provided value 'stream' is not a valid enum value of type XMLHttpRequestResponseType.
A complete example of calling OpenAI with an axios stream in Node.js

Next, I will show you how to write it with axios.

const axios = require("axios");
let testDataString = "";
const getReaderText = (str) => {
  let matchStr = "";
  try {
    let resultList = str.match(/data:\s*({.*?})\s*\n/g);
    resultList &&
      resultList.forEach((_) => {
        const matchStrItem = _.match(/data:\s*({.*?})\s*\n/)[1];
        const data = JSON.parse(matchStrItem);
        matchStr += data?.choices[0].delta?.content || "";
      });
  } catch (e) {
    console.log(e);
  }
  return matchStr;
};
const url = "http://10.169.112.194:7100/v1/chat/completions";
const data = {
  model: "chatglm2-6b",
  messages: [
    {
      role: "user",
      content: "请实现一个登陆功能",
    },
  ],
  temperature: 0.75,
  stream: true,
};
const encode = new TextDecoder("utf-8");
axios
  .post(url, data, {
    responseType: "stream",
    headers: { "Content-Type": "application/json" },
  })
  .then((response) => {
    response.data.on("data", (value) => {
      const currentString = getReaderText(encode.decode(value));
      testDataString += currentString;
      console.log(currentString);
    });
    response.data.on("end", () => {
      console.log(testDataString);
    });
  });

Call result

Code repository

That's all for today~
  • Friends, ( ̄ω ̄( ̄ω ̄〃 ( ̄ω ̄〃)ゝ see you tomorrow~~
  • Everyone, please be happy every day

Everyone is welcome to point out anything in the article that needs correcting~
Learning never ends, and cooperation benefits us all


Friends passing by are welcome to share better suggestions~~
