[chatgpt] Learn from the open-source project chatgpt-web and build your own ChatGPT service, with rich features and a typing effect

Foreword


The original link of this article is:
https://blog.csdn.net/freewebsys/article/details/130438873

1. The open-source chatgpt-web project


Project address:
https://github.com/Chanzhaoyu/chatgpt-web

There is also a mirror of the project on Gitee. I am not sure whether it is maintained by the original author, but it has also been updated recently:
https://gitee.com/boomer001/chanzhaoyu-chatgpt-web

The project started in February 2023 and already has over 20K stars.
The front end of the project is developed with Vue.
The back end is developed with Node.js, so it is a full-stack project.

There is also an official Docker image, which can be deployed directly with a little configuration.

2. Running the project directly with docker-compose


How to apply for an API key is not covered here; you will need to sort that out yourself.

The project also provides a simple docker-compose configuration; after setting it up, just fill in the API key:

docker-compose.yaml configuration file:

services:
  app:
    restart: always
    image: chenzhaoyu94/chatgpt-web # always uses latest; re-pull this tag to update
    ports:
      - 3002:3002
    #volumes: # to customize the page, map your built dist to the /app/public directory
    #  - ./chatgpt-web/dist:/app/public
    environment:
      # choose one of the two
      OPENAI_API_KEY: sk-xxxxxxx
      # choose one of the two
      # API base URL, optional, available when OPENAI_API_KEY is set
      OPENAI_API_BASE_URL: http://localhost:3002
      # API model, optional, available when OPENAI_API_KEY is set
      OPENAI_API_MODEL: gpt-3.5-turbo
      # access secret key, optional
      AUTH_SECRET_KEY: chat666
      # timeout in milliseconds, optional
      TIMEOUT_MS: 60000

If you want to modify the page yourself, map your built dist to the /app/public directory, i.e.
./chatgpt-web/dist:/app/public
where chatgpt-web is the directory of the downloaded GitHub project.

Of course, you can also run it directly with Node.js; the project is split into a front end and a back end, and the front-end project needs to be compiled first.

After a successful startup, a login prompt appears, and you need to enter the configured auth password, e.g. chat666.
Entering a wrong password fails the login; of course, this is only a simple authorization scheme.

After logging in successfully, you can start chatting.
Answers are rendered in typing mode, which deserves some attention.

3. About typing mode: SSE and octet-stream (the typing effect)


SSE stands for "Server-Sent Events", an HTTP-based server-push technology. With SSE, the server can send an event stream to the client, enabling real-time data updates and notifications. The SSE protocol is part of the HTML5 standard and supports cross-origin communication. Unlike WebSockets, it is a one-way channel: only the server can send messages to the client. In the browser, the SSE event stream can be consumed through JavaScript's EventSource API.
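As a quick illustration of the wire format, the sketch below splits a raw SSE stream into its `data:` payloads. Note that chatgpt-web itself streams newline-delimited JSON over a chunked octet-stream response rather than strict SSE, so `parseSSE` here is a hypothetical helper, not project code:

```typescript
// Hypothetical helper: split a raw SSE stream into event payloads.
// SSE events are separated by a blank line, and each payload line
// is prefixed with "data:".
function parseSSE(raw: string): string[] {
  return raw
    .split('\n\n') // events are separated by blank lines
    .map(block => block
      .split('\n')
      .filter(line => line.startsWith('data:'))
      .map(line => line.slice(5).trimStart())
      .join('\n'))
    .filter(data => data.length > 0)
}

const stream = 'data: hello\n\ndata: world\n\n'
console.log(parseSSE(stream)) // → [ 'hello', 'world' ]
```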

One thing to watch out for here: if the service port is exposed directly, nothing special is needed.
But if you forward traffic through nginx, you need to adjust the nginx configuration.

It took me a few days to figure this out; the project's issue tracker also gives a working nginx configuration for octet-stream streaming (the typing effect):

Refer to this:
https://github.com/Chanzhaoyu/chatgpt-web/issues/402

That is, you need to configure the backend interface in nginx like this:

	location ~ ^/api/chat-process {
		proxy_http_version 1.1;
		proxy_set_header Upgrade $http_upgrade;
		proxy_set_header Connection "Upgrade";

		# streaming output
		chunked_transfer_encoding on;  # enable chunked transfer encoding
		tcp_nopush on;                 # enable the TCP_NOPUSH option
		tcp_nodelay on;                # enable TCP_NODELAY, disabling Nagle's algorithm
		keepalive_timeout 300;         # set the keep-alive timeout to 300 seconds

		proxy_pass http://app:3002;
	}

The response body is a sequence of JSON objects separated by newlines (\n).
Each time the server returns one or a few more words, the interface updates, producing the typing effect:


The handling function in the project:

const fetchChatAPIOnce = async () => {
  await fetchChatAPIProcess<Chat.ConversationResponse>({
    prompt: message,
    options,
    signal: controller.signal,
    onDownloadProgress: ({ event }) => {
      const xhr = event.target
      const { responseText } = xhr
      // Always process the final line
      const lastIndex = responseText.lastIndexOf('\n', responseText.length - 2)
      let chunk = responseText
      if (lastIndex !== -1)
        chunk = responseText.substring(lastIndex)
      try {
        const data = JSON.parse(chunk)
        updateChat(
          +uuid,
          index,
          {
            dateTime: new Date().toLocaleString(),
            text: lastText + (data.text ?? ''),
            inversion: false,
            error: false,
            loading: true,
            conversationOptions: { conversationId: data.conversationId, parentMessageId: data.id },
            requestOptions: { prompt: message, options: { ...options } },
          },
        )

        if (openLongReply && data.detail.choices[0].finish_reason === 'length') {
          options.parentMessageId = data.id
          lastText = data.text
          message = ''
          return fetchChatAPIOnce()
        }
      }
      catch (error) {
        //
      }
    },
  })
}
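The chunk-extraction step above can be isolated as a pure function. `extractLastChunk` is a hypothetical name for illustration, not part of the project; it assumes the newline-delimited JSON format described above:

```typescript
// Hypothetical helper mirroring the extraction logic above: responseText keeps
// accumulating, so we search backwards (skipping a trailing newline) for the
// start of the last line and parse only that final chunk.
function extractLastChunk(responseText: string): string {
  const lastIndex = responseText.lastIndexOf('\n', responseText.length - 2)
  return lastIndex !== -1 ? responseText.substring(lastIndex) : responseText
}

// Two chunks have accumulated; only the last (most complete) one is parsed.
const buf = '{"text":"Hel"}\n{"text":"Hello"}'
console.log(JSON.parse(extractLastChunk(buf)).text) // → Hello
```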

There are four interfaces in the project:

/api/config returns usage and configuration information.
/api/session returns model information; once it succeeds, you can start chatting.
/api/verify verifies the user's login status.
/api/chat-process is the most important one: it processes the user's chat messages, streaming the reply to simulate typing.
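To make the chat-process interface concrete, here is a sketch of the request body the front end sends, based on the snippet earlier. `buildChatRequest` and the exact field set are illustrative assumptions, not the project's definitive API:

```typescript
// Shape of the /api/chat-process request body (a sketch; field names follow
// the frontend snippet above, the rest is an assumption).
interface ChatProcessRequest {
  prompt: string
  options?: { conversationId?: string; parentMessageId?: string }
}

// Hypothetical helper: carrying parentMessageId from the previous response
// is what lets the server keep conversation context across turns.
function buildChatRequest(prompt: string, parentMessageId?: string): ChatProcessRequest {
  const req: ChatProcessRequest = { prompt }
  if (parentMessageId)
    req.options = { parentMessageId }
  return req
}

console.log(JSON.stringify(buildChatRequest('the three best attractions')))
// → {"prompt":"the three best attractions"}
```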

The project also supports multiple conversations and remembers the content of the previous turns.
For example, first ask about the scenic spots in Shanghai,
then ask for the three best ones; ChatGPT will understand that you mean scenic spots in Shanghai.

The front-end UI is built with Naive UI:

https://www.naiveui.com/zh-CN/os-theme
github address:
https://github.com/tusen-ai/naive-ui

The layout is adaptive, styled for both mobile and desktop.

4. Regarding content storage


This project stores the relevant chat data locally in the browser,
in localStorage; the advantages are efficiency and speed.

utils/storage/local.ts uses:

window.localStorage.setItem(key, json)
window.localStorage.getItem(key)
window.localStorage.removeItem(key)
window.localStorage.clear()

All storage-related information is kept under a single key,
divided into two attributes: chat and history.
active is the id of the currently active session, and the session uuid is the current timestamp:

  chatStore.addHistory({ title: 'New Chat', uuid: Date.now(), isEdit: false })
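The storage layer can be sketched as a thin typed wrapper over a Storage-like object (in the browser you would pass window.localStorage). The names `setLocal`/`getLocal` echo the project's file, but the exact API here is an assumption for illustration:

```typescript
// Minimal Storage-like shape so the helpers also work outside a browser.
interface KVStore {
  getItem(key: string): string | null
  setItem(key: string, value: string): void
}

// JSON-serialize values on the way in, parse them on the way out.
function setLocal(store: KVStore, key: string, value: unknown): void {
  store.setItem(key, JSON.stringify(value))
}

function getLocal<T>(store: KVStore, key: string): T | null {
  const json = store.getItem(key)
  return json === null ? null : JSON.parse(json) as T
}

// An in-memory map stands in for window.localStorage in this example.
const mem = new Map<string, string>()
const store: KVStore = {
  getItem: k => mem.get(k) ?? null,
  setItem: (k, v) => { mem.set(k, v) },
}

setLocal(store, 'chatStorage', { active: 1002, history: [], chat: [] })
console.log(getLocal<{ active: number }>(store, 'chatStorage')?.active) // → 1002
```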

Authentication here is a hard-coded setting:

service.interceptors.request.use(
  (config) => {
    const token = useAuthStore().token
    if (token)
      config.headers.Authorization = `Bearer ${token}`
    return config
  },
  (error) => {
    return Promise.reject(error.response)
  },
)

The token is written directly into the request header, which is not secure.
If you want server-side authorization with JWT, you can modify the verify interface:
the server returns a JWT after login, and the client stores it locally.

The interceptor then attaches the JWT on each request; alternatively you can store it in a cookie, though that brings cross-domain problems to solve.
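As a sketch of this JWT idea (the server signs a token in verify, the client sends it back in the Authorization header), here is a minimal HS256-style sign/verify pair using Node's crypto module. This is illustrative only; a real deployment should use a vetted library such as jsonwebtoken rather than hand-rolling token logic:

```typescript
import { createHmac } from 'node:crypto'

// base64url encoding as used by JWT (URL-safe alphabet, no padding).
const b64url = (data: string): string => Buffer.from(data).toString('base64url')

// Sketch: sign a payload into a header.payload.signature token (HS256-style).
function signToken(payload: object, secret: string): string {
  const header = b64url(JSON.stringify({ alg: 'HS256', typ: 'JWT' }))
  const body = b64url(JSON.stringify(payload))
  const sig = createHmac('sha256', secret).update(`${header}.${body}`).digest('base64url')
  return `${header}.${body}.${sig}`
}

// Sketch: recompute the signature and compare; a real implementation should
// also use a constant-time comparison and check the exp claim.
function verifyToken(token: string, secret: string): boolean {
  const [header, body, sig] = token.split('.')
  const expected = createHmac('sha256', secret).update(`${header}.${body}`).digest('base64url')
  return sig === expected
}

const token = signToken({ user: 'demo', exp: Date.now() + 3600_000 }, 'chat666')
console.log(verifyToken(token, 'chat666')) // → true
```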

5. Summary


Recently I have found that using ChatGPT really does improve efficiency, very conveniently.
It is also very helpful for the various development problems above.
At the same time, running a service around a large model raises many engineering problems to solve,
and some knowledge of login authorization and other common infrastructure is worth mastering.

This open-source project is very good and highly recommended!


