[GPT] How to run an offline version of GPT, and problems encountered during deployment

【background】

At present, many companies cannot use OpenAI's GPT because of data security concerns, and they often do not need such a general-purpose model anyway. As a result, many companies want to train their own offline GPT that only covers their own domain knowledge.
To make this possible, the first step is being able to run a GPT model locally.

【tool】

GPT4ALL is a tool that was created for exactly this purpose.
Official website address: https://gpt4all.io/index.html
If the official website is slow, you can download my saved copy from Baidu Netdisk:
Link: https://pan.baidu.com/s/1QodbiPxnK0RSYDcDc65sPg?pwd=dff3
Extraction code: dff3

【Deployment method】

  1. The first deployment method is the simplest: download the executable for your platform from the official website's homepage and run it directly. The advantage is convenience: it comes with a UI that integrates all functions, including model download and training. The disadvantage is that GPT can only be used on that machine, training your own personal GPT there takes more learning and experimentation, and if you want to expose this capability to LAN applications you still need one of the other deployment methods.
  2. The second, recommended deployment method is Python deployment. A few points to note: if you use Anaconda + PyCharm, make sure the interpreter environment used for GPT4ALL is Python 3.8 or above, otherwise installing the package will report an error. Anaconda also needs to be 64-bit; when I initially installed 32-bit Anaconda on a 64-bit OS, it reported errors as well. Avoid these two pitfalls and the installation will go smoothly (see the environment check sketch right after this list).
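
A quick check of the interpreter before installing can save time with the two pitfalls above. This is a minimal sketch using only the Python standard library; it simply confirms the interpreter is version 3.8 or newer and is 64-bit:

import platform
import struct
import sys

# the gpt4all pip package requires Python 3.8 or newer
assert sys.version_info >= (3, 8), "Python " + platform.python_version() + " is too old; 3.8+ is required"

# a 32-bit interpreter will fail even on a 64-bit OS, so confirm the interpreter itself is 64-bit
assert struct.calcsize("P") * 8 == 64, "This interpreter is 32-bit; install 64-bit Anaconda/Python"

print("Environment OK:", platform.python_version(), "64-bit")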

Note that you also need to download the model's bin file from the home page. The default recommendation is version 3.5, which is free and available for commercial use. There are many other models; you can research them yourself.
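Once the bin file has been downloaded, it helps to confirm it sits in the folder you plan to pass as model_path. A minimal sketch, assuming the same file name and D:/gpt/ directory used in the commands below (adjust both to your own download location):

import os

# directory and file name match the example commands later in this post; change them to your own
model_dir = "D:/gpt/"
model_file = "ggml-gpt4all-j-v1.3-groovy.bin"
model_path = os.path.join(model_dir, model_file)

if os.path.isfile(model_path):
    size_gb = os.path.getsize(model_path) / (1024 ** 3)
    print("Found %s (%.1f GB)" % (model_file, size_gb))
else:
    print("Model not found at %s; download the bin file from the official homepage first" % model_path)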

【Python deployment commands】

Specific commands for Python deployment (these can be executed step by step from the command line).
First, in PyCharm's Terminal:

pip install nomic
pip install gpt4all

Then in the Python Console:

import gpt4all  # check that the package imports successfully
dir(gpt4all)  # check that the related modules are all installed
from gpt4all import GPT4All  # start verifying the prompt functionality
gpt = GPT4All(model_name="ggml-gpt4all-j-v1.3-groovy.bin", model_path="D:/gpt/")  # load and initialize the corresponding bin model first
gpt.chat_completion([{"role": "user", "content": "what are machine learning models"}])  # this is the prompt command; after you send it, the machine thinks for a while and then returns an answer to the question
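As noted earlier, serving other machines on the LAN takes a bit more than the desktop app. Below is a minimal sketch of one way to do it, using only the standard library's http.server together with the same GPT4All and chat_completion calls shown above; the port, route, and JSON field names are arbitrary choices for illustration, not part of GPT4ALL:

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

from gpt4all import GPT4All

# load the model once at startup, with the same arguments as the console example above
gpt = GPT4All(model_name="ggml-gpt4all-j-v1.3-groovy.bin", model_path="D:/gpt/")

class PromptHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # expect a JSON body such as {"prompt": "what are machine learning models"}
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        prompt = body.get("prompt", "")

        # run the prompt through the model; this blocks until generation finishes
        result = gpt.chat_completion([{"role": "user", "content": prompt}])

        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(result, default=str).encode("utf-8"))

# 0.0.0.0 makes the service reachable from other machines on the LAN
HTTPServer(("0.0.0.0", 8000), PromptHandler).serve_forever()

From another machine on the LAN you could then test it with something like: curl -X POST http://<server-ip>:8000 -d "{\"prompt\": \"hello\"}"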

【Other Information】

GPT4ALL runs on the CPU rather than the GPU, so its performance depends on your CPU's performance.
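
Since generation is CPU-bound, response time scales roughly with core count and clock speed. Here is a small, self-contained sketch for getting a feel for this on your own machine, reusing the same model name and path as in the deployment example above (the test prompt is just an arbitrary short question):

import os
import time

from gpt4all import GPT4All

print("Logical CPU cores:", os.cpu_count())

# same model and path as in the deployment example above
gpt = GPT4All(model_name="ggml-gpt4all-j-v1.3-groovy.bin", model_path="D:/gpt/")

# time one short prompt to get a rough feel for generation speed on this CPU
start = time.perf_counter()
gpt.chat_completion([{"role": "user", "content": "Explain overfitting in one sentence."}])
print("Response took %.1f seconds" % (time.perf_counter() - start))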

Reprinted from: blog.csdn.net/weixin_41697242/article/details/131443336