Model training series, part 1: Deploy your own local AI assistant with Tsinghua's ChatGLM-6B model

Recently, the ChatGLM-6B language model open-sourced by Tsinghua University has become a big hit. It is a small model with only 6.2 billion parameters, yet it is surprisingly capable. I am looking forward to the release of their follow-up 130-billion-parameter model, 130B.

Why is a small model with weaker abilities so sought after? Because although ChatGPT and GPT-4 are excellent, they are hosted abroad, access is restricted, and they cost money. More importantly, if LLMs are to improve productivity across industries, many companies will have to deploy language models themselves. After all, nobody dares leak their own business data to train someone else's AI, doing the hard work only to see others benefit and their own business suffer in the end.

Below, based on my own hands-on experience, I will share how to build a language-model server yourself. The end result looks like this:

First of all, you need a machine with a reasonably strong GPU. I recommend renting an AI training instance from Tencent Cloud or Alibaba Cloud; a T4 card is enough. Prices are generally a fraction of a yuan per hour. I grabbed a host in Tencent Cloud's flash-sale event for 60 yuan for half a month, which is dirt cheap. If you have deep pockets, you can build your own machine and tinker with it for as long as you like.
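Once the machine is up, it is worth a quick check that PyTorch can actually see the GPU before going any further. Here is a minimal sanity check (run it with python3; the exact version and card name depend on your image and instance type):

# check_gpu.py -- confirm that PyTorch can see the CUDA device
import torch

print(torch.__version__)                  # e.g. 1.9.1 on this base image
print(torch.cuda.is_available())          # should print True
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. "Tesla T4"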

Anyone doing this is a programmer, so no more chit-chat, straight to the shell commands (:

# My host environment: Ubuntu Server 18.04 LTS 64-bit, preinstalled with the
# "PyTorch 1.9.1 Ubuntu 18.04 GPU" base image (NVIDIA driver 460 preinstalled)
# All commands below are run starting from the /root directory

# Update the Ubuntu package index
apt-get update
# Create a directory to hold the ChatGLM source code
mkdir ChatGLM
cd ChatGLM/
# Clone the ChatGLM-6B source code
git clone https://github.com/THUDM/ChatGLM-6B.git
# Create a directory to hold the ChatGLM-6B int4 quantized model
mkdir model
cd model/
# Install git-lfs to handle the large model files
apt install git-lfs
# Initialize the current directory as a git repo and enable LFS
git init
git lfs install
# Clone the int4 quantized ChatGLM-6B model weights
git clone https://huggingface.co/THUDM/chatglm-6b-int4
# Install the CUDA toolkit so Python can use CUDA
apt install nvidia-cuda-toolkit

# Go back into the ChatGLM-6B source directory (we are still inside model/)
cd ../ChatGLM-6B/
# Add three dependency lines to requirements.txt:
vim requirements.txt
	chardet
	streamlit
	streamlit-chat
# Install the required Python dependencies
pip install -r requirements.txt
# Change the two model-path strings in web_demo2.py to the absolute path of the
# local model (a Python sketch of the edited lines follows this command block):
vim web_demo2.py
	/root/ChatGLM/model/chatglm-6b-int4
	
# Run the ChatGLM-6B web chat demo, then open http://<host-IP>:8080 in a browser to chat
python3 -m streamlit run ./web_demo2.py --server.port 8080
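For reference, here is roughly what the two edited load calls in web_demo2.py look like after pointing them at the local int4 model. This is a sketch assuming the stock script loads the model with AutoTokenizer/AutoModel from transformers (check it against your copy of the file); the same snippet can also be run on its own as a quick smoke test before starting the web demo:

# Rough shape of the edited model loading in web_demo2.py (verify against your copy);
# also usable standalone as a smoke test of the int4 model.
from transformers import AutoModel, AutoTokenizer

MODEL_PATH = "/root/ChatGLM/model/chatglm-6b-int4"

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, trust_remote_code=True)
model = AutoModel.from_pretrained(MODEL_PATH, trust_remote_code=True).half().cuda()
model = model.eval()

# One round of chat to confirm the model loads and generates on the GPU.
response, history = model.chat(tokenizer, "你好", history=[])
print(response)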

This article comes from the Knowledge Planet circle ConnectGPT, a small community dedicated to exploring the application of AI and language-model technology.
