[Practice] ChatGLM Fine-tuning Guidelines and Deployment (MNN)

1. ChatGLM

Deployment is relatively simple, although the model's behavior after fine-tuning can be erratic; for details, refer to the ChatGLM-6B deployment and fine-tuning tutorial.

1.1 MNN deployment

https://github.com/wangzhaode/ChatGLM-MNN

1.1.1 Linux deployment

git clone https://github.com/wangzhaode/ChatGLM-MNN.git
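
The build steps below assume an MNN source tree inside the project directory. If it is not there after cloning (whether it ships as a submodule may depend on the repository version; this is an assumption, so check the README), the official MNN sources can be fetched manually:

git clone https://github.com/alibaba/MNN.git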

(1) Compile MNN

cd MNN
mkdir build && cd build

# build with CUDA support
cmake -DCMAKE_BUILD_TYPE=Release -DMNN_CUDA=ON ..
make -j$(nproc)
cd ../..  # return to the project root
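
If no CUDA-capable GPU is available, a CPU-only build should also work; a minimal variant that simply drops the CUDA flag, everything else unchanged:

cmake -DCMAKE_BUILD_TYPE=Release ..
make -j$(nproc)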

(2) File copy

cp -r MNN/include/MNN include
cp MNN/build/libMNN.so libs/
cp MNN/build/express/*.so  libs/
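
A quick sanity check that the libraries landed in libs/ and that their runtime dependencies resolve (standard Linux tooling, nothing project-specific):

ls libs/
ldd libs/libMNN.so   # any "not found" entry means a missing dependency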

(3) Download the weights

You may need a VPN or proxy to reach the download servers.

cd resource/models
# download the fp16 weights: almost no precision loss
./download_models.sh fp16
# download the int8 weights: minimal precision loss; recommended
./download_models.sh int8
# download the int4 weights: some precision loss
./download_models.sh int4
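
If the script cannot reach the download servers directly, routing it through a proxy may help; a sketch assuming a local proxy at 127.0.0.1:7890 (the address is hypothetical, substitute your own):

export https_proxy=http://127.0.0.1:7890
export http_proxy=http://127.0.0.1:7890
./download_models.sh int8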

(4) Build and run the demos

mkdir build && cd build
cmake -D WITH_CUDA=on ..

# start the build (Linux and macOS are supported)
make -j$(nproc)

./cli_demo # command-line demo
./web_demo # web UI demo

When run, the demo works briefly but soon reports an out-of-memory error; this is a known issue the maintainers are still working on.
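
To see where memory runs out while a demo is running, standard system monitors are enough (nothing specific to this project):

watch -n 1 nvidia-smi   # GPU memory, for the CUDA build
free -h                 # host RAM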

1.2 InferLLM deployment

https://github.com/MegEngine/InferLLM
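
InferLLM uses a similar CMake-based build; a minimal sketch of the usual clone-and-build flow (the exact options may differ, so check the InferLLM README):

git clone https://github.com/MegEngine/InferLLM.git
cd InferLLM
mkdir build && cd build
cmake ..
make -j$(nproc)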

Source: blog.csdn.net/weixin_50862344/article/details/131099293