Local deployment of langchain-ChatGLM2-6B on Windows 10

Recently I have been working with the ChatGLM large language model, and I just got langchain-ChatGLM2-6B running locally on Windows 10. The installation steps are as follows:
Device configuration: CPU E3-1240 v3, RAM 32 GB, SSD 1 TB, GPU0 NVIDIA GTX 1080 Ti, GPU1 NVIDIA Tesla T4
OS: Windows 10 Professional
Installation steps:
1. Install Anaconda first ( https://www.anaconda.com/download/ )
2. Click "Start" - open Anaconda Powershell Prompt
3. Update Conda to the latest version:
conda update conda

4. Add the necessary Conda channels to get access to more packages:
conda config --add channels conda-forge
conda config --add channels defaults

Verify the setup: if
conda list
displays the installed packages, Conda is working correctly.

5. Create a virtual environment
conda create -n langchain-chatglm python==3.10.9
conda activate langchain-chatglm
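Once the environment is activated, a quick way to confirm you are on the requested 3.10.x interpreter (a minimal sanity check, not part of the original steps):

```python
import sys

# Inside the activated "langchain-chatglm" env this should report 3.10
major, minor = sys.version_info[:2]
print(f"Python {major}.{minor}")
```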
6. Deploy langchain-ChatGLM
md c:\ChatGLM
cd c:\ChatGLM
git clone https://github.com/imClumsyPanda/langchain-ChatGLM
cd langchain-ChatGLM
pip3 install -r requirements.txt
If an error is reported (usually a download timeout), re-run the command until it succeeds.

pip3 install -U gradio
pip3 install modelscope
pip3 install accelerate
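Before starting the web UI, it is worth checking whether the PyTorch build pulled in by requirements.txt can actually see the two GPUs. The helper below is a hypothetical convenience function, not part of the repository; it degrades gracefully when torch is missing or CPU-only:

```python
import importlib.util

def cuda_status():
    """Return a short string describing GPU visibility (hypothetical helper)."""
    if importlib.util.find_spec("torch") is None:
        return "torch not installed"
    import torch  # imported lazily so the check never crashes on a bare env
    if not torch.cuda.is_available():
        return "CUDA not available - inference will fall back to CPU"
    names = [torch.cuda.get_device_name(i) for i in range(torch.cuda.device_count())]
    return f"{len(names)} GPU(s) visible: {', '.join(names)}"

print(cuda_status())
```

On this machine one would hope to see both the 1080 Ti and the T4 listed; if CUDA is reported unavailable, the CPU-only slowness noted at the end of this post is expected.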

7. Start langchain-ChatGLM2
python.exe ./webui.py --model-name chatglm2-6b

8. Access langchain-ChatGLM2
Open http://localhost:7860/ in a browser.
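To check from a script that the Gradio server is actually up (assuming the default port 7860), a small stdlib-only probe can look like this:

```python
from urllib.error import URLError
from urllib.request import urlopen

def webui_reachable(url="http://localhost:7860/", timeout=3):
    """Return True if the web UI answers with HTTP 200, else False."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        return False

print(webui_reachable())
```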

Remaining problem: the default webui.py does not use the GPU and runs on the CPU only, so inference is very slow.
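One way to work around the CPU-only default is to load the model onto the GPU explicitly. The sketch below follows the loading pattern documented on the ChatGLM2-6B model card (half-precision on CUDA); it is an illustration of the idea under those assumptions, not the project's actual code path:

```python
import importlib.util

def load_chatglm2(model_name="THUDM/chatglm2-6b"):
    """Hypothetical loader: fp16 on GPU when available, fp32 on CPU otherwise."""
    if (importlib.util.find_spec("torch") is None
            or importlib.util.find_spec("transformers") is None):
        return None  # dependencies missing - nothing to load
    import torch
    from transformers import AutoModel, AutoTokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
    model = AutoModel.from_pretrained(model_name, trust_remote_code=True)
    if torch.cuda.is_available():
        # fp16 weights need roughly 13 GB of VRAM, so on this machine the
        # 16 GB T4 is the realistic target rather than the 11 GB 1080 Ti
        model = model.half().cuda()
    else:
        model = model.float()  # CPU fallback, slow
    return tokenizer, model.eval()
```

If webui.py exposes an equivalent model-loading step, applying the same .half().cuda() pattern there should move inference onto the GPU.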

Origin: blog.csdn.net/qq_43335960/article/details/131085293