ChatGLM2-6B installation details (Windows/Linux) and solutions to problems encountered

THUDM recently released ChatGLM2-6B, the second generation of ChatGLM-6B, so I decided to deploy it and try it out. The following walks through the deployment process in detail and covers the problems I ran into along the way, together with their solutions.

1. Deployment process

1. Install Python, Git, and other necessary tools

Before deploying the project, a few necessary tools need to be installed. The following explains the installation steps for each of them.

1.1 Install Python

There are plenty of tutorials online on how to install Python, so here is just a brief overview.

(1) Install Python through Anaconda or Miniconda

You can install Python by creating virtual environments with Anaconda or Miniconda. The advantage of this approach is that you can switch freely between different Python versions and different sets of third-party packages.

Different projects often require different versions, and a mismatch can prevent a project from running at all. Using conda to create separate virtual environments and switching between them solves this problem neatly. (I installed Python through Miniconda myself.)
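As a quick illustration (the environment names below are just examples), switching Python versions with conda looks like this:

# Create two environments with different Python versions (names are arbitrary)
conda create -n py39 python=3.9 -y
conda create -n py311 python=3.11 -y
# Switch between them as needed
conda activate py39
conda activate py311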

You can download Anaconda or Miniconda from their official download pages, choosing the build that matches your operating system and version requirements.

The installation of Anaconda and Miniconda is mostly a matter of clicking Next; just pay attention to the option for setting the environment variables.

For how to configure things after installation, you can refer to the introductory tutorials for the Windows, Linux, and macOS versions.

(2) Download the installer from the official Python website

You can download it from the official Python website or the Python Chinese mirror site.

These installers are likewise straightforward: just choose the installation path and configure the environment variables.

(3) Directly use the Python that comes with the system (not recommended)

Personally, I do not recommend using the Python that comes with the system, because it cannot be modified freely, and changing it can easily break the system.

1.2 Install Git

For how to install Git, you can refer to a blog post I wrote earlier.

It covers the download links for the different versions in detail, along with a Git quick-start tutorial, so you can use it with confidence.

1.3 Install CUDA

Because the project needs a GPU, we have to install CUDA here. Which CUDA version to install depends on the Python environment we set up and the torch version we need.

You can run nvidia-smi on the command line to check the CUDA version your driver supports. The CUDA toolkit you install cannot exceed that version, but it can be lower. For example, if nvidia-smi reports CUDA Version 12.0, you cannot install anything above 12.0.

You can then download and install the appropriate CUDA toolkit from NVIDIA according to your own situation.
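A minimal sketch of checking the supported CUDA version and installing a matching torch build (the cu118 wheel index below is just one example; pick the index that matches your setup from the PyTorch website):

# Show the driver and the highest CUDA version it supports (top right of the output)
nvidia-smi
# Example: install a torch build compiled against CUDA 11.8
pip install torch --index-url https://download.pytorch.org/whl/cu118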

2. Clone the project with git

Use the following command to clone the project into the directory where you want to install it:

git clone https://github.com/THUDM/ChatGLM2-6B.git

There is generally no problem here.

If a problem does occur, it is usually one of the following:

(1) On Windows, you generally need certain network tools to reach GitHub (if you know, you know);

(2) On Linux, it is usually a proxy configuration problem.

If the following error occurs:

fatal: unable to access 'https://github.com/xxx.git/': Failed to connect to 127.0.0.1 port 7891: Connection refused

You need to do the following:

# Use git config to check and then unset the http/https proxy, for example:
git config --global http.proxy
git config --global --unset http.proxy
git config --global https.proxy
git config --global --unset https.proxy
# Use the env command to check and then unset the http/https proxy, for example:
env|grep -i proxy
unset http_proxy
unset https_proxy
# Or edit the system environment variables and delete the http_proxy and https_proxy variables.

3. Download the model

Before downloading the model, first enter the project directory:

cd ChatGLM2-6B

Then create a new THUDM folder under the project directory, and a new chatglm2-6b folder under the THUDM folder. The commands and resulting directory structure are shown below:
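A minimal sketch of creating these folders from the project root (the layout follows the description above):

# Run from inside the ChatGLM2-6B project directory (Linux/macOS; on Windows use mkdir THUDM\chatglm2-6b)
mkdir -p THUDM/chatglm2-6b
# Expected layout:
# ChatGLM2-6B/
# └── THUDM/
#     └── chatglm2-6b/   <- model weights and config files go here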

Then go to Hugging Face and put all the model files and configuration files into the .../ChatGLM2-6B/THUDM/chatglm2-6b folder. It is recommended to download all of them manually.
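If you prefer not to download the files one by one, here is a sketch that pulls the whole model repository with Git LFS (this assumes git-lfs is installed and that the model repository is huggingface.co/THUDM/chatglm2-6b):

# Run from inside the ChatGLM2-6B project directory
git lfs install
# Clone the model repository straight into the folder created above
git clone https://huggingface.co/THUDM/chatglm2-6b THUDM/chatglm2-6b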

4. Install the virtual environment

I use Miniconda here; if you installed Anaconda, the steps are exactly the same.

There are detailed instructions online for setting up virtual environments on the various systems, so I won't go over that again here.

The command to create the virtual environment is (using mine as an example):

conda create -n webui python=3.10.10 -y

Here, -n webui names the virtual environment to create, python= specifies the Python version you want, and -y answers yes to all prompts so you don't have to confirm manually each time.

After creating the virtual environment, we need to activate it.

Run conda env list to see which virtual environments are currently available, for example the webui environment I just created.

Then run conda activate webui; you will see the prompt prefix change from base to webui.

Finally, create a virtual environment under the current project directory with the following command:

python -m venv venv

The first venv refers to the venv module used to create the virtual environment, and the second venv is the name of the folder to create in the current directory as the virtual environment's installation path. You can choose any name for the second one, but venv is the usual convention. Once creation is complete, you will see a venv directory under the current directory.

Then activate the virtual environment. On Linux, run source ./venv/bin/activate. On Windows, double-click ./venv/Scripts/activate.bat or run ./venv/Scripts/activate from the command line. What I show here is the Linux case; the commands are summarized below.
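To summarize the activation commands described above:

# Linux / macOS
source ./venv/bin/activate
# Windows (cmd): .\venv\Scripts\activate.bat
# To leave the virtual environment later:
deactivate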

5. Install the third-party packages required by the environment

Just run the following command:

pip install -r requirements.txt

Then just wait for the installation to finish. For relatively large packages, though, it is recommended to download the .whl file manually and install it with pip install xxx.whl.

During installation, pip generally prints the download link for each package; just copy the link into Xunlei (or another download manager) to download it.
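A minimal sketch of the manual route (the wheel filename below is only a placeholder; use the actual file you downloaded):

# Download the .whl via the link pip printed, then install the local file directly:
pip install ./some_package-1.0.0-cp310-cp310-linux_x86_64.whl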

6. Run the demo to get started

Next, we can run the following command directly to launch the web demo.

python web_demo.py
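As a side note, the ChatGLM2-6B repository also ships a command-line demo and an API demo at the time of writing (check the repo for the current file names):

# Command-line chat demo
python cli_demo.py
# FastAPI-based HTTP service
python api.py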

7. Other issues

If you have any other questions, follow my official account and I will add you to the discussion group so we can work through them together.
