How to use ChatGLM-6B for multi-card training

First of all, ChatGLM-6B supports multi-card training. The steps are as follows:

1. Install the NVIDIA CUDA Toolkit: multi-card training requires the CUDA Toolkit. You can download the version matching your operating system from NVIDIA's official website.

2. Confirm that NVIDIA drivers are installed for all graphics cards.

3. Configure the multi-card training environment: after installing the CUDA Toolkit, set the CUDA_VISIBLE_DEVICES environment variable in the shell script that launches training, for example:

CUDA_VISIBLE_DEVICES=0,1,2,3 python train.py

Here 0 is the first graphics card, 1 is the second, and so on; the indices follow CUDA's device enumeration order. Only the cards listed in this variable are visible to the training process.
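To make the mapping concrete, here is a minimal sketch of how a training script sees the devices selected above. The helper `visible_gpu_indices` is a hypothetical function written for illustration (it is not part of ChatGLM-6B or CUDA); the behavior it models is standard: inside the launched process, CUDA renumbers the visible devices starting from 0, so `cuda:0` refers to the first index listed in CUDA_VISIBLE_DEVICES.

```python
import os

def visible_gpu_indices(env=None):
    """Parse CUDA_VISIBLE_DEVICES into a list of physical GPU indices.

    Inside the training process, CUDA renumbers the visible devices
    from 0, so logical device cuda:0 maps to the first entry of this
    list, cuda:1 to the second, and so on.
    """
    env = os.environ if env is None else env
    value = env.get("CUDA_VISIBLE_DEVICES", "")
    if not value.strip():
        return []  # variable unset or empty: no restriction parsed here
    return [int(tok) for tok in value.split(",") if tok.strip()]

# Example: a script launched with CUDA_VISIBLE_DEVICES=2,3 sees two
# logical devices; cuda:0 is physical GPU 2 and cuda:1 is physical GPU 3.
mapping = visible_gpu_indices({"CUDA_VISIBLE_DEVICES": "2,3"})
print(mapping)  # [2, 3]
```

This is why restricting the variable to a subset (e.g. `CUDA_VISIBLE_DEVICES=2,3`) lets you keep other cards free for different jobs while the training code still addresses its GPUs as 0 and 1.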

Origin blog.csdn.net/miaoxingjundada/article/details/131055790