How to fine-tune the Chinese-Vicuna-7b model

Environment suggestion: use Google Cloud (GCP). Cloud environments in mainland China are simply too slow: GitHub is slow, pip is slow, and downloading models is slow as well.

1. Download the code and install dependencies

!git clone https://github.com/Facico/Chinese-Vicuna
!pip install -r ./Chinese-Vicuna/requirements.txt
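Before going further, it is worth confirming that the Colab runtime actually has a GPU attached. This is a generic PyTorch check, not something the repo requires:

import torch

# Fine-tuning a 7B model needs a GPU; print what the runtime provides.
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))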

2. Prepare your training data in JSON/JSONL format

The repo ships a sample file at sample/instruct/data_sample.jsonl, which step 3 trains on; your own data should follow the same layout, as sketched below.
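A minimal sketch of writing such a file, assuming the alpaca-style instruction/input/output fields used by the bundled sample data; the /content/my_data.jsonl path and the two example records are just placeholders:

import json

# Example records in the instruction/input/output shape; replace with your own data.
records = [
    {"instruction": "将下面的句子翻译成英文。", "input": "今天天气很好。", "output": "The weather is nice today."},
    {"instruction": "用一句话介绍一下北京。", "input": "", "output": "北京是中国的首都。"},
]

# One JSON object per line, UTF-8, no ASCII escaping so Chinese text stays readable.
with open("/content/my_data.jsonl", "w", encoding="utf-8") as f:
    for r in records:
        f.write(json.dumps(r, ensure_ascii=False) + "\n")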

3. Fine-tuning

!python ./Chinese-Vicuna/finetune.py --data_path /content/Chinese-Vicuna/sample/instruct/data_sample.jsonl --test_size 5
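To train on your own file instead of the bundled sample, point --data_path at it; the path below is the placeholder from the sketch in step 2, and --test_size (as in the sample command) sets how many records are held out as a test split. The resulting LoRA adapter is what step 4 then loads via --lora_path.

!python ./Chinese-Vicuna/finetune.py --data_path /content/my_data.jsonl --test_size 5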

4. Launch the web interface

!python ./Chinese-Vicuna/generate.py --model_path decapoda-research/llama-7b-hf --lora_path /content/lora-Vicuna --use_local 0
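If generate.py cannot find the adapter, check that fine-tuning actually wrote files to the directory passed as --lora_path. This is purely a sanity check, not part of the repo:

import os

# The adapter directory from the command above; re-run step 3 if it is missing.
lora_dir = "/content/lora-Vicuna"
print(os.listdir(lora_dir) if os.path.isdir(lora_dir) else f"{lora_dir} not found")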

5. Use the running interface

Once generate.py is running, open the link it prints in the cell output to interact with the fine-tuned model in the browser.


Source: blog.csdn.net/wxl781227/article/details/130771263