A hands-on tutorial for locally deploying a Chinese LLaMA model, a community alpaca model

Alpaca Hands-on Series Index

Post 1: A hands-on tutorial for locally deploying a Chinese LLaMA model, a community alpaca model (this post)
Post 2: A hands-on tutorial for locally training a Chinese LLaMA model, a community alpaca model
Post 3: A hands-on tutorial for fine-tuning a Chinese LLaMA model, a community alpaca model

Introduction

LLaMA is trained mostly on English corpora, so its Chinese ability is quite weak. If we want to fine-tune our own LLM, a model pre-trained on a large-scale Chinese corpus is a better starting point. There are many open-source projects to choose from; an ideal one should have the following characteristics:
an open-source model, open-source training code, a simple code structure, an easy-to-install environment, and clear documentation.
After searching and experimenting, I found a project that fits well:
https://github.com/ymcui/Chinese-LLaMA-Alpaca
The main points of this post are as follows:
1. Hands-on: model download and parameter merging, loading the model for a command-line test, and deploying it as a web page (with fixes for some of the errors encountered along the way); a minimal merging sketch follows this list.
2. Code walkthrough: model parameter merging and vocabulary expansion.
3. Principle analysis: pre-training and instruction fine-tuning.
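To make points 1 and 2 concrete, here is a minimal sketch of what the parameter-merging step boils down to: the Chinese LoRA weights are loaded on top of the original LLaMA base model and folded into a standalone checkpoint. The project ships its own merging script (see its README for the exact invocation); the snippet below only illustrates the underlying `transformers` + `peft` workflow, and all model paths are placeholders, not the repo's official identifiers.

```python
# Minimal LoRA-merging sketch; the paths below are placeholders, not the
# project's official model names -- consult the repo's README for those.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base_model_path = "path/to/llama-7b-hf"          # original LLaMA weights in HF format (placeholder)
lora_model_path = "path/to/chinese-alpaca-lora"  # Chinese LoRA weights (placeholder)
output_dir = "merged-chinese-alpaca-7b"

# The Chinese models use an expanded vocabulary, so take the tokenizer from
# the LoRA side and grow the base model's embedding matrix to match before
# applying the adapter (this is the "vocabulary expansion" from point 2).
tokenizer = LlamaTokenizer.from_pretrained(lora_model_path)
base_model = LlamaForCausalLM.from_pretrained(base_model_path, torch_dtype=torch.float16)
base_model.resize_token_embeddings(len(tokenizer))

# Apply the LoRA adapter, then fold its deltas into the base weights.
model = PeftModel.from_pretrained(base_model, lora_model_path)
model = model.merge_and_unload()

# Save a standalone checkpoint that loads without peft.
model.save_pretrained(output_dir)
tokenizer.save_pretrained(output_dir)
```

The merged folder then loads like any other Hugging Face causal-LM checkpoint, which is what the command-line test and the web demo later in this series rely on.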

Hands-on

System environment

OS: Ubuntu 20.10
CUDA version: 11.8 (11.7 is recommended)
GPU: RTX 3090, 24 GB
RAM: 64 GB
Anaconda (Python version
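Before downloading any model weights, it is worth a quick sanity check that PyTorch inside the conda environment actually sees the GPU and the CUDA version you expect; a small check script (assuming torch is already installed):

```python
import torch

# PyTorch build and the CUDA toolkit it was compiled against.
print("torch:", torch.__version__, "| built with CUDA:", torch.version.cuda)

if torch.cuda.is_available():
    # Confirm the RTX 3090 is visible and report free vs. total VRAM.
    print("device:", torch.cuda.get_device_name(0))
    free, total = torch.cuda.mem_get_info()
    print(f"VRAM free/total: {free / 1e9:.1f} / {total / 1e9:.1f} GB")
else:
    print("CUDA not available - check the driver / toolkit installation")
```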

Source: https://blog.csdn.net/artistkeepmonkey/article/details/130477563