[Technical Talk] What is full parameter fine-tuning?

The llama2-chinese project mentions the concept of full-parameter fine-tuning. What does this actually mean?

Fine-Tuning, in its narrow sense, refers to full-parameter fine-tuning (full fine-tuning), the earliest form of fine-tuning, in which every weight of the model is updated. Because full-parameter fine-tuning consumes a large amount of compute and is inconvenient in practice, efficient fine-tuning methods that update only a subset of the parameters appeared soon afterwards.
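To make the cost concrete, here is a minimal PyTorch sketch of full-parameter fine-tuning, assuming a Hugging Face causal LM such as Llama-2-7B (the model name is only an example): every weight requires gradients, so the optimizer must hold state for all of them.

```python
import torch
from transformers import AutoModelForCausalLM

# Example model; any Hugging Face causal LM would illustrate the same point.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# Full-parameter fine-tuning: every weight is trainable (this is already the
# default after from_pretrained), so gradients and optimizer states must be
# kept for all ~7B parameters.
for p in model.parameters():
    p.requires_grad = True

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
print(f"trainable parameters: {sum(p.numel() for p in model.parameters() if p.requires_grad):,}")
```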
Parameter-Efficient Fine-Tuning (PEFT) refers specifically to fine-tuning only part of the parameters. It offers a much better ratio of result quality to compute cost and is currently the most common approach; representative methods include LoRA, Prefix-Tuning, Prompt Tuning, and P-Tuning v2, as illustrated in the sketch below.
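As a contrast with the full-parameter version above, here is a minimal sketch of LoRA-style PEFT using the Hugging Face peft library; the target module names (q_proj, v_proj) and hyperparameters are illustrative assumptions for a Llama-style model. Only the small injected LoRA matrices are trained, which is what makes the method parameter-efficient.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# LoRA: freeze the base weights and train small low-rank adapters injected
# into the attention projections.
lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update
    lora_alpha=16,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # which layers receive adapters
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Typically reports well under 1% of the parameters as trainable.
model.print_trainable_parameters()
```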
In addition, Fine-Tuning in its broad sense can refer to all fine-tuning methods; OpenAI's model fine-tuning API is also named Fine-Tuning. Note that the hosted fine-tuning OpenAI provides is likewise an efficient fine-tuning method, not full-parameter fine-tuning.
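For reference, a minimal sketch of calling OpenAI's hosted fine-tuning via the v1 Python SDK; the training file id and model name are placeholders, and per the point above, what runs behind this API is an efficient fine-tuning method rather than full-parameter fine-tuning.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# "file-abc123" stands in for the id of an already-uploaded JSONL training file.
job = client.fine_tuning.jobs.create(
    training_file="file-abc123",
    model="gpt-3.5-turbo",
)
print(job.id, job.status)
```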


Origin blog.csdn.net/FL1623863129/article/details/133121773