PEFT: a Parameter-Efficient Fine-Tuning library developed by the Hugging Face team, which adapts pre-trained language models (PLMs) to various downstream tasks without fine-tuning all of the model's parameters. The PEFT library supports a variety of efficient fine-tuning methods, such as Low-Rank Adaptation (LoRA), Prefix Tuning, and Adaptive Budget Allocation (AdaLoRA).
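To illustrate the core idea behind one of these methods, the sketch below implements the LoRA update in plain NumPy (not the PEFT API itself): the pretrained weight `W` stays frozen, and only a low-rank pair `A`, `B` is trained, with the adapted output `W x + (alpha / r) * B A x`. All dimensions and names here are illustrative assumptions, not values from the library.

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 1024, 1024, 8   # rank r << d is the low-rank bottleneck
W = rng.normal(size=(d_out, d_in))       # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01    # trainable down-projection
B = np.zeros((d_out, r))                 # trainable up-projection, zero-initialized
alpha = 16.0                             # LoRA scaling hyperparameter

def lora_forward(x):
    # y = W x + (alpha / r) * B A x  -- only A and B receive gradients
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# Because B is zero at initialization, the adapted model starts out
# identical to the frozen pretrained model.
assert np.allclose(lora_forward(x), W @ x)

# Parameter count: LoRA trains far fewer weights than full fine-tuning.
full_params = W.size            # 1,048,576
lora_params = A.size + B.size   # 16,384 (~1.6% of full)
print(lora_params, full_params)
```

Zero-initializing `B` is the standard LoRA trick that makes the adapter a no-op at the start of training, so fine-tuning begins exactly from the pretrained model's behavior.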