PEFT
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
Release History
v0.18.1, v0.18.0 (breaking; 13 features): This release introduces seven new PEFT methods including RoAd, ALoRA, and DeLoRA, alongside significant enhancements like stable integration interfaces and support for negative weights in weighted LoRA merging. It also drops support for Python 3.9 and requires an upgrade for compatibility with Transformers v5.
v0.17.1 (breaking; 2 fixes, 1 feature): This patch release fixes bugs related to the new target_parameters feature, specifically ensuring existing parameterizations are preserved and preventing incorrect behavior when loading multiple adapters.
v0.17.0 (10 fixes, 4 features): This release introduces two major new PEFT methods, SHiRA and MiSS (which deprecates Bone), and significantly enhances LoRA by enabling direct targeting of nn.Parameter, crucial for MoE layers. It also adds a utility for injecting adapters directly from a state_dict.
v0.16.0 (breaking; 7 fixes, 8 features): This release introduces three major new PEFT methods: LoRA-FA, RandLoRA, and C3A, alongside significant enhancements like QLoRA support and broader layer compatibility for LoRA and DoRA. It also includes critical compatibility updates related to recent changes in the Hugging Face Transformers library.
v0.15.2 (1 fix): This patch resolves an issue where prompt learning methods, including P-tuning, were failing to operate correctly.
v0.15.1 (1 fix): This patch addresses a critical bug (#2450) related to saving checkpoints when using DeepSpeed ZeRO stage 3 with `modules_to_save`.
v0.15.0 (12 fixes, 6 features): This release introduces significant new features including CorDA initialization for LoRA and the Trainable Tokens tuner, alongside enhancements to LoRA targeting and hotswapping capabilities. It also deprecates PEFT_TYPE_TO_MODEL_MAPPING and replaces AutoGPTQ support with GPTQModel.
Common Errors
ChildFailedError (3 reports): The "ChildFailedError" in PEFT often arises from inconsistencies between the LoRA adapter's configuration and the base model's structure or data types during operations like merging or loading. To resolve it, ensure the adapter configuration (`adapter_config.json`) matches the base model's architecture and that the data types used for the LoRA layers are compatible with the base model's weights; in particular, setting `torch_dtype` consistently for the LoRA setup and when loading the base model can help. If using FSDP, carefully manage device placement and sharding to avoid inconsistencies during merging.
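As a first check, the adapter's saved configuration can be compared against the base model you intend to load. This is a stdlib-only sketch; the directory and model name in the usage comment are hypothetical:

```python
import json
from pathlib import Path

def adapter_matches_base(adapter_dir: str, expected_base: str) -> bool:
    """Return True if the adapter's recorded base model matches `expected_base`.

    PEFT writes `adapter_config.json` next to the adapter weights; a mismatch
    between its `base_model_name_or_path` and the model actually being loaded
    is a common source of merge- and load-time failures.
    """
    cfg = json.loads(Path(adapter_dir, "adapter_config.json").read_text())
    return cfg.get("base_model_name_or_path") == expected_base

# Hypothetical paths and names, for illustration only:
# adapter_matches_base("./my-lora-adapter", "meta-llama/Llama-2-7b-hf")
```

A `False` result here does not always mean the load will fail, but it flags the mismatch before any distributed run crashes with an opaque child-process error.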
NotImplementedError (3 reports): The `NotImplementedError` in PEFT usually arises when a specific method or functionality required by the chosen PEFT technique (e.g., LoRA, Prefix Tuning) hasn't been properly implemented or overridden within the model architecture being used. To fix it, identify the missing implementation by inspecting the traceback and error message, then either contribute the missing method to the PEFT library or implement it within your model class, ensuring it is compatible with the expected PEFT adapter behavior.
ModuleNotFoundError (2 reports): The "ModuleNotFoundError: No module named 'peft'" error typically occurs because the peft package is either not installed or not installed in the correct environment. Fix this by first ensuring your virtual environment is activated, then install or reinstall peft using `pip install peft`. If using gradient checkpointing from recent peft versions, also make sure you have transformers>=4.35.0 installed.
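A quick stdlib-only way to confirm which interpreter is active and whether it can actually see `peft`:

```python
import importlib.util
import sys

def module_available(name: str) -> bool:
    """True if `name` is importable by the *currently running* interpreter."""
    return importlib.util.find_spec(name) is not None

# The interpreter path tells you which environment (venv or system) is active.
# If it is not the venv you installed peft into, that explains the error.
print(sys.executable)
print(module_available("peft"))
```

Running this inside the same notebook or script that raised the error avoids the common trap of installing the package into one environment and importing from another.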
PapermillExecutionError (1 report): PapermillExecutionError in PEFT notebooks often arises from missing dependencies within the notebook's environment or insufficient resources. Ensure all required packages (peft, transformers, datasets, etc.) are installed with correct versions using `pip install -r requirements.txt` or `%pip install package_name`, and that the notebook runtime has adequate RAM/GPU resources allocated, possibly by upgrading your compute environment or reducing model size. Restart the kernel after installing dependencies.
LocalEntryNotFoundError (1 report): The "LocalEntryNotFoundError" in PEFT usually arises when trying to load a PEFT model or tokenizer that is not yet fully downloaded to the local cache. Ensure you have sufficient disk space and internet connectivity, then try deleting the cached model files in your `~/.cache/huggingface/hub` directory (or the specified cache location) and retrying the loading process to force a fresh download of all necessary files.
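Before deleting anything wholesale, it can help to locate the cache and list leftovers from interrupted downloads. This is a stdlib-only sketch; it assumes the default cache layout, with `HF_HUB_CACHE` as the override environment variable:

```python
import os
from pathlib import Path

# Default Hugging Face cache location; the HF_HUB_CACHE env var overrides it.
cache_dir = Path(os.environ.get("HF_HUB_CACHE",
                                Path.home() / ".cache" / "huggingface" / "hub"))

def incomplete_downloads(cache: Path) -> list:
    """List `*.incomplete` files left behind by interrupted downloads."""
    if not cache.exists():
        return []
    return sorted(cache.rglob("*.incomplete"))

# Removing the affected model's folder under `cache_dir` (directories are
# named models--<org>--<name>) forces a fresh, complete download next time.
```

Listing first, rather than deleting the whole cache, keeps other fully downloaded models intact.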
DatasetGenerationError (1 report): DatasetGenerationError in PEFT usually arises from incorrectly formatted data or mismatched column names within your training dataset when fine-tuning with multiple adapters. Ensure your dataset's column names precisely match the expected input names (e.g., "input_ids", "attention_mask", "labels") for each adapter's specific task, and verify that the data types are compatible. Padding or truncation might also be necessary to standardize sequence lengths before merging dataframes from different adapters into a single, unified dataset.
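One way to standardize the schema before combining examples from different adapters is a simple rename pass over the rows. This is an illustrative stdlib-only sketch; the column names in `rename_map` are hypothetical:

```python
def normalize_columns(rows, rename_map):
    """Rename dataset columns so every adapter's examples share one schema.

    `rows` is a list of dicts (one per example); `rename_map` maps old
    column names to the names the model expects, e.g. "input_ids".
    """
    return [{rename_map.get(k, k): v for k, v in row.items()} for row in rows]

rows = [{"text_ids": [1, 2], "mask": [1, 1]}]
print(normalize_columns(rows, {"text_ids": "input_ids", "mask": "attention_mask"}))
# → [{'input_ids': [1, 2], 'attention_mask': [1, 1]}]
```

The same idea applies when using `datasets.Dataset.rename_column` on Hugging Face datasets; the point is that all sources must agree on column names before concatenation.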
Related AI & LLMs Packages
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
Ollama: Get up and running with OpenAI gpt-oss, DeepSeek-R1, Gemma 3, and other models.
LangChain: 🦜🔗 The platform for reliable agents.
ComfyUI: The most powerful and modular diffusion model GUI, API, and backend with a graph/nodes interface.
llama.cpp: LLM inference in C/C++.
GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.