PEFT

🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.

Latest: v0.18.1 · 8 releases · 3 breaking changes · 9 common errors

Release History

v0.18.1
Jan 9, 2026
v0.18.0 (Breaking, 13 features)
Nov 13, 2025

This release introduces seven new PEFT methods including RoAd, ALoRA, and DeLoRA, alongside significant enhancements like stable integration interfaces and support for negative weights in weighted LoRA merging. It also drops support for Python 3.9 and requires an upgrade for compatibility with Transformers v5.
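Weighted merging combines several LoRA deltas into a single update, and a negative weight subtracts an adapter's contribution instead of adding it. The sketch below shows only the underlying arithmetic (delta_W = sum of w_i * B_i @ A_i) with plain Python lists standing in for tensors; the matrices and weights are illustrative, not the PEFT API.

```python
# Sketch: merging LoRA deltas delta_W_i = B_i @ A_i with signed weights.
# Plain lists of lists stand in for tensors; shapes are illustrative.

def matmul(B, A):
    """Multiply an (m x r) matrix by an (r x n) matrix."""
    return [[sum(B[i][k] * A[k][j] for k in range(len(A)))
             for j in range(len(A[0]))] for i in range(len(B))]

def weighted_merge(deltas, weights):
    """Combine per-adapter deltas with signed weights: sum_i w_i * delta_i."""
    m, n = len(deltas[0]), len(deltas[0][0])
    return [[sum(w * d[i][j] for w, d in zip(weights, deltas))
             for j in range(n)] for i in range(m)]

# Two rank-1 adapters on a 2x2 weight matrix.
delta_1 = matmul([[1.0], [0.0]], [[1.0, 0.0]])  # B1 @ A1 = [[1, 0], [0, 0]]
delta_2 = matmul([[0.0], [1.0]], [[0.0, 1.0]])  # B2 @ A2 = [[0, 0], [0, 1]]

# A negative weight subtracts adapter 2's contribution.
merged = weighted_merge([delta_1, delta_2], [1.0, -0.5])
print(merged)  # [[1.0, 0.0], [0.0, -0.5]]
```

In PEFT itself this kind of combination is exposed through weighted adapter merging on the model; the point here is only that a negative w_i flips the sign of that adapter's low-rank delta.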

v0.17.1 (Breaking, 2 fixes, 1 feature)
Aug 21, 2025

This patch release fixes bugs related to the new target_parameters feature, specifically ensuring existing parameterizations are preserved and preventing incorrect behavior when loading multiple adapters.

v0.17.0 (10 fixes, 4 features)
Aug 1, 2025

This release introduces two major new PEFT methods, SHiRA and MiSS (which deprecates Bone), and significantly enhances LoRA by enabling direct targeting of nn.Parameter, which is crucial for MoE layers. It also adds a utility for injecting adapters directly from a state_dict.
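Injecting adapters from a state_dict means routing flat checkpoint entries to the modules they belong to. A minimal stdlib-only sketch of that grouping step, assuming the common "<module>.lora_A.weight" key convention (the key layout and example names are illustrative, not PEFT's exact internal format):

```python
# Sketch: group lora_A/lora_B entries in a flat state_dict by target module.
# Key naming follows the common "<module>.lora_A.weight" convention; this
# layout is an assumption for illustration, not PEFT's exact format.

def group_adapter_weights(state_dict):
    """Return {module_name: {"lora_A": ..., "lora_B": ...}} from a flat dict."""
    grouped = {}
    for key, value in state_dict.items():
        for part in ("lora_A", "lora_B"):
            marker = f".{part}.weight"
            if key.endswith(marker):
                module = key[: -len(marker)]
                grouped.setdefault(module, {})[part] = value
    return grouped

sd = {
    "experts.gate_proj.lora_A.weight": "A0",
    "experts.gate_proj.lora_B.weight": "B0",
    "experts.up_proj.lora_A.weight": "A1",
}
print(group_adapter_weights(sd))
# {'experts.gate_proj': {'lora_A': 'A0', 'lora_B': 'B0'},
#  'experts.up_proj': {'lora_A': 'A1'}}
```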

v0.16.0 (Breaking, 7 fixes, 8 features)
Jul 3, 2025

This release introduces three major new PEFT methods: LoRA-FA, RandLoRA, and C3A, alongside significant enhancements like QLoRA support and broader layer compatibility for LoRA and DoRA. It also includes critical compatibility updates related to recent changes in the Hugging Face Transformers library.

v0.15.2 (1 fix)
Apr 15, 2025

This patch resolves an issue where prompt learning methods, including P-tuning, were failing to operate correctly.

v0.15.1 (1 fix)
Mar 27, 2025

This patch addresses a critical bug (#2450) related to saving checkpoints when using DeepSpeed ZeRO stage 3 with `modules_to_save`.

v0.15.0 (12 fixes, 6 features)
Mar 19, 2025

This release introduces significant new features including CorDA initialization for LoRA and the Trainable Tokens tuner, alongside enhancements to LoRA targeting and Hotswapping capabilities. It also deprecates PEFT_TYPE_TO_MODEL_MAPPING and replaces AutoGPTQ support with GPTQModel.

Common Errors

ChildFailedError (3 reports)

The "ChildFailedError" in PEFT often arises from inconsistencies between the LoRA adapter's configuration and the base model's structure or data types during operations like merging or loading. To resolve it, ensure the adapter configuration (`adapter_config.json`) matches the base model's architecture and that the data types of the LoRA layers are compatible with the base model's weights; passing a consistent `torch_dtype` when loading the base model helps. If using FSDP, carefully manage device placement and sharding to avoid inconsistencies during merging.
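Since the underlying error usually surfaces only inside a crashed worker process, a cheap pre-flight check before any distributed merge can save a debugging round trip. The sketch below uses illustrative field names (a real check would read the actual adapter config file and the base model's parameters):

```python
# Sketch: fail fast on adapter/base-model mismatches before merging,
# instead of letting a distributed worker crash with ChildFailedError.
# Field names here are illustrative, not the exact config schema.

def check_merge_compat(adapter_cfg, base_model_info):
    """Raise ValueError early if the adapter cannot be merged cleanly."""
    problems = []
    if adapter_cfg["base_model_name"] != base_model_info["name"]:
        problems.append("adapter was trained on a different base model")
    if adapter_cfg["dtype"] != base_model_info["dtype"]:
        problems.append(
            f"dtype mismatch: adapter {adapter_cfg['dtype']} "
            f"vs base {base_model_info['dtype']}"
        )
    if problems:
        raise ValueError("; ".join(problems))

# Matching config and model pass silently; a mismatch raises before
# any merge work (or worker process) starts.
check_merge_compat(
    {"base_model_name": "llama-7b", "dtype": "float16"},
    {"name": "llama-7b", "dtype": "float16"},
)
```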

NotImplementedError (3 reports)

The `NotImplementedError` in PEFT usually arises when a method or behavior required by the chosen PEFT technique (e.g., LoRA, prefix tuning) hasn't been implemented or overridden for the model architecture in use. To fix it, identify the missing implementation from the traceback and error message, then either contribute the missing method to the PEFT library or implement it in your model class, ensuring it is compatible with the expected adapter behavior.

ModuleNotFoundError (2 reports)

The "ModuleNotFoundError: No module named 'peft'" error typically occurs because the `peft` package is either not installed or installed into a different environment than the one running your code. Fix this by first ensuring your virtual environment is activated, then install or reinstall peft using `pip install peft`. If using gradient checkpointing from recent peft versions, also make sure you have transformers>=4.35.0 installed.
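Before reinstalling, it helps to confirm which interpreter you are actually running and whether the package is visible to it; `importlib.util.find_spec` reports importability without importing. A small stdlib-only check:

```python
import importlib.util
import sys

def is_importable(name):
    """True if `name` can be imported in the current environment."""
    return importlib.util.find_spec(name) is not None

# The usual cause of ModuleNotFoundError is pip installing into a
# different interpreter's site-packages than the one running here.
print("interpreter:", sys.executable)
print("peft importable:", is_importable("peft"))
```

If `peft importable: False` but `pip show peft` succeeds, compare `sys.executable` with the pip you ran (`pip --version` prints its interpreter path).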

PapermillExecutionError (1 report)

PapermillExecutionError in PEFT notebooks often arises from missing dependencies in the notebook's environment or insufficient resources. Ensure all required packages (peft, transformers, datasets, etc.) are installed at compatible versions using `pip install -r requirements.txt` or `%pip install package_name`, and that the notebook runtime has adequate RAM/GPU resources allocated, possibly by upgrading your compute environment or reducing model size. Restart the kernel after installing dependencies.

LocalEntryNotFoundError (1 report)

The "LocalEntryNotFoundError" in peft usually arises when trying to load a PEFT model or tokenizer that's not yet fully downloaded to the local cache. Ensure you have sufficient disk space and internet connectivity, then try deleting the cached model files in your `~/.cache/huggingface/hub` directory (or the specified cache location) and retrying the loading process to force a fresh download of all necessary files.
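Evicting one model's cache entry can be scripted rather than done by hand. A hedged sketch, assuming the usual hub cache layout of `models--<org>--<name>` directories (the helper name is illustrative):

```python
import shutil
from pathlib import Path

def evict_cached_model(repo_id, cache_dir=None):
    """Delete one model's cache folder so the next load re-downloads it.

    Assumes the usual hub layout: <cache>/models--<org>--<name>.
    Returns True if something was deleted, False if nothing was cached.
    """
    cache = (Path(cache_dir) if cache_dir
             else Path.home() / ".cache" / "huggingface" / "hub")
    target = cache / ("models--" + repo_id.replace("/", "--"))
    if target.exists():
        shutil.rmtree(target)
        return True
    return False
```

For example, `evict_cached_model("meta-llama/Llama-2-7b-hf")` would remove that one entry and leave the rest of the cache intact; pass `cache_dir=` if you use a non-default cache location.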

DatasetGenerationError (1 report)

DatasetGenerationError in PEFT usually arises from incorrectly formatted data or mismatched column names within your training dataset when fine-tuning with multiple adapters. Ensure your dataset's column names precisely match the expected input names (e.g., "input_ids", "attention_mask", "labels") for each adapter's specific task, and verify that the data types are compatible. Padding or truncation might also be necessary to standardize sequence lengths before merging dataframes from different adapters into a single, unified dataset.
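The column-name normalization described above can be sketched as a simple rename pass over a list-of-dicts dataset. The rename map below is an example only; the names your trainer actually expects depend on the model's forward signature and your data collator:

```python
# Sketch: normalize dataset column names to what the trainer expects.
# The rename map is illustrative; check your model/collator signature.

RENAME = {"text_ids": "input_ids", "mask": "attention_mask", "target": "labels"}

def normalize_columns(rows, rename=RENAME):
    """Return rows with keys renamed; unmapped keys are kept as-is."""
    return [{rename.get(k, k): v for k, v in row.items()} for row in rows]

rows = [{"text_ids": [1, 2], "mask": [1, 1], "target": [2, 3]}]
print(normalize_columns(rows))
# [{'input_ids': [1, 2], 'attention_mask': [1, 1], 'labels': [2, 3]}]
```

With the Hugging Face `datasets` library the same idea is usually expressed with `rename_column` on the dataset object before training.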
